Machine vision and industries – ZWO group on Facebook about microscopes and noise imaging

David,

The lens is usually a fisheye or other lens with 1x magnification. To get higher magnification, you have to add the camera to a microscope meant for a human eyeball. A decent microscope is expensive and shaped for humans, so it does not fit well in an industrial, lab, or security environment.

What I want is a 10x or 50x or 100x tube that can screw into the ZWO camera, so it is a “camera with a microscope (or telescope) lens,” not a human microscope with an add-on camera.

Machine vision has different requirements from a human using these microscopes and telescopes. In a process environment (chemical plant, factory, refinery, air or ground vehicle), you need something small, easily mounted, and suited to the monitoring task. I have in mind monitors for surface cracks, vibration and bending, accumulation of dust and particles, radiation damage, small motions. That sort of thing.

I am asking about ZWO because there is a fairly large community of users, and many of them have been processing and analyzing images in quantitative terms for a long time. And many people are now applying these sensors more aggressively and seriously to machine vision tasks. One of the biggest hurdles now is getting raw data from the many environments and observing tasks, in order to then develop the needed “AI” or machine vision, statistical, or control applications.

I am doing this for the Internet Foundation, because there are literally billions of people affected by small things like machine vision. I see the merging and combining of tens of thousands of groups on the Internet involved in the creation and growth of the new industries. Trying to understand all that, I look for ways to teach the coming generations, and to find and encourage growth where there are critical technologies and methods needed. One tiny missing piece can stop a whole industry for decades.

Richard Collins, Director, The Internet Foundation


Susan,

I was just talking with a 3D printing group on Hackaday.IO. One of them mentioned they printed an adapter and it was quite satisfactory. I don’t have a printer just now; I gave the last one away. I asked them about printing the whole lens (they were using clear materials). I can correct a lot of lens distortions and variations using mathematics, as in the sketch below.
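To be concrete, here is a minimal sketch of the kind of radial (Brown-Conrady) correction I mean, in Python with numpy. The coefficients k1 and k2 and the test points are placeholders of my own; in practice you would fit them per lens from images of a printed checkerboard or dot grid.

```python
import numpy as np

def undistort_points(pts, center, k1, k2):
    """Approximately invert simple radial (Brown-Conrady) distortion.

    pts    : (N, 2) array of distorted pixel coordinates
    center : (cx, cy) optical center in pixels
    k1, k2 : radial coefficients fitted from a calibration target
    """
    p = pts - center
    r2 = np.sum(p**2, axis=1, keepdims=True)
    # Forward model: p_distorted = p_ideal * (1 + k1*r^2 + k2*r^4).
    # Dividing by the factor evaluated at the distorted radius is a
    # good first-order inverse for mild distortion.
    scale = 1.0 + k1 * r2 + k2 * r2**2
    return center + p / scale

# Placeholder coefficients and points, just to show the call shape.
corners = np.array([[120.0, 80.0], [1900.0, 1050.0]])
print(undistort_points(corners, center=np.array([960.0, 540.0]),
                       k1=-2.5e-8, k2=1.0e-15))
```

Higher-order radial terms and tangential coefficients follow the same pattern once this simple model stops fitting a given printed lens.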

After extensive searching, I want to mount a 100x microscope objective directly to the camera. But that requires changing the focal length of the lens, to shorten a tube length that is long by the industry standards for human microscopes. I need (for one of many experiments) 100x magnification and the high resolution and frame rates of these kinds of cameras. The tradeoff is sketched below.
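A rough sketch of the magnification bookkeeping, assuming an infinity-corrected objective. The marked 100x rating presumes the maker’s reference tube lens (200 mm, 180 mm, and 165 mm are common conventions); a finite DIN objective would instead put the sensor at the 160 mm intermediate image plane. The numbers here are assumptions to illustrate the tradeoff:

```python
# Magnification bookkeeping for remounting a microscope objective on a
# bare sensor. Assumption: infinity-corrected objective whose marked
# 100x rating was set by a 200 mm reference tube lens.
F_TUBE_REF = 200.0                   # mm, assumed maker convention
f_objective = F_TUBE_REF / 100.0     # 2.0 mm for a "100x" objective

for f_tube in (200.0, 100.0, 50.0):  # candidate shorter tube lenses
    magnification = f_tube / f_objective
    print(f"tube lens {f_tube:5.1f} mm -> {magnification:5.1f}x on sensor")

# Halving the tube lens focal length halves magnification: a 50 mm
# tube lens turns a "100x" objective into a 25x system, so a short
# package and full 100x cannot both come from the same infinity
# objective without redesigning the optics.
```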

Forcing the ZWO standard adapter and box size onto the problem just raised the cost, size, and complexity. I checked on some design groups and suppliers. The software for astrophotography is not satisfactory, because it is not open and low latency for machine vision. And the machine vision software is often poor on calibration and registration. All signs of industries and technologies in transition, with people stuck in the middle.

I see RMS and DIN threads. But the worst problem is they aim for a human eyepiece and eyeball, not an image sensor. A 3D printed adapter would work, but I have limited space, so a long tube won’t fit. I am looking to design a 100x magnification lens from scratch. The price gouging is horrible. The cameras are relatively stable and reliable by comparison.

Here is a tutorial on microscope objectives. I won’t buy anything from Edmund. They were great when I was growing up, but they got bought out and no longer help people starting out. But the article is good. I am trying to see if I can find the lenses to make these. The holder with threads could be 3D printed and screwed directly into a ZWO or other standard camera. The missing piece is support for low latency, user-written algorithms for each application.

https://www.edmundoptics.com/knowledge-center/application-notes/microscopy/understanding-microscopes-and-objectives/

With a little effort, ZWO and its competitors could serve a growing machine vision industry, from very small to very large. The algorithms are all the same; just change the lens and the low latency data handling.

Richard


Susan,

You are doing some nice work!! I have been following 3D printing for more than 30 years, and try to stay up with all the materials and approaches. From 1993 to 2003, I created and built Sculptor.Org to track everything related to sculptors and sculpture on the web. I profiled all the early 3D scanning and printing companies and technologies. For artists working in ice, butter, chocolate, sand, snow, clay, and other ephemeral materials, I recommended taking multiple 2D images, anticipating there would be algorithms to convert those to 3D one day (now there are), and that they owned their art and designs and could copyright and “print” them in any material, by addition or subtraction.

When it got to more than 50,000 unique visitors each month, I could not handle that much work part time, as a free service to that global community. I set standards for sites, contracts, policies, and methods. I have spent another 20 years since, and am applying those lessons learned to the whole Internet.

Sorry, you probably don’t care, but you reminded me why I spend so much time on 3D imaging and printing. But I have no time for it myself. And, if 50,000 people wore me down, now I am trying to deal with 4.8 billion and growing. LOL!


Susan Parker I will look and see what the micromachining and milling groups are doing. Small robotic arms removing material, rather than adding it. If I can think of it, there are probably 10,000 people and groups who have been working on it for decades.


Susan Parker The particular thing I am trying to make right now is a low cost gravitational sensor for schools to use in global imaging arrays. Any accelerometer sensitive enough to track the sun and moon tidal accelerations can be calibrated against them for location and orientation, then used to image things like the atmosphere, oceans, magma, and their motions. There is a need for earthquake early warning. The gravitational signals from seismic waves or tsunamis spreading from an earthquake are well understood now, and the signal, which travels at the speed of light and gravity, can be picked up by many types of sensors. Getting the cost down is part of that group becoming a global sustainable community. The camera needs to record fine details of motion of something as simple as a pendulum. I try to pick things that work and are good examples for the future technologies coming online now and in the next few years. The rough size of the tidal signal is sketched below.
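For scale: the peak tidal acceleration at the Earth’s surface from a body of mass M at distance d is roughly 2GMR/d^3 along the Earth-body axis. A quick check with mean distances (the real values vary over each orbit, so this is only order-of-magnitude):

```python
# Order-of-magnitude check on the tidal signal an accelerometer must
# resolve: peak tidal acceleration ~ 2*G*M*R_earth / d^3, using mean
# distances for the Moon and the Sun.
G = 6.674e-11          # m^3 kg^-1 s^-2
R_EARTH = 6.371e6      # m
bodies = {
    "Moon": (7.342e22, 3.844e8),   # mass (kg), mean distance (m)
    "Sun":  (1.989e30, 1.496e11),
}
for name, (mass, dist) in bodies.items():
    a_tidal = 2 * G * mass * R_EARTH / dist**3
    print(f"{name}: ~{a_tidal:.2e} m/s^2")

# Moon: ~1.1e-6 m/s^2, Sun: ~5.1e-7 m/s^2. The combined signal is a
# smooth ~1e-6 m/s^2 curve with known phase, usable for calibration.
```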


LIGO is rather large because they used light for the interferometer. Atom interferometer based detectors can be chip sized. The MEMS detectors are basically modified cell phone MEMS accelerometers. There are about 40 technologies that have been tried, and many more proposed. What I have been doing is looking for experiments that anyone with a good math and science background at a high school level could do, to learn the basics of time of flight imaging, noise correlations, and 3D imaging. The mathematics, sensor integration, and data engineering are the same. Since cell phones and cameras are almost universal now, that seems like a useful starting point: either by using and correlating dark noise, or by imaging interferometry, imaging spectroscopy, and imaging accelerometry.
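The dark noise correlation idea, in a toy Python form (all the numbers here are synthetic and invented): two covered sensors each record their own independent noise plus the same weak common signal, and the cross term survives averaging even when the signal is invisible in either stream alone.

```python
import numpy as np

# Toy dark-noise correlation experiment: per-sample SNR is 0.1, so
# the common signal is buried in each stream, but averaging the
# product of the two streams keeps only the shared power.
rng = np.random.default_rng(0)
n = 1_000_000
common = 0.1 * np.sin(2 * np.pi * 3.0 * np.arange(n) / 1000.0)
a = common + rng.normal(0.0, 1.0, n)   # sensor A: independent noise
b = common + rng.normal(0.0, 1.0, n)   # sensor B: independent noise
corr = np.mean(a * b)                  # cross term isolates 'common'
print(f"measured cross power {corr:.5f} vs true {np.mean(common**2):.5f}")
```

Longer records shrink the residual noise floor as 1/sqrt(n), which is the whole game in these correlation measurements.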

LIGO strain data is only being collected and shared at 16,384 readings per second. So for earth based and solar system imaging, it is not really very good. And it is too large and too costly. I managed to convince them to share a little more, and to start sharing their “vibration isolation” and “environmental” data. They go to great lengths to remove all the earth based “gravitational noise” and only care about things far away. At the rate they are going, it will take them ten years to get up to sharing practices that others are already using. The geophysical groups are moving much faster and integrating more quickly. I studied gravitational detection at UMD College Park, so I studied with and met some of the founders of LIGO. But they were all aiming at a massive project that would pay decades of graduate student and researcher salaries.

This is getting a little off-topic. The kinds of cameras that ZWO is using can be used universally for many things besides telescopes. With a little effort the sensors can be used for noise imaging, 3D imaging, and much more. I have been looking at the implications of the many live webcams on the Internet for the last couple of years. That live sharing of images could be extended to laboratories, microscopes, machine and process monitoring, phenomena study, and algorithm development generally. I keep asking different people to start putting all sky cameras on all the satellites going into orbit, and on all the space vehicles. We fight for tiny bits of the universe through an ever more turbulent atmosphere, when we could put globally shared all sky (in the full sense) cameras and sensors in space.

If you have specific things to talk about, you can send me a private message. I don’t know this ZWO group that well. I don’t want to get outside their normal discussions. I think noise in image sensors is a fascinating and useful approach to many problems and applications. But they seem to mostly want to serve fairly traditional optical astronomy. I don’t see any serious efforts at raw data sharing, archiving, collaborative algorithm development, and other things that indicate the start of global communities around a topic.


Susan Parker,

I get to work on the most interesting things. Early this morning it was “greening the deserts”. Then a few hours helping people find their birth parents, and trying to simplify and improve global DNA genealogy sharing and practices.

I think I found a way to use synchronized cell phones (a global network) and cameras to track the moon using gravity, with the cameras covered and using the dark currents and noise. The main signals are magnetic noise from lightning, the magnetosphere, and human sources; gravitational noise from many sources; and correlation of high frequency noise from a differential measurement of the “moon” against a local reference volume in the vacuum.

You can find some of what I am doing by searching “Richard Collins, Director, The Internet Foundation”. I have to deal with the whole of the Internet, and people put so many things on it now, for so many purposes. I even see a book I edited. But I will try to stick with ZWO issues here, though ANYTHING that attracts interest to a group counts on the Internet.

I am buying lenses to make my own microscopes and telescopes for lab use and teaching. I wrote camera tools that run in the browser to get my own statistics. They can also read any video from the Internet, so I am looking at all sky cameras to summarize days long records, and to register and identify stars, planets, the sun, the moon, comets, lightning, clouds, planes, and satellites. I don’t have any telescopes of my own (downtown Houston: too many trees, lights, clouds, and fog), but I can use most anything on the Internet. The kind of per-pixel statistics involved is sketched below.
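A rough batch analogue of those per-pixel statistics (my actual tools run in the browser; this Python version with synthetic frames just shows the bookkeeping): a running mean and variance per pixel, which separate fixed-pattern offsets from temporal noise.

```python
import numpy as np

# Running per-pixel mean and variance over a frame stream (Welford's
# method), so nothing but two frame-sized accumulators is kept.
def pixel_stats(frames):
    mean = np.zeros_like(frames[0], dtype=np.float64)
    m2 = np.zeros_like(mean)
    for k, f in enumerate(frames, start=1):
        delta = f - mean
        mean += delta / k
        m2 += delta * (f - mean)
    return mean, m2 / max(k - 1, 1)    # per-pixel mean, variance

# Synthetic dark frames: a fixed offset pattern plus temporal noise.
rng = np.random.default_rng(1)
offset = rng.uniform(10.0, 12.0, (240, 320))
frames = [offset + rng.normal(0.0, 2.0, offset.shape) for _ in range(32)]
mean, var = pixel_stats(frames)
print(f"mean offset ~{mean.mean():.2f}, temporal sigma ~{np.sqrt(var).mean():.2f}")
```

The per-pixel mean map is the fixed pattern (dark current and offsets); the variance map is the temporal noise that correlation methods then work on.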

If there is anything I can help you with, please ask. I learn by trying new things. It is hard to break out of our favorite topics and ideas.

Richard Collins, Director, The Internet Foundation


