Project update – omega squared for high sampling rate gravimeters

Accelerometers and gradiometers get better at higher frequencies; use the Internet to multiply capabilities many fold.

https://www.researchgate.net/project/Solar-System-Gravimetry-and-Gravitational-Engineering

These “updates” are just things I think are important to remember, and what I can find time and energy to write down. If you see something you disagree with, write me or make a comment. I got a notice about a comment somewhere in the updates, but the link did not take me there, and I could not find it. So messages or email or Twitter might be best.

If you look at the noise model for LIGO-type signals, or for seismometers, which can be operated as position, velocity, or acceleration sensors, you see in LIGO this low noise at basically milliSps to kiloSps sampling rates. For the analog components, millihertz to kilohertz. This is because they want to wipe out all the “Newtonian noise” from the Earth and the solar system, and only look far away.

Anyway, human organizational foibles aside, if you have the Fourier transform of a 3D gravitational signal passing through your sensor in a particular direction, or you are focusing an array of gravimeters on a single point (in the convection zone of the Sun, on a region of the boundary of the Earth's inner core, on the surface of the Earth following seismic waves or tsunamis, on voxels in the atmosphere or in ocean currents), then each frequency component is the usual sin(omega*t).
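
To make the “focus an array of gravimeters on a single point” idea concrete, here is a minimal sketch. Everything in it is my own illustration, not project numbers: the geometry, the noise level, and the propagation speed (I use a seismic-like speed so the delays are easy to see; a purely gravitational signal would travel at c). The mechanics are just: compute the delay from the chosen point to each station, shift each stream by that delay, and stack.

```python
# Minimal delay-and-sum sketch for focusing an array on one point.
# All values below are illustrative assumptions, not project parameters.
import numpy as np

v = 6.0e3                   # assumed propagation speed, m/s (seismic-like; gravity would be c)
fs = 100.0                  # samples per second
T = 60.0                    # record length, s (a whole number of signal periods)
t = np.arange(0, T, 1.0 / fs)
f0 = 1.0                    # the frequency component we focus on, Hz

focus = np.array([0.0, 0.0, -5.0e5])           # a point 500 km down (illustrative)
sensors = np.array([[0.0, 0.0, 0.0],
                    [2.0e5, 0.0, 0.0],
                    [0.0, 2.0e5, 0.0]])        # three surface stations, m

delays = np.linalg.norm(sensors - focus, axis=1) / v   # travel time to each station, s

rng = np.random.default_rng(1)
# Each station records the same sin(omega*t) component, delayed, buried in noise.
streams = [np.sin(2 * np.pi * f0 * (t - d)) + rng.standard_normal(t.size)
           for d in delays]

# Delay-and-sum: advance each stream by its delay and average. Integer-sample
# shifts and a record that is a whole number of periods make np.roll harmless here.
shifts = np.round(delays * fs).astype(int)
focused = np.mean([np.roll(s, -n) for s, n in zip(streams, shifts)], axis=0)
unaligned = np.mean(streams, axis=0)           # same stack without the delays

reference = np.sin(2 * np.pi * f0 * t)         # what the source emitted
print("one noisy stream, rms error:", np.std(streams[0] - np.sin(2 * np.pi * f0 * (t - delays[0]))))
print("unaligned stack,  rms error:", np.std(unaligned - reference))
print("focused stack,    rms error:", np.std(focused - reference))
```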

If you use a strain sensor like LIGO (position), you have to have sensitivity to really tiny displacements. But if you use a high frequency (Gsps or GHz) and a gravimeter (a sensitive accelerometer or vibration sensor), then the time derivative gives you an omega multiplier for the velocity, and then an omega squared multiplier for the acceleration.
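
To make the omega and omega squared factors concrete (the small helper below is just my illustration): for a displacement x(t) = A*sin(omega*t), the velocity amplitude is A*omega and the acceleration amplitude is A*omega^2, with omega = 2*pi*f.

```python
# x(t) = A*sin(omega*t)  ->  dx/dt = A*omega*cos(omega*t)
#                        ->  d2x/dt2 = -A*omega^2*sin(omega*t)
# so each time derivative multiplies the amplitude by omega = 2*pi*f.
import math

def derivative_gains(frequency_hz):
    """Velocity and acceleration amplitude gains relative to displacement."""
    omega = 2.0 * math.pi * frequency_hz
    return omega, omega ** 2

for f in (1.0, 1.0e3, 1.0e9):                  # 1 Hz, 1 kHz, 1 GHz
    w, w2 = derivative_gains(f)
    print(f"f = {f:7.0e} Hz   omega = {w:.3e}   omega^2 = {w2:.3e}")

# At 1 GHz: omega = 6.283e9 and omega^2 = 3.948e19, so a displacement amplitude
# of 1e-18 m corresponds to an acceleration amplitude of about 39 m/s^2.
```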

I always have to stop and remember: the frequency in hertz times 2*pi radians per cycle is the angular rate. So 2*pi*1 Giga is 6.283E9, and its square is 3.95E19.

Every sensor is unique. There are lots of common elements for global sensor arrays; otherwise I would not try to use them all. But when you get to any one technology, the creativity and persistence of the groups matters a lot. Often it is a character trait – caring, wanting to help others. There are lots of people “going for the prize”. Maybe schools and media are all about “winning” and “the big one”. But for society as a whole, those things tend to wash into turbulence and random chance.

I am working hard on global policies and standards for the Internet: 24 years of studying, drafting, and testing policies at all scales, for all sites and groups, and for those who are not on the Internet. A few simple changes will make a large difference.

For instance, if the people who work on “quantum” and “AI” and “fusion” would quit trying to “win the big one” for their school, for themselves, for some tiny rewards, then the whole human race might win. And these days and centuries, it seems the game is serious. Will billions die each year? Maybe millions. Nothing to evolution. On the global scale, with our current social values, people matter not at all. In my Internet, they all matter.

Why am I talking about the Internet when this is a “gravitational engineering” project? Because ideas by themselves are absolutely useless. No number of E = mc^2’s or big ideas matters at all if people are starving, or flooded, or dying from AIDS or Covid or the latest global pandemic. Mostly physics hides from such difficult problems. That is why the LIGOs are in their ivory towers with the gates drawn tightly closed (I am exaggerating for effect). I switched from gravitational physics as a career, very deliberately, to work on global issues: global population, country planning, global economic modeling, global labor force, global education, global energy issues (climate change, clean air, alternative fuels, hydrogen, nuclear, etc etc etc). I cannot write it all down. You have not been there; I would sound like Marco Polo, describing with common words things you have never seen or tried to solve yourself.

Just remember that the higher time derivatives multiply the signal by large constants if you use high frequencies or sampling rates. And that also applies to disaggregating sources and detectors into spatially distributed networks. One reading of one variable is OK, but it is nearly impossible to reference or verify. But work with ALL the sensors and data streams, get ALL the groups using common units and best Internet sharing practices, and there are many orders of magnitude available. People work at milliAmperes and then struggle to get to kiloAmperes. But almost ALL of the domains (Kilo, Mega, Giga, etc etc etc) are well documented and available quickly – if you follow Internet best practices for sharing.
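
One standard way to put a number on the “work with ALL the sensors” gain (my illustration, not a project result): if N stations record the same component with independent noise, coherently averaging them cuts the noise by roughly sqrt(N), so a million cooperating instruments buy about three orders of magnitude on top of whatever each one can do alone.

```python
# Rough illustration: averaging N independent noisy copies of the same weak
# sinusoid reduces the residual noise by about sqrt(N). Values are assumptions.
import numpy as np

fs, T, f0 = 100.0, 30.0, 1.0
t = np.arange(0, T, 1.0 / fs)
signal = 0.01 * np.sin(2 * np.pi * f0 * t)     # weak common signal, amplitude 0.01
rng = np.random.default_rng(2)

for n_sensors in (1, 10, 100, 1000):
    stack = np.mean(signal + rng.standard_normal((n_sensors, t.size)), axis=0)
    # Amplitude estimate: project the stack onto the known sinusoid.
    est = 2.0 * np.mean(stack * np.sin(2 * np.pi * f0 * t))
    print(f"{n_sensors:5d} sensors: residual noise ~ {np.std(stack - signal):.4f},"
          f" amplitude estimate {est:.4f}")
```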

My rule of thumb for the Internet is that ANY group or topic can usually expect about 250 times improvement. That can be speed (doing in a day what ordinarily takes a year) or doing in a few weeks what is now taking decades. That was not possible five or ten years ago, but now it is. The costs have come down, the methods are improving, the details are easier to understand.

Did you ever play “Lunar Lander”? If you try to fly it as a human, you will mostly crash, until you memorize the things that work. Most of the problems at global scale are that kind of thing. They are beyond everyday human comprehension – not beyond human comprehension, it is just that most people are not willing to work hard enough to learn, practice, refine, and build social algorithms that solve those kinds of problems.
 
I am pretty smart, but the problems I wanted to work on required computers and sensors that were not available for many decades. I have had to wait, watch, and try to encourage key technologies and methods just to get the basics. I studied random neural networks in high school (55 years ago). When I took linear algebra and other courses, I learned that I could see solutions in seconds that others could not see at all. I wanted to put that into the computer, into algorithms and systems and devices. I took a graduate course in Artificial Intelligence when I was a sophomore. I was really disappointed because they only used canned methods and did not have the methods evolve. The emphasis still is on making AI practitioners experts and professionals, storing in their memories stuff that can be sold at the highest prices. What is needed is to put the methods and learning into computer memory and systems, and get it out of human brains and memories. Humans are no longer better at anything. They are cheap and learn quickly, but they forget. The systems reward large organizations with high staff numbers and big salary line items. Investors buy “lots of people doing something” even when there are other ways to go.
 
Richard Collins, The Internet Foundation

 


