Tokenizing and compressing, so humans and computers can solve large problems with finite memory – where it matters

Yi Ma @YiMaTweets Learning is all about maximizing information. We compress to learn and we learn to compress.
Replying to @YiMaTweets


Filed as Tokenizing and compressing, so humans and computers can solve large problems with finite memory – where it matters

Yi Ma, it is not maximizing information so much as maximizing the chance that the information is remembered and applied correctly in critical situations, especially where human lives are on the line. Where it matters, a lot. Where communications are limited, tokenizing to global open verified identifiers greatly compresses global-scale systems. That compression lets critical messages get through without error.

We compress so that our finite human memories can hold more in mind at once. There are many problems where “having the whole problem in mind at once” is a critical part of solving it. So tokenizing (one way of compressing) simply makes such problems solvable at all by unaided humans. A global open token is easier to remember and to apply consistently, so there is no ambiguity about what the pieces mean. In a game or a process, knowing the precise steps, practicing them, and then doing them exactly might save lives, or get the job done where it really matters.
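Here is a minimal sketch in Python of what tokenizing to shared identifiers does to message size. The identifier table and the “GID:” prefix are hypothetical, for illustration only; a real system would resolve names against a global, open, verified registry.

```python
# Hypothetical table mapping long, ambiguous names to short global tokens.
# In a real system this would come from a global open verified registry.
TOKENS = {
    "United States Geological Survey National Earthquake Information Center": "GID:USGS.NEIC",
    "International Federation of Red Cross and Red Crescent Societies": "GID:IFRC",
}

def tokenize(message: str) -> str:
    """Replace each known long name with its short, unambiguous token."""
    for name, token in TOKENS.items():
        message = message.replace(name, token)
    return message

msg = ("Earthquake alert from the United States Geological Survey National "
       "Earthquake Information Center; relief request routed to the "
       "International Federation of Red Cross and Red Crescent Societies.")
short = tokenize(msg)
print(len(msg), "characters ->", len(short), "characters")
```

The point is not the byte count alone: a verified token has exactly one meaning, so the receiver cannot confuse it with a similar-sounding name when the channel is noisy or the reader is under stress.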

Compressing to learn: there are problems where the steps are hard for humans to remember exactly, but a correct resolution requires that the steps be done precisely at the right time. The movie Hidden Figures, about women doing calculations “by hand”, is a good example. By constant practice, and by their skill in remembering things, they could come to solutions that got humans safely to orbit and back. Getting any one piece wrong could mean the difference between life and death.
 
There are many life-critical things in life. Knowing what to avoid, where to go, how to find things, how to do things: these are good survival skills. If you have no food, will you ask your neighbors? In mathematics, the symbols for things are small, and if your brain is trained to encode instantly, precisely, and without error, a finite human brain can hold all the pieces in mind, in order to make correct decisions and choices. We have hierarchies where data is collected and encoded, put into computers, and filtered to the right people who can hold things in mind. They can then make good decisions. But if the computer systems are using words that can be ambiguous, those people (with their finite memories allocated) will not be able to solve the problem. We see that already in AI applications that might not have stored enough of the original data to use it, because the sources of the data are lost. Literally not stored, or not accessible.
 
It is hard to know what you are aiming to do. Your posts are short.
 
I found that datasets and systems that were built “by accretion” by humans can generally be compressed by a factor of about 200x by fairly simple algorithms. So a problem with a billion items is likely to compress, losslessly, to about 5E6 (5 million) items. When all you have are computers with their limited memories, and the job “has to be done exactly right or people die”, you encode where needed, leave nothing out, and likely pray constantly that you do it right.
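A minimal Python sketch of the mechanism, using a synthetic log as a stand-in for data that grew by accretion. The ratio it prints depends entirely on the data; it illustrates why repetitive, accreted records compress so well, not the 200x figure itself, which is an empirical observation from real systems.

```python
import zlib

# Synthetic stand-in for accreted data: near-identical records that
# repeat structure with small variations, as accretion tends to produce.
records = [f"2024-01-01T00:00:{i % 60:02d}Z sensor_7 status=OK value={i % 10}\n"
           for i in range(100_000)]
raw = "".join(records).encode()

# A simple, widely available lossless compressor is enough.
packed = zlib.compress(raw, 9)
print(f"{len(raw):,} bytes -> {len(packed):,} bytes, "
      f"about {len(raw) / len(packed):.0f}x")

# The arithmetic from the text: a billion items at roughly 200x
# compresses losslessly to about 5E6 items.
print(f"{1_000_000_000 / 200:,.0f} items")  # 5,000,000 = 5E6
```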
 
In large meetings, in large organizations, with complex problems evolving quickly, I have learned to listen and read, encode everything efficiently, and then summarize it in my mind as one scene, one movie, or one dynamic process, and then give directions that work for the whole.
 
Please try to remember what I am writing now. I think it might make a difference to you and to the people around you.
 
Richard Collins, The Internet Foundation
 
 
 
 
 