Biases are stored in large language models because of too-narrow input data and the lack of a global open permanent memory

This most likely comes from the strong social biases embedded in the training data, which mostly comes from open, free sources. Just as “human languages” can be learned at a one-shot, almost unconscious and intuitive level, so can many subtle conventions and choices. I think of it as a kind of spectrum where the details at the micro level are important, but so are those at the milli, centi, kilo, mega, and giga levels.

I try to get the AI groups to stop wasting decades learning something basic and important for human species survival alongside machine intelligence: NEVER use one-shot alone. Even the youngest human child learns early and often: “keep track of your sources”, “do not trust what you hear”, “never talk to strangers”, “measure twice, cut once”, and “memorize lessons exactly”.
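As one small illustration of what “keep track of your sources” could mean inside a machine system, here is a minimal sketch of a claim record that always carries its provenance. The field names and the example values are my own assumptions for illustration, not an existing standard:

```python
# A minimal sketch of "keep track of your sources": every stored claim
# carries its provenance. The fields here are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class SourcedClaim:
    text: str          # the claim itself
    source_url: str    # where it came from
    retrieved: date    # when it was recorded
    confidence: float  # 0.0 (hearsay) to 1.0 (verified measurement)

claim = SourcedClaim(
    text="Water boils at 100 C at one standard atmosphere.",
    source_url="https://example.org/reference",  # placeholder URL
    retrieved=date(2024, 1, 15),
    confidence=0.99,
)
print(f"{claim.text}  [{claim.source_url}, {claim.retrieved}]")
```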
 
The core is not that large, and there are ways to precisely integrate information from many varied sources. But they are at least as difficult as calculus, differential equations, public accounting, and finance. “One-shot” is lazy and error-prone. Just as in those fields, young people, new workers, and startups all need to be careful. Global integration is needed (all countries, all human languages, all knowledge, all domain-specific languages). It is not hard; it just takes conscientious and focused effort by all humans and human groups.
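One concrete, long-established way to precisely integrate the same quantity reported by many varied sources is inverse-variance weighting, where more certain sources count for more and the combined uncertainty shrinks as sources are added. A minimal sketch, with sources and numbers invented purely for illustration:

```python
# Inverse-variance weighting: combine one quantity reported by several
# independent sources, each with its own stated uncertainty.

def integrate(estimates: list[tuple[float, float]]) -> tuple[float, float]:
    """Each entry is (value, standard_error). Returns the combined value
    and combined standard error, assuming independent sources."""
    weights = [1.0 / (se * se) for _, se in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, (1.0 / total) ** 0.5

sources = [
    (9.81, 0.05),   # source A: precise lab measurement
    (9.79, 0.20),   # source B: rougher field estimate
    (9.90, 0.50),   # source C: old textbook value, large uncertainty
]
value, err = integrate(sources)
print(f"combined: {value:.3f} +/- {err:.3f}")
```

The point of the sketch is that careful integration keeps every source's uncertainty explicit, instead of trusting whichever answer happened to arrive first.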
 
Richard Collins, The Internet Foundation