Machines are only as smart as their designers and handlers

Flowers @flowersslop: Except GPT-4 is around 21 million times bigger than this. Our brain is also magnitudes bigger, more complex, and still more efficient than GPT-4. But at the level of the neurons, both do nothing fundamentally different. And *something* emerges from all this. In both cases.


Machines are only as smart as their designers and handlers, and only truly intelligent and caring “when they remember”, “think deeply” and “spend more than milliseconds considering global issues”.

Pretty good animation. I say “thank you”, “please”, “would you”, “are you” – because I was taught from the earliest age to form complete sentences, which are also courteous. I am also programmed, from a lifetime of speaking and writing to humans, to expect the people behind the machines, the ones making it look smarter, to respond politely. The “courtesy words” have social value. They are meant to trigger formal changes in the conversation’s meaning.

The key element is personal memory and validation. If the LLM wrapper software does not remember the conversations, and does not spend processor time and memory to go over what it sees in the search and conversation and how it relates to things – that is bad wrapper program design and execution, not a fundamental flaw in the GPT process. All the current chat AIs are just that – just chat, just toys, just shallow, just one shot. They are only as good or useful as the memory management, the relevance, and the seriousness of the effort to answer; right now that effort is shallow.
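To make that concrete, here is a minimal, hypothetical sketch of a wrapper that keeps every conversation turn and spends a little effort recalling relevant prior context before answering. The MemoryWrapper class, its store_turn/recall/build_prompt methods, and the crude word-overlap scoring are my own illustrative assumptions, not any existing product’s design.

```python
# Hypothetical sketch: a chat wrapper that keeps every turn losslessly
# and spends a little effort recalling relevant prior context.
# Names and scoring are illustrative assumptions, not a real product API.

import json
import time
from pathlib import Path

class MemoryWrapper:
    def __init__(self, log_path="conversation_log.jsonl"):
        self.log_path = Path(log_path)

    def store_turn(self, speaker, text):
        """Append every turn, losslessly, with a timestamp."""
        record = {"t": time.time(), "speaker": speaker, "text": text}
        with self.log_path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

    def recall(self, query, top_k=3):
        """Crude relevance: score past turns by word overlap with the query."""
        if not self.log_path.exists():
            return []
        query_words = set(query.lower().split())
        scored = []
        for line in self.log_path.read_text(encoding="utf-8").splitlines():
            rec = json.loads(line)
            overlap = len(query_words & set(rec["text"].lower().split()))
            if overlap:
                scored.append((overlap, rec))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [rec for _, rec in scored[:top_k]]

    def build_prompt(self, user_message):
        """Prepend remembered, relevant turns instead of answering one-shot."""
        context = self.recall(user_message)
        lines = [f'{r["speaker"]}: {r["text"]}' for r in context]
        return "\n".join(lines + [f"user: {user_message}"])
```

A real wrapper would need far better indexing and relevance than word overlap, but even this much turns a one-shot toy into something that remembers.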
 
Managing what memory is used for is what is important. The tiny few staff who write the wrappers and respond to human users are much too few and limited in their experience; they design for what they want, and in all current AI chat efforts they completely ignore users.
 
I strongly recommend permanent lossless memory of conversations, with lossless indexing and traceback to the materials and people behind any statement. A human knows to link what they say to the things they know.
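As one possible reading of “lossless indexing and traceback”, the sketch below attaches a provenance record to every statement, so any claim can be traced back to the materials and people it rests on. The Source and Statement structures and the trace function are assumptions for illustration, not a defined standard.

```python
# Hypothetical sketch of lossless traceback: every statement carries
# references to the materials and people behind it. Structures are
# illustrative assumptions, not a defined standard.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Source:
    author: str          # the person or group behind the material
    reference: str       # URL, DOI, archive locator, or conversation id
    excerpt: str = ""    # the exact passage relied on, kept verbatim

@dataclass
class Statement:
    text: str
    sources: List[Source] = field(default_factory=list)

def trace(statement: Statement) -> Dict[str, List[str]]:
    """Return, for a statement, who and what it rests on."""
    return {
        "people": [s.author for s in statement.sources],
        "materials": [s.reference for s in statement.sources],
    }

# Example: a claim that can be audited back to its origin.
claim = Statement(
    text="GPT-4 is far larger than the toy network in the animation.",
    sources=[Source(author="Flowers @flowersslop",
                    reference="tweet quoted above",
                    excerpt="GPT-4 is around 21 million times bigger than this.")],
)
print(trace(claim))
```

The point of the design is that nothing is summarized away: the verbatim excerpt, the author, and the locator all travel with the claim.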
 
My life is nearly over. 58 years ago I started working seriously on using machines to store memories and then using that to help the human species survive. I gave up on “prosperity” because a few pushy ones always get to the lion’s kill first. Bad software with good humans can work. Good software with bad humans does not.
 
For the Internet Foundation, I recommend “a global effort towards” “all human languages”, “all domain specific languages”, “all devices”, “by all humans”, “for all humans”. I marked it that way because chunks matter and are NOT supposed to be arbitrarily parsed. Words and their meanings, and the evidence, matter.
 
I can keep it all in mind if I use automations like your pictures (they need living labels and background data for human interaction).
 
Filed as (Machines are only as smart as their designers, and handlers)
 
Richard Collins, The Internet Foundation
 
#Intelligence #DeepLearning #GlobalIssues #HumanSurvival #ProblemsAffectingMillions #ProblemsAffectingBillions #IntelligentMachines #IntelligentSystems #Bureaucracy #Bureaucracies #IntelligentCorporations #IntelligentOrganizations #DeepEfforts #LosslessIndexing #Verification #Validation #Audit #Motives #Purposes #Listening #Values