Hold systems and their makers and operators accountable, require AIs to be independently certified for jobs

Katie (Kathryn) Conrad @KatieConradKS: "I think you're approaching this from a different angle than your average SV guy trying to sell us on the idea that generative systems are the same in some fundamental way as human intelligences. I take your approach as holding systems accountable."


Katie, for the last 27 years of the Internet Foundation, I have tried to see why global-scale problems are not being solved. I ended up looking at why all human knowledge is not being used efficiently and responsively.

I started with “artificial intelligence” in high school (1963-1967). One of my science fair projects was “random neural network optimization,” which I was using to study how computer circuits of infinite size could be compressed in lossless ways, and how data on the actual operation of human nerves and signals could be inverted and modeled with elementary (and buildable) components.

By the time I was a sophomore at CWRU, they let me take a graduate course in Artificial Intelligence. But my approach was more attuned to how humans and systems learn, while they were focused on selling the computer hardware they had.

 
I ended up working in global population studies, international development, modeling global economies, and building “sustainable systems”. Since I was in the middle of the discussions when that term came into the international development community, I have wondered whether it was my term in the first place.
 
I set up the Famine Early Warning System for USAID and the State Department, though many countries and organizations were involved. We found that “famine” is a tractable problem that is easy to map and monitor, and often one where it is possible to intervene. The exception is where despotic leaders and groups are allowed to do whatever they want. “Because the UN never intervenes, no matter what.”
 
In the 27 years of the Internet Foundation (founded 23 Jul 1998, though I worked on it before), I found that the reason large organizations stagnate is that they rely on human memory and actions for organizational intelligence. Humans cannot work 24/7, cannot hold enough information in mind, and cannot make decisions fast enough.
 
The reason that organizations fail (governments are organizations) is that it is not possible to audit human-based systems at current real-time speeds. Covid is a good bad example: for a simple infectious disease, the response was so slow because all the information was forced through countries’ legacy human-in-the-loop systems just to gather and record it. Leaving the analysis and decisions for weeks afterward meant a sluggish and irresponsible global response.
 
Holding AIs accountable, and holding the humans controlling the training and behavior of AIs accountable, is just part of a global situation where small groups can co-opt and distress the entire world economy and its systems. One person can spend a few hundred billion now on a whim, and there is no recourse or protection.
 
I am not exactly answering your question. “Independent checks and balances” are needed, but hard to implement. I feel that most of the many tens of armed conflicts and wars now could be monitored and corrected before the usual hundreds of thousands or millions are killed or left to die. Those famines in Africa in the 1980s were driven by human groups, not only by locusts, drought, and poverty.
 
I am getting ready to retire again. No group today has any intention of letting go of its perks and benefits to help all humans. The rich want to get richer, and the powerful want more. I think the Internet can provide tools, data, and methods at “global best” level, constantly updated. But the components of the Internet were cobbled together by many independent groups and are an absolute mess when closely and systemically studied, as I have done.
 
What does this have to do with AIs? The ones now use free (or “no one can see where we got it”) materials from the Internet. I spent much of the last two years testing all the AIs that are advertised heavily and might get into use by public institutions, governments, corporations, large groups, and communities. They cannot solve simple mathematics and engineering problems, and most real-world problems, because they are trying to derive mathematics from bad examples. They do not engage the people who work at real jobs and real global tasks in the design, monitoring, correction, and verification of the systems.
 
It is late, and I cannot summarize nearly 60 years of working on how knowledge is gathered, stored, used, and applied. If the organizational methods are not trustworthy, nothing can assure that the “AIs” will behave in a humane, caring, and trustworthy fashion.
 

I had an interesting chat with GPT this morning about groups working on real faster-than-light problems. They are not giving up knee-jerk just because we did not have the computers, control systems, and energy densities needed. I wish I could share it with you. I would like there to be ways to discuss and work on things that take a hundred years for a few hundred people to work out, or a few years for a few million.

But OpenAI refuses to share the conversations in global open form for collaboration and discussion, to gather and improve our collective intelligence on topics where hundreds of millions of humans are working and billions are affected.

 
Richard Collins, The Internet Foundation
 
 
 
 