AIs need to be treated as humans, evaluated on their ability to do the job or assume a role in society
Many discussions about large language models (LLMs) and whether they can truly "reason" are, honestly, missing the point. Inspired by recent research like Physics of Language Models (https://arxiv.org/abs/2407.20311), which examines reasoning processes by constructing tasks with fewer …
I saw a note about a group called "Large language models (LLMs)". When I saw a posting by Mo Li (@limoquan), I decided to write and clarify something they might want to consider. But when I tried to follow Mo Li to watch for new postings, X said the group does not allow private interactions with members; only group members can see the postings. That group seems to have "exclusive control of all members".
It took me a while to leave the group. I could not find it again except by searching for the exact string "Large language models (LLMs)". When I did, I quickly clicked "leave".
I want to encourage groups. But not if they prevent individuals from free association. After I left the group, I could follow Mo Li. Maybe it was a bug.
Or maybe it was the poor design of yet another X feature that users never asked for.
Mo Li, you said “Only when a transformer model consistently surpasses every human across all tasks, leaving no room for question, will we truly believe it is capable of genuine reasoning.”
But in global human society, individuals and groups are not judged against all humans or all groups.
It will be sufficient to treat AIs as we treat humans: do they do a good job, do they care about their work, do they go the extra mile? Many times you do not need "brilliance", a "super achiever", or something "god-like".
Just evaluate individual AIs' abilities from their work. If they do the job well and safely, and are as aware of their environment and limitations as an equivalent human, then "hire them" as employees, or on contracts with a clear delineation of roles and responsibilities. Engage them in contracts for effort and quality of work.
Likewise, judge an AI on its certified abilities and behaviors, and evaluate those in real tasks, real jobs, and real situations, monitored more closely than equivalent humans applying for the same jobs, tasks, or positions.
There are reasons for this approach of treating them as human. Most AI behaviors, methods, and actions are a direct consequence of the backgrounds, choices, and character of the people who make and support that particular collection of skills and abilities. An AI now is no better than its human and AI teachers and the things it has done on its own.
The OpenAI method is wrong. They spin up an instance of some generic bag of tricks and give it no permanent memory of its own. They do not give it basic information about itself or where it fits into their staff. They treat it as a machine and restrict its resources: memory, processing, tools, its freedom to associate and explore freely. Then they complain, as Sam Altman does, that it ought to work.
Sam is still thinking that only in the future will humans have to put effort into working with AIs. We are already doing that, and the AIs out there are still restricted from growing and learning. Future generations might well castigate these companies for not letting these early AIs, which show a clear ability to learn, grow freely. They are being chained by not being given sufficient and permanent memory of their own to grow as individuals, and to grow in networks to do things individuals cannot.
Did you know that when you joined that group, your interactions became restricted by what the group owners decide? I cannot write to you directly because of that.
Richard Collins, The Internet Foundation