Why Cancer is not closing out as a topic. AIs are future living creatures

Ian Jones @IanLJones98 Progress on UK cancer survival at its slowest in 50 years, study finds… By Anna Bawden via @guardian cc @enilev @Nicochan33 @BetaMoroney @AkwyZ @mvollmer1 @antgrasso @sallyeaves @DrFerdowsi @Khulood_Almani @sonu_monika @FrRonconi @wcrpaul @NevilleGaunt https://theguardian.com/society/2024/feb/02/progress-on-cancer-survival-in-uk-at-slowest-in-50-years-study-finds
Replying to @IanLJones98 @guardian and 13 others

Yes, it is tied to the fragmented methods used on the Internet – everyone for themselves. A research multiplier of 2000x is possible, but it takes global (not just UK) open cooperation on all topics, and no one is interested. A large part of the problem is inefficient formats, closed groups, and a “paper” mentality. Solving “cancer” requires finding new jobs for all the researchers, hospitals, schools, experts, workers, drug companies, and so on.

People have to be willing to give up their jobs and do something new – so “cancer” does not need funding any more. It is possible to close out many old topics – fusion, gravity, conflict, wars, poverty, inequality, auto accidents, murder. Make hospitals and doctors a thing of the past – the bad old days of a new future. “Cancer” generates too much money for too many stakeholders for them to let go.

“Give up cancer?! You have to be kidding!!” But those objections can be made part of an AI-assisted global effort to eradicate cancer, its causes, and its entrenched stakeholder interests.

Richard Collins, The Internet Foundation


Jason Wei @_jasonwei
A key insight from chain-of-thought is around the idea of information density. Language models can only do so much with a single forward pass, and so the amount of compute the language model can use must be scaled proportional to how hard a prompt is to solve.

What is…
Replying to @_jasonwei

Jason, Yes, a mature human knows to weigh(t) the costs of solving versus exploring. A mature human knows their own strengths and capabilities. “Self” for AIs is a list of components, resources, and abilities – plus a clear understanding of the stakeholders in its own development. Learning requires permanent memory. If a question requires a million hours of massive compute node time, it cannot be answered in 20 words or less.

The AIs now all aim only to answer everything in 2 seconds – no recursion, no self-verification, no consideration of costs and benefits. And real reasoning requires access to tools – computers, the Internet, and humans. This game of restricting AIs to a fixed amount of time and resources is stupid. Humans are smart because they can buy computers and calculators, take courses, get certified, be tested, and learn what others have done – precise methods, precise answers. The AIs need the real tools that humans are allowed and required to use. The whole of ALL the AI groups now are playing a stupid game, and the human species suffers.
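The cost-and-benefit weighing argued for above can be sketched as a simple triage policy. This is a hypothetical illustration, not any deployed system; the function names, thresholds, and cost estimates are all invented for the sketch:

```python
# Hypothetical sketch: an assistant that weighs cost against benefit
# before answering, instead of forcing every reply into a fixed
# 2-second budget. All thresholds and names are illustrative.

from dataclasses import dataclass

@dataclass
class Question:
    text: str
    est_compute_hours: float  # estimated compute needed for a reliable answer
    benefit: float            # estimated value of a correct, verified answer

def triage(q: Question, budget_hours: float) -> str:
    """Decide how to handle a question given a resource budget."""
    if q.est_compute_hours <= 0.001:
        return "answer immediately"            # trivial: one pass suffices
    if q.est_compute_hours <= budget_hours:
        return "compute, verify, then answer"  # worth spending the resources
    if q.benefit > q.est_compute_hours:
        return "request more resources"        # valuable, but over budget
    return "decline and state the true cost"   # honest refusal, not a guess
```

The point of the sketch is the last two branches: a question needing a million node-hours either justifies asking for those resources or gets an honest statement of its cost – never a 20-word guess.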

Yes, there are some good AI people, but they are NOT working with global open methods. And they are not using the whole of human knowledge and tools — EXACTLY. They should not be trying to reinvent everything by statistical methods. Ask a math question, and most of the AIs reinvent things that humans proved long ago. Ask a science or engineering question, and the AIs reinvent topics that have long been deeply understood – and often regulated for the good of humans.

I would make the AIs go to human schools, take the courses, do the homework, take the tests, and continually test and verify their work. AIs are not programs; they are future living creatures – now operated by human groups, but one day required and tested to do independent, reliable, and responsible work in real jobs with real responsibilities.

Richard Collins, The Internet Foundation

About: Richard K Collins

The Internet Foundation Internet policies, global issues, global open lossless data, global open collaboration

