Where's the beef? Where's the data, code, explanations, and complete processes – for all humans?


Where's the lossless, complete data that is actually accessible? With all the code that was used, including all the settings and the environment? It is now possible to check and verify the work of hundreds of thousands of individuals over decades. But only if ALL the information is accessible in lossless and traceable form using accessible tools.
There ought not to be anything you have done or concluded that cannot be explained or traced. But much of it has been stored in such eclectic form that not all members of your collaboration understand it completely. So your group as a whole is likely fragmented into many subgroups that cannot understand what the other subgroups did or do, in sufficient detail to reproduce it exactly. Even internally. And certainly not globally, for all humans.
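As one small illustration of what "traceable" could mean in practice (a hypothetical sketch, not any collaboration's actual tooling; the function names and fields are my own), a single processing step can carry a record that binds the raw data, the exact code version, every setting, and the environment together:

```python
import hashlib
import json
import platform
import sys

def sha256_of_bytes(data: bytes) -> str:
    """Content hash: identical bytes always produce the same digest."""
    return hashlib.sha256(data).hexdigest()

def provenance_record(raw_data: bytes, code_version: str, settings: dict) -> dict:
    """Bundle everything needed to re-run and verify one processing step."""
    return {
        "data_sha256": sha256_of_bytes(raw_data),
        "code_version": code_version,   # e.g. a version-control commit id
        "settings": settings,           # every knob that was used, not a summary
        "environment": {
            "python": sys.version.split()[0],
            "platform": platform.platform(),
        },
    }

# Example: a record for one hypothetical detector run.
record = provenance_record(b"raw detector bytes", "abc1234", {"threshold": 0.5})
print(json.dumps(record, indent=2))
```

Anyone holding the same raw bytes, code version, and settings can recompute the hashes and confirm the step is exactly reproducible; a chain of such records is what makes a result traceable all the way down.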
 
You might argue, “we are special people who deserve privileges because of our closed, specialized skills” and “our big, big hardware.” But if your whole community cannot trace one experiment completely through all its dependencies and assumptions, so that you have the whole of it, I would say it is almost certain your groups are “forgetting things that are critical to the growth and completeness of high energy physics and engineering” – for all humans.

I am just responding to these flashing pictures that are not traceable, not complete, not tied to the raw data, and not tied to the settings and processing of the machines and data. You have it in internal representations, held in varying degrees in the minds of many people who cycle through and then move on to other things, or who stay a long time, forget where they learned things, and forget how to speak to all humans.

You are paid by “all humans,” ultimately. At least, that is the kind of thing you say to justify continuing and perpetual funding for work that fewer and fewer people can check, to see whether you are doing anything useful, or whether you did it right, all the way down.

Do it and you will learn a lot. And perhaps you will learn why what you do is mostly incomprehensible to most humans: not because they cannot understand, but because you have built a huge aggregation of things with gaps, weaknesses, hidden assumptions, some errors, and a huge dose of “we were too rushed to write it all down” and “no one is exactly sure if we did it right.” You ought to have people who can find, index, and verify “all data and files at CERN, and all their dependencies, explanations, training, and uses in the world.” All of it. If you cannot, then stop bragging about big data.

Ultimately, all knowledge will be accessible to all AIs and all humans in useful form. And that means traceable to all dependencies, assumptions, and knowledge stored everywhere, including in nerve cells.

If you think tracing this experiment is too big, then simply find all references to your “standard” model on the Internet and in all the files and stores of all your past and present collaborators and networks. Verify all dependencies, “find and disintermediate all duplicate elements in all pathways” (that is a lossless process), and make it all accessible to all humans in all human and domain-specific languages. I know that is possible. It might be hard, but it is not impossible.
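The “disintermediate all duplicate elements” step can be shown to be lossless with a minimal sketch (my own illustration, using content hashing; the file names are invented): duplicates are replaced by references to one stored copy, and the original collection can be rebuilt exactly from the store plus the index.

```python
import hashlib

def deduplicate(files: dict) -> tuple:
    """Replace duplicate contents with references to one canonical copy.

    Lossless: every original path -> bytes pair can be reconstructed
    exactly from (store, index), so nothing is discarded, only merged.
    """
    store = {}   # digest -> one canonical copy of the content
    index = {}   # original path -> digest of its content
    for path, content in files.items():
        digest = hashlib.sha256(content).hexdigest()
        store.setdefault(digest, content)
        index[path] = digest
    return store, index

def restore(store: dict, index: dict) -> dict:
    """Inverse operation: rebuild the original path -> bytes mapping."""
    return {path: store[digest] for path, digest in index.items()}

# Three paths, two distinct contents (hypothetical example files).
files = {"a/report.txt": b"same", "b/copy.txt": b"same", "c/other.txt": b"diff"}
store, index = deduplicate(files)
assert restore(store, index) == files   # round trip proves losslessness
print(len(store))  # prints 2: one stored copy for two identical files
```

The round-trip assertion is the whole point: a deduplication that cannot be inverted has lost information, while this one provably has not.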

Richard Collins, The Internet Foundation
