NIST employee considers Internet users “too stupid to understand, why share?”

I am reading https://www.nist.gov/srd/nist-standard-reference-database-71 and downloaded the program. But its Windows installer carries no verified publisher information, so Windows treats it as insecure.
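One basic mitigation, when a publisher posts a checksum alongside a download (I did not find one on that NIST page, which is itself part of the problem), is to verify the file hash locally before running the installer. A minimal sketch in Python; the installer filename below is hypothetical:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Return the SHA-256 hex digest of a file, read in fixed-size chunks
    so large installers do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (the filename is a hypothetical example, not an actual NIST artifact):
#   digest = sha256_of_file("STAR_setup.exe")
#   ...then compare digest against the publisher's stated checksum.
```

This only helps, of course, if the publisher actually states the expected digest somewhere trustworthy; code signing ("verified publisher") does the same job automatically.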

This looks like it was produced around 2010, and security and sharing policies have changed a lot in the last two years alone, let alone over more than a decade.

Is there an open source version of this program and its datasets? I want to check and possibly extend it, not work with a black box. Just because I can do it from scratch does not mean I want to. Running a program whose explanation of what is going on is vague is not very satisfying. If it is going to be used for teaching, are students simply told, "just trust it blindly, it is from NIST"?

I am not being critical, just trying to point out the problems of using old compiled programs. Also, it is for Windows only, but Linux, Python, Android, Apple, and many other platforms and computer languages are in use now. For the Internet Foundation, there are also the hundreds of human languages of the roughly 5.2 billion humans with some access to the Internet, and another billion who have no computer, no Internet, no written language, or face other barriers.

If the software, data, and papers were shared on GitHub.com or Data.gov, then the topic would have some chance of being used globally, and of collaborative communities developing.

There are the ESTAR and PSTAR programs as well, which I was also looking at. And, again, a black box of "use it, but you cannot see how we implemented it and how it is calculated" is not very useful.

Note: At https://physics.nist.gov/cgi-bin/Star/e_table.pl, for instance, the "download data" button for Hydrogen lists only the energy column and NOT the full table.
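As a workaround, the full table can be copied from the rendered page and parsed by hand. A minimal sketch, assuming the copied text is a plain whitespace-separated numeric table with a header line; the sample values below are illustrative placeholders, not actual ESTAR data:

```python
def parse_star_table(text):
    """Parse whitespace-separated numeric rows into a list of columns.
    Non-numeric lines (headers, comments) are skipped."""
    rows = []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        try:
            rows.append([float(p) for p in parts])
        except ValueError:
            continue  # header or comment line, not data
    if not rows:
        return []
    ncols = len(rows[0])
    return [[r[i] for r in rows if len(r) == ncols] for i in range(ncols)]

# Placeholder rows in the general layout of a stopping-power table
# (energy plus several stopping-power columns); NOT real NIST values.
sample = """Energy  Collision  Radiative  Total
1.000E-02  5.111E+01  4.222E-03  5.112E+01
1.000E-01  8.333E+00  5.444E-03  8.338E+00
"""
cols = parse_star_table(sample)
print(len(cols))   # number of columns recovered
print(cols[0])     # the energy column
```

This recovers all columns rather than only the energy column, which is what the download button should have provided in the first place.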

Richard Collins, The Internet Foundation


Hello Cindy,

Yes, I agree with you.  I just did not want to have to re-engineer the programs by reading lots of papers spanning many years.  The source code has everything, or ought to.  It is common now for published software not to match the published papers (out of date, incomplete, or containing small mistakes).

The transition to open software is slow. There are groups actively working in the open, where everyone sees how things are actually calculated.

I have been programming for just about 60 years now.  It is common to read the source code and find it does not match the papers.  I do not have years now to guess how certain things were done, when there might be several ways to implement them in the actual software.

I was tired when I wrote that, and was thinking about the last 26 years of the Internet Foundation. There has been much change, but groups still all make up their own methods and rules. NIST ought to know these things.

Nowhere did I say they are not doing a good job. I respect NIST and the people there.  But it is still hard to guess what is in the source code and to recreate the exact operation of that particular compiled version. One line different, one constant different, one small change in the source code, and the results would be different and untraceable.

I can derive the equations and data from scratch.

It is not that important. I was just curious how they had done certain things at low energies, and it is much easier to read the source code and look than to go through so many years of papers that ended up in a calculation I cannot see.  Machine code hides all that.  It is a true black box.  We had a black-box experiment in my third chemistry class, around 1966, and I hated it.  I got it exactly right, but I hated wasting time guessing for some game when the goal is common understanding.

If your attitude is that "all those unwashed billions are too stupid to understand," you are wrong. The world is interconnected now in ways that will dramatically change how all things are done. I am ashamed to hear a representative of a leading scientific and engineering organization say that out loud. Isn't that "let them eat cake"? Or "we are so smart, just believe us and trust that our program is the best possible, but you cannot see it"?

Actually, there is interest in all things now, globally. That is why I said that. But much of NIST's "shared" material, the background material, is not accessible on the Internet. Even if only ten thousand people were interested in this program, and it took each of them weeks to recreate what is in it, that would still be a waste.

How did they actually calculate things?  The programmers have a lot of latitude in how they interpret the many assumptions and ideas in those papers.  Now NIST is only one of many groups that worry about such things.  The ones that use open methods are the ones growing and getting funding. That is just how it goes.

Some of the background material is in publications behind paywalls that even I cannot access.  And if so, most of those billions, "the masses" as you called them, have no chance at all. Internet users will just go to Europe, China, and other places to find things, and ignore NIST, regardless of the hard work of your "rock star" researchers.

I have spent the last 26 years checking what those billions using the Internet actually do and what they are interested in. Many very good people are blocked by the casual methods of overly proud researchers. Millions, not tens or hundreds. But since you cannot know who they are, it is best to treat them all with respect, do the best you can to remove barriers, and share in forms that are traceable, not hidden or locked away.

You said, “These data are not meant for the ‘masses’ – the “5.2 Billion humans with internet access” who have absolutely no interest in these data and could not possibly understand or use these data.”

Richard Collins, The Internet Foundation

