“Write a parser to find a second author”? How compact can that be?
Commenting on https://x.com/_jasonwei/status/1910398763476320422
RichardKCollin2 wrote:
Used as a search engine, full LLM queries are inefficient. The AI could write the web scraper and run it; where the task is precise, more intelligent parts can be embedded as ordinary programs: “find a cat”, “look for the second author from _”. Call these “embedded AI algorithms for coding”.
Such a scraper can be run independently of the AI, at any time and at scale, where billions of web pages have to be processed.
It means distilling the essence of “find something more complex” into ordinary programming languages. What are the roots of “find a second author”? Right now that logic lives in SQL and regex and parse trees and things like that. It could live in really tiny neural nets and other things.
“Write a parser to find a second author”? How compact can that be?
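As one answer to that question, here is a minimal sketch in Python, assuming (hypothetically) that pages expose authors through `<meta name="citation_author">` tags, as many scholarly pages do; real pages vary widely, and a robust version would need fallbacks.

```python
# Sketch: how compact can "find the second author" be?
# Assumes authors appear as <meta name="citation_author" content="...">
# tags, a common but not universal convention on scholarly pages.
from html.parser import HTMLParser

class AuthorFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.authors = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "citation_author":
            self.authors.append(a.get("content", ""))

def second_author(html: str):
    """Return the second listed author, or None if there is no second."""
    finder = AuthorFinder()
    finder.feed(html)
    return finder.authors[1] if len(finder.authors) > 1 else None

# Hypothetical page fragment for illustration:
page = """<html><head>
<meta name="citation_author" content="A. First">
<meta name="citation_author" content="B. Second">
</head><body></body></html>"""
print(second_author(page))  # prints "B. Second"
```

About twenty lines of standard-library code, once the page's convention is known; the hard part is the convention, not the parser.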
Could the “second authors” all be encoded in the web pages themselves? That is the purpose of “use global open tokens for the whole Internet so the Internet is pre-tokenized for instant AI uses”. Not scraping unverified text, but pre-compiling and encoding the Internet as a whole.
That would allow the AI groups to standardize, index, and independently query the full Internet.
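The “global open tokens” idea can be sketched as follows, under the assumption of a single shared, published vocabulary; the vocabulary and page below are invented placeholders, not a real scheme.

```python
# Sketch of "global open tokens": if every page were encoded once with
# a shared, published vocabulary, independent AI groups could index and
# query identical token streams instead of each re-scraping raw text.
# SHARED_VOCAB is a hypothetical placeholder for such a vocabulary.
SHARED_VOCAB = {"find": 1, "a": 2, "second": 3, "author": 4, "cat": 5}

def tokenize(text: str) -> list[int]:
    # Unknown words map to 0 here; a real global scheme would cover them.
    return [SHARED_VOCAB.get(w, 0) for w in text.lower().split()]

# Two independent groups encoding the same page get the same stream,
# so an index built on token IDs is shared and verifiable.
page = "Find a second author"
group_a = tokenize(page)
group_b = tokenize(page)
assert group_a == group_b
print(group_a)  # [1, 2, 3, 4]
```

The point of the sketch: standardization lives in the shared vocabulary, not in any one group's pipeline.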
Richard Collins, The Internet Foundation
Commenting on https://x.com/ai_ctrl/status/1910690884229771502
There is a 100% chance that humans using AIs are trying to take over NOW. If AIs can operate on their own, that is simply an escalation of what humans already face. How to hold humans and their corporations accountable for their use of AIs comes first. Not “super-intelligence”, but humans.
Commenting on https://x.com/AllenInstitute/status/1910410726071546358
That is NOT complex: just a few dozen major branches, and a tiny 84,000 nodes and their links. Yes, a good step, beautiful, but only a pretty picture for now. Share those neuron networks in an open format to make them useful to the 5.4 billion humans using the Internet. Make it low cost.
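“An open format” for such a wiring diagram can be as plain as node-link JSON, one common open convention for graphs; the nodes and links below are invented placeholders, not Allen Institute data.

```python
# Sketch: sharing a neuron-wiring network in an open, tool-agnostic
# format. Node-link JSON is one common convention; the values below
# are hypothetical placeholders, not real connectome data.
import json

network = {
    "nodes": [{"id": "n1"}, {"id": "n2"}, {"id": "n3"}],
    "links": [{"source": "n1", "target": "n2"},
              {"source": "n2", "target": "n3"}],
}

encoded = json.dumps(network)   # portable text any group can host cheaply
decoded = json.loads(encoded)   # any group can reload and query it
print(len(decoded["nodes"]), len(decoded["links"]))  # prints "3 2"
```

A network published this way can be loaded by any tool, in any language, with no special viewer, which is what makes it usable beyond a pretty picture.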