“AI” is still an ad word, not a real effort to solve the world’s problems once and for all

MatthewBerman @MatthewBerman  Reflection 70b just dropped and is beating every other model, including GPT4o and Claude 3.5.

How did this happen?

Here’s my conversation with Matt Shumer (@mattshumer_) and Sahil Chaudhary (@csahil28), the authors of Reflection 70b. https://pic.x.com/lfskkfycpd
Replying to @MatthewBerman @mattshumer_ and @csahil28


“AI” is still an ad word, not a real effort to solve the world’s problems once and for all
 
I had to ask Microsoft Copilot what “just dropped” meant. It turns out it is an ad/fad word introduced by sneaker advertisers around 2015, then picked up by music-industry advertisers. I bet it has music to go with it, and certain clothing and colors.
 
Even though Copilot, and all these “AIs”, are still mostly pull-string toys, they are sometimes more specific than Google search.
 
The Google AI summaries are not able to handle much; they still want you to run 20 or 50 queries for everything, because they get paid by the ad click. These AI “drops” seem driven by popular ads, “thought leaders”, “surprise”, and “presentation”, where “attractive”, “cute”, or “hip” means more for growth than solving real problems.
Are you a leader or a follower? A flack on a bandwagon, or someone looking deeply at a world that needs “real, trustworthy AIs”, not artificial or fake ones?
Are you going to keep playing catch-up, or leap ahead to the real things that matter for the whole human species? Please focus on the essentials of human lives and deaths, human purposes, and global human needs.

Hold accountable those groups still not working on true AI, when we need it most.

“Artificial general intelligence” is fake (artificial), vague (general), and not focused on essential threats and opportunities (true intelligence is efficient with human users’ time where it matters most). True intelligence solves world problems now; it does not let them linger for decades more.

Richard Collins, The Internet Foundation

Prateek Yadav @prateeky2806  First Llama-3.1, then Gemma2 and now deepseek.

Let me reiterate this, if you are not merging your candidate models in the post training phase then you are leaving almost free gains on the table. You will get better robustness and can get improvement across all metrics. x.com/aidan_mclau/st…
Aidan McLau @aidan_mclau · Sep 5
this is like a billion million christmas mornings for language model nerds https://pic.x.com/rd4qcjdgew
Replying to @prateeky2806

My Comment: Tweaking bad models with fundamental flaws at scale is a waste of everyone’s time. Get out of the pack and work on the purposes of a true AI, not on fights between toy models that will never solve real-world problems without endless human adjustments and fiddling. If you do not know the purposes of a true AI, you will never hit it.  CC: @aidan_mclau
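For context on the tweet above, and not as an endorsement of the practice: “merging candidate models in the post-training phase” usually refers to weight-space averaging of several fine-tuned checkpoints of the same base model (as in “model soups”). Below is a minimal sketch, assuming PyTorch state dicts; the checkpoint filenames and the uniform blend are illustrative assumptions, not details from the thread.

import torch

def merge_state_dicts(state_dicts, weights=None):
    # Uniform average by default; unequal weights give a weighted blend.
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        # All candidates must share the same architecture and parameter names.
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged

# Hypothetical checkpoint files for two fine-tuned candidates of one base model.
sd_a = torch.load("candidate_a.pt", map_location="cpu")
sd_b = torch.load("candidate_b.pt", map_location="cpu")
torch.save(merge_state_dicts([sd_a, sd_b]), "merged.pt")

In practice, the candidates and blend weights are chosen against held-out evaluations rather than fixed in advance.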

Richard K Collins

About: Richard K Collins

The Internet Foundation: Internet policies, global issues, global open lossless data, global open collaboration

