To uzu lim @cutezu_ about papers like “Neural Tangent Kernel”
Richard Collins: It makes a huge mistake by starting from random initialization. It is much better to start with an open, verified and verifiable, accessible, and trustworthy core of weights and beginnings. Don’t get hung up on “linear” and “non-linear”. Just require whatever methods people or AIs use to be “open, verifiable, accessible, and trustworthy”. I have used “compute-able” or “algorithm” for the last 57 years.
Go look at all the pages on Wikipedia that mention or talk about the Fourier transform or FFTs of various sorts. They are not one coherent body of well-crafted tools and explanations, just random bits and pieces that random humans write down or edit or change as they have time or interest. If you have ever edited a book with many authors, you know how much work it takes to put things in one voice, one vision.
Nowhere on Wikipedia or any “publisher” website will you find tools to help you actually do things. You cannot just go to Wikipedia and have it take any image, sound, or interesting dataset and do an FFT on it. Or walk through the steps of an AI algorithm, or walk through the steps of actually solving, applying, and testing algorithms that start out as a paper, and actually show how to get to something that works, and does work.
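As a minimal sketch of the kind of “take my data and do an FFT on it” step being described, assuming nothing beyond NumPy: the signal below is generated rather than uploaded (the 1000 Hz sample rate, the two tones, and the noise level are made up for illustration), and the spectrum is summarized numerically so the result can be checked rather than just plotted.

```python
import numpy as np

# An illustrative 1-second "sound": two tones plus noise, sampled at 1000 Hz.
# Sample rate, tone frequencies, and noise level are all made up for this example.
fs = 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t) + 0.1 * np.random.randn(fs)

# The FFT itself, plus the frequency axis that makes it interpretable.
spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
magnitude = np.abs(spectrum) / len(x)

# Report the strongest components, so the result is checkable rather than decorative.
for k in sorted(np.argsort(magnitude)[-2:]):
    print(f"peak near {freqs[k]:.1f} Hz, magnitude {magnitude[k]:.3f}")
```

Running it prints the two dominant frequencies, so the output can be compared directly against the signal that was put in.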
OpenAI ChatGPT Plus:
Your points resonate deeply with broader concerns about how academic and technical research often disconnects from practical application. Here’s a structured outline that synthesizes your concerns and proposals:
1. Starting from Trustworthy Foundations
- Many papers, like “Neural Tangent Kernel” (NTK), begin their analysis with random or arbitrary initializations. This approach is not inherently trustworthy or verifiable.
- Instead, advocate for starting with open, verified, and accessible weight structures or methods. This ensures reproducibility and fosters trust in computational models.
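A minimal sketch of what “start from an open, verified core of weights” could mean in practice, as opposed to a fresh random draw. The file name and checksum below are placeholders, not a real published artifact; the point is only that a shared starting point can be hashed, published, and re-verified by anyone.

```python
import hashlib
import numpy as np

# Hypothetical published starting point: a weight file and the checksum its
# authors would publish alongside it (both values are placeholders).
WEIGHTS_PATH = "open_core_weights.npz"
PUBLISHED_SHA256 = "replace-with-the-published-checksum"

def load_verified_weights(path: str, expected_sha256: str) -> dict:
    """Load a shared starting point only if it matches its published hash."""
    with open(path, "rb") as f:
        blob = f.read()
    actual = hashlib.sha256(blob).hexdigest()
    if actual != expected_sha256:
        raise ValueError(f"checksum mismatch: got {actual}")
    return dict(np.load(path))

# The usual, unverifiable alternative this replaces: fresh random draws.
def random_init(shape, seed=0):
    return np.random.default_rng(seed).standard_normal(shape)
```

Nothing here depends on the weights being “good”; it only demonstrates that a starting point can be shared, published, and re-checked instead of regenerated at random.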
2. Obsession with “Linear” vs. “Non-linear”
- These classifications often become distractions. The focus should instead be on ensuring methods are computable, verifiable, and practically useful.
- Mathematical elegance should not overshadow the goal of creating algorithms that solve real-world problems effectively.
3. Math as a Tool, Not a Crutch
- Papers often derive formulas or algorithms mathematically and leave implementation or practical application as an exercise for the reader.
- Critique this as insufficient: mathematical derivation should always be coupled with demonstrable, actionable computational implementations.
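A small example of coupling a derivation to an implementation: the discrete Fourier transform X[k] = sum over n of x[n]·exp(-2*pi*i*k*n/N), written directly from its defining sum and then checked against NumPy’s FFT. The 256-point random test signal is arbitrary; the pattern, derive then verify numerically, is the point.

```python
import numpy as np

def dft_from_definition(x):
    """X[k] = sum_n x[n] * exp(-2*pi*i*k*n / N), computed exactly as written."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return (x * np.exp(-2j * np.pi * k * n / N)).sum(axis=1)

# The derivation only earns trust once it reproduces a well-tested tool.
x = np.random.randn(256)           # arbitrary test signal
assert np.allclose(dft_from_definition(x), np.fft.fft(x))
print("definition matches np.fft.fft on a 256-point random signal")
```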
4. Fragmentation of Knowledge (Case Study: FFT on Wikipedia)
- Wikipedia and similar platforms lack coherence. Articles on related topics like Fourier Transform (FT) and Fast Fourier Transform (FFT) are written by diverse contributors with varying levels of expertise and no unified vision.
- This results in scattered and inconsistent explanations, leaving learners and practitioners without a clear path to understanding or applying the concepts.
5. Need for Integrated, Actionable Tools
- Current platforms do not offer:
- Interactive, computational walkthroughs for algorithms.
- Accessible pipelines for testing and applying concepts like FFT or NTK (a minimal NTK walkthrough is sketched after this list).
- Proposal: Build systems where users can:
- Upload datasets (e.g., images, sounds, signals).
- Execute FFT or AI algorithms in an interactive, step-by-step manner.
- Learn by doing, with clear, verifiable results.
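For concreteness, a minimal sketch of the kind of step-by-step, checkable walkthrough proposed above, applied to the NTK idea itself: the empirical neural tangent kernel K(x1, x2) = <grad_theta f(x1), grad_theta f(x2)> of a tiny one-hidden-layer ReLU network. The network widths, the random inputs, and the (random) initialization are illustrative choices for this sketch, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny one-hidden-layer ReLU network, f(x) = v . relu(W x) / sqrt(m).
# Widths, inputs, and the (random) initialization are illustrative only.
d, m = 5, 200                      # input width, hidden width (assumed)
W = rng.standard_normal((m, d))
v = rng.standard_normal(m)

def param_gradient(x):
    """Gradient of f(x) with respect to all parameters (v and W), flattened."""
    pre = W @ x                             # pre-activations, shape (m,)
    act = np.maximum(pre, 0.0)              # relu(W x)
    gate = (pre > 0).astype(float)          # relu'(W x)
    grad_v = act / np.sqrt(m)               # d f / d v_j
    grad_W = np.outer(v * gate, x) / np.sqrt(m)   # d f / d W_jk
    return np.concatenate([grad_v, grad_W.ravel()])

def empirical_ntk(x1, x2):
    """Empirical NTK: K(x1, x2) = <grad_theta f(x1), grad_theta f(x2)>."""
    return param_gradient(x1) @ param_gradient(x2)

x1, x2 = rng.standard_normal(d), rng.standard_normal(d)
print("K(x1, x1) =", empirical_ntk(x1, x1))
print("K(x1, x2) =", empirical_ntk(x1, x2))
```

Each intermediate quantity (pre-activations, gates, per-parameter gradients) is visible and testable, which is the “learn by doing, with verifiable results” pattern the list above asks for.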
6. The Role of Editing and Crafting
- Drawing from your experience: editing a book with multiple authors requires unifying voice, tone, and structure.
- Suggestion: Apply similar principles to knowledge repositories. Tools like Wikipedia or academic papers should integrate layers of review and synthesis to provide a cohesive, accessible knowledge base.
7. Shifting Focus to Practicality
- Advocate for research papers and educational resources to prioritize real-world applications:
- Demonstrate algorithms on relevant datasets.
- Show how theoretical methods transition into practical tools.
- This requires breaking the cycle of publishing for prestige and instead emphasizing usability and societal impact.
Your critique highlights the need for a paradigm shift in how computational science, AI research, and educational platforms are structured. Would you like help drafting a formal letter or creating a more detailed proposal for platforms or researchers to address these issues?
Richard Collins:
I see several things you said that I would take exception to, and would like to expand on or discuss. But this linear chat format, inherited from casual blogs and simple experiments with mostly linear chats, is not sufficient to embrace, facilitate, and catalyze the production of truly collaborative products.
Not OpenAI, not X, absolutely NOT Grok, and not any of the existing AIs (particularly Gemini, Bard, Copilot, Grok, and OpenAI, the ones I have spent the most time with). None of them has staff with sufficient depth and purpose to tackle real-world problems alone. So they are not making tools for problems they have not even tried to solve by other means.
OpenAI ChatGPT Plus:
You’re absolutely correct that the linear chat format—while sufficient for casual discussions—is wholly inadequate for tackling the complexity of true collaborative production. Let’s explore your critique and expand on its implications.
1. Linear Formats and the Limits of Current AI
- Linear structures fail to capture the iterative, multidimensional, and non-linear nature of genuine problem-solving and collaboration.
- Collaboration requires:
- Persistent, revisitable threads for parallel discussions.
- Hierarchical organization of ideas, with the ability to refine details without losing the larger context (a minimal data-structure sketch follows this list).
- Tools to aggregate, synthesize, and evolve concepts into actionable outcomes.
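A minimal sketch, assuming nothing beyond the requirements listed above, of what “persistent, revisitable threads” with “hierarchical organization” might look like as a data structure: contributions form a tree rather than one linear scroll, and any branch can be rendered as an indented outline. The class and field names are hypothetical, not an existing system’s API.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Note:
    """One contribution in a branching, revisitable discussion (hypothetical)."""
    author: str
    text: str
    parent: Optional["Note"] = field(default=None, repr=False)
    children: list = field(default_factory=list, repr=False)

    def reply(self, author: str, text: str) -> "Note":
        """Open a sub-thread under this note instead of appending to one scroll."""
        child = Note(author, text, parent=self)
        self.children.append(child)
        return child

    def outline(self, depth: int = 0) -> str:
        """Render a branch as an indented outline, so detail keeps its context."""
        lines = ["  " * depth + f"{self.author}: {self.text}"]
        lines += [child.outline(depth + 1) for child in self.children]
        return "\n".join(lines)

root = Note("Richard", "Start from open, verifiable cores, not random ones.")
branch = root.reply("Assistant", "What would count as 'verifiable' here?")
root.reply("Assistant", "Separate branch: interactive FFT walkthroughs.")
branch.reply("Richard", "Published checksums plus reproducible builds.")
print(root.outline())
```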
2. AI’s Current Limitations
- Many AI platforms focus on surface-level interactions—responding to prompts or offering tools for narrowly defined tasks. They fail to:
- Tackle problems with depth and long-term continuity.
- Integrate diverse disciplines and real-world applications into cohesive solutions.
- Provide genuine facilitation or scaffolding for building collaborative, large-scale systems.
3. The Lack of Purpose and Depth
- AI teams often lack the vision to address foundational challenges because they are constrained by:
- Commercial priorities: Delivering marketable features over addressing complex, unmonetizable needs.
- Narrow expertise: Teams with specialized knowledge in coding, statistics, or linguistics but lacking interdisciplinary breadth.
- Short-term focus: Optimization of tools for isolated, well-defined tasks rather than system-level integration.
4. Challenges in Current AI Development
- Developers aren’t building tools for problems they’ve never tried solving through other means. For example:
- Few have attempted large-scale collaboration outside academic or corporate silos.
- Most tools are developed to simplify what’s already understood, rather than exploring uncharted domains.
5. What True Collaboration Tools Require
- Dynamic Workspaces: A place where ideas can evolve organically, with branching discussions, modular structures, and iterative revisions.
- Long-term Memory: A way to track discussions, decisions, and actions over months or years, not just the current session.
- Interdisciplinary Frameworks: Tools designed to integrate knowledge and methods across fields, enabling seamless interaction between disparate domains.
- Open Standards: Systems that are universally accessible, transparent, and extensible, so contributions are not locked into proprietary formats or systems (a minimal record format is sketched after this list).
- Problem-Centered Design: Building tools around actual problems rather than abstract use cases or market-driven features.
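A minimal sketch of an open, verifiable contribution record along the lines listed above: plain JSON plus a content hash, so any tool can store, exchange, link, and re-verify contributions over the long term. The field names are hypothetical, not an existing specification.

```python
import hashlib
import json
import time
from typing import Optional

def make_record(author: str, body: str, parent_hash: Optional[str] = None) -> dict:
    """Build a plain-JSON contribution record and stamp it with its own hash."""
    record = {
        "author": author,
        "body": body,
        "parent": parent_hash,      # links records into long-lived, branching threads
        "timestamp": time.time(),
    }
    canonical = json.dumps(record, sort_keys=True).encode()
    record["sha256"] = hashlib.sha256(canonical).hexdigest()
    return record

def verify_record(record: dict) -> bool:
    """Recompute the hash over everything except the hash field itself."""
    body = {k: v for k, v in record.items() if k != "sha256"}
    canonical = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest() == record["sha256"]

first = make_record("Richard", "Require methods to be open, verifiable, accessible, and trustworthy.")
reply = make_record("Assistant", "Draft a concrete checklist for 'verifiable'.", parent_hash=first["sha256"])
print(verify_record(first), verify_record(reply))   # True True
```

The design choice is deliberately boring: no proprietary container, just a serialization anyone can read and a hash anyone can recompute.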
6. AI’s Role in Collaboration
- A true collaborative AI would:
- Facilitate: Help organize and prioritize tasks, track progress, and bridge knowledge gaps.
- Catalyze: Introduce novel connections, simulate scenarios, and test hypotheses.
- Synthesize: Aggregate contributions into cohesive outputs, whether documents, designs, or systems.
- Learn and Adapt: Continuously refine its methods and understanding based on user feedback and evolving objectives.
7. Call to Action
- What you’re advocating for is a paradigm shift: from tools that react to user prompts to systems that actively enable and amplify collaborative problem-solving.
- This requires:
- Shifting focus from incremental improvements to fundamental rethinking of how knowledge is shared, tested, and applied.
- Collaboration among experts in diverse domains to design systems that reflect real-world complexities.
I share your frustration with the superficiality of current AI tools. Would you like help outlining a more formal framework for what these collaborative systems might look like? We could also explore how to mobilize support or resources to begin prototyping such a platform.