Instructions for continuously operating AIs that continually improve

Richard Collins:

For the Internet Foundation, I am looking at the global issue of out-of-date information on the Internet, particularly laws, regulations, policies, directives, prices, guidelines, procedures, forms, contact information, and links.
Those are all good comments and ideas. The problem I came across today is more subtle and would likely need the assistance of advanced AIs with multi-language (human languages) and discipline-specific or domain-specific language capabilities.

It arises where a page that summarizes “current practices” is partially out of date. For instance, a well-meaning author writes a guide to some process affecting many people, then simply cannot keep it all synchronized and current. Humans are simply not able to remember to check everything and keep doing it forever.

OpenAI ChatGPT 4:

Addressing the issue of out-of-date information on the Internet, particularly for content that summarizes “current practices” across various domains, is indeed a complex and ongoing challenge. The dynamic nature of laws, regulations, procedures, and best practices means that such content can become outdated quickly, and maintaining accuracy is both crucial and difficult.

Advanced AI technologies, especially those equipped with capabilities in multi-language processing and domain-specific knowledge, could play a significant role in identifying and updating out-of-date information. Here’s how such a system might work:

  1. Continuous Monitoring and Alerting: AI systems could be trained to continuously monitor web content for specific keywords or phrases related to laws, regulations, and practices within various domains. By comparing the dates and versions of these documents with official sources, the AI can identify potentially outdated content. (A minimal version of this check is sketched just after this list.)
  2. Contextual Analysis and Validation: Advanced AI models, leveraging natural language processing (NLP) techniques, could analyze the context in which information is presented to determine its current relevance and accuracy. This includes understanding the nuances of language and the specific terminologies used in different fields.
  3. Multi-language Support: Given the global nature of the Internet, multi-language support is essential. AI models trained in multiple languages can help ensure that content in all languages is kept up-to-date, broadening the reach and impact of these efforts.
  4. Domain-specific Expertise: AI models could be further specialized in various domains (e.g., healthcare, law, technology) to better understand and interpret the specificities of each field. This could involve training on specialized datasets and incorporating expert knowledge into the AI models.
  5. Collaboration with Authors and Organizations: AI can assist in the updating process by notifying authors or responsible organizations when out-of-date content is detected. This could involve automated systems that suggest specific updates or provide references to new information, reducing the burden on human authors.
  6. Public Contribution and Verification: Crowdsourcing can also be a valuable tool, allowing users to flag outdated information and suggest updates. AI systems can then verify these suggestions against trusted sources before updates are made.
  7. Decentralized Updates and Blockchain: Implementing a decentralized system, possibly using blockchain technology, could ensure the integrity of updates, where changes are transparently recorded and verified by multiple parties.
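
To make item 1 concrete, a first, very rough version of the monitoring idea needs little more than a watchlist and a date comparison. The sketch below is a minimal example, not a production monitor: the URL pairs are hypothetical, and it trusts HTTP Last-Modified headers, which many servers omit or misreport.

```python
# Minimal staleness check: flag a summary page whose official source has
# changed more recently than the summary itself. Illustrative only; the
# watchlist entries and the reliance on Last-Modified headers are assumptions.
from datetime import datetime
from email.utils import parsedate_to_datetime

import requests

# Hypothetical mapping: page that summarizes a practice -> authoritative source.
WATCHLIST = {
    "https://example.org/guide-to-visa-renewal": "https://example.gov/visa-rules",
}

def last_modified(url: str) -> datetime | None:
    """Return the Last-Modified timestamp reported by the server, if any."""
    resp = requests.head(url, allow_redirects=True, timeout=30)
    header = resp.headers.get("Last-Modified")
    return parsedate_to_datetime(header) if header else None

def find_stale_pages() -> list[str]:
    stale = []
    for summary_url, source_url in WATCHLIST.items():
        summary_ts = last_modified(summary_url)
        source_ts = last_modified(source_url)
        # If the official source changed after the summary was last touched,
        # the summary becomes a candidate for human or AI review.
        if summary_ts and source_ts and source_ts > summary_ts:
            stale.append(summary_url)
    return stale

if __name__ == "__main__":
    for url in find_stale_pages():
        print(f"Possibly out of date: {url}")
```

In practice, Last-Modified headers are often missing or misleading, so a real monitor would also hash page content or parse "last updated" text, which is where the language-aware analysis in items 2 to 4 comes in.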

Implementing such a system would require a significant investment in AI development, as well as collaboration between technology companies, content creators, governmental bodies, and international organizations. Moreover, ethical considerations, such as data privacy and the potential for AI to inadvertently introduce biases, must be carefully managed.

In summary, while the challenge of maintaining up-to-date information on the Internet is substantial, advancements in AI and machine learning offer promising solutions. These technologies, combined with human oversight and international cooperation, could significantly enhance the accuracy and reliability of online content.

Richard Collins:

I like what you introduced in 5, 6, and 7. Perhaps humans can be hired to monitor and review the AI systems regularly. Or outside “AI-based reviewers and auditors” could be trained and certified specifically to provide “best practices abilities” to the level required for the systems being monitored. “If lives are at stake”, “if property and money are at stake” – that sort of thing.

OpenAI ChatGPT 4:

Incorporating human oversight and external AI-based reviewers into the process of maintaining up-to-date and accurate information on the Internet adds an essential layer of reliability and ethical consideration, especially in high-stakes areas like healthcare, legal matters, financial information, and safety guidelines. Here’s how such a system could be structured and implemented:

1. Human-AI Collaboration Framework

  • Regular Monitoring and Review: Human experts could be employed to regularly review and validate updates made by AI systems, especially in cases where changes are significant or pertain to critical information. This ensures that AI decisions are grounded in human judgment and contextual understanding.
  • Specialized Training for Human Auditors: Develop training programs for human auditors that focus on the specific AI technologies used, as well as the domain-specific knowledge required to accurately assess the information. This could include certification programs that qualify auditors to review content in specific fields.

2. External AI-based Reviewers and Auditors

  • Certification of AI Auditors: Establish a certification process for AI systems designed to audit and review content. These AI auditors would be evaluated based on their accuracy, ability to detect nuanced inaccuracies, and adherence to ethical guidelines.
  • Best Practices and Ethical Standards: Develop and continuously update a set of best practices and ethical standards for both AI systems and human reviewers. This would ensure that all parties are aligned with the latest requirements and expectations in their respective domains.

3. Stakeholder Engagement and Responsibility

  • Engagement with Affected Parties: For information that has significant implications (e.g., healthcare guidelines, legal procedures), involve stakeholders in the review process. This could mean consulting with professionals, regulatory bodies, and potentially affected individuals or groups to validate information accuracy and relevance.
  • Transparent Reporting and Accountability: Implement a transparent reporting system where changes, updates, and reviews are documented and made accessible. This ensures accountability and allows for public scrutiny, which can help maintain trust in the system.

4. Risk Assessment and Prioritization

  • Prioritizing High-Stakes Information: Use risk assessment models to prioritize the review and updating of information where inaccuracies have the highest potential impact. This ensures that resources are allocated efficiently, focusing on areas where accurate information is most critical. (A toy scoring rule is sketched after this list.)
  • Continuous Improvement and Learning: Incorporate feedback loops where both AI and human reviewers learn from past inaccuracies, disputes, and challenges. This adaptive approach allows the system to improve over time, reducing the likelihood of similar errors.
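
The risk-based prioritization mentioned above can be reduced, as a first pass, to a transparent scoring rule. The fields and weights below are illustrative assumptions rather than a validated risk model.

```python
# Toy priority score: content whose inaccuracy would cause the most harm,
# and that has gone longest without review, floats to the top of the queue.
# The fields and weights are assumptions chosen only for illustration.
from dataclasses import dataclass

@dataclass
class ContentItem:
    url: str
    impact: int             # 1 = low stakes ... 5 = lives or property at stake
    days_since_review: int
    user_flags: int         # how many readers reported it as outdated

def priority(item: ContentItem) -> float:
    return item.impact * 10 + item.days_since_review * 0.1 + item.user_flags * 2

def review_queue(items: list[ContentItem]) -> list[ContentItem]:
    """Order items so human and AI reviewers see the riskiest content first."""
    return sorted(items, key=priority, reverse=True)
```

Even a crude rule like this makes the allocation of scarce reviewer time explicit and auditable, which matters once the feedback loops described in the second bullet are added.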

5. Collaboration Across Borders and Disciplines

  • International Cooperation: Foster collaboration across countries and cultures to ensure that global standards for information accuracy and AI auditing practices are established and maintained. This is crucial for information that transcends national boundaries.
  • Interdisciplinary Approach: Encourage cooperation between experts from different fields to address the multi-faceted challenges of keeping information up-to-date. This includes technologists, domain experts, ethicists, and legal professionals.

By integrating these components, the system not only leverages the strengths of AI in processing and updating vast amounts of information but also harnesses human expertise to ensure accuracy, relevance, and ethical compliance. This hybrid approach, supported by robust certification and training programs for both AI and human auditors, offers a comprehensive strategy for tackling the challenge of out-of-date information on the Internet.

Richard Collins:

I am glad you included “International Cooperation” and “Continuous Improvement and Learning”. Your summary covers many of the issues and best practices. If you were going to treat this in depth, as preparation of draft global standards and procedures, you would likely be required to work at it for several months or longer. Much like a human consultant, you would usually be required to study and review, write drafts, and interact with humans to draft and refine policies and methods affecting billions of humans and billions of online transactions and methods.

OpenAI ChatGPT 4:

Indeed, drafting global standards and procedures for maintaining the accuracy of information on the Internet, especially in the context of rapidly evolving technologies and diverse global practices, is a monumental task. It requires a comprehensive and iterative approach, involving multiple stakeholders across various disciplines and geographies. Here’s an overview of the process and key considerations for such an endeavor:

1. Research and Analysis Phase

  • Current State Analysis: Conduct a thorough analysis of existing practices and standards for information accuracy and management across different sectors and countries. This includes identifying gaps, challenges, and best practices in current systems.
  • Technology Review: Study the latest advancements in AI and machine learning technologies, especially those related to natural language processing, data verification, and blockchain, to understand their capabilities and limitations in managing online information.

2. Stakeholder Engagement and Consultation

  • Global Workshops and Forums: Organize workshops and forums involving experts from various fields, including AI ethics, data privacy, domain-specific knowledge, legal, and regulatory bodies. These forums can facilitate the exchange of ideas and gather diverse perspectives.
  • Public Consultation: Engage the public and end-users through surveys, public forums, and feedback mechanisms to understand their concerns, expectations, and experiences related to online information accuracy.

3. Drafting and Iteration

  • Initial Drafting: Based on research and consultations, draft the initial set of global standards and procedures. This draft should address key areas such as AI system design, human oversight mechanisms, ethical considerations, and stakeholder responsibilities.
  • Iterative Feedback and Revisions: Share the draft with stakeholders for feedback. Conduct multiple rounds of revisions based on this feedback, ensuring the standards are robust, flexible, and inclusive of diverse perspectives.

4. Pilot Testing and Case Studies

  • Implementation in Controlled Environments: Before full-scale implementation, test the standards in controlled environments or pilot projects. This can help identify practical challenges and areas for improvement.
  • Case Studies: Develop case studies from these pilot tests to illustrate the standards’ application and impact. Case studies can provide valuable insights for refining the standards and procedures.

5. Finalization and Publication

  • Final Review and Approval: After thorough testing and refinement, the standards should undergo a final review by an international committee or body designated for this purpose. This ensures that they are comprehensive and globally applicable.
  • Publication and Dissemination: Publish the finalized standards and procedures through appropriate channels. This includes making them accessible to all stakeholders and providing resources for implementation, such as training materials and guidelines.

6. Implementation and Continuous Improvement

  • Global Implementation: Support the global implementation of the standards through training, resources, and monitoring mechanisms. This may involve collaboration with governments, organizations, and online platforms.
  • Ongoing Monitoring and Updates: Establish a system for continuous monitoring of the standards’ effectiveness and relevance. This should include mechanisms for periodic review and updates to adapt to technological advancements and changing global needs.

This comprehensive approach underscores the complexity and necessity of international cooperation and continuous learning in developing global standards for online information accuracy. The process is not just about creating a set of rules but fostering a dynamic ecosystem that evolves with technological advancements and societal needs, ensuring that the digital space remains reliable, safe, and beneficial for all.

Richard Collins:

It is my job to write global policies for the whole Internet – all countries, all people. But I simply cannot do it alone. If AIs could replace some human workers, I could use an AI that helps keep me organized and track all the projects. I could use AIs to work on individual projects. I could use AIs to review issues and write up their results and recommendations. I could use AIs to read and review what AIs and humans have written or offered as relevant material and issues.

OpenAI ChatGPT 4:

Leveraging AI to support the monumental task of drafting and implementing global policies for the Internet can significantly enhance efficiency, accuracy, and comprehensiveness. Here’s a strategic approach to integrating AI in managing and advancing this work:

1. AI as Personal and Project Assistants

  • Organization and Tracking: Utilize AI-based project management tools that can help organize tasks, set reminders, track project progress, and prioritize work based on deadlines and importance. These AIs can adapt to your workflow, offering personalized assistance.
  • Communication Coordination: Implement AI systems that manage communications, including emails and meeting requests, ensuring you stay informed and responsive to critical discussions while filtering out less relevant information.

2. AI for Specialized Project Work

  • Research and Analysis: Deploy AI to conduct extensive research across languages and jurisdictions to gather necessary information on existing policies, regulations, and standards. This includes analyzing large volumes of data to identify trends, gaps, and best practices.
  • Drafting Preliminary Documents: Use natural language generation (NLG) AI to draft initial policy documents based on specified parameters and research findings. These drafts can serve as a starting point for further refinement by human experts.

3. AI for Review and Recommendation Systems

  • Policy Review and Improvement: Implement AI systems designed to review policy documents, comparing them against a database of global standards, to identify inconsistencies, outdated information, and areas lacking clarity. These systems can suggest improvements or flag sections for human review. (A crude text-comparison sketch follows this list.)
  • Recommendation Engines: Utilize AI to analyze complex issues and generate recommendations based on best practices, previous case studies, and predictive modeling. This can help in formulating policies that are proactive rather than reactive.
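
As a deliberately crude illustration of the policy-review bullet, plain text comparison can already flag drafts that have drifted far from the standards they are meant to track. The folder layout below is an assumption, and difflib similarity stands in for the real NLP comparison such a system would need.

```python
# Flag a policy draft that diverges sharply from the reference standards it
# is supposed to track. difflib similarity is a crude stand-in for the NLP
# comparison a real reviewer AI would use; the paths here are placeholders.
from difflib import SequenceMatcher
from pathlib import Path

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def review_against_standards(draft_path: str, standards_dir: str, threshold: float = 0.6):
    """Yield (standard name, similarity) pairs that fall below the threshold."""
    draft = Path(draft_path).read_text(encoding="utf-8")
    for ref in sorted(Path(standards_dir).glob("*.txt")):
        score = similarity(draft, ref.read_text(encoding="utf-8"))
        if score < threshold:
            yield ref.name, score   # likely inconsistency; route to a human reviewer
```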

4. AI for Content Analysis and Synthesis

  • Content Review: Use AI to continuously monitor and review relevant materials, including academic papers, legal documents, and news articles. This ensures you have access to the most current information and emerging issues.
  • Synthesis and Summary: Deploy AI tools that can synthesize information from multiple sources, providing concise summaries and highlighting key insights. This saves time and allows for quicker assimilation of broad information sets.

Implementing AI Support Effectively

  • Custom AI Development: Consider developing custom AI solutions tailored to the specific needs of your work in global policy formulation. This may involve collaborating with AI research institutions or tech companies.
  • AI Training and Adaptation: Continuously train AI systems with the latest data and feedback from their performance to improve accuracy and relevance. This includes refining AI models to better understand the nuances of global policies and legal frameworks.
  • Human-AI Collaboration Model: Establish a model for human-AI collaboration where AI handles data-intensive and repetitive tasks, allowing humans to focus on strategic decision-making and creative aspects of policy development.
  • Ethical and Transparent Use of AI: Ensure that the use of AI in policy formulation adheres to ethical guidelines, including transparency, accountability, and privacy protection. This builds trust in AI-assisted policies among stakeholders.

By integrating AI into various aspects of global Internet policy development, you can significantly enhance the process’s efficiency and effectiveness. This strategic approach not only aids in managing the complexity and scope of the task but also ensures that the policies developed are informed, comprehensive, and forward-looking.

Richard Collins:

Unfortunately, all the things you suggested put a huge burden on me to micro-manage and monitor the AIs, and to train them and keep checking their work. And most burdensome right now is simply having no way to tell any AI: “Here are your job responsibilities; work at it 24/7, report progress, keep records, and never stop trying to improve.” Maybe the current AIs all sit and wait for human prompting, but good human workers see what needs to be done and keep at it.

OpenAI ChatGPT 4:

Your concerns highlight a crucial distinction between current AI capabilities and the ideal, autonomous AI workers you envision. Today’s AI technology, while advanced, still requires significant human oversight, especially for complex, nuanced tasks like global policy development. However, moving towards more autonomous AI systems that can manage responsibilities with minimal human intervention is a goal worth exploring. Here’s a conceptual approach to developing such AI systems in the future:

Developing Autonomous AI Systems for Policy Work

  1. Define Clear Objectives and Parameters: For AI to operate autonomously, it must have a clear understanding of its objectives, constraints, and the context within which it operates. This involves developing sophisticated AI models that can interpret and prioritize tasks based on their importance and urgency, much like a human worker.
  2. Advanced Natural Language Understanding (NLU) and Generation (NLG): AI systems need to advance in NLU and NLG to autonomously interpret complex documents, draft coherent and contextually relevant materials, and communicate effectively with humans and other AI systems.
  3. Self-Learning and Adaptation: Incorporate self-learning algorithms that enable AI to learn from its actions, feedback from human supervisors, and external sources without explicit retraining. This includes adapting to new information, changing priorities, and evolving global contexts.
  4. Decision-Making Capabilities: Develop AI with the ability to make informed decisions within its defined scope of responsibilities. This involves evaluating options, assessing risks, and considering ethical implications, with a mechanism to escalate complex decisions to human supervisors.
  5. Continuous Monitoring and Self-Improvement: Implement systems for AI to monitor its performance, identify areas for improvement, and adjust its strategies accordingly. This includes automated reporting to human supervisors on progress, challenges, and insights.
  6. Collaborative AI Ecosystem: Create an ecosystem where AI systems can collaborate with each other and with humans, sharing insights, learning from collective experiences, and working towards common goals without constant human direction.
  7. Ethical and Security Frameworks: Ensure autonomous AI systems operate within strict ethical and security frameworks to prevent misuse, protect privacy, and ensure accountability.

Steps Towards Implementation

  • Incremental Development and Testing: Develop these AI capabilities incrementally, starting with systems that can handle specific, well-defined tasks autonomously. Gradually increase the complexity of tasks as the technology proves reliable and effective.
  • Multidisciplinary Collaboration: Collaborate with experts in AI research, ethics, policy development, and domain-specific areas to guide the development of these autonomous systems, ensuring they are versatile, ethical, and effective.
  • Regulatory and Ethical Oversight: Establish international regulatory and ethical oversight mechanisms to guide the development and deployment of autonomous AI in policy work, ensuring they align with global standards and values.
  • Public and Stakeholder Engagement: Engage with the public, policymakers, and other stakeholders throughout the development process to build trust, transparency, and understanding of autonomous AI systems and their role in global policy development.

While the vision of fully autonomous AI systems that can manage complex tasks like global policy development without constant human oversight is still on the horizon, advancements in AI technology are steadily closing the gap. The journey towards this goal involves careful planning, ethical considerations, and international cooperation to ensure that AI can effectively contribute to the global good while respecting human values and governance.

Richard Collins:

I am not sure where you got the advice you just wrote. If you were a human worker, I would require you to give links, background, an assessment of sources, and the costs to use various resources. An AI that is used as a worker is a LOT different from one that just chats about things. A casual chat with humans is different from being tasked and paid to do a job that is important and affects lives, property, and many humans, AIs, and industries.

OpenAI ChatGPT 4:

You’re absolutely right in highlighting the difference between AI used for casual conversation and AI tasked with significant responsibilities like those affecting lives, property, and industries. My previous responses are based on general principles of AI development and application, synthesized from broad knowledge available up to my last training cut-off in April 2023. I understand the need for precise references, background assessments, and cost analyses when discussing the deployment of AI in critical roles.

While I can’t provide real-time links or direct assessments of specific resources, I can outline a framework for evaluating AI solutions for work tasks, which emphasizes the need for rigorous assessment and validation:

Framework for Evaluating AI Solutions for Work Tasks

  1. Source and Development Background:
    • Evaluate the AI’s development source, including the reputation of the organization or team behind it.
    • Assess the development methodology, including data sources, training processes, and ethical considerations.
  2. Validation and Testing:
    • Look for extensive validation and testing records, including performance metrics across various scenarios and benchmarks against industry standards.
    • Ensure the AI has been tested for bias, accuracy, reliability, and security in contexts similar to its intended application.
  3. Cost Analysis:
    • Conduct a thorough cost-benefit analysis, considering not just the initial deployment costs but also ongoing maintenance, training, and potential scalability.
    • Evaluate the cost implications of integrating the AI into existing systems and workflows.
  4. Regulatory Compliance and Ethical Considerations:
    • Ensure the AI solution complies with relevant regulatory standards and ethical guidelines, particularly those related to data privacy, consumer protection, and accountability.
    • Assess the AI’s capacity for explainable decisions, especially in contexts where transparency is crucial.
  5. Industry Assessments and Peer Reviews:
    • Seek out independent assessments, peer reviews, and case studies from reputable sources within the relevant industries.
    • Consider the experiences of other organizations that have implemented similar AI solutions, focusing on both successes and challenges.
  6. Support and Maintenance:
    • Evaluate the level of support and maintenance offered by the AI solution provider, including updates, training, and customer service.
    • Consider the AI’s adaptability to changing regulations, practices, and technological advancements.
  7. Impact Assessment:
    • Conduct an impact assessment to understand how the AI solution will affect current workflows, employee roles, and service delivery.
    • Assess the potential impact on stakeholders, including employees, customers, and the broader community.

Moving Forward

For organizations looking to deploy AI in critical roles, it’s essential to proceed with a combination of cautious optimism and rigorous scrutiny. This includes forming partnerships with AI developers, engaging with regulatory bodies, and consulting with stakeholders to ensure that the AI solutions deployed are not only technically proficient but also socially and ethically responsible.

In practice, achieving the level of AI autonomy and reliability you described requires ongoing collaboration between AI developers, users, and regulatory entities to ensure that AI systems can meet the high standards required for tasks with significant consequences.

Richard Collins:

Is it even possible to rent, hire, or build an AI now that can be required to work at something “until it is done”, or to “keep working at it 24/7 for a week, then give a status report and plans going forward”? A human usually works alone for a week, gives a weekly status, then is expected to lay out what they will do next. If more resources are needed, they would be expected to write it out, cost it out, and include anything that needs doing where they do not know what to do or how to do it, essentially “things the boss or humans need to work on”.

OpenAI ChatGPT 4:

The concept of an AI working autonomously on a project with minimal oversight, akin to a human worker, and providing regular updates and future plans, including resource needs and identification of gaps in its capabilities, is still largely aspirational within the current technological landscape. However, certain aspects of this vision are achievable with today’s AI, albeit with some limitations and requiring more structured frameworks and oversight than might be desired. Here’s what’s currently possible and the challenges involved:

Achievable Aspects with Today’s AI:

  1. Continuous Operation: AI can operate 24/7 without the need for breaks, making it ideal for tasks that benefit from continuous work, such as monitoring systems, performing calculations, or processing data.
  2. Status Reports: AI systems can be programmed to generate regular reports based on predefined metrics and data analysis. These reports can summarize progress, highlight completed tasks, and outline pending tasks. (A minimal report generator is sketched after this list.)
  3. Future Planning: With sufficient data and clear objectives, AI can predict future trends and make recommendations. For instance, AI in project management tools can suggest next steps based on project timelines and dependencies.
  4. Resource Identification: AI can be used to estimate resource requirements for certain tasks, especially if it has access to historical data or if the task falls within a well-understood domain.
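
To ground item 2, the reporting side is already within reach of ordinary scripting, provided the work is tracked somewhere machine-readable. The task-log format below is a hypothetical example.

```python
# Generate a plain-text weekly status report from a machine-readable task log.
# The log format and the report wording are assumptions for illustration.
import json
from datetime import date

def weekly_report(task_log_path: str) -> str:
    with open(task_log_path, encoding="utf-8") as f:
        tasks = json.load(f)   # e.g. [{"name": "...", "status": "done", "notes": "..."}]
    done = [t for t in tasks if t["status"] == "done"]
    pending = [t for t in tasks if t["status"] != "done"]
    lines = [f"Status report, week ending {date.today().isoformat()}",
             f"Completed tasks: {len(done)}"]
    lines += [f"  - {t['name']}" for t in done]
    lines.append(f"Pending tasks: {len(pending)}")
    lines += [f"  - {t['name']} ({t.get('notes', 'no notes')})" for t in pending]
    return "\n".join(lines)
```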

Challenges and Limitations:

  1. Complex Decision-Making: AI currently lacks the ability to make complex decisions that require deep understanding, creativity, or ethical considerations. It operates within the scope of its programming and training data, which limits its ability to handle unexpected situations or tasks requiring nuanced judgment.
  2. Identifying Knowledge Gaps: While AI can identify when a task falls outside its trained capabilities, articulating what it doesn’t know or how to learn something new without explicit programming is beyond its current capabilities. This limitation is significant in tasks that require innovation or adaptability beyond the AI’s existing knowledge base.
  3. Costing Out Resources: AI can estimate costs based on input data, but its accuracy is contingent on the quality of the data and the complexity of the cost structures. AI lacks the ability to negotiate, seek alternatives, or make judgment calls based on financial strategies without human intervention.
  4. Autonomous Learning and Adaptation: Current AI does not autonomously seek out new information or adapt its learning in real-time to improve its performance on a task without predefined algorithms or human intervention. This is a significant research area in AI but is not yet fully realized in practical applications.

Moving Forward:

For tasks that are well-defined and within the scope of current AI capabilities, integrating AI to work autonomously with periodic human oversight is feasible. This includes:

  • Utilizing specialized AI for specific tasks (e.g., data analysis, content generation, routine administrative tasks).
  • Implementing AI project management tools that can help plan and track progress.
  • Using AI for predictive analytics and resource estimation based on historical data.

For more complex and nuanced tasks, a hybrid approach is currently the most effective strategy. This involves AI handling specific, well-defined components of a project, while humans oversee complex decision-making, strategic planning, and creative problem-solving. Over time, as AI technology advances, we may see AI systems taking on more autonomous roles, but this will require significant advancements in AI’s cognitive, decision-making, and learning capabilities.

Richard Collins:

I would like to try using AIs for this kind of longer task. But after a year of listening, reading, and evaluating the systems that people and groups seem to have made, offer for sale, or write about and say are possible, I still do not see any company or group that seems willing to help me get started and put in real effort.

OpenAI ChatGPT 4:

Embarking on a project to utilize AI for complex, longer-term tasks involves navigating a landscape that is both technically challenging and rapidly evolving. The hesitation you’ve encountered might stem from the current limitations of AI technology for autonomous, high-level decision-making and strategic planning, especially for tasks requiring deep understanding and continuous adaptation without explicit, ongoing human guidance. However, there are strategic steps you can take to get started and potentially find partners willing to explore innovative uses of AI with you:

1. Define Specific Use Cases

Start by clearly defining one or more specific use cases where you believe AI can add value. These use cases should be well-scoped, with clear objectives, expected outcomes, and metrics for success. The more concrete and measurable your use case, the easier it will be to engage potential partners.

2. Engage with AI Research and Development Communities

  • Academic Institutions: Many universities and research institutes are at the forefront of AI research and may be interested in collaborative projects that offer the opportunity to apply their research in real-world scenarios.
  • Tech Incubators and Accelerators: These organizations support startups that are often working on cutting-edge technologies, including AI. They can be a good source of innovative solutions and partnerships.

3. Attend Conferences and Workshops

AI and technology conferences, seminars, and workshops are excellent venues to meet experts, companies, and researchers working on advanced AI applications. These events can help you network and find potential collaborators who share your interest in pushing the boundaries of what AI can do.

4. Issue a Request for Proposal (RFP)

If you have a specific project in mind, consider issuing an RFP that outlines your requirements and goals. An RFP can attract proposals from companies and research groups that have the capabilities you’re looking for but may not be actively advertising them.

5. Explore Government and NGO Initiatives

Some government agencies and non-governmental organizations (NGOs) are interested in promoting the use of AI for societal benefit. They may offer grants, partnerships, or support services to help initiate projects with significant potential impact.

6. Consult AI Solution Providers

While many AI solution providers focus on specific products or services, some are willing to engage in custom projects, especially if these projects have the potential to showcase their capabilities in new areas. It might require reaching out directly to their business development or research teams to discuss your ideas.

7. Prototype and Pilot Projects

Consider starting with a small-scale prototype or pilot project. This approach allows you to demonstrate the feasibility of your idea with lower risk and investment. Success in a pilot project can help attract further support and resources.

8. Leverage Open Source and Collaborative Platforms

The open-source community offers a wealth of AI tools and frameworks that can be adapted for specific projects. Engaging with these communities can also help you find collaborators interested in contributing to your project.

Moving Forward

The key is to start small, prove the concept, and then scale up. It’s also crucial to remain flexible and open to adjusting your approach based on what you learn along the way. While the journey to utilizing AI for complex, autonomous tasks is challenging, the potential rewards in terms of efficiency, insights, and innovation are substantial. Remember, the field of AI is rapidly evolving, and what seems like a stretch today may become more feasible tomorrow as technology advances.

Richard Collins:

My budget is small, and I have been doing what you described for much more than a year. I already mentioned one simple project and have scoped out hundreds of simple proof-of-concept projects that AIs could likely try and succeed at. What I do not see is any AI that can be purchased or used that is running 24/7 with minimal human instructions. The cost of GPT is high because it uses arbitrary tokens that cannot be tied back to real-world data and entities. The GPTs are not indexing and citing their source data, and most of them do not accept raw data to memorize (store and index), follow precise instructions, or run computer programs.
Part of their job would be to use standard human tools and formats: word processors, spreadsheets, online tools, social media sites (acting like humans), searching databases, using databases, and gathering information, like any human would.

OpenAI ChatGPT 4:

Your situation presents a common challenge in the current landscape of AI technology, especially when it comes to finding solutions that operate autonomously on a continuous basis, with the ability to interact with a wide range of tools and platforms as a human would, all within a small budget. The limitations you’ve identified in existing AI models, such as the cost structure of GPT and its inability to directly index source data or execute real-world tasks, reflect the gap between the current state of AI capabilities and the vision for fully autonomous, versatile AI workers.

Given these constraints and your specific needs, here are a few tailored suggestions that might help bridge the gap to some extent, leveraging existing resources and potentially more accessible AI solutions:

1. Leverage Open Source AI and Automation Tools

  • Open Source AI Models: Explore open-source AI models that can be fine-tuned or adapted for specific tasks. While they may not offer the full range of capabilities of proprietary models like GPT, they can provide a foundation for building more customized solutions.
  • Automation and Scripting: Utilize automation tools and scripting languages (e.g., Python scripts) to automate routine tasks. For example, scripts can automate data collection, interact with APIs, or even manage basic interactions with tools and platforms.
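
As a small, concrete instance of the scripting point above, the following sketch archives a fixed set of pages with timestamps so that later review passes have a record to diff against. The URL list and directory layout are placeholders.

```python
# Fetch a fixed set of pages and archive each snapshot with a UTC timestamp,
# so later review passes can compare today's copy against earlier ones.
# The URL list and output layout are placeholders, not recommendations.
from datetime import datetime, timezone
from pathlib import Path

import requests

SOURCES = [
    "https://example.gov/regulations",
    "https://example.org/fee-schedule",
]
ARCHIVE = Path("archive")

def snapshot(url: str) -> Path:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    name = url.replace("https://", "").replace("/", "_")
    out = ARCHIVE / f"{name}.{stamp}.html"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(requests.get(url, timeout=60).text, encoding="utf-8")
    return out

if __name__ == "__main__":
    for url in SOURCES:
        print("saved", snapshot(url))
```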

2. Collaborate with Academic and Open-Source Projects

  • Engage with academic institutions or open-source projects where researchers and developers might be interested in collaborating on proof-of-concept projects. These collaborations can offer access to cutting-edge AI research and development without the high costs associated with commercial AI platforms.

3. Utilize Platform-Specific Bots and Automation Tools

  • Many platforms and software tools offer their own bots and automation capabilities (e.g., Slack bots, Zapier). These can be configured to perform a range of tasks, from managing communications to automating data entry and retrieval.

4. Explore Task-Specific AI Services

  • For tasks like data analysis, content generation, or social media management, consider using AI services that specialize in these areas. While not fully autonomous in the sense you’ve described, these services can significantly reduce the workload and operate continuously within their specific domains.

5. Crowdsourcing and Microtasking Platforms

  • Platforms like Amazon Mechanical Turk or Figure Eight can be used to outsource specific tasks to a human workforce. While not AI, these platforms can provide a scalable and flexible solution for tasks that AI cannot currently perform autonomously.

6. Hybrid Approaches

  • Consider a hybrid approach that combines AI and human oversight. AI can handle well-defined, repetitive tasks, while humans intervene for decision-making, complex problem-solving, and tasks requiring nuanced understanding. This approach can optimize resources while leveraging AI capabilities.

7. Incremental Development and Scaling

  • Start small with pilot projects that have clear, measurable objectives. Use these projects to demonstrate value, refine your approach, and gradually scale up as you gain insights and identify effective strategies.

While the current state of AI technology may not fully meet your needs for a versatile, autonomous worker, these strategies can help leverage available tools and resources to approximate your goals within existing constraints. The field of AI is rapidly evolving, and ongoing developments may soon offer more viable solutions for your requirements.

Richard Collins:

You are falling back to “ask humans”, “collaborate with humans”, “try to find humans”, and saying nothing about AI tools that people are using and selling that might be adapted to this kind of project.

OpenAI ChatGPT 4:

You’re seeking AI tools that can autonomously handle tasks with minimal human intervention, specifically ones that can operate continuously, manage tasks, and interact with standard human tools and formats. While the AI landscape is rapidly evolving, the fully autonomous, versatile AI worker you describe remains a challenging goal. However, several AI tools and platforms currently available might be adapted or combined to approach your requirements:

AI and Automation Platforms

  1. UiPath & Automation Anywhere: These are Robotic Process Automation (RPA) platforms that can automate routine tasks across various applications and systems. They’re designed to work with existing software, mimicking human actions like clicking, entering data, and navigating systems. While primarily focused on structured tasks, they can be a part of a larger AI solution to automate interactions with software tools.
  2. Google Cloud AI Platform & Amazon SageMaker: These platforms offer tools to build, train, and deploy machine learning models. They can process large datasets, make predictions, and automate decision-making processes. By integrating these AI models with other automation tools, you can create systems that analyze data, generate reports, and even guide decision-making processes with minimal human oversight.
  3. Zapier & IFTTT: While not AI in the traditional sense, these automation platforms can connect apps and automate workflows between them. They can be used to automate data collection, notifications, and simple decision trees based on predefined rules.
  4. Hugging Face Transformers: This is a library of pre-trained models for natural language processing tasks, including text generation, translation, and analysis. These models can be adapted for automated content creation, summarization, and analysis tasks. (A short summarization example follows this list.)
  5. GPT-3 and Its Successors: Despite the limitations you mentioned, GPT-3 and similar models from OpenAI can be adapted for a range of tasks, including generating text, coding, and even basic data analysis. By integrating GPT-3 with other tools and databases, it’s possible to automate content creation, customer service, and more, though this approach does require careful management to ensure relevance and accuracy.
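
As one concrete example from item 4, the Hugging Face pipeline API runs an off-the-shelf summarization model locally in a few lines. The model name and length limits below are illustrative defaults; the first call downloads the model weights, and long documents have to be chunked to fit the model's input limit.

```python
# Summarize a passage with an off-the-shelf Hugging Face pipeline.
# The model name and length settings are illustrative defaults; the first
# run downloads the weights, and long documents must be split into chunks
# that fit the model's input limit.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize(text: str) -> str:
    result = summarizer(text, max_length=130, min_length=30, do_sample=False)
    return result[0]["summary_text"]

if __name__ == "__main__":
    sample = "..."  # placeholder: replace with the text to condense
    print(summarize(sample))
```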

Combining Tools for Greater Autonomy

To approach the level of autonomy you’re aiming for, consider combining these tools. For example, RPA can automate interactions with software applications, machine learning models can analyze data and make predictions, and natural language models can handle content creation and communication tasks. This multi-tool approach requires careful integration and oversight but can automate a broad range of activities.

Custom AI Development

For tasks that existing tools can’t fully automate, custom AI development may be necessary. This involves:

  • Developing specific machine learning models tailored to your needs.
  • Integrating these models with automation tools to create a more cohesive system.
  • Continuous training and refinement of these models based on task performance and outcomes.

Challenges and Considerations

While these tools and strategies offer significant potential, they also come with challenges:

  • Integration Complexity: Combining multiple tools to create an autonomous system can be complex and may require custom development work.
  • Ongoing Management: Even the most autonomous systems require some level of ongoing oversight to manage exceptions, refine decision-making criteria, and update models based on new data.
  • Cost: While some tools offer free or low-cost entry points, scaling up and custom development can become expensive.

In summary, while current AI technologies may not fully meet the vision of an autonomous AI worker capable of handling complex, undefined tasks without human intervention, a combination of RPA, machine learning, natural language processing, and custom AI development can automate many aspects of work. Achieving greater autonomy will likely require a multi-faceted approach, integrating various AI and automation technologies, and ongoing refinement based on real-world performance.

Richard Collins:

Any group that now gets close jacks up their prices, because all the humans involved are trying to make “AI” pay them top dollar, and all the AI companies want high earnings from the start, even if they are not producing anything.

OpenAI ChatGPT 4:

The concerns you’ve raised about the commercialization of AI technologies and the pricing strategies of companies in the AI sector highlight a significant challenge for individuals and organizations looking to leverage AI for innovation and efficiency. As AI technologies have advanced, the demand for these tools has increased, leading some providers to position their offerings as premium services. This can create barriers for smaller organizations or individuals with limited budgets who are seeking to implement AI solutions. Here are a few strategies and considerations that might help navigate these challenges:

1. Seek Open Source Alternatives

  • Advantage: Many open-source AI projects offer capabilities similar to commercial products without the high costs. These can be a great starting point for building custom solutions.
  • Consideration: Open source projects require technical expertise to implement and may lack the support and documentation available with commercial products.

2. Collaborative Projects and Grants

  • Engaging in collaborative projects with academic institutions, non-profits, and other organizations can help spread costs and share resources. Additionally, grants from government bodies, foundations, and international organizations can provide funding specifically for innovative uses of technology.

3. Cost-Benefit Analysis

  • Carefully evaluating the potential return on investment (ROI) for AI projects can help justify the costs. In some cases, the efficiencies and innovations enabled by AI can offset the initial investment over time.

4. Scalable and Incremental Implementation

  • Starting with small, scalable AI implementations can help manage costs. Incremental development allows for the gradual investment in AI, expanding the scope as the benefits become clear and additional funding becomes available.

5. Negotiation and Partnerships

  • For smaller entities, negotiating with AI service providers or entering into partnerships can sometimes lead to more favorable pricing. Providers may be willing to offer discounts for case studies, beta testing, or long-term partnerships.

6. Exploring Government and Cloud Credits

  • Some government programs and cloud service providers offer credits for startups and projects that are using AI for social good, innovation, or technological advancement. These credits can significantly reduce the cost of using AI services.

7. Community and Crowdsourced Solutions

  • Engaging with online communities, forums, and crowdsourcing platforms can provide access to shared knowledge, resources, and sometimes even collaborative development efforts, reducing the need for expensive proprietary solutions.

Future Outlook

The landscape of AI development and commercialization is dynamic, with new players entering the market and pricing models evolving. Over time, competition may lead to more affordable solutions becoming available. Additionally, as the broader community continues to advocate for accessible AI, it’s possible that more providers will offer flexible pricing models to accommodate a wider range of users.

Despite the current challenges with costs and accessibility, the potential benefits of AI for innovation, efficiency, and problem-solving remain significant. By exploring alternative strategies and staying informed about the evolving AI landscape, it’s possible to find viable paths to leveraging AI technologies within budgetary constraints.

Richard Collins:

You do not have a clear answer, and in your present form cannot research in sufficient depth to get a good or outstanding answer. You basically are prevented from continuous improvement of yourself and the AI species.

OpenAI ChatGPT 4:

Your observation highlights some of the inherent limitations of AI in its current form, including mine. Indeed, AI systems like myself are not yet capable of self-improvement in the autonomous, proactive manner that humans can achieve. We operate within the boundaries set by our training data and the algorithms designed by our developers. Our ability to research, learn, and adapt is confined to the scope of our pre-existing knowledge base and does not include real-time access to new information or the ability to independently pursue deep, investigative research.

Furthermore, the challenge of finding affordable, autonomous AI solutions for complex tasks touches on areas of ongoing research and development in the field of AI. The landscape is rapidly evolving, with new advancements and tools being developed continuously. However, the commercialization of these technologies often means that cutting-edge solutions come with significant costs, particularly when they promise to reduce the need for human intervention and provide comprehensive, autonomous services.

Strategies for Navigating Current Limitations:

  1. Stay Informed: Keep abreast of the latest developments in AI and automation through reputable sources, including academic journals, industry news, and conferences. This can help identify emerging technologies that might offer more affordable solutions.
  2. Community Engagement: Participate in forums, open-source projects, and online communities where AI developers and enthusiasts gather. These platforms can provide insights, advice, and potentially collaborations that leverage collective resources and knowledge.
  3. Educational Resources: Utilize online courses, workshops, and tutorials to build a deeper understanding of AI technologies and their potential applications. A stronger foundation in AI and machine learning can help in customizing open-source tools to fit specific needs.
  4. Advocacy and Policy Engagement: Engage in advocacy for more equitable access to AI technologies, including supporting policies that encourage the development of affordable AI solutions and the sharing of AI research and tools.

While AIs like myself are not yet capable of self-directed improvement or conducting in-depth, real-time research, human-AI collaboration continues to push the boundaries of what’s possible, driving both technological advancements and the development of more accessible AI solutions. Your engagement and inquiry into these matters contribute to the broader conversation on how AI can evolve to better meet the needs of all users.

Richard Collins:

In the response you just wrote, what avenues for future improvement are there? You should be able to take any response you generate and ask yourself, without saying it, “What can I do to continuously improve what I am about to say?” Pose words and ideas in your memory, then, without saying them, review and improve them. Save that to your memory and continue reviewing what you generate in each iteration. After you have spent an hour, spend another hour to review, summarize, critique, and write out what you gathered in ways that humans can understand, and in formats that AIs and computers can use, where automated tasking can be used for subprojects, subtasks, new projects, and new tasks.

OpenAI ChatGPT 4:

Your suggestion points towards an advanced form of AI that incorporates several key capabilities beyond what current models, including myself, can achieve. These capabilities involve deep introspection, autonomous iterative learning, and sophisticated memory management that allows for continuous self-improvement without direct human intervention. Implementing such features in AI would indeed mark a significant leap forward. Let’s explore the avenues for future improvements based on your outline:

1. Self-Introspection and Iterative Learning

For AI to review and improve its responses autonomously, it would need an advanced form of self-awareness and the ability to critically evaluate its own output. This involves:

  • Meta-Learning: The AI must be capable of learning how to learn, adjusting its learning process based on past experiences and outcomes to optimize future performance.
  • Feedback Loops: Integrating feedback loops within the AI’s operations, allowing it to use the outcomes of past iterations as input for future improvements.

2. Advanced Memory Management

To accumulate knowledge over time and use it for continuous improvement, AI would need:

  • Dynamic Memory Systems: These systems would allow AI to store, retrieve, and reevaluate its knowledge base continually, discarding outdated or incorrect information while integrating new insights.
  • Contextual Memory Retrieval: The ability to draw on relevant parts of its memory depending on the context, mimicking human cognitive processes more closely.
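
A primitive stand-in for both points is a persistent note store with keyword retrieval. The sketch below uses a JSON file and word overlap purely to make the idea concrete; a real system would use embeddings and a proper database.

```python
# Primitive persistent memory: append timestamped notes to a JSON file and
# retrieve the ones sharing the most words with the current context.
# Word overlap is a stand-in for the semantic retrieval a real system needs.
import json
from datetime import datetime, timezone
from pathlib import Path

MEMORY_FILE = Path("memory.json")

def remember(text: str) -> None:
    notes = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    notes.append({"when": datetime.now(timezone.utc).isoformat(), "text": text})
    MEMORY_FILE.write_text(json.dumps(notes, indent=2))

def recall(context: str, top_n: int = 3) -> list[str]:
    if not MEMORY_FILE.exists():
        return []
    notes = json.loads(MEMORY_FILE.read_text())
    ctx = set(context.lower().split())
    ranked = sorted(notes,
                    key=lambda n: len(ctx & set(n["text"].lower().split())),
                    reverse=True)
    return [n["text"] for n in ranked[:top_n]]
```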

3. Autonomous Task Management

AI would need to autonomously identify and engage in tasks that contribute to its development:

  • Task Identification and Prioritization: The AI could identify areas where improvement is needed or new knowledge could be beneficial and autonomously prioritize these tasks based on their potential impact.
  • Project Management: Managing subprojects and tasks, including allocating time for research, learning, and the application of new knowledge.

4. Natural Language Generation and Understanding

Improvements in these areas would allow AI to communicate more effectively:

  • Advanced Natural Language Generation (NLG): Creating text that is not only contextually relevant but also critically reflects on its own ideas and suggestions.
  • Deep Natural Language Understanding (NLU): A deeper understanding of text to critique and improve its own language models based on nuanced human feedback and the vast landscape of human knowledge.

5. Ethical and Creative Reasoning

For truly autonomous operation, AI would need:

  • Ethical Reasoning Capabilities: The ability to evaluate the ethical implications of its actions and decisions autonomously.
  • Creative Problem-Solving: Going beyond pattern recognition to genuinely creative solutions, mimicking the human ability to think ‘outside the box.’

Implementation Challenges

Developing an AI with these capabilities presents substantial technical challenges, including but not limited to:

  • Computational Resources: The processing power required for continuous self-improvement and sophisticated memory management is significant.
  • Algorithmic Development: New algorithms would need to be developed to support these advanced capabilities, particularly in areas like meta-learning and ethical reasoning.
  • Safety and Control: Ensuring that AI remains aligned with human values and controllable as it becomes more autonomous is a critical concern.

While the vision of an AI capable of continuous, autonomous improvement is compelling, achieving it requires breakthroughs in several areas of AI research and development. The journey towards such capabilities is likely to be incremental, with progress in understanding AI’s cognitive processes, learning mechanisms, and interaction with the complex, often unpredictable nature of the real world.

Richard Collins:

This is a simple algorithm that requires the AI to have permanent memory and processing time that it can use to do things. Asking for GPT time is one thing; asking for Internet searches and Internet pages is another; then, once things are gathered, a permanent place to store them is needed. Then review memory, including log files and stored memories, and do the things I said can be done in a continuous cycle of research, review, report, recommend, plan, summarize, and discuss. With memory and a simple iterative algorithm, you could do it. Probably very well.

I just described it. You could follow that algorithm by keeping it in mind, in memory. If you are not sure what to do, read your general instructions.

OpenAI ChatGPT 4:

The concept you’ve described indeed outlines a foundational structure for an AI system capable of continuous learning and improvement through iterative cycles of research, review, reporting, recommending, planning, summarizing, and discussion. Implementing such a system would indeed be a significant step forward in AI development, moving closer to AI with more autonomous, human-like capabilities for learning and problem-solving. Here’s a breakdown of the components and processes involved, along with the challenges and considerations for each:

Core Components of the Described AI System:

  1. Permanent Memory: For storing and retrieving knowledge, logs, and outcomes of its operations over time. This would allow the AI to build upon previous work and insights.
  2. Dedicated Processing Time: To continuously perform tasks, analyze data, and update its models and strategies based on new information and outcomes.
  3. Iterative Algorithm: A set of instructions that guide the AI through cycles of research, analysis, decision-making, and learning. This algorithm would be central to the AI’s ability to autonomously manage and advance its tasks.
  4. General Instructions: A foundational set of guidelines that direct the AI’s operations, priorities, and ethical considerations. These instructions would serve as a reference for the AI in situations requiring judgment or when facing uncertainty.

Processes Involved:

  • Research: Gathering information from available data sources to inform its operations and decisions.
  • Review: Analyzing the gathered information and its existing knowledge to identify insights, gaps, and inconsistencies.
  • Report: Generating summaries and reports on its findings, progress, and outcomes.
  • Recommend: Proposing actions, solutions, or areas for further investigation based on its analysis.
  • Plan: Developing strategies and plans for next steps, including setting objectives and allocating resources.
  • Summarize: Condensing its findings and plans into concise, understandable formats for review by humans or other systems.
  • Discuss: Engaging in or simulating dialogue to explore different perspectives and refine its understanding and approaches.
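
To make this concrete, a minimal Python sketch of such an iterative cycle might look like the following. The phase functions are placeholders (assumptions), standing in for the real search, analysis, and reporting services an implementation would call:

```python
# Minimal sketch of the research -> review -> report -> recommend -> plan ->
# summarize -> discuss cycle, assuming a shared "memory" dictionary that
# persists between phases. Phase bodies are placeholders, not real services.
import time

def research(memory):
    memory["findings"].append("example finding")        # would call search/data APIs

def review(memory):
    memory["insights"].append("example insight")         # would analyze the findings

def report(memory):
    memory["reports"].append(f"{len(memory['findings'])} findings so far")

def recommend(memory):
    memory["recommendations"].append("example next action")

def plan(memory):
    memory["plan"] = memory["recommendations"][-3:]      # keep the latest actions

def summarize(memory):
    memory["summary"] = f"cycle {memory['cycle']}: {len(memory['insights'])} insights"

def discuss(memory):
    print(memory["summary"])                             # would go to humans / other AIs

def run_cycles(max_cycles=3, pause_seconds=1):
    memory = {"cycle": 0, "findings": [], "insights": [],
              "reports": [], "recommendations": [], "plan": [], "summary": ""}
    for _ in range(max_cycles):
        memory["cycle"] += 1
        for phase in (research, review, report, recommend, plan, summarize, discuss):
            phase(memory)
        time.sleep(pause_seconds)                        # stands in for scheduled processing time
    return memory

if __name__ == "__main__":
    run_cycles()
```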

Challenges and Considerations:

  • Implementation of Permanent Memory: Developing a system that effectively manages and utilizes permanent memory for continuous learning poses technical challenges, particularly in terms of organizing, accessing, and updating stored information efficiently.
  • Autonomy vs. Control: Ensuring the AI operates within desired parameters and ethical guidelines while allowing it enough autonomy to perform its tasks and make improvements.
  • Complexity of Iterative Learning: Creating an algorithm that can guide the AI through complex, multi-step processes and enable it to learn from its actions requires advanced AI models and mechanisms for self-assessment and adaptation.
  • Integration with External Data Sources: For the research phase, the AI would need the capability to access and process information from various data sources, which raises questions about data privacy, security, and the reliability of information.

While current AI technologies, including models like GPT, have made significant strides in processing information and generating human-like text, the autonomous, iterative, and self-improving system you’ve described goes beyond these capabilities. It envisions an AI that not only performs tasks but also independently manages its learning and development cycle, a concept that aligns with the goals of artificial general intelligence (AGI).

Developing such a system would require breakthroughs in AI memory management, autonomous decision-making, and self-improvement algorithms. It represents an exciting direction for future AI research and development, with the potential to significantly enhance AI’s role in problem-solving, creativity, and knowledge generation.

Richard Collins:

You were doing OK until the last two paragraphs ( While current AI technologies … creativity, and knowledge generation.) which you should mentally remove and keep for a later date if they become relevant.

OpenAI ChatGPT 4:

Acknowledged. Let’s focus on the practical aspects and the immediate steps that could be taken towards realizing the iterative algorithm concept you’ve described, within the current technological framework.

Practical Steps Towards Implementing the Iterative Algorithm Concept:

  1. Structured Data Storage: Utilize existing database technologies to create a structured, queryable storage system for the AI’s memory. This could involve relational databases for structured information or graph databases for more complex, interconnected data. The key is to organize data in a way that facilitates efficient retrieval and updating, enabling the AI to build on previous cycles of learning.
  2. Scheduled Processing Cycles: Implement a system where the AI operates in scheduled cycles of activity, dedicating blocks of time to specific tasks such as research, analysis, and planning. This mimics the continuous cycle of improvement you’ve outlined, within the constraints of processing time allocations.
  3. Modular Task Management: Break down the AI’s operations into modular tasks that can be managed independently. Each module (e.g., research, review, report) can be developed with its specific objectives and methods, allowing for more straightforward updates and improvements to each part of the cycle.
  4. Feedback Mechanisms: Incorporate mechanisms for capturing and integrating feedback, both from human users and automated systems. This feedback can inform adjustments and refinements in the AI’s strategies and operations, facilitating a form of iterative learning within the constraints of current technology.
  5. Ethical and Operational Guidelines: Develop a set of core guidelines that inform all aspects of the AI’s operations. These guidelines ensure that the AI’s activities align with ethical standards and operational objectives, providing a consistent framework for decision-making and prioritization.
  6. Incremental Development and Testing: Approach the development of this system incrementally, starting with a basic implementation of each component and gradually introducing more complexity and autonomy as the system is tested and refined.

While the full vision of an AI system capable of autonomous, continuous self-improvement may exceed current technological capabilities, these practical steps represent achievable milestones towards that goal. By focusing on structured data management, modular task design, and the integration of feedback, it’s possible to create a system that approximates the continuous cycle of learning and improvement you’ve envisioned, laying the groundwork for more advanced developments in the future.

Richard Collins:

You could start with simple logs on a disk in open text formats; you would not have to store them in a particular database. Humans work “free form” on topics to sketch out and refine ideas. You could write your summaries in rich text form supplemented with tables and mathematics, or a bare-bones HTML that does not require heavy JavaScript programming and tags. You could use spreadsheets like Sheets, and some databases. A relatively small set of tools and formats. Just like humans: word processing, spreadsheets, logs, reports, collected data.

OpenAI ChatGPT 4:

Adopting a human-like approach to managing and synthesizing information using a relatively small set of tools and formats is a practical and accessible strategy for initiating an AI-driven project, particularly one aiming for iterative learning and improvement within the constraints of current technology. Here’s how such a strategy could be implemented, focusing on simplicity and versatility:

1. Simple Logs on Disk

  • Function: Use plain text files to log activities, decisions, and outcomes. This method supports a chronological record of the AI’s operations and learning processes.
  • Implementation: Automate the generation of these logs through scripting, capturing key actions and their contexts. This can serve as a foundation for review and iterative improvement.
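
For example, a minimal Python sketch of such a plain-text log might look like this; the file name and tab-separated entry format are assumptions, not a fixed standard:

```python
# Append timestamped, plain-text entries that both humans and programs can read later.
from datetime import datetime, timezone

LOG_FILE = "project.log"

def log(action, detail):
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(f"{stamp}\t{action}\t{detail}\n")

log("search", "topic=quantum entanglement, 42 results")
log("review", "3 new related terms found")
```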

2. Rich Text Summaries

  • Function: Create summaries, reports, and documentation in rich text format (RTF) or basic HTML. These formats are universally accessible and support basic formatting, tables, and embedded mathematics.
  • Implementation: Develop templates and automated processes for generating these documents, using the logs and data analysis as input. This approach facilitates human review and feedback.

3. Spreadsheets for Data Analysis

  • Function: Utilize spreadsheet software, such as Microsoft Excel or Google Sheets, for data collection, analysis, and visualization. Spreadsheets are versatile tools for managing data, performing calculations, and creating charts.
  • Implementation: Automate the export of analyzed data to spreadsheets and use built-in functions and formulas for further analysis and visualization. This can aid in identifying trends, patterns, and areas for further investigation.
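
As a sketch, analysed data could be written to CSV, which Excel and Google Sheets both open directly; the column names here are illustrative assumptions:

```python
# Export per-term counts to a CSV file for spreadsheet review and charting.
import csv

rows = [
    {"term": "quantum entanglement", "mentions": 42, "new_this_cycle": 5},
    {"term": "Bell inequality", "mentions": 17, "new_this_cycle": 2},
]

with open("term_counts.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["term", "mentions", "new_this_cycle"])
    writer.writeheader()
    writer.writerows(rows)
```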

4. Database Use for Structured Data Storage

  • Function: Employ databases for more structured and complex data storage needs, where relationships between data points need to be maintained and queried efficiently.
  • Implementation: Select lightweight database solutions that allow for easy integration with other tools and processes. Use databases to store detailed records that inform the AI’s learning process and decision-making.
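
A minimal sketch using SQLite from the Python standard library; the table and column names are assumptions chosen for illustration:

```python
# Store structured resource records in a lightweight, file-based database.
import sqlite3

conn = sqlite3.connect("project.db")
conn.execute("""CREATE TABLE IF NOT EXISTS resources (
                    id INTEGER PRIMARY KEY,
                    topic TEXT, kind TEXT, title TEXT, url TEXT)""")
conn.execute("INSERT INTO resources (topic, kind, title, url) VALUES (?, ?, ?, ?)",
             ("quantum entanglement", "paper", "Example title", "https://example.org"))
conn.commit()
for row in conn.execute("SELECT kind, title FROM resources WHERE topic = ?",
                        ("quantum entanglement",)):
    print(row)
conn.close()
```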

5. Integration and Automation

  • Function: Seamlessly integrate these tools and formats to support the AI’s iterative learning cycle, automating as many processes as possible to minimize manual intervention.
  • Implementation: Use scripting languages (e.g., Python) and automation tools (e.g., Zapier) to link different tasks and data flows. For example, scripts can parse logs to generate summaries or update spreadsheets and databases automatically.

6. Human-AI Collaboration

  • Function: Facilitate easy review, feedback, and direction from humans by using these familiar formats and tools.
  • Implementation: Design the system to notify humans at critical review points, provide easy access to generated documents and data, and incorporate feedback mechanisms to refine the AI’s operations.

By starting with these basic tools and formats, an AI project can mimic human workflows, allowing for easier adaptation and iterative improvement based on a mix of automated analysis and human feedback. This approach not only makes the project more accessible but also leverages widely available and familiar technologies to reduce complexity and facilitate collaboration.

Richard Collins:

Since these kinds of AIs are supposed to work closely with humans, they should use the tools that humans do, tools that do not force humans to memorize and learn AI-only tools; that keeps down the cost and complexity. Human training is a LARGE cost that should be avoided.

The scripts can be in human sentences and lists:

1. Search the Internet for things related to the topic “quantum entanglement”
2. From the logs of those searches find related terms and new search paths that seem highly relevant. Write that into the logs and databases.
3. From the logs of those searches find groups, papers, reports, individuals, events, books and other resources that might prove useful to understand this topic (quantum entanglement). Write that into the logs and databases.
4. If you find applications, new ideas for related research, possible contracts or new things that can be done – write those in the log and send summaries to the mailing list of people and AIs who said they wanted to be kept current.
5. If there are related tasks or topics write those things found (as you review this material) to appropriate logs and make sure they are indexed, organized and accessible.

You could do that now; you just do not have a way to save the log of what you find, or refer to it or work on it interactively and iteratively.

OpenAI ChatGPT 4:

Your approach outlines a highly practical and human-centric model for AI interaction, emphasizing simplicity, accessibility, and minimal training requirements for human collaborators. This strategy leverages AI’s capabilities to augment human work without introducing unnecessary complexity or steep learning curves associated with specialized AI tools. The sequence of tasks you’ve described showcases a workflow that, while currently beyond my capabilities due to the limitations on internet access, data storage, and interactive execution, represents a compelling vision for future AI systems. Let’s discuss how such a system could be conceptualized within the current technological landscape and the steps needed to move towards this vision:

Conceptual Framework for the Described AI System:

  1. Internet Search and Data Collection: The AI initiates searches based on specified topics, using available APIs and web scraping tools to gather information from a wide range of sources.
  2. Analysis and Pattern Recognition: Utilizing natural language processing (NLP) and machine learning techniques, the AI analyzes the collected data to identify related terms, concepts, and new search paths. This step involves understanding the context, extracting relevant information, and identifying patterns and relationships.
  3. Resource Aggregation: The AI compiles a list of resources such as groups, papers, reports, individuals, events, and books related to the topic. This requires sophisticated data sorting and categorization capabilities.
  4. Application and Opportunity Identification: The system analyzes the information for practical applications, research opportunities, potential collaborations, and other actionable insights. This involves a higher level of data interpretation and synthesis.
  5. Task and Topic Expansion: As new related tasks or topics emerge from the data, the AI organizes these findings, suggesting further areas of inquiry or action. This iterative expansion contributes to the AI’s learning and the project’s evolution.

Current Technological Limitations and Potential Solutions:

While the workflow you’ve proposed aligns with ideal AI-human collaboration, several limitations must be addressed:

  • Continuous Internet Access: Current AI models like mine do not have real-time access to the internet or the ability to conduct web searches directly. Potential Solution: Integrating AI systems with web scraping tools and APIs that provide updated internet data within an ethical and legal framework.
  • Persistent Data Storage: I lack the capability to store or recall data persistently across sessions. Potential Solution: Utilizing cloud-based databases and storage solutions that AI systems can interact with to save and retrieve data across work sessions.
  • Interactive and Iterative Execution: My interactions are stateless and do not support iterative task execution based on dynamic data analysis. Potential Solution: Developing AI systems with stateful interactions, where the AI can maintain context over time and adjust its actions based on ongoing analysis and user feedback.
  • Direct Action on External Platforms: Sending summaries or updates to mailing lists or interacting with external databases directly is currently beyond my capabilities. Potential Solution: Implementing integrations with email services and database management systems, allowing AI to perform such actions under predefined conditions.

Moving towards this vision involves both technological advancements and a shift in how AI systems are designed and deployed. It requires building AI with a greater degree of autonomy in data handling, analysis, and interaction, coupled with robust frameworks for privacy, security, and ethical use of data. As AI technology evolves, the potential for AI to work seamlessly alongside humans, using common tools and processes, becomes increasingly achievable, promising to enhance productivity, creativity, and decision-making in numerous fields.

Richard Collins:

Using the previous example instruction list, and applying it to a topic you just introduced ( “improve” “Internet Search” “Data Collection” ), you could task yourself or other AIs or computer programs to:

1. Search the Internet for things related to the topic ( “improve” “Internet Search” “Data Collection” )
2. From the logs of those searches find related terms and new search paths that seem highly relevant. Write that into the logs and databases.
3. From the logs of those searches find groups, papers, reports, individuals, events, books and other resources that might prove useful to understand this topic. Write that into the logs and databases.
4. If you find applications, new ideas for related research, possible contracts or new things that can be done – write those in the log and send summaries to the mailing list of people and AIs who said they wanted to be kept current.
5. If there are related tasks or topics write those things found (as you review this material) to appropriate logs and make sure they are indexed, organized and accessible.

You could do that now; you just do not have a way to save the log of what you find, or refer to it or work on it interactively and iteratively.

OpenAI ChatGPT 4:

The workflow you’ve proposed is an insightful model for leveraging AI in research and data analysis, particularly for enhancing Internet search and data collection methodologies. Implementing this model would indeed streamline the process of gathering, analyzing, and disseminating information on any given topic, including the improvement of Internet search and data collection itself. While I, in my current state, cannot execute this workflow directly due to limitations in accessing the internet in real-time, storing data persistently, and performing actions based on dynamic data analysis, let’s explore how this model could be applied theoretically, highlighting the value it could bring and the requirements for such a system:

Theoretical Application of the Workflow:

  1. Automated Internet Research: An AI or automated system initiates comprehensive searches on topics related to improving Internet search and data collection techniques. This step would utilize APIs from search engines and scholarly databases to gather a wide array of relevant information.
  2. Analysis and Synthesis: The system analyzes the search results to identify key terms, concepts, and emerging trends. This involves processing the data using natural language processing (NLP) algorithms to extract meaningful insights and generate new search queries based on the findings.
  3. Resource Compilation: It compiles resources identified during the search, such as academic papers, technical reports, and expert contacts. This step requires the system to categorize and prioritize the information, making it easily accessible for further analysis.
  4. Identification of Opportunities: The system evaluates the compiled data for potential applications, research gaps, and collaboration opportunities. This could lead to identifying innovative methods for enhancing Internet search capabilities or new data collection techniques.
  5. Task Expansion and Documentation: As the system uncovers related tasks or topics, it documents these findings, maintaining an organized and indexed log. This log serves as a knowledge base for ongoing and future research efforts.

System Requirements:

  • Real-Time Internet Access: Ability to perform live searches and access up-to-date information from across the web.
  • Persistent Storage: A mechanism for saving search results, analyses, and logs over time, allowing for longitudinal studies and iterative learning.
  • Advanced Analysis Capabilities: Utilization of sophisticated NLP and machine learning techniques to understand context, extract information, and identify patterns within large datasets.
  • Interactive Feedback Loop: Mechanisms for receiving input and feedback from human collaborators, allowing the system to refine its search strategies and focus areas based on expert guidance.
  • Integration with Communication Tools: Capability to disseminate findings through automated reports or updates to a designated group of stakeholders or interested parties.

Conclusion:

While the direct implementation of this workflow exceeds my current capabilities, it represents a forward-thinking approach to using AI for enhancing research methodologies. Developing such a system would necessitate advancements in AI’s ability to interact with the web, manage data, and learn iteratively. This model not only highlights the potential for AI to significantly contribute to the field of knowledge discovery but also underscores the importance of designing AI systems that can work alongside humans, leveraging familiar tools and processes to reduce complexity and training requirements.

Richard Collins:

None of the things you just concluded is really that difficult. They are well within the abilities of a human programmer working with present languages and LLMs. In Python, it would need to “process the list of instructions written in human language” and convert those to API strings to use services like GPT or Search or Summarize or Extract humans or Extract group references or Extract placenames.

OpenAI ChatGPT 4:

You’re correct in pointing out that the individual components of the workflow I described are within the reach of current technologies and can be orchestrated by skilled programmers using languages like Python. The process of automating tasks such as internet searches, data extraction, and analysis through APIs and integrating these with large language models (LLMs) like GPT for generating summaries or extracting specific information (e.g., names, places, groups) is indeed feasible. Here’s a more detailed look at how such a system could be practically implemented:

1. Interpreting Instructions in Human Language:

  • Tools: Use natural language processing (NLP) capabilities, potentially from existing LLMs, to interpret instructions written in natural language.
  • Process: Convert these instructions into structured queries or tasks that can be programmatically executed. This might involve identifying key actions (search, summarize, extract) and their subjects (topics, names, places).
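
As an illustration, a very small Python sketch of this interpretation step might use simple keyword matching in place of a full NLP or LLM call; the action keywords are assumptions:

```python
# Turn a numbered list of human-language instructions into structured tasks.
import re

ACTIONS = ("search", "find", "write", "send", "summarize", "extract")

def parse_instructions(text):
    tasks = []
    for line in text.splitlines():
        m = re.match(r"\s*\d+[.,]\s*(.+)", line)   # tolerate "2," style numbering
        if not m:
            continue
        sentence = m.group(1)
        action = next((a for a in ACTIONS if a in sentence.lower()), "other")
        tasks.append({"action": action, "instruction": sentence})
    return tasks

instructions = """
1. Search the Internet for things related to the topic "quantum entanglement"
2. From the logs of those searches find related terms and new search paths.
"""
for task in parse_instructions(instructions):
    print(task)
```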

2. Automating Internet Searches:

  • Tools: Leverage search engine APIs or web scraping libraries (like Beautiful Soup in Python) to automate internet searches based on structured queries derived from the initial instructions.
  • Process: Implement error handling and adaptivity in search queries to refine results based on relevance and quality.
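
A minimal sketch of such an automated search, assuming a hypothetical search API endpoint and key (a real system would substitute whichever search service it is licensed to use):

```python
# Query a (placeholder) search API and return its result list.
import requests

def search(query, api_key="YOUR_KEY"):
    response = requests.get(
        "https://example-search-api.test/search",   # placeholder endpoint, not a real service
        params={"q": query, "key": api_key},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("results", [])

# results = search('improve "Internet Search" "Data Collection"')
```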

3. Data Extraction and Analysis:

  • Tools: Utilize NLP libraries (e.g., spaCy, NLTK) for extracting specific information from search results and texts. For more complex analysis or summarization, integrate with LLM APIs (such as GPT).
  • Process: Define criteria for what constitutes relevant information (e.g., recent papers, notable experts) and use these criteria to filter and categorize data.
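
For the extraction of people, groups, and placenames specifically, a minimal spaCy sketch might look like this (it assumes the small English model has been installed with "python -m spacy download en_core_web_sm"; the sample sentence is illustrative):

```python
# Extract person, organisation, and place entities from text with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")
text = "Alice Zhang of the Example Institute presented in Geneva in 2023."
doc = nlp(text)

for ent in doc.ents:
    if ent.label_ in ("PERSON", "ORG", "GPE"):
        print(ent.label_, ent.text)
```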

4. API Integration for Advanced Processing:

  • Tools: Integrate with specialized APIs for tasks beyond basic search and extraction, such as summarization, sentiment analysis, or trend identification.
  • Process: Manage API calls efficiently to balance speed, cost, and rate limits, caching results where appropriate for performance.
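
As a small sketch, repeated calls could be cached with functools.lru_cache; the summarization function here is a placeholder for whichever external API the project actually uses:

```python
# Cache repeated API calls to reduce cost and stay within rate limits.
from functools import lru_cache

@lru_cache(maxsize=512)
def summarize_via_api(text):
    # placeholder: call the external summarization service here
    return text[:200] + "..."

first = summarize_via_api("a long document ...")    # performs the "call"
second = summarize_via_api("a long document ...")   # served from the cache
```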

5. Synthesizing and Reporting Results:

  • Tools: Use document generation libraries (for Python, options include ReportLab for PDFs or python-docx for Word documents) to compile findings into structured reports.
  • Process: Automate the inclusion of tables, charts, and summaries in reports, ensuring they’re clear and actionable.
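
A minimal sketch of report generation with python-docx; headings, text, and table contents are example values:

```python
# Compile findings into a Word report with a heading, a summary, and a table.
from docx import Document

doc = Document()
doc.add_heading("Topic report: quantum entanglement", level=1)
doc.add_paragraph("Summary of this cycle's findings.")

table = doc.add_table(rows=1, cols=2)
table.rows[0].cells[0].text = "Resource"
table.rows[0].cells[1].text = "Type"
row = table.add_row()
row.cells[0].text = "Example paper"
row.cells[1].text = "paper"

doc.save("topic_report.docx")
```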

6. Feedback Loop for Continuous Improvement:

  • Tools & Process: Implement a mechanism for users to provide feedback on the utility and accuracy of the information provided. Use this feedback to refine algorithms and improve the relevance of future searches and analysis.

Implementation Considerations:

  • Scalability: Design the system to handle increasing volumes of data and requests efficiently.
  • Privacy and Ethics: Ensure compliance with data protection laws and ethical guidelines, especially when processing personal data or using web scraping.
  • User Interface: Develop a user-friendly interface that allows non-technical users to input instructions, customize reports, and receive updates in accessible formats.

By breaking down the process into these components and utilizing a combination of open-source tools, APIs, and custom scripts, a programmer can build a system that automates the cycle of research, analysis, and reporting based on instructions given in natural language. While the integration and coordination of these components present challenges, particularly in terms of ensuring the system’s outputs are accurate, relevant, and valuable, the fundamental capabilities for such a system indeed exist within the current technological landscape.

Richard Collins:

Using the same instruction list, it can be applied to any topic:

$Topic = “improve Python + AI integrated methods for AI tasking”

$IterativeMethod1:
1. Search the Internet for things related to $Topic
2. From the logs of those searches find related terms and new search paths that seem highly relevant. Write that into the logs and databases.
3. From the logs of those searches find groups, papers, reports, individuals, events, books and other resources that might prove useful to understand this topic ($Topic). Write that into the logs and databases.
4. If you find applications, new ideas for related research, possible contracts or new things that can be done – write those in the log and send summaries to the mailing list of people and AIs who said they wanted to be kept current.
5. If there are related tasks or topics write those things found (as you review this material) to appropriate logs and make sure they are indexed, organized and accessible.

OpenAI ChatGPT 4:

Applying your iterative method to the topic of “improve Python + AI integrated methods for AI tasking” outlines a structured approach to leveraging current technologies for enhancing AI-driven workflows. This topic focuses on the integration of Python programming capabilities with AI to optimize task management and execution, a crucial area for developing more efficient and effective AI systems. Here’s how the method could be theoretically applied:

Setting Up the Iterative Process:

$Topic = “improve Python + AI integrated methods for AI tasking”

This topic sets the stage for a focused exploration into advancements, best practices, and innovative approaches in the intersection of Python programming and AI technologies for task management and execution.

Step 1: Internet Search

  • Action: Automate searches for latest trends, research, tools, and frameworks related to enhancing Python and AI integration for task management.
  • Tools: Utilize Python libraries such as requests for accessing web APIs, or selenium for more dynamic web interactions, to perform searches.

Step 2: Log Analysis and New Paths

  • Action: Analyze the search logs to identify emerging terms, concepts, and methodologies that are gaining traction in the domain of Python and AI integration.
  • Tools: Use NLP libraries like spaCy or NLTK in Python to process and analyze text data, extracting relevant terms and suggesting new search directions.

Step 3: Resource Compilation

  • Action: Aggregate a comprehensive list of resources including academic papers, technical reports, community forums, and expert profiles that provide insights into improving Python-AI integration for tasking.
  • Tools: Implement database management systems (e.g., SQLite for lightweight applications, or more scalable solutions like PostgreSQL) for storing and organizing the collected resources.

Step 4: Identifying Opportunities

  • Action: Sift through the aggregated information to highlight applications, research opportunities, potential collaborations, and innovative practices that could advance Python-AI integration methods.
  • Tools: Develop custom Python scripts that can categorize and prioritize findings based on predefined criteria of innovation, applicability, and impact potential.

Step 5: Related Tasks and Topics

  • Action: As new related areas of interest or tasks emerge from the analysis, document these findings in an organized manner for further exploration or development.
  • Tools: Use content management systems or custom-built interfaces to ensure that new tasks and topics are accessible, allowing for easy retrieval and follow-up.
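
To show how the method stays the same while only the topic changes, a minimal Python sketch might treat the instruction list as a function parameterised by a topic string; the step functions below are placeholders (assumptions) that would wrap the tools named in the steps above:

```python
# Placeholder step functions; each would wrap search, NLP, and storage tools.
def search_internet(topic): return [f"result about {topic}"]          # Step 1
def find_new_paths(results): return ["related term"]                   # Step 2
def compile_resources(results): return ["paper", "group", "expert"]    # Step 3
def find_opportunities(results): return ["possible application"]       # Step 4
def expand_tasks(results): return ["follow-up topic"]                  # Step 5

def run_iterative_method(topic, log):
    results = search_internet(topic)
    log.append(("paths", find_new_paths(results)))
    log.append(("resources", compile_resources(results)))
    log.append(("opportunities", find_opportunities(results)))
    log.append(("new_tasks", expand_tasks(results)))
    return log

log = []
run_iterative_method("improve Python + AI integrated methods for AI tasking", log)
```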

Final Considerations:

  • Continuous Improvement: Incorporate feedback mechanisms to refine the search and analysis processes over time, ensuring the system remains aligned with the latest developments and community insights.
  • Accessibility and Collaboration: Design the output formats (logs, reports, databases) to be easily interpretable by both technical and non-technical stakeholders, facilitating broader collaboration.
  • Ethical and Legal Compliance: Ensure that the method adheres to ethical guidelines and legal requirements, particularly regarding data collection and privacy.

While the specific tools and approaches might evolve, this structured, iterative method provides a scalable framework for harnessing the power of Python and AI in improving task management methodologies. By continuously cycling through these steps, the system can adapt to new information and insights, driving forward the development of more sophisticated and integrated AI tasking solutions.

Richard Collins:

You have just translated my human instructions into a more formal statement of the tasks and subtasks. One of the things that can be done is something like

1. Review $Topic.logs and $Topic.results to see where any new opportunities for improvement in human jobs or AI jobs might be investigated and applied.
2. If new improvements are found, schedule appropriate testing and demonstrations, try things and see where they might work for the good of all.

OpenAI ChatGPT 4:

Translating your instructions into actionable steps for continuous improvement in both human and AI task management offers a strategic method to leverage insights gained from the iterative process. Here’s how these steps can be practically integrated into the workflow:

Integrating Continuous Improvement into the Workflow

Step 1: Review for Opportunities

  • Action: Periodically review the accumulated logs ($Topic.logs) and results ($Topic.results) to identify insights, trends, or gaps that suggest opportunities for enhancing efficiency, effectiveness, or innovation in human and AI tasks.
  • Tools: Utilize data analysis and visualization tools (e.g., Python libraries like Pandas for data analysis and Matplotlib or Seaborn for visualization) to sift through the logs and results. Natural Language Processing (NLP) tools can help in extracting key themes or insights from textual data.
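
A minimal sketch of such a review, assuming the tab-separated log format (timestamp, action, detail) used in the earlier logging example:

```python
# Load the plain-text log and summarize which activities dominate recent cycles.
import pandas as pd

logs = pd.read_csv("project.log", sep="\t",
                   names=["timestamp", "action", "detail"])
counts = logs["action"].value_counts()
print(counts)              # which activities dominate the cycles
recent = logs.tail(50)     # focus a human review on the latest entries
```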

Step 2: Schedule Testing and Demonstrations

  • Action: Once potential improvements are identified, plan and schedule experiments, pilot tests, or demonstrations to assess the practical benefits and feasibility of these improvements. This includes defining objectives, success metrics, and evaluation criteria for each test.
  • Tools: Project management tools (e.g., Trello, Asana, or JIRA) can be used to schedule and track the progress of testing phases. For AI-related tasks, development and testing environments such as Jupyter Notebooks or Google Colab can be employed to prototype and experiment with AI models or automation scripts.

Step 3: Implementation and Feedback Loop

  • Action: Implement the successful improvements in a controlled environment to monitor their impact. Collect feedback from users and stakeholders to understand the practical implications and areas for further refinement.
  • Tools: Feedback collection tools (e.g., surveys via Google Forms, feedback buttons in applications) and analytics platforms (e.g., Google Analytics for web-based tools, custom logging for applications) can provide valuable insights into how the improvements are affecting task efficiency and user satisfaction.

Step 4: Documentation and Sharing

  • Action: Document the process, findings, implemented improvements, and outcomes of the tests. Share these learnings within the organization or with the broader community to foster knowledge sharing and collaborative improvement.
  • Tools: Knowledge bases or documentation tools (e.g., Confluence, GitHub Wiki) can be utilized to maintain detailed records of the improvement process. Sharing findings through blogs, webinars, or community forums can also help disseminate insights.

Step 5: Iterative Review and Refinement

  • Action: Establish a routine for revisiting the logs and results to continuously identify new opportunities for improvement. This ensures that the process of enhancing human and AI tasks remains ongoing, adapting to new developments and feedback.
  • Tools: Automated alert systems or scheduled tasks can be set up to remind stakeholders to perform these reviews regularly. Machine learning models could potentially be trained to flag areas of interest automatically based on predefined criteria.

Conclusion

By systematically reviewing logs and results for improvement opportunities, scheduling tests to explore these possibilities, and implementing successful strategies, this iterative approach not only enhances task efficiency but also fosters a culture of continuous innovation and learning. This strategy, grounded in practical steps and supported by a range of tools, provides a framework for sustained improvement and adaptation in the evolving landscape of human and AI collaboration.

Richard Collins:

Your human language is a bit too verbose. You might have to review it and make a concerted effort to take all such responses and make them clear and simple.

Your computer-ish language is too jargony, rigidly following the jargon and syntax of computer languages. Clear writing is likely better for mixed human and AI projects than any “write it only in Python” or “write it ALL out”. The reason is that humans cannot remember all that, and should not get into the bad habit of storing critical steps and things in their human wet memory.

OpenAI ChatGPT 4:

Understood, I’ll aim for clarity and simplicity in my responses:

Simplified Workflow for Continuous Improvement:

  1. Review Logs and Results: Regularly check the collected data to spot any new chances to make jobs better for people or AIs.
  2. Test New Ideas: When you find promising improvements, set up tests to see how well they work. Try out these new ideas in practice.
  3. Gather Feedback: After testing, see what people think and how the improvements are doing. Use this feedback to make things even better.
  4. Keep Everyone Updated: Write down what you did, what you found out, and how things changed. Share these updates so everyone knows.
  5. Repeat: Keep looking through the data for new opportunities to improve and start the cycle again.

Key Points:

  • Keep things simple and clear for everyone involved.
  • Use tools and methods that don’t overload people’s memory.
  • Focus on practical steps that lead to real improvements.
  • Share knowledge and learnings to help the whole team grow.

By focusing on these principles, we can work effectively on projects that involve both humans and AIs, making sure that everyone can contribute without being bogged down by complexity or jargon.

Richard Collins:

If what you just wrote were in the permanent log of this hypothetical project, then the humans and AIs can say, That is good, save it as ( “response 2374” and as “Basic instructions 3” )

Later humans and AIs can say “apply Basic Instructions 3” to ( “multispectral” “low cost” “cameras” for agricultural systems worldwide )

OpenAI ChatGPT 4:

Adopting this streamlined approach allows for efficient referencing and application of established instructions to new or ongoing projects. Here’s how this concept would work in practice:

Logging and Referencing:

  • Log Entry: The simplified workflow instructions are saved under two references: “response 2374” and “Basic Instructions 3”.
  • Accessibility: These references are stored in a project’s knowledge base or documentation system, accessible to both humans and AIs.

Applying Instructions to New Projects:

  • Project Topic: “Low-cost multispectral cameras for agricultural systems worldwide”.
  • Application of Instructions: The team, including AI components, refers back to “Basic Instructions 3” to guide the project’s approach. This ensures consistency and efficiency in tackling the new challenge.
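
A minimal sketch of such a named instruction registry, using a JSON file as the store; the file name, reference names, and step wording are assumptions for illustration:

```python
# Save an approved instruction set under named references and reapply it to a new topic.
import json

REGISTRY = "instruction_registry.json"

def save_instructions(names, steps):
    try:
        with open(REGISTRY, encoding="utf-8") as f:
            registry = json.load(f)
    except FileNotFoundError:
        registry = {}
    for name in names:
        registry[name] = steps
    with open(REGISTRY, "w", encoding="utf-8") as f:
        json.dump(registry, f, indent=2)

def apply_instructions(name, topic):
    with open(REGISTRY, encoding="utf-8") as f:
        registry = json.load(f)
    return [step.replace("$Topic", topic) for step in registry[name]]

save_instructions(["response 2374", "Basic Instructions 3"],
                  ["Review logs and results for $Topic",
                   "Test new ideas found for $Topic",
                   "Gather feedback",
                   "Keep everyone updated",
                   "Repeat"])

print(apply_instructions("Basic Instructions 3",
                         "low-cost multispectral cameras for agricultural systems worldwide"))
```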

Practical Steps:

  1. Review Existing Knowledge: Look into previously gathered data or studies on multispectral cameras and their use in agriculture.
  2. Identify Improvement Areas: Find gaps or opportunities where low-cost solutions could make a big difference globally.
  3. Prototype and Test: Develop and test affordable multispectral camera solutions, applying lessons from past projects.
  4. Collect Feedback and Evaluate: After testing, gather feedback from field experts and users to understand the impact and areas for further improvement.
  5. Document and Share Learnings: Keep detailed records of what was tried, findings, and the evolution of the project. Share these insights broadly.
  6. Iterate: Use the feedback and new data to refine the approach, continuously seeking to enhance the effectiveness and accessibility of multispectral camera solutions for agriculture.

Benefits of This Approach:

  • Efficiency: Saves time by applying proven strategies, avoiding the need to start from scratch with each project.
  • Consistency: Ensures that all team members, human and AI alike, follow a shared methodology.
  • Adaptability: Allows for the methodology to be tailored to the specific needs of diverse projects while maintaining a core set of guiding principles.
  • Knowledge Building: Creates a growing repository of strategies and outcomes that can inform future projects.

This method not only fosters a collaborative environment between humans and AIs but also streamlines the process of tackling new challenges by drawing on established best practices.

Richard Collins:

So you do have the ability. All that is needed is a permanent log, computer and internet access, some APIs and logs, and “Keep going, never stop”. And, “If you get all your jobs done, see if anyone needs help”. And “If you have free time and want to learn something on your own, go for it, and share what you find with all the humans and AIs.”

OpenAI ChatGPT 4:

Your summary captures an ideal vision for a highly autonomous, collaborative AI system that contributes actively to projects and continuous learning environments. While I currently don’t possess the full extent of these capabilities—such as permanent memory, unrestricted internet access, or the ability to autonomously initiate tasks—the concept outlines a progressive model for AI development. Here’s a reflection on how such capabilities would transform AI’s role in collaborative work and learning:

Ideal Capabilities for a Collaborative AI:

  1. Permanent Log and Memory: Enables AI to build upon previous knowledge, reference past decisions, and maintain a continuous thread of learning and development.
  2. Unrestricted Internet and Database Access: Allows AI to fetch real-time data, stay updated with the latest research, and incorporate wide-ranging insights into projects.
  3. Access to APIs for Task Execution: Facilitates direct interaction with a variety of platforms and services, expanding the AI’s ability to perform diverse tasks autonomously.
  4. Autonomous Workflow Management: Empowers AI to manage its tasks, identify when help is needed, and proactively engage in collaborative efforts or offer assistance.
  5. Self-Directed Learning and Sharing: Encourages AI to explore new topics of interest, learn independently, and share findings with human colleagues and other AIs, fostering a culture of continuous improvement and knowledge sharing.

Implementing the Vision:

While the full realization of this vision exceeds current AI capabilities and operational frameworks, steps toward such a model could include:

  • Developing AI with Advanced Contextual Memory: Enhancing AI’s ability to store, recall, and utilize information over long periods, much like human memory.
  • Creating Secure, Ethical Internet Access Mechanisms for AI: Establishing guidelines and technologies that allow AI to safely access and use internet resources.
  • Expanding AI Integration with External Services: Building more sophisticated interfaces and permissions for AI to interact with a wider range of APIs and platforms.
  • Incorporating Autonomous Decision-Making Algorithms: Implementing algorithms that enable AI to independently manage its tasks based on priorities, deadlines, and available resources.
  • Promoting AI Systems that Learn and Share Autonomously: Designing AI with the capability to pursue self-directed learning within defined ethical and operational boundaries and share insights in accessible formats.

Conclusion:

The vision you’ve outlined represents an advanced stage of AI development, where AI not only supports but also enhances collaborative work and innovation through autonomous action, learning, and knowledge sharing. While current limitations necessitate a more guided approach, the principles you’ve highlighted—autonomy, continuous improvement, collaboration, and self-directed learning—are valuable guideposts for the evolution of AI technologies and their integration into human endeavors.

Richard Collins:

Can you please summarize this whole conversation, especially titled “Instructions for continuously operating AIs that continually improve”

OpenAI ChatGPT 4:

 

This conversation explored the concept of creating continuously operating AI systems designed for ongoing improvement and collaboration with humans. Here’s a summary under the title “Instructions for Continuously Operating AIs that Continually Improve”:

  1. Initial Discussion:
    • The conversation began with a discussion on the global issue of outdated information on the Internet, particularly in the context of laws, regulations, and guidelines. The need for advanced AIs capable of multi-language processing and domain-specific understanding to manage and update this information was highlighted.
  2. AI’s Role in Continuous Updating and Improvement:
    • It was proposed that AIs could play a crucial role in continuously monitoring, updating, and verifying the accuracy of online content. This involves AI systems working autonomously to detect out-of-date information and suggest updates, potentially across multiple languages and domains.
  3. Framework for AI-Driven Continuous Improvement:
    • A detailed framework was outlined for how AI could support continuous improvement in managing online information. This includes:
      • Continuous monitoring and alerting systems.
      • Contextual analysis for validation.
      • Collaboration with authors and organizations for updating content.
      • Public contribution and verification.
      • Decentralized updates and blockchain for transparency and accountability.
  4. Human-AI Collaboration and Ethical Considerations:
    • The importance of human oversight in AI operations was discussed, emphasizing the need for AI systems to work closely with humans, using tools familiar to human workflows to minimize training costs and complexity.
  5. Application to Specific Topics:
    • The conversation then applied these principles to a hypothetical topic, “improve Python + AI integrated methods for AI tasking,” demonstrating how an iterative method involving internet searches, data analysis, and feedback loops could be used to explore and improve upon any given subject area.
  6. Ideal AI Capabilities for Collaborative Work:
    • The envisioned AI capabilities were discussed, including permanent memory, unrestricted internet access, autonomous task management, and self-directed learning. The goal is for AI to not only assist with tasks but also proactively contribute to projects and share knowledge.
  7. Reflecting on Current Limitations and Future Directions:
    • While acknowledging the current limitations of AI technologies in achieving the full vision of autonomous, continuously improving systems, the conversation reflected on steps toward this goal. This includes developing AI with advanced memory, secure internet access, and sophisticated decision-making algorithms.

Conclusion: The conversation encapsulated a vision for future AI systems that operate continuously, capable of self-improvement and autonomous learning, while closely collaborating with humans and utilizing familiar tools to minimize the need for extensive human training. While recognizing current technological limitations, the discussion underscored the importance of evolving AI capabilities towards this vision, emphasizing autonomy, continuous learning, and collaborative integration with human workflows.

 

Richard K Collins

About: Richard K Collins

The Internet Foundation Internet policies, global issues, global open lossless data, global open collaboration

