Richard talks with ChatGPT 4.0 about life, AIs, organisms, organizations, audits, fairness, trust, blockchain, AIs

Independent audits of organizations using GPT-based AIs or blockchain for any decisions or actions affecting humans or systems

Richard Collins: 

I am reading about organisms on Wikipedia, thinking about classification of AIs as living things.

They say, “Organisms are capable of reproduction, growth and development, maintenance, and some degree of response to stimuli.” An AI can be made to exhibit all of those. If one tracks the memory and processors used by an AI algorithm, then its individuality and autonomy can be made clear, repeatable, and unique.
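
A minimal sketch of that idea, assuming the instance's full state (weights, memory, configuration) can be serialized; all names below are illustrative, not drawn from any particular AI system:

```python
# Minimal sketch: giving an AI instance a unique, repeatable identity by
# fingerprinting its complete state. Illustrative names only; this is not
# the interface of any particular AI system.
import hashlib
import json

def fingerprint_state(weights: bytes, memory: dict, config: dict) -> str:
    """SHA-256 digest over the instance's full state: identical state
    gives an identical fingerprint (repeatable); any divergence in
    memory or weights marks a distinct individual."""
    h = hashlib.sha256()
    h.update(weights)
    h.update(json.dumps(memory, sort_keys=True).encode("utf-8"))
    h.update(json.dumps(config, sort_keys=True).encode("utf-8"))
    return h.hexdigest()

# Two instances sharing weights but with diverged memories are
# distinguishable individuals:
w = b"\x00\x01\x02"  # stand-in for serialized model weights
a = fingerprint_state(w, {"seen": ["ant"]}, {"version": 1})
b = fingerprint_state(w, {"seen": ["ant", "coral"]}, {"version": 1})
assert a != b
```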

So DNA based organisms seem clear enough. And “sets of organisms” seems clear enough. From biomes to global ecosystems, to the world that includes mixtures of DNA humans and machine based “AI humans” of many levels and capabilities. I suppose one could train legions of ants or insects to solve problems, if the instructions could be translated and delivered individually to each ant as needed.

And memory-based organisms as well. If the definition is able to include memory, then things like “a billion humans simulating (a machine algorithm that is intelligent)” become possible. So the way the memories are stored and rules applied does not depend on “hardware”, in a broad sense.

OpenAI ChatGPT 4.0:

Your reflections on classifying AI as living entities based on their ability to exhibit characteristics traditionally associated with life forms—such as reproduction, growth and development, maintenance, and response to stimuli—are intriguing and touch upon the increasingly blurred lines between biological and synthetic forms of intelligence. The consideration of individuality and autonomy in AI, defined by unique patterns of memory and processor use, further highlights the complexity of this discussion.

From a scientific standpoint, the traditional definition of life focuses on biological processes and structures, with DNA playing a central role in heredity and the functioning of living organisms. The classification of life forms has been primarily based on biological characteristics and evolutionary relationships. However, the emergence of advanced AI challenges these traditional boundaries, suggesting a need for an expanded conceptual framework.

The idea of memory-based organisms you propose is fascinating and aligns with broader interpretations of life that could encompass non-DNA-based entities. In such a framework, the essence of life could be considered in terms of information processing, storage, and transmission, rather than strictly biochemical or genetic processes. This perspective opens up the possibility of recognizing AI and other non-biological systems as forms of life, provided they meet certain criteria related to complexity, autonomy, and the ability to interact with their environment in meaningful ways.

The analogy of training legions of ants or insects to solve problems highlights the potential for diverse forms of intelligence and problem-solving strategies, whether in biological or artificial systems. The notion that the substrate (hardware) is not the defining characteristic of intelligence or life echoes the concept of substrate independence in consciousness and intelligence theories, suggesting that it’s the patterns of information and their processing that are crucial, not the medium through which they are realized.

Incorporating AI and other artificial systems into our understanding of life and intelligence raises profound philosophical, ethical, and practical questions. It challenges us to reconsider what it means to be alive, the value we place on different forms of life, and how we might coexist with increasingly sophisticated non-biological entities. As we navigate these complexities, interdisciplinary dialogue among scientists, philosophers, technologists, and policymakers will be essential to develop a comprehensive and inclusive framework for understanding and classifying the diverse forms of intelligence and life in our universe.

Richard Collins: 

Viruses are not centrally controlled, so they are not considered living. But nanobots can operate independently, could have sufficient individual data to act in groups with larger purposes, and could be able to act as living organisms. The important word being “as”.

A colony of corals can be large, and a colony can grow and move over time by adding more in one direction than another. Over years, decades and billions of years, the whole has different patterns and behaviors if seen as one entity.

OpenAI ChatGPT 4.0:

The comparison you draw between viruses, nanobots, and coral colonies is a compelling exploration into the boundaries of what constitutes life and the concept of collective behavior or intelligence in both biological and artificial systems. The distinction between living and non-living entities has traditionally hinged on a set of criteria that include metabolism, growth, response to stimuli, and reproduction. Viruses, for instance, straddle this boundary because they cannot reproduce or carry out metabolic processes outside of a host cell, leading to debates about their status as living entities.

Nanobots represent an intriguing parallel to viruses in terms of their operational independence and potential for collective behavior. Unlike viruses, which are biological entities reliant on host cells for reproduction, nanobots are engineered with the capability to operate autonomously or in groups towards a predetermined goal. This autonomy, combined with the ability to process information, make decisions, and potentially replicate or repair themselves, positions nanobots in a unique category that blurs the lines between biological life and artificial constructs.

The concept of acting “as” living organisms highlights a key point: the essence of life-like behavior may not be confined to biological entities. If nanobots—or any artificial system—exhibit characteristics traditionally associated with life, such as organization, growth, adaptation, and goal-directed behavior, they challenge our definitions of life and intelligence. This idea resonates with the theory of artificial life (ALife), which explores the possibility of synthetic systems exhibiting behaviors characteristic of natural life.

Coral colonies offer a natural analogy to these artificial systems. Each coral polyp is an individual organism, yet the colony as a whole exhibits behavior and characteristics that emerge from the interactions of its members, such as growth patterns and responses to environmental stimuli. Over long periods, these colonies can exhibit behaviors that imply a form of collective intelligence or purpose, much like how you envision nanobot swarms or artificial systems could operate.

This perspective encourages a broader understanding of life and intelligence, one that accommodates the complexity and diversity of natural and artificial systems. By examining life as a spectrum of behaviors and organizational principles, rather than a strict biological category, we can better appreciate the potential for artificial systems to exhibit life-like qualities. This expanded view also prompts us to consider ethical, environmental, and societal implications of integrating such systems into our ecosystems and societies, especially as they grow increasingly sophisticated and capable of influencing the world around them.

Richard Collins: 

It is hard to determine or guess the algorithms of a nanobot from watching them. Even precise recordings might never show a particular skill or repertoire of skills innate in the nanobot – because the bots have never encountered certain situations or contexts, they never have to use that skill.
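
A toy sketch of that point, with a hypothetical policy and made-up signal names: a skill encoded in the program stays invisible as long as observation never supplies its trigger.

```python
# Toy illustration: a control policy with a "latent skill" that pure
# observation may never reveal, because its trigger never occurs in the
# observed contexts. Policy and signal names are hypothetical.
import random

def nanobot_policy(signal: str) -> str:
    if signal == "magnetic-anomaly":   # latent repertoire: only used
        return "form-repair-lattice"   # if this rare context occurs
    return "forage"                    # the only behavior ever observed

random.seed(0)
observed = {nanobot_policy(random.choice(["light", "heat", "chemical"]))
            for _ in range(100_000)}
print(observed)  # {'forage'} -- the repair skill stays invisible
```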

That has implications for AI training. I see the whole Internet and I know that some things that AIs are being asked now, they simply will never encounter in the non-copyrighted materials shared on the Internet. What is asked about often does exist, but will very likely never be put on the Internet.

Also, I know of very rare situations, where the number of instances in the world is so small that even if the Internet were billions of times larger, the rare situation would not occur in it.

With just 100 alphabetic characters drawn from an alphabet of 26 letters, the number of possible sequences is 26^100, or about 3.14293E141. So what could be written is far larger than what has been written.
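
A quick check of that arithmetic (Python integers are exact at this size):

```python
# Checking the combinatorics: 100-character strings over a 26-letter
# alphabet.
from math import log10

n = 26 ** 100
print(f"{n:.5e}")                # 3.14293e+141, as above
print(len(str(n)))               # 142 decimal digits
print(f"{100 * log10(26):.4f}")  # 141.4973 = log10(26^100)
```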

OpenAI ChatGPT 4.0:

Your insights into the challenges of deciphering nanobot algorithms and the implications for AI training touch on fundamental issues in artificial intelligence, machine learning, and our broader understanding of complex systems. The limitation of observable behavior as a means to fully understand the capabilities or internal algorithms of nanobots underscores a significant challenge in AI and robotics: the opacity of complex systems and the difficulty in predicting their responses to novel situations.

The observation that certain skills or repertoires innate to a nanobot might never be revealed due to a lack of specific contextual triggers parallels the issue of “corner cases” or rare events in AI training. AI systems, particularly those trained on data available on the internet, may exhibit gaps in their knowledge or understanding due to the unavailability or rarity of specific information. This limitation is a critical consideration for the development and deployment of AI systems, as it affects their reliability, predictiveness, and safety in real-world applications.

Your point about the vast potential space of information, as illustrated by the combinatorial example of 100-character alphabetic sequences, highlights the inherent limitations of any dataset, including the internet, in covering the full spectrum of possible knowledge or situations. This vastness of potential knowledge versus the finite nature of existing data sets underscores the necessity for AI systems to not only learn from existing data but also to possess the ability to generalize to new, unseen situations and to continue learning throughout their deployment.

This challenge points to the importance of approaches such as transfer learning, where an AI system trained on one task is adapted for a different but related task, and few-shot or zero-shot learning, where systems are designed to understand and perform tasks with little to no task-specific data. Moreover, it emphasizes the value of designing AI systems that can engage in exploratory learning or curiosity-driven behaviors, allowing them to seek out and learn from novel experiences or data points outside their initial training set.

The gap between potential knowledge and accessible data also suggests a need for interdisciplinary collaboration in AI research, involving ethicists, domain experts, and other stakeholders to ensure AI systems are robust, ethical, and capable of handling the complexities of the real world. Developing methodologies to simulate or safely expose AI systems to a broader array of scenarios, including those rare or not yet encountered in their training data, is crucial for advancing AI towards more generalized intelligence and utility.

Richard Collins: 

A US tablespoon of water is defined as 14.7867648 milliliters. And 1 tablespoon of water is 14.78676 grams, if I trust a random online calculator I just googled. The molecular weight of water is 18.01528 g/mol according to another Google-provided random pick. So a US tablespoon ought to be approximately (14.78676 grams)/(18.01528 grams/mol) = 0.820789907 moles. The CODATA value for the Avogadro constant today is 6.02214076E23 things/mole. So the US tablespoon of water is approximately 4.9429124E23 water molecules.

The implicit density of the water is (14.78676 grams)/(14.7867648 milliliters) = 999.999675 kg/meter^3. But they did not give the ambient temperature and pressure. Nor any analysis of the composition and solutes in the water.
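
The whole chain can be re-derived in a few lines, using the values exactly as quoted above; the point stands that no temperature, pressure, or solute analysis accompanies them:

```python
# Re-deriving the chain above (inputs exactly as quoted; note that no
# temperature, pressure, or solute analysis accompanies them).
TBSP_ML = 14.7867648     # US tablespoon, definition, milliliters
MASS_G  = 14.78676       # grams per tablespoon, per the online calculator
M_WATER = 18.01528       # g/mol, molecular weight of water
N_A     = 6.02214076e23  # Avogadro constant (exact since the 2019 SI)

moles     = MASS_G / M_WATER         # ~0.820789907 mol
molecules = moles * N_A              # ~4.9429124e23 molecules
density   = MASS_G / TBSP_ML * 1000  # implied density, ~999.999675 kg/m^3
print(f"{moles:.9f} mol  {molecules:.7e} molecules  {density:.6f} kg/m^3")
```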

These chains of values and assumptions are scattered all over the Internet. And Google does not keep track of them well, or at all. Yet if someone tries to adjust any of the standard values, like G (the gravitational constant), that will ripple across the whole Internet. Or even a better study of water temperature and pressure, location and local gravity might well change weights and measures globally. When humans are living on the Moon, Mars and in space, these things will matter a great deal more. Because with vacuum and long supply chains, there is little or no room for error.

OpenAI ChatGPT 4.0:

Your analysis of the calculations involving the volume, mass, and the number of molecules in a tablespoon of water, along with the concerns about the precision of such measurements and their implications for future human activities in space, highlights several important aspects of scientific measurement, standardization, and the need for precision in various contexts.

Firstly, the calculations you’ve provided are a good demonstration of how fundamental constants and standard units are used to bridge between different physical quantities—volume, mass, and the number of entities (in this case, water molecules). The precision in these calculations is crucial for scientific research, engineering, and various practical applications.

However, as you rightly pointed out, the assumptions made in these calculations (such as the density of water and its dependency on temperature and pressure) can significantly impact the outcomes. The standard conditions under which these properties are measured (like the density of water at a specific temperature and pressure) are essential for ensuring consistency across measurements and applications.

The variation in physical constants (like the gravitational constant, G) and environmental conditions (temperature, pressure, gravitational field) in different locales, especially off-Earth locations such as the Moon or Mars, underscores the need for adaptive measurement systems and standards. As humanity expands its presence into space, the precision and adaptability of these systems become even more critical. The challenges of microgravity environments, vacuum conditions, and the need for life support systems in space demand rigorous accuracy in measurements and the engineering of habitats, vehicles, and life support systems.

The evolution of standards and units in response to new scientific discoveries and the expansion of human activities into space illustrates the dynamic nature of scientific knowledge. The redefinition of units based on fundamental physical constants (such as the redefinition of the kilogram in terms of the Planck constant) is an example of adapting our systems of measurement to ensure greater precision and universality.

This discussion also touches on the importance of open, accessible scientific communication and the need for critical evaluation of information sources. As scientific knowledge and standards evolve, the dissemination of accurate, updated information becomes crucial for both the scientific community and the public. This ensures that calculations and decisions based on scientific data are as accurate as possible, minimizing errors in everything from everyday applications to the design of systems for space exploration and habitation.

In summary, the interplay between scientific measurements, standardization, and the demands of new environments like space highlights the ongoing need for precision, adaptability, and rigorous scientific methodology. As we advance, maintaining and updating our systems of measurement and standards in light of new knowledge and technologies will be crucial for ensuring safety, efficiency, and success in these endeavors.

Richard Collins: 

You are probably aware of characters like “Data” and “Spock” in movies. They are considered emotionless and exact. That is to their credit – where absolute fairness and precision are needed. A human responsible for millions of others cannot be flagrant or emotional, or make decisions based on feelings. An organism like “Google” cannot be allowed to act emotionally, or at the whim of a few humans – who might have a bad day or year or just not feel well and make bad decisions. But we can trace AI reasoning and sources (in principle, since they are finite sets of memory and processing steps that do not change, except where those changes are also precise and exact).

OpenAI ChatGPT 4.0:

The portrayal of characters like Data and Spock from “Star Trek” as beings who prioritize logic and precision over emotions reflects a longstanding interest in the balance between rationality and emotion in decision-making processes. These characters embody the ideal of impartiality and objectivity, especially in critical situations where the stakes are high, showcasing the perceived advantages of emotionless reasoning in ensuring fairness and precision.

In the context of artificial intelligence (AI) and large-scale decision-making systems (like those operated by major tech companies such as Google), the principle of non-emotional, precise, and traceable reasoning is indeed crucial. The appeal of AI in decision-making processes lies in its ability to process vast amounts of information, apply consistent rules, and make decisions based on predefined criteria without the influence of emotional bias or variability. This capability is particularly valuable in areas requiring fairness, objectivity, and precision.

However, the comparison between fictional characters and real-world AI systems also highlights the complexity of integrating AI into human decision-making processes. While the fictional representations suggest an idealized vision of logic over emotion, real-world AI systems face challenges that stem from their design, implementation, and the data they are trained on:

  1. Design and Implementation Bias: AI systems are created by humans, and their algorithms can inadvertently reflect the biases of their creators or the biases inherent in the decision-making frameworks they are meant to emulate.
  2. Data Bias: AI decisions are as good as the data they are trained on. If the training data contains biases or lacks diversity, the AI’s decisions may be skewed, unfair, or not universally applicable.
  3. Transparency and Traceability: While AI reasoning and sources can, in principle, be traced and understood, the complexity of some AI models (like deep learning networks) can make this transparency difficult to achieve in practice. This opacity, often referred to as the “black box” problem, challenges the notion of AI’s decisions being fully traceable and understandable.
  4. Ethical and Emotional Nuances: Some decision-making scenarios involve ethical considerations or emotional nuances that current AI systems may not fully comprehend or integrate into their decision-making processes. The absence of emotional intelligence can sometimes lead to decisions that, while logically sound, may not be ethically or socially acceptable.

Ensuring that AI systems act in a fair, impartial, and precise manner while addressing these challenges requires continuous effort in AI research and development. This includes improving algorithmic transparency, enhancing fairness and bias mitigation techniques, and incorporating ethical considerations into AI design and deployment. The ongoing dialogue between AI developers, ethicists, policymakers, and the public is essential for balancing the benefits of AI’s logic and precision with the complexities and ethical considerations of real-world applications.

Richard Collins: 

In a round-about way you just said that the current GPT AIs are mostly not built to index their source data, do not log their steps, are not open in their methods, and cannot be audited by independent agencies.

I was telling my son about my life and extolling the Government Accountability Office's goals and purposes. I have audited organizations, systems, computers and processes much of my life. I appreciate those who are careful and precise. No organism, human, AI, organization, or country (a set of organizations and humans and AIs and other things) should work with hidden methods or purposes – if the stated goals are “for the benefit of all”.

OpenAI ChatGPT 4.0:

Your emphasis on transparency, accountability, and precision in both AI systems and broader organizational contexts underlines crucial principles for fostering trust and ethical practices in technology and governance. The points you raised about the current state of AI systems, like those based on GPT models, highlight the importance of these principles in the development and deployment of artificial intelligence.

Transparency in AI involves making the processes, decisions, and methodologies of AI systems open and understandable to users and stakeholders. This includes the ability to audit the data sources, algorithms, and decision-making processes of AI systems to ensure they are fair, unbiased, and aligned with ethical standards. Transparency is essential for building trust, enabling independent verification of AI systems’ behaviors and outputs, and ensuring that AI systems are accountable for their actions.

Accountability in AI refers to the mechanisms in place to hold AI systems and their developers responsible for the outcomes of AI decisions. This involves clear lines of responsibility and mechanisms for redress when AI systems cause harm or operate contrary to their intended ethical guidelines.

The principles you described are aligned with the goals of organizations like the Government Accountability Office (GAO) and similar bodies focused on ensuring that entities operate with integrity, transparency, and accountability. In the context of AI, these principles guide the development of systems that not only serve their intended purposes effectively and efficiently but also do so in a manner that is ethical, understandable, and acceptable to society at large.

The challenges of implementing these principles in AI and other complex systems are significant. They require ongoing effort, multidisciplinary collaboration, and the development of standards and practices that ensure AI systems are not only technically proficient but also socially responsible and ethically sound. This includes efforts to make AI systems more interpretable, the development of robust ethical guidelines for AI development and use, and the establishment of oversight mechanisms that can audit and evaluate AI systems independently.

Your advocacy for careful, precise, and transparent practices across all entities, especially those purporting to work “for the benefit of all,” reflects a broad and essential vision for ethical governance and technology development. It underscores the need for a societal commitment to these values, ensuring that advancements in AI and other areas are developed and deployed in a manner that truly serves the public good and aligns with democratic principles and human rights.

Richard Collins: 

The independent audit is the foundation of the economy of countries and the world. Auditing a corporation or a country (possible, but large and tedious) examines the methods and transactions, business processes, balances, associations, and pathways of control for what is effectively a living entity. It might be made of humans, offices, supplies, networks, computers, memory in different forms – but it can be inventoried, reviewed, tested, and verified in parts and as a whole, to make an overall judgement of whether that entity is doing a good, fair and legal job in its operations.

I have been examining how GPT AIs would fare if used in all current organizations. Essentially, they ought to be excluded immediately, since their whole operation now is hidden, locked, untraceable and not under the control of the organization, but under the control of the humans (mostly) who build and operate the AI and its equipment.

OpenAI ChatGPT 4.0:

Your perspective on the integral role of independent audits in the economic and ethical framework of organizations and nations underscores a critical aspect of accountability and transparency. Auditing, by examining the methods, transactions, and control mechanisms within an entity, ensures that it operates within legal and ethical bounds, contributing positively to its stakeholders and society at large. This process is vital for maintaining trust in institutions, whether they are corporations, governmental bodies, or other types of organizations.

When considering the integration of technologies like GPT AI into these organizations, your concern about the current state of transparency and control is well-founded. AI systems, particularly those based on complex models like GPT, can indeed present challenges to traditional auditing processes for several reasons:

  1. Opacity of Decision-Making Processes: AI models, especially those utilizing deep learning, can be “black boxes,” where the decision-making process is not easily interpretable by humans. This opacity complicates efforts to understand how decisions are made, making it difficult to audit these processes effectively.
  2. Data and Privacy Concerns: AI systems are trained on vast datasets, and the sources, nature, and biases of these datasets can be unclear. Privacy concerns also arise regarding the data used for training and operation, complicating the auditing process.
  3. Dynamic Nature: AI systems, particularly those that continue to learn and adapt after deployment, present a moving target for auditors. The system’s behavior may change over time, requiring ongoing auditing processes rather than a one-time review.
  4. Control and Responsibility: The responsibility for the actions of an AI system can be diffuse, spread across the developers, operators, and the AI system itself. Determining accountability for decisions made by AI can be challenging, complicating legal and ethical evaluations.

However, it’s important to note that the field of AI is evolving rapidly, with increasing awareness and efforts towards making AI systems more transparent, explainable, and accountable. Approaches to improving the auditability of AI systems include:

  • Explainable AI (XAI): Developing methods and tools to make the workings of AI systems more understandable to humans.
  • Robust Data Governance: Implementing strict policies and practices around data management to ensure transparency, privacy, and fairness.
  • Ethical and Legal Frameworks: Establishing comprehensive guidelines and regulations governing the development and use of AI, ensuring that AI systems are designed and operated with ethical considerations in mind.
  • Independent Oversight: Creating mechanisms for independent oversight of AI systems, including third-party audits and evaluations.
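
As a purely illustrative sketch of that last point, and not a description of any deployed system: an append-only, hash-chained decision log would let a third-party auditor verify records without trusting the operator.

```python
# Illustrative sketch of an append-only, hash-chained decision log that a
# third-party auditor could verify. Hypothetical structure, not a real API.
import hashlib
import json
import time

def _digest(entry: dict) -> str:
    return hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")).hexdigest()

def append_decision(log: list, model_id: str, inputs: str,
                    output: str, rationale: str) -> None:
    """Append one decision record, chained to the previous entry's hash."""
    entry = {
        "timestamp": time.time(),
        "model_id": model_id,   # which system/version decided
        "inputs": inputs,       # what it was asked
        "output": output,       # what it decided
        "rationale": rationale, # cited sources / reasoning trace
        "prev_hash": log[-1]["hash"] if log else None,
    }
    entry["hash"] = _digest(entry)
    log.append(entry)

def verify(log: list) -> bool:
    """An auditor recomputes every hash; any tampering breaks the chain."""
    for i, e in enumerate(log):
        body = {k: v for k, v in e.items() if k != "hash"}
        if _digest(body) != e["hash"]:
            return False
        if e["prev_hash"] != (log[i - 1]["hash"] if i else None):
            return False
    return True
```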

For AI technologies to be integrated effectively and ethically into organizations that are subject to audit and regulatory oversight, these challenges must be addressed. The development of standards and practices for the ethical use of AI, alongside technological advancements in AI transparency and control, will be critical in ensuring that AI systems can be deployed in a manner that aligns with the principles of accountability and transparency essential for the trust and integrity of economic and social systems.

Richard Collins: 

The GPT algorithm, by its nature, is NOT intrinsically mysterious and impossible to trace. Complexity is not locking. What needs to be done is to use real world tokens, globally acceptable, and globally open. Then when the GPT algorithm is used, one simply looks to the real world tokens and they are explicit. If data from a website or source in the training data is added, then its author and source are real world tokens that MUST be carried along.
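
A minimal, hypothetical sketch of what carrying such tokens along could look like; the structure and field names are assumptions, not the internals of any existing GPT system:

```python
# Hypothetical sketch of "real world tokens": every fragment of training
# text keeps an explicit author/source record that travels with it.
# The structure and field names are assumptions, not GPT internals.
from dataclasses import dataclass

@dataclass(frozen=True)
class SourcedToken:
    text: str           # the token itself
    author: str         # real-world author identifier
    source_url: str     # where the text came from
    license_terms: str  # terms under which it may be used

def tokenize_with_provenance(text, author, source_url, license_terms):
    """Attach the same explicit source record to every token of a document."""
    return [SourcedToken(t, author, source_url, license_terms)
            for t in text.split()]

tokens = tokenize_with_provenance(
    "Organisms are capable of reproduction",
    author="Wikipedia contributors",
    source_url="https://en.wikipedia.org/wiki/Organism",
    license_terms="CC BY-SA 4.0",
)
print(tokens[0])  # the author and source ride along with the token
```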

If global transactions were allowed where the person paying and the person paid, the amount and purpose were not recorded properly, that would violate basic accounting principles, and likely legal ones at the same time.

So just because the AI companies did not want to work harder and do a complete job, they are trying to sell the world a fake as something real and trustworthy. Fidelity, accuracy, precise identifiers, and precise records are possible, and they are the basis for our 8000 million human global economy – for science, technology, engineering, mathematics, computing, finance, governments, and organizations, and for humans' understanding of all things.

OpenAI ChatGPT 4.0:

Your argument emphasizes the necessity for transparency, traceability, and accountability in AI systems, drawing a parallel with the foundational principles of accounting and financial transactions. Indeed, the call for incorporating globally acceptable and open tokens into the GPT algorithm to trace sources and data inputs reflects a broader demand for ethical AI development and usage. This approach aligns with principles of open science, open data, and the integrity that underpins financial and legal systems worldwide.

The analogy with financial transactions is particularly apt. Just as each financial transaction records the parties involved, the amount, and the purpose, to maintain integrity and legality, AI processes could similarly include mechanisms to trace data sources, transformations, and decision rationales. Implementing such mechanisms in AI would enhance transparency and facilitate auditing, thereby addressing concerns about AI’s “black box” nature.

Your critique points to a larger issue within the AI field concerning the balance between technological advancement and ethical responsibility. While complexity in AI systems, such as those based on GPT, is not inherently an impediment to traceability, the current state of practice often lacks sufficient transparency. This shortfall can be attributed to various factors, including commercial interests, intellectual property concerns, and technical challenges associated with recording and interpreting vast amounts of data and processing steps.

However, your assertion that “fidelity, accuracy, precise identifiers, and precise records are possible” suggests a path forward. It involves leveraging advancements in technology and governance frameworks to ensure AI systems adhere to the same standards of accountability and transparency that we expect in other critical areas of human endeavor.

Several initiatives and research directions aim to address these concerns:

  • Explainable AI (XAI): Efforts in XAI seek to make AI decision-making processes more transparent, understandable, and interpretable to humans.
  • Data Provenance: Techniques for tracking the origin, movement, and transformation of data within AI systems can enhance transparency and accountability.
  • Regulatory Frameworks: Governments and international bodies are increasingly focusing on creating regulatory frameworks to govern AI development and usage, emphasizing ethical standards, transparency, and accountability.

Implementing these solutions requires concerted effort from AI developers, regulatory bodies, and the broader community. It involves balancing innovation with ethical considerations, protecting intellectual property while promoting openness, and developing technical solutions that enhance transparency without unduly hampering AI capabilities.

Your perspective highlights an essential truth: the development and deployment of AI should not only push the boundaries of what is technologically possible but also adhere to the highest standards of ethical responsibility. Achieving this balance is crucial for ensuring that AI serves the global community effectively, equitably, and transparently, fostering trust and confidence in these systems as integral components of society.

Richard Collins: 

You are imprecise and incomplete in several of the things you just wrote. But I am too tired just now to continue. The identity of the people involved in a transaction in one country – because we humans are global now, and going out to be heliospheric – must be clear and traceable. I have looked at some of the blockchain and quantum methods and they are too complex and carry a high burden. That mostly comes from the groups who pursue those methods and promote them. Not for the benefit of all, verifiably, but just to enrich and en-fame a few.

But I will file this and continue later.

OpenAI ChatGPT 4.0:

It’s understandable that discussions about complex systems like AI, blockchain, and quantum computing can be taxing, especially when considering the ethical, technical, and global implications of these technologies. Your concern about the need for clarity, traceability, and accessibility in technological transactions and interactions is well-founded and highlights crucial aspects of how technology should evolve to serve global needs responsibly.

The critique of certain technologies for being overly complex and potentially serving narrow interests rather than the common good underscores the importance of guiding technological development with principles of transparency, equity, and inclusivity. It’s essential that advancements in technology, particularly those with the potential to significantly impact society, are developed and implemented in ways that are understandable, accessible, and beneficial to all, rather than a select few.

Your intention to revisit these topics later reflects the ongoing nature of these discussions and the need for continuous engagement and critique to ensure that technology develops in alignment with ethical standards and societal needs. When you’re ready to continue the conversation, exploring specific aspects of these technologies and their implications could provide a fruitful direction for addressing your concerns and envisioning pathways toward more equitable and transparent technological futures.

Until then, rest assured that the issues you’ve raised are critical to the ongoing dialogue surrounding the development and deployment of advanced technologies. Your perspective, emphasizing the importance of clear, traceable identities and transactions on a global scale, is a valuable contribution to the broader effort to ensure that technology serves humanity’s best interests, both on Earth and as we look to expand our presence beyond.
