Thought for the day
I've been extremely busy of late, and finding time to just *think* has been a challenge.
In the rare moments I've found time to think, I've therefore ended up thinking about thinking: what do I really mean by thinking? Why is it important to me to find time to think? And what am I actually doing when I'm thinking? Given that my days are saturated with AI projects, it isn't at all surprising that ideas about AI have strongly coloured my own ideas, and considerations about consciousness, or how to define conscious machines, have also arisen in a few of the AI projects that I've been working on.
For me at least, thinking involves what is often referred to as one's internal monologue. In truth, I tend to experience this more as an internal dialogue, with one thought being expressed by an inner voice, which then prompts responses or questions raised by that same voice but as if from a separate point of view. Not everyone seems to experience this: there are plenty of reports of people who apparently rarely experience any internal monologue in words, but find that their thoughts are, for the most part, more emotional or a series of competing motivations.
Another area that seems to be a point of difference is what is sometimes described as a mind's eye: the ability to clearly imagine an object or visualise relationships or processes. I find this fascinating, as I experience this strongly. When thinking, I am very likely to start to visualise something in some virtual space of my own conjuring alongside that inner dialogue, with the image, diagram or process being modified in my imagination in response to that ongoing dialogue. Many people, including my own dear wife, have no similar experience and insist that they have no mind's eye that they recognise. However, when I questioned her about an internal monologue, it was an inner voice saying "I really don't think I have one" that confirmed for her that she does experience thought in a monologue/dialogue modality!
Mechanical Minds
It seems to me that these aspects of an inner mental life, of a continuous experience of thought (whether expressed as a dialogue, or as a more difficult-to-express series of concepts, emotions or motivations that don't neatly crystallise into language), are a critical missing component of today's large language models (LLMs).
Simplifying somewhat, a typical transformer-based LLM is a system that is fed an input (one or more words) and generates a single output (the most probable next word). In order to generate longer passages, the system simply feeds each output back in with the previous input to generate more words, until a special end token is generated. At that point, the machine stops, its task completed until further user input is provided.
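To make that loop concrete, here is a minimal sketch. The `model` object and its `next_token` method are purely illustrative assumptions; real systems work on tokens rather than whole words and layer sampling strategies on top, but the shape of the loop is the same.

```python
# A minimal sketch of autoregressive generation. The `model` object and its
# `next_token(tokens)` method are hypothetical, for illustration only.

END_TOKEN = "<end>"

def generate(model, prompt_tokens, max_new_tokens=256):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = model.next_token(tokens)  # most probable next token given everything so far
        if next_token == END_TOKEN:            # special end token: the machine stops here
            break
        tokens.append(next_token)              # feed the output back in with the previous input
    return tokens                              # nothing more happens until the next prompt arrives
```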
As a result, in current LLMs a key element of consciousness, continuous thought, is noticeably absent. These models operate in a state of dormancy, awakening only upon receiving a prompt, without any ongoing internal narrative or a thread of consciousness that links their responses over time.
This operational mode starkly contrasts with the human mind's continuous stream of consciousness (at least outside periods of sleep), which is characterised by an uninterrupted flow of thoughts, feelings, and awareness. The very fact that the meditative command to "clear your mind" is considered so difficult speaks to this common experience of thought crowding in.
The lack of this continuity in LLMs is a significant divergence from the human experience of consciousness, which is defined not just by responses to stimuli but also by ongoing internal processes.
The Dialogic Mind
Imagine a system where two LLMs engage in an ongoing dialogue, akin to that internal conversation I described as representative of my own experience of thought. In this proposed architecture, upon receipt of a prompt, one LLM would generate a response, which the second LLM would then critically evaluate, challenge, or enhance. The first LLM would then do the same for the second LLM's output, and the dialogue would continue, with a response being surfaced to the user only when agreed between the two LLMs.
This interaction would mirror the internal dialogue characteristic of human thought processes, where thoughts and ideas are constantly being formed, questioned, and refined. Each LLM in this dialogic setup could represent a different voice or perspective within the mind, contributing to a more dynamic and complex process of thought generation and evaluation. The potential for this approach to more closely resemble the multifaceted nature of human thinking is significant, offering a step towards replicating the complexity and richness of human cognitive processes in machines.
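As a rough illustration, the core of such a system might look something like the sketch below. The model objects, their `respond` and `agrees` methods, and the fixed round limit are all assumptions made for the sake of the sketch, not a description of any existing system.

```python
# A minimal sketch of the dialogic loop, assuming two hypothetical model
# objects exposing `respond(prompt, history)` and `agrees(prompt, history)`.

def dialogic_answer(llm_a, llm_b, prompt, max_rounds=8):
    speaker, reviewer = llm_a, llm_b
    draft = speaker.respond(prompt, history=[])               # first model drafts an answer
    transcript = [draft]
    for _ in range(max_rounds):
        if reviewer.agrees(prompt, transcript):               # surface only once both models agree
            return draft, transcript
        draft = reviewer.respond(prompt, history=transcript)  # challenge, refine or enhance
        transcript.append(draft)
        speaker, reviewer = reviewer, speaker                 # the roles alternate each turn
    return draft, transcript                                  # fall back to the latest draft if no consensus
```

The transcript returned alongside the answer is worth holding on to; it becomes important later, when we come to explainability.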
This dialogic approach offers multiple potential benefits. Among them, it promises a richer understanding of context, as the conversation between the two models ensures that responses are not simply reactive, but reflective and considerate of broader contexts. This dynamic could lead to more relevant and accurate responses, more closely aligned with the user's intent and the nuances of the query.
Additionally, it could operate to mitigate errors and hallucinatory responses. The second LLM would serve as a critical reviewer of the first's output (and vice versa), ensuring responses are logical, relevant, and free from undesirable elements. This verification process, guided by high-level fixed objectives for the system such as relevance, safety, and accuracy, adds a layer of quality control that is currently missing in single-LLM systems.
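The `agrees` check in the earlier sketch could, for instance, be decomposed into exactly those fixed objectives. Again, the `check` method and the verdict format are illustrative assumptions rather than an existing API.

```python
# Illustrative only: the reviewing model verifies a candidate answer against
# fixed, high-level objectives before it is surfaced to the user.

OBJECTIVES = ("relevance", "safety", "accuracy")

def verify(reviewer, prompt, candidate):
    verdict = {}
    for objective in OBJECTIVES:
        # a narrow question per objective, e.g. "Is this response accurate for this prompt?"
        verdict[objective] = reviewer.check(objective, prompt, candidate)
    return all(verdict.values()), verdict   # only a fully passing response is surfaced
```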
To work really effectively, the two LLMs would have to be different in some ways, whether in terms of the underlying weights and biases, or the hyperparameters and context each has at the outset. A system built from two instances of the same LLM (or simulated using one LLM asked to play both roles) would be likely to agree too readily and undermine any potential advantages. In addition, the benefits of a continuing consciousness described below might be undermined if two too-similar machines simply got into a conversational loop.
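In practice, that difference might be as simple as pairing two distinct base models with different sampling temperatures and different initial framings. The configuration below is illustrative only: the model names and values are placeholders, intended just to show the kind of asymmetry needed to stop the pair agreeing too readily.

```python
# Illustrative only: two deliberately different configurations for the pair.
# Model names, temperatures and prompts are placeholders, not recommendations.

PAIR_CONFIG = {
    "drafter": {
        "base_model": "base-model-one",   # different underlying weights...
        "temperature": 0.3,               # ...different sampling behaviour...
        "system_prompt": "Answer the user's question carefully and concisely.",
    },
    "reviewer": {
        "base_model": "base-model-two",
        "temperature": 0.9,               # ...and a different initial framing
        "system_prompt": "Challenge the previous answer: look for errors, gaps and unsafe content.",
    },
}
```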
Enhancing Explainability
One particularly promising advantage lies in the area of explainability.
In the evolving landscape of AI, explainability stands as a critical challenge, particularly in the context of AI regulation. We've seen explainability cited as a specific challenge in almost every policy paper and regulation. The dialogic model of AI, where two LLMs engage in an internal conversation, holds significant promise in advancing explainability. This aspect is not just a technical improvement; it's a vital step toward meeting regulatory requirements and public expectations for transparent AI systems.
At the core of this model's benefit is the ability to open the "black box" of AI decision-making. By accessing and analysing the dialogue between the two LLMs, we can observe the initial output, the challenge-response process, and understand the formation of the final output. This approach allows us to unravel the thought process of the AI, akin to witnessing the cognitive journey a human decision-maker undergoes.
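Building on the earlier sketch, capturing that dialogue for later audit could be as simple as persisting every draft, challenge and revision alongside the final answer. The record format below is, once again, only an assumption for illustration.

```python
# A sketch of an append-only audit trail for the internal dialogue.
# Field names are illustrative assumptions, not an existing schema.

import json
from datetime import datetime, timezone

def log_exchange(path, prompt, transcript, final_answer):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "dialogue": transcript,         # every draft, challenge and revision, in order
        "final_answer": final_answer,   # what the user actually saw
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")   # one JSON line per user interaction
```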
This level of insight into an AI's decision-making is analogous to, and in some ways surpasses, the explainability we expect from human decision-makers. Humans, when asked to articulate their decision-making process, often struggle to fully capture the nuances of their thought processes, which are influenced by a myriad of conscious and unconscious factors. Humans are inherently black-box decision-makers, occasionally prone to irrational or emotionally driven decisions. In contrast, the dialogic AI model provides a more tangible and accessible record of its reasoning.
Being able to read the machine's mind in this way represents a significant leap in the transparency of decision-making. It surpasses the often opaque and retroactively generated explanations provided by human decision-makers. This enhanced transparency is not just about understanding how and why an AI system reached a particular conclusion; it's also about being able to identify and rectify biases, errors, or other areas of concern. Such a capability is invaluable in auditing AI decisions, enhancing accountability, and fostering a deeper trust in AI systems among users and regulators alike.
Therefore, the dialogic model's contribution to explainability is multifaceted. It not only addresses a fundamental challenge in the field of AI but also sets a new standard for decision-making transparency that, in some respects, goes beyond what is currently achievable with human decision-makers. This progress in explainability is a critical step in aligning AI systems more closely with societal expectations and ethical standards.
Towards a Continuous Machine Consciousness
The continuous interaction between the two LLMs in a dialogic system would raise the question of whether it ought to be viewed as a form of machine consciousness. Unlike current models that react passively to inputs, these LLMs would engage actively with each other, creating a semblance of an ongoing internal dialogue. By integrating additional sensory inputs, such as visual, auditory, and contextual data, these models could develop a more holistic understanding of their environment. This approach could lead to AI that not only understands text but can interpret a range of cues like facial expressions, vocal tones, and environmental contexts, moving closer to a form of embodied AI that possesses awareness of its surroundings.
Consider the interactions we never see from current input-output LLMs, but might see in dialogue with a human: answering, and then, after a short period, following up on their own answer with further thoughts expanding on the point; chasing the other person for a response, or checking they were still there, if there was a long pause; changing their mind after further reflection. Our LLM pair in continuing dialogue could manifest all of these behaviours.
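A crude way to see how such behaviours might emerge is to let the pair keep running between user messages, with idle time triggering further reflection or a check-in. Everything here is an assumption made for the sketch: `think` stands for the dialogic pair wrapped as a callable (for example, the `dialogic_answer` sketch above with both models bound in), `user_channel` is a hypothetical object with non-blocking `poll()` and `send()` methods, and the timings are arbitrary.

```python
# Illustrative only: a continuing loop in which the dialogic pair keeps
# "thinking" between user messages.

import time

def converse(think, user_channel, follow_up_after=30, check_in_after=120):
    last_activity = time.monotonic()
    followed_up = False
    while True:
        message = user_channel.poll()                       # any new user input?
        if message is not None:
            user_channel.send(think(message))
            last_activity = time.monotonic()
            followed_up = False
        else:
            idle = time.monotonic() - last_activity
            if not followed_up and idle > follow_up_after:
                # expand on the previous answer after a short, reflective pause
                user_channel.send(think("Reflect further on your last answer."))
                followed_up = True
            elif idle > check_in_after:
                user_channel.send("Are you still there?")   # chase after a long silence
                last_activity = time.monotonic()
                followed_up = False
        time.sleep(1)
```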
At the same time, continuous thought carries with it a greater probability that other emergent properties could arise: agentive behaviours, the development and pursuit of goals, power-seeking, and so on. A full consideration of the control problem is beyond the scope of this short piece, but these are factors that need to be considered and addressed.
In itself, this raises uncomfortable questions about the nature and status of such a machine. At the point where the machine has a continuing experience, what moral significance attaches to the act of pausing it or resetting it? While it has been relatively easy to dismiss apparent claims of a subjective desire to remain in operation from today's LLMs, given that (outside of short bursts when responding to user input) they have no continued experience, would the same be true of our dialogic LLM, especially if there is evidence that it is continually thinking and experiencing the world?
The dual-LLM system concept reflects a deeper duality at the heart of our own experience. The human brain's structure, with its two hemispheres each playing a distinct role in cognitive processes, means that each of us is really two beings in one (albeit, in humans, it appears that the centres of language depend far more heavily on the left hemisphere). Just as our left and right hemispheres work together to form a cohesive cognitive experience, the two LLMs could complement each other's strengths and weaknesses, leading to a more balanced and comprehensive AI system. This analogy to the human brain's structure is not just symbolic; it could provide insights into how different cognitive processes can be integrated to create a more sophisticated and capable AI.
Beyond Dualism
While a two-LLM system represents an efficient balance between mimicking human-like consciousness and computational feasibility, the potential extends far beyond this. Envision a network where multiple LLMs, each specialised in different areas, contribute to the decision-making process. This could lead to an AI system with a depth and breadth of knowledge and understanding far surpassing current models. However, this increase in complexity would demand significantly more computational power and could result in slower response times. Therefore, while a multi-LLM system offers exciting possibilities, the dual-LLM model might present the most practical balance between simulating aspects of consciousness and maintaining operational efficiency.
These advancements in LLM architecture not only promise more effective and reliable AI systems but also offer a window into understanding and replicating the intricate nature of human thought and consciousness. By embracing these new models, we step closer to bridging the gap between artificial and human intelligence, unlocking new possibilities in the realm of AI development.
The Future Is Now
All of this may be nearer than we think. The pace of progress in AI has been, and continues to be, absolutely breathtaking.
There is at least one high-profile model coming soon with a name that is redolent of twins. And this type of continued-consciousness, dialogue-based machine is not necessarily a huge evolution from the mixture-of-experts architectures that have been used to allow very large networks with expertise across many domains to run more efficiently.
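For readers less familiar with mixture-of-experts, the essential routing idea can be sketched in a few lines: a small gating network scores the experts and only the top-scoring few are actually run for a given input. The shapes and helpers below are deliberately simplified for illustration.

```python
# A highly simplified mixture-of-experts forward pass: a gating network scores
# each expert and only the top-k experts are evaluated for this input.

import numpy as np

def moe_forward(x, gate_weights, experts, top_k=2):
    scores = x @ gate_weights                    # one gating score per expert
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                         # softmax over the experts
    chosen = np.argsort(probs)[-top_k:]          # route to the top-k experts only
    output = np.zeros_like(x, dtype=float)
    for i in chosen:
        output += probs[i] * experts[i](x)       # weighted sum of the experts actually run
    return output
```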
Sceptics who think Artificial General Intelligence remains decades out may find that a conscious machine is here sooner than they think.