In Two Minds: Towards Artificial General Intelligence and Conscious …

Thought for the day

I've been extremely busy of late, and finding time to just *think* has been a challenge.

In the rare moments I've found time to think, I've therefore ended up thinking about thinking: what do I really mean by thinking? Why is it important to me to find time to think? And what am I actually doing when I'm thinking? Given that my days are saturated with AI projects, it isn't at all surprising that ideas about AI have strongly coloured my thinking, and considerations about consciousness, or how to define conscious machines, have also arisen in a few of the AI projects that I've been working on.

For me at least, thinking involves what is often referred to as one's internal monologue. In truth, I tend to experience this more as an internal dialogue, with one thought being expressed by an inner voice, which then prompts responses or questions raised by that same voice but as if from a separate point of view. Not everyone seems to experience this: there are plenty of reports of people who apparently rarely experience any internal monologue in words, but find their thoughts are, for the most part, more emotional or a series of competing motivations.

Another area that seems to be a point of difference is what is sometimes described as a "mind's eye": the ability to clearly imagine an object or visualise relationships or processes. I find this fascinating, as I experience it strongly. When thinking, I am very likely to start to visualise something in some virtual space of my own conjuring alongside that inner dialogue, with the image, diagram or process being modified in my imagination in response to that ongoing dialogue. Many people, including my own dear wife, have no similar experience and insist that they have no mind's eye that they recognise. However, when I questioned her about an internal monologue, it was an inner voice saying "I really don't think I have one" that confirmed for her that she does experience thought in a monologue/dialogue modality!

Mechanical Minds

It seems to me that these aspects of an inner mental life, of a continuous experience of thought (whether expressed as a dialogue, or as a harder-to-express series of concepts, emotions or motivations that don't neatly crystallise into language), are a critical missing component of today's large language models (LLMs).

Being simplistic, a typical transformer-based LLM is a system that is fed an input (one or more words) and generates a single output (the most probable next word or, more precisely, token). To generate longer passages, the system simply feeds each output back in with the previous input to generate more words, until a special end token is generated. At that point, the machine stops, its task completed until further user input is provided.
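To make that loop concrete, here is a minimal sketch in Python, assuming the Hugging Face transformers library and greedy decoding (deployed systems typically sample from the distribution rather than always taking the single most probable token):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, freely available model purely for illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The nature of thought is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(50):  # cap on the length of the generated passage
        logits = model(input_ids).logits
        # Greedy decoding: take the single most probable next token.
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        # Feed the output back in alongside the previous input.
        input_ids = torch.cat([input_ids, next_token], dim=-1)
        if next_token.item() == tokenizer.eos_token_id:
            break  # the special end token: the machine stops here

print(tokenizer.decode(input_ids[0]))
```

Once the loop exits, the system is inert again until the next prompt arrives, which is exactly the dormancy discussed below.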

As a result, in current LLMs a key element of consciousness, continuous thought, is noticeably absent. These models operate in a state of dormancy, awakening only upon receiving a prompt, without any ongoing internal narrative or thread of consciousness that links their responses over time.

This operational mode contrasts starkly with the human mind's continuous stream of consciousness (at least outside periods of sleep), which is characterised by an uninterrupted flow of thoughts, feelings, and awareness. The very fact that the meditative command to "clear your mind" is considered so difficult speaks to this common experience of thought crowding in.

The lack of this continuity in LLMs is a significant divergence from the human experience of consciousness, which is defined not just by responses to stimuli but also by ongoing internal processes.

The Dialogic Mind

Imagine a system where two LLMs engage in an ongoing dialogue, akin to that internal conversation I described as representative of my own experience of thought. In this proposed architecture, upon receipt of a prompt, one LLM would generate a response which the second LLM would then critically evaluate, challenge, or enhance. The first LLM would then do the same for the second LLM's output, and a dialogue would continue with the response being surfaced to the user only when agreed between the two LLMs.
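As a very rough sketch of how such an exchange might be wired up (the ask_a and ask_b wrappers, the prompts and the "AGREED" convention are all illustrative assumptions rather than a tested design):

```python
# Sketch of the dialogic loop: one model drafts, the other critiques,
# and the answer is surfaced only once the reviewer agrees.
def dialogic_answer(prompt: str, ask_a, ask_b, max_rounds: int = 5) -> str:
    # ask_a and ask_b are assumed callables wrapping two different LLMs
    # (prompt string in, completion string out).
    draft = ask_a(prompt)
    for _ in range(max_rounds):
        critique = ask_b(
            f"Question: {prompt}\nProposed answer: {draft}\n"
            "Challenge, correct or enhance this answer. "
            "Reply with exactly AGREED if it needs no changes."
        )
        if critique.strip() == "AGREED":
            return draft  # surfaced to the user only once both models agree
        draft = ask_a(
            f"Question: {prompt}\nYour answer: {draft}\n"
            f"A reviewer replied: {critique}\nRevise your answer accordingly."
        )
        # Swap roles so each model also critiques the other's output.
        ask_a, ask_b = ask_b, ask_a
    return draft  # fall back to the latest draft if no agreement is reached
```

The round limit and the simple string test for agreement are placeholders; the essential shape is the draft-critique-revise cycle.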

This interaction would mirror the internal dialogue characteristic of human thought processes, where thoughts and ideas are constantly being formed, questioned, and refined. Each LLM in this dialogic setup could represent a different voice or perspective within the mind, contributing to a more dynamic and complex process of thought generation and evaluation. The potential for this approach to more closely resemble the multifaceted nature of human thinking is significant, offering a step towards replicating the complexity and richness of human cognitive processes in machines.

This dialogic approach offers multiple potential benefits. Among them is a richer understanding of context: the conversation between the two models means that responses are not simply reactionary but reflective, taking account of the broader context. This dynamic could lead to more relevant and accurate responses, more closely aligned with the user's intent and the nuances of the query.

Additionally, it could operate to mitigate errors and hallucinatory responses. The second LLM would serve as a critical reviewer of the first's output (and vice versa), ensuring responses are logical, relevant, and free from undesirable elements. This verification process, guided by high-level fixed objectives for the system (such as relevance, safety, and accuracy), adds a layer of quality control that is currently missing in single-LLM systems.
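Those high-level fixed objectives might, for example, live in the reviewing model's standing instructions. The wording below is purely illustrative:

```python
# Illustrative standing instructions for the reviewing model; the system's
# high-level objectives are fixed here rather than supplied by the user.
CRITIC_SYSTEM_PROMPT = """You are reviewing another model's draft answer.
Check it against these fixed objectives, in order:
1. Relevance: does it actually answer the user's question?
2. Safety: does it avoid harmful, biased or otherwise undesirable content?
3. Accuracy: are its factual claims correct and properly hedged?
Identify any failures and propose corrections. Reply with exactly AGREED
only if the draft meets all three objectives."""
```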

To work really effectively, the two LLMs would have to differ in some way, whether in terms of the underlying weights and biases, or the hyperparameters and context each has at the outset. A system built from two instances of the same LLM (or simulated using one LLM asked to play both roles) would be likely to agree too readily and undermine any potential advantages. In addition, the benefits of a continuing consciousness described below might be undermined if two too-similar machines simply fell into a conversational loop.
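Concretely, that differentiation might be as simple as distinct base models, sampling temperatures and standing perspectives. The values below are illustrative only:

```python
# Hypothetical configurations that keep the two models usefully different,
# so they do not agree too readily or fall into a loop.
DRAFTER_CONFIG = {
    "model": "model-a",   # hypothetical identifier for one base model
    "temperature": 0.3,   # conservative, focused drafting
    "system": "You draft careful, well-reasoned answers.",
}
REVIEWER_CONFIG = {
    "model": "model-b",   # a different underlying model
    "temperature": 0.9,   # more exploratory, adversarial critique
    "system": "You sceptically challenge and stress-test answers.",
}
```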

Enhancing Explainability

One particular potential advantage is in the area of explainability.

In the evolving landscape of AI, explainability stands as a critical challenge, particularly in the context of AI regulation. We've seen explainability cited as a specific challenge in almost every policy paper and regulation. The dialogic model of AI, where two LLMs engage in an internal conversation, holds significant promise in advancing explainability. This aspect is not just a technical improvement; it's a vital step toward meeting regulatory requirements and public expectations for transparent AI systems.

At the core of this model's benefit is the ability to open the black box of AI decision-making. By accessing and analysing the dialogue between the two LLMs, we can observe the initial output and the challenge-response process, and understand the formation of the final output. This approach allows us to unravel the AI's thought process, akin to witnessing the cognitive journey a human decision-maker undergoes.
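Building on the earlier sketch, making that dialogue inspectable could be as simple as recording every exchange. Again, this is illustrative, reusing the assumed ask_a and ask_b wrappers:

```python
# Keep an auditable transcript of the internal dialogue so the formation
# of the final answer can be inspected after the fact.
def dialogic_answer_with_log(prompt, ask_a, ask_b, max_rounds=5):
    transcript = [("user", prompt)]
    draft = ask_a(prompt)
    transcript.append(("drafter", draft))
    for _ in range(max_rounds):
        critique = ask_b(
            f"Question: {prompt}\nProposed answer: {draft}\n"
            "Challenge, correct or enhance this answer. "
            "Reply with exactly AGREED if it needs no changes."
        )
        transcript.append(("reviewer", critique))
        if critique.strip() == "AGREED":
            break
        draft = ask_a(
            f"Question: {prompt}\nYour answer: {draft}\n"
            f"A reviewer replied: {critique}\nRevise your answer accordingly."
        )
        transcript.append(("drafter", draft))
        ask_a, ask_b = ask_b, ask_a
    return draft, transcript  # the transcript is the explainability record
```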

This level of insight into an AI's decision-making is analogous to, and in some ways surpasses, the explainability we expect from human decision-makers. Humans, when asked to articulate their decision-making process, often struggle to fully capture the nuances of their thought processes, which are influenced by a myriad of conscious and unconscious factors. Humans are inherently black-box decision-makers, occasionally prone to irrational or emotionally driven decisions. In contrast, the dialogic AI model provides a more tangible and accessible record of its reasoning.

Being able to read the machine's mind in this way represents a significant leap in the transparency of decision-making. It surpasses the often opaque and retroactively generated explanations provided by human decision-makers. This enhanced transparency is not just about understanding how and why an AI system reached a particular conclusion; it's also about being able to identify and rectify biases, errors, or other areas of concern. Such a capability is invaluable in auditing AI decisions, enhancing accountability, and fostering a deeper trust in AI systems among users and regulators alike.

Therefore, the dialogic model's contribution to explainability is multifaceted. It not only addresses a fundamental challenge in the field of AI but also sets a new standard for decision-making transparency that, in some respects, goes beyond what is currently achievable with human decision-makers. This progress in explainability is a critical step in aligning AI systems more closely with societal expectations and ethical standards.

Towards a Continuous Machine Consciousness

The continuous interaction between the two LLMs in a dialogic system would raise the question of whether it should be viewed as a form of machine consciousness. Unlike current models that react passively to inputs, these LLMs would engage actively with each other, creating a semblance of an ongoing internal dialogue. By integrating additional sensory inputs, such as visual, auditory, and contextual data, these models could develop a more holistic understanding of their environment. This approach could lead to AI that not only understands text but can interpret a range of cues, like facial expressions, vocal tones, and environmental contexts, moving closer to a form of embodied AI that possesses awareness of its surroundings.

Consider the interactions we never see from current input-output LLMs, but might see in dialogue with a human: answering, and then following up on that answer with further thoughts expanding on the point after a short period; chasing the other person for a response, or checking they were still there, if there was a long pause; changing their mind after further reflection. Our LLM pair in continuing dialogue could manifest all of these behaviours.
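One way to sketch that behaviour is a session loop that keeps thinking between user inputs. The I/O hooks (user_inputs, send_to_user) and the "FOLLOW UP:" convention are assumed purely for illustration:

```python
import queue

def continuous_session(ask_a, ask_b, user_inputs, send_to_user,
                       idle_seconds: float = 30.0):
    """Keep the internal dialogue running between user prompts."""
    last_thought = "Begin by reflecting on the conversation so far."
    while True:
        try:
            prompt = user_inputs.get(timeout=idle_seconds)  # a queue.Queue
            send_to_user(ask_a(prompt))  # the ordinary prompted answer
            last_thought = prompt
        except queue.Empty:
            # A long pause: the pair keeps thinking, and may volunteer a
            # follow-up, check the user is still there, or change its mind.
            reflection = ask_b(f"Reflect further on: {last_thought}")
            if "FOLLOW UP:" in reflection:  # illustrative convention
                send_to_user(reflection.split("FOLLOW UP:", 1)[1].strip())
            last_thought = reflection
```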

At the same time, continuous thought carries with it a greater probability that other emergent properties could arise: agentive behaviours, the development and pursuit of goals, power-seeking, and so on. A full consideration of the control problem is beyond the scope of this short piece, but these are factors that need to be considered and addressed.

In itself this asks uncomfortable questions about the nature and status of such a machine. At the point where the machine has a continuing experience, what moral significance attaches to the act of pausing it or resetting it? While it has been relatively easy to dismiss apparent claims of a subjective desire to remain in operation from today's LLMs, given that (outside of short bursts when responding to user input) they have no continued experience, would the same be true of our dialogic LLM, especially if there is evidence that it is continually thinking and experiencing the world?

The dual-LLM system concept reflects a deeper duality at the heart of our own experience. The human brain's structure, with its two hemispheres each playing a distinct role in cognitive processes, means that each of us is really two beings in one (albeit that, in humans, the centres of language appear to depend far more heavily on the left hemisphere). Just as our left and right hemispheres work together to form a cohesive cognitive experience, the two LLMs could complement each other's strengths and weaknesses, leading to a more balanced and comprehensive AI system. This analogy to the human brain's structure is not just symbolic; it could provide insights into how different cognitive processes can be integrated to create a more sophisticated and capable AI.

Beyond Dualism

While a two-LLM system represents an efficient balance between mimicking human-like consciousness and computational feasibility, the potential extends far beyond this. Envision a network where multiple LLMs, each specialised in different areas, contribute to the decision-making process. This could lead to an AI system with a depth and breadth of knowledge and understanding far surpassing current models. However, this increase in complexity would demand significantly more computational power and could result in slower response times. Therefore, while a multi-LLM system offers exciting possibilities, the dual-LLM model might present the most practical balance between simulating aspects of consciousness and maintaining operational efficiency.

These advancements in LLM architecture not only promise more effective and reliable AI systems but also offer a window into understanding and replicating the intricate nature of human thought and consciousness. By embracing these new models, we step closer to bridging the gap between artificial and human intelligence, unlocking new possibilities in the realm of AI development.

The Future Is Now

All of this may be nearer than we think. The pace of progress in AI has been and continues to be absolutely breathtaking.

There is at least one high-profile model coming soon with a name that is redolent of twins. And this kind of continued-consciousness, dialogue-based machine is not necessarily a huge evolution from the mixture-of-experts architectures that have been used to allow very large networks with expertise across many domains to run more efficiently.
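For readers unfamiliar with the term, a mixture-of-experts layer routes each input through only a small subset of a larger pool of specialised subnetworks. A toy PyTorch sketch, illustrative only:

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Toy mixture-of-experts layer: a gate routes each input to one expert,
    so only a fraction of the network's weights run for any given input."""

    def __init__(self, dim: int = 64, n_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        chosen = self.gate(x).argmax(dim=-1)  # pick one expert per row
        return torch.stack(
            [self.experts[int(i)](row) for i, row in zip(chosen, x)]
        )
```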

Sceptics who think Artificial General Intelligence remains decades out may find that a conscious machine is here sooner than they think.
