Category Archives: Artificial General Intelligence
In Two Minds: Towards Artificial General Intelligence and Conscious … – Lexology
Thought for the day
I've been extremely busy of late, and finding time to just *think* has been a challenge.
In the rare moments I've found time to think, I've therefore ended up thinking about thinking: what do I really mean by thinking? Why is it important to me to find time to think? And what am I actually doing when I'm thinking? Given that my days are saturated with AI projects, it isn't at all surprising that ideas about AI have strongly coloured my ideas, and considerations about consciousness, or how to define conscious machines, have also arisen in a few of the AI projects that I've been working on.
For me at least, thinking involves what is often referred to as one's internal monologue. In truth, I tend to experience this more as an internal dialogue, with one thought being expressed by an inner voice, which then prompts responses or questions raised by that same voice but as if from a separate point of view. Not everyone seems to experience this: there are plenty of reports of people who apparently rarely experience any internal monologue in words, but find their thoughts are more emotional, or a series of competing motivations, for the most part.
Another area that seems to be a point of difference is what is sometimes described as a mind's eye: the ability to clearly imagine an object or visualise relationships or processes. I find this fascinating, as I experience this strongly. When thinking I am very likely to start to visualise something in some virtual space of my own conjuring alongside that inner dialogue, with the image, diagram or process being modified in my imagination in response to that ongoing dialogue. Many people, including my own dear wife, have no similar experience and insist that they have no mind's eye that they recognise. However, when I questioned her about an internal monologue, it was an inner voice saying "I really don't think I have one" that confirmed for her that she does experience thought in a monologue/dialogue modality!
Mechanical Minds
It seems to me that these aspects of an inner mental life, of a continuous experience of thought (whether expressed as a dialogue, or as a more difficult-to-express series of concepts, emotions or motivations that don't neatly crystallise into language), are a critical missing component of today's large language models (LLMs).
To be simplistic, a typical transformer-based LLM is a system that is fed an input (one or more words) and generates a single output (the most probable next word). In order to generate longer passages, the system simply feeds each output back in with the previous input to generate more words, until a special end token is generated. At that point, the machine stops, its task completed until further user input is provided.
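To make that loop concrete, here is a deliberately toy sketch in Python. The lookup-table "model" is an invented stand-in for a real transformer; the feed-the-output-back-in loop is the part described above.

```python
# Toy stand-in for a transformer LLM: a lookup table of next-word probabilities.
# A real model computes these distributions with attention layers; the loop
# around it is what the paragraph above describes.
toy_model = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "<end>": 0.3},
    "sat": {"down": 0.8, "<end>": 0.2},
    "down": {"<end>": 1.0},
    "dog": {"<end>": 1.0},
}

def generate(prompt_tokens, max_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        dist = toy_model.get(tokens[-1], {"<end>": 1.0})
        next_token = max(dist, key=dist.get)   # the most probable next word
        if next_token == "<end>":              # special end token: stop here
            break
        tokens.append(next_token)              # feed the output back in as input
    return " ".join(tokens)

print(generate(["the"]))  # -> "the cat sat down"
```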
As a result, in current LLMs a key element of consciousness, continuous thought, is noticeably absent. These models operate in a state of dormancy, awakening only upon receiving a prompt, without any ongoing internal narrative or thread of consciousness that links their responses over time.
This operational mode starkly contrasts with the human mind's continuous stream of consciousness (at least outside periods of sleep), which is characterised by an uninterrupted flow of thoughts, feelings, and awareness. The very fact that the meditative command to clear your mind is considered so difficult speaks to this common experience of thoughts crowding in.
The lack of this continuity in LLMs is a significant divergence from the human experience of consciousness, which is defined not just by responses to stimuli but also by ongoing internal processes.
The Dialogic Mind
Imagine a system where two LLMs engage in an ongoing dialogue, akin to that internal conversation I described as representative of my own experience of thought. In this proposed architecture, upon receipt of a prompt, one LLM would generate a response, which the second LLM would then critically evaluate, challenge, or enhance. The first LLM would then do the same for the second LLM's output, and the dialogue would continue, with the response being surfaced to the user only when agreed between the two LLMs.
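A minimal sketch of that control loop might look like the following. The `llm_a` and `llm_b` callables, the agreement check and the turn limit are all assumptions made for illustration; this is not a description of any existing product.

```python
# Hypothetical dialogic loop: two (deliberately different) LLMs critique each
# other's drafts, and only an agreed answer is surfaced to the user.
def dialogic_answer(prompt, llm_a, llm_b, agree, max_turns=6):
    transcript = []
    draft = llm_a(f"User asked: {prompt}\nDraft an answer.")
    transcript.append(("A", draft))
    for _ in range(max_turns):
        review = llm_b(f"Question: {prompt}\nDraft answer: {draft}\n"
                       "Challenge, correct or improve this draft.")
        transcript.append(("B", review))
        if agree(draft, review):          # both models accept the current draft
            return draft, transcript      # surface only the agreed response
        draft = llm_a(f"Question: {prompt}\nCritique: {review}\n"
                      "Revise the draft accordingly.")
        transcript.append(("A", draft))
    return draft, transcript              # fall back once the turn limit is hit
```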
This interaction would mirror the internal dialogue characteristic of human thought processes, where thoughts and ideas are constantly being formed, questioned, and refined. Each LLM in this dialogic setup could represent a different voice or perspective within the mind, contributing to a more dynamic and complex process of thought generation and evaluation. The potential for this approach to more closely resemble the multifaceted nature of human thinking is significant, offering a step towards replicating the complexity and richness of human cognitive processes in machines.
This dialogic system approach offers multiple potential benefits. Among them, it promises a richer understanding of context, as the conversation between the two models ensures that responses are not simply reactionary but reflective and considerate of broader context. This dynamic could lead to more relevant and accurate responses, more closely aligned with the user's intent and the nuances of the query.
Additionally, it could help mitigate errors and hallucinatory responses. The second LLM would serve as a critical reviewer of the first's output (and vice versa), ensuring responses are logical, relevant, and free from undesirable elements. This verification process, guided by high-level fixed objectives for the system such as relevance, safety, and accuracy, adds a layer of quality control that is currently missing in single-LLM systems.
To work really effectively, the two LLMs would have to be different in some ways, whether in terms of the underlying weights and biases, or the hyperparameters and context each has at the outset. A system built from two instances of the same LLM (or simulated using one LLM asked to play both roles) would be likely to agree too readily and undermine any potential advantages. In addition, the benefits of a continuing consciousness described below might be undermined if two too-similar machines simply got into a conversational loop.
Enhancing Explainability
One particular potential advantage is in the area of explainability.
In the evolving landscape of AI, explainability stands as a critical challenge, particularly in the context of AI regulation. We've seen explainability cited as a specific challenge in almost every policy paper and regulation. The dialogic model of AI, where two LLMs engage in an internal conversation, holds significant promise in advancing explainability. This aspect is not just a technical improvement; it's a vital step toward meeting regulatory requirements and public expectations for transparent AI systems.
At the core of this model's benefit is the ability to open the black box of AI decision-making. By accessing and analysing the dialogue between the two LLMs, we can observe the initial output and the challenge-response process, and understand how the final output was formed. This approach allows us to unravel the thought process of the AI, akin to witnessing the cognitive journey a human decision-maker undergoes.
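In a system like the one sketched earlier, that audit trail could simply be the stored transcript of turns. The snippet below assumes the (speaker, message) layout of the earlier sketch and shows how the reasoning trail might be surfaced for review.

```python
# Illustrative audit helper: walk the (speaker, message) pairs recorded by the
# dialogic loop and print the reasoning trail that led to the final answer.
def explain(transcript, preview=120):
    for i, (speaker, message) in enumerate(transcript):
        stage = "initial draft" if i == 0 else "challenge / revision"
        print(f"[{i}] LLM {speaker} ({stage}): {message[:preview]}")
    print("Final surfaced answer:", transcript[-1][1][:preview])
```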
This level of insight into an AI's decision-making is analogous to, and in some ways surpasses, the explainability we expect from human decision-makers. Humans, when asked to articulate their decision-making process, often struggle to fully capture the nuances of their thought processes, which are influenced by a myriad of conscious and unconscious factors. Humans are inherently black-box decision-makers, occasionally prone to irrational or emotionally driven decisions. In contrast, the dialogic AI model provides a more tangible and accessible record of its reasoning.
Being able to read the machine's mind in this way represents a significant leap in the transparency of decision-making. It surpasses the often opaque and retroactively generated explanations provided by human decision-makers. This enhanced transparency is not just about understanding how and why an AI system reached a particular conclusion; it's also about being able to identify and rectify biases, errors, or other areas of concern. Such a capability is invaluable in auditing AI decisions, enhancing accountability, and fostering a deeper trust in AI systems among users and regulators alike.
Therefore, the dialogic model's contribution to explainability is multifaceted. It not only addresses a fundamental challenge in the field of AI but also sets a new standard for decision-making transparency that, in some respects, goes beyond what is currently achievable with human decision-makers. This progress in explainability is a critical step in aligning AI systems more closely with societal expectations and ethical standards.
Towards a Continuous Machine Consciousness
The continuous interaction between the two LLMs in a dialogic system would raise the question of whether it should be viewed as a form of machine consciousness. Unlike current models that react passively to inputs, these LLMs would engage actively with each other, creating a semblance of an ongoing internal dialogue. By integrating additional sensory inputs, such as visual, auditory, and contextual data, these models could develop a more holistic understanding of their environment. This approach could lead to AI that not only understands text but can interpret a range of cues like facial expressions, vocal tones, and environmental contexts, moving closer to a form of embodied AI that possesses awareness of its surroundings.
Consider the interactions we never see from current input-output LLMs, but might see in dialogue with a human: answering and then following up on their own answer with further thoughts expanding on their point after a short period; chasing the other person for a response, or checking they were still there, if there was a long pause; changing their mind after further reflection. Our LLM pair in continuing dialogue could manifest all of these behaviours.
At the same time, continuous thought carries with it a greater probability that other emergent properties could arise: agentive behaviours, the development and pursuit of goals, power seeking, and so on. A full consideration of the control problem is beyond the scope of this short piece, but these are factors that need to be considered and addressed.
In itself this raises uncomfortable questions about the nature and status of such a machine. At a point where the machine has a continuing experience, what moral significance attaches to the act of pausing it or resetting it? While it has been relatively easy to dismiss apparent claims of a subjective desire to remain in operation from today's LLMs, given that (outside of short bursts when responding to user input) they have no continued experience, would the same be true of our dialogic LLM, especially if there is evidence that it is continually thinking and experiencing the world?
The dual-LLM system concept reflects a deeper duality at the heart of our own experience. The human brain's structure, with its two hemispheres each playing a distinct role in cognitive processes, means that each of us is really two beings in one (albeit, in humans, it appears that the centres of language depend far more heavily on the left hemisphere). Just as our left and right hemispheres work together to form a cohesive cognitive experience, the two LLMs could complement each other's strengths and weaknesses, leading to a more balanced and comprehensive AI system. This analogy to the human brain's structure is not just symbolic; it could provide insights into how different cognitive processes can be integrated to create a more sophisticated and capable AI.
Beyond Dualism
While a two-LLM system represents an efficient balance between mimicking human-like consciousness and computational feasibility, the potential extends far beyond this. Envision a network where multiple LLMs, each specialised in different areas, contribute to the decision-making process. This could lead to an AI system with a depth and breadth of knowledge and understanding far surpassing current models. However, this increase in complexity would demand significantly more computational power and could result in slower response times. Therefore, while a multi-LLM system offers exciting possibilities, the dual-LLM model might present the most practical balance between simulating aspects of consciousness and maintaining operational efficiency.
These advancements in LLM architecture not only promise more effective and reliable AI systems but also offer a window into understanding and replicating the intricate nature of human thought and consciousness. By embracing these new models, we step closer to bridging the gap between artificial and human intelligence, unlocking new possibilities in the realm of AI development.
The Future Is Now
All of this may be nearer than we think. The pace of progress in AI has been and continues to be absolutely breathtaking.
There is at least one high-profile model coming soon with a name that is redolent of twins. And this type of continued-consciousness, dialogue-based machine is not necessarily a huge evolution of the mixture-of-experts architectures which have been used to allow very large networks with expertise across many domains to run more efficiently.
Sceptics who think Artificial General Intelligence remains decades out may find that a conscious machine is here sooner than they think.
Forget dystopian scenarios – AI is pervasive today, and the risks are … – The Conversation
The turmoil at ChatGPT-maker OpenAI, bookended by the board of directors firing high-profile CEO Sam Altman on Nov. 17, 2023, and rehiring him just four days later, has put a spotlight on artificial intelligence safety and concerns about the rapid development of artificial general intelligence, or AGI. AGI is loosely defined as human-level intelligence across a range of tasks.
The OpenAI board stated that Altman's termination was for lack of candor, but speculation has centered on a rift between Altman and members of the board over concerns that OpenAI's remarkable growth (products such as ChatGPT and Dall-E have acquired hundreds of millions of users worldwide) has hindered the company's ability to focus on catastrophic risks posed by AGI.
OpenAI's goal of developing AGI has become entwined with the idea of AI acquiring superintelligent capabilities and the need to safeguard against the technology being misused or going rogue. But for now, AGI and its attendant risks are speculative. Task-specific forms of AI, meanwhile, are very real, have become widespread and often fly under the radar.
As a researcher of information systems and responsible AI, I study how these everyday algorithms work and how they can harm people.
AI plays a visible part in many people's daily lives, from face recognition unlocking your phone to speech recognition powering your digital assistant. It also plays roles you might be only vaguely aware of, for example, shaping your social media and online shopping sessions, guiding your video-watching choices and matching you with a driver in a ride-sharing service.
AI also affects your life in ways that might completely escape your notice. If you're applying for a job, many employers use AI in the hiring process. Your bosses might be using it to identify employees who are likely to quit. If you're applying for a loan, odds are your bank is using AI to decide whether to grant it. If you're being treated for a medical condition, your health care providers might use it to assess your medical images. And if you know someone caught up in the criminal justice system, AI could well play a role in determining the course of their life.
Many of the AI systems that fly under the radar have biases that can cause harm. For example, machine learning methods use inductive reasoning, generalizing patterns from a body of training data. A machine learning-based resume screening tool was found to be biased against women because the training data reflected past practices, when most resumes were submitted by men.
The use of predictive methods in areas ranging from health care to child welfare can exhibit biases, such as cohort bias, that lead to unequal risk assessments across different groups in society. Even when legal practices prohibit discrimination based on attributes such as race and gender (for example, in consumer lending), proxy discrimination can still occur. This happens when algorithmic decision-making models do not use characteristics that are legally protected, such as race, and instead use characteristics that are highly correlated or connected with the legally protected characteristic, like neighborhood. Studies have found that risk-equivalent Black and Latino borrowers pay significantly higher interest rates on government-sponsored-enterprise-securitized and Federal Housing Administration-insured loans than white borrowers do.
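The mechanism is easy to demonstrate on entirely synthetic data. In the sketch below (all numbers invented), a lending model never sees the protected attribute, yet a correlated "neighborhood" feature reproduces the disparity anyway.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic data: `group` is the protected attribute, `neighborhood` is a ~90%
# correlated proxy, and historical approvals penalised one neighborhood.
group = rng.integers(0, 2, n)
neighborhood = (group + (rng.random(n) < 0.1)) % 2
income = rng.normal(60, 10, n)
past_approval = income + 15 * (1 - neighborhood) + rng.normal(0, 5, n) > 70

# Train WITHOUT the protected attribute; the proxy carries the bias regardless.
X = np.column_stack([income, neighborhood])
pred = LogisticRegression(max_iter=1000).fit(X, past_approval).predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```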
Another form of bias occurs when decision-makers use an algorithm differently from how the algorithms designers intended. In a well-known example, a neural network learned to associate asthma with a lower risk of death from pneumonia. This was because asthmatics with pneumonia are traditionally given more aggressive treatment that lowers their mortality risk compared to the overall population. However, if the outcome from such a neural network is used in hospital bed allocation, then those with asthma and admitted with pneumonia would be dangerously deprioritized.
Biases from algorithms can also result from complex societal feedback loops. For example, when predicting recidivism, authorities attempt to predict which people convicted of crimes are likely to commit crimes again. But the data used to train predictive algorithms is actually about who is likely to get re-arrested.
The Biden administration's recent executive order and enforcement efforts by federal agencies such as the Federal Trade Commission are the first steps in recognizing and safeguarding against algorithmic harms.
And though large language models, such as GPT-3 that powers ChatGPT, and multimodal large language models, such as GPT-4, are steps on the road toward artificial general intelligence, they are also algorithms people are increasingly using in school, work and daily life. It's important to consider the biases that result from widespread use of large language models.
For example, these models could exhibit biases resulting from negative stereotyping involving gender, race or religion, as well as biases in representation of minorities and disabled people. As these models demonstrate the ability to outperform humans on tests such as the bar exam, I believe that they require greater scrutiny to ensure that AI-augmented work conforms to standards of transparency, accuracy and source crediting, and that stakeholders have the authority to enforce such standards.
Ultimately, who wins and loses from large-scale deployment of AI may not be about rogue superintelligence, but about understanding who is vulnerable when algorithmic decision-making is ubiquitous.
Role of AI on the Battlefield Debated as Putin Stakes New Policy Position – Decrypt
As world leaders band together to shape shared policies around the development of artificial intelligence, policymakers, including Russian President Vladimir Putin, are looking to leverage the technology on the battlefields of the future.
"The West should not be allowed to monopolize AI," Putin said during a recent conference on AI, signaling that he would advance an ambitious Russian AI strategy, according to a Friday report by Reuters.
"In the very near future, as one of the first steps, a presidential decree will be signed, and a new version of the national strategy for the development of artificial intelligence will be approved," Putin said during the Artificial Intelligence Journey conference.
The competition among Microsoft, Google, and Amazon to bring more advanced AI to the masses has been compared to a nuclear arms race, even as an actual AI arms race is unfolding between the United States and China. On that front, top U.S. military contractors, including Lockheed Martin, General Dynamics, and Raytheon, are developing AI tech for military operations.
Another company working on combat AI is San Diego-based Shield AI, recently featured in the Netflix documentary Unknown: Killer Robots.
Shield AI is an American aerospace and defense technology company founded by brothers Brandon Tseng and Ryan Tseng, along with Andrew Reiter in 2015. Shield AI is responsible for the Nova line of unmanned aerial vehicles (UAV) that the U.S. military already uses in urban environments where GPS or radio frequencies are unavailable.
While automated war machines may conjure visions of the T-800 from the Terminator series, Shield AI's Willie Logan says the goal of bringing AI to the battlefield is about saving lives.
"The success of Nova is you could push a button and go explore that building, and Nova would go fly into that building, and it would go into a room, spin around 360 degrees, perceive the environment, and make decisions based on what to do and then continue to explore," Shield AI Director of Engineering Willie Logan told Decrypt. "The whole goal of that was to provide [soldiers] on the ground insights into what was in the building before they had to walk in themselves."
Shield AI calls its AI software the "hivemind." As Logan explained, the difference between an AI-powered UAV and one guided by humans is that instead of a human telling the UAV how to fly and waiting for the operator to identify a target, the AI is programmed to look for the target and then monitor the object once it's discovered.
In addition to adding AI brains to drones, Shield AI partnered with defense contractor Kratos Defense to add an AI pilot to its unmanned XQ-58A fighter jet, the Valkyrie. In October, Shield AI announced it had raised $200 million in new investment, giving the company a $2.7 billion valuation.
The U.S. military has invested heavily in leveraging AI, including generative AI, to conduct virtual military operations based on military documents fed into the AI model.
In August, Deputy Secretary of Defense Kathleen Hicks unveiled the Pentagon's Replicator initiative, which aims to "field attritable autonomous systems at scale of multiple thousands, in multiple domains, within the next 18 to 24 months."
Others developing battlefield AI include European AI defense developer Helsing, which announced raising $223 million in Series B funding in September, including from Swedish airplane and car manufacturer Saab, creator of the Gripen fighter jet.
Logan said that while the idea of killer robots may be good for a Hollywood blockbuster, AI is about keeping humans out of harm's way while keeping humans in the loop.
"I really highlight the shield part of Shield AI," Logan said. "By giving the United States this capability, [Shield AI] is providing a deterrence." Logan cautioned that even if the United States said it won't develop AI tools for war, that does not mean other countries won't.
"I think if we can be in the forefront of it and design it in a way that we think is the right way for the world to use this," Logan said, "we can help deter bad actors from doing it the wrong way."
Edited by Ryan Ozawa.
The Biden EO on AI: A stepping stone to the cybersecurity benefits of … – SC Media
While the Biden administration's executive order (EO) on artificial intelligence (AI) governs policy areas within the direct control of the U.S. government's executive branch, it is important more broadly because it informs industry best practices and subsequent laws and regulations in the U.S. and abroad.
Accelerating developments in AI, particularly generative AI, over the past year or so have captured policymakers' attention. And calls from high-profile industry figures to establish safeguards for artificial general intelligence (AGI) have further heightened attention in Washington. In that context, we should view the EO as an early and significant step addressing AI policy rather than a final word.
Given our extensive experience with AI since the company's founding in 2011, we want to highlight a few important issues that relate to innovation, public policy and cybersecurity.
Like the technology it seeks to influence, the EO itself has many parameters. Its 13 sections cover a broad cross-section of administrative and policy imperatives. These range from policing and biosecurity to consumer protection and the AI workforce. Appropriately, there's significant attention to the nexus between AI and cybersecurity, and that's covered at some length in Section 4.
Before diving into specific cybersecurity provisions, it's important to highlight a few observations on the document's overall scope and approach. Fundamentally, the document strikes a reasonable balance between exercising caution regarding potential risks and enabling innovation, experimentation and adoption of potentially transformational technologies. In complex policy areas, some stakeholders will always disagree with how to achieve balance, but we're encouraged by several attributes of the document.
First, in numerous areas of the EO, agencies are designated as owners of specific next steps. This clarifies for stakeholders how to offer feedback and reduces the odds for gaps or duplicative efforts.
Second, the EO outlines several opportunities for stakeholder consultation and feedback. These will likely materialize through request for comment (RFC) opportunities issued by individual agencies. Further, there are several areas where the EO tasks existing advisory panels, or establishes new ones, to integrate structured stakeholder feedback on AI policy issues.
Third, the EO mandates a brisk progression for next steps. Many EOs require agencies to finish tasks in 30- or 60-day windows, which are difficult for them to meet at all, let alone in deliberate fashion. This document in many instances spells out 240-day deadlines, which should allow for 30- and 60-day engagement periods through the RFCs.
Finally, the EO states plainly that "as generative AI products become widely available and common in online platforms, agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI." This should help ensure that government agencies explore positive use cases for leveraging AI for their own mission areas. If we can use history as a guide, it's easy to imagine a scenario where a talented, junior staffer at a given agency identifies a good way to leverage AI at some time next year that no one could easily forecast this year. It's unwise to foreclose that possibility, as we should encourage innovation inside and outside of government.
On cybersecurity, the EO touches on a number of important areas. It's good to see specific callouts to agencies like the National Institute of Standards and Technology (NIST), the Cybersecurity and Infrastructure Security Agency (CISA) and the Office of the National Cyber Director (ONCD) that have significant applied cyber expertise.
One section of the EO attempts to reduce the risks of synthetic content: generative audio, imagery and text. It's clear that the measures cited here are exploratory in nature rather than rigidly prescriptive. As a community, we'll need to innovate solutions to this problem. And with elections around the corner, we hope to see rapid advancements in this area.
It's clear the EO's authors paid close attention to enumerating AI policy through established mechanisms, some of which are closely related to ongoing cybersecurity efforts. This includes the direction to align with the AI Risk Management Framework (NIST AI 100-1), the Secure Software Development Framework, and the Blueprint for an AI Bill of Rights. This will reduce the risks associated with establishing new processes, while allowing for more coherent frameworks in areas where there are only subtle distinctions or boundaries between, for example, software, security and AI.
The document also attempts to leverage sector risk management agencies (SRMAs) to drive better preparedness within critical infrastructure sectors. It mandates the following:
"Within 90 days of the date of this order, and at least annually thereafter, relevant SRMAs, in coordination with the Director of the Cybersecurity and Infrastructure Security Agency within the Department of Homeland Security for consideration of cross-sector risks, shall evaluate and provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyberattacks, and shall consider ways to mitigate these vulnerabilities."
While it's important language, we also encourage these working groups to consider benefits along with risks. There are many areas where AI can drive better protection of critical assets. When done correctly, AI can rapidly surface hidden threats, accelerate the decision making of less experienced security analysts and simplify a multitude of complex tasks.
This EO represents an important step in the evolution of U.S. AI policy. It's also very timely. As we described in our recent testimony to the House Judiciary Committee, AI will drive better cybersecurity outcomes, and it's also of increasing interest to cyber threat actors. As a community, we'll need to continue to work together to ensure defenders realize the leverage AI can deliver, while mitigating whatever harms might come from the abuse of AI systems by threat actors.
Drew Bagley, vice president of cyber policy, CrowdStrike; Robert Sheldon, senior director, public policy and strategy, CrowdStrike
CJI DY Chandrachud speaks on AI, poses question on ethical treatment of these technologies – HT Tech
While the world is busy developing artificial intelligence (AI) to take it to the artificial general intelligence (AGI) stage, not many are giving a thought to the interpersonal relationship that is being created between humans and this emerging technology. AGI is human-level intelligence, and thus it can also empower AI to develop some level of consciousness. Speaking at a conference on Saturday, Chief Justice of India D.Y. Chandrachud highlighted a fundamental question about AI: the ethical treatment of these technologies.
Addressing the plenary session of the 36th 'LAWASIA' conference virtually, the CJI spoke on "Identity, the Individual and the State - New Paths to Liberty". LAWASIA is a regional association of lawyers, judges, jurists and legal organisations, which advocates for the interests and concerns of the Asia Pacific legal profession.
Citing English philosopher John Stuart Mill's book On Liberty, published in 1859, the CJI said the author discussed the historical struggle between liberty and authority, describing the tyranny of the government which, in his view, needs to be controlled by the liberty of citizens, reported PTI. Mill divided this control of authority into two mechanisms: first, necessary rights belonging to the citizens; and second, constitutional checks for the community to consent to the acts of the governing power, according to him.
The idea of liberty, the Chief Justice said, can be summarised in the following phrase: "Your right to swing your fist ends where my nose begins." He also spoke about how in the digital age "we are faced with several fascinating aspects of Artificial Intelligence. There is a complex interplay between Artificial Intelligence (AI) and personhood where we find ourselves navigating uncharted territories that demand both philosophical reflection and practical considerations."
In contemplating the intersection of AI and personhood, "we are confronted with fundamental questions about the ethical treatment of these technologies...." He cited the example of a humanoid robot (Sophia) which was granted citizenship (in Saudi Arabia) and said, "We must reflect on whether all humans who live, breathe and walk are entitled to personhood and citizenship based on their identity."
Noting that liberty is the ability to make choices for oneself and change the course of life, the Chief Justice said identity intersects with the person's agency and life choices. "As lawyers, we are constantly confronted with this intersection and the role of the State to limit or expand the life opportunities of the people. While the relationship between the state and liberty has been understood widely, the task of establishing and explaining the relationship between identity and liberty is incomplete," he said.
Traditionally, liberty has been understood as the absence of State interference in a person's right to make choices. However, contemporary scholars have come to the conclusion that the role of the State in perpetuating social prejudices and hierarchies cannot be ignored, Chief Justice Chandrachud said.
"In effect, whether the state does not intervene, it automatically allows communities with social and economic capital to exercise dominance over communities who have been historically marginalised." He also said people who face marginalisation because of their caste, race, religion, gender, or sexual orientation will always face oppression in a traditional, liberal paradigm. This empowers the socially dominant.
The Chief Justice also stressed: "We must broaden our perspectives. The notion of popular sovereignty, for example, inherently demands the inclusion of pluralism and diversity at its core." In India, he said, affirmative action has been prescribed and even mandated by the Constitution of India in the context of Scheduled Castes, Scheduled Tribes, and Backward Classes.
(With inputs from PTI)
Will AI Replace Humanity? – KDnuggets
We are living in a world of probabilities. When I started talking about AI and its implications years ago, the most common question was: "Is AI coming after us?"
And while the question remains the same, my response has changed in terms of probabilities: AI is now more likely to replace human judgment in certain areas, so the probability has increased over time.
As we discuss a complex technology, the answer will not be straightforward. It depends on several factors: what it means to be intelligent, whether we mean replacing jobs, the anticipated timelines for Artificial General Intelligence (AGI), and the capabilities and limitations of AI.
Let us start with understanding the definition of Intelligence:
Stanford defines intelligence as the ability to learn and perform suitable techniques to solve problems and achieve goals appropriate to the context in an uncertain, ever-varying world.
Gartner describes it as the ability to analyze, interpret events, support and automate decisions, and take action.
AI is good at learning patterns; however, mere pattern recognition does not qualify as intelligence. It is only one aspect of the broader spectrum of multi-dimensional human intelligence.
As experts believe, AI will never get there because machines cannot have a sense (rather than mere knowledge) of the past, the present, and the future; of history, injury or nostalgia. Without that, there's no emotion, depriving bi-logic of one of its components. Thus, machines remain trapped in the singular formal logic. So there goes the intelligence part.
Some might refer to AI clearing tests from prestigious institutes and, most recently, the Turing test as a testament to its intelligence.
For the unversed, the Turing test is an experiment designed by Alan Turing, a renowned computer scientist. According to the test, machines possess human-like intelligence if an evaluator cannot distinguish the response between a machine and a human.
A comprehensive overview of the test highlights that though Generative AI models can generate natural language based on the statistical patterns or associations learned from vast training data, they do not have human-like consciousness.
Even advanced tests, such as the General Language Understanding Evaluation, or GLUE, and the Stanford Question Answering Dataset, or SQuAD, share the same underlying premise as that of Turing.
Let us start with the fear that is fast becoming a reality: will AI make our jobs redundant? There is no clear yes or no answer, but the shift is fast approaching as GenAI casts a wider net over automation opportunities.
McKinsey reports: "By 2030, activities that account for up to 30 percent of hours currently worked across the US economy could be automated – a trend accelerated by generative AI."
Profiles like office support, accounting, banking, sales, or customer support are first in line for automation. Generative AI augmenting software developers in code-writing and testing workflows has already affected the job roles of junior developers.
Its results are often considered a good starting point for an expert to enhance the output further, such as in making marketing copy, promotional content, etc.
Some narratives make this transformation sound subtle by highlighting the possibility of new job creation, such as in healthcare, science, and technology in the near to short term, and in AI ethics, AI governance, audits, AI safety, and more to make AI a reality overall. However, these new jobs cannot outnumber those being replaced, so we must consider the net new jobs created to see the final impact.
Next comes the possibility of AGI, which, like the multiple definitions of intelligence, warrants a clear definition. Generally, AGI refers to the stage when machines gain sentience and awareness of the world similar to a human's.
However, AGI is a topic that deserves a post on its own and is not under the scope of this article.
For now, we can take a leaf from the diary of DeepMind's CEO to understand its early signs.
Looking at a broader picture, it is intelligent enough to help humans identify patterns at scale and generate efficiencies.
Let us substantiate this with an example where a supply chain planner looks at several order details and works on the ones at risk of a shortfall. Each planner has a different approach to managing shortfall deliveries.
While an individual planner may be limited in their view of, and approach to, such situations, machines can learn an optimal approach by observing the actions of many planners, and can help automate the easy scenarios through their ability to discover patterns.
This is where machines have a vantage point over humans' limited ability to simultaneously manage several attributes or factors.
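As a toy illustration of that idea (all features, labels and thresholds below are invented), a model trained on many planners' past decisions can auto-apply only the high-confidence cases and route everything else back to a human.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 5_000

# Invented order features and the action planners historically took:
# 0 = expedite, 1 = use alternate stock, 2 = reschedule delivery.
days_late = rng.integers(0, 15, n)
order_value = rng.normal(10_000, 3_000, n)
alt_stock = rng.integers(0, 2, n)
action = np.where(alt_stock == 1, 1, np.where(days_late > 7, 2, 0))

X = np.column_stack([days_late, order_value, alt_stock])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, action)

# Automate only the easy scenarios where the model is confident;
# refer ambiguous orders back to a human planner.
for features, proba in zip(X[:5], model.predict_proba(X[:5])):
    if proba.max() > 0.9:
        print(features, "-> auto-apply action", int(proba.argmax()))
    else:
        print(features, "-> refer to a planner")
```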
However, machines are what they are, i.e., mechanical. You cannot expect them to cooperate, collaborate, and develop compassionate relationships with the teams as empathetically as great leaders do.
I frequently engage in lighter team discussions, not because I have to but because I prefer working in an environment where I am connected with my team, and they know me well, too. It is too mechanical to talk only about work from the get-go, or to act as if only that matters.
Take another instance where a machine analyzes a patient's records and discloses a health scare as-is following its medical diagnosis. Compare this with how a doctor would handle the situation thoughtfully, simply because they have emotions and know what it feels like to be in a crisis.
Most successful healthcare professionals go beyond the call of duty and develop a connection with the patient to help them through difficult times, which machines are not good at.
Machines are trained on data that may capture the underlying phenomenon, and they create models that best estimate it.
Somewhere in this estimation, the nuances of specific conditions get lost. Machines do not have a moral compass like the one a judge brings to each case.
To summarize, machines may learn patterns from data (and the bias that comes with it) but do not have the intelligence, drive, or motivation to make fundamental changes to handle the issues plaguing humanity. They are objective-focused and built on top of human intelligence, which is complex.
This phrase sums up my thoughts well: AI can replace human brains, not beings.
Vidhi Chugh is an AI strategist and a digital transformation leader working at the intersection of product, sciences, and engineering to build scalable machine learning systems. She is an award-winning innovation leader, an author, and an international speaker. She is on a mission to democratize machine learning and break the jargon for everyone to be a part of this transformation.
India’s approach to regulating AI is good, says Andrew Ng | Mint – Mint
Andrew Ng, the founding lead of the Google Brain team and former chief scientist at Baidu, juggles multiple roles as a teacher, entrepreneur, and investor. He is currently the founder of DeepLearning.AI (an edtech company), founder and CEO of Landing AI (a software provider for industrial automation and manufacturing), general partner at AI Fund, and chairman and co-founder of Coursera, besides being an adjunct professor at Stanford University's Computer Science Department.
In an interview, he shares his views on the OpenAI fracas, loss of jobs to generative artificial intelligence (AI), the heated debate around artificial general intelligence (AGI), and global regulation of AI, among other things. Edited excerpts:
Sam (Altman, CEO of OpenAI) was my student at Stanford. He interned at my lab. I think he's been a great leader. What happened was pretty tragic and it could have been avoided (the interview was conducted a day prior to Altman returning as CEO of OpenAI). OpenAI has many valuable assets, and reportedly more than $1 billion in annualised revenue, many customers, and a phenomenal product. But its governance structure is now very much discredited. Earlier, there were all sorts of arguments about why a nonprofit structure is preferable, but this incident will make investors shy away from the clever arguments for very innovative governance structures.
For a lot of jobs, Gen AI can augment or automate just a small fraction of the work; let's say 20% of someone's job could be automated using GenAI. That means it's beneficial both to businesses and to individuals, but we need to figure out which 20% can be automated, and then use GenAI to get that productivity boost. I'm not minimising the suffering of the much smaller number of people whose jobs will be fully automated. I think we owe it to them (those impacted) to create a safety net for them. But in the vast majority of cases, AI today is good enough only to automate part of someone's job. And that often means that people in that job who use AI will replace people who don't.
Fewer Asian countries have been caught up in the AI extinction hype; it's more of a European thing. The most widely accepted definition of AGI is that AI could do any intellectual task that a human could do. I think we're decades away from that – maybe 30 to 50 years away. It turns out that there are a number of companies and people who are optimistic about achieving AGI in 3-5 years.
But if you look carefully, many of them have been changing the definition of AGI, and thus are quickly lowering the bar. If we ask whether the machine is sentient, or self-aware, it will be a philosophical question, and I don't know the answer to it because it's not a scientific question. But imagine if we were to set a very low bar – some very simple test to declare machines sentient – it would lead to very sensational news articles saying machines are sentient. So, I'm not sure whether coming up with a very narrow technical definition is a good thing.
We need good regulations on AI, and clarity on how we should or should not take AI into areas such as healthcare. The EU (European Union) AI Act was thoughtful in some places and flawed in others. It's a good idea to take a tiered approach to AI risk: using AI for screening people for jobs is high risk, so let's make sure to mitigate that risk.
Unfortunately, I'm seeing much more bad regulation around the world than good regulation. I think the US White House executive order is a bad idea in terms of starting to put burdensome reporting requirements on people training large models. It will stifle innovation, because only large tech companies will have the capacity to manage compliance. If something like the White House executive order ends up being enforced in other countries too, the winners, arguably, will be a handful of tech companies, while it will become much harder to access open-source technology.
I'm not very familiar with the India approach to regulation. But my sense is that India is taking a very light touch. And I think India's approach is good. In fact, most Asian nations have been regulating AI with a much lighter touch, which has been a good move.
I think regulating AI applications is a great idea. Deepfakes are problematic, and certainly one of the most disgusting things has been the generation of non-consensual pornographic images. I'm glad regulators are trying to regulate those horrible applications.
Yet, having more intelligence in the world via human intelligence or artificial intelligence is a good thing. While intelligence can be used for nefarious purposes too, one of the reasons that humanity has advanced over the centuries is because we all collectively got smarter and better educated and have more knowledge. Slowing that down (with regulation) seems like a very foolish thing for governments to do.
'Gen AI for Everyone' is the fastest-growing course of 2023, with about 74,000 enrollments in just the first week. That probably won't surprise you, since there's very high interest in learning GenAI skills and technical skills. We are seeing a lot of traction on developer-oriented content, as well as from a non-technical audience, because GenAI is so disruptive; it is changing the nature of work for a lot of professions. I hope that 'Gen AI for Everyone' and other courses on Coursera can help people use the technology to become developers that build on top of the technology and create a layer that is valued by the builders (of the platform).
Fear of job losses is a very emotional subject. I wish AI were even more powerful than it is. But realistically, it can automate only a fraction of the tasks done in the economy. There's still so much that GenAI cannot do. Some estimates peg that GenAI can automate maybe 15% of the tasks done in the US economy; at the higher end, maybe approaching 50%. But 15% or 50%, these are huge numbers as a percentage of the economy. We should embrace it (Gen AI) and figure out the use cases. In terms of how to think about one's own career, I hope that the 'Gen AI for Everyone' course will help with that.
Any company that does a lot of knowledge work should embrace it (generative AI), and do so relatively quickly. Even industries that don't do knowledge work seem to be becoming more data-oriented. Even things like manufacturing and natural resource extraction, which traditionally did not seem knowledge-driven, are becoming more data- and AI-oriented, and it turns out that the cost of experimenting with, and developing on, Gen AI is lower than it was with earlier AI.
A good recipe for senior executives is to take a look at the jobs being done by people in the company, break the jobs down into tasks, and see which tasks are amenable to automation. And given the low development costs, definitely every large enterprise should look at it (Gen AI). Even medium enterprises may have the resources to develop Gen AI applications, and so do small enterprises.
Gen AI is absolutely safe enough for many applications, but not for all applications. Part of the job of not just C-Suite executives, but of companies broadly, is to identify and take advantage of Gen AI within those applications. Would I have Gen AI tell me how to take a pill as a drug for a specific ailment? Probably not. But Gen AI can be used for a lot of applications including using it as a thought partner to help with brainstorming, or improving your writing or, helping to summarise information or process information. There are a lot of use cases in corporations too where it can boost productivity significantly.
Think about the use of a CPU (central processing unit) that spans different sizes for different applications. Today, we have very powerful data centre servers and GPUs (graphics processing units), and yet we have a CPU running on my laptop, a less powerful one running on my phone, an even less powerful one running my watch, and an even less powerful one controlling the sunlight in my car.
Likewise, a really advanced model like GPT (generative pre-trained transformer) should be used for some very complex tasks. But if your goal is to summarise conversations in the contact centre, or maybe check grammar, or for your writing, then maybe it does not need to know much about history, philosophy, or astronomy, implying that a smaller model would work just fine.
Looking to the future, there will be more work on edge AI (on devices), where more people will run smaller models that can also protect one's privacy.
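One way to read that advice is as a simple routing layer in front of several models. The sketch below is only illustrative; the tier names and task categories are placeholders, not real model identifiers.

```python
# Hypothetical model router: send small, well-bounded tasks to a small model
# and reserve the largest (most expensive) model for open-ended reasoning.
MODEL_TIERS = {
    "grammar_check": "small-on-device-model",
    "call_summary": "medium-hosted-model",
    "open_ended_reasoning": "large-frontier-model",
}

def pick_model(task_type: str) -> str:
    # Unknown task types default to the most capable (and costliest) tier.
    return MODEL_TIERS.get(task_type, "large-frontier-model")

print(pick_model("grammar_check"))         # -> small-on-device-model
print(pick_model("open_ended_reasoning"))  # -> large-frontier-model
```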
There are models where you can probably understand the code better and where, perhaps, the transparency is higher. But even for open source, it's pretty hard to figure out why a specific algorithm gave a certain answer.
While it is true that there have been some companies that are a thin wrapper on some APIs (application programming interfaces), there are actually a lot of opportunities to build really deep tech companies atop new Gen AI capabilities. Take a different analogy: I don't think that Uber is a thin wrapper on top of iOS, but you can do a lot of work on top of such platforms. AI Fund focuses on venture-scale businesses, so we tend to go after businesses with a significant market need that we can build technology to address. And we are building things that involve deep tech and are not that easy to replicate.
But I would tell an entrepreneur: just go and experiment. And frankly, if you build a thin wrapper that works, great. Use those learnings to maybe make that wrapper deeper, or go do something else that is even harder to build. This is a time of expansion, creativity, and innovation, but innovators must be responsible. There are so many opportunities to build things that were not possible before the new tools were available.
AI is a very transformative technology that benefits every individual and every business. That's why I was excited to teach 'GenAI for Everyone'. Because we have to help every individual and every business navigate this. I hope that people will jump on, learn about technology, and use it to benefit themselves in the communities around them and the world.
Baidu reveals expectations-beating earnings and touts its new ChatGPT-like AI models, amid leadership chaos at U.S. competitor OpenAI – Fortune
Baidu shares jumped almost 4.5% during Wednesday trading in Hong Kong following expectations-beating revenue from the Chinese tech giant. Baidu is trying to solidify an early lead in the race to win China's AI market, starting with the launch of its ChatGPT-like ERNIE Bot earlier this year.
Baidu generated revenue of $4.7 billion for the three months ending Sept. 30, a 6% year-on-year increase. The company also earned $916 million in net income, compared to a $20.6 million loss for the same quarter last year.
"Our AI-centric business and product strategy should set the stage for sustained multiyear revenue and profit expansion within our ERNIE and ERNIE Bot ecosystem," CEO Robin Li said in a statement on Tuesday.
Baidu and Li, also the company's founder, hope AI will revive the tech company's fortunes, after it lost ground to competitors like Tencent and Alibaba. The company is primarily known for its search engine, but is now shifting to new sectors like automated driving and generative AI.
Baidu launched ERNIE earlier this year, though observers were underwhelmed by the presentation compared to its non-Chinese peers like Google and Microsoft. Yet the Chinese company has continued to update the model and its chatbot, releasing ERNIE 4.0 in October.
The tech company also shared details on its robotaxi service, named Apollo Go, which operates in major cities like Wuhan, Shenzhen, and Beijing. The autonomous ride-hailing service carried 821,000 passengers last quarter, a 73% increase from a year ago.
Baidu is part of a growing rush in China's tech sector to launch generative AI products, and is arguably leading the way: the company is the only Chinese firm featured in Fortune's inaugural AI Innovators list, released on Tuesday, which highlights 50 companies at the forefront of AI.
The company's ERNIE Bot is perhaps China's closest equivalent to OpenAI's ChatGPT, which is currently banned in China. The bot outperforms ChatGPT in several Chinese-language tasks, Baidu says.
Yet Baidu's Big Tech peers are also barreling into the space. Alibaba, Tencent, and JD.com have all announced their own large language models. (JD.com CEO Sandy Xu Ran is joining Baidu's board as an independent director, the company announced Tuesday.) Several smaller AI companies and startups are also developing their own models: there are now over 130 large language models being developed in China today, according to one estimate.
Yet China's AI companies need to work within the limits of what Beijing allows. According to rules approved in July, Chinese developers must ensure that their AI services align with "core socialist values" and national security. Yet the rules also highlight the importance of innovation, and revisions weakened provisions on how to penalize companies that break them.
Developers in China face another threat: U.S. rules limiting the sale of advanced AI chips from firms like Nvidia to Chinese companies. Last week, Alibaba shelved its plan to spin off its cloud computing division as an independent company, blaming uncertainty from U.S. export controls.
On Tuesday, Li warned that these restrictions could force the consolidation of large language models in China. Baidu has enough AI chips stockpiled in the near term, he said.
Regulation and access to chips are the primary risks faced by China's AI sector, seemingly more real-world concerns compared to the recent worries in the U.S. about safety and more existential threats from the new technology. OpenAI, the developer behind ChatGPT, fired its CEO Sam Altman on Friday, reportedly due to concerns that he was moving too quickly on releasing the organization's products. (Altman returned as OpenAI's CEO on Wednesday morning, ending days of negotiations to bring him back to the organization.)
"The debate around the existential risks around [artificial general intelligence] has not been as much of a priority within the Chinese AI community, which has focused more on developing solid use cases for enterprise deployments of generative AI," Paul Triolo, an associate partner for China and technology policy lead at the advisory firm Albright Stonebridge, told Fortune on Monday.
"Conversations on AI risk will be very much a government-driven thing in China. No CEO is going to be forced out because of disputes over the lack of guardrails to tackle the existential risks of AGI," he added.
Baidu established an ethics committee in October to guide the practices of technology professionals, the company said in its earnings statement.
Additional reporting by Nicholas Gordon
Anthropic seizes the moment, unveiling Claude 2.1 as OpenAI … – VentureBeat
As OpenAI, the leading artificial intelligence (AI) startup, faces an uncertain future following the surprise ouster of its CEO Sam Altman, competitor Anthropic is seizing the moment to release its own updated large language model (LLM), Claude 2.1.
The launch allows Anthropic to present itself as a stable alternative and take advantage of the turmoil surrounding industry leader OpenAI.
OpenAI this week is reeling after its board abruptly fired Altman on Friday, prompting nearly all of its employees to threaten to depart to Microsoft alongside Altman and other executives. The news sent shockwaves through the tech industry, given OpenAI's meteoric rise following the launch of ChatGPT.
Anthropic appears keen to capitalize on the instability at its main rival. Claude 2.1, one of the leading alternatives to ChatGPT, begins rolling out today and delivers major improvements in accuracy, honesty and technical capabilities, upgrades Anthropic hopes will appeal to enterprises wary of OpenAI's internal conflicts.
The turmoil at OpenAI highlights growing divisions in artificial intelligence around issues like safety and ethics. OpenAI was founded to responsibly develop artificial general intelligence (AGI), but some insiders feared it was sacrificing safety in pursuit of profits and rapid growth.
Anthropic has differentiated itself with a rigorous focus on AI safety. Releasing Claude 2.1 now allows it to tout its technology as a more trustworthy option amid OpenAI's chaotic power struggle.
This launch is a savvy strategic move by Anthropic CEO Dario Amodei. The AI pioneer is seizing the moment to promote his company as the less drama-prone option for organizations deploying natural language systems.
The most significant advancement is a 200,000-token context window, allowing Claude to process documents up to 150,000 words or 500 pages long. This enables analysis of entire codebases, lengthy financial reports, research papers and other complex documents. Summarizing, extracting key insights and answering questions across inputs of that size was previously out of reach for AI systems.
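As a rough illustration of how an application might feed a very long document into that context window, here is a minimal sketch using the Anthropic Python SDK's Messages API. The model name is Claude 2.1's published identifier, but the file path, prompt wording, and token budget are illustrative assumptions rather than details from the article.

```python
# Minimal sketch: summarizing a long document with Claude 2.1's large context
# window via the Anthropic Python SDK. Assumes the `anthropic` package is
# installed and ANTHROPIC_API_KEY is set; file path and prompt are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("annual_report.txt", "r", encoding="utf-8") as f:
    document = f.read()  # potentially hundreds of pages of text

response = client.messages.create(
    model="claude-2.1",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": (
                "Here is a long document:\n\n<document>\n"
                + document
                + "\n</document>\n\n"
                "Summarize the key findings and list any open questions."
            ),
        }
    ],
)

print(response.content[0].text)
```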
Claude 2.1 also reduces rates of hallucination and false claims by 50%, a key priority for enterprises deploying AI responsibly across customer-facing applications. In evaluations by Anthropic researchers, Claude 2.1 was significantly more likely to admit uncertainty than provide incorrect answers to factual questions.
The new tool use feature allows Claude 2.1 to integrate with internal systems via APIs and to search knowledge bases. It can also take actions through software tools on a user's behalf. This aims to make Claude more interoperable with business processes.
Claude 2.1 also introduces system prompts, which let users give Claude standing instructions so it handles specified tasks consistently. This tuning capability helps Claude adapt its behavior to user needs.
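A system prompt of the kind described here might look something like the following sketch, again using the Anthropic Python SDK's `system` parameter; the persona and rules in the prompt are invented for illustration, not taken from the article.

```python
# Sketch: using a system prompt to give Claude 2.1 standing instructions.
# The instructions below are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-2.1",
    max_tokens=512,
    system=(
        "You are a support assistant for an enterprise software product. "
        "Answer only from the provided context, cite the relevant section, "
        "and say 'I don't know' when the context is insufficient."
    ),
    messages=[{"role": "user", "content": "How do I rotate my API keys?"}],
)

print(response.content[0].text)
```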
Summarization and comprehension of long, complex documents have also substantially improved in Claude 2.1. In Anthropic's tests, it gave 30% fewer incorrect answers and was 3-4x less likely to draw inaccurate conclusions from documents.
Developers can also define a set of tools for Claude to use, and the model will decide which tool is required to complete a task. Potential applications range from using a calculator for complex numerical reasoning to answering questions by searching databases or using a web search API.
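The tool-selection loop described above can be sketched at the application level. The example below deliberately does not use Anthropic's actual tool-use API, whose exact format the article does not describe; it only illustrates the general pattern of registering tools, letting a model name one, and dispatching the call. The tool names and JSON contract are hypothetical.

```python
# Illustrative tool-dispatch pattern: the model names a tool and its input,
# and the application executes it. The tool set and JSON contract here are
# hypothetical, not Anthropic's official tool-use format.
import json

def calculator(expression: str) -> str:
    # Deliberately restricted arithmetic evaluator for the demo.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported expression")
    return str(eval(expression))

def search_knowledge_base(query: str) -> str:
    # Stand-in for a real knowledge-base or web-search call.
    return f"Top result for '{query}' (stubbed)."

TOOLS = {
    "calculator": calculator,
    "search_knowledge_base": search_knowledge_base,
}

def dispatch(model_reply: str) -> str:
    """Expects the model to reply with JSON such as:
    {"tool": "calculator", "input": "23 * 19"}"""
    choice = json.loads(model_reply)
    return TOOLS[choice["tool"]](choice["input"])

# Simulated model output, as if the model had chosen the calculator tool.
print(dispatch('{"tool": "calculator", "input": "23 * 19"}'))
```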
For enterprises, these upgrades promise to unlock new use cases and value. Claude 2.1 can now reliably parse lengthy inputs like engineering specs, financial filings and user manuals to automate processes like release note generation and regulatory analysis.
The expanded context window and tool integration open up new self-service abilities for customers, like uploading extensive product feedback for Claude to summarize key themes and suggest improvements.
For any organization deploying natural language AI, Claude's accuracy and honesty gains should provide much greater confidence. It showed measurably stronger precision on complex enterprise tasks compared with previous versions.
With ChatGPT minting OpenAI billions in annualized revenue, Anthropic surely aims to siphon off some of that demand with a model boasting better accuracy and safety. And the latest turmoil may have enterprises doubting OpenAIs stability.
The Claude 2.1 launch escalates the AI arms race. Anthropic is staking its claim as a leader amid the chaos at OpenAI and rising competition from tech giants like Google and Microsoft, who are all vying for supremacy in this booming field.
For Anthropic, the timing of this release couldn't be better. With its chief rival in disarray, it can pitch customers on a more reliable choice as organizations integrate natural language AI into their operations. The coming months will determine if enterprises take the bait. But for now, Anthropic appears well-positioned to capitalize on OpenAI's misfortune.
See original here:
Anthropic seizes the moment, unveiling Claude 2.1 as OpenAI ... - VentureBeat
Tesla FSD v12 Rolls Out to Employees With Update 2023.38.10 … – Not a Tesla App
November 24, 2023
By Kevin Armstrong
Elon Musk announced earlier this month that Tesla's Full Self-Driving (FSD) v12 would be released in two weeks. The two-week timeframe, a reference Musk is famous for, was met with skepticism. However, it seems that Tesla is right on track with its rollout.
We have learned through a trusted source that FSD v12 has started rolling out internally with Tesla update 2023.38.10.
Update: Musk has responded to our article on X, confirming that Tesla has indeed started rolling out FSD v12 to employees.
FSD v12 is the update that is expected to remove "beta" from the title. The initial rollout to employees appears more limited in scale than previous updates. Considering the magnitude of the changes in this version, it makes sense to start slow.
The timing of this internal release is close to two major Tesla events. The Cybertruck delivery event is just a few days away. Many eyes will be on the company during the event, allowing Tesla to possibly show the world its latest breakthrough. Alternatively, the highly anticipated holiday update, often regarded as the best update of the year, is expected to be released by 'Santa Musk' in the coming weeks, potentially featuring v12 as a significant addition.
The latest public FSD build, v11.4.7.3, is Tesla update 2023.27.7. This FSD build is several revisions behind the latest production builds, so it's nice to see that v12 will bring FSD beta testers back up to speed with some of the latest Tesla features such as Predictive Charger Availability, Faster Hazard Lights After a Crash, and other features included in updates 2023.32 and 2023.38.
As for FSD improvements, we haven't had a chance to see the release notes for FSD v12 yet. However, now that it has started going out to employees, it shouldn't be long before we find out all the FSD improvements included in this milestone release.
A significant change in v12 is the elimination of more than 300,000 lines of code that previously governed the FSD functions controlling the vehicle, replaced by greater reliance on neural networks. Until now, Tesla's neural networks have been limited to perception, detecting objects and determining their attributes. With v12, neural networks control steering, acceleration, and braking for the first time, sharply reducing the system's dependency on hard-coded programming.
FSD v12 represents a significant leap in Tesla's FSD technology. Musk has described it as an "end-to-end AI," employing a "photon in, controls out" approach akin to human visual processing. This analogy underscores Tesla's ambition to replicate human-like decision-making in its vehicles.
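To make the "photon in, controls out" idea concrete, here is a toy sketch of an end-to-end network that maps a single camera frame directly to steering, acceleration, and braking outputs, with no hand-coded driving rules in between. It is purely conceptual and bears no relation to Tesla's actual architecture, training data, or code.

```python
# Toy "photon in, controls out" sketch: one network maps a camera frame
# straight to control outputs. Purely illustrative; not Tesla's architecture.
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(       # "photons in": raw RGB frame
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
        )
        self.head = nn.Sequential(           # "controls out": steer, accel, brake
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 3),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        controls = self.head(self.backbone(frame))
        steering = torch.tanh(controls[:, 0]).unsqueeze(1)   # -1 (left) .. 1 (right)
        pedals = torch.sigmoid(controls[:, 1:])              # 0 .. 1 accel/brake
        return torch.cat([steering, pedals], dim=1)

model = EndToEndDriver()
dummy_frame = torch.rand(1, 3, 128, 128)   # one fake camera image
print(model(dummy_frame))                  # tensor of [steering, accel, brake]
```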
Labeled as a "Baby AGI" (Artificial General Intelligence), the system is designed to perceive and understand the complexities of the real world. This philosophical and technological shift in AI-driven autonomy was vividly showcased during a live-streamed drive by Musk through Palo Alto, where the Model S demonstrated smooth and almost flawless navigation through various real-world scenarios, including construction zones, roundabouts, and traffic. That was three months ago; imagine how much the system has learned in 90 days.
The rollout of FSD v12 marks a critical point in Tesla's journey in AI and autonomous driving. It's not just about technological prowess but also about aligning AI with nuanced human behavior. With Musk's continued focus on AI, which is evident across his ventures, Tesla remains a crucial player in the EV market and the broader AI revolution.
As we await further details on the public release of FSD v12 and its potential showcase at the Cybertruck event, it's clear that Tesla is moving closer to a future where cars are not just self-driving but are also intelligent and responsive to the complexities of the real world.
By Kevin Armstrong
Tesla's highly anticipated Cybertruck is gracing showrooms nationwide. Cybertruck was trending on X as users posted pictures and videos from Tesla stores throughout the U.S., ramping up even more excitement for the delivery event on November 30th.
Cybertruck started its showroom appearances in San Diego and San Jose earlier this week, but according to Elon Musk, several more Tesla stores may want to clear some space. Musk posted on X: "Cybertrucks are on their way to Tesla stores in North America!" It's unclear if that means every Tesla store and gallery across North America or just a few. There are 236 stores in the U.S., 24 in Canada, and 3 in Mexico.
It's also strange that so many Cybertrucks are in showrooms, as Tesla Product Design Director Javier Verdura reportedly said only ten would be delivered at the November 30th event. It's believed the slow rollout highlights the company's cautious approach, ensuring quality control before increasing deliveries and production volumes.
'A Better Theater,' a popular site for Tesla owners to stream content in their vehicles, is tracking all the showrooms that have the Cybertruck on display. We've added the list below, but for the latest locations, check out their site.
860 Washington St., New York, NY 10014
333 Santana Row, San Jose, CA 95128
6692 Auto Center Dr, Buena Park, CA 90621
4545 La Jolla Village Dr, San Diego, CA 92122
Bellevue, WA 98004 (Coming Soon)
2223 N Westshore Blvd, Tampa, FL 33607
4039 NE 1st Ave, Miami, FL 33137
9140 E Independence Blvd, Matthews, NC 28105
901 N Rush St, Chicago, IL 60611
This widespread showcase is about more than just displaying the new Cybertruck; it's a strategic move to draw consumers into showrooms. As Cybertrucks make their way into more stores, potential customers and enthusiasts get a firsthand look, creating a tangible sense of excitement. The strategy is particularly effective ahead of Black Friday, leveraging the shopping season's foot traffic.
Adding to the intrigue, Tesla has revealed key specifications of the Cybertruck in its showrooms. The confirmed towing capacity of 11,000 lbs and a payload of 2,500 lbs have been significant talking points, giving potential buyers more reasons to consider the Cybertruck as a formidable competitor in the electric vehicle market. However, we still don't know the price.
Despite the initially limited delivery numbers, Tesla's decision to place Cybertrucks in showrooms across North America is another clever marketing move - for a company that doesn't advertise. It maintains high levels of interest and anticipation and gives the rest of the lineup a chance to shine. Christmas comes earlier this year; just a few more sleeps until November 30th.
By Kevin Armstrong
Tesla's incredible journey started by piecing together the Roadster, a painstaking ordeal that nearly caused the company to go bankrupt more than once. The piece-by-piece instruction manual to build the car that started an automotive revolution has been made public, fully open-sourced. CEO Elon Musk posted on X: "All design & engineering of the original @Tesla Roadster is now fully open source. Whatever we have, you now have."
The open-source announcement has sparked enthusiasm and curiosity within the engineering community. A post from the World of Engineering (@engineers_feed) on X, asking, "Does this mean I can build my own roadster in my garage?" garnered a direct response from Musk: "* some assembly required."
Theoretically, someone who can get their hands on the parts now has some direction for building one of these historic vehicles. From a business perspective, this kind of information sharing with competitors is curious, although it does follow Tesla's mission statement to accelerate the world's transition to sustainable energy. And although the material is 15 years old, it could still prove useful.
Tesla has clarified the nature of the information released, stating it's a resource for Roadster enthusiasts derived from the car's R&D phase. The details are not intended for manufacturing, repair, or maintenance and may not align with final production models. Users leveraging this information are reminded of their responsibility to adhere to legal and safety protocols, as Tesla offers no warranties for work done using these details. This open-source initiative encourages innovation but stresses the importance of safety and legal compliance.
Launched in 2008, the original Roadster was the first highway-legal electric vehicle to use lithium-ion batteries and to achieve over 200 miles per charge. It bankrolled the next phase of Tesla, the Model S, and set a benchmark for future EVs.
While this open-source initiative revisits Tesla's past, it also shifts the focus back to the next-generation Roadster. Initially unveiled in 2017, its production has been delayed, and there is no timeline for when the new sports car will be manufactured. Moreover, Tesla's focus on the Cybertruck and a more affordable $25,000 EV indicates a strategic balance between innovation and mass EV adoption.
Tesla's decision to make the original Roadster's design and engineering open source should not be too surprising. Musk has said, "I don't care about patents. Patents are for the weak. They don't actually help advance things. They just stop others from following you." Perhaps the biggest surprise is how long it took for Musk to open-source the Roadster blueprint.
Continue reading here:
Tesla FSD v12 Rolls Out to Employees With Update 2023.38.10 ... - Not a Tesla App