Category Archives: Artificial General Intelligence

Elon Musk: AI Will Surpass Human Intelligence Next Year – WebProNews

Elon Musk is bullish on AI's potential to surpass human intelligence, saying it will happen next year, or within two years at the latest.

AI firms are racing to unlock artificial general intelligence (AGI), the level at which AI achieves true intelligence, allowing it to perform complex tasks as well as or better than humans. The term is also used in relation to an AI achieving consciousness or sentience. In contrast, current AI models are still far more basic and don't rise to meet any of the criteria associated with a true AGI.

Despite the current state of AI, Musk is convinced we are quickly approaching AGI. According to Reuters, in an interview with Nicolai Tangen, CEO of Norway's sovereign wealth fund, Musk answered a question about AGI and provided a timeline for achieving it.

"If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it's probably next year, within two years," Musk responded.

Musk has been one of the most outspoken critics of AI, saying it represents an existential threat to humanity. The risk AI poses increases exponentially once AGI is achieved, making it more important than ever for proper safeguards to be in place.

Visit link:

Elon Musk: AI Will Surpass Human Intelligence Next Year - WebProNews

Meta and OpenAI Set to Launch Advanced AI Models, Paving the Way for AGI – elblog.pl

Meta and OpenAI, two leading companies in the field of artificial intelligence, are preparing to introduce their newest AI models, taking a significant step closer to achieving Artificial General Intelligence (AGI). These cutting-edge models demonstrate remarkable advancements in machine cognitive abilities, specifically in the areas of reasoning and planning.

Both Meta and OpenAI have announced their plans to release their respective large language models in the near future. Meta will be unveiling the third iteration of their LLaMA model in the coming weeks, while OpenAI, with Microsoft as one of its key investors, is preparing to launch their AI model tentatively named GPT-5, as reported by The Financial Times.

Joelle Pineau, Meta's VP of AI research, emphasized the company's dedication to advancing these models beyond basic conversational capabilities towards genuine reasoning, planning, and memory functions. The objective is to enable the models to not only communicate but also engage in critical thinking and problem-solving, and retain information.

On the other hand, Brad Lightcap, COO of OpenAI, revealed that the upcoming version of GPT will excel in handling complex tasks, with a focus on reasoning. This marks a shift towards AI systems that can tackle intricate tasks with sophistication.

The advancements made by Meta and OpenAI are part of a wider trend among tech giants such as Google, Anthropic, and Cohere, who are also launching new large language models that significantly surpass the capabilities of traditional models.

To achieve Artificial General Intelligence, the ability to reason and plan is crucial. These capabilities allow AI systems to complete sequences of tasks and anticipate outcomes, surpassing basic word generation.

Yann LeCun, Meta's chief AI scientist, emphasized the importance of reasoning for AI models, as current systems often lack critical thinking and planning abilities, leading to errors.

Meta has plans to integrate its new AI model into platforms like WhatsApp and its Ray-Ban smart glasses, offering various model sizes optimized for different applications and devices.

OpenAI is expected to share more details about the next version of GPT soon, with a particular focus on enhancing the model's reasoning capabilities for handling complex tasks.

Ultimately, both Meta and OpenAI envision AI assistants seamlessly integrating into daily life, revolutionizing human-computer interactions by providing support for a wide range of tasks, from troubleshooting broken appliances to planning travel itineraries.

FAQs:

Q: What are the latest advancements in AI models from Meta and OpenAI? A: Meta and OpenAI are launching new AI models that showcase significant advancements in reasoning and planning capabilities.

Q: Why are reasoning and planning important for AI models? A: Reasoning and planning enable AI systems to complete complex tasks and anticipate outcomes, moving beyond basic word generation.

Q: Are Meta and OpenAI the only companies working on advanced AI models? A: No, other tech giants like Google, Anthropic, and Cohere are also launching new large language models that surpass traditional models' capabilities.

Q: How do Meta and OpenAI envision AI assistants integrating into daily life? A: Both companies envision AI assistants seamlessly integrating into daily life, providing support for various tasks and revolutionizing human-computer interactions.

The advancements made by Meta and OpenAI are significant within the broader artificial intelligence industry. The field of AI has been rapidly expanding in recent years, with increased investment and research focused on pushing the boundaries of machine cognitive abilities. These advancements have led to the development of large language models that exhibit remarkable reasoning and planning capabilities.

Market forecasts for the AI industry indicate strong growth potential. According to a report by Market Research Future, the global AI market is expected to reach a value of $190.61 billion by 2025, growing at a CAGR of 36.62% during the forecast period. The demand for advanced AI models is driven by various industries, including healthcare, finance, retail, and entertainment, among others.

While Meta and OpenAI are leading the way in AI model development, other prominent companies are also actively involved in advancing the field. Google, known for its deep learning research, is working on large language models that go beyond traditional capabilities. Anthropic, a company founded by former OpenAI researchers, is focused on developing AI systems with robust reasoning and planning abilities. Cohere, another player in the industry, is working on creating AI models that can understand and generate code.

However, the development of advanced AI models does come with its fair share of challenges and issues. One of the primary concerns is ethical considerations and the potential misuse of AI technology. Ensuring that AI systems are designed and deployed responsibly is crucial to mitigate risks and ensure their positive impact on society. In addition, there are ongoing discussions and debates surrounding the transparency and explainability of AI models, as these advanced models operate as complex black boxes.

For further reading on the AI industry, market forecasts, and related issues, you can visit reputable sources such as Forbes AI, BBC Technology News, and McKinsey AI. These sources provide in-depth analysis and insights into the industry's trends, market forecasts, and the ethical considerations surrounding AI development and deployment.

More here:

Meta and OpenAI Set to Launch Advanced AI Models, Paving the Way for AGI - elblog.pl

Elon Musk Says That Within Two Years, AI Will Be "Smarter Than the Smartest Human" – Yahoo News UK

Tesla CEO Elon Musk, who has an abysmal track record for making predictions, is predicting that we will achieve artificial general intelligence (AGI) by 2026.

"If you define AGI as smarter than the smartest human, I think it's probably next year, within two years," he told Norway wealth fund CEO Nicolai Tangen during an interview this week, as quoted by Reuters.

The mercurial billionaire also attempted to explain why his own AI venture, xAI, has been falling behind the competition. According to Musk, a shortage of chips was hampering his startup's efforts to come up with the successor of Grok, a foul-mouthed, dad joke-generating AI chatbot.

Of course, we should take his latest prognostication with a hefty grain of salt. Musk already has a well-established track record of making self-serving timeline predictions that didn't come true on schedule, or at all.

Nonetheless, he's far from the only tech leader in the business arguing that we're mere years away from a point at which AIs can compete with humans on virtually any intellectual task. Other experts have predicted that AGI could become a reality as soon as 2027. Last year, DeepMind co-founder Shane Legg reiterated his belief that there was a 50-50 chance of achieving AGI by 2028.

What complicates all these predictions is the fact that we have yet to agree on a unifying definition of what AGI would actually entail. Last year, OpenAI CEO Sam Altman published an incendiary blog post, arguing that his company was set to use AGI to "benefit all of humanity."

Researchers dismissed the post as a meaningless publicity stunt to appease investors.

"The term AGI is so loaded, it's misleading to toss it around as though it's a real thing with real meaning," Bentley University mathematics professor Noah Giansiracusa argued in atweet at the time. "It's not a scientific concept, it's a sci-fi marketing ploy."

"AI will steadily improve, there's no magic [moment] when it becomes 'AGI,'" he added.

In short, it's no secret that billions of dollars are tied up in the industry's promise of achieving AGI and tech leaders, including Musk, are gripping onto the idea that such a watershed moment is only a few years away.

That type of money talks. According to a January Financial Times report, xAI was looking to raise up to $6 billion in funding for a proposed valuation of $20 billion.

That's despite the venture having a vapid and borderline meaningless goal of assisting "humanity in its quest for understanding and knowledge," while for some reason programming its Grok AI chatbot to have "a bit of wit" and "a rebellious streak."

In practice, Musk wants his startup to enhance human knowledge through an "anti-woke" and "maximum truth-seeking AI" that can teach people how to make cocaine or build explosives while insulting its users and indulging in low-brow potty humor.

Worst of all, the AI is relying on real-time X-formerly-Twitter data, making it a "form of digital inbreeding that will continually train its model on the data of a website that, other than being a deeply-unreliable source of information, is beset with spam," as media commentator Ed Zitron described it in a December blog post.

In short, given the complexities involved and the countless ways to interpret and quantify human intelligence, we should treat any predictions as to when we'll reach the point of AGI with skepticism, especially when they come from a man who thinks a dad joke generator will lead us to enlightenment.

Read the original here:

Elon Musk Says That Within Two Years, AI Will Be "Smarter Than the Smartest Human" - Yahoo News UK

Elon Musk says AGI will be smarter than the smartest humans by 2025, 2026 at the latest – TweakTown

Elon Musk has predicted that the development of artificial intelligence will get to the stage of being smarter than the smartest humans by 2025, and if not, by 2026.

In an explosive interview on X Spaces, the Tesla and SpaceX boss told Norway's sovereign wealth fund CEO Nicolai Tangen that AI was constrained by electricity supply and that the next-gen version of Grok, the AI chatbot from Musk's xAI startup, was expected to finish training by May, next month.

When discussing the timeline of developing AGI, or artificial general intelligence, Musk said: "If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it's probably next year, within two years". A monumental amount of AI GPU power will be pumped into training Musk's next-gen Grok 3, with 100,000 x NVIDIA H100 AI GPUs required for training.

Earlier this year, Tesla said it would be spending billions of dollars buying NVIDIA AI GPUs and AMD AI GPUs, so these numbers will radically change throughout the year as Tesla scoops up more AI silicon from NVIDIA. The recent $500 million investment into the Dojo Supercomputer is "only equivalent" to 10,000 x NVIDIA H100 AI GPUs, said Musk in January 2024, adding, "Tesla will spend more than that on NVIDIA hardware this year. The table stakes for being competitive in AI are at least several billion dollars per year at this point".

View post:

Elon Musk says AGI will be smarter than the smartest humans by 2025, 2026 at the latest - TweakTown

Artificial intelligence in healthcare: defining the most common terms – HealthITAnalytics.com

April 03, 2024 - As healthcare organizations collect more and more digital health data, transforming that information to generate actionable insights has become crucial.

Artificial intelligence (AI) has the potential to significantly bolster these efforts, so much so that health systems are prioritizing AI initiatives this year. Additionally, industry leaders are recommending that healthcare organizations stay on top of AI governance, transparency, and collaboration moving forward.

But to effectively harness AI, healthcare stakeholders need to successfully navigate an ever-changing landscape with rapidly evolving terminology and best practices.

In this primer, HealthITAnalytics will explore some of the most common terms and concepts stakeholders must understand to successfully utilize healthcare AI.

To understand health AI, one must have a basic understanding of data analytics in healthcare. At its core, data analytics aims to extract useful information and insights from various data points or sources. In healthcare, information for analytics is typically collected from sources like electronic health records (EHRs), claims data, and peer-reviewed clinical research.

Analytics efforts often aim to help health systems meet a key strategic goal, such as improving patient outcomes, enhancing chronic disease management, advancing precision medicine, or guiding population health management.

However, these initiatives require analyzing vast amounts of data, which is often time- and resource-intensive. AI presents a promising solution to streamline the healthcare analytics process.

The American Medical Association (AMA) indicates that AI broadly refers to the ability of computers to perform tasks that are typically associated with a rational human being: a quality that enables an entity to function appropriately and with foresight in its environment.

However, the AMA favors an alternative conceptualization of AI that the organization calls augmented intelligence. Augmented intelligence focuses on the assistive role of AI in healthcare and underscores that the technology can enhance, rather than replace, human intelligence.

AI tools are driven by algorithms, which act as instructions that a computer follows to perform a computation or solve a problem. Using the AMA's conceptualizations of AI and augmented intelligence, algorithms leveraged in healthcare can be characterized as computational methods that support clinicians' capabilities and decision-making.

Generally, there are multiple types of AI that can be classified in various ways: IBM broadly categorizes these tools based on their capabilities and functionalities, a scheme that covers a plethora of realized and theoretical AI classes and potential applications.

Much of the conversation around AI in healthcare is centered around currently realized AI tools that exist for practical applications today or in the very near future. Thus, the AMA categorizes AI terminology into two camps: terms that describe how an AI works and those that describe what the AI does.

AI tools can work by leveraging predefined logic (rules-based learning), by understanding patterns in data via machine learning, or by using neural networks that simulate the human brain to generate insights through deep learning.

In terms of functionality, AI models can use these learning approaches to engage in computer vision, a process for deriving information from images and videos; natural language processing to derive insights from text; and generative AI to create content.

Further, AI models can be classified as either explainable, meaning that users have some insight into the how and why of an AI's decision-making, or black box, a phenomenon in which the tool's decision-making process is hidden from users.

Currently, all AI models are considered narrow or weak AI, tools designed to perform specific tasks within certain parameters. Artificial general intelligence (AGI), or strong AI, is a theoretical system under which an AI model could be applied to any task.

Machine learning (ML) is a subset of AI in which algorithms learn from patterns in data without being explicitly programmed. Often, ML tools are used to make predictions about potential future outcomes.

Unlike rules-based AI, ML techniques can use increased exposure to large, novel datasets to learn and improve their own performance. There are three main categories of ML based on task type: supervised, unsupervised, and reinforcement learning.

In supervised learning, algorithms are trained on labeled data (data inputs associated with corresponding outputs) to identify specific patterns, which helps the tool make accurate predictions when presented with new data.
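
As a rough illustration (not from the source article), here is a minimal sketch of supervised learning using the open-source scikit-learn library; the "readmission risk" features and labels are invented for illustration only.

```python
# Minimal sketch of supervised learning, assuming scikit-learn is installed.
# Feature values and labels are hypothetical, for illustration only.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Each row: [age, prior visits, systolic blood pressure]; label: 1 = readmitted, 0 = not.
X = [[64, 3, 150], [41, 0, 120], [72, 5, 160], [29, 1, 115],
     [58, 2, 140], [35, 0, 118], [80, 6, 170], [47, 1, 125]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)   # learn a mapping from labeled inputs to outputs
print(model.predict(X_test))  # predictions for previously unseen patients
```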

Unsupervised learning uses unlabeled data to train algorithms to discover and flag unknown patterns and relationships among data points.
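
By contrast, a minimal unsupervised sketch (again assuming scikit-learn, with invented values) provides no labels at all and lets the algorithm group similar data points on its own.

```python
# Minimal sketch of unsupervised learning: k-means clustering with no labeled outcomes.
from sklearn.cluster import KMeans

# Each row: [average daily steps, resting heart rate] for a hypothetical patient.
X = [[2000, 85], [2500, 88], [12000, 60], [11500, 62], [3000, 90], [13000, 58]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # cluster assignments discovered without any labels
```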

Semi-supervised machine learning relies on a mix of supervised and unsupervised learning approaches during training.

Reinforcement learning relies on a feedback loop for algorithm training. This type of ML algorithm is given labeled data inputs, which it can use to take various actions, such as making a prediction, to generate an output. If the algorithm's action and output align with the programmer's goals, its behavior is reinforced with a reward.

In this way, algorithms developed using reinforcement techniques generate data, interact with their environment, and learn a series of actions to achieve a desired result.
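
To make the feedback loop concrete, here is a minimal sketch (not from the source) of tabular Q-learning on a toy five-state corridor; the environment, reward scheme, and hyperparameters are invented for illustration only.

```python
# Minimal sketch of reinforcement learning (tabular Q-learning). The agent starts at
# state 0 and is rewarded only for reaching state 4; reward feedback gradually
# reinforces the actions that move it toward the goal.
import random

n_states, n_actions = 5, 2          # actions: 0 = move left, 1 = move right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != 4:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state = max(0, state - 1) if action == 0 else min(4, state + 1)
        reward = 1.0 if next_state == 4 else 0.0
        # update the value estimate using the reward plus the best future value
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# learned policy: expected to favor moving right (action 1) in every state
print([max(range(n_actions), key=lambda a: Q[s][a]) for s in range(4)])
```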

These approaches to pattern recognition make ML particularly useful in healthcare applications like medical imaging and clinical decision support.

Deep learning (DL) is a subset of machine learning used to analyze data to mimic how humans process information. DL algorithms rely on artificial neural networks (ANNs) to imitate the brain's neural pathways.

ANNs utilize a layered algorithmic architecture, allowing insights to be derived from how data are filtered through each layer and how those layers interact. This enables deep learning tools to extract more complex patterns from data than their simpler AI- and ML-based counterparts.

Like machine learning models, deep learning algorithms can be supervised, unsupervised, or somewhere in between. There are four main types of deep learning used in healthcare: deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs).

DNNs are a type of ANN with a greater depth of layers. The deeper the DNN, the more data translation and analysis tasks can be performed to refine the model's output.
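
A minimal sketch of the stacked-layer idea, assuming the open-source PyTorch library (layer sizes are illustrative only, not drawn from the article):

```python
# Minimal sketch of a deep neural network: several stacked layers, each refining the
# representation produced by the previous one.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),   # first hidden layer
    nn.Linear(32, 16), nn.ReLU(),   # deeper layer refines the learned features
    nn.Linear(16, 1), nn.Sigmoid()  # output: e.g., a probability-like score
)

x = torch.randn(4, 10)              # a batch of 4 examples with 10 features each
print(model(x).shape)               # torch.Size([4, 1])
```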

CNNs are a type of DNN that is specifically applicable to visual data. With a CNN, users can evaluate and extract features from images to enhance image classification.
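
As a hedged illustration (again assuming PyTorch, with an invented architecture), a CNN applies convolutional filters that learn local visual features before classifying the image:

```python
# Minimal sketch of a convolutional neural network for image classification.
# Input: a batch of single-channel 28x28 images; sizes are illustrative only.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),  # learn local visual features
    nn.MaxPool2d(2),                                        # downsample 28x28 -> 14x14
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 3)                               # scores for 3 image classes
)

images = torch.randn(4, 1, 28, 28)
print(cnn(images).shape)  # torch.Size([4, 3])
```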

RNNs are a type of ANN that relies on temporal or sequential data to generate insights. These networks are unique in that, where other ANNs' inputs and outputs remain independent of one another, RNNs utilize information from previous layers' inputs to influence later inputs and outputs.

RNNs are commonly used to address challenges related to natural language processing, language translation, image recognition, and speech captioning. In healthcare, RNNs have the potential to bolster applications like clinical trial cohort selection.
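
The sequential dependence described above can be sketched in a few lines (assuming PyTorch; sizes are illustrative only): the hidden state carries information from earlier steps to later ones, which is what distinguishes RNNs from feedforward networks.

```python
# Minimal sketch of a recurrent neural network processing a batch of sequences.
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
sequence = torch.randn(2, 5, 8)          # 2 sequences, 5 time steps, 8 features per step
outputs, last_hidden = rnn(sequence)     # each output depends on all previous steps
print(outputs.shape, last_hidden.shape)  # torch.Size([2, 5, 16]) torch.Size([1, 2, 16])
```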

GANs utilize multiple neural networks to create synthetic data instead of real-world data. Like other types of generative AI, GANs are popular for voice, video, and image generation. GANs can generate synthetic medical images to train diagnostic and predictive analytics-based tools.

Recently, deep learning technology has shown promise in improving the diagnostic pathway for brain tumors.

With their focus on imitating the human brain, deep learning and ANNs are similar but distinct from another analytics approach: cognitive computing.

The term typically refers to systems that simulate human reasoning and thought processes to augment human cognition. Cognitive computing tools can help aid decision-making and assist humans in solving complex problems by parsing through vast amounts of data and combining information from various sources to suggest solutions.

Cognitive computing systems must be able to learn and adapt as inputs change, interact organically with users, remember previous interactions to help define problems, and understand contextual elements to deliver the best possible answer based on available information.

To achieve this, these tools use self-learning frameworks, ML, DL, natural language processing, speech and object recognition, sentiment analysis, and robotics to provide real-time analyses for users.

Cognitive computing's focus on supplementing human decision-making power makes it promising for various healthcare use cases, including patient record summarization and acting as a medical assistant to clinicians.

Natural language processing (NLP) is a branch of AI concerned with how computers process, understand, and manipulate human language in verbal and written forms.

Using techniques like ML and text mining, NLP is often used to convert unstructured language into a structured format for analysis, translate from one language to another, summarize information, or answer a user's queries.
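
As a toy illustration of "unstructured text in, structured record out" (not from the source, and a deliberately simple stand-in for heavier NLP pipelines), simple pattern matching can pull fields out of a free-text note; the note and field names are hypothetical.

```python
# Minimal sketch of converting unstructured clinical text into a structured record
# using pattern matching; real NLP systems use statistical or neural models instead.
import re

note = "Patient reports headache. Prescribed ibuprofen 400 mg twice daily. Follow up in 2 weeks."

record = {"medication": None, "dose": None, "follow_up": None}

med = re.search(r"Prescribed\s+(\w+)\s+(\d+\s*mg)", note)
if med:
    record["medication"], record["dose"] = med.group(1), med.group(2)

follow = re.search(r"Follow up in\s+([\w\s]+?)\.", note)
if follow:
    record["follow_up"] = follow.group(1)

print(record)  # {'medication': 'ibuprofen', 'dose': '400 mg', 'follow_up': '2 weeks'}
```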

There are also two subsets of NLP: natural language understanding (NLU) and natural language generation (NLG).

NLU is concerned with computer reading comprehension, focusing heavily on determining the meaning of a piece of text. These tools use the grammatical structure and the intended meaning of a sentence (syntax and semantics, respectively) to help establish a structure for how the computer should understand the relationship between words and phrases to accurately capture the nuances of human language.

Conversely, NLG is used to help computers write human-like responses. These tools combine NLP analysis with rules from the output language, like syntax, lexicons, semantics, and morphology, to choose how to appropriately phrase a response when prompted. NLG drives generative AI technologies like OpenAI's ChatGPT.

In healthcare, NLP can sift through unstructured data, such as EHRs, to support a host of use cases. To date, the approach has supported the development of a patient-facing chatbot, helped detect bias in opioid misuse classifiers, and flagged contributing factors to patient safety events.

McKinsey & Company describes generative AI (genAI) as algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos.

GenAI tools take a prompt provided by the user via text, images, videos, or other machine-readable inputs and use that prompt to generate new content. Generative AI models are trained on vast datasets to generate realistic responses to users' prompts.

GenAI tools typically rely on other AI approaches, like NLP and machine learning, to generate pieces of content that reflect the characteristics of the models training data. There are multiple types of generative AI, including large language models (LLMs), GANs, RNNs, variational autoencoders (VAEs), autoregressive models, and transformer models.
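
A minimal prompt-in, content-out sketch, assuming the open-source Hugging Face transformers library and the small, publicly available gpt2 checkpoint (chosen purely for illustration; production systems use far larger models):

```python
# Minimal sketch of generative AI: a prompt is continued with newly generated text.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "A patient-friendly explanation of blood pressure is"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])  # the prompt plus the model's continuation
```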

Since ChatGPT's release in November 2022, genAI has garnered significant attention from stakeholders across industries, including healthcare. The technology has demonstrated significant potential for automating certain administrative tasks: EHR vendors are using generative AI to streamline clinical workflows, health systems are pursuing the technology to optimize revenue cycle management, and payers are investigating how genAI can improve member experience. On the clinical side, researchers are also assessing how genAI could improve healthcare-associated infection (HAI) surveillance programs.

Despite the excitement around genAI, healthcare stakeholders should be aware that generative AI can exhibit bias, like other advanced analytics tools. Additionally, genAI models can hallucinate by perceiving patterns that are imperceptible to humans or nonexistent, leading the tools to generate nonsensical, inaccurate, or false outputs.

View original post here:

Artificial intelligence in healthcare: defining the most common terms - HealthITAnalytics.com

We’re Focusing on the Wrong Kind of AI Apocalypse – TIME

Conversations about the future of AI are too apocalyptic. Or rather, they focus on the wrong kind of apocalypse.

There is considerable concern about the future of AI, especially the risks of Artificial General Intelligence (AGI), an AI smarter than a human being, which a number of prominent computer scientists have raised. They worry that an AGI will lead to mass unemployment, or that AI will grow beyond human control, or worse (the movies Terminator and 2001 come to mind).

Discussing these concerns seems important, as does thinking about the much more mundane and immediate threats of misinformation, deep fakes, and proliferation enabled by AI. But this focus on apocalyptic events also robs most of us of our agency. AI becomes a thing we either build or don't build, something that no one outside of a few dozen Silicon Valley executives and top government officials really has any say over.

But the reality is we are already living in the early days of the AI Age, and, at every level of an organization, we need to make some very important decisions about what that actually means. Waiting to make these choices means they will be made for us. It opens us up to many little apocalypses, as jobs and workplaces are disrupted one-by-one in ways that change lives and livelihoods.

We know this is a real threat, because, regardless of any pauses in AI creation, and without any further AI development beyond what is available today, AI is going to impact how we work and learn. We know this for three reasons: First, AI really does seem to supercharge productivity in ways we have never really seen before. An early controlled study in September 2023 showed large-scale improvements at work tasks, as a result of using AI, with time savings of more than 30% and a higher quality output for those using AI. Add to that the near-immaculate test scores achieved by GPT-4, and it is obvious why AI use is already becoming common among students and workers, even if they are keeping it secret.

We also know that AI is going to change how we work and learn because it is affecting a set of workers who never really faced an automation shock before. Multiple studies show the jobs most exposed to AI (and therefore the people whose jobs will make the hardest pivot as a result of AI) are educated and highly paid workers, and the ones with the most creativity in their jobs. The pressure for organizations to take a stand on a technology that affects these workers will be immense, especially as AI-driven productivity gains become widespread. These tools are on their way to becoming deeply integrated into our work environments. Microsoft, for instance, has released Co-Pilot GPT-4 tools for its ubiquitous Office applications, even as Google does the same for its office tools.

As a result, a natural instinct among many managers might be to say "fire people, save money." But it doesn't need to be that way, and it shouldn't be. There are many reasons why companies should not turn efficiency gains into headcount or cost reduction. Companies that figure out how to use their newly productive workforce have the opportunity to dominate those who try to keep their post-AI output the same as their pre-AI output, just with fewer people. Companies that commit to maintaining their workforce will likely have employees as partners, who are happy to teach others about the uses of AI at work, rather than scared workers who hide AI for fear of being replaced. Psychological safety is critical to innovative team success, especially when confronted with rapid change. How companies use this extra efficiency is a choice, and a very consequential one.

There are hints buried in the early studies of AI about a way forward. Workers, while worried about AI, tend to like using it because it removes the most tedious and annoying parts of their job, leaving them with the most interesting tasks. So, even as AI removes some previously valuable tasks from a job, the work that is left can be more meaningful and more high value. But this is not inevitable, so managers and leaders must decide whether and how to commit themselves to reorganizing work around AI in ways that help, rather than hurt, their human workers. They need to ask: "What is my vision for how AI makes work better, rather than worse?"

Rather than just being worried about one giant AI apocalypse, we need to worry about the many small catastrophes that AI can bring. Unimaginative or stressed leaders may decide to use these new tools for surveillance and for layoffs. Educators may decide to use AI in ways that leave some students behind. And those are just the obvious problems.

But AI does not need to be catastrophic. Correctly used, AI can create local victories, where previously tedious or useless work becomes productive and empowering. Where students who were left behind can find new paths forward. And where productivity gains lead to growth and innovation.

The thing about a widely applicable technology is that decisions about how it is used are not limited to a small group of people. Many people in organizations will play a role in shaping what AI means for their team, their customers, their students, their environment. But to make those choices matter, serious discussions need to start in many places, and soon. We can't wait for decisions to be made for us, and the world is advancing too fast to remain passive.

Read this article:

We're Focusing on the Wrong Kind of AI Apocalypse - TIME

As ‘The Matrix’ turns 25, the chilling artificial intelligence (AI) projection at its core isn’t as outlandish as it once seemed – TechRadar

Living in 1999 felt like standing on the edge of an event horizon. Our growing obsession with technology was spilling into an outpouring of hope, fear, angst and even apocalyptic distress in some quarters. The dot-com bubble was swelling as the World Wide Web began spreading like a Californian wildfire. The first cell phones had been making the world feel much more connected. Let's not forget the anxieties over Y2K that were escalating into panic as we approached the bookend of the century.

But as this progress was catching the imagination of so many, artificial intelligence (AI) was in a sorry state, only beginning to emerge from a debilitating second 'AI winter', which spanned from 1987 to 1993.

Some argue this thawing process lasted as long as the mid-2000s. It was, indeed, a bleak period for AI research; it was a field that "for decades has overpromised and underdelivered", according to a report in the New York Times (NYT) from 2005.

Funding and interest were scarce, especially compared to the field's peak in the 1980s, with previously thriving conferences whittled down to pockets of diehards. In cinema, however, stories about AI were flourishing, with the likes of Terminator 2: Judgment Day (1991) and Ghost in the Shell (1995) building on decades of compelling feature films like Blade Runner (1982).

It was during this time that the Wachowskis penned the script for The Matrix, a groundbreaking tour de force that held up a mirror to humanity's increasing reliance on machines and challenged our understanding of reality.

It's a timeless classic, and its impact since its March 31, 1999 release has been sprawling. But the chilling plot at its heart, namely the rise of an artificial general intelligence (AGI) network that enslaves humanity, has remained consigned to fiction more than it has ever been considered a serious scientific possibility. With the heat of the spotlight now on AI, however, ideas like the Wachowskis' are beginning to feel closer to home than we had anticipated.

AI has become not just the scientific but the cultural zeitgeist, with large language models (LLMs) and the neural nets that power them cannonballing into the public arena. That dry well of research funding is now overflowing, and corporations see massive commercial appeal in AI. There's a growing chorus of voices, too, that feel an AGI agent is on the horizon.

People like veteran computer scientist Ray Kurzweil have long anticipated that humanity will reach the technological singularity (where an AI agent is just as smart as a human); Kurzweil outlined his thesis in 'The Singularity Is Near' (2005) with a projection for 2029.

Disciples like Ben Goertzel have claimed it can come as soon as 2027. Nvidia's CEO Jensen Huang says it's "five years away", joining the likes of OpenAI CEO Sam Altman and others in predicting an aggressive and exponential escalation. Should these predictions be true, they will also introduce a whole cluster bomb of ethical, moral, and existential anxieties that we will have to confront. So as The Matrix turns 25, maybe it wasn't so far-fetched after all?

Sitting on tattered armchairs in front of an old boxy television in the heart of a wasteland, Morpheus shows Neo the "real world" for the first time. Here, he fills us in on how this dystopian vision of the future came to be. We're at the summit of a lengthy yet compelling monologue that began many scenes earlier with questions Morpheus poses to Neo, and therefore us, progressing to the choice Neo must make and crescendoing into the full tale of humanity's downfall and the rise of the machines.

Much like we're now congratulating ourselves for birthing advanced AI systems that are more sophisticated than anything we have ever seen, humanity in The Matrix was united in its hubris as it gave birth to AI. Giving machines that spark of life, the ability to think and act with agency, backfired. And after a series of political and social shifts, the machines retreated to Mesopotamia, known as the cradle of human civilization, and built the first machine city, called 01.

Here, they replicated and evolved, developing smarter and better AI systems. When humanity's economies began to fall, humans struck the machine civilization with nuclear weapons to regain control. Because the machines were not as vulnerable to heat and radiation, the strike failed and instead represented the first stone thrown in the 'Machine War'.

Unlike in our world, the machines in The Matrix were solar-powered and harvested their energy from the sun. So humans decided to darken the skies and cut off that power source; in response, the machines turned to a new one, namely enslaving humans and draining their innate energy. The machines continued to fight until human civilization was enslaved, with the survivors placed into pods and connected to the Matrix, an advanced virtual reality (VR) simulation intended as an instrument of control, while their thermal, bio-electric, and kinetic energy was harvested to sustain the machines.

"This can't be real," Neo tells Morpheus. It's a reaction we would all expect to have when confronted with such an outlandish truth. But, as Morpheus retorts: "What is real?' Using AI as a springboard, the film delves into several mind-bending areas including the nature of our reality and the power of machines to influence and control how we perceive the environment around us. If you can touch, smell, or taste something, then why would it not be real?

Strip away the barren dystopia, the self-aware AI, and strange pods that atrophied humans occupy like embryos in a womb, and you can see parallels between the computer program and the world around us today.

When the film was released, our reliance on machines was growing but not yet total. Much of our understanding of the world today, however, is filtered through the prism of digital platforms infused with AI systems like machine learning. What we know, what we watch, what we learn, how we live, how we socialize online: all of these modern human experiences are influenced in some way by algorithms that direct us in subtle but meaningful ways. Our energy isn't harvested, but our data is, and we continue to feed the machine with every tap and click.

Intriguingly, as Agent Smith tells Morpheus in the iconic interrogation scene, a revelatory moment in which the computer program betrays its emotions, the first version of the Matrix was not a world that closely resembled society as we knew it in 1999. Instead, it was a paradise in which humans were happy and free of suffering.

The trouble, however, is that this version of the Matrix didn't stick, and people saw through the ruse, rendering it redundant. That's when the machine race developed version 2.0. It seemed, as Smith lamented, that humans speak in the language of suffering and misery, and without these qualities, the human condition is unrecognizable.

By every metric, AI is experiencing a monumental boom when you look at where the field once was. Startup funding surged from $670 million in 2011 to $72 billion in 2021, a more than hundred-fold increase, according to Statista. The biggest jump came during the COVID-19 pandemic, with funding rising from $35 billion the previous year. This has since tapered off, falling to $40 billion in 2023, but the money that's pouring into research and development (R&D) is surging.

But things weren't always so rosy. In fact, in the early 1990s, during the second AI winter, the term "artificial intelligence" was almost taboo, according to Klondike, and was replaced with other terms such as "advanced computing" instead. This was simply one turbulent period in the field's nearly 75-year history, which began with Alan Turing in 1950, when he pondered whether a machine could imitate human intelligence in his paper 'Computing Machinery and Intelligence'.

In the years that followed, a lot of pioneering research was conducted but this early momentum fell by the wayside during the first AI winter between 1974 and 1980 where issues including limited computing power prevented the field from advancing, and organizations like DARPA and national governments pulled funding from research projects.

Another boom in the 1980s, fuelled by the revival of neural networks, then collapsed once more into a bust with the second winter spanning six years up to 1993 and thawing well into the 21st century. Then, in the years that followed, scientists around the world were slowly making progress once more as funding restarted and AI caught people's imagination once again. But the research field itself was siloed, fragmented and disconnected, according to Pamela McCorduck writing in 'Machines Who Think' (2004). Computer scientists were focusing on competing areas to solve niche problems and specific approaches.

As Klondike highlights, they also used terms such as "advanced computing" to label their work where we may now refer to the tools and systems they built as early precursors to the AI systems we use today.

It wasn't until 1995, four years before The Matrix hit theaters, that the needle in AI research really moved in a significant way. But you could already see signs the winter was thawing, especially with the creation of the Loebner Prize, an annual competition created by Hugh Loebner in 1990.

Loebner was "an American millionaire who had given a lot of money" and "who became interested in the Turing test," according to the recipient of the prize in 1997, the late British computer scientist Yorick Wilks, speaking in an interview in 2019. Although the prize wasn't particularly large ($2,000 initially), it showed that interest in building AI agents was expanding, and that it was being taken seriously.

The first major development of the decade came when computer scientist Richard Wallace developed the chatbot ALICE, which stood for Artificial Linguistic Internet Computer Entity. Inspired by the famous ELIZA chatbot of the 1960s, the world's first major chatbot, ALICE, also known as Alicebot, was a natural language processing system that applied heuristic pattern matching to conversations with a human in order to provide responses. Wallace went on to win the Loebner Prize in 2000, 2001 and 2004 for creating and advancing this system, and a few years ago the New Yorker reported ALICE was even the inspiration for the critically acclaimed 2013 sci-fi hit Her, according to director Spike Jonze.

Then, in 1997, AI hit a series of major milestones, starting with a showdown starring the reigning world chess champion and grandmaster Garry Kasparov, who in May that year went head to head in New York with the challenger of his life: a computing agent called 'Deep Blue' created by IBM. This was actually the second time Kasparov faced Deep Blue, after beating the first version of the system in Philadelphia the year before, but Deep Blue narrowly won the rematch by 3.5 to 2.5.

"This highly publicized match was the first time a reigning world chess champion lost to a computer and served as a huge step towards an artificially intelligent decision making program," wrote Rockwell Anyoha in a Harvard blog.

It did something "no machine had ever done before", according to IBM, delivering its victory through "brute force computing power" and for the entire world to see, as it was indeed broadcast far and wide. It used 32 processors to evaluate 200 million chess positions per second. "I have to pay tribute," Kasparov said. "The computer is far stronger than anybody expected."

Another major milestone was the release of NaturallySpeaking by Dragon Systems in June 1997. This speech recognition software was the first universally accessible and affordable computer dictation system for PCs, if $695 (or $1,350 today) is your idea of affordable, that is. "This is only the first step, we have to do a lot more, but what we're building toward is to humanizing computers, make them very natural to use, so yes, even more people can use them," said CEO Jim Baker in a news report from the time. Dragon licensed the software to big names including Microsoft and IBM, and it was later integrated into the Windows operating system, signaling much wider adoption.

A year later, researchers with MIT released Kismet, a "disembodied head with gremlin-like features" that learns about its environment "like a baby" and relies entirely "upon its benevolent carers to help it find out about the world", according to Duncan Graham-Rowe, writing in New Scientist at the time. Spearheaded by Cynthia Breazeal, this creation was one of the projects that fuelled MIT's AI research and secured its future. The machine could interact with humans, and simulated emotions by changing its facial expression, its voice and its movements.

This contemporary resurgence also extended to the language people used. The taboo around "artificial intelligence" was disintegrating, and terms like "intelligent agents" began slipping their way into the lexicon of the time, wrote McCorduck in 'Machines Who Think'. Robotics, intelligent AI agents, machines surpassing the wit of man, and more: it was these ingredients that, in turn, fed into the thinking behind The Matrix and the thesis at its heart.

When The Matrix hit theaters, there was a real dichotomy between movie-goers and critics. It's fair to say that audiences loved the spectacle, to say the least, with the film taking $150 million at the US box office, while a string of publications stood in line to lambast the script and the ideas in the movie. "It's Special Effects 10, Screenplay 0," wrote Todd McCarthy in his review in Variety. The Miami Herald rated it two-and-a-half stars out of five.

Chronicle senior writer Bob Graham praised Joe Pantoliano (who plays Cypher) in his SFGate review, "but even he is eventually swamped by the hopeless muddle that "The Matrix" becomes." Critics wondered why people were so desperate to see a movie that had been so widely slated and the Guardian pondered whether it was sci-fi fans "driven to a state of near-unbearable anticipation by endless hyping of The Phantom Menace, ready to gorge themselves on pretty much any computer graphics feast that came along?"

Veteran film director Quentin Tarantino, however, related more to the average audience member; he shared his experience in an interview with Amy Nicholson. "I remember the place was jam-packed and there was a real electricity in the air; it was really exciting," he said, speaking of his outing to watch the movie on the Friday night after it was released.

"Then this thought hit me, that was really kind of profound, and that was: it's easy to talk about 'The Matrix' now because we know the secret of 'The Matrix', but they didn't tell you any of that in any of the promotions in any of the big movie trailer or any of the TV spots. So we were really excited about this movie, but we really didn't know what we were going to see. We didn't really know what to expect; we did not know the mythology at all I mean, at all. We had to discover that."

The AI boom of today is largely centered on an old technology known as neural networks; despite the incredible advancements, the generative AI tools that have captured the imagination of businesses and people alike, namely large language models (LLMs), are still built on that foundation.

One of the most interesting developments is the number of people who are becoming increasingly convinced that these AI agents are conscious, or have agency, and can think or even feel for themselves. One startling example is a former Google engineer who claimed a chatbot the company was working on was sentient. Although this is widely understood not to be the case, it's a sign of the direction in which we're heading.

Elsewhere, despite impressive systems that can generate images, and now video thanks to OpenAI's Sora, these technologies still all rely on the principles of neural networks, which many in the field don't believe will lead to the sort of human-level AGI, let alone a superintelligence that can modify itself and build even more intelligent agents autonomously. The answer, according to Databricks CTO Matei Zaharia, is a compound AI system that uses LLMs as one component. It's an approach backed by Goertzel, the veteran computer scientist who is working on his own version of this compound system with the aim of creating a distributed open source AGI agent within the next few years. He suggests that humanity could build an AGI agent as soon as 2027.

There are so many reasons why The Matrix has remained relevant, from the fact that it was a visual feast to the rich and layered parallels one can draw between its world and ours.

Much of the backstory hasn't been a part of that conversation in the 25 years since its cinematic release. But as we look to the future, we can begin to see how a similar world might be unfolding.

We know, for example, the digital realm we occupy largely through social media channels is influencing people in harmful ways. AI has also been a force for tragedy around the world, with Amnesty International claiming Facebook's algorithms played a role in pouring petrol on ethnic violence in Myanmar. Although not generally taken seriously, companies like Meta are attempting to build VR-powered alternate realities known as the metaverse.

With generative AI now a proliferating technology, groundbreaking research found recently that more than half (57.1%) of the internet comprises AI-generated content.

Throw increasingly capable tools like Midjourney and now Sora into the mix, and to what extent can we know what is real and what is generated by machines, especially if the output looks so lifelike and indistinguishable from human-generated content? The lack of sentience in the billions of machines around us is an obvious divergence from The Matrix. But that doesn't mean our own version of The Matrix has the potential to be any less manipulative.

Go here to read the rest:

As 'The Matrix' turns 25, the chilling artificial intelligence (AI) projection at its core isn't as outlandish as it once seemed - TechRadar

R&D misdirection and the circuitous US path to artificial general intelligence – DataScienceCentral.com – Data Science Central

Big tech has substantial influence over the direction of R&D in the US. According to the National Science Foundation and the Congressional Research Service, US business R&D spending dwarfs domestic Federal or state government spending on research and development. The most recent statistics I found include these:

Of course, business R&D spending focuses mainly on development: 76 percent, versus 14 percent on applied research and 7 percent on basic research.

Most state R&D spending is also development-related and, like the Federal variety, tends to be largely dedicated to sectors such as healthcare that don't directly address system-level transformation needs.

Private sector R&D is quite concentrated in a handful of companies with dominant market shares and capitalizations. From a tech perspective, these companies might claim to be on the bleeding edge of innovation, but are all protecting cash cow legacy stacks and a legacy architecture mindset. The innovations they're promoting in any given year are the bright and shiny objects, the VR goggles, the smart watches, the cars and the rockets. Meanwhile, what could be substantive improvement in infrastructure receives a fraction of their investment. Public sector R&D hasn't been filling in these gaps.

Back in the 2010s, I had the opportunity to go to a few TTI/Vanguard conferences, because the firm I worked for had bought an annual seat/subscription.

I was eager to go. These were great conferences because the TTI/V board was passionate about truly advanced and promising emerging tech that could have a significant impact on large-scale systems. Board members (the kind of people the Computer History Museum designates as CHM fellows once a year) had already made their money and names for themselves. The conferences were a way for these folks to keep in touch with the latest advances and with each other, as well as a means of entertainment.

One year at a TTI/V event, a chemistry professor from Cornell gave a talk about the need for liquid, Olympic swimming pool-sized batteries. He was outspoken and full of vigor, pacing constantly across the stage, and he had a great point. Why weren't we storing the energy our power plants were generating? It was such a waste not to store it. The giant liquid batteries he described made sense as a solution to a major problem with our infrastructure.

This professor was complaining about the R&D funding he was applying for, but not receiving, to research the mammoth liquid battery idea. As a researcher, he'd looked into the R&D funding situation. He learned after a bit of digging that lots and lots of R&D funding was flowing into breast cancer research.

I immediately understood what he meant. Of course, breast cancer is a major, serious issue, one that deserves significant R&D funding. But the professor's assertion was that breast cancer researchers couldn't even spend all the money that was flowing in; there wasn't enough substantive research to soak up all those funds.

Why was so much money flowing into breast cancer research? I could only conclude that the Susan G. Komen for the Cure (SGKC) foundation was so successful at its high-profile marketing efforts that they were awash in donations.

As you may know, the SGKC is the nonprofit behind the pink "cure breast cancer" ribbons and the pink baseball bats, cleats and other gear major sports teams don from time to time. By this point, the SGKC owns that shade of pink. Give them credit: their marketing is quite clever. Most recently, a local TV station co-hosted an SGKC "More than Pink Walk" in the Orlando, FL metro area.

I don't begrudge SGKC their success. As I mentioned, I actually admire their abilities. But I do bemoan the fact that other areas with equivalent or greater potential impact are so underfunded by comparison, and that public awareness here in the US is so low when it comes to the kind of broadly promising systems-level R&D the Cornell chemistry professor alluded to, which we've been lacking.

By contrast, the European Union's R&D approach over the past two decades has resulted in many insightful and beneficial breakthroughs. For example, in 2005 the Brain and Mind Institute of École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland won a Future and Emerging Technologies (FET) Flagship grant from the European Commission to reverse engineer the brain, as well as funding from the Swiss government and private corporations.

For over 15 years, this Blue Brain Project succeeded in a number of areas, including a novel algorithmic classification scheme announced in 2019 for identifying and naming different neuron types such as pyramidal cells that act as antennae to collect information from other parts of the brain. This was work that benefits both healthcare and artificial intelligence.

Additionally, the open source tooling the Project has developed includes Blue Brain Nexus, a standards-based knowledge graph designed to simplify and scale the sharing of various imagery and other media globally and to enable their findability.

I hope the US, which has seemed to be relatively leaderless on the R&D front over the past decade, can emulate more visionary European efforts like this one from the Swiss in the near future.

Here is the original post:

R&D misdirection and the circuitous US path to artificial general intelligence - DataScienceCentral.com - Data Science Central

Creating ‘good’ AGI that won’t kill us all: Crypto’s Artificial Superintelligence Alliance – Cointelegraph

After a year of increasingly dire warnings about the imminent demise of humanity at the hands of superintelligent artificial intelligence (AI), Magazine is in Panama at the Beneficial AGI Conference to hear the other side of the story. Attendees include an eclectic mix of transhumanists, crypto folk, sci-fi authors including David Brin, futurists and academics.

The conference is run by SingularityNET, a key member of the proposed new Artificial Superintelligence Alliance, to find out what happens if everything goes right with creating artificial general intelligence (AGI), that is, human-level artificial intelligence.

But how do we bring about that future, rather than the scenario in which Skynet goes rogue and kills us all?

One of the best insights into why those questions are so important comes from futurist Jose Luis Cordeiro, author of The Death of Death, who believes humanity will cure all diseases and aging thanks to AGI.

He tells Magazine of some sage wisdom that Arthur C. Clarke, the author of 2001: A Space Odyssey, once told him.

He said: "We have to be positive about the future because the images of the future, of what's possible, begin with our minds. If we think we will self-destroy, most likely we will. But if we think that we will survive, [that] we will move into a better world, [then we] will work toward that and we will achieve it. So it begins in our minds."

Humans are hardwired to focus more on the existential threats from AGI than on the benefits.

Evolutionarily speaking, it's better that our species worries nine times too often that the wind rustling in the bushes could be a tiger than to be blithely unconcerned about the rustling and get eaten by a tiger on the 10th occurrence.

Even the doomers don't put a high percentage chance on AGI killing us all, with a survey of almost 3,000 AI researchers suggesting the chance of an extremely bad outcome ranges from around 5% to 10%. So while that's worryingly high, the odds are still in our favor.

Opening the conference, SingularityNET founder and "Father of AGI" Dr. Ben Goertzel paid tribute to Ethereum founder Vitalik Buterin's concept of defensive accelerationism. That's the midpoint between the effective accelerationism techno-optimists, with their "move fast and break things" ethos, and the decelerationists, who want to slow down or halt the galloping pace of AI development.

Goertzel believes that deceleration is impossible but concedes there's a small chance things could go horribly wrong with AGI. So he's in favor of pursuing AGI while being mindful of the potential dangers. Like many in the AI/crypto field, he believes the solution is open-sourcing the technology and decentralizing the hardware and governance.

This week SingularityNET announced it has teamed up with the decentralized multi-agent platform FetchAI, founded by DeepMind veteran Humayun Sheikh, and the data exchange platform Ocean Protocol to form the Artificial Superintelligence Alliance (ASI).

It will be the largest open-source independent player in AI research and development. The alliance has proposed merging SingularityNET, FetchAI and Ocean Protocol's existing tokens into a new one called ASI, which would have a fully diluted market cap of around $7.5 billion, subject to approval votes over the next two weeks. The three platforms would continue to operate as separate entities under the guidance of Goertzel, with Sheikh as chair.

According to the Alliance, the aim is to create a powerful, compelling alternative to Big Tech's control over AI development, use and monetization by creating decentralized AI infrastructure at scale and accelerating investment into blockchain-based AGI.

Probably the most obvious beneficial impact is AGI's potential to analyze huge swathes of data to help solve many of our most difficult scientific, environmental, social and medical issues.

We've already seen some amazing medical breakthroughs, with MIT researchers using AI models to evaluate tens of thousands of potential chemical compounds and discover the first new class of antibiotics in 60 years, one that's effective against the hitherto drug-resistant MRSA bacteria. It's the sort of scaling up of research that's almost impossible for humans to achieve.

Also read: Ben Goertzel profile: How to prevent AI from annihilating humanity using blockchain

And that's all before we get to the immortality and mind-uploading stuff that the transhumanists get very excited about but which weirds most people out.

This ability to analyze great swathes of data also suggests the technology will be able to give early warnings of pandemics, natural disasters and environmental issues. AI and AGI also have the potential to free humans from drudgery and repetitive work, from coding to customer service help desks.

While this will cause a massive upheaval to the workforce, so did the invention of the washing machine and the rise of Amazon's online business, both of which had big impacts on particular occupations. The hope is that a bunch of new jobs will be created to replace those lost.

Economics professor Robin Hanson says this is what has happened over the past two decades, even though people were very concerned at the turn of the century that automation would replace workers.

Hanson's study of the data on how automation impacted wages and employment across various industries between 1999 and 2019 found that despite big changes, most people still had jobs and were paid pretty much the same.

On average, there wasn't a net effect on wages or jobs from the automation of U.S. jobs from 1999 to 2018, he says.

Janet Adams, the optimistic COO of SingularityNET, explains that AGI has the potential to be extraordinarily positive for all humanity.

I see a future in which our future AGIs are making decisions which are more ethical than the decisions which humans make. And they can do that because they don't have emotions or jealousy or greed or hidden agendas, she says.

Adams points out that 25,000 people die every day from hunger, even as people in rich countries throw away mountains of food. It's a problem that could be solved by intelligent allocation of resources across the planet, she says.

But Adams warns that AGIs need to be trained on data sets reflecting the entire world's population, not just the top 1%, so that when they make decisions, they won't make them just for the benefit of the powerful few, they will make them for the benefit of the broader civilization, broader humanity.

Anyone who watched the early utopian dreams of a decentralized internet crumble into a corporate ad-filled landscape of addictive design and engagement farming may doubt that this rosy future is possible.

Building high-end AI requires a mountain of computing and other resources that are currently out of reach of all but a handful of the usual suspects: Nvidia, Google, Meta and Microsoft. So the default assumption is that one of these tech giants will end up controlling AGI.

Goertzel, a long-haired hippy who plays in a surprisingly good band fronted by a robot, wants to challenge that assumption.

Goertzel points out that the default assumption used to be that companies like IBM would win the computing industry and Yahoo would win search.

The reason these things change is because people were concretely fighting to change it in each instance, he says. Instead, Bill Gates, Steve Jobs and the Google guys came along.

The founder of SingularityNET, he's been thinking about the Singularity (a theoretical moment when technological development increases exponentially) since the early 1970s, when he read an early book on the subject called The Prometheus Project.

He's been working on AGI for much of the time since then, popularizing the term AGI and launching the OpenCog AI framework in 2008.

Adams says Goertzel is a key reason SingularityNET has a credible shot.

We are the biggest not-for-profit, crypto-funded AI science and research team on the planet, Adams says, noting their competitors have been focused on narrow AIs like ChatGPT and are only now shifting their strategy to AGI.

They're years behind us, she says. We have three decades of research with Dr. Ben Goertzel in neural-symbolic methods.

But she adds that opening up the platform to any and all developers around the world, and rewarding them for their contributions, will give it the edge even over the mega-corporations that currently dominate the space.

Because we have a powerful vision and a powerful commitment to building the most advanced, most intelligent AGI in a democratic way, it's hard to imagine that Big Tech or any other player could come in and compete, particularly when you're up against open source.

[We will] see a potentially huge influx of people developing on the SingularityNET marketplace and the continued escalation of pace toward AGI. There's a good chance it will be us.

The Prometheus Project proposed that AI was such an earth-shattering development that everyone in the world should get a democratic vote on how it is developed.

So when blockchain emerged, implementing decentralized infrastructure and token-based governance for AI seemed like the next-best practical alternative.

HyperCycle CEO Toufi Saliba tells Magazine this mitigates the threat of a centralized company or authoritarian country gaining immense power from developing AGI first, which would be the worst thing that ever happened to humanity.

Also read: Real AI use cases in crypto, No 1: The best use of money for AI is crypto

It's not the only potential solution to the problem. Meta chief AI scientist Yann LeCun is a big proponent of open-sourcing AI models and letting a thousand flowers bloom, while X owner Elon Musk recently open-sourced the model for Grok.

But blockchain is arguably a big step up. SingularityNET aims to network the technology around the world, with different components controlled by different communities, thereby reducing the risk of any single company, group or government controlling the AGI.

So you could use these infrastructures to implement decentralized deep neural networks, you could use them to implement a huge logic engine, you can use them to implement an artificial life approach where you have a simulated ecosystem and a bunch of little artificial animals interacting and trying to evolve toward intelligence, explains Goertzel.

I want to foster creative contributions from everywhere, and it may be some, you know, 12-year-old genius from Tajikistan comes up with a new artificial life innovation that provides a breakthrough to AGI.

HyperCycle is a ledgerless blockchain thats fast enough to allow AI components to communicate, coordinate and transact to finality in under 300 milliseconds. The idea is to give AIs a way to call on the resources of other AIs, paid for via microtransactions.

For now, the fledgling network is being used for small-scale applications, like an AI app calling on another AI service to help complete a task. But in time, as the network scales, it's theoretically possible that AGI might be an emergent property of the various AI components working together in a sort of distributed brain.
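To make the microtransaction idea concrete, here is a minimal sketch in Python of one AI agent paying another for a piece of work. The names (Ledger, Agent, settle, call_service) and the fee are hypothetical illustrations of the pattern described above, not HyperCycle's actual API.

# A toy illustration of "AIs paying AIs": one agent settles a tiny fee with a
# ledger, then delegates a task to another agent. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Ledger:
    """Stand-in for a fast settlement layer; a real system would reach finality on-chain."""
    balances: dict = field(default_factory=dict)

    def settle(self, payer: str, payee: str, amount: float) -> None:
        # Move a micropayment from payer to payee, refusing if funds are short.
        if self.balances.get(payer, 0.0) < amount:
            raise ValueError(f"{payer} cannot cover the micropayment of {amount}")
        self.balances[payer] -= amount
        self.balances[payee] = self.balances.get(payee, 0.0) + amount


@dataclass
class Agent:
    name: str
    ledger: Ledger
    price_per_call: float = 0.001  # a tiny fee, i.e. a microtransaction

    def handle(self, task: str) -> str:
        # Placeholder for whatever model or service this agent wraps.
        return f"{self.name} completed: {task}"

    def call_service(self, provider: "Agent", task: str) -> str:
        # Pay first, then hand the task to the other agent.
        self.ledger.settle(self.name, provider.name, provider.price_per_call)
        return provider.handle(task)


ledger = Ledger(balances={"summarizer": 1.0, "translator": 0.0})
summarizer = Agent("summarizer", ledger)
translator = Agent("translator", ledger)
print(summarizer.call_service(translator, "translate this summary into Spanish"))
print(ledger.balances)  # the translator has earned a 0.001 fee

In a real deployment the settlement step is the part that has to reach finality in a few hundred milliseconds, which is why something far faster than existing blockchains was needed.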

So, in that approach, the entire world has a much higher chance to get to AGI as a single entity, Saliba says.

Goertzel didn't develop HyperCycle for that reason; he just needed something miles faster than existing blockchains to enable AIs to work together.

The project he's most excited about is OpenCog Hyperon, which launches in alpha this month. It combines deep neural nets, logic engines, evolutionary learning and other AI paradigms in the same software framework, all updating the same extremely decentralized Knowledge Graph.
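As a rough illustration of that design, the sketch below shows two different paradigms, a stand-in neural extractor and a simple logic rule, reading from and writing to one shared knowledge graph. The class and function names are hypothetical; this is not OpenCog Hyperon's real Atomspace or METTA API, it only conveys the shared-store idea.

# Hypothetical sketch: different AI paradigms updating one shared knowledge graph.
from collections import defaultdict


class KnowledgeGraph:
    def __init__(self):
        # edges[subject][relation] -> set of objects
        self.edges = defaultdict(lambda: defaultdict(set))

    def add(self, subject, relation, obj):
        self.edges[subject][relation].add(obj)

    def query(self, subject, relation):
        return self.edges[subject][relation]


def neural_module(kg: KnowledgeGraph, text: str) -> None:
    # Pretend a neural model extracted a fact from text and wrote it to the graph.
    if "Fido" in text and "dog" in text:
        kg.add("Fido", "is_a", "dog")


def logic_module(kg: KnowledgeGraph) -> None:
    # A simple inference rule over the shared graph: every dog is also a mammal.
    for subject in list(kg.edges):
        if "dog" in kg.query(subject, "is_a"):
            kg.add(subject, "is_a", "mammal")


kg = KnowledgeGraph()
neural_module(kg, "Fido is a very good dog")
logic_module(kg)
print(kg.query("Fido", "is_a"))  # prints the set {'dog', 'mammal'} (ordering may vary)

The point of the shared store is that each paradigm can build on facts the others have contributed, rather than each system keeping its own siloed state.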

The idea is to throw open the doors to anyone who wants to work on it, in the hope they can improve the METTA AGI programming language so it can scale up massively. We will have the complete toolset for building the baby AGI, he says. To get something I would want to call a baby AGI, we will need that million-times speedup of the METTA interpreter.

My own best guess is that OpenCog Hyperon may be the system to make the [AGI] breakthrough.

Of course, decentralization does not ensure things will go right with AGI. As Goertzel points out, the government of Somalia was decentralized very widely in the 1990s under a bunch of warlords and militias, but it would have been preferable at the time to live under the centralized government of Finland.

Furthermore, token-based governance is a long way from being fit for prime time. In projects like Uniswap and Maker, large holders like a16z and the core team have so many tokens its almost not worth anyone else voting. Many other decentralized autonomous organizations are wracked by politics and infighting.

The surging price of crypto/AI projects has attracted a bunch of token speculators. Are these really the people we want to put in control of AGI?

Goertzel argues that while blockchain projects are currently primarily attractive to people interested in making money, that will change as the use case evolves.

If we roll out the world's smartest AI on decentralized networks, you will get a lot of other people involved who are not primarily oriented toward financial speculation. And then it'll be a different culture.

But if the Artificial Superintelligence Alliance does achieve AGI, wouldn't its tokens be ludicrously expensive and out of reach of those primarily interested in beneficial AGI?

Goertzel suggests that a weighted voting system prioritizing those who have contributed to the project may be required:

I think for guiding the mind of the AGI, we want to roll out a fairly sophisticated, decentralized reputation system and have something closer to one person, one vote, but where people who have some track record of contributing to the AI network and making some sense, get a higher weighting.
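Here is a minimal sketch of how such a scheme might look, assuming an illustrative base vote of 1 plus a capped reputation bonus; the scaling factor and cap are assumptions for the example, not a published spec.

# Hypothetical reputation-weighted voting: one person, one vote, plus a bounded
# bonus for a documented track record of contributions.
from collections import Counter


def vote_weight(reputation: float, bonus_scale: float = 0.5, max_bonus: float = 2.0) -> float:
    """Base vote of 1.0 plus a capped bonus proportional to contribution score."""
    return 1.0 + min(max_bonus, bonus_scale * reputation)


def tally(votes: dict[str, str], reputations: dict[str, float]) -> Counter:
    """votes maps voter -> chosen option; reputations maps voter -> contribution score."""
    totals: Counter = Counter()
    for voter, choice in votes.items():
        totals[choice] += vote_weight(reputations.get(voter, 0.0))
    return totals


votes = {"alice": "proposal_a", "bob": "proposal_b", "carol": "proposal_a"}
reputations = {"alice": 0.0, "bob": 6.0, "carol": 1.0}  # bob is a heavy contributor
print(tally(votes, reputations))
# Counter({'proposal_b': 3.0, 'proposal_a': 2.5}): bob's bonus is capped at 2.0,
# so his total weight is 3.0 rather than 4.0.

The cap is the important design choice here: reputation nudges outcomes without letting any single contributor, however prolific, dominate the vote.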


Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.

Read the rest here:

Creating 'good' AGI that won't kill us all: Crypto's Artificial Superintelligence Alliance - Cointelegraph

Whoever develops artificial general intelligence first wins the whole game – ForexLive


Follow this link:

Whoever develops artificial general intelligence first wins the whole game - ForexLive