
We’re Focusing on the Wrong Kind of AI Apocalypse – TIME

Conversations about the future of AI are too apocalyptic. Or rather, they focus on the wrong kind of apocalypse.

There is considerable concern about the future of AI, especially as a number of prominent computer scientists have raised the risks of Artificial General Intelligence (AGI), an AI smarter than a human being. They worry that an AGI will lead to mass unemployment, or that AI will grow beyond human control, or worse (the movies Terminator and 2001 come to mind).

Discussing these concerns seems important, as does thinking about the much more mundane and immediate threats of misinformation, deep fakes, and proliferation enabled by AI. But this focus on apocalyptic events also robs most of us of our agency. AI becomes a thing we either build or don't build, and something that no one outside of a few dozen Silicon Valley executives and top government officials really has any say over.

But the reality is we are already living in the early days of the AI Age, and, at every level of an organization, we need to make some very important decisions about what that actually means. Waiting to make these choices means they will be made for us. It opens us up to many little apocalypses, as jobs and workplaces are disrupted one-by-one in ways that change lives and livelihoods.

We know this is a real threat because, regardless of any pauses in AI creation, and without any further AI development beyond what is available today, AI is going to impact how we work and learn. We know this for three reasons: First, AI really does seem to supercharge productivity in ways we have never really seen before. An early controlled study in September 2023 showed large-scale improvements at work tasks as a result of using AI, with time savings of more than 30% and higher-quality output for those using AI. Add to that the near-immaculate test scores achieved by GPT-4, and it is obvious why AI use is already becoming common among students and workers, even if they are keeping it secret.


We also know that AI is going to change how we work and learn because it is affecting a set of workers who never really faced an automation shock before. Multiple studies show that the jobs most exposed to AI (and therefore the people whose jobs will make the hardest pivot as a result of AI) are educated and highly paid workers, and the ones with the most creativity in their jobs. The pressure for organizations to take a stand on a technology that affects these workers will be immense, especially as AI-driven productivity gains become widespread. These tools are on their way to becoming deeply integrated into our work environments. Microsoft, for instance, has released Copilot GPT-4 tools for its ubiquitous Office applications, even as Google does the same for its office tools.

As a result, a natural instinct among many managers might be to say "fire people, save money." But it doesn't need to be that way, and it shouldn't be. There are many reasons why companies should not turn efficiency gains into headcount or cost reduction. Companies that figure out how to use their newly productive workforce have the opportunity to dominate those who try to keep their post-AI output the same as their pre-AI output, just with fewer people. Companies that commit to maintaining their workforce will likely have employees as partners, who are happy to teach others about the uses of AI at work, rather than scared workers who hide their AI use for fear of being replaced. Psychological safety is critical to innovative team success, especially when confronted with rapid change. How companies use this extra efficiency is a choice, and a very consequential one.

There are hints buried in the early studies of AI about a way forward. Workers, while worried about AI, tend to like using it because it removes the most tedious and annoying parts of their job, leaving them with the most interesting tasks. So, even as AI removes some previously valuable tasks from a job, the work that is left can be more meaningful and more high value. But this is not inevitable, so managers and leaders must decide whether and how to commit themselves to reorganizing work around AI in ways that help, rather than hurt, their human workers. They need to ask: "What is my vision for how AI makes work better, rather than worse?"

Rather than just being worried about one giant AI apocalypse, we need to worry about the many small catastrophes that AI can bring. Unimaginative or stressed leaders may decide to use these new tools for surveillance and for layoffs. Educators may decide to use AI in ways that leave some students behind. And those are just the obvious problems.

But AI does not need to be catastrophic. Correctly used, AI can create local victories, where previously tedious or useless work becomes productive and empowering. Where students who were left behind can find new paths forward. And where productivity gains lead to growth and innovation.

The thing about a widely applicable technology is that decisions about how it is used are not limited to a small group of people. Many people in organizations will play a role in shaping what AI means for their team, their customers, their students, their environment. But to make those choices matter, serious discussions need to start in many places, and soon. We can't wait for decisions to be made for us, and the world is advancing too fast to remain passive.

Read this article:

We're Focusing on the Wrong Kind of AI Apocalypse - TIME

Read More..

Artificial intelligence in healthcare: defining the most common terms – HealthITAnalytics.com

April 03, 2024 - As healthcare organizations collect more and more digital health data, transforming that information to generate actionable insights has become crucial.

Artificial intelligence (AI) has the potential to significantly bolster these efforts, so much so that health systems are prioritizing AI initiatives this year. Additionally, industry leaders are recommending that healthcare organizations stay on top of AI governance, transparency, and collaboration moving forward.

But to effectively harness AI, healthcare stakeholders need to successfully navigate an ever-changing landscape with rapidly evolving terminology and best practices.

In this primer, HealthITAnalytics will explore some of the most common terms and concepts stakeholders must understand to successfully utilize healthcare AI.

To understand health AI, one must have a basic understanding of data analytics in healthcare. At its core, data analytics aims to extract useful information and insights from various data points or sources. In healthcare, information for analytics is typically collected from sources like electronic health records (EHRs), claims data, and peer-reviewed clinical research.

Analytics efforts often aim to help health systems meet a key strategic goal, such as improving patient outcomes, enhancing chronic disease management, advancing precision medicine, or guiding population health management.

However, these initiatives require analyzing vast amounts of data, which is often time- and resource-intensive. AI presents a promising solution to streamline the healthcare analytics process.

The American Medical Association (AMA) indicates that AI broadly refers to the ability of computers to perform tasks that are typically associated with a rational human being, a quality that enables an entity to function appropriately and with foresight in its environment.

However, the AMA favors an alternative conceptualization of AI that the organization calls augmented intelligence. Augmented intelligence focuses on the assistive role of AI in healthcare and underscores that the technology can enhance, rather than replace, human intelligence.

AI tools are driven by algorithms, which act as instructions that a computer follows to perform a computation or solve a problem. Using the AMA's conceptualizations of AI and augmented intelligence, algorithms leveraged in healthcare can be characterized as computational methods that support clinicians' capabilities and decision-making.

Generally, there are multiple types of AI that can be classified in various ways: IBM broadly categorizes these tools based on their capabilities and functionalities, which covers a plethora of realized and theoretical AI classes and potential applications.

Much of the conversation around AI in healthcare is centered around currently realized AI tools that exist for practical applications today or in the very near future. Thus, the AMA categorizes AI terminology into two camps: terms that describe how an AI works and those that describe what the AI does.

AI tools can work by leveraging predefined logic or rules-based learning, by understanding patterns in data via machine learning, or by using neural networks that simulate the human brain to generate insights through deep learning.

In terms of functionality, AI models can use these learning approaches to engage in computer vision, a process for deriving information from images and videos; natural language processing to derive insights from text; and generative AI to create content.

Further, AI models can be classified as either explainable, meaning that users have some insight into the how and why of an AI's decision-making, or black box, a phenomenon in which the tool's decision-making process is hidden from users.

Currently, all AI models are considered narrow or weak AI, tools designed to perform specific tasks within certain parameters. Artificial general intelligence (AGI), or strong AI, is a theoretical system under which an AI model could be applied to any task.

Machine learning (ML) is a subset of AI in which algorithms learn from patterns in data without being explicitly programmed. Often, ML tools are used to make predictions about potential future outcomes.

Unlike rules-based AI, ML techniques can use increased exposure to large, novel datasets to learn and improve their own performance. There are three main categories of ML based on task type: supervised, unsupervised, and reinforcement learning.

In supervised learning, algorithms are trained on labeled data, data inputs associated with corresponding outputs, to identify specific patterns, which helps the tool make accurate predictions when presented with new data.

Unsupervised learning uses unlabeled data to train algorithms to discover and flag unknown patterns and relationships among data points.

Semi-supervised machine learning relies on a mix of supervised and unsupervised learning approaches during training.
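To make the contrast concrete, here is a minimal sketch in Python using scikit-learn, assuming synthetic stand-in data rather than real clinical records: a supervised classifier trained on labeled examples, and an unsupervised clustering step run on unlabeled data.

```python
# A minimal sketch (not from the article) contrasting supervised and unsupervised
# learning with scikit-learn. The "patient" features and labels are synthetic
# stand-ins invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Supervised learning: inputs paired with known outcomes (labels).
X_labeled = rng.normal(size=(200, 3))                      # e.g., age, blood pressure, BMI (scaled)
y = (X_labeled[:, 1] + X_labeled[:, 2] > 0).astype(int)    # e.g., readmitted yes/no
clf = LogisticRegression().fit(X_labeled, y)
print("predicted risk class:", clf.predict(X_labeled[:1]))

# Unsupervised learning: no labels; the algorithm looks for structure on its own.
X_unlabeled = rng.normal(size=(200, 3))
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X_unlabeled)
print("cluster assignments:", clusters[:10])
```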

Reinforcement learning relies on a feedback loop for algorithm training. This type of ML algorithm is given data inputs, which it can use to take various actions, such as making a prediction, to generate an output. If the algorithm's action and output align with the programmer's goals, its behavior is reinforced with a reward.

In this way, algorithms developed using reinforcement techniques generate data, interact with their environment, and learn a series of actions to achieve a desired result.
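As a rough illustration of that feedback loop, the toy sketch below (invented for this primer, not drawn from the article) has an agent repeatedly choose among three actions with hidden reward probabilities, reinforcing the actions that pay off.

```python
# A toy sketch of the reinforcement-learning loop described above: try an action,
# receive a reward, and strengthen the estimate of actions whose outcomes align
# with the goal. The reward probabilities are made up.
import random

reward_prob = [0.2, 0.5, 0.8]   # hidden payoff probability of three possible actions
value = [0.0, 0.0, 0.0]         # the agent's running estimate of each action's value
counts = [0, 0, 0]
epsilon = 0.1                   # how often to explore a random action

for step in range(5000):
    if random.random() < epsilon:
        action = random.randrange(3)        # explore
    else:
        action = value.index(max(value))    # exploit the best-known action
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    value[action] += (reward - value[action]) / counts[action]

print("learned action values:", [round(v, 2) for v in value])
```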

These approaches to pattern recognition make ML particularly useful in healthcare applications like medical imaging and clinical decision support.

Deep learning (DL) is a subset of machine learning used to analyze data to mimic how humans process information. DL algorithms rely on artificial neural networks (ANNs) to imitate the brain's neural pathways.

ANNs utilize a layered algorithmic architecture, allowing insights to be derived from how data are filtered through each layer and how those layers interact. This enables deep learning tools to extract more complex patterns from data than their simpler AI- and ML-based counterparts.

Like machine learning models, deep learning algorithms can be supervised, unsupervised, or somewhere in between. There are four main types of deep learning used in healthcare: deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs).

DNNs are a type of ANN with a greater depth of layers. The deeper the DNN, the more data translation and analysis tasks can be performed to refine the model's output.

CNNs are a type of DNN that is specifically applicable to visual data. With a CNN, users can evaluate and extract features from images to enhance image classification.
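A minimal sketch of such a network, assuming PyTorch and arbitrary input sizes and class counts, might look like the following: stacked convolution and pooling layers that extract visual features, followed by a small classifier head.

```python
# A minimal sketch (assuming PyTorch is available) of a small convolutional network
# of the kind described above. Input shape, channel counts, and the two-class
# output are arbitrary choices for illustration.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 1-channel (grayscale) input
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 64x64 -> 32x32
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyCNN()
dummy_scan = torch.randn(4, 1, 64, 64)   # a batch of 4 fake 64x64 grayscale images
print(model(dummy_scan).shape)           # torch.Size([4, 2])
```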

RNNs are a type of ANN that relies on temporal or sequential data to generate insights. These networks are unique in that, where other ANNs' inputs and outputs remain independent of one another, RNNs utilize information from previous layers' inputs to influence later inputs and outputs.

RNNs are commonly used to address challenges related to natural language processing, language translation, image recognition, and speech captioning. In healthcare, RNNs have the potential to bolster applications like clinical trial cohort selection.

GANs utilize multiple neural networks to create synthetic data instead of real-world data. Like other types of generative AI, GANs are popular for voice, video, and image generation. GANs can generate synthetic medical images to train diagnostic and predictive analytics-based tools.

Recently, deep learning technology has shown promise in improving the diagnostic pathway for brain tumors.

With their focus on imitating the human brain, deep learning and ANNs are similar but distinct from another analytics approach: cognitive computing.

The term typically refers to systems that simulate human reasoning and thought processes to augment human cognition. Cognitive computing tools can help aid decision-making and assist humans in solving complex problems by parsing through vast amounts of data and combining information from various sources to suggest solutions.

Cognitive computing systems must be able to learn and adapt as inputs change, interact organically with users, remember previous interactions to help define problems, and understand contextual elements to deliver the best possible answer based on available information.

To achieve this, these tools use self-learning frameworks, ML, DL, natural language processing, speech and object recognition, sentiment analysis, and robotics to provide real-time analyses for users.

Cognitive computing's focus on supplementing human decision-making power makes it promising for various healthcare use cases, including patient record summarization and acting as a medical assistant to clinicians.

Natural language processing (NLP) is a branch of AI concerned with how computers process, understand, and manipulate human language in verbal and written forms.

Using techniques like ML and text mining, NLP is often used to convert unstructured language into a structured format for analysis, to translate from one language to another, to summarize information, or to answer a user's queries.

There are also two subsets of NLP: natural language understanding (NLU) and natural language generation (NLG).

NLU is concerned with computer reading comprehension, focusing heavily on determining the meaning of a piece of text. These tools use the grammatical structure and the intended meaning of a sentence (syntax and semantics, respectively) to help establish a structure for how the computer should understand the relationship between words and phrases to accurately capture the nuances of human language.

Conversely, NLG is used to help computers write human-like responses. These tools combine NLP analysis with rules from the output language, like syntax, lexicons, semantics, and morphology, to choose how to appropriately phrase a response when prompted. NLG drives generative AI technologies like OpenAI's ChatGPT.

In healthcare, NLP can sift through unstructured data, such as EHRs, to support a host of use cases. To date, the approach has supported the development of a patient-facing chatbot, helped detect bias in opioid misuse classifiers, and flagged contributing factors to patient safety events.
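As a hedged illustration of that idea, the sketch below (with invented note snippets, not real records) turns free-text notes into a structured signal by vectorizing the text and training a simple classifier on it with scikit-learn.

```python
# A small sketch of turning unstructured clinical text into a structured signal,
# roughly in the spirit described above: vectorize free text, then train a simple
# classifier. The notes and labels are invented examples, not real patient data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient reports chest pain and shortness of breath",
    "follow-up visit, wound healing well, no complaints",
    "severe headache with nausea, possible migraine",
    "routine checkup, vitals stable, no acute issues",
]
urgent = [1, 0, 1, 0]   # toy labels: does the note describe an urgent complaint?

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, urgent)
print(model.predict(["patient complains of sudden chest pain"]))  # likely [1]
```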

McKinsey & Company describes generative AI (genAI) as "algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos."

GenAI tools take a prompt provided by the user via text, images, videos, or other machine-readable inputs and use that prompt to generate new content. Generative AI models are trained on vast datasets to generate realistic responses to users' prompts.

GenAI tools typically rely on other AI approaches, like NLP and machine learning, to generate pieces of content that reflect the characteristics of the models training data. There are multiple types of generative AI, including large language models (LLMs), GANs, RNNs, variational autoencoders (VAEs), autoregressive models, and transformer models.
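For a flavor of how a prompt becomes generated text in practice, here is a minimal sketch using the open-source Hugging Face transformers pipeline with a small open model; the library, model, and prompt are illustrative assumptions, not tools named in this article.

```python
# A minimal sketch of prompting an open text-generation model via the Hugging Face
# `transformers` pipeline API. The model choice (gpt2) and the prompt are
# placeholders for illustration only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "In plain language, a hospital discharge summary should include",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```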

Since ChatGPT's release in November 2022, genAI has garnered significant attention from stakeholders across industries, including healthcare. The technology has demonstrated significant potential for automating certain administrative tasks: EHR vendors are using generative AI to streamline clinical workflows, health systems are pursuing the technology to optimize revenue cycle management, and payers are investigating how genAI can improve member experience. On the clinical side, researchers are also assessing how genAI could improve healthcare-associated infection (HAI) surveillance programs.

Despite the excitement around genAI, healthcare stakeholders should be aware that generative AI can exhibit bias, like other advanced analytics tools. Additionally, genAI models can hallucinate by perceiving patterns that are imperceptible to humans or nonexistent, leading the tools to generate nonsensical, inaccurate, or false outputs.

View original post here:

Artificial intelligence in healthcare: defining the most common terms - HealthITAnalytics.com

Read More..

As ‘The Matrix’ turns 25, the chilling artificial intelligence (AI) projection at its core isn’t as outlandish as it once seemed – TechRadar

Living in 1999 felt like standing on the edge of an event horizon. Our growing obsession with technology was spilling into an outpouring of hope, fear, angst and even apocalyptic distress in some quarters. The dot-com bubble was swelling as the World Wide Web began spreading like a Californian wildfire. The first cell phones had been making the world feel much more connected. Let's not forget the anxieties over Y2K that were escalating into panic as we approached the bookend of the century.

But as this progress was catching the imagination of so many, artificial intelligence (AI) was in a sorry state, only beginning to emerge from a debilitating second 'AI winter', which spanned from 1987 to 1993.

Some argue this thawing process lasted as long as the mid-2000s. It was, indeed, a bleak period for AI research; it was a field that "for decades has overpromised and underdelivered", according to a report in the New York Times (NYT) from 2005.

Funding and interest were scarce, especially compared to the field's peak in the 1980s, with previously thriving conferences whittled down to pockets of diehards. In cinema, however, stories about AI were flourishing, with the likes of Terminator 2: Judgement Day (1991) and Ghost in the Shell (1995) building on decades of compelling feature films like Blade Runner (1982).

It was during this time that the Wachowskis penned the script for The Matrix, a groundbreaking tour de force that held up a mirror to humanity's increasing reliance on machines and challenged our understanding of reality.

It's a timeless classic, and its impact since its March 31, 1999 release has been sprawling. But the chilling plot at its heart, namely the rise of an artificial general intelligence (AGI) network that enslaves humanity, has remained consigned to fiction more so than it's ever been considered a serious scientific possibility. With the heat of the spotlight now on AI, however, ideas like the Wachowskis' are beginning to feel closer to home than we had anticipated.

AI has become not just the scientific but the cultural zeitgeist, with large language models (LLMs) and the neural nets that power them cannonballing into the public arena. That dry well of research funding is now overflowing, and corporations see massive commercial appeal in AI. There's a growing chorus of voices, too, that feel an AGI agent is on the horizon.


People like the veteran computer scientist Ray Kurzweil have long anticipated that humanity will reach the technological singularity (the point where an AI agent is just as smart as a human being), with Kurzweil outlining his thesis in 'The Singularity is Near' (2005) and projecting it would arrive by 2029.

Disciples like Ben Goertzel have claimed it could come as soon as 2027. Nvidia's CEO Jensen Huang says it's "five years away", joining the likes of OpenAI CEO Sam Altman and others in predicting an aggressive and exponential escalation. Should these predictions come true, they will also introduce a whole cluster bomb of ethical, moral, and existential anxieties that we will have to confront. So as The Matrix turns 25, maybe it wasn't so far-fetched after all?

Sitting on tattered armchairs in front of an old boxy television in the heart of a wasteland, Morpheus shows Neo the "real world" for the first time. Here, he fills us in on how this dystopian vision of the future came to be. We're at the summit of a lengthy yet compelling monologue that began many scenes earlier with questions Morpheus poses to Neo, and therefore us, progressing to the choice Neo must make and crescendoing into the full tale of humanity's downfall and the rise of the machines.

Much like we're now congratulating ourselves for birthing advanced AI systems that are more sophisticated than anything we have ever seen, humanity in The Matrix was united in its hubris as it gave birth to AI. Giving machines that spark of life, the ability to think and act with agency, backfired. And after a series of political and social shifts, the machines retreated to Mesopotamia, known as the cradle of human civilization, and built the first machine city, called 01.

Here, they replicated and evolved, developing smarter and better AI systems. When humanity's economies began to fall, humans struck the machine civilization with nuclear weapons to regain control. Because the machines were not as vulnerable to heat and radiation, the strike failed and instead represented the first stone thrown in the 'Machine War'.

Unlike in our world, the machines in The Matrix were solar-powered and harvested their energy from the sun. So humans decided to darken the skies, cutting the machines off from their power source; the machines responded by finding a new one, namely enslaving humans and draining their innate energy. They continued to fight until human civilization was enslaved, with the survivors placed into pods and connected to the Matrix, an advanced virtual reality (VR) simulation intended as an instrument for control, while their thermal, bio-electric, and kinetic energy was harvested to sustain the machines.

"This can't be real," Neo tells Morpheus. It's a reaction we would all expect to have when confronted with such an outlandish truth. But, as Morpheus retorts: "What is real?' Using AI as a springboard, the film delves into several mind-bending areas including the nature of our reality and the power of machines to influence and control how we perceive the environment around us. If you can touch, smell, or taste something, then why would it not be real?

Strip away the barren dystopia, the self-aware AI, and strange pods that atrophied humans occupy like embryos in a womb, and you can see parallels between the computer program and the world around us today.

When the film was released, our reliance on machines was growing but not final. Much of our understanding of the world today, however, is filtered through the prism of digital platforms infused with AI systems like machine learning. What we know, what we watch, what we learn, how we live, how we socialize online: all of these modern human experiences are influenced in some way by algorithms that direct us in subtle but meaningful ways. Our energy isn't harvested, but our data is, and we continue to feed the machine with every tap and click.

Intriguingly, as Agent Smith tells Morpheus in the iconic interrogation scene (a revelatory moment in which the computer program betrays its emotions), the first version of the Matrix was not a world that closely resembled society as we knew it in 1999. Instead, it was a paradise in which humans were happy and free of suffering.

The trouble, however, is that this version of the Matrix didn't stick, and people saw through the ruse, rendering it redundant. That's when the machine race developed version 2.0. It seemed, as Smith lamented, that humans speak in the language of suffering and misery, and that without these qualities, the human condition is unrecognizable.

By every metric, AI is experiencing a monumental boom when you look at where the field once was. Startup funding surged from $670 million in 2011 to $72 billion a decade later, according to Statista. The biggest jump came during the COVID-19 pandemic, with funding rising from $35 billion the previous year. This has since tapered off, falling to $40 billion in 2023, but the money that's pouring into research and development (R&D) is surging.

But things weren't always so rosy. In fact, in the early 1990s, during the second AI winter, the term "artificial intelligence" was almost taboo, according to Klondike, and was replaced with other terms such as "advanced computing" instead. This is simply one turbulent period in a nearly 75-year history of the field, starting with Alan Turing in 1950, when he pondered whether a machine could imitate human intelligence in his paper 'Computing Machinery and Intelligence'.

In the years that followed, a lot of pioneering research was conducted, but this early momentum fell by the wayside during the first AI winter between 1974 and 1980, when issues including limited computing power prevented the field from advancing, and organizations like DARPA and national governments pulled funding from research projects.

Another boom in the 1980s, fuelled by the revival of neural networks, then collapsed once more into a bust, with the second winter spanning six years up to 1993 and thawing well into the 21st century. In the years that followed, scientists around the world slowly made progress once more as funding restarted and AI caught people's imagination again. But the research field itself was siloed, fragmented and disconnected, according to Pamela McCorduck, writing in 'Machines Who Think' (2004). Computer scientists were focusing on competing areas to solve niche problems with specific approaches.

As Klondike highlights, they also used terms such as "advanced computing" to label their work, though we might now refer to the tools and systems they built as early precursors to the AI systems we use today.

It wasn't until 1995, four years before The Matrix hit theaters, that the needle in AI research really moved in a significant way. But you could already see signs the winter was thawing, especially with the creation of the Loebner Prize, an annual competition created by Hugh Loebner in 1990.

Loebner was "an American millionaire who had given a lot of money" and "who became interested in the Turing test," according to the recipient of the prize in 1997, the late British computer scientist Yorick Wilks, speaking in an interview in 2019. Although the prize wasn't particularly large $2,000 initially it showed that interest in building AI agents was expanding, and that it was being taken seriously.

The first major development of the decade came when computer scientist Richard Wallace developed the chatbot ALICE, which stood for Artificial Linguistic Internet Computer Entity. Inspired by the famous ELIZA chatbot of the 1960s, the world's first major chatbot, ALICE, also known as Alicebot, was a natural language processing system that applied heuristic pattern matching to conversations with a human in order to provide responses. Wallace went on to win the Loebner Prize in 2000, 2001 and 2004 for creating and advancing this system, and a few years ago the New Yorker reported that ALICE was even the inspiration for the critically acclaimed 2013 sci-fi hit Her, according to director Spike Jonze.

Then, in 1997, AI hit a series of major milestones, starting with a showdown starring the reigning world chess champion and grandmaster Garry Kasparov, who in May that year went head to head in New York with the challenger of his life: a computing agent called 'Deep Blue' created by IBM. This was actually the second time Kasparov faced Deep Blue, after beating the first version of the system in Philadelphia the year before, but Deep Blue narrowly won the rematch by 3.5 to 2.5.

"This highly publicized match was the first time a reigning world chess champion lost to a computer and served as a huge step towards an artificially intelligent decision making program," wrote Rockwell Anyoha in a Harvard blog.

It did something "no machine had ever done before", according to IBM, delivering its victory through "brute force computing power" and for the entire world to see as it was indeed broadcast far and wide. It used 32 processors to evaluate 200 chess positions per second. I have to pay tribute, Kasparov said. The computer is far stronger than anybody expected.

Another major milestone was the creation of NaturallySpeaking by Dragon Systems in June 1997. This speech recognition software was the first universally accessible and affordable computer dictation system for PCs, if $695 (or $1,350 today) is your idea of affordable, that is. "This is only the first step, we have to do a lot more, but what we're building toward is humanizing computers, making them very natural to use, so yes, even more people can use them," said CEO Jim Baker in a news report from the time. Dragon licensed the software to big names including Microsoft and IBM, and it was later integrated into the Windows operating system, signaling much wider adoption.

A year later, researchers at MIT released Kismet, a "disembodied head with gremlin-like features" that learns about its environment "like a baby" and depends entirely "upon its benevolent carers to help it find out about the world", according to Duncan Graham-Rowe, writing in New Scientist at the time. Spearheaded by Cynthia Breazeal, this creation was one of the projects that fuelled MIT's AI research and secured its future. The machine could interact with humans and simulated emotions by changing its facial expression, its voice and its movements.

This contemporary resurgence also extended to the language people used. The taboo around "artificial intelligence" was disintegrating, and terms like "intelligent agents" began slipping their way into the lexicon of the time, wrote McCorduck in 'Machines Who Think'. Robotics, intelligent AI agents, machines surpassing the wit of man, and more: it was these ingredients that, in turn, fed into the thinking behind The Matrix and the thesis at its heart.

When The Matrix hit theaters, there was a real dichotomy between movie-goers and critics. It's fair to say that audiences loved the spectacle, to say the least, with the film taking $150 million at the US box office, while a string of publications stood in line to lambast the script and the ideas in the movie. "It's Special Effects 10, Screenplay 0," wrote Todd McCarthy in his review in Variety. The Miami Herald rated it two-and-a-half stars out of five.

Chronicle senior writer Bob Graham praised Joe Pantoliano (who plays Cypher) in his SFGate review, "but even he is eventually swamped by the hopeless muddle that 'The Matrix' becomes." Critics wondered why people were so desperate to see a movie that had been so widely slated, and the Guardian pondered whether it was sci-fi fans, "driven to a state of near-unbearable anticipation by endless hyping of The Phantom Menace, ready to gorge themselves on pretty much any computer graphics feast that came along?"

Veteran film director Quentin Tarantino, however, related more to the average audience member, sharing his experience in an interview with Amy Nicholson. "I remember the place was jam-packed and there was a real electricity in the air; it was really exciting," he said, speaking of his outing to watch the movie on the Friday night after it was released.

"Then this thought hit me, that was really kind of profound, and that was: it's easy to talk about 'The Matrix' now because we know the secret of 'The Matrix', but they didn't tell you any of that in any of the promotions in any of the big movie trailer or any of the TV spots. So we were really excited about this movie, but we really didn't know what we were going to see. We didn't really know what to expect; we did not know the mythology at all I mean, at all. We had to discover that."

The AI boom of today is largely centered around an old technology known as neural networks, despite the incredible advancements in generative AI tools, namely the large language models (LLMs) that have captured the imagination of businesses and people alike.

One of the most interesting developments is the number of people who are becoming increasingly convinced that these AI agents are conscious, or have agency, and can think or even feel for themselves. One startling example is a former Google engineer who claimed a chatbot the company was working on was sentient. Although this is widely understood not to be the case, it's a sign of the direction in which we're heading.

Elsewhere, despite impressive systems that can generate images, and now video thanks to OpenAI's Sora, these technologies still all rely on the principles of neural networks, which many in the field don't believe will lead to the sort of human-level AGI, let alone a superintelligence that can modify itself and build even more intelligent agents autonomously. The answer, according to Databricks CTO Matei Zaharia, is a compound AI system that uses LLMs as one component. It's an approach backed by Goertzel, the veteran computer scientist who is working on his own version of this compound system with the aim of creating a distributed, open-source AGI agent within the next few years. He suggests that humanity could build an AGI agent as soon as 2027.

There are so many reasons why The Matrix has remained relevant, from the fact it was a visual feast to the rich and layered parables one can draw between its world and ours.

Much of the backstory hasn't been a part of that conversation in the 25 years since its cinematic release. But as we look to the future, we can begin to see how a similar world might be unfolding.

We know, for example, that the digital realm we occupy, largely through social media channels, is influencing people in harmful ways. AI has also been a force for tragedy around the world, with Amnesty International claiming Facebook's algorithms played a role in pouring petrol on ethnic violence in Myanmar. Companies like Meta are also attempting to build VR-powered alternate realities known as the metaverse, although these efforts are not generally taken seriously.

With generative AI now a proliferating technology, groundbreaking research found recently that more than half (57.1%) of the internet comprises AI-generated content.

Throw increasingly capable tools like Midjourney and now Sora into the mix, and to what extent can we know what is real and what is generated by machines, especially if the results look so lifelike and indistinguishable from human-generated content? The lack of sentience in the billions of machines around us is an obvious divergence from The Matrix. But that doesn't mean our own version of The Matrix is any less capable of manipulation.

Go here to read the rest:

As 'The Matrix' turns 25, the chilling artificial intelligence (AI) projection at its core isn't as outlandish as it once seemed - TechRadar

Read More..

R&D misdirection and the circuitous US path to artificial general intelligence – DataScienceCentral.com – Data Science Central

Image by CDD20 on Pixabay

Big tech has substantial influence over the direction of R&D in the US. According to the National Science Foundation and the Congressional Research Service, US business R&D spending dwarfs domestic Federal or state government spending on research and development. The most recent statistics I found include these:

Of course, business R&D spending focuses mainly on development: 76 percent, versus 14 percent on applied research and 7 percent on basic research.

Most state R&D spending is also development-related and, like the Federal variety, tends to be largely dedicated to sectors such as healthcare that don't directly impact system-level transformation needs.

Private sector R&D is quite concentrated in a handful of companies with dominant market shares and capitalizations. From a tech perspective, these companies might claim to be on the bleeding edge of innovation, but they are all protecting cash-cow legacy stacks and a legacy architecture mindset. The innovations they're promoting in any given year are the bright and shiny objects: the VR goggles, the smart watches, the cars and the rockets. Meanwhile, what could be substantive improvement in infrastructure receives a fraction of their investment. Public sector R&D hasn't been filling in these gaps.

Back in the 2010s, I had the opportunity to go to a few TTI/Vanguard conferences, because the firm I worked for had bought an annual seat/subscription.

I was eager to go. These were great conferences because the TTI/V board was passionate about truly advanced and promising emerging tech that could have a significant impact on large-scale systems. Board members, the kind of people who the Computer History Museum designates as CHM Fellows once a year, had already made their money and names for themselves. The conferences were a way for these folks to keep in touch with the latest advances and with each other, as well as a means of entertainment.

One year at a TTI/V event, a chemistry professor from Cornell gave a talk about the need for liquid, Olympic-swimming-pool-sized batteries. He was outspoken and full of vigor, pacing constantly across the stage, and he had a great point. Why weren't we storing the energy our power plants were generating? It was such a waste not to store it. The giant liquid batteries he described made sense as a solution to a major problem with our infrastructure.

This professor was complaining about the R&D funding he was applying for, but not receiving, to research the mammoth liquid battery idea. As a researcher, he'd looked into the R&D funding situation. He learned after a bit of digging that lots and lots of R&D funding was flowing into breast cancer research.

I immediately understood what he meant. Of course, breast cancer is a major, serious issue, one that deserves significant R&D funding. But the professor's assertion was that breast cancer researchers couldn't even spend all the money that was flowing in; there wasn't enough substantive research to soak up all those funds.

Why was so much money flowing into breast cancer research? I could only conclude that the Susan G. Komen for the Cure (SGKC) foundation was so successful at its high-profile marketing efforts that it was awash in donations.

As you may know, the SGKC is the nonprofit behind the pink "cure breast cancer" ribbons and the pink baseball bats, cleats and other gear major sports teams don from time to time. By this point, the SGKC owns that shade of pink. Give them credit: their marketing is quite clever. Most recently, a local TV station co-hosted an SGKC "More than Pink Walk" in the Orlando, FL metro area.

I don't begrudge SGKC their success. As I mentioned, I actually admire their abilities. But I do bemoan the fact that other areas with equivalent or greater potential impact are so underfunded by comparison, and that public awareness here in the US is so low when it comes to the kind of broadly promising, systems-level R&D that the Cornell chemistry professor alluded to, and that we've been lacking.

By contrast, the European Union's R&D approach over the past two decades has resulted in many insightful and beneficial breakthroughs. For example, in 2005 the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland won a Future and Emerging Technologies (FET) Flagship grant from the European Commission to reverse engineer the brain, as well as funding from the Swiss government and private corporations.

For over 15 years, this Blue Brain Project has succeeded in a number of areas, including a novel algorithmic classification scheme, announced in 2019, for identifying and naming different neuron types, such as the pyramidal cells that act as antennae to collect information from other parts of the brain. This is work that benefits both healthcare and artificial intelligence.

Additionally, the open source tooling the Project has developed includes Blue Brain Nexus, a standards-based knowledge graph designed to simplify and scale data sharing and to enable the findability of various imagery and other media globally.

I hope the US, which has seemed to be relatively leaderless on the R&D front over the past decade, can emulate more visionary European efforts like this one from the Swiss in the near future.

Here is the original post:

R&D misdirection and the circuitous US path to artificial general intelligence - DataScienceCentral.com - Data Science Central

Read More..

Vitalik Buterin wants rollups to hit stage 1 decentralization by year-end – Cointelegraph

Ethereum co-founder Vitalik Buterin is proposing to raise the bar on what's considered a rollup in the Ethereum ecosystem and suggests developers should aim to get their decentralization efforts in order by the end of the year.

The comments came in his latest blog post on March 28, reflecting on the year ahead following Ethereum's latest Dencun upgrade, which significantly reduced transaction fees for layer-2 rollups.

Buterin noted that Ethereum was in the process of a decisive shift from a very rapid L1 progress era to an era where layer-1 progress will still be very significant.

He also said that Ethereum's scaling efforts have shifted from a zero-to-one problem to an incremental problem, as further scaling work will focus on increasing blob capacity and improving rollup efficiency.

He continued to state that the ecosystem's standards will need to become stricter, adding: "By the end of the year, I think our standards should increase and we should only treat a project as a rollup if it has actually reached at least stage 1."

Stage 1 is Buterin's classification of a layer 2's decentralization progress, whereby a network has advanced enough in terms of security and scaling but is not yet fully decentralized (which would be Stage 2).

He observed that only five of the layer-2 projects listed on L2Beat are at either Stage 1 or 2, and only Arbitrum is fully Ethereum Virtual Machine-compatible.

The next steps on the roadmap include implementing data availability sampling to increase blob capacity to 16MB per slot and optimizing layer-2 solutions through techniques such as data compression, optimistic execution and improved security.

"After this, we can cautiously move toward stage 2: a world where rollups truly are backed by code, and a security council can only intervene if the code provably disagrees with itself," he added.


Buterin said that further changes such as Verkle trees, single-slot finality and account abstraction are still significant, but they are not drastic to the same extent that proof of stake and sharding are.

Ethereum is currently at the Surge phase of its upgrade roadmap, with upgrades related to scalability by rollups and data sharding. The next phase, the Scourge, will have upgrades related to censorship resistance, decentralization and protocol risks from miner extractable value, or MEV.

Developers should design applications with a "2020s Ethereum" mindset, embracing layer-2 scaling, privacy, account abstraction and new forms of community membership proofs, he said, before concluding that Ethereum has upgraded from being just a financial ecosystem into a much more thorough independent decentralized tech stack.


The rest is here:

Vitalik Buterin wants rollups to hit stage 1 decentralization by year-end - Cointelegraph

Read More..

Everything Vitalik Buterin Said About Ethereum’s Future – BeInCrypto

Vitalik Buterin sheds light on how the Dencun hard fork improved the blockchain's scaling and efficiency.

Indeed, proto-danksharding marks a significant shift, reducing transaction fees for rollups by a staggering factor of over 100. According to Ethereum's co-founder, this development paves the way for a more scalable, cost-efficient ecosystem. It addresses long-standing concerns about blockchain bloat and high fees.

The Dencun hard fork signifies a pivotal transition for Ethereum. The blockchain's shift towards a Layer 2 (L2)-centric ecosystem reflects a forward-thinking approach to decentralization, with major applications transitioning from Layer 1 (L1) to L2.

This transition reimagines Ethereum's infrastructure to support a broader range of applications and improve user experience across the board.

Vitalik Buterin emphasizes the importance of separate data availability space. This is a novel concept that allows L2 projects like rollups to store data in a section of a block that is inaccessible to the Ethereum Virtual Machine (EVM). The new mechanism enables data to be broadcasted and verified separately from the block.

Therefore, it lays the groundwork for future scalability through data availability sampling. This method promises to dramatically expand Ethereum's data capacity without compromising security or requiring significant changes from users or developers.

"Because data space is not EVM-accessible, it can be broadcasted separately from a block and verified separately from a block. Eventually, it can be verified with a technology called data availability sampling, which allows each node to verify that the data was correctly published by only randomly checking a few small samples. Once this is implemented, the blob space could be greatly expanded; the eventual goal is 16 MB per slot (~1.33 MB per second)," Buterin wrote.
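A back-of-the-envelope sketch (not from Buterin's post, and ignoring the erasure-coding details of real designs) shows why random sampling is powerful: if a publisher withholds a fraction of the data, the probability that k independent samples all miss the withheld portion shrinks exponentially with k.

```python
# A simplified illustration of the sampling intuition behind data availability
# sampling. The withheld fraction is an arbitrary assumption; real designs use
# erasure coding so withholding any part effectively requires withholding ~half.
withheld_fraction = 0.25      # assume 25% of the blob data is missing

for k in (5, 10, 20, 30):
    p_undetected = (1 - withheld_fraction) ** k
    print(f"{k:2d} samples -> withholding escapes detection with p = {p_undetected:.4%}")
```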


The roadmap outlined by Buterin includes several critical areas for development. These include increasing blob capacity and enhancing L2 protocols to maximize data usage efficiency. The introduction of PeerDAS, a simplified version of data availability sampling, and EIP-7623 aims to further streamline Ethereum's capacity for handling transactions and data, marking an ongoing commitment to scalability and efficiency.

Buterin also addresses the need for improvements within the L2 protocols themselves, from optimizing data compression to enhancing security measures. These improvements are crucial for supporting Ethereum's growth and maintaining its position as a leading blockchain for decentralized applications.

"We no longer have any excuse. Up until a couple of years ago, we were setting ourselves a low standard, building applications that were clearly not usable at scale, as long as they worked as prototypes and were reasonably decentralized. Today, we have all the tools we'll need, and indeed most of the tools we'll ever have, to build applications that are simultaneously cypherpunk and user-friendly," Buterin emphasized.


The vision is for Ethereum to become a blockchain ecosystem that is capable of supporting a wide array of applications at scale and one that prioritizes user experience, security, and decentralization.


Read the original post:

Everything Vitalik Buterin Said About Ethereum's Future - BeInCrypto

Read More..

Vitalik Buterin’s South Korean Visit Has Nation’s Crypto Community Buzzing – Cryptonews

Last updated: March 31, 2024 19:30 EDT

Vitalik Buterin, the Ethereum co-founder, was spotted out and about in the South Korean tech hub city of Pangyo over the weekend, to the delight of the nation's crypto community.

Buterin was in South Korea to attend a variety of events, including an Ethereum-related conference.

But eagle-eyed crypto enthusiasts were stunned to see him walking the streets of Pangyo, a tech-focused town in Seongnam, Gyeonggi Province and working on a laptop in a cafe.

The Ethereum co-founder delivered a keynote speech at ETH Seoul 2024. The gaming giant Neowiz hosted the event at its Pangyo headquarters on March 30.

However, Buterin appears to have extended his stay in the city. Pangyo is also home to the headquarters of tech giants like Ahnlab and Naver.

The IT-focused conglomerates Kakao, Samsung, and SK also have considerable Pangyo-based operations.

South Korean crypto enthusiasts flocked to social media sites like X to claim they had seen Buterin in Pangyo on March 31.

One social media user posted a photograph of Buterin apparently working on a laptop in a Pangyo coffee shop.

Social media users pointed out that there were four or five cups on Buterin's table, indicating perhaps that he had been working at the same cafe for a considerable period of time.

Another posted a video of Buterin walking on a Pangyo street, taking a glance at his cell phone while walking.

The posts and other sightings caused a stir, with some commenters noting that Buterin looked like a normal resident of the city.

The Ethereum co-founder's most recent stay in South Korea may be his longest yet. Buterin also attended a web3-related event named BUIDL Asia 2024 in the Songpa District of Seoul on March 27, Maeil Kyungjae reported.

Ethereum (ETH) enjoys considerable popularity in South Korea, where even senior judges have declared they have ETH holdings.

Buterin visited the nation in 2019, when he was invited to speak at a specially convened meeting at the National Assembly.

He took the opportunity to tell the nation's lawmakers that blockchain technology and cryptoassets were virtually inseparable.

At the time, the government had been trying to promote crypto-free blockchain policies. Companies and state organs were encouraged to develop their own coin-free blockchain networks, rather than make use of protocols like Ethereum.

However, the Ethereum co-founder told politicians:

Blockchain and cryptocurrencies are difficult to separate. [...] Public blockchains rely heavily on cryptoassets. As such, cryptoassets are absolutely necessary.

See more here:

Vitalik Buterin's South Korean Visit Has Nation's Crypto Community Buzzing - Cryptonews

Read More..

Vitalik Buterin Says Applications Built Today Need To Keep "2020s Ethereum" In Mind – CCN.com

Key Takeaways

Vitalik Buterin, co-founder of Ethereum, has proposed elevating the standards for what qualifies as a rollup within the Ethereum ecosystem, emphasizing that developers should prioritize their decentralization initiatives by the year's end.

The Dencun upgrade notably lowered transaction fees for layer-2 rollups, marking a significant development in the platform's evolution.

Buterin emphasized that applications being developed or updated today should be designed with the characteristics and advancements of 2020s Ethereum in mind.

He advises developers to adopt a 2020s Ethereum approach when creating applications, focusing on embracing layer-2 scaling solutions, enhancing privacy, implementing account abstraction, and incorporating new methods for proving community membership.

He said:

Ethereum has upgraded from being just a financial ecosystem into a much more thorough independent decentralized tech stack.

Buterin has observed that Ethereum is transitioning from a period of rapid progress at the Layer 1 (L1) level to an era where advancements in L1 will continue to be highly important.

He further explained that Ethereum's scaling efforts are evolving from a foundational zero-to-one challenge to a more incremental problem. This next phase of development will concentrate on enhancing the blockchain's blob capacity and boosting the efficiency of rollups, according to Buterin's insights.

These remarks were made in his most recent blog post on March 28, where he pondered the future after Ethereum's Dencun upgrade.

He further emphasized the need for stricter standards within the ecosystem, suggesting that by the year's end, projects should only be considered rollups if they have achieved at least stage 1.

He wrote:

By the end of the year, I think our standards should increase and we should only treat a project as a rollup if it has actually reached at least stage 1.

Stage 1, according to Buterin's classification, marks a level of decentralization progress for layer-2 networks, indicating they have made significant advancements in security and scaling but have not achieved full decentralization, which is defined as stage 2.

He pointed out that among the layer-2 projects featured on L2Beat, only five have reached either stage 1 or 2, with Arbitrum being the sole project fully compatible with the Ethereum Virtual Machine (EVM).

Looking forward, the roadmap includes steps such as adopting data availability sampling to boost blob capacity to 16MB per slot, and enhancing layer 2 solutions with methods like data compression, optimistic execution, and bolstering security measures.

He added:

After this, we can cautiously move toward stage 2: a world where rollups truly are backed by code, and a security council can only intervene if the code provably disagrees with itself.

Buterin mentioned that while further developments like Verkle trees, single-slot finality, and account abstraction remain important, they don't represent as dramatic a shift as the introduction of proof of stake and sharding did. He likened Ethereum's evolution in 2022 to a plane changing its engines mid-flight, and in 2023, to replacing its wings.

Currently, Ethereum is in The Surge stage of its upgrade pathway, focusing on enhancements in scalability through rollups and data sharding. The forthcoming stage, The Scourge, is set to concentrate on improvements in censorship resistance, decentralization, and mitigating protocol risks associated with Maximal Extractable Value (MEV).


Read the original here:

Vitalik Buterin Says Applications Built Today Need To Keep "2020s Ethereum" In Mind - CCN.com

Read More..

Vitalik Buterin Proposes Ethereum Staking Penalties – CCN.com

Key Takeaways

Ethereum co-founder Vitalik Buterin wants to penalize blockchain validators for correlated failures in an attempt to improve decentralization.

This strategy seeks to discourage centralized points of failure, which should help the network's security and resilience.

On March 27, Vitalik Buterin shared his insights on the Ethereum Research forum about fostering decentralized staking by introducing additional incentives against correlation. He proposed that, if multiple validators under the same control were to fail simultaneously, they should incur a more significant penalty than if their failures were uncorrelated.

He said:

The theory is that if you are a single large actor, any mistakes that you make would be more likely to be replicated across all identities that you control.

Buterin noted that validators operating within the same group, like a staking pool, tend to face correlated failures, often as a result of using shared infrastructure.

Buterin's proposal recommends imposing penalties on validators based on how much their failure rates deviate from the norm. If a significant number of validators fail at the same time, the penalty for each failing validator would increase.
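The article does not give the exact formula, but the general shape of the idea can be sketched as follows: scale each failing validator's penalty by how far the current slot's failure count exceeds the recent average, so correlated failures cost more than uncorrelated ones. This is a toy model under those stated assumptions, not the proposal's actual mechanism.

```python
# A toy sketch of correlation-aware penalties as described in the article: failing
# alongside many others (relative to the norm) costs more than failing alone.
# The formula and constants are illustrative assumptions, not Buterin's spec.
def correlation_penalty(current_failures: int, recent_average: float,
                        base_penalty: float = 1.0) -> float:
    """Penalty applied to each validator that failed in the current slot."""
    if recent_average <= 0:
        recent_average = 1e-9  # avoid division by zero in this toy model
    return base_penalty * max(1.0, current_failures / recent_average)

# Uncorrelated failure: roughly as many misses as usual -> baseline penalty.
print(correlation_penalty(current_failures=10, recent_average=10))   # 1.0
# Correlated failure: ten times the usual misses -> ten times the penalty.
print(correlation_penalty(current_failures=100, recent_average=10))  # 10.0
```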

Simulated outcomes of this method indicate that it might reduce the dominance of large Ethereum stakers, who are more susceptible to causing notable fluctuations in failure rates due to their failures being more likely to coincide.

The proposed strategy aims to bolster decentralization by encouraging the maintenance of distinct infrastructure for each validator and enhancing the economic viability of individual staking.

Buterin also laid out alternative penalty models, which could reduce larger validators' advantage over their smaller counterparts. He also suggested assessing the proposal's effects on both geographic distribution and the diversity of Ethereum clients in use.

Buterin did not address the idea of lowering the required amount for solo staking from the current 32 ETH, worth about $111,500.

Staking pools and liquid staking services like Lido continue to attract users by letting them stake smaller amounts of ETH. At the moment, Lido has $34 billion in ETH staked, representing about 30% of Ethereums total supply.

Ethereum enthusiasts and developers have raised concerns about Lido's significant market share and the potential for cartelization, in which disproportionate profits could be garnered compared to capital that is not pooled.


See original here:

Vitalik Buterin Proposes Ethereum Staking Penalties - CCN.com

Read More..

Ethereum Founder Vitalik Buterin Says Metaverse "More Brand Name Than Product" – CCN.com

Key Takeaways

Ethereum co-founder Vitalik Buterin has criticized the Metaverse, calling it poorly defined. Speaking at the BUIDL Asia conference in Seoul, Buterin said that people see the Metaverse as more of an idea or brand than a fully realized product or service.

Is his skepticism justified considering the market size and future projections?

During his speech, Buterin expressed concerns over the Metaverse's vague definition, suggesting it's often mistaken for virtual reality (VR) alone.

He said: "It's envisioned as a virtual universe where everyone can participate and is not owned by anyone."

Buterin emphasized that while VR is an essential component, the true essence of the Metaverse goes beyond just being a digital space. He claimed it was all about creating an accessible universal platform, not owned by any single entity.

He added: "It's frequently associated with virtual reality, where needs are simpler, akin to wanting a laptop without the laptop."

"It's super useful but not really a verse."

Currently, tech giants including Apple, Google, Microsoft, and Meta are working in the immersive reality and metaverse sectors. Indeed, Facebook rebranded its company as Meta to link itself to the metaverse.

Despite perhaps-warranted skepticism about the current state of the Metaverse, market projections tell a story of rapid growth. According to Statista, the Metaverse market is expected to skyrocket to $74.4 billion in 2024, with a compound annual growth rate (CAGR) of 37.73%. That growth would lead to a projected market volume of $507.8 billion by 2030. The United States is a crucial market for metaverse expansion, with a projected market volume of $23 billion in 2024. Meanwhile, Statista predicts user penetration will increase from 14.6% in 2024 to 39.7% by 2030.

Metaverse performance, as reflected by tokens tracked by CoinGecko, showcases a total market cap of $17.4 billion as of March 27, 2024.

Platforms like Render, FLOKI, Axie Infinity, The Sandbox, and Decentraland are among the top performers on the list.

Buterin's emphasis on a more defined and comprehensive Metaverse comes at a time when several companies are making launches in the field.

While the statements by the Ethereum co-founder suggest that the market is still in its early stages, market projections indicate that the space could quickly evolve. Then again, people said much the same thing in 2021, and their hopes have yet to come to fruition.


View original post here:

Ethereum Founder Vitalik Buterin Says Metaverse "More Brand Name Than Product" - CCN.com

Read More..