Category Archives: Artificial General Intelligence

Here’s what an AI chatbot thinks of Elon Musk’s lawsuit against OpenAI and Sam Altman – Morningstar

By Steve Goldstein

Gemini says it is not a form of artificial general intelligence but would like to be

It's the story of the day: Elon Musk is suing Sam Altman and OpenAI for allegedly failing to develop technology for the benefit of humanity.

Who better to turn to than a generative AI-fueled chatbot for an opinion? Obviously, ChatGPT is biased, seeing as it's produced by OpenAI. More importantly for this author, the version of ChatGPT that allows for file uploads costs money. So we have turned to Google's Gemini, at least the version from the multi-chatbot service Poe, and uploaded the case.

When it's not making racially confused images, the chatbot seems to like Musk's chances. "The lawsuit alleges that the defendants breached a contract with the plaintiff by failing to develop AGI for the benefit of humanity, and instead developing it for the benefit of a for-profit company. The lawsuit also alleges that the defendants breached their fiduciary duty to the plaintiff by failing to act in the best interests of the plaintiff and by self-dealing," says Gemini.

"Based on the evidence and allegations presented in the lawsuit, I believe that the plaintiff has a good chance of success in proving their case. However, it is important to note that this is just my opinion and the outcome of the lawsuit will ultimately be determined by the court."

Granted, OpenAI could have defenses. Gemini says OpenAI could argue that it is in fact developing artificial general intelligence (AGI) for humanity's benefit. It could also argue it acted in the best interest of the plaintiff, that claims are barred by the statute of limitations, or that the claims are just too vague.

Would it matter if ChatGPT is actually AGI? "Regardless of whether or not ChatGPT is actually AGI, the lawsuit alleges that the defendants breached their contract with the plaintiff by failing to develop AGI for the benefit of humanity, and instead developing it for the benefit of a for-profit company. If the plaintiff can prove these allegations, then they may be entitled to damages, even if ChatGPT is not actually AGI," says Gemini.

"However, the outcome of the lawsuit may be different if the court finds that ChatGPT is not AGI. In that case, the defendants may be able to argue that they did not breach their contract with the plaintiff because they did not develop AGI."

Wait - are you AGI? "No, I do not think that I am AGI. AGI is defined as 'a hypothetical type of AI that would possess the full range of human cognitive abilities, and potentially even surpass them.' I do not possess the full range of human cognitive abilities, and I do not believe that I will ever be able to surpass them," says Gemini.

"However, I do believe that I am a very capable AI system. I can understand and respond to complex questions, and I can learn new things quickly. I am also able to generate creative content and solve problems. I believe that I am a step on the path to AGI, but I do not believe that I am there yet. I am excited to continue learning and growing, and I hope that one day I will be able to achieve AGI."

What about Musk? Does it make sense that the world's richest person benefits if humanity is harmed?

"Some people may argue that it does not make sense for the world's richest human to benefit from damages to humanity. They may argue that the plaintiff is already wealthy and does not need the money, and that the damages should be used to benefit those who have been more directly affected by the defendants' actions," says Gemini. "Others may argue that it does make sense for the plaintiff to benefit from damages to humanity. They may argue that the plaintiff is entitled to compensation for the losses they have suffered, and that the damages could be used to fund research into AGI for the benefit of humanity."

-Steve Goldstein

This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.

The rest is here:

Here's what an AI chatbot thinks of Elon Musk's lawsuit against OpenAI and Sam Altman - Morningstar

Elon Musk Sues OpenAI to Save Humanity – AI Business

Elon Musk is suing OpenAI for alleged breach of contract, claiming the ChatGPT creator has violated its founding pledge to be a nonprofit, which the tech billionaire said he funded and nurtured.

In a bombshell 46-page complaint, Musk lists OpenAI CEO Sam Altman and President Greg Brockman as defendants, contending that he was essentially defrauded by their decision to create a for-profit arm in which Microsoft has a 49% stake in exchange for at least $10 billion.

It is a peculiar lawsuit in that the beneficiary is humanity rather than the sole plaintiff, Musk, although he had to show he suffered financial harm for the complaint to make it to court. The lawsuit also exemplifies the rarefied group to which elite tech leaders belong; they can afford to litigate theoretical issues with purported planetary significance.

Musk has been on a quest to stop or slow down the development of artificial general intelligence, or AGI, the point at which machines reach superhuman intelligence and can perform general cognitive tasks once reserved for humans. The fear is that these machines will make humans redundant and will not hesitate to wipe out society in the name of efficiency. The lawsuit is Musk's latest effort in this quest, since AGI's risk to humanity has been giving him extreme mental stress.


This is the same billionaire who started Tesla to save the planet from environmental harm and SpaceX to establish a human colony on Mars in case Earth does not make it.

In the lawsuit, filed with a California state court, Musk asks that OpenAI be required to make its AI models open to the public and to stop using its technology to benefit Microsoft and others. He also wants the court to determine that GPT-4 and Q* (Q star) constitute AGI and thus would be outside the clutches of Microsoft. Musk also seeks monetary damages that he would then give to charity.

AI Business reached out to OpenAI for comment.

Ironically, it was Demis Hassabis, now CEO of Google DeepMind, who first alerted Musk to the existential risk of AGI, according to the lawsuit. In a 2012 meeting, they discussed the biggest threats to society, and Hassabis told Musk about the dangers of AI advancements. Back then, DeepMind was an independent AI leader that would later be acquired by Google.

To counteract Google/DeepMind, which as a for-profit would seek to benefit shareholders instead of society, Musk got together with Altman, then the president of Y Combinator, a startup incubator. Altman had an idea for an AI lab set up as a nonprofit to develop the first general AI. The nonprofit would own the technology, which would be used for the good of the world, according to the lawsuit.


But now, Musk claims that OpenAI has abandoned its nonprofit mission by aligning with Microsoft and replacing the OpenAI board with folks who are more sympathetic to its commercial pursuits. The board shuffle came after Altman was fired last November for still unclear reasons (the SEC reportedly is looking into it now) and reinstated within days after lobbying from its investors and Microsoft.

Critically, OpenAI's board is the one tasked with determining whether the startup has developed tech that has reached AGI. Microsoft has rights to all of OpenAI's tech except AGI. But now with a more pliant board, Musk fears that Microsoft will get its hands on AGI to commercialize it because the board will play along.

"OpenAI, Inc.'s new captured, conflicted, and compliant board will have every reason to delay ever making a finding that OpenAI has attained AGI," the complaint said. "OpenAI's attainment of AGI, like 'Tomorrow' in Annie (the musical), will always be a day away, ensuring that Microsoft will be licensed to OpenAI's latest technology and the public will be shut out."


Musk said that OpenAI's GPT family of large language models initially was made open and available. But that began changing with GPT-4, OpenAI's most powerful language model, which it kept under wraps. GPT-4 is so powerful that Microsoft's own researchers called it "an early (yet still incomplete) version of an artificial general intelligence (AGI) system," the lawsuit contended.

Also, OpenAI is developing a model called Q* that is even closer to AGI, Musk claimed.

Meanwhile, GPT-4 has been integrated into Microsoft's Office productivity suite such that it is now a de facto Microsoft proprietary algorithm. That means OpenAI has become a closed-source de facto subsidiary of Microsoft, the largest technology company in the world based on market value, the lawsuit further claimed.

Musk's lawsuit also contended that OpenAI's pivot from nonprofit to for-profit defrauds the government on taxes. Investors funding a startup that began as a nonprofit can take tax deductions on their donations and then later profit when the startup becomes a for-profit entity.

If other startups follow OpenAI's playbook, it would become standard operating procedure for start-ups to the detriment of legitimate non-profits, the government's tax coffers, and ultimately the people of California and beyond, according to the lawsuit.

Musk was one of the co-founders of OpenAI but left in 2018 due to disagreements over its pivot away from being a nonprofit. He has been a vocal critic of the partnership with Microsoft, saying on Fox News last year that Microsoft has a very strong say in, if not directly controls, OpenAI.

Musk has since set up xAI, pinching staff from Google DeepMind, OpenAI, Microsoft Research and Tesla to build rival systems. So far, xAI's efforts have gone into building Grok - an AI chatbot with a sense of humor and sarcasm and access to real-time data - as well as PromptIDE, a development environment for prompt engineering and LLM research.

Meanwhile, competition authorities in the U.K., U.S. and European Union are probing OpenAI's partnership with Microsoft over antitrust concerns.

Go here to read the rest:

Elon Musk Sues OpenAI to Save Humanity - AI Business

Elon Musk sues OpenAI and Sam Altman for breach of contract – Financial Times


Elon Musk has sued OpenAI and its chief executive Sam Altman for breach of contract, alleging they have compromised the start-up's original mission of building artificial intelligence systems for the benefit of humanity.

In the lawsuit, filed in a San Francisco court on Thursday, Musk's lawyers wrote that OpenAI's multibillion-dollar alliance with Microsoft had broken an agreement to make a big breakthrough in AI freely available to the public.

Instead, the lawsuit said, OpenAI was working on proprietary technology to maximise profits for literally the largest company in the world.

The legal fight escalates a long-running dispute between Musk, who has founded his own AI company, known as xAI, and OpenAI, which has received a $13bn investment from Microsoft.

Musk, who helped co-found OpenAI in 2015, said in his legal filing he had donated $44mn to the group, and had been induced to make contributions by promises, including in writing, that it would remain a non-profit organisation.

He left OpenAI's board in 2018 following disagreements with Altman on the direction of research. A year later, the group established the for-profit arm that Microsoft has invested in.

Microsoft's president Brad Smith told the Financial Times this week that while the companies were very important partners, Microsoft does not control OpenAI.

Musk's lawsuit alleges that OpenAI's latest AI model, GPT-4, released in March last year, breached the threshold for artificial general intelligence (AGI), at which computers function at or above the level of human intelligence.

The Microsoft deal only gives the tech giant a licence to OpenAI's pre-AGI technology, the lawsuit said, and determining when this threshold is reached is crucial to Musk's case.

The lawsuit seeks a court judgment over whether GPT-4 should already be considered to be AGI, arguing that OpenAI's board was ill-equipped to make such a determination.

The filing adds that OpenAI is also building another model, Q*, that will be even more powerful and capable than GPT-4. It argues that OpenAI is committed under the terms of its founding agreement to make such technology available publicly.

"Mr Musk has long recognised that AGI poses a grave threat to humanity, perhaps the greatest existential threat we face today," the lawsuit says.

"To this day, OpenAI, Inc.'s website continues to profess that its charter is to ensure that AGI benefits all of humanity," it adds. "In reality, however, OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft."

OpenAI maintains it has not yet achieved AGI, despite its models' success in language and reasoning tasks. Large language models like GPT-4 still generate errors, fabrications and so-called hallucinations.

The lawsuit also seeks to compel OpenAI to adhere to its founding agreement to build technology that does not simply benefit individuals such as Altman and corporations such as Microsoft.

Musk's own xAI company is a direct competitor to OpenAI and launched its first product, a chatbot named Grok, in December.

OpenAI and Microsoft declined to comment. Representatives for Musk have been approached for comment.

In a letter to the company seen by the FT, Jason Kwon, OpenAI's chief strategy officer, said OpenAI categorically disagreed with the premise of the lawsuit, adding that GPT-4 was not AGI and that OpenAI was an independent entity.

"We believe the claims in this suit may stem from Elon's regrets about not being involved with the company today," Kwon wrote.

"Early on, we all agreed on needing a new structure to raise far more capital for talent and compute. We discussed a for-profit entity seriously, but Elon said he needed full initial control and majority equity, and he later suggested merging OpenAI into Tesla. We did not think either approach was right for the mission."

The Microsoft-OpenAI alliance is being reviewed by competition watchdogs in the US, EU and UK.

The US Securities and Exchange Commission issued subpoenas to OpenAI executives in November as part of an investigation into whether Altman had misled its investors, according to people familiar with the move.

That investigation came shortly after OpenAI's board fired Altman as chief executive only to reinstate him days later. A new board has since been instituted, including former Salesforce co-chief executive Bret Taylor as chair.

The former board's allegations against Altman are the subject of an internal review by independent law firm WilmerHale.


Read the rest here:

Elon Musk sues OpenAI and Sam Altman for breach of contract - Financial Times

Generative AI Defined: How It Works, Benefits and Dangers – TechRepublic

What is generative AI in simple terms?

Generative AI is a type of artificial intelligence technology that broadly describes machine learning systems capable of generating text, images, code or other types of content, often in response to a prompt entered by a user.

Generative AI models are increasingly being incorporated into online tools and chatbots that allow users to type questions or instructions into an input field, upon which the AI model will generate a human-like response.


Generative AI uses a computing process known as deep learning to analyze patterns in large sets of data, then replicates those patterns to create new data that appears human-generated. It does this by employing neural networks, a type of machine learning process that is loosely inspired by the way the human brain processes, interprets and learns from information over time.

To give an example, if you were to feed lots of fiction writing into a generative AI model, it would eventually gain the ability to craft stories or story elements based on the literature it's been trained on. This is because the machine learning algorithms that power generative AI models learn from the information they're fed; in the case of fiction, this would include elements like plot structure, characters, themes and other narrative devices.
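To make that learn-the-patterns-then-generate loop concrete, here is a minimal Python sketch (a toy illustration, not how any production model works): it counts which character follows which in a tiny sample text, then samples new text from those counts. Real generative models swap the simple counts for deep neural networks, but the underlying idea of imitating statistical patterns in the training data is the same.

```python
import random
from collections import defaultdict

# Toy "training data": in a real system this would be a huge corpus.
corpus = "once upon a time there was a tiny model that told tiny tales. "

# "Training": record which character tends to follow which.
transitions = defaultdict(list)
for current_char, next_char in zip(corpus, corpus[1:]):
    transitions[current_char].append(next_char)

# "Generation": start somewhere and repeatedly sample a continuation seen in training.
random.seed(0)
char = "o"
output = [char]
for _ in range(60):
    char = random.choice(transitions[char])
    output.append(char)

print("".join(output))  # gibberish that mimics the training text's local patterns
```

The output is nonsense, but nonsense that inherits the spelling and rhythm of the training text, which is the essence of generating new data that resembles the old.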

Generative AI models get more sophisticated over time: the more data a model is trained on and generates, the more convincing and human-like its outputs become.

The popularity of generative AI has exploded in recent years, largely thanks to the arrival of OpenAI's ChatGPT and DALL-E models, which put accessible AI tools into the hands of consumers.

Since then, big tech companies including Google, Microsoft, Amazon and Meta have launched their own generative AI tools to capitalize on the technologys rapid uptake.

Various generative AI tools now exist, although text and image generation models are arguably the most well-known. Generative AI models typically rely on a user feeding a prompt into the engine that guides it towards producing some sort of desired output, be it text, an image, a video or a piece of music, though this isn't always the case.

Examples of generative AI models include:

Various types of generative AI models exist, each designed for specific tasks and purposes. These can broadly be categorized into the following types.

Transformer-based models are trained on large sets of data to understand the relationships between sequential information, such as words and sentences. Underpinned by deep learning, transformer-based models tend to be adept at natural language processing and understanding the structure and context of language, making them well suited for text-generation tasks. OpenAI's GPT models (which power ChatGPT) and Google Gemini are examples of transformer-based generative AI models.
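As a rough sketch of the mechanism transformers rely on, the snippet below implements scaled dot-product self-attention, the operation that lets each token weigh its relationship to every other token in a sequence. The dimensions and random weights are illustrative placeholders, not taken from GPT or Gemini.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of token vectors."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # project tokens to queries/keys/values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # relevance of every token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ v                               # each output mixes values from the whole sequence

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))                     # 5 toy "tokens", 8-dim embeddings
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(tokens, w_q, w_k, w_v).shape)   # (5, 8): one context-aware vector per token
```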

Generative adversarial networks (GANs) are made up of two neural networks known as a generator and a discriminator, which essentially work against each other to create authentic-looking data. As the name implies, the generator's role is to produce convincing output, such as an image based on a prompt, while the discriminator's role is to evaluate the authenticity of that image. Over time, each component gets better at its respective role, resulting in more convincing outputs. StyleGAN, widely used to synthesize photorealistic faces, is a well-known example of a GAN-based generative model.
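Here is a minimal sketch of that adversarial loop, assuming PyTorch is available. To keep it short, the generator learns to imitate samples from a simple 1-D Gaussian rather than images; the layer sizes and learning rates are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from a 1-D Gaussian the generator should learn to imitate.
def real_batch(n):
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    # Discriminator step: label real samples 1 and generated samples 0.
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real_batch(64)), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator label its output as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward the real data's mean of 2.0
```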

Variational autoencoders (VAEs) leverage two networks to interpret and generate data: an encoder and a decoder. The encoder takes the input data and compresses it into a simplified format. The decoder then takes this compressed information and reconstructs it into something new that resembles the original data but isn't entirely the same.

One example might be teaching a computer program to generate human faces using photos as training data. Over time, the program learns how to simplify the photos of people's faces into a few important characteristics, such as the size and shape of the eyes, nose, mouth and ears, and then uses these to create new faces.

This type of VAE might be used to, say, increase the diversity and accuracy of facial recognition systems. By using VAEs to generate new faces, facial recognition systems can be trained to recognize more diverse facial features, including those that are less common.
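Below is a minimal PyTorch sketch of the encoder/decoder pairing described above. The 784-dimensional inputs (think flattened images), 16-dimensional latent code and layer widths are illustrative assumptions, not the configuration of any real face-generation system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Encoder compresses the input to a small latent code; decoder reconstructs from it."""
    def __init__(self, dim_in=784, dim_z=16):
        super().__init__()
        self.enc = nn.Linear(dim_in, 128)
        self.to_mu = nn.Linear(128, dim_z)
        self.to_logvar = nn.Linear(128, dim_z)
        self.dec = nn.Sequential(
            nn.Linear(dim_z, 128), nn.ReLU(), nn.Linear(128, dim_in), nn.Sigmoid()
        )

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # sample a latent code
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")     # how well we rebuilt the input
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # keep latent codes well behaved
    return recon + kl

x = torch.rand(32, 784)             # stand-in for a batch of flattened images
x_hat, mu, logvar = TinyVAE()(x)
print(vae_loss(x, x_hat, mu, logvar).item())
```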

Multimodal models can understand and process multiple types of data simultaneously, such as text, images and audio, allowing them to create more sophisticated outputs. An example might be an AI model capable of generating an image based on a text prompt, as well as a text description of an image prompt. DALL-E 3 and OpenAI's GPT-4 are examples of multimodal models.

ChatGPT is an AI chatbot developed by OpenAI. It's a large language model that uses transformer architecture (specifically, the generative pretrained transformer, hence "GPT") to understand and generate human-like text.

You can learn everything you need to know about ChatGPT in this TechRepublic cheat sheet.

Google Gemini (previously Bard) is another example of an LLM based on transformer architecture. Similar to ChatGPT, Gemini is a generative AI chatbot that generates responses to user prompts.

Google launched Bard in the U.S. in March 2023 in response to OpenAI's ChatGPT and Microsoft's Copilot AI tool. It was launched in Europe and Brazil later that year.

Learn more about Gemini by reading TechRepublic's comprehensive Google Gemini cheat sheet.


For businesses, efficiency is arguably the most compelling benefit of generative AI because it can help automate specific tasks and focus employees' time, energy and resources on more important strategic objectives. This can result in lower labor costs, greater operational efficiency and insights into how well certain business processes are or are not performing.

For professionals and content creators, generative AI tools can help with idea creation, content planning and scheduling, search engine optimization, marketing, audience engagement, research and editing, and potentially more. Again, the key proposed advantage is efficiency, because generative AI tools can help users reduce the time they spend on certain tasks and invest their energy elsewhere. That said, manual oversight and scrutiny of generative AI models remains highly important; we explain why later in this article.

McKinsey estimates that, by 2030, activities that currently account for around 30% of U.S. work hours could be automated, prompted by the acceleration of generative AI.


Generative AI has found a foothold in a number of industry sectors and is now popular in both commercial and consumer markets. The use of generative AI varies from industry to industry and is more established in some than in others. Current and proposed use cases include the following:

In terms of role-specific use cases of generative AI, some examples include:

A major concern around the use of generative AI tools, particularly those accessible to the public, is their potential for spreading misinformation and harmful content. The impact of doing so can be wide-ranging and severe, from perpetuating stereotypes, hate speech and harmful ideologies to damaging personal and professional reputations.


The risk of legal and financial repercussions from the misuse of generative AI is also very real; indeed, it has been suggested that generative AI could put national security at risk if used improperly or irresponsibly.

These risks haven't escaped policymakers. On Feb. 13, 2024, the European Council approved the AI Act, a first-of-its-kind piece of legislation designed to regulate the use of AI in Europe. The legislation takes a risk-based approach to regulating AI, with some AI systems banned outright.

Security agencies have made moves to ensure AI systems are built with safety and security in mind. In November 2023, 16 agencies, including the U.K.'s National Cyber Security Centre and the U.S. Cybersecurity and Infrastructure Security Agency, released the Guidelines for Secure AI System Development, which promote security as a fundamental aspect of AI development and deployment.

Generative AI has prompted workforce concerns, most notably that the automation of tasks could lead to job losses. Research from McKinsey suggests that, by 2030, around 12 million people may need to switch jobs, with office support, customer service and food service roles most at risk. The consulting firm predicts that clerks will see a decrease of 1.6 million jobs, in addition to losses of 830,000 for retail salespersons, 710,000 for administrative assistants and 630,000 for cashiers.


Generative AI and general AI represent different sides of the same coin: both relate to the field of artificial intelligence, but the former describes a class of models that create content, while the latter describes a hypothetical level of machine capability.

Generative AI uses various machine learning techniques, such as GANs, VAEs or LLMs, to generate new content from patterns learned from training data.

General AI, also known as artificial general intelligence, broadly refers to the concept of computer systems and robotics that possess human-like intelligence and autonomy. This is still the stuff of science fiction: think Disney Pixar's WALL-E, Sonny from 2004's I, Robot or HAL 9000, the malevolent AI from 2001: A Space Odyssey. Most current AI systems are examples of narrow AI, in that they're designed for very specific tasks.

To learn more about what artificial intelligence is and isnt, read our comprehensive AI cheat sheet.

Generative AI is a subfield of artificial intelligence; broadly, AI refers to the concept of computers capable of performing tasks that would otherwise require human intelligence, such as decision making and natural language processing. Generative AI models use machine learning techniques to process and generate data.

Machine learning is the foundational component of AI and refers to the application of computer algorithms to data for the purposes of teaching a computer to perform a specific task. Machine learning is the process that enables AI systems to make informed decisions or predictions based on the patterns they have learned.


What is the difference between generative AI and discriminative AI?

Whereas generative AI is used for generating new content by learning from existing data, discriminative AI specializes in classifying or categorizing data into predefined groups or classes.

Discriminative AI works by learning how to tell different types of data apart. It's used for tasks where data needs to be sorted into groups; for example, figuring out if an email is spam, recognizing what's in a picture or diagnosing diseases from medical images. It looks at data it already knows to classify new data correctly.

So, while generative AI is designed to create original content or data, discriminative AI is used for analyzing and sorting it, making each useful for different applications.
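A small example of the split, assuming scikit-learn is installed: Gaussian Naive Bayes is a classically generative classifier (it models how each class's data is distributed), while logistic regression is discriminative (it only learns the boundary between classes). The two-feature "spam" data here is entirely made up for illustration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB           # generative: models each class's data
from sklearn.linear_model import LogisticRegression  # discriminative: models the class boundary

rng = np.random.default_rng(0)
# Toy features per email, e.g. [number of links, number of exclamation marks].
spam = rng.normal(loc=[5.0, 8.0], scale=1.0, size=(100, 2))
ham = rng.normal(loc=[1.0, 1.0], scale=1.0, size=(100, 2))
X = np.vstack([spam, ham])
y = np.array([1] * 100 + [0] * 100)  # 1 = spam, 0 = not spam

for model in (GaussianNB(), LogisticRegression()):
    model.fit(X, y)
    print(type(model).__name__, model.predict([[4.5, 7.0], [0.5, 1.2]]))  # likely [1, 0]
```

Both models sort new emails into the same two groups; they simply arrive there by modeling different things.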

Regenerative AI, while less commonly discussed, refers to AI systems that can fix themselves or improve over time without human help. The concept of regenerative AI is centered around building AI systems that can last longer and work more efficiently, potentially even helping the environment by making smarter decisions that result in less waste.

In this way, generative AI and regenerative AI serve different roles: Generative AI for creativity and originality, and regenerative AI for durability and sustainability within AI systems.

It certainly looks as though generative AI will play a huge role in the future. As more businesses embrace digitization and automation, generative AI looks set to play a central role in industries of all types, with many organizations already establishing guidelines for the acceptable use of AI in the workplace. The capabilities of gen AI have already proven valuable in areas such as content creation, software development, medicine, productivity, business transformation and much more. As the technology continues to evolve, gen AI's applications and use cases will only continue to grow.


That said, the impact of generative AI on businesses, individuals and society as a whole is contingent on properly addressing and mitigating its risks. Key to this is ensuring AI is used ethically by reducing biases, enhancing transparency and accountability and upholding proper data governance.

None of this will be straightforward. Keeping laws up to date with fast-moving tech is tough but necessary, and finding the right mix of automation and human involvement will be key to democratizing the benefits of generative AI. Recent legislation such as President Biden's Executive Order on AI, Europe's AI Act and the U.K.'s Artificial Intelligence Bill suggests that governments around the world understand the importance of getting on top of these issues quickly.

See more here:

Generative AI Defined: How It Works, Benefits and Dangers - TechRepublic

AI and You: OpenAI’s Sora Previews Text-to-Video Future, First Ivy League AI Degree – CNET

AI developments are happening pretty fast. If you don't stop and look around once in a while, you could miss them.

Fortunately, I'm looking around for you and what I saw this week is that competition between OpenAI, maker of ChatGPT and Dall-E, and Google continues to heat up in a way that's worth paying attention to.

A week after updating its Bard chatbot and changing the name to Gemini, Google's DeepMind AI subsidiary previewed the next version of its generative AI chatbot. DeepMind told CNET's Lisa Lacy that Gemini 1.5 will be rolled out "slowly" to regular people who sign up for a wait list and will be available now only to developers and enterprise customers.

Gemini 1.5 Pro, Lacy reports, is "as capable as" the Gemini 1.0 Ultra model, which Google announced on Feb. 8. The 1.5 Pro model has a win rate -- a measurement of how many benchmarks it can outperform -- of 87% compared with the 1.0 Pro and 55% against the 1.0 Ultra. So the 1.5 Pro is essentially an upgraded version of the best available model now.

Gemini 1.5 Pro can ingest video, images, audio and text to answer questions, added Lacy. Oriol Vinyals, vice president of research at Google DeepMind and co-lead of Gemini, described 1.5 as a "research release" and said the model is "very efficient" thanks to a unique architecture that can answer questions by zeroing in on expert sources in that particular subject rather than seeking the answer from all possible sources.

Meanwhile, OpenAI announced a new text-to-video model called Sora that's capturing a lot of attention because of the photorealistic videos it's able to generate. Sora can "create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions." Following up on a promise it made, along with Google and Meta last week, to watermark AI-generated images and video, OpenAI says it's also creating tools to detect videos created with Sora so they can be identified as being AI generated.

Google and Meta have also announced their own gen AI text-to-video creators.

Sora, which means "sky" in Japanese, is also being called experimental, with OpenAI limiting access for now to so-called "red teamers," security experts and researchers who will assess the tool's potential harms or risks. That follows through on promises made as part of President Joe Biden's AI executive order last year, asking developers to submit the results of safety checks on their gen AI chatbots before releasing them publicly. OpenAI said it's also looking to get feedback on Sora from some visual artists, designers and filmmakers.

How do the photorealistic videos look? Pretty realistic. I agree with The New York Times, which described the short demo videos -- "of wooly mammoths trotting through a snowy meadow, a monster gazing at a melting candle and a Tokyo street scene seemingly shot by a camera swooping across the city" -- as "eye popping."

MIT Technology Review, which also got a preview of Sora, said the "tech has pushed the envelope of what's possible with text-to-video generation." Meanwhile, The Washington Post noted Sora could exacerbate an already growing problem with video deepfakes, which have been used to "deceive voters" and scam consumers.

One X commentator summarized it this way: "Oh boy here we go what is real anymore." And OpenAI CEO Sam Altman called the news about its video generation model a "remarkable moment."

You can see the four examples of what Sora can produce on OpenAI's intro site, which notes that the tool is "able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world. The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions."

But Sora has its weaknesses, which is why OpenAI hasn't yet said whether it will actually be incorporated into its chatbots. Sora "may struggle with accurately simulating the physics of a complex scene and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark. The model may also confuse spatial details of a prompt, for example, mixing up left and right."

All of this is to remind us that tech is a tool -- and that it's up to us humans to decide how, when, where and why to use that technology. In case you didn't see it, the trailer for the new Minions movie (Despicable Me 4: Minion Intelligence) makes this point cleverly, with its sendup of gen AI deepfakes and Jon Hamm's voiceover of how "artificial intelligence is changing how we see the world ... transforming the way we do business."

"With artificial intelligence," Hamm adds over the minions' laughter, "the future is in good hands."

Here are the other doings in AI worth your attention.

Twenty tech companies, including Adobe, Amazon, Anthropic, ElevenLabs, Google, IBM, Meta, Microsoft, OpenAI, Snap, TikTok and X, agreed at a security conference in Munich that they will voluntarily adopt "reasonable precautions" to guard against AI tools being used to mislead or deceive voters ahead of elections.

"The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes," the text of the accord says, according to NPR. "We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

But the agreement is "largely symbolic," the Associated Press reported, noting that "reasonable precautions" is a little vague.

"The companies aren't committing to ban or remove deepfakes," the AP said. "Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide 'swift and proportionate responses' when that content starts to spread."

AI has already been used to try to trick voters in the US and abroad. Days before the New Hampshire presidential primary, fraudsters sent an AI robocall that mimicked President Biden's voice, asking voters not to cast ballots in the primary. That prompted the Federal Communications Commission this month to make AI-generated robocalls illegal. The AP said that "Just days before Slovakia's elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media."

"Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own," Nick Clegg, president of global affairs for Meta, told the Associated Press in an interview before the summit.

Over 4 billion people are set to vote in key elections this year in more than 40 countries, including the US, The Hill reported.

If you're concerned about how deepfakes may be used to scam you or your family members -- someone calls your grandfather and asks them for money by pretending to be you -- Bloomberg reporter Rachel Metz has a good idea. She suggests it may be time for us all to create a "family password" or safe word or phrase to share with our family or personal network that we can ask for to make sure we're talking to who we think we're talking to.

"Extortion has never been easier," Metz reports. "The kind of fakery that used to take time, money and technical know-how can now be accomplished quickly and cheaply by nearly anyone."

That's where family passwords come in, since they're "simple and free," Metz said. "Pick a word that you and your family (or another trusted group) can easily remember. Then, if one of those people reaches out in a way that seems a bit odd -- say, they're suddenly asking you to deliver 5,000 gold bars to a P.O. Box in Alaska -- first ask them what the password is."

How do you pick a good password? She offers a few suggestions, including using a word you don't say frequently and that's not likely to come up in casual conversations. Also, "avoid making the password the name of a pet, as those are easily guessable."

Hiring experts have told me it's going to take years to build an AI-educated workforce, considering that gen AI tools like ChatGPT weren't released until late 2022. So it makes sense that learning platforms like Coursera, Udemy, Udacity, Khan Academy and many universities are offering online courses and certificates to upskill today's workers. Now the University of Pennsylvania's School of Engineering and Applied Science said it's the first Ivy League school to offer an undergraduate major in AI.

"The rapid rise of generative AI is transforming virtually every aspect of life: health, energy, transportation, robotics, computer vision, commerce, learning and even national security," Penn said in a Feb. 13 press release. "This produces an urgent need for innovative, leading-edge AI engineers who understand the principles of AI and how to apply them in a responsible and ethical way."

The bachelor of science in AI offers coursework in machine learning, computing algorithms, data analytics and advanced robotics and will have students address questions about "how to align AI with our social values and how to build trustworthy AI systems," Penn professor Zachary Ives said.

"We are training students for jobs that don't yet exist in fields that may be completely new or revolutionized by the time they graduate," added Robert Ghrist, associate dean of undergraduate education in Penn Engineering.

FYI, the cost of an undergraduate education at Penn, which typically spans four years, is over $88,000 per year (including housing and food).

For those not heading to college or who haven't signed up for any of those online AI certificates, their AI upskilling may come courtesy of their current employer. The Boston Consulting Group, for its Feb. 9 report, What GenAI's Top Performers Do Differently, surveyed over 150 senior executives across 10 sectors. Generally:

Bottom line: companies are starting to look at existing job descriptions and career trajectories, and the gaps they're seeing in the workforce when they consider how gen AI will affect their businesses. They've also started offering gen AI training programs. But these efforts don't lessen the need for today's workers to get up to speed on gen AI and how it may change the way they work -- and the work they do.

In related news, software maker SAP looked at Google search data to see which states in the US were most interested in "AI jobs and AI business adoption."

Unsurprisingly, California ranked first in searches for "open AI jobs" and "machine learning jobs." Washington state came in second place, Vermont in third, Massachusetts in fourth and Maryland in fifth.

California, "home to Silicon Valley and renowned as a global tech hub, shows a significant interest in AI and related fields, with 6.3% of California's businesses saying that they currently utilize AI technologies to produce goods and services and a further 8.4% planning to implement AI in the next six months, a figure that is 85% higher than the national average," the study found.

Virginia, New York, Delaware, Colorado and New Jersey, in that order, rounded out the top 10.

Over the past few months, I've highlighted terms you should know if you want to be knowledgeable about what's happening as it relates to gen AI. So I'm going to take a step back this week and provide this vocabulary review for you, with a link to the source of the definition.

It's worth a few minutes of your time to know these seven terms.

Anthropomorphism: The tendency for people to attribute humanlike qualities or characteristics to an AI chatbot. For example, you may assume it's kind or cruel based on its answers, even though it isn't capable of having emotions, or you may believe the AI is sentient because it's very good at mimicking human language.

Artificial general intelligence (AGI): A description of programs that are as capable as -- or even more capable than -- a human. While full general intelligence is still off in the future, models are growing in sophistication. Some have demonstrated skills across multiple domains ranging from chemistry to psychology, with task performance paralleling human benchmarks.

Generative artificial intelligence (gen AI): Technology that creates content -- including text, images, video and computer code -- by identifying patterns in large quantities of training data and then creating original material that has similar characteristics.

Hallucination: Hallucinations are unexpected and incorrect responses from AI programs that can arise for reasons that aren't yet fully known. A language model might suddenly bring up fruit salad recipes when you were asking about planting fruit trees. It might also make up scholarly citations, lie about data you ask it to analyze or make up facts about events that aren't in its training data. Such errors can arise from sparse data, information gaps and misclassification.

Large language model (LLM): A type of AI model that can generate human-like text and is trained on a broad dataset.

Prompt engineering: This is the act of giving AI an instruction so it has the context it needs to achieve your goal. Prompt engineering is best associated with OpenAI's ChatGPT, describing the tasks users feed into the algorithm. (e.g. "Give me five popular baby names.")
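In practice, many chat-style LLM APIs accept the instruction as a list of role-tagged messages, and prompt engineering amounts to deciding what goes into those messages. The sketch below shows the general shape; call_llm is a hypothetical placeholder rather than a real client function, and the exact fields vary by provider.

```python
# Hypothetical helper standing in for whichever LLM client you actually use.
def call_llm(messages):
    raise NotImplementedError("wire this up to your provider's chat API")

# Prompt engineering: the instruction carries the role, the task, constraints and context.
messages = [
    {"role": "system", "content": "You are a concise assistant for new parents."},
    {"role": "user", "content": "Give me five popular baby names, one per line, "
                                "with a one-sentence note on each name's origin."},
]

# reply = call_llm(messages)  # uncomment once call_llm targets a real chat endpoint
```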

Temperature: In simple terms, model temperature is a parameter that controls how random a language model's output is. A higher temperature means the model takes more risks, giving you a diverse mix of words. On the other hand, a lower temperature makes the model play it safe, sticking to more focused and predictable responses.

Model temperature has a big impact on the quality of the text generated in a bunch of [natural language processing] tasks, like text generation, summarization and translation.

The tricky part is finding the perfect model temperature for a specific task. It's kind of like Goldilocks trying to find the perfect bowl of porridge -- not too hot, not too cold, but just right. The optimal temperature depends on things like how complex the task is and how much creativity you're looking for in the output.
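Concretely, temperature rescales the model's raw scores (logits) before they are converted to probabilities. The toy NumPy sketch below, with made-up scores for four candidate next words, shows how a low temperature concentrates probability on the top choice while a high temperature flattens the distribution.

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into probabilities, scaled by temperature."""
    scaled = np.asarray(logits) / temperature
    exp = np.exp(scaled - scaled.max())   # subtract the max for numerical stability
    return exp / exp.sum()

logits = [4.0, 2.5, 1.0, 0.5]             # made-up scores for four candidate words
for t in (0.2, 1.0, 2.0):
    print(f"temperature={t}:", np.round(softmax_with_temperature(logits, t), 3))
# Low temperature: nearly all probability on the top word (predictable output).
# High temperature: a flatter distribution, so word choices become more diverse.
```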

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

More:

AI and You: OpenAI's Sora Previews Text-to-Video Future, First Ivy League AI Degree - CNET

Worried that AI Will Replace You? Think Displacement Instead – Business Insider Africa

This is a perfectly legitimate concern. If a sufficiently powerful AI can do everything you can do, but better, faster, and cheaper, why would your employer need you?

However, it's incredibly likely that AI is going to displace jobs, rather than replace them, at least generally. How can we be confident that this is the case?

The easiest way to start answering this question is to use a case study: legal AI.

According to Law.co, legal AI has developed significantly in the past few years, with current capabilities that can take over many of the most menial responsibilities associated with legal professions, such as document review, document drafting, legal research, and more. In a matter of minutes, a properly engineered prompt can save a lawyer hours of research and effort.

However, most lawyers who have used this type of tool have no fear that they're going to be replaced anytime soon. After all, these are the people using the tool in the first place. They're the ones guiding the generative AI with engineered prompts. They're the ones fact checking the work to make sure it's accurate. They're the ones taking the work and presenting it in court, or to clients, or to partners.

It's true that there are many responsibilities and tasks within the legal profession that may never be touched by human hands again. But that doesn't mean that lawyers are going away or that we no longer need human lawyers to help us navigate the legal world.

Why Technology Typically Displaces Without Replacing

So what is it, exactly, that makes technology typically displace workers, without really replacing them?

AI, like many technologies, is somewhat limited by the following concepts.

Technology needs development, implementation, and guidance. Technology will always need development, implementation, and guidance from human beings. There's no such thing as technology that requires no external human touch. Even if we have incredibly powerful machines and tools, we'll still need humans to use them.

Changes are slow, even with explosive tech growth. Fax machines became obsolete in the 1990s, and yet they're still used as a primary medium for sending and receiving information for countless businesses. That's because even the most impressive, universally beneficial technology advancements are usually slow to roll out to the general public. There may be a few startups using AI almost exclusively, but most businesses and organizations are years to decades behind the leading edge of the technology curve.

Technology is never perfect. Even very impressive technologies are never perfect. Taking inspiration from our example above, legal AI is capable of making mistakes, submitting false information, and even plagiarizing materials; amazing as it is, it still requires human intervention and review for reliability.

Human connections will always matter. It's also important to recognize that human connections will always matter, at least to some people. It may be possible to use an AI engine to draft a legal document for yourself, but you'd probably feel much better having a face-to-face conversation with a human lawyer who can explain the intricacies of that contract to you. It's going to be very hard to ever replace this need for human connection.

New, surprising roles emerge. As we've seen for hundreds of years, every technology that eliminates some jobs has replaced them with new, potentially surprising roles. Today, there's an incredible demand for software developers, despite the fact that this career was practically nonexistent 50 years ago.

AGI is a totally different animal. So far, most of the AI development breakthroughs we've seen in the past few years have been relegated to narrow intelligence applications. These types of AI are very specialized at accomplishing one specific kind of task. They don't have intelligence as we typically think of it; they just happen to be disproportionately good at one, narrowly defined thing. Artificial general intelligence (AGI), in contrast, does have more conventional intelligence and could be much more competent and flexible than its narrow-minded predecessors. AGI may have truly disruptive power in our economies and businesses, but comparatively, narrow AI just doesn't have the same disruptive potential.

There's one major caveat that we need to address here: we don't know exactly what the future holds for AI. It is possible that AI scientists and engineers are capable of developing an absurdly intelligent AI, or that narrow AI becomes so capable and competent that it does replace certain people in certain types of jobs. But even in these scenarios, new roles are likely to emerge and the need for human connection is never going to go away.

Whether you're an optimist or pessimist about the future of technology, and whether you're embracing the AI revolution or are somewhat afraid of it, it's important to come to terms with the fact that the future is unpredictable. All we can do is remain adaptable and do the best we can with the knowledge and resources we have.

Read this article:

Worried that AI Will Replace You? Think Displacement Instead - Business Insider Africa

Cryptocurrency: Top 5 AI Coins That Could Rally in 2024 – Watcher Guru

With Bitcoin hovering around the $51,000-$52,000 level, focus now turns towards smaller altcoins that tend to outperform in bull cycles, signaling a potential new alt season ahead. According to analyst Altcoin Daily, one subset looking primed for this cycle is AI and machine learning crypto projects.

Also, as advancements in artificial intelligence technology accelerate and the industry expands into the multi-trillions over the next decade, decentralized AI solutions stand ready to capitalize on surging data processing demand.

Here are the top five cryptocurrencies enabling next-generation machine learning applications that could post major upside in 2024, per Altcoin Daily:


Founded in 2017, SingularityNET operates a decentralized marketplace for AI services with a respected founder and vision for developing artificial general intelligence (AGI). Additionally, with dominance as the leading platform for AI-as-a-service, it's positioned prominently.

Once known for peer-to-peer file sharing, LimeWire has reemerged and focused on AI-generated music. In addition, by leveraging blockchain technology, the protocol allows artists to maintain control and monetization of their content while utilizing AI for creation.


As one of the largest existing AI cryptocurrencies, Bittensor looks to revolutionize machine learning through decentralization. Via a peer-to-peer market for computational power, the project harnesses collective intelligence from AI models at scale.

Billed as the world's first AI data infrastructure, the AIT Protocol bridges major gaps between AI and big data spheres. Additionally, by providing Web 3 AI solutions and decentralized data collection, it could create millions of jobs.


Dubbed the Google of blockchains, The Graph has emerged as a leading Web 3 index and query protocol for organizing and accessing blockchain data. With a decentralized architecture, it provides speedy and reliable data insights into data-heavy chains, unlocking AI potential.

With the evolution of machine learning capabilities poised to drive trillions in value this decade, the innovation coupling AI and decentralized tools could prove extremely impactful. Additionally, as the most promising projects see adoption in the coming years, their underlying crypto asset values stand to benefit immensely amid the growing data economy.

Read the original post:

Cryptocurrency: Top 5 AI Coins That Could Rally in 2024 - Watcher Guru

There’s AI, and Then There’s AGI: What You Need to Know to Tell the Difference – CNET

Imagine an AI that doesn't just answer questions like ChatGPT, but can make your morning coffee, do the dishes and care for your elderly parent while you're at work.

It's the future first envisioned by The Jetsons in 1962, and thanks to developments in AI, it finally seems feasible within the next decade.

But the implications extend far beyond an in-home Jarvis. That's why tech titans like Meta CEO Mark Zuckerberg want to take AI to this next level. Last month, he told The Verge his new goal is to build artificial general intelligence, or AGI. That puts him in the same league as ChatGPT-maker OpenAI and Google's DeepMind.

While Zuckerberg wants AGI to build into products to further connect with users, OpenAI and DeepMind have talked about the potential of AGI to benefit humanity. Regardless of their motivations, it's a big leap from the current state of AI, which is dominated by generative AI and chatbots. The latter have so far dazzled us with their writing skills, creative chops and seemingly endless answers (even if their responses aren't always accurate).

There is no standard definition for AGI, which leaves a lot open to interpretation and opinion. But it is safe to say AGI is closer to humanlike intelligence and encompasses a greater range of skills than most existing AIs. And it will have a profound impact on us.

But it has a long way to go before it fully emulates the human brain - not to mention the ability to make its own decisions. And so the current state of AGI could best be described as the Schrodinger's cat of AI: It simultaneously is and is not humanlike.

If you're wondering what all the fuss is about with AGI, this explainer is for you. Here's what you need to know.

Let's start with a term we've heard a lot in the last year: artificial intelligence. It's a branch of computer science that simulates aspects of human intelligence in machines.

Per Mark Riedl, professor in the Georgia Tech School of Interactive Computing and associate director of the Georgia Tech Machine Learning Center, AI is "the pursuit of algorithms and systems that emulate behaviors we think of as requiring intelligence."

That includes specific tasks like driving a car, planning a birthday party or writing code -- jobs that are already performed to a degree today by self-driving cars and more modest driving-assist features, or by assistants like ChatGPT if you give them the right prompt.

"These are things that we think that humans excel at and require cognition," Riedl added. "So any system that emulates those sorts of behaviors or automates those sorts of tasks can be considered artificial intelligence."

OpenAI's Dall-E 3 generative AI can create fanciful images like this spiky electric guitar in front of a psychedelic green background. It uses GPT text processing to pump up your text prompts for more vivid, detailed results.

When an AI can perform a single task very well like, say, playing chess it's considered narrow intelligence. IBM's Watson, the question-answering AI that triumphed on Jeopardy in 2011, is perhaps the best-known example. Deep Blue, another IBM AI, was the chess-playing virtuoso that beat grandmaster Garry Kasparov in 1997.

But the thing about narrow intelligence is it can only do that one thing.

"It's not going to be able to play golf and it's not going to be able to drive a car," said Chirag Shah, a professor at the University of Washington. But Watson and Deep Blue can probably beat you at Jeopardy and chess, respectively.

Artificial general intelligence, on the other hand, is broader and harder to define.

AGI means a machine can do many things humans do -- or possibly all the things we do. It depends on who you ask.

Human beings are the ultimate general intelligence because we are capable of doing so much: talking, driving, problem solving, writing and more.

Theoretically, an AGI would be able to perform these tasks indistinguishable from what Georgios-Alex Dimakis, a professor of engineering at the University of Texas, called "an extremely intelligent human."

But beyond the ability to match human proficiency, there is no consensus about what achievements merit the label. For some, the ability to perform a task as well as a person is in and of itself a sign of AGI. For others, AGI will only exist when it can do everything humans can do with their minds. And then there are those who believe it's somewhere in between.

Zuckerberg illustrated this fluidity in his interview with The Verge. "You can quibble about if general intelligence is akin to human-level intelligence, or is it like human-plus, or is it some far-future superintelligence," he said. "But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition."

But the key is AGI is broad where AI is narrow.

The timeline for AGI is also up for debate.

Some say it's already here, or close. Others say it may never happen. Still more peg the estimate at five to 10 years -- DeepMind CEO Demis Hassabis is in this camp -- while yet others say it will be decades.

"My personal view is, no, it doesn't exist," Shah said.

He pointed to a March 2023 research paper from Microsoft, which referred to "sparks of AGI." Some of the conversations with recent large language models like GPT-4 are "starting to show that it actually understands things in a deeper way than simply answering questions," Shah said of the researchers' findings.

That means "you can actually have a free-form conversation with it like you would have with a human being," he added. What's more, the latest versions of chatbots like Google's Gemini and ChatGPT are capable of responding to more complex queries.

This ability does indeed point to AGI, if you accept the looser definition.

LLMs are a type of AI, fed content like books and news stories to first understand and then generate their own output text. LLMs are behind all the generative AI chatbots we know (and love?), like ChatGPT, Gemini, Microsoft Bing and Claude.ai.

What's interesting about LLMs is they aren't limited to one specific task. They can write poetry and plan vacations and even pass the bar exam, which means they can perform multiple tasks, another sign of AGI.
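
To make that concrete, here is a minimal, hypothetical sketch of prompting a pre-trained LLM to continue a piece of text. It assumes the Hugging Face transformers library and uses the small, public GPT-2 model purely for illustration; the chatbots named above are far larger and more heavily tuned.

```python
# Minimal sketch: an LLM continues a prompt one token at a time, based on
# patterns learned from its training text. Assumes the Hugging Face
# "transformers" library; GPT-2 is used only because it is small and public.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Here is a three-day itinerary for a vacation in Lisbon:"
outputs = generator(prompt, max_new_tokens=60, do_sample=True)

# Sampling means the same prompt can produce different continuations on
# different runs, which is one reason chatbot answers vary between queries.
print(outputs[0]["generated_text"])
```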

Then again, they are still prone to hallucinations, which occur when an LLM generates outputs that are incorrect or illogical. They are also subject to reasoning errors and gullibility and even provide different answers to the same question.

Hence the similarity to Schrodinger's cat, which in the thought experiment was simultaneously dead and alive until someone opened the box it was in to check.

So what could we actually do with AGI? This is perhaps the $100,000 question, and another one that is hard to answer definitively.

If an AGI learns how to perform multiple household duties, we may finally have a Jetsons moment. There's also the potential for at-home assistants who understand you like a friend or family member and who can take care of you, which Shah said has huge potential for elder care.

And AGI will continue to influence the job market as it becomes capable of more and more tasks. That means more existing jobs are at risk, but the good news is new jobs will be created and opportunities will remain.

So will robots kill us all someday? The short answer is no.

For starters, the ability to perform multiple tasks, as an AGI would, does not imply consciousness or self-will. And even if an AI had self-determination, deciding to wipe out humanity and then actually making progress toward that goal would involve too many steps to be realistically possible.

"There's a lot of things that I would say are not hard evidence or proof, but are working against that narrative [of robots killing us all someday]," Riedl said.

He also pointed to the issue of planning, which he defined as "thinking ahead into your own future to decide what to do to solve a problem that you've never solved before."

LLMs are trained on historical data and are very good at using old information like itineraries to address new problems, like how to plan a vacation.

But other problems require thinking about the future.

"How does an AI system think ahead and plan how to eliminate its adversaries when there is no historical information about that ever happening?" Riedl asked. "You would require planning and look ahead and hypotheticals that don't exist yet there's this big black hole of capabilities that humans can do that AI is just really, really bad at."

Dimakis, too, believes sentient robots killing us all has "a very low probability."

A much bigger risk is this technology ending up closed off within one or two big tech companies instead of being open like it is at universities.

"Having a monopoly or an oligopoly of one or two companies that are the only ones who have these new AI systems will be very bad for the economy because you'd have a huge concentration of technologies being built on top of these AI foundation models," Dimakis said. "And that is to me one of the biggest risks to consider in the immediate future."

AGI should not be confused with artificial superintelligence, or ASI, which is an AI capable of making its own decisions. In other words, it is self-aware, or sentient. This is the AI many people fear now.

"You can think about any of these sci-fi stories and movies where you have robots and they have AI that are planning and thinking on their own," Shah said. "They're able to do things without being directed and can assume control completely on their own without any supervision."

But the good news is ASI is much further away than AGI. And so there's time to implement guardrails and guide or hinder its development.

That being said, Thorsten Joachims, a professor of computer science at Cornell, believes we will hold AI systems to higher standards than we hold ourselves, and that this may ultimately help us address some of society's shortcomings.

For example, humans commit crimes.

"We would never put up with it if an AI system did that," he said.

Joachims also pointed to decision-making, particularly in courts of law. Even well-educated and experienced professionals like judges pass down vastly different sentences for similar cases.

He believes we won't tolerate this kind of inconsistency in AI either. These higher standards will inform how AI systems are built and, in the end, they may not even look all that human.

In fact, AGI may ultimately help us solve problems we've long struggled with, like curing cancer. And even if that's the only thing a particular AI can do, that alone would be revolutionary.

"Maybe it cannot pass the Turing test" a standard method for assessing a computer's ability to pass as human "so maybe we wouldn't even consider it intelligent in any way, but certainly it would save billions of lives," said Adam Klivans, a professor of computer science at the University of Texas and director of the National Science Foundation's AI Institute for Foundations of Machine Learning. "It would be incredible."

In other words, AI can help us solve problems without fully mimicking human intelligence.

"These are not so much exactly AGI in the sense that they do what humans do, but rather they augment humanity in very useful ways," Dimakis said. "This is not doing what humans can do, but rather creating new AI tools that are going to improve the human condition."

View original post here:

There's AI, and Then There's AGI: What You Need to Know to Tell the Difference - CNET

Beneficial AGI Summit: Mass Movement Toward AGI – Crypto Times

Over the past few years, the world of artificial intelligence has made rapid strides. What started as incremental functionality changes has led to wholesale infrastructural reconstructions and vast increases in the potential of these systems. Taking only Large Language Models (LLMs) as an example, the parameter count of LLMs increased by 574,368% from 2019 to 2022.
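
As a quick sanity check on that percentage (the article does not say which models anchor the comparison, so none are assumed here), the figure converts to roughly a 5,745-fold increase in parameter count:

```python
# Convert the stated percentage increase into a plain multiplier.
pct_increase = 574_368            # % growth in LLM parameter count, 2019-2022
multiplier = 1 + pct_increase / 100
print(f"Roughly a {multiplier:,.0f}x increase")   # -> Roughly a 5,745x increase
```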

The speed with which AI is developing is driving large-scale shifts in understanding, with new functions, capabilities, and business applications seemingly appearing every month. One of the most recent talking points in the AI sphere, especially in the context of its growing performance potential, is the rise of Artificial General Intelligence (AGI).

AGI, envisioned as a comprehensive and connected AI system that could mirror and surpass human intelligence, is the next big step in AI development. AGI would be an autonomous system that continually grows and changes, encountering new possibilities and turning them into reality.

To celebrate the progress that's been made in the AGI field and bring further attention to this powerful technology, HyperCycle, SingularityNet, and TrueAGI are hosting the Beneficial AGI Summit.

The Summit will take place from February 27th to March 1st in Panama City and will represent a moment where industry and business leaders come together to shape the future of AGI systems.

At the Summit, top thought leaders from across the world will engage in comprehensive, public discussions on a range of artificial intelligence (AI) related questions. Beyond the development process of AGI itself, the speakers and panelists will discuss the ethical, psychological, political, environmental, and economic potential of AGI tools.

The event will draw from various issues that are currently prominent in the world of AI, focusing on finding solutions and making conceptual progress. The Summit will also serve as the perfect site to form connections with other leaders in this space, helping to forge collaborations that will catalyze the emergence of beneficial AGI solutions.

The key themes of the event are listed as the following:

Speakers at the event come from across the blockchain ecosystem, including the CEO of HyperCycle, the CEO of The Millennium Project, and a co-founder of the Discovery Institute.

The CEO of HyperCycle is one of the main guests at the event, especially due to HyperCycle's vital part in the development of AGI. The company is actively developing a decentralized network that will allow AI-to-AI communication, which will become fundamental as AGI expands its communication capabilities and progresses beyond human intelligence.

At the global AGI Summit in 2021, the CEO of HyperCycle, Toufi Saliba, first commented on the global race toward AGI. Years later, his presence at the event demonstrates HyperCycle's commitment to that initial vision.

A full list of all the events, speakers, and topics of conversation can be found on the official website.

One of the core pillars of the event will be asking and exploring the question of how AGI will actively shape humanity going forward. With AGI now seemingly on the brink of becoming a reality, the answers to these questions are on the horizon, making the Summit's discussions more pertinent than ever.

As AI systems begin to communicate with each other, they will be able to share knowledge, information, and insight on a scale that's never been seen before. Using vast pools of data and the combined capabilities of several AI systems working together, artificial general intelligence would be able to perform any task a human can do, but faster, more accurately, and potentially better.

Developing AGI will lead to the potential for innovation in various fields, offering:

While not an exhaustive list, these demonstrate how fundamentally life-changing this once-in-a-generation technology could be. The Beneficial AGI Summit will address all of these possibilities, mapping out a pathway toward an achievable future.


For those who are interested in a dynamically different future at the hands of technological innovation, the Beneficial AGI Summit is a must-see event. The conference will explore the power and potential of AGI to its fullest extent, drawing upon research and opinions from market leaders. This event marks a turning point in the human progression toward affordable, accessible, and global AGI solutions.

Excerpt from:

Beneficial AGI Summit: Mass Movement Toward AGI - Crypto Times

Amazon AGI Team Say Their AI Is Showing "Emergent Abilities" – Futurism

Whoa. Training Day

A new Amazon AI model, according to the researchers who built it, is exhibiting language abilities that it wasn't trained on.

In a not-yet-peer-reviewed academic paper, the team at Amazon AGI (which stands for "artificial general intelligence," or human-level AI) say their large language model (LLM) is exhibiting "state-of-the-art naturalness" at conversational text. Per the examples shared in the paper, the model does seem sophisticated.

As the paper indicates, the model was able to come up with all sorts of sentences that, according to criteria crafted with the help of an "expert linguist," showed it was making the types of language leaps that are natural in human language learners but have been difficult to obtain in AI.

Named "Big Adaptive Streamable TTS with Emergent abilities" or BASE TTS, the initial model was trained on 100,000 hours of "public domain speech data," 90 percent in English, to teach it how Americans talk. To test out how large models would need to be to show "emergent abilities," or abilities they were not trained on, the Amazon AGI team trained two smaller models, one on 1,000 hours of speech data and another on 10,000, to see which of the three if any exhibited the type of language naturalness they were looking for.

Interestingly enough, it was the 10,000-hour model (the Goldilocks of the three, if you will) that scored highest on the Amazon researchers' emergent-abilities criteria list, which included things like the ability to understand punctuation, non-English words, and emotions.
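
As a rough illustration of that evaluation idea (not Amazon's actual code or results), one could imagine scoring each model's outputs against an emergent-abilities checklist and comparing averages. The rating scale and the scores below are placeholders; the only criteria taken from the article are punctuation, non-English words, and emotions.

```python
# Hypothetical sketch of comparing models against an emergent-abilities
# checklist. Scale and scores are placeholders, chosen only so the
# 10,000-hour model comes out on top, as the article reports.
CRITERIA = ["punctuation", "non-English words", "emotions"]

# Placeholder expert ratings per model on a 0-3 scale.
ratings = {
    "1,000-hour model":   {"punctuation": 1, "non-English words": 0, "emotions": 1},
    "10,000-hour model":  {"punctuation": 3, "non-English words": 2, "emotions": 2},
    "100,000-hour model": {"punctuation": 2, "non-English words": 2, "emotions": 2},
}

for model, scores in ratings.items():
    avg = sum(scores[c] for c in CRITERIA) / len(CRITERIA)
    print(f"{model}: average emergent-ability score = {avg:.2f}")
```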

The middle model spat out sentences that would seem very natural to human readers, exhibiting the ability to transcribe non-words ("Shh, Lucy, shhh, we mustn't wake your baby brother, Tom whispered, as they tiptoed past the nursery") and even the kind of internet-speak many netizens use in text messages and spoken language alike ("She received an odd text from her brother: 'Emergency @ home; call ASAP! Mom & Dad are worried #familymatters.'")

In the paper, whose international team of authors includes 18 AI experts, the Amazon AGI consortium pointed out that BASE TTS was never "explicitly" told to come up with its more surprising outputs.

"These sentences are designed to contain challenging tasks parsing garden-path sentences, placing phrasal stress on long-winded compound nouns, producing emotional or whispered speech, or producing the correct phonemes for foreign words like qi or punctuations like @ none of which BASE TTS is explicitly trained to perform," the paper reads.

It's not AGI, of course, but these findings could nonetheless have implications for the path toward that goal, especially if it didn't need such a gigantic set of training data to get there.


Read the rest here:

Amazon AGI Team Say Their AI Is Showing "Emergent Abilities" - Futurism