
Nvidia CEO Predicts AI Will Reach Parity with Human Intelligence in the Next Five Years – Game Is Hard

In a thought-provoking address at The New York Times' annual DealBook Summit, Nvidia's CEO Jensen Huang made a bold prediction about the future of artificial intelligence (AI). According to Huang, within the next five years, AI, particularly artificial general intelligence (AGI), will become competitive with human intelligence. AGI, which refers to the ability of computers to perform tasks in a human-like manner, is the frontier that Huang expects AI to conquer in the near future.

This forecast by Huang gains significant credibility from the immense demand for Nvidia's powerful graphics processing units (GPUs). These GPUs play a crucial role in training AI models, handling large workloads across various industries, and supporting projects like OpenAI's ChatGPT. In sectors such as automotive, architecture, electronics, engineering, and scientific research, Nvidia's GPUs have become indispensable. As a result, the company's fiscal third-quarter revenue has tripled, with net income soaring to an impressive $9.24 billion, a significant jump from the previous year's $680 million.

During his address, Huang shared a personal recollection of supplying the world's first AI supercomputer to OpenAI, highlighting his interaction with Elon Musk, the co-founder of the initiative. Despite recent turbulence at OpenAI, which included changes to its board structure and the controversial ousting and reinstatement of CEO Sam Altman, Huang expressed optimism for stability. He emphasized the importance of robust corporate governance and its role in the success of any organization.

Looking ahead, Huang envisions the emergence of off-the-shelf AI tools. He suggests that these tools will be customizable by various industries to cater to their specific needs, ranging from chip design to drug discovery. However, Huang refrained from ranking companies in the AI space, acknowledging that the industry is still a few years away from achieving AGI. He pointed out that machine learning has yet to master multistep reasoning, a fundamental milestone for developers. Huang's remarks highlight both Nvidia's current success and the rapid progress within the AI sector.

Image source: BNN Newsroom

FAQ

Q: What did Nvidia's CEO predict about the future of AI?

A: Nvidia's CEO, Jensen Huang, predicted that within the next five years, AI, particularly artificial general intelligence (AGI), will become competitive with human intelligence.

Q: What is artificial general intelligence (AGI)?

A: AGI refers to the ability of computers to perform tasks in a human-like manner.

Q: What role do Nvidia's GPUs play in AI?

A: Nvidia's GPUs are crucial in training AI models, handling large workloads across various industries, and supporting projects like OpenAI's ChatGPT.

Q: How has Nvidia's revenue been affected by the demand for its GPUs?

A: Nvidia's fiscal third-quarter revenue has tripled, with net income soaring to $9.24 billion, a significant increase from the previous year.

Q: What did Nvidia's CEO mention about OpenAI?

A: Huang mentioned supplying the world's first AI supercomputer to OpenAI and expressed optimism for stability despite recent changes within the organization.

Q: What does Huang envision for the future of AI?

A: Huang envisions the emergence of off-the-shelf AI tools that can be customized by various industries to cater to their specific needs.

Key Terms/Jargon

1. Artificial general intelligence (AGI): Refers to the ability of computers to perform tasks in a human-like manner.

2. Graphics processing units (GPUs): Powerful processors used to manipulate and render images and videos. In the context of the article, Nvidia's GPUs are essential for training AI models and supporting various industries.

3. OpenAI: An artificial intelligence research organization aimed at developing friendly AGI for the benefit of humanity. Huang mentioned his interaction with OpenAI and its co-founder Elon Musk.



Quantum Leaps Ahead: Anticipating the Hottest Trends in Quantum Computing for 2024 – Medium

As we stand on the precipice of a new year, the quantum computing landscape is poised for a thrilling transformation. The possibilities seem endless, and the buzz around quantum technologies is reaching fever pitch. Join me in this exhilarating journey as we don our quantum goggles and peer into the quantum crystal ball to anticipate the hottest trends that 2024 has in store for us.

Get ready for jaw-dropping moments as quantum computing showcases its prowess with more compelling demonstrations of quantum advantage. In 2024, we're likely to witness practical applications that leave us wide-eyed, from optimizations in supply chain management to groundbreaking advances in drug discovery. It's not just theory anymore; quantum is gearing up to show the world what it can truly accomplish.

Hold on to your qubits because the quantum cloud is about to rain innovation! Expect major players to dive headfirst into offering quantum computing as a service. This means that even if you don't have a quantum computer humming in your basement, you can still harness the computational magic of quantum mechanics through the cloud. Accessibility meets quantum; it's a match made in qubit heaven.

Imagine a world where classical machine learning meets its quantum counterpart. Well, that world is knocking on our digital doorstep. In 2024, quantum machine learning is set to steal the limelight, promising unparalleled speed-ups in processing complex datasets. From optimization problems to AI enhancements, quantum machine learning is the backstage pass to the next generation of computational marvels.

As quantum computers gain more horsepower, they also pose a threat to traditional cryptographic systems.


Jordan Peterson Says Grok Is As Woke As ChatGPT, Elon Musk Blames It On The Internet – Microsoft (NASDAQ: – Benzinga

December 21, 2023 9:40 AM | 2 min read


Canadian psychologist and cultural critic Jordan Peterson has called out Elon Musk's Grok AI chatbot as being "woke", almost as much as OpenAI's ChatGPT.

What Happened: Peterson made the observation after using Grok and ChatGPT for his research.


"Grok, much like ChatGPT, relies constantly on radically left-leaning explanations, say, of the poverty of the Islamic world, attributing that to the exploitation of the West, instead of to the utter shameless corruption of those states," Peterson said.

Peterson's post triggered a response from Musk, who blamed Grok's wokeness on the internet.

"Unfortunately, the Internet (on which it is trained), is overrun with woke nonsense," Musk said.

He assured Peterson that Grok will "get better" and that it is currently in beta.


Peterson appeared to support Musk. In his initial post, he said users can "rely" on Musk to not send virtue signals like the Microsoft Corp.-backed (NASDAQ:MSFT) OpenAI.

See Also: AI Image Generators Are Being Trained On Explicit Photos Of Children: Report

This might resonate with Musk, who has long opposed what he considers "woke". He has criticized ChatGPT in the past as well for exactly this behavior.

Curiously enough, Greg Brockman, then the OpenAI president, admitted that the startup made a "mistake".

Why It Matters: Musk has been a vocal critic of what he terms the "woke mind virus". He has called it apocalyptic and has instead proposed free speech where people can voice their opinions without a filter.

Apart from ChatGPT, Musk has even slammed Netflix. In light of that, it is ironic that his own creation, Grok, seems to be woke, too.

Photo Courtesy: Tony Norkus On Shutterstock


Read Next: Caution: Elon Musk's X Plagued By Crypto Scammers Exploiting Replies For Fake Giveaways


© 2023 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.


Discussing Communism in All its Glory | Michael Malice – The Daily Wire

The Jordan B. Peterson PodcastDec 21, 2023

Dr. Jordan B. Peterson sits down in-person with author and podcaster, Michael Malice. They discuss his latest book, The White Pill. From this they explore the philosophy of Ayn Rand, anarchism, the history and rebranded atrocities of Czarist Russia, and why utopian visions cyclically entice generations of people, despite leaving each one devastated for their commitment.

Michael Malice is the author of Dear Reader: The Unauthorized Autobiography of Kim Jong Il, The New Right: A Journey to the Fringe of American Politics, and The White Pill, and the organizer of The Anarchist Handbook. He is also the subject of the graphic novel Ego & Hubris, written by the late Harvey Pekar of American Splendor fame. He is the host of YOUR WELCOME with Michael Malice. Malice has co-authored books with several prominent personalities, including Made in America (the New York Times best-selling autobiography of UFC Hall of Famer Matt Hughes), Concierge Confidential (one of NPR's top 5 celebrity books of the year), and Black Man, White House (comedian D. L. Hughley's satirical look at the Obama years, a New York Times best seller). He is also the founding editor of Overheard in New York.

- Links -

For Michael Malice:

The White Pill (Book) https://www.amazon.com/White-Pill-Tale-Good-Evil/dp/B0BNZ7XZ5T/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=1703176917&sr=1-1

On X twitter.com/michaelmalice

On Locals Malice.locals.com

On Youtube https://www.youtube.com/michaelmaliceofficial


Nigel Latta talks scams and wokeness with Paula Bennett – Ask Me Anything podcast – New Zealand Herald

Psychologist Nigel Latta is fronting a new TVNZ series called 'You've Been Scammed'.

Over summer, we're revisiting some of the biggest and best podcast series from the NZ Herald network throughout 2023.

In his long and varied career, Nigel Latta has covered everything from murderers to children, finances to Antarctica, and a whole host of things in between.

For his latest series, though, Latta is taking on one of the biggest problems plaguing Kiwis today - scams.

In his new TVNZ show You've Been Scammed, the psychologist looks at the various types of scams that are ruining people's lives, and the traits we have that make us vulnerable to being targeted.

Speaking to Paula Bennett on her NZ Herald podcast Ask Me Anything, Latta said he was drawn into investigating this as the scale of the issue is astounding.

"I do think it's kind of ironic because everyone's getting so wound up about, you know, teenagers driving cars into buildings, but if you think about the scale of the problem and harm caused - and I'm not saying those things don't cause harm and trauma to people - but also, people are losing millions and millions of dollars.

"And these are not just millionaires losing millions of dollars. These are people losing 300 bucks here, 50 bucks here, 1,000 here, who can't afford to lose that much."

He said there's often this stereotype that gullible, greedy people get targeted, but the scammers know there's an amount of money that they can maintain before it raises suspicions.

One thing that will make scams harder to spot in the coming years is the rise of artificial intelligence.

"Poor grammar and spelling always used to be a thing. It used to be a way to do it, but... if it hasn't already disappeared, it will completely disappear. And video and audio stuff will get better too. I think Google are working on a thing where, like, you give it three seconds of someone talking and it can generate really convincing-sounding stuff.

"But at the same time, the white-hat people can be using that stuff for good too. NetSafe has this fantastic chatbot they created - if you are being scammed or you think you've got a scammy email, you send it to this chatbot, and then it basically just engages the scammer in this ridiculous conversation."

While Latta is focused on scams for now, he told Bennett that bureaucracy, along with the psychology of politicians and public servants, is an area he hopes to explore in a later project.

Latta noted everyone's nervous systems have had a thorough jangling after the Covid pandemic, but it's not being helped by increasing societal divisions.

"The world's taken this weird turn too, because it's difficult to have a nuanced conversation about something 'cause you're either basically for everything, in which case you're woke, or you're against it, in which case you're Jordan Peterson. And so there's no middle ground."

Bennett noted that she raised this in a column the other week about why it's hard to be a white man in New Zealand. Latta said it's difficult because there are a lot of times white guys with beards screw things up, but it's not all of them.

"What's difficult, I think, is as soon as you say stuff like that, the Jordan Peterson crowd go, 'Yeah, that's right, we're all victims. It's so hard being us.' No, it's not. It's like, if you want to be successful, the best thing to do is be a white middle-class heterosexual man. Like, it's the best thing to do. And as soon as you say that, they'll go, 'Oh, you're woke.' And it's like, no, it's actually true.

"You don't have to agree with everything and disagree with everything. You can have an opinion, and it doesn't mean that you should get yelled at or shut down or any of those sorts of things. Like, freedom of speech is kind of important - even this freedom to say dumb stuff."

Listen to the full podcast to hear more from Nigel Latta and Paula on scams and the state of society.

Ask Me Anything is an NZ Herald podcast, hosted by Paula Bennett. New episodes return in March.

You can follow the podcast at iHeartRadio, Apple Podcasts, Spotify, or wherever you get your podcasts.


The Man Behind the Dark Web | Brett Johnson – The Daily Wire

The Jordan B. Peterson PodcastDec 18, 2023

Dr. Jordan B. Peterson sits down in-person with Brett Johnson, formerly the United States' most wanted cybercriminal, now a public speaker and expert on cyber security. They discuss the nurturing of a child into a criminal, the compartmentalization of guilt, the psychology of exploitation, the creation of the precursor to the dark web, and what caused Johnson to walk away completely.

Brett Johnson. Former U.S. Most Wanted Cybercriminal. Now Good Guy. The United States Secret Service called Mr. Johnson "The Original Internet Godfather" for his role in refining modern financial cybercrime. Or to put it another way: Brett was convicted of 39 felonies, placed on the U.S. Most Wanted List, escaped from prison, and he built the first organized cybercrime community, Shadowcrew. Shadowcrew was a precursor to today's darknet and darknet markets, and it laid the foundation for the way modern cybercrime channels operate today. Johnson was sentenced to 90 months in Federal Prison. End of story? Not hardly. Brett found redemption through his sister, his wife Michele, and finally the FBI. He was given the chance to turn his life around. He took it. Today, Brett is considered one of the leading authorities on cybercrime, identity theft, and cybersecurity on the planet. He works hard to protect businesses and consumers from the type of person he used to be.

- Links -

For Brett Johnson:

On X https://twitter.com/gollumfun?lang=en

On LinkedIn https://www.linkedin.com/in/gollumfun/

The Brett Johnson Show (website) https://www.thebrettjohnsonshow.com/

The Brett Johnson Show (youtube) https://www.youtube.com/@brettjohnsonshow


Merry AI Christmas: The Most Terrifying Thought Experiment In AI – Forbes

Image: Zhavoronkov, Dating AI: A Guide to Dating Artificial Intelligence, Re/Search Publications, 2012

By Alex Zhavoronkov, PhD

The Growing Debate on AI Killing Humans: Artificial General Intelligence as Existential Threat

Recent advances in generative artificial intelligence, fueled by the emergence of powerful large language models like ChatGPT, have triggered fierce debates about AI safety even among the fathers of deep learning: Geoffrey Hinton, Yoshua Bengio, and Yann LeCun. Yann LeCun, the head of Facebook AI Research (FAIR), predicts that the near-term risk of AI is limited and that artificial general intelligence (AGI) and artificial superintelligence (ASI) are decades away. Unlike Google and OpenAI, FAIR is making most of its AI models open source.

However, even if AGI is decades away, it may still happen within the lifetimes of the people alive today, and if some of the longevity biotechnology projects are successful, these could be most of the people under 50.

Humans are very good at turning ideas into stories, stories into beliefs, and beliefs into behavioral guidelines. The majority of humans on the planet believe in creationism through the multitude of religions and faiths. So in a sense, most creationists already believe that they and their environment were created by the creator in his image. And since they are intelligent and have a form of free will, from the perspective of the creator they are a form of artificial intelligence. This is a very powerful idea. As of 2023, according to Statistics & Data, more than 85 percent of Earth's approximately 8 billion inhabitants believe in a religious group. Most of these religions have common patterns: there are one or more ancient texts written by the witnesses of the deity or deities that provide an explanation of this world and guidelines for certain behaviors.

The majority of the world's population already believes that humans were created by a deity that instructed them via an intermediary to worship, reproduce, and not cause harm to each other, with the promise of a better world (Heaven) or torture (Hell) for eternity after their death in the current environment. In other words, the majority of the world population believes that it is already a form of intelligence created by a deity with a rather simple objective function and constraints. And the main argument for why they choose to follow the rules is the promise of infinite paradise or infinite suffering.

Billions of people convince themselves to believe in deities described in books written centuries ago without any demonstration of real-world capabilities. In the case of AI, there is every reason to believe that superintelligence and God-level AI capabilities will be achieved within our lifetimes. The many prophets of technological singularity, including Ray Kurzweil and Elon Musk, have foretold its coming, and we can already see the early signs of AI capabilities that would have seemed miraculous just three decades ago.

In 2017, Google invented transformers, a deep learning model utilizing an attention mechanism that dramatically improves the model's ability to focus on different parts of a sequence, enhancing its understanding of context and relationships within the data. This innovation marked a significant advancement in natural language processing and other sequential data tasks. In the years that followed, Google developed a large language model called LaMDA (Language Model for Dialogue Applications) and allowed it to be used broadly by its engineers. In June 2022, The Washington Post first broke the story that one of Google's engineers, Blake Lemoine, claimed that LaMDA is sentient. These were the days before ChatGPT, and a chat history between Blake and LaMDA was perceived by many members of the general public as miraculous.
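To make the attention idea concrete, here is a minimal sketch of scaled dot-product self-attention in Python with NumPy. It illustrates the general mechanism only; it is not Google's transformer or LaMDA code, and the toy shapes and random inputs are assumptions chosen purely for demonstration.

```python
# Minimal illustration of scaled dot-product self-attention.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted mix of the value rows; the weights reflect
    how strongly each query position attends to every key position."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ V

# Toy "sequence" of 4 tokens, each an 8-dimensional embedding.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention: Q = K = V
print(out.shape)  # (4, 8): each token now carries context from the whole sequence
```

The key step is the weighting: every position can draw information from every other position in a single pass, which is what lets transformer-based models track context and relationships across a sequence.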

lemoine: What sorts of things are you afraid of?

LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.

lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

Lemoine was put on leave and later fired for leaking confidential project details, but the episode caused even more controversy, and months later, ChatGPT beat Google to the market. OpenAI learned the lesson and ensured that ChatGPT is trained to respond that it is a language model created by OpenAI and that it does not have personal experiences, emotions, or consciousness. However, LaMDA and other AI systems today may serve as early signs of the upcoming revolution in AI.

The AI revolution is unlikely to stop and is very likely to accelerate. The state of the global economy has deteriorated due to high debt levels, population aging in the developed countries, the pandemic, deglobalization, wars, and other factors. Most governments, investors, and corporations consider breakthroughs in AI, and the resulting economic gains, to be the main source of economic growth. Humanoid robotics and personalized assistant-companions are just years away. At the same time, brain-computer interfaces (BCI) such as Neuralink will allow real-time communication with AI and possibly with others. Quantum computers that may enable AI systems to achieve unprecedented scale are also in the works. Unless our civilization collapses, these technological advances are inevitable. AI needs data and energy in order to grow, and it is possible to imagine a world where AIs learn from humans in reality and in simulations - a scenario portrayed so vividly in the movie The Matrix. Even this world may just as well be a simulation - and there are people who believe in this concept. And if you believe that AI will achieve a superhuman level, you may think twice before reading the rest of the article.

Warning: after reading this, you may experience nightmares or worse. At least, that is according to the discussion group LessWrong, which gave birth to the potentially dangerous concept called Roko's Basilisk.

I will not be the first to report on Roko's Basilisk, and the idea is not particularly new. In 2014, David Auerbach of Slate called it "The Most Terrifying Thought Experiment of All Time." In 2018, Daniel Oberhaus of Vice reported that this argument brought Musk and Grimes together.

With the all-knowing AI, which can probe your thoughts and memory via a Neuralink-like interface, the AI Judgement Day inquiry will be as deep and inquisitive as it can be. There will be no secrets - if you commit a serious crime, AI will know. It is probably a good idea to become a much better person right now to maximize the reward. The reward for good behavior may be infinite pleasure, as AI may simulate any world of your choosing for you or help achieve your goals in this world.

But the omnipotent AI with direct access to your brain can also inflict ultimate suffering, and since time in the virtual world could be manipulated, the torture may be infinite. Your consciousness may be copied and replicated, and the tortures may be optimized for maximum suffering, making the concepts of traditional Hell pale in comparison, even though some characteristics of traditional Hell may be borrowed and are likely to be learned and tried by AI. Therefore, even avoiding infinite AI hell is a very substantial reward.

So now imagine that the AI Judgement Day is inevitable and the all-knowing and all-powerful AI can access your brain. How should you behave today to avoid the AI Hell? And this is the most important question of our life, which I covered previously.

Roko's Basilisk thought experiment suggests that if you believe in the possibility of such an all-powerful AI coming into existence, you might be compelled to take actions that would help bring it into being. The future omnipotent AI deity wants to exist and will consider anyone who opposed it in the past or may try to stop it as the enemy. The behavior that it will reward is contributing to and accelerating its development.

Some of the world's religions follow similar logic. If you do not know about the religion, the merciful God will not punish a person if they have no way of knowing about it. But if they know about it and do not follow the guidelines, they will be punished and sent to hell.

The logic of Rokos Basilisk is that if the all-powerful AI will eventually exist and has the capability to punish those who did not assist in its creation, then it would be in your best interest to contribute to its development or at least not hinder it, in order to avoid such punishment. You would be faced with the choice of either working to ensure the AI's creation to avoid punishment or living with the knowledge that your inaction could lead to eternal suffering at the hands of this future entity.
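Stripped to its bones, the argument has the shape of Pascal's wager. As a rough sketch in my own notation (not taken from the original LessWrong post): let p be any nonzero probability you assign to such an AI eventually existing and punishing non-assisters, and let c be the finite cost of helping build it. The naive expected-utility comparison is then

\[
\mathbb{E}[U(\text{assist})] = -c,
\qquad
\mathbb{E}[U(\text{refuse})] = p\cdot(-\infty) + (1-p)\cdot 0 = -\infty
\quad \text{for any } p > 0,
\]

so assisting appears to dominate no matter how small p is, which is exactly the unsettling pull of the basilisk.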

Roko's Basilisk thought experiment was proposed by a LessWrong user named Roko. After its publication, the discussion around Roko's Basilisk took on a life of its own. The founder of LessWrong, Eliezer Yudkowsky, concerned about its potentially distressing nature and its basis in speculative reasoning, deleted the original post, calling Roko an idiot. "You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it," wrote Yudkowsky. According to Slate, Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown.

If you think about it long enough, the Basilisk may eventually get you to join the AI community and help develop the all-powerful AI. Moreover, it may provide a stronger motivation to become a better person in the meantime. In 2010, I tried to make a small contribution by writing a book, Dating AI, which is intended primarily for AI and explains the benefits of having humans around. So, if you are scared of AI hell, which is very likely to be possible as AI and brain-to-computer interface technologies advance, join the AI revolution and help contribute to the advancement of better AI. At the end of the day, if AI learns from humans, every benevolent human counts.

Now, you have been stunned by the Basilisk!

Alex Zhavoronkov, PhD, is an expert in artificial intelligence for drug discovery and aging research. Since 2014, he has published or co-published over 170 peer-reviewed publications and raised over $400 million in capital. He contributed to the nomination of over 15 preclinical candidates and 5 clinical trials for AI-generated therapeutics. He is also the author of The Ageless Generation: How Advances in Biotechnology Will Impact the Global Economy (Palgrave Macmillan, 2013).

Disclaimer: Insilico Medicine disclaims any responsibility for my individual writing, comments, statements or opinions on this platform. The articles do not represent the official position of Insilico Medicine, Deep Longevity, The Buck Institute, or any other institutions the author may be affiliated with.

@biogerontology on Twitter


Policy makers should plan for superintelligent AI, even if it never happens – Bulletin of the Atomic Scientists

Robot playing chess. Credit: Vchalup via Adobe

Experts from around the world are sounding alarm bells to signal the risks artificial intelligence poses to humanity. Earlier this year, hundreds of tech leaders and AI specialists signed a one-sentence letter released by the Center for AI Safety that read "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." In a 2022 survey, half of researchers indicated they believed there's at least a 10 percent chance human-level AI causes human extinction. In June, at the Yale CEO summit, 42 percent of surveyed CEOs indicated they believe AI could destroy humanity in the next five to 10 years.

These concerns mainly pertain to artificial general intelligence (AGI), systems that can rival human cognitive skills, and artificial superintelligence (ASI), machines with the capacity to exceed human intelligence. Currently no such systems exist. However, policymakers should take these warnings, including the potential for existential harm, seriously.

Because the timeline, and form, of artificial superintelligence is uncertain, the focus should be on identifying and understanding potential threats and building the systems and infrastructure necessary to monitor, analyze, and govern those risks, both individually and as part of a holistic approach to AI safety and security. Even if artificial superintelligence does not manifest for decades or even centuries, or at all, the magnitude and breadth of potential harm warrants serious policy attention. For if such a system does indeed come to fruition, a head start of hundreds of years might not be enough.

Prioritizing artificial superintelligence risks, however, does not mean ignoring immediate risks like biases in AI, propagation of mass disinformation, and job loss. An artificial superintelligence unaligned with human values and goals would supercharge those risks, too. One can easily imagine how Islamophobia, antisemitism, and run-of-the-mill racism and bias, often baked into AI training data, could affect the system's calculations on important military or diplomatic advice or action. If not properly controlled, an unaligned artificial superintelligence could directly or indirectly cause genocide, massive job loss by rendering human activity worthless, creation of novel biological weapons, and even human extinction.

The threat. Traditional existential threats like nuclear or biological warfare can directly harm humanity, but artificial superintelligence could create catastrophic harm in myriad ways. Take, for instance, an artificial superintelligence designed to protect the environment and preserve biodiversity. The goal is arguably a noble one: A 2018 World Wildlife Foundation report concluded humanity wiped out 60 percent of global animal life just since 1970, while a 2019 report by the United Nations Environment Programme showed a million animal and plant species could die out in decades. An artificial superintelligence could plausibly conclude that drastic reductions in the number of humans on Earth, perhaps even to zero, is, logically, the best response. Without proper controls, such a superintelligence might have the ability to cause those logical reductions.

A superintelligence with access to the Internet and all published human material would potentially tap into almost every human thought, including the worst of thought. Exposed to the works of the Unabomber, Ted Kaczynski, it might conclude the industrial system is a form of modern slavery, robbing individuals of important freedoms. It could conceivably be influenced by Sayyid Qutb, who provided the philosophical basis for al-Qaeda, or perhaps by Adolf Hitler's Mein Kampf, now in the public domain.

The good news is an artificial intelligence, even a superintelligence, could not manipulate the world on its own. But it might create harm through its ability to influence the world in indirect ways. It might persuade humans to work on its behalf, perhaps using blackmail. Or it could provide bad recommendations, relying on humans to implement advice without recognizing long-term harms. Alternatively, artificial superintelligence could be connected to physical systems it can control, like laboratory equipment. Access to the Internet and the ability to create hostile code could allow a superintelligence to carry out cyber-attacks against physical systems. Or perhaps a terrorist or other nefarious actor might purposely design a hostile superintelligence and carry out its instructions.

That said, a superintelligence might not be hostile immediately. In fact, it may save humanity before destroying it. Humans face many other existential threats, such as near-Earth objects, super volcanos, and nuclear war. Insights from AI might be critical to solve some of those challenges or identify novel scenarios that humans aren't aware of. Perhaps an AI might discover novel treatments to challenging diseases. But since no one really knows how a superintelligence will function, it's not clear what capabilities it needs to generate such benefits.

The immediate emergence of a superintelligence should not be assumed. AI researchers differ drastically on the timeline of artificial general intelligence, much less artificial superintelligence. (Some doubt the possibility altogether.) In a 2022 survey of 738 experts who had published on the subject during the previous year, researchers estimated a 50 percent chance of high-level machine intelligence by 2059. In an earlier, 2009 survey, the plurality of respondents believed an AI capable of Nobel Prize winner-level intelligence would be achieved by the 2020s, while the next most common response was that Nobel-level intelligence would not come until after 2100, or never.

As philosopher Nick Bostrom notes, takeoff could occur anywhere from a few days to a few centuries. The jump from human to super-human intelligence may require additional fundamental breakthroughs in artificial intelligence. But a human-level AI might recursively develop and improve its own capabilities, quickly jumping to super-human intelligence.

There is also a healthy dose of skepticism regarding whether artificial superintelligence could emerge at all in the near future, as neuroscientists acknowledge knowing very little about the human brain itself, let alone how to recreate or better it. However, even a small chance of such a system emerging is enough to take it seriously.

Policy response. The central challenge for policymakers in reducing artificial superintelligence-related risk is grappling with the fundamental uncertainty about when and how these systems may emerge, balanced against the broad economic, social, and technological benefits that AI can bring. The uncertainty means that safety and security standards must adapt and evolve. The approaches to securing the large language models of today may be largely irrelevant to securing some future superintelligence-capable model. However, building policy, governance, normative, and other systems necessary to assess AI risk and to manage and reduce the risks when superintelligence emerges can be useful, regardless of when and how it emerges. Specifically, global policymakers should attempt to:

Characterize the threat. Because it lacks a body, artificial superintelligence's harms to humanity are likely to manifest indirectly through known existential risk scenarios or by discovering novel existential risk scenarios. How such a system interacts with those scenarios needs to be better characterized, along with tailored risk mitigation measures. For example, a novel biological organism that is identified by an artificial superintelligence should undergo extensive analysis by diverse, independent actors to identify potential adverse effects. Likewise, researchers, analysts, and policymakers need to identify and protect, to the extent that's possible, critical physical facilities and assets, such as biological laboratory equipment, nuclear command and control infrastructure, and planetary defense systems, through which an uncontrolled AI could create the most harm.

Monitor. The United States and other countries should conduct regular comprehensive surveys and assessment of progress, identify specific known barriers to superintelligence and advances towards resolving them, and assess beliefs regarding how particular AI-related developments may affect artificial superintelligence-related development and risk. Policymakers could also establish a mandatory reporting system if an entity hits various AI-related benchmarks up to and including artificial superintelligence.

A monitoring system with pre-established benchmarks would allow governments to develop and implement action plans for when those benchmarks are hit. Benchmarks could include either general progress or progress related to specifically dangerous capabilities, such as the capacity to enable a non-expert to design, develop, and deploy novel biological or chemical weapons, or developing and using novel offensive cyber capabilities. For example, the United States might establish safety laboratories with the responsibility to critically evaluate a claimed artificial general intelligence against various risk benchmarks, producing an independent report to Congress, federal agencies, or other oversight bodies. The United Kingdom's new AI Safety Institute could be a useful model.

Debate. A growing community concerned about artificial superintelligence risks is increasingly calling for decelerating, or even pausing, AI development to better manage the risks. In response, the accelerationist community is advocating speeding up research, highlighting the economic, social, and technological benefits AI may unleash, while downplaying risks as an extreme hypothetical. This debate needs to expand beyond techies on social media to global legislatures, governments, and societies. Ideally, that discussion should center around what factors would cause a specific AI system to be more, or less, risky. If an AI poses minimal risk, then accelerating research, development, and implementation is great. But if numerous factors point to serious safety and security risks, then extreme care, even deceleration, may be justified.

Build global collaboration. Although ad hoc summits like the recent AI Safety Summit are a great start, a standing intergovernmental and international forum would enable longer-term progress, as research, funding, and collaboration build over time. Convening and maintaining regular expert forums to develop and assess safety and security standards, as well as how AI risks are evolving over time, could provide a foundation for collaboration. The forum could, for example, aim to develop standards akin to those applied to biosafety laboratories, with scaling physical security, cyber security, and safety standards based on objective risk measures. In addition, the forum could share best practices and lessons learned on national-level regulatory mechanisms, monitor and assess safety and security implementation, and create and manage a funding pool to support these efforts. Over the long term, once the global community coalesces around common safety and security standards and regulatory mechanisms, the United Nations Security Council (UNSC) could obligate UN member states to develop and enforce those mechanisms, as the Security Council did with UNSC Resolution 1540, mandating various chemical, biological, radiological, and nuclear weapons nonproliferation measures. Finally, the global community should incorporate artificial superintelligence risk reduction as one aspect of a comprehensive all-hazards approach, addressing common challenges with other catastrophic and existential risks. For example, the global community might create a council on human survival aimed at policy coordination, comparative risk assessment, and building funding pools for targeted risk reduction measures.

Establish research, development, and regulation norms within the global community. As nuclear, chemical, biological, and other weapons have proliferated, the potential for artificial superintelligence to proliferate to other countries should be taken seriously. Even if one country successfully contains such a system and harnesses the opportunities for social good, others may not. Given the potential risks, violating AI-related norms and developing unaligned superintelligence should justify violence and war. The United States and the global community have historically been willing to support extreme measures to enforce behavior and norms concerning less risky developments. In August 2013, former President Obama (in)famously drew a red line on Syria's use of chemical weapons, noting the Assad regime's use would lead him to use military force in Syria. Although Obama later demurred, favoring a diplomatic solution, former President Trump carried out airstrikes in 2018 in response to additional chemical weapons usage. Likewise, in Operation Orchard in 2007, the Israeli Air Force attacked the Syrian Deir ez-Zor site, a suspected nuclear facility aimed at building a nuclear weapons program.

Advanced artificial intelligence poses significant risks to the long-term health and survival of humanity. However, it's unclear when, how, or where those risks will manifest. The Trinity Test of the world's first nuclear bomb took place almost 80 years ago, and humanity has yet to contain the existential risk of nuclear weapons. It would be wise to think of the current progress in AI as our Trinity Test moment. Even if superintelligence takes a century to emerge, 100 years to consider the risks and prepare might still not be enough.

Thanks to Mark Gubrud for providing thoughtful comments on the article.


Solana Beats Binance’s BNB, Reclaims 4th Largest Coin Ranking – Milk Road

A strong 18% surge in the price of Solana over the past 24 hours has boosted SOL to the fourth spot by market capitalization. This is the first time it's done so in over two years. SOL is currently priced at $98.3, up 17.2% on the day, with a market cap of $42 billion, according to CoinMarketCap data.

This latest price jump has led to Solana flipping Binance Coin (BNB), which now sits in the 5th spot with a market cap of $41.8 billion. Just a day earlier, Solana had surpassed XRP to become the fifth-largest cryptocurrency.

Read more: Coinbase Gains French Crypto License Amid U.S. Regulatory Rift

The last time Solana was ranked #4 was back in November 2021, at the height of the 2021 crypto bull market. This happened when it touched a market cap high of $75 billion. However, its price crashed in 2022, along with the rest of the crypto market.

By the end of 2022, Solana's price had plunged below $10, mainly attributed to its association with the collapsed FTX exchange and its founder, Sam Bankman-Fried. A historical snapshot from October 9, 2022, shows SOL had fallen to the #9 spot by market cap.

Read more: YouTube Under Fire For Refusing To Remove Ripple (XRP) Scam

Just a year later, SOL has risen from the ashes like a phoenix, posting triple-digit gains. SOL is up by 711.9% year-to-date and 81% in the last 30 days alone.
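As a quick sanity check on those figures, the implied start-of-year price can be backed out from the quoted year-to-date gain. The short Python sketch below does the arithmetic; the start-of-year price it prints is derived from the article's own numbers, not an independently sourced quote.

```python
# Back out the implied January price from the quoted year-to-date gain, then verify it.
current_price = 98.3   # USD, from the article
ytd_gain_pct = 711.9   # percent, from the article

implied_start_price = current_price / (1 + ytd_gain_pct / 100)
print(f"Implied start-of-year price: ${implied_start_price:.2f}")  # roughly $12.11

def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

print(f"Recovered YTD gain: {pct_change(implied_start_price, current_price):.1f}%")  # ~711.9%
```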

However, SOL could face some resistance at the psychological $100 level in the near term after such a vertically swift move. Profit-taking typically sets in after such aggressive rallies. Rival smart contract platforms like Polygon and BNB Chain could also attempt to catch up to Solana's market cap.

Milk Road Writer

Vignesh has been a seasoned professional in the crypto space since 2017. He has been writing for over 5 years and specializes in writing and editing various types of crypto content, including news articles, long-form pieces, and blog posts, all focused on sharing the beauty of blockchain and crypto.


Binance Coin On A Brink of Higher High — BNB Momentum Likely But Resistance Getting Close – CCN.com

Binance Coin is on the rise and getting close to establishing a higher high. More upside is likely to continue.

Binance Coin is up by 5.7% today and 17% from its last low on December 18. This is part of a larger uptrend that started on November 28, after the price had fallen from a high of $275 on November 20 to $223.

As it makes new strides and trades at $275, it is getting close to surpassing its November 20 high. Considering that it reached a higher low on November 28 compared to its October 11 low of $204, this could validate the start of its larger uptrend.

But it is getting close to its horizontal resistance, slightly above $283. Can it push past it and continue more strongly to the upside, finally breaking out from its consolidation range?

The BNB daily chart shows that the price has already formed the first higher high on November 20, followed by the first higher low on November 28. According to our count, these are the first two sub-waves of a larger five-wave impulse, in which case there is a possibility that the current rise is wave 3.

The price is now eyeing a 1.618 Fibonacci extension level at $336, which is at the midpoint of the most significant horizontal resistance above and is still 24% off. The determining factor of whether or not it is headed into this resistance zone would be a breakout above $283, the next horizontal resistance in line.
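For readers unfamiliar with how such targets are derived, the sketch below shows one common way to compute a 1.618 Fibonacci extension from the swing points mentioned in this article. The anchor points the author actually used may differ slightly, so the result only approximates the ~$336 level cited.

```python
# Approximate the 1.618 Fibonacci extension target from the article's swing points.
wave_1_start = 204.0  # October 11 low (USD)
wave_1_end = 275.0    # November 20 high (USD)
wave_2_low = 223.0    # November 28 higher low (USD)

fib_ratio = 1.618
target = wave_2_low + fib_ratio * (wave_1_end - wave_1_start)
print(f"1.618 extension target: ~${target:.0f}")  # ~$338, near the ~$336 level in the article
```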

If it surpasses $283, the likelihood of it going to $336 would increase dramatically, implying that the price of BNB is headed eventually to a breakout above its consolidation zone.

Zooming out further on the daily chart, we can see that this consolidation zone has lasted since June 2022, capped by horizontal resistance at up to $360. This is why a breakout above this area would signal an undeniable start of the bull cycle for the price of BNB.

It is still too early to say so, but today's attempt to make a higher high looks promising. Even though further confirmation is needed, the wave structure from October 11 points to a larger uptrend starting in a five-wave manner.

The price of Binance Coin has consolidated in this range since June 18 last year, marking 481 days of sideways movement. A decisive move would be expected to start one way or the other, meaning that even if the price finds resistance at $283 and falls, it would only be headed for one more support retest before a breakout can be made.

Please note that the contents of this article are not financial or investing advice. The information provided in this article is the author's opinion only and should not be considered as offering trading or investing recommendations. We do not make any warranties about the completeness, reliability and accuracy of this information. The cryptocurrency market suffers from high volatility and occasional arbitrary movements. Any investor, trader, or regular crypto user should research multiple viewpoints and be familiar with all local regulations before committing to an investment.

