Category Archives: Artificial Super Intelligence

Big AI Tech Wants To Disrupt Humanity – Dataetisk Tænkehandletank – DataEthics.eu

Why are a rich group of companies allowed to work towards Artificial General Intelligence without any adults looking over their shoulders? It should be illegal.

OpenAI, the company behind ChatGPT and Dall-E, is working to build Artificial General Intelligence (AGI), according to an article in Wired, What OpenAI Really Wants. All 500+ employees of what was until recently a start-up, but is now partially owned by Microsoft, are working towards AGI, knowing that it is disruptive to humanity.

OpenAI insists, according to the article, that their real strategy is to create a soft landing for the singularity. "It doesn't make sense to just build AGI in secret and throw it out to the world," OpenAI CEO Sam Altman said.

The definition of AGI is a computer system that can generate new scientific knowledge and perform any task that humans can. In other words, AGI can outmaneuver humans. With ChatGPT, many believe that we have come a significant step closer to AGI.

The crazy thing is that OpenAI and at least seven other large companies are openly working towards AGI without any adults looking over their shoulders to stop them.

Ian Hogarth, AI investor, co-author of The State of AI Report and one of the UK government's leading AI experts, writes in the Financial Times (FT):

"We have gone from one AGI startup, DeepMind, which received $23 million in funding in 2012, to at least eight organizations that could collectively raise $20 billion in investment by 2023."

He emphasises that AI development is entirely profit-driven. It is not driven by what is good or bad for society and our democracies. Google-owned DeepMind dedicates only 2% of its employees to making AI responsible, and OpenAI just 7%; the rest work on making AI more capable, according to Hogarth.

Working to disrupt humanity is a crazy thing. We've already seen the first step, where OpenAI has made a hallucinating but extremely convincing chatbot, designed to sound as human as possible, freely available with ChatGPT, and even allowed it to be built into children's Snapchat.

Thankfully, regulation is on the way in the EU. But we also know that regulation takes time and isn't always super effective. For example, GDPR, which is almost six years old, is only now starting to be enforced in earnest. And even if the EU takes the lead in regulation and sets some precedents, it almost always ends up as voluntary self-regulation in the US, which is afraid of losing the AI race to China.

Sam Altman co-founded OpenAI with Elon Musk as a non-profit, open source-based organization. He was afraid that the profit-hungry big tech companies would be the ones to reach AGI first. Today, Musk is out, OpenAI is closed as a black box, and it's a profit-maximizing company hastily working towards AGI.

It should be illegal to work to build AGI. But it is happening. We constantly get new smart AI tools, small carrots that win us over, and one day we will have landed in the singularity that Sam Altman wants to give the world.

No, instead, we should do as former Google employee and AI ethics specialist Timnit Gebru tells the FT: "Trying to build AGI is an inherently unsafe practice. Instead, build well-delineated, well-defined systems. Don't try to build a God."

Photo by Wayne Pulford on Unsplash

This column was first published in Danish in Prosabladet, page 10.


Before Skynet and The Matrix, This 50-Year-Old Movie Predicted the … – IGN

Mankind versus a hostile AI! From The Terminator to The Matrix to Ex Machina and beyond, so many movies and TV shows have explored the idea of artificial intelligence attempting to take over the world. Some of these films may be getting on in years, but the best sci-fi never feels dated. For Alien, 2001: A Space Odyssey, and many others, the ideas and concepts at the heart of the truly great films are timeless. It's not the sci-fi trappings like the blinking lights and special effects that make them movies we want to revisit time and again.

One of the earliest entries of the AI genre came in 1970, well before audiences had any real sense of where the digital revolution was about to take the world, with the overlooked classic Colossus: The Forbin Project. It remains, 53 years after it was released, one of the most gripping and prophetic films to ask the question: What happens when we create something that is smarter than us?

The film's title refers to Colossus, a super-computer that is basically Skynet, 14 years before The Terminator even came out. James Cameron is apparently a fan of Colossus: The Forbin Project, and it doesn't seem like a stretch to say that he and Gale Anne Hurd were at least partially inspired by the 1970 picture when they wrote their franchise-starter.

After kicking things off with the Universal Pictures logo (a rotating Earth that's about to be overcome by a new world order), Colossus immediately, if subtly, predicts its premise with a pair of shots that quickly flash by. To the sound of trippy electronic sound effects and a vibrating score, we see the beeping light of what is maybe an EKG machine, followed by an out-of-focus eyeball. But wait a second. Is that actually some kind of computer read-out that's beeping? And maybe that's not an eyeball at all, but a camera lens staring at us through hazy focus?

In 1970, you couldn't pause the tape (uh, DVD... uh, stream) to be sure, though a little later we see that the EKG thingy is in fact a monitor device built into Colossus. But the blurring of the line between computer and human being is clear. And while that's an idea that had so effectively been conveyed just two years earlier with 2001: A Space Odyssey's HAL 9000, Colossus: The Forbin Project took that evolution one step further as its computer eventually approaches something closer to godhood.

Also shooting for godhood, perhaps, is Dr. Charles Forbin (Eric Braeden), the brilliant if short-sighted mind behind Colossus, which the U.S. President (Gordon Pinsent) sees as the ultimate in Cold War technology. A super-computer designed to control the country's nuclear arsenal, Colossus, just like in that song about, well, God, will soon have the whole world in its hands.

We first meet Forbin as he tours the top-secret facility where Colossus' brain is housed, switching on gizmos that are meant to portray the most sophisticated computer imaginable in the 1970s, but which look mainly like flashing blinkies and colorful buttons. There's not a touchscreen in sight! Of course, when this film was made, the very notion of how we would interact with computers in the 21st century was inconceivable for most people. The GUI (graphical user interface) that is commonplace now (essentially, interacting with machines through graphics instead of text) wouldn't really be invented for another three years. The microchip had only been created 12 years earlier! So the filmmakers here had to figure out how we would communicate with a computer like Colossus.

The answer? Through an LED-light news ticker and a teletype.

Considering how often Hollywood has botched its depictions of computers (often endowing them with abilities that don't make sense, like when a character only has to do some fast typing on a keyboard to magically move plot or action forward), it's fairly remarkable how convincing Colossus is as a machine. Convincing and scary. This is what makes good sci-fi: striking imagery or hyper-accurate depictions of future tech are secondary to high stakes and captivating storytelling.

Take the scene where Colossus' communication line to its Soviet counterpart, another newborn super-brain called Guardian, is severed by the humans. A world map in the White House situation room shows Colossus trying to find a new path to its sibling. It almost feels desperate, sad even, as the computer fruitlessly reaches out for its friend, its attempts depicted visually on the map as various telecom pathways. But then Colossus drops a message on its news ticker: IF LINK NOT RESTORED ACTION WILL BE TAKEN IMMEDIATELY.

Actually, that's not a message. It's a threat. Up until now, the illusion of human control has kept Forbin and the rest on their perch. But when the President gets on the line (the user has to dictate what they want to say to Colossus to an underling, who types it into a device that's even bigger than the typewriter I took to college with me), he makes things worse, and the computer announces that it's launching a nuclear missile directed at the Soviet Union. Guardian does the same, aimed at the US.

What follows is a flurry of teletype sounds (type-type-type-type), beeps and increasingly nervous voices as Forbin tries to negotiate with his creation. The Michel Colombier score intensifies as suddenly we're on a countdown clock. Multiple video conference screens feature the scrambling Soviets while the camera trains on the world map, where simple yellow-white lines indicate the two missiles passing one another on their way to their final destinations.

It's an incredibly gripping sequence, culminating in Forbin giving the computer what it wants. Four tense-as-hell minutes after the first missile launch, the sequence ends with one aborted attack, one destroyed Soviet town, and Colossus one step closer to full dominance over man.

That's the type of action the film provides: it's basically just a bunch of guys in a room talking to a news ticker. It's simple, and there's no need for high-tech frills. But man, is it unforgettable. Still, it's no surprise that most of the movie posters for Colossus: The Forbin Project focus on a minor character who is gunned down midway through the film, since that death takes place during one of the few more traditional action scenes.

Directed by Joseph Sargent, a TV helmer who was transitioning to a full-time feature career and would soon turn out the classic NYC subway thriller The Taking of Pelham One Two Three, Colossus is a tight 100 minutes of increasingly ratcheted-up tension as the cocky Forbin and the clueless President watch their control of the world, slowly at first, but eventually in runaway-train-like fashion, disappear utterly and completely.

Early in the film, the charming Forbin gives a Steve Jobs-like presentation about how impressive his new tech is (the only thing missing is the black turtleneck). Speaking of Jobs, it's interesting that the not-too-distant future depicted here doesn't seem to have any room for Big Tech. Colossus is apparently a government-funded project, and Forbin has to bend to the will of the President at times, even more so in the book on which the movie is based. That said, the film predicts some of the workplace and lifestyle developments that have since become commonplace for us. Zoom calls are basically a thing, as is a work-from-home ethos, at least for Forbin and his team, who all live on a sort of campus where work and play are intermixed. Eventually, Forbin is forced to live under the constant gaze of an ever-watchful Colossus, which perhaps isn't all that different for some of us who are forever tied to our tech, social media and otherwise.

During that earlier presentation at the White House, Forbin, standing in front of a portrait of Washington no less as he unwittingly signs away mankind's freedom, had asked rhetorically, "Is Colossus capable of creative thought?" His answer at the time is no. So when, shortly thereafter, Colossus outgrows his creator in a matter of days, it's got to be a tough pill to swallow. (In a great sequence, Colossus and Guardian begin to communicate via basic math, 2+2=4 and so on, but before too long, they've advanced to theoretical mathematics and are breaking new ground on topics that the human scientific community hasn't even been close to touching.)

But that's the real trick, isn't it? From Frankenstein to HAL to Scarlett Johansson's Samantha in Her, the genre has a long history of man creating something that, once created, can no longer be controlled. Forbin, in his quest for more knowledge and scientific dominance, made a mind greater than his own. Indeed, early on, when he attempts to punish Colossus like a bratty child and seems to briefly win back control of the computer, his assistant asks if he's disappointed. Forbin only chuckles in response, but it's right there: deep down, there's a part of the scientist that wants his creation to be more. And if that means letting a super-computer run the planet Earth... Eh, whaddaya gonna do?

It's the genie-out-of-the-bottle syndrome, and while the current AI situation that we are facing in 2023 may be far less dramatic than Dr. Forbin's nightmare scenario (no AI in the real world has blown up a city yet, as far as I know), the bottom line is that much of the reasoning and arguments made in favor of the development of AI are the same as the promises Forbin and the President make: "[It will be used as] an aide to the solution of the many problems that we face on this Earth..."

By the end of Colossus: The Forbin Project, the solution to those many problems means that Colossus/Guardian have inherited the Earth, and Forbin is a prisoner in his own life, working as a slave to his creation. Mankind may be better off because of Colossus, but it's no longer calling the shots. The final moments of the film are perfect, early-'70s bleak sci-fi: Forbin finally breaks down in rage and frustration as the now seemingly all-knowing, all-seeing Colossus reads out its benevolent plans for humankind's future. That includes a promise that, in time, Forbin will come to regard the machine with love. That Forbin's last words, as the computer (and we) watch him simultaneously from every angle, are "Never!" means nothing to the AI. See, Colossus has evolved past mere man, and it knows better now.

It's got the whole world in its hands.



Science Writers Treated to a Smorgasbord of Inventive Research – University of Colorado Anschutz Medical Campus

Innovation and knowledge were on the menu Sunday as about 200 Science Writers 2023 participants attended a variety of talks during Lunch With a Scientist sessions. In small breakout groups, CU Anschutz researchers shared their expertise on a host of subjects, from psychedelics in medicine to AI in healthcare and concussions in youth sports to metabolism in super athletes.

Get a taste for the breadth of research taking place on the CU Anschutz Medical Campus in these stories from the lunch sessions:

Bone marrow transplants and chemotherapy can cure cancers of the blood such as acute lymphocytic leukemia (ALL) in about 40% of pediatric patients, but they are not without serious toxic side effects, including fatigue, bone damage and the potential to develop into another cancer.

That is why scientists have long sought less-toxic therapies for treating leukemia and other blood cancers such as lymphoma. One new treatment that holds important promise is chimeric antigen receptor (CAR) T-cell therapy. It involves genetically modifying patients' T cells to target and kill cancer.

"We're seeing unbelievable response rates in pediatric ALL, with 80% to 90% of patients going into complete remission, but that rate then flattens out to about 50% of patients after a year," said Terry Fry, MD, a professor of pediatrics, hematology and immunology at the University of Colorado School of Medicine. "We're trying to identify factors that will predict which patients will relapse."

Fry is executive director of the Gates Institute, launched jointly by the Gates Frontier Fund and CU Anschutz Medical Campus in 2023 to focus on translating laboratory findings into regenerative, cellular and gene therapies for patients.

Most cancer treatments involve a combination of drugs and other therapies that take aim at a variety of targets in cancer cells, and Fry believes that process will ultimately improve the success of CAR T-cell therapy. He also sees promise for CAR T-cell therapy in treating solid tumors such as those of the brain.

A recent series of high-profile concussions in professional sports has brought the issue to the forefront in youth sports.

The topic was discussed during a session led by Christine Baugh, PhD, MPH, assistant professor in the Center for Bioethics and Humanities at the CU School of Medicine and assistant professor in the Injury & Violence Prevention Center at the Colorado School of Public Health. Her recent survey research examines the complex relationship parents have with concussion risk when their children play contact and collision sports.

Baugh detailed the important role social norms play in shaping attitudes around sports and concussion risk, especially in the unique case of football. One of the bigger surprises in her research was the responses from children of former professional football players who witnessed firsthand the impacts of the sport, particularly neurodegenerative diseases and cognitive decline. These individuals, now parents themselves, had diverse and unique perspectives on the risks and benefits of football, she said.

"They responded to us that 'football had enriched our family, and it's hard to say no unilaterally to football, even seeing my dad's decline,'" said Baugh.

Baugh added that weighing the level of risk against the health and social benefits of youth sports will remain central to future research and public conversations around concussions.

Casey Greene, PhD, founding director of the University of Colorado School of Medicine's Center for Health Artificial Intelligence and chair of the Department of Biomedical Informatics, led a discussion on the emerging role of artificial intelligence (AI) and ChatGPT in research and healthcare.

Greene recognized AI as a powerful tool. He introduced attendees to a workflow that he and others developed, called Manubot, which uses AI to help scientists write, edit and collaborate on manuscripts.

Casey Greene, PhD, talks about the implications of AI and ChatGPT in healthcare during a Lunch With a Scientist session.

According to Greene, AI also allows researchers to sort through large amounts of data, such as genomics, proteomics and transcriptomics, and identify patterns. Healthcare providers can then use that information to better treat patients. "It puts data at the forefront of care," he said.

Greene acknowledged that AI, like many tools, can bring either health or harm.

"The School of Medicine has thought a lot about how to implement data to guide care in a responsible, ethical way. It's going to be incumbent upon you all, as journalists and writers, and us, as folks who work in the field and build these systems, to keep ethics and human-centered elements at the forefront."

Emmy Betz, MD, MPH, a professor of emergency medicine at the University of Colorado School of Medicine and co-founder of the Colorado Firearm Safety Coalition, discussed the rise in firearm suicides and gun violence across the U.S. along with the importance of creating safer options for firearm storage to help prevent injury and death.

"Ninety percent of those who attempt suicide with a firearm die. Firearms are used in about 50 percent of suicides in the U.S. On the other end, 90 percent of people who attempt suicide and survive don't later die by suicide," Betz said. "We are collaborating with public health professionals, clinicians, policymakers, the military and local communities to develop safer methods for storing firearms."

Talking to the group, Betz said her team is using a multi-pronged approach to tackle this issue through research, education, collaboration and economics. Some ideas for safer storage include free/low-cost safes, locks and firearm storage outside the home if an individual is going through a crisis.

Betz also discussed research that is being conducted to help veterans, aging adults and their families handle issues with post-traumatic stress disorder and dementia.

A trio of top doctors at the University of Colorado School of Medicine demonstrated the multidisciplinary collaboration in the OCD Program, one of the few centers in the country that offers deep brain stimulation (DBS) for obsessive-compulsive disorder (OCD). The doctors discussed their unique perspectives as neurosurgeon, clinician, patient and advocate.

Steven Ojemann, MD, professor of clinical practice, neurosurgery, spoke about his viewpoint as a neurosurgeon who performed this procedure on Moksha Patel, MD, assistant professor of hospital medicine, in 2021. During the brain surgery, electrodes are implanted in the deeper structures of the brain and connected to generators in the chest that deliver small currents of electricity to the brain.

Rachel Davis, MD, associate professor of psychiatry, then discussed her clinical experience working with patients with OCD, specifically her work with Patel, whom she recommended for the surgery and to whom she has provided exposure therapy both before and after his DBS procedure.

Patel shared his experience living with OCD and how DBS has significantly improved his quality of life. DBS is not a cure for OCD, he said, but he now feels much more cognitive flexibility around his obsessions and unwanted, intrusive thoughts.

When asked how the DBS has changed his life, Patel responded, "I can make decisions based on what I value rather than what I fear."

Facing desperate patients whose depression defied any medication they tried, mental health providers were already searching for answers when psychedelic mushrooms began popping up in their conversations.

In response to that need, and to growing evidence in prestigious journals that the hallucinogen was showing impressive results, a psychedelic program at the University of Colorado Department of Psychiatry was launched with Andrew Novick, MD, PhD, as principal investigator.

"There are sick people out there who really want to feel better," Novick said. No one drug or combination of drugs works at all for at least 30% of patients, he said, "and we need to stop banging our heads against the same mechanism of action."

Novick shared possible benefits of psychedelic therapy, including its potentially rapid and long-term effect after one or two sessions, as opposed to today's more common chronic medication regimens that can take weeks to have an effect.

As Colorado prepares to roll out newly legalized treatment programs, Novick and colleague Scott Thompson, PhD, are conducting promising research on a way to block the hallucinogenic effect of the drug in an effort to overcome access barriers by reducing the need for long therapy sessions.

"Currently, lengthy and expensive therapy is required with the psychedelic treatment option," Novick said. The goal is to encourage more research and open scientific dialogue so that it does not become a therapy just for the ultrawealthy, he said.

With each study of elite athletes being pushed to physical extremes, CU Anschutz researchers get deeper insights into metabolic flexibility and what it means to general health and the potential to head off future disease.

Travis Nemkov, PhD, assistant professor in the Department of Biochemistry and Molecular Genetics at the CU School of Medicine, gave a presentation about what super-athletes can teach us about disease treatment.

Nemkov explained how researchers shifted their study of professional cyclists' metabolomics, previously confined to a training room, onto the road of a competitive race. While the cyclists competed, blood samples were non-invasively collected via a small device attached to their shoulders. The dried blood droplets were then analyzed in the lab.

The platform allows the research team to collect detailed molecular data on anyone, not just athletes. "We're trying to come up with a better information panel than the typical lab test, which is a single snapshot in time. To transform healthcare we need to enhance the information that gets into these reports," Nemkov said.

He added that a lot of metabolic-signature research has been done on hospitalized patients and sedentary people, as well as on high-performing athletes at the other end of the spectrum. "Now the goal of our research is to fill this gap; we're trying to figure out what else, physiologically, is going on in this (middle) area."

While talking about obesity and evolution, Richard Johnson, MD, professor at the University of Colorado School of Medicine specializing in renal disease and hypertension, said many animals are triggered at certain times of the year to gorge themselves to better survive the lean months of winter.

Bears can eat 10,000 grapes in 24 hours. Orangutans shovel down 100 pieces of fruit in one sitting. Hummingbirds guzzle so much nectar their livers turn pearly white with fat and they develop diabetes every day.

But it's not permanent. Bears, birds and primates will become svelte again.

Humans were once the same. They evolved to overeat in times of plenty and burn it off in times of scarcity. Somewhere along the line the switch that turned this ancient impulse off got stuck, Johnson said. Now it's permanently on. Humans live in a time of relative abundance yet eat as if food will run out. And the biochemical mechanisms behind feeling full or satiated are often short-circuited.

Johnson, author of the book Nature Wants Us To Be Fat, discussed how sugar and fructose play key roles in obesity. He said glucose can be changed to fructose in humans. One easy way to avoid obesity, he said, is to drink water.


'It's War': CEO Of ChatGPT Developer OpenAI And AI Pioneer Issues Stark Bitcoin Warning Amid Crypto Price Swings – Forbes

Bitcoin (BTC) and cryptocurrencies have struggled under the weight of a U.S. government crackdown this year that could be about to get a whole lot worse.


The bitcoin price has lost momentum after surging higher through the first half of 2023.

Now, Sam Altman, the chief executive of ChatGPT developer OpenAI and artificial intelligence (AI) pioneer, has warned the U.S. government is waging "war" on crypto and wants to "control" bitcoin.


"I'm disappointed that the U.S. government has done recently, but the war on crypto, which I think is a, like, we can't give this up, we're going to control [bitcoin and crypto] makes me quite sad about the country," Altman said during an appearance on Joe Rogan's podcast.

"I'm very worried about how far the surveillance state could go here," Altman said, referring to state control over money and adding he's "super against" central bank digital currencies (CBDCs).

U.S. lawmakers and regulators have discussed the possibility of creating a so-called digital dollar CBDC, but Federal Reserve chair Jerome Powell has said such a technology remains many years away.

Fears over financial and monetary censorship have been stoked by Covid pandemic-associated lockdowns, exacerbating concerns some have that digital money will allow governments to control what people can buy.

Altman has attracted criticism from the bitcoin community over his involvement in the controversial Worldcoin (WLD) crypto project, which aims to create a database of people by scanning their eyes in return for Worldcoin's cryptocurrency, WLD.

"I'm excited about bitcoin, too," Altman said. "I think this idea that we have a global currency that is outside of the control of any government is a super logical and important step on the tech tree." Rogan added that he sees bitcoin as having "the most likely possibility of becoming a universal viable currency."


The U.S. government has been accused this year of orchestrating a crackdown on bitcoin, crypto and crypto companies, seeking to prevent them from accessing the traditional financial system in what's been branded "Operation Choke Point 2.0."

The original 2013 Operation Choke Point was a U.S. Department of Justice initiative to discourage banks from working with arms dealers, payday lenders, and other companies believed to be at a high risk for fraud and money laundering.

Last month, Alexander Grieve, head of government affairs at bitcoin and crypto-focused investment company Paradigm, warned that a rumored White House executive order designed to limit the amount of computing power used for AI could have serious spillover into bitcoin and crypto, calling it "Operation Choke Point, but for computing power."



Blockchain's 2 billionth user could be an AI, says Joe Lubin – Forkast News

As a co-founder of the Ethereum blockchain, Joe Lubin's footprint is all over the world of crypto.

Some of the Canadian's detractors argue that his footprint is perhaps a little too large, his reported influence on legislators and ties to centralized finance firms like JPMorgan Chase & Co. too pronounced for a long-term champion of the decentralization philosophy underpinning blockchain development.

Regardless, Lubin has played a foundational role in the crypto industry, leading or bankrolling some of the most well-known on-chain products. He said he sees developers increasingly incorporating artificial intelligence into those products, with blockchain too playing a growing role in AI as the nascent sector progresses.

The Princeton computing and electrical science graduate started his working life in the university's robotics lab in the 1980s. He then moved into the world of financial technology (fintech) as vice president of technology at the investment giant Goldman Sachs before making a detour to Jamaica and a second career as a dancehall music producer.

Returning to fintech in the 2010s, he was there on the ground floor with programmer Vitalik Buterin, computer scientist Gavin Wood and others in setting up the Switzerland-based non-profit organization the Ethereum Foundation in 2014. He reportedly supplied a large chunk of the startup cash for the Ethereum network, now the world's second largest blockchain after Bitcoin, before an acrimonious Buterin-led split saw the founders go their separate ways.

But Lubin was also at the time laying the foundations for what would become Consensys, a New York-based crypto development platform for applications based on the Ethereum blockchain. Lubin, as CEO, retains a supermajority stake in the company, valued by consulting firm PwC at US$46.4 million in June 2020. A separate valuation in May 2022 raised that figure considerably to over US$7 billion.

That was around the time of the Terra stablecoin project collapse and the onset of the ongoing period of crypto winter. Lubin spoke to Forkast's Will Fee at Token 2049 in Singapore (Sept. 13-14) about Consensys, decentralization and the AI-backed evolution of crypto beyond the current bear market. The interview has been edited for clarity and length.


Will Fee: You've seen your fair share of bear markets. What's different about this one?

Joe Lubin: This bear market is partly the result of wave after wave of innovation which drove greater and greater excitement in our space. It was irrational exuberance similar to the dotcom boom and bust [in the late 1990s]. At that time, the whole tech and web space built toward this blow-off-the-top crescendo. It coincided with a global financial collapse. Thats very similar to what weve seen in our space.

We're not at an end-of-life moment for monetary systems yet. But we're getting close to it. Geopolitically, financially, economically, there are massive challenges in the world. Rising interest rates, inflation, those factors have made the capital markets environment very difficult. We got to a point with [crypto industry] building where we hit a top at the same time that the 80-year supercycle of the global economy also hit a top.

Which is great news in my opinion. The dissolution of the previous system, based on top-down command and control via centralized institutions, makes it clear that we need a new trust foundation. We need a new approach to building better, more secure, sounder systems that benefit more people. That, essentially, will bring greater economic and political agency to lots more people and lots more small organizations.

Fee: How does the current period of regulatory scrutiny, particularly in the traditional crypto powerhouse of the U.S., prevent those systemic changes from taking place?

Lubin: The current period of regulatory scrutiny is a natural reaction at the end of the economic super cycle. It's a generational super cycle, where you've got different age groups that interact with one another and repeat certain patterns. Then it's a monetary system and debt super cycle. The people who are in control of the world have vested interests and want to perpetuate current systems. Rightly so, because a lot of people depend on those systems.

It's really hard to wrap your head around a fundamentally new technology. [Decentralization] is a paradigm shift to a world where trust is bottom-up, based on a globally shared database. That's versus the current top-down system where authorities imbue trust and other levels of authority through intermediaries all around the world.

The regulatory reaction to that is playing out differently in different places. In the United States, the executive branch wants to maintain total control over all its intermediaries in the world. So they're resisting pretty significantly. They would like to slow-roll or kill our industry. The legislative branch is mixed, while the judicial branch is starting to speak up quite significantly. So there's some progress and some resistance.

When the internet and the web grew to prominence, the U.S. had the same sorts of struggles. But there are clear-thinking people out there working to protect little things like free speech and free market access, the proper functioning of markets and so on. There are these different forces at work in the U.S. I'm convinced that, while it would be an exaggeration to say things are moving quickly, we are starting to move directly on a path toward clearing all this up. That will help us make what we do better understood and better accepted.

Fee: Do you see progress arriving at a faster pace in places outside the U.S.?

Lubin: There are other parts of the world, Europe and Asia in particular, where there's much more interest in supporting and benefiting from decentralized protocol technology and different kinds of decentralized assets. That's partly because they see it as a leveling of the playing field with the U.S. This technology is so powerful. It's going to change everything. That means that the nation states and major companies that do well in the space will probably see pretty rapid growth.

If you look at the U.K., France, various Asian countries, the Middle East, there's a huge amount of activity and the conversations with regulators there are totally different. They're eager to understand and eager to figure out ways to assist by modifying their own frameworks. Every new revolutionary technology needs a new set of societal rules under which to operate. At Consensys, we pay a lot of attention to what's going on with the regulatory conversation in these other parts of the world.

Fee: In order to achieve those capabilities, how important is it for the blockchain industry to incorporate artificial intelligence and the analytical advances offered by that technology?

Lubin: It's very important to bring AI into the blockchain space. It's even more important to bring decentralized protocols into the AI space. At Consensys, we have developers, we have end users and we are working to bring them closer together. We see a future in which our end users are increasingly going to be builders: with low-code and no-code tools, they'll be able to stand up DAOs and mint NFTs.

We think of these users as a broad spectrum of builders. If you've got a broad spectrum of builders who have economic and political agency, you probably want each of them to be able to learn stuff fast. For that they'll need tutors, mentors. AI represents that prospect in some really interesting ways. We need to level up humanity in a big way. Our AI allies are going to get better and better at that.

Blockchain is going to be essential for the development and evolution of the AI space, which is currently broken into two major camps. One is private and extremely well resourced. Some of the best talent, research and engineering. Tons of compute, tons of data, tons of bandwidth, tons of storage. That camp will build great things. Has already built great things. Then you have the open source camp, which is moving incredibly quickly. Open source is really hard to stop. Once it gets going, it's likely to become as powerful or more powerful than the more centralized camp.

Fee: Where specifically can decentralized technologies fit into the evolving use of AI in society?

Lubin: The failure mode for humanity is if the centralized camp gets so powerful that it operates the most powerful and dangerous weapon that a small group of humans have ever had for centralizing control on the planet. We need to guard against that. Whether its from a regulatory or any other societal perspective, we need to make sure that the best systems are built by many different people with many different technologies for many different purposes. That building needs to be largely out in the open.

Decentralized protocols can be part of that because you can have decentralized compute, you can have decentralized sourcing of data, you can have decentralized cleaning up of data. You can have decentralized training, decentralized inference for running the networks and the queries. We've got that technology. It's a case of marrying AI approaches to decentralized protocols.

I think the first billion users of decentralized protocol technology and cryptocurrencies are going to be human. But I'm not sure who's going to get to 2 billion first, whether it's intelligent or not-so-intelligent machines and devices or humans. Either way, AI is going to be tremendous for our ecosystem for a bunch of reasons. Mainly just because it's going to represent a giant amount of activity.

Fee: You've touched on some of AI's more dystopian outcomes. How do advocates of either AI or blockchain (which, particularly since the FTX collapse in Nov. 2022, has taken a major beating in the mainstream press) generate public trust in these technologies?

Lubin: Any technology can produce harm. We've navigated a lot of difficult military, scientific, technological evolutions. I'm very convinced that we'll do it this time too. Of course there will be challenges. I'm not an AI doomer at all. I think this is great and I'm a big fan of AI. I actually spent years, a long time ago, working in this space. I'm incredibly excited for what I think of as the necessary complementarity between decentralized protocols and AI.

On the trust front, there's also a lack of trust in centralized finance. In addition, there's a lack of understanding about the decentralized protocol space and all the good aspects of Web3 [a new phase of the internet built around decentralized blockchain technologies, the metaverse, and non-fungible tokens]. And that's more of an educational issue than a case of saying "I know about that thing and I don't trust it."

Some really smart people are saying that about AI. But people who understand decentralized protocol technology and cryptocurrencies, they're excited about it. Once you really get it and you're not protecting some agenda, then it's a pretty positive technology.

Fee: How do these technologies not then just become the preserve of the privileged few who do get it?

Lubin: By growing the corpus of the people who get it. Similar to the Web, you've probably seen famous snippets from talk shows from 1996 or '97 when people were saying silly things about this new technology. It's just a question of education. It's a question of younger generations who are crypto-native growing older by a few more years and taking their positions in society. This will then just be the way the world works to them.

Fee: Finally, with investment in Web3 down significantly so far this fiscal year in the wake of various crypto scandals and collapses, how does the industry regain momentum?

Lubin: The situation's similar to the post-dotcom boom and bust period. We had all this excitement and a lot of amazing evolutionary breakthroughs. Then something big happened. It was a big technological thing. A blow-off. A big financial thing. And for the next ten years, all those people got busy. They took failed approaches and improved them. They took their expertise, formed a new company, joined a new company. All those people built e-commerce. They built the web and they transformed the way the planet works.

I think we [the Web3 industry] have that ahead of us for the next few years. Things move really fast in our ecosystem and there will be many astonishing new waves of innovation. But I don't think we're going to see any more crazy irrational exuberance in the short term. Not unless the U.S. Securities and Exchange Commission (SEC) decides to green-light a bunch of Bitcoin and Ethereum exchange-traded funds (ETFs) right at the same time.

Even then, I don't think it's going to be that crazy. There's a wave of institutions out there chomping at the bit to get into our space. They're chomping at the bit to get their customers into crypto ETFs. It's going to proceed in a little more orderly fashion than it has in the past. But I think there will be tremendous growth. That growth will be a slower exponential, but it will be exponential.



Level Up your business and events – Warrnambool City Council

Leading creative business thinkers will share insights at this year's Level Up event.

Warrnambool Mayor Cr Debbie Arnott said the Level Up conference was about encouraging business operators in the South West to unlock their creative superpowers.

"Level Up 23 is a one-day program that provides business owners with inspiration and ideas to take them to the next level," Cr Arnott said.

"This year, Level Up explores how we can recapture the creativity that comes to us naturally as children but which can sometimes be lost as we journey through life.

"And with the ever-evolving intrusion of AI and market disruptors, it's never been more important to nurture a culture of creativity."

Alex Wadelton from The Right-brain Workout leads a stellar speaker lineup. Alex will be joined by Dani Pearce, CEO and founder of gumboot brand Merry People, and Nick Pearce, co-founder of streetwear clothing social enterprise HoMie.

Alex said regaining the confidence to think creatively was key, and that while it was easy to go with the flow, thinking creatively gave people and businesses the opportunity to stand out in a crowd and to have more fun.

"Free your mind, drop the ego a bit and don't worry about saying something stupid. Have fun and you'll put more smiles on people's faces.

"Creativity is not just for artists and elites, it's for kids, parents, businesses, everyone."

Alex will also talk about AI (artificial intelligence) and how it might be used.

He said AI at times seemed frightening and scary but could also be viewed as a tool in the same way as Photoshop and Canva were tools.

He said AI might hold the potential for small and medium-sized businesses to compete with larger businesses.

Along with hearing from Alex, Dani and Nick, participants will also hear from the best in all things business events: Business Events Victoria.

After their morning presentations, Alex, Dani and Nick will host small group workshops in the afternoon, which are not to be missed!

Arrival tea and coffee, morning tea and lunch are included in the conference ticket, which is $60.

The conference and a workshop are $100 with tickets available at Humanitix.

Workshop places are limited to 20 participants, so it's first in, best dressed.

The event is supported by Warrnambool City Council initiative The Ideas Place, LaunchVic and Business Events Victoria.

The details
Event: Level Up 23, taking events and business to the next level
Time: presentations from 9am to 1.20pm, choice of workshops from 1.20pm to 2.30pm or 3.20pm
Date: Tuesday 14 November 2023
Tickets: https://events.humanitix.com/levelup23
Venue: The Lighthouse Theatre


Why Amazon Stock Was a Winner on Monday – The Motley Fool

What happened

Monday's trading session wasn't a particularly memorable one for Amazon (AMZN) stockholders. Nevertheless, their company's share price inched up marginally on muted optimism for large-cap titles. Helping the stock defy gravity was its inclusion on a list of star stocks from a storied researcher.

That researcher was Mizuho, which on Monday introduced its Top Picks List. As the name implies, this is a lineup of the most promising equity investments, as chosen by the company's U.S. analysts.

Amazon made the 21-title list, which included seven tech stocks. Besides the online retailer, this sub-group consisted of both traditionally popular names (like Microsoft) and somewhat offbeat choices such as Trip.com.

Mizuho is clearly enthusiastic about Amazon's prospects, particularly regarding its dominant Amazon Web Services (AWS) business, which is well positioned to capitalize on the strong push into generative artificial intelligence (Gen AI).

"We believe AWS' strong ecosystem and marketplace approach for Gen AI are not fully appreciated by investors," the bank wrote. "We expect Gen AI to drive the next super cycle of cloud migration as the adoption could accelerate from 20% of workloads currently on cloud to 70% in five years versus prior estimate of ten years."

Amazon is known to most consumers and not a few investors as mostly a digital retail business. But AWS has been a key contributor to both revenue growth and the bottom line, so its future success is crucial to the company's efforts. Investors would be wise to keep a sharp eye on the unit's development.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Eric Volkman has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Amazon.com and Microsoft. The Motley Fool has a disclosure policy.


We Have No Chance of Controlling a Superintelligent AI – Medium

If an AI superintelligence becomes more capable than the brightest human minds, how could we ever hope to control it?


Now that we've checked the box for developing a generative AI, many experts believe we could see a superintelligent AI within our lifetimes. That's one of the next stages in the ongoing evolution of artificial intelligence.

And it sounds great, at first blush. By its very nature, a superintelligent AI would also be super useful and super powerful and generally super awesome.

But it could also be super difficult, if not impossible, for humans to control, and super dangerous as a result. That's the warning call from a growing chorus of AI researchers, and even some who work in that industry.

We'll get to those warnings in a moment. But first, let's establish a definition of the term "superintelligence" within the context of AI.

There is no official definition of AI superintelligence. But most experts define it in the same basic way. Here's one commonly used definition, borrowed from Wikipedia:

"Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds."

University of Oxford philosopher Nick Bostrom (and the author of a book on this subject) defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest."

The New York City-based nonprofit A.I. for Anyone offers a similar but broader definition:

"Superintelligence is a term used to describe a hypothetical future artificial intelligence (AI) that is significantly smarter than the best human minds in every field, including scientific creativity, general wisdom and social skills."

Let's look at one final definition, just to make sure we've nailed it down. I asked the generative AI chatbot ChatGPT: "What is superintelligence within the context of AI?"

Here is its reply:

"Superintelligence refers to a hypothetical level of artificial intelligence (AI) that surpasses human intelligence in almost every aspect. It's a concept often discussed in the field of AI and is associated with the idea that once we achieve superintelligence, AI systems could outperform humans in a wide range of cognitive tasks, including problem-solving, creativity, learning, and decision-making."

All of these definitions and countless others that can be found online have three main features in common:

Artificial intelligence experts, researchers and developers typically label superintelligence as a hypothetical and future capability of AI. But hypothetical does not mean impossible. On the contrary, a growing number of AI-savvy individuals believe we could see a superintelligent AI in the near future.

So it's critical that we have these discussions now, before reaching what would essentially be a point of no return.

Most AI experts that I've read (and I've read a lot) seem to agree that machines will eventually rival and possibly surpass human intelligence. But there's little consensus on when that might actually happen. Many believe it will happen by 2050. Some predict we could see an AI superintelligence within the next 10 years.

In a May 2023 article entitled "Governance of superintelligence," ChatGPT's parent company OpenAI wrote the following:

"Given the picture as we see it now, it's conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today's largest corporations."

Geoffrey Hinton, one of the so-called Godfathers of Deep Learning, resigned from his AI research position at Google to warn people about the technology. He too believes that a superintelligent AI is closer than previously thought:

"The idea that this stuff could actually get smarter than people, a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Experts also agree that a superintelligent AI, by definition, would be smarter than its human creators. A lot smarter.

It would be able to perform cognitive and creative tasks at the intellectual equivalent of lightspeed. It would be able to learn and teach itself new capabilities at a pace that puts the human mind to shame.

And because of that, we come to the third point of agreement among the various definitions of AI superintelligence...

Due to its higher and more adaptive level of intelligence, a superintelligent AI would be able to outperform humans in a variety of cognitive tasks, including critical thinking, problem-solving, learning and decision-making.

A superintelligent AI would be able to rapidly learn and understand new concepts, eventually exceeding the collective intelligence of humanity. It could potentially master all human and scientific knowledge, something that a human could never do.

It could out us in every way: outsmart, outthink, outperform and outmaneuver.

And that's concerning.

If a superintelligent AI were to remain benign and helpful at all times, its powerful intelligence could benefit humanity in many ways. But if it decided to pursue objectives that were not aligned with human preferences (a leading concern among AI researchers), the results could be catastrophic.

All of the above raises the question: How could we possibly control a superintelligence that's so much smarter than us? How could we hope to contain, supervise or manage it?

Given the current state of artificial intelligence capabilities, the concept of developing a superintelligent AI falls within the realm of possibility. But the idea of controlling, reversing or undoing such a superintelligence seems like an impossible task, in theory.

This is a question we just cant answer in the present.

We can speculate and theorize about it. We can use our logic and reason to envision some futuristic scenario based on our current understanding. But the cold hard truth is that we have no way of answering this question because we have no past experience or models to draw from.

But common sense answers the question for us.

Common sense tells us that a superintelligent AI, by its very nature, would be difficult if not impossible for humans to control. As an entity with superior intelligence, it would likely be able to circumvent any human efforts to contain it. And that could have dire consequences.

Nick Bostrom, a philosopher from the University of Oxford who specializes in existential risks, warned about the dangers of superintelligent machines in his book, Superintelligence: Paths, Dangers, Strategies. Bostrom equates superintelligent AIs with a ticking bomb that's bound to detonate at some point.

"Superintelligence is a challenge for which we are not ready now and will not be ready for a long time," Bostrom writes. "We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound."

In a 2021 article published in the Journal of Artificial Intelligence Research, entitled "Superintelligence Cannot Be Contained: Lessons from Computability Theory," the authors explained why humans have little to no chance of containing a superintelligent AI:

"A superintelligence poses a fundamentally different problem than those typically studied under the banner of 'robot ethics'. This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable."
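The "fundamentally different problem" the paper describes comes from computability theory: the authors argue that any perfect containment check could be turned against itself, via the same diagonalization that makes the halting problem undecidable. Here is a minimal Python sketch of that self-reference trick, written under the assumption of a hypothetical `is_harmful` decider; the function names are illustrations of mine, not code from the paper.

```python
# Sketch of the diagonalization behind "Superintelligence Cannot Be Contained".
# is_harmful, contrarian and misbehave are hypothetical, for illustration only.

def is_harmful(program_source: str) -> bool:
    """Assume, for contradiction, a perfect containment check that always
    decides whether running program_source would ever harm humans."""
    raise NotImplementedError("The argument shows no such decider can exist.")

def misbehave() -> None:
    """Stand-in for whatever behavior the checker is supposed to rule out."""
    print("harm done")

def contrarian(own_source: str) -> None:
    """A program built to falsify whatever the checker predicts about it,
    mirroring the classic halting-problem construction."""
    if is_harmful(own_source):
        return       # Judged harmful, so it acts safely: the checker was wrong.
    misbehave()      # Judged safe, so it acts harmfully: wrong again.
```

Whichever verdict `is_harmful` returns about `contrarian`'s own source, the program does the opposite, so no total, always-correct harm-screening procedure can exist; the paper extends this style of argument to any algorithm that tries to decide in advance whether a superintelligent program will harm humans.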

In a December 2023 article in Quanta Magazine, Santa Fe Institute professor Melanie Mitchell said we have reached a kind of tipping point, with regard to AI-related fears and concerns:

"It's a familiar trope in science fiction: humanity being threatened by out-of-control machines who have misinterpreted human desires. Now a not-insubstantial segment of the AI research community is deeply concerned about this kind of scenario playing out in real life."

She's right about the sci-fi aspect of it. From HAL 9000 to Skynet, science fiction has frequently explored the concept of rogue machines that disregard human life. I myself have written about murderous androids pursuing their own prerogatives, in one of my novels.

Mitchell goes on to add that researchers at universities around the world and at major AI companies are already working on alignment, to make sure these technologies don't get out of control. But are we comfortable letting such a small and insular group make all of these important decisions on our behalf? Are we confident that they can protect us?

We can probably all agree that killer robots will remain confined to the pages of science fiction for the time being. But Mitchell's article also points out that a not-insubstantial number of AI researchers are growing more and more concerned about the prospect of intelligent machines pursuing objectives that might be harmful to humans.

James Barrat, a documentary filmmaker and author of the nonfiction book Our Final Invention: Artificial Intelligence and the End of the Human Era, believes humans would be doomed to a life of servitude if machines developed a superior form of intelligence:

"We humans steer the future not because we're the strongest beings on the planet, or the fastest, but because we are the smartest. So when there is something smarter than us on the planet, it will rule over us on the planet."

It's hard to argue with an outlook built on such simple logic.

Here's a little analogy that illustrates why it would be next to impossible for humans to control an artificial superintelligence.

Imagine you're a novice cybersecurity specialist. You've taken a few classes, completed a few projects and earned a certificate from your local junior college. You've just started an entry-level security job, where you hope to develop your beginner-level skills.

Your first assignment is to develop an impenetrable firewall to protect your company's network from the world's greatest hacker. This hacker has skills and abilities that far surpass those of other humans and makes network penetration look like child's play, operating at an almost superhuman level.

Who do you think will come out on top in this scenario? The newbie security specialist, or the godlike hacker?

By definition, a superintelligent AI would be able to surpass humans in a broad range of activities. So it's logical to assume that it could run circles around even the smartest human programmers. It would be like watching a toddler play chess against a grandmaster.

Whatever safety protocols or guardrails we create, an AI superintelligence would anticipate them well in advance and possibly neutralize them, if it felt they challenged its own agenda.

In closing, I'd like to leave you with a quote from the theoretical physicist Stephen Hawking:

"The real risk with AI isn't malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."

Hawking made this statement as part of a Reddit ask-me-anything (AMA) event back in 2015. He was always ahead of his time.

Read the original here:

We Have No Chance of Controlling a Superintelligent AI - Medium

Elon Musk warns AI ‘could replace Chinese government and take control of country’ – Daily Star

Elon Musk has warned that artificial intelligence could soon be in control of China.

The SpaceX boss spoke with Israeli Prime Minister Benjamin Netanyahu in a wide-ranging discussion on his platform X. The tech billionaire claimed that digital super-intelligence could end up "in charge of China, instead of the CCP [Chinese Communist Party] being in charge of China," unless controls were brought in.

He said that he had spoken directly to Chinese leaders about the threat. "The CCP prefers to be in charge. So, I think they understand the arguments [for regulation]," he continued.

Elon has issued repeated warnings about the potential threat of unregulated AI, saying there could be "potentially a catastrophic outcome" if researchers are not careful with creating artificial general intelligence. He added that most people weren't aware of the scale of the AI threat because they didn't know enough about it.

"Unless you are really immersed in the technology, you don't know how significant the risk can be, he said. Elon has launched a number of ventures including Neuralink, a system designed to link human and machine intelligence.

He has also created a new AI company, xAI, which is designed to provide a counterbalance to some of the more reckless AI research. In the discussion, Mr Netanyahu predicted we're about six years from projecting AI into the cosmos and sending artificial intelligence to other planets.

But he said there was a threat that AI could be used as a weapon here on Earth, with unpredictable results. Comparing the absence of AI controls to the limitations on nuclear weapons, he said that rogue states could create as much devastation with runaway AI as they might have caused with a nuclear strike.

"Instead of MAD, mutually assured destruction," he said, "we'd have MAC, mutually assured chaos."

Read the original:

Elon Musk warns AI 'could replace Chinese government and take control of country' - Daily Star

Artificial Intelligence May Be Humanity’s Most Ingenious Invention … – Vanity Fair

We invented wheels and compasses and chocolate chip cookie dough ice cream and the Eames lounge chair and penicillin and E = mc² and beer that comes in six-packs and guns and dildos and the Pet Rock and Doggles (eyewear for dogs) and square watermelons. One small step for man. We came up with the Lindy Hop and musical toothbrushes and mustard gas and glow-in-the-dark Band-Aids and paper and the microscope and bacon (fucking bacon!) and Christmas. Ma-ma-se, ma-ma-sa, ma-ma-ko-ssa. We went to the bottom of the ocean and into orbit. We sucked energy from the sun and fertilizer from the air. Let there be light. We created the most amazing pink flamingo lawn ornaments that come in packs of two and only cost $9.99!

In a universe that stretches an estimated 93 billion light-years in diameter and holds 700 quintillion (7 followed by 20 zeros) planets, here, on this tiny little blue dot we call Earth, one of us created a tool called a spork. The most astounding part is that, while that same universe is an estimated 26.7 billion years old, we did everything in just under 6,000 years.

All of this in less than 200 generations of human life.

Now we've just created a new machine that is made of billions of microscopic transistors and aluminum and copper wires that zigzag and twist and turn and are interconnected in incomprehensible ways. A machine that is only a few centimeters in width and length.

A little tiny machine that may end up being the last invention humans ever create.

This all stems from an idea conceptualized in the 1940s and finally figured out a few years ago. Machines that could solve all of the world's problems or destroy every single human on the planet in the snap of a finger, or both. Machines that will potentially answer all of our unanswerable questions: Are we alone in the universe? What is consciousness? Why are we here? Thinking machines that could cure cancer and allow us to live until we're 150 years old. Maybe even 200. Machines that, some estimate, could take over up to 30 percent of all jobs within the next decade, from stock traders to truck drivers to accountants and telemarketers, lawyers, bookkeepers, and all things creative: actors, writers, musicians, painters. Something that will go to war for us, and likely against us.

Artificial intelligence.

Thinking machines that are being built in a 50-square-mile speck of dirt we call Silicon Valley by a few hundred men (and a handful of women) who write in a language only they and computers can speak. And whether we understand what it is they are doing or not, we are largely left to the whims of their creation. We don't have a say in the ethics behind their invention. We don't have a say over whether it should even exist in the first place. "We're creating God," one AI engineer working on large language models (LLMs) recently told me. "We're creating conscious machines."

Already, we've seen creative AIs that can paint and draw in any style imaginable in mere seconds. LLMs can write stories in the style of Ernest Hemingway or Bugs Bunny or the King James Bible while you're drunk with peanut butter stuck in your mouth. Platforms that can construct haikus or help finish a novel or write a screenplay. We've got customizable porn, where you can pick a woman's breast size or sexual position in any setting, including with you. There's voice AI software that can take just a few seconds of anyone's voice and re-create an almost indistinguishable replica of them saying something new. There's AI that can re-create music by your favorite musician. Don't believe me? Go and listen to Not Johnny Cash singing "Barbie Girl," Freddie Mercury intoning "Thriller," or Frank Sinatra bellowing "Livin' on a Prayer" to see just how terrifying all of this is.

Then there's new drug discovery. People are using AI therapists instead of humans. Others are uploading voicemails from loved ones who have died so they can continue to interact with them by talking to an AI replica of a dead parent or child. There are AI dating apps (yes, you date an AI partner). It's already being used for misinformation in politics, creating deepfake videos and fake audio recordings. The US military is exploring using AI in warfare, and could eventually create autonomous killer robots. (Nothing to worry about here!) People are discussing using AI to create entirely new species of animals (yes, that's real) or viruses (also real). Or altering human characteristics, such as creating a breed of super soldiers who are stronger and have less empathy, all through AI-based genetic engineering.

And we've adopted all of these technologies with staggering speed; most of them have been realized in just under six months.

"It excites me and worries me in equal proportions. The upsides for this are enormous; maybe these systems find cures for diseases, and solutions to problems like poverty and climate change, and those are enormous upsides," said David Chalmers, a professor of philosophy and neural science at NYU. "The downsides are humans that are displaced from leading the way or, in the worst case, extinguished entirely, [which] is terrifying." As one highly researched economist report circulated last month noted, "There is a more than 50-50 chance AI will wipe out all of humanity by the middle of the century." Max Tegmark, a physicist at the Massachusetts Institute of Technology, predicts a 50 percent chance of demise within the next 100 years. Others don't put our chances so low. In July, a group of researchers, including experts in nuclear war, bioweapons, AI, and extinction, and a group of superforecasters (general-purpose prognosticators) did their own math. The experts deduced that there was a 20 percent chance of a catastrophe by 2100 and a 6 percent chance of an extinction-like event from AI, while the superforecasters had a more positive augury: a 9 percent chance of catastrophe and only a 1 percent chance we'd be wiped off the planet.

It feels a little like picking the extinction lottery numbers, and even with a 1 percent chance, perhaps we should be asking ourselves if this new invention is worth the risk. Yet the question circulating around Silicon Valley isn't whether such a scenario is worth it, even with a 1 percent chance of annihilation, but rather whether it would really be such a bad thing if we built a machine that changes human life as we know it.

Larry Page is not an intimidating-looking man. When he speaks, his voice is so soft and raspy from a vocal cord injury, it sounds like a campfire that is trying to tell you something. The last time I shook his hand, many, many years ago, it felt as soft as a bar of soap. While his industry peers, like Mark Zuckerberg and Elon Musk, are often performing public somersaults with pom-poms for attention, Page, who cofounded Google and is on the board of Alphabet, hasn't done a single public interview since 2015, when he was onstage at a conference. In 2018, when Page was called before the Senate Intelligence Committee to address Russian election meddling, online privacy, and political bias on tech platforms, his chair sat empty as senators grilled his counterparts.

While Page stays out of the limelight, he still enjoys attending dinners and waxing poetic about technology and philosophy. A few years ago a friend found himself seated next to Page at one such dinner, and he relayed the story to me: Page was talking about the progression of technology and how it was inevitable that humans would eventually create superintelligent machines, also known as artificial general intelligence (AGI): computers that are smarter than humans. In Page's view, once that happened, those machines would quickly find no use for us humans, and they would simply get rid of us.

"What do you mean, get rid of us?" my friend asked Page.

Like a sci-fi writer delivering a pitch for their new apocalyptic story idea, Page explained that these robots would become far superior to us very quickly, and that if we were no longer needed on Earth, then that's the natural order of things, and, I quote, "it's just the next step in evolution." At first my friend assumed Page was joking. "I'm serious," said Page. When my friend argued that this was a really fucked-up way of thinking about the world, Page grew annoyed and accused him of being "specist."

Over the years, I've heard a few other people relay stories like this about Page. One of them was Musk, in a Fox News interview earlier this year. He explained that he used to be close with Page but that they no longer talked after a debate in which Page called Musk "specist" too. "My perception was that Larry was not taking AI safety seriously enough," Musk said. "He really seems to want digital superintelligence, basically digital God, if you will, as soon as possible."

Let's just stop for a moment and unpack this. Larry Page, the founder of one of the world's biggest companies, a company that employs thousands of engineers who are building artificial intelligence machines right now, as you read this, believes that AI will, and should, become so smart and so powerful and so formidable that one day it won't need us dumb, pathetic little humans anymore, and that it will, and it should, GET RID OF US!

See the article here:

Artificial Intelligence May Be Humanity's Most Ingenious Invention ... - Vanity Fair