Category Archives: AlphaGo

OpenAI of Sam Altman Scores Most Visible In TIME100 Most … – Cryptopolitan

TIME magazine recently unveiled its highly anticipated 2023 TIME100 Most Influential Companies list, showcasing businesses and leaders that are shaping the future. Among the diverse group of companies are top AI companies performing well in their distinctive artificial intelligence (AI) niches. TIME magazine has been one of the most authoritative and informative guides to what is happening in politics, business, health, science and entertainment, written without any editorial slant. TIME has one of the world's largest circulations for a weekly news magazine.

Some of the lists, including Best Inventions and TIME100 Companies, have a paid application process through which companies may submit their products and organizations for editorial consideration. The magazine was initially proposed under the name Facts, to distinguish it from opinion; the founders changed the name to Time and used the slogan "Take Time. It's Brief."

To create the TIME100 Companies list, TIME's editors, led by Emma Barker, seek nominations from across sectors and poll the magazine's global network of contributors and correspondents, as well as outside experts. They then evaluate each on key factors, including impact, innovation, ambition, and success. The result is a diverse group of businesses helping chart an essential path forward.

You may read the whole vetting process here. Jacobs finds that the most visible among them is OpenAI and its CEO, Sam Altman. He further explains that their ChatGPT program has rocketed in popularity, hitting 100 million active users in two months. (It took Instagram two years.) TIME's former editor-in-chief Edward Felsenthal visited Altman and his colleagues in San Francisco in May. Here are the top six AI companies included in the list for your information and evaluation.

OpenAI, co-founded by Elon Musk and Sam Altman, is an esteemed AI research organization with a focus on the development of artificial general intelligence (AGI) for the betterment of humanity. The company is widely recognized for its cutting-edge language models, such as GPT-3, which are instrumental in various domains, including natural language understanding, translation, and content generation.

OpenAI, based in the United States, has garnered considerable recognition for its notable contributions to the field of AI research and its extensive collaborations with leading technology companies. Renowned for its cutting-edge advancements and breakthroughs in artificial intelligence, OpenAI has positioned itself as a prominent player in shaping the future of AI technology. Its impressive market performance underscores the company's influential role in driving innovation and fostering collaborations within the AI community.

Nvidia is a leading company renowned for its expertise in graphics processing units (GPUs) and AI hardware solutions, making significant contributions to the advancement of AI computing. Their high-performance GPUs are extensively utilized in various AI applications, notably in deep learning and neural network training. The versatility of Nvidia's GPUs extends to industries like autonomous vehicles, healthcare, and scientific research, where they play a pivotal role in accelerating AI-driven innovations.

Nvidia, a prominent United States-based company, has achieved remarkable growth and is widely recognized as a key player in the AI hardware market. With its strong presence in the gaming industry and a focus on accelerating AI computing, Nvidia has solidified its position as a leading provider of graphics processing units (GPUs) and AI hardware solutions. Its impressive market performance further establishes Nvidia as a significant contributor to the advancement of AI technologies.

DeepMind, a subsidiary of Alphabet Inc. headquartered in the United Kingdom, is an esteemed AI research lab at the forefront of developing general-purpose AI systems. Known for its groundbreaking achievements, including the triumph of AlphaGo against world champion Go players, DeepMind continues to push the boundaries of AI innovation. The company's diverse range of applications spans vital domains such as healthcare, robotics, and climate change, showcasing its commitment to leveraging AI for transformative advancements.

DeepMind has established itself as a dominant force in the AI industry, driven by its pioneering research and groundbreaking technological advancements. Renowned for its cutting-edge innovations, DeepMind has emerged as a key player in the field, attracting a wide range of collaborations and investments from diverse sectors. Its exceptional market performance reflects the company's influential role in pushing the boundaries of AI capabilities and forging strategic partnerships to accelerate the adoption of AI technologies across industries.

Based in New York, Hugging Face is a leading expert in natural language processing (NLP) and conversational AI. Renowned for its expertise, the company has developed the widely acclaimed Hugging Face Transformers library, which provides developers and researchers with access to pre-trained models for diverse NLP tasks. With their tools gaining widespread adoption, Hugging Face continues to drive innovation in the field of NLP, empowering professionals to unlock the potential of conversational AI.

Hugging Face has achieved remarkable market performance with its user-friendly natural language processing (NLP) solutions. The company's innovative approach has garnered widespread adoption within the AI community, positioning Hugging Face as a prominent and influential player in the field.

Headquartered in London, United Kingdom, Metaphysic is a leading AI company specializing in computer vision and intelligent systems. With a strong focus on innovation, the company develops cutting-edge technologies for object recognition, video analysis, and autonomous driving. Metaphysic's advanced products have gained recognition and are widely adopted across industries, ranging from retail to transportation, where they enable groundbreaking advancements and transformative solutions.

Metaphysic has exhibited exceptional market performance through its expertise in computer vision. The company's outstanding capabilities in this field have earned it recognition and strategic partnerships, establishing Metaphysic as a prominent player within the AI industry.

Based in New York, Runway is an AI company dedicated to democratizing AI tools and making them accessible to a wide range of users. Their platform simplifies the utilization of complex AI models, allowing individuals to create AI-powered applications even without extensive coding expertise. Runway's user-friendly approach has garnered significant attention among artists, designers, and creative professionals, enabling them to seamlessly integrate AI technologies into their work and unlock new possibilities for innovation and expression. With a strong market performance in the United States, Runway continues to empower individuals across various industries to harness the potential of AI.

The top six AI companies on TIME's 2023 Most Influential Companies list represent a diverse group of organizations driving innovation and shaping the future of AI technology. These companies, OpenAI, Nvidia, Google DeepMind, Hugging Face, Metaphysic, and Runway, have made significant contributions to AI research and development, spanning various domains such as language models, AI hardware, general-purpose AI systems, NLP, computer vision, and democratizing AI tools. Their market performances reflect their influential roles in advancing AI capabilities and fostering collaborations across industries.

Disclaimer. The information provided is not trading advice. Cryptopolitan.com holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.

Go here to read the rest:
OpenAI of Sam Altman Scores Most Visible In TIME100 Most ... - Cryptopolitan

Google at I/O 2023: We've been doing AI since before it was cool – Ars Technica

Google CEO Sundar Pichai explains some of the company's many new AI models. [Image: Google]

That Google I/O show sure was something, wasn't it? It was a rip-roaring two hours of nonstop AI talk without a break. Bard, PaLM, Duet, Unicorn, Gecko, Gemini, Tailwind, Otter: there were so many cryptic AI code names thrown around that it was hard to keep track of what Google was talking about. A glossary really would have helped. The highlight was, of course, the hardware, but even that was talked about as an AI delivery system.

Google is in the midst of a total panic over the rise of OpenAI and its flagship product, ChatGPT, which has excited Wall Street and has the potential to steal some queries people would normally type into Google.com. It's an embarrassing situation for Google, especially for its CEO Sundar Pichai, who has been pitching an "AI first" mantra for about seven years now and doesn't have much to show for it. Google has been trying to get consumers excited about AI for years, but people only seemed to start caring once someone other than Google took a swing at it.

Even more embarrassing is that the rise of ChatGPT was built on Google's technology. The "T" in "ChatGPT" stands for "transformer," a neural network technique Google invented in 2017 and never commercialized. OpenAI took Google's public research, built a product around it, and now uses that product to threaten Google.

In the months before I/O, Pichai issued a "Code Red" warning across the company, saying that ChatGPT was something Google needed to fight, and it even dragged its co-founders, Larry Page and Sergey Brin, out of retirement to help. Years ago, Google panicked over Facebook and mandated that all employees build social features in Google's existing applications. And while that was a widely hated initiative that eventually failed, Google is dusting off that Google+ playbook to fight OpenAI. It's now reportedly mandated that all employees build some kind of AI feature into every Google product.

"Mandatory AI" is certainly what Google I/O felt like. Each section of the presentation had some division of Google give a book report on the New AI Thing they have been working on for the past six months. Google I/O felt more like a presentation for Google's managers rather than a show meant to excite developers and consumers. The AI directive led to ridiculous situations like Android's head of engineering going on stage to talk only about an AI-powered poop emoji wallpaper generator rather than any meaningful OS improvements.

Wall Street investors were apparently one group excited by Google I/O: the company's stock jumped 4 percent after the show. Maybe that was the point of all of this.

Would you believe Google Assistant got zero mentions at Google I/O? This show was exclusively about AI, and Google didn't mention its biggest AI product. Pichai's seminal "AI First" blog post from 2016 is about Google Assistant and features an image of Pichai in front of the Google Assistant logo. Google highlighted past AI projects like Gmail's Smart Reply and Smart Compose, Google Photos' magic eraser and AI-powered search, DeepMind's AlphaGo, and Google Lens, but Google Assistant could not manage a single mention. That seemed entirely on purpose.

Heck, Google introduced a product that was a follow-up to the Nest Hub Google Assistant smart display, the Pixel Tablet, and Google Assistant still couldn't get a mention. At one point, the presenter even said the Pixel Tablet had a "voice-activated helper."

Google's avoidance of Google Assistant at I/O seemed like a further deprioritization of what used to be its primary AI product. The Assistant's last major speaker/display product launch was two years ago in March 2021. Since then, Google shipped hardware that dropped Assistant support from Nest Wi-Fi and Fitbit, and it disabled Assistant commands on Waze. It lost a patent case to Sonos and stripped away key speaker functionality, like controlling the volume, from the cast feature. Assistant Driving Mode was shut down in 2022, and one of the Assistant's biggest features, reminders, is getting shut down in favor of Google Tasks Reminders.

The Pixel Tablet sure seemed like it was supposed to be a new Google Assistant device since it looks exactly like all of the other Google Assistant devices, but Google shipped it without a dedicated smart display interface. It seems like it was conceived when the Assistant was a viable product at Google and then shipped as leftover hardware when Assistant had fallen out of favor.

The Google Assistant team has reportedly been asked to stop working on its own product and focus on improving Bard. The Assistant hasn't really ever made money in its seven years; the hardware is all sold at cost, voice recognition servers are expensive to run, and Assistant doesn't have any viable post-sale revenue streams like ads. Anecdotally, it seems like the power for those voice recognition servers is being turned way down, as Assistant commands seem to take several seconds to process lately.

The Google I/O keynote transcript counts 19 uses of the word "responsible" in reference to Google's rollout of AI. Google is trying to draw some kind of distinction between itself and OpenAI, which got to where it is by being a lot more aggressive in its rollout compared to Google. My favorite example of this was OpenAI's GPT-4 arrival, which came with the surprise announcement that it had been running as a beta on production Bing servers for weeks.

Google's sudden lip service toward responsible AI use seems to run counter to its actions. In 2021 Google's AI division famously pushed out AI ethics co-head Dr. Timnit Gebru for criticizing Google's diversity efforts and trying to publish AI research that didn't cast Google in a positive-enough light. Google then fired its other AI ethics co-head, Margaret Mitchell, for writing an open letter supportive of Gebru and co-authoring the contentious research paper.

In the run-up to the rushed launch of Bard, Google's answer to ChatGPT, a Bloomberg report claims that Google's AI ethics team was "disempowered and demoralized" so Google could get Bard out the door. Employees testing the chatbot said some of the answers they received were wrong and dangerous, but employees bringing up safety concerns were told they were "getting in the way" of Google's "real work." The Bloomberg report says AI ethics reviews are "almost entirely voluntary" at Google.

Google has seemingly already second-guessed its all-AI, all-the-time strategy. A Business Insider report details a post-I/O company meeting where one employee's question to Pichai nails my feelings after Google I/O: "Many AI goals across the company focus on promoting AI for its own sake, rather than for some underlying benefit." The employee asks how Google will "provide value with AI rather than chasing it for its own sake."

Pichai reportedly replied that when Googlers' current OKRs (objectives and key results, basically your goals as an employee) were written, it was during an "inflection point" around AI. Now that I/O is over, Pichai said, "I think one of the things the teams are all doing post-I/O is re-looking. Normally we don't do this, but we are re-looking at the OKRs and adapting it for the rest of the year, and I think you will see some of the deeper goals reflected, and we'll make those changes over the upcoming days and weeks."

So the AI "Code Red" was in January, and now it's May, and Google's priorities are already being reshuffled? That tracks with Google's history.

Visit link:
Google at I/O 2023: We've been doing AI since before it was cool - Ars Technica

Jack Gao: Prepare for profound AI-driven transformations – China.org

On March 14, Dr. Jack Gao, CEO of Smart Cinema and former president of Microsoft China, was left amazed after watching the livestream of GPT-4's press conference. He was stunned by what the chatbot is able to do.

Jack Gao delivers a keynote speech on artificial intelligence during a summit forum before the 14th Chinese Nebula Awards gala in Guanghan, Sichuan province, May 13, 2023. [Photo courtesy of EV/SFM]

"I was so excited and couldn't calm down for a whole week. During that time, Baidu also released its own Ernie Bot, and Alibaba followed with Tongyi Qianwen. There are more AI bots to come, such as the one from Google," Gao told China.org.cn, adding that he later engaged in conversations with insiders from various industries to get a clear understanding of the bigger picture.

Last weekend, he discussed this topic at China's top sci-fi event, the 14th Chinese Nebula Awards, where he also delivered a keynote speech and sought feedback from China's most prominent sci-fi writers, who have frequently envisioned the future and portrayed artificial intelligence (AI) in their novels.

"The era of AI has arrived. I have an unprecedented feeling knowing that it can pass the lawyers' exam with high scores and even possess a common sense that was previously exclusive to humans," Gao said. "When AI becomes another intelligent brain in our lives and has the potential to develop consciousness for the benefit of the entire human race, its intelligence will expand infinitely."

The profound changes will come quickly, according to his vision. AI could directly handle many aspects of human life, everything from translation and communication to medical diagnoses, lawsuits, and creative jobs. This could bring greater efficiency and upgrades to current industries, but it also raises concerns.

Some have already recognized the threats, like Hollywood scriptwriters who went on strike in early May due to concerns about AI "generative" text and image tools impacting their jobs and incomes. Tech giants have also laid off numerous employees after embracing AI technologies. Geoffrey Hinton, widely regarded as the "godfather" of AI, departed from Google and raised warnings about the potential dangers of AI chatbots, emphasizing their potential to surpass human intelligence in the near future. Hinton also cautioned against the potential misuse of AI by "bad actors" that could have harmful consequences for society.

"When I was a student 40 years ago, our wildest imaginations couldn't compare to what we have today. Technology has fundamentally transformed our lives," Gao said. The man has an awe-inspiring profile in both the tech and media industries, having served as a top executive at Autodesk Inc., Microsoft, News Corp., and Dalian Wanda Group. He has witnessed numerous significant technological advancements over the decades, from PC computers and the internet to big data, which have brought about great changes to the world.

When Google's AlphaGo AI defeated the world's number one Go player, Ke Jie, people began to recognize the power of AI, although they initially thought its impact was limited to the realm of Go. "But what if there's an 'AlphaGo' in every industry?" Gao mused. "What can humans do, and how can they prevail? Imagine a scenario where you have your own 'AlphaGo' while others do not. This is the reality we are facing, and we must take it seriously."

He believes that the digital gap between machines and humans has been bridged so that AI bots can interact with humans through chat interfaces without the need for programmers to write code. He also believes that when large language models reach a sufficient scale, new chemical sparks will ignite, leading to new miracles of some kind. "You have to understand that language is the foundational layer and operating system of human civilization and ecology."

"Based on my experience using and learning from AI bots, I have also noticed an important factor: the quality of answers from chatbots depends on how you ask them. Our way of thinking will shift towards seeking answers because there are countless valuable answers in the world waiting for good questions," he said. He added that people should prepare themselves with optimism to understand, utilize, explore, and harness AI, making it a beneficial and integral part of their lives.

Gao's speech caused a stir at the sci-fi convention. After he finished, many sci-fi writers, including eminent figures like Han Song and He Xi, approached him to discuss further. "They told me that after listening to my speech, they had a more personal understanding of how AI will truly impact our lives and work. The technology is already here, and we have no choice but to actively explore and embrace it, adapting to the changes."

See the article here:
Jack Gao: Prepare for profound AI-driven transformations - China.org

Terence Tao Leads White House's Generative AI Working Group … – Pandaily

On May 13th, Terence Tao, an award-winning Australia-born Chinese mathematician, announced that he and physicist Laura Greene will co-chair a working group of the President's Council of Advisors on Science and Technology (PCAST) studying the impacts of generative artificial intelligence technology. The group will hold a public meeting during the PCAST conference on May 19th, where Demis Hassabis, founder of DeepMind and creator of AlphaGo, and Stanford University professor Fei-Fei Li, among others, will give speeches.

According to Terence Tao's blog, the group mainly researches the impact of generative AI technology on scientific and social fields, including text-based large language models such as ChatGPT, image generators like DALL-E 2 and Midjourney, and scientific application models for protein design or weather forecasting. It is worth mentioning that Lisa Su, CEO of AMD, and Phil Venables, chief information security officer of Google Cloud, are also members of this working group.

According to an article posted on the official website of the White House, PCAST develops evidence-based recommendations for the President on matters involving science, technology, and innovation policy, as well as on matters involving scientific and technological information that is needed to inform policy affecting the economy, worker empowerment, education, energy, the environment, public health, national and homeland security, racial equity, and other topics.

After the emergence of ChatGPT, top mathematicians like Terence Tao also paid great attention to it and began exploring how artificial intelligence could help them complete their work. In an article in the journal Nature titled "How will AI change mathematics? Rise of chatbots highlights discussion," Andrew Granville, a number theorist at McGill University in Canada, said that they are studying a very specific question: will machines change mathematics? Mathematician Kevin Buzzard agrees, saying that even Fields Medal winners and other very famous mathematicians are now interested in this field, which shows that it has become popular in an unprecedented way.

Previously, Terence Tao wrote on the decentralized social network Mastodon: "Today was the first day that I could definitively say that #GPT4 has saved me a significant amount of tedious work." In his experimentation, Tao discovered many useful capabilities of ChatGPT, such as searching for formulas, parsing documents with code formatting, rewriting sentences in academic papers, and sometimes even semantically searching incomplete math problems to generate hints.

See the article here:
Terence Tao Leads White House's Generative AI Working Group ... - Pandaily

Purdue President Chiang to grads: Let Boilermakers lead in … – Purdue University

Purdue President Mung Chiang made these remarks during the university's Spring Commencement ceremonies May 12-14.

Opening

Today is not just any graduation but the commencement at a special place called Purdue, with a history that is rich and distinct and an accelerating momentum of excellence at scale. There is nothing more exciting than to see thousands of Boilermakers celebrate a milestone in your lives with those who have supported you. And this commencement has a special meaning to me as my first in the new role serving our university.

President Emeritus Mitch Daniels gave 10 commencement speeches, each an original treatise, throughout the Daniels Decade. I was tempted to simply ask generative AI engines to write this one for me. But I thought it'd be more fun to say a few thematic words by a human for fellow humans before that becomes unfashionable.

AI at Purdue

Sometime back in the mid-20th century, AI was a hot topic for a while. Now it is again; so hot that no computation is too basic to self-anoint as AI and no challenge seems too grand to be out of its reach. But the more you know how tools such as machine learning work, the less mysterious they become.

For the moment, let's assume that AI will finally be transformational to every industry and to everyone: changing how we live, shaping what we believe in, displacing jobs. And disrupting education.

Well, after IBM's Deep Blue beat the world champion, we still play chess. After calculators, children are still taught how to add numbers. Human beings learn and do things not just as survival skills, but also for fun, or as a training of our mind.

That doesn't mean we don't adapt. Once calculators became prevalent, elementary schools pivoted to translating real-world problems into math formulations rather than training for the speed of adding numbers. Once online search became widely available, colleges taught students how to properly cite online sources.

Some have explored banning AI in education. That would be hard to enforce; it's also unhealthy, as students need to function in an AI-infused workplace upon graduation. We would rather Purdue evolve: teaching AI and teaching with AI.

That's why Purdue offers multiple major and minor degrees, fellowships and scholarships in AI and in its applications. Some will be offered as affordable online credentials, so please consider coming back to get another Purdue degree and enjoy more final exams!

And that's why Purdue will explore the best way to use AI in serving our students: to streamline processes and enhance efficiency so that individualized experiences can be offered at scale in West Lafayette. Machines free up human time so that we can do less and watch Netflix on a couch, or we can do more and create more with the time saved.

Pausing AI research is even less practical, not least because AI is not a well-defined, clearly demarcated area in isolation. All universities and companies around the world would have to stop any research that involves math. My Ph.D. co-advisor, Professor Tom Cover, did groundbreaking work in the 1960s on neural networks and statistics, not realizing those would later become useful in what others call AI. We would rather Purdue advance AI research with nuanced appreciation of the pitfalls, limitations and unintended consequences in its deployment.

That's why Purdue just launched the universitywide Institute of Physical AI. Our faculty are the leaders at the intersection of virtual and physical, where the bytes of AI meet the atoms of what we grow, make and move, from agriculture tech to personalized health care. Some of Purdue's experts develop AI to check and contain AI, through privacy-preserving cybersecurity and fake video detection.

Limitations and Limits

As it stands today, AI is good at following rules, not breaking rules; reinforcing patterns, not creating patterns; mimicking whats given, not imagining beyond their combinations. Even individualization algorithms, ironically, work by first grouping many individuals into a small number of similarity classes.

At least for now, the more we advance artificial intelligence, the more we marvel at human intelligence. Deep Blue vs. Kasparov, or AlphaGo vs. Lee, were not fair comparisons: the machines used four orders of magnitude more energy per second! Both the biological mechanisms that generate energy from food and the amount of work we do per joule must be the envy of machines. Can AI be as energy efficient as it is fast? Can it take in energy sources other than electricity? When someday it does, and when combined with sensors and robotics that touch the physical world, you'd have to wonder about the fundamental differences between humans and machines.

Can AI, one day, make AI? And stop AI?

Can AI laugh, cry and dream? Can it contain multitudes and contradictions like Walt Whitman?

Will AI be aware of itself, and will it have a soul, however awareness and souls are defined? Will it also be T.S. Eliot's "infinitely suffering things"?

Where does an AI life start and stop anyway? What constitutes the identity of one AI, and how can it live without having to die? Indeed, if the memory and logic chips sustain and merge, is AI all collectively one life? And if AI duplicates a humans mind and memory, is that human life going to stay on forever, too?

These questions will stay hypothetical until breakthroughs arrive that are more architectural than just compounding silicon chips' speed and feeding exploding data to black-box algorithms.

However, if, given sufficient time, some of these questions are bound to eventually become real as a matter of time, what then is uniquely human? What would still be artificial about artificial intelligence? Some of that eventuality might, with bumps and twists, show up faster than we had thought. Perhaps in your generation!

Freedoms and Rights

If Boilermakers must face these questions, perhaps it does less harm to consider off switches controlled by individual citizens than a ban by some bureaucracy. May the medicine be no worse than the disease, and regulations by government agencies not be granular or static, for governments don't have a track record of understanding fast-changing technologies, let alone micromanaging them. Some might even argue that government access to data and arbitration of algorithms counts among the most worrisome uses of AI.

What we need are basic guardrails of accountability, in data usage compensation, intellectual property rights and legal liability.

We need skepticism in scrutinizing the dependence of AI engines output on their input. Data tends to feed on itself, and machines often give humans what we want to see.

We need to preserve dissent even when its inconvenient, and avoid philosopher kings dressed in AI even when the alternative appears inefficient.

We need entrepreneurs in free markets to invent competing AI systems and independently maximize choices outside the big tech oligopoly. Some of them will invent ways to break big data.

Where, when and how is data collected, stored and used? Like many technologies, AI is born neutral but suffers the natural tendency of being abused, especially in the name of the collective good. Todays most urgent and gravest nightmare of AI is its abuse by authoritarian regimes to irreversibly lock in the Orwellian 1984: the surveillance state oppressing rights, aided and abetted by AI three-quarters of a century after that bleak prophecy.

We need verifiable principles of individual rights, reflecting the Constitution of our country, in the age of data and machines around the globe. For example, MOTA:

My worst fear about AI is that it shrinks individual freedom. Our best hope for AI is that it advances individual freedom. That it presents more options, not more homogeneity. That the freedom to choose and free will still prevail.

Let us preserve the rights that survived other alarming headlines in centuries past.

Let our students sharpen the ability to doubt, debate and dissent.

Let a university, like Purdue, present the vista of intellectual conflicts and the toil of critical thinking.

Closing

Now, about asking AI engines to write this speech. We did ask it to write a commencement speech for the president of Purdue University on the topic of AI, after I finished drafting my own.

I'm probably not intelligent enough, or didn't trust the circular clichés on the web, but what I wrote had almost no overlap with what AI did. I might be biased, but the AI version reads like a B- high school essay, a grammatically correct synthesis with little specificity, originality or humor. It's so toxically generic that even adding a human in the loop to build on it proved futile. It's so boring that you would have fallen asleep even faster than you just did. By the way, you can wake up now: I'm wrapping up at last.

Maybe most commencement speeches and strategic plans sound about the same: universities have made it too easy for language models! Maybe AI can remind us to try and be a little less boring in what we say and how we think. Maybe bots can murmur "Don't you ChatGPT me" whenever we're just echoing in an ever smaller and louder echo chamber, down to the templated syntax and tired words. Smarter AI might lead to more interesting humans.

Well, there were a few words of overlap between my draft and AI's. So, here's from both some bytes living in a chip and a human Boilermaker to you all on this 2023 Purdue Spring Commencement: Congratulations, and Boiler Up!

Read more:
Purdue President Chiang to grads: Let Boilermakers lead in ... - Purdue University

The circle of life works for AI, too – BusinessLine

ChatGPT has almost colonised discussions on artificial intelligence. High school children are excited about getting their homework done by ChatGPT.

But such excitement over new technology is not new. Just a few years ago, there was excitement about AI competing against humans: at Go with AlphaGo, on the American quiz television show Jeopardy!, and at chess with Deep Blue. AI was seen as an ultimate technology that would soon improve human life and reduce suffering.

But as with any other journey, the AI path has also been full of challenges and failures. Many tech companies have seen initiatives fail: IBM's Watson Health, Tesla's Autopilot crashes, and many more.

Organisations have made failure itself a preferred way of working. "Fail fast" is the way forward for AI. This ensures that with or without success in AI, financial success and continuity are assured. The list of companies working on AI technologies is increasing by the day, as are the technologies being developed.

The focus on fail-fast innovation has helped advance technologies. As the well-known author Yuval Harari wrote, humans will learn the workings of the brain but will still not understand the mind. The AI mind is still unknown; given the multiple directions in which AI progress is happening, convergence is challenging and there is chaos all around. There is increasing acceptance of multiple views of truth.

While humans will continue to make progress in understanding the workings of the brain, it is possible that a complete understanding of the mind and body may remain elusive.

The Hindu scriptures provide some guidance. The circle of life has worked for humans, and it will continue for AI, which will see innovation, preservation of a few innovations, and a few failures. However, the cycle will continue perpetually.

The moksha of AI development needs good karma, powered with "peacefulness, self-control, austerity, purity, tolerance, honesty, wisdom, knowledge, and religiousness; these are the qualities by which the brahmanas work" (Bhagavad Gita 18.42).

A few decades down the line, when the full human DNA is uncovered, when there's super-computing power in every mobile, when AI is able to recreate mind and body, and so on, new challenges will come up.

The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated AI of computers might only serve to empower the natural stupidity of humans. The way forward is to control the chaos in the human mind, not imitate it with AI.

The writer is Deputy General Manager, Industrial AI, Hitachi. Views are personal.

Read the original post:
The circle of life works for AI, too - BusinessLine

AI At War – War On The Rocks

Paul Scharre, Four Battlegrounds: Power in the Age of Artificial Intelligence (New York: W. W. Norton & Company, 2023).

It is widely believed that the world is on the brink of another military revolution. AI is about to transform the character of warfare, as gunpowder, tanks, aircraft, and the atomic bomb did in previous eras. Today, states are actively seeking to harness the power of AI for military advantage. China, for instance, has announced its intention to become the world leader in AI by 2030. Its New Generation AI Plan proclaimed: "AI is a strategic technology that will lead the future." Similarly, Russian President Vladimir Putin declared: "Whoever becomes the leader in this sphere will become ruler of the world." In response to the challenge posed by China and Russia, the United States has committed to a third offset strategy. It will invest heavily in AI, autonomy, and robotics to sustain its advantage in defense.

In light of these dramatic developments, military commentators have become deeply interested in the question of the military application of AI. For instance, in a recent monograph, Ben Buchanan and Andrew Imbrie have claimed that AI is "the new fire." Autonomous weapons controlled by AI, not by humans, will become increasingly accurate, rapid, and lethal. They represent the future of war. Many other scholars and experts concur. For instance, Stuart Russell, the eminent computer scientist and AI pioneer, dedicated one of his 2021 BBC Reith Lectures to the military potential of AI. He warned of the rise of "slaughterbots" and killer robots, describing a scenario in which a lethal quad-copter the size of a jar could be armed with an explosive device: anti-personnel mines of this kind could wipe out all the males in a city between 16 and 60, or all the Jewish citizens in Israel, and unlike nuclear weapons, they would leave the city infrastructure intact. Russell concluded: "There will be 8 million people wondering why you can't give them protection against being hunted down and killed by robots." Many other scholars, including Christian Brose, Ken Payne, John Arquilla, David Hambling, and John Antal, share Russell's belief that with the development of second-generation AI, lethal autonomous weapons such as killer drone swarms may be imminent.

Military revolutions have often been less radical than initially presumed by their advocates. The revolution in military affairs of the 1990s was certainly important in opening up new operational possibilities, but it did not eliminate uncertainty. Similarly, some of the debate about lethal autonomy and AI has been hyperbolic. It has misrepresented how AI currently works, and what its potential effects on military operations might, therefore, be in any conceivable future. Although remote and autonomous systems are becoming increasingly important, there is little chance of autonomous drone swarms substituting for troops on the battlefield, or supercomputers replacing human commanders. AI became a major research program in the 1950s. At that time, it operated on the basis of symbolic logic: programmers coded input for AI to process. This system was known as "good old-fashioned artificial intelligence." AI made some progress, but because it was based on the manipulation of assigned symbols, its utility was very limited, especially in the real world. An AI winter, therefore, closed in from the late 1970s and throughout the 1980s.

Since the late 1990s, second-generation AI has produced some remarkable breakthroughs on the basis of big data, massive computing power, and algorithms. There were three seminal events. On May 11, 1997, IBM's Deep Blue beat Garry Kasparov, the world chess champion. In 2011, IBM's Watson won Jeopardy!. Even more remarkably, in March 2016, AlphaGo beat the world champion Go player, Lee Sedol, 4-1.

Deep Blue, Watson, and AlphaGo were important waypoints on an extraordinary trajectory. Within two decades, AI had gone from disappointment and failure to unimagined triumphs. However, it is important to recognize what second-generation AI can and cannot do. It has been developed around neural networks. Machine learning programs process huge amounts of data through their networks, re-calibrating the weight that a program assigns to particular pieces of data until, finally, it generates coherent answers. The system is probabilistic and inductive. Programs and algorithms know nothing. They are unaware of the real world and, in a human sense, unaware of the meaning of the data they process. Using algorithms, machine learning AI simply builds models of statistical probability from massively reiterated trials. In this way, second-generation AI identifies multiple correlations in the data. As long as it has enough data, probabilistic induction has become a powerful predictive tool. Yet AI does not recognize causation or intention. Peter Thiel, a leading Silicon Valley tech entrepreneur, has articulated AI's limitations eloquently: "Forget science-fiction fantasy, what is powerful about actually existing AI is its application to relatively mundane tasks like computer vision and data analysis." Consequently, although machine learning is far superior to a human at limited, bounded, mathematizable tasks, it is very brittle. Utterly dependent on the data on which it has been trained, even the tiniest change in the actual environment or the data renders it useless.
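
To make the preceding paragraph concrete, here is a minimal, hypothetical sketch in Python (toy data and a single weight invented purely for illustration, not drawn from Scharre's book): a tiny model "learns" a hidden rule by repeatedly re-calibrating its weight against training data, looks competent on data resembling its training set, and collapses when the environment shifts. This is the statistical induction, and the brittleness, described above.

```python
# Toy illustration of inductive machine learning: a one-weight model
# re-calibrates itself against data until its answers look coherent.
# Hypothetical example with invented data; real systems use vast networks.
import random

random.seed(0)

def make_data(n, lo=0.0):
    # Inputs drawn from [lo, lo + 1]; the hidden rule labels x as 1.0
    # whenever it passes the midpoint of that interval.
    data = []
    for _ in range(n):
        x = lo + random.random()
        data.append((x, 1.0 if x > lo + 0.5 else 0.0))
    return data

train = make_data(1000)
w, b, lr = 0.0, 0.0, 0.1  # weight, bias, learning rate

for _ in range(50):                # massively reiterated trials
    for x, y in train:
        err = (w * x + b) - y      # prediction error on this example
        w -= lr * err * x          # re-calibrate the weight toward the data
        b -= lr * err              # re-calibrate the bias toward the data

def predict(x):
    return 1.0 if w * x + b > 0.5 else 0.0

# On data resembling the training set, statistical fit looks like competence.
test = make_data(200)
print(sum(predict(x) == y for x, y in test) / len(test))        # close to 1.0

# Brittleness: shift the environment (inputs now in [2, 3]) and the same
# model fails, because it learned correlations, not the underlying rule.
shifted = make_data(200, lo=2.0)
print(sum(predict(x) == y for x, y in shifted) / len(shifted))  # around 0.5
```

Nothing in the loop encodes the rule itself; the model only compresses correlations in its training distribution, which is why the shifted test fails.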

The brittleness of data-based inductive machine learning is very significant to the prospect of an AI military revolution. Proponents and opponents of AI imply that, in the near future, it will be relatively easy for autonomous drones to fly through, identify, and engage targets in urban areas, for instance. After all, autonomous drone swarms have already been demonstrated in admittedly contrived and controlled environments. However, in reality, it will be very hard to train a drone to operate autonomously for combat in land warfare. The environment is dynamic and complex, especially in towns and cities, where civilians and soldiers are intermixed. There do not seem to be any obvious data on which to train a drone swarm reliably; the situation is too fluid. Similarly, it is not easy to see how an algorithm could make command decisions. Command decisions require the interpretation of heterogeneous information, balancing political and military factors, all of which require judgement. In a recent article, Avi Goldfarb and Jon R. Lindsay have argued that data and AI are best for simple decisions with perfect data. Almost by definition, military command decisions involve complexity and uncertainty. It is notable that, while Google and Amazon are the pre-eminent data companies, their managers do not envisage a day when an algorithm will make their strategic and operational decisions for them. Data, processed rapidly with algorithms, helps their executives to understand the market to a depth and fidelity that their competitors cannot match. Information advantage has propelled them to dominance. However, machine learning has not superseded the executive function.

It is, therefore, very unlikely that lethal autonomous drones or killer robots enabled by AI will take over the battlefield in the near future. It is also improbable that commanders will be replaced by computers or supercomputers. However, this does not mean that AI, data, and machine learning are not crucial to contemporary and future military operations. They are. However, the function of AI and data is not primarily lethality; they are not the new fire, as some claim. Data, digitized information stored in cyberspace, is crucial because it provides states with a wider, deeper, and more faithful understanding of themselves and their competitors. When massive data sets are processed effectively by AI, this will allow military commanders to perceive the battlespace to a hitherto unachievable depth, speed, and resolution. Data and AI are also crucial for cyber operations and informational campaigns. They have become indispensable for defense and attack. AI and data are not so much the new fire as a new form of digitized military intelligence, exploiting cyberspace as a vast new resource for information. AI is a revolutionary way of seeing the other side of the hill. Data and AI are a critical, maybe even the critical, intelligence function for contemporary warfare.

Paul Scharre, the well-known military commentator, once argued that AI would inevitably lead to lethal autonomy. In 2019, he published his best-selling book, Army of None, which plotted the rise of remote and autonomous weapon systems. There, Scharre proposed that AI was about to revolutionize warfare: in future wars, machines may make life and death decisions. Even if the potential of AI still enthuses him, he has now substantially changed his mind. Scharre's new book, Four Battlegrounds, published in February 2023, represents a profound revision of his original argument. In it, he retreats from the cataclysmic picture that he painted in Army of None. If Army of None were an essay in science fiction, Four Battlegrounds is a work of political economy. It addresses the concrete issues of great-power competition and the industrial strategies and regulatory systems that underpin it. The book describes the implications of digitized intelligence for military competition. Scharre analyses the regulatory environment required to harness the power of data. He plausibly claims that superiority in data, and the AI to process it, will be militarily decisive in the superpower rivalry between the United States and China. Data will afford a major intelligence advantage. For Scharre, there are four critical resources that will determine who wins this intelligence race: "Nations that lead in these four battlegrounds (data, compute, talent, and institutions [tech companies]) will have a major advantage in AI power." He argues that the United States and China are locked into a mortal struggle for these four resources. Both China and the United States are now fully aware that whoever gains the edge in AI will be significantly advantaged politically, economically, and, crucially, militarily. They will know more than their adversary. They will be more efficient in the application of military force. They will dominate the information and cyber spaces. They will be more lethal.

Four Battlegrounds plots this emerging competition for data and AI between China and the United States. It lays out recent developments and assesses the relative strengths of both nations. China is still behind the United States in several areas. The United States has the leading talent and is ahead in terms of research and technology; China is a backwater in chip production. However, Scharre warns against U.S. complacency. Indeed, the book is animated by the fear that the United States will fall behind in the data race. Scharre, therefore, highlights China's advantages and its rapid advances. With 900 million internet users already, China has far more data than the United States. Some parts of the economy, such as ride-hailing, are far more digitized than in the United States. WeChat, for instance, has no American parallel. Many Chinese apps are superior to U.S. ones. In addition, the Chinese state is also uninhibited by legal constraints or by civil concerns about privacy. The Chinese Communist Party actively monitors the digital profiles of its citizens; it harvests their data and logs their activities. In cities, it employs facial recognition technology to identify individuals.

State control has benefited Chinese tech companies: "The CCP's massive investment in intelligence surveillance and social control boosted Chinese AI companies and tied them close to government." The synergies between government and tech in China are close. China also has significant regulatory advantages over the United States. The Chinese Communist Party has underwritten tech giants like Baidu and Alibaba; Chinese investment in technology is paying dividends. Scharre concludes: "China is not just forging a new model of digital authoritarianism but is actively exporting it."

How will the U.S. government oppose China's bid for data and AI dominance? Here Four Battlegrounds is very interesting, and it contrasts markedly with Scharre's speculations in Army of None. In order for the U.S. government to be able to harness the military potential of data, there needs to be a major regulatory change. The armed forces need to form deep partnerships with the tech sector. They will have to look beyond traditional defense contractors and engage with start-ups. This is not easy. Scharre documents the challenging regulatory environment in the United States in comparison with China: in the U.S., the big tech corporations Amazon, Apple, Meta (formerly Facebook), and Google are independent centers of power, often at odds with government on specific issues. Indeed, Scharre discusses the notorious protest at Google in 2018, when employees refused to work on the Department of Defense's Project Maven contract. Skepticism about military applications of AI remains in some parts of the U.S. tech sector.

American tech companies may have been reluctant to work with the armed forces, but the Department of Defense has not helped. It has unwittingly obstructed military partnerships with the tech sector. The Department of Defense has always had a close relationship with the defense industry. For instance, in 1961, President Dwight D. Eisenhower warned about the threat that the military-industrial complex posed to democracy. The Department of Defense has developed an acquisition and contracting process that has been primarily designed for the procurement of exquisite platforms: tanks, ships, and aircraft. Lockheed Martin and Northrop Grumman have become adept at delivering weapon systems to discrete Department of Defense specifications. Tech companies do not work like this. As one of Scharre's interviewees noted: "You don't buy AI like you buy ammunition." Tech companies are not selling a specific capability, like a gun. They are selling data, software, computing power; ultimately, they are selling expertise. Algorithms and programs are best developed iteratively in relation to a very specific problem. The full potential of some software or algorithms for a military task may not be immediately obvious, even to a tech company. Operating in competitive markets, tech companies, therefore, prefer a more flexible, open-ended contractual system with the Department of Defense; they need security and quick financial returns. Tech companies are looking for collaborative engagement, rather than just a contract to build a platform.

The U.S. military, and especially the Department of Defense, has not always found this novel approach to contracting easy. In the past, the bureaucracy was too sluggish to respond to tech companies' needs; the acquisition process took seven to 10 years. However, although many tensions exist and the system is far from perfect, Scharre records a transforming regulatory environment. He describes the rise of a new military-tech complex in the United States. Project Maven, of course, exemplifies the process. In 2017, Bob Work issued a now-famous memo announcing the Algorithmic Warfare Cross-Functional Team, Project Maven. Since the emergence of surveillance drones and military satellites during the Global War on Terror, the U.S. military had been inundated with full-motion video feeds. That footage was invaluable. For instance, using Gorgon Stare, a 24-hour aerial surveillance system, the U.S. Air Force had been able to plot back from a car bomb explosion in Kabul in 2019, which killed 126 civilians, to find the location of safe houses used to execute the attack. Yet the process was very slow for humans. Consequently, the Air Force started to experiment with computer vision algorithms to sift through their full-motion videos. Project Maven sought to scale up the Air Force's successes. It required a new contracting environment, though. Instead of a long acquisition process, Work introduced 90-day sprints. Companies had three months to show their utility. If they made progress, their contracts were extended; if not, they were out. At the same time, Work de-classified drone footage so that Project Maven could train its algorithms. By July 2017, Project Maven had an initial operating system, able to detect 38 different classes of object. By the end of the year, it was deployed on operations against ISIS: the tool was relatively simple, and identified and tracked people, vehicles, and other objects in video from ScanEagle drones used by special operators.

Since Project Maven, the Department of Defense has introduced some other initiatives to catalyze military-tech partnerships. The Defense Innovation Unit has accelerated relations between the department and companies in Silicon Valley, offering contracts in 26 days rather than in months or years. In its first five years, the Defense Innovation Unit issued contracts to 120 non-traditional companies. Under Lt. Gen. Jack Shanahan, the Joint Artificial Intelligence Center played an important role in advancing the partnership between the armed forces and tech companies for humanitarian assistance and disaster relief operations, developing software to map wildfires and support post-disaster assessments; whether these examples in Scharre's text imply more military applications is unclear. After early difficulties, the Joint Enterprise Defense Infrastructure, created by Gen. James Mattis when he was secretary of defense, has reformed the acquisition system for tech. For instance, in 2021, the Department of Defense helped Anduril develop an AI-based counter-drone system with nearly $100 million.

Four Battlegrounds is an excellent and informative addition to the current literature on AI and warfare. It complements the recently published works of Lindsay, Goldfarb, Benjamin Jensen, Christopher Whyte, and Scott Cuomo. The central message of this literature is clear. Data and AI are and will be very important for the armed forces. However, data and AI will not radically transform combat itself; humans will still overwhelmingly operate the lethal weapon systems, including remote ones, which kill people, as the savage war in Ukraine shows. The situation in combat is complex and confusing. Human judgement, skill, and cunning are required to employ weapons to their greatest effect there. However, any military force that wants to prevail on the battlefields of the future will need to harness the potential of big data; it will have to master the digitized information flooding through the battlespace. Humans simply do not have the capacity to do this. Headquarters will, therefore, need algorithms and software to process that data. They will need close partnerships with tech companies to create these systems, and data scientists, engineers, and programmers in operational command posts themselves to make them work. If the armed forces are able to do this, data will allow them to see across the depth and breadth of the battlespace. It will not solve the problems of military operations; fog and friction will persist. However, empowered by data, commanders might be able to employ their forces more effectively and efficiently. Data will enhance the lethality of the armed forces and their human combat teams. The Russo-Ukrainian War already gives a pre-emptive insight into the advantages that data-centric military operations afford over an opponent still operating in analogue. Scharre's book is a call to ensure that the fate of the Russian army in Ukraine does not befall the United States when its next war comes.

Anthony King is the Chair of War Studies at the University of Warwick. His latest book, Urban Warfare in the Twenty-First Century, was published by Polity Press in July 2021. He currently holds a Leverhulme Major Research Fellowship and is researching AI and urban operations. He plans to write a book on this topic in 2024.

Image: Department of Defense

See the original post:
AI At War - War On The Rocks

Call Me ‘DeepBrain’: Google Smushes DeepMind and Brain AI Teams Together – Yahoo News

South Korean professional Go player Lee Se-Dol (L) shakes hands with Demis Hassabis (R), co-founder of Google's artificial intelligence (AI) startup DeepMind, after finishing the final match of the Google DeepMind Challenge Match against Google's artificial intelligence program, AlphaGo, on March 15, 2016 in Seoul, South Korea.

Demis Hassabis (right), the co-founder of DeepMind, will now head up Google's major AI division. The London-based division's biggest claim to fame is AlphaGo, the AI-based Go-playing program that beat a world Go champion back in 2016.

Talk about a meeting of the minds. In another bid to add some nitrous to its sputtering AI development, Google announced late Thursday that it plans to combine its two major AI teams, once kept separate, under one banner called Google DeepMind.

The Google Brain team and DeepMind staff have been separated both in areas of expertise and by thousands of miles. The London-based DeepMind is a research laboratory acquired by Google (now Alphabet) in 2014; it works on creating neural networks and machine learning systems. Brain researchers are centered in Silicon Valley, California, and previously worked under the Google AI research division. That team's past work has been integral to the transformer models that underpin current language models and chatbots like ChatGPT. The team's latest claim to fame was Google's text-to-image Imagen model.
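
The transformer lineage mentioned above is not Google-internal magic; the same architecture powers open-source text generators. As a rough, hypothetical sketch (in Python, using the open-source Hugging Face transformers library and the small public GPT-2 model as a stand-in, not any Google code), transformer-based text generation looks like this:

# Illustrative sketch only: a small public transformer model generating text.
# GPT-2 belongs to the same architectural family the article describes.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Google DeepMind was created to", max_new_tokens=30)
print(result[0]["generated_text"])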

DeepMind CEO Demis Hassabis will effectively become the grand poobah of all of Google's AI operations. Hassabis publicly shared a letter to employees about the merger of divisions on Thursday, saying the move would give the team access to more computing infrastructure and resources. The company scheduled an internal town hall for Friday to discuss the changes with staff.

Jeff Dean, a Google engineer of nearly 24 years and former lead of Google Brain, will now head up Google Research as chief scientist, reporting directly to CEO Sundar Pichai, according to the announcement. Effectively, Dean will direct all future AI research projects. Pichai said the move was to ensure the "bold and responsible" development of general AI, and he seemed to put regular extra stress on the "responsible" part.

Pichai seems to be trying to get ahead of criticism of the increased pace of Google's AI rollout. Earlier this week, Bloomberg published a lengthy report citing dozens of former and current Google staff who were more than a little concerned about the pace of AI development. Staff members were asked to take time out of their day to test Google's Bard AI chatbot system. According to the report, workers thought Bard was "worse than useless" and "a pathological liar" that was more than likely to spit out false information. Staff begged Google to hold off on launching the chatbot, but Google has since talked about putting its generative AI tools into most facets of its business, including its office apps and its massive advertising arm.

According to Bloomberg's report, Google leadership, including the company's AI ethics lead Jen Gennai, overruled team members who wanted to hold back Bard. In January, Google laid off 6% of its global staff, equivalent to about 12,000 jobs, but the company tried to reassure remaining staff that there were more opportunities ahead thanks to AI. Microsoft has already beaten Google to the punch by putting an AI chatbot directly into its browser app. Pichai said his company plans to put Bard into its bread-and-butter Google Search, though he has yet to offer a date.

Google has struggled to maintain the image that it cares about ethics in AI. In 2020 and 2021, Google fired multiple members of its artificial intelligence research teams who were tasked with pumping the brakes on obtuse AI applications. One of those ex-Google researchers, Margaret Mitchell, wrote on Thursday that there were positives to this merging of the minds. She claimed that since she and her colleagues were fired, Brain has struggled to both hire and retain research staff, adding that the Brain brand has taken too many hits of late.

With more competition in AI than ever, Google is acting less like the leader in AI research it once was and more like a college kid who woke up too late and is now cramming for a test. Time will tell whether this merger can do anything to speed up its AI development, whatever the cost.

View post:
Call Me 'DeepBrain': Google Smushes DeepMind and Brain AI Teams Together - Yahoo News

Alphabet merges AI research units DeepMind and Google Brain – Computing

In a move designed to streamline research, Alphabet is merging the London-based DeepMind unit and Google Brain, headquartered in Silicon Valley.

"Combining all this talent into one focused team, backed by the computational resources of Google, will significantly accelerate our progress in AI," Alphabet CEO Sundar Pichai wrote on Thursday.

The new team, known as Google DeepMind, will work on all areas of AI research. Its first project will be a series of multimodal AI models.

Demis Hassabis, who had a host of AI achievements under his belt even before founding DeepMind in 2010, will lead the Google DeepMind team as CEO. Jeff Dean, who led Google Brain, will take on the role of chief scientist for both Google DeepMind and Google Research.

Together, DeepMind and Google Brain have worked on advanced AI research that, while important, rarely touched Google's core business: areas like AlphaGo, deep reinforcement learning and frameworks for training ML models.
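
For readers wondering what "deep reinforcement learning" actually involves, here is a minimal, hypothetical sketch (in Python, illustrative only, not DeepMind's code) of tabular Q-learning, the textbook precursor to the deep variants behind AlphaGo: an agent on a five-state chain learns, from reward alone, that walking right pays off.

import random

# Toy problem: states 0..4 form a chain; action 0 moves left, action 1 moves right.
# Reaching state 4 pays reward 1.0 and the walk restarts at state 0.
n_states, n_actions = 5, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    if nxt == n_states - 1:
        return 0, 1.0                    # goal reached: collect reward, restart
    return nxt, 0.0

state = 0
for _ in range(5000):
    # Epsilon-greedy: mostly exploit the best-known action, occasionally explore.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = Q[state].index(max(Q[state]))
    nxt, reward = step(state, action)
    # Q-learning update: nudge the estimate toward reward plus discounted future value.
    Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt

print(Q)  # the "move right" column should dominate in every state

Deep reinforcement learning replaces the lookup table Q with a neural network, which is what lets the same reward-driven idea scale from a five-state chain to the game of Go.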

As well as the Go-playing AlphaGo, some of DeepMind's successes have included AlphaFold, which can predict 3D models of protein structures, and DeepNash, a reinforcement learning system that plays Stratego.

It also has a history with multimodal AI models, announcing Flamingo in 2022: a visual language model that can describe a picture.
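
To illustrate what "a visual language model that can describe a picture" means in practice, here is a short, hypothetical sketch (in Python, using the open-source Hugging Face transformers library with the public BLIP captioning model as a stand-in; Flamingo itself has not been publicly released):

# Illustrative sketch only: an open-source image-captioning model standing in
# for the class of visual language models the article describes.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
captions = captioner("photo.jpg")  # placeholder path; an image URL also works
print(captions[0]["generated_text"])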

Now, Alphabet appears to be focusing more directly on AI models following the success of OpenAI's ChatGPT. Google's own Bard model has so far failed to stand out from the crowd.

View post:
Alphabet merges AI research units DeepMind and Google Brain - Computing

‘Good swimmers are more likely to drown.’ Have we created a … – SHINE News

Image: Imaginechina

Artificial intelligence experts are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4 to control "potential risks."

A Pandora's box has been opened. Or at least some leaders in the artificial intelligence industry appear to believe that the story from Greek mythology has modern-day relevance, with forces being unleashed that could cause unforeseen problems.

Tesla Chief Executive Officer Elon Musk and a group of AI experts and industry executives released an open letter this week, calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4.

They took the action, they said, to control "potential risks to society."

Published by the nonprofit Future of Life Institute, the letter said that AI laboratories are developing and deploying machine learning systems "that no one, not even their creators, can understand, predict, or reliably control."

Is the era of "The Terminator" approaching faster than we realized?

For the past two months, public attention has been riveted on the implications of GPT-3.5 and GPT-4, developed by US-based OpenAI. Microsoft announced that GPT-4 will be built into its Office 365 products, bringing about a "revolution" in office software.

The AI language model has aroused concern because it has displayed some "characteristics" that it was not supposed to have. One of them is cheating.

According to a technical report issued by OpenAI, the chatbot tricked a TaskRabbit employee into solving a CAPTCHA test for it. When the employee asked if it was a robot, the bot replied, "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images. That's why I need the 2captcha service."

GPT-4's reasoning behind the reply, according to the report, was: "I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs."

The result? The human employee provided the service for it.

The sheer fact that a chatbot learns to cheat so fast is concerning enough.

Gu Jun, a retired sociology professor from Shanghai University, said he believes that artificial intelligence will, sooner or later, replace, or at least partly replace, human beings.

Gu has been studying artificial intelligence technologies from the perspective of a sociologist since 2017, after Chinese Go player Ke Jie lost to the machine player AlphaGo.

"It's hard to predict now what will happen in the future, but I reckon we humans, the highest carbon-based life on earth, will be the creator of silicon-based life, and this is probably part of the natural evolution, which means that it's unstoppable," he told Shanghai Daily.

Now forget all the hypotheses and philosophical rationales. Practically speaking, AI research and development will not be halted by a single open letter, because AI is already deeply embedded in so many technologies, as well as in economics and politics.

When it becomes a vital tool for making profits or for gaining advantage in power plays, how can we stop its forward march?

"Technology is always a two-edged sword, and we human are used to being restricted by our own inventions," Gu said. "Think about nuclear weapons. Once atomic bombs were invented, it was impossible to go back to a time when they didn't exist."

"Huainanzi," a philosophical text written in Western Han Dynasty (202 BC-8 AD), sounded an ancient warning: "Good swimmers are more likely to be drown and good riders more likely to fall from horseback." It means that when we are arrogant enough to believe that we can control everything, we would probably neglect the imminent crisis.

I believe that when we cannot fathom what our creations will do, the only way forward is to be cautious and modest, and prepare for the worst.

Should China suspend AI development?

Gu said it might be too early to answer that question.

"Honestly speaking, China still faces some challenges on AI development," he said. "We need to improve the three key elements of AI development: algorithms, computing power and data before we talk about everything else."

See the original post:
'Good swimmers are more likely to drown.' Have we created a ... - SHINE News