Category Archives: Artificial General Intelligence

U. community discusses integration of AI into academic, points to … – The Brown Daily Herald

Provost Francis J. Doyle III identified the intersection of artificial intelligence and higher education as a University priority in an Aug. 31 letter to the community titled "Potential impact of AI on our academic mission." Doyle's address comes at a time of uncertainty as educational institutions struggle to make sense of the roles and regulations of artificial intelligence tools in academia.

Doyle's letter begins by zooming in on generative AI tools such as ChatGPT, which soared in popularity after its debut in late November of last year. The program, an open-access online chatbot, raked in over 100 million monthly users within the first two months of its launch, according to data from Pew Research Center.

"There is no shortage of public analysis regarding the ways in which the use of generative artificial intelligence tools, which are open-access tools that can generate realistic text, computer code and other content in response to prompts from the user, provide both challenges and opportunities in higher education," Doyle wrote in the letter.

"Exploring the use of AI in ways that align with Brown's values has been a topic of discussion among our senior academic leaders for several months," he continued.

Doyle did not prescribe University-wide AI policies in the letter but encouraged instructors to offer clear, unambiguous guidelines about AI usage in their courses. He also provided a variety of resources for students seeking guidelines on citing AI-generated content, as well as how to use AI as a research tool.

"As we identify the ways in which AI can enhance academic activities, we must also ensure these tools are understood and used appropriately and ethically," Doyle wrote.

The contention presented by Doyle is one mirrored by educators and administrators nationwide: How can academic institutions strike a balance between using AI as a learning tool and regulating it enough to avoid misuse?

"The upsides to AI tools such as ChatGPT that are often touted include improved student success, the ability to tailor lessons to individual needs, immediate feedback for students and better student engagement," Doyle wrote in a message to The Herald. "But it is important for students to understand the inherent risks associated with any open-access technology, in terms of privacy, intellectual property ownership and more."

Doyle told The Herald that he anticipates prolonged discussions with academic leadership, faculty and students as the University continues to monitor the evolution of AI tools and discovers innovative applications to improve learning outcomes and inform research directions.

Michael Vorenberg, associate professor of history, is finding creative ways to bring AI into the classroom. On the first day of his weekly seminar, HIST 1972A: American Legal History, 1760-1920, Vorenberg spoke candidly with his students about general attitudes regarding AI in education and the opportunities for exploration these developments afford.

"Most of what educators are hearing about are the negative sides of generative AI programs," Vorenberg wrote in a message to The Herald. "I am also interested in how generative AI might be used as a teaching tool."

Vorenberg outlined two broad potential uses for AI in his class: the examination of sources generated by ChatGPT, allowing students to probe the appropriateness of the retrieved documents from a historian's perspective, and the intentional criticism of those generated sources, understanding how a historian's perspective could have produced a stronger source.

"The underlying assumption behind the exercise is that even a moderately skilled historian can do better at this sort of task than a generative AI program," Vorenberg explained. "Until (this) situation changes, we who teach history have an opportunity to use generative AI to give concrete examples of the ways that well-trained human historians can do history better than AI historians."

Given the University's large pool of students interested in pursuing computer science (The Herald's recent first-year poll shows computer science as the top indicated concentration for the class of 2027), Brown has the potential to shape the future of AI.

Doyle told The Herald that the University is "well-situated to contribute our creativity (and) our entrepreneurial spirit to making an impact" as researchers continue to strengthen these tools.

Jerry Lu '25, who is concentrating in both computer science and economics, obsessively followed the growing momentum behind OpenAI, ChatGPT and developments in automation.

Lu believes there are two ways the University can best support its students in navigating artificial intelligence: one from an educational perspective, and another from a more career-oriented view.

In terms of education, Lu said he hopes that the University will approach AI not just through computer science classes but also through a sociological or humanities lens, to equip all students with the necessary skills to address how AI will undoubtedly affect society.


Lu also pointed to the restructured Center for Career Exploration as a potential resource for preparing students to enter a workforce heavily influenced by AI.

"The new Career LAB should be cognizant of how these new technologies are going to impact careers," Lu said. "Offering guidance on how students should think about AI and how they can navigate (it) or use (it) to their advantage, I think that that would be really key."

When asked how universities should engage with AI, ChatGPT focused on the pursuit of a common good.

"Universities have a critical role to play in the responsible development and application of artificial intelligence," it replied. "They should focus on research, education, ethics, collaboration and societal impact to ensure that AI technologies benefit humanity as a whole while minimizing potential harms."

Sofia Barnett is a University News editor overseeing the faculty and higher education beat. She is a sophomore from Texas studying history, politics and nonfiction writing.


As regulators talk tough, tackling AI bias has never been more urgent – VentureBeat


The rise of powerful generative AI tools like ChatGPT has been described as this generation's "iPhone moment." In March, the OpenAI website, which lets visitors try ChatGPT, reportedly reached 847 million unique monthly visitors. Amid this explosion of popularity, the level of scrutiny placed on gen AI has skyrocketed, with several countries acting swiftly to protect consumers.

In April, Italy became the first Western country to block ChatGPT on privacy grounds, only to reverse the ban four weeks later. Other G7 countries are considering a coordinated approach to regulation.

The UK will host the first global AI regulation summit in the fall, with Prime Minister Rishi Sunak hoping the country can drive the establishment of guardrails on AI. Its stated aim is to ensure AI is developed and adopted safely and responsibly.

Regulation is no doubt well-intentioned. Clearly, many countries are aware of the risks posed by gen AI. Yet all this talk of safety is arguably masking a deeper issue: AI bias.


Although the term "AI bias" can sound nebulous, it's easy to define. Also known as algorithm bias, AI bias occurs when human biases creep into the data sets on which AI models are trained. This data, and the subsequent AI models, then reflect any sampling bias, confirmation bias and human biases (against gender, age, nationality or race, for example), clouding the independence and accuracy of any output from the AI technology.

As gen AI becomes more sophisticated, impacting society in ways it hadn't before, dealing with AI bias is more urgent than ever. This technology is increasingly used to inform tasks like face recognition, credit scoring and crime risk assessment. Clearly, accuracy is paramount with such sensitive outcomes at play.

Examples of AI bias have already been observed in numerous cases. When OpenAI's DALL-E 2, a deep learning model used to create artwork, was asked to create an image of a Fortune 500 tech founder, the pictures it supplied were mostly white and male. When asked if well-known blues singer Bessie Smith influenced gospel singer Mahalia Jackson, ChatGPT could not answer the question without further prompts, raising doubts about its knowledge of people of color in popular culture.

A study conducted in 2021 around mortgage loans discovered that AI models designed to determine approval or rejection did not offer reliable suggestions for loans to minority applicants. These instances prove that AI bias can misrepresent race and gender, with potentially serious consequences for users.

AI that produces offensive results can be attributed to the way the AI learns and the dataset it is built upon. If the data over-represents or under-represents a particular population, the AI will repeat that bias, generating even more biased data.
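To make the over- and under-representation point concrete, here is a minimal, illustrative Python sketch (not from the article): it compares each group's share of a training set against a chosen reference share. The group labels and reference numbers are hypothetical; a real audit would use production data and domain-appropriate reference statistics.

```python
# Illustrative only: measure how each group's share of a training set
# deviates from a chosen reference share (hypothetical labels and numbers).
from collections import Counter

def representation_gaps(labels, reference_shares):
    """Return observed share minus expected share per group (negative = under-represented)."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in reference_shares.items()
    }

# A skewed sample: one group dominates the training data.
training_labels = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
print(representation_gaps(training_labels, {"group_a": 0.5, "group_b": 0.3, "group_c": 0.2}))
# {'group_a': 0.3, 'group_b': -0.15, 'group_c': -0.15} -> group_a is over-represented
```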

For this reason, it's important that any regulation enforced by governments doesn't view AI as inherently dangerous. Rather, any danger it possesses is largely a function of the data it's trained on. If businesses want to capitalize on AI's potential, they must ensure the data it is trained on is reliable and inclusive.

To do this, greater access to an organization's data for all stakeholders, both internal and external, should be a priority. Modern databases play a huge role here as they have the ability to manage vast amounts of user data, both structured and semi-structured, and have capabilities to quickly discover, react, redact and remodel the data once any bias is discovered. This greater visibility and manageability over large datasets means biased data is at less risk of creeping in undetected.

Furthermore, organizations must train data scientists to better curate data while implementing best practices for collecting and scrubbing data. Taking this a step further, the data training algorithms must be made open and available to as many data scientists as possible to ensure that more diverse groups of people are sampling it and can point out inherent biases. In the same way modern software is often open source, so too should appropriate data be.

Organizations have to be constantly vigilant and appreciate that this is not a one-time action to complete before going into production with a product or a service. The ongoing challenge of AI bias calls for enterprises to look at incorporating techniques that are used in other industries to ensure general best practices.

Blind tasting tests borrowed from the food and drink industry, red team/blue team tactics from the cybersecurity world or the traceability concept used in nuclear power could all provide valuable frameworks for organizations in tackling AI bias. This work will help enterprises to understand the AI models, evaluate the range of possible future outcomes and gain sufficient trust with these complex and evolving systems.

In previous decades, talk of regulating AI was arguably putting the cart before the horse. How can you regulate something whose impact on society is unclear? A century ago, no one dreamt of regulating smoking because it wasn't known to be dangerous. AI, by the same token, wasn't something under serious threat of regulation; any sense of its danger was reduced to sci-fi films with no basis in reality.

But advances in gen AI and ChatGPT, as well as advances towards artificial general intelligence (AGI), have changed all that. Some national governments seem to be working in unison to regulate AI, while paradoxically, others are jockeying for position as AI regulators-in-chief.

Amid this hubbub, it's crucial that AI bias doesn't become overly politicized and is instead viewed as a societal issue that transcends political stripes. Across the world, governments, alongside data scientists, businesses and academics, must unite to tackle it.

Ravi Mayuram is CTO of Couchbase.


The Race to Lead the AI Revolution: Tech Giants, Cloud Titans and … – Medium

Artificial intelligence promises to transform industries and generate immense economic value over the coming decades. Tech giants, cloud computing leaders and semiconductor firms are fiercely competing to provide the foundational AI infrastructure and services fueling this revolution. In this high-stakes battle to dominate the AI sphere, these companies are rapidly advancing hardware, software, cloud platforms, developer tools and applications. For investors, understanding the dynamic competitive landscape is key to identifying leaders well-positioned to capitalize on surging AI demand.

The world's largest technology companies view leadership in artificial intelligence as vital to their futures. AI permeates offerings from Amazon, Microsoft, Google, Facebook and Apple as they fight for market share. The cloud has become the primary arena for delivering AI capabilities to enterprise customers. Amazon Web Services, Microsoft Azure and Google Cloud Platform offer integrated machine learning, data analytics and AI services through their cloud platforms.

The tech titans are also racing to advance AI assistant technologies like Alexa, Siri and Cortana for consumer and business use. IoT ecosystems that accumulate data to train AI depend on cloud infrastructure. Tech firms battle to attract top AI engineering talent and acquire promising startups. Government scrutiny of their AI competitive tactics is growing. But the tech giants continue aggressively investing in R&D and new partnerships to expand their AI footprints.

The major cloud providers have emerged as gatekeepers for enterprise AI adoption. AWS, Microsoft Azure, Google Cloud and IBM Cloud aggressively market integrated machine learning toolkits, neural network APIs, automated ML and other services that remove AI complexities. This strategy drives more customers to their clouds to access convenient AI building blocks.

Cloud platforms also offer vast on-demand computing power and storage for AI workloads. Firms like AWS and Google Cloud tout specialized AI accelerators on their servers. The cloud battleground has expanded to wearable, mobile and edge devices with AI capabilities. Cloud leaders aim to keep customers within their ecosystems as AI proliferates.

Graphics processing units (GPUs) from Nvidia, AMD and Intel currently dominate AI computing. But rising challengers like Cerebras, Graphcore and Tenstorrent are designing specialized processing chips just for deep learning. Known as AI accelerators, these chips promise faster training and inference than repurposed GPUs. Startups have attracted huge investments to develop new accelerator architectures targeted at AI workloads.

Big tech companies are also muscling into the AI chip space. Google's Tensor Processing Units power many internal workloads. Amazon has designed AI inference chips for Alexa and AWS. Microsoft relies on FPGA chips from Xilinx but is also developing dedicated AI silicon. As AI proliferates, intense competition in AI-optimized semiconductors will shape the future landscape.

Much AI innovation comes from open source projects like TensorFlow, PyTorch, MXNet and Keras. Tech giants liberally adopt each other's frameworks into their own stacks. This open ecosystem drives rapid advances through collaboration between intense competitors. But tech firms then differentiate by offering proprietary development environments, optimized runtimes and additional services around the open source cores.

Leading corporate sponsors behind frameworks like Facebook's PyTorch and AWS's Gluon intend to benefit by steering standards and features. However, generous licensing enables wide adoption and growth. The symbiotic relationship between open source and proprietary AI has greatly accelerated overall progress.

Beyond core technology purveyors, many other players want a slice of the AI market. Consulting firms sell AI strategy and implementation services. Cloud data warehouse vendors feature AI-driven analytics. Low-code platforms incorporate AI-powered automation. Cybersecurity companies inject AI into threat detection. AI success will ultimately require an entire ecosystem integrating hardware, software, infrastructure, tools and expertise into multi-layered technology stacks.

Current AI capabilities remain narrow and require extensive human guidance. But rapid advances in foundational machine learning approaches, computing power and neural network design point to a future Artificial General Intelligence that mimics human-level capacities. Tech giants are investing today in moonshot projects like robotics, quantum computing and neuro-symbolic AI to fuel the next paradigm shifts.

Government regulation will also shape AI's evolution, balancing innovation with ethics. Despite uncertainties, AI will undoubtedly transform business and society over the next decade through visionary efforts underway today across the technology landscape.

For investors, AI represents an enormously valuable mega-trend with a long runway for growth. While hype exceeds reality today, practical AI adoption is accelerating. The tech giants have tremendous balance sheet resources to sustain investment. But they also face anti-trust scrutiny that could advantage smaller players.

Seeking exposure across the AI ecosystem is ideal to benefit from both large established players and potential rising challengers. AI promises outsized returns for those investors savvy enough to identify leaders powering this transformative technology through its period of exponential growth.



Decoding Opportunities and Challenges for LLM Agents in … – Unite.AI

We are seeing a progression of generative AI applications powered by large language models (LLMs), from prompts to retrieval-augmented generation (RAG) to agents. Agents are being talked about heavily in industry and research circles, mainly for the power this technology provides to transform enterprise applications and provide superior customer experiences. There are common patterns for building agents that enable first steps towards artificial general intelligence (AGI).

In my previous article, we saw a ladder of intelligence: patterns for building LLM-powered applications. It starts with prompts that capture the problem domain and use the LLM's internal memory to generate output. With RAG, we augment the prompt with external knowledge retrieved from a vector database to control the outputs. Next, by chaining LLM calls we can build workflows that realize complex applications. Agents take this to the next level by automatically determining how these LLM chains are to be formed. Let's look at this in detail.
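As a rough illustration of that ladder, the sketch below shows how each rung builds on the previous one. It is pseudocode in spirit: `llm` stands in for any prompt-in, text-out model call and `vector_db` for any store with a `search` method; neither is a specific framework's API.

```python
# Illustrative pseudocode for the "ladder of intelligence" described above.
# `llm` is any callable mapping a prompt string to a completion string;
# `vector_db` is any object with a search(query, top_k) method. Both are hypothetical.

def plain_prompt(llm, question):
    # Rung 1: prompts -- rely only on the model's internal memory.
    return llm(question)

def rag_answer(llm, vector_db, question):
    # Rung 2: RAG -- augment the prompt with external knowledge from a vector database.
    passages = vector_db.search(question, top_k=3)
    return llm(f"Context:\n{passages}\n\nQuestion: {question}")

def chained_workflow(llm, vector_db, question):
    # Rung 3: chaining -- wire several LLM calls into a fixed, developer-defined workflow.
    draft = rag_answer(llm, vector_db, question)
    return llm(f"Review the draft answer below and correct any errors.\n\nDraft: {draft}")

# Rung 4: agents -- the LLM itself decides which steps and tools to invoke, and in what
# order, instead of following a fixed chain (see the agent loop sketch later on).
```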

A key pattern with agents is that they use the language-understanding power of the LLM to make a plan for how to solve a given problem. The LLM understands the problem and gives us a sequence of steps to solve it. However, it doesn't stop there. Agents are not a pure support system that provides recommendations for solving the problem and then passes the baton to you to take the recommended steps. Agents are empowered with tooling to go ahead and take the action themselves. Scary, right?

If we ask an agent a basic question like this:

Human: Which company did the inventor of the telephone start?

Following is a sample of thinking steps that an agent may take.

Agent (THINKING):
Thought: I need to identify the inventor of the telephone. Action: search[inventor of the telephone]. Observation: Alexander Graham Bell.
Thought: Now I need to find the company that Alexander Graham Bell started. Action: search[company started by Alexander Graham Bell]. Observation: AT&T, founded in 1885.

Agent (RESPONSE): Alexander Graham Bell co-founded AT&T in 1885

You can see that the agent follows a methodical way of breaking the problem down into subproblems that can be solved by taking specific actions. The actions here are recommended by the LLM, and we can map them to specific tools. We could enable a search tool for the agent such that, when it sees that the LLM has proposed search as an action, it calls this tool with the parameters provided by the LLM. The search here is on the internet but could just as well be redirected to an internal knowledge base like a vector database. The system now becomes self-sufficient and can figure out how to solve complex problems by following a series of steps. Frameworks like LangChain and LlamaIndex give you an easy way to build these agents and connect them to tools and APIs. Amazon recently launched its Bedrock Agents framework, which provides a visual interface for designing agents.
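Stripped of any framework, the loop the agent runs can be sketched as follows. `call_llm` and the search tool below are hypothetical placeholders; libraries such as LangChain wrap this same Thought-Action-Observation cycle with more robust parsing, tool registries and guardrails.

```python
# A minimal, framework-free sketch of an agent's Thought-Action-Observation loop.
# `call_llm` (a prompt-in, text-out callable) and `web_search` are hypothetical stand-ins.
import re

def web_search(query: str) -> str:
    # Placeholder tool: a real agent would call a search API or a vector database here.
    return f"(search results for: {query})"

TOOLS = {"search": web_search}

def run_agent(call_llm, question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        # Ask the model for its next thought, which may include an "Action: tool[input]" line.
        step = call_llm(transcript + "Thought:")
        transcript += f"Thought:{step}\n"
        action = re.search(r"Action:\s*(\w+)\[(.*?)\]", step)
        if action is None:
            # No tool requested: treat this step as the final answer.
            return step.strip()
        tool_name, tool_input = action.group(1), action.group(2)
        observation = TOOLS[tool_name](tool_input)  # run the chosen tool
        transcript += f"Observation: {observation}\n"
    return "No answer found within the step limit."
```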

Under the hood, agents follow a special style of sending prompts to the LLM that makes the model generate an action plan. The above Thought-Action-Observation pattern is popular in a type of agent called ReAct (Reasoning and Acting). Other types of agents include MRKL and Plan & Execute, which mainly differ in their prompting style.

For more complex agents, the actions may be tied to tools that cause changes in source systems. For example, we could connect the agent to a tool that checks an employee's vacation balance and applies for leave in an ERP system. Now we could build a nice chatbot that interacts with users and, via a chat command, applies for leave in the system. No more complex screens for applying for leave; just a simple, unified chat interface. Sounds exciting!

Now, what if we have a tool that invokes stock-trading transactions using a pre-authorized API? You build an application where the agent studies stock movements (using tools) and makes decisions for you on buying and selling stock. What if the agent sells the wrong stock because it hallucinated and made a wrong decision? Since LLMs are huge models, it is difficult to pinpoint why they make certain decisions; hence hallucinations are common in the absence of proper guardrails.

While agents are fascinating, you have probably guessed how dangerous they can be. If they hallucinate and take a wrong action, that could cause huge financial losses or major issues in enterprise systems. Hence, Responsible AI is becoming of utmost importance in the age of LLM-powered applications. The principles of Responsible AI, around reproducibility, transparency and accountability, try to put guardrails on decisions taken by agents and suggest risk analysis to decide which actions need a human in the loop. As more complex agents are designed, they need more scrutiny, transparency and accountability to make sure we know what they are doing.

The ability of agents to generate a path of logical steps with actions gets them really close to human reasoning. Empowering them with more powerful tools can give them superpowers. Patterns like ReAct try to emulate how humans solve problems, and we will see better agent patterns relevant to specific contexts and domains (banking, insurance, healthcare, industrial and so on). The future is here, and the technology behind agents is ready for us to use. At the same time, we need to pay close attention to Responsible AI guardrails to make sure we are not building Skynet!


Google looks to make Artificial Intelligence as simple as Search – Times of India

SAN FRANCISCO: Google is now doing to AI what it did to the internet. "We are taking the sophistication of the AI model and putting it behind a simple interface called chat, which then lets you open it up to every department," Google Cloud's CEO Thomas Kurian said. Duet AI in Workspace and Vertex AI - both recently launched products by Google - are expected to revolutionise the market, he added. Kurian was speaking with some members of the press last week on the sidelines of the three-day Google Cloud Next, a mega event at Moscone Center in San Francisco from August 29.

"AI can be used in virtually every department, every business function in a company, and every industry. Retailers are testing it for shopping and commerce. Telecommunication companies are using it for customer service. Banks are using it to synthesise financial statements for their wealth managers. We expect the number of people who can use AI to grow just like when we simplified access to the internet and broadened it," he added.

Vertex AI Search and Conversation, which was made available during the Cloud Next event, allows developers with minimum machine learning knowledge to take data, customise it, build an interactive chatbot or search engine within it, and deploy the apps within a few hours.

Aparna Pappu, VP and general manager of Google Workspace, said Duet AI has your back. "It can help write emails and make presentations using different sources and summarise what was said in a virtual meeting and even attend the meet on the user's behalf," she said in another media interaction during the event.

Kurian said that generative AI is moving technology out of the IT department to many other functions in companies. "When we look at users of generative AI - marketing departments, HR, supply chain organisations - none of them were talking to us earlier, but at this conference, many are from non-engineering backgrounds... from different business lines because they want to understand how they can use generative AI technology," he added.

Google has provided an AI platform that protects data and ensures that it does not leak out. "We have capability in Vertex so data can be kept and any feedback or changes to the model are private to you," he added. Kurian said they have analysed a million users, understood their behaviour, and found that an average user of Duet can typically write 30-40% more emails, with more than 50% of the content generated by the model.


Heated massages, AI counselling and films on the go: Will … – Euronews

LG presented a vision of what autonomous vehicles (AVs) could be like in the future - and it's all about having more "me time" on the move.

It's been a stressful day at work, so you decide to linger in the car and take a breath before getting out and facing the task of preparing dinner or tackling the household chores.

You recline your seat, listening to the soothing sounds of nature while it gives you a heated massage. Or maybe you opt for counselling from the onboard artificial intelligence (AI) to wind down and clear your head after a hectic day.

Compared with your current daily commute sitting in stop-start traffic, the concept might seem light years from reality. However, it is just one vision of what autonomous driving could look like, proposed by South Korean electronics giant LG.

The technology behind autonomous vehicles (AVs) is currently geared towards the mechanics of getting the car to move and navigate independently while the onboard experience of passengers is, for now at least, relegated to a secondary talking point.

LG, on the other hand, is now actively turning its focus to the sensory elements of being inside the autonomous cars of the future, believing the perspective should shift to the opportunities that AVs will give to improve the driving experience.

"There have been a lot of discussions about future mobility in terms of physical transformation and the role of the car. However, despite many discussions, it is still unclear how the changes will exactly happen," William Cho, the companys CEO, said this week at IAA Mobility - one of the worlds largest trade fairs of its kind - in Munich.

"As we all know, the mobility industry is evolving dramatically, changing our traditional beliefs on cars. Our in-depth customer research directed us to see mobility through the lens of customer experience, focusing on expanding space in the car and quality of time spent on the road".

The company's idea? To redefine the car from a means of travel to a "personalised digital cave" for its occupant.

To date, billions have been invested in developing the technology to produce robot vehicles controlled and piloted by AI-powered computer systems, but prototypes so far all require human inputs.

AVs in the US are subject to standards set by SAE International, formerly known as the Society of Automotive Engineers, with level 0 being no automation and level 5 being the highest rating, full vehicle autonomy in all conditions and locations.

Tesla's driver assistance system Autopilot, for example, which offers partial automation, is classified at level 2. The US carmaker's basic Autopilot, which is available in all models, offers lane centring and assisted steering, while more advanced systems, like Enhanced Autopilot and Full Self-Driving Capability, have functions to help park, stop and summon the vehicle.

Earlier this summer, Mercedes-Benz announced its Drive Pilot system had been given level 3 approval, attaining the highest SAE rating for a commercial vehicle to date.

Unlike level 2, cars classified as level 3 can handle most driving situations but again, a driver must intervene to make inputs to avoid potentially dangerous incidents.

Last month, Cruise, an arm of US automaker General Motors, was granted a licence in California - along with Alphabet-owned company Waymo - to expand its existing fleet of self-driving taxis in San Francisco and operate on a 24/7 basis.

Unlike commercial vehicles, these taxis are operating at level 4 - in other words, near-complete autonomy. They are programmed to drive in a preset area - known as geofencing - in which they can negotiate their environment through a combination of cameras, sensors, machine learning algorithms and artificial intelligence (AI), determining their location, real-time information on pedestrians and traffic and how each is likely to behave.

If a difficult circumstance arises, a human operator is able to step in remotely to guide or stop the vehicle.

And difficulties do arise. Just 10 days after it was granted its latest licence, Cruise was asked to reduce its fleet following a series of accidents, including a collision with a fire engine.

According to data from the US National Highway Traffic Safety Administration (NHTSA), self-driving Tesla vehicles have also been involved in 736 crashes in the US since 2019, resulting in 17 known fatalities.

Despite the rollout of services like Cruise and Tesla's Autopilot, and the major investment in research, development and testing by the automotive industry, it's unlikely a level 5 vehicle will be on the market anytime soon.

Cho believes, however, that electrification will only accelerate the shift to autonomous driving.

"Today's mobility is shifting towards software-defined vehicles [SDVs]. This means social mobility will transform into highly sophisticated electronic devices and can be seen as one of moving space to provide new experiences," he said.

LG's vision for these mobile experiences is theoretical for now, but the company plans to design and produce technologies for future AVs based on three core themes collectively known as "Alpha-able": Transformable, Explorable and Relaxable.

For the first, LG predicts that cars will become personalised digital caves, spaces that will be able to easily adapt to suit different purposes and occasions. It could be a restaurant to dine in with your partner, a home office on wheels where you can make business deals in private, or even a cinema on wheels where you can recline and watch a film.

For the second theme, LG is aiming to incorporate augmented reality (AR) and advanced AI to improve the interactions with the vehicle; whether this be voice assistants who recommend content based on the duration of the determined route to your destination or interactive windscreens made from OLED displays that show information about your location and journey.

And of course, the driving experience should be relaxing, with sensory stimuli such as films, massages, meditative music and so on delivered through the car's infotainment system.

While level 5 AVs are yet to materialise, LG says it is already at work on the necessary technology to achieve its three-pronged objectives, including opening a new factory in Hungary in a joint venture with Magna International to produce e-powertrains, the power source of EVs.

"We strongly believe future mobility should focus on the mission to deliver another level of customer experience. LG, with innovative mobility solutions, is more than committed to this important mission," Cho said.


This Week in AI: Deepfakes, Vertical Horizons and Smarter Assistance – PYMNTS.com

Is it Moore's Law, or more's law?

Anyone keeping an eye on the generative artificial intelligence (AI) landscape could be forgiven for confusing the two.

This, as another week has gone by, and with it another hyper-rapid clip of advances in the commercialization of generative AI solutions, and even a new executive order from California Governor Gavin Newsom around the need for regulation of the innovative technology.

Were it any other technology, the rapid pace of change we are seeing within AI would require at least a year or more to make it to market.

Already, after China became the first major market economy last month to pass regulations policing AI, the nation's biggest tech firms debuted their own adherent products just weeks later - this one.

And as generative AI technology continues to add more fuel to its rocket ship trajectory, these are the stories and moonshots that PYMNTS has been tracking.

Generative AI can generate, well, anything. And while the possibilities are endless, they also run the gamut from widely positive and productively impactful, to dangerous and geared toward criminal goals. After all, genAI is a tool, and in the absence of a firm regulatory framework, the utility of a tool depends entirely on the hand that wields it.

That's why Google has announced a new policy mandating advertisers for the upcoming U.S. election to disclose when the ads they wish to display across any of Google's platforms (excluding YouTube) have been manipulated or created using AI.

Meta Platforms, the parent company of Instagram and Facebook, and X, formerly known as Twitter, both of which have faced allegations of spreading political misinformation, have not yet announced any specific rules around AI-generated ad content.

Complicating matters somewhat is the fact that PYMNTS Intelligence has found there doesn't yet exist a truly foolproof method to detect and expose AI-generated content.

"One of the questions that is immediately raised [around AI] is how do you draw the line between human-generated and AI-generated content," John Villasenor, professor of electrical engineering, law, public policy and management at UCLA and faculty co-director of the UCLA Institute for Technology, Law and Policy, explained to PYMNTS on Friday (Sept. 8).

And as generative AI tools are increasingly leveraged by bad actors to fool ID authorization protocols and scam unsuspecting consumers, it is becoming incumbent on organizations to upgrade their own defenses with AI capabilities. Phishing attacks alone have seen a 150% increase year over year as a result of new AI-driven techniques.

The technology is already proving to be both a blessing and a hindrance for payments security, and as reported here on Tuesday (Sept. 5), payments firm ThetaRay recently raised $57 million to boost its AI-powered financial crime detection capabilities.

While the artificial element of AI has its darker side, it is the intelligence aspect of the technology that enterprises and platforms want to capitalize on and integrate.

Apple is reportedly spending millions of dollars a day building out its generative AI capabilities across several product teams, including its voice assistant Siri, and there exists an attractive white space opportunity for AI to make today's smart assistants a whole lot smarter.

Chipmaker Qualcomm is working with Meta to make that company's Llama 2-based AI implementations available on smartphones and PCs, and Qualcomm's CEO said on Tuesday (Sept. 5) he sees AI as potentially reviving the smartphone market, where global sales are at their lowest levels in a decade.

Elsewhere, video communications company Zoom announced that it is making its own generative AI assistant free to paid users; while the buzzy, well-funded AI startup Anthropic on Thursday (Sept. 7) introduced a paid plan for the Pro version of its AI assistant, Claude.

Not to be outdone, customer experience management platform Sprinklr has integrated its AI platform with Google Cloud's Vertex AI in order to let retail companies elevate contact center efficiency with generative AI capabilities that support service agents.

This, while Casey's General Stores announced on Wednesday (Sept. 6) that the convenience retailer is turning to conversational voice AI ordering technology in an ongoing push to gain share from quick-service restaurants (QSRs).

IBM also announced on Thursday (Sept. 7) that it is releasing new enhancements to its AI platform, watsonx, and giving developers a preview next week at the company's TechXchange event in Las Vegas.

And IBM isn't the only tech company hosting a developer conference. Generative AI pioneer OpenAI announced Wednesday (Sept. 6) that its first DevDay developer conference will take place this November.

Generative AI is also getting utilized for specialized purposes.

CFOs are increasingly tapping the tool to help optimize their working capital and treasury management approaches; while consumer brand data platform Alloy.ai on Thursday (Sept. 7) announced the addition of new predictive AI features to its own forecasting and supply chain solution.

And over in the healthcare sector, the industry is reportedly allocating over a tenth of its annual spend (10.5%) on AI and machine learning innovations.

As for what the health industry hopes to achieve with this investment? Hopefully the cure to its zettabyte-sized data fragmentation problems.


Dr Ben Goertzel – A.I. Wars: Google Fights Back Against OpenAI’s … – London Real

2023 may well be the year we look back on as a time of significant change with regards to the exponential growth of artificial intelligence. AI platforms and tools are starting to have a major impact on our daily lives and virtually every conceivable industry is starting to sit up and take notice.

While the doom mongers might be concerned that these superintelligent machines pose a genuine threat to humanity, and concern grows about our future here on planet earth, many experts point to the enormous potential and benefits such sophisticated technology could have on the world.

Just recently, the co-founder of Google DeepMind, Mustafa Suleyman, said that he believes that within the next five years everybody is going to have their own AI-powered personal assistant as the technology becomes cheaper and more widespread.

While on the other hand, "Godfather of AI" Geoffrey Hinton quit his job at Google because he's concerned about the rate of improvement in AI development and what this means for society as a whole.

One thing is for certain: the world we live in is going to change drastically in the coming years. Thankfully, I'm able to call upon one of the smartest human beings I know and someone who is not only at the forefront of this shift, but who also cares deeply about the ethical, political and social ramifications of AI development, and is focussed on the goal of creating benevolent AI systems.

Dr Ben Goertzel is a cross-disciplinary scientist, futurist, author and entrepreneur, who has spent the best part of his working life focused on creating benevolent superhuman artificial general intelligence (AGI).

In fact, Ben has been credited with popularising the term AGI in our mainstream thinking and has published a dozen scientific books, 150 technical papers, and numerous journal articles, making him one of the world's foremost experts in this rapidly expanding field.

In 2017, Ben founded SingularityNET with the goal of creating a decentralised, democratic, inclusive and beneficial AGI; the platform has become the world's leading decentralised AI marketplace.

At SingularityNET, Ben's goal is to create an AGI that is not dependent on any central entity, is accessible to anyone, and is not restricted to the narrow goals of a single corporation or even a single country.

The platform is an open and decentralised network of AI services on a blockchain, where developers publish their services to the network and anyone with an internet connection can use them.

SingularityNET's latest project is Zarqa, a supercharged, intelligent, neural-symbolic large language model on a massive scale that promises to not only take on OpenAI's ChatGPT, but go much, much further.

Such advanced neural-symbolic AI techniques will revolutionise and disrupt every industry, taking a giant step towards AGI.

"I've come to the conclusion that to make decentralised AGI really work, we have to launch something that's way smarter than ChatGPT and launch that on a decentralised infrastructure."

In a broader sense, Ben does of course concede that there are risks in building machines that are capable of learning anything and everything, including how to reprogram themselves to become an order of magnitude more intelligent than any human.

"I think the implications of superintelligence are huge and hard to foresee. It's like asking nomads living in early human tribes what civilisation is going to be like. They could foresee a few aspects of it, but to some extent, you just have to discover when you get there."

Moreover, Ben highlights a more pressing concern: the risk that selfish people and big business will use AI to exert their own greed and control over other people. It's a fascinating conundrum, and there is so much to consider, something that Ben has spent more time than most thinking about.

Ben truly believes that our focus should be on building AI systems that make the world a more compassionate, more just, and more sustainable place right now and moving forward into the future.

I really enjoy sitting down for these chats with Ben; there's so much to learn, and if you're interested in the technology that is shaping our future world or looking for an investment opportunity, then make sure to tune in. The economic potential of AI is huge, and over the next decade it is expected to generate multiple trillions of dollars.

"I'm optimistic about the potential for beneficial AGI, and decentralised is important because centralisation of control tends to bring with it some narrow motivational system separate from the ethics of what's best for everyone."


A Race to Extinction: How Great Power Competition Is Making Artificial Intelligence Existentially Dangerous – Harvard International Review

Everything dies, baby, that's a fact. And, if the world cannot manage the current race to superhuman artificial intelligence between great powers, everything may die much sooner than expected.

The past year has witnessed an explosion in the capabilities of artificial intelligence systems. The bulk of these advances have occurred in generative AI systems that produce novel text, image, audio, or video content from human input. The American company OpenAI took the world by storm with its public release of the ChatGPT large language model (LLM) in November 2022. In March, it released an updated version of ChatGPT powered by the more powerful GPT-4 model. Microsoft and Google have followed suit with Bing AI and Bard, respectively.

Beyond the world of text, generative applications Midjourney, DALL-E, and Stable Diffusion produce unprecedentedly realistic images and videos. These models have burst into the public consciousness rapidly. Most people have begun to understand that generative AI is an unparalleled innovation, a type of machine that possesses capacities (natural language generation and artistic production) long thought to be sacrosanct domains of human ability.

But generative AI is only the beginning. A team of Microsoft AI scientists recently released a paper arguing that GPT-4, arguably the most sophisticated LLM yet, is showing the "sparks" of artificial general intelligence (AGI), an AI that is as smart or smarter than humans in every area of intelligence, rather than simply in one task. They argue that, "[b]eyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting." In these multiple areas of intelligence, GPT-4 is strikingly close to human-level performance. In short, GPT-4 appears to presage a program that can think and reason like a human. Half of surveyed AI experts expect an AGI in the next 40 years.

AGI is the holy grail for tech companies involved in AI development, primarily the field's leaders, OpenAI and Google subsidiary DeepMind, because of the unfathomable profits and world-historical glory that would come with being the first to develop human-level machine intelligence.

The private sector, however, is not the only relevant actor.

Because leadership in AI offers advantages both in economic competitiveness and military prowess, great powers, primarily the United States and China, are racing to develop advanced AI systems. Much ink has been spilled on the risks of the military applications of AI, which have the potential to reshape the strategic and tactical domains alike by powering autonomous weapons systems, cyberweapons, nuclear command and control, and intelligence gathering. Many politicians and defense planners in both countries believe the winner of the AI race will secure global dominance.

But the consequences of such a race potentially reach far beyond who wins global hegemony. The perception of an AI arms race is likely to accelerate the already-risky development of AI systems. The pressure to outpace adversaries by rapidly pushing the frontiers of a technology that we still do not fully understand or fully control, without commensurate efforts to make AI safe for humans, may well present an existential risk to humanity's continued existence.

The dangers of arms races are well-established by history. Throughout the late 1950s, American policymakers began to fear that the Soviet Union was outpacing the U.S. in deployment of nuclear-capable missiles. This ostensible missile gap pushed the U.S. to scale up its ballistic missile development to catch up to the Soviets.

In the early 1960s, it became clear the missile gap was a myth. The United States, in fact, led the Soviet Union in missile technology. However, just the perception of falling behind an adversary contributed to a destabilizing buildup of nuclear and ballistic missile capabilities, with all its associated dangers of accidents, miscalculations, and escalation.

Missile gap logic is rearing its ugly head again today, this time with regard to artificial intelligence, which could be more dangerous than nuclear weapons. China's AI efforts are raising fears among American officials, who are concerned about falling behind. New Chinese leaps in AI inexorably produce flurries of warnings that China is on its way to dominating the field.

The reality of such a purported AI gap is complicated. Beijing does appear to lead the U.S. in military AI innovation. China also leads the world in AI academic journal citations and commands a formidable talent base. However, when it comes to the pursuit of AGI, China seems to be the laggard. Chinese companies' LLMs are 1-3 years behind their American counterparts, and OpenAI set the pace for generative models. Furthermore, the Biden administration's 2022 export controls on advanced computer chips cut China off from a key hardware prerequisite for building advanced AI.

Whoever is ahead in the AI race, however, is not the most important question. The mere perception of an arms race may well push companies and governments to cut corners and eschew safety research and regulation. For AI, a technology whose safety relies upon slow, steady, regulated, and collaborative development, an arms race may be catastrophically dangerous.

Despite dramatic successes in AI, humans still cannot reliably predict or control its outputs and actions. While research focused on AI capabilities has produced stunning advancements, the same cannot be said for research in the field of AI alignment, which aims to ensure AI systems can be controlled by their designers and made to act in a way that is compatible with humanitys interests.

Anyone who has used ChatGPT understands this lack of human control. It is not difficult to circumvent the program's guardrails, and it is far too easy to encourage chatbots to say offensive things. When it comes to more advanced models, even if designers are brilliant and benevolent, and even if the AI pursues only its human-chosen ultimate goals, there remains a path to catastrophe.

Consider the following thought experiment about how AGI may be deployed. A human-level or superhuman intelligence is programmed by its human creators with a defined, benign goal: say, develop a cure for Alzheimer's, or increase my factory's production of paperclips. The AI is given access to a constrained environment of instruments: for instance, a medical lab or a factory.

The problem with such deployment is that, while humans can program AI to pursue a chosen ultimate end, it is infeasible that each instrumental, or intermediate, subgoal that the AI will pursue (think acquiring steel before it can make paperclips) can be defined by humans.

AI works through machine learning: it trains on vast amounts of data and learns, based on that data, how to produce desired outputs from its inputs. However, the process by which AI connects inputs to outputs, the internal calculations it performs under the hood, is a black box. Humans cannot understand precisely what an AI is learning to do. For example, an AI trained to pick strawberries might instead have learned to pick the nearest red object and, when released into a different environment, pick both strawberries and red peppers. Further examples abound.

In short, an AI might do precisely what it was trained to do and still produce an unwanted outcome. The means to its programmed ends, crafted by an alien, incomprehensible intelligence, could be prejudicial to humans. The Alzheimer's AI might kidnap billions of humans as test subjects. The paperclip AI might turn the entire Earth into metal to make paperclips. Because humans can neither predict every possible means AI might employ nor teach it to reliably perform a definite action, programming away any dangerous outcome is infeasible.

If sufficiently intelligent, and capable of defeating resistant humans, an AI may well wipe out life on Earth in its single-minded pursuit of its goal. If given control of nuclear command and control, like the Skynet system in Terminator, or access to chemicals and pathogens, AI could engineer an existential catastrophe.

How does international competition come into play when discussing the technical issue of alignment? Put simply, the faster AI advances, the less time we will have to learn how to align it. The alignment problem is not yet solved, nor is it likely to be solved in time without slower and more safety-conscious development.

The fear of losing a technological arms race may encourage corporations and governments to accelerate development and cut corners, deploying advanced systems before they are safe. Many top AI scientists and organizations, among them the team at safety lab Anthropic, Open Philanthropy's Ajeya Cotra, DeepMind founder Demis Hassabis, and OpenAI CEO Sam Altman, believe that gradual development is preferable to rapid development because it offers researchers more time to build safety features into new models; it is easier to align a less powerful model than a more powerful one.

Furthermore, fears of China's catching up may imperil efforts to enact AI governance and regulatory measures that could slow down dangerous development and speed up alignment. Altman and former Google CEO Eric Schmidt are on record warning Congress that regulation will slow down American companies to China's benefit. A top Microsoft executive has used the language of the Soviet missile gap. The logic goes: AGI is inevitable, so the United States should be first. The problem is that, in the words of Paul Scharre, AI technology poses risks not just to those who lose the race but also to those who win it.

Likewise, the perception of an arms race may preclude the development of a global governance framework on AI. A vicious cycle may emerge where an arms race prevents international agreements, which increases paranoia and accelerates that same arms race.

International conventions on the nonproliferation of nuclear bombs and missiles and the multilateral ban on biological weapons were great Cold War successes that defused arms races. Similar conventions over AI could dissuade countries from rapidly deploying AI into more risky domains in an effort to increase national power. More global cooperation over AIs deployment will reduce the risk that a misaligned AI is integrated into military and even nuclear applications that would give it a greater capacity to create a catastrophe for humanity.

While it is currently unclear whether government regulation could meaningfully increase the chances of solving AI alignment, regulation both domestic and multilateral may at least encourage slower and steadier development.

Fortunately, momentum for private Sino-American cooperation on AI alignment may be building. American AI executives and experts have met with their Chinese counterparts to discuss alignment research and mutual governance. Altman himself recently went on a world tour to discuss AI capabilities and regulation with world leaders. As governments are educated as to the risks of AI, the tide may be turning toward a more collaborative world. Such a shift would unquestionably be good news.

However, the outlook is not all rosy: as the political salience of AI continues to increase, the questions of speed, regulation, and cooperation may become politicized into the larger American partisan debate over China. Regulation may be harder to push when China hawks begin to associate slowing AI with losing an arms race to China. Recent rhetoric in Congress has emphasized the AI arms race and downplayed the necessity of regulation.

Whether or not it is real, the United States and China appear convinced that the AI arms race is happening, an extremely dangerous proposition for a world that does not otherwise appear to be on the verge of an alignment breakthrough. A detente on this particular technological race, however unlikely it may seem today, may be critical to humanity's long-term flourishing.


Why Silicon Valley AI prophecies just feel like repackaged religion – Vox.com

Suppose I told you that in 10 years, the world as you know it will be over. You will live in a sort of paradise. You won't get sick, or age, or die. Eternal life will be yours! Even better, your mind will be blissfully free of uncertainty; you'll have access to perfect knowledge. Oh, and you'll no longer be stuck on Earth. Instead, you can live up in the heavens.

If I told you all this, would you assume that I was a religious preacher or an AI researcher?

Either one would be a pretty solid guess.

The more you listen to Silicon Valley's discourse around AI, the more you hear echoes of religion. That's because a lot of the excitement about building a superintelligent machine comes down to recycled religious ideas. Most secular technologists who are building AI just don't recognize that.

These technologists propose cheating death by uploading our minds to the cloud, where we can live digitally for all eternity. They talk about AI as a decision-making agent that can judge with mathematical certainty what's optimal and what's not. And they envision artificial general intelligence (AGI), a hypothetical system that can match human problem-solving abilities across many domains, as an endeavor that guarantees human salvation if it goes well, even as it spells doom if it goes badly.

These visions are almost identical to the visions of Christian eschatology, the branch of theology that deals with the end times or the final destiny of humanity.

Christian eschatology tells us that we're all headed toward the four last things: death, judgment, and heaven or hell. Although everyone who's ever lived so far has died, we'll be resurrected after the second coming of Christ to find out where we'll live for all eternity. Our souls will face a final judgment, care of God, the perfect decision-maker. That will guarantee us heaven if it goes well, but hell if it goes badly.

Five years ago, when I began attending conferences in Silicon Valley and first started to notice parallels like these between religion talk and AI talk, I figured there was a simple psychological explanation. Both were a response to core human anxieties: our mortality; the difficulty of judging whether we're doing right or wrong; the unknowability of our life's meaning and ultimate place in this universe or the next one. Religious thinkers and AI thinkers had simply stumbled upon similar answers to the questions that plague us all.

So I was surprised to learn that the connection goes much deeper.

"The intertwining of religion and technology is centuries old, despite the people who'll tell you that science is value-neutral and divorced from things like religion," said Robert Geraci, a professor of religious studies at Manhattan College and author of Apocalyptic AI. "That's simply not true. It never has been true."

In fact, historians tracing the influence of religious ideas contend that we can draw a straight line from Christian theologians in the Middle Ages to the father of empiricism in the Renaissance to the futurist Ray Kurzweil to the tech heavyweights he's influenced in Silicon Valley.

Occasionally, someone there still dimly senses the parallels. "Sometimes I think a lot of the breathless enthusiasm for AGI is misplaced religious impulses from people brought up in a secular culture," Jack Clark, co-founder of the AI safety company Anthropic, mused on Twitter in March.

Mostly, though, the figures spouting a vision of AGI as a kind of techno-eschatology (from Sam Altman, the CEO of ChatGPT-maker OpenAI, to Elon Musk, who wants to link your brain to computers) express their ideas in secular language. They're either unaware or unwilling to admit that the vision they're selling derives much of its power from the fact that it's plugging into age-old religious ideas.

But it's important to know where these ideas come from. Not because "religious" is somehow pejorative; just because ideas are religious doesn't mean there's something wrong with them (the opposite is often true). Instead, we should understand the history of these ideas (a virtual afterlife as a mode of salvation, say, or moral progress understood as technological progress) so we see that they're not immutable or inevitable; certain people came up with them at certain times to serve certain purposes, but there are other ideas out there if we want them. We don't have to fall prey to the danger of the single story.

"We have to be careful with what narratives we buy into," said Elke Schwarz, a political theorist at Queen Mary University of London who studies the ethics of military AI. "Whenever we talk about something religious, there's something sacred at play. Having something that's sacred can enable harm, because if something is sacred it's worth doing the worst things for."

In the Abrahamic religions that shaped the West, it all goes back to shame.

Remember what happens in the book of Genesis? When Adam and Eve eat from the tree of knowledge, God expels them from the garden of Eden and condemns them to all the indignities of flesh-and-blood creatures: toil and pain, birth and death. Humankind is never the same after that fall from grace. Before the sin, we were perfect creatures made in the image of God; now we're miserable meat sacks.

But in the Middle Ages, Christian thinkers developed a radical idea, as the historian David Noble explains in his book The Religion of Technology. What if tech could help us restore humanity to the perfection of Adam before the fall?

The influential ninth-century philosopher John Scotus Eriugena, for example, insisted that part of what it meant for Adam to be formed in God's image was that he was a creator, a maker. So if we wanted to restore humanity to the God-like perfection of Adam prior to his fall, we'd have to lean into that aspect of ourselves. Eriugena wrote that the mechanical arts (a.k.a. technology) were "man's links with the Divine, their cultivation a means to salvation."

This idea took off in medieval monasteries, where the motto ora et labora (prayer and work) began to circulate. Even in the midst of the so-called Dark Ages, some of these monasteries became hotbeds of engineering, producing inventions like the first known tidal-powered water wheel and impact-drilled well. Catholics became known as innovators; to this day, engineers have four patron saints in the religion. There's a reason why some say the Catholic Church was the Silicon Valley of the Middle Ages: It was responsible for everything from metallurgy, mills, and musical notation to the wide-scale adoption of clocks and the printing press, as I noted in a 2018 Atlantic article.

This wasn't tech for tech's sake, or for profit's sake. Instead, tech progress was synonymous with moral progress. By recovering humanity's original perfection, we could usher in the kingdom of God. As Noble writes, "Technology had come to be identified with transcendence, implicated as never before in the Christian idea of redemption."

The medieval identification of tech progress with moral progress shaped successive generations of Christian thinkers all the way into modernity. A pair of Bacons illustrates how the same core belief, that tech would accomplish redemption, influenced both religious traditionalists and those who adopted a scientific worldview.

In the 13th century, the alchemist Roger Bacon, taking a cue from biblical prophecies, sought to create an elixir of life that could achieve something like the Resurrection as the apostle Paul described it. The elixir, Bacon hoped, would give humans not just immortality, but also magical abilities like traveling at the speed of thought. Then in the 16th century, Francis Bacon (no relation) came along. Superficially he seemed very different from his predecessor (he critiqued alchemy, considering it unscientific), yet he prophesied that we'd one day use tech to overcome our mortality "for the glory of the Creator and the relief of man's estate."

By the Renaissance, Europeans dared to dream that we could remake ourselves in the image of God not only by inching toward immortality, but also by creating consciousness out of inanimate matter.

"The possibility to make new life is, other than defeating death, the ultimate power," Schwarz pointed out.

Christian engineers created automata: wooden robots that could move around and mouth prayers. Muslims were rumored to create mechanical heads that could talk like oracles. And Jewish folktales spread about rabbis who brought to life clay figures, called golems, by permuting language in magical ways. In the stories, the golem sometimes offers salvation by saving the Jewish community from persecution. But other times, the golem goes rogue, killing people and using its powers for evil.

If all of this is sounding distinctly familiar, well, it should. The golem idea has been cited in works on AI risk, like the 1964 book God & Golem, Inc. by mathematician and philosopher Norbert Wiener. You hear the same anxieties today in the slew of open letters released by technologists, warning that AGI will bring upon us either salvation or doom.

Reading these statements, you might well ask: why would we even want to create AGI, if AGI threatens doom as much as it promises salvation? Why not just limit ourselves to creating narrower AI, which could already work wonders in applications like curing diseases, and stick with that for a while?

For an answer to that, come with me on one more romp through history, and we'll start to see how the recent rise of three intertwined movements has molded Silicon Valley's visions for AI.

A lot of people assume that when Charles Darwin published his theory of evolution in 1859, all religious thinkers instantly saw it as a horrifying, heretical threat, one that dethroned humans as God's most godly creations. But some Christian thinkers embraced it as gorgeous new garb for the old spiritual prophecies. After all, religious ideas never really die; they just put on new clothes.

A prime example was Pierre Teilhard de Chardin, a French Jesuit priest who also studied paleontology in the early 1900s. He believed that human evolution, nudged along with tech, was actually the vehicle for bringing about the kingdom of God, and that the melding of humans and machines would lead to an explosion of intelligence, which he dubbed the omega point. Our consciousness would become a state of super-consciousness where we merge with the divine and become a new species.

Teilhard influenced his pal Julian Huxley, an evolutionary biologist who was president of both the British Humanist Association and the British Eugenics Society, as author Meghan O'Gieblyn documents in her 2021 book God, Human, Animal, Machine. It was Huxley who popularized Teilhard's idea that we should use tech to evolve our species, calling it "transhumanism."

That, in turn, influenced the futurist Ray Kurzweil, who made basically the same prediction as Teilhard: We're approaching a time when human intelligence merges with machine intelligence, becoming unbelievably powerful. Only instead of calling it the omega point, Kurzweil rebranded it as "the singularity."

"The human species, along with the computational technology it created, will be able to solve age-old problems and will be in a position to change the nature of mortality in a postbiological future," wrote Kurzweil in his 1999 national bestseller The Age of Spiritual Machines. (Strong New Testament vibes there. Per the book of Revelation: "Death shall be no more, neither shall there be mourning nor crying nor pain any more, for the former things have passed away.")

Kurzweil has copped to the spiritual parallels, and so have those who've formed explicitly religious movements around worshiping AI or using AI to move humanity toward godliness, from Martine Rothblatt's Terasem movement to the Mormon Transhumanist Association to Anthony Levandowski's short-lived Way of the Future church. But many others, such as Oxford philosopher Nick Bostrom, insist that unlike religion, transhumanism relies on critical reason and our best available scientific evidence.

These days, transhumanism has a sibling, another movement that was born in Oxford and caught fire in Silicon Valley: effective altruism (EA), which aims to figure out how to do the most good possible for the most people. Effective altruists also say their approach is rooted in secular reason and evidence.

Yet EA actually mirrors religion in many ways: functionally (it brings together a community built around a shared vision of moral life), structurally (it's got a hierarchy of prophet-leaders, canonical texts, holidays, and rituals), and aesthetically (it promotes tithing and favors asceticism). Most importantly for our purposes, it offers an eschatology.

EA's eschatology comes in the form of its most controversial idea, longtermism, which Musk has described as "a close match for my philosophy." It argues that the best way to help the most people is to focus on ensuring that humanity will survive far into the future (as in, millions of years from now), since many more billions of people could exist in the future than in the present, assuming our species doesn't go extinct first.

And here's where we start to get the answer to our question about why technologists are set on building AGI.

To effective altruists and longtermists, just sticking with narrow AI is not an option. Take Will MacAskill, the Oxford philosopher known as the "reluctant prophet" of effective altruism and longtermism. In his 2022 book What We Owe the Future, he explains why he thinks a plateauing of technological advancement is unacceptable. A period of stagnation, he writes, "could increase the risks of extinction and permanent collapse."

He cites his colleague Toby Ord, who estimates that the probability of human extinction through risks like rogue AI and engineered pandemics over the next century is one in six: Russian roulette. Another fellow traveler in EA, Holden Karnofsky, likewise argues that we're living at the "hinge of history" or the "most important century," a singular time in the story of humanity when we could either flourish like never before or bring about our own extinction. MacAskill, like Musk, suggests in his book that a good way to avoid extinction is to settle on other planets so we aren't keeping all our eggs in one basket.

But that's only half of MacAskill's moral case for space settlement. The other half is that we should be trying to make future human civilization as big and utopian as possible. As MacAskill's Oxford colleague Bostrom has argued, the colonization of the universe would give us the area and resources with which to run gargantuan numbers of digital simulations of humans living happy lives. The more space, the more happy (digital) humans! This is where the vast majority of moral value lies: not in the present on Earth, but in the future, in heaven. Sorry, I meant in the virtual afterlife.

When we put all these ideas together and boil them down, we get a basic proposition, and any student of religion will immediately recognize it for what it is: apocalyptic logic.

Transhumanists, effective altruists, and longtermists have inherited the view that the end times are nigh and that technological progress is our best shot at moral progress. For people operating within this logic, it seems natural to pursue AGI. Even though they view AGI as a top existential risk, they believe we can't afford not to build it, given its potential to catapult humanity out of its precarious earthbound adolescence (which will surely end any minute!) and into a flourishing interstellar adulthood (so many happy people, so much moral value!). Of course we ought to march forward technologically, because that means marching forward morally!

But is this rooted in reason and evidence? Or is it rooted in dogma?

The hidden premise here is technological determinism, with a side dash of geopolitics. Even if you and I don't create terrifyingly powerful AGI, the thinking goes, somebody else or some other country will, so why stop ourselves from getting in on the action? OpenAI's Altman exemplifies the belief that tech will inevitably march forward. He wrote on his blog in 2017 that "unless we destroy ourselves first, superhuman AI is going to happen." Why? "As we have learned, scientific advancement eventually happens if the laws of physics do not prevent it."

Have we learned that? I see no evidence to suggest that anything that can be invented necessarily will be invented. (As AI Impacts lead researcher Katja Grace memorably wrote, "Consider a machine that sprays shit in your eyes. We can technologically do that, but probably nobody has ever built that machine.") It seems more likely that people tend to pursue innovations when there are very powerful economic, social, or ideological pressures pushing them to.

In the case of the AGI fever that's gripped Silicon Valley, recycled religious ideas, in the garb of transhumanism, effective altruism, and longtermism, have supplied the social and ideological pressures. As for the economic, profit-making pressure, well, that's always operative in Silicon Valley.

Now, 61 percent of Americans believe AI may threaten human civilization, and that belief is especially strong among evangelical Christians, according to a Reuters/Ipsos poll in May. To Geraci, the religious studies scholar, that doesn't come as a surprise. Apocalyptic logic, he noted, is "very, very, very powerful in American Protestant Christianity," to the point that 4 in 10 US adults currently believe that humanity is living in the end times.

Unfortunately, apocalyptic logic tends to breed dangerous fanaticism. In the Middle Ages, when false messiahs arose, people gave up their worldly possessions to follow their prophet. Today, with talk of AGI doom suffusing the media, true believers drop out of college to go work on AI safety. The doom-or-salvation, heaven-or-hell logic pushes people to take big risks: to ante up and go all in.

In an interview with me last year, MacAskill disavowed extreme gambles. He told me he imagines that a certain type of Silicon Valley tech bro, thinking there's a 5 percent chance of dying from some AGI catastrophe and a 10 percent chance AGI ushers in a blissful utopia, would be willing to take those odds and rush ahead with building AGI.

"That's not the sort of person I want building AGI, because they are not responsive to the moral issues," MacAskill told me. "Maybe that means we have to delay the singularity in order to make it safer. Maybe that means it doesn't come in my lifetime. That would be an enormous sacrifice."

When MacAskill told me this, I pictured a Moses figure, looking out over the promised land but knowing he would not reach it. The longtermist vision seemed to require of him a brutal faith: You personally will not be saved, but your spiritual descendants will.

There's nothing inherently wrong with believing that tech can radically improve humanity's lot. In many ways, it obviously already has.

"Technology is not the problem," Ilia Delio, a Franciscan sister who holds two PhDs and a chair in theology at Villanova University, told me. In fact, Delio is comfortable with the idea that we're already in a new stage of evolution, shifting from Homo sapiens to "techno sapiens." She thinks we should be open-minded about proactively evolving our species with tech's help.

But she's also clear that we need to be explicit about which values are shaping our tech so that we can develop the technology "with purpose and with ethical boundaries," she said. "Otherwise, technology is blind and potentially dangerous."

Geraci agrees. "If a ton of people in Silicon Valley are going, 'Hey, I'm in for this technology because it's going to make me immortal,' that's a little bit terrifying," he told me. "But if somebody says, 'I'm in for this technology because I think we're going to be able to use it to solve world hunger,' those are two very different motives. It would impact the types of products you try to design, the population for which you are designing, and the way you try to deploy it in the world around you."

Part of making deliberate decisions about which values animate tech is also being keenly aware of who gets the power to decide. According to Schwarz, the architects of artificial intelligence have sold us on a vision of necessary tech progress with AI and set themselves up as the only experts on it, which makes them enormously powerful, arguably more powerful than our democratically elected officials.

"The idea that developing AGI is a kind of natural law becomes an ordering principle, and that ordering principle is political. It gives political power to some and a lot less to most others," Schwarz said. "It's so strange to me to say, 'We have to be really careful with AGI,' rather than saying, 'We don't need AGI, this is not on the table.' But we're already at a point when power is consolidated in a way that doesn't even give us the option to collectively suggest that AGI should not be pursued."

We got to this point in large part because, for the past thousand years, the West has fallen prey to the danger of the single story: the story equating tech progress with moral progress that we inherited from medieval religious thinkers.

"It's the one narrative we have," Delio said. That narrative has made us inclined to defer to technologists (who, in the past, were also spiritual authorities) on the values and assumptions being baked into their products.

What are alternatives? "If another narrative were to say, 'Just the dynamism of being alive is itself the goal,' then we might have totally different aspirations for technology," Delio added. "But we don't have that narrative! Our dominant narrative is to create, invent, make, and to have that change us."

We need to decide what kind of salvation we want. If we're generating our enthusiasm for AI through visions of transcending our earthbound limits and our meat-sack mortality, that will create one kind of societal outcome. But if we commit to using tech to improve the well-being of this world and these bodies, we can have a different outcome. We can, as Noble put it, begin to direct our astonishing capabilities "toward more worldly and humane ends."

Read more from the original source:

Why Silicon Valley AI prophecies just feel like repackaged religion - Vox.com