Category Archives: Artificial General Intelligence

AI, Augmented Reality, The Metaverse | Media@LSE – London School of Economics and Political Science

At a time when technology is changing fast and its consequences are hard to predict, Asher Kessler, a PhD researcher in LSE's Department of Media & Communications, explains how we can assess and rethink the visions of the future presented to us by big technology companies.

Over the past six months, it has sometimes felt as if we're entering a science fiction film. Whether it is Apple promising a future in which we'll all be navigating the world through Augmented Reality (AR) headsets, or the plethora of Big Tech companies racing to create Artificial General Intelligence (AGI), we have been confronted with visions of a radically transformed future. And these futures come just two years after Meta's big push to convince the world that we were entering a new social reality: the metaverse.

The dizzying whiplash of being flung from one supposedly inevitable future to another makes this a good moment to pause and question why Big Tech might be so intent on selling us these visions of the future. Doing so helps us recognise how the futures we anticipate or imagine shape how we navigate our present day, as well as how we come to remember the past. But it also might point us towards imagining the future in alternative ways.

To begin, I find it helpful to borrow the perspective of historian Jenny Andersson who, in her history of post-war futurism and futurology, argues that the future is best thought of as a field of struggle. From this perspective, visions and predictions of the future are suffused with power. Imagined futures are often instilled with narratives of directionality and inevitability, and this shapes how people act in the present. For example, in launching their Vision Pro headset, Apple is not just selling a product but a vision of the future that they hope to convince us is just around the corner. If we are already heading towards this future, then we might as well prepare for it. To take another example, when graduates imagine a future increasingly filled with AI technology, they may reassess what career path to follow now.

But a vision of the future doesn't just reshape how we come to navigate the present, it can also remould how we retell the past so that it better fits a narrative of directionality towards this imagined future. Proponents of AI argue that we are on the verge of not simply a technological development but an unprecedented transformational shift. For example, Demis Hassabis, who runs Google's AI team, argues that the most important moment in human history lies just ahead of us. Against this looming future of unprecedented historical change, past events come to be reassessed. For example, Nick Bostrom, an Oxford philosopher, feels able to re-evaluate past catastrophic events, such as the Holocaust, as mere ripples on the surface of the great sea of life, compared to the future threats he envisions from technologies such as nanotechnology or AGI.

The power of imagined futures to reshape our present and our retelling of the past can help explain why our expectations of the future are so fought over. Through reshaping what we imagine to be possible or even expected, actors such as Meta, Apple or OpenAI attempt to reshape how we understand our interests, so that they better align with their own interests. It is in Apple's interests to convince people that AR is the next step in computer-human interaction not only to sell their products, or convince developers to work on AR, but so that wider audiences believe that the best way to not be left behind is to prepare for the future Apple seeks to construct.

Yet the experience of the past six months, of being encouraged to imagine one seemingly inevitable future after the next, can begin to feel, dare I say it, a little repetitive. This might be because, as different as these technological visions of the future first appear, all of them follow the same core logic: the future is the next imagined technological breakthrough.

To make sense of this, I find it useful to go back and draw upon the work of political philosopher Hannah Arendt. In the 1950s, against the backdrop of space exploration and an emerging nuclear arms race, Arendt warned that society was handing over its imaginings of the future to science and technology. In Between Past and Future, Arendt argued that if we simply equate the future with the next scientific or technological breakthrough, we find ourselves closing a future that was once envisaged as open. The future becomes closed in that it is always the same, always the next scientific breakthrough or technology product, rather than a collective decision to reshape political or social life towards an ideal or betterment.

How then might we imagine the future outside of this temporal cycle of the next technological breakthrough? Perhaps a useful place to start is not trying to look ahead but instead to look back. If we can recover the different, and often largely forgotten, ways in which the future has been imagined, and reassemble them into a new contemporary context, we might find alternative ways of imagining the future now. For example, we could reconsider how historical figures and movements, such as messianic religious groups, imagined alternative visions of radical transformation. How might reassembling fragments of these visions into our contemporary context help us reimagine the future as unprecedented? More than just finding alternative ways of imagining the future, through these actions of recovery and reassembly, we resist the ways in which Big Tech is reshaping our expectations for the future, as well as the way we navigate the present, and remember the past.

This post represents the views of the author and not the position of the Media@LSE blog nor of the London School of Economics and Political Science.

Featured image: Photo by Andy Kelly on Unsplash

AI ethics experts warn safety principles could lead to ‘ethicswashing’ – Citywire

New safeguarding artificial intelligence (AI) principles adopted by investors could do more harm than good. AI ethics experts are warning that overreliance on best practice risks creating the impression that the technology sits outside existing laws.

This week, four of the most influential AI companies announced the formation of an industry body whose stated goal is to promote responsible AI.

However, the Frontier Model Forum, formed by ChatGPT developer OpenAI and AI startup Anthropic together with their main investors Microsoft and Google, has caused concern in the AI ethics community around its effectiveness.

AI experts fear such forums could lead to ‘ethicswashing’, with one AI and data ethics advisor warning investors that signing up to new AI principles could lead people to believe that the fast-developing technology is not accountable under the law.

These fears come at a time when the hottest thing in equity markets today is AI. Among the best-known AI plays is Citywire Elite Companies AAA-rated Nvidia (US:NVDA), whose shares have more than tripled this year.

AI ethics and data advisor Ravit Dotan said the Frontier Model Forum's activities and membership criteria do not mandate any action to actively identify or mitigate risks. By joining the Forum, she said, companies get to be seen as ethical because one of the criteria for membership is demonstrating a strong commitment to AI ethics themes. She added that the Forum's objectives offer nothing new.

AI technology should already adhere to existing laws. Signing up to new principles suggests that AI technology somehow sits outside of these laws, which it does not, she said.

The laws around discrimination and data privacy apply to these technologies now, so what I would say to investors is: Do the due diligence in making sure you are not investing in companies that break these laws.

Dotan said of the Forum: This initiative looks like a flash from the past. There are many initiatives of this kind already, but the surge in AI ethics research and best practices has not been accompanied by actions. The time for initiatives that only identify best practices, advance research and facilitate knowledge sharing is over.

How are the asset managers with holdings in Microsoft and Alphabet reacting to these concerns?

In Europe alone, 206 open-ended funds with the highest sustainable investment objectives hold a combined 11.1 million Microsoft and Alphabet shares, according to Morningstar data. These were worth 3bn as of 30 June.

In the table below are some of the Luxembourg- or Ireland-domiciled funds with more than 1% of their assets in Microsoft and Alphabet.

Citywire Selector contacted 10 asset managers holding Luxembourg- or Ireland-domiciled ESG funds with more than 1% of assets in Microsoft and Alphabet to ask about their engagement practices on ethics and safety.

While Morgan Stanley and Danske declined to comment, those who responded said they have signed up to uphold safeguarding principles.

Fidelity International said its ESG team has been engaging with the World Benchmarking Alliance (WBA) to address concerns on safety and ethics.

We have felt it is extremely important to raise levels of understanding and discussion about issues of digital ethics broadly, and to promote commitments from companies to best practices and disclosures regarding the responsible development and deployment of artificial intelligence specifically, Fidelity International said.

Mirroring some of Dotan's concerns, a Candriam spokesperson said it also recognises the limitations of new laws and stressed its backing of ethical practice.

We welcome the recent EU AI Act, which is one of the strongest pieces of legislation in the world. But it relies greatly on companies to self-assess the level of risk of their products and services. Hence, the importance that companies adopt strong ethical practices, the spokesperson said.

Additionally, Candriam said it has been taking an active role in several initiatives addressing responsible AI, including WBA's Responsible AI initiative, big tech and human rights engagement, Ranking Digital Rights engagement and Corporate Human Rights Benchmark engagement on human rights due diligence.

Johannes Lenhard, anthropologist of ethics and venture capital at the University of Cambridge, said more due diligence needs to be done by asset managers and venture capitalists (VCs), who are the first to back the next generation of major AI players. He added that VCs must be investors' first line of attack.

VCs need to be under more pressure to do their homework right now, as the companies they are funding will be taken over by public investors in five to eight years' time, he said.

If someone had thought about the unintended ESG consequences of Facebook when it was first invented, we may not have had to deal with all the trouble that has come out of social media today. AI is a parallel to that.

For Lenhard, the investor community is making the same mistakes again.

OpenAI is already going to be very hard to influence at this stage, yet everyone, including the regulator, is focusing on companies that are now established when they should be looking at what the next generation of AI will look like.

Asset managers can ask how these firms are mitigating against doing harm. It is a very simple question that does not require reporting or metrics.

Talk of AI regulation was amplified last month after hundreds of AI chiefs, tech chiefs and executives signed a statement by the Centre for AI Safety, which said that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

What followed was executives from OpenAI, Google, Microsoft and other tech firms publicly calling for AI regulation. Since then, the Federal Trade Commission has opened an investigation into OpenAI to find out whether the maker of ChatGPT has violated consumer protection laws by putting personal data at risk.

For Dotan, the letter was wrongheaded and almost malicious in its intent.

There is a split within the ethics community around long-termism and near-termism, which explains the background of this letter. These tech bros foster long-termism to deflect from the real harms their firms are doing now, she said.

It is easier for them to point to far-off science fiction doomsday scenarios rather than address current issues.

Additionally, Dotan said AI can threaten human existence without becoming all-powerful or sentient.

It does not have to happen through artificial general intelligence, it could happen in more mundane ways, she said.

The carbon emissions and the water footprint from AI development would be so wild that we would become extinct because of climate change. Secondly, discrimination could become so systematic because of AI that we would have race wars and kill ourselves. Or disinformation would become so common that no one would know what is true or false, so that when the next pandemic comes, we all die.

AI bots could replace us, peer warns House of Lords during debate – The Guardian

Crossbencher asks Lords to imagine bots with higher productivity and lower running costs as example of risk to UK jobs market

Mon 24 Jul 2023 17.22 EDT

The House of Lords could be replaced by bots with deeper knowledge, higher productivity and lower running costs, said a peer during a debate on the development of advanced artificial intelligence.

Addressing the upper chamber, Richard Denison hypothesised that AI services may soon be able to deliver his speeches in his own style and voice, with no hesitation, repetition or deviation.

He quoted the example to raise the wider issue of AI's potential effect on the UK jobs market.

In May, IBM paused hiring for roles covering nearly 7,800 jobs that could be replaced by AI and automation over time, shortly before BT announced it would cut up to 55,000 jobs by 2030, about 10,000 of which were predicted to be replaced by AI.

According to the Organisation for Economic Co-operation and Development (OECD), the jobs most likely to be affected by the impending AI revolution were in highly skilled professions such as law, medicine and finance.

Is it an exciting or alarming prospect that your lordships might one day be replaced by peer bots with deeper knowledge, higher productivity and lower running costs? the independent crossbencher asked. Yet this is the prospect for perhaps as many as 5 million workers in the UK over the next 10 years.

I was briefly tempted to outsource my AI speech to a chatbot and to see if anybody noticed. I did in fact test out two large language models. In seconds, both delivered 500-word speeches which were credible, if somewhat generic.

The crossbench peer Charles Colville, a freelance television producer, said that he had asked the AI programme ChatGPT to write a speech for him on the threat that AI poses to journalism.

He said one of the paragraphs stated: AI, in its tireless efficiency, threatens to overshadow human journalism. News articles can be automated and editorials composed without a single thought, a single beating heart behind the words.

My fear is, we will descend into a landscape where news is stripped of the very human elements that make it relatable, understandable and ultimately impactful.

He added: The new AI technology is further exacerbating this financial threat to the whole industry; AI-generating companies able to scrape for free the information from news websites, which are already facing increasing costs of creating original journalistic content.

Other peers noted additional potential threats, spanning bias and discrimination, privacy and security issues, and advised that humanity move forward with caution.

The AI minister, Jonathan Berry, said: These advances bring great opportunities, from improving diagnostics and healthcare to tackling climate change, but they also bring serious challenges, such as the threat of fraud and disinformation created by deepfakes.

We note the stark warnings from AI pioneers, however uncertain they may be about artificial general intelligence and AI biosecurity risks. We will unlock the extraordinary benefits of this landmark technology while protecting our society and keeping the public safe.

This Health AI Startup Aims To Keep Doctors Up To Date On The … – Forbes

OpenEvidence, which is valued at $425 million, is taking on one of AI's big engineering challenges: large language models whose training is stuck in the past.

By Katie Jennings, Forbes Staff

One of the limitations of large language models is that their training is frozen in time. If you ask OpenAI's viral chatbot ChatGPT if Covid vaccines work against the most common variant circulating in 2023, it responds: As an AI language model, I don't have access to real-time data or information beyond my last update in September 2021.

A tremendous amount has changed since then: there are new Covid strains, new vaccine and drug approvals, and tens of thousands of new scientific studies. In order for chatbots to be useful in a medical setting, they are going to need access to the latest research. Armed with $32 million in capital, nearly a dozen employees with PhDs (or PhD candidates) and a supercomputer in the Nevada desert, Daniel Nadler has been working to solve this knowledge cutoff problem with his new startup OpenEvidence.

Constantly retraining machine learning models requires huge amounts of costly computing power, but there is another option. It's a technical and engineering challenge that involves marrying these language models with a real-time firehose of clinical documents, says OpenEvidence founder Nadler, 40. Essentially, it means granting the AI access to a new pool of data right before it goes to answer the question, a process computer scientists call retrieval-augmented generation. If you ask OpenEvidence's chatbot the question about vaccines and the new Covid variant, it responds that specific studies on this variant are limited and includes information from studies published in February and May 2023 with citations. The main difference, says Nadler, is that his model can answer with an open book, as opposed to a closed book.
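
Nadler's description of retrieval-augmented generation can be made concrete with a small sketch. The snippet below is illustrative only: the toy scoring function, the corpus and the prompt format are assumptions made for the example, not OpenEvidence's actual pipeline, and the final call to a language model is left as a placeholder.

```python
# Minimal retrieval-augmented generation (RAG) sketch: rank a pool of recent
# documents against the question, then hand the best hits to the model as
# context so it can answer "with an open book". All names are illustrative.

def overlap_score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words that appear in the document."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / max(len(q_words), 1)

def retrieve(query: str, corpus: dict, k: int = 2) -> list:
    """Return the k most relevant (citation, text) pairs from the corpus."""
    ranked = sorted(corpus.items(),
                    key=lambda item: overlap_score(query, item[1]),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, hits: list) -> str:
    """Prepend the retrieved evidence, with citations, to the user's question."""
    context = "\n".join(f"[{cite}] {text}" for cite, text in hits)
    return (f"Answer using only the evidence below and cite your sources.\n"
            f"{context}\n\nQuestion: {query}")

corpus = {
    "Hypothetical study, Feb 2023": "Vaccine effectiveness data against the variant circulating in 2023 ...",
    "Hypothetical study, May 2023": "Booster effectiveness estimates against the same variant ...",
    "Older review, 2020": "Early pandemic findings that predate current variants ...",
}

question = "Do Covid vaccines work against the most common variant circulating in 2023?"
prompt = build_prompt(question, retrieve(question, corpus))
print(prompt)  # in a real system this prompt would now be sent to the language model
```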

This isn't Nadler's first time as the founder of an AI startup. He sold his previous company, Kensho Technologies, to S&P Global for $550 million (plus $150 million in stock) in 2018. Kensho is an AI-powered tool for Wall Street traders that analyzes millions of market data points to help identify arbitrage opportunities.

During the Covid pandemic, as the number of scientific studies about Covid-19 ballooned from zero to tens of thousands in the span of a few months, Nadler saw that healthcare providers were facing a similar problem to traders: how to separate credible, actionable information from the noise. He soon learned that wasn't just true of Covid studies, but of the medical field more broadly, as around two scientific papers are published every minute. The fundamental construct of the problem was identical, says Nadler. An information overload and a need to triage that information and a need to use computers to do so.

Venture capital investors have poured more than $46 billion into U.S. healthcare-focused artificial intelligence and machine learning startups over the past decade, according to data from PitchBook. Investment peaked at $13.4 billion in 2021 and was down to $10.3 billion in 2022. Startups have raised $3 billion across 205 deals so far this year.

Nadler founded OpenEvidence in November 2021. After investing $5 million of his own money, he says he closed a $27 million Series B funding round from outside investors in July 2022, valuing the startup at $425 million. He opened the round to former Kensho investors, including billionaire venture capitalist Jim Breyer, billionaire Vista Equity Partners cofounder Brian Sheth and investment banker Ken Moelis, among others. In March, OpenEvidence was selected to participate in a Mayo Clinic Platform accelerator. Since then, Nadler says more than 10,000 clinicians have signed up for early access, which is what's driving him to come out of stealth now.

Nadler says OpenEvidence is trying to take on the big incumbent database used by two million healthcare workers worldwide called UpToDate from the Netherlands-based global data company Wolters Kluwer. The clinical solutions in Wolters Kluwer's health division, which includes UpToDate, generated more than $900 million in revenue in 2022. UpToDate relies on more than 7,000 human experts to write and edit the entries around medical topics, according to Wolters Kluwer Health spokesperson Suzanne Moran. Topics in UpToDate are revised when important new information is published, Moran said in a statement. Editors review more than 420 peer-reviewed journals.

Where Nadler sees AI having an advantage over the human-edited entries is that OpenEvidence is interactive rather than a static page of text, meaning users can tailor their questions to precise patient scenarios and ask follow-ups, rather than having to read through huge chunks of text. It can also scan tens of thousands of journals compared to hundreds. The document pool that OpenEvidence is retrieving information from includes more than 35 million journal articles. Nadler says it sifts through the National Library of Medicine, which includes more than 31,000 peer-reviewed journals, multiple times a day. He says there is around a 24-hour lag time to process the new journal articles and get them into the retrieval pool.

All that data poses one potential logjam for Nadler's goals, though: not all journal articles are created equal when it comes to the quality of what they publish. The scientific community has a ranking system known as impact factor, which means journals that are more highly cited are more important on a relative basis compared to journals with fewer citations. The OpenEvidence models factor this in when retrieving information from the pool of new journal articles. You have evidence-weighted answers, says Nadler, meaning the quality of the input source is taken into account.
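
To illustrate what "evidence-weighted" could mean in practice, here is a tiny sketch that blends a relevance score with the source journal's impact factor. The 0.7/0.3 blend, the normalisation and the numbers are assumptions made for the example; the article does not describe OpenEvidence's actual weighting.

```python
# Illustrative evidence weighting: combine query relevance with journal impact
# factor, so a result from a highly cited journal can outrank a slightly more
# "relevant" result from a low-impact one. The weights are arbitrary.

def evidence_weighted_score(relevance: float, impact_factor: float,
                            max_impact: float = 100.0) -> float:
    normalized_impact = min(impact_factor / max_impact, 1.0)
    return 0.7 * relevance + 0.3 * normalized_impact

articles = [
    {"title": "Trial in a high-impact journal", "relevance": 0.80, "impact_factor": 70.0},
    {"title": "Case report in a low-impact journal", "relevance": 0.85, "impact_factor": 2.0},
]

ranked = sorted(articles,
                key=lambda a: evidence_weighted_score(a["relevance"], a["impact_factor"]),
                reverse=True)
print([a["title"] for a in ranked])
# -> the high-impact trial ranks first despite a marginally lower relevance score
```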

Every large language model behaves differently, but the general idea is that they compose answers by predicting the next most likely word in a sentence. The models tend to get an answer wrong when many different completions [are] equally probable, says Uri Alon, a postdoctoral researcher at the Language Technologies Institute at Carnegie Mellon University, who is not affiliated with OpenEvidence.
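
Alon's point about "equally probable" completions can be illustrated with the entropy of a next-token distribution: when probability mass is spread almost evenly across candidates, entropy is near its maximum and a wrong pick becomes much more likely. The probabilities below are invented for the example and do not come from any particular model.

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits; higher means the model is less sure of the next word."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

confident = [0.90, 0.05, 0.03, 0.02]   # one completion clearly dominates
uncertain = [0.26, 0.25, 0.25, 0.24]   # many completions are roughly equally probable

print(f"confident case: {entropy_bits(confident):.2f} bits")   # ~0.62 bits
print(f"uncertain case: {entropy_bits(uncertain):.2f} bits")   # ~2.00 bits (max for 4 options)
# The near-uniform case is exactly the situation Alon describes, where the
# model is most likely to generate a plausible-sounding but wrong completion.
```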

If you take a model that's been trained on the internet and ask it about a famous person, it is likely to get biographical information correct. But if you ask about a regular person that it doesn't have training data on, it might generate an incorrect response, known as a hallucination. Now, if you provide the model with a pool of information, including that regular person's biographical data, it would be much more likely to get it right. Some approaches allow you not only to generate an answer that is consistent with those documents that you retrieve, says Alon, but also to pull the exact sentence or exact paragraph that says so.

This is the approach that OpenEvidence is taking by providing citations to the journal articles that it is pulling from. However, Alon cautions that while retrieval-augmented systems may help reduce hallucinations, nothing is bulletproof. These models will always be fallible, much the same as humans. If you gave a human a bunch of documents or paragraphs, let the human read them and then answer questions, and also asked the human to tell you where their answer came from in those documents, even humans would make mistakes, he says.

Right now, OpenEvidence is free to use for early adopters who are licensed medical professionals. Part of the rationale for this is the amount of computing power and expense it takes to run the queries. Antonio Forte, a professor of plastic surgery at Mayo Clinic who is on OpenEvidence's medical advisory board, says he uses UpToDate on a regular basis. Forte says the biggest difference using OpenEvidence over the past few weeks has been the time savings. Rather than having to read through the equivalent of a book chapter, he can get an answer within 30 seconds, not within 10 minutes.

The hope is that other healthcare workers will have a similar reaction to Forte. Nadler says he hasn't decided on a revenue model yet. He is debating between subscription-based and ad-based, but is leaning towards a hybrid, an ad-based model with a subscription upsell. But one thing is for sure. OpenEvidence will not become a chatbot for the average patient. That is not a technical problem. That is a regulatory and ethical problem, says Nadler, which is why he wants to create a tool to help doctors and nurses but have them still rely on their human judgment. [There] is a very firm limit to any conceivable harm that could come from the usage of the technology to a patient, because it is always being intermediated by a professional.

Additional reporting by Kenrick Cai

OpenAI Requires Millions of GPUs for Advanced AI Model – Fagen wasanni

When it comes to AI processing, Nvidia's GPUs have established dominance in the industry. OpenAI, one of the leaders in the race for Artificial General Intelligence, is in need of more powerful GPUs for faster information processing and handling larger amounts of data. Both Nvidia and OpenAI are working on combining millions of GPUs, a significant increase compared to the thousands currently being used.

Reports suggest that Nvidia has already supplied over 20,000 GPUs to OpenAI. The collaboration between the two companies has resulted in the development of the GPT-4 model, showcased in OpenAI's ChatGPT. Discussions are underway to create AI GPUs that surpass the capabilities of existing ones.

OpenAI has recently expanded accessibility to its AI models, making them available on web and mobile platforms, including iOS and Android devices. To improve and enhance its upcoming AI model, OpenAI requires more GPUs and computing power. Nvidia currently holds a 95% market share for the graphics processors used in AI.

OpenAI aims to develop an AI model that requires approximately 10 million GPUs. Investing in such a large number of GPUs is a costly endeavor. However, it is a crucial step towards advancing artificial intelligence. The upcoming model is expected to have a wide range of applications and capabilities, making it necessary to process vast amounts of data, possibly reaching multiple terabytes.

Nvidia has the capacity to produce up to a million AI GPUs, but it would take approximately 10 years to fulfill an order of millions of GPUs. The industry has been facing GPU shortages due to high demand, leading to increased prices. Nvidia is collaborating with TSMC to increase production and supply of GPUs. However, interconnecting such a massive number of GPUs poses challenges.

Other companies, including Google, Microsoft, and Amazon, have also ordered thousands of GPUs from Nvidia to work on their own AGI projects. The demand for Nvidia GPUs has significantly contributed to the company's rise, making it nearly a trillion-dollar company. It is still unclear whether OpenAI can afford the high cost of acquiring millions of GPUs, and the possibility of another GPU shortage similar to the pandemic period remains uncertain.

Neither Nvidia nor OpenAI has officially confirmed the news about OpenAI's plan to use 10 million GPUs. Microsoft is rumored to be working on GPUs for AI development, which could potentially help reduce costs. These chips currently cost around $10,000 each in the case of Nvidia's A100, and they play a critical role in AGI projects like OpenAI's ChatGPT, Microsoft's Bing AI, and Stability's Stable Diffusion.

Various companies, such as Stability AI, have been utilizing Nvidia GPUs for their AI models. Stability AI has employed 5,400 Nvidia A100 GPUs for image generation, while Meta AI has used around 2,048 Nvidia A100 GPUs for training its LLaMA model. These GPUs are specifically designed to handle complex calculations and are suitable for training and using neural network models.

Initially used for graphics processing, GPUs like Nvidia's A100 have been reconfigured for machine learning tasks and are now being deployed in data centers. Other companies, such as AMD and Intel, are also investing in AI-GPU hardware research and development to build their own AGI models. While it is possible to train on GPUs other than Nvidia's, Nvidia's advancements in frameworks and libraries are likely to continue generating profits and keeping upfront pricing competitive.

Why AI can't answer the fundamental questions of life | Mint – Mint

Elon Musk recently announced his new artificial intelligence company, xAI. Its declared purpose is to "understand the true nature of the universe." Its 12-member team includes subject experts with experience across OpenAI, Google Research and Microsoft Research. With such impressive credentials, xAI is being positioned as a competitor to the likes of ChatGPT and Google Bard and as a front-runner in Artificial General Intelligence (AGI), a system that can perform the tasks a human being is capable of. Renowned futurist Ray Kurzweil has predicted that we will have AGI by 2029 and machines smarter than humans by 2045.

In its first tweet, xAI asked, "What are the most fundamental unanswered questions?" While existing AI systems excel in tasks like itinerary planning, essay writing and document rephrasing, they fall short when it comes to tackling profound inquiries. Questions such as the origin of life, the possibility of faster-than-light travel, a potential cure for cancer or how to decipher ancient scripts like the Harappan one remain unanswered. To solve such questions, an AI system needs super-intelligence, a level of intelligence far surpassing that of the brightest and most gifted human minds. xAI's aspirations parallel those of the famous supercomputer in Douglas Adams' book, The Hitchhiker's Guide to the Galaxy, which was asked about "the answer to the ultimate question of life, the universe, and everything" and humorously replied with the number 42.

Wouldn't it be fascinating if we had a super-intelligent AI that could discover new laws of physics and answer questions about the meaning of life? Despite notable advances, achieving such a feat remains highly improbable, if not impossible, in the near future. Current AI systems like ChatGPT excel at assimilating vast amounts of information and leveraging machine learning to generate user-friendly responses to queries. These systems have undoubtedly acquired substantial knowledge. But answering fundamental questions also requires wisdom.

Creative intelligence extends beyond the mere reorganization of information available in our memory. It involves delving into our subconscious to uncover deeper connections and patterns. Intuition and spontaneity play vital roles in this form of intelligence. Einstein often used thought experiments that were radically unrelated to the problem statements. Legend has it that his thought experiment of a man free-falling in an elevator led him to the Special Theory of Relativity. Newton's quest for the laws of motion is said to have begun with an apple seen falling from a tree. Visionaries possess the ability to find a common framework bridging seemingly unrelated events. They can intuitively conceive ideas or stories and then formulate equations around them. Therefore, an AGI system may be able to grow its complexity and computing capacity exponentially, but that does not guarantee a solution to fundamental problems.

Hence, it is unlikely that an AI system will discover a new law of physics. That would require a unique blend of skills, including expertise in mathematics and physics, keen observational abilities, proficiency in thought experiments, and the ability to construct novel narratives. The capability to self-reflect, which has predominantly been a unique attribute of human beings, would also be essential. Despite possessing all these qualities, even the brightest minds have been unable to answer some of the fundamental questions of the universe. It took Einstein 10 years to formulate the General Theory of Relativity after he gave us his Special Theory in 1905.

Another example is the hard problem of consciousness put forth by philosopher David Chalmers. It refers to the problem of how physical processes in the brain give rise to the subjective or first-person experience of the mind and of the world. Since subjective experience and consciousness are integral components of this enigma, an AI system that lacks access to these facets would be unable to solve the problem. At best, it can help us develop frameworks to better understand how consciousness works, but it seems unlikely that it would be able to comprehensively crack the task.

In the event that xAI or any other AI system manages to achieve super-intelligence, it would raise a significant concern known as the alignment problem. AI alignment is the challenge of encoding AI systems in a way that is compatible with human moral values. According to a thought experiment proposed by philosopher Nick Bostrom, a super-intelligent AI assigned with the objective of maximizing the production of paper clips might conclude that eliminating all humans is crucial to its mission, as they could potentially deactivate it and because their atoms could be converted into additional paper clips. xAI aims to make the system maximally curious so as to solve this alignment problem, as a curious AI would automatically align with human values.

Nevertheless, there are distinct advantages offered by AI that can prove valuable in tackling fundamental questions. Unlike humans, whose attention spans are decreasing due to exposure to digital distractions, AI can maintain focus on a problem for extended periods. Furthermore, AI can offer tremendous computational power. While AI may not have the capability to solve these problems independently, the most plausible scenario is that AI can assist humans in specific areas, thereby facilitating their efforts to address these challenges.

These are the author's personal views.

Demis Hassabis – Information Age

Demis Hassabis is the CEO and co-founder of the machine learning start-up DeepMind, a subsidiary of Google's parent company Alphabet.

Hassabis was one of the AI leaders who co-signed the safe.ai open letter calling for the mitigation of societal risks caused by AI to be as highly prioritised as that of pandemics and nuclear war. Speaking to Time magazine following the recent opening of DeepMind's London headquarters, he urged caution around artificial intelligence development, commenting: Although we work on making machines smart, we wanted to keep humanity at the centre of what we're doing here. The tech leader went on to claim that AI was on the cusp of potentially creating tools that could prove harmful to society.

With the company aiming to unleash the potential of artificial general intelligence (AGI), widely regarded as the next phase in AI development, Hassabis said that this would serve as an epoch-defining technology akin to the discovery of electricity.

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry. More by Aaron Hurst

OpenAI’s Head of Trust and Safety Quits: What Does This Mean for … – ReadWrite

Quite unexpectedly, Dave Willner, OpenAI's head of trust and safety, recently announced his resignation. Willner, who has been in charge of the AI company's trust and safety team since February 2022, announced on his LinkedIn profile his decision to take on an advisory role in order to spend more time with his family. This pivotal shift occurs as OpenAI faces increasing scrutiny and struggles with the ethical and societal implications of its groundbreaking innovations. This article will discuss OpenAI's commitment to developing ethical artificial intelligence technologies, as well as the difficulties the company is currently facing and the reasons for Willner's departure.

Dave Willner's departure from OpenAI is a major turning point for him and the company. After holding high-profile positions at Facebook and Airbnb, Willner joined OpenAI, bringing with him a wealth of knowledge and experience. In his LinkedIn post, Willner thanked his team for their hard work and reflected on how his role had grown since he was first hired.

For many years, OpenAI has been one of the most innovative organizations in the field of artificial intelligence. The company became well-known after its AI chatbot, ChatGPT, went viral. OpenAI's AI technologies have been successful, but this has resulted in heightened scrutiny from lawmakers, regulators, and the general public over their safety and ethical implications.

OpenAI CEO Sam Altman has spoken out in favor of AI regulation and ethical growth. In a March Senate panel hearing, Altman voiced his concerns about the possibility of artificial intelligence being used to manipulate voters and spread disinformation. In light of the upcoming election, Altman's comments highlighted the significance of such safeguards.

OpenAI is currently working with U.S. and international regulators to create guidelines and safeguards for the ethical application of AI technology, so Dave Willner's departure comes at a particularly inopportune time. Recently, the White House reached an agreement with OpenAI and six other leading AI companies on voluntary commitments to improve the security and reliability of AI systems and products. Among these pledges is the commitment to clearly label content generated by AI systems and to put such content through external testing before it is made public.

OpenAI recognizes the risks associated with advancing AI technologies, which is why the company is committed to working closely with regulators and promoting responsible AI development.

OpenAI will undoubtedly face new challenges in ensuring the safety and ethical use of its AI technologies with Dave Willner's transition to an advisory role. OpenAI's commitment to openness, accountability, and proactive engagement with regulators and the public is essential as the company continues to innovate and push the boundaries of artificial intelligence.

To ensure that artificial general intelligence (AGI) benefits all of humanity, OpenAI is working to develop AI technologies that do more good than harm. Artificial general intelligence describes highly autonomous systems that can match or even surpass human performance on the majority of tasks with high economic value. Safe, useful, and easily accessible artificial general intelligence is what OpenAI aspires to create. OpenAI makes this pledge because it thinks it is important to share the rewards of AI and to use any power over the implementation of AGI for the greater good.

To get there, OpenAI is funding studies to improve AI systems' dependability, robustness, and compatibility with human values. To overcome obstacles in AGI development, the company works closely with other research and policy groups. OpenAI's goal is to create a global community that can successfully navigate the ever-changing landscape of artificial intelligence by working together and sharing their knowledge.

To sum up, Dave Willner's departure as OpenAI's head of trust and safety is a watershed moment for the company. OpenAI understands the significance of responsible innovation and working together with regulators and the larger community as it continues its journey toward developing safe and beneficial AI technologies. OpenAI is an organization with the goal of ensuring that the benefits of AI development are available to as many people as possible while maintaining a commitment to transparency and accountability.

OpenAI has stayed at the forefront of artificial intelligence (AI) research and development because of its commitment to making a positive difference in the world. OpenAI faces challenges and opportunities as it strives to uphold its values and address the concerns surrounding artificial intelligence (AI) after the departure of a key figure like Dave Willner. OpenAI's dedication to ethical AI research and development, combined with its focus on the long term, positions it to positively influence AI's future.

First reported on CNN

Dave Willner was the head of trust and safety at OpenAI, responsible for overseeing the company's efforts in ensuring ethical and safe AI development.

Dave Willner announced his decision to take on an advisory role to spend more time with his family, leading to his departure from his position as head of trust and safety at OpenAI.

OpenAI is regarded as one of the most innovative organizations in the field of artificial intelligence, particularly after the success of its AI chatbot, ChatGPT.

OpenAI is facing increased scrutiny and concerns from lawmakers, regulators, and the public over the safety and ethical implications of its AI innovations.

OpenAI is actively working with U.S. and international regulators to create guidelines and safeguards for the ethical application of AI technology.

OpenAI has made voluntary pledges, including clearly labeling content generated by AI systems and subjecting such content to external testing before making it public.

OpenAI aims to create artificial general intelligence (AGI) that benefits all of humanity by working on systems that do more good than harm and are safe and easily accessible.

OpenAI is funding research to improve the dependability and robustness of AI systems and is working with other research and policy groups to navigate the challenges of AGI development.

OpenAI aims to create a global community that collaboratively addresses the challenges and opportunities in AI development to ensure widespread benefits.

OpenAI is committed to responsible innovation, transparency, and accountability in AI research and development, aiming to positively influence AI's future.

Featured Image Credit: Unsplash

John Boitnott is a news anchor at ReadWrite. Boitnott has worked at TV, print, radio and Internet companies for 25 years. He's an advisor at StartupGrind and has written for BusinessInsider, Fortune, NBC, Fast Company, Inc., Entrepreneur and Venturebeat. You can see his latest work on his blog, John Boitnott

From vision to victory: How CIOs embrace the AI revolution – ETCIO

Manuj Desai, Global Chief Information Officer, STL

The rise of AI tools like OpenAI and Bard, along with the push for Artificial General Intelligence (AGI), has sparked enthusiasm in the industry, with experts projecting significant growth. UBS Global Wealth Management forecasts a 20% annual growth rate, reaching a $90 billion market value by 2025. However, I believe advancements in machine learning and generative AI may exceed expectations, leading to faster expansion.

CIOs remain the gatekeepers of the organization's technology landscape, making sure enterprise architecture, software applications, transactional data traceability, industrial digitization and network connectivity remain stable. Their job also involves strategic planning, budgeting, and implementing IT transformation initiatives to keep the business growing smoothly, along with the task of safeguarding data storage, backup, and cyber security defenses. And now in 2023, we're blazing trails in innovation, turbocharging operational efficiency, and supercharging data-based decision-making processes.

Power of AI: The Three Ds

For me, there are three Ds of AI that stand out as crucial elements - Data, Dynamic nature of this technology, and Deployment. If CIOs possess a deep understanding of these 3 Ds, they've already conquered half the battle.

However, data security is crucial, and CIOs must establish measures like encryption, access controls, and audits. Collaboration with legal teams ensures compliance, while data governance sets standards, cleansing procedures, and validation mechanisms. Skilled employees are vital for data analysis, AI implementation, and governance, so CIOs should foster continuous learning and provide training opportunities. As the world matures in AI, CIOs should leverage the existing AI platforms to enrich the data and make it meaningful for the respective organizations to consume.

But every coin has two sides. AI's dynamic nature can introduce bias if trained on biased or incomplete data. CIOs should ensure fairness by promoting transparency, accountability, and ethical AI practices. Collaboration with experts will help identify and mitigate bias. Adopting explainable AI techniques would enhance trust and regulatory compliance. Interpretable models and frameworks would aid in understanding and communicating AI outputs. Ethics committees would oversee initiatives, ensuring adherence to guidelines.

Deployment or the process of implementing and integrating AI solutions is where we get into the real-world applications and systems. Think of automation, enhanced decision-making, improved customer experience, predictive analytics, process optimization, scalability, and competitive advantage! AI deployment can unlock novel insights, uncover hidden patterns, and drive disruptive solutions, helping organizations stay ahead of the competition.

However, AI deployment can raise ethical and legal challenges, such as bias, fairness, privacy, and accountability. CIOs can play a crucial role in establishing ethical guidelines and frameworks for AI deployment. They can collaborate with legal and compliance teams to ensure compliance with relevant regulations and industry standards. AI deployment may bring about significant changes in workflows, job roles, and organizational processes. CIOs can play a critical role in change management by fostering a culture of innovation, creating awareness about the benefits of AI, and involving key stakeholders in the deployment process.

The vanguard of AI

According to recent statistics, businesses with CIOs who embrace digital transformation are 1.5 times more likely to experience substantial revenue growth. These leaders understand that sitting on the fence or burying their heads in the sand is a recipe for disaster. Instead, they seize the bull by the horns, driving initiatives that propel their organizations to new heights. In fact, 82% of organizations with CIOs who embrace digital transformation report increased operational efficiency.

I encourage all my fellow CIOs to craft an AI strategy blueprint that encapsulates your vision and envisaged advantages, conducts risk assessments, addresses key performance indicators, and delineates exemplary approaches for value generation. Pinpoint advocates for AI initiatives and guarantee precise measurement and widespread dissemination of their KPIs. Allocate resources to enhance data literacy initiatives, fostering a culture that thrives on data-driven decision-making. Ingrain conscientious AI practices as the bedrock of your AI strategy, treating them as fundamental rather than an afterthought.

The author is the Global Chief Information Officer of STL.

Disclaimer: The views expressed are solely of the author and ETCIO.com does not necessarily subscribe to it. ETCIO.com shall not be responsible for any damage caused to any person/organization directly or indirectly.

Future of automotive journalism in India: Would AI take charge – Team-BHP

BHPian CentreOfGravity recently shared this with other enthusiasts.

Artificial Intelligence. A topic that has garnered quite a lot of interest in the past few years. All this while, we were talking of a future where AI would take over many jobs. That 'future' isn't too far away now...

A few days back, I was discussing with my parents about my dream job, automobile journalism. We were talking about its pros and cons when they raised an important point - the relevance of automobile journalism with the onset of AI. That was when it struck me; will the current crop of media houses fall prey to AI? Will we have AI-generated reviews instead of Team-BHP reviews? If it does indeed take over, how will the system function? Though I feel that AI cannot replace human opinions and feelings about a car, I know that my defense may not be good enough. So, my question to the esteemed members of this forum - Do you think AI will take over automobile journalism along with other jobs? If so, how?

I really hope and pray that automotive journalism is here to stay. We already have platforms like ChatGPT which provide reviews sourced from the internet, will it be long before they are the source? Looking forward to your views and opinions.

Here's what GTO had to say about the matter:

Net net, there will always be a place for high-quality content in the world. Be it Indian cars & Team-BHP, or international sources such as NYTimes, Bloomberg etc. AI will make things challenging for sure, but the human race will always consume content that is original, intelligent, informative & fun.

I recently read a mind-blowing Bloomberg article about a sneaker Ponzi scheme. AI won't give you this - I happily pay an annual subscription of 20 grand to Bloomberg for quality articles like these. Neither will AI give you a Thar ownership report like this, a motorcycle review like this or an emotionally-charged travelogue like this.

In comparison to such beautiful writeups, AI articles can be JUNK - check out CNET's money-making machine based on AI content.

Here's what BHPian Dr.AD had to say about the matter:

I run a small AI/Data Science company that builds exactly the kind of solutions and business applications of AI you are worried about, and we discuss such questions in our day-to-day job, with our customers, colleagues, peers, investors and other entrepreneurs too. I am a mathematician, scientist, engineer and finally an entrepreneur, and have seen many such new technologies emerge over the last 25 years and closely seen how they affect jobs and careers (and seen my own career affected, evolve and emerge through all this "mess", if I may jokingly call it so). My answers below come from that perspective.

The short answer to your question: No!

AI can not "take over" automobile journalism as you are thinking of. In fact, based on my understanding (and of course, I could be wrong), AI is hardly "taking over" anything and will not take over anything. Most of those are unfounded fears.

AI will for sure revolutionize automobile journalism, or the entire field of journalism, just like it is revolutionizing many fields! And to a good extent that is already happening. But revolutionizing something is not the same as taking it over or eliminating it, or making it redundant.

Human intelligence, human creativity, and human emotions, especially in fields such as automotive journalism, will always rise above the AI-generated text, and the AI-generated text will merely be a tool (and a very powerful tool) at the hands of humans to write even better reviews! Of course, a few can try to misuse AI but those will eventually fade out, or will not get any more importance than those "copy-paste" internet news channels that already exist today.

In my own experience, I am seeing that such fears are expressed in all fields - manufacturing, banking & finance, healthcare, corporate governance etc. And we develop AI solutions in all those fields. Eventually what ends up happening, and what will continue to happen at just a lot more rapid rate, is that the humans quickly learn to use these AI tools as their assistants and rise above those to add new value (through their own insights of those jobs and their intuition and creativity) to their own organizations or businesses.

AI + Human is a tremendously powerful combination that will outclass most AI applications in the foreseeable future. Having said that, there are some developments in the field of "Artificial General Intelligence (AGI)", which might be able to reach the same level as human intelligence in some of the job roles. However, that does not exist today (AGI is still a future vision and not a reality today) and moreover, even when AGI comes to reality, it remains to be seen to what level it can match human intelligence in day-to-day life.

For some jobs which are purely mechanical and repetitive in nature (such as some of the assembly line jobs in manufacturing), those are the most likely to be made redundant by the AI. But those jobs are and were being made redundant by simple "automation" which is and was available in plenty, long before this AI became so popular. So again not much changes there.

The relatively intelligent jobs such as journalism, legal practices, scientific research, medical research etc. are least likely to be "taken over" by these generative language models or models like ChatGPT, as you mentioned. These jobs will hugely benefit from the advances in AI, but will not become redundant for humans!

In fact, coming to your specific question, and since you would like to become an automobile journalist as you said, I would say the future looks very exciting. Instead of worrying about AI, think about how you would use this powerful tool as your assistant and how you would use your human creativity to create something next level to what exists today. There are unlimited possibilities and the sky is the limit! I am actually looking forward to AI revolutionizing some of these things.

So in short, yes AI will revolutionize things and I am excited about that and looking forward to the future. And no, I am not worried about human jobs or human roles because I feel confident that we will rise above that and create a more beautiful future for ourselves!

Here's what BHPian Samurai had to say about the matter:

While I am not an AI expert, I did try to become one for 2-3 years before giving up. Now I provide software stack to multiple AI companies, so I do have some exposure to the latest trends in enterprise AI.

Some time ago I answered a similar question, regarding IT jobs. But your question is in the generative AI area, which would require a different answer.

Recently I read a wonderful farewell message by a business acquaintance, directed at my team. Everyone was very happy to read the glowing tribute. However, my AI antenna started buzzing. The message was too articulate, using words that person seldom uses. Also, the whole message was generic, without any personal anecdotes. Plenty of adjectives, but not one example of why such adjectives were deserved. So, I checked the message at copyleaks, and it reported 94% chances of AI-generated content. That's almost 2 sigma or two standard deviations away from the mean (human content).

Soon most of us will be able to detect AI content either instinctively or by using an AI-checking tool. Why should anyone read AI-generated content? It is just a word salad. Sites using AI-generated content will lose viewers in a hurry. Those who write genuine content will continue to thrive.

Here's what BHPian SmartCat had to say about the matter:

Check out BHPian comments for more insights and information.
