
Navigating artificial intelligence: Red flags to watch out for – ComputerWeekly.com

Lou Steinberg, founder and managing partner of CTM Insights, a cyber security research lab and incubator, doesn't watch movies about artificial intelligence (AI) because he believes what he sees in real life is enough.

Steinberg has also worn other hats, including a six-year tenure as chief technology officer of TD Ameritrade, where he was responsible for technology innovation, platform architecture, engineering, operations, risk management and cyber security.

He has worked with US government officials on cyber issues as well. Recently, after a White House meeting with tech leaders about AI, Steinberg spoke about the benefits and downsides of having AI provide advice and complete tasks.

Businesses with agendas, for example, might try to skew training data to get people to buy their cars, stay in their hotels, or eat at their restaurants. Hackers may also change training data to advise people to buy stocks that are being sold at inflated prices. They may even teach AI to write software with built-in security issues, he contended.

In an interview with Computer Weekly, Steinberg drilled down into these red flags and what organisations can do to mitigate the risks of the growing use of AI.

What would you say are the top three things we should really be worried about right now when it comes to AI?

Steinberg: My short- to medium-term concerns with AI are in three main areas. First, AI- and machine learning-powered chatbots and decision support tools will return inaccurate results that are misconstrued as accurate, because they use untrustworthy training data and lack traceability.

Second, the lack of traceability means we don't know why AI gives the answers it gives, though Google is taking an interesting approach by providing links to supporting documentation that a user can assess for credibility.

Third, attempts to slow the progress of AI, while well meaning, will slow the pace of innovation in Western nations while countries like China will continue to advance. While there have been examples of internationally respected bans on research, such as human cloning, AI advancement is not likely to be slowed globally.

How soon can bad actors jail-break AI? And what would that mean for society? Can AI developers pre-empt such dangers?

People have already gotten past guardrails built into tools like ChatGPT through prompt engineering. For example, a chatbot might refuse to generate code that is obviously malware but will happily create one function at a time that can be combined to create malware. Jail-breaking of AI is already happening today, and will continue as both the guardrails and attacks gain in sophistication.

The ability to attack poorly protected training data and bias the outcome is an even larger concern. Combined with the lack of traceability, we have a system without feedback loops to self-correct.


When will we get past the black box problem of AI?

Great question. As I said, Google appears to be trying to reinforce answers with pointers to supporting data. That helps, though I would rather see a chain of steps that led to a decision. Transparency and traceability are key.

Who can exploit AI the most? Governments? Big tech? Hackers?

All of the above can and will exploit AI to analyse data, support decision-making and synthesise new outputs. Exploiting AI comes down to whether the use cases will be good or bad for society.

If the exploitation is done by a tech company, it will be to gain commercial advantage, ranging from selling you products to detecting fraud to personalising medicine and medical diagnoses. Businesses will also tap cost savings by replacing humans with AI, whether to write movie scripts, drive a delivery truck, develop software, or board an airplane by using facial recognition as a boarding pass.

Many hackers are also profit-seeking, and will try to steal money by guessing bank account passwords or replicating a person's voice and likeness to scam others. Just look at recent examples of realistic, synthesised voices being used to trick people into believing a loved one has been kidnapped.

While autonomous killer robots from science fiction are certainly a concern with some nation states and terrorist groups, governments and some companies sit on huge amounts of data that would benefit from improved pattern detection. Expect governments to analyse and interpret data to better manage everything from public health to air traffic congestion. AI will also allow personalised decision-making at scale, where agencies like the US Internal Revenue Service will look for fraud while authoritarian governments will increase their ability to do surveillance.

What advice would you give to AI developers? As an incubator, does CTM Insights have any special lens here?

There are so many dimensions of protection needed. Training data must be curated and protected from malicious tampering. The ability to synthetically recreate a real person's voice and likeness will cause fraud and reputational damage to skyrocket. We need to solve this problem before we can no longer trust what we see or hear, like fake phone calls, fake videos of people appearing to commit crimes and fake investor conferences.

Similarly, the ability to realistically edit images and evade detection will create cases where even real images, like your medical scans, are untrustworthy. CTM has technology to isolate untrustworthy portions of data and images, without requiring everything to be thrown out. We are working on a new way to detect synthetic deepfakes.

Is synthetic data a good thing or a bad thing if we want to create safer AI?

Synthetic data is mostly a good thing, and we can use it to help create curated training data. The challenge is that attackers can do the same thing.

Will singularity and artificial general intelligence (AGI) be a utopia or a dystopia?

I'm an optimist. While most major technology advances can be used to do harm, AI has the ability to eliminate a huge amount of work done by people but still create the value of that work. If the benefits are shared across society, and not concentrated, society will gain broadly.

For example, one of the most common jobs in the US is driving a delivery truck. If autonomous vehicles replace those jobs, society still gets the benefit of having things delivered. If all that does is raise profit margins at delivery companies, the impact will fall hard on the laid-off drivers. But if some of the benefit is used to help those ex-drivers do something else, like construction, then society benefits by getting new buildings.

Data poisoning, adversarial AI, co-evolution of good guys and bad guys: how serious have these issues become?

Co-evolution of AI and adversarial AI has already started. There is debate as to the level of data poisoning out there today, as many attacks aren't made public. I'd say they are all in their infancy. I'm worried about what happens when they grow up.

If you were to create an algorithm that's water-tight on security, what broad areas would you be careful about?

The system would have traceability built in from the start. The inputs would be carefully curated and protected. The outputs would be signed and have authorised use built in. Today, we focus way too much on identity and authentication of people and not enough on whether those people authorised things.
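
As one way to picture the "signed outputs" idea above, here is a minimal sketch of signing a model's output so a downstream consumer can verify it has not been tampered with. It uses the cryptography library's Ed25519 primitives; the key handling and message format are illustrative assumptions, not a description of any specific system Steinberg or CTM has built.

```python
# Minimal output-signing sketch using the `cryptography` library (Ed25519).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # in practice, a securely managed key
public_key = private_key.public_key()

output = b"model decision: approve loan application #1234"  # hypothetical output
signature = private_key.sign(output)        # ship the signature alongside the output

# A verifier holding only the public key checks integrity and origin;
# verify() raises cryptography.exceptions.InvalidSignature if the output was altered.
public_key.verify(signature, output)
print("output signature verified")
```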

Have you seen any evidence of AI-driven or assisted attacks?

Yes, deepfake videos exist of Elon Musk and others for financial scams, as well as Ukraine's President Zelensky telling his troops to surrender in disinformation campaigns. Synthesised voices of real people have been used in fake kidnapping scams, and fake CEO voices on phone calls have asked employees to transfer money to a fraudster's account. AI is also being used by attackers to exploit vulnerabilities to breach networks and systems.

What's your favourite Black Mirror episode or movie about AI that feels like a premonition?

I try to not watch stuff that might scare me; real life is enough!


People warned AI is becoming like a God and a ‘catastrophe’ is … – UNILAD

An artificial intelligence investor has warned that humanity may need to hit the brakes on AI development, claiming it's becoming 'God-like' and that it could cause 'catastrophe' for us in the not-so-distant future.

Ian Hogarth - who has invested in over 50 AI companies - made an ominous statement on how the constant pursuit of increasingly-smart machines could spell disaster in an essay for the Financial Times.

The AI investor and author claims that researchers are foggy on what's to come and have no real plan for a technology with that level of knowledge.

"They are running towards a finish line without an understanding of what lies on the other side," he warned.

Hogarth shared what a machine-learning researcher had recently told him: that 'from now onwards' we are on the verge of artificial general intelligence (AGI) coming to the fore.

AGI has been defined as an autonomous system that can learn to accomplish any intellectual task that human beings can perform, and even surpass human capabilities.

Hogarth, co-founder of Plural Platform, said that not everyone agrees AGI is imminent; rather, 'estimates range from a decade to half a century or more' for it to arrive.

However, he noted the tension between companies that are frantically trying to advance AI's capabilities and machine learning experts who fear the end point.

The AI investor also explained that he feared for his four-year-old son and what these massive advances in AI technology might mean for him.

He said: "I gradually shifted from shock to anger.

"It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight."

When considering whether the people in the AGI race were planning to 'slow down' to 'let the rest of the world have a say', Hogarth admitted that it's morphed into a 'them' versus 'us' situation.

Having been a prolific investor in AI startups, he also confessed to feeling 'part of this community'.

Hogarth's descriptions of the potential power of AGI were terrifying as he declared: "A three-letter acronym doesn't capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI."

Hogarth described it as 'a superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it'.

But even with this knowledge, and despite the fact that it's still on the horizon, he warned that we have no idea of the challenges we'll face, and that the 'nature of the technology means it is exceptionally difficult to predict exactly when we will get there'.

"God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race," the investor said.

Despite a career spent investing in and supporting the advancement of AI, Hogarth explained that what made him pause for thought was the fact that 'the contest between a few companies to create God-like AI has rapidly accelerated'.

He continued: "They do not yet know how to pursue their aim safely and have no oversight."

Hogarth still plans to invest in startups that pursue AI responsibly, but explained that the race shows no signs of slowing down.

"Unfortunately, I think the race will continue," he said.

"It will likely take a major misuse event - a catastrophe - to wake up the public and governments."


‘Godfather’ of AI is now having second thoughts – The B.C. Catholic

Until a few weeks ago British-born Canadian university professor Geoffrey Hinton was little known outside academic circles. His profile became somewhat more prominent in 2019 when he was a co-winner of the A. M. Turing Award, more commonly known as the Nobel Prize for computing.

However, it is events of the past month or so that have made Hinton a bit of a household name, after he stepped down from an influential role at Google.

Hinton's life work, particularly that in computing at the University of Toronto, has been deemed groundbreaking and revolutionary in the field of artificial intelligence (AI). Anyone reading this column will surely have encountered numerous pieces on AI in recent months, be it on TV, through radio, or in print, physical and digital. AI applications such as the large language model ChatGPT have completely altered the digital landscape in ways unimaginable even a year ago.

While at the U of T, Hinton and graduate students made major advances in deep neural networks, speech recognition, the classification of objects, and deep learning. Some of this work morphed into a technology startup which captured the attention of Google, leading to the acquisition of the business for around $44 million a decade ago.

Eventually, Hinton became a Google vice-president, in charge of running the California company's Toronto AI lab. Leaving that position recently, at the age of 75, led to speculation, particularly in a New York Times interview, that he did so in order to criticize or attack his former employer.

Not so, said Hinton in a tweet. Besides his age being a factor, he suggested he wanted to be free to speak about the dangers of AI, irrespective of Google's involvement in the burgeoning field. Indeed, Hinton noted in his tweet that in his view Google had acted "very responsibly."

Underscoring his view of Google's public AI work may be the company's slow response to the adoption of Microsoft-backed ChatGPT in its various incarnations. Google's initial public AI product, Bard, appeared months after ChatGPT began its meteoric adoption in early December. It did not gain much traction at the outset.

In recent weeks we've seen news stories of large employers such as IBM serving notice that about 7,000 positions would be replaced by AI bots such as specialized versions of ChatGPT. We've also seen stories about individuals turning over significant aspects of their day-to-day life to such bots. One person gained particular attention for giving all his financial, email, and other records to a specialized AI bot with a view to having it find $10,000 in savings and refunds through automated actions.

Perhaps it is these sorts of things that are giving Hinton pause as he looks back at his life's work. In the NYT interview, he uses expressions such as "It is hard to see how you can prevent the bad actors from using it for bad things" and "Most people will not be able to know what is true anymore", the latter in reaction to AI-created photos, videos, and audio depicting objects or events that didn't occur.

"Right now, they are not more intelligent than us, as far as I can tell. But they soon may be," said Hinton, speaking to the BBC about AI machines. He went on to add, "I've come to the conclusion that the kind of intelligence we are developing (via AI) is very different from the intelligence we have."

Hinton went on to note how biological systems (i.e. people) are different from digital systems. The latter, he notes, have many copies of the same set of weights and the same model of the world, and while these copies can learn separately, they can share new knowledge instantly.

In a somewhat enigmatic tweet on March 14, Hinton wrote: "Caterpillars extract nutrients which are then converted into butterflies. People have extracted billions of nuggets of understanding and GPT-4 is humanity's butterfly."

Hinton spent the first week of May correcting various lines from interviews he gave to prominent news outlets. He took particular issue with a CBC online headline: "Canada's AI pioneer Geoffrey Hinton says AI could wipe out humans. In the meantime, there's money to be made." In a tweet he said: "The second sentence was said by a journalist, not me, but you wouldn't know that."

Whether the race to a God-like form of artificial intelligence fully materializes, or not, AI is already being placed alongside climate change and nuclear war as a trio of existential threats to human life. Climate change is being broadly tackled by most nations, and nuclear weapons use has been effectively stifled by the notion of mutually-assured destruction. Perhaps artificial general intelligence needs a similar global focus for regulation and management.

Follow me on Facebook (facebook.com/PeterVogelCA), or on Twitter (@PeterVogel)



Artificial intelligence poses real and present danger, headteachers warn – Yahoo Sport Australia

AI is a rapidly growing area of innovation (PA)

Artificial intelligence poses the greatest danger to education and the Government is responding too slowly to the threat, head teachers have claimed.

AI could bring the biggest benefit since the printing press, but the risks are more severe than any threat that has ever faced schools, according to Epsom College's principal Sir Anthony Seldon.

Leaders from the country's top schools have formed a coalition, led by Sir Anthony, to warn of the very real and present hazards and dangers being presented by the technology.

To tackle this, the group has announced the launch of a new body to advise and protect schools from the risks of AI.

They wish for collaboration between schools to ensure that AI serves the best interest of the pupils and teachers rather than those of large education technology companies, the Times reported.

The head teachers of dozens of private and state schools support the initiative, including Helen Pike, the master of Magdalen College School in Oxford, and Alex Russell, the chief executive of Bourne Education Trust, which runs nearly 30 state schools.

The potential to aid cheating is a minor concern for head teachers, whose fears extend to the impact on children's mental and physical health and the future of the teaching profession.

Professor Stuart Russell, one of the godfathers of AI research, warned last week that ministers were not doing enough to guard against the possibility of a super intelligent machine wiping out humanity.

Rishi Sunak admitted at the G7 summit this week that guard-rails would have to be put around it.


Sam Altman is plowing ahead with nuclear fusion and his eye-scanning crypto venture and, oh yeah, OpenAI – Fortune

OpenAI CEO Sam Altman helped bring ChatGPT to the world, which sparked the current A.I. race involving Microsoft, Google, and others.

But he's busy with other ventures that could be no less disruptive, and they are linked in some ways. This week, Microsoft announced a purchasing agreement with Helion Energy, a nuclear fusion startup primarily backed by Altman. And Worldcoin, a crypto startup involving eye scans cofounded by Altman in 2019, is close to securing hefty new investments, according to Financial Times reporting on Sunday.

Before becoming OpenAI's leader, Altman served as president of the startup accelerator Y Combinator, so it's not entirely surprising that he's involved in more than one venture. But the sheer ambition of the projects, both on their own and collectively, merits attention.

Microsoft announced a deal on Wednesday in which Helion will supply it with electricity from nuclear fusion by 2028. That's bold considering nobody is yet producing electricity from fusion, and many experts believe it's decades away.

During a Stripe conference interview last week, Altman said the audience should be excited about the startup's developments and drew a connection between Helion and artificial intelligence.

"If you really want to make the biggest, most capable super intelligent system you can, you need high amounts of energy," he explained. "And if you have an A.I. that can help you move faster and do better material science, you can probably get to fusion a little bit faster too."

He acknowledged the challenging economics of nuclear fusion, but added, "I think we will probably figure it out."

He added, "And probably we will get to a world where in addition to the cost of intelligence falling dramatically, the cost of energy falls dramatically, too. And if both of those things happen at the same time (I would argue that they are currently the two most important inputs in the whole economy) we get to a super different place."

Worldcoin, still in beta but aiming to launch in the first half of this year, is equally ambitious, as Fortune reported in March. If A.I. takes away our jobs and governments decide that a universal basic income is needed, Worldcoin wants to be the distribution mechanism for those payments. If all goes to plan, it'll be bigger than Bitcoin and approved by regulators across the globe.

That might be a long way off if it ever occurs, but in the meantime the startup might have found a quicker path to monetization with World ID, a kind of badge you receive after being verified by Worldcoin, and a handy way to prove that you're a human rather than an A.I. bot when logging into online platforms. The idea is your World ID would join or replace your user names and passwords.

The only way to really prove a human is a human, the Worldcoin team decided, was via an iris scan. That led to a small orb-shaped device you look into that converts a biometric scanning code into proof of personhood.

When you're scanned, verified, and onboarded to Worldcoin, you're given 25 proprietary crypto tokens, also called Worldcoins. Well over a million people have already participated, though of course the company aims to have tens and then hundreds of millions joining after beta. Naturally such plans have raised a range of privacy concerns, but according to the FT, the firm is now in advanced talks to raise about $100 million.


Cannes Diary: Will Artificial Intelligence Democratize Creativity or Lead to Certain Doom? – Yahoo News

On May 17, as bodies lined up in the rain outside the Cannes Film Festival Palais for the chance to watch a short film directed by Pedro Almodóvar, an auteur known most of all for his humanism, a different kind of gathering was underway below the theater. Inside the Marché, a panel of technologists convened to tell an audience of film professionals how they might deploy artificial intelligence for creating scripts, characters, videos, voices and graphics.

The ideas discussed at the Cannes Next panel "AI Apocalypse or Revolution? Rethinking Creativity, Content and Cinema in the Age of Artificial Intelligence" make the scene of the Almodóvar crowd seem almost poignant, like seeing a species blissfully ignorant of their own coming extinction, dinosaurs contentedly chewing on their dinners 10 minutes before the asteroid hits.


"The only people who should be afraid are the ones who aren't going to use these tools," said panelist Ander Saar, a futurist and strategy consultant for Red Bull Media House, the media arm of the parent company of Red Bull energy drinks. "Fifty to 70 percent of a film budget goes to labor. If we can make that more efficient, we can do much bigger films at bigger budgets, or do more films."

The panel also included Hovhannes Avoyan, the CEO of Picsart, an image-editing developer powered by AI, and Anna Bulakh, head of ethics and partnerships at Respeecher, an AI startup that makes technology that allows one person to speak using the voice of another person. The audience of about 150 people was full of AI early adopters: through a show of hands, about 75 percent said they had an account for ChatGPT, the AI language processing tool.


The panelists had more technologies for them to try. Bulakh's company re-created James Earl Jones' Darth Vader voice as it sounded in 1977 for the 2022 Disney+ series Obi-Wan Kenobi, and Vince Lombardi's voice for a 2021 NFL ad that aired during the Super Bowl. Bulakh drew a distinction between Respeecher's work and AI that is created to manipulate, otherwise known as deepfakes. "We don't allow you to re-create someone's voice without permission, and we as a company are pushing for this as a best practice worldwide," Bulakh said. She also spoke about how productions already use Respeecher's tools as a form of insurance when actors can't use their voices, and about how actors could potentially grow their revenue streams using AI.

Avoyan said he created his company for his daughter, an artist, and his intention is, he said, "democratizing creativity." "It's a tool," he said. "Don't be afraid. It will help you in your job."

The optimistic conversation unfolding beside the French Riviera felt light years away from the WGA strike taking place in Hollywood, in which writers and studios are at odds over the use of AI, with studios considering such ideas as having human writers punch up drafts of AI-generated scripts, or using AI to create new scripts based on a writer's previous work. During contract negotiations, the AMPTP refused union requests for protection from AI use, offering instead annual meetings to discuss advancements in technology. The Marché talk also felt far from the warnings of a growing chorus of experts like Eric Horvitz, chief scientific officer at Microsoft, and AI pioneer Geoffrey Hinton, who resigned from his job at Google this month in order to speak freely about AI's risks, which he says include the potential for deliberate misuse, mass unemployment and human extinction.

"Are these kinds of worries just moral panic?" mused the moderator and head of Cannes Next, Sten Kristian-Saluveer. That seemed to be the panelists' view. Saar dismissed the concerns, comparing the changes AI will bring to adaptations brought by the automobile or the calculator. "When calculators came, it didn't mean we don't know how to do math," he said.

One of the panel buzz phrases was "hyper-personalized IP," meaning that we'll all create our own individual entertainment using AI tools. Saar shared a video from a company he is advising, in which a child's drawings came to life and surrounded her on video screens. "The characters in the future will be created by the kids themselves," he says. Avoyan said the line between creator and audience will narrow in such a way that we will all just be making our own movies. "You don't even need a distribution house," he said.

A German producer and self-described AI enthusiast in the audience said, "If the cost of the means of production goes to zero, the amount of produced material is going up exponentially. We all still only have 24 hours." Who or what, the producer wanted to know, would be the gatekeepers for content in this new era? Well, the algorithm, of course. "A lot of creators are blaming the algorithm for not getting views, saying the algorithm is burying my video," Saar said. "The reality is most of the content is just not good and doesn't deserve an audience."

What wasn't discussed at the panel was what might be lost in a future that looks like this. Will a generation raised on watching videos created from their own drawings, or from an algorithm's determination of what kinds of images they will like, take a chance on discovering something new? Will they line up in the rain with people from all over the world to watch a movie made by someone else?


We need to prepare for the public safety hazards posed by artificial intelligence – The Conversation

For the most part, the focus of contemporary emergency management has been on natural, technological and human-made hazards such as flooding, earthquakes, tornadoes, industrial accidents, extreme weather events and cyber attacks.

However, with the increase in the availability and capabilities of artificial intelligence, we may soon see emerging public safety hazards related to these technologies that we will need to mitigate and prepare for.

Over the past 20 years, my colleagues and I, along with many other researchers, have been leveraging AI to develop models and applications that can identify, assess, predict, monitor and detect hazards to inform emergency response operations and decision-making.

We are now reaching a turning point where AI is becoming a potential source of risk at a scale that should be incorporated into the risk and emergency management phases: mitigation or prevention, preparedness, response and recovery.

AI hazards can be classified into two types: intentional and unintentional. Unintentional hazards are those caused by human errors or technological failures.

As the use of AI increases, there will be more adverse events caused by human error in AI models or technological failures in AI-based technologies. These events can occur in all kinds of industries, including transportation (like drones, trains or self-driving cars), electricity, oil and gas, finance and banking, agriculture, health and mining.

Intentional AI hazards are potential threats that are caused by using AI to harm people and property. AI can also be used to gain unlawful benefits by compromising security and safety systems.

In my view, this simple intentional and unintentional classification may not be sufficient in the case of AI. Here, we need to add a new class of emerging threats: the possibility of AI overtaking human control and decision-making. This may be triggered intentionally or unintentionally.

Many AI experts have already warned against such potential threats. A recent open letter by researchers, scientists and others involved in the development of AI called for a moratorium on its further development.

Public safety and emergency management experts use risk matrices to assess and compare risks. Using this method, hazards are qualitatively or quantitatively assessed based on their frequency and consequence, and their impacts are classified as low, medium or high.

Hazards that have low frequency and low consequence or impact are considered low risk and no additional actions are required to manage them. Hazards that have medium consequence and medium frequency are considered medium risk. These risks need to be closely monitored.

Hazards with high frequency or high consequence, or high in both consequence and frequency, are classified as high risks. These risks need to be reduced by taking additional risk reduction and mitigation measures. Failure to take immediate and proper action may result in severe human and property losses.
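
As a rough illustration of the matrix described above, the sketch below maps qualitative frequency and consequence ratings to a risk level. The thresholds follow the rules stated in the preceding paragraphs; combinations the text leaves unspecified (for example, low frequency with medium consequence) are treated as low here, which is an assumption rather than a standard.

```python
def classify_risk(frequency: str, consequence: str) -> str:
    """Map qualitative ratings ('low' | 'medium' | 'high') to a risk level."""
    levels = {"low": 1, "medium": 2, "high": 3}
    f, c = levels[frequency], levels[consequence]
    if f == 3 or c == 3:
        return "high"    # high frequency OR high consequence: mitigate further
    if f == 2 and c == 2:
        return "medium"  # medium frequency AND medium consequence: monitor closely
    return "low"         # otherwise: no additional action required

# Example: a hypothetical AI hazard judged rare but severe is still high risk
print(classify_risk("low", "high"))  # -> high
```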

Up until now, AI hazards and risks have not been added into the risk assessment matrices much beyond organizational use of AI applications. The time has come when we should quickly start bringing the potential AI risks into local, national and global risk and emergency management.

AI technologies are becoming more widely used by institutions, organizations and companies in different sectors, and hazards associated with the AI are starting to emerge.

In 2018, the accounting firm KPMG developed an AI Risk and Controls Matrix. It highlights the risks of using AI by businesses and urges them to recognize these new emerging risks. The report warned that AI technology is advancing very quickly and that risk control measures must be in place before they overwhelm the systems.

Governments have also started developing some risk assessment guidelines for the use of AI-based technologies and solutions. However, these guidelines are limited to risks such as algorithmic bias and violation of individual rights.

At the government level, the Canadian government issued the Directive on Automated Decision-Making to ensure that federal institutions minimize the risks associated with the AI systems and create appropriate governance mechanisms.

The main objective of the directive is to ensure that when AI systems are deployed, risks to clients, federal institutions and Canadian society are reduced. According to this directive, risk assessments must be conducted by each department to make sure that appropriate safeguards are in place in accordance with the Policy on Government Security.

In 2021, the U.S. Congress tasked the National Institute of Standards and Technology with developing an AI risk management framework for the Department of Defense. The proposed voluntary AI risk assessment framework recommends banning the use of AI systems that present unacceptable risks.

Much of the national-level policy focus on AI has been from national security and global competition perspectives: the national security and economic risks of falling behind in AI technology.

The U.S. National Security Commission on Artificial Intelligence highlighted national security risks associated with AI. These were not the public threats of the technology itself, but the risks of losing out to other countries, including China, in the global competition for AI development.

In its 2017 Global Risk Report, the World Economic Forum highlighted that AI is only one of the emerging technologies that can exacerbate global risk. While assessing the risks posed by AI, the report concluded that, at that time, super-intelligent AI systems remained a theoretical threat.

However, the latest Global Risk Report 2023 does not even mention AI and AI-associated risks, which means that the leaders of the global companies that provide inputs to the report did not view AI as an immediate risk.

AI development is progressing much faster than government and corporate policies in understanding, foreseeing and managing the risks. The current global conditions, combined with market competition for AI technologies, make it difficult to think of an opportunity for governments to pause and develop risk governance mechanisms.

While we should collectively and proactively try for such governance mechanisms, we all need to brace for AI's potentially catastrophic impacts on our systems and societies.


NFL fans outraged after ChatGPT names best football teams since 2000 including a surprise at No 1… – The US Sun

ARTIFICIAL intelligence has infuriated fans across the nation with its top ten best teams since 2000 ranking.

The controversial list has unsurprisingly angered fans on social media, being labeled "the dumbest take on football I've ever seen."

Leading the way in the list created by ChatGPT for NFL on FOX are the 2007 New England Patriots.

A powerhouse team featuring the likes of Tom Brady, Randy Moss, Asante Samuel, Wes Welker, and Vince Wilfork among others, Bill Belichick's team went undefeated until the bitter end.

Eli Manning's New York Giants ultimately got the better of them in Super Bowl XLII, preventing what would have been only the second perfect season in league history.

The Patriots are followed by the 2013 Seattle Seahawks, who were led by then-second-year starting quarterback Russell Wilson.

Pete Carroll's 13-3 Seahawks team went on to hoist the Lombardi Trophy after the joint-third biggest Super Bowl blowout to date (43-8 over Peyton Manning's Denver Broncos).

Sean Payton's 2009 New Orleans Saints team rounded out the top three.

Led by Drew Brees in his prime, the Saints also beat a Peyton Manning-led team in the Super Bowl, defeating the Indianapolis Colts 31-17.

New England returned in fourth thanks to their 14-2 2016 team, which brought Brady his fifth ring after one of the most famous comebacks in league history, against the Atlanta Falcons at Super Bowl LI.

Ray Lewis and Rod Woodson's legendary 2000 Baltimore Ravens complete the top five, having guided the franchise to a Super Bowl win in just its fifth season since moving from Cleveland.

The second half of the ranking starts with the second non-Super Bowl-winning team, the 2004 Philadelphia Eagles.

They are followed by another team to fall short at the final hurdle despite having a prime Cam Newton leading the way, the 2015 Carolina Panthers.

Loaded with talent, the 2002 Tampa Bay Buccaneers made the list at eight thanks to their 12-4 record and a Super Bowl XXXVII ring.

The 11-5 Pittsburgh Steelers of 2005, featuring the likes of Ben Roethlisberger and Hines Ward follow, with the Patrick Mahomes-led 2019 Kansas City Chiefs closing out the top ten.

In response to the list, one unimpressed fan tweeted: "Woof. Terrible list. The 05 Steelers won in the most unimpressive season of football in recent memory.

"Them and the Seahawks played a dumpster fire Super Bowl. They won even though Roethlisbergers SB stats were:

"9-21, 123 yards, 2 interceptions."

Another said: "Nope. Where are the Peyton Manning led Broncos or Colts? Green Bay has been a perennial playoff/NFC Championship contender for near 20 years.

"Also no Ny Giants that was led by Eli Manning to the Super Bowl 3 different times and winning twice against Brady's Patriots."

As one added: "Cant accept the top team lost the Super Bowl."

While another simply said: "Absolutely not"


These are the top 10 athletes of all time from the state of Iowa, according to ChatGPT – KCCI Des Moines


We asked ChatGPT to name the top 10 athletes of all time from the state of Iowa.

The list the artificial intelligence app came up with had some obvious names on it (Dan Gable) but was missing some considered by most to be among the state's best athletes (Bob Feller, Shawn Johnson).

The results are based on accomplishments and recognition.

Here's what the model came up with. Do you agree with the list?

Top 10 athletes of all time from the state of Iowa

1. Nile Kinnick - College football player at the University of Iowa and Heisman Trophy winner.

2. Kurt Warner - NFL quarterback and Super Bowl MVP.

3. Dan Gable - Olympic wrestler, coach and motivational speaker.

4. Lolo Jones - Olympic hurdler and bobsledder.

5. Tom Brands - Olympic wrestler and coach.

6. Jason Momoa - Actor and former college football player at the University of Hawaii.

7. Zach Johnson - Professional golfer and Masters Tournament winner.

8. Fred Hoiberg - NBA player and collegiate basketball coach.

9. B.J. Armstrong - NBA player and three-time NBA champion.

10. Chuck Long - College football quarterback at the University of Iowa and college football analyst.

The video above is from a previous report about a Nile Kinnick documentary.


Meta Made Its AI Tech Open-Source. Rivals Say It's a Risky Decision. – The New York Times

In February, Meta made an unusual move in the rapidly evolving world of artificial intelligence: It decided to give away its A.I. crown jewels.

The Silicon Valley giant, which owns Facebook, Instagram and WhatsApp, had created an A.I. technology, called LLaMA, that can power online chatbots. But instead of keeping the technology to itself, Meta released the system's underlying computer code into the wild. Academics, government researchers and others who gave their email address to Meta could download the code once the company had vetted the individual.

Essentially, Meta was giving its A.I. technology away as open-source software: computer code that can be freely copied, modified and reused, providing outsiders with everything they needed to quickly build chatbots of their own.

"The platform that will win will be the open one," Yann LeCun, Meta's chief A.I. scientist, said in an interview.

As a race to lead A.I. heats up across Silicon Valley, Meta is standing out from its rivals by taking a different approach to the technology. Driven by its founder and chief executive, Mark Zuckerberg, Meta believes that the smartest thing to do is share its underlying A.I. engines as a way to spread its influence and ultimately move faster toward the future.

Its actions contrast with those of Google and OpenAI, the two companies leading the new A.I. arms race. Worried that A.I. tools like chatbots will be used to spread disinformation, hate speech and other toxic content, those companies are becoming increasingly secretive about the methods and software that underpin their A.I. products.

Google, OpenAI and others have been critical of Meta, saying an unfettered open-source approach is dangerous. A.I.'s rapid rise in recent months has raised alarm bells about the technology's risks, including how it could upend the job market if it is not properly deployed. And within days of LLaMA's release, the system leaked onto 4chan, the online message board known for spreading false and misleading information.

"We want to think more carefully about giving away details or open sourcing code of A.I. technology," said Zoubin Ghahramani, a Google vice president of research who helps oversee A.I. work. "Where can that lead to misuse?"

Some within Google have also wondered if open-sourcing A.I. technology may pose a competitive threat. In a memo this month, which was leaked on the online publication Semianalysis.com, a Google engineer warned colleagues that the rise of open-source software like LLaMA could cause Google and OpenAI to lose their lead in A.I.

But Meta said it saw no reason to keep its code to itself. The growing secrecy at Google and OpenAI is a "huge mistake," Dr. LeCun said, and a "really bad take on what is happening." He argues that consumers and governments will refuse to embrace A.I. unless it is outside the control of companies like Google and Meta.

"Do you want every A.I. system to be under the control of a couple of powerful American companies?" he asked.

OpenAI declined to comment.

Meta's open-source approach to A.I. is not novel. The history of technology is littered with battles between open source and proprietary, or closed, systems. Some hoard the most important tools that are used to build tomorrow's computing platforms, while others give those tools away. Most recently, Google open-sourced the Android mobile operating system to take on Apple's dominance in smartphones.

Many companies have openly shared their A.I. technologies in the past, at the insistence of researchers. But their tactics are changing because of the race around A.I. That shift began last year when OpenAI released ChatGPT. The chatbot's wild success wowed consumers and kicked up the competition in the A.I. field, with Google moving quickly to incorporate more A.I. into its products and Microsoft investing $13 billion in OpenAI.

While Google, Microsoft and OpenAI have since received most of the attention in A.I., Meta has also invested in the technology for nearly a decade. The company has spent billions of dollars building the software and the hardware needed to realize chatbots and other generative A.I., which produce text, images and other media on their own.

In recent months, Meta has worked furiously behind the scenes to weave its years of A.I. research and development into new products. Mr. Zuckerberg is focused on making the company an A.I. leader, holding weekly meetings on the topic with his executive team and product leaders.

On Thursday, in a sign of its commitment to A.I., Meta said it had designed a new computer chip and improved a new supercomputer specifically for building A.I. technologies. It is also designing a new computer data center with an eye toward the creation of A.I.

"We've been building advanced infrastructure for A.I. for years now, and this work reflects long-term efforts that will enable even more advances and better use of this technology across everything we do," Mr. Zuckerberg said.

Meta's biggest A.I. move in recent months was releasing LLaMA, which is what is known as a large language model, or L.L.M. (LLaMA stands for Large Language Model Meta AI.) L.L.M.s are systems that learn skills by analyzing vast amounts of text, including books, Wikipedia articles and chat logs. ChatGPT and Google's Bard chatbot are also built atop such systems.

L.L.M.s pinpoint patterns in the text they analyze and learn to generate text of their own, including term papers, blog posts, poetry and computer code. They can even carry on complex conversations.

In February, Meta openly released LLaMA, allowing academics, government researchers and others who provided their email address to download the code and use it to build a chatbot of their own.

But the company went further than many other open-source A.I. projects. It allowed people to download a version of LLaMA after it had been trained on enormous amounts of digital text culled from the internet. Researchers call this "releasing the weights," referring to the particular mathematical values learned by the system as it analyzes data.

This was significant because analyzing all that data typically requires hundreds of specialized computer chips and tens of millions of dollars, resources most companies do not have. Those who have the weights can deploy the software quickly, easily and cheaply, spending a fraction of what it would otherwise cost to create such powerful software.
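
To make the idea of having the weights concrete, here is a minimal sketch of running a released model locally, assuming the weights have been obtained legitimately and converted into the Hugging Face transformers format; the directory name and generation settings are illustrative assumptions, not part of Meta's release.

```python
# Minimal local-inference sketch using the Hugging Face `transformers` library.
# "./llama-7b-hf" is a hypothetical local directory holding converted weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "./llama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir)

# Because the weights are already trained, generation works immediately,
# with no need for the costly large-scale training described above.
inputs = tokenizer("Open-source language models let researchers", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```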

As a result, many in the tech industry believed Meta had set a dangerous precedent. And within days, someone released the LLaMA weights onto 4chan.

At Stanford University, researchers used Meta's new technology to build their own A.I. system, which was made available on the internet. A Stanford researcher named Moussa Doumbouya soon used it to generate problematic text, according to screenshots seen by The New York Times. In one instance, the system provided instructions for disposing of a dead body without being caught. It also generated racist material, including comments that supported the views of Adolf Hitler.

In a private chat among the researchers, which was seen by The Times, Mr. Doumbouya said distributing the technology to the public would be "like a grenade available to everyone in a grocery store." He did not respond to a request for comment.

Stanford promptly removed the A.I. system from the internet. "The project was designed to provide researchers with technology that captured the behaviors of cutting-edge A.I. models," said Tatsunori Hashimoto, the Stanford professor who led the project. "We took the demo down as we became increasingly concerned about misuse potential beyond a research setting."

Dr. LeCun argues that this kind of technology is not as dangerous as it might seem. He said small numbers of individuals could already generate and spread disinformation and hate speech. He added that toxic material could be tightly restricted by social networks such as Facebook.

"You can't prevent people from creating nonsense or dangerous information or whatever," he said. "But you can stop it from being disseminated."

For Meta, more people using open-source software can also level the playing field as it competes with OpenAI, Microsoft and Google. If every software developer in the world builds programs using Metas tools, it could help entrench the company for the next wave of innovation, staving off potential irrelevance.

Dr. LeCun also pointed to recent history to explain why Meta was committed to open-sourcing A.I. technology. He said the evolution of the consumer internet was the result of open, communal standards that helped build the fastest, most widespread knowledge-sharing network the world had ever seen.

"Progress is faster when it is open," he said. "You have a more vibrant ecosystem where everyone can contribute."
