
DoD Co-funds Institute to Research the Neural, Biological, and … – Department of Defense

The Department of Defense announced the award of $10 million for the establishment of an institute dedicated to advancing unified research in artificial and natural intelligence. Co-funded with the National Science Foundation (NSF) as part of its National Artificial Intelligence (AI) Research Institutes program, the new institute will improve understanding of how the brain functions and pursue designs of more capable and trustworthy AI.

The purpose of the NSF program is to support institutes in the performance of long-term, high-reward research on AI-related themes such as the next generation of cybersecurity, climate-smart agriculture and forestry, trustworthy AI, and AI-augmented learning. The program includes a DoD-sponsored focus area on the neural and cognitive foundations of AI, under which DoD and NSF are making this award.

"As our understanding of artificial intelligence grows, it has transformed the fields of biology and neuroscience, even as our understanding of cognition in nature informs advances in AI research. This institute seeks to unify these fields," said Dr. Bindu Nair, Director of the Basic Research Office in the Office of the Under Secretary of Defense for Research and Engineering. "Continued advancement in these areas holds the potential to deliver significant economic impact and further improvements in quality of life."

From a merit-based review of 15 proposals, a panel of experts selected one multi-university team, led by Columbia University, for the award. With joint funding, this award will total approximately $20 million over five years to explore how advances in understanding neural, biological, and cognitive processes can support a rich set of models and mechanisms for guiding the transformational development of AI.

About USD(R&E)

The Under Secretary of Defense for Research and Engineering (USD(R&E)) is the Chief Technology Officer of the Department of Defense. The USD(R&E) champions research, science, technology, engineering, and innovation to maintain the U.S. military's technological advantage. Learn more at http://www.cto.mil, follow us on Twitter @DoDCTO, or visit us on LinkedIn at https://www.linkedin.com/company/ousdre.

Read the rest here:
DoD Co-funds Institute to Research the Neural, Biological, and ... - Department of Defense


ASCRS 2023: Artificial intelligence application to ophthalmology – Ophthalmology Times

Alvin Liu, MD, sat down with Sheryl Stevenson, Group Editorial Director, Ophthalmology Times, to discuss his presentation on deep learning and 3D OCT at the ASCRS annual meeting in San Diego.

Editor's note: This transcript has been edited for clarity.

We're joined by Dr. Alvin Liu, who's going to be presenting at this year's ASCRS. Welcome to you. Tell us a little bit more about your presentation regarding deep learning and 3D OCT.

Sheryl, thank you so much for having me speak today. I'm happy to share results. So let me introduce myself a little bit more. My name is Alvin Liu. I'm a retina specialist at the Wilmer Eye Institute at Johns Hopkins University.

My research focuses on the application of artificial intelligence to ophthalmology, and I'm also the director of the Wilmer Precision Ophthalmology Center of Excellence. The work that I will be presenting at ASCRS this year is directly related to our center of excellence.

The overall premise is that macular degeneration is a leading cause of central vision loss in the elderly in the US and around the world. Most patients with AMD lose vision because of the wet form of the disease, and for wet AMD we know that earlier, timely treatment with better presenting visual acuity predicts better final visual acuity. So it is imperative for us to figure out which patients are at high risk of imminent conversion to wet AMD.

Currently, there are ways for us to provide an average estimate of conversion or progression to advanced AMD using the AREDS criteria. However, those criteria can only provide an average risk estimate over five years. The model we have developed can be used as a tool that provides information on a more meaningful timeframe: six months. We started by asking ourselves: can we use deep learning, the cutting-edge artificial intelligence technique for medical image analysis, to analyze OCT images and predict imminent conversion from dry to wet AMD within six months?

To do that, we collected a dataset of over 2,500 patients with AMD and over 30,000 OCT images. We trained a model that is able to produce a robust prediction of when an eye is at high risk of converting to wet AMD within six months, using an OCT image alone. In addition, we ran experiments to see what happens if we also feed the model additional information in the form of easily obtainable clinical variables, such as the patient's age, sex, visual acuity, and fellow-eye status. We were able to demonstrate that, in predicting imminent conversion to wet AMD in the first eye of patients, meaning patients who had never converted to wet AMD in either eye, this additional tabular clinical information was also helpful.
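To make that image-plus-tabular setup concrete, here is a minimal PyTorch sketch of how such a model could be wired together. It is an illustration only: the backbone, layer sizes, and variable names are assumptions made for this article, not the architecture the Hopkins team actually trained.

```python
# Hypothetical sketch: fuse an OCT image encoder with tabular clinical variables
# (age, sex, visual acuity, fellow-eye status) to score 6-month conversion risk.
# This is NOT the published Wilmer/Johns Hopkins model; sizes are placeholders.
import torch
import torch.nn as nn

class ConversionRiskModel(nn.Module):
    def __init__(self, num_clinical_features: int = 4):
        super().__init__()
        # Small convolutional encoder for a single-channel OCT B-scan.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Classifier head that fuses image features with the clinical variables.
        self.head = nn.Sequential(
            nn.Linear(32 + num_clinical_features, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit for "converts to wet AMD within 6 months"
        )

    def forward(self, oct_image: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        features = self.image_encoder(oct_image)
        return self.head(torch.cat([features, clinical], dim=1))

# Toy usage: one fake 256x256 OCT scan plus four clinical variables.
model = ConversionRiskModel()
logit = model(torch.randn(1, 1, 256, 256), torch.randn(1, 4))
risk = torch.sigmoid(logit)  # probability-like score a clinic could threshold
```

The design point the interview describes, adding tabular data on top of the image, is simply the concatenation of the two feature sets before the final layers; an image-only variant would drop the `clinical` input.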

Read the original post:
ASCRS 2023: Artificial intelligence application to ophthalmology - Ophthalmology Times


Artificial Intelligence and Jobs: Who's at Risk – Barron's

Since the release of ChatGPT, companies have scrambled to understand how generative artificial intelligence will affect jobs. This past week, IBM CEO Arvind Krishna said the company will pause hiring for roles that could be replaced by AI, affecting as much as 30% of back-office jobs over five years. And Chegg, which provides homework help and online tutoring, saw its stock lose half of its value after warning of slower growth as students turned to ChatGPT.

A recent study by a team of professors from Princeton University, the University of Pennsylvania, and New York University analyzed how generative AI relates to 52 human abilities. The researchers then calculated AI exposure for occupations. (Exposure doesn't necessarily mean job loss.) Among high-exposure jobs, a few are obvious: telemarketers, HR specialists, loan officers, and law clerks. More surprising: Eight of the top 10 are humanities professors.

In a survey from customer-service software firm Tidio, 64% of respondents thought chatbots, robots, or AI can replace teachers, though many believe that empathy and listening skills may be tough to replicate. A survey from the Walton Family Foundation found that within two months of ChatGPT's introduction, 51% of teachers tapped it for lesson planning and creative ideas. Some 40% said they used it at least once a week, compared with 22% of students.

AI isn't just knocking on the door; it's already inside. Language-learning app Duolingo has been using AI since 2020. Even Chegg unveiled an AI learning service called CheggMate using OpenAI's GPT-4. Still, Morgan Stanley analyst Josh Baer wrote that it's highly unlikely that CheggMate can insulate the company from AI.

Write to Evie Liu at evie.liu@barrons.com


Devon Energy, KKR, McKesson, PayPal Holdings, and Tyson Foods release earnings.

Airbnb, Air Products & Chemicals, Apollo Global Management, Duke Energy, Electronic Arts, Occidental Petroleum, and TransDigm Group report quarterly results.

The National Federation of Independent Business releases its Small Business Optimism Index for April. Consensus estimate is for a 90 reading, roughly even with the March figure. The index has had 15 consecutive readings below the 49-year average of 98 as inflation and a tight labor market remain top of mind for small-business owners.

Walt Disney


Brookfield Asset Management, Roblox, Toyota Motor, and Trade Desk release earnings.

The Bureau of Labor Statistics releases the consumer price index for April. Economists forecast a 5% year-over-year increase, matching the March data. The core CPI, which excludes volatile food and energy prices, is expected to rise 5.4%, two-tenths of a percentage point less than previously. Both indexes are well below their peaks from last year but also much higher than the Federal Reserve's 2% target.

Honda Motor, JD.com, PerkinElmer, and Tapestry hold conference calls to discuss quarterly results.


The Bank of England announces its monetary-policy decision. The central bank is widely expected to raise its bank rate by a quarter of a percentage point, to 4.5%. The United Kingdom's CPI rose 10.1% in March from the year prior, making it the only Western European country with a double-digit rate of inflation.


The Department of Labor reports initial jobless claims for the week ending on May 6. Claims averaged 239,250 in April, returning to historical averages after a prolonged period of being below trend, signaling a loosening of a very tight labor market.

The BLS releases the producer price index for April. The consensus call is for the PPI to increase 2.4% and the core PPI to rise 3.3%. This compares with gains of 2.7% and 3.4%, respectively, in March. The PPI and core PPI are at their lowest levels in about two years.

The University of Michigan releases its Consumer Sentiment Index for May. Economists forecast a dour 62.6 reading, about one point lower than in April. Consumers' year-ahead inflation expectations surprisingly jumped by a percentage point in April to 4.6%.

Follow this link:
Artificial Intelligence and Jobs: Who's at Risk - Barron's


Artificial intelligence can learn but it's not ready to teach, experts say – Business in Vancouver

Human Resources & Education | Computer scientists, educators don't see artificial intelligence fully replacing people

By Albert Van Santvoort | May 5, 2023, 4:30pm

Artificial intelligence can help students learn, but experts say it can't replace teaching | Andrea De Santis/Unsplash

In late January, ChatGPT creator OpenAI addressed criticism from schools that the text generated from its AI-powered chatbot encouraged student plagiarism and cheating.

The company responded by releasing a software tool to help teachers detect whether the author of an assignment was a student, or artificial intelligence.

This is not the first time an online or digital tool has been accused of potentially destroying education. In 1998, The New York Times published an article titled "The Trouble with Cheating in the Digital Age," one of many stories published by various media outlets that warned about the dangers posed by the internet (library closures and increased plagiarism among them).

Today, computer science educators say such concerns over AI are likely overblown.

Steve DiPaola is a professor in Simon Fraser University's cognitive science program and leads the school's iVizLab, which focuses on AI-based computational models of human characteristics.

He expects AI will come to be used by students as a tool for gathering information, or to work through and consider an assignment. It won't be used to create essays or answers that are fully copied and pasted.

"What you get cheaply out of these systems is going to be really obvious, really templatized," said DiPaola. "And we're all going to notice it, and we're not going to care about it."

After all, ChatGPT isn't always accurate. Beyond referencing wrong information, it can "hallucinate" by providing a convincing but made-up answer.

For example, DiPaola asked a Vincent van Gogh chatbot how the artist's friend Paul Gauguin helped him heal spiritually. The program responded randomly with: "I have a friend in Jesus."

When asked how it would disrupt education, ChatGPT didn't focus on cheating.

Rather, the chatbot highlighted that AI could offer personalized learning by analyzing a student's learning habits and shaping a curriculum to meet their specific needs. The program also responded that ChatGPT could enhance student engagement by providing immediate feedback, personalized recommendations and interactive discussions.

Though the tool is useful, Vered Shwartz, a University of British Columbia assistant professor of computer science, said ChatGPT's inaccuracies could create problems.

ChatGPT also said it could introduce automatic assignment grading, but Shwartz is skeptical. The industry has had automatic grading for questions with non-written answers for decades, and Shwartz said she doubts a program would be able to correctly grade written responses that vary too much from a template answer.

Another potential consequence of AI adoption in education, according to ChatGPT, is job losses.

Shwartz, however, was unconvinced. "Jobs will definitely change," she said, but she added that it would be difficult for an AI to replace educators altogether. Even if the process of building a syllabus or a lesson plan could be automated, it would still have to be reviewed and likely taught by a person, she said.

avansantvoort@biv.com

See the rest here:
Artificial intelligence can learn but it's not ready to teach, experts say - Business in Vancouver


The coronation and artificial intelligence – Browser Media

I was planning on writing a blog post about how to use lookup tables in Google Tag Manager to track numerous form conversions.

I am growing increasingly fond of the GTM / GA4 power combo and have plenty of examples of how GA4 is not all bad to share with you. But.. that is a bit of a chunky post and I have run out of time this week to finish it. I blame the bank holiday (they always throw my week into chaos), so that particular post will have to wait until next week. Today, you get some random musings.

In amongst the chaos, I have found myself thinking about two things this week:

1] The coronation

2] The rise of AI

I should probably clarify point 1. I have not actually really given the coronation itself any thought at all. I have simply reflected on the rampant apathy that seems to be surrounding the day. When asked about it, 90% of the people I have spoken to just roll their eyes and shrug their shoulders. 9% have launched into a tirade about how out of touch the monarchy is and the inappropriateness of all the pomp and ceremony during a period of such economic hardship for the people that the monarchy should serve. The remaining 1% is my mum and she is bubbling with excitement, bunting at the ready.

I am not really sure where I stand on the relevance of the monarchy, but the general ambivalence must surely be a warning sign for the future of the royal family?

With regards to the rise of AI, I hate to pour fuel on the "AI is going to take away our jobs" fire but it is getting interesting with increased calls for some form of control. In contrast to the apathy around the coronation, passions rise when people talk about artificial intelligence.

Whilst I think that a dystopian vision of AI-fuelled cyborgs massacring humanity is a step too far, there is no doubt that AI is becoming increasingly capable and I do think that some job functions will shortly become extremely redundant.

Within the digital marketing world, there is constant debate about the role of AI content creation. Our very own Vic wrote about how good Google's Bard is (or isn't?) at creating content recently. Personally, I still believe that genuine human content will out-punch AI drivel for some time to come and that the search engines will use AI to identify AI content and ultimately prioritise genuine human content, but some of the drivel is actually quite good.

I should confess to a guilty pleasure: using OpenAI's ChatGPT to create random songs and poems about things that I am thinking about. It can be very amusing and you have to admire how quickly the bot will create something that would take me a long time.

In the spirit of celebrating the coronation (someone has to!), I thought I would ask the oracle about King Charles and artificial intelligence this morning. Without further ado, here is what it spat out:

In days of old, King Charles did reign,
A monarch wise and just.
But now in modern times we see,
A new king rise from dust.

Artificial Intelligence, its name,
A force both strong and fast.
With algorithms and data sets,
Its knowledge unsurpassed.

No longer do we need a king,
To rule with iron hand.
For AI can make decisions,
That once were solely manned.

But still we must remember,
That AI is but a tool.
And though it may be powerful,
It is not above the rule.

So let us use this technology,
To better serve mankind.
And let King Charles rest in peace,
As we progress in kind.

Ouch! A politically correct ending, but definitely a punchy start.

The king is dead. Long live the king.

Have a great weekend everyone 🙂

Originally posted here:
The coronation and artificial intelligence - Browser Media


Opinion: Let’s face it, artificial intelligence is becoming the new … – The Globe and Mail


The recent hype over AI is much like the same fever that had fuelled crypto, when once you could slap 'blockchain' on the name of any company and see its stock soar fourfold.

Martin Meissner/The Associated Press

Amid all the hoo-ha over artificial intelligence this year, Microsoft Corp. MSFT-Q, which has a stake in the laboratory behind the ChatGPT bot, has seen its shares go up more than 25 per cent.

Various AI stocks, with names you've never heard of, are hotter than hot, even with a recession looming and at the foot of a tech beatdown in the markets. BigBear.ai Holdings Inc., an information-technology services company, is up about 250 per cent on the year; at one point in February, it was up 700 per cent.

Wanna make money? Boy, do I have a great idea for you. Just add "AI" to the name of your company. There's a voice-recognition company that used to be called SoundHound Inc., but went public in 2022 as SoundHound AI Inc. SOUN-Q. The stock has admittedly pared back some gains since then, but it is still up nearly 100 per cent for the year.


Any of this sound familiar? It's the same fever that had fuelled crypto, when once you could slap "blockchain" on the name of any company and see its stock soar fourfold. I'm pretty sure that soon, as with crypto, the term "AI bro" will enter the lexicon to describe a young man who is passionate and enthusiastic about the industry.

Oh, wait, it has. An Urban Dictionary entry for "AI bro" was made in January of this year.


Let's face it, AI is the new crypto. All the hype, investment mania and scams of past years' investment cycles are going to come back.

To that, you might slam your table, squint your eye around your monocle and say: "Wait, that's not right! At least AI does something. Crypto is just make-believe money!"

Story continues below advertisement

A commonly expressed view. And a wrong one. But let's, for the sake of argument, say that it is correct. Has that distinction resulted in any difference in the markets?

It wasn't just 2020, the year of the really expensive digital pictures, or NFTs, that crypto was booming. Remember 2017, when a market frenzy was sparked by the Canadian-founded Ethereum, which let anyone easily create their own coin?

At one point that year, the furniture chain Ethan Allen Interiors Inc. ETD-N was up 50 per cent, largely attributed to how its ticker at the time, ETH, was the same as the abbreviation for Ethereum's ether coin.

While Ethan Allen eventually changed its ticker to distance itself, others fiercely coveted that nominal crypto association.


I wasn't being hyperbolic when I wrote earlier that companies can slap "blockchain" onto their names and see their stock quadruple in value. That was exactly what happened when Long Island Iced Tea Corp. changed its name to Long Blockchain Corp.

Meanwhile, Eastman Kodak Co. KODK-N, the camera maker, saw its stock triple in value after a bad year by announcing it would go into crypto mining.

Then there were the outright scams. The infamous OneCoin raised US$4-billion, but there is no evidence it had even developed a digital currency based on blockchain technology. Such scams are so plentiful that the U.S. Justice Department is still announcing new 2017-era cases to this day.

Such scams abounded because they were easy. Regardless of what many think of it, there are defined metrics for what makes a cryptocurrency, namely in terms of the code that goes into it. But people can't see or hold a cryptocurrency. So, it's easy to claim you've made one. The end user doesn't always have the sophistication to tell the difference until it's too late.


Again, sounds familiar? Have you ever wondered how many purported AI projects are actually AI?

A London-based startup, Engineer.ai, once claimed to use artificial intelligence to help people build apps. It attracted US$30-million from investors, including a unit of Japan's SoftBank Group Corp. SFTBY. The Wall Street Journal later reported that Engineer.ai's AI claims were greatly exaggerated: actual humans in India were building the apps.

Such practices are so rampant, there is even a neologism coined for it: "AI washing."

What it all boils down to is this: When crypto entered the mainstream, it was hard to define or even understand. In that messy environment, companies thrived and empires were built and so also rose the scams and OneCoins of the world. AI is having the same moment now.

Follow this link:
Opinion: Let's face it, artificial intelligence is becoming the new ... - The Globe and Mail


Law firms embrace the efficiencies of artificial intelligence – Financial Times

Law firms have been racing to adopt artificial intelligence after developments in the technology have enabled it to draw up contracts, assist due diligence processes and draft legal opinions.

The launch of natural language chatbot ChatGPT in November marked a significant turning point in generative AI. Created by Microsoft-backed OpenAI, the bot produces convincing and humanlike sentences, using large language models to predict the likely next word in a sequence.
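As a rough illustration of that next-word mechanism, the core prediction step amounts to turning the model's scores for candidate words into a probability distribution and sampling from it. The vocabulary and scores below are invented for illustration; real models operate over tens of thousands of sub-word tokens and billions of parameters.

```python
# Minimal sketch of "predict the likely next word": invented vocabulary and scores,
# shown only to illustrate the sampling step behind chatbots like ChatGPT.
import math
import random

vocab = ["contract", "clause", "court", "banana"]
logits = [2.1, 1.4, 0.9, -3.0]  # the model's raw scores for each candidate next word

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)  # convert scores into a probability distribution
next_word = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_word)
```

Repeating that step, with each chosen word fed back in as context, is what produces the convincing, humanlike sentences described above.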

ChatGPT's launch has led other big tech companies, including Google and Microsoft itself, to quickly follow suit. Start-ups have also been leveraging the underlying technology used in these products to develop specialist AI for legal services.

Law firms and consultancies are now using the software to automate tasks and drive efficiencies, spurred to cut costs by falling revenues amid a corporate dealmaking drought. Magic Circle law firms and Big Four accounting groups have been experimenting with AI platforms built for legal tasks such as drafting contracts, translating documents into different languages, and suggesting legal opinions.


"We get several thousand queries daily, and it is pretty consistent across the offices... in multiple languages [and] different areas of law," says David Wakeling, head of Allen & Overy's markets innovation group, which comprises both lawyers and developers. The law firm was the first Magic Circle firm to adopt generative AI and has been using an eponymous product by a US start-up called Harvey AI since November. "It is [now] a serious part of the operating model," he explains. "We are way past trial."

Harvey was built using the GPT language models created by OpenAI, which has also invested in the US start-up. As well as the general internet data that underlies GPT, Harvey is trained in legal data including case law. The system alerts A&O's lawyers to fact-check the content it creates, as generative AI is known for "hallucinating": stating things confidently as fact, despite there being no basis for them in reality.

"It is a blank page, quick first stab," Wakeling says. "You know you are always going to edit it; it is never good enough. But [if] you apply that to 3,500 people, that is a serious saving in terms of time; an hour or two a week is a big deal."


Almost half of all current tasks in the legal profession could be replaced by AI, according to a recent report by Goldman Sachs. Automating them would eliminate the need for humans to carry out some of the more administrative and mundane work, although it could also mean that trainee solicitors and graduates at law firms no longer get to experience it.

"We need to think about how we train young lawyers," says Kay Firth-Butterfield, head of artificial intelligence at the World Economic Forum. "They cannot all instantly be able to advise on the most complex matters or do complex client meetings, or think of ways to challenge the status quo. This has to be learned."

She sees limits to AI's use: "Because all that generative AI can do is look at historical data to give answers, we need human lawyers to ensure we keep expanding and pushing forwards the law so it doesn't atrophy."

Protagonists argue that AI-based tools will free up lawyers to do more skilled work and give strategic advice, while saving time and reducing costs for firms and clients.

"It definitely reduces the billable hours," says Richard Robinson, founder and chief executive of Robin AI, which launched in 2019 and provides AI-based legal software. But he points out: "The best firms want to be paid for high-level strategic work, things that fundamentally, at least today, no AI is trying to replicate, like high-level negotiations, insights into what's happened in other [similar] deals in the market."

$1bn: PwC's investment in AI to automate parts of its audit, tax and consulting business over the next three years

Robin AI works with two of the Big Four accounting firms, as well as private equity funds and law firm Clifford Chance. It sells software and also offers an added service, where its team of 30 in-house lawyers and paralegals oversee the AI-generated results. Its technology can also be used to scan legal documents to assess risk exposure.

But Robinson warns that these tools are "basically not ready to be used without people safeguarding them", highlighting that the pure output of such technologies should always be checked and edited by a qualified expert.

Large language models improve as more data is fed into them and as they are put to more use. PwC, which uses Harvey for mergers and acquisitions, due diligence and drafting contracts, says it has had an influx of clients saying they want to adopt AI, but are concerned about data protection.

"Data confidentiality and security is paramount and really important... because data is sensitive and there is legal privilege," says Sandeep Agrawal, a partner in PwC's tax and legal services. Agrawal met executives from Harvey recently to discuss how it can ringfence data and encrypt the information, meaning it is more secure. Harvey segregates all customer data and offers encryption tools to protect access to client information.


In a sign of growing confidence in the technology, PwC last week pledged to invest $1bn in AI to automate parts of its audit, tax and consulting business over the next three years.

Jerry Ting, chief executive of contract management business Evisort and a lecturer at Harvard Law School, reports a similar shift towards adopting AI in recent months. Evisort, which launched in the US in 2016, offers AI software that allows clients to create and manage contracts, including drafting and signing, through an automated process.

"Before GPT, it was: here is what AI is, here's why it benefits you; it was almost that we had to convince them," he says. "Now, they are showing up at the door already convinced. The question becomes: how do I use it in a way that's safe, that actually drives my business outcomes, and does it fit in my budget?"

This article has been updated to reflect that Evisort is a contract management business

Original post:
Law firms embrace the efficiencies of artificial intelligence - Financial Times


Did Stephen Hawking Warn Artificial Intelligence Could Spell the … – Snopes.com

Image via Sion Touhig/Getty Images

On May 1, 2023, the New York Post ran a story saying that British theoretical physicist Stephen Hawking had warned that the development of artificial intelligence (AI) could mean "the end of the human race."

Hawking, who died in 2018, had indeed said so in an interview with the BBC in 2014.

"The development of full artificial intelligence could spell the end of the human race," Hawking said during the interview. "Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever-increasing rate."

Another story, from CNBC in 2017, relayed a similar warning about AI from the physicist. It came from Hawking's speech at the Web Summit technology conference in Lisbon, Portugal, according to CNBC. Hawking reportedly said:

Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.

Such warnings became more common in 2023. In March, tech leaders, scientists, and entrepreneurs warned about the dangers posed by AI creations, like ChatGPT, to humanity.

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," they wrote in an open letter published by the Future of Life Institute, a nonprofit. The letter garnered over 27,500 signatures as of this writing in early May 2023. Among the signatories were CEO of SpaceX, Tesla, and Twitter Elon Musk, Apple co-founder Steve Wozniak, and Pinterest co-founder Evan Sharp.

In addition, Snopes and other fact-checking organizations noted a dramatic uptick in misinformation conveyed on social media via AI-generated content in 2022 and 2023.

Then, on May 2, longtime Google researcher Geoffrey Hinton quit the technology behemoth to sound the alarm about AI products. Hinton, known as the "Godfather of AI," told MIT Technology Review that chatbots like GPT-4, made by the AI lab OpenAI, "are on track to be a lot smarter than he thought they'd be."

Given that Hawking was indeed documented as warning about the potential for AI to "spell the end of the human race," we rate this quote as correctly attributed to him.

"Geoffrey Hinton Tells Us Why He's Now Scared of the Tech He Helped Build." MIT Technology Review, https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/. Accessed 3 May 2023.

"'Godfather of AI' Leaves Google, Warns of Tech's Dangers." AP NEWS, 2 May 2023, https://apnews.com/article/ai-godfather-google-geoffery-hinton-fa98c6a6fddab1d7c27560f6fcbad0ad.

"Pause Giant AI Experiments: An Open Letter." Future of Life Institute, https://futureoflife.org/open-letter/pause-giant-ai-experiments/. Accessed 3 May 2023.

Stephen Hawking Says AI Could Be "worst Event" in Civilization. 6 Nov. 2017, https://web.archive.org/web/20171106191334/https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html.

Stephen Hawking Warned AI Could Mean the "End of the Human Race." 3 May 2023, https://web.archive.org/web/20230503162420/https://nypost.com/2023/05/01/stephen-hawking-warned-ai-could-mean-the-end-of-the-human-race/.

"Stephen Hawking Warns Artificial Intelligence Could End Mankind." BBC News, 2 Dec. 2014. http://www.bbc.com, https://www.bbc.com/news/technology-30290540.

Damakant Jayshi is a fact-checker for Snopes, based in Atlanta.

See the rest here:
Did Stephen Hawking Warn Artificial Intelligence Could Spell the ... - Snopes.com


How Artificial Intelligence could become the future of MLS scouting and recruitment – AS USA

Major League Soccer has gone into partnership with aiScout, an artificial intelligence-based talent analysis and development platform run by ai.io, which will enable players to be scouted by clubs in the United States no matter where they are in the world. The agreement is part of the MLS Emerging Ventures program and continues the league's investment in MLS NEXT, which is aimed at players at the under-19 level and younger.

Any player can sign up to use the digital scouting product and effectively take part in a virtual trial, to which any partner club (now including all 29 MLS clubs) will have access. The aiScout app is free to download, and players can run through a series of training drills and assessments whenever they wish, as long as they have some open space and their mobile phone handy so they can film themselves.

After players have completed the assessments and uploaded their clips, the app evaluates their performances and gives them a score, which partner clubs can track and review for themselves. The most highly rated players will then have the chance to train with MLS clubs at different events across the US and Canada, according to MLS's official announcement of the agreement.
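For readers who think in code, the trial-and-review loop described above (upload a filmed drill, receive a score, surface the top-rated players to clubs) might look something like the sketch below. Everything here is hypothetical: aiScout's actual data model, scoring method, and thresholds are not described in the announcement.

```python
# Purely illustrative sketch of a virtual-trial pipeline: players submit scored drill
# clips and clubs review whoever clears a threshold. Names and numbers are invented.
from dataclasses import dataclass, field

@dataclass
class DrillSubmission:
    player_id: str
    drill: str
    score: float  # score assigned by the platform after analysing the uploaded clip

@dataclass
class VirtualTrial:
    submissions: list = field(default_factory=list)

    def add(self, submission: DrillSubmission) -> None:
        self.submissions.append(submission)

    def shortlist(self, threshold: float) -> set:
        # Clubs review players whose drill scores clear a (hypothetical) cut-off.
        return {s.player_id for s in self.submissions if s.score >= threshold}

trial = VirtualTrial()
trial.add(DrillSubmission("player_001", "sprint", 82.5))
trial.add(DrillSubmission("player_002", "dribble", 64.0))
print(trial.shortlist(threshold=75.0))  # -> {'player_001'}
```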

"At MLS, we believe this partnership is going to be a real solution for some of the most important issues faced in soccer across North America, namely cost, geography, and accessibility," said Fred Lipka, Technical Director of MLS NEXT.

"It is critical that we provide all players with an opportunity to access MLS NEXT and MLS NEXT Pro programming, and ai.io has built a fantastic technology platform that enables us to eliminate these traditional barriers and increase opportunities for players at no cost to the player."

As well as all 29 MLS clubs, Premier League giants Chelsea also have an agreement with aiScout, who are their academy research partner. Fellow English club Burnley, who have just won promotion back to the top flight, are also part of ai.io's club network.

"By encouraging aspirational players and fans to download our aiScout app and film themselves replicating club-standard drills on their mobile devices, players have had success, and in some cases are now playing for clubs in the English Premier League," added Darren Peries, CEO of ai.io.

"We are thrilled about this partnership with Major League Soccer, and greatly look forward to soon providing players around the world with an opportunity to be seen, analysed, evaluated and developed by MLS clubs."

MLS clubs will be able to use aiScout in full from December 2023 onwards.

View post:
How Artificial Intelligence could become the future of MLS scouting and recruitment - AS USA


The open-source future of artificial intelligence – Exponential View

In today's commentary I delve into the question of open-source versus closed-source models for AI, and how this will shape the future of the internet.

It all started with leaked Google documents shared by an anonymous insider to

But the uncomfortable truth is, we aren't positioned to win this arms race and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch.

I'm talking, of course, about open source. Plainly put, they are lapping us.

I've been thinking about open source since Stable Diffusion came out last summer. But the last two months, since Meta's LLaMA model was leaked, have seen a rich ecosystem of developers swarm around open-source LLMs. And things are moving quickly.

But before I share my thoughts, I'll first summarise the key observations the Google insider makes.

The first observation is that the gap between the current state of the art (GPT-4) and open-source models is closing quickly. For example, a $100 open-source model with 13 billion parameters is competing with a $10 million Google high-end model with 540 billion parameters.

Secondly, the open-source community has solved many scaling problems through a range of optimizations. MosaicML is a great example of this, demonstrating that they can train a Stable Diffusion model, which is not a large language model, six times cheaper than the original.

The third observation is that the dynamic of the open-source market creates a faster rate of iteration. This is because there are many developers who are contributing to the market, leading to a much faster rate of learning. Learning is a key factor in the Exponential Age; it drives down cost and improves price performance. Potentially, open-source models could learn and iterate far faster than closed-source ones.

In the last few days, I've spoken to Emad Mostaque from Stability AI, Yann LeCun from Meta, and several other people who are developing these things. In addition, I've been thinking about open source and public goods for more than two decades. Here's my take…

Original post:
The open-source future of artificial intelligence - Exponential View
