
3 Artificial Intelligence (AI) Stocks That Are Long-Term Buys – The Motley Fool

Artificial intelligence (AI) has been the hottest theme on Wall Street this year. The hype around the potential for this growing technology sent investors scrambling to get in on the action. Any stock with any relation to AI got a bump, making AI much like a tide coming in and lifting all the boats in the harbor.

But tides eventually go back out, so it's critical to know which companies will continue to catch the wave of the AI revolution and which ones will be left high and dry once the hype dies down.

If you're looking for AI stocks that have what it takes to motor on for the long term, you're in luck. Here are three with some sustained horsepower.

Graphics processing unit (GPU) chipmaker Nvidia (NVDA 3.79%) has arguably been the face of AI's 2023 rally. The stock is up more than 170% since January, trouncing the broader market. Investors are giddy over the company's early AI dominance; its AI chips have an estimated 80% market share. Nvidia's chips specialize in demanding applications requiring lots of computing power. In the past, that meant applications like gaming and cryptocurrency mining, but Nvidia embraced AI head-on, and now companies can't get enough of its chips.

You can see how growth has gone into a higher gear since earlier this year, and analysts believe that momentum will continue due to the long-term opportunities in AI. Researchers believe the global AI market will multiply in size over the coming years, approaching $2 trillion by 2030. Nvidia will undoubtedly face competition as others bring rival products to market, but Nvidia's early start and the growing market should mean plenty of opportunities to continue growing.

NVDA Revenue (Quarterly YoY Growth) data by YCharts

The market has taken a step back in recent weeks, and Nvidia's stock price is down nearly 20% from its high this year. The pullback could be an opportunity to begin sniffing around shares for a long-term holding. The stock's forward P/E ratio is 37, but that's arguably reasonable if the company can grow earnings at the 33% annualized rate analysts expect.
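One quick way to sanity-check that "arguably reasonable" claim is the PEG ratio (forward P/E divided by the expected growth rate). Here is a minimal sketch using only the figures quoted above:

```python
# PEG ratio: forward P/E divided by expected annual earnings growth (%).
# A value near 1 is conventionally read as growth fairly priced.
forward_pe = 37    # Nvidia's forward P/E, per the article
growth_pct = 33    # analysts' expected annualized earnings growth

peg = forward_pe / growth_pct
print(f"PEG ratio: {peg:.2f}")  # -> PEG ratio: 1.12
```

A PEG just above 1 is consistent with the framing that the multiple is rich but not unreasonable for the expected growth.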

Training AI models is an underrated part of developing the new AI technologies making headlines. It's one thing to program a model, but it must be trained with massive amounts of data that few companies outside of perhaps Alphabet, Meta Platforms, and Microsoft can access. But thanks to data cloud and analytics company Snowflake (SNOW -0.60%), that could soon change. Snowflake is a cloud-based platform that can store, organize, analyze, and exchange data.

Especially exciting is that Snowflake is building a marketplace where companies can readily access more than 2,100 data sets, services, and apps. The company aims to become the go-to resource for companies to find and extract the data they need to improve their businesses -- or potentially train and hone AI models. Admittedly, revenue growth has slowed, as you can see below; because Snowflake's billing is based on usage, it is vulnerable to customers cutting back in a more challenging economic environment.

SNOW Revenue (Quarterly YoY Growth) data by YCharts

Slowing growth doesn't mean ending growth, either. Analysts believe revenue will grow from $2.75 billion this year to over $18 billion over the next decade. Additionally, Snowflake is poised for strong earnings growth as its revenue grows faster than its expenses. The stock trades at a hefty forward P/E of 205, but that could come down quickly if earnings growth averages nearly 60% annually, as analysts predict.
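To see why a lofty multiple "could come down quickly," consider a toy calculation: if earnings compound at roughly 60% while the share price stays flat, the effective P/E shrinks by that factor each year. A minimal sketch using the article's figures:

```python
# Multiple compression: P/E after n years of earnings growth at rate g,
# assuming the share price itself stays flat.
pe_today = 205   # Snowflake's forward P/E, per the article
g = 0.60         # analysts' projected annual earnings growth

for year in range(1, 6):
    pe_then = pe_today / (1 + g) ** year
    print(f"year {year}: implied P/E {pe_then:.0f}")
# year 5: implied P/E 20 -- roughly a market-average multiple
```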

A common misconception is that leading semiconductor companies like Nvidia build their own chips, but that's untrue. Usually, semiconductor companies design the chips and then outsource production to specialized manufacturers known as foundries, like Taiwan Semiconductor Manufacturing (TSM 1.73%). TSMC (as it is often called) is the world's largest chip foundry, with 56% of the global market.

That makes TSMC a true pick-and-shovel investment in AI: odds are that whatever AI chip becomes popular, TSMC is building it. As the market leader, TSMC specializes in the advanced manufacturing processes that cutting-edge AI chips are likely to need.

TSM Revenue (Quarterly YoY Growth) data by YCharts

The business model isn't quite as lucrative for a manufacturer as for a chip designer. Revenue has shrunk over the past year, and analysts predict around 7% earnings growth. However, the stock's valuation is more modest at just under 17 times forward earnings. The company's location in Taiwan also adds risk, given China-Taiwan geopolitical tensions. Assuming political conflicts don't escalate, TSMC is entrenched as a critical long-term player in AI.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. Justin Pope has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Meta Platforms, Microsoft, Nvidia, Snowflake, and Taiwan Semiconductor Manufacturing. The Motley Fool has a disclosure policy.

See the rest here:
3 Artificial Intelligence (AI) Stocks That Are Long-Term Buys - The Motley Fool

Read More..

Missed Out on Nvidia’s Magnificent Rally? Buy This Incredibly … – The Motley Fool

Shares of Nvidia (NVDA 3.79%) have delivered incredible returns of 193% to investors in the past year, and a big chunk of those gains can be attributed to the booming demand for the company's artificial intelligence (AI) chips.

After all, Nvidia's top and bottom lines started growing at an eye-popping pace thanks to the AI-fueled demand for its graphics processing units (GPUs). The chipmaker's revenue is expected to double in the current fiscal year to $54 billion, which would be a massive jump over the flat revenue growth it clocked in the previous fiscal year.

However, the big surge in Nvidia stock means that investors will now have to pay 97 times trailing earnings and 31 times sales if they wish to buy the shares. Those are extremely rich multiples, though the company could justify them by sustaining its terrific growth. But investors who aren't comfortable paying such rich multiples for Nvidia stock have another option to capitalize on the proliferation of AI: Opera (OPRA -2.33%).

Shares of the web browser company are up an incredible 175% in the past year. But they are down more than 40% since mid-July on the news that Opera is looking to raise $300 million by offering new shares. However, the sharp decline in Opera stock over the past three and a half months indicates that the news of a fresh stock offering may already be priced in. The stock shot up more than 10% on Oct. 26 after it released better-than-expected quarterly results.

It won't be surprising to see Opera stock regain its mojo and go on another parabolic run after its latest report. A parabolic move refers to a rapid jump in the shares of a company in a short period of time -- similar to the right side of a parabolic curve on a chart -- and Opera investors have already seen that happen before the stock lost steam in July this year.

The stock currently carries a 12-month median price target of $20 according to a consensus of four analysts covering the stock, indicating a 72% jump from recent levels.

The Street-high price target of $23 suggests that Opera stock could jump 93% from where it is at this writing. The good part is that Opera could indeed deliver such impressive gains given its healthy pace of growth.

Opera's third-quarter revenue was up 20% year over year to $102.6 million, exceeding the higher end of its guidance range of $97 million to $100 million. This was the 11th consecutive quarter of 20%-plus revenue growth for Opera and was well above the 15% growth that the company was originally anticipating for the quarter.

The company also reported an adjusted profit of $0.18 per share in Q3, more than double the year-ago period's reading of $0.08 per share.

The company's better-than-expected growth was driven by a 24% year-over-year jump in advertising revenue, while search revenue was up 15%. Opera witnessed a healthy year-over-year jump of 24% in average revenue per user (ARPU) to $1.31.

In addition, Opera has raised its full-year revenue forecast to a range of $394 million to $397 million from the prior range of $380 million to $390 million. The company now anticipates full-year revenue to increase 20% from 2022 levels. It has also increased its adjusted EBITDA (earnings before interest, taxes, depreciation, and amortization) guidance to $89 million at the midpoint from the earlier expectation of $82 million.

A key reason Opera has sustained strong growth is improved monetization of its services. It is worth noting that the company's ARPU has tripled over the past four years, and the trend is likely to continue thanks to Opera's focus on integrating generative AI into its browser and advertising platforms.

Opera has integrated ChatGPT into the sidebar of its browser. It offers both the GPT-3.5 and GPT-4 large language models. While GPT-3.5 is available for free, users have the option to upgrade to GPT-4 by paying a subscription of $20 per month.

Opera also offers its in-house chatbot -- Aria -- alongside ChatGPT in its browser.

So, users can simply use Opera's sidebar to launch chatbots and punch in text prompts to gain contextual insights for their queries. What's more, Opera users can highlight text on a webpage and use Aria to learn more about the topic they are researching. The generative AI chatbot also allows users to translate text right from the webpage.

Opera has made a smart move by integrating chatbots within its browser, as this will allow it to tap a fast-growing market. Grand View Research estimates that the global chatbot market could grow 23% annually through 2030, from $5 billion last year to $26 billion at the end of the forecast period.

As a result, it won't be surprising to see Opera attract more users and gain a bigger share of the web browser market, which was worth an estimated $183 billion last year and could grow to $590 billion by 2028.

Opera is expected to maintain healthy growth levels for the next couple of years as well, which is not surprising given the points discussed above. This is evident from the chart below:

OPRA Revenue Estimates for Current Fiscal Year data by YCharts

What's more, analysts are projecting Opera's earnings to grow at an annual rate of 84% for the next five years. That's faster than the 74% annual earnings growth that Nvidia is forecast to deliver over the same period. This makes buying Opera a no-brainer right now, as it is trading at 19 times trailing earnings compared to Nvidia's price-to-earnings ratio of nearly 100.

Assuming Opera's earnings increase at an annual pace of 80% for the next five years, its bottom line could jump to $4.72 per share in 2027, using 2022's earnings of $0.25 as the base. Multiplying the projected earnings by Opera's earnings multiple of 19 points toward a stock price of almost $90 after five years. That's more than 7 times the company's current stock price.
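That arithmetic is easy to reproduce. A minimal sketch using the article's own inputs (2022 EPS of $0.25, 80% annual growth, a 19x multiple):

```python
# Compound 2022 EPS at 80% a year for five years, then apply
# the trailing earnings multiple of 19, as the article does.
eps_2022 = 0.25
growth = 0.80
years = 5

eps_2027 = eps_2022 * (1 + growth) ** years
implied_price = eps_2027 * 19

print(f"projected 2027 EPS: ${eps_2027:.2f}")        # -> $4.72
print(f"implied share price: ${implied_price:.0f}")  # -> $90
```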

So, there is a solid chance that Opera could turn out to be a strong AI stock in the long run and deliver red-hot gains. Investors looking for an alternative to Nvidia to take advantage of this fast-growing technology should take a close look at Opera.

More:
Missed Out on Nvidia's Magnificent Rally? Buy This Incredibly ... - The Motley Fool

Read More..

A year later, artificial intelligence remains a double-edged sword at … – The Setonian

AI has taken the internet by storm since its rise in popularity nearly a year ago, generating images and writing essays in seconds. Today, Seton Hall appears to have embraced AI to a certain degree, staying mindful of its pros while remaining aware of its cons.

The Department of Information Technology published a site called the Artificial Intelligence Resource Center to guide faculty members in addressing the use of AI, with strategies for integrating AI into curriculums, reducing AI-generated content by students and requesting consultations with the Teaching, Learning, and Technology Center Instructional Designer.

This new site will continue to change as more information is gathered, according to Paul Fisher, the director of the Teaching, Learning, and Technology Center and associate chief information officer.

"We need to ensure the University community is aware of both the rewards this kind of technology can provide and the risks that it presents," Fisher said. "For example, additions of AI in our security suite of software help us to weed out false positives and become aware of potential threats faster than before."

Fisher said that people can use AI with both positive and negative goals in mind.

"Taking a shortcut with AI to write or analyze or some other skill won't make you very good at what you are trying to learn to do," he said. "We need to make our students aware that in the long run, using AI will not get them to where they want to be."

He added that the university licenses TurnItIn, a system that detects generative AI and other forms of plagiarism in written work, and that the department is actively looking at other tools to deploy that will help both faculty and students.

Some professors have expressed their worry over students relying on AI programs to do schoolwork.

Dr. Russell Sbriglia, associate professor and director of Undergraduate Literature Studies in the English Department, said that although some professors have proposed more in-class writing assignments to reduce the use of generative AI, students can still use ChatGPT as easily in class as out of class.

"My approach has thus far been to stress to students the limitations of ChatGPT," Sbriglia said. "This includes modeling for students the kinds of errors to which generative AI software is prone. It also includes stressing just how integral writing is to the process of critical thinking."

He also said that students using AI software for their writing assignments outsource their ability to think critically.

"It is through the very act of writing that we come to organize, construct, and even discover our thoughts on a given topic or text," he said. "To skip that process altogether by using AI generative software is to curtail one's ability to think critically."


He added that as generative AI continues to advance and become more sophisticated, professors will need to become more nimble and creative in designing assignments that develop or test for the kinds of knowledge and skills such programs simply cannot fake.

"At the end of the day, artificial intelligence simply does not think like an actual human brain," Sbriglia said.

The English Department recently updated its Plagiarism Policy to address the use of generative AI, such as ChatGPT, among students, stating that any use of AI tools "for essays, journals, in-class writing assignments, etc." is in violation of the English Department Academic Integrity Policy, with the exception of "exercises developed by your professor."

Although some professors worry about students relying on AI for their assignments, others have embraced AI and adjusted their curricula so that students can use it as a tool for learning instead of cheating.

Dr. Nada Khan, a professor in the Department of Chemistry and Biochemistry, said she is happy that AI is playing an important role in exploring new technologies and ideas.

"Rather than worrying about students relying on AI, I work on tweaking my exams and assignments in a way that instead of cheating, copying, relying on AI, students can use it as a tool to expand their knowledge," Khan said.

She added that she uses different and new exam models and other ways to test her students' knowledge, and it is easy for her to know if an exam was done with the help of AI.

"Since I know the learning nature of my students during in-class work, it is easy for me to determine if the exam was completed with the help of AI," Khan said. "This is another reason why I make them work on in-class assignments."

The Office of the Provost, the Center for Faculty Development, and the Teaching, Learning and Technology Center at Seton Hall will hold a panel discussion called "Impact Talks Presents: Generative AI and Higher Education" on Nov. 2 in Bethany Hall, beginning at 12:30 p.m.

The panel will host keynote speaker Dr. Jason Gulya, a professor at Berkeley College and chair of the Artificial Intelligence Council, along with Seton Hall faculty and student representatives, who will share their thoughts on the rapid advancement of generative artificial intelligence (AI) and its impact on higher education.

View post:
A year later, artificial intelligence remains a double-edged sword at ... - The Setonian

Read More..

UTC to invest in artificial intelligence, prompting organizational … – Chattanooga Times Free Press


October 30, 2023 at 3:41 p.m.

by Shannon Coan

With the rise in the use of artificial intelligence, the University of Tennessee at Chattanooga is taking steps to prepare students for an AI-driven world by integrating the technology across campus.

"As AI becomes much more commonplace and easier to access, there's a real opportunity to position both UTC and Chattanooga in the forefront," Vicki Farnsworth, the vice chancellor and chief information officer, said in a phone interview.

Farnsworth will oversee the university's efforts to apply generative AI in the university's instruction, business operations and research, Chancellor Steve Angle announced in a letter to staff Friday. Generative AI tools, such as ChatGPT, create new content using algorithms.

In September, UTC hosted two open forums to gather information about the existing work around AI on campus, as well as what would be needed to further integrate AI in a meaningful way.

Follow this link:
UTC to invest in artificial intelligence, prompting organizational ... - Chattanooga Times Free Press

Read More..

Generative Artificial Intelligence (GAI): Increasing the Fog of War … – ADL

As the war between Israel and Hamas continues, people are turning to social media in search of reliable information about their family and relatives, events on the ground, and an understanding of where the current crisis might lead. The introduction of Generative Artificial Intelligence (GAI) tools, such as deepfakes and synthetic audio, is further complicating how the public engages with and trusts online information in an environment already rife with misinformation and inflammatory rhetoric. There is a dual problem: bad actors intentionally exploit these tools to share misleading information, and the tools' mere existence lets them cast doubt on any content by suggesting it might be fake. This is leading to a dangerous erosion of trust in online content.

Fake news stories and doctored images are not unique to the current conflict in the Middle East; they have long been an integral part of warfare, with armies, states, and other parties using the media as an extension of their battles on the ground, in the air, and at sea. What has changed radically is the advent of digital technology. Any reasonably tech-savvy person with access to the internet can now generate increasingly convincing fake images and videos (deepfakes) and audio using GAI tools (such as DALL-E, which generates images from text), and then distribute this fake material to huge global online audiences at little or no cost. Recent examples of such fakes include an AI-generated video supposedly showing First Lady Jill Biden condemning her own husband's support for Israel.

By using GAI to, for example, spread mis- and disinformation, bad actors are actively seeking not just to score propaganda victories, but also to pollute the information environment. Promoting a climate of general mistrust in online content means that bad actors need not even actively exploit GAI tools; the mere awareness of deepfakes and synthetic audio makes it easier for these bad actors to manipulate certain audiences into questioning the veracity of authentic content. This phenomenon has become known as the "liar's dividend."

In the immediate aftermath of Hamas's brutal invasion of Israel on October 7, we have witnessed an increase in antisemitism and Islamophobia online as well as in the physical world. Images of Israelis brutally executed by Hamas have been widely distributed on social media. While these images have helped document the horrors of the war, they have also become fodder for misinformation. So far, we have not been able to confirm that generative AI images are being created and shared as part of large-scale disinformation campaigns. Rather, bad-faith actors are claiming that real photographic or video evidence is AI-generated.

On Wednesday, October 11, a spokesperson for Israeli Prime Minister Benjamin Netanyahu said that babies and toddlers had been found decapitated in the Kfar Aza kibbutz following the Hamas attack on October 7. In US President Joe Biden's address to the nation later that evening, he said that he had seen photographic evidence of these atrocities. The Israeli government and US State Department both later clarified that they could not confirm these stories, which unleashed condemnation on social media and accusations of government propaganda.

To counter these claims, the Israeli Prime Minister's office posted three graphic photos of dead infants to its X account on Thursday, October 12. Although these images were later verified by multiple third-party sources as authentic, they set off a series of claims to the contrary. The images were also shared by social media influencers, including Ben Shapiro of The Daily Wire, who has been outspoken in his support for Israel.

Critics of Israel, including self-described "MAGA communist" Jackson Hinkle, questioned the veracity of the images and claimed that they had been generated by AI. As supporting evidence, Hinkle and others showed screenshots from an online tool called AI or Not, which allows users to upload images and check whether they were likely AI- or human-generated. The tool determined that the image Shapiro shared was generated by AI. An anonymous user on 4chan then went a step further and posted an image purporting to show the "original" image, a puppy about to undergo a medical procedure, alleging that the Israeli government had used this image to create the fake one of the infant's corpse.

The 4chan screenshot was circulated on Telegram and X as well.

Other X users, including YouTuber James Klg, disputed Hinkle's assertion, sharing their own examples where AI or Not determined that the image was human-generated. Hinkle's post has received over 22 million impressions at the time of this writing, whereas Klg's has only received 156,000.

Our researchers at the ADL Center for Tech & Society (CTS) replicated the experiment with AI or Not and got both results: when using the photo shared by the Israeli PM's X account, the tool determined it was AI-generated, but when using a different version downloaded from Google image search, it determined the photo was human-generated. This discrepancy says more about the reliability of the tool than about any deliberate manipulation by the Israeli government. AI or Not's inconsistencies are well documented, especially its tendency to produce false positives.

The Jerusalem Post confirmed the images were indeed real and had been shown to US Secretary of State Antony Blinken during his visit to the Israeli Prime Minister's office on October 12. In addition, Hany Farid, a professor at the UC Berkeley School of Information, says his team analyzed the photos and concluded AI wasn't used. Yet this is unlikely to convince social media users and bad-faith actors who believe the images were faked. X's Community Notes feature, which crowdsources fact-checks from users, applied labels to some posts supporting the claim of AI generation and other labels refuting the claim.

This incident exposes a perfect storm: an online environment rife with misinformation, and the impact that many experts feared generative AI could have on social media and the trustworthiness of information. The harm caused by the public's awareness that images can be generated by AI, and the so-called liar's dividend that lends credibility to claims of real evidence being faked, outweighs any attempts to counter these claims. Optic's AI or Not tool includes warning labels that the tool is a free research preview and may produce inaccurate results. But this warning is only as effective as the public's willingness to trust it.

CTS has already commented on the misinformation challenges that the Israel-Hamas war poses to social media platforms. Generative AI adds another layer of complexity to those challenges, even when it is not actually being used.

Amid an information crisis that is exacerbated by harmful GAI-created disinformation, social media platforms and generative AI developers alike have a crucial role to play in identifying, flagging, and if necessary, removing synthetic media disinformation. The Center for Tech & Society recommends the following for each:

Impose a clear ban on harmful manipulated content: In addition to regular content policies that ban misinformation in cases where it is likely to increase the risk of imminent harm or interference with the function of democratic processes, social media platforms must implement and enforce policies prohibiting synthetic media that is particularly deceptive or misleading, and likely to cause harm as a result. Platforms should also ensure that users are able to report violations of such a policy with ease.

Prioritize transparency: Platforms should maintain records/audit trails of both the instances of harmful media that they detect and the subsequent steps that they take upon discovering that such a piece of media is synthetically created. They should also be transparent with users and the public about their synthetic media policy and enforcement mechanisms. Similarly, as noted in the Biden-Harris White House's recent executive order on artificial intelligence, AI developers must be transparent about their findings as they train AI models.

Proactively detect GAI-created content during times of unrest: Platforms should implement automated mechanisms to detect indicators of synthetic media at scale, and use them even more robustly than usual during periods of war and unrest. During times of crisis, social media platforms must increase resources to their trust and safety teams to ensure that they are well-equipped to respond to surges in disinformation and hate.

Implement GAI disclosure requirements for developers, platforms, and users: GAI disclosure requirements must exist in some capacity to prevent deception and harm. Requiring users to disclose their use of synthetic media, and requiring social media platforms to identify it, can play a significant role in promoting information integrity. Disclosure could include a combination of labeling requirements, prominent metadata tracking, or watermarks to demonstrate clearly that a post involved the creation or use of synthetic media (see the sketch below). As the Department of Commerce develops guidance for content authentication and watermarking to clearly label AI-generated content, industry players should prioritize compliance with these measures.
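As a concrete illustration of the "prominent metadata" idea only, here is a minimal, hypothetical sketch using Pillow's PNG text chunks; the file and key names are invented for illustration, and real deployments would rely on signed standards such as C2PA content credentials, since plain text chunks are trivial to strip:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Attach a disclosure label to a synthetic image's metadata before sharing.
img = Image.open("generated.png")        # hypothetical input file

meta = PngInfo()
meta.add_text("synthetic-media", "true")        # hypothetical key names
meta.add_text("generator", "example-model-v1")
img.save("generated-labeled.png", pnginfo=meta)

# A platform could read the label back on upload:
labeled = Image.open("generated-labeled.png")
print(labeled.text.get("synthetic-media"))  # -> "true"
```

The fragility of such labels (anyone can re-save the file without them) is exactly why the recommendations here pair metadata with watermarking and platform-side detection.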

Collaborate with trusted civil society organizations: Now more than ever, social media platforms must be responsive to flags of GAI content from their trusted fact-checking partners and civil society allies, and they must make consistent efforts to apply those labels, and moderate if necessary, before the content can exacerbate harm. Social media companies and AI developers alike should engage consistently with civil society partners, whose research and red-teaming efforts can help reveal systemic flaws and prevent significant harm before an AI model is released to the public.

Promote media literacy: Industry should encourage users to be vigilant when consuming online information and media. They may consider developing and sharing educational media resources with users, and creating incentives for users to read them. For instance, platforms can encourage users in doubt about a piece of content to consider the source of the information, conduct a reverse image-search, and check multiple sources to verify reporting.

See more here:
Generative Artificial Intelligence (GAI): Increasing the Fog of War ... - ADL

Read More..

Artificial Intelligence: The Biggest Dangers Aren’t The Ones We Are … – Joseph Steinberg

Published on November 1, 2023

While many people seem to be discussing the dangers of Artificial Intelligence (AI), many of these discussions focus on what I believe are the wrong issues.

I began my formal work with AI while a graduate student at NYU in the mid-1990s; the world of AI has obviously advanced quite a bit since that time period, but many of the fundamental issues that those of us in the field began recognizing almost three decades ago not only remain unaddressed, but continue to pose increasingly large dangers. (I should note that, in some ways, I have been involved in the field of artificial intelligence since I was a child: by the time I was 7, I was playing checkers against a specialized checkers-playing computer, and trying to figure out both why the device sometimes lost and how to improve its performance.)

While I will describe in another article why many of the concerns with AI that are commonly discussed in the media should not actually be of grave concern to anyone, I will first publish a series of pieces discussing what I DO consider to be the biggest dangers of AI.

So, in no particular order, here is the first:

One of the great powers of AI is its ability to automate translations, something that will, in the not-so-distant future, enable any two people on this planet to communicate with one another; AI is already well on its way towards effectively establishing the utopian level of communication envisioned by the Bible in Genesis 11: "Now the whole world had one language and a common speech."

There is little doubt that AI translation technology is already starting to have a dramatic, transformative impact on human society and that the magnitude of that impact will only grow with time.

As is always the case with new technologies, however, enabling universal communication can be used for good or for bad; in our world, the power to do good always comes with a trade-off.

In terms of offering human beings the capability to communicate unbounded by language and culture, AI is already enabling criminals, who might otherwise be constrained by their knowledge of a particular language or set of languages, to social engineer people who speak other languages. In the past, translators were used to create phishing emails that, naturally, were far from perfectly crafted. Today, however, we already see voice and video translators that can quickly, sometimes in real time, transform oral and visual communications from one language to another, enabling social engineering attacks by phone or even by video call.

To see the power and danger of AI-based language conversion, as it already exists, please watch the following one-minute video; the video was generated in just a few minutes by the team at GoHuman.AI using only the video below it as input.

The original video (unadulterated by AI modification) follows:

Visit link:
Artificial Intelligence: The Biggest Dangers Aren't The Ones We Are ... - Joseph Steinberg

Read More..

Pigeons problem-solve similarly to artificial intelligence, research shows – The Guardian


The intelligent birds, thought to be a nuisance by some, learn from consequences and can recognize resemblance between objects

Thu 26 Oct 2023 05.00 EDT

A new study has found that the way pigeons problem-solve matches the way artificial intelligence does.

Often overlooked as a nuisance, pigeons are actually highly intelligent animals that can remember faces, see the world in vivid colors, navigate complex routes, deliver news and even save lives.

In the study, 24 pigeons were given a variety of visual tasks, some of which they learned to categorize in a matter of days, and others in a matter of weeks. The researchers found evidence that the mechanism that pigeons used to make correct choices is similar to the method that AI models use to make the right predictions.

"Pigeon behavior suggests that nature has created an algorithm that is highly effective in learning very challenging tasks," said Edward Wasserman, study co-author and professor of experimental psychology at the University of Iowa. "Not necessarily with the greatest speed, but with great consistency."

On a screen, pigeons were shown different stimuli, like lines of different width, placement and orientation, as well as sectioned and concentric rings. Each bird had to peck a button on the right or left to decide which category they belonged to. If they got it correct, they got food, in the form of a pellet; if they got it wrong, they got nothing.

"Pigeons don't need a rule," said Brandon Turner, lead author of the study and professor of psychology at Ohio State University. "Instead they learn through trial and error." For example, when given a visual from, say, category A, they also classified anything that looked close to it as category A, tapping into their ability to identify similarities.

Over the course of the experiments, pigeons improved their ability to make right choices from 55% to 95% of the time when it came to some of the simpler tasks. Presented with a more complex challenge, their accuracy went up from 55% to 68%.

"Using more humble animals like pigeons, we can test how far they can go with a mind that is [we think] solely or mostly associative," said Onur Güntürkün, professor of behavioral neuroscience at Ruhr University Bochum, who was not involved in the study. "This paper shows how incredibly strong associative systems can be, how true cognition-like they are."

In an AI model, the main goal is to recognize patterns and make decisions. Pigeons, as the research shows, can do the same. Learning from consequences, such as not receiving a food pellet, pigeons have a remarkable ability to correct their errors. A similarity function is also at play for pigeons, in their ability to find resemblance between two objects.

"With just those two mechanisms alone, you can define a neural network or an artificial intelligent machine to basically solve these categorization problems," said Turner. "It stands to reason that the mechanisms that are present in the AI are also present in the pigeon."
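To make that concrete, here is a minimal, hypothetical sketch of the two mechanisms Turner describes: trial-and-error (error-driven) weight updates and generalization across similar stimuli. It is not the study's actual model, just a toy delta-rule learner on made-up two-feature stimuli:

```python
import random

# Toy stimuli: (line width, orientation) pairs. Category A clusters near
# low values, category B near high values; nearby stimuli look similar.
def make_stimulus():
    if random.random() < 0.5:
        return (random.gauss(0.2, 0.1), random.gauss(0.2, 0.1)), 0  # category A
    return (random.gauss(0.8, 0.1), random.gauss(0.8, 0.1)), 1      # category B

weights, bias, lr = [0.0, 0.0], 0.0, 0.5

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

correct = 0
for trial in range(2000):
    x, label = make_stimulus()
    guess = predict(x)
    correct += guess == label
    # Trial and error: the outcome (pellet or nothing) nudges the weights.
    # This is the delta rule that also drives simple artificial networks.
    error = label - guess
    weights[0] += lr * error * x[0]
    weights[1] += lr * error * x[1]
    bias += lr * error

print(f"accuracy across training: {correct / 2000:.0%}")
```

Because similar inputs produce similar scores, the learner generalizes to new stimuli from the same category without ever being given an explicit rule, which is the behavior the study attributes to the pigeons.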

The researchers now aim to collaborate with scientists who study pigeons and their brains. They are hoping that these findings can have practical applications in better understanding human brain damage.

"Maybe we can get some further insight into what is going on in that little bird brain," said Wasserman. "It's a damn good brain; it may be small in size, but they pack a punch when it comes to the capacity to learn."

No pigeons were harmed in the course of the study.


Originally posted here:
Pigeons problem-solve similarly to artificial intelligence, research shows - The Guardian

Read More..

Artificial Intelligence already working one million hours a year for … – LBC

1 November 2023, 13:56

One police force has worked its way through 65 years of data in just six months thanks to advances in artificial intelligence (AI), according to the chief scientific adviser for police.

Speaking to LBC, Paul Taylor said the technology - which is being discussed at a safety summit this week - is already working shifts equivalent to those of 600 officers every year.

"All forces are benefiting from AI already, its integrated into systems around unmanned vehicles and drones and in language translation for rapid crisis situations, he said.

"We're using AI in facial recognition technology, identifying hundreds of offenders every month.

"Its looking through hundreds of thousands of images to identify illegal child pornography material. Historically our teams would have had to look at that material manually, now were able to use artificial intelligence to find those explicit and horrible images.

"That not only speeds up the investigation, it also means our workforce is not having to look at lots of that material - which is important.

"Of course, in every call it's a human making the final decision, but what the AI is doing is helping those humans complete their tasks in a rapid manner."

Read more: King Charles warns AI risks need to be addressed with 'urgency, unity and collective strength' in surprise summit speech

Read more: Top Tory can't tell Nick Ferrari where £38million of taxpayer cash will go in car crash interview on AI

Mr Taylor insisted the increased use of the technology does not mean people will lose their jobs - rather, it would free officers up to get back to the things they joined the police for in the first place.

Researchers have been developing the use of artificial intelligence for more than a decade across different sectors.

The government has been using it to identify fraudulent benefit claims.

National Grid uses AI drones to maintain energy infrastructure.

And the NHS has been working on systems to manage hospital capacity, train and support surgeons in carrying out complex operations and to more accurately diagnose conditions.

Jorge Cardosa, a researcher at King's College London, showed LBC a system they've developed that compares MRI scans to quantify issues and aid diagnoses - rather than relying on a human's educated guess.

"A lot of these AI systems will do many of the really boring jobs that clinicians and nurses currently do and release their time to focus more on the patients. But it's also making it easier to diagnose issues and give clinicians all the information they need.

"In this example, it's a way to transform complex images into a series of numbers that can help figure out what's wrong, while AI is also gathering all the data the NHS holds about a patient to stitch it together and help build a better picture.

"The ultimate decision is always with the clinician and the patient though, who should always be able to opt in or opt out."

Concerns have been raised about the rapid development of the technology, though, particularly when it comes to national security.

Paul Taylor, who works closely with police chiefs across the UK, went on to tell LBC that they need to be aware of the AI race as criminals look to exploit the technology.

"We have that kind of tension of making were rolling it out proportionately and sensibly but equally understanding that as its moving forwards, that criminals dont have the same moral standards that we would have.

"Two of our most present concerns are around deepfakes, where images and videos are being used in exploitation cases. We are concerned about that becoming easier and easier to do and are building technologies to help us spot those fakes and stop them at source.

"And the other is automation of fraud, with things like ChatGPT which can create very convincing narratives. You can imagine that being automated into a system where we can see large scale fraud utilising Artificial Intelligence.

"Those are two areas of many threats that we are alive to, but the opportunities hopefully outweigh the threats."

Continue reading here:
Artificial Intelligence already working one million hours a year for ... - LBC

Read More..

The Top Artificial Intelligence (AI) Chipmaker Is Finally Starting to … – The Motley Fool

Behind the jaw-dropping technologies that Nvidia, Apple, Broadcom, and many others put out is Taiwan Semiconductor Manufacturing (TSM 1.73%). TSMC -- as it's commonly called -- is the world's largest chip foundry and only makes chips on a contract basis, which allows it to stay neutral as competitors battle it out in the smartphone, GPU, and automotive worlds.

While its chips have always been cutting-edge, TSMC's latest technology only recently hit the market. Its 3 nanometer (nm) chips are finally starting to contribute to revenue, making this an exciting time for TSMC investors. So, what's the big deal with these chips? Read on to find out.

Taiwan Semiconductor's 3 nm chips represent the next iteration in chip technology. A denser process lets Taiwan Semiconductor pack more transistors onto a single chip, making the chip more powerful or more energy efficient (depending on how the designer configures it).

This allows for more powerful technologies to be launched by the likes of Nvidia or Apple. With the latest wave of artificial intelligence interest hitting various companies, the 3 nm chip launch couldn't have come at a better time.

While the legacy 5 nm chip still holds the lion's share of revenue, investors should expect 3 nm's revenue share to grow rapidly over the next few quarters.

Data source: Taiwan Semiconductor.

Conversely, the 7 nm revenue share has declined as 5 nm revenue has increased. Eventually, 3 nm will do the same to 5 nm chips, but that won't be for a few years. Regardless, now that 3 nm chips are starting to impact Taiwan Semiconductor's financial results materially, it's an exciting time for investors as they can finally see returns on the capital TSMC has invested into developing the process.

Despite the great news of the 3 nm chips' arrival, Taiwan Semiconductor still posted declining revenue in the third quarter. Because customers have excess inventory, demand for TSMC's chips has been lower in 2023. This contributed to revenue declining by 15% year over year in U.S. dollars. But management also sees signs of PC and smartphone demand returning, indicating that 2024 could be a great year for the company.

With a stronger outlook ahead, you'd think investors would get excited and pump the stock up, but that's not happening. Taiwan Semiconductor's valuation is cheap from both a trailing- and a forward-earnings perspective.

TSM PE Ratio data by YCharts

With the stock trading at 15 times trailing and 18 times forward earnings, it's much cheaper than the S&P 500's 25 times trailing and 19 times forward earnings. This is despite the fact that Taiwan Semiconductor is one of the best-run businesses and is looking at huge future demand.

Taiwan Semiconductor is your investment if you're looking for a stock that can display growth characteristics while trading like a value stock. These two investing fields rarely intersect, but the results can be fantastic when they do.

The company is just starting to roll out its innovative 3 nm chip technology, which commands higher prices than its predecessors. That incremental revenue should help drive the stock price higher, and investors ought to buy the stock now with at least a three- to five-year holding period in mind to capitalize on this transition.

Keithen Drury has positions in Taiwan Semiconductor Manufacturing. The Motley Fool has positions in and recommends Apple, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends Broadcom. The Motley Fool has a disclosure policy.

Read more:
The Top Artificial Intelligence (AI) Chipmaker Is Finally Starting to ... - The Motley Fool

Read More..

Artificial intelligence: definitions and implications for public services – The Institute for Government

What is artificial intelligence (AI)?

The definition of artificial intelligence is contested, but the term is generally used to refer to computer systems that can perform tasks normally requiring human intelligence.[1] The 2023 AI white paper defines it according to two characteristics that make it particularly difficult to regulate: adaptivity (which can make it difficult to explain the intent or logic of outcomes) and autonomy (which can make it difficult to assign responsibility for outcomes).[2]

It is helpful to think of a continuum between 'narrow AI' on the one hand, which can be applied only to specific purposes (e.g. playing chess), and 'artificial general intelligence' on the other, which may have the potential to surpass the powers of the human brain. Somewhere along this continuum, 'general purpose AI' is a technology that enables algorithms, trained on broad data, to be applied for a variety of purposes.

The models underlying general purpose AI are known as foundation models.[3] A subset of these that are trained on and produce text are known as large language models (LLMs). These include GPT-3.5, which underpins ChatGPT.[4] General purpose AI programs such as ChatGPT, which can provide responses to a wide range of user inputs, are sometimes imprecisely referred to as 'generative AI'.

General purpose AI relies on very large datasets (e.g. most written text available on the internet). The complex models that interpret this data, known as foundation models, learn iteratively what response to draw from the data when prompted to do so (e.g. through questions asked, otherwise known as 'prompts').[5] The models learn in part autonomously, but also through human feedback, with rules set by their developers to tune their outputs. This process hones the models to provide outputs increasingly tailored to their intended audience, often refining them based on user feedback.
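As a loose illustration of that loop, and emphatically not how production models are built, here is a toy sketch: a next-word model learned from raw text (the autonomous part), whose response weights are then nudged by human ratings (the feedback part):

```python
import random
from collections import defaultdict

# Self-supervised step: learn next-word statistics from raw text,
# loosely analogous to a foundation model learning from broad data.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
counts = defaultdict(lambda: defaultdict(float))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1.0

def respond(prompt_word):
    """Sample a continuation in proportion to the learned weights."""
    options = counts[prompt_word]
    return random.choices(list(options), weights=list(options.values()))[0]

# Feedback step: human ratings adjust the weights, tuning future
# outputs -- a crude stand-in for reinforcement learning from
# human feedback (RLHF).
def give_feedback(prompt_word, response, reward):
    counts[prompt_word][response] = max(0.1, counts[prompt_word][response] + reward)

print(respond("the"))              # e.g. "cat", "dog", "mat" or "rug"
give_feedback("the", "cat", +2.0)  # a rater prefers "cat" after "the"
print(respond("the"))              # "cat" is now more likely
```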

General purpose AI programs enable foundation models to be applied by users in particular contexts. General purpose AI is capable of 'emergent behaviour',[6] where software can learn new tasks with little additional information or training.[7] This has led to models learning moderate arithmetic or another language.[8] Concerningly, AI developers are unsure of how these emergent behaviours are being learned.[9]

General purpose AI models have already been used in a range of circumstances. Whilst the most common usage to date has been in marketing and customer relations, foundation models have also been essential for radical improvements in healthcare, for instance by predicting protein structures (which will increase the speed of drug development),[10] developing antibody therapies[11] and designing vaccines.[12] AI has also been used to aid the transition to net zero, for example by informing the siting and design of new wind farms and improving the efficiency of carbon capture systems.[13]

In public services, general purpose AI can be utilised to provide highly personalised services at scale. It has already been tested in education, improving student support services at multiple universities,[14] but its biggest impact could be in schools,[15] where student data can be used to design learning activities best suited to an individual's subject understanding and style of learning, rather than via a more standardised approach to classroom learning (albeit that further testing and careful safeguards would be required). AI has also been deployed for facial recognition in policing and to identify fraudulent activity, for example.

Notes

[1] Central Digital and Data Office, 'Data Ethics Framework: glossary and methodology', last updated 16 March 2020, retrieved 23 October 2023, www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework-glossary-and-methodology
[2] Department for Science, Innovation and Technology, A pro-innovation approach to AI regulation, CP 815, The Stationery Office, 2023, www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach, p. 22.
[3] Jones E, 'Explainer: What is a foundation model?', Ada Lovelace Institute, 2023, www.adalovelaceinstitute.org/resource/foundation-models-explainer/
[4] Bommasani R and Liang P, 'Reflections on Foundation Models', Stanford Institute for Human-Centered Artificial Intelligence, 18 October 2021, https://hai.stanford.edu/news/reflections-foundation-models
[5] Visual Storytelling Team and Murgia M, 'Generative AI exists because of the transformer', Financial Times, 12 September 2023, retrieved 23 October 2023, https://ig.ft.com/generative-ai/
[6] Wei J, Tay Y, Bommasani R and others, 'Emergent Abilities of Large Language Models', Transactions on Machine Learning Research, August 2022, https://openreview.net/pdf?id=yzkSU5zdwD, p. 22.
[7] Ibid., p. 6.
[8] Ngila F, 'A Google AI model developed a skill it wasn't expected to have', Quartz, 17 April 2023, retrieved 23 October 2023, https://qz.com/google-ai-skills-sundar-pichai-bard-hallucinations-1850342984
[9] Ornes S, 'The Unpredictable Abilities Emerging From Large AI Models', Quanta Magazine, 16 March 2023, retrieved 23 October 2023, www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316/
[10] The AlphaFold team, 'AlphaFold: a solution to a 50-year-old grand challenge in biology', blog, Google DeepMind, www.deepmind.com/blog/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology
[11] Callaway E, 'How generative AI is building better antibodies', Nature, 4 May 2023, retrieved 23 October 2023, www.nature.com/articles/d41586-023-01516-w
[12] Dolgin E, 'Remarkable AI tool designs mRNA vaccines that are more potent and stable', Nature, 2 May 2023, retrieved 23 October 2023, www.nature.com/articles/d41586-023-01487-y
[13] Larosa F, Hoyas S, García-Martínez S and others, 'Halting generative AI advancements may slow down progress in climate research', Nature Climate Change, 2023, vol. 13, no. 6, pp. 497-9, www.nature.com/articles/s41558-023-01686-5; Neslen A, 'Here's how AI can help fight climate change', World Economic Forum, 11 August 2021, retrieved 23 October 2023, www.weforum.org/agenda/2021/08/how-ai-can-fight-climate-change/
[14] UNESCO, 'Artificial intelligence in education', no date, retrieved 23 October 2023, www.unesco.org/en/digital-education/artificial-intelligence
[15] Ahmed M, 'UK passport photo checker shows bias against dark-skinned women', BBC News, 8 October 2020, retrieved 23 October 2023, www.bbc.co.uk/news/technology-54349538

More here:
Artificial intelligence: definitions and implications for public services - The Institute for Government

Read More..