
NASA Names First Chief Artificial Intelligence Officer – NASA

NASA Administrator Bill Nelson on Monday named David Salvagnini as the agency's new chief artificial intelligence (AI) officer, effective immediately. The role is an expansion of Salvagnini's current role as chief data officer.

NASA uses a wide variety of AI tools to benefit humanity, from supporting missions and research projects across the agency and analyzing data to reveal trends and patterns, to developing systems capable of supporting spacecraft and aircraft autonomously.

"Artificial intelligence has been safely used at NASA for decades, and as this technology expands, it can accelerate the pace of discovery," said Nelson. "It's important that we remain at the forefront of advancement and responsible use. In this new role, David will lead NASA's efforts to guide our agency's responsible use of AI in the cosmos and on Earth to benefit all humanity."

This appointment is in accordance with President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Salvagnini now is responsible for aligning the strategic vision and planning for AI usage across NASA. He serves as a champion for AI innovation, supporting the development and risk management of tools, platforms, and training.

In his expanded capacity, Salvagnini will continue NASA's collaboration with other government agencies, academic institutions, industry partners, and other experts to ensure the agency is on the cutting edge of AI technology.

Salvagnini joined NASA in June 2023 after more than 20 years working in technology leadership in the intelligence community. Prior to his role at NASA, he served the Office of the Director of National Intelligence as director of the architecture and integration group and chief architect.

Salvagnini also worked in a variety of roles leading enterprise level IT research and development, engineering, and operations advancing data, IT, and artificial intelligence programs. David served in the Air Force for 21 years, retiring in May 2005 as a communications and computer systems officer.

NASA continues developing recommendations on leveraging emerging AI technology to best serve our goals and missions, from sifting through Earth science imagery to identifying areas of interest, to searching for data on planets outside our solar system from NASA's James Webb Space Telescope, scheduling communications from the Perseverance Mars rover through the Deep Space Network, and more.

Prior to Salvagnini's appointment, the agency's Chief Scientist Kate Calvin served as NASA's acting responsible AI official.

Learn more about artificial intelligence at NASA at:

https://www.nasa.gov/artificial-intelligence

-end-

Faith McKie / Jennifer Dooren Headquarters, Washington 202-358-1600 faith.d.mckie@nasa.gov / jennifer.m.dooren@nasa.gov

Link:
NASA Names First Chief Artificial Intelligence Officer - NASA

Read More..

This Could Be 2024’s Biggest Artificial Intelligence IPO – 24/7 Wall St.

Investing

Published: May 19, 2024 8:44 pm

The AI space is bustling with excitement and crowded with established players, but new opportunities are emerging. CoreWeave, backed by NVIDIA (NASDAQ: NVDA), is one such company to watch. Specializing in GPU-based cloud computing, CoreWeave's partnerships with tech giants like Microsoft (NASDAQ: MSFT) have driven remarkable growth, with revenue increasing 1,700% in 2023. Valued at $19 billion, CoreWeave is poised to become a significant IPO in 2024. However, the sustainability of its success may depend on its ability to compete as GPU supply constraints ease. For investors seeking pure AI exposure, CoreWeave presents a promising, though potentially volatile, opportunity.

Okay, it feels like the AI space is incredibly exciting, but also incredibly crowded at this point.

There are a couple dozen companies out there, but it seems like investors really have to pick and choose from the same relatively small basket of AI-focused companies, whether they're producing chips or generating models off of them.

There's a somewhat limited range of companies that are available to investors today.

So let's look at potential artificial intelligence IPOs.

What new companies could be coming on the market that investors can look forward to this year?

Yeah, I wanted to get the company CoreWeave on investors' radar.

It's a company that was backed by NVIDIA.

And basically, when you think about cloud computing, running computing resources, they have specialized in GPUs, the kinds of processors made by NVIDIA.

So its basically cloud computing for the age of AI.

Now, they raised $1.1 billion at the beginning of May, which gave them a valuation of $19 billion, which is up from $7 billion previously.

Now, I've been a little bit dismissive of CoreWeave just because cloud computing is an incredibly capital-intensive industry to play in, against companies with huge resources.

I'm talking about Amazon.

I'm talking about Google.

I'm talking about Microsoft.

So it might shock investors that Microsoft decided to actually partner with CoreWeave, and revenue for the company jumped 1,700% to $440 million in 2023.

And projections for this year are another 440% growth to $2.3 billion.
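As a quick back-of-the-envelope check of those figures (a sketch only, assuming the quoted percentages mean year-over-year increases rather than multiples):

```python
# Back-of-the-envelope check of the CoreWeave growth figures quoted above.
# Assumes "1,700% growth" and "440% growth" are percentage increases over the
# prior year's revenue (an assumption about wording, not a confirmed definition).

revenue_2023 = 440e6                       # reported 2023 revenue: $440 million
implied_2022 = revenue_2023 / (1 + 17.0)   # 1,700% increase over 2022
print(f"Implied 2022 revenue: ${implied_2022 / 1e6:.0f} million")       # ~$24 million

projected_2024 = revenue_2023 * (1 + 4.4)  # projected 440% increase for 2024
print(f"Projected 2024 revenue: ${projected_2024 / 1e9:.2f} billion")   # ~$2.4 billion, close to the $2.3 billion cited
```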

So you can see why this would be the hottest possible IPO right now: because of its artificial intelligence focus, its partnerships with NVIDIA and Microsoft, and its projected 440% revenue growth.

You know, you can't write a sweeter story than that.

I will issue one note of caution, though, with CoreWeave.

One of their biggest assets has been, believe it or not, that it's been very hard to buy NVIDIA GPUs.

And their partnership with NVIDIA has given them preferential access to GPUs.

It's believed that's part of the reason Microsoft signed the partnership: it was just another path to getting GPUs.

So we're seeing the supply constraints begin to slowly fade away with GPUs.

The question will be, was this company's run a feature of this era of limited GPUs when they had them, or are they truly built to compete with these giant companies for the long run?

Regardless, I think there's a good chance we could see an IPO path opening up with the kinds of valuations it's fetching.

This could be one of the biggest IPOs of 2024.

It's one company to put on your radar.

You know, one of the things that's really interesting about that too, is it's a bit more of a pure AI play.

Whereas a lot of the companies that investors have had to look at are, you know, their AI is only part of the story.

You know, Microsoft is certainly an AI company, but they're also a Microsoft Office company and a gaming company and a cloud company.

So being able to actually get that pure play AI exposure is really interesting.

Certainly, it invites a little bit more volatility as well, but for the long-term patient investor, that volatility can spell a lot of upside.

Yep, and when we look at a lot of these startups that might be looking at the IPO market, if you're a company that does servers, you need to track AI growth against the revenue you're losing on other server lines where spending is declining.

So we're going to be able to see companies where, as you said, it's all upside as AI grows.

Read more here:
This Could Be 2024's Biggest Artificial Intelligence IPO - 24/7 Wall St.

Read More..

Surprise: Artificial Intelligence (AI) Might Already Be Used to Allocate Your Retirement Savings – The Motley Fool

Investors using generalized, all-purpose AI platforms for retirement planning help may not even realize these tools' shortcomings.

Have you ever considered asking an artificial intelligence (AI)-powered assistant like Google's Gemini or Microsoft's Copilot for help allocating your retirement portfolio? If so, you wouldn't be the first. You may already be using AI to allocate your retirement savings without knowing it.

These tools are obviously powerful by virtue of being able to access much of the world's collective knowledge. A handful of brokerages and investment managers know this and have been providing such technology for some time now. They just don't tell clients that their stock picks are being powered by artificial intelligence because ... well, because the world's still not completely convinced that generalized AI-powered chat tools like OpenAI's ChatGPT or Gemini always get things right.

That's because the AI doesn't always get things right, by the way. Indeed, when it comes to retirement planning, all-encompassing platforms like the aforementioned Copilot or ChatGPT can sometimes be alarmingly problematic by missing key details about your specific situation.

With that as the backdrop, here's why you're better served by sticking with the AI tools built specifically to help you manage your retirement investments.

If you're a customer of brokerage firm Charles Schwab (SCHW 0.95%), then you've already been exposed to an AI-powered stock-picking tool. Unveiled in early 2022, this technology allows investors to narrow their investment choices down to a manageable handful based on the usual criteria like risk, growth, and valuation. But the tool also helps investors identify and capitalize on qualitative trends or themes.

It's not exactly a new idea. Investors have had access to theme-based stock suggestions for years now. Usually, it just seems like another criterion offered by a stock screener.

There's far more artificial intelligence being applied here than there seems to be on the surface. Much like Microsoft's Copilot and Google's Gemini, Schwab's tool uses AI "to find companies linked to those keywords and phrases, combing through millions of public documents, such as patents, clinical trials, and company filings." It very likely comes up with picks most investors would have otherwise never come across.
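The article doesn't spell out how Schwab's tool works internally, but the general technique of scoring company documents against a theme's keywords can be sketched with standard text-similarity tooling. A minimal, hypothetical illustration using TF-IDF and cosine similarity (the company names and document snippets are made up; this is not Schwab's implementation):

```python
# Minimal sketch of theme-based company matching via text similarity.
# Not Schwab's implementation; it only illustrates the general idea of
# scoring company documents (filings, patents) against a theme's keywords.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: one blob of filing/patent text per (made-up) company.
company_docs = {
    "AcmeChips": "GPU accelerators for training large neural networks in data centers",
    "FarmCo": "irrigation equipment and crop storage logistics for grain producers",
    "MedScanAI": "machine learning models for radiology image analysis and diagnostics",
}

theme_query = "artificial intelligence machine learning GPU data center"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(company_docs.values())
query_vec = vectorizer.transform([theme_query])

# Rank companies by how closely their documents match the theme keywords.
scores = cosine_similarity(query_vec, doc_matrix).ravel()
for name, score in sorted(zip(company_docs, scores), key=lambda x: x[1], reverse=True):
    print(f"{name}: {score:.2f}")
```

A production system would work over millions of documents, add entity resolution, and likely use learned embeddings rather than raw keyword similarity, but the ranking idea is the same.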

Perhaps more relevant to future or current retirees is Schwab's menu of so-called Intelligent Portfolios, which build and then update a portfolio pre-selected for your particular situation. What the company refers to as a robo-advisor doing this portfolio maintenance is actually a form of automated AI. Although these portfolios certainly don't outperform the broad market each and every quarter, they do offer less volatile, more consistent performances without requiring constant monitoring and management by investors.
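Schwab doesn't publish the internals of its Intelligent Portfolios, but the core robo-advisor chore of drift-based rebalancing can be sketched in a few lines. The target weights and the 5% drift threshold below are illustrative assumptions, not Schwab's actual parameters.

```python
# Illustrative sketch of threshold-based portfolio rebalancing, the kind of
# routine maintenance a robo-advisor automates. All weights, balances, and the
# 5% drift threshold are assumptions for the example.

def rebalance(holdings: dict[str, float], targets: dict[str, float],
              drift_threshold: float = 0.05) -> dict[str, float]:
    """Return dollar trades (positive = buy, negative = sell) that restore the
    target weight of any asset that has drifted more than drift_threshold."""
    total = sum(holdings.values())
    trades = {}
    for asset, target_weight in targets.items():
        current_weight = holdings.get(asset, 0.0) / total
        if abs(current_weight - target_weight) > drift_threshold:
            trades[asset] = target_weight * total - holdings.get(asset, 0.0)
    return trades

portfolio = {"US stocks": 68_000, "Intl stocks": 18_000, "Bonds": 14_000}
targets = {"US stocks": 0.60, "Intl stocks": 0.20, "Bonds": 0.20}
print(rebalance(portfolio, targets))  # trims US stocks by $8,000, adds $6,000 of bonds
```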

Image source: Getty Images.

Mutual fund giant Fidelity also has its own robo-advisor tech, powering an automated investment service called Fidelity Go. Although it's not Fidelity's invention, the brokerage firm offers its customers access to Capitalize.ai, allowing users to turn the simplest of word-based instructions into a trading algorithm.

Investment management outfit BlackRock (BLK 0.56%) -- the name behind the iShares family of exchange-traded funds (ETFs) -- is another financial services name waist-deep in AI waters.

While most of the company's interest in AI has been to empower financial advisors via a revenue-bearing platform called Aladdin, it's not unaware of the potential of AI as a stock-picking tool. For instance, commenting specifically on retirement portfolios, BlackRock explains that "by analyzing vast datasets, including satellite imagery and labor mobility data, AI can extract early insights on economic activities across regions, which can be used to inform macro (e.g., regional) and micro (e.g., company level) tilts in our portfolios."

To this end, while it doesn't offer a Schwab-like robo-advisor to individual investors, BlackRock has never denied the fact that it's using AI to help guide its fund managers' investment choices. So, in a sense, some individual investors are directly benefiting from BlackRock's AI tech.

Some might argue that general-purpose AI chat platforms like ChatGPT and Gemini have come a long way since Schwab and Fidelity first launched their AI-powered stock-picking tools. Namely, they've become incredibly user-friendly.

And that's not incorrect.

But all-purpose AI platforms still aren't great self-service options for allocating your retirement savings even when you can convince a general AI assistant to help you do so. These platforms are seemingly aware that even they are not ideally suited to offer you a full-blown portfolio plan; as such, they don't always provide more than a broad allocation theory.

Not all AI is the same. Microsoft's Copilot, OpenAI's ChatGPT, and Google's Gemini are all large language model (LLM) AI, meaning they rely on the collective wisdom created by the text available on billions and billions of web pages. By interpreting and integrating as much credible information available on the web as they can, these platforms can end up ignoring a nuance that might be uniquely important to you.

It's also worth adding that while LLMs can analyze numerical data, they're not built to deal with numbers. They often struggle when it comes to doing predictive numerical analysis. This, of course, is the kind of math retirees typically need done (and need done right).
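For example, a basic retirement question (how many years will a balance last under fixed annual withdrawals and an assumed return?) is a deterministic calculation rather than a language task. A minimal sketch, with all figures invented for illustration:

```python
# Illustrative sketch of the kind of deterministic projection retirees need.
# All figures are hypothetical; real planning would also model inflation,
# taxes, and sequence-of-returns risk.

def years_until_depleted(balance: float, annual_return: float,
                         annual_withdrawal: float, max_years: int = 60) -> int:
    """Count the years until the balance runs out (capped at max_years)."""
    years = 0
    while balance > 0 and years < max_years:
        balance = balance * (1 + annual_return) - annual_withdrawal
        years += 1
    return years

print(years_until_depleted(balance=800_000, annual_return=0.05,
                           annual_withdrawal=60_000))  # 23 years under these assumptions
```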

The missing element? Context. Gemini or Copilot "understand" words largely based on the words appearing before or after them. With numbers though, these tools don't always understand what it is you're trying to accomplish or determine, or how to do it, particularly when you're giving these AI platforms instructions using words.

That's not the case with the portfolio-allocation tools offered by Schwab or Fidelity, or perhaps eventually by BlackRock. Their technology is built from the ground up to handle numbers and meet needs specific to investors. They may require more input from you on the front end, but they're far better suited to provide you with the solutions you need in the end.

These investor-oriented AI tools also do something important that most general AI-powered chatbots don't; they ask you questions that force you to think about what it is you ultimately want to accomplish. ChatGPT, Gemini, and Copilot don't ask any follow-up, clarifying questions. In fact, these tools don't even recognize that they should be asking more probing questions of investors using them.

Connect the dots. The additional information and investment of your time in the creation of a customized retirement portfolio will very likely lead to superior returns (relative to the risk you're taking) from your retirement fund. So, stick with the tools purpose-built to allocate a portfolio specifically for you. It's worth it even if they're not the easiest and most accessible option to use.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Charles Schwab is an advertising partner of The Ascent, a Motley Fool company. James Brumley has positions in Alphabet. The Motley Fool has positions in and recommends Alphabet, Charles Schwab, and Microsoft. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft, short January 2026 $405 calls on Microsoft, and short June 2024 $65 puts on Charles Schwab. The Motley Fool has a disclosure policy.

See more here:
Surprise: Artificial Intelligence (AI) Might Already Be Used to Allocate Your Retirement Savings - The Motley Fool

Read More..

From Artificial Intelligence (AI) to iPhones to China, This Chart Sums Up Apple – sharewise

Among the "Magnificent Seven" stocks, only two have produced negative returns so far in 2024: Tesla and Apple (NASDAQ: AAPL). Although Apple shares are only down a modest 1.5% as of this writing, the company's latest earnings signal that further losses could be on the horizon.

Apple is one of the most innovative companies of all time. From the iPad to the iPhone, it has put on a decades-long master class in developing hit consumer electronics.

However, in recent years, one of the biggest knocks against the company has been its lack of innovation. Although the company introduced its Vision Pro virtual reality headset earlier this year, demand has been uninspiring as consumers have less expensive alternatives -- namely, from Meta Platforms.

Continue reading

Source Fool.com

View original post here:
From Artificial Intelligence (AI) to iPhones to China, This Chart Sums Up Apple - sharewise

Read More..

Artificial Intelligence Tool Detects Sex-Related Differences in Brain Structure – NYU Langone Health

Artificial intelligence (AI) computer programs that process MRI results show differences in how the brains of men and women are organized at a cellular level, a new study shows. These variations were spotted in white matter, tissue primarily located in the human brain's innermost layer, which fosters communication between regions.

Men and women are known to experience multiple sclerosis, autism spectrum disorder, migraines, and other brain issues at different rates and with varying symptoms. A detailed understanding of how biological sex impacts the brain is therefore viewed as a way to improve diagnostic tools and treatments. However, while brain size, shape, and weight have been explored, researchers have only a partial picture of the brains layout at the cellular level.

Led by researchers at NYU Langone Health, the new study used an AI technique called machine learning to analyze thousands of MRI brain scans from 471 men and 560 women. Results revealed that the computer programs could accurately distinguish between biological male and female brains by spotting patterns in structure and complexity that were invisible to the human eye. The findings were validated by three different AI models designed to identify biological sex using their relative strengths in either zeroing in on small portions of white matter or analyzing relationships across larger regions of the brain.

"Our findings provide a clearer picture of how a living, human brain is structured, which may in turn offer new insight into how many psychiatric and neurological disorders develop and why they can present differently in men and women," said study senior author and neuroradiologist Yvonne W. Lui, MD.

Dr. Lui, a professor and vice chair for research in the Department of Radiology at NYU Grossman School of Medicine, notes that previous studies of brain microstructure have largely relied on animal models and human tissue samples. In addition, the validity of some of these past findings has been called into question for relying on statistical analyses of hand-drawn regions of interest, meaning researchers needed to make many subjective decisions about the shape, size, and location of the regions they chose. Such choices can potentially skew the results, says Dr. Lui.

The new study results, published online May 14 in the journal Scientific Reports, avoided that problem by using machine learning to analyze entire groups of images without asking the computer to inspect any specific spot, which helped to remove human biases, the authors say.

For the research, the team started by feeding the AI programs existing examples of brain scans from healthy men and women, while also telling the programs the biological sex of each scan. Since these models were designed to use complex statistical and mathematical methods to get smarter over time as they accumulated more data, they eventually learned to distinguish biological sex on their own. Importantly, the programs were restricted from using overall brain size and shape to make their determinations, says Dr. Lui.

According to the results, all of the models correctly identified the sex of subject scans between 92 percent and 98 percent of the time. Several features in particular helped the machines make their determinations, including how easily and in what direction water could move through brain tissue.
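The study's models are more sophisticated than this, but the workflow described above, training a classifier on per-region white-matter diffusion features while withholding overall brain size and shape, can be sketched roughly as follows. The feature array and labels here are random placeholders, not the study's data.

```python
# Schematic sketch of the described workflow: classify biological sex from
# white-matter diffusion features (e.g., per-region diffusivity measures),
# deliberately excluding overall brain size and shape. Data are random
# placeholders, so accuracy will hover near chance; the study reports 92-98%.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_regions = 1031, 48                 # 471 men + 560 women; region count is illustrative
X = rng.normal(size=(n_subjects, n_regions))     # stand-in for diffusion metrics per region
y = rng.integers(0, 2, size=n_subjects)          # stand-in labels (0 = male, 1 = female)

# Note: overall brain volume is intentionally NOT a feature, mirroring the
# restriction mentioned in the article.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```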

"These results highlight the importance of diversity when studying diseases that arise in the human brain," said study co-lead author Junbo Chen, MS, a doctoral candidate at NYU Tandon School of Engineering.

"If, as has been historically the case, men are used as a standard model for various disorders, researchers may miss out on critical insight," added study co-lead author Vara Lakshmi Bayanagari, MS, a graduate research assistant at NYU Tandon School of Engineering.

Bayanagari cautions that while the AI tools could report differences in brain-cell organization, they could not reveal which sex was more likely to have which features. She adds that the study classified sex based on genetic information and only included MRIs from cisgendered men and women.

According to the authors, the team next plans to explore the development of sex-related brain structure differences over time to better understand environmental, hormonal, and social factors that could play a role in these changes.

Funding for the study was provided by the National Institutes of Health grants R01NS119767, R01NS131458, and P41EB017183, as well as by the United States Department of Defense grant W81XWH2010699.

In addition to Dr. Lui, Chen, and Bayanagari, other NYU Langone Health and NYU researchers involved in the study were Sohae Chung, PhD, and Yao Wang, PhD.

Shira Polan Phone: 212-404-4279 Shira.Polan@NYULangone.org

Read the rest here:
Artificial Intelligence Tool Detects Sex-Related Differences in Brain Structure - NYU Langone Health

Read More..

Predictive Quantum Artificial Intelligence Lab 1950.Ai Launches to Advance AI through Research and Collaboration – InvestorsObserver

The 17th International Conference on Hybrid Artificial Intelligence Systems (HAIS 2022), hosted by 1950.ai and led by Dr. Shahid Masood, convened in Salamanca, Spain. Over 200 global experts gathered to discuss AI advancements, ethics, and applications in healthcare, finance, and autonomous systems, solidifying Salamanca's status as a hub for AI research and innovation.

Brussels, Belgium--(Newsfile Corp. - May 19, 2024) - Dr. Shahid Masood, a renowned AI expert, has announced the launch of the Predictive Quantum Artificial Intelligence Lab 1950.Ai. The lab is dedicated to advancing the field of predictive AI through research and collaboration.

The lab's mission is to harness the power of AI to make predictions and drive decision-making in various industries, from healthcare to finance to transportation. Through its research reports, the lab aims to provide cutting-edge insights that will help shape the future of this field.

Dr. Shahid Masood explaining quantum computing.

The lab's diverse team of researchers, scientists, academics, and analysts is dedicated to advancing the field of predictive AI. They come from various backgrounds and have expertise in different areas of AI, including machine learning, deep learning, natural language processing, and computer vision.

The lab is located in the heart of Brussels, Belgium, where the digital landscape is constantly evolving.

The lab is currently working on several research projects, including the development of predictive models for disease diagnosis, the optimization of transportation networks, and the prediction of stock prices. The lab hopes these projects will have a significant impact on various industries and help shape the future of AI.

Dr. Shahid Masood explaining artificial intelligence at a conference.

Dr. Masood is excited about the potential of the lab and its impact on the world. He believes that predictive AI has the power to transform industries and improve people's lives. He hopes that the lab's research will inspire others to join him in this quest to advance AI through research and collaboration.

For more information, please contact:

Webmail: ceo@1950.ai Person Name: Dr. Shahid Masood Website URL: https://www.1950.ai/ Youtube: https://www.youtube.com/@DrShahidMasoodYouTube Company Name: 1950.ai

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/209763

Go here to read the rest:
Predictive Quantum Artificial Intelligence Lab 1950.Ai Launches to Advance AI through Research and Collaboration - InvestorsObserver

Read More..

Council of Europe adopts first international treaty on artificial intelligence – Council of Europe

The Council of Europe has adopted the first-ever international legally binding treaty aimed at ensuring the respect of human rights, the rule of law and democracy legal standards in the use of artificial intelligence (AI) systems. The treaty, which is also open to non-European countries, sets out a legal framework that covers the entire lifecycle of AI systems and addresses the risks they may pose, while promoting responsible innovation. The convention adopts a risk-based approach to the design, development, use, and decommissioning of AI systems, which requires carefully considering any potential negative consequences of using AI systems.

The Council of Europe Framework Convention on artificial intelligence and human rights, democracy, and the rule of law was adopted in Strasbourg during the annual ministerial meeting of the Council of Europe's Committee of Ministers, which brings together the Ministers for Foreign Affairs of the 46 Council of Europe member states.

Council of Europe Secretary General Marija Pejčinović Burić said: "The Framework Convention on Artificial Intelligence is a first-of-its-kind, global treaty that will ensure that Artificial Intelligence upholds people's rights. It is a response to the need for an international legal standard supported by states in different continents which share the same values to harness the benefits of Artificial intelligence, while mitigating the risks. With this new treaty, we aim to ensure a responsible use of AI that respects human rights, the rule of law and democracy."

The convention is the outcome of two years' work by an intergovernmental body, the Committee on Artificial Intelligence (CAI), which brought together to draft the treaty the 46 Council of Europe member states, the European Union and 11 non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, and Uruguay), as well as representatives of the private sector, civil society and academia, who participated as observers.

The treaty covers the use of AI systems in the public sector - including companies acting on its behalf - and in the private sector. The convention offers parties two ways of complying with its principles and obligations when regulating the private sector: parties may opt to be directly obliged by the relevant convention provisions or, as an alternative, take other measures to comply with the treaty's provisions while fully respecting their international obligations regarding human rights, democracy and the rule of law. This approach is necessary because of the differences in legal systems around the world.

The convention establishes transparency and oversight requirements tailored to specific contexts and risks, including identifying content generated by AI systems. Parties will have to adopt measures to identify, assess, prevent, and mitigate possible risks and assess the need for a moratorium, a ban or other appropriate measures concerning uses of AI systems where their risks may be incompatible with human rights standards.

They will also have to ensure accountability and responsibility for adverse impacts and that AI systems respect equality, including gender equality, the prohibition of discrimination, and privacy rights. Moreover, parties to the treaty will have to ensure the availability of legal remedies for victims of human rights violations related to the use of AI systems and procedural safeguards, including notifying any persons interacting with AI systems that they are interacting with such systems.

As regards the risks for democracy, the treaty requires parties to adopt measures to ensure that AI systems are not used to undermine democratic institutions and processes, including the principle of separation of powers, respect for judicial independence and access to justice.

Parties to the convention will not be required to apply the treaty's provisions to activities related to the protection of national security interests but will be obliged to ensure that these activities respect international law and democratic institutions and processes. The convention will not apply to national defence matters nor to research and development activities, except when the testing of AI systems may have the potential to interfere with human rights, democracy or the rule of law.

In order to ensure its effective implementation, the convention establishes a follow-up mechanism in the form of a Conference of the Parties.

Finally, the convention requires that each party establishes an independent oversight mechanism to oversee compliance with the convention, and raises awareness, stimulates an informed public debate, and carries out multistakeholder consultations on how AI technology should be used. The framework convention will be opened for signature in Vilnius (Lithuania) on 5 September on the occasion of a conference of Ministers of Justice.

Explanatory report of the Convention

Read the original:
Council of Europe adopts first international treaty on artificial intelligence - Council of Europe

Read More..

The ‘dead internet theory’ makes eerie claims about an AI-run web. The truth is more sinister – The Conversation Indonesia

If you search "shrimp Jesus" on Facebook, you might encounter dozens of images of artificial intelligence (AI)-generated crustaceans meshed in various forms with a stereotypical image of Jesus Christ.

Some of these hyper-realistic images have garnered more than 20,000 likes and comments. So what exactly is going on here?

The "dead internet theory" has an explanation: AI- and bot-generated content has surpassed the human-generated internet. But where did this idea come from, and does it have any basis in reality?

The dead internet theory essentially claims that activity and content on the internet, including social media accounts, are predominantly being created and automated by artificial intelligence agents.

These agents can rapidly create posts alongside AI-generated images designed to farm engagement (clicks, likes, comments) on platforms such as Facebook, Instagram and TikTok. As for shrimp Jesus, it appears AI has learned it's the latest mix of absurdity and religious iconography to go viral.

But the dead internet theory goes even further. Many of the accounts that engage with such content also appear to be managed by artificial intelligence agents. This creates a vicious cycle of artificial engagement, one that has no clear agenda and no longer involves humans at all.

At first glance, the motivation for these accounts to generate interest may appear obvious: social media engagement leads to advertising revenue. If a person sets up an account that receives inflated engagement, they may earn a share of advertising revenue from social media organisations such as Meta.

So, does the dead internet theory stop at harmless engagement farming? Or perhaps beneath the surface lies a sophisticated, well-funded attempt to support autocratic regimes, attack opponents and spread propaganda?

While the shrimp Jesus phenomenon may seem harmless (albeit bizarre), there is potentially a longer-term ploy at hand.

As these AI-driven accounts grow in followers (many fake, some real), the high follower count legitimises the account to real users. This means that an army of accounts is being created out there: accounts with high follower counts that could be deployed by those with the highest bid.

This is critically important, as social media is now the primary news source for many users around the world. In Australia, 46% of 18 to 24-year-olds nominated social media as their main source of news last year. This is up from 28% in 2022, taking over from traditional outlets such as radio and TV.

Already, there is strong evidence social media is being manipulated by these inflated bots to sway public opinion with disinformation, and it's been happening for years.

In 2018, a study analysed 14 million tweets over a ten-month period in 2016 and 2017. It found bots on social media were significantly involved in disseminating articles from unreliable sources. Accounts with high numbers of followers were legitimising misinformation and disinformation, leading real users to believe, engage and reshare bot-posted content.

This approach to social media manipulation has been found to occur after mass shooting events in the United States. In 2019, a study found bot-generated posts on X (formerly Twitter) heavily contribute to the public discussion, serving to amplify or distort potential narratives associated with extreme events.

More recently, several large-scale, pro-Russian disinformation campaigns have aimed to undermine support for Ukraine and promote pro-Russian sentiment.

Uncovered by activists and journalists, the coordinated efforts used bots and AI to create and spread fake information, reaching millions of social media users.

On X alone, the campaign used more than 10,000 bot accounts to rapidly post tens of thousands of messages of pro-Kremlin content attributed to US and European celebrities seemingly supporting the ongoing war against Ukraine.

This scale of influence is significant. Some reports have even found that nearly half of all internet traffic in 2022 was made by bots. With recent advancements in generative AI, such as OpenAI's ChatGPT models and Google's Gemini, the quality of fake content will only improve.

Social media organisations are seeking to address the misuse of their platforms. Notably, Elon Musk has explored requiring X users to pay for membership to stop bot farms.

Social media giants are capable of removing large amounts of detected bot activity, if they so choose. (Bad news for our friendly shrimp Jesus.)

The dead internet theory is not really claiming that most of your personal interactions on the internet are fake.

It is, however, an interesting lens through which to view the internet. That it is no longer for humans, by humans: this is the sense in which the internet we knew and loved is dead.

The freedom to create and share our thoughts on the internet and social media is what made it so powerful. Naturally, it is this power that bad actors are seeking to control.

The dead internet theory is a reminder to be sceptical and navigate social media and other websites with a critical mind.

Any interaction, trend, and especially overall sentiment could very well be synthetic. Designed to slightly change the way in which you perceive the world.

Link:
The 'dead internet theory' makes eerie claims about an AI-run web. The truth is more sinister - The Conversation Indonesia

Read More..

Advocates: Pass N.Y. bills over artificial intelligence – Spectrum News

Advocates are pushing for the passage of bills in Albany to protect New Yorkers from the negative impacts of artificial intelligence.

They're calling for more industry and legal accountability in the use of AI in the state.

That includes legislation to protect people from the biases of AI systems in employment, policing and other high-risk areas.

Lawmakers are also looking to regulate the government use of artificial intelligence and to ensure there's transparency within state agencies.

They say these bills are needed as AI continues to evolve.

"We're really at a critical moment where whether we decide to step up to this challenge and proactivily regulate in a responsible way can determine whether our future is built by every single one of us that protects our rights, or a future that will be written for us by the few," Democratic state Sen. Kristen Gonzalez, of Queens, said at the Capitol on Wednesday.

More here:
Advocates: Pass N.Y. bills over artificial intelligence - Spectrum News

Read More..

U.S. elections face more threats from foreign actors and artificial intelligence – NPR

Director of National Intelligence Avril Haines testifying before a Senate hearing earlier this month. During a May 15 hearing, she identified Russia as the greatest foreign threat to this year's U.S. elections. (Win McNamee/Getty Images)

U.S. elections face more threats than ever from foreign actors, enabled by rapid developments in artificial intelligence, the country's top intelligence official told lawmakers on Wednesday.

Federal, state and local officials charged with protecting voting integrity face a "diverse and complex" threat landscape, Director of National Intelligence Avril Haines told the Senate Intelligence Committee at a hearing about risks to the 2024 elections. But she also said the federal government "has never been better prepared" to protect elections, thanks to lessons learned since Russia tried to influence voters in 2016.

This year, "Russia remains the most active foreign threat to our elections," Haines said. Using a "vast multimedia influence apparatus" encompassing state media, intelligence services and online trolls, Russia's goals "include eroding trust in U.S. democratic institutions, exacerbating sociopolitical divisions in the United States, and degrading Western support to Ukraine."

But it's a crowded field, with China, Iran and other foreign actors also trying to sway American voters, Haines added.

In addition, she said the rise of new AI technologies that can create realistic "deepfakes" targeting candidates, along with commercial firms through which foreign actors can launder their activities, is enabling more sophisticated influence operations at larger scale that are harder to attribute.

Wednesday's hearing was the first in a series focused on the election, said committee chair Sen. Mark Warner, D-Va., as lawmakers seek to avoid a repeat of 2016, when Russia's meddling caught lawmakers, officials and social media executives off-guard.

Since then, "the barriers to entry for foreign malign influence have unfortunately become incredibly small," Warner said. Foreign adversaries have more incentives to intervene in U.S. politics in an effort to shape their own national interests, he added, and at the same time, Americans' trust in institutions has eroded across the political spectrum.

Sen. Marco Rubio of Florida, the committee's top Republican, questioned how those tasked with protecting the election would themselves be received in a climate of distrust. He raised the specter of a fake video targeting himself or another candidate in the days before November's election.

"Who is in charge of letting people know, this thing is fake, this thing is not real?" he asked. "And I ask myself, whoever is in charge of it, what are we doing to protect the credibility of the entity that is ... saying it, so that the other side does not come out and say, 'Our own government is interfering in the election'?"

Haines said in some cases it would make sense for her or other federal agencies to debunk false claims, while in others it may be better for state or local officials to speak out.

Read more here:
U.S. elections face more threats from foreign actors and artificial intelligence - NPR

Read More..