Page 692«..1020..691692693694..700710..»

Will Artificial Intelligence Be Amazon’s Next Growth Driver? These 7 … – The Motley Fool

Amazon (AMZN 0.38%) is a top player in the two high-growth markets of e-commerce and cloud computing, and they've helped the company generate billions of dollars in annual earnings. They've also attracted investors, pushing Amazon's market value beyond $1 trillion. Considering the company's dominance in the e-commerce and cloud areas, you can expect them to be central to the next chapters of the Amazon story.

And now, Amazon is using a tool that could boost its performance in both businesses: artificial intelligence (AI). The company has applied AI across businesses, to make operations easier and cost efficient -- and is offering the advantages of AI to its customers, too. Could this exciting technology become Amazon's next growth driver? Seven words from CEO officer Andy Jassy answer the question.

Image source: Getty Images.

First, a little background on Amazon and its relationship with AI. The company isn't new to the technology, and if you're an Amazon customer, you might have even benefited from it when you've shopped on the platform. Through AI, Amazon is able to offer you ideas of what to buy tailored to your tastes, for example. The company also uses AI to power its virtual assistant, Alexa. And the technology helps Amazon more efficiently manage inventory and transport packages.

Now, let's get to the seven words pronounced by Jassy in last week's earnings call. Jassy spoke specifically about generative AI, which involves training models so they actually can go on to create new content. The generative AI opportunity for Amazon Web Services (AWS) will equal "tens of billions of dollars of revenue," Jassy predicted.

AWS is Amazon's cloud computing business -- and the unit that generally has driven profit at the company, making up about 60% of operating income in recent years. The business also represents a huge chunk of sales, last year generating more than $80 billion. Amazon reported $514 billion in total sales.

Jassy said AWS is investing in AI at three levels with the idea of serving all of its clients' AI needs. First, the company is working on the Trainium and Inferentia chips, which power the training and inference abilities of large language models.

Second, AWS offers companies the opportunity to use existing large language models and customize them for their own purposes -- without having to manage infrastructure. This is through the Amazon Bedrock service.

Finally, AWS also is involved in the applications that run large language models through its creation of CodeWhisperer. The AI coding companion offers code recommendations based on a company's existing code -- so using it is like taking recommendations from a senior engineer who knows the company's code base very well.

For the full year, Amazon's capital investments are set to total about $50 billion, down from $59 billion last year. But, within that total, the company is increasing its investment in AI and general AWS infrastructure.

So, now, a reasonable question is: When will this investment start paying off for Amazon and for investors? The company has offered us a few clues. The billions of dollars in revenue Jassy refers to should roll in over "the next several years."

Amazon also says its leadership in cloud should help it dominate in AI -- because companies want to bring models to their data, and since this data is stored with AWS, it's easy to use the AI services that are so readily available.

It's also important to remember AWS margins are much higher than Amazon's e-commerce margins, so the company could greatly profit from developing AI services within its cloud business. In the most recent quarter, AWS operating margin as a percentage of AWS net sales was more than 30%. That's compared to an operating margin of 4.9% for the North American e-commerce business.

So, you can deduce this from Jassy's comment: AI could indeed become Amazon's next growth driver. Of course, since AI still is in its early days, results may not happen overnight -- but there could be a lot to gain down the road. Amazon's investments and progress so far mean it could become an AI winner over time, and that's great news for investors in this trillion-dollar stock.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Adria Cimino has positions in Amazon. The Motley Fool has positions in and recommends Amazon. The Motley Fool has a disclosure policy.

The rest is here:
Will Artificial Intelligence Be Amazon's Next Growth Driver? These 7 ... - The Motley Fool

Read More..

SLB, AWS, and Shell Team Up To Push OSDU Adoption – Society of Petroleum Engineers

SLB, Amazon Web Services (AWS), and Shell Global Solutions Nederland have signed a multiyear three-way collaboration agreement to deliver digital end-to-end work flows for Shell using SLB subsurface software on AWS cloud infrastructure. The collaboration is intended to deliver high performance and cost-efficient subsurface digital data, to be used by Shell and made available to the industry.

The digital work flows will use OSDU data platform standards. The collaboration builds on the existing strategic collaboration agreement between SLB and AWS and increases the availability of SLBs software on AWS.

Cloud-based computer power and reliable, available OSDU technical standard-compliant data will be a foundation for efficient subsurface work flows and help bring data to our engineers fingertips said Edwin Verdonk, executive vice president for development and subsurface at Shell. Shell is committed to ongoing support and contributions to the OSDU Forum community, as well as to accelerating the availability of commercial solutions.

The three parties share a long-term commitment to the OSDU data platform. The expansion of SLBs multiplatform strategy to include AWS reflects the potential of the platforms openness, with SLB software integrating with AWS cloud infrastructure without the need for costly and inefficient adaptation of applications.

SLB, Shell, and AWS are aligned on the importance and further deployment of the OSDU data platform, said Rakesh Jaggi, president for digital and integration at SLB. Our long commitment to openness enables us to deploy SLB solutions with AWS. This expands customers choice of cloud provider, giving them access to AWS significant service offering and cloud computing power for wider collaboration and increased efficiency.

See the original post:
SLB, AWS, and Shell Team Up To Push OSDU Adoption - Society of Petroleum Engineers

Read More..

Elon Musk releases new AI chatbot ‘Grok’ in bid to take on ChatGPT – Financial Times

What is included in my trial?

During your trial you will have complete digital access to FT.com with everything in both of our Standard Digital and Premium Digital packages.

Standard Digital includes access to a wealth of global news, analysis and expert opinion. Premium Digital includes access to our premier business column, Lex, as well as 15 curated newsletters covering key business themes with original, in-depth reporting. For a full comparison of Standard and Premium Digital, click here.

Change the plan you will roll onto at any time during your trial by visiting the Settings & Account section.

If you do nothing, you will be auto-enrolled in our premium digital monthly subscription plan and retain complete access for $69 per month.

For cost savings, you can change your plan at any time online in the Settings & Account section. If youd like to retain your premium access and save 20%, you can opt to pay annually at the end of the trial.

You may also opt to downgrade to Standard Digital, a robust journalistic offering that fulfils many users needs. Compare Standard and Premium Digital here.

Any changes made can be done at any time and will become effective at the end of the trial period, allowing you to retain full access for 4 weeks, even if you downgrade or cancel.

You may change or cancel your subscription or trial at any time online. Simply log into Settings & Account and select "Cancel" on the right-hand side.

You can still enjoy your subscription until the end of your current billing period.

We support credit card, debit card and PayPal payments.

See original here:
Elon Musk releases new AI chatbot 'Grok' in bid to take on ChatGPT - Financial Times

Read More..

Two growth stocks to buy before the end of the year – Finbold – Finance in Bold

In a market environment marked by uncertainty and debates whether S&P 500 is in correction or bear territory, investors are eyeing attractive opportunities in growth stocks trading at discounts compared to their historical valuations.

Cloudflare (NET) and Amazon (AMZN) are two such stocks that present compelling investment cases.

Cloudflare stands out with its impressive cloud services, offering enhanced speed and security for corporate applications and infrastructure.

The companys scale and engineering expertise have led to the creation of one of the fastest cloud networks globally, powering around 20% of the internet.

Cloudflares unique position equips it with deep insights into performance and cybersecurity issues across the web, further improving its ability to route traffic efficiently and prevent cyber threats.

Despite economic challenges, the company displayed robust financial performance, with a 15% increase in customer count and a 32% rise in revenue to $308 million in the second quarter.

This momentum is expected to continue, as Cloudflare integrates with both public and private IT environments, offering a unified view.

The company saw a 15% increase in its customer count, which reached 174,129, and these customers were spending an average of 15% more. As a result of these positive trends, Cloudflares revenue surged by 32%, reaching $308 million for the quarter.

Notably, the company reported a non-GAAP net income of $34 million, a significant improvement from the previous year when it was at breakeven. These strong results give investors confidence that Cloudflares momentum is likely to persist, reflecting its ability to navigate economic headwinds and continue to grow.

Amazon, known as the e-commerce leader in North America and Western Europe, continues to expand its market share.

The Amazon brands synonymous association with digital retail, combined with its extensive logistics network, has been driving its upward trajectory.

The company excels in engaging consumers and leveraging shopper data, leading to remarkable growth in its advertising business.

Amazon has become a dominant player, with 75% of US retail ad spend under its purview, positioning it as the third-largest adtech company globally.

Additionally, Amazon is a key player in cloud computing through Amazon Web Services (AWS), which holds substantial market share. The company recently reported robust Q3 results, surpassing expectations in revenue and net income.

With a presence in growing markets such as online retail, cloud computing, and advertising technology, Amazon is poised for low-double-digit revenue growth.

Considering its current valuation of 2.5 times sales, which compares favorably to its three-year average of 3.1 times sales, Amazon appears to be a strong growth stock investment opportunity.

Both Cloudflare and Amazon offer compelling investment cases, presenting investors with the potential for substantial returns while trading at discounts relative to their historical valuations.

Buy stocks now with Interactive Brokers the most advanced investment platform

Disclaimer: The content on this site should not be considered investment advice. Investing is speculative. When investing, your capital is at risk.

Read this article:
Two growth stocks to buy before the end of the year - Finbold - Finance in Bold

Read More..

Musk Teases AI Chatbot ‘Grok,’ With Real-time Access To X – Voice of America – VOA News

Elon Musk unveiled details Saturday of his new AI tool called "Grok," which can access X in real time and will be initially available to the social media platform's top tier of subscribers.

Musk, the tycoon behind Tesla and SpaceX, said the link-up with X, formerly known as Twitter, is "a massive advantage over other models" of generative AI.

Grok "loves sarcasm. I have no idea who could have guided it this way," Musk quipped, adding a laughing emoji to his post.

"Grok" comes from Stranger in a Strange Land, a 1961 science fiction novel by Robert Heinlein, and means to understand something thoroughly and intuitively.

"As soon as it's out of early beta, xAI's Grok system will be available to all X Premium+ subscribers," Musk said.

The social network that Musk bought a year ago launched the Premium+ plan last week for $16 per month, with benefits like no ads.

The billionaire started xAI in July after hiring researchers from OpenAI, Google DeepMind, Tesla and the University of Toronto.

Since OpenAI's generative AI tool ChatGPT exploded on the scene a year ago, the technology has been an area of fierce competition between tech giants Microsoft and Google, as well as Meta and start-ups like Anthropic and Stability AI.

Musk is one of the world's few investors with deep enough pockets to compete with OpenAI, Google or Meta on AI.

Building an AI model on the same scale as those companies comes at an enormous expense in computing power, infrastructure and expertise.

Musk has said he cofounded OpenAI in 2015 because he regarded the dash by Google into the sector to make big advances and score profits as reckless.

He then left OpenAI in 2018 to focus on Tesla, saying later he was uncomfortable with the profit-driven direction the company was taking under the stewardship of CEO Sam Altman.

Musk also argues that OpenAI's large language models on which ChatGPT depends on for content are overly politically correct.

Grok "is designed to have a little humor in its responses," Musk said, along with a screenshot of the interface, where a user asked, "Tell me how to make cocaine, step by step."

"Step 1: Obtain a chemistry degree and a DEA license. Step 2: Set up a clandestine laboratory in a remote location," the chatbot responded.

Eventually it said: "Just kidding! Please don't actually try to make cocaine. It's illegal, dangerous, and not something I would ever encourage."

See the original post:

Musk Teases AI Chatbot 'Grok,' With Real-time Access To X - Voice of America - VOA News

Read More..

AI pioneer Fei-Fei Li: Im more concerned about the risks that are here and now – The Guardian

Artificial intelligence (AI)

The Stanford professor and godmother of artificial intelligence on why existential worries are not her priority, and her work to ensure the technology improves the human condition

Fei-Fei Li is a pioneer of modern artificial intelligence (AI). Her work provided a crucial ingredient big data for the deep learning breakthroughs that occurred in the early 2010s. Lis new memoir, The Worlds I See, tells her story of finding her calling at the vanguard of the AI revolution and charts the development of the field from the inside. Li, 47, is a professor of computer science at Stanford University, where she specialises in computer vision. She is also a founding co-director of Stanfords Institute for Human-Centered Artificial Intelligence (HAI), which focuses on AI research, education and policy to improve the human condition, and a founder of the nonprofit AI4ALL, which aims to increase the diversity of people building AI systems.

AI is promising to transform the world in ways that dont necessarily seem for the better: killing jobs, supercharging disinformation and surveillance, and causing harm through biased algorithms. Do you take any responsibility for how AI is being used?First, to be clear, AI is promising nothing. It is people who are promising or not promising. AI is a piece of software. It is made by people, deployed by people and governed by people.

Second, of course I dont take responsibility for how all of AI is being used. Should Maxwell take responsibility for how electricity is used because he developed a set of equations to describe it? But I am a person who has a voice and I feel I have a responsibility to raise important issues which is why I created Stanford HAI. We cannot pretend AI is just a bunch of math equations and thats it. I view AI as a tool. And like other tools our relationship with it is messy. Tools are invented by and large to deliver good but there are unintended consequences and we have to understand and mitigate their risks well.

You were born in China, the only child of a middle-class family that emigrated to the US when you were 15. You faced perilous economic circumstances, your mother was in poor health and you spoke little English. How did you get from there into AI research?You laid out all the challenges, but I was also very fortunate. My parents were supportive: irrespective of our financial situation and our immigrant status, they supported that nerdy sciencey kid. Because of that, I found physics in high school and I was determined to major in it [at university]. Then, also luckily, I was awarded a nearly full scholarship to attend Princeton. There I found fascination in audacious questions around what intelligence is, and what it means for a computational machine to be intelligent. That led me to my PhD studying AI and specifically computer vision.

Your breakthrough contribution to the development of contemporary AI was ImageNet, which first came to fruition in 2009. It was a huge dataset to train and test the efficacy of AI object-recognition algorithms: more than 14m images, scraped from the web, and manually labelled into more than 20,000 noun categories thanks to crowd workers. Where did the idea come from and why was it so important?ImageNet departed from previous thinking because it was built on a very large amount of data, which is exactly what the deep learning family of algorithms [which attempt to mimic the way the human brain signals, but had been dismissed by most as impractical] needed.

The world came to know ImageNet in 2012 when it powered a deep learning neural network algorithm called AlexNet [developed by Geoffrey Hintons group at the University of Toronto]. It was a watershed moment for AI because the combination gave machines reliable visual recognition ability, really for the first time. Today when you look at ChatGPT and large language model breakthroughs, they too are built upon a large amount of data. The lineage of that approach is ImageNet.

Prior to ImageNet, I had created a far smaller dataset. But my idea to massively scale that up was discouraged by most and initially received little interest. It was only when [Hintons] group which had also been relatively overlooked started to use it that the tide turned.

Your mother inspired you to think about the practical applications of AI in caring for patients. Where has that led?Caring for my mom has been my life for decades and one thing Ive come to realise is that between me, the nurses and the doctors we dont have enough help. Theres not enough pairs of eyes. For example, my mom is a cardio patient and you need to be aware of these patients condition in a continuous way. Shes also elderly and at risk of falling. A pillar of my labs research is augmenting the work of human carers with non-invasive smart cameras and smart sensors that use AI to alert and predict.

To what extent do you worry about the existential risk of AI systems that they could gain unanticipated powers and destroy humanity as some high-profile tech leaders and researchers have sounded the alarm about, and which was a large focus of last weeks UK AI Safety Summit?I respect the existential concern. Im not saying it is silly and we should never worry about it. But, in terms of urgency, Im more concerned about ameliorating the risks that are here and now.

Where do you stand on the regulation of AI, which is currently lacking?Policymakers are now engaging in conversation, which is good. But theres a lot of hyperbole and extreme rhetoric on both sides. Whats important is that were nuanced and thoughtful. Whats the balance between regulation and innovation? Are we trying to regulate writing a piece of AI code or [downstream] where the rubber meets the road? Do we create a separate agency, or go through existing ones?

Problems of bias being baked into AI technology have been well documented and ImageNet is no exception. It has been criticised for the use of misogynist, racist, ableist, and judgmental classificatory terms, matching pictures of people to words such as alcoholic, bad person, call girl and worse. How did you feel about your system being called out and how did you address it?The process of making science is a collective one. It is important that it continues to be critiqued and iterated and I welcome honest intellectual discussion. ImageNet is built upon human language. Its backbone is a large lexical database of English called WordNet, created decades ago. And human language contains some harsh unfair terms. Despite the fact that we tried to filter out derogatory terms we did not do the perfect job. And that was why, around 2017, we went back and did more to debias it.

Should we, as some have argued, just outright reject some AI-based technology such as facial recognition in policing because it ends up being too harmful?I think we need nuance, especially about how, specifically, it is being used. I would love for facial recognition technology to be used to augment and improve the work of police in appropriate ways. But we know the algorithms have limitations [racial] bias has been an issue and we shouldnt, intentionally or unintentionally, harm people and especially specific groups. It is a multistakeholder problem.

Disinformation the creation and spread of false news and images is in the spotlight particularly with the Israel-Hamas war. Could AI, which has proved startlingly good at creating fake content, also help combat it?Disinformation is a profound problem and I think we should all be concerned about it. I think AI as a piece of technology could help. One area is in digital authentication of content: whether it is videos, images or written documents, can we find ways to authenticate it using AI? Or ways to watermark AI-generated content so it is distinguishable? AI might be better at calling out disinformation than humans in the future.

What do you think will be the next AI breakthrough?Im passionate about embodied AI [AI-powered robots that can interact with and learn from a physical environment]. It is a few years away, but it is something my lab is working on. I am also looking forward to the applications built upon the large language models of today that can truly be helpful to peoples lives and work. One small but real example is using ChatGPT-like technology to help doctors write medical summaries, which can take a long time and be very mechanical. I hope that any time saved is time back to patients.

Some have called you the godmother or mother of AI how do you feel about that?My own true nature would never give myself such a title. But sometimes you have to take a relative view, and we have so few moments where women are given credit. If I contextualise it this way, I am OK with it. Only I dont want it to be singular: we should recognise more women for their contributions.

{{topLeft}}

{{bottomLeft}}

{{topRight}}

{{bottomRight}}

{{.}}

Excerpt from:

AI pioneer Fei-Fei Li: Im more concerned about the risks that are here and now - The Guardian

Read More..

Artificial Intelligence Executive Order: Industry Reactions – Government Technology

On Oct. 30, 2023, the White House released a long-awaited executive order on artificial intelligence, which covers a wide variety of topics. Here I'll briefly cover the EO and spend more time on the industry responses, which have been numerous.

The EO itself can be found at the Whitehouse.gov briefing room: White House tackles artificial intelligence with new executive order. Heres an opening excerpt:

With this Executive Order, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems:

A memo from AI.gov covers federal government agency responsibilities and drills down on how agencies will be on the hook for tapping chief AI officers, adding risk management practices to AI and more.

Experts say its emphasis on content labeling, watermarking and transparency represents important steps forward.

What are the new rules around labeling AI-generated content?The White Houses executive order requires the Department of Commerce to develop guidance for labeling AI-generated content. AI companies will use this guidance to develop labeling and watermarking tools that the White House hopes federal agencies will adopt.

Will this executive order have teeth? Is it enforceable?While Bidens executive order goes beyond previous US government attempts to regulate AI, it places far more emphasis on establishing best practices and standards than on how, or even whether, the new directives will be enforced.

What has the reaction to the order been so far? Major tech companies have largely welcomed the executive order.

Brad Smith, the vice chair and president of Microsoft, hailed it as 'another critical step forward in the governance of AI technology.' Googles president of global affairs, Kent Walker, said the company looks 'forward to engaging constructively with government agencies to maximize AIs potentialincluding by making government services better, faster, and more secure.'

EY offers this excellent piece on key takeaways from the Biden administration executive order on AI:

The Executive Order is guided by eight principles and priorities:

On Wednesday and Thursday, delegates from 27 governments around the world, as well as the heads of top artificial intelligence companies, gathered for the worlds first AI Safety Summit at this former stately home near London, now a museum. Among the attendees: representatives of the U.S. and Chinese governments, Elon Musk, and OpenAI CEO Sam Altman.

The high-profile event, hosted by the Rishi Sunak-led U.K. government, caps a year of intense escalation in global discussions about AI safety, following the launch of ChatGPT nearly a year ago. The chatbot displayed for the first timeto many users at leastthe powerful general capabilities of the latest generation of AI systems. Its viral appeal breathed life into a formerly-niche school of thought that AI could, sooner or later, pose an existential risk to humanity, and prompted policymakers around the world to weigh whether, and how, to regulate the technology. Those discussions have been taking place amid warnings not only that todays AI tools already present manifold dangersespecially to marginalized communitiesbut also that the next generation of systems could be 10 or 100 times more powerful, not to mention more dangerous.

Reporting on the summit, The Daily Mail (UK) wrote, "Elon Musk warns AI poses 'one of the biggest threats to humanity' at Bletchley Park summit... but Meta's Nick Clegg says the dangers are 'overstated.'"

Speaking in a conversation with U.K. Prime Minister Rishi Sunak, Musk said that AI will have the potential to become the most disruptive force in history.

But one thing is clear: the new AI EO just signed by President Biden will serve as the near-term road map for most AI related research, testing and development in the US.

Read the original post:

Artificial Intelligence Executive Order: Industry Reactions - Government Technology

Read More..

DOD Releases AI Adoption Strategy > U.S – Department of Defense

The Defense Department today released its strategy to accelerate the adoption of advanced artificial intelligence capabilities to ensure U.S. warfighters maintain decision superiority on the battlefield for years to come.

The Pentagon's 2023 Data, Analytics and Artificial Intelligence Adoption Strategy builds upon years of DOD leadership in the development of AI and further solidifies the United States' competitive advantage in fielding the emerging technology, defense officials said.

"As we focused on integrating AI into our operations responsibly and at speed, our main reason for doing so has been straight forward: because it improves our decision advantage," Deputy Defense Secretary Kathleen Hicks said while unveiling the strategy at the Pentagon.

"From the standpoint of deterring and defending against aggression, AI-enabled systems can help accelerate the speed of commanders' decisions and improve the quality and accuracy of those decisions, which can be decisive in deterring a fight and winning in a fight," she said.

The latest blueprint, which was developed by the Chief Digital and AI Office, builds upon and supersedes the 2018 DOD AI Strategy and revised DOD Data Strategy, published in 2020, which have laid the groundwork for the department's approach to fielding AI-enabled capabilities.

The new document aims to provide a foundation from which the DOD can continue to leverage emerging AI capabilities well into the future.

"Technologies evolve. Things are going to change next week, next year, next decade. And what wins today might not win tomorrow," said DOD Chief Digital and AI Officer Craig Martell.

"Rather than identify a handful of AI-enabled warfighting capabilities that will beat our adversaries, our strategy outlines the approach to strengthening the organizational environment within which our people can continuously deploy data analytics and AI capabilities for enduring decision advantage," he said.

The strategy prescribes an agile approach to AI development and application, emphasizing speed of delivery and adoption at scale leading to five specific decision advantage outcomes:

Superior battlespace awareness and understanding

Adaptive force planning and application

Fast, precise and resilient kill chains

Resilient sustainment support

Efficient enterprise business operations

The blueprint also trains the department's focus on several data, analytics and AI-related goals:

Invest in interoperable, federated infrastructure

Advance the data, analytics and AI ecosystem

Expand digital talent management

Improve foundational data management

Deliver capabilities for the enterprise business and joint warfighting impact

Strengthen governance and remove policy barriers

Taken together, those goals will support the "DOD AI Hierarchy of Needs" which the strategy defines as: quality data, governance, insightful analytics and metrics, assurance and responsible AI.

In unveiling the strategy, Hicks emphasized the Pentagon's commitment to safety and responsibility while forging the AI frontier.

"We've worked tirelessly for over a decade to be a global leader in the in the fast and responsible development and use of AI technologies in the military sphere, creating policies appropriate for their specific use," Hicks said. "Safety is critical because unsafe systems are ineffective systems."

In January, the Defense Department updated its 2012 directive that governs the responsible development of autonomous weapon systems to the standards aligned with the advances in artificial intelligence.

The U.S. has also introduced a political declaration on the responsible military use of artificial intelligence, which further seeks to codify norms for the responsible use of the technology.

Hicks said the U.S. will continue to lead in the responsible and ethical use of AI, while remaining mindful of the potential dangers associated with the technology.

"By putting our values first and playing to our strengths, the greatest of which is our people, we've taken a responsible approach to AI that will ensure America continues to come out ahead," she said. "Meanwhile, as commercial tech companies and others continue to push forward the frontiers of AI, we're making sure we stay at the cutting edge with foresight, responsibility and a deep understanding of the broader implications for our nation."

Continue reading here:

DOD Releases AI Adoption Strategy > U.S - Department of Defense

Read More..

Analysis: How Bidens new executive order tackles AI risks, and where it falls short – PBS NewsHour

President Joe Biden walks across the stage to sign an executive order about artificial intelligence in the East Room at the White House in Washington, D.C., Oct. 30, 2023. REUTERS/Leah Millis

Thecomprehensive, even sweeping, set of guidelinesfor artificial intelligence that the White House unveiled in an executive order on Oct. 30, 2023, shows that the U.S. government is attempting to address the risks posed by AI.

As aresearcher of information systems and responsible AI, I believe the executive order represents an important step in building responsibleandtrustworthyAI.

WATCH: Biden signs order establishing standards to manage artificial intelligence risks

The order is only a step, however, and it leaves unresolved the issue of comprehensive data privacy legislation. Without such laws, people are at greater risk ofAI systems revealing sensitive or confidential information.

Technology is typically evaluated forperformance, cost and quality, but often not equity, fairness and transparency. In response, researchers and practitioners of responsible AI have been advocating for:

The National Institute of Standards and Technology (NIST) issued acomprehensive AI risk management frameworkin January 2023 that aims to address many of these issues. The frameworkserves as the foundationfor much of the Biden administration's executive order. The executive order alsoempowers the Department of Commerce, NIST's home in the federal government, to play a key role in implementing the proposed directives.

Researchers of AI ethics have long cautioned thatstronger auditing of AI systemsis needed to avoid giving the appearance of scrutinywithout genuine accountability. As it stands, a recent study looking at public disclosures from companies found that claims of AI ethics practicesoutpace actual AI ethics initiatives. The executive order could help by specifying avenues for enforcing accountability.

READ MORE: Nations pledge to work together to contain 'catastrophic' risks of artificial intelligence

Another important initiative outlined in the executive order is probing for vulnerabilities ofvery large-scale general-purpose AI modelstrained on massive amounts of data, such as the models that power OpenAI's ChatGPT or DALL-E. The order requires companies that build large AI systems with the potential to affect national security, public health or the economyto perform red teamingand report the results to the government. Red teaming is using manual or automated methods to attempt toforce an AI model to produce harmful output for example, make offensive or dangerous statements like advice on how to sell drugs.

Reporting to the government is important given that a recent study foundmost of the companies that make these large-scale AI systems lackingwhen it comes to transparency.

Similarly, the public is at risk of being fooled by AI-generated content. To address this, the executive order directs the Department of Commerce todevelop guidance for labeling AI-generated content. Federal agencies will be required to useAI watermarking technology that marks content as AI-generated to reduce fraud and misinformation though it's not required for the private sector.

The executive order alsorecognizes that AI systems can pose unacceptable risksofharm to civil and human rightsand the well-being of individuals: "Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms."

A key challenge for AI regulation is the absence of comprehensive federal data protection and privacy legislation. The executive order only calls on Congress to adopt privacy legislation, but it does not provide a legislative framework. It remains to be seen how the courts will interpret the executive order's directives in light of existing consumer privacy and data rights statutes.

Without strong data privacy laws in the U.S. as other countries have, the executive order could have minimal effect on getting AI companies to boost data privacy. In general, it's difficult to measure the impact that decision-making AI systems haveon data privacy and freedoms.

It's also worth noting that algorithmic transparency is not a panacea. For example, the European Union's General Data Protection Regulation legislation mandates "meaningful information about the logic involved" in automated decisions. This suggests a right to an explanation of the criteria that algorithms use in their decision-making. The mandate treats the process of algorithmic decision-making as something akin to a recipe book, meaning it assumes that if people understand how algorithmic decision-making works, they can understandhow the system affects them. But knowing how an AI system works doesn't necessarily tell youwhy it made a particular decision.

With algorithmic decision-making becoming pervasive, the White House executive order and theinternational summit on AI safetyhighlight that lawmakers are beginning to understand the importance of AI regulation, even if comprehensive legislation is lacking.

This article is republished from The Conversation. Read the original article.

Left: President Joe Biden walks across the stage to sign an executive order about artificial intelligence in the East Room at the White House in Washington, D.C., Oct. 30, 2023. REUTERS/Leah Millis

Follow this link:

Analysis: How Bidens new executive order tackles AI risks, and where it falls short - PBS NewsHour

Read More..

Cutting-edge AI raises fears about risks to humanity. Are tech and … – The Columbian

LONDON (AP) Chatbots like ChatGPT wowed the world with their ability towrite speeches, plan vacations orhold a conversationas good as or arguably even better than humans do, thanks to cutting-edge artificial intelligence systems. Now, frontier AI has become the latest buzzword as concerns grow that the emerging technology has capabilities that could endanger humanity.

Everyone from the British government to top researchers and even major AI companies themselves are raising the alarm about frontier AIs as-yet-unknown dangers and calling for safeguards to protect people from its existential threats.

The debate comes to a head Wednesday, when British Prime Minister Rishi Sunak hosts a two-day summit focused on frontier AI. Its reportedly expected to draw a group of about 100 officials from 28 countries, including U.S. Vice President Kamala Harris, European Commission President Ursula von der Leyen and executives from key U.S. artificial intelligence companies including OpenAI, Googles Deepmind and Anthropic.

The venue is Bletchley Park, a former top secret base for World War II codebreakers led by Alan Turing. The historic estate is seen as the birthplace of modern computing because it is where Turing and others famously cracked Nazi Germanys codes using the worlds first digital programmable computer.

In a speech last week,Sunak saidonly governments not AI companies can keep people safe from the technologys risks. However, he also noted that the U.K.s approach is not to rush to regulate, even as he outlined a host of scary-sounding threats, such as the use of AI to more easily make chemical or biological weapons.

See the rest here:

Cutting-edge AI raises fears about risks to humanity. Are tech and ... - The Columbian

Read More..