
US and Great Britain Forge AI Safety Pact – PYMNTS.com

The U.S. and U.K. have pledged to work together on safe AI development.

The agreement, inked on Monday (April 1) by U.S. Commerce Secretary Gina Raimondo and U.K. Technology Secretary Michelle Donelan, will see the AI Safety Institutes of both countries collaborate on tests for the most advanced artificial intelligence (AI) models.

The partnership will take effect immediately and is intended to allow both organizations to work seamlessly with one another, the Department of Commerce said in a news release.

AI continues to develop rapidly, and both governments recognize the need to act now to ensure a shared approach to AI safety which can keep pace with the technology's emerging risks.

In addition, the two countries agreed to forge similar partnerships with other countries to foster AI safety around the world. The institutes also plan to conduct at least one joint test on a publicly accessible model and to tap into a collective pool of expertise by exploring personnel exchanges between both organizations.

The agreement comes days after the White House unveiled a policy requiring federal agencies to identify and mitigate the potential risks of AI and to designate a chief AI officer.

Agencies must also create detailed and publicly accessible inventories of their AI systems. These inventories will highlight use cases that could potentially impact safety or civil rights, such as AI-powered healthcare or law enforcement decision-making.

Speaking to PYMNTS following this announcement, Jennifer Gill, vice president of product marketing at Skyhawk Security, stressed the need for the policy to require uniform standards across all agencies.

"If each chief AI officer manages and monitors the use of AI at their discretion for each agency, there will be inconsistencies, which leads to gaps, which leads to vulnerabilities," said Gill, whose company specializes in AI integrations for cloud security.

"These vulnerabilities in AI can be exploited for a number of nefarious uses. Any inconsistency in the management and monitoring of AI use puts the federal government as a whole at risk."

This year also saw the National Institute of Standards and Technology (NIST) launch the Artificial Intelligence Safety Institute Consortium (AISIC), which is designed to promote collaboration between industry and government to foster safe AI use.

"To unlock AI's full potential, we need to ensure there is trust in the technology," Mastercard CEO Michael Miebach said at the time of the launch. "That starts with a common set of meaningful standards that protects users and sparks inclusive innovation."

Mastercard is among the more than 200 members of the group, composed of tech giants such as Amazon, Meta, Google and Microsoft, schools like Princeton and Georgia Tech, and a variety of research groups.


OneTrust Joins Responsible Artificial Intelligence Institute – PR Newswire

OneTrust partners with RAI Institute to contribute to its development of tangible governance tools for trustworthy, safe, and fair Artificial Intelligence

ATLANTA, April 3, 2024 /PRNewswire/ -- OneTrust, the market-defining leader for trust intelligence, today announced that it has joined the Responsible Artificial Intelligence Institute (RAI Institute), the prominent non-profit enabling global organizations to harness the power of responsible AI.

For over five years, OneTrust has led the market in privacy management software, with offerings designed to operationalize integrated risk management. As AI adoption accelerates, OneTrust recognizes responsible AI practices are critical for building trust and unlocking AI's full potential across industries. Last May, the Company introduced OneTrust AI Governance, a comprehensive solution designed to help organizations inventory, assess, and monitor the wide range of risks associated with AI. As organizations use AI and machine learning (ML) to process large amounts of data and drive innovation, AI Governance provides visibility and control over data used and risks generated by AI models. The end-to-end solution helps organizations to operationalize regulatory requirements for laws such as the EU AI Act and align with key industry frameworks such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF), Organization for Economic Co-operation and Development (OECD) Framework for the Classification of AI Systems, and more.


"We're delighted to welcome OneTrust as a member of the Responsible AI Institute," said Alyssa Lefaivre kopac, Head of Global Partnerships & Growth at Responsible AI Institute. "OneTrust's governance solutions and deep expertise in privacy, security, and ethics will be invaluable in our collective work to shape the practices, policies, and standards that enable AI for good across all sectors."

"Responsible AI is not an option, but a necessity in today's business landscape," said Jisha Dymond, Chief Ethics & Compliance Officer at OneTrust. "With OneTrust, organizations can not only observe the AI revolution, but also actively enable innovation. By implementing responsible AI practices, companies build trust with customers, regulators, and society at large, and facilitate a future where technology and human ingenuity converge to create unprecedented value. We look forward to partnering with RAI Institute as we continue to build a responsible AI future together."

This partnership with RAI Institute builds upon OneTrust's commitment to ethical and safe AI deployment. The Company is also a foundational supporter of the International Association of Privacy Professionals (IAPP) AI Governance Center, created to address the industry's need for AI governance professionals. Through its own expert-led OneTrust for AI Governance Masterclass webinar series, OneTrust enables compliance and technology professionals alike to mature their technology-driven AI compliance programs and foster responsible AI practices across their businesses.

About the Responsible AI Institute
Founded in 2016, Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations.

Members include leading companies such as ATB Financial, Amazon Web Services, Boston Consulting Group, Yum! Brands, Shell, Chevron, Roche, and many others dedicated to bringing responsible AI to all industry sectors.

About OneTrust
OneTrust enables every organization to transform siloed compliance initiatives into world-class, coordinated trust programs with the category-defining Trust Intelligence Platform. Customers use OneTrust to build and demonstrate trust, measure and manage risk, and go beyond compliance. As trust has emerged as the ultimate enabler for innovation, OneTrust delivers the intelligence and automation organizations need to meet critical program goals across data privacy, responsible AI, security, ethics, and ESG. http://www.onetrust.com

© 2024 OneTrust LLC. All rights reserved. OneTrust and the OneTrust logo are trademarks or registered trademarks of OneTrust LLC in the United States and other jurisdictions. All other brand and product names are trademarks or registered trademarks of their respective holders.

Media Contacts
Ainslee Shea, OneTrust [emailprotected] +1 (404) 855-0803

Nicole McCaffrey, Responsible AI Institute [emailprotected] +1 (440) 785-3588

SOURCE OneTrust


Artificial Intelligence Rockets to the Top of the Manufacturing Priority List – Bain & Company

This article is part of Bain's Global Machinery & Equipment Report 2024

As machinery and equipment companies build new tech muscle, they are investing heavily in artificial intelligence (AI). In fact, the AI market in industrial machinery, which includes intelligent hardware, software, and services, is expected to reach $5.46 billion in 2028, according to the Business Research Company.

Why? From supply chain volatility to cost pressures to the shortage of skilled workers, AI can help address top challenges facing machinery and equipment executives.

Many machinery executives increasingly see AI adoption as an urgent task. In the broader advanced manufacturing industry, 75% of executives say that adopting emerging technologies such as AI is their top priority in engineering and R&D, according to Bain research. Yet, while many companies have collected a mountain of data, a basic enabler of AI, most are not using it.

Leading advanced machinery companies offer a clue to success. Before investing in AI, they identify their core business challenges and how AI can help them improve processes and overall performance. That includes evaluating how specific types of AI, such as machine learning (ML) or generative AI, use data to create value. Early movers are using AI to solve key problems in procurement, assembly, maintenance, quality control, and warehouse logistics.

Some forward thinkers are beginning to deploy generative AI to synthesize huge volumes of unstructured data in order to revolutionize knowledge work, such as retrieving and summarizing relevant information from across the enterprise to answer questions from employees. Others are experimenting with generative AI service bots that partner with field technicians, for instance, to recognize more quickly when maintenance is required and to improve the quality of that work.
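To make the enterprise Q&A idea above concrete, here is a minimal sketch of the retrieval step behind such an assistant. The document snippets and question are invented for illustration, and the generative summarization step that would follow retrieval is omitted; this is not a description of any specific vendor's system.

# Retrieval step of an enterprise Q&A assistant (illustrative only; documents,
# question, and scoring approach are assumptions, not drawn from the article).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Maintenance manual: the hydraulic press requires lubrication every 500 operating hours.",
    "HR policy: overtime on weekend shifts must be approved by the plant supervisor.",
    "Quality report: weld seam defects on line 3 dropped 12% after the fixture upgrade.",
]
question = "How often does the hydraulic press need lubrication?"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)
query_vec = vectorizer.transform([question])

# Rank documents by similarity to the question; in a full system, a generative
# model would then summarize the top hit(s) into a natural-language answer.
scores = cosine_similarity(query_vec, doc_matrix).ravel()
best = scores.argmax()
print(f"Most relevant document (score {scores[best]:.2f}): {documents[best]}")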

Those who are pulling ahead are also integrating AI solutions into processes and back-end systems.

Explore the use cases with the highest potential.

Artificial intelligence is a broad term that encompasses technologies such as basic data analytics, ML, deep learning, and generative AI. Winning companies start by identifying their top business challenges and then selecting the specific AI solutions best suited to solve their unique key issues.

Ongoing disruptions such as Covid-19 and geopolitical instability have forced organizations to improve supply chain resilience and sustainability. The challenge is moving beyond reacting to problems after they happen. AI, however, can report supply chain bottlenecks in real time, predict potential disruptions in advance, and enable proactive planning to mitigate impacts to supply chains from an end-to-end business perspective.

AI can also track employee productivity and measure costs across all levels. AI helps companies shift their business models from simply selling machinery to offering machinery as a service, in which after-sales support and maintenance become part of the core offering. This includes applying ML to predict when equipment or parts need replacement, thereby reducing unplanned production downtime.
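As a rough illustration of the predictive-maintenance idea mentioned above, the sketch below trains a classifier on hypothetical sensor readings to flag parts likely to fail soon. The data schema, feature names, and model choice are invented for illustration and are not drawn from the article or any specific manufacturer's setup.

# Minimal predictive-maintenance sketch on synthetic sensor data (illustrative only).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical sensor log: vibration, temperature, runtime hours, and a failure label.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "vibration_rms": rng.normal(1.0, 0.3, n),
    "temperature_c": rng.normal(70, 8, n),
    "runtime_hours": rng.uniform(0, 10000, n),
})
# Synthetic label: failures become more likely as vibration and runtime climb.
risk = 0.002 * df["runtime_hours"] / 100 + 2.0 * (df["vibration_rms"] - 1.0)
df["fails_within_48h"] = (risk + rng.normal(0, 0.5, n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="fails_within_48h"), df["fails_within_48h"],
    test_size=0.2, random_state=0, stratify=df["fails_within_48h"])

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Flag equipment whose predicted failure probability exceeds a maintenance threshold.
probs = model.predict_proba(X_test)[:, 1]
print(classification_report(y_test, probs > 0.5))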

Finding qualified workers remains a challenge across the industry, especially for more complex engineering tasks. AI provides workers with information and insights to free them to focus on activities that add more value. It can also help train and upskill new workers to quickly come up to speed.

Generative AI in manufacturing is in its infancy, but many believe it will transform the sector. Specifically, the large language models that underpin generative AI fundamentally change how people interact with systems and documents. Generative AI can surface hidden insights from unstructured data that can lead to dramatic improvements in productivity, customer service, and financial performance.

More than 90% of machinery companies already collect and store production data, according to a recent Bain survey. But most do not know how to derive value from that data. One reason is a lack of understanding about where AI can deliver the greatest returns.

Front-runners are already using AI to solve a variety of supply chain challenges, from cutting costs in procurement to using predictive monitoring to identify failures before they occur in industrial assets, equipment, and infrastructure. In short, AI enables many digital applications that are top of mind for the industry (see Figure 1).

Three specific areas (of many) in which companies are cashing in on AI include minimizing assembly defects/improving quality control; boosting productivity; and streamlining warehouse management.

Minimizing assembly defects/improving quality control: AI can help identify mistakes in real time to improve assembly efficiency and product quality. For example, one machinery original equipment manufacturer (OEM) adopted AI-based video processing to track manual assembly activities, automate quality checks of manual assembly activities, and help optimize the use of resources and employees. Those solutions helped the machinery OEM reduce failures in the assembly process by as much as 70% while also cutting down efforts for quality checks by 50% for some lines.

In another case, a material supplier for machinery OEMs used computer vision to detect foreign objects in chemical bulk material instead of relying only on human inspections. The accuracy of the automated inspection increased by 80%, to greater than 99%, compared with today's mainly manual visual inspection.

Boosting productivity: AI can also supercharge employee productivity, providing a boost to companies short on staff. One machinery manufacturer adopted an AI-powered industrial copilot that converts natural language into code and translates old programming languages into natural language, completing both tasks more expeditiously and at a higher quality than human developers. Among other benefits, engineers using this AI solution were approximately 5% more productive, according to preliminary results. Downtime costs also went down as there were fewer data deployment errors and issues were mitigated more quickly.

Streamlining warehouse management: AI can also help ensure that warehouses operate as efficiently as possible, meaning that they carry the appropriate items to meet demand and minimize extra inventory. One equipment machinery company, for instance, adopted an AI-based inventory management system that helped it minimize overstock while still fulfilling all orders.

AI also provides more flexible job production planning so that companies can allocate specific assembly activities to the most relevant assembly expert at a given time to maximize productivity. As a result, the manufacturer can simultaneously enhance the quality of its products and adjust processes to meet specific customer needs. In short, AI allows companies to customize and personalize without negatively affecting planning, productivity, and costs on the shop floor.

Scaling AI and taking successful AI pilots from one manufacturing line to other lines or other plants is not easy, but it is important. A 2022 survey by MIT Technology Review Insights showed that scaling AI use cases to generate value is the top priority for 78% of executives across industries (see Figure 2).

Top-performing companies monitor their return on investment throughout the AI implementation and ensure that they factor in all costs. While this may seem obvious, many companies forget to log computation costs on the cloud, for instance. Leaders also conduct regular governance checks (e.g., every quarter) to reassess their AI investment decisions.

Legacy software systems and fragmented data can also often pose problems as they create a chaotic data environment with low-quality data. The best teams standardize analytics systems and platforms to enable multiple AI use cases. They also use unified data models that allow them to merge many fragmented data sources into one.

To keep pace with rapid changes in AI, leaders use modular and loosely coupled components, connected via microservices, to make it easy to replace software. When integrating generative AI, they ensure that these new components enhance the existing data architecture. Successful companies also verify that efficient processes and tools (MLOps/DevOps) are factored into the technical architecture so that they can deploy AI at scale.

Leaders in AI also embrace a test-and-learn approach. Machinery engineers typically favor rigorous thinking and perfect product design. Software and AI work, however, require a test-and-learn, fail-fast approach using Agile methodology. In successful AI implementations, plant engineers and AI experts collaborate closely to create, test, and refine AI models until they meet the company's goals.

Finally, machinery companies often struggle to find and retain employees with strong AI skills. To build in-house AI capability, many are bringing in external AI experts to train existing employees and increase data literacy throughout the entire workforce.

To retain skilled workers who may feel that some aspects of the work are uninteresting, successful companies have several approaches. Some are automating simple AI tasks so that experts can focus on more data- and analytics-intensive work. Others are developing expert squads to handle more complex AI use cases and crack data insight problems.

While each company faces different AI challenges, the leaders are addressing three core dimensions. First, they determine where AI unlocks the greatest value for the business. Second, they tailor the technology to address core problems and integrate it with their IT and operational technology setup. That means making sure that the technology is flexible so that it can be applied to immediate use cases but is also scalable in the future. Finally, they are developing a data culture that integrates AI skills and AI-enabled ways of working into the operating model.

AI has captured the imagination of machinery executives. As a growing number of companies experiment with and deploy new solutions, they are raising the industry bar for productivity and performance. Companies that defer investing will need to run twice as fast to keep pace.

The authors would like to express thanks to Josef Waltl, Kevin Denker, Robert Recknagel, Dennis Kuesters, Leonides De Ocampo, Marian Zoll, and Mary Stroncek for their contributions to this article.


FACT SHEET: Vice President Harris Announces OMB Policy to Advance Governance, Innovation, and Risk … – The White House

Administration announces completion of 150-day actions tasked by President Biden's landmark Executive Order on AI

Today, Vice President Kamala Harris announced that the White House Office of Management and Budget (OMB) is issuing OMB's first government-wide policy to mitigate risks of artificial intelligence (AI) and harness its benefits, delivering on a core component of President Biden's landmark AI Executive Order. The Order directed sweeping action to strengthen AI safety and security, protect Americans' privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more. Federal agencies have reported that they have completed all of the 150-day actions tasked by the E.O., building on their previous success of completing all 90-day actions.

This multi-faceted direction to Federal departments and agencies builds upon the Biden-Harris Administration's record of ensuring that America leads the way in responsible AI innovation. In recent weeks, OMB announced that the President's Budget invests in agencies' ability to responsibly develop, test, procure, and integrate transformative AI applications across the Federal Government.

In line with the President's Executive Order, OMB's new policy directs the following actions:

Address Risks from the Use of AI

This guidance places people and communities at the center of the government's innovation goals. Federal agencies have a distinct responsibility to identify and manage AI risks because of the role they play in our society, and the public must have confidence that the agencies will protect their rights and safety.

By December 1, 2024, Federal agencies will be required to implement concrete safeguards when using AI in a way that could impact Americans' rights or safety. These safeguards include a range of mandatory actions to reliably assess, test, and monitor AI's impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI. These safeguards apply to a wide range of AI applications, from health and education to employment and housing.

For example, by adopting these safeguards, agencies can ensure that:

If an agency cannot apply these safeguards, the agency must cease using the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations.

To protect the federal workforce as the government adopts AI, OMB's policy encourages agencies to consult federal employee unions and adopt the Department of Labor's forthcoming principles on mitigating AI's potential harms to employees. The Department is also leading by example, consulting with federal employees and labor unions both in the development of those principles and its own governance and use of AI.

The guidance also advises Federal agencies on managing risks specific to their procurement of AI. Federal procurement of AI presents unique challenges, and a strong AI marketplace requires safeguards for fair competition, data protection, and transparency. Later this year, OMB will take action to ensure that agencies' AI contracts align with OMB policy and protect the rights and safety of the public from AI-related risks. The RFI issued today will collect input from the public on ways to ensure that private sector companies supporting the Federal Government follow the best available practices and requirements.

Expand Transparency of AI Use

The policy released today requires Federal agencies to improve public transparency in their use of AI by requiring agencies to publicly:

Today, OMB is also releasing detailed draft instructions to agencies detailing the contents of this public reporting.

Advance Responsible AI Innovation

OMB's policy will also remove unnecessary barriers to Federal agencies' responsible AI innovation. AI technology presents tremendous opportunities to help agencies address society's most pressing challenges. Examples include:

Advances in generative AI are expanding these opportunities, and OMB's guidance encourages agencies to responsibly experiment with generative AI, with adequate safeguards in place. Many agencies have already started this work, including through using AI chatbots to improve customer experiences and other AI pilots.

Grow the AI Workforce

Building and deploying AI responsibly to serve the public starts with people. OMB's guidance directs agencies to expand and upskill their AI talent. Agencies are aggressively strengthening their workforces to advance AI risk management, innovation, and governance, including:

Strengthen AI Governance

To ensure accountability, leadership, and oversight for the use of AI in the Federal Government, the OMB policy requires federal agencies to:

In addition to this guidance, the Administration is announcing several other measures to promote the responsible use of AI in Government:

With these actions, the Administration is demonstrating that Government is leading by example as a global model for the safe, secure, and trustworthy use of AI. The policy announced today builds on the Administration's Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology (NIST) AI Risk Management Framework, and will drive Federal accountability and oversight of AI, increase transparency for the public, advance responsible AI innovation for the public good, and create a clear baseline for managing risks.

It also delivers on a major milestone: 150 days since the release of Executive Order 14110. The table below presents an updated summary of many of the activities federal agencies have completed in response to the Executive Order.

###


1 Magnificent Artificial Intelligence (AI) Stock to Buy and Hold Forever – sharewise

Artificial intelligence (AI) has been garnering plenty of headlines over the past 18 months. Though the technology has been around for a while, recent breakthroughs could lead to massive innovations. The companies that lead the pack in this space will be rewarded.

There are plenty of businesses investors could consider if they want to profit from the AI boom. Let's examine one of them: Microsoft (NASDAQ: MSFT). The tech giant could be a winner in AI over the long run and deliver market-beating returns along the way.

AI's recent momentum arguably began with the November 2022 launch of ChatGPT, a generative AI platform created by the privately held, Microsoft-backed company OpenAI. ChatGPT quickly became one of the fastest-growing apps ever, gaining more than 1 million users in just five days. OpenAI's success was clear proof that Microsoft was right to invest in the company. That's why the tech giant decided to double down. In January 2023, Microsoft announced a new multiyear, multibillion-dollar deal with OpenAI.

Continue reading

Source Fool.com


Nvidia, Microsoft, and Amazon Are Leaders in Artificial Intelligence (AI), but Don – sharewise

Nvidia, Microsoft, and Amazon are stocks investors commonly associate with artificial intelligence (AI). Each company is developing the technology in its own way to take a leadership position in this emerging industry.

Nvidia makes the most powerful graphics processing units (GPUs) for AI workloads in the data center. The company is worth $2.2 trillion, with $1.5 trillion of that value added in the past year alone thanks to surging demand for those chips.

Microsoft invested $10 billion in ChatGPT developer OpenAI last year, and is using the start-up's latest GPT-4 models to weave AI into its entire product portfolio. Applications like Word and Excel now come with an optional AI assistant called Copilot, and developers can access advanced AI models on the Azure cloud platform to build their own applications.

Continue reading

Source Fool.com


Forget Nvidia: Billionaires Are Selling It and Buying 2 Top Artificial Intelligence (AI) Stocks Instead – sharewise

Chipmaker Nvidia (NASDAQ: NVDA) has created substantial shareholder value in recent months. The stock has soared 517% since the beginning of 2023 amid surging interest in artificial intelligence (AI). But several billionaire hedge fund managers sold down their positions in Nvidia during the fourth quarter, while purchasing other AI stocks.

Those three billionaires have two important traits in common. They rank among the 15 most successful hedge fund managers in history, and they beat the S&P 500 (SNPINDEX: ^GSPC) over the past three years. Those qualities lend them credibility.

With that in mind, all three hedge fund managers bought shares of Amazon (NASDAQ: AMZN) in the fourth quarter. Englander and Tepper also started positions in HubSpot (NYSE: HUBS). Those companies have already achieved a strong presence in certain AI markets -- Amazon in cloud AI developer services, and HubSpot in AI sales assistant software -- but both are leaning into AI product development in a way that could create more shareholder value.

Continue reading

Source Fool.com


Evaluating Artificial Intelligence Model for Thyroid Nodule Management and Diagnosis – Physician’s Weekly

The following is a summary of "Artificial Intelligence Model Assisting Thyroid Nodule Diagnosis and Management: A Multicenter Diagnostic Study," published in the February 2024 issue of Endocrinology by Ha et al.

For a study, researchers sought to develop and validate an artificial intelligence (AI)-based model, AI-Thyroid, to diagnose thyroid cancer and assess its impact on diagnostic performance.

The AI-Thyroid model was trained using 19,711 images from 6,163 patients at a tertiary hospital (Ajou University Medical Center; AUMC). Validation was conducted using 11,185 images from 4,820 patients in 24 hospitals (test set 1) and 4,490 images from 2,367 patients at AUMC (test set 2). Clinical implications were evaluated by comparing the diagnostic findings of six physicians with varying experience levels (group 1: 4 trainees, group 2: 2 faculty radiologists) before and after AI-Thyroid assistance.

AI-Thyroid achieved an area under the receiver operating characteristic (AUROC) curve of 0.939. For test set 1, AUROC, sensitivity, and specificity were 0.922, 87.0%, and 81.5%, respectively, and for test set 2, AUROC, sensitivity, and specificity were 0.938, 89.9%, and 81.6%, respectively. AI-Thyroid's AUROC did not significantly differ based on malignancy prevalence (>15.0% vs ≤15.0%, P = .226). In simulated scenarios, AI-Thyroid assistance significantly improved AUROC, sensitivity, and specificity from 0.854 to 0.945, 84.2% to 92.7%, and 72.9% to 86.6% (all P < .001) in group 1, and from 0.914 to 0.939 (P = .022), 78.6% to 85.5% (P = .053), and 91.9% to 92.5% (P = .683) in group 2. Interobserver agreement improved from moderate to substantial in both groups.
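For readers less familiar with the metrics reported above, the following sketch shows how AUROC, sensitivity, and specificity are computed from a model's scores and a decision threshold. The data is synthetic and has no relation to the AI-Thyroid study; it only illustrates the definitions.

# Computing AUROC, sensitivity, and specificity on synthetic data (illustrative only).
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=1000)                                # 1 = malignant, 0 = benign
scores = np.clip(0.6 * y_true + rng.normal(0.3, 0.25, 1000), 0, 1)    # hypothetical model scores

auroc = roc_auc_score(y_true, scores)                                 # threshold-free ranking quality
y_pred = (scores >= 0.5).astype(int)                                  # apply a decision threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                                          # true-positive rate
specificity = tn / (tn + fp)                                          # true-negative rate
print(f"AUROC={auroc:.3f}, sensitivity={sensitivity:.1%}, specificity={specificity:.1%}")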

AI-Thyroid enhanced diagnostic performance and interobserver agreement in thyroid cancer diagnosis, particularly benefiting less-experienced physicians.

Reference: academic.oup.com/jcem/article-abstract/109/2/527/7250484


AI George Carlin case settled as performers demand better protection – The Verge

George Carlin's estate has reached a settlement with the media company that purportedly used generative artificial intelligence to imitate the late comedian. The decision arrives as a group representing artists like Billie Eilish, Nicki Minaj, and Stevie Wonder calls for performers to be better protected against being mimicked by AI technology.

According to the New York Times, Will Sasso and Chad Kultgen, the Dudesy podcast creators who imitated Carlin in a faked comedy special titled "George Carlin: I'm Glad I'm Dead," agreed as part of the settlement reached on Tuesday to take the offending content offline and never upload it on any platform. Sasso and Kultgen also agreed not to use Carlin's voice or likeness in content they produce without seeking prior approval from the Carlin estate. Information regarding any monetary exchange in the settlement hasn't been disclosed.


"The world has begun to appreciate the power and potential dangers inherent in AI tools, which can mimic voices, generate fake photographs and alter video," Josh Schiller, a lawyer representing the Carlin estate, said in a statement to the New York Times on Tuesday. "This is not a problem that will go away by itself. It must be confronted with swift, forceful action in the courts, and the AI software companies whose technology is being weaponized must also bear some measure of accountability."

Whether AI was actually used to create the faked comedy special was brought into question during the lawsuit in January. Regardless, the resolution to this case may bring some reassurance to performers who are currently fighting against generative AI tools being used to imitate their voice, style, and appearance.

On Tuesday, the Artist Rights Alliance, a group representing over 200 musicians, including the estates of Frank Sinatra and Bob Marley, signed an open letter calling for technology companies to avoid developing AI tools that risk replacing human performers. "Unchecked, AI will set into motion a race to the bottom that will degrade the value of our work and prevent us from being fairly compensated for it," said the letter. "This assault on human creativity must be stopped."


Is C3.ai a Top Artificial Intelligence (AI) Stock to Buy Right Now? – sharewise

Many artificial intelligence (AI) companies rose to prominence in 2023, including C3.ai (NYSE: AI). With its ticker being "AI," it was one of the first names to appear when someone searched for "AI stocks." This likely helped drive the stock higher as it rose to more than $40 per share by last August.

However, C3.ai's shares have since pulled back significantly and now trade in the mid-$20s range. With that large pullback, some investors might wonder if this is the time to scoop up some shares. Should you?

C3.ai has gone through multiple iterations as a company. It started as a software company focused on oil and gas, then moved to become an Internet of Things (IoT) business. Then, it moved to integrate AI due to the direction of its IoT work. Now, it's pivoting to integrate generative AI into its products.

Continue reading

Source Fool.com
