
Global regulators discuss how they are adapting to AI, real-world … – Regulatory Focus

Regulators across the globe have been preparing for the arrival of new artificial intelligence (AI) technologies and advances in real-world data (RWD) they say will become a part of regulatory science in the coming years.

That was the main topic of discussion at the 11th Global Summit on Regulatory Science annual conference, where regulators from Brazil, Canada, India, Italy, Japan, Germany, Switzerland, Singapore, the UK and the US presented the ways they are integrating AI and RWD into the operations and regulatory mechanisms of their agencies. The meeting was held virtually in October 2021 and sponsored by the Global Coalition for Regulatory Science Research (GCRSR).

The proceedings were recently summarized in the journal Regulatory Toxicology and Pharmacology by Shraddha Thakkar, PhD, MSc, MS, of the Center for Drug Evaluation and Research (CDER) at the US Food and Drug Administration (FDA), and colleagues from regulatory agencies in the above countries.

In a series of debates, workshops, and presentations, the regulators discussed how AI and RWD could be applied to food and drug safety assessments, whether regulatory science is prepared for the arrival of AI, how data science tools could better align with regulatory applications, and the future of regulatory science research.

Continued progress in AI and RWD provides enormous opportunities for regulatory application in two significant respects, according to the authors: improving the agencies' operations, and preparing regulatory mechanisms to review and approve products that use these innovations. This is especially important for drug development, which usually spans many years and comes at huge cost, and where AI and RWD have already demonstrated the ability to improve drug safety and review.

The regulators also see potential for AI and RWD in food safety, pattern recognition, and foodborne-outbreak investigation, work that currently relies primarily on manual analysis of images, spectrometric data, genomic data, and chemical compositions, and on the identification of contaminants, the authors said. AI and machine learning (ML) have the potential to reduce review times and the human variability inherent in manual processes. In many ways, AI and RWD are already here, with agencies such as the FDA and the Canadian Food Inspection Agency incorporating AI and RWD methodologies into existing programs. AI and RWD can also augment existing information aids: Swissmedic, for example, is considering using serious adverse drug reactions in hospital admissions as RWD to develop automated pharmacovigilance signal detection. Another example is crowdsourcing, which the National Institute of Health Sciences of Japan used to develop a quantitative structure-activity relationship model for Ames mutagenicity prediction.

In two debates, presenters argued both sides of whether the regulatory community is prepared for the advancement of AI and RWD in the domains of scientific knowledge and assessment practices. One presenter argued that AI plays an increasing role in drug discovery and development and that some regulators, such as the FDA, are developing programs like the Innovative Science and Technology Approaches for New Drugs (ISTAND) initiative to prepare. Other considerations debated were the role of AI in clinical applications and the extent to which patients may be comfortable using AI-enabled applications in various contexts. "Regulatory science could play a critical role in developing a regulatory structure and framework for evaluation of AI applications, including promoting trustworthiness and reliability in these technologies," the authors wrote.

A workshop in which regulators detailed their data analytics tools pointed to another opportunity for AI, Thakkar and colleagues noted: automating the manual reading of text associated with the safety and efficacy of food and drug products. "The vast majority of data used in regulatory decision-making are presented in text documents, where AI could be of significance to facilitate the review process," they wrote.
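
The kind of text mining the authors point to can be illustrated with a small classifier. The sketch below is purely illustrative and not from the proceedings: it assumes scikit-learn, uses a tiny made-up set of adverse-event narratives, and simply flags reports that look like they describe serious events so a human reviewer sees them first.

```python
# Illustrative sketch only (not from the summarized proceedings): a simple
# bag-of-words classifier that triages unstructured safety narratives.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set; a real system would use curated, coded case reports.
texts = [
    "patient hospitalized with severe liver injury after dose increase",
    "mild headache resolved without intervention",
    "anaphylactic reaction requiring emergency treatment",
    "transient nausea reported, no action taken",
]
labels = [1, 0, 1, 0]  # 1 = flag for priority review, 0 = routine

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

new_report = "patient admitted to hospital with acute renal failure"
print(model.predict([new_report]))           # e.g. [1]: route to a reviewer first
print(model.predict_proba([new_report])[0])  # class probabilities for transparency
```
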
Globally, regulatory agencies have not only reviewed vast quantities of submitted applications, papers, and/or literature data, but have also generated a plethora of documents during the product-review process. These records are typically unstructured text and often do not follow a standard vocabulary.

Because of this lack of standardization and the fragmentation of data, leveraging AI to interpret datasets is a substantial regulatory challenge, Thakkar and colleagues explained. "The biggest challenge the research community faces is the current fragmentation of data in many repositories with multiple formats and definitions," they said. Another challenge is that, in some cases, the data codes are not uniform: each data source has its own coding system, and different ways of assigning codes to medicines are employed without national or international standardization.

The future of regulatory science research in relation to AI and RWD is one where AI augments the work of human clinicians but does not replace them. "One of the most significant benefits of AI/ML resides in its ability to learn from real-world use to improve its performance," the authors noted. However, as an emerging technology, AI should be constantly evaluated to actively facilitate the use of these new tools in regulatory settings, they said.

Regul Toxicol Pharmacol

Go here to see the original:

Global regulators discuss how they are adapting to AI, real-world ... - Regulatory Focus

Why open-source generative AI models are an ethical way forward … – Nature.com

Every day, it seems, a new large language model (LLM) is announced with breathless commentary from both its creators and academics on its extraordinary abilities to respond to human prompts. It can fix code! It can write a reference letter! It can summarize an article!

From my perspective as a political and data scientist who is using and teaching about such models, scholars should be wary. The most widely touted LLMs are proprietary and closed: run by companies that do not disclose their underlying model for independent inspection or verification, so researchers and the public don't know on which documents the model has been trained.

The rush to involve such artificial-intelligence (AI) models in research is a problem. Their use threatens hard-won progress on research ethics and the reproducibility of results.

Instead, researchers need to collaborate to develop open-source LLMs that are transparent and not dependent on a corporation's favours.

It's true that proprietary models are convenient and can be used out of the box. But it is imperative to invest in open-source LLMs, both by helping to build them and by using them for research. I'm optimistic that they will be adopted widely, just as open-source statistical software has been. Proprietary statistical programs were popular initially, but now most of my methodology community uses open-source platforms such as R or Python.

One open-source LLM, BLOOM, was released last July. BLOOM was built by New York City-based AI company Hugging Face and more than 1,000 volunteer researchers, and partially funded by the French government. Other efforts to build open-source LLMs are under way. Such projects are great, but I think we need even more collaboration and pooling of international resources and expertise. Open-source LLMs are generally not as well funded as the big corporate efforts. Also, they need to run to stand still: this field is moving so fast that versions of LLMs are becoming obsolete within weeks or months. The more academics who join these efforts, the better.

Using open-source LLMs is essential for reproducibility. Proprietors of closed LLMs can alter their product or its training data, which can change its outputs, at any time.

For example, a research group might publish a paper testing whether phrasings suggested by a proprietary LLM can help clinicians to communicate more effectively with patients. If another group tries to replicate that study, who knows whether the model's underlying training data will be the same, or even whether the technology will still be supported? ChatGPT, released last November by OpenAI in San Francisco, California, has already been supplanted by GPT-4, and presumably supporting the older LLM will soon no longer be the firm's main priority.
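
One way around this, sketched below under stated assumptions, is to pin everything a replication would need: an open model checkpoint (here the small bigscience/bloom-560m as a stand-in), an exact revision, a fixed seed and deterministic decoding. The function name and prompt are hypothetical; the point is only that every ingredient is inspectable and versioned rather than hidden behind an API that can change.

```python
# Minimal reproducibility sketch (illustrative, not a prescribed workflow):
# pin the model, revision, seed and decoding settings so another group can
# regenerate the same outputs from the same open checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

MODEL_ID = "bigscience/bloom-560m"  # small open BLOOM variant, used as an example
REVISION = "main"                   # in a real study, pin an exact commit hash

def generate_reproducibly(prompt: str) -> str:
    set_seed(0)  # fixed seed; greedy decoding below is deterministic anyway
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, revision=REVISION)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, revision=REVISION)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate_reproducibly("Explain a scan result to a patient in plain language:"))
```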

By contrast, with open-source LLMs, researchers can look at the guts of the model to see how it works, customize its code and flag errors. These details include the model's tunable parameters and the data on which it was trained. Engagement and policing by the community help to make such models robust in the long term.

The use of proprietary LLMs in scientific studies also has troubling implications for research ethics. The texts used to train these models are unknown: they might include direct messages between users on social-media platforms or content written by children legally unable to consent to sharing their data. Although the people producing the public text might have agreed to a platform's terms of service, this is perhaps not the standard of informed consent that researchers would like to see.

In my view, scientists should move away from using these models in their own work where possible. We should switch to open LLMs and help others to distribute them. Moreover, I think academics, especially those with a large social-media following, shouldn't be pushing others to use proprietary models. If prices were to shoot up or companies were to fail, researchers might regret having promoted technologies that leave colleagues trapped in expensive contracts.

Researchers can currently turn to open LLMs produced by private organizations, such as LLaMA, developed by Facebook's parent company Meta in Menlo Park, California. LLaMA was originally released on a case-by-case basis to researchers, but the full model was subsequently leaked online. My colleagues and I are working with Meta's open LLM OPT-175B, for instance. Both LLaMA and OPT-175B are free to use. The downside in the long run is that this leaves science relying on corporations' benevolence, an unstable situation.

There should be academic codes of conduct for working with LLMs, as well as regulation. But these will take time and, in my experience as a political scientist, I expect that such regulations will initially be clumsy and slow to take effect.

In the meantime, massive collaborative projects urgently need support to produce open-source models for research, something like CERN, the international organization for particle physics, but for LLMs. Governments should increase funding through grants. The field is moving at lightning speed and needs to start coordinating national and international efforts now. The scientific community is best placed to assess the risks of the resulting models, and might need to be cautious about releasing them to the public. But it is clear that the open environment is the right one.

The author declares no competing interests.

Read more here:

Why open-source generative AI models are an ethical way forward ... - Nature.com

Artificial intelligence coming to a government near you soon? – The Guardian

AI is already employed in various administrations in the US and its use is only set to grow, but what dangers does it bring?

Sat 22 Apr 2023 06.00 EDT

The recent blizzard of warnings about artificial intelligence and how it is transforming learning, upending legal, financial and organizational functions, and reshaping social and cultural interaction, has mostly left out the role it is already playing in governance.

Governments in the US at every level are attempting the transition from a programmatic model of service delivery to a citizen-focused model.

Los Angeles, the US's second-largest city, is a pioneer in the field, unveiling technologies to help streamline bureaucratic functions from police recruitment to paying parking tickets to filling potholes or locating resources at the library.

For now, AI advances are limited to automation. When ChatGPT was asked recently about how it might change how people deal with government, it responded that the next generation of AI, which includes ChatGPT, has the potential to revolutionize the way governments interact with their citizens.

But information flow and automated operations are only one aspect of governance that can be updated. AI, defined as technology that can think humanly, act humanly, think rationally, or act rationally, is also close to being used to simplify the political and bureaucratic business of policymaking.

The foundations of policymaking, specifically the ability to sense patterns of need, develop evidence-based programs, forecast outcomes and analyze effectiveness, fall squarely in AI's sweet spot, the management consulting firm BCG said in a paper published in 2021. The use of it to help shape policy is just beginning.

That was an advance on a study published four years earlier, which warned that governments were continuing to operate the way they have for centuries, with structures that are hierarchical, siloed, and bureaucratic, and that the accelerating speed of social change was too great for most governments to handle in their current form.

According to Darrell West, senior fellow at the Center for Technology Innovation at the Brookings Institution and co-author of Turning Point: Policymaking in the Era of Artificial Intelligence, the impact of government-focused AI could be substantial and transformational.

"There are many ways AI can make government more efficient," West says. "We're seeing advances on a monthly basis and need to make sure they conform to basic human values. Right now there's no regulation and hasn't been for 30 years."

But that immediately carries questions about bias. A recent Brookings study, Comparing Google Bard with OpenAI's ChatGPT on political bias, facts, and morality, found that Google's AI stated that Russia should not have invaded Ukraine in 2022, while ChatGPT stated: "As an AI language model, it is not appropriate for me to express opinions or take sides on political issues."

Earlier this month, the Biden administration called for stronger measures to test the safety of artificial intelligence tools such as ChatGPT, said to have reached 100 million users faster than any previous consumer app, before they are publicly released. "There is a heightened level of concern now, given the pace of innovation, that it needs to happen responsibly," said the assistant commerce secretary Alan Davidson. President Biden was asked recently if the technology is dangerous. "It remains to be seen. It could be," he said.

That came after the Tesla CEO, Elon Musk, and Apple co-founder Steve Wozniak joined hundreds calling for a six-month pause on AI experiments. But the OpenAI CEO, Sam Altman, said that while he agreed with parts of the open letter, it was "missing most technical nuance about where we need the pause."

"I think moving with caution and an increasing rigor for safety issues is really important," Altman added.

How that affects systems of governance has yet to be fully explored, but there are cautions. Algorithms are only as good as the data on which they are based, and the problem with current AI is that it was trained on data that was incomplete or unrepresentative and the risk of bias or unfairness is quite substantial, says West.

The fairness and equity of algorithms are only as good as the data and programming that underlie them. "For the last few decades we've allowed the tech companies to decide, so we need better guardrails and to make sure the algorithms respect human values," West says. "We need more oversight."

Michael Ahn, a professor in the department of public policy and public affairs at the University of Massachusetts, says AI has the potential to customize government services to citizens based on their data. But while governments could work with products like OpenAI's ChatGPT, Google's Bard or Meta's LLaMA, the systems would have to be closed off in a silo.

"If they can keep a barrier so the information is not leaked, then it could be a big step forward. The downside is, can you really keep the data secure from the outside? If it leaks once, it's leaked, so there are pretty huge potential risks there."

By any reading, underlying fears over the use of technology in the elections process underscored Dominion Voting Systems' defamation lawsuit over false claims of vote rigging broadcast by Fox News. "AI can weaponize information," West says. "It's happening in the political sphere because it's making it easier to spread false information, and it's going to be a problem in the presidential election."

Introduce AI into any part of the political process, and the divisiveness attributed to misinformation will only amplify. "People are only going to ask the questions they want to ask, and hear the answers they like, so the fracturing is only going to continue," says Ahn.

"Government will have to show that decisions are made based on data and focused on the problems at hand, not the politics ... But people may not be happy about it."

And much of what is imagined around AI straddles the realms of science fiction and politics. Professor West said he doesn't need to read sci-fi; he feels as if he's already living it. Arthur C Clarke's HAL 9000 from 1968 remains our template for a malevolent AI computer. But AI's impact on government, as a recent Center for Public Impact paper put it, is "Destination Unknown."

Asked if artificial intelligence could ever become US president, ChatGPT answered: "As an artificial intelligence language model, I do not have the physical capabilities to hold a presidential office." It also laid out other obstacles, including the constitutional requirements of being a natural-born citizen, being at least 35 years old and having been resident in the US for 14 years.

In 2016, the digital artist Aaron Siegel imagined IBM's Watson AI supercomputer running for president, a response to his disillusionment with the candidates, saying that the computer could advise on the best options for any given decision based on its impact on the global economy, the environment, education, health care, foreign policy, and civil liberties.

Last year, tech worker Keir Newton published a novel, 2032: The Year A.I. Runs For President, that imagines a supercomputer named Algo, programmed by a Musk-like tech baron under the utilitarian ethos of "the most good for the most people" and running for the White House under the campaign slogan "Not of one. Not for one. But of all and for all."

Newton says that while his novel could be read as dystopian, he's more optimistic than negative about AI as it moves from automation to cognition. He says that when he wrote the novel, in the fractious lead-up to the 2020 election, it was reasonable to wish for rational leadership.

"I don't think anyone expected AI to be at this point this quickly, but most of AI policymaking is around data analytics. The difference comes when we think AI is making decisions based on its own thinking instead of being prescribed a formula or set of rules."

"We're in an interesting place. Even if we do believe that AI can be completely rational and unbiased, people will still freak out. The most interesting part of this is not that the government calls for regulation, but the AI industry itself. It's clamoring for answers about what it should even be doing."

Link:
Artificial intelligence coming to a government near you soon? - The Guardian

‘Artificial Intelligence’ May Be the New Buzzword, but Is It Really … – The Motley Fool

Buzzwords have been making their way into our general lexicon for decades. Whether it's famous movie quotes or certain words or phrases, these things have a way of spreading.

Remember when everyone was talking about blockchain? How about the metaverse?

For the last few months, investors have almost undoubtedly seen the term "artificial intelligence" (AI) dominating the headlines. The primary reason for the rise in AI's popularity is the commercial release of a tool called ChatGPT, which was developed by a company called OpenAI.

Let's dig in and see what all of the hoopla is about.

Alphabet (GOOG 0.01%) (GOOGL 0.11%) invested material time and capital in its own machine learning and AI capabilities. For example, the company has a conversational tool called Language Model for Dialogue Applications, or "LaMDA."

Furthermore, investors have been told about an internal AI product called Bard for quite some time. However, after years of dropping breadcrumbs, Alphabet finally gave the public a little preview of its own AI ambitions following the release of ChatGPT. The results weren't great.

Back in February, shares in Alphabet dropped nearly 14% across two trading days. During this period, Alphabet formally demonstrated Bard to the public, only to realize the AI still contained some bugs. This outcome was a bit ironic because just a few days before the public debut of Bard, Alphabet made a $300 million investment in a company called Anthropic, a competitor to ChatGPT maker OpenAI.

One of Alphabet's top competitors, Microsoft (MSFT -0.12%), has a completely different approach to commercial AI. Unlike Alphabet, which waited for years to debut Bard, Microsoft committed to a multibillion-dollar investment in OpenAI over the course of the next several years almost immediately following the broad release of ChatGPT.

Since its investment, Microsoft has already been marketing new products. A fellow Fool contributor recently covered Microsoft's new AI art platform, powered by OpenAI's DALL-E (it's a pun!), in this video. I have been a power user of DALL-E for a little while. My take is that it's entertaining and far more affordable than other artistic software available on the market. And while creating art is fun and satisfying, Microsoft has greater plans for OpenAI.

Despite the near-term cyclical headwinds the company will likely continue facing in its consumer hardware business, as well as its cloud business, Azure, Microsoft's management has a robust product roadmap and long-term vision. Most notably, Microsoft plans to integrate the technology from OpenAI into its search tools and cloud applications. This could have significant repercussions for Alphabet's search engine, Google, while simultaneously helping to propel Microsoft's cloud business forward and catch up to Amazon's cloud infrastructure, AWS.

Corporations of all sizes are relying more heavily on data to make strategic decisions. AI technology is undoubtedly one of the core pillars of this digital transformation. But with that said, there are a few things going on here that investors should acknowledge.

First and foremost, in investing (and in life) it is rarely a good idea to follow the pack. What this means is that even though AI is a new, trending topic, it does not mean it's necessarily a sound investment at the moment.

Microsoft and Alphabet are two of the largest corporations in the world by market capitalization. As of the time of this article, the combined market cap of these two behemoths is $3.5 trillion. I point this out in an effort to illustrate that while both firms' respective investments in AI appear significant, a few hundred million dollars for Alphabet, or even billions in the case of Microsoft, is not a huge commitment.

Perhaps most important is that both companies are several years away from monetizing these investments to their full potential. For Alphabet, management needs to step up and figure out when its AI products, which have been mostly secret up until recently, will work properly and be commercially released.

By contrast, Microsoft now needs to execute on its vision. Layering AI capabilities into cloud applications is an enormously complex project. While Microsoft has the talent and capital to do this, the company is likely years away from full monetization.

Investors looking to acquire shares in Alphabet or Microsoft should do so. However, the underlying thesis should not be AI. Both companies have plenty of other products and growth engines for investors to analyze. While the prospects of AI are exciting, the most prudent action for long-term investors should be to listen to earnings calls and assess if management is executing on its vision, or if it sounds like AI is becoming a costly burden.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Adam Spatacco has positions in Alphabet, Amazon.com, and Microsoft. The Motley Fool has positions in and recommends Alphabet, Amazon.com, and Microsoft. The Motley Fool has a disclosure policy.

Read more from the original source:
'Artificial Intelligence' May Be the New Buzzword, but Is It Really ... - The Motley Fool

Using Artificial Intelligence Applications in Neurology to the Field … – Neurology Live

"When you're using an AI-based approach to analyze any type of bigger data set or multimodal data set, its important that you understand your data set well. If you have biases in your data set, or errors in your data collection that you're feeding into the machine, then the results are not going to be valid or clinically translatable."

In epilepsy, artificial intelligence (AI) algorithms have the potential to analyze electroencephalogram (EEG) signals to predict seizures before they occur. AI can also evaluate EEGs during a seizure to differentiate between seizure types. Notably, AI can analyze data in medical records and histories, such as genetics and imaging, to develop more personalized patient care plans.1
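
As a rough illustration of the kind of pipeline described here, and not of any method presented at the meeting, the sketch below builds synthetic two-second EEG windows, computes a few simple amplitude features, and trains an off-the-shelf classifier to separate "seizure-like" rhythmic activity from background noise. The sampling rate, features and classifier are all assumptions chosen for brevity.

```python
# Synthetic, illustrative sketch of EEG window classification (not a clinical method).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 256  # assumed sampling rate in Hz

def make_window(seizure: bool) -> np.ndarray:
    """Return a 2-second single-channel window; seizure windows get a rhythmic 5 Hz burst."""
    t = np.arange(2 * fs) / fs
    x = rng.normal(0.0, 1.0, t.size)           # background activity
    if seizure:
        x += 3.0 * np.sin(2 * np.pi * 5 * t)   # crude stand-in for a rhythmic discharge
    return x

def features(x: np.ndarray) -> list:
    # simple amplitude / line-length style features
    return [x.std(), np.abs(np.diff(x)).mean(), x.max() - x.min()]

X = [features(make_window(seizure=i % 2 == 0)) for i in range(200)]
y = [i % 2 == 0 for i in range(200)]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```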

Kathryn A. Davis, MD, MSc, will present a hot topic talk on the promise of AI and its potential in the field of neurology during a plenary session at the 2023 American Academy of Neurology (AAN) Annual Meeting, April 22-27, in Boston, Massachusetts. In her talk, she will speak on the different challenges of using AI, such as biases in datasets or errors in data collection, as well as maintaining patient safety and data privacy. The rest of the session will feature the latest, cutting-edge translational research in relation to clinical issues of importance. Davis and two other speakers will provide summaries of their recent findings and explain the significance of their clinical implications.

Prior to the meeting, Davis, an associate professor of Neurology and director of the Penn Epilepsy Center at the University of Pennsylvania, sat down with NeurologyLive for an interview to give an overview of her presentation. She also spoke about the potential challenges of using AI to analyze data sets for a clinical trial, as well as how AI could expand patients' access to research participation.

Read the original post:
Using Artificial Intelligence Applications in Neurology to the Field ... - Neurology Live

Artificial intelligence is infiltrating health care. We shouldn't let it make all the decisions. – MIT Technology Review

This article is from The Checkup, MIT Technology Review's weekly biotech newsletter.

Would you trust medical advice generated by artificial intelligence? It's a question I've been thinking over this week, in view of yet more headlines proclaiming that AI technologies can diagnose a range of diseases. The implication is often that they're better, faster, and cheaper than medically trained professionals.

Many of these technologies have well-known problems. They're trained on limited or biased data, and they often don't work as well for women and people of color as they do for white men. Not only that, but some of the data these systems are trained on are downright wrong.

There's another problem. As these technologies begin to infiltrate health-care settings, researchers say we're seeing a rise in what's known as AI paternalism. Paternalism in medicine has been problematic since the dawn of the profession. But now, doctors may be inclined to trust AI at the expense of a patient's own lived experiences, as well as their own clinical judgment.

AI is already being used in health care. Some hospitals use the technology to help triage patients. Some use it to aid diagnosis, or to develop treatment plans. But the true extent of AI adoption is unclear, says Sandra Wachter, a professor of technology and regulation at the University of Oxford in the UK.

"Sometimes we don't actually know what kinds of systems are being used," says Wachter. But we do know that their adoption is likely to increase as the technology improves and as health-care systems look for ways to reduce costs, she says.

Research suggests that doctors may already be putting a lot of faith in these technologies. In a study published a few years ago, oncologists were asked to compare their diagnoses of skin cancer with the conclusions of an AI system. Many of them accepted the AI's results, even when those results contradicted their own clinical opinion.

There's a very real risk that we'll come to rely on these technologies to a greater extent than we should. And here's where paternalism could come in.

Paternalism is captured by the idiom "the doctor knows best," write Melissa McCradden and Roxanne Kirsch of the Hospital for Sick Children in Ontario, Canada, in a recent scientific journal paper. The idea is that medical training makes a doctor the best person to make a decision for the person being treated, regardless of that person's feelings, beliefs, culture, and anything else that might influence the choices any of us make.

Paternalism can be recapitulated when AI is positioned as the highest form of evidence, replacing the all-knowing doctor with the all-knowing AI, McCradden and Kirsch continue. They say there is a rising trend toward algorithmic paternalism. This would be problematic for a whole host of reasons.

For a start, as mentioned above, AI isn't infallible. These technologies are trained on historical data sets that come with their own flaws. "You're not sending an algorithm to med school and teaching it how to learn about the human body and illnesses," says Wachter.

"As a result, AI cannot understand, only predict," write McCradden and Kirsch. An AI could be trained to learn which patterns in skin cell biopsies have been associated with a cancer diagnosis in the past, for example. But the doctors who made those past diagnoses and collected that data might have been more likely to miss cases in people of color.

And identifying past trends won't necessarily tell doctors everything they need to know about how a patient's treatment should continue. Today, doctors and patients should collaborate in treatment decisions. Advances in AI use shouldn't diminish patient autonomy.

So how can we prevent that from happening? One potential solution involves designing new technologies that are trained on better data. An algorithm could be trained on information about the beliefs and wishes of various communities, as well as diverse biological data, for instance. Before we can do that, we need to actually go out and collect that data, an expensive endeavor that probably won't appeal to those who are looking to use AI to cut costs, says Wachter.

Designers of these AI systems should carefully consider the needs of the people who will be assessed by them. And they need to bear in mind that technologies that work for some groups won't necessarily work for others, whether that's because of their biology or their beliefs. "Humans are not the same everywhere," says Wachter.

The best course of action might be to use these new technologies in the same way we use well-established ones. X-rays and MRIs are used to help inform a diagnosis, alongside other health information. People should be able to choose whether they want a scan, and what they would like to do with their results. We can make use of AI without ceding our autonomy to it.

Philip Nitschke, otherwise known as Dr. Death, is developing an AI that can help people end their own lives. My colleague Will Douglas Heaven explored the messy morality of letting AI make life-and-death decisions in this feature from the mortality issue of our magazine.

In 2020, hundreds of AI tools were developed to aid the diagnosis of covid-19 or predict how severe specific cases would be. None of them worked, as Will reported a couple of years ago.

Will has also covered how AI that works really well in a lab setting can fail in the real world.

My colleague Melissa Heikkilä has explored whether AI systems need to come with cigarette-pack-style health warnings in a recent edition of her newsletter, The Algorithm.

Tech companies are keen to describe their AI tools as ethical. Karen Hao put together a list of the top 50 or so words companies can use to show they care without incriminating themselves.

Scientists have used an imaging technique to reveal the long-hidden contents of six sealed ancient Egyptian animal coffins. They found broken bones, a lizard skull, and bits of fabric. (Scientific Reports)

Genetic analyses can suggest targeted treatments for people with colorectal cancer, but people with African ancestry have mutations that are less likely to benefit from these treatments than those with European ancestry. The finding highlights how important it is for researchers to use data from diverse populations. (American Association for Cancer Research)

Sri Lanka is considering exporting 100,000 endemic monkeys to a private company in China. A cabinet spokesperson has said the monkeys are destined for Chinese zoos, but conservationists are worried that the animals will end up in research labs. (Reuters)

Would you want to have electrodes inserted into your brain if they could help treat dementia? Most people who have a known risk of developing the disease seem to be open to the possibility, according to a small study. (Brain Stimulation)

A gene therapy for a devastating disease that affects the muscles of some young boys could be approved following a decision due in the coming weeks, despite not having completed clinical testing. (STAT)

Continued here:
Artificial intelligence is infiltrating health care. We shouldnt let it make all the decisions. - MIT Technology Review

US Targeting China, Artificial Intelligence Threats – Voice of America – VOA News

U.S. homeland security officials are launching what they describe as two urgent initiatives to combat growing threats from China and expanding dangers from ever more capable, and potentially malicious, artificial intelligence.

Homeland Security Secretary Alejandro Mayorkas announced Friday that his department was starting a 90-day sprint to confront more frequent and intense efforts by China to hurt the United States, while separately establishing an artificial intelligence task force.

"Beijing has the capability and the intent to undermine our interests at home and abroad and is leveraging every instrument of its national power to do so," Mayorkas warned, addressing the threat from China during a speech at the Council on Foreign Relations in Washington.

"The 90-day sprint will assess how the threats posed by the PRC [People's Republic of China] will evolve and how we can be best positioned to guard against future manifestations of this threat," he said.

"One critical area we will assess, for example, involves the defense of our critical infrastructure against PRC or PRC-sponsored attacks designed to disrupt or degrade provision of national critical functions, sow discord and panic, and prevent mobilization of U.S. military capabilities," Mayorkas added.

Other areas of focus for the sprint will include addressing ways to stop Chinese government exploitation of U.S. immigration and travel systems to spy on the U.S. government and private entities and to silence critics, and looking at ways to disrupt the global fentanyl supply chain.

AI dangers

Mayorkas also said the magnitude of the threat from artificial intelligence, appearing in a growing number of tools from major tech companies, was no less critical.

"We must address the many ways in which artificial intelligence will drastically alter the threat landscape and augment the arsenal of tools we possess to succeed in the face of these threats," he said.

Mayorkas promised that the Department of Homeland Security will "lead in the responsible use of AI to secure the homeland and in defending against the malicious use of this transformational technology."

The new task force is set to seek ways to use AI to protect U.S. supply chains and critical infrastructure, counter the flow of fentanyl, and help find and rescue victims of online child sexual exploitation.

The unveiling of the two initiatives came days after lawmakers grilled Mayorkas about what some described as a lackluster and derelict effort under his leadership to secure the U.S. border with Mexico.

"You have not secured our borders, Mr. Secretary, and I believe you've done so intentionally," the chair of the House Homeland Security Committee, Republican Mark Green, told Mayorkas on Wednesday.

Another lawmaker, Republican Marjorie Taylor Greene, went as far as to accuse Mayorkas of lying, though her words were quickly removed from the record.

Mayorkas on Friday said it might be possible to use AI to help with border security, though how exactly it could be deployed for the task was not yet clear.

"We're at a nascent stage of really deploying AI," he said. "I think we're now at the dawn of a new age."

But Mayorkas cautioned that technologies like AI would do little to slow the number of migrants willing to embark on dangerous journeys to reach U.S. soil.

"Desperation is the greatest catalyst for the migration we are seeing," he said.

FBI warning

The announcement of Homeland Security's 90-day sprint to confront growing threats from Beijing followed a warning earlier this week from the FBI about the willingness of China to target dissidents and critics in the U.S., and the arrests of two New York City residents for their involvement in a secret Chinese police station.

China has denied any wrongdoing.

"The Chinese government strictly abides by international law, and fully respects the law enforcement sovereignty of other countries," Liu Pengyu, the spokesman for the Chinese Embassy in Washington, told VOA in an email earlier this week, accusing the U.S. of seeking to smear China's image.

Top U.S. officials have said they are opening two investigations daily into Chinese economic espionage in the U.S.

"The Chinese government has stolen more of Americans' personal and corporate data than that of every nation, big or small, combined," FBI Director Christopher Wray told an audience late last year.

More recently, Wray warned of Chinese advances in AI, saying he was "deeply concerned."

Mayorkas voiced a similar sentiment, pointing to China's use of investments and technology to establish footholds around the world.

"We are deeply concerned about PRC-owned and -operated infrastructure, elements of infrastructure, and what that control can mean, given that the operator and owner has adverse interests," Mayorkas said Friday.

"Whether it's investment in our ports, whether it is investment in partner nations, telecommunications channels and the like, it's a myriad of threats," he said.

Read more here:
US Targeting China, Artificial Intelligence Threats - Voice of America - VOA News

This Is Why Nvidia Faces Big Challenges in Artificial Intelligence – The Motley Fool

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Travis Hoium has positions in Alphabet and Apple. The Motley Fool has positions in and recommends Alphabet, Amazon.com, Apple, Meta Platforms, Microsoft, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool has a disclosure policy. Travis Hoium is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through their link, they will earn some extra money that supports their channel. Their opinions remain their own and are unaffected by The Motley Fool.

Here is the original post:
This Is Why Nvidia Faces Big Challenges in Artificial Intelligence - The Motley Fool

Savvy criminals using artificial intelligence to their advantage – FOX 35 Orlando

OCALA, Fla. - As technology evolves, so do scammers. In this newest scam, bad guys are using artificial intelligence to their advantage.

An Ocala father fell victim to this on Thursday. Jesse got a call from a number in Mexico; when he answered, it seemed to be his daughter on the phone. "Daddy," she said first. Jesse responded, and the voice went on to say she had been kidnapped and was in a van with people she did not know. Jesse told FOX 35 News it sounded just like his daughter, down to the cracks in her voice.

"When her voice cracks, there's a sound to it," he said. "There was no other explanation for it."

The scam typically starts with a phone call saying your family member is being held captive, or you might hear your loved one's voice asking for help. The caller then provides specific instructions for securing your family member's safe return, typically in the form of a money wire, a gift card, or sometimes even Bitcoin. The scammer will make you stay on the line until the money is sent.

Jesse said at no point did he think he was being scammed. The criminals even knew his daughter was out of town in Tampa, and that made it feel very real for him. He did what a lot of parents would do in that situation: he wired the scammer money. Jesse told the man he only had $600, and the criminal agreed; all Jesse had to do was wire the money. But the money wasn't going to come quickly, because Jesse had the instinct to stall.

"I told him I had pins in my legs and if he wanted me to drive I would have to take a cast off, and all this, I'm just trying to buy time."

He did exactly what the FBI and local law enforcement encourage people to do.

"This is a high-tech scam, it will get you. So the basic rule of thumb is, do not pay anything over the phone. Slow down, take your time, and use that most powerful weapon of verification," Lt. Paul Bloom with Marion County Sheriff's Office said.

Here are some other steps you can take if you find yourself in a similar situation to Jesse:

Jesse said he never in his wildest dreams thought he'd fall victim to a scam, and while it was happening he never doubted that it was real.

"Not for one minute until I got off the phone with him two-and-a-half hours later, called my sister-in-law where my daughter was, did I know," he said.

According to the FBI, look out for numbers coming from an outside area code, sometimes from Puerto Rico (787), (939), (856). Other signs that a call could be an extortion attempt: oftentimes the calls don't come from the alleged kidnapped victim's phone, callers will go to great lengths to keep you on the phone, and they might try to prevent you from calling or locating the kidnapped victim.

Jesse luckily was able to get his wire transfer stopped in time, so he didn't lose money. If you've found yourself a victim of this, report it to the FBI here.

To read more about virtual kidnapping ransom scams, visit the National Institutes of Health Office of Management's website.

See the original post here:
Savvy criminals using artificial intelligence to their advantage - FOX 35 Orlando

Researchers at UTSA use artificial intelligence to improve cancer … – UTSA

Patients undergoing radiotherapy are currently given a computed tomography (CT) scan to help physicians see where the tumor is on an organ, for example a lung. A treatment plan to remove the cancer with targeted radiation doses is then made based on that CT image.

Rad says that cone-beam computed tomography (CBCT) is often integrated into the process after each dosage to see how much a tumor has shrunk, but CBCTs are low-quality images that are time-consuming to read and prone to misinterpretation.

UTSA researchers used domain adaptation techniques to integrate information from CBCT and initial CT scans to improve the accuracy of tumor evaluation. Their generative AI approach visualizes the tumor region affected by radiotherapy, improving reliability in clinical settings.

This improved approach enables physicians to more accurately see how much a tumor has decreased week by week and to plan the following week's radiation dose with greater precision. Ultimately, the approach could lead clinicians to better target tumors while sparing the surrounding critical organs and healthy tissue.

Nikos Papanikolaou, a professor in the Departments of Radiation Oncology and Radiology at UT Health San Antonio, provided the patient data that enabled the researchers to advance their study.

UTSA and UT Health San Antonio have a shared commitment to deliver the best possible health care to members of our community, Papanikolaou said. This study is a wonderful example of how artificial intelligence can be used to develop new personalized treatments for the benefit of society.

The American Society for Radiation Oncology stated in a 2020 report that between half and two-thirds of people diagnosed with cancer were expected to receive radiotherapy treatment. According to the American Cancer Society, the number of new cancer cases in the U.S. in 2023 is projected to be nearly two million.

Arkajyoti Roy, UTSA assistant professor of management science and statistics, says he and his collaborators have been interested in using AI and deep learning models to improve treatments over the last few years.

"Besides just building more advanced AI models for radiotherapy, we also are super interested in the limitations of these models," he said. "All models make errors, and for something like cancer treatment it's very important not only to understand the errors but to try to figure out how we can limit their impact; that's really the goal from my perspective of this project."

The researchers' study included 16 lung cancer patients whose pre-treatment CT and mid-treatment weekly CBCT images were captured over a six-week period. Results show that the researchers' new approach improved tumor shrinkage predictions for weekly treatment plans, with significant improvement in lung dose sparing. Their approach also demonstrated a reduction of up to 35% in radiation-induced pneumonitis, or lung damage.

"We're excited about this direction of research that will focus on making sure that cancer radiation treatments are robust to AI model errors," Roy said. "This work would not be possible without the interdisciplinary team of researchers from different departments."

Continue reading here:
Researchers at UTSA use artificial intelligence to improve cancer ... - UTSA
