
DataPathology: combining artificial intelligence and digital technology to combat cervical cancer – African Development Bank

The sun rises over El Jadida, a fortified city in south-western Morocco, bathed by the Atlantic Ocean. Pathologist Hicham El Attar is already at work. As a co-founder of DataPathology, he analyses and diagnoses 3D images to detect the slightest cell that could indicate cervical cancer.

A few years ago, Hicham El Attar's mother died from cervical cancer, the second most frequent cancer in women. It was a tragic loss, one that made him decide to work on providing an efficient and affordable service to medical practitioners and patients to combat this significant public health problem.

Along with Mohammed El Khannoussi, an information systems and data consultant, he managed to design a solution. Together, they founded DataPathology, a Medtech start-up specializing in medical pathology, in 2020. Combining pathology diagnosis and data digitization, the company uses artificial intelligence and image processing technologies to analyse tissue samples and detect signs of cancer accurately.

By speeding up the diagnostic process and reducing the risks of human error, DataPathology is helping tackle non- or late detection of cervical cancer in patients in Morocco and elsewhere in Africa, thanks to latest-generation screening systems and a network of connected laboratories.

DataPathology has implemented its AI-PAP solution, a complete process covering everything from the distribution of a medical sampling kit to a digital platform designed to record, analyse and diagnose the results in one of the many laboratories connected to the network.

Easy to access and use, the system gives gynaecologists, general practitioners and laboratory technicians the ability to take the sample themselves and therefore increase the number of patients screened. Using the kit provided, they take a cervical smear on site, enter the associated patient code on the digital platform developed by DataPathology and send in the samples. A battery of tests follows before the final diagnosis is shared with the patient's doctor. The entire process takes two to three days.

"The platform we use can deal with up to 250 tests a day, which is an extraordinary improvement: in the past, only around 10 tests a day could be processed. We can do it thanks to artificial intelligence and data digitization," explains Dr El Attar.

DataPathology works with multiple partners and strives continuously to make the screening test as affordable as possible. "There is both an economic impact and an impact on survival, because a screening test that costs USD 100 means you can avoid cancer treatment that would cost you much more than that, say USD 10,000 to 20,000. We hope that one day, the test will be free of charge," comments Hicham El Attar.
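To make that cost comparison concrete, here is a minimal arithmetic sketch in Python using only the figures quoted above; it is an illustration, not a health-economics model, which would also need incidence and detection-rate data.

```python
# Illustrative arithmetic using only the figures quoted by Dr El Attar:
# a USD 100 screening test versus a USD 10,000-20,000 course of treatment.
# Not a cost-effectiveness analysis; real modelling needs incidence,
# detection rates and treatment-outcome data.

TEST_COST_USD = 100
TREATMENT_COST_RANGE_USD = (10_000, 20_000)

for treatment_cost in TREATMENT_COST_RANGE_USD:
    tests_funded = treatment_cost // TEST_COST_USD
    print(f"One avoided USD {treatment_cost:,} treatment costs as much as "
          f"{tests_funded} screening tests")
```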

However, vaccination remains a critical step before the use of screening, which women can access from age 25. The World Health Organization (WHO), as part of its plan to eliminate cervical cancer by 2030, has set a target of vaccinating 90% of girls, screening 70% of women and treating 90% of women with invasive cancer.

Although there is a vaccine to prevent cervical cancer linked to the papillomavirus, diagnosing the disease still faces a number of hurdles in Africa. For Dr El Attar, implementing strategies that involve both the public and private sectors, the African Development Bank, private banks and insurance companies, i.e. everyone who has a part to play, would be the most effective direction to take.

Thanks to its innovative approach, DataPathology is currently working on establishing connected laboratories throughout Morocco but also in Kinshasa, in the Democratic Republic of Congo, and in Dakar, in Senegal. In June 2023, the start-up raised USD 1 million from the Azur Innovation Fund, supported by the African Development Bank, to support its expansion in Africa.

Hicham El Attar is more determined than ever, and wants to continue to improve access to high-quality care and save lives by diagnosing cancers earlier.

Original post:
DataPathology: combining artificial intelligence and digital technology to combat cervical cancer - African Development Bank

Read More..

AI is pervasive. Here’s when we’ll see its real economic benefits materialize – Fortune

This year marks a turning point for artificial intelligence (AI). The EU parliament has voted to approve the EU AI Act after three years of negotiations, moving the conversation around responsible AI from theory to reality and setting a new global standard for AI policy.

IBM welcomed this legislation and its balanced, risk-based approach to regulating AI. Why? Because history has shown us time and again that with every new disruptive technology, we must balance that disruption with responsibility.

We've known for years that AI will touch every aspect of our lives and work, and there's been much attention paid to the incredible potential of this technology to solve our most pressing problems. But not all of AI's impact will be flashy and newsworthy: its success will also lie in the day-to-day ways that it will help humans be more productive.

Right now, technology is advancing faster than ever, but productivity is not. A recent McKinsey report shows labor productivity in the U.S. has grown at a lackluster 1.4%. The findings show that regaining historical rates of productivity growth would add $10 trillion to U.S. GDP, a boost needed to confront workforce shortages, debt, inflation, and the energy transition. A similar productivity slowdown is happening globally, despite the technology boom of the past 15 years.

Anthropologist Jason Hickel said nearly every government in the world, rich and poor alike, is focused single-mindedly on GDP (gross domestic product) growth, and that this is no longer a matter of choice.

The formula for GDP growth has historically been population growth + productivity growth + debt growth. Two-thirds of this formula, population and debt growth, are unlikely to move in the near future. Aging populations and a shrinking workforce could lead to significant talent gaps, especially among highly skilled and educated workers, even as skills-first training and hiring continue to ramp up. Debt access is tightening as 15 years of the lowest interest rates in modern history come to an end.
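As a rough illustration of that decomposition, the Python sketch below treats GDP growth as the simple sum of the three drivers. The zeroed-out population and debt terms reflect the article's premise, and the 1.4% productivity figure is the U.S. number cited above; the numbers are illustrative, not a forecast.

```python
# A toy rendering of the GDP-growth formula cited above:
#   GDP growth = population growth + productivity growth + debt growth
# Inputs below are illustrative: population and debt contributions are
# assumed flat (the article's premise), and 1.4% is the cited U.S.
# labor-productivity growth figure.

def gdp_growth(population: float, productivity: float, debt: float) -> float:
    """Total GDP growth as the simple sum of its three historical drivers."""
    return population + productivity + debt

print(f"{gdp_growth(population=0.0, productivity=0.014, debt=0.0):.1%}")
# -> 1.4%: with the other two drivers stalled, productivity sets the pace.
```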

That leaves productivity gains as our main driver of GDP growth. The world needs increased productivity to drive financial success for companies, as well as economic growth for countries.

AI is the answer to the productivity problem, but only if it can be developed and deployed responsibly and with clear purpose.

Gartner estimates $5 trillion in technology spending in 2024, growing to $6.5 trillion by 2026. This will be the ultimate catalyst for the next stage of growth in the global economy.

However, one in five companies surveyed for the 2023 IBM Global AI Adoption Index say they don't yet plan to use AI across their business. Cited among their concerns: limited AI skills and expertise, too much data complexity, and ethical concerns. This is the status quo component in our current paradox. But responsibility and disruption can, and must, co-exist.

As governments focus on smart AI regulation, business leaders must focus on accelerating responsible AI adoption. I meet with clients daily, and I've seen four priorities emerge in the path to adoption: model choice, governance, skills, and open AI.

Providing model choice is critical to accelerating AI adoption. Different models are better at some tasks than at others. The best model will depend on the industry, domain, use case, and size of model, meaning most companies will use many smaller models rather than one large model.
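As a sketch of what that multi-model approach could look like in practice, the Python below routes each task to a registered specialist model and falls back to a larger general-purpose one. The model identifiers and the routing rule are hypothetical illustrations, not IBM's or any vendor's actual API.

```python
# Minimal sketch of task-based model routing, assuming a registry of
# smaller task-specific models. All model ids here are hypothetical.

MODEL_REGISTRY = {
    "summarize": "small-summarizer-v1",
    "classify": "small-classifier-v1",
    "extract": "small-extractor-v1",
}

def route(task: str, fallback: str = "general-purpose-large-v1") -> str:
    """Return the specialist model for a task, or fall back to a large one."""
    return MODEL_REGISTRY.get(task, fallback)

print(route("summarize"))  # small-summarizer-v1
print(route("translate"))  # general-purpose-large-v1 (no specialist registered)
```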

And with the right governance, companies can be assured that their workflows are compliant with existing and upcoming government regulations and free of bias.

In today's economy, jobs require skills, not just degrees. Technology is evolving faster than many can follow, creating a gap between demand and skills. Leaders must now prioritize skills-first hiring and training, and upskilling the existing workforce, to thrive in the AI era.

Finally, leveraging open-source models and proprietary models, with well-documented data sources, is the best way to achieve the transparency needed to advance responsible AI. Open is good for diversity because it makes it much easier to identify bias, for sovereignty because all the data sources are easily identifiable, and for education because it naturally lends itself to collaboration across the community.

AI can drive a level of GDP growth that none of us have ever seen in our lifetimes. It may mean the evolution of jobs in the near term. But just as with any other technological revolution, as upskilling occurs, there will eventually be new jobs, markets, and industries.

For business and government, 2024 must be the year of adoption, where we move from the experimentation phase to the deployment phase. With the right vision and approach to responsible AI adoption, we will begin to see widespread economic benefits of this technology in the next three years, with many more years of sustained growth and prosperity to come.

Rob Thomas is SVP of Software and Chief Commercial Officer at IBM.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

More here:
AI is pervasive. Here's when we'll see its real economic benefits materialize - Fortune

Read More..

Dr. Scott Brennen of UNC-CH’s Center on Technology Policy discusses Artificial Intelligence NC Newsline – NC Newsline

Artificial intelligence, or AI. It's a topic that generates a lot of talk, a lot of hope, and a lot of concerns these days.

On the positive side, AI has the potential to aid in solving an array of vexing societal problems and to help average people work better and faster in numerous ways.

On the other hand, of course, AI also raises real concerns both with respect to the ability of humans to control and manage it and the possibility that it will be abused by bad actors in order to mislead people with false or phony communications. This latter concern is of particular relevance as Americans prepare to vote in the 2024 election. Fortunately, a lot of smart people are paying close attention to these kinds of issues, and NC Newsline recently caught up with one of them for a special extended conversation on the topic: Dr. Scott Brennen, head of online expression policy at the UNC-Chapel Hill Center on Technology Policy.

In Part One of our conversation, we delved into the subject of artificial intelligence (or AI) and some of the good and bad things it can do when it comes to communication. In Part Two of our chat, we continued this discussion with a special focus on political communication both with respect to the 2024 election and more generally. And as Brennen told me, while he remains optimistic that phony AI messaging poses only a limited risk to the national election, there are risks and some important steps that elected officials can and should consider taking to help preserve the health of our democracy.

Read: The new political ad machine: Policy frameworks for political ads in an age of AI

Excerpt from:
Dr. Scott Brennen of UNC-CH's Center on Technology Policy discusses Artificial Intelligence NC Newsline - NC Newsline

Read More..

CoSN2024: School Technology Leaders to Drill Deeper into Artificial Intelligence – EdTech Magazine: Focus on K-12

The conference's other AI offerings will include a look at prompt engineering; an AI playground focusing on several smaller sessions covering policies and practices; perspectives from three superintendents; an AI readiness workshop specifically for administrators; and a session on student data privacy. AI even takes two slots in CoSN's seven spotlight sessions.

With the Department of Education's guidance on AI last year and this year's release of the National Educational Technology Plan (the nation's flagship educational technology policy document), attendees will want to catch one of two Tuesday sessions that will touch on the NETP to get further insights on federal or state policies and initiatives regarding digital equity and possibly AI.

DISCOVER: What does the National Educational Technology Plan say about the digital divides?

Unfortunately, because K-12 districts are a popular ransomware target, cybersecurity continues to be a top concern for technology leaders. CoSN attendees can attend several workshops to help them shore up their defenses. "Leveraging AI for Strengthening Cybersecurity in School Districts" promises to share AI-driven solutions that can detect, prevent and respond to cyber threats effectively.

Another session, "Defend Your School Like a Pro: The Cybersecurity Rubric for Education Is Thriving, Join the Community!" will address the Cybersecurity Rubric, a tool informed by the National Institute of Standards and Technology and other relevant cybersecurity and privacy frameworks for assessing and improving cybersecurity in schools.

"Taking a Deeper Look at National Cybersecurity Standards" will examine several federal cybersecurity standards and guidance from other organizations to help K-12 IT leaders build or improve their cybersecurity plans.

Another cybersecurity-focused session, "5 Strategic Google & Microsoft K-12 Security & Safety Investigations," will feature panelists discussing risk assessment, threat detection and incident response strategies tailored to Google Workspace for Education and Microsoft 365.

Of course, funding is one of the major sticking points in improving cybersecurity in schools. For tips on how to relieve that pressure, attendees may want to check out "Charting the Course: A Post-ESSER Roadmap to Optimizing Edtech ROI" and "Equitable, Innovative Learning Opportunities: Mastering E-rate and Ed Tech Policy."

RELATED: Schools can win federal cybersecurity funding.

CoSN2024 is not only about professional development. The conference also offers attendees of all backgrounds many opportunities to network with peers in K-12. There will be a large district summit, a CTO forum, a digital equity reception, a women in technology breakfast, and opportunities for people of color and members of the LGBTQ communities to connect.

Join EdTech as we provide written coverage of CoSN2024. Bookmark this page and follow us on X (formerly Twitter) @EdTech_K12.

Read the rest here:
CoSN2024: School Technology Leaders to Drill Deeper into Artificial Intelligence - EdTech Magazine: Focus on K-12

Read More..

Why Jon Stewart and Apple parted ways – Fortune

Jon Stewart is finally talking about why he and Apple parted ways so suddenly and surprisingly.

On Monday's episode of The Daily Show, Stewart welcomed Lina Khan and told the Federal Trade Commission (FTC) chair he had wanted to host her on his Apple TV+ program, but Apple "asked us not to do it."

"They literally said, 'Please don't talk to her,'" Stewart said. "Having nothing to do with what you do for a living, I don't think they cared for you."

Additionally, Stewart said Apple refused to let him discuss artificial intelligence on The Problem With Jon Stewart.

Apple did not respond to Fortune's request for comment.

The conversation with Khan came after Stewart talked about AI in the opening moments of the show. Later, he noted to his guest: "They wouldn't let us do even that dumb thing we just did in the first act on AI. Like, what is that sensitivity? Why are they so afraid to even have these conversations out in the public sphere?"

Apple and Jon Stewart's relationship came to an abrupt end last October. The Problem With Jon Stewart, which was Stewart's first regular television role in six years, following his initial 16-year run hosting his Comedy Central topical comedy show, never shied away from controversial topics, tackling issues including gender identity and gun control, with many episodes going viral. The final season was nominated for an Emmy in the outstanding talk series category.

Apple is facing an antitrust lawsuit from the U.S. Justice Department and 16 states, which accuses Apple of monopolizing the smartphone market. Khan's FTC is not a party to that suit.

Read more:
Why Jon Stewart and Apple parted ways - Fortune

Read More..

U.S., U.K. Announce Partnership to Safety Test AI Models | TIME – TIME

The U.K. and U.S. governments announced Monday they will work together in safety testing the most powerful artificial intelligence models. An agreement, signed by Michelle Donelan, the U.K. Secretary of State for Science, Innovation and Technology, and U.S. Secretary of Commerce Gina Raimondo, sets out a plan for collaboration between the two governments.

"I think of [the agreement] as marking the next chapter in our journey on AI safety, working hand in glove with the United States government," Donelan told TIME in an interview at the British Embassy in Washington, D.C. on Monday. "I see the role of the United States and the U.K. as being the real driving force in what will become a network of institutes eventually."

The U.K. and U.S. AI Safety Institutes were established just one day apart, around the inaugural AI Safety Summit hosted by the U.K. government at Bletchley Park in November 2023. While the two organizations' cooperation was announced at the time of their creation, Donelan says that the new agreement "formalizes and puts meat on the bones" of that cooperation. She also said it offers the opportunity for them, the United States government, "to lean on us a little bit in the stage where they're establishing and formalizing their institute, because ours is up and running and fully functioning."

The two AI safety testing bodies will develop a common approach to AI safety testing that involves using the same methods and underlying infrastructure, according to a news release. The bodies will look to exchange employees and share information with each other in accordance with national laws and regulations, and contracts. The release also stated that the institutes intend to perform a joint testing exercise on an AI model available to the public.

"The U.K. and the United States have always been clear that ensuring the safe development of AI is a shared global issue," said Secretary Raimondo in a press release accompanying the partnership's announcement. "Reflecting the importance of ongoing international collaboration, today's announcement will also see both countries sharing vital information about the capabilities and risks associated with AI models and systems, as well as fundamental technical research on AI safety and security."

Safety tests such as those being developed by the U.K. and U.S. AI Safety Institutes are set to play an important role in efforts by lawmakers and tech company executives to mitigate the risks posed by rapidly progressing AI systems. OpenAI and Anthropic, the companies behind the chatbots ChatGPT and Claude, respectively, have published detailed plans for how they expect safety tests to inform their future product development. The recently passed E.U. AI Act and U.S. President Joe Biden's executive order on AI both require companies developing powerful AI models to disclose the results of safety tests.

Read More: Nobody Knows How to Safety-Test AI

The U.K. government under Prime Minister Rishi Sunak has played a leading role in marshaling an international response to the most powerful AI models, often referred to as frontier AI, convening the first AI Safety Summit and committing £100 million ($125 million) to the U.K. AI Safety Institute. The U.S., however, despite its economic might and the fact that almost all leading AI companies are based on its soil, has so far committed $10 million to the U.S. AI Safety Institute. (The National Institute of Standards and Technology, the government agency that houses the U.S. AI Safety Institute, suffers from chronic underinvestment.) Donelan rejected the suggestion that the U.S. is failing to pull its weight, arguing that the $10 million is not a fair representation of the resources being dedicated to AI across the U.S. government.

"They are investing time and energy in this agenda," said Donelan, fresh off a meeting with Raimondo, who Donelan says "fully appreciates the need for us to work together on gripping the risks to seize the opportunities." Donelan argues that in addition to the $10 million in funding for the U.S. AI Safety Institute, the U.S. government is also tapping into the wealth of expertise that already exists across government.

Despite its leadership on some aspects of AI, the U.K. government has decided not to pass legislation that would mitigate the risks from frontier AI. Donelan's opposite number, the U.K. Labour Party's Shadow Secretary of State for Science, Innovation and Technology Peter Kyle, has said repeatedly that a Labour government would pass laws mandating that tech companies share the results of AI safety tests with the government, rather than relying on voluntary agreements. Donelan, however, says the U.K. will refrain from regulating AI in the short term to avoid curbing industry growth or passing laws that are made obsolete by technological progress.

"We don't think it would be right to rush to legislate. We've been very outspoken on that," Donelan told TIME. "That is the area where we do diverge from the E.U. We want to be fostering innovation, we want to be getting this sector to grow in the U.K."

The memorandum commits the two countries to developing similar partnerships with other countries. Donelan says that a number of nations are either in the process of or thinking about setting up their own institutes, although she did not specify which. (Japan announced the establishment of its own AI Safety Institute in February.)

"AI does not respect geographical boundaries," said Donelan. "We are going to have to work internationally on this agenda, and collaborate and share information and share expertise if we are going to really make sure that this is a force for good for mankind."

Read more from the original source:
U.S., U.K. Announce Partnership to Safety Test AI Models | TIME - TIME

Read More..

AI-generated disinformation will impact the 2024 election – KGW.com

AI-created images, video, audio and text are already being used to spread disinformation heading into the 2024 election. Here's what to know and how to spot it.

PORTLAND, Ore. Most Americans are aware by now that artificial intelligence is here. They may have seen examples of, or even used, a large language model like the one behind ChatGPT or an image generator like Midjourney. But they may not be aware of how rapidly the technology is improving and the potential it has to disrupt something like the 2024 election campaign season.

The concept of a "deepfake" has been around for some time, but AI tech is rapidly making the creation of that material quicker, easier and cheaper to produce.

It's perhaps best demonstrated with audio. We used an AI audio generator and fed it a few minutes of The Story's Pat Dooris speaking. We then typed in a couple of sentences of dialogue. The generator was able to replicate Dooris' voice, now speaking the dialogue we'd fed it, with pretty astounding accuracy. The whole process took about two minutes and cost less than $5.
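In outline, that workflow has two steps: enroll a voice from a few minutes of reference audio, then synthesize arbitrary text in that voice. The Python sketch below is purely hypothetical; VoiceCloningClient and its methods are invented stand-ins to show the shape of the workflow, not the tool KGW used or any real vendor's API.

```python
# Hypothetical sketch of the enroll-then-synthesize workflow described
# above. VoiceCloningClient and its methods are invented for illustration
# and do not correspond to any real service.

class VoiceCloningClient:
    def enroll(self, reference_audio_path: str) -> str:
        """Build a voice profile from a few minutes of reference speech."""
        raise NotImplementedError("stand-in for a real service call")

    def synthesize(self, voice_id: str, text: str) -> bytes:
        """Return audio of the enrolled voice speaking the given text."""
        raise NotImplementedError("stand-in for a real service call")

client = VoiceCloningClient()
# Step 1: a few minutes of recorded speech is enough to enroll a voice.
# voice_id = client.enroll("reference_speech.wav")
# Step 2: type a couple of sentences and get them back in that voice.
# audio = client.synthesize(voice_id, "A couple of sentences of dialogue.")
```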

Much of the AI content already making the rounds will have certain "tells" upon closer observation. But many people encounter media like this while scrolling on their phones; they might only see or hear it briefly, and often on a very small screen.

So what can you do to inoculate yourself against AI disinformation? The best rule of thumb is to be skeptical about the content that you see or hear.

Al Tompkins (that's Al as in Albert, not AI as in artificial intelligence) is a well-known journalism consultant. He's taught professional reporters the world over, and is currently teaching at Syracuse University in New York. He recently led news personnel from throughout KGW's parent company, TEGNA, in a training on disinformation and AI.

"This technology has moved so fast, I mean faster than any technology in my lifetime," Tompkins told Dooris in a recent interview. "We've gone from, you know, basically Alexander Graham Bell to cell phones in maybe a year or so. So the technology is moving very, very quickly. And here's what's interesting that this technology is moving so fast and causing so much disruption already, but the tools that we need to detect it have not moved with the same speed. And I compare this, sometimes, if you remember when we really started using email all the time and it really took a number of years of getting so much spam email for us to start getting email spam detectors."

In the not-so-distant future, Tompkins suggested, there may be software systems better-equipped to alert us to AI fakes. But right now there are few, if any, that can perform with much accuracy.

With AI detection tech lagging behind, it's best that people learn how to look for deepfakes on their own. In fact, it's something people should probably have learned yesterday, because some of these AI tools have been around for a while.

"Photoshop tools and so on have used a version of AI, a kind of AI, for quite a number of years," Tompkins explained. "AI in its most basic form only does one thing ... But the newer versions of AI, now the ones that are most disruptive, are what we call multimodal AI, so they can change audio and video and text simultaneously. It's not just one thing, it's a bunch of things that you can change all at one time."

Tompkins said he's been tracking the development of AI for years. Most people will have some experience with it, even if they weren't aware of it.

"If, for example, you have a grammar check that comes on and looks at your text and replaces words for you or suggest words, that's a form of artificial intelligence. It's just that it would only do one thing at a time," Tompkins continued. "And this is also a good time for us to say, Pat, that AI isn't the work of the devil. I mean, I think we're going to see that AI actually does some wonderful things and that it's going to make our life much more productive in important ways. Virtually every industry is going to find some useful way of using artificial intelligence if they're not already, because it will take care of things that we need to be taken care of."

There are now some sophisticated programs that can either convincingly alter an existing photo (in Adobe's new Photoshop Beta, for example) or create images wholesale. And fake news websites are already using the latter in particular, passing the images off as real.

Fake audio is already making the rounds as well. Just before the Iowa caucuses this year, an ad running on TV took aim at Donald Trump for attacking the popular conservative governor of Iowa, Kim Reynolds. It featured audio from Trump himself. But it was not, in fact, Trump speaking. The ad, which was put out by supporters of Ron DeSantis, employed an AI-generated voice, although it was "reading" the words that Trump used in a real post on his Truth Social platform.

The more insidious examples probably aren't going to be running on TV. They might instead pop up on social media and quickly make the rounds before anyone's had a chance to fact-check them: hurting a candidate's reputation by putting words in their mouth, for example, or giving voters bad information from an otherwise trusted source.

Spotting AI isn't an exact science, but there are some things to look out for. Because the technology is advancing rapidly, the obvious flaws present in early iterations are becoming less common. For the time being, Tompkins said, AI continues to struggle with things like hands and ears in images of people.

"It turns out that, you know, when they take mug shots, one reason they do a side shot is and in passports too we're looking at ears," Tompkins said. "It turns out that our ears are very unique, and AI has a really big problem making ears ... Sometimes one's way longer than the other .... (in) some they're just misshapen fingers.

"It turns out these sculptors of old knew a lot. Michelangelo knew a lot. And it turns out that they sculpted hands and fingers all the time because they're so difficult to do partly because there's not a consistent algorithm between the size of your fingers, and they grow at different rates as you get older and younger, and so on ... AI often makes big mistakes with fingers. Commonly we'll chop off fingers or we'll add fingers and so on. So, hands are sometimes, they're way too big for the person too, so I call them gorilla hands."

AI image models also struggle with text in images, so a closer look at text in the background of an AI-created image might reveal that it's total nonsense. In general, looking carefully at an AI-generated or edited image may reveal a host of things that just don't quite make sense, which should all be red flags.

Audio recordings created by AI might lack the natural pauses that someone makes to take a breath, or do strange things with cadence, pronunciation or emphasis. They might sound unnaturally flat, lacking in emotion or nuance, or they may be a little too clean for audio created outside of a recording studio.

But again, flaws like these come and go; AI models are getting better every day. The best thing you can do is stop and think about the context. Does this content, whatever form it takes, seem too good to be true? Could it harm reputations, stoke anger or spread fear?

Tompkins explained that disinformation tends to work because of something called confirmation bias. We tend to believe things that agree with our pre-existing views, or that seem to fit with the actual facts of a situation. The more believable a piece of disinformation appears to us, the more likely we are to accept it as fact without taking a closer look or pumping the brakes.

Oregon's senior U.S. Senator, Ron Wyden, is a member of the Senate Committee on Intelligence. He's also worried about how AI could be used to produce deepfakes that impact real-world politics.

"My view is, particularly deepfakes as you see in the political process, are gonna undermine elections, they're gonna undermine democracy. And we're gonna have to take very strong action," Wyden said. "Because already we've got this gigantic misinformation machine out there. And AI just makes it a lot more powerful and easier to use."

Last week, Oregon Gov. Tina Kotek signed two new bills into law on this subject. The first creates a task force to look at the issue, and it was sponsored by state Rep. Daniel Nguyen of Lake Oswego. The second, sponsored by state Sen. Aaron Woods of Wilsonville, requires political campaigns to include a disclaimer in their ads if any part of the commercial has been altered using AI.

Wyden noted that several other states have AI bills, but he thinks it will take national action to really protect voters from deepfakes.

Right now, the environment of disinformation supercharged by AI can seem pretty daunting. But Tompkins warns against just becoming jaded to the issue.

"This election year, it's going to be very tempting for you to say, 'Everybody's a liar, every politician is a liar, everybody's a liar, they're all lying to me. I don't believe anybody. I'm just going to live in my shell and forget about it or I'm never going to change my mind because I have no idea who else is right. So I'm just going to trust my gut and I'm going to quit exploring,'" Tompkins said. "That's not the way to live. Because that is cynicism you don't believe anything.

"Instead, I'd rather us all be skeptics, and that is open to truth. Stay open to truth. Stay open to proof, because that's what the smart people do. Smart people are constantly learning. They're constantly open to evidence. Don't shut down from the evidence when something isn't true but it's widely circulating. I think it's part of our civic duty to call it out, to say, 'You know what, I'm seeing this circulating. It's just not true. And here's how I know it's not.' And that's not being unkind. It's not being rude. You don't have to be mean to anybody. Just say, 'Listen, this is circulating. And here's how I know it's not true.' Or, 'Here are some questions I would ask about that.'"

See more here:
AI-generated disinformation will impact the 2024 election - KGW.com

Read More..

New Hampshire House takes on artificial intelligence in political advertising – WMUR Manchester

Updated: 3:44 PM EDT Mar 29, 2024

Political advertisements featuring deceptive synthetic media would be required to include disclosure language under a bill passed Thursday by the New Hampshire House.

Sophisticated artificial intelligence tools, such as voice-cloning software and image generators, are already in use in elections in the U.S. and around the world, leading to concerns about the rapid spread of misinformation.

In New Hampshire, authorities are investigating robocalls sent to thousands of voters just before the Jan. 21 presidential primary that featured an AI-generated voice mimicking President Joe Biden. Steve Kramer, a political consultant, later said he orchestrated the calls to publicize the potential dangers of artificial intelligence and spur action from lawmakers. But the attorney general's office has said the calls violated the state's voter suppression law.

The bill sent to the Senate on Thursday would require disclosure when deceptive artificial intelligence is used in political advertising within 90 days of an election. Such disclosures would explain that the advertising's image, video or audio "has been manipulated or generated by artificial intelligence technology and depicts speech or conduct that did not occur."

The bill, which passed without debate, includes exemptions for satire or parody.
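Read as pseudocode, the reported rule combines three conditions: deceptive AI-generated media, a political ad within 90 days of an election, and no satire or parody exemption. The Python sketch below encodes that reading for illustration only; it paraphrases this news report, not the bill's statutory text.

```python
# Illustrative encoding of the disclosure rule as reported above:
# deceptive AI media in a political ad within 90 days of an election
# requires a disclaimer, with an exemption for satire or parody.
# This paraphrases the news report, not the bill's actual language.

from datetime import date

def needs_disclosure(ad_date: date, election_date: date,
                     uses_deceptive_ai_media: bool,
                     is_satire_or_parody: bool) -> bool:
    within_window = 0 <= (election_date - ad_date).days <= 90
    return uses_deceptive_ai_media and within_window and not is_satire_or_parody

# An AI-altered ad aired 30 days before the election needs a disclaimer:
print(needs_disclosure(date(2024, 10, 6), date(2024, 11, 5),
                       uses_deceptive_ai_media=True,
                       is_satire_or_parody=False))  # True
```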

Continued here:
New Hampshire House takes on artificial intelligence in political advertising - WMUR Manchester

Read More..

1 Artificial Intelligence Stock You Don’t Want to Overlook – The Motley Fool

This company has been using artificial intelligence for over a decade and has the potential for massive returns.

Artificial intelligence (AI) is growing like crazy, but there are very few companies making a profit on AI today. One exception is Matterport (MTTR -5.96%), which has been using AI for a decade and recently announced some new advancements in the technology.

In this video, Travis Hoium interviews Matterport CEO RJ Pittman and asks about the future of AI at Matterport.

*Stock prices used were end-of-day prices of March 28, 2024. The video was published on March 29, 2024.

Travis Hoium has positions in Matterport. The Motley Fool has positions in and recommends Matterport and Nvidia. The Motley Fool has a disclosure policy. Travis Hoium is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through their link they will earn some extra money that supports their channel. Their opinions remain their own and are unaffected by The Motley Fool.

See the rest here:
1 Artificial Intelligence Stock You Don't Want to Overlook - The Motley Fool

Read More..

IRE Expo brings students and engineering together | News | mesabitribune.com – Mesabi Tribune

Link:

IRE Expo brings students and engineering together | News | mesabitribune.com - Mesabi Tribune

Read More..