Category Archives: Artificial Intelligence

U.S., U.K. Announce Partnership to Safety Test AI Models | TIME – TIME

The U.K. and U.S. governments announced Monday they will work together in safety testing the most powerful artificial intelligence models. An agreement, signed by Michelle Donelan, the U.K. Secretary of State for Science, Innovation and Technology, and U.S. Secretary of Commerce Gina Raimondo, sets out a plan for collaboration between the two governments.

"I think of [the agreement] as marking the next chapter in our journey on AI safety, working hand in glove with the United States government," Donelan told TIME in an interview at the British Embassy in Washington, D.C. on Monday. "I see the role of the United States and the U.K. as being the real driving force in what will become a network of institutes eventually."

The U.K. and U.S. AI Safety Institutes were established just one day apart, around the inaugural AI Safety Summit hosted by the U.K. government at Bletchley Park in November 2023. While the two organizations' cooperation was announced at the time of their creation, Donelan says that the new agreement "formalizes and puts meat on the bones" of that cooperation. She also said it offers the opportunity for them, the United States government, "to lean on us a little bit in the stage where they're establishing and formalizing their institute, because ours is up and running and fully functioning."

The two AI safety testing bodies will develop a common approach to AI safety testing that involves using the same methods and underlying infrastructure, according to a news release. The bodies will look to exchange employees and share information with each other in accordance with national laws and regulations, and contracts. The release also stated that the institutes intend to perform a joint testing exercise on an AI model available to the public.

"The U.K. and the United States have always been clear that ensuring the safe development of AI is a shared global issue," said Secretary Raimondo in a press release accompanying the partnership's announcement. "Reflecting the importance of ongoing international collaboration, today's announcement will also see both countries sharing vital information about the capabilities and risks associated with AI models and systems, as well as fundamental technical research on AI safety and security."

Safety tests such as those being developed by the U.K. and U.S. AI Safety Institutes are set to play an important role in efforts by lawmakers and tech company executives to mitigate the risks posed by rapidly progressing AI systems. OpenAI and Anthropic, the companies behind the chatbots ChatGPT and Claude, respectively, have published detailed plans for how they expect safety tests to inform their future product development. The recently passed E.U. AI Act and U.S. President Joe Biden's executive order on AI both require companies developing powerful AI models to disclose the results of safety tests.

Read More: Nobody Knows How to Safety-Test AI

The U.K. government under Prime Minister Rishi Sunak has played a leading role in marshaling an international response to the most powerful AI models, often referred to as "frontier AI," convening the first AI Safety Summit and committing £100 million ($125 million) to the U.K. AI Safety Institute. The U.S., however, despite its economic might and the fact that almost all leading AI companies are based on its soil, has so far committed $10 million to the U.S. AI Safety Institute. (The National Institute of Standards and Technology, the government agency that houses the U.S. AI Safety Institute, suffers from chronic underinvestment.) Donelan rejected the suggestion that the U.S. is failing to pull its weight, arguing that the $10 million is not a fair representation of the resources being dedicated to AI across the U.S. government.

"They are investing time and energy in this agenda," said Donelan, fresh off a meeting with Raimondo, who Donelan says "fully appreciates the need for us to work together on gripping the risks to seize the opportunities." Donelan argues that in addition to the $10 million in funding for the U.S. AI Safety Institute, the U.S. government is also "tapping into the wealth of expertise across government that already exists."

Despite its leadership on some aspects of AI, the U.K. government has decided not to pass legislation that would mitigate the risks from frontier AI. Donelan's opposite number, the U.K. Labour Party's Shadow Secretary of State for Science, Innovation and Technology Peter Kyle, has said repeatedly that a Labour government would pass laws mandating that tech companies share the results of AI safety tests with the government, rather than relying on voluntary agreements. Donelan, however, says the U.K. will refrain from regulating AI in the short term to avoid curbing industry growth or passing laws that are made obsolete by technological progress.

"We don't think it would be right to rush to legislate. We've been very outspoken on that," Donelan told TIME. "That is the area where we do diverge from the E.U. We want to be fostering innovation, we want to be getting this sector to grow in the U.K."

The memorandum commits the two countries to developing similar partnerships with other countries. Donelan says that a number of nations are either in the process of or thinking about setting up their own institutes, although she did not specify which. (Japan announced the establishment of its own AI Safety Institute in February.)

"AI does not respect geographical boundaries," said Donelan. "We are going to have to work internationally on this agenda, and collaborate and share information and share expertise if we are going to really make sure that this is a force for good for mankind."

Read more from the original source:
U.S., U.K. Announce Partnership to Safety Test AI Models | TIME - TIME

AI-generated disinformation will impact the 2024 election – KGW.com

AI-created images, video, audio and text are already being used to spread disinformation heading into the 2024 election. Here's what to know and how to spot it.

PORTLAND, Ore. – Most Americans are aware by now that artificial intelligence is here. They may have seen examples of, or even used, a large language model like the one behind ChatGPT or an image generator like Midjourney. But they may not be aware of how rapidly the technology is improving and the potential it has to disrupt something like the 2024 election campaign season.

The concept of a "deepfake" has been around for some time, but AI tech is rapidly making the creation of that material quicker, easier and cheaper to produce.

It's perhaps best demonstrated with audio. We used an AI audio generator and fed it a few minutes of The Story's Pat Dooris speaking. We then typed in a couple sentences of dialogue. It was able to replicate Dooris' voice, now speaking the dialogue we'd fed it, with pretty astounding accuracy. The whole process took about 2 minutes and cost less than $5 to produce.

Much of the AI content already making the rounds will have certain "tells" upon closer observation. But many people encounter media like this while scrolling on their phones; they might only see or hear it briefly, and often on a very small screen.

So what can you do to inoculate yourself against AI disinformation? The best rule of thumb is to be skeptical about the content that you see or hear.

Al Tompkins (that's Al as in Albert, not AI as in artificial intelligence) is a well-known journalism consultant. He's taught professional reporters the world over, and is currently teaching at Syracuse University in New York. He recently led news personnel from throughout KGW's parent company, TEGNA, in a training on disinformation and AI.

"This technology has moved so fast, I mean faster than any technology in my lifetime," Tompkins told Dooris in a recent interview. "We've gone from, you know, basically Alexander Graham Bell to cell phones in maybe a year or so. So the technology is moving very, very quickly. And here's what's interesting that this technology is moving so fast and causing so much disruption already, but the tools that we need to detect it have not moved with the same speed. And I compare this, sometimes, if you remember when we really started using email all the time and it really took a number of years of getting so much spam email for us to start getting email spam detectors."

In the not-so-distant future, Tompkins suggested, there may be software systems better-equipped to alert us to AI fakes. But right now there are few, if any, that can perform with much accuracy.

With AI detection tech lagging behind, it's best that people learn how to look for deepfakes on their own. In fact, it's something people should probably have learned yesterday, because some of these AI tools have been around for a while.

"Photoshop tools and so on have used a version of AI, a kind of AI, for quite a number of years," Tompkins explained. "AI in its most basic form only does one thing ... But the newer versions of AI, now the ones that are most disruptive, are what we call multimodal AI, so they can change audio and video and text simultaneously. It's not just one thing, it's a bunch of things that you can change all at one time."

Tompkins said he's been tracking the development of AI for years. Most people will have some experience with it, even if they weren't aware of it.

"If, for example, you have a grammar check that comes on and looks at your text and replaces words for you or suggest words, that's a form of artificial intelligence. It's just that it would only do one thing at a time," Tompkins continued. "And this is also a good time for us to say, Pat, that AI isn't the work of the devil. I mean, I think we're going to see that AI actually does some wonderful things and that it's going to make our life much more productive in important ways. Virtually every industry is going to find some useful way of using artificial intelligence if they're not already, because it will take care of things that we need to be taken care of."

There are now some sophisticated programs that can either convincingly alter an existing photo (in Adobe's new Photoshop Beta, for example) or create images wholesale. And fake news websites are already using the latter in particular, passing the images off as real.

Fake audio is already making the rounds as well. Just before the Iowa caucuses this year, an ad running on TV took aim at Donald Trump for attacking the popular conservative governor of Iowa, Kim Reynolds. It featured audio from Trump himself. But it was not, in fact, Trump speaking. The ad, which was put out by supporters of Ron DeSantis, employed an AI-generated voice, although it was "reading" the words that Trump used in a real post on his Truth Social platform.

The more insidious examples probably aren't going to be running on TV. They might instead pop up on social media and quickly make the rounds before anyone's had a chance to fact-check them, hurting a candidate's reputation by putting words in their mouth, for example, or giving voters bad information from an otherwise trusted source.

Spotting AI isn't an exact science, but there are some things to look out for. Because the technology is advancing rapidly, the obvious flaws present in early iterations are becoming less common. For the time being, Tompkins said, AI continues to struggle with things like hands and ears in images of people.

"It turns out that, you know, when they take mug shots, one reason they do a side shot is and in passports too we're looking at ears," Tompkins said. "It turns out that our ears are very unique, and AI has a really big problem making ears ... Sometimes one's way longer than the other .... (in) some they're just misshapen fingers.

"It turns out these sculptors of old knew a lot. Michelangelo knew a lot. And it turns out that they sculpted hands and fingers all the time because they're so difficult to do partly because there's not a consistent algorithm between the size of your fingers, and they grow at different rates as you get older and younger, and so on ... AI often makes big mistakes with fingers. Commonly we'll chop off fingers or we'll add fingers and so on. So, hands are sometimes, they're way too big for the person too, so I call them gorilla hands."

AI image models also struggle with text in images, so a closer look at text in the background of an AI-created image might reveal that it's total nonsense. In general, looking carefully at an AI-generated or edited image may reveal a host of things that just don't quite make sense, which should all be red flags.

Audio recordings created by AI might lack the natural pauses that someone makes to take a breath, or do strange things with cadence, pronunciation or emphasis. They might sound unnaturally flat, lacking in emotion or nuance, or they may be a little too clean for audio created outside of a recording studio.

But again, flaws like these come and go; AI models are getting better every day. The best thing you can do is stop and think about the context. Does this content, whatever form it takes, seem too good to be true? Could it harm reputations, stoke anger or spread fear?

Tompkins explained that disinformation tends to work because of something called confirmation bias. We tend to believe things that agree with our pre-existing views, or that seem to fit with the actual facts of a situation. The more believable a piece of disinformation appears to us, the more likely we are to accept it as fact without taking a closer look or pumping the brakes.

Oregon's senior U.S. Senator, Ron Wyden, is a member of the Senate Committee on Intelligence. He's also worried about how AI could be used to produce deepfakes that impact real-world politics.

"My view is, particularly deepfakes as you see in the political process, are gonna undermine elections, they're gonna undermine democracy. And we're gonna have to take very strong action," Wyden said. "Because already we've got this gigantic misinformation machine out there. And AI just makes it a lot more powerful and easier to use."

Last week, Oregon Gov. Tina Kotek signed two new bills into law on this subject. The first creates a task force to look at the issue, and it was sponsored by state Rep. Daniel Nguyen of Lake Oswego. The second, sponsored by state Sen. Aaron Woods of Wilsonville, requires political campaigns to include a disclaimer in their ads if any part of the commercial has been altered using AI.

Wyden noted that several other states have AI bills, but he thinks it will take national action to really protect voters from deepfakes.

Right now, the environment of disinformation supercharged by AI can seem pretty daunting. But Tompkins warns against just becoming jaded to the issue.

"This election year, it's going to be very tempting for you to say, 'Everybody's a liar, every politician is a liar, everybody's a liar, they're all lying to me. I don't believe anybody. I'm just going to live in my shell and forget about it or I'm never going to change my mind because I have no idea who else is right. So I'm just going to trust my gut and I'm going to quit exploring,'" Tompkins said. "That's not the way to live. Because that is cynicism you don't believe anything.

"Instead, I'd rather us all be skeptics, and that is open to truth. Stay open to truth. Stay open to proof, because that's what the smart people do. Smart people are constantly learning. They're constantly open to evidence. Don't shut down from the evidence when something isn't true but it's widely circulating. I think it's part of our civic duty to call it out, to say, 'You know what, I'm seeing this circulating. It's just not true. And here's how I know it's not.' And that's not being unkind. It's not being rude. You don't have to be mean to anybody. Just say, 'Listen, this is circulating. And here's how I know it's not true.' Or, 'Here are some questions I would ask about that.'"

See more here:
AI-generated disinformation will impact the 2024 election - KGW.com

New Hampshire House takes on artificial intelligence in political advertising – WMUR Manchester

New Hampshire House takes on artificial intelligence in political advertising

Updated: 3:44 PM EDT Mar 29, 2024

Political advertisements featuring deceptive synthetic media would be required to include disclosure language under a bill passed Thursday by the New Hampshire House.

Sophisticated artificial intelligence tools, such as voice-cloning software and image generators, are already in use in elections in the U.S. and around the world, leading to concerns about the rapid spread of misinformation.

In New Hampshire, authorities are investigating robocalls sent to thousands of voters just before the Jan. 21 presidential primary that featured an AI-generated voice mimicking President Joe Biden. Steve Kramer, a political consultant, later said he orchestrated the calls to publicize the potential dangers of artificial intelligence and spur action from lawmakers. But the attorney general's office has said the calls violated the state's voter suppression law.

The bill sent to the Senate on Thursday would require disclosure when deceptive artificial intelligence is used in political advertising within 90 days of an election. Such disclosures would explain that the advertising's image, video or audio has been manipulated or generated by artificial intelligence technology and depicts speech or conduct that did not occur.

The bill, which passed without debate, includes exemptions for satire or parody.

Continued here:
New Hampshire House takes on artificial intelligence in political advertising - WMUR Manchester

1 Artificial Intelligence Stock You Don’t Want to Overlook – The Motley Fool

This company has been using artificial intelligence for over a decade and has the potential for massive returns.

Artificial intelligence (AI) is growing like crazy, but there are very few companies making a profit on AI today. One exception is Matterport (MTTR -5.96%), which has been using AI for a decade and recently announced some new advancements in the technology.

In this video, Travis Hoium interviews Matterport CEO RJ Pittman and asks about the future of AI at Matterport.

*Stock prices used were end-of-day prices of March 28, 2024. The video was published on March 29, 2024.

Travis Hoium has positions in Matterport. The Motley Fool has positions in and recommends Matterport and Nvidia. The Motley Fool has a disclosure policy. Travis Hoium is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through their link they will earn some extra money that supports their channel. Their opinions remain their own and are unaffected by The Motley Fool.

See the rest here:
1 Artificial Intelligence Stock You Don't Want to Overlook - The Motley Fool

AI generated deepfake of Kari Lake used to highlight dangers in election – The Washington Post

Hank Stephenson has a finely tuned B.S. detector. The longtime journalist has made a living sussing out lies and political spin.

But even he was fooled at first when he watched the video of one of his home state's most prominent congressional candidates.

There was Kari Lake, the Republican Senate hopeful from Arizona, on his phone screen, speaking words written by a software engineer. Stephenson was watching a deepfake, an artificial-intelligence-generated video produced by his news organization, Arizona Agenda, to underscore the dangers of AI misinformation in a pivotal election year.

"When we started doing this, I thought it was going to be so bad it wouldn't trick anyone, but I was blown away," Stephenson, who co-founded the site in 2021, said in an interview. "And we are unsophisticated. If we can do this, then anyone with a real budget can do a good enough job that it'll trick you, it'll trick me, and that is scary."

As a tight 2024 presidential election draws ever nearer, experts and officials are increasingly sounding the alarm about the potentially devastating power of AI deepfakes, which they fear could further corrode the country's sense of truth and destabilize the electorate.

There are signs that AI and the fear surrounding it is already having an impact on the race. Late last year, former president Donald Trump falsely accused the producers of an ad, which showed his well-documented public gaffes, of trafficking in AI-generated content. Meanwhile, actual fake images of Trump and other political figures, designed both to boost and to bruise, have gone viral again and again, sowing chaos at a crucial point in the election cycle.

Now some officials are rushing to respond. In recent months, the New Hampshire Justice Department announced it was investigating a spoofed robocall featuring an AI-generated voice of President Biden; Washington state warned its voters to be on the lookout for deepfakes; and lawmakers from Oregon to Florida passed bills restricting the use of such technology in campaign communications.

And in Arizona, a key swing state in the 2024 contest, the top elections official used deepfakes of himself in a training exercise to prepare staff for the onslaught of falsehoods to come. The exercise inspired Stephenson and his colleagues at the Arizona Agenda, whose daily newsletter seeks to explain complex political stories to an audience of some 10,000 subscribers.

They brainstormed ideas for about a week and enlisted the help of a tech-savvy friend. On Friday, Stephenson published the piece, which included three deepfake clips of Lake.

It begins with a ploy, telling readers that Lake, a hard-right candidate whom the Arizona Agenda has pilloried in the past, had decided to record a testimonial about how much she enjoys the outlet. But the video quickly pivots to the giveaway punchline.

"Subscribe to the Arizona Agenda for hard-hitting real news," the fake Lake says to the camera, before adding: "And a preview of the terrifying artificial intelligence coming your way in the next election, like this video, which is an AI deepfake the Arizona Agenda made to show you just how good this technology is getting."

By Saturday, the videos had generated tens of thousands of views and one very unhappy response from the real Lake, whose campaign attorneys sent the Arizona Agenda a cease-and-desist letter. The letter demanded the "immediate removal of the aforementioned deep fake videos from all platforms where they have been shared or disseminated." If the outlet refuses to comply, the letter said, Lake's campaign would pursue "all available legal remedies."

A spokesperson for the campaign declined to comment when contacted on Saturday.

Stephenson said he was consulting with lawyers about how to respond, but as of Saturday afternoon, he was not planning to remove the videos. The deepfakes, he said, are good learning devices, and he wants to arm readers with the tools to detect such forgeries before they're bombarded with them as the election season heats up.

"Fighting this new wave of technological disinformation this election cycle is on all of us," Stephenson wrote in the article accompanying the clips. "Your best defense is knowing what's out there and using your critical thinking."

Hany Farid, a professor at the University of California at Berkeley who studies digital propaganda and misinformation, said the Arizona Agenda videos were useful public service announcements that appeared carefully crafted to limit unintended consequences. Even so, he said, outlets should be wary of how they frame their deepfake reportage.

"I'm supportive of the PSAs, but there's a balance," Farid said. "You don't want your readers and viewers to look at everything that doesn't conform to their worldview as fake."

Deepfakes present two distinct threat vectors, Farid said. First, bad actors can generate false videos of people saying things they never actually said; and second, people can more credibly dismiss any real embarrassing or incriminating footage as fake.

This dynamic, Farid said, has been especially apparent during Russia's invasion of Ukraine, a conflict rife with misinformation. Early in the war, Ukraine promoted a deepfake showing Paris under attack, urging world leaders to react to the Kremlin's aggression with as much urgency as they might show if the Eiffel Tower had been targeted.

It was a potent message, Farid said, but it opened the door for Russia's baseless claims that subsequent videos from Ukraine, which showed evidence of Kremlin war crimes, were similarly feigned.

"I am worried that everything is becoming suspect," he said.

Stephenson, whose backyard is a political battleground that lately has become a crucible of conspiracy theories and false claims, has a similar fear.

"For many years now we've been battling over what's real," he said. "Objective facts can be written off as fake news, and now objective videos will be written off as deep fakes, and deep fakes will be treated as reality."

Researchers like Farid are feverishly working on software that would allow journalists and others to more easily detect deepfakes. Farid said the suite of tools he currently uses easily classified the Arizona Agenda video as bogus, a hopeful sign for the coming flood of fakes. However, deepfake technology is improving at a rapid rate, and future phonies could be much harder to spot.

And even Stephenson's admittedly sub-par deepfake managed to dupe a few people: After blasting out Friday's newsletter with the headline "Kari Lake does us a solid," a handful of paying readers unsubscribed. Most likely, Stephenson suspects, they thought Lake's endorsement was real.

Maegan Vazquez contributed to this report.

View post:
AI generated deepfake of Kari Lake used to highlight dangers in election - The Washington Post

Tennessee Makes A.I. an Outlaw to Protect Its Country Music and More – The New York Times

The floor in front of the stage at Roberts Western World, a beloved lower Broadway honky-tonk in Nashville, was packed on Thursday afternoon.

But even with the country music superstar Luke Bryan and multiple other musicians on hand, the center of attention was Gov. Bill Lee and his ELVIS Act.

And Mr. Lee did not disappoint, signing into law the Ensuring Likeness, Voice and Image Security Act, a first-in-the-nation bill that aims to protect musicians from artificial intelligence by adding penalties for copying a performer's voice without permission.

"There are certainly many things that are positive about what A.I. does," Mr. Lee told the crowd. But, he added, when "fallen into the hands of bad actors, it can destroy this industry."

The use of A.I. technology and its rapid fire improvement in mimicking public figures has led several legislatures to move to tighten regulations over A.I., particularly when it comes to election ads. The White House late last year imposed a sweeping executive order to push for more guardrails as Congress wrestles with federal regulations.

Read the original here:
Tennessee Makes A.I. an Outlaw to Protect Its Country Music and More - The New York Times

Using novel micropore technology combined with artificial intelligence to differentiate Staphylococcus aureus and … – Nature.com

A micropore device

Micropores (3 μm in diameter) were fabricated on a 50-nm-thick silicon nitride film cast on a silicon substrate. The silicon substrate was sandwiched between a 25 mm × 25 mm, 0.5-mm-thick plastic channel. This structure is termed a micropore module or a module (Fig. 1a, b). Bacterial suspensions (18 μL) in 1× phosphate buffered saline (PBS) and PBS (15 μL) were introduced into the cis and trans chambers, respectively. We measured ionic current–time waveforms by applying a voltage of 0.1 V between the Ag/AgCl electrodes placed in the flow channel. When the bacteria passed through the micropores, the ionic current decreased owing to obstruction of the flowing ions (Fig. 1c). S. aureus and S. epidermidis are spherical in shape with a diameter ranging from 0.6 to 1.0 μm under a low vacuum of 98 hPa. We observed no specific differences in the bacterial structures through scanning electron microscopy (SEM) under the culture conditions (Fig. 1d–g). The waveforms were collected in the data server and analysed, as described in the Methods section. The number of available waveforms for the bacterial suspensions was independent of their optical densities measured at a wavelength of 600 nm (OD600) (Fig. 1h). Therefore, sufficient bacteria were present in the suspensions regardless of their OD600 values.

(a) The structure of the micropore module. The micropore module is 25 mm × 25 mm in size and 0.5 mm thick. The bacterial suspension is introduced from the cis channel, and PBS is introduced from the trans channel. (b) An optical image of the pore of the module. The optical microscopic examination of the micropore module suggests that the 3-μm-diameter micropore is in the centre of the silicon substrate. (c) A schema of the micropore. When the bacteria pass through the micropore, the ionic current decreases because of obstruction of the flowing ions. The processing software records the change in ionic current as a waveform. (d) Scanning electron microscope (SEM) observation of S. epidermidis, ×14,000. (e) SEM observation of S. epidermidis, ×60,000. (f) SEM observation of S. aureus, ×14,000. (g) SEM observation of S. aureus, ×60,000. (h) A scatter diagram of the OD600 of the bacterial suspensions and their average pulse counts. No correlations were observed between the OD600 and pulse counts of the bacterial suspensions.

Ionic current measurements were performed on S. aureus and S. epidermidis cultures using micropore devices placed under an optical microscope. The base current was approximately 0.4 μA. Using the equations for the access resistance (1/σD) and micropore resistance (4L/σπD²), the base current was estimated to be 0.5 μA when the ionic conductivity (σ) of 1× PBS was 1.61 S m⁻¹, the average diameter of the micropores was D = 3 μm, and the thickness of the micropores was L = 50 nm. The ionic current obtained was consistent with the theoretically estimated ionic current. With application of a voltage of 0.1 V, a single bacterium (black spot indicated by a yellow arrow, Fig. 2a) was pulled into the micropore. Bacteria within an approximate radius of 15 μm from the micropore were pulled in at an accelerated rate as they approached the micropore (Fig. 2a⟨1⟩). When one bacterium passed through the micropore, we observed a single ionic current–time waveform, and the ionic current did not change until it entered the micropore. Bacteria within an approximate radius of 40 μm from the micropore were drawn to the micropore by Brownian motion (Fig. 2a⟨2⟩). The duration of Brownian motion was >1 min while approaching the micropores. When a bacterium reached an approximate 15 μm radius from the micropore threshold, it was rapidly pulled into the micropore, which suggested it was being pulled under the influence of electric forces.
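As a rough plausibility check, the base-current estimate above can be reproduced from the quoted quantities. The sketch below assumes the simple series model of an access resistance 1/(σD) and a cylindrical pore resistance 4L/(σπD²); the variable names are illustrative and not taken from the paper.

```python
# Hedged sketch: estimate the open-pore (base) ionic current from the access
# resistance and pore resistance quoted above. Parameter values are from the
# text; the series-resistance model is an assumption of this illustration.
import math

sigma = 1.61        # ionic conductivity of 1x PBS, S/m
D = 3e-6            # micropore diameter, m
L = 50e-9           # micropore (membrane) thickness, m
V = 0.1             # applied voltage, V

R_access = 1.0 / (sigma * D)                   # access resistance, ~0.21 MOhm
R_pore = 4.0 * L / (sigma * math.pi * D ** 2)  # pore resistance, ~4.4 kOhm
I_base = V / (R_access + R_pore)               # ~0.47 uA, close to the quoted 0.4-0.5 uA

print(f"Estimated base current: {I_base * 1e6:.2f} uA")
```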

(a) Optical microscope images (scale bar = 10 μm) of S. epidermidis (small dot indicated by yellow arrow) being drawn towards and pulled into a micropore (3 μm black circles): ⟨1⟩ the bacterium near (within a 15 μm radius of) the micropore is pulled in at a fast rate; ⟨2⟩ the trajectory of a distant bacterium being drawn towards the micropore until it is pulled in. (b) Actual display screen during the measurement. The moment at which the bacterium (yellow arrow) is being sucked into the micropore. Its waveform is denoted in the top column of the right-side window. The waveforms are indicated at three different time scales using the waveform viewer software.

Negatively charged bacteria were subjected to diffusion and electric forces between the electrodes. The radius (r) from the centre of the micropore at which the bacteria were trapped by the electric field is denoted by the equation:

$$r = \frac{d^{2}\,\mu\,\Delta V}{8hD}$$

where d, μ, and ΔV are the diameter of the micropore, the mobility of the bacteria, and the applied voltage, respectively, and h and D are the thickness of the micropore and the diffusion constant of the bacteria, respectively. The bacterial mobility and diffusion constants were approximately 10⁻⁸ m² V⁻¹ s⁻¹ and 10⁻⁹ m² s⁻¹, respectively. The experimentally observed r = 15 μm was relatively close to the theoretically predicted r = 22.5 μm. When the bacteria did not pass through the micropores, the ionic current remained constant at the base current, which corresponded to a non-event (Supplementary Material: the captured video of the actual screen during the measurement of the ionic currents of the bacteria; the optical microscopic image is shown on the left side, and the waveforms of ionic currents are shown on the right side). When a bacterium passed through the micropore, we obtained a spike-shaped ionic current–time waveform corresponding to one bacterium (Fig. 2b). The combination of optical microscopy and ionic current measurements demonstrated a correlation between the movement of a single bacterium and the ionic current–time waveform.
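For illustration, the predicted capture radius can be evaluated directly from this formula using the order-of-magnitude constants quoted above. The short sketch below (with assumed variable names) reproduces the stated value of roughly 22.5 μm.

```python
# Hedged sketch: evaluate r = d^2 * mu * dV / (8 * h * D) with the
# order-of-magnitude mobility and diffusion constants quoted in the text.
d = 3e-6       # micropore diameter, m
mu = 1e-8      # bacterial electrophoretic mobility, m^2 V^-1 s^-1
dV = 0.1       # applied voltage, V
h = 50e-9      # micropore thickness, m
D_diff = 1e-9  # bacterial diffusion constant, m^2 s^-1

r = d ** 2 * mu * dV / (8 * h * D_diff)
print(f"Predicted capture radius: {r * 1e6:.1f} um")  # ~22.5 um
```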

Spike-shaped waveforms are characterised by a maximum current value (Ip) and current duration (td) (Fig. 1c). The histograms of Ip and td overlapped almost completely (Fig. 3a, b). The similar Ip levels reflect the small difference in size between the two bacterial species, and it was difficult to distinguish between them using these histograms alone. However, we observed differences in the shapes of the waveforms between the bacteria, suggesting distinguishable features (Fig. 3c). We therefore used machine learning to identify waveform features so that the waveforms obtained could be used to identify the two species.

(a) Histograms of Ip and td of S. epidermidis. (b) Histograms of Ip and td of S. aureus. (c) Differences in the absolute value of the histograms of Ip and td between S. epidermidis and S. aureus. (d) Confusion matrix of all isolates. (e) Confusion matrix of the assembly machine learning results.

Fifty isolates of S. aureus and S. epidermidis were used for ionic current measurements. Each bacterial isolate was measured in triplicate using a micropore device, and each measurement lasted for 3 min. The 15,000 waveforms obtained from the measurements were provided as inputs for the machine-learning training set. The accuracy of differentiating between the two species from a single waveform was an F-value of 0.59, which exceeded the F-value of 0.5 expected for random discrimination (Fig. 3d). In this single-waveform learning, the species to which each single waveform belongs is determined. The F-value is the harmonic mean of the sensitivity and precision, 2/(1/sensitivity + 1/precision), defined as follows:

$$\mathrm{Sensitivity}=\frac{\mathrm{True\ Positive}}{\mathrm{True\ Positive}+\mathrm{False\ Negative}}$$

$$\mathrm{Precision}=\frac{\mathrm{True\ Positive}}{\mathrm{True\ Positive}+\mathrm{False\ Positive}}$$

$$F\text{-}\mathrm{measure}=\frac{2\times \mathrm{Sensitivity}\times \mathrm{Precision}}{\mathrm{Sensitivity}+\mathrm{Precision}}$$

For S. epidermidis, the confusion matrix yielded a sensitivity and precision of 9781/(9781 + 5219) = 0.65 and 9781/(9781 + 6968) = 0.58, respectively. This resulted in an F-value of 0.62. Similarly, the sensitivity and precision for S. aureus were 0.61 and 0.54, respectively, yielding an F-value of 0.57. The overall adopted F-value was the average of the F-values obtained for the two species. We then determined the species-level accuracy of bacterial identification on an isolate-by-isolate basis using machine learning. Of the 50 bacterial isolates, 25 were selected randomly as the training set for each species, respectively. We determined the training set yielding the highest F-value. Using the selected training set, we performed assembly learning to develop a classifier to determine whether a bacterial isolate belonged to S. aureus or S. epidermidis. In this assembly learning, the single waveforms of one isolate/strain are treated as aggregated data, so assembly learning is isolate/strain-focused. In contrast to single-waveform learning, it determines which species the aggregated data of each isolate/strain belongs to. In addition, the assembly machine learning uses the entire distribution of waveforms in each species, in addition to the independent parameters (Ip, td, current vector, and time vector). The F-value was 0.93 (Fig. 3e).
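To make the arithmetic explicit, the sketch below recomputes the S. epidermidis sensitivity, precision, and F-value from the confusion-matrix counts quoted above. The helper function is hypothetical and not part of the authors' analysis code.

```python
# Hedged sketch: per-class metrics from confusion-matrix counts.
def class_metrics(tp: int, fn: int, fp: int) -> tuple[float, float, float]:
    """Return (sensitivity, precision, F-value) for one class."""
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_value = 2 * sensitivity * precision / (sensitivity + precision)
    return sensitivity, precision, f_value

# S. epidermidis: 9,781 waveforms classified correctly, 5,219 missed, 6,968 false positives
sens, prec, f = class_metrics(9781, 5219, 6968)
print(f"sensitivity={sens:.2f}, precision={prec:.2f}, F-value={f:.2f}")  # 0.65, 0.58, 0.62
```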

The classifiers created during the machine learning process were used to distinguish the remaining 25 bacterial isolates (Fig.4a) and the additional ATCC standard strains. Each isolate was assessed by the classifier for three measurements, and two or three correct responses from the three trials were regarded as the final correct answers for the isolate or strain.

(a) Hold-out method. We employed a hold-out method for machine learning that splits data into the following two groups: a training dataset and a testing dataset. (b) Receiver operating characteristic curve of the classifier. (c) Characteristic distribution of the waveform. The pulse data are acquired 250,000 times per min. Steps denote the number of data points acquired, Height denotes the current value of the pulse, and Peak ratio denotes the location of the peak of the pulse, when the left edge of the pulse is 0 and the right edge of the pulse is 1. (d) Zeta potential distribution of the bacteria. The dashed line denotes the median, and the dotted line denotes the quartiles. No statistically significant difference was noted between the two species.

The area under the receiver operating characteristic curve (AUROC) was 0.94 (>0.9), demonstrating that the trained classifier could distinguish S. aureus from S. epidermidis at a high accuracy (Fig.4b)20. The sensitivity and specificity for S. aureus detection were 96.4% and 80.8%, respectively, with an accuracy of 88.9% (Table 1). The positive agreement was 84.4%, and the negative agreement was 95.5% (Table 1).

Ionic current–time waveforms contain information on the size, shape, and surface charges of bacteria passing through the micropores. Micropore measurements demonstrated few statistically significant differences in the size and shape of S. aureus and S. epidermidis (Fig. 4c).

The Zeta potential affects the ionic current of the particles measured using the micropore device and indicates the electrical charge of the surface layer of the particles21. Zeta potentials of the S. aureus and S. epidermidis isolates were measured using a Zetasizer (Malvern Instruments, Worcestershire, UK)22. The surface charges of the two species indicated that they were negatively charged, with Zeta potential magnitudes exceeding 20 mV. While the Zeta potential range of S. epidermidis was greater than that of S. aureus and the distribution patterns were not completely the same, there was no significant difference between the Zeta potentials of S. aureus and S. epidermidis (Fig. 4d). Machine learning used surface charge differences between the two bacteria to distinguish between the species by reference to the entire distribution pattern of the features, a difference too subtle to detect statistically. The ionic current–time waveform provides information on the volume, structure, and surface charge of bacteria passing through micropores. The machine learning model, which takes the shape of the ionic current–time waveform as an input feature, is therefore considered to capture differences in surface charge.

See the original post:
Using novel micropore technology combined with artificial intelligence to differentiate Staphylococcus aureus and ... - Nature.com

The Risks of Artificial Intelligence and the Response of Korean Civil Society – The Good Men Project

By Byoung-il Oh

With the launch of ChatGPT at the end of 2022, people around the world realised the arrival of the artificial intelligence (AI) era, and South Korea was no exception. At the same time, 2023 was also a year of global awareness of the need to control the risks of AI. In November 2023, the AI Safety Summit was held at Bletchley Park, UK; legislators in the European Union (EU) agreed on an AI act; and the Biden administration in the US issued an executive order to regulate AI. In the coming years, discussions on AI regulation in various countries are bound to influence each other. Korean civil society also believes that it is necessary to enact a law to stem the risks of AI, but the bill currently being pushed by the Korean government and the National Assembly has been met with opposition from civil society because, in the name of fostering Korea's own AI industry, the proposed bill lacks a proper regulatory framework.

South Koreans have already embraced AI in their lives right from chatbots to translation to recruitment and platform algorithms, a variety of AI-powered services have already been introduced into our society. While AI can provide efficiency and convenience in work and life, its development and use can also pose a number of risks, including threats to safety and violations of human rights. The risks commonly cited are invasion of privacy, discriminatory decisions, lack of accountability due to opacity, and sophisticated surveillance, which, when coupled with unequal power relations in society, can perpetuate inequities and discriminatory structures and threaten democracy.

Jinbonet recently published a research report, produced with the support of an APC subgrant, on controversial AI-related cases in South Korea. And indeed there have been several that have raised concerns. Lee Luda, a chatbot launched in December 2020, was criticised for its hate speech against vulnerable groups such as women, people with disabilities, LGBTQIA+ communities and Black people, and was punished by the Personal Information Protection Commission (PIPC) for violating the Personal Information Protection Act (PIPA). In addition, the use of AI during recruitment processes has increased across both public and private companies in recent years, as corruption in recruitment in public institutions has become a social issue. Also, with remote work becoming the norm during the COVID-19 pandemic, institutions have not properly verified the risks or performance of AI recruitment systems and have no data in this regard. It also remains an open question whether private companies' AI-driven recruitment works fairly without discrimination based on gender, region, education, etc. The Ministry of Justice and the Ministry of Science and ICT sparked a huge controversy when they provided facial recognition data of 170 million Koreans and foreigners to a private company without consent in the guise of upgrading the airport immigration system. Civil society groups suspect that public authorities provided such personal data to favour private tech companies.

There is suspicion that big tech platforms use their algorithms to abuse this data to gain advantage over their competitors. Kakao, which provides KakaoTalk, a messenger app used by almost all Koreans, used its dominance to take over the taxi market. It was fined by the Korean Fair Trade Commission (KFTC) in February 2023 after it was found to have manipulated its AI dispatching algorithm in favour of its taxis. Similarly, another Korean big tech company, Naver, was fined by the KFTC in 2020 for manipulating shopping search algorithms to favour its own products. Korean civil society is also concerned about the use of AI systems for state surveillance. While the use of AI systems by intelligence and investigative agencies has not yet become controversial, the Korean government has invested in R&D for so-called smart policing, and, given that South Korea has one of the highest numbers of CCTVs installed globally, there are concerns that surveillance through intelligent CCTVs could be introduced.

While existing regulations such as the PIPA and the Fair Trade Act can be applied to AI systems, there is no specific legislation to regulate AI in South Korea as a whole. For example, as in the case of AI recruitment systems, there are no requirements for public institutions to develop their own AI systems or to adopt private sector AI systems to ensure accountability. There is also no obligation to take measures to proactively control problems with AI, such as verifying data bias, or to reactively track the source of problems, such as record-keeping.

The Korean government has been promoting the development of the AI industry as a national strategy for several years. The National Strategy for Artificial Intelligence (AI) was released on 17 December 2019 by all ministries, including the Ministry of Science and ICT. As the slogan "Beyond IT powerhouse to AI powerhouse" shows, the strategy is an expression of the government's policy to understand AI as a civilisational change and use it as an opportunity to develop the economy and solve social problems. However, this strategy is based on an "allow first, regulate later" approach to AI regulation. Therefore, the policy is mainly based on the establishment of an AI ethical code that can serve as a guide for self-regulation by companies.

Civil society organisations (CSOs) in South Korea have also been making their voices heard on AI-related policies for several years. On 24 May 2021, 120 CSOs released their manifesto, Civil Society Declaration on Demand for an AI Policy that Ensures Human Rights, Safety, and Democracy. Calling for ensuring human rights and legal compliance of AI and the legislation of the AI act, the CSOs suggested that the act should include

In cooperation with the National Human Rights Commission (NHRC), the CSOs have also urged the NHRC to play an active role in regulating AI from a human rights perspective. On 11 May 2022, the NHRC released its road map, Human Rights Guidelines on the Development and Use of Artificial Intelligence, to prevent any human rights violations and discrimination that may occur in the process of developing and using AI. It plans to release an AI human rights impact assessment tool in 2024. Activists of Jinbonet participated in research work to establish human rights guidelines for the NHRC and to develop a human rights impact assessment tool.

The Korean government, particularly the Ministry of Science and ICT, which is the lead ministry, is also pushing for legislation to regulate AI. In early 2023, the relevant standing committee of the National Assembly discussed an AI bill that was a merger of bills proposed by several lawmakers, drafted in consultation with the Ministry of Science and ICT. However, while the bill aims to establish a basic law on AI, it is mainly focused on fostering the industry. It advocates the principle of "allow first, regulate later", but does not include any obligations or penalties for providers to control the risks of AI, nor any remedies for those who suffer harm from AI.

Korean civil society agrees that laws are needed to regulate AI and is vehemently opposed to the AI bill currently being debated in the National Assembly. Instead, Korean CSOs have been discussing their own proposal for an AI bill in 2023. Led by digital rights groups, including Jinbonet, they developed a draft and received inputs from a wider panel of CSO activists and experts at the civil society forum, Artificial Intelligence and The Role of Civil Society for Human Rights and Safety, held on 22 November 2023 and funded by APC. They intend to propose a civil society version of the AI bill to the National Assembly.

The AI legislation being debated in Europe has also influenced Korean civil society. It examined the positions of its European counterparts on the AI bill, the positions of the European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS), the negotiating position of the European Parliament, etc. Although the European AI bill agreed at the end of 2023 is a step backward compared to civil society's position and the European Parliament's position, it contains a number of reference points at the global level. Of course, when discussing AI legislation in Korea, it is necessary to consider the different legal systems and social conditions of Europe and Korea.

Korean industry, pro-industry experts and conservative media oppose AI regulation. They argue that the European Union is trying to regulate AI because it is lagging behind, and that there is no need to rush to regulate AI, in order to foster Korea's AI industry. They have used the same logic for privacy regulation and big tech regulation. South Korea's own big tech companies such as Naver and Kakao are also developing hyperscale AI. Therefore, there is very strong public opinion in favour of the domestic big tech and AI industries.

South Korea is holding its general election in April 2024. Any bills that fail to pass in the current National Assembly will be abandoned when the new National Assembly is constituted in June 2024. It is unlikely that AI bills will be fully discussed in the current National Assembly. Korean civil society intends to ask the new National Assembly to introduce a civil society AI bill and urge it to pass legislation that will actually regulate AI. To build public opinion for the passage of the AI bill, Korean civil society, including Jinbonet, is set on identifying and publicising more instances of the dangers of AI.

Byoung-il Oh is the president of the Korean progressive network Jinbonet, a member of the Association for Progressive Communications, which advocates for human rights in the information society, especially the rights to communication, free speech and privacy. He is also a member of the Korea Internet Governance Alliance (KIGA) Steering Committee.

Previously Published on apc.org with Creative Commons License

Read more here:
The Risks of Artificial Intelligence and the Response of Korean Civil Society - The Good Men Project

Unveiling New Physics With AI-Powered Particle Tracking – SciTechDaily

By The Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences March 24, 2024

AI is emerging as a key tool in nuclear physics, offering solutions for the data-intensive and complex task of particle track reconstruction. Credit: SciTechDaily.com

Particles colliding in accelerators produce numerous cascades of secondary particles. The electronics processing the signals avalanching in from the detectors then have a fraction of a second in which to assess whether an event is of sufficient interest to save it for later analysis. In the near future, this demanding task may be carried out using algorithms based on AI.

Electronics has never had an easy life in nuclear physics. There is so much data coming in from the LHC, the most powerful accelerator in the world, that recording it all has never been an option. The systems that process the wave of signals coming from the detectors therefore specialize in forgetting: they reconstruct the tracks of secondary particles in a fraction of a second and assess whether the collision just observed can be ignored or whether it is worth saving for further analysis. However, the current methods of reconstructing particle tracks will soon no longer suffice.

Research presented in the journal Computer Science by scientists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow, Poland, suggests that tools built using artificial intelligence could be an effective alternative to current methods for the rapid reconstruction of particle tracks. Their debut could occur in the next two to three years, probably in the MUonE experiment which supports the search for new physics.

The principle of reconstructing the tracks of secondary particles based on hits recorded during collisions inside the MUonE detector. Subsequent targets are marked in gold, and silicon detector layers are marked in blue. Credit: IFJ PAN

In modern high-energy physics experiments, particles diverging from the collision point pass through successive layers of the detector, depositing a little energy in each. In practice, this means that if the detector consists of ten layers and the secondary particle passes through all of them, its path has to be reconstructed on the basis of ten points. The task is only seemingly simple.

"There is usually a magnetic field inside the detectors. Charged particles move in it along curved lines and this is also how the detector elements activated by them, which in our jargon we call hits, will be located with respect to each other," explains Prof. Marcin Kucharczyk (IFJ PAN), and immediately adds: "In reality, the so-called occupancy of the detector, i.e. the number of hits per detector element, may be very high, which causes many problems when trying to reconstruct the tracks of particles correctly. In particular, the reconstruction of tracks that are close to each other is quite a problem."

Experiments designed to find new physics will collide particles at higher energies than before, meaning that more secondary particles will be created in each collision. The luminosity of the beams will also have to be higher, which in turn will increase the number of collisions per unit time. Under such conditions, classical methods of reconstructing particle tracks can no longer cope. Artificial intelligence, which excels where certain universal patterns need to be recognized quickly, can come to the rescue.

"The artificial intelligence we have designed is a deep-type neural network. It consists of an input layer made up of 20 neurons, four hidden layers of 1,000 neurons each and an output layer with eight neurons. All the neurons of each layer are connected to all the neurons of the neighboring layer. Altogether, the network has two million configuration parameters, the values of which are set during the learning process," describes Dr. Milosz Zdybal (IFJ PAN).
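As an illustration only, the layer structure described here could be written down as the short PyTorch sketch below. It is not the authors' code: the choice of activation function and the interpretation of the eight outputs as track parameters are assumptions.

```python
# Hedged sketch of the described fully connected network:
# 20 inputs -> four hidden layers of 1,000 neurons -> 8 outputs.
import torch
import torch.nn as nn

class TrackNet(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(20, 1000), nn.ReLU(),     # ReLU is an assumption of this sketch
            nn.Linear(1000, 1000), nn.ReLU(),
            nn.Linear(1000, 1000), nn.ReLU(),
            nn.Linear(1000, 1000), nn.ReLU(),
            nn.Linear(1000, 8),                 # e.g. parameters of the reconstructed tracks
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TrackNet()
hits = torch.randn(1, 20)   # placeholder for one event's hit coordinates
tracks = model(hits)        # eight numbers parametrising the reconstructed tracks
```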

The deep neural network thus prepared was trained using 40,000 simulated particle collisions, supplemented with artificially generated noise. During the testing phase, only hit information was fed into the network. As these were derived from computer simulations, the original trajectories of the responsible particles were known exactly and could be compared with the reconstructions provided by the artificial intelligence. On this basis, the artificial intelligence learned to correctly reconstruct the particle tracks.
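A similarly hedged sketch of the training step described here is shown below: the network is fitted to simulated hit patterns whose true track parameters are known. The loss function, optimizer, batch size, and number of epochs are assumptions, and the random tensors stand in for the 40,000 simulated collisions with added noise.

```python
# Hedged sketch of supervised training on simulated collisions.
import torch
from torch import nn, optim

# Same layer layout as the sketch above, built compactly.
layers = [nn.Linear(20, 1000), nn.ReLU()]
for _ in range(3):
    layers += [nn.Linear(1000, 1000), nn.ReLU()]
layers.append(nn.Linear(1000, 8))
model = nn.Sequential(*layers)

optimizer = optim.Adam(model.parameters(), lr=1e-3)  # optimizer choice is an assumption
loss_fn = nn.MSELoss()                               # regression loss is an assumption

# Placeholders for the simulated dataset: 20 hit features in, 8 track parameters out.
hits = torch.randn(40_000, 20)
true_tracks = torch.randn(40_000, 8)

for epoch in range(10):                              # epoch count is illustrative
    for i in range(0, len(hits), 256):
        batch_x, batch_y = hits[i:i + 256], true_tracks[i:i + 256]
        optimizer.zero_grad()
        loss = loss_fn(model(batch_x), batch_y)
        loss.backward()
        optimizer.step()
```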

"In our paper, we show that the deep neural network trained on a properly prepared database is able to reconstruct secondary particle tracks as accurately as classical algorithms. This is a result of great importance for the development of detection techniques. Whilst training a deep neural network is a lengthy and computationally demanding process, a trained network reacts instantly. Since it does this also with satisfactory precision, we can think optimistically about using it in the case of real collisions," stresses Prof. Kucharczyk.

The closest experiment in which the artificial intelligence from IFJ PAN would have a chance to prove itself is MUonE (MUon ON Electron elastic scattering). This examines an interesting discrepancy between the measured values of a certain physical quantity to do with muons (particles that are about 200-times-more-massive equivalents of the electron) and the predictions of the Standard Model (that is, the model used to describe the world of elementary particles). Measurements carried out at the American accelerator centre Fermilab show that the so-called anomalous magnetic moment of muons differs from the predictions of the Standard Model with a certainty of up to 4.2 standard deviations (referred to as sigma). Meanwhile, it is accepted in physics that a significance above 5 sigma, corresponding to a certainty of 99.99995%, is a value deemed acceptable to announce a discovery.

The significance of the discrepancy indicating new physics could be significantly increased if the precision of the Standard Model's predictions could be improved. However, in order to better determine the anomalous magnetic moment of the muon with its help, it would be necessary to know a more precise value of the parameter known as the hadronic correction. Unfortunately, a mathematical calculation of this parameter is not possible. At this point, the role of the MUonE experiment becomes clear. In it, scientists intend to study the scattering of muons on electrons of atoms with low atomic number, such as carbon or beryllium. The results will allow a more precise determination of certain physical parameters that directly depend on the hadronic correction. If everything goes according to the physicists' plans, the hadronic correction determined in this way will increase the confidence in measuring the discrepancy between the theoretical and measured value of the muon's anomalous magnetic moment by up to 7 sigma, and the existence of hitherto unknown physics may become a reality.

The MUonE experiment is to start at Europe's CERN nuclear facility as early as next year, but the target phase has been planned for 2027, which is probably when the Cracow physicists will have the opportunity to see if the artificial intelligence they have created will do its job in reconstructing particle tracks. Confirmation of its effectiveness in the conditions of a real experiment could mark the beginning of a new era in particle detection techniques.

Reference: "Machine Learning based Event Reconstruction for the MUonE Experiment" by Miłosz Zdybał, Marcin Kucharczyk and Marcin Wolter, 10 March 2024, Computer Science. DOI: 10.7494/csci.2024.25.1.5690

The work of the team of physicists from the IFJ PAN was funded by a grant from the Polish National Science Centre.

See the original post:
Unveiling New Physics With AI-Powered Particle Tracking - SciTechDaily

Consensus Adoption of U.S.-Led Resolution on Artificial Intelligence by the United Nations General Assembly – United … – Department of State

With today's adoption in the UN General Assembly of the U.S.-led resolution on Artificial Intelligence (AI), UN Member States have spoken with one voice to define a global consensus on safe, secure, and trustworthy AI systems for advancing sustainable development. This consensus resolution, developed with direct input from more than 120 countries and cosponsored by more than 120 Member States from every region, is a landmark effort and a first-of-its-kind global approach to the development and use of this powerful emerging technology.

Artificial intelligence has enormous potential to advance sustainable development and the Sustainable Development Goals (SDGs). This resolution helps ensure that the benefits of AI reach countries from all regions and at all levels of development and focuses on capacity building and bridging digital divides, especially for developing countries. It underscores the consensus that AI systems can respect human rights and fundamental freedoms, while delivering on aspirations for sustainable development, as these are fundamentally compatible goals.

Governments must work with the private sector, civil society, international and regional organizations, academia and research institutions and technical communities, and all other stakeholders to build this approach. Importantly, this resolution will serve as a foundation for multilateral AI efforts and existing and future UN initiatives.

The United States will continue to work with governments and other partners to ensure the design, development, deployment, and use of emerging technologies, including AI, are safe, secure, and trustworthy and are directed to achieving our common goals and solving our most pressing challenges.

View original post here:
Consensus Adoption of U.S.-Led Resolution on Artificial Intelligence by the United Nations General Assembly - United ... - Department of State