
Fountain Hills council addresses use of artificial intelligence – Fountain Hills Times

Read this article:
Fountain Hills council addresses use of artificial intelligence - Fountain Hills Times

Read More..

Misuse of Artificial Intelligence in Public Administration – PA TIMES Online

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Michael Turtz and Jill Goldstein, April 5, 2024

Artificial Intelligence (AI) platforms are gaining widespread attention throughout the world, much like Tesla did when it debuted its line of electric vehicles and self-driving capabilities built on AI. While everyone wants the opportunity to use it, not everyone has access to it or knows how to use it. In many ways, the world is still not ready for electric vehicles: there aren't as many charging stations as there are gas stations, and in some cities the Tesla-brand charging stations are not in convenient locations. AI faces similar challenges. For the new AI user, there remains a lack of comprehensive guidance on how to use it and little understanding of its vast capabilities. Regulatory scrutiny of Tesla spans areas such as vehicle safety, labor practices and data privacy, while overarching federal legislation on AI has not yet advanced. Both Tesla and AI represent new-age technologies that hold the potential to revolutionize various aspects of our society. However, that potential also brings uncertainty surrounding the full extent of AI's capabilities.

AI platforms are becoming more widely used, especially since Google debuted Gemini (formerly Bard) in 2023. Given Google's extensive user base, Gemini has the potential to be the world's leading AI platform. Gemini was developed by Google in direct competition with OpenAI's platform, which includes the well-known ChatGPT, DALL-E and the premium GPT-4 subscription. Because Gemini offers free multi-modal capabilities, including image creation, and draws on up-to-date information from Google, it has the potential to surpass ChatGPT's brand recognition over time. However, as AI advances throughout our society and workforce, concerns are arising regarding its ethical use, even echoing the concept of "guerrilla government."

Rosemary O'Leary introduced the term "guerrilla government" in her 2006 book The Ethics of Dissent. Guerrilla government describes career public servants who implicitly or explicitly work against the wishes of their superiors. It differs from organizational misbehavior: public administrators who engage in organizational misbehavior act openly, while those who practice guerrilla government work behind the scenes.

In 2017, O'Leary envisioned a perfect storm for guerrilla government in the combination of big data, social media and contracting out, in her article "The New Guerrilla Government: Are Big Data, Hyper Social Media and Contracting Out Changing the Ethics of Dissent?" O'Leary discussed how easily government officials, even at entry level, can collect data and reach multiple social media outlets. Finally, contractors can carry out government activity, sometimes better than career civil servants.

O'Leary used the polarizing examples of Bradley Manning and Edward Snowden because of the easy access both had to the enormous body of national intelligence data. Career bureaucrats, for example, had less access to top-secret documents than Snowden did as a contractor. Snowden acquired 1.7 million government files and sent them to journalists at prestigious newspapers. Manning, on the other hand, was able to download thousands of government files from the Iraq War to a compact disc and then leak them to WikiLeaks.

These were two extreme examples involving national intelligence, but guerrilla government can occur at any level of government and in any government entity. In both cases, AI did not have the easy access or mainstream reach that it does today. If one of O'Leary's harsh realities was that "the combination of big data, hyper social media and contracting is likely to increase the incidents of guerrilla government" amounted to a perfect storm, what would we call handing those same contractors a tool such as AI?

According to the Associated Press article "AI-powered misinformation is the world's biggest short-term threat, Davos report says," published January 10, 2024, the World Economic Forum believes that AI could create major problems for democracy as well as the global economy. Specifically, AI can empower malicious actors to carry out cyberattacks and phishing attempts. Anyone can use AI; they do not need to be an expert.

Government organizations at the federal, state and local levels could be years behind by the time they implement guidance on how to use AI, since tech companies are constantly releasing new AI models to the public. At the federal level, in October 2022 the White House's Office of Science and Technology Policy issued a 73-page Blueprint for an AI Bill of Rights; however, the Blueprint carries no legal ramifications. Congress has also considered several proposals for AI legislation but has not enacted any of them.

To conclude, we asked Gemini to list five hypothetical guerrilla government scenarios in which AI plays a role. Here is what it came up with. Scenario One: Eco-Warriors in the Department of Energy. A group of engineers within the Department of Energy, concerned about climate change, secretly use government AI to analyze energy grid vulnerabilities and simulate the potential impact of implementing renewable energy sources at a faster pace than official policy allows. They leak anonymized reports to environmental groups, hoping to pressure for change.

The other scenarios included: Scenario Two: Whistleblower AI in the Justice Department; Scenario Three: Rogue AI in the Department of Defense; Scenario Four: Algorithmic Robin Hood in the IRS; and Scenario Five: AI-Powered Protests in the Justice Department. AI not only grants any government employee easy access but can also suggest how to carry out guerrilla government, or help them digest documents far faster than outsourcing to WikiLeaks or the press.

Author: Michael Turtz, MPA, is the Director of Administration at the Charles E. Schmidt College of Medicine at Florida Atlantic University. Michael is also a Ph.D. candidate in FAU's School of Public Administration.

Author: Jill Goldstein, MS, is currently the Administrative Coordinator at the Charles E. Schmidt College of Medicine and is an Adjunct Faculty member in the Department of Communications at Florida Atlantic University. Jill received her M.S. in Strategic Communications from the University of Maryland.



See more here:
Misuse of Artificial Intelligence in Public Administration - PA TIMES Online

Read More..

COMMENTARY: We must be proactive in guiding the influence of artificial intelligence on education – EdSource

Photo: Flickr/Rainer Stropek

The topic of AI has already grown trite, but don't let that fool you. It's not a fad. It feels more akin to the atmospheric river storms hitting California, a phenomenon we didn't hear or know about a few years ago that is now changing how we look at rain and mudslides and fires and insurance. The storms also bring life-giving water desperately needed in the West.

Artificial intelligence is an atmospheric river impacting everything we do, including how teachers teach and how students learn, and it is creating opportunities to rethink and redesign the 200-year-old institution called public education. While some may view AI as a threat, I see it as breathing new life into education.

With education at a critical juncture, the recent K-12 AI Summit in Anaheim provided education, policy, philanthropy and industry leaders (from 31 states and over 100 districts) an opportunity to explore ways of integrating these new technologies into K-12 experiences for both students and teachers. Spearheaded by key partners such as the Anaheim Union High School District, Digital Promise, AI EDU, and UC Irvine, this summit landed on one resounding message: the powerful role of AI as an assistant and thought partner, not a replacement for teachers.

AI technologies offer opportunities to personalize learning experiences, provide immediate feedback and identify areas where students need support. They complement teachers' expertise, fostering a human-centered approach to education while enhancing learning outcomes. Other themes that emerged include the need to:

Address equity and access disparities. As AI becomes increasingly integrated into classrooms, we must ensure that all students have equitable access to these resources. Participants stressed the importance of bridging the expensive AI digital divide, providing training for educators (but not in the traditional top-down ways edtech has delivered in the past), adopting inclusive design practices in AI development, and addressing infrastructure gaps to promote equitable access to technology.

Incorporate ethical and responsible AI use in education. Concerns about data privacy, algorithmic bias and the ethical implications of automated decision-making have grown. Participants emphasized the need for collaborative efforts to establish frameworks and guidelines for ethical AI use that foster transparency, accountability and equity, as AI becomes a tool for enhanced curriculum and instruction and for a reinvention of schooling in which the walls of learning between school and community come down.

Equip students with skills for an AI-driven economy. AI can help teachers assist students not only with technical proficiencies and mastering substantive knowledge, but also with critical thinking, problem-solving, creativity and collaboration. Participants emphasized how AI can accelerate interdisciplinary teaching and hands-on learning to prepare students for the challenges and opportunities ahead.

Share knowledge and collaborate. Partnerships between schools, universities, industry and community organizations are essential for developing AI curriculum, providing professional development and piloting initiatives to connect school experiences with career opportunities.

Sharing best practices and research findings fosters a community dedicated to advancing AI education. It is estimated that over 30% of current jobs require some type of AI skill set, and this number will likely increase sharply over the next few years. School leaders who put their heads in the sand and ignore AI are doing their students a serious disservice when it comes to competitiveness in the job market.

I believe that this movement in K-12 spaces could energize the vibrant community school initiatives happening across California, where folks are rethinking schools and teachers are developing experiences for students to problem-solve local and national issues. The AI future holds immense potential to empower teachers, students, parents and community members around the question of what the purpose of school is. By leveraging the community school movement, which is a relationship-centered, inclusive process that uplifts the voices, needs and assets of historically marginalized students and groups, advanced AI tools can help teachers develop more personalized instruction, promote equity, foster ethical use, and prepare students to thrive through civic engagement and discover real-world solutions to real-world problems. AI can also help us assemble evidence of student learning and teacher leadership, as well as insights from community stakeholders, in ways heretofore impossible.

The journey toward integrating AI into K-12 education is just beginning, with summit partners committed to continuing this crucial work. Therefore, let's seize this opportunity to rethink and re-imagine what schools can be. As Martin Luther King Jr. once emphasized, "Our very survival depends on our ability to stay awake, to adjust to new ideas, to remain vigilant and to face the challenge of change."

Michael Matsuda is superintendent of the Anaheim Union High School District.

The opinions expressed in this commentary represent those of the author. EdSource welcomes commentaries representing diverse points of view. If you would like to submit a commentary, please review our guidelines and contact us.

Read more from the original source:
COMMENTARY: We must be proactive in guiding the influence of artificial intelligence on education - EdSource

Read More..

Microsoft: China Uses AI to Sow Disinformation and Discord Around the World – TIME

Faking a political endorsement in Taiwan ahead of its crucial January election, sharing memes to amplify outrage over Japan's disposal of nuclear wastewater, and spreading conspiracy theories claiming the U.S. government was behind Hawaii's wildfires and Kentucky's train derailment last year. These are just some of the ways that China's influence operations have ramped up their use of artificial intelligence to sow disinformation and stoke discord worldwide over the last seven months, according to a new report released Friday by Microsoft Threat Intelligence.

Microsoft has observed notable trends from state-backed actors, the report said, that demonstrate not only doubling down on familiar targets, but also attempts to use more sophisticated influence techniques to achieve their goals. In particular, Chinese influence actors experimented with new media and continued to refine AI-generated or AI-enhanced content.

Among the operations highlighted in the report was a notable uptick in content featuring Taiwanese political figures ahead of the January 13 presidential and legislative elections. This included an AI-generated audio recording, posted by a Chinese Communist Party (CCP)-linked group known as Storm-1376 or Spamouflage, that made it falsely appear that Foxconn owner Terry Gou, who had mounted a short-lived independent presidential campaign, had endorsed another candidate, when in fact he did not. Microsoft also recorded the dissemination of AI-generated news anchors as well as AI-generated memes to mislead audiences and influence Taiwan's elections.

Read More: Taiwan's Election Isn't a Disaster for Xi, Unless He Makes It One

This was the first time that Microsoft Threat Intelligence has witnessed a nation state actor using AI content in attempts to influence a foreign election, the report said, warning that as populations in India, South Korea, and the United States head to the polls, we are likely to see Chinese cyber and influence actors, and to some extent North Korean cyber actors, work toward targeting these elections.

Read More: All the Elections Around the World in 2024

China's influence operations observed by Microsoft extended beyond Taiwan during the last several months, too. Last August, Storm-1376 launched a multilingual, large-scale, aggressive messaging campaign, including through AI-generated memes, to amplify Chinese propaganda and stoke international public anger, especially among South Koreans, over Japan's disposal of nuclear wastewater from Fukushima last year.

Read More: China's Concern About Nuclear Wastewater May Be More About Politics Than Science

Microsoft also pointed to multiple instances of Storm-1376 spreading conspiratorial narratives ultimately encouraging mistrust of, and disillusionment with, the U.S. government. After the devastating wildfires that ripped through Maui last August, Storm-1376 posted on dozens of websites and platforms that the fires were caused by the U.S. military testing a "weather weapon," accompanied by AI-generated photos of flames raging along roads and buildings to make the content more eye-catching, the Microsoft report said. And in the days after a train derailment in Kentucky in November, a social media campaign by Storm-1376 urged audiences to consider that it was orchestrated by the U.S. government, which it said was deliberately hiding something. Microsoft said the disinformation campaign likened the derailment to 9/11 and Pearl Harbor cover-up theories.

In addition, Microsoft highlighted U.S.-election-focused influence operations conducted through sockpuppet accounts posing as Americans, sometimes enhanced through generative AI. The Microsoft Threat Analysis Center had previously reported that social media accounts very likely affiliated with the CCP impersonated U.S. voters to influence the 2022 midterm elections. This activity has continued, and these accounts nearly exclusively post about divisive U.S. domestic issues such as global warming, U.S. border policies, drug use, immigration and racial tensions, the center said on Friday. It added that the campaign has adopted an increasing focus on asking questions and seeking perspectives, effectively polling questions about contentious topics in the U.S., indicating a deliberate effort to better understand which U.S. voter demographic supports what issue or position and which topics are the most divisive.

Despite the growing sophistication involved in these influence operations, however, there remains little evidence that they have been successful in changing public opinion, Microsoft notes.

While the impact of such content in swaying audiences remains low, the Microsoft report said, China's increasing experimentation in augmenting memes, videos and audio will continue, and may prove effective down the line.

Besides influence operations, Chinese cyber actors have also infiltrated important networks used by China's adversaries. Last year, they compromised telecommunication entities in the South China Sea around the time of U.S. military drills, including in Indonesia, Malaysia, the Philippines, Cambodia, and Taiwan. China has been embroiled in maritime territorial disputes in the region with Southeast Asian states, which have long looked to the U.S. for strategic defense support.

Read More: Is Southeast Asia Leaning More Toward China? New Survey Shows Mixed Results

FBI director Christopher Wray warned a congressional committee in January about Chinese hackers targeting critical U.S. infrastructure such as water treatment plants, electric grids, energy pipelines, and transportation. In February, the Cybersecurity and Infrastructure Security Agency said that several such critical infrastructure systems were confirmed to have been compromised by the Chinese state-sponsored cyber group Volt Typhoon, whose operations Microsoft also outlined in its latest report.

North Korea was also documented in the Microsoft report as continuing to loot cryptocurrency for state revenue and looking to AI tools to enhance its cyber operations. It was also found to have continued targeting aerospace and defense organizations in the U.S. and South Korea, as well as diplomats, government officials and think tank experts, in what Microsoft assesses as intelligence collection.

Microsoft has been embattled by criticism from U.S. lawmakers over its concessions to China and failure to fend off Chinese threats. This week, a federal report blamed Microsoft for its inadequate security culture, claiming that a cascade of avoidable errors by the company allowed Chinese cyber actors to access senior U.S. officials' email accounts, which were hosted by Microsoft Exchange Online.

In response, Microsoft said that the hackers were "well-resourced nation state threat actors who operate continuously and without meaningful deterrence," but vowed to adopt "a new culture of engineering security in our own networks."

Read more:
Microsoft: China Uses AI to Sow Disinformation and Discord Around the World - TIME

Read More..

Tech Giants Unite to Upskill Workforce for AI – PYMNTS.com

Tech heavyweights like Google and Microsoft have formed a task force to skill up workers for the artificial intelligence (AI) era, addressing the urgent need for AI-savvy professionals in evolving job markets.

The consortium includes Accenture, Eightfold, Google, IBM, Indeed, Intel, Microsoft and SAP, as well as six advisers. As AI reshapes industries and job roles, experts say there's a growing need to prepare workers with the skills required to succeed in environments influenced by AI technology.

"AI is accelerating the pace of change for the global workforce, presenting a powerful opportunity for the private sector to help upskill and reskill workers for the future," Francine Katsoudas, executive vice president and chief people, policy and purpose officer at Cisco, one of the members of the group, said in a statement. "We look forward to engaging other stakeholders, including governments, NGOs and the academic community, as we take this important first step toward ensuring that the AI revolution leaves no one behind."

As PYMNTS reported last year, generative AI technology like ChatGPT has the potential to boost productivity significantly but also disrupt the labor market. A research report by Goldman Sachs suggested that automation could impact 25% of jobs in the U.S. and Europe, particularly those in the legal and administrative sectors. Approximately two-thirds of employment in these regions is exposed to some level of AI automation. However, most workers would likely retain their positions with less than half of their tasks automated, and only 7% of U.S. workers are in roles where AI could automate at least half of their responsibilities, potentially leading to job replacement.

Going by the incredibly unwieldy name of the AI-Enabled Information and Communication Technology (ICT) Workforce Consortium, the group seeks to provide recommendations and training programs that will enable businesses to adapt to the changing landscape and empower workers to secure opportunities in the digital economy.

In its initial phase, the consortium will evaluate the impact of AI on 56 ICT job roles, representing a significant portion of the ICT sector in the United States and five of the largest European countries by ICT workforce numbers. The findings will provide practical insights and recommendations to employers seeking to reskill and upskill their workers in preparation for AI-enabled environments.

The consortium members said they plan to help over 95 million individuals worldwide over the next decade through skills development and training programs. Cisco aims to train 25 million people with cybersecurity and digital skills by 2032, while IBM plans to train 30 million individuals by 2030, including 2 million in AI.

Intel has committed to empowering more than 30 million people with AI skills for current and future jobs by 2030. Microsoft intends to train and certify 10 million people from underserved communities with in-demand digital skills by 2025. SAP has set a goal to upskill 2 million people worldwide by 2025, and Google has recently announced 25 million in funding to support AI training and skills for people across Europe.

"Every industry will need workers to be reskilled," Ellyn Shook, chief leadership and human resources officer at Accenture, emphasized in a press release. "Organizations that invest as much in learning as they do in technology not only create career pathways, they are well positioned to lead in the market."

According to an IMF report, AI's reach into the workforce is vast and uneven, affecting around 40% of jobs globally. This impact intensifies in advanced economies, where 60% of roles could be automated or altered, particularly high-skilled ones, potentially halving labor demand and opportunities and even rendering some jobs obsolete. In contrast, emerging and low-income nations face lower risks, with 40% and 26% exposure rates, illustrating the varied effects of AI across different economic landscapes.

View original post here:
Tech Giants Unite to Upskill Workforce for AI - PYMNTS.com

Read More..

How to detect AI deepfakes – The Washington Post

AI-generated images are everywhere. They're being used to make nonconsensual pornography, muddy the truth during elections and promote products on social media using celebrity impersonations.

When Princess Catherine released a video last month disclosing that she had cancer, social media went abuzz with the latest baseless claim that artificial intelligence was used to manipulate the video. Both BBC Studios, which shot the video, and Kensington Palace denied AI was involved. But that didn't stop the speculation.

Experts say the problem is only going to get worse. Today, the quality of some fake images is so good that they're nearly impossible to distinguish from real ones. In one prominent case, a finance manager at a Hong Kong bank wired about $25.6 million to fraudsters who used AI to pose as the worker's bosses on a video call. And the tools to make these fakes are free and widely available.

A growing group of researchers, academics and start-up founders are working on ways to track and label AI content. Using a variety of methods and forming alliances with news organizations, Big Tech companies and even camera manufacturers, they hope to keep AI images from further eroding the public's ability to understand what's true and what isn't.

"A year ago, we were still seeing AI images and they were goofy," said Rijul Gupta, founder and CEO of DeepMedia AI, a deepfake detection start-up. "Now they're perfect."

Here's a rundown of the major methods being developed to hold back the AI image apocalypse.

Digital watermarks aren't new. They've been used for years by record labels and movie studios that want to protect their content from being pirated. But they've become one of the most popular ideas to help deal with a wave of AI-generated images.

When President Biden signed a landmark executive order on AI in October, he directed the government to develop standards for companies to follow in watermarking their images.

Some companies already put visible labels on images made by their AI generators. OpenAI affixes five small colored boxes in the bottom-right corner of images made with its DALL-E image generators. But the labels can easily be cropped or photoshopped out of the image. Other popular AI image-generation tools like Stable Diffusion don't even add a label.

So the industry is focusing more on unseen watermarks that are baked into the image itself. They're not visible to the human eye but could be detected by, say, a social media platform, which would then label them before viewers see them.

They're far from perfect, though. Earlier versions of watermarks could be easily removed or tampered with by simply changing the colors in an image or even flipping it on its side. Google, which provides image-generation tools to its consumer and business customers, said last year that it had developed a watermark tech called SynthID that could withstand tampering.
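To make the fragility point concrete, here is a deliberately naive sketch in Python of an invisible watermark. This is not SynthID or any vendor's actual scheme; it is purely an illustrative assumption that hides a bit pattern in the lowest bit of each pixel value, and it shows why a tiny color shift can erase an early-style watermark.

import numpy as np

# Naive illustration only: hide a watermark in each pixel's least significant bit.
# Production systems like SynthID work very differently; this just demonstrates
# why simple pixel-level watermarks were easy to destroy.
def embed(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    return (pixels & 254) | bits          # overwrite each pixel's lowest bit

def extract(pixels: np.ndarray) -> np.ndarray:
    return pixels & 1                     # read the lowest bit back out

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=16, dtype=np.uint8)       # stand-in pixel values
watermark = rng.integers(0, 2, size=16, dtype=np.uint8)     # stand-in watermark bits

marked = embed(image, watermark)
print(np.array_equal(extract(marked), watermark))            # True: watermark survives

brightened = np.clip(marked.astype(int) + 3, 0, 255).astype(np.uint8)
print(np.array_equal(extract(brightened), watermark))        # False: a small color shift erases it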

But in a February paper, researchers at the University of Maryland showed that watermarking approaches developed by Google and other tech giants could be beaten.

"That is not going to solve the problem," said Soheil Feizi, one of the researchers.

Developing a robust watermarking system that Big Tech and social media platforms agree to abide by should help significantly reduce the problem of deepfakes misleading people online, said Nico Dekens, director of intelligence at cybersecurity company ShadowDragon, a start-up that makes tools to help people run investigations using images and social media posts from the internet.

Watermarking will definitely help, Dekens said. But it's certainly not a waterproof solution, because anything that's digitally pieced together can be hacked or spoofed or altered, he said.

On top of watermarking AI images, the tech industry has begun talking about labeling real images as well, layering data into each pixel right when a photo is taken by a camera to provide a record of what the industry calls its provenance.

Even before OpenAI released ChatGPT in late 2022 and kicked off the AI boom, camera makers Nikon and Leica began developing ways to imprint special metadata, listing when and by whom a photo was taken, directly into the image at the moment the camera makes it. Canon and Sony have begun similar programs, and Qualcomm, which makes computer chips for smartphones, says it has a similar project to add metadata to images taken on phone cameras.

News organizations like the BBC, Associated Press and Thomson Reuters are working with the camera companies to build systems to check for the authenticating data before publishing photos.

Social media sites could pick up the system, too, labeling real and fake images as such, helping users know what they're looking at, similar to how some platforms label content that might contain anti-vaccine disinformation or government propaganda. The sites could even prioritize real content in algorithmic recommendations or allow users to filter out AI content.

But building a system where real images are verified and labeled on social media or a news website might have unintended effects. Hackers could figure out how the camera companies apply the metadata to the image and add it to fake images, which would then get a pass on social media because of the fake metadata.

"It's dangerous to believe there are actual solutions against malignant attackers," said Vivien Chappelier, head of research and development at Imatag, a start-up that helps companies and news organizations put watermarks and labels on real images to ensure they aren't misused. But making it harder to accidentally spread fake images, or giving people more context into what they're seeing online, is still helpful.

"What we are trying to do is raise the bar a bit," Chappelier said.

Adobe, which has long sold photo- and video-editing software and is now offering AI image-generation tools to its customers, has been pushing for a standard for AI companies, news organizations and social media platforms to follow in identifying and labeling real images and deepfakes.

AI images are here to stay, and different methods will have to be combined to try to control them, said Dana Rao, Adobe's general counsel.


Some companies, including Reality Defender and Deep Media, have built tools that detect deepfakes based on the foundational technology used by AI image generators.

By showing an AI algorithm tens of millions of images labeled as fake or real, these companies train a model that begins to be able to distinguish between the two, building an internal understanding of which elements might give away an image as fake. Images are run through this model, and if it detects those elements, it will pronounce the image AI-generated.

The tools can also highlight which parts of the image the AI thinks give it away as fake. While humans might flag an image as AI-generated based on a weird number of fingers, the AI often zooms in on a patch of light or shadow that it deems doesn't look quite right.
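As a minimal sketch of the kind of fake-versus-real classifier described above, the following Python snippet trains a small convolutional network with TensorFlow. The folder layout, image size and architecture are illustrative assumptions, not details from Reality Defender or Deep Media.

import tensorflow as tf

IMG_SIZE = (224, 224)

# Assumes a labeled directory layout such as data/train/real/*.jpg and data/train/fake/*.jpg.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", labels="inferred", label_mode="binary",
    image_size=IMG_SIZE, batch_size=32)

# Small CNN that learns visual cues (lighting, texture) separating real from synthetic images.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability the image is AI-generated
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)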

There are other things to look for, too, such as whether a person has a vein visible in the anatomically correct place, said Ben Colman, founder of Reality Defender. "You're either a deepfake or a vampire," he said.

Colman envisions a world where scanning for deepfakes is just a regular part of a computer's cybersecurity software, in the same way that email applications like Gmail now automatically filter out obvious spam. "That's where we're going to go," Colman said.

But it's not easy. Some warn that reliably detecting deepfakes will probably become impossible as the tech behind AI image generators changes and improves.

If the problem is hard today, it will be much harder next year, said Feizi, the University of Maryland researcher. It will be almost impossible in five years.

Even if all these methods are successful and Big Tech companies get fully on board, people will still need to be critical about what they see online.

"Assume nothing, believe no one and nothing, and doubt everything," said Dekens, the open-source investigations researcher. "If you're in doubt, just assume it's fake."

With elections coming up in the United States and other major democracies this year, the tech may not be ready for the amount of disinformation and AI-generated fake imagery that will be posted online.

The most important thing they can do for these elections coming up now is tell people they shouldn't believe everything they see and hear, said Rao, the Adobe general counsel.

Original post:
How to detect AI deepfakes - The Washington Post

Read More..

What is AGI and how will we know when it’s been attained? – Fortune

There's a race underway to build artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans, or at least can do many things as well as people can.

Achieving such a concept, commonly referred to as AGI, is the driving mission of ChatGPT-maker OpenAI and a priority for the elite research wings of tech giants Amazon, Google, Meta and Microsoft.

It's also a cause for concern for world governments. Leading AI scientists published research Thursday in the journal Science warning that unchecked AI agents with long-term planning skills could pose an existential risk to humanity.

But what exactly is AGI, and how will we know when it's been attained? Once on the fringe of computer science, it's now a buzzword that's being constantly redefined by those trying to make it happen.

Not to be confused with the similar-sounding generative AI, which describes the AI systems behind the crop of tools that generate new documents, images and sounds, artificial general intelligence is a more nebulous idea.

It's not a technical term but a serious, though ill-defined, concept, said Geoffrey Hinton, a pioneering AI scientist who's been dubbed a Godfather of AI.

"I don't think there is agreement on what the term means," Hinton said by email this week. "I use it to mean AI that is at least as good as humans at nearly all of the cognitive things that humans do."

Hinton prefers a different term, superintelligence, for AGIs that are better than humans.

A small group of early proponents of the term AGI were looking to evoke how mid-20th century computer scientists envisioned an intelligent machine. That was before AI research branched into subfields that advanced specialized and commercially viable versions of the technology, from face recognition to speech-recognizing voice assistants like Siri and Alexa.

Mainstream AI research turned away from the original vision of artificial intelligence, which at the beginning was pretty ambitious, said Pei Wang, a professor who teaches an AGI course at Temple University and helped organize the first AGI conference in 2008.

Putting the G in AGI was a signal to those who still want to do the big thing. "We don't want to build tools. We want to build a thinking machine," Wang said.

Without a clear definition, it's hard to know when a company or group of researchers will have achieved artificial general intelligence, or if they already have.

Twenty years ago, I think people would have happily agreed that systems with the ability of GPT-4 or (Google's) Gemini had achieved general intelligence comparable to that of humans, Hinton said. Being able to answer more or less any question in a sensible way would have passed the test. But now that AI can do that, people want to change the test.

Improvements in autoregressive AI techniques that predict the most plausible next word in a sequence, combined with massive computing power to train those systems on troves of data, have led to impressive chatbots, but they're still not quite the AGI that many people had in mind. Getting to AGI requires technology that can perform just as well as humans in a wide variety of tasks, including reasoning, planning and the ability to learn from experiences.

Some researchers would like to find consensus on how to measure it. It's one of the topics of an upcoming AGI workshop next month in Vienna, Austria, the first at a major AI research conference.

This really needs a community's effort and attention so that mutually we can agree on some sort of classifications of AGI, said workshop organizer Jiaxuan You, an assistant professor at the University of Illinois Urbana-Champaign. One idea is to segment it into levels in the same way that carmakers try to benchmark the path between cruise control and fully self-driving vehicles.

Others plan to figure it out on their own. San Francisco company OpenAI has given its nonprofit board of directors, whose members include a former U.S. Treasury secretary, the responsibility of deciding when its AI systems have reached the point at which they outperform humans at most economically valuable work.

"The board determines when we've attained AGI," says OpenAI's own explanation of its governance structure. Such an achievement would cut off the company's biggest partner, Microsoft, from the rights to commercialize such a system, since the terms of their agreements only apply to pre-AGI technology.

Hinton made global headlines last year when he quit Google and sounded a warning about AI's existential dangers. A new Science study published Thursday could reinforce those concerns.

Its lead author is Michael Cohen, a University of California, Berkeley researcher who studies the expected behavior of generally intelligent artificial agents, particularly those competent enough to present a real threat to us by out-planning us.

Cohen made clear in an interview Thursday that such long-term AI planning agents don't yet exist. But they have the potential to get more advanced as tech companies seek to combine today's chatbot technology with more deliberate planning skills using a technique known as reinforcement learning.

Giving an advanced AI system the objective to maximize its reward and, at some point, withholding reward from it, strongly incentivizes the AI system to take humans out of the loop, if it has the opportunity, according to the paper, whose co-authors include prominent AI scientists Yoshua Bengio and Stuart Russell and law professor and former OpenAI adviser Gillian Hadfield.

"I hope we've made the case that people in government (need) to start thinking seriously about exactly what regulations we need to address this problem," Cohen said. For now, governments only know what these companies decide to tell them.

With so much money riding on the promise of AI advances, it's no surprise that AGI is also becoming a corporate buzzword that sometimes attracts a quasi-religious fervor.

It's divided some of the tech world between those who argue it should be developed slowly and carefully and others, including venture capitalists and rapper MC Hammer, who've declared themselves part of an accelerationist camp.

The London-based startup DeepMind, founded in 2010 and now part of Google, was one of the first companies to explicitly set out to develop AGI. OpenAI did the same in 2015 with a safety-focused pledge.

But now it might seem that everyone else is jumping on the bandwagon. Google co-founder Sergey Brin was recently seen hanging out at a California venue called the AGI House. And less than three years after changing its name from Facebook to focus on virtual worlds, Meta Platforms in January revealed that AGI was also at the top of its agenda.

Meta CEO Mark Zuckerberg said his company's long-term goal was building full general intelligence that would require advances in reasoning, planning, coding and other cognitive abilities. While Zuckerberg's company has long had researchers focused on those subjects, his attention marked a change in tone.

At Amazon, one sign of the new messaging was when the head scientist for the voice assistant Alexa switched job titles to become head scientist for AGI.

While not as tangible to Wall Street as generative AI, broadcasting AGI ambitions may help recruit AI talent who have a choice in where they want to work.

In deciding between an old-school AI institute or one whose goal is to build AGI and has sufficient resources to do so, many would choose the latter, said You, the University of Illinois researcher.

Visit link:
What is AGI and how will we know when it's been attained? - Fortune

Read More..

YouTube’s Videos Can’t Be Used to Train OpenAI’s Sora – PYMNTS.com

YouTube CEO Neal Mohan is addressing concerns regarding the use of YouTube content to train artificial intelligence models.

Mohan highlighted during an interview with Bloomberg the lack of concrete evidence on whether OpenAI has used YouTube videos to enhance its video generation AI software, Sora. However, he stressed that any such use without permission would contravene YouTube's terms of service.

"From a creator's perspective, when a creator uploads their hard work to our platform, they have certain expectations," Mohan said, per the report. "One of those expectations is that the terms of service is going to be abided by. It does not allow for things like transcripts or video bits to be downloaded, and that is a clear violation of our terms of service. Those are the rules of the road in terms of content on our platform."

Mohan elaborated on the expectations set by content creators when they upload their work to YouTube, emphasizing that the platform's terms prohibit unauthorized downloads or use of video and transcript data. This policy underpins the trust and integrity between YouTube and its users, ensuring creators' contributions are protected under the platform's guidelines.

The issue arises amid a debate and legal battle over the data sources employed by AI companies to train their models. OpenAI, a player in the field backed by Microsoft, has been at the center of discussions for its use of diverse web content in developing technologies like ChatGPT and DALL-E, in addition to Sora.

The pursuit of advanced AI capabilities has led companies to seek vast amounts of data, raising questions about the ethical use of internet-sourced content. OpenAI has been sued by The New York Times, other publications and authors over its use of content to train its models.

Despite inquiries, OpenAI has not clarified the specific use of YouTube videos for Sora's development, with Chief Technology Officer Mira Murati expressing uncertainty in previous statements. This ambiguity extends to discussions around the training of OpenAI's forthcoming model, GPT-5, with reports suggesting consideration of public YouTube video transcriptions as potential data sources.

In the interview with Bloomberg, Mohan discussed Google's approach to using YouTube content for its AI model, Gemini. He indicated a cautious procedure that respects individual agreements with creators. While some YouTube content may contribute to AI training, such use is aligned with the terms of service and any existing contracts with content owners.

Meanwhile, OpenAI said it sees potential for companies to use ChatGPT, according to a separate Bloomberg report. OpenAI is experiencing an increase in demand for its corporate version of ChatGPT despite facing competition from a growing number of AI companies offering similar products for the workplace. OpenAI Chief Operating Officer Brad Lightcap said ChatGPT Enterprise now has over 600,000 users, growing from approximately 150,000 users in January.


Here is the original post:
YouTube's Videos Can't Be Used to Train OpenAI's Sora - PYMNTS.com

Read More..

How Does Artificial Intelligence (AI) Work and Its Applications [Updated] – Simplilearn

Artificial Intelligence (AI), the new buzzword in the world of technology, is set to change the way future generations will function. But what exactly is AI, and how does AI work? You may not be aware of it, but you probably interact with AI on a daily basis. From smartphones to chatbots, AI is already prevalent in many aspects of our lives. Growing investments in this area and AI's increasing use in the enterprise space are indicative of how the job market is warming up for AI experts.

Let us begin this tutorial by first understanding what AI is and how it works. AI is probably one of the most exciting advancements that we're in the middle of experiencing as humans. It is a branch of computer science dedicated to creating intelligent machines that work and react like humans.

Let us cover the types of AI in the next section of this tutorial.

There are four main types of AI. They are:

Reactive machines: This kind of AI is purely reactive and does not have the ability to form memories or use past experiences to make decisions. These machines are designed to perform specific tasks. For example, programmable coffeemakers or washing machines are designed to perform specific functions, but they do not have memory.

Limited memory: This kind of AI uses past experiences and present data to make decisions. Limited memory means that the machines are not coming up with new ideas; they have a built-in program running the memory, and reprogramming is done to make changes in such machines. Self-driving cars are examples of limited memory AI.

Theory of mind: These AI machines can socialize and understand human emotions, and will have the ability to cognitively understand somebody based on the environment, their facial features, etc. Machines with such abilities have not been developed yet. There is a lot of research happening with this type of AI.

Self-aware AI: This is the future of AI. These machines will be super-intelligent, sentient and conscious. They will be able to react very much like a human being, although they are likely to have their own features.

The next section of this tutorial will help you get a better understanding of how exactly to implement AI.

Let's explore the following ways in which we can implement AI:

Machine learning: It is machine learning that gives AI the ability to learn. This is done by using algorithms to discover patterns and generate insights from the data they are exposed to.

Deep learning: Deep learning, which is a subcategory of machine learning, provides AI with the ability to mimic a human brain's neural network. It can make sense of patterns, noise and sources of confusion in the data.

Consider an image shown below:

Here we segregated the various kinds of images using deep learning. The machine goes through various features of photographs and distinguishes them with a process called feature extraction. Based on the features of each photo, the machine segregates them into different categories, such as landscape, portrait, or others.

Let us understand how deep learning works.

Consider an image shown below:

The above image depicts the three main layers of a neural network: the input layer, the hidden layers and the output layer.

Input layer: The images that we want to segregate go into the input layer. Arrows are drawn from the image onto the individual dots of the input layer. Each of the white dots in the yellow layer (the input layer) is a pixel in the picture. These images fill the white dots in the input layer.

We should have a clear idea of these three layers while going through this artificial intelligence tutorial.

Hidden layers: The hidden layers are responsible for all the mathematical computations or feature extraction on our inputs. In the above image, the layers shown in orange represent the hidden layers. The lines seen between these layers are called weights. Each one usually represents a float number, or decimal number, which is multiplied by the value in the input layer. All the weights add up in the hidden layer. The dots in the hidden layer represent a value based on the sum of the weights. These values are then passed to the next hidden layer.

You may be wondering why there are multiple layers. The hidden layers function as alternatives to some degree: the more hidden layers there are, the more complex the data that can go in and the more complex the output that can be produced. The accuracy of the predicted output generally depends on the number of hidden layers present and the complexity of the data going in.

Output layer: The output layer gives us the segregated photos. Once the layer adds up all the weights being fed in, it determines whether the picture is a portrait or a landscape.
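As a minimal sketch of the forward pass just described, the following Python snippet shows pixel values entering the input layer, being multiplied by weights and summed in a hidden layer, and being reduced to a single portrait-versus-landscape score in the output layer. The layer sizes and random weights are illustrative assumptions, not values from this tutorial.

import numpy as np

# Illustrative forward pass through a tiny neural network.
rng = np.random.default_rng(0)

pixels = rng.random(4)            # input layer: 4 pixel values from the image
w_hidden = rng.random((4, 3))     # weights between the input and hidden layers
w_output = rng.random((3, 1))     # weights between the hidden and output layers

hidden = np.maximum(0, pixels @ w_hidden)            # weighted sums plus a ReLU activation
score = 1 / (1 + np.exp(-(hidden @ w_output)))       # sigmoid turns the sum into a probability

print("Portrait probability:", float(score[0]))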

Example - Predicting Airfare Costs

This prediction is based on various factors, including:

We begin with some historical data on ticket prices to train the machine. Once our machine is trained, we feed it new data so it can predict the costs. Earlier, when we learned about the four kinds of machines, we discussed machines with memory. Here, we talk about the memory only, and how it learns a pattern in the data and uses it to make predictions for the new prices, as shown below:
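Since the original illustration is not reproduced here, a minimal Python sketch of the same idea follows: fit a model on historical fares, then predict the price of a new trip. The features (days before departure, route distance), prices and choice of linear regression are illustrative assumptions, not values from the tutorial.

from sklearn.linear_model import LinearRegression

# Historical training data: [days booked before departure, route distance in miles] -> fare.
historical_features = [
    [60, 500],
    [30, 500],
    [7, 500],
    [60, 2500],
    [30, 2500],
    [7, 2500],
]
historical_prices = [120, 150, 260, 310, 380, 650]

model = LinearRegression()
model.fit(historical_features, historical_prices)

# Predict the fare for a new trip: booked 14 days ahead on a 1,200-mile route.
predicted = model.predict([[14, 1200]])
print(f"Predicted fare: ${predicted[0]:.0f}")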

Next up in this tutorial, let us take a look at some applications of AI.

A common AI application that we see today is the automatic switching of appliances at home.

When you enter a dark room, the sensors in the room detect your presence and turn on the lights. This is an example of non-memory machines. Some of the more advanced AI programs are even able to predict your usage pattern and turn on appliances before you explicitly give instructions.

Some AI programs are able to identify your voice and perform an action accordingly. If you say, turn on the TV, the sound sensors on the TV detect your voice and turn it on.

With the Google dongle and a Google Home Mini, you can actually do this every day.

The last section of this Artificial Intelligence tutorial discusses the use case of AI in healthcare.

AI has several amazing use cases, and this section of the tutorial will help you understand them better, beginning with the application of AI in the healthcare field. The problem statement is predicting whether a person has diabetes or not. Specific information about the patient is used as input for this case. This information will include:

Check out Simplilearn's video on "Artificial Intelligence Tutorial" to see how a model for this problem statement is created. The model is implemented with Python using TensorFlow.
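A minimal sketch of what such a diabetes-prediction model might look like in Python with TensorFlow follows. The feature count, random stand-in data and network shape are illustrative assumptions, not details from the Simplilearn video; in practice the inputs would come from the real patient measurements listed above.

import numpy as np
import tensorflow as tf

num_features = 8                                   # assumed number of patient measurements
X = np.random.rand(768, num_features).astype("float32")        # stand-in patient data
y = np.random.randint(0, 2, size=(768, 1)).astype("float32")   # 1 = diabetic, 0 = not

# Small feed-forward network that outputs the probability of diabetes.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(num_features,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

# Predict for a new patient (random values standing in for real measurements).
new_patient = np.random.rand(1, num_features).astype("float32")
print("Probability of diabetes:", float(model.predict(new_patient)[0][0]))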


AI is redefining the way business processes are carried out in various fields, such as marketing, healthcare, financial services, and more. Companies are continuously exploring the ways they can reap benefits from this technology. As the quest for improvement of current processes continues to grow, it makes sense for professionals to gain expertise in AI.

If you found this tutorial informative, you can also check out our Caltech AI Course. The course will help you learn the basic concepts of AI, data science, machine learning, deep learning with TensorFlow and more. Apart from the theory, you will also get the opportunity to apply your skills to solve real-world problems through industry-oriented projects. So, without further ado, start your career in AI and get ahead!

The rest is here:
How Does Artificial Intelligence (AI) Work and Its Applications [Updated] - Simplilearn

Read More..

Will A.I. Boost Productivity? Companies Sure Hope So. – The New York Times

Wendy's menu boards. Ben & Jerry's grocery store freezers. Abercrombie & Fitch's marketing. Many mainstays of the American customer experience are increasingly powered by artificial intelligence.

The question is whether the technology will actually make companies more efficient.

Rapid productivity improvement is the dream for both companies and economic policymakers. If output per hour holds steady, firms must either sacrifice profits or raise prices to pay for wage increases or investment projects. But when firms figure out how to produce more per working hour, it means that they can maintain or expand profits even as they pay or invest more. Economies experiencing productivity booms can experience rapid wage gains and quick growth without as much risk of rapid inflation.

But many economists and officials seem dubious that A.I., especially generative A.I., which is still in its infancy, has spread enough to show up in productivity data already.

Jerome H. Powell, the Federal Reserve chair, recently suggested that A.I. may have the potential to increase productivity growth, but probably not in the short run. John C. Williams, president of the New York Fed, has made similar remarks, specifically citing the work of the Northwestern University economist Robert Gordon.

Mr. Gordon has argued that new technologies in recent years, while important, have probably not been transformative enough to give a lasting lift to productivity growth.

The enthusiasm about large language models and ChatGPT has gone a bit overboard, he said in an interview.


Go here to read the rest:
Will A.I. Boost Productivity? Companies Sure Hope So. - The New York Times

Read More..