
As employers expand artificial intelligence in hiring, Maryland is one … – Maryland Matters

Photo by Joe Raedle/Getty Images.

As artificial intelligence finds its way into aspects of everyday life and becomes increasingly advanced, some state legislators feel a new urgency to create regulations for its use in the hiring process.

Artificial intelligence, commonly known as AI, has been adopted by a quarter of businesses in the United States, according to the 2022 IBM Global AI Adoption Index, a jump of more than 13% over the previous year. Many are beginning to use it in the hiring process.

State laws haven't kept up. Only Illinois, Maryland and New York City require employers to ask for consent first if using AI during certain parts of the hiring process. A handful of states are considering similar legislation.

"Legislators are critical, and as always, legislators are always late to the party," said Maryland state Del. Mark Fisher, a Calvert County Republican. Fisher sponsored Maryland's law, enacted in 2020, regulating the use of facial recognition programs in hiring. It prohibits an employer from using certain facial recognition services, such as those that might cross-check applicants' faces against outside databases, during an applicant's interview process unless the applicant consents.

"Technology innovates first, and then it always seems like a good idea, until it isn't," Fisher said. "That's when legislators step up and try to regulate things as best as they can."

While AI developers are interested in innovating as fast as possible, with or without legislation, both developers and policymakers must think about the implications of their decisions, said Hayley Tsukayama, senior legislative activist at the Electronic Frontier Foundation, which advocates for civil liberties on the internet.

For policymakers to write effective legislation, developers must be transparent about what systems are being used and open to considering what the potential problems could be, Tsukayama said.

"It's probably not exciting to people who want to move faster or people who want to put these systems in their workplace right now, or already have them in the workplace right now," she said. "But I do think for policymakers, it's really important to talk to a lot of different people, particularly people who are going to be affected by this."

AI can help with the hiring process by performing resume evaluations, scheduling candidate interviews and sourcing data, according to an analysis by Skillroads, which provides professional resume-writing services that incorporate AI.

Some members of Congress are trying to act too. The proposed American Data Privacy and Protection Act aims to set rules for artificial intelligence, including AI risk assessments and its overall use, and would cover data collected during the hiring process. Introduced last year by U.S. Rep. Frank Pallone Jr., a New Jersey Democrat, it currently sits in the U.S. House Energy and Commerce Committee.

The Biden administration last year issued the Blueprint for an AI Bill of Rights, a set of principles to guide organizations and individuals on the design, use and deployment of automated systems, according to the document.

In the meantime, lawmakers in some states and localities have worked to create policies.

Maryland, Illinois and New York City are the only places with laws explicitly protecting job seekers from artificial intelligence during the hiring process, requiring companies to inform them when it's being used at certain points and to ask for consent before moving forward, according to data from Bryan Cave Leighton Paisner, a global law firm providing legal advice to clients regarding business litigation, finance, real estate and more.

California, New Jersey, New York and Vermont also have considered bills that would regulate AI in hiring systems, according to The New York Times.

Face recognition technology is used by many federal agencies, including for cybersecurity and policing, according to the U.S. Government Accountability Office. Some industries are using it as well.

Artificial intelligence can link face recognition programs with applicant databases in seconds, Fisher said, which he cited as the concern that motivated his bill.

His goal, he said, was to craft a narrow measure that could open the door for potential future AI-related legislation. The bill, which took effect in 2020 without being signed into law by then-Gov. Larry Hogan, a Republican, only includes the private sector, but Fisher said he'd like for it to be expanded to include public employers.

Policymakers' understanding of artificial intelligence, particularly when it comes to its civil rights implications, is almost nonexistent, said Clarence Okoh, the senior policy counsel at the Washington, D.C.-based nonprofit Center for Law and Social Policy (CLASP) and a Social Science Research Council Just Tech Fellow.

As a result, he said, companies that use AI often are regulating themselves.

"Unfortunately, I think what's happened is a lot of AI developers and sales have been very effective at crowding out the conversation with policymakers around how to govern AI and to mitigate social consequences," Okoh said. "And so unfortunately, there's a lot of interest in developing self-regulatory schemes."

Some self-regulatory practices include audits or compliance that use general guidance such as the Blueprint for an AI Bill of Rights, Okoh said.

The results have sometimes raised concerns. Some organizations operating under their own guidelines have used AI recruiting tools that showed bias.

In 2014, a group of developers at Amazon began creating an experimental, automated program to review job applicants' resumes for top talent, according to a Reuters investigation, but by 2015, the company found that its system effectively had taught itself that male candidates were preferable.

Those close to the project told Reuters the experimental system was trained to filter applicants by observing patterns in resumes submitted to the company over a 10-year period, most of which came from men. Amazon told Reuters the tool was never used by Amazon recruiters to evaluate candidates.

But some companies say AI is helpful and that strong ethics rules are in place.

Helena Almeida, vice president-managing counsel at ADP, a human resources management software company, says its approach to using artificial intelligence in its products follows the same ethical guidelines as before the technology emerged. Regardless of the legal requirements, Almeida said, ADP considers it an obligation to go above and beyond the basic framework to ensure its products don't discriminate.

Artificial intelligence and machine learning are used in several of ADP's hiring support services. And many current laws apply to the artificial intelligence world, she said. ADP also offers its clients certain services that use face recognition technology, according to its website. As the technology evolves, ADP has adopted a set of principles to govern its use of AI, machine learning and more.

"You can't discriminate against a particular demographic group without AI, and you also can't do it with AI," Almeida said. "So, that's an essential part of our framework and how we look at bias in these tools."

One way to avoid issues with AI in the hiring process is to maintain human involvement, from product design to regular monitoring of automated decisions.

Samantha Gordon, the chief programs officer at TechEquity Collaborative, an organization advocating for tech workers, said in situations where machine learning or data collection is used with no human input, the machine is at risk of being biased toward certain groups.

In one example, HireVue, a platform helping employers collect video interviews and assessments from job seekers, announced in 2021 the removal of its facial analysis component after finding during an internal review that the system had less correlation to job performance than other elements of the algorithmic assessment, according to a release from the organization.

"I think that's the thing that you don't have to be a computer scientist to understand," Gordon said. Speeding up the hiring process, she said, leaves room for error. That's where, Gordon said, legislators are going to have to intervene.

And on both sides of the aisle, Fisher said, legislators think companies ought to show their work.

"I would like to think that, generally speaking, people would like to see there be a lot more transparency and disclosure in the use of this technology," Fisher said. "Who's using this technology? And why?"

Editor's note: This article has been corrected to reflect that an internal Amazon artificial intelligence hiring program was experimental and, the company says, never used by recruiters to evaluate job candidates.


What is artificial intelligence? Experts weigh in – ABC News

Artificial intelligence, or AI, has migrated from techie niche to cultural mainstream. Today, the technology eases many basic tasks but raises profound life-or-death concerns.

By 2030, AI could contribute up to $15.7 trillion to the global economy -- an amount that exceeds the current annual output of China and India combined, accounting and research firm PwC found.

In recent months, a reckoning with AI has swept across institutions as disparate as universities, factories, media companies, governments and even amusement parks.

Here are some answers to fundamental questions about the technology:

AI simulates the human capacity to think and learn for the sake of performing tasks.

Computers or other machines equipped with the technology can serve dinner, package boxes, recommend personalized ads or write college-level essays, among many other uses.

Sauvik Das, a professor at Carnegie Mellon University who focuses on AI and cybersecurity, characterizes AI as a "broad umbrella term."

"AI is our attempt at creating tech that mimics human cognition," Das told ABC News. "The pace of development is pretty rapid right now."

The term was coined in the 1950s and was notably deployed by Alan Turing, who devised a test that examines whether a human interlocutor can distinguish between their conversations with a fellow individual versus those with a machine.

Over the ensuing decades, as computational capacity ballooned, AI grew increasingly sophisticated.

The technology manifests in everyday life through social media and movie recommendation algorithms, phone unlocking systems that rely on facial recognition, and personalized search engine results.

"The seeds have been there for a while," Chris McComb, a professor of mechanical engineering at Carnegie Mellon University and director of the Human+AI Design Initiative, told ABC News.

AI garnered mainstream attention last year after the release of a new-and-improved version of ChatGPT, a conversation bot that reached 100 million users within two months.

ChatGPT immediately responds to prompts from users on a wide range of subjects, generating an essay on Shakespeare or a set of travel tips for a given destination.

Microsoft launched a version of its Bing search engine in March that offers responses delivered by GPT-4, the latest model of ChatGPT. Rival search company Google in February announced an AI model called Bard.

The text bots, known as large language models, have prompted clashes within university classrooms, newsrooms and TV studios over uses and abuses in creating original work.

Art generators, meanwhile, instantly produce fresh artwork based on written prompts.

"We've just crossed the hump where AI seems to be doing a lot more than it used to do," Das said.

Proponents of AI say the technology could increase productivity, automate unpleasant or mundane tasks, and afford the opportunity to focus on creative and innovative endeavors.

"AI allows humans to focus on higher-value activities," Adam Wray, founder and CEO of AstrumU, an education-focused company that uses artificial intelligence, told ABC News.

The technology, Wray added, performs an array of tasks that would be "impossible for someone to efficiently handle at scale."

Detractors, however, warn AI could supercharge the spread of misinformation, hate speech and deceptive information, such as deep-fake video and audio. The technology could even pose an existential threat for humanity, some experts have warned.

In May, hundreds of business leaders and public figures sounded a sobering alarm over what they described as the threat of mass extinction posed by artificial intelligence.

Experts agree it's important to have conversations about safety and the implications of using AI.

"We have lots of experts thinking about the implications on society, safety, policy -- the right policies that we need to ensure we have safe, productive use of this technology," Brendan Englot, director of The Stevens Institute for Artificial Intelligence (SIAI), Stevens Institute of Technology's cross-division focus on AI, told ABC News.

"These same issues have come up with every new wave of technology," he added, citing cars and airplanes as two examples, "and ushered in new machines and tools that have potential to be impactful in a positive way and also carry risks."

While much uncertainty about AI remains, one forecast stands assured, Wray said.

"The only constant when it comes to AI is change," he said.

McComb said it's worth exploring AI, especially when it comes to small tasks that help make daily life "a more joyful experience" -- but it's important to be able to verify the results.

He added, "Were deeply social beings. There's something fundamentally human we have to protect about relationships and the dignity of humanity."

ABC News' Melissa Gaffney contributed to this report.


USF’s artificial intelligence researchers say regulation is needed – ABC Action News Tampa Bay

HILLSBOROUGH COUNTY, Fla. -- USF assistant professor John Licato says humans have never seen an advance like artificial intelligence.

He and others in the Advancing Machine and Human Reasoning Lab are working on how to get AI to understand language better.

He says he sees two very different reactions from the public regarding the power of AI systems like ChatGPT.

"Unwarranted paranoia on one side. But also the exact extreme opposite, where some people think 'the technology is not going to affect my job at all. I don't have to pay attention.' I think both of those reactions are probably incorrect in some way," said Licato.

Tech giants like Amazon, Meta, Google, and Microsoft are working with the White House to develop artificial intelligence regulations.

That includes security and transparency with the public, and testing of their products before they are released.

"Something is necessary, but it's very easy to see how they could overdo it and completely stifle innovation," said Licato.

Larry Hall is a professor at USF's Institute for Artificial Intelligence.


He says some sort of regulation is important.

One example would be detectors or watermarks to let people know when AI creates something.

"There probably hasn't been enough focus on that piece of making sure that what you see is real. Enough research on that. I think that's probably something we really need to do. It doesn't seem so exciting. But for society, it's really important," said Hall.

We've reported extensively on AI research, including the massive computer system at the University of Florida.

Other departments are working with AI to improve everything from farming to brain health.

At USF, they say they are ramping up their AI department with the hope of helping businesses, the government, and the military in our area.

"We've also got an increasing number of new hires on the faculty side that are bringing in some really innovative research that the community is just now starting to realize the benefit of. ChatGPT really helped with that," said Licato.

Some critics say that big tech companies still need to do more to show what data they use to train AI models.

President Biden is expected to sign an executive order for AI regulation eventually, but officials say final details are still being worked out.


Will artificial intelligence change the way we do church? – Independent Record

The world of artificial intelligence is expanding exponentially.

AI's infancy dates from the 1950s and 1960s, but it is now like an adolescent experiencing a sudden growth spurt. ChatGPT and its siblings in the AI family are the rage in business, health care, education, and more.

How can chatbots help religious leaders? I have used ChatGPT to help me find facts and sources. I drew upon ChatGPT and traditional sources to research this article. As a research assistant, however, it is helpful only to a point. Sometimes AI can make errors. Some chatbots even invent material.

Artificial intelligence is in its adolescence, soaking up billions of data points from sermons, blogs, online chats, news reports, jokes, poetry, fiction, and heaven knows what else. It is looking for patterns and connections.


Last December, Rabbi Josh Franklin in New York developed writer's block. Every preacher has been there. Instead of using someone else's sermon from the internet, he decided to experiment with ChatGPT. He asked it to write a sermon. Seconds later, there was a sermon. The result was a bit pedantic but not bad. As an experiment, he preached it on Sabbath, then told his congregation that he hadn't written it. He asked them who they thought had written it. No one guessed AI.

Surprise, a machine had produced the sermon, not a rabbi.

Is this the future of preaching?

I did not think I could be shocked, but this report stopped me in my tracks. When several hundred Lutherans in Bavaria, Germany, attended a service on June 9 designed by ChatGPT, the program not only selected hymns and prayers, but also composed a sermon that was delivered by an avatar on a big screen. (Joanne M. Pierce of the College of the Holy Cross, writing in The Conversation)

That's just the tip of the AI iceberg. Using the new artificial intelligence learning tool Khanmigo, students have interviewed historical figures, including Harriet Tubman, Abigail Adams, Genghis Khan, Montezuma, and Winston Churchill. Khanmigo is still in beta testing at this point. But what lies ahead? Will Sunday school teachers be able to ask students: "What do you want to ask Jesus or Moses today?" (There is a much better way to find out what they have said, and I suspect you know it if you are reading this article.)

I downloaded the Khanmigo software to try it myself. I didn't find Jesus or Moses to interview, but I did ask Mahatma Gandhi what he thought of Christians. The AI Gandhi replied: "I once said, 'I like your Christ, I do not like your Christians. Your Christians are so unlike your Christ.'"

"This statement reflects my belief that while the teachings of Christ were profound and inspiring, not all who identified as Christians lived up to these teachings. However, I always advocated for understanding and respect among all religions." The quote itself is accurate; what followed was a summary of material about Gandhi aggregated from the internet.

The artificial intelligence smartphone app ChatGPT on June 6.

ChatGPT is a good research tool. It saves a lot of time. No Google searches. Sorry, Wikipedia, but our love affair is over. No rummaging through my library for books and articles. No listening to podcasts for hours to find the one interview that would answer my questions. All I had to do was ask ChatGPT or one of its AI siblings. Then I fact-check, because AI can be a friend or a foe when it comes to accuracy.

I hear a lot of worries about AI taking over the world and enslaving us. People may be seeing too many "Terminator" movies. We are not the first people to fret about technological innovation and the changes it brings. Think about those scribes in the first century who had to adjust from unrolling scrolls to turning pages in a codex. Or monks who watched manuscript production die out with the advent of movable type and the printing press.

In my own lifetime, I have moved from a Smith Corona typewriter to a Microsoft Surface. I do confess that my smartphone may be smarter than me. I used to have a personal library of several thousand books, but now I buy books and carry them on my Kindle or listen to them on Audible. AI is as big and powerful as any of these technological developments in the past.

What will fully mature AI look like?

I don't know, but I can tell you it will change the way we do church.

In every generation, there are Luddites who fear innovation. But you can't stop the future. Like most change, however, there is a dark side to innovation. For preachers, it is all too easy to use ChatGPT to write last-minute sermons (known as "Saturday night specials").

Sorry, but that's just high-tech plagiarism unless you admit it. Pastors using AI to prepare Bible or theology classes need to realize AI is not a magic wand. It sometimes makes mistakes.

One theologian asked it to name the 10 greatest religious thinkers of the 20th century. It included John Calvin, a 16th-century reformer. Oops. AI can be biased too. Historical-critical interpreters of the Bible will notice that as ChatGPT indiscriminately gobbles up information from the internet, it sometimes offers a fundamentalist bias when asked a question about the Scriptures. That's because Biblical literalists dominate the internet, and that's the food source for AI's theological knowledge.

From my point of view, AI can make clergy more productive. But for producing sermons or preparing spiritual resources, it is limited because it is machine-generated intelligence.

The essential spiritual component of a good sermon or presentation comes from praying and wrestling with the text in light of personal experience in the context of a congregation's life. What's missing with AI? Jews call it nefesh. Christians generally call it soul.

Joanne Pierce, quoted earlier, puts it this way: chatbots "cannot know what it means to be human, to experience love or be inspired by a sacred text."

It can aggregate material about human feelings and mimic them, but at the end of the day, artificial intelligence is just that: artificial.

The Very Rev. Stephen Brehe is the retired dean of St. Peter's Episcopal Cathedral in Helena. He is now serving as the interim dean of Trinity Episcopal Cathedral in Reno, NV.



Python: The Foundation of Artificial Intelligence – Fagen wasanni

Python and artificial intelligence (AI) share a close relationship, akin to a harmonious duet of reason and imagination. Python, a versatile programming language, has been instrumental in countless AI advancements, fostering industry growth and encouraging creativity.

One of Python's strengths lies in its clarity and readability, which enables academics, engineers, and data scientists to delve into the intricate web of AI. The language's expansive ecosystem, encompassing libraries and frameworks like TensorFlow, PyTorch, and scikit-learn, provides robust tools for quickly building and deploying AI models.

Python's expressive syntax and dynamic nature make it ideal for iterative AI development, supporting flexible exploration and rapid prototyping. Its object-oriented approach facilitates organizing and abstracting modular code, promoting extensibility and maintainability of AI systems.

With its rich collection of libraries, Python empowers programmers to leverage existing solutions, accelerating AI development across various domains. These solutions range from machine learning techniques to natural language processing and computer vision. The thriving Python community actively fosters collaboration, sharing information, experience, and open-source projects, driving the advancement of AI applications.
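The rapid-prototyping claim is easy to illustrate. As a minimal sketch, using scikit-learn (one of the libraries named above; the dataset and model choices here are this example's own assumptions, not prescribed by any particular project), a working classifier can be trained and evaluated in a few lines:

```python
# Minimal illustration of rapid ML prototyping in Python with scikit-learn.
# The dataset (iris) and model (logistic regression) are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # 150 labeled flower measurements
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)  # a simple, readable baseline
model.fit(X_train, y_train)                # training is one call

accuracy = model.score(X_test, y_test)     # evaluation is another
print(f"held-out accuracy: {accuracy:.2f}")
```

The same fit/score pattern carries over to most estimators in the library, which is a large part of why Python supports the kind of flexible exploration described above.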

Python's adaptability extends even to cutting-edge areas like explainable AI, generative adversarial networks, and reinforcement learning. Acting as a common language, Python enables cross-disciplinary cooperation among AI researchers and facilitates the incorporation of diverse AI methodologies.

In the ever-evolving field of artificial intelligence, Python remains the language of choice, enabling the AI community to stretch boundaries, unlock new possibilities, and shape the future of intelligent systems.

Python serves as the throbbing heart of AI, orchestrating its marvels. With its clear syntax and extensive library, Python equips programmers with the necessary tools to harness the immense potential of AI. It takes us into a world where algorithms come alive, data reveals its secrets, and computers are taught to perceive the environment and make decisions.

Python's compatibility with popular AI frameworks, such as TensorFlow and PyTorch, further amplifies its significance. This allows us to develop intelligent systems that revolutionize industries and shape our future. In this powerful combination, Python becomes the language of creativity, enabling us to unleash the boundless potential and unravel the mysteries of AI.


Remarks at a UN Security Council High-Level Briefing on Artificial … – United States Mission to the United Nations

Ambassador Jeffrey DeLaurentis, Acting Deputy Representative to the United Nations, New York, New York, July 18, 2023

AS DELIVERED

Thank you, Mr. President. Thank you to the UK for convening this discussion, and thank you to the Secretary-General, Mr. Jack Clarke, and Professor Yi Zeng for your valuable insights.

Mr. President, Artificial Intelligence offers incredible promise to address global challenges, such as those related to food security, education, and medicine. Automated systems are already helping to grow food more efficiently, predict storm paths, and identify diseases in patients, and thus, used appropriately, AI can accelerate progress toward achieving the Sustainable Development Goals.

AI, however, also has the potential to compound threats and intensify conflicts, including by spreading mis- and dis-information, amplifying bias and inequality, enhancing malicious cyberoperations, and exacerbating human rights abuses.

We, therefore, welcome this discussion to understand how the Council can find the right balance between maximizing AI's benefits and mitigating its risks.

This Council already has experience addressing dual-use capabilities and integrating transformative technologies into our efforts to maintain international peace and security.

As those experiences have taught us, success comes from working with a range of actors, including Member States, technology companies, and civil society activists, through the Security Council and other UN bodies, and in both formal and informal settings.

The United States is committed to doing just that and has already begun such efforts at home. On May 4, President Biden met with leading AI companies to underscore the fundamental responsibility to ensure AI systems are safe and trustworthy. These efforts build on the work of the U.S. National Institute of Standards and Technology, which recently released an AI Risk Management Framework to provide organizations with a voluntary set of guidelines to manage risks from AI systems.

Through the White House's October 2022 Blueprint for an AI Bill of Rights, we are also identifying principles to guide the design, use, and deployment of automated systems so rights, opportunities, and access to critical resources or services are enjoyed equally and are fully protected.

We are now working with a broad group of stakeholders to identify and address AI-related human rights risks that threaten to undermine peace and security. No Member State should use AI to censor, constrain, repress or disempower people.

Military use of AI can and should also be ethical, responsible, and enhance international security. Earlier this year, the United States released a proposed Political Declaration on Responsible Military Use of AI and Autonomy, which elaborates principles on how to develop and use AI in the military domain in compliance with applicable international law.

The proposed Declaration highlights that military use of AI capabilities must be accountable to a human chain of command and that states should take steps to minimize unintended bias and accidents. We encourage all Member States to endorse this proposed Declaration.

Here at the UN, we welcome efforts to develop and apply AI tools that improve our joint efforts to deliver humanitarian assistance, provide early warning for issues as diverse as climate change or conflict, and further other shared goals. The International Telecommunication Union's recent AI for Good Global Summit represents one step in that direction.

Within the Security Council, we welcome continued discussions on technological advancements, including when and how to take action to address governments' or non-state actors' misuse of AI technologies to undermine international peace and security.

We must also work together to ensure AI and other emerging technologies are not used primarily as weapons or tools of oppression, but rather as tools to enhance human dignity and help us achieve our highest aspirations including for a more secure and peaceful world.

The United States looks forward to working with all relevant parties to ensure the responsible development and use of trustworthy AI systems serves the global public good.

I thank you.

###

By United States Mission to the United Nations | 18 July, 2023 | Topics: Highlights, Remarks and Highlights


As Artificial Intelligence Demand Booms, University of San Diego … – Times of San Diego

The University of San Diego. Photo by Chris Stone

The University of San Diego has announced the launch of an artificial intelligence (AI) and machine learning bootcamp in partnership with a national tech education provider.

The 26-week curriculum, designed and delivered by experienced tech practitioners, aims to equip those who enroll with the skills and training needed to build specialized data career paths in AI and machine learning.

USD will offer the boot camp with New York-based Fullstack Academy.

Demand for AI skills is projected to increase by nearly 36% over the next decade, according to the U.S. Bureau of Labor Statistics, far surpassing the average growth rate of roughly 6% for all occupations.

Notably, this AI boom also has the potential to contribute $15.7 trillion to the global economy by 2035, with China and the U.S. positioned to account for nearly 70% of the worldwide impact, according to PwC.

Nelis Parts, CEO of Fullstack Academy, said the rapid, widespread adoption and influence of AI and machine learning technologies are revolutionizing the way we work, live and interact with technology every day, and prompting companies to seek to expand talent pools.

"This new program with USD will enable professionals from all skill levels and interests to embark on a rewarding career path and contribute to an ever-evolving sector," Parts added.

Graduates of the USD AI & Machine Learning Bootcamp can qualify for positions across the state, where the average entry-level salary for artificial intelligence and machine learning engineer roles is $109,599, according to Glassdoor.

The top three AI employers in San Diego are Qualcomm, Amazon and Accenture.

The part-time, 26-week program, designed for both beginners and experienced tech professionals, will include lessons in applied data science with Python, machine learning, deep learning and deep neural networks, and their applications within AI technology.

"The USD AI & Machine Learning Bootcamp curriculum is comprehensive, from foundational principles to advanced concepts," said Andrew Drotos, director of professional and public programs at USD. "By equipping students with knowledge and skills in AI, we are building the next generation of AI experts and problem solvers who can navigate the opportunities and ethical considerations of an AI-driven world."

Applications are open for the live online USD AI & Machine Learning Bootcamp. The deadline to apply is Sept. 5 for the inaugural cohort commencing Sept. 11.

The USD AI & Machine Learning Bootcamp does not require university enrollment. Tuition costs $13,495. Scholarships are available to current USD students and alumni, as well as active-duty service members and veterans. For more, see the USD Tech Bootcamps website.


The Threat of Artificial Intelligence to Background Actors in Hollywood – Fagen wasanni

The rise of artificial intelligence (AI) has the potential to pose a significant threat to actors in Hollywood, particularly background actors who rely on work as extras. AI technology has the capability to replace human actors, raising concerns about the future of the industry.

Background actors, who typically receive a modest payment for their work and gain valuable on-set experience, are worried that AI could lead to a decrease in opportunities. However, established actors argue that AI will never be able to replicate award-winning performances.

While AI can scan and replicate the physical appearance of an actor, it may struggle to convey the same level of emotion and performance as a human. The nuanced delivery of lines and the depth of emotion displayed in performances may be difficult for AI to recreate convincingly.

Finding a balance between embracing the benefits of AI and maintaining the integrity of the creative industry is a complex task. On one hand, AI can be useful in scenarios where, for example, an actor does not want to redo a voiceover multiple times due to sound imperfections. However, the use of AI also raises ethical concerns, such as the creation of deepfakes that can manipulate and deceive audiences.

The current strikes by the Screen Actors Guild are partly driven by concerns about the potential threat of AI. However, it is unlikely that these strikes will lead to a conclusive resolution. With the rapid evolution of technology, it is highly probable that similar issues will arise in the future.

As technology continues to advance, it is crucial to address these emerging challenges and ensure a fair and sustainable future for actors in the ever-changing landscape of the entertainment industry. The impact of AI on the art of acting remains a topic of ongoing discussion and debate.


New York City Uses Artificial Intelligence to Track Fare Evaders – Fagen wasanni

New York City is utilizing artificial intelligence (AI) to combat fare evasion in its subway system, according to NBC News. The AI system, developed by Spanish software company AWAAIT, is currently being deployed in several subway stations, with plans to expand to more stations by the end of the year.

The primary purpose of the AI system is to track the amount of potential revenue lost to fare evasion rather than to catch fare evaders in the act. It records instances of fare skipping and analyzes data on how individuals avoid paying the fare. The recordings are stored on the Metropolitan Transit Authority's servers for a limited time, but they are not shared with law enforcement.

The implementation of this AI technology is similar to what has been done in Barcelona, where the same software is used on trains to capture images of fare evaders and send them to station officers.

The Metropolitan Transit Authority has emphasized that the AI system is solely intended for tracking purposes and not for assisting law enforcement. This decision is in line with the organization's efforts to increase the presence of law enforcement in the subway system to deter larger-scale crimes.

In the fourth quarter of last year, the NYPD made 601 arrests and issued 13,157 summonses for fare evasion. Those numbers increased in the first quarter of 2023, with 923 arrests and 28,057 summonses issued for subway fare evasion.

While the AI system is currently focused on monitoring revenue loss, it is possible that it could be utilized in the future to directly address fare evasion. The Metropolitan Transit Authority plans to continue expanding the use of AI technology to improve the efficiency and security of the subway system.


A.I. Regulation Is in Its Early Days – The New York Times

Regulating artificial intelligence has been a hot topic in Washington in recent months, with lawmakers holding hearings and news conferences and the White House announcing voluntary A.I. safety commitments by seven technology companies on Friday.

But a closer look at the activity raises questions about how meaningful the actions are in setting policies around the rapidly evolving technology.

The answer is that it is not very meaningful yet. The United States is only at the beginning of what is likely to be a long and difficult path toward the creation of A.I. rules, lawmakers and policy experts said. While there have been hearings, meetings with top tech executives at the White House and speeches to introduce A.I. bills, it is too soon to predict even the roughest sketches of regulations to protect consumers and contain the risks that the technology poses to jobs, the spread of disinformation and security.

"This is still early days, and no one knows what a law will look like yet," said Chris Lewis, president of the consumer group Public Knowledge, which has called for the creation of an independent agency to regulate A.I. and other tech companies.

The United States remains far behind Europe, where lawmakers are preparing to enact an A.I. law this year that would put new restrictions on what are seen as the technology's riskiest uses. In contrast, there remains a lot of disagreement in the United States on the best way to handle a technology that many American lawmakers are still trying to understand.

That suits many of the tech companies, policy experts said. While some of the companies have said they welcome rules around A.I., they have also argued against tough regulations akin to those being created in Europe.

Here's a rundown of the state of A.I. regulation in the United States.

The Biden administration has been on a fast-track listening tour with A.I. companies, academics and civil society groups. The effort began in May when Vice President Kamala Harris met at the White House with the chief executives of Microsoft, Google, OpenAI and Anthropic and pushed the tech industry to take safety more seriously.

On Friday, representatives of seven tech companies appeared at the White House to announce a set of principles for making their A.I. technologies safer, including third-party security checks and watermarking of A.I.-generated content to help stem the spread of misinformation.

Many of the practices that were announced had already been in place at OpenAI, Google and Microsoft, or were on track to take effect. They don't represent new regulations. Promises of self-regulation also fell short of what consumer groups had hoped.

"Voluntary commitments are not enough when it comes to Big Tech," said Caitriona Fitzgerald, deputy director at the Electronic Privacy Information Center, a privacy group. "Congress and federal regulators must put meaningful, enforceable guardrails in place to ensure the use of A.I. is fair, transparent and protects individuals' privacy and civil rights."

Last fall, the White House introduced a Blueprint for an A.I. Bill of Rights, a set of guidelines for consumer protections involving the technology. The guidelines also aren't regulations and are not enforceable. This week, White House officials said they were working on an executive order on A.I. but didn't reveal details or timing.

The loudest drumbeat on regulating A.I. has come from lawmakers, some of whom have introduced bills on the technology. Their proposals include the creation of an agency to oversee A.I., liability for A.I. technologies that spread disinformation and the requirement of licensing for new A.I. tools.

Lawmakers have also held hearings about A.I., including a hearing in May with Sam Altman, the chief executive of OpenAI, which makes the ChatGPT chatbot. Some lawmakers have tossed around ideas for other regulations during the hearings, including nutritional labels to notify consumers of A.I. risks.

The bills are in their earliest stages and so far do not have the support needed to advance. Last month, the Senate leader, Chuck Schumer, Democrat of New York, announced a monthslong process for the creation of A.I. legislation that included educational sessions for members in the fall.

"In many ways we're starting from scratch, but I believe Congress is up to the challenge," he said during a speech at the time at the Center for Strategic and International Studies.

Regulatory agencies are beginning to take action by policing some issues emanating from A.I.

Last week, the Federal Trade Commission opened an investigation into OpenAIs ChatGPT and asked for information on how the company secures its systems and how the chatbot could potentially harm consumers through the creation of false information. The F.T.C. chair, Lina Khan, has said she believes the agency has ample power under consumer protection and competition laws to police problematic behavior by A.I. companies.

"Waiting for Congress to act is not ideal, given the usual timeline of congressional action," said Andres Sawicki, a professor of law at the University of Miami.
