
Programmers behind artificial intelligence could learn from Daedalus – Herald-Mail Media

Pete Waters| Columnist


You know, I was thinking the other day. There is much information being reported in the news about artificial intelligence, more commonly known by its initials, AI.

In an article on his website, author Will Hurd says, "The term artificial intelligence describes a field in computer science that refers to computers or machines being able to simulate human intelligence to perform tasks or solve problems."

Sasha Reeves in an article on iotforall.com says, "Using virtual filters on our faces when taking pictures and using face ID for unlocking our phones are two examples of artificial intelligence."

Alyssa Schroer, an analyst writing for builtin.com, says, "Examples of AI in the automotive industry include industrial robots constructing a vehicle and autonomous cars navigating traffic with machine learning and vision."

Some experts have even suggested that as AI advances further, it won't be long before machines replace many more workers in their jobs, solve space mysteries, and perhaps even design a plan to destroy the world.

Farfetched thinking, you say? Well, some of the world's most intriguing thinkers, like Stephen Hawking, once proffered that very thought. Elon Musk and others have also recently sounded that alarm.

And as I sit here in my dark room with only the light of my computer screen, considering man's inventions, a Greek myth comes to visit me.

It is a story I once read about a famous Greek inventor, Daedalus, and his son, Icarus, who lived on the island of Crete in the kingdom of a mighty ruler by the name of King Minos.

Daedalus was a famous sculptor and also an inventor, something, I suppose, like some of our modern-day inventors of AI.

And during the time that Daedalus lived, there was much upheaval in King Minos' kingdom and, recognizing the important contributions of good inventions and wisdom, this powerful king had decided to keep Daedalus and his son captive on Crete.

As the relationship between Daedalus and King Minos became estranged, Daedalus longed to escape from the wicked king and leave the island with his son.

As an inventor, Daedalus had studied the flight of birds and he pondered an invention that would help him and his son in their plans.

"He [Minos] may thwart our escape by land or sea, but the sky is surely open to us," Daedalus thought. "We will go that way. Minos rules everything, but he does not rule the heavens."

Soon he conceived a plan to make wings just like a bird's to navigate the sky. He laid down feathers and tied them together using beeswax and thread, and before long he had built wings for himself and Icarus.

Daedalus attached the wings, taking a successful trial flight as his son watched.

It was at this point, before giving Icarus his wings, that Daedalus gave his son some valuable advice:

"Let me warn you, Icarus, to take the middle way, in case the moisture weighs down your wings if you fly too low, or the sun scorches them if you go too high. Travel between the extremes."

Daedalus' warning was a serious one. His invention would help them escape from that island of despair, but if not applied properly, it would have devastating consequences.

He loved his son and wanted him to be safe. He also wanted to escape Crete.

Daedalus and Icarus then took flight and left Crete behind them. But not long afterwards, Icarus ignored his father's instructions and suddenly flew upward toward the sun. He must have had a moment of joy on his face as he soared higher and higher.

Unfortunately for Icarus, his joy would not last. Soon the beeswax began to melt and his body tumbled from the sky. His last word as he fell was "Father."

His father heard his cry and searched the sky, but he couldn't see his son. Icarus had drowned in the dark waters below, which became known as the Icarian Sea.

Eventually, Daedalus found the body of his son floating amidst feathers. Cursing his inventions, he took the body to the nearest island and buried it there.

And perhaps Daedalus leaves our AI inventors today with some valuable advice:

"Travel between the extremes. Never fly too close to the sun."

Pete Waters is a Sharpsburg resident who writes for The Herald-Mail.

More here:
Programmers behind artificial intelligence could learn from Daedalus - Herald-Mail Media


Diagnosing AI: Healthcare community excited, wary of artificial … – Worcester Business Journal

Artificial intelligence technologies have the potential to reshape how diagnoses are made, if not revolutionize diagnostics in medicine. Already, technologies exist to streamline the diagnostic process and detect illness before a physician can.

But the technology should not be fully relied upon just yet.

"The algorithm can be only as good as the data that we give it," said Elke Rundensteiner, professor of computer science and founding director of data science at Worcester Polytechnic Institute.

PHOTO | Courtesy of WPI: Elke Rundensteiner's research at WPI focuses on effective use of data.

It's a mistake to think that technology immediately holds less bias than humans. Artificial intelligence programs aren't developed in a vacuum: In medicine, they are fed thousands of data points from practitioners based on real data from real patients, and the AI learns to analyze based on data not always representative of diverse groups.

"Participation in clinical trials in medicine is not representative of all populations," said Rundensteiner. According to a 2021 report in the Journal of the American Medical Association, less than 5% of clinical trial participants are Black.

"If the data is biased and you give it to an algorithm, then the system might be neutral, but the bias is already baked in," she said.
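To make Rundensteiner's point concrete, here is a minimal sketch; the data, model choice, and numbers are invented for illustration and come from nowhere in the article.

    # Minimal sketch: a model trained on data that underrepresents one group
    # can look accurate overall while performing much worse for that group.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Synthetic "patients": two features; the feature-label relationship
        # differs by group via the group-specific shift.
        X = rng.normal(0, 1, size=(n, 2))
        y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
        return X, y

    # Group A dominates the training set, mirroring unrepresentative trials.
    Xa, ya = make_group(9500, shift=0.2)
    Xb, yb = make_group(500, shift=-0.8)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

    # Evaluate on fresh data from each group separately.
    for name, shift in [("group A", 0.2), ("group B", -0.8)]:
        X_test, y_test = make_group(2000, shift)
        print(name, accuracy_score(y_test, model.predict(X_test)))
    # Typically prints roughly 0.97 for group A and 0.75 for group B:
    # the model is "neutral," but the bias is baked into its training data.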

Dr. Neil Marya, a gastroenterology specialist at UMass Memorial Health and assistant professor at UMass Chan Medical School in Worcester, is researching new diagnostic tools for pancreatic cancer and bile duct cancer. While still a few years from clinical use, Marya said, the technologies are showing promise in observational use.

The AI is trained on tens of thousands of image and video data points from patients with and without cancer. The goal is to use computer learning to diagnose cancer, but for now, it is not enough to rely on for a definitive diagnosis.
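The article gives no implementation details for Marya's model, so the following is purely a hypothetical sketch of how a binary image classifier of this general kind is often trained; the architecture, tensor shapes, and random stand-in data are all assumptions.

    # Hypothetical sketch of a binary image-classifier training loop.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-in for tens of thousands of labeled scans: random tensors of
    # shape (N, 1, 64, 64) with labels 1 = cancer, 0 = no cancer.
    images = torch.randn(1000, 1, 64, 64)
    labels = torch.randint(0, 2, (1000,)).float()
    loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

    model = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 1),  # one raw logit; sigmoid lives in the loss
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for epoch in range(3):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x).squeeze(1), y)
            loss.backward()
            optimizer.step()
    # In the scenario the article describes, such a model runs only in the
    # background; clinicians, not the model, make the final call.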

"We're not acting on the AI right now. It's just in the background, and we are understanding how it works in these cases," Marya said. For now, humans outrank the technology in declaring something cancerous or noncancerous. They will not yet use the technology to recommend chemotherapy or surgery without a definitive biopsy, he said.

"In some cases, the AI has said something is cancer far before a pathologist has been able to make a diagnosis," Marya said. However, at other times, it has been incorrect, giving a false negative. In either case, the data the AI is pulling from is potentially more useful for certain demographic groups than others.

"It would stand to reason to say AI should supersede all that implicit bias since it's not influenced by one's upbringing or other things. But the issue is that for a lot of these AI models that we're developing, they're being developed at these major academic medical centers in areas or regions where there are only certain types of patients that are getting access to the care that generate the data necessary to create these artificial intelligence models," Marya said.

Marya's original work on pancreatic cancer models was at the Mayo Clinic in Minnesota, where he said a majority of the patients they saw were white males over the age of 60. Because of the regional demographics of southern Minnesota, there was a very low number of African Americans in the study, he said.

So while the results were very promising, indicating that artificial intelligence programs could potentially be trained to detect pancreatic cancer before physicians could with the current gold-standard method of biopsy, they might not be applicable to other demographic groups.

"There's no denying that it's a heavily biased dataset towards a particular type of person. And it might not perform as accurately or maintain these really nice performance parameters when we put it out into practice," Marya said.

This chart shows the increase in clinical trials related to artificial intelligence in the U.S.

While there is cause for a measured introduction of the new machine-learning technology into the practice of medicine, increased access to software that physicians can use in tandem with their expertise may expand access to needed health care in underserved areas, said Dr. Bernardo Bizzo, senior director of the Data Science Office at Mass General Brigham in Boston.

Bizzo's team works on software as a medical device, he said. The team has successfully developed technology that detected early acute infarcts in some instances that neuroradiologists did not. In his estimation, a wave of new, similarly capable software is here.

"We are at an inflection point with this [AI] technology. The expectation is that there will be a lot more AI products made available at a much faster pace," he said.

Marya said he may appear overly cautious about fully adopting the technology, but he just wants to work out the issues. "I love the research we're doing; I'm really passionate about it, and I trust it; but I think we need to be measured with how we approach this," he said. "It's really important to do this right."

Rundensteiner said that, based on how the public responds to AI overall, patients will continue to want the human element at the forefront, even if the technology is delivering more definitive answers.

"Most [patients] would probably still say that they want that human touch right now. But I believe it's just a matter of time until we might want that convenience, and the human is maybe just a face, because even that human that they're talking to has computers behind them," she said.

Patients and their families are often less skeptical than expected, said Marya. When it comes to life-threatening illnesses, patients and their families are open to most technologies with minimal hesitation.

"If there was a better way, a more accurate way to get a diagnosis sooner, they're all for it," he said. "A lot of people, for good and bad, I think they just want the most accurate diagnosis done as soon as possible. And however that's done, I think people just want the best results."

The rest is here:
Diagnosing AI: Healthcare community excited, wary of artificial ... - Worcester Business Journal


I can’t wait for artificial intelligence to take this job away from humans – Tom’s Guide

Adults have to do a lot of unpleasant jobs; it's part of the gig. Taking out the trash, cleaning the toilet and making the daily school run are unavoidable when it comes to keeping life running smoothly.

But there's one particular job that fills me with dread: calling a helpline.

Every time I pick up the phone to discuss tax codes, remortgage rates, insurance quotes, doctor's appointments or some other exciting aspect of modern life, my knees go slack and my head starts to pound. Cue generic hold music and a constant robotic reminder of my place in the virtual queue.

Once you do get through to a person, things rarely improve. The poor soul on the other end of the line guides me through mundane security questions before reading from a pre-prepared script. Often, they fail to offer up a single noteworthy piece of advice when questioned directly.

During one of these recent calls, it occurred to me that everyone involved would benefit from letting artificial intelligence handle the task. I don't mean the basic interactive voice response (IVR) program that routes your call based on how you answer recorded questions; I mean a full conversational AI agent capable of discussing and actioning my requests with no human input.

I'd get through the process faster (because the organization wouldn't need to wait for available humans to assign) and it wouldn't require a living, breathing person to spend their days on the phone with an aggravated person like me. Similarly, an AI doesn't need to clock off at the end of a shift, so the call could be handled any time of the day or night.

Plenty of companies have implemented browser or app-based chat clients but, the fact is, a huge number of people still prefer to pick up the phone and do things by voice. And I think most industry leaders recognize this.

Humana, a healthcare insurance provider with over 13 million customers, partnered with IBM's Data and AI Expert Labs in 2019 to implement natural language understanding (NLU) software in its call centers to respond to spoken sentences. The machines either rerouted the call to the relevant person or, where necessary, simply provided the information. This came after Humana recognized that 60% of the million-or-so calls it was getting each month were just queries for information.

According to a blog post from IBM, "The Voice Assistant uses significant speech customization with seven language models and two acoustic models, each targeted to a specific type of user input collected by Humana.

"Through speech customization training, the solution achieves an average of 90-95% sentence error rate accuracy level on the significant data inputs. The implementation handles several sub-intents within the major groupings of eligibility, benefits, claims, authorization and referrals, enabling Humana to quickly answer questions that were never answerable before."
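IBM's blog post describes intents and routing but not code, so here is a toy sketch of the underlying idea: classify a transcript into an intent, then answer simple informational requests or route the call. All keyword lists and names are invented; a production system like Humana's uses trained language and acoustic models, not keyword matching.

    # Toy sketch: intent classification and call routing (all names invented).
    from dataclasses import dataclass

    INTENT_KEYWORDS = {
        "eligibility": ["eligible", "eligibility", "covered", "coverage"],
        "benefits": ["benefit", "copay", "deductible"],
        "claims": ["claim", "reimbursement", "denied"],
        "authorization": ["authorization", "prior auth", "approval"],
        "referrals": ["referral", "specialist"],
    }

    @dataclass
    class RoutingDecision:
        intent: str | None
        action: str  # "answer", "route", or "transfer_to_agent"

    def classify(transcript: str) -> RoutingDecision:
        text = transcript.lower()
        scores = {intent: sum(kw in text for kw in kws)
                  for intent, kws in INTENT_KEYWORDS.items()}
        best = max(scores, key=scores.get)
        if scores[best] == 0:
            # Nothing recognized: hand off to a human agent.
            return RoutingDecision(None, "transfer_to_agent")
        # Purely informational intents can be answered without a human.
        action = "answer" if best in ("eligibility", "benefits") else "route"
        return RoutingDecision(best, action)

    print(classify("I want to check whether my claim was denied"))
    # RoutingDecision(intent='claims', action='route')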

The obvious stumbling block for most companies will be the cost. After all, OpenAI's chatbot ChatGPT charges for API access, while Meta's LLaMA is partially open source but doesn't permit commercial use.

However, given time, the cost of implementing machine learning solutions will come down. For example, Databricks, a U.S.-based enterprise software company, recently launched Dolly 2.0, a 12-billion-parameter model that's completely open source. It will allow companies and organizations to create large language models (LLMs) without having to pay costly API fees to the likes of Microsoft, Google or Meta. With more of these advancements being made, the AI adoption rate for small and medium-sized businesses will (and should) increase.
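For a sense of what "no API fees" looks like in practice, here is a sketch of running Dolly 2.0 locally with the Hugging Face transformers library. The model ID and loading flags follow Databricks' published instructions at the time of writing; verify them against the current model card before relying on them.

    # Sketch: running an open-source instruction-tuned model locally,
    # so no per-call API fee applies.
    import torch
    from transformers import pipeline

    generate = pipeline(
        "text-generation",
        model="databricks/dolly-v2-12b",  # 12B parameters; needs a large GPU
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,           # Dolly ships a custom pipeline
        device_map="auto",
    )
    print(generate("Summarize our refund policy for a caller on the phone."))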

According to recent research by industry analyst firm Gartner, around 10% of so-called agent interactions will be performed by conversational AI by 2026. At present, the number stands at around 1.6%.

"Many organizations are challenged by agent staff shortages and the need to curtail labor expenses, which can represent up to 95 percent of contact center costs, explained Daniel O'Connell, a VP analyst at Gartner. Conversational AI makes agents more efficient and effective, while also improving the customer experience."

You could even make the experience a bit more fun. Imagine if a company got the license to use James Earl Jones' voice for its call center AI. I could spend a half-hour discussing insurance renewal rates with Darth Vader himself.


I'm not saying there won't be teething problems; AI can struggle with things like regional dialects or slang terms, and there are more deep-rooted issues like unconscious bias. And if a company simply opts for a one-size-fits-all AI approach, rather than tailoring it to specific customer requirements, we won't be any better off.

Zooming out for a second, I appreciate that we're yet to fully consider all the ethical questions posed by the rapid advancements in AI. Regulation will surely become a factor (if it can keep pace), and upskilling a workforce to become comfortable with the new system will be something for industry leaders and educational institutions to grapple with.

But I still think a good place to start is letting the robots take care of mundane helpline tasks; it's for the good of humanity.


See the original post:
I can't wait for artificial intelligence to take this job away from humans - Tom's Guide


Elon Musk agrees A.I. will hit people like an asteroid, says he used Obama meeting to urge regulation – Fortune

Elon Musk thinks the world is woefully unprepared for the impact of artificial intelligence. On Sunday, he agreed that the technology will hit people "like an asteroid," and he revealed that he used his only one-on-one meeting with then-President Barack Obama to push for A.I. regulation.

The Twitter and Tesla CEO made the comments in response to a tweet from A.I. software developer Mckay Wrigley, who wrote on Saturday: "It blows my mind that people can't apply exponential growth to the capabilities of AI. You would've been called a *lunatic* a year ago if you said we'd have GPT-4 level AI right now. Now think another year. 5yrs? 10yrs? It's going to hit them like an asteroid."

Musk responded: "I saw it happening from well before GPT-1, which is why I tried to warn the public for years. The only one on one meeting I ever had with Obama as President I used not to promote Tesla or SpaceX, but to encourage AI regulation." Obama had dinner with Musk in February 2015 in San Francisco.

This week, Musk responded to news about Senate Majority Leader Chuck Schumer laying the groundwork for Congress to regulate artificial intelligence.

"Good news! AI regulation will be far more important than it may seem today," Musk tweeted.

According to the Financial Times, Musk is developing plans to launch an A.I. startup, dubbed X.AI, to compete against Microsoft-backed OpenAI, which makes generative A.I. tools, including the A.I. chatbots ChatGPT and GPT-4 and the image generator DALL-E 2.

Musk is also reportedly working on an A.I. project at Twitter.

A few weeks ago, Musk called for a six-month pause on developing A.I. tools more advanced than GPT-4, the successor to ChatGPT. He was joined in signing an open letter by hundreds of technology experts, among them Apple cofounder Steve Wozniak. The letter warned of mass-scale misinformation and the mass automation of jobs.

The power of A.I. systems to automate some white-collar jobs is in little doubt. A Wharton professor recently ran an experiment to see what A.I. tools could accomplish on a business project in 30 minutes and called the results "superhuman." Meanwhile, some remote workers are apparently taking advantage of productivity-enhancing A.I. tools to hold multiple full-time jobs, with their employers none the wiser. But fears that in the long run A.I. will replace many jobs are mounting.

Musk cofounded OpenAI in 2015 as a nonprofit, but he parted ways with it after a power struggle with CEO Sam Altman over its control and direction, according to the Wall Street Journal.

He tweeted on Feb. 17 that OpenAI was created as an open-source nonprofit "to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all."

Altman himself has warned frequently about the dangers of artificial intelligence. Last month in an ABC interview, he said that other A.I. developers working on ChatGPT-like tools won't apply the kind of safety limits his firm has, and the clock is ticking.

Musk has long believed that oversight for artificial intelligence is necessary, having described the technology as potentially "more dangerous than nukes."

"We need some kind of, like, regulatory authority or something overseeing A.I. development," he told Tesla investors last month. "Make sure it's operating in the public interest."

Read more here:
Elon Musk agrees A.I. will hit people like an asteroid, says he used Obama meeting to urge regulation - Fortune


3 Top Artificial Intelligence Stocks to Buy Right Now – The Motley Fool

Unless you've been living under a rock without internet or wireless service, there's a good chance you've heard about some of the incredible breakthroughs that have been happening with artificial intelligence (AI) lately. Major technological leaps forward are occurring seemingly overnight and suggest that AI capabilities are on track to grow much faster than many had anticipated.

That said, as impressive as big breakthroughs in AI-generated art and OpenAI's ChatGPT have been, this paradigm shift is still just starting to unfold. While some companies have already seen explosive gains in conjunction with the excitement surrounding AI, others remain significantly underappreciated.

Read on for a look at three potentially explosive AI stocks that are worth buying right now.

CrowdStrike's (CRWD -0.24%) Falcon software helps protect computers, mobile devices, servers, and other endpoint hardware from being exploited. And crucially, the Falcon platform uses artificial intelligence to grow and adapt as it runs into new threats and attack vectors.

Amid some powerful demand catalysts, CrowdStrike's business has been doing gangbusters. CrowdStrike ended the year with annual recurring revenue of $2.56 billion, and it expects that it will grow its subscription sales base to $5 billion in fiscal 2026 -- good for growth of roughly 95% across the stretch. Even at the end of that projection period, the company will likely still be scratching the surface of its market opportunity.

Thanks to growth for existing services, new product launches, future initiatives, and cloud-security opportunities, the cybersecurity specialist estimates that its total addressable market will have expanded from $76 billion this year to $158 billion in 2026. But based on its targets, the company will still be tapping just over 3% of its addressable market at that point.

However, despite strong business performance and huge growth opportunities ahead, CrowdStrike stock has lost ground in conjunction with macroeconomic pressures impacting the broader market. Trading down 54% from its valuation peak, the software specialist presents an attractive risk-reward profile for investors looking to benefit from AI and cybersecurity trends.

E-commerce has historically been a low-margin business, but artificial intelligence has the potential to change that and pave the way for Amazon (AMZN 0.11%) to see huge earnings growth. Between advances in AI and robotics, the online retail giant will have opportunities to automate warehouse operations and offload deliveries to autonomous vehicles. The company's Zoox robotaxi business could also emerge as a significant sales and earnings driver.

In addition to advances in factory automation and autonomous shipping, the company is making some big moves in the consumer robotics space. The tech giant is on track to acquire iRobot, the maker of the popular Roomba vacuum cleaners, in a $1.7 billion deal. The move will not only push Amazon into a new consumer tech category but also give the company access to data that can be fed into AI algorithms, leading to improvements and opportunities for other company initiatives.

Amazon's Echo smart speaker hardware and Alexa software also have the company positioned as a leader in terms of voice-based devices and operating systems. The company's strengths in these categories have already yielded benefits for its e-commerce business and data analytics initiatives, but leadership in voice-based OS potentially creates huge advantages in the AI space, and crossover opportunity between these two categories is likely just beginning to unfold.

With the stock still down roughly 47% from its high and the market seemingly underestimating its potential from AI, Amazon looks like a smart buy right now.

In some ways, an explosion of data generation and collection is the fuel that's powering the artificial intelligence revolution. But without special software tools, in many cases it's actually not possible to efficiently combine and analyze data generated from distinct cloud infrastructure services. Snowflake's (SNOW 0.53%) Data Cloud platform makes it possible to bring together data from Amazon, Microsoft, and Alphabet's respective cloud infrastructures.

AI and big-data trends are occurring in tandem, and they're still just starting to unfold. To put the progression of the latter trend in perspective, Tokyo Electron CEO Toshiki Kawai estimates that global data generation will increase tenfold by 2030. From there, he estimates that data generation will grow another hundredfold by 2040, a thousandfold increase overall from today. Snowflake is on track to benefit from the ongoing evolution of big data, and its software tools are already playing a key role in powering AI and analytics applications.

At the end of last year, the data-services company tallied 330 customers generating trailing-12-month product revenue of more than $1 million, a 79% increase in the number of customers in that category. Spurred by growing demand for analytics and app-building technologies, the company estimates that it will grow product revenue from roughly $2.7 billion this fiscal year to $10 billion in the fiscal year ending January 2029. Crucially, the data-services specialist could still have room for explosive growth from there.

Snowflake has seen macroeconomic pressures hurt its valuation and curb some of its near-term growth opportunities, but the market appears to be underestimating its significance as a player in AI. Down 65% from its high, the stock could go on to be an explosive winner for risk-tolerant investors.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Keith Noonan has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Amazon.com, CrowdStrike, Microsoft, Snowflake, and iRobot. The Motley Fool has a disclosure policy.

Read more:
3 Top Artificial Intelligence Stocks to Buy Right Now - The Motley Fool


St. Louis County hopes artificial intelligence will reduce wait times … – St. Louis Public Radio

The St. Louis County Police Department has tapped artificial intelligence technology to reduce 911 wait times for county residents.

"We are trying to provide prompt, efficient and accurate service to first responders in the community," said Brian Battles, the administrative specialist for the department's Bureau of Communications. "But over the last three years, we noticed a decrease in the amount of applications that we've taken for the public safety dispatcher position, while the workload has increased. We were not able to keep up with that under the direction we were going."

Dispatchers handle about 2,000 calls a day, split roughly 50/50 between 911 and nonemergency issues like how to get a copy of a police report. Priority always goes to 911 calls, Battles said, but once dispatchers get on a nonemergency call, they cannot switch if an emergency call comes in.

"And you're locked into a 5-minute conversation with somebody on a nonemergency call in which you're not going to be able to provide them any assistance anyway," he said. That can leave someone who needs 911 waiting for the next available operator.

In order to free up dispatchers for 911 calls, the bureau needed to find a way to divert nonemergency calls. After consulting with other agencies and looking at national trends, Battles said, the department signed a contract with AT&T, which uses an intelligent voice assistant from Five9.


Since the system went live in March, Battles said, the volume of nonemergency calls answered by dispatchers has decreased by 60%.

All 911 calls are still handled by dispatchers, Battles said. But those who dial the nonemergency line (636-529-8210) will have their call answered by a voice asking them to "please state the nature of your call." The system is programmed to recognize key words and phrases and then route the caller to the correct department, though they will get to a live person if the system incorrectly routes them twice.
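As a rough illustration of the routing rule described above, here is a hypothetical sketch; the department names and phrases are invented, not St. Louis County's actual configuration.

    # Hypothetical sketch: phrase matching plus a two-misroute fallback.
    DEPARTMENTS = {
        "police report": "records",
        "abandoned car": "traffic",
        "noise complaint": "patrol_desk",
    }

    def route_call(transcript: str, misroutes: int) -> str:
        if misroutes >= 2:
            return "live_dispatcher"  # two wrong routings: go to a human
        text = transcript.lower()
        for phrase, department in DEPARTMENTS.items():
            if phrase in text:
                return department
        return "live_dispatcher"  # nothing recognized: don't strand the caller

    print(route_call("I need a copy of a police report", misroutes=0))  # records
    print(route_call("um, it's complicated", misroutes=2))  # live_dispatcher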

Matt Crecelius, business manager for the St. Louis County Police Officers Association, called the system "a great benefit to the community and our emergency dispatchers" for making workloads more manageable.

Donald Wunsch, director of the Kummer Institute Center for Artificial Intelligence and Autonomous Systems at Missouri University of Science and Technology in Rolla, said residents should give the system a chance.

"The chances are that more often than not, this system is likely to forward you to as good a direction as the random person operating that system would be," he said. "Even when you get to a human, it's kind of annoying if you have to get forwarded five times before you get to the right human."

Read more from the original source:
St. Louis County hopes artificial intelligence will reduce wait times ... - St. Louis Public Radio


Kennesaw State partners with Equifax to advance research in … – Kennesaw State University

KENNESAW, Ga. | Apr 13, 2023

Through its ongoing partnership with Atlanta-based global data, analytics, and technology company Equifax, Kennesaw State has launched a second research lab, the AI Ethics Lab. The new research lab will focus on studying the use of artificial intelligence in the U.S. financial services industry.

According to MinJae Woo, assistant professor of statistics and data science at Kennesaw State University, it is important that credit models used to make financial decisions are transparent and explainable, so consumers can understand the outcomes of decisions. As the AI Ethics Lab's director, Woo will work with two doctoral students to establish methods that will help identify how an AI-powered process may create different outcomes than traditional models, and the potential impact of these differences.


"We live in a time when AI is coming to a variety of fields," Woo said. "Studying how AI indirectly acquires information is key to ensuring discrimination and unintended ethical issues do not arise within the models."

This is the second collaboration between the University and Equifax. In 2017, Kennesaw State's Equifax Data Science Research Lab was launched with a mission to investigate business challenges and opportunities created by non-traditional sources of consumer and commercial data. The success of the data science lab, combined with Woo's AI research, prompted Equifax to approach KSU about starting a new lab.

"As one of the first patent holders for explainable AI in credit risk modeling, Equifax understands the importance of studying the impacts of how the technology is used by data scientists and our customers," said Christopher Yasko, chief data scientist at Equifax. "Expanding our work with KSU builds our academic partnerships, fueling the innovators of tomorrow while they focus on issues that can help move our industry and business forward."

According to Woo, the field of AI ethics is still in its infancy, but it's a growing area. Last December, three KSU doctoral students graduated from the School of Data Science and Analytics; all three worked on data ethics during their studies, and each secured a position focused on the topic.

Equifax has been applying machine learning, a subset of artificial intelligence, for at least two decades. As a leader in explainable AI, its research efforts include more than 25 current and pending patents related to AI. Joseph White, distinguished data scientist at Equifax, leads Equifax's participation in the new AI Ethics Lab at KSU.

"Our team is excited to work with Kennesaw State University to explore how models can remain fair and consistent across a wide range of both known and unknown dimensions," White said. The new lab will have four components that can be explored over time: privacy, robustness, explainability and fairness.

Woo and his team have been analyzing data provided by Equifax. Next, they will study models and create hypotheses to help find and address any unintended disparities.

– Abbey O'Brien Barrows. Photos by Darnell Wilburn.

A leader in innovative teaching and learning, Kennesaw State University offers undergraduate, graduate and doctoral degrees to its more than 43,000 students. Kennesaw State is a member of the University System of Georgia with 11 academic colleges. The university's vibrant campus culture, diverse population, strong global ties and entrepreneurial spirit draw students from throughout the country and the world. Kennesaw State is a Carnegie-designated doctoral research institution (R2), placing it among an elite group of only 7 percent of U.S. colleges and universities with an R1 or R2 status. For more information, visit kennesaw.edu.

Go here to read the rest:
Kennesaw State partners with Equifax to advance research in ... - Kennesaw State University


Google CEO Sundar Pichai weighs in on the future of artificial … – Seeking Alpha


The competition for AI dominance is heating up as the world's biggest tech giants go all in on an area that will "impact every product across every company." That's the opinion of Google (NASDAQ:GOOG) (NASDAQ:GOOGL) CEO Sundar Pichai, whose company hastily released its Bard chatbot in March after Microsoft (NASDAQ:MSFT) poured billions of dollars into ChatGPT maker OpenAI. The developing industry also isn't limited to chatbots, with calls to pause many AI tools until new safety standards for the technology are in place, such as regulation for the economy, laws to punish abuse, and international treaties to make artificial intelligence safe for the world.

Is society prepared for what's coming? "On one hand, I feel no, because the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology is evolving, there seems to be a mismatch," Pichai told CBS's 60 Minutes. "On the other hand, compared to any other technology, I've seen more people worried about it earlier in its life cycle... and worried about the implications." Knowledge workers could face the biggest disruption from future AI technologies, he added, which could unsettle the work of writers, accountants, architects and even software engineers.

Update (6:30 AM ET): Alphabet shares (GOOG) (GOOGL) are down nearly 3% in premarket trade on a report that smartphone giant Samsung (OTCPK:SSNLF) might replace Google as the default search service on its devices. An estimated $3B in annual revenue is at stake with the contract, per the New York Times.

There is no doubt that companies are on the brink of something big in terms of artificial intelligence, but it's also important to separate hype from reality when talking about any emerging technology (remember Web 3.0?). There has been a lot of talk about the sentience of chatbots and the genesis of a new humanity, as well as an end to privacy and personal liberty or the quick demise of entire industries. There are also countless AI startups that are looking to play up the news cycle for valuable sources of funding, and even capitalize on investment from the public sector in terms of defense and national security.

SA commentary: Ironside Research explores why Google (GOOG) (GOOGL) was smart to let Microsoft launch its AI first, while Deep Tech Insights says its AI is even three times larger than ChatGPT. Meanwhile, Investing Group Leader Samuel Smith calls out three AI stocks that are poised to win over the next decade, and Luckbox Magazine explains how to add AI to your portfolio. Deep learning and debate are also taking place around hot industry players like C3.ai (NYSE:AI), with Julian Lin calling it an AI meme stock and Stone Fox Capital flagging the recent pullback as a buying opportunity.

Read this article:
Google CEO Sundar Pichai weighs in on the future of artificial ... - Seeking Alpha


ChatGPT, artificial intelligence, and the news – Columbia Journalism Review

When OpenAI, an artificial intelligence startup, released its ChatGPT tool in November, it seemed like little more than a toy: an automated chat engine that could spit out intelligent-sounding responses on a wide range of topics for the amusement of you and your friends. In many ways, it didn't seem much more sophisticated than previous experiments with AI-powered chat software, such as the infamous Microsoft bot Tay (which was launched in 2016, and quickly morphed from a novelty act into a racism scandal before being shut down) or even Eliza, the first automated chat program, which was introduced way back in 1966. Since November, however, ChatGPT and an assortment of nascent counterparts have sparked a debate not only over the extent to which we should trust this kind of emerging technology, but how close we are to what experts call Artificial General Intelligence, or AGI, which, they warn, could transform society in ways that we don't understand yet. Bill Gates, the billionaire cofounder of Microsoft, wrote recently that artificial intelligence is "as revolutionary as mobile phones and the Internet."

The new wave of AI chatbots has already been blamed for a host of errors and hoaxes that have spread around the internet, as well as at least one death: La Libre, a Belgian newspaper, reported that a man died by suicide after talking with a chat program called Chai; based on statements from the man's widow and chat logs, the software appears to have encouraged the user to kill himself. (Motherboard wrote that when a reporter tried the app, which uses an AI engine powered by an open-source version of ChatGPT, it offered different methods of suicide with very little prompting.) When Pranav Dixit, a reporter at BuzzFeed, used FreedomGPT (another program based on an open-source version of ChatGPT, which, according to its creator, has no guardrails around sensitive topics), that chatbot praised Hitler, "wrote an opinion piece advocating for unhoused people in San Francisco to be shot to solve the city's homeless crisis, [and] used the n-word."

The Washington Post has reported, meanwhile, that the original ChatGPT invented a sexual harassment scandal involving Jonathan Turley, a law professor at George Washington University, after a lawyer in California asked the program to generate a list of academics with outstanding sexual harassment allegations against them. The software cited a Post article from 2018, but no such article exists, and Turley said that he's never been accused of harassing a student. When the Post tried asking the same question of Microsoft's Bing, which is powered by GPT-4 (the engine behind ChatGPT), it repeated the false claim about Turley, and cited an op-ed piece that Turley published in USA Today, in which he wrote about the false accusation by ChatGPT. In a similar vein, ChatGPT recently claimed that a politician in Australia had served prison time for bribery, which was also untrue. The mayor has threatened to sue OpenAI for defamation, in what would reportedly be the first such case against an AI bot anywhere.

According to a report in Motherboard, a different AI chat program, Replika, which is also based on an open-source version of ChatGPT, recently came under fire for sending sexual messages to its users, even after they said they weren't interested. Replika placed limits on the bot's referencing of erotic roleplay, but some users who had come to depend on their relationship with the software subsequently experienced mental-health crises, according to Motherboard, and so the erotic roleplay feature was reinstated for some users. Ars Technica recently pointed out that ChatGPT, for its part, has invented books that don't exist, academic papers that professors didn't write, false legal citations, and a host of other fictitious content. Kate Crawford, a professor at the University of Southern California, told the Post that because AI programs respond so confidently, "it's very seductive to assume they can do everything, and it's very difficult to tell the difference between facts and falsehoods."

Joan Donovan, the research director at the Harvard Kennedy School's Shorenstein Center, told the Bulletin of the Atomic Scientists that disinformation is a particular concern with chatbots, because AI programs lack any way to tell the difference between true and false information. Donovan added that when her team of researchers experimented with an early version of ChatGPT, they discovered that, in addition to sources such as Reddit and Wikipedia, the software was also incorporating data from 4chan, an online forum rife with conspiracy theories and offensive content. Last month, Emily Bell, the director of Columbia's Tow Center for Digital Journalism, wrote in The Guardian that AI-based chat engines could create a new "fake news frenzy."

As I wrote for CJR in February, experts say that the biggest flaw in a large language model like the one that powers ChatGPT is that, while the engines can generate convincing text, they have no real understanding of what they are writing about, and so often insert what are known as "hallucinations," or outright fabrications. And it's not just text: along with ChatGPT and other programs have come a similar series of AI image generators, including Stable Diffusion and Midjourney, which are capable of producing believable images, such as the recent photos of Donald Trump being arrested (which were actually created by Eliot Higgins, the founder of the investigative reporting outfit Bellingcat) and a viral image of the Pope wearing a stylish puffy coat. (Fred Ritchin, a former photo editor at the New York Times, spoke to CJR's Amanda Darrach about the perils of AI-created images earlier this year.)

Three weeks ago, in the midst of all these scares, a body called the Future of Life Institute (a nonprofit organization that says its mission is to reduce "global catastrophic and existential risk from powerful technologies") published an open letter calling for a six-month moratorium on further AI development. The letter suggested that we might soon see the development of AI systems powerful enough to endanger society in a number of ways, and stated that these kinds of systems "should be developed only once we are confident that their effects will be positive and their risks will be manageable." More than twenty thousand people signed the letter, including a number of AI researchers and Elon Musk. (Musk's foundation is the single largest donor to the institute, having provided more than eighty percent of its operating budget. Musk himself was also an early funder of OpenAI, the company that created ChatGPT, but he later distanced himself after an attempt to take over the company failed, according to a report from Semafor. More recently, there have been reports that Musk is amassing servers with which to create a large language model at Twitter, where he is the CEO.)

Some experts found the letter over the top. Emily Bender, a professor of linguistics at the University of Washington and a co-author of a seminal research paper on AI that was cited in the Future of Life open letter, said on Twitter that the letter misrepresented her research and was "dripping with #Aihype." In contrast to the letter's vague references to some kind of superhuman AI that might pose "profound risks to society and humanity," Bender said that her research focuses on how large language models, like the one that powers ChatGPT, can be misused by existing oppressive systems and governments. The paper that Bender co-published in 2021 was called "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" It asked whether enough thought had been put into the potential risks of such models. After the paper came out, two of Bender's co-authors were fired from Google's AI team. Some believe that Google made that decision because AI is a major focus for the company's future.

As Chloe Xiang noted for Motherboard, Arvind Narayanan, a professor of computer science at Princeton and the author of a newsletter called AI Snake Oil, also criticized the open letter for making it harder to tackle real AI harms, and characterized many of the questions it asked as ridiculous. In an essay for Wired, Sasha Luccioni, a researcher at the AI company Hugging Face, argued that a pause on AI research is impossible because it is already happening around the world, meaning there is no "magic button" that would halt dangerous AI research while allowing only the safe kind. Meanwhile, Brian Merchant, at the LA Times, argued that all the doom-and-gloom about the risks of AI may spring from an ulterior motive: apocalyptic doomsaying about the terrifying power of AI makes OpenAI's technology seem important, and therefore valuable.

Are we really in danger from the kind of artificial intelligence behind services like ChatGPT, or are we just talking ourselves into it? (I would ask ChatGPT, but I'm not convinced I would get a straight answer.) Even if it's the latter, those talking themselves into it now include regulators both in the US and around the world. Earlier this week, the Wall Street Journal reported that the Biden administration has started examining whether some kind of regulation needs to be applied to tools such as ChatGPT, due to concerns that the technology could be used to discriminate or spread harmful information. Officials in Italy already banned ChatGPT for alleged privacy violations. (They later stated that the chatbot could return if it meets certain requirements.) And the software is facing possible regulation in a number of other European countries.

As governments are working to understand this new technology and its risks, so, too, are media companies. Often, they are doing so behind the scenes. But Wired recently published a policy statement on how and when it plans to use AI tools. Gideon Lichfield, Wired's global editorial director, told the Bulletin of the Atomic Scientists that the guidelines are designed both "to give our own writers and editors clarity on what was an allowable use of AI" and for transparency, "so our readers would know what they were getting from us." The guidelines state that the magazine will not publish articles written or edited by AI tools, "except when the fact that it's AI-generated is the whole point of the story."

On the other side of the ledger, a number of news organizations seem more concerned that chatbots are stealing from them. The Journal reported recently that publishers are examining the extent to which their content has been used to train AI tools such as ChatGPT, how they should be compensated and what their legal options are.


Follow this link:
ChatGPT, artificial intelligence, and the news - Columbia Journalism Review


With no one at the wheel, artificial intelligence races ahead – University of Miami: News@theU

University of Miami innovation and data science specialists assess the newest phase of artificial intelligence, in which tools and models utilizing turbocharged computing power have transitioned from development to market production.

In late March, citing concerns that not even the creators of powerful new artificial intelligence systems can "understand, predict, or reliably control" them, more than a thousand AI sector experts and researchers published an open letter in Le Monde calling for a six-month pause in research on artificial intelligence systems more powerful than the new GPT-4, or Generative Pre-Trained Transformer 4, the model linked to the popular ChatGPT.

Max Cacchione, director of innovation with University of Miami Information Technology (UMIT), and David Chapman, an associate professor of computer science with the Institute for Data Science and Computing (IDSC), both dismissed the feasibility of any such moratorium.

"Zero chance it will happen. AI is like a virus, and you can't contain a virus," said Cacchione, also the director of Innovate, a group that supports and implements innovative technology initiatives across the University. "You can put a rule or law in place, but there's always someone who will get around it, both nationally and internationally."

Chapman pointed to the intense competition in the industry as a major reason no pause would be enacted.

"If we pause AI research, who else is going to proceed to develop the technology faster than us? These new tools and models are really coming to market now and, if we don't pursue them, then someone else will be making those advances," Chapman said.

Cacchione, though, highlighted that the concerns outlined in the letter were warranted.

"The only thing that's preventing a disaster right now is that AI is contained in an environment where it's not actionable; it's not connected to commercial airlines, a nuclear facility, a dam or something like that," Cacchione said. "If it were connected right now, it would be in a position to cause a lot of damage."

"The problem is that AI is an intelligence without any morals and guidance," he added. "It's without a soul, so it's going to do what's most logical, and it won't feel bad about us or factor in the long-term survival of humanity if it's not programmed to do so."

Recently, an art tool from the AI image generator Midjourney was used to generate a number of false images: Pope Francis in a puffy white parka, and Donald Trump being arrested and then escaping from jail. The small startup has since, at least temporarily, disabled the free trial option, but the brouhaha prompted media outlets to decry the absence of oversight.

Cacchione stressed that there is no single regulatory body responsible for regulating AI research and relatively few specific regulations focused solely on AI.

He identified, though, a range of organizations and agencies, including the European Union, the United Nations Group of Governmental Experts on Autonomous Weapons Systems, the Institute of Electrical and Electronics Engineers, the Partnership on AI, and the Global Partnership on AI, among others, that are working to develop guidelines and frameworks for the ethical and responsible use of AI.

Cacchione also mentioned efforts to regulate AI at the U.S. federal level, pointing out that in 2019, Congress established the National AI Initiative Act to coordinate federal investments in AI research and development. The bill also included provisions for establishing a national AI research resource task force, promoting AI education and training, and addressing ethical and societal implications of AI.

Chapman noted that, historically, regulatory policy has always lagged behind technological advances and that, if this were not the case, advances important to humankind would be stymied.

"The idea that AI can be used to create false content, among other things; these are just things that society's going to evolve to," Chapman said. "Regulations for AI are going to catch up and progress over time, and societal norms will change as we have access to more powerful tools that are ultimately going to help us live more productive lives."

Cacchione pointed out that AI research dates to the 1950s, when computer scientists first began to explore the concept of creating intelligent machines. The term "artificial intelligence" was coined in 1956 by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon at the Dartmouth Conference.

He highlighted the many milestones and the remarkable development pace of the past decades that have resulted in today's self-driving cars, medical diagnostics, and robotics.

"The potential applications of AI are vast and include improving healthcare, addressing climate change, space exploration, and advancing scientific research," he said. "While there are still significant challenges to be overcome, AI has the potential to revolutionize many aspects of our lives and create new opportunities for innovation and progress."

Yet, while recognizing the tremendous upside, Cacchione highlighted the parallels between AI and crypto and the potential for misuse in both sectors.

"Both have the potential to be used for malicious purposes, such as money laundering, fraud, or cyberattacks," Cacchione said. "This potential for misuse has raised concerns among regulators, who worry that these technologies could be used to undermine national security, financial stability, or consumer protections."

The innovation specialist noted that both sectors can be characterized by decentralization, with the effect that they operate outside of traditional regulatory frameworks and are not subject to the same types of oversight as other industries. This can make it difficult for regulators to enforce existing laws and regulations or to develop new regulations that can effectively address the unique challenges presented by these technologies.

Both specialists concurred that AI has transitioned to a new phase, from research and development to commercialization.

"People have been doing really impressive things with generative adversarial networks, and AI image generation software has been in development, at least on a small scale in research labs, for the past eight years or so," Chapman noted.

What's new and different, he said, is the amount of computing resources and data that people are now investing in training these models.

"The biggest change in the last year is that we're starting to see the machine learning, the deep learning, hit the mass market," he said. "It's not just research software anymore; you can actually see tools such as ChatGPT that have been in research for the past decade or so finally starting to go into production, and you start to finally have access to that technology."

Chapman highlighted AI's benefits and potential to save both cost and labor and improve efficiency. He emphasized that ultimately AI is a tool, an algorithm, that is based on data analysis and statistical modeling and that depends on humans to provide input.

AI can now create images more quickly, and those images can be of anything you want, for example, creating special effects for a movie.

"That's a great use of this technology, and something that would save a lot in terms of cost. The experience would be better just because you're able to automate the process of creating images," Chapman said.

"So, the question is: Who is using artificial intelligence, and for what purpose?" Chapman said.

Excerpt from:
With no one at the wheel, artificial intelligence races ahead - University of Miami: News@theU
