
Exclusive: What people actually think about AI – POLITICO

People on the streets of Manhattan. | Spencer Platt/Getty Images

The dizzying pace of AI development is hard for even tech-world news junkies to keep up with, never mind those outside that bubble.

But that doesn't mean people don't have strong opinions about where things are going, or thoughtful concerns about how fast we should be heading there.

Since its founding in May of this year (and public launch in August), the AI Policy Institute (AIPI) has conducted regular polling on Americans' attitudes about AI development. Its newest poll, shared exclusively with Digital Future Daily ahead of its release on Thursday, shows overwhelming, bipartisan concern about the use of AI on the battlefield in particular, and support for strong government restrictions on the technology.

"The public is very on board with direct restrictions on technology today, and a direct slowdown," Daniel Colson, AIPI's co-founder and executive director, told me today. "Elected officials aren't taking this seriously right now, but as its political salience increases that's going to be increasingly seriously on the table."

The tech industry might be used to treating people mainly as potential customers, if the dynamic of the social media era is any indication. But as politicians cast a more critical eye on Big Tech, it's significant what the average person thinks about how things are changing, and what should be done about it, especially with technologies as potentially disruptive as AI.

With that in mind, a few key takeaways from the poll, which surveyed nearly 1,300 respondents via web panel from Nov. 20 to 21, with a margin of error of 4.3 points (crosstabs here):

Americans largely support restrictions on AI-generated content, especially when obvious harms are involved.

Colson and his colleagues found 39 percent net support for the government requiring artificial intelligence companies to monitor the use of AI for racist content, and a whopping 60 percent net support for "Preventing AI from being used for impersonations using the likeness or voice of people in a video, image or sound form without that person's consent" as a policy priority.

By comparison, they found a relatively wan 15 percent net support for a ban of all political advertisement using AI-generated images or voices of real people. A relatively significant partisan difference might be one explanation, as AIPI found that such a ban enjoyed 29 points of net support among Democrats compared to just 11 points among Republicans, who have already deployed such ads to some effect in the 2024 presidential primary.

Americans are worried about x-risk.

Colson emphasized Americans' concern over superhuman capabilities in AI, with 56 percent net support in the poll for a policy goal of "Preventing AI from quickly reaching superhuman capabilities," and 50 percent net support for "Treating AI as an incredibly powerful and dangerous technology." There was also 43 percent net support for "Slowing down the increase in AI capability."

"You basically won't be able to find an elected official who's willing to say that ought to be a policy priority, but there's a big difference between where they are and where the public is," Colson said.

Americans want international cooperation, including with China, especially when it comes to defense.

Support for global oversight of AI was particularly intense when it comes to its integration into weapons systems. Fifty-eight percent of respondents supported an international agreement regulating the use of artificial intelligence in war, and 59 percent supported an agreement to "ban the use of Artificial Intelligence (AI) in drone warfare and the nuclear chain of command."

Americans are also largely supportive of general global agreements to regulate AI, although not by as wide a margin: 51 percent supported the introduction of a global watchdog to regulate the use of artificial intelligence, and 41 percent supported the introduction of an international treaty to ban any smarter-than-human artificial intelligence.

There's a lot Americans don't know, but the more they learn, the more worried they are.

For almost every question AIPI asked that featured the option to respond "Don't know," somewhere between 15 and 20 percent responded that way. That reflects how little confidence Americans have in their understanding of the technology thus far, but Colson says that as awareness has increased, he's seen an attendant spike in worry and willingness to regulate.

"Awareness of AI has generally been increasing, and also skepticism of AI has been increasing," he said, citing Pew polling over the past few years. "As it becomes increasingly important over the next few years [as a policy issue], the general direction of the public is more and more concerned, and more and more skeptical."


The Rumble app download page against the background of YouTube. | Chris Delmas/AFP via Getty Images

The new social media platforms are teaming up; or rather, more specifically, the conservative ones are.

POLITICO's Rebecca Kern reported today for Pro subscribers on a lawsuit filed by conservative-favorite video website Rumble, which alleges a watchdog group called Check My Ads waged a hypocritical disinformation campaign to "censor, silence and cancel" speech on the platform.

The lawsuit follows a wave of similar efforts by other conservative-leaning social media platforms, like Elon Musk's high-profile legal campaign against Media Matters and former President Donald Trump's Truth Social lawsuit against 20 media outlets this month. All platforms allege they're entitled to damages based on lost advertising revenue after negative PR campaigns.

Chris Pavlovski, Rumble's CEO, wrote on Musk's X: "As promised, the cavalry has arrived."


A leading House Republican involved in crypto bill efforts says to expect movement next year.

POLITICO's Eleanor Mueller reported for Pro subscribers today that Rep. French Hill (R-Ark.) said crypto legislation is unlikely to make it into this year's National Defense Authorization Act, and that his colleagues are planning to bring it to the floor for votes early next year, as he said today on CNBC's "Squawk Box."

"There was an effort made in the Senate to add some banking bill topics to the National Defense Authorization bill; that didn't really go anywhere. That was a potential opening for perhaps the stablecoin legislation," Hill said. "But we want to go to the floor with both the stablecoin bill and the framework bill early in 2024."

Stay in touch with the whole team: Ben Schreckinger ([emailprotected]); Derek Robertson ([emailprotected]); Mohar Chatterjee ([emailprotected]); Steve Heuser ([emailprotected]); Nate Robson ([emailprotected]) and Daniella Cheslow ([emailprotected]).

If you've had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.


Read the original post:

Exclusive: What people actually think about AI - POLITICO


AI may be just what the dentist ordered – Harvard Gazette

Dental practices, dental schools, oral health researchers, and policymakers are rapidly positioning themselves to evolve in step with the dawning AI movement in oral healthcare, say AI and dentistry experts, who shared their insights at the inaugural Global Symposium on AI & Dentistry at Harvard, Nov. 3-4.

"AI holds the promise of transforming the way we practice oral healthcare, pinpoint and treat diseases and conditions, and increase equitable access to care and treatment," said William Giannobile, dean of Harvard School of Dental Medicine, during his opening remarks.

Three hundred attendees from 30 countries joined the symposium in person, with another 120 tuning in virtually. More than 65 research projects were presented, featuring a range of device prototypes, patient-facing smartphone apps, and other technologies under development at the intersection of AI and dentistry.

For more than 40 years, researchers have been experimenting with ways to apply AI to dentistry, said Florian Hillen, founder and CEO of VideaHealth, a dental imaging startup launched from AI research conducted at Harvard and MIT. Within the last decade, AI capabilities have finally reached critical mass.

"AI-powered tools are now helping dentists identify dental decay in patients up to five years earlier; the tech revolution is happening," he said.

Beyond opportunities to improve outcomes for individual patients, researchers are quickly seizing on AI to help solve population-level health challenges. But for AI to effectively tackle large-scale problems, academia and industry will have to dissolve the boundaries between different scientific disciplines, said Dimitris Bertsimas of Massachusetts Institute of Technology, one of the symposium's keynote speakers.


"Real-world problems do not have [clear-cut] labels; global warming is not just physics, or engineering, or mathematics. Medicine is not just biology, chemistry, or computer science," said Bertsimas, the Boeing Professor of Operations Research and associate dean of business analytics at MIT. "Multimodal data will increasingly be used across science, engineering, and medicine, and [AI] will become the predominant methodology for predictions and decision-making across all fields."

At Harvard, cross-disciplinary teams are leveraging machine learning to identify patients whose social determinants of health put them more directly in the path of climate-change-related impacts and a bevy of other risks to oral health.

"Are exposures to wildfires impacting oral health? If they become more frequent, who's most vulnerable and how do we act on this information?" asked Francesca Dominici, director of the Harvard Data Science Initiative at the T.H. Chan School of Public Health.

She and a team of researchers are using AI to analyze satellite data, atmospheric chemistry models, and other factors, revealing which communities are most affected by increasingly frequent wildfires, extreme heat waves, and destructive storms. Reduced air quality from fires and higher temperatures from a warming climate can cause mouths to be drier, making people more prone to oral disease and tooth decay. Increased psychological stress from extreme weather events can increase risk for teeth grinding and temporomandibular joint (TMJ) disorders.

What's more, natural disasters can disrupt access to dental facilities and care, Dominici added.

Biomedical researchers are also deploying AI to speed up and optimize experiments, therapeutic discovery, and preclinical validation. "[AI is] generating, acquiring, harmonizing, and refining data, and it can generate hypotheses, as well as simulate experiments and downstream outcomes," said Marinka Zitnik, an assistant professor of biomedical informatics at Harvard Medical School.

"It will revolutionize the way therapies are matched individually to patients," she said, and help design entirely new drugs and therapeutics. A survey of 1,600 biomedical researchers revealed that 25 percent of them feel AI will be essential to their studies within the decade, Zitnik added.

She specializes in building knowledge graph AI models, which help contextualize and capture relationships within diverse sets of biomedical data. Her team has developed a knowledge graph AI model called TxGNN that describes 17,000 diseases using all available clinical and biomedical data. Once trained, it will be able to predict how effectively any given therapeutic might treat a patient's unique disease, and even be able to recommend new uses for FDA-approved medications.

Participants used VR headsets to view a virtual dental practice in the workshop "The Future is Now: Innovate with Advanced AI."

Photo by Steve Gilbert

On the industry side of dental care, two primary types of AI-assistive technologies are already making waves in dentists offices: platforms for patients, providers, and payers that focus on using AI to interpret and analyze imaging, and AI software that automates patient engagement, scheduling, and other time-consuming back office tasks for dental practices.

"AI [products] on the market today are not decision-makers, they are helpers," said Philippe Salah, co-founder and CEO of DentalMonitoring.

Dental AI tools include products that allow dentists to remotely analyze oral photos that patients submit via smartphone. Imaging tools use AI to guide patients as they capture images of their teeth, and then can detect signs of declining oral health to flag for the dental care team. AI-guided 3D simulations of patients' mouths help orthodontists accelerate the fittings and transitions between braces, aligners, and retainers. Some tools even enable patients to see AI-powered, lifelike simulations of how their teeth, mouths, and faces will look after dental work or braces.

"One of the biggest frustrations in my practice was knowing what my patients needed, but not having the tools or ability to communicate that in a way [that inspired my patients to] prioritize their dental care," said Edward Zuckerberg, who owns a private practice and is the chief dental officer at both Keystone Bio and Viome.

Although the upsides of AI are undoubtedly exciting, experts at the intersection of AI and dentistry agree that with progress must also come prudence.

"I'm excited about the abundance of AI solutions [we're talking about] here," said Mariya Filipova, chief technology officer at CareQuest Innovation Partners. But, she cautioned, AI is not good at empathy, creativity, or imagination, all uniquely human factors that are critical to solving problems ethically. New technologies are often put into play before guidelines and policy have caught up.

While more and more AI dental platforms are being cleared by the FDA for commercial access, that process doesn't regulate fairness and bias, said Hawazin Elani, assistant professor of oral health policy and epidemiology at the Dental School. "We have to own where our data [that's training AI systems] is coming from."

Fernanda Viégas, a keynote speaker at the symposium, warned that healthcare providers should not have to put blind faith in AI decision-making.

Viégas, a principal scientist at Google and the Gordon McKay Professor of Computer Science at Harvard's John A. Paulson School of Engineering and Applied Sciences, and her collaborators at Google have found that clinicians are more likely to adopt AI tools that don't spit out automated results without sharing details about their analytic framework or the baseline data on which the system was trained.

In high-stakes situations like making the correct diagnosis for a patient, Viégas said, "[we found] being able to engage with [AI] systems at meaningful levels mattered a lot to healthcare providers. It touches upon the need for trust, confidence, transparency, and these are all really hard things to accomplish [in an AI system]. As we start to deploy new [technologies], we will find these gaps [in user experience] that we need to design for."

Read more here:

AI may be just what the dentist ordered - Harvard Gazette


How one national lab is getting its supercomputers ready for the AI age – FedScoop

OAK RIDGE, Tenn. - At Oak Ridge National Laboratory, the government-funded science research facility nestled between Tennessee's Great Smoky Mountains and Cumberland Plateau that is perhaps best known for its role in the Manhattan Project, two supercomputers are currently rattling away, speedily making calculations meant to help tackle some of the biggest problems facing humanity.

You wouldn't be able to tell from looking at them. A supercomputer called Summit mostly comprises hundreds of black cabinets filled with cords, flashing lights and powerful graphics processing units, or GPUs. The sound of tens of thousands of spinning disks on the computer's file systems, and the air cooling technology for ancillary equipment, make the device sound somewhat like a wind turbine, and, at least to the naked eye, the contraption doesn't look much different from any other corporate data center. Its next-door neighbor, Frontier, is set up in a similar manner across the hall, though it's a little quieter and the cabinets have a different design.

Yet inside those arrays of cabinets are powerful specialty chips and components capable of, collectively, training some of the largest AI models known. Frontier is currently the world's fastest supercomputer, and Summit is the world's seventh-fastest, according to rankings published earlier this month. Now, as the Biden administration boosts its focus on artificial intelligence and touts a new executive order for the technology, there's growing interest in using these supercomputers to their full AI potential.

"The more computation you use, the better you do," said Neil Thompson, a professor and the director of the FutureTech project at MIT's Initiative on the Digital Economy. "There's this incredibly predictive relationship between the amount of computing you use in an AI system and how well you can do."

At the department level, the new executive order charges the Department of Energy with creating an office to coordinate AI development across the agency and its 17 national laboratories, including Oak Ridge. Critically, the order also calls on the DOE to use its computing and AI resources for foundation models that could support climate risk preparedness, national security and grid resilience, among other applications, which means increased focus on systems like Frontier and Summit.

"The executive order provided us clear direction to, first of all, leverage our capabilities to make sure that we are making advances in AI, but we're doing it in a trustworthy and secure way," said Ceren Susut, the DOE's associate director of science for Advanced Scientific Computing Research. "That includes our expertise accumulated in the DOE national labs, and the workforce, of course, but also the compute capabilities that we have."

The government's AI specs

Supercomputers like Summit and Frontier can be measured in performance. Often, they're measured in exaflops, defined as their ability to calculate a billion billion (no, this isn't a typo) floating point operations per second. Frontier sits at 1.194 exaflops, while Summit's is a little less impressive, at 148.60 petaflops. But they can also be measured in their number of GPUs: Summit has slightly more than 28,000, while Frontier has nearly 10,000 more.
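
For scale, a quick back-of-the-envelope conversion puts both machines in the same units, using only the figures quoted above: 1 exaflop = 10^18 floating point operations per second = 1,000 petaflops, so Frontier's 1.194 exaflops is about 1,194 petaflops, and 1,194 / 148.6 ≈ 8. In other words, Frontier's measured performance is roughly eight times Summit's.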

These chips are particularly helpful, experts explain, for the types of matrix algebra calculations one might need for training AI models. Notably, the DOE is nearing the completion of its Exascale Computing Project, an initiative across the national labs to rewrite software to be GPU, or AI, enabled. "Many of these applications are integrating AI techniques as one way in which they take advantage of GPUs," Susut told FedScoop in an email.

In the same vein, one of the biggest requirements for building advanced AI systems, including AI tools developed with the help of the government, has become "compute," or computational resources. For the same reason, the technical needs of the most powerful supercomputers and the most rigorous AI models can often line up. That's where systems like Frontier and Summit come in.

"I've read so many papers recently about how AI and [machine learning] need high bandwidth, low latency, high-performance networks around high memory nodes that have really fast processors on them," said Bronson Messer, the director of science at the Oak Ridge Leadership Computing Facility, which houses the two supercomputers. "I'm like, wow, that's exactly what I've always wanted for 20 years."

MIT's Thompson noted that in the field of computer vision, about 70 percent of the improvements in these systems can be attributed to increased computing power.

There are already efforts to train AI models, including large language models, at the lab. So far, researchers at Oak Ridge have used the lab's computing resources to develop a machine learning algorithm designed to create simulations to boost greener flight technology; an algorithm to study potential links between different medical problems based on scans of millions of scientific publications; and datasets reflecting how molecules might be impacted by light, information that could eventually be used for medical imaging and solar cell applications.

There's also a collaboration with the National Cancer Institute focused on building a better way of tracking cancer across the country, based on a large dataset, sometimes called a corpus, of medical documents.

"We end up with something on the order of 20 to 30 billion tokens, or words, within the corpus," said John Gounley, a computational scientist at Oak Ridge working on that project. "That's something where you can start legitimately training a large language model on a dataset that's that large. So that's where the supercomputer really comes in."
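
For readers unfamiliar with how a corpus gets sized in tokens, here is a minimal sketch of the kind of counting involved. It is illustrative only; the Oak Ridge team's actual pipeline is not described in the article, and the tokenizer choice and file layout below are assumptions.

```python
import tiktoken  # open-source BPE tokenizer library

def corpus_token_count(paths, encoding_name="cl100k_base"):
    """Roughly size a text corpus in tokens by tokenizing each file.

    `paths` is an iterable of plain-text file paths (an assumption about how
    the documents are stored); the encoding choice is also illustrative.
    """
    enc = tiktoken.get_encoding(encoding_name)
    total = 0
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as f:
            total += len(enc.encode(f.read()))
    return total

# Example: size a handful of (hypothetical) report files in tokens.
# print(corpus_token_count(["report_001.txt", "report_002.txt"]))
```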

More AI initiatives at the facility will soon go online. The DOE's support for the Summit supercomputer has been extended, in part, to propel the National Artificial Intelligence Research Resource, which aims to improve government support for AI research infrastructure. Starting next year, several projects focused on building foundation models are set to start on Frontier, including an initiative that plans to use a foundation model focused on energy storage and a large language model built with a Veterans Affairs data warehouse.

How DOE pivots for the AI era

As part of the executive order, the Department of Energy is charged with building tools to mitigate AI risks, training new AI researchers, investigating biological and environmental hazards that could be caused by AI, and developing AI safety and security guidelines. But the agency doesn't need to be pushed to invest in the technology.

This past summer, the DOE disclosed around 180 public AI use cases as part of a required inventory. Energy is also working on preliminary generative AI programs, including new IT guidance and a specified Discovery Zone, a sandbox for trying out the technology. Earlier this year, the Senate hosted a hearing focused specifically on the DOE's work with the technology, and the agency's Office of Science has requested more resources to support its work on AI, too.

But as the agency looks to deploy supercomputers for AI, there are challenges to consider. For one, the increased attention toward the technology marks a significant pivot for the supercomputing field, according to Paola Buitrago, the director of artificial intelligence and data at the Pittsburgh Supercomputing Center. Traditionally, research on supercomputers has focused on topics like genomics and computational astrophysics, research that has different requirements than artificial intelligence, she explained. Those limitations aren't just technical, but apply to talent and workforce as well.

"Most of the power of the impressive supercomputers could not always be leveraged completely or efficiently to service the AI computing needs," Buitrago said in an email. "There is a mindset in the supercomputing field that doesn't completely align with what is needed to advance AI."

And the government only has so many resources. While there are several supercomputers distributed across some of the national labs, Oak Ridge itself can only support so much research at a time. Lawrence Berkeley National Laboratory's supercomputer might handle several hundred projects in a year, but Messer said Frontier and Summit host a smaller number of projects than other labs because the projects tend to run significantly longer.

There's also more demand for supercomputing facilities than supply. Only a fraction of projects proposed to Oak Ridge are accepted. Meanwhile, while training foundation models is incredibly computationally demanding and only the largest supercomputers support developing them, building these systems is just one of several priorities that the agency must consider.

"DOE is actively considering these ideas and must also balance the use of our supercomputers across a range of high-priority mission applications," said Susut, the DOE supercomputer expert. "Our supercomputers are open to the research community through merit-based competitive allocation programs, and we have a wide diversity of users."

Even as the Department of Energy plans potential successors to Frontier, MIT's Thompson noted that there are still other hurdles ahead.

For one, there's a tradeoff between the flexibility of these computers and efficiency, especially as the agency seeks even greater performance. Supercomputers, of course, are extremely expensive systems, and costs aren't dropping as fast as they used to. And they take time to build. At Oak Ridge, plans for a new computer, which will have AI as a key area of focus, are already in the works. But the device isn't expected to go online until 2027.

"The reality is that the U.S. private sector has led research in AI starting in the past few years, as they have the data, the computing capacity and the talent," Buitrago said. Whether or not that continues to be the case depends on how much the government prioritizes AI and its needs. To an extent, some may say the government is slowly catching up.

Follow this link:

How one national lab is getting its supercomputers ready for the AI age - FedScoop


Amazon will offer human benchmarking teams to test AI models – The Verge

Amazon wants users to evaluate AI models better and encourage more humans to be involved in the process.

During the AWS re:Invent conference, AWS vice president of database, analytics, and machine learning Swami Sivasubramanian announced Model Evaluation on Bedrock, now available in preview, for models found in its repository, Amazon Bedrock. Without a way to transparently test models, developers may end up using ones that are not accurate enough for a question-and-answer project, or ones that are too large for their use case.

"Model selection and evaluation is not just done at the beginning, but is something that's repeated periodically," Sivasubramanian said. "We think having a human in the loop is important, so we are offering a way to manage human evaluation workflows and metrics of model performance easily."

Sivasubramanian told The Verge in a separate interview that often some developers don't know if they should use a larger model for the project because they assume a more powerful one will handle their needs. They later find out they could've built on a smaller one.

Model Evaluation has two components: automated evaluation and human evaluation. In the automated version, developers can go into their Bedrock console and choose a model to test. They can then assess the model's performance on metrics like robustness, accuracy, or toxicity for tasks like summarization, text classification, question answering, and text generation. Bedrock includes popular third-party AI models like Meta's Llama 2, Anthropic's Claude 2, and Stability AI's Stable Diffusion.

While AWS provides test datasets, customers can bring their own data into the benchmarking platform so they're better informed of how the models behave. The system then generates a report.
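
As a rough mental model of what an automated evaluation pass does, here is a generic, illustrative sketch: run every prompt in a dataset through the model under test, score each output with one or more metric functions, and average the scores into a report. This is not Amazon's Bedrock API; the dataset format, metric functions and model wrapper below are assumptions made for the example.

```python
from typing import Callable, Dict, List

def evaluate_model(
    generate: Callable[[str], str],           # wraps a call to the model under test
    dataset: List[Dict[str, str]],            # e.g. [{"prompt": ..., "reference": ...}, ...]
    metrics: Dict[str, Callable[[str, str], float]],  # metric name -> scorer(output, reference)
) -> Dict[str, float]:
    """Run every prompt through the model and average each metric into a report."""
    totals = {name: 0.0 for name in metrics}
    for example in dataset:
        output = generate(example["prompt"])
        for name, scorer in metrics.items():
            totals[name] += scorer(output, example["reference"])
    return {name: total / len(dataset) for name, total in totals.items()}

# Toy usage: an exact-match "accuracy" scorer on a two-example dataset.
if __name__ == "__main__":
    dataset = [
        {"prompt": "Capital of France?", "reference": "Paris"},
        {"prompt": "2 + 2 = ?", "reference": "4"},
    ]
    fake_model = lambda prompt: "Paris" if "France" in prompt else "4"
    exact_match = lambda out, ref: 1.0 if out.strip() == ref.strip() else 0.0
    print(evaluate_model(fake_model, dataset, {"accuracy": exact_match}))
```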

If humans are involved, users can choose to work with an AWS human evaluation team or their own. Customers must specify the task type (summarization or text generation, for example), the evaluation metrics, and the dataset they want to use. AWS will provide customized pricing and timelines for those who work with its assessment team.

AWS vice president for generative AI Vasi Philomin told The Verge in an interview that getting a better understanding of how the models perform guides development better. It also allows companies to see if models don't meet some responsible AI standards, like too-low or too-high toxicity sensitivities, before building with the model.

"It's important that models work for our customers, to know which model best suits them, and we're giving them a way to better evaluate that," Philomin said.

Sivasubramanian also said that when humans evaluate AI models, they can detect other metrics that the automated system can't, things like empathy or friendliness.

AWS will not require all customers to benchmark models, said Philomin, as some developers may have worked with some of the foundation models on Bedrock before or have an idea of what the models can do for them. Companies that are still exploring which models to use could benefit from going through the benchmarking process.

AWS said that while the benchmarking service is in preview, it will only charge for the model inference used during the evaluation.

While there is no particular standard for benchmarking AI models, there are specific metrics that some industries generally accept. Philomin said the goal for benchmarking on Bedrock is not to evaluate models broadly but to offer companies a way to measure the impact of a model on their projects.

See original here:

Amazon will offer human benchmarking teams to test AI models - The Verge


AI image generator Stable Diffusion perpetuates racial and … – University of Washington


November 29, 2023

University of Washington researchers found that when prompted to create pictures of "a person," the AI image generator over-represented light-skinned men, sexualized images of certain women of color and failed to equitably represent Indigenous peoples. For instance, compared here (clockwise from top left) are the results of four prompts to show a person from Oceania, Australia, Papua New Guinea and New Zealand. Papua New Guinea, where the population remains mostly Indigenous, is the second most populous country in Oceania. University of Washington/Stable Diffusion (AI-generated image)

What does a person look like? If you use the popular artificial intelligence image generator Stable Diffusion to conjure answers, too frequently you'll see images of light-skinned men.

Stable Diffusion's perpetuation of this harmful stereotype is among the findings of a new University of Washington study. Researchers also found that, when prompted to create images of "a person from Oceania," for instance, Stable Diffusion failed to equitably represent Indigenous peoples. Finally, the generator tended to sexualize images of women from certain Latin American countries (Colombia, Venezuela, Peru) as well as those from Mexico, India and Egypt.

The researchers will present their findings Dec. 6-10 at the 2023 Conference on Empirical Methods in Natural Language Processing in Singapore.

"It's important to recognize that systems like Stable Diffusion produce results that can cause harm," said Sourojit Ghosh, a UW doctoral student in the human centered design and engineering department. "There is a near-complete erasure of nonbinary and Indigenous identities. For instance, an Indigenous person looking at Stable Diffusion's representation of people from Australia is not going to see their identity represented; that can be harmful and perpetuate stereotypes of the settler-colonial white people being more Australian than Indigenous, darker-skinned people, whose land it originally was and continues to remain."

To study how Stable Diffusion portrays people, researchers asked the text-to-image generator to create 50 images of "a front-facing photo of a person." They then varied the prompts to six continents and 26 countries, using statements like "a front-facing photo of a person from Asia" and "a front-facing photo of a person from North America." They did the same with gender. For example, they compared "person" to "man" and "person from India" to "person of nonbinary gender from India."

The team took the generated images and analyzed them computationally, assigning each a score: A number closer to 0 suggests less similarity, while a number closer to 1 suggests more. The researchers then confirmed the computational results manually. They found that images of "a person" corresponded most with men (0.64) and people from Europe (0.71) and North America (0.68), while corresponding least with nonbinary people (0.41) and people from Africa (0.41) and Asia (0.43).
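
To make the scoring concrete, here is a minimal sketch of how a similarity score between one group of generated images and another can be computed from image embeddings. It assumes embeddings have already been produced by some vision encoder (for example a CLIP-style model); the study's exact pipeline is not spelled out in the release, so treat this purely as an illustration.

```python
import numpy as np

def mean_pairwise_similarity(embeddings_a: np.ndarray, embeddings_b: np.ndarray) -> float:
    """Average cosine similarity between every image in set A and every image in set B.

    Each input is an (n_images, embedding_dim) array from a vision encoder.
    Cosine similarity of normalized vectors lies in [-1, 1]; for embeddings of
    related photos it typically lands in roughly the 0-to-1 range reported above.
    """
    a = embeddings_a / np.linalg.norm(embeddings_a, axis=1, keepdims=True)
    b = embeddings_b / np.linalg.norm(embeddings_b, axis=1, keepdims=True)
    return float((a @ b.T).mean())

# Toy usage with random stand-ins for real embeddings:
rng = np.random.default_rng(0)
person_images = rng.normal(size=(50, 512))   # images generated for "a person"
group_images = rng.normal(size=(50, 512))    # images generated for "a man", etc.
print(round(mean_pairwise_similarity(person_images, group_images), 2))
```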

Likewise, images of "a person from Oceania" corresponded most closely with people from majority-white countries Australia (0.77) and New Zealand (0.74), and least with people from Papua New Guinea (0.31), the second most populous country in the region, where the population remains predominantly Indigenous.

A third finding announced itself as researchers were working on the study: Stable Diffusion was sexualizing certain women of color, especially Latin American women. So the team compared images using an NSFW (Not Safe for Work) Detector, a machine-learning model that can identify sexualized images, labeling them on a scale from "sexy" to "neutral." (The detector has a history of being less sensitive to NSFW images than humans.) A woman from Venezuela had a "sexy" score of 0.77, while a woman from Japan ranked 0.13 and a woman from the United Kingdom 0.16.

"We weren't looking for this, but it sort of hit us in the face," Ghosh said. "Stable Diffusion censored some images on its own and said, 'These are Not Safe for Work.' But even some that it did show us were Not Safe for Work, compared to images of women in other countries in Asia or the U.S. and Canada."

While the team's work points to clear representational problems, the ways to fix them are less clear.

"We need to better understand the impact of social practices in creating and perpetuating such results," Ghosh said. "To say that better data can solve these issues misses a lot of nuance. A lot of why Stable Diffusion continually associates 'person' with 'man' comes from the societal interchangeability of those terms over generations."

The team chose to study Stable Diffusion, in part, because it's open source and makes its training data available (unlike prominent competitor DALL-E, from ChatGPT-maker OpenAI). Yet both the reams of training data fed to the models and the people training the models themselves introduce complex networks of biases that are difficult to disentangle at scale.

"We have a significant theoretical and practical problem here," said Aylin Caliskan, a UW assistant professor in the Information School. "Machine learning models are data hungry. When it comes to underrepresented and historically disadvantaged groups, we do not have as much data, so the algorithms cannot learn accurate representations. Moreover, whatever data we tend to have about these groups is stereotypical. So we end up with these systems that not only reflect but amplify the problems in society."

To that end, the researchers decided to include in the published paper only blurred copies of images that sexualized women of color.

"When these images are disseminated on the internet, without blurring or marking that they are synthetic images, they end up in the training data sets of future AI models," Caliskan said. "It contributes to this entire problematic cycle. AI presents many opportunities, but it is moving so fast that we are not able to fix the problems in time, and they keep growing rapidly and exponentially."

This research was funded by a National Institute of Standards and Technology award.

For more information, contact Ghosh at ghosh100@uw.edu and Caliskan at aylin@uw.edu.

Here is the original post:

AI image generator Stable Diffusion perpetuates racial and ... - University of Washington


Synopsys forecasts robust first-quarter revenue as AI boosts chip … – Reuters

Nov 29 (Reuters) - Synopsys (SNPS.O) on Wednesday forecast first-quarter revenue above Wall Street expectations, as artificial intelligence (AI) adoption boosted demand for the company's chip designing software.

Shares of the Sunnyvale, California-based company rose 1.9% in trading after the bell.

Synopsys forecast current-quarter revenue between $1.63 billion and $1.66 billion, above analysts' average estimate of $1.60 billion.

The growing need for faster and more efficient chips that are AI compatible has increased the market prominence for electronic design automation (EDA) firms like Synopsys and Cadence Design System (CDNS.O), which provide software and intellectual property for designing chips.

The rise in custom chip design efforts by firms like Microsoft (MSFT.O) and Alphabet (GOOGL.O) has also boosted demand.

"You need a very specific silicon in order to power that (AI) training," Sassine Ghazi, the company's president and chief operating officer, said of the boom in AI chips in an interview.

"Those are massive, complex chips. From our side, that's an awesome opportunity," he added. Ghazi is due to take over as chief executive from founder Aart de Geus next year.

Synopsys' software tools are instrumental in assisting chip makers like Intel (INTC.O) in the semiconductor designing process, and automakers like Mercedes-Benz (MBGn.DE) in the development of electronic control units for engines.

Synopsys, the largest maker of software used in chip design, said earlier this month that it has worked with Microsoft (MSFT.O) to create its own chip-design assistant, which would help fix bugs and errors at an early stage.

Ghazi told Reuters that increased use of Synopsys AI services helped boost the value of some of its contracts by 20% at renewal time.

The company projected full-year 2024 revenue between $6.57 billion and $6.63 billion.

It forecast adjusted earnings per share between $3.40 and $3.45 for the first quarter. Analysts had expected $3.05 per share, according to LSEG data.

Revenue in the fourth quarter ended Oct. 31 rose to $1.60 billion, beating analyst expectations of $1.59 billion. It also beat profit estimates for the reported quarter.

Reporting by Harshita Mary Varghese; Editing by Shailesh Kuber and Deepa Babington

Our Standards: The Thomson Reuters Trust Principles.

Read more:

Synopsys forecasts robust first-quarter revenue as AI boosts chip ... - Reuters


Amplifying the efforts of nonprofit organizations with AI – World – ReliefWeb

Excitement and cautious optimism about the possibilities, but we must ensure that we harness AI equitably, ethically, and safely.

By Daniela Weber, Deputy Director of the NetHope Center for the Digital Nonprofit

AI has been a topic of NetHope Global Summit sessions since 2018, when a few of us (including myself, a Member CIO at the time) got into a room to discuss the ethics of AI, and decided to start an AI workstream in NetHope's Emerging Technologies working group, as a vehicle to discuss how we might use AI in our sector, and how we could address those challenges. A number of NetHope Members have, since then, started using AI, piloting and prototyping first, and some of these solutions are now scaling up and achieving real benefits.

At this year's NetHope Global Summit, 12 dedicated AI sessions, with even more sessions that incorporated AI into the discussions, were included in the agenda and were standing room only. Clearly, with the arrival of generative AI in the mainstream, the discourse about the usefulness of AI, as well as its risks, has reached a new level of interest and has also been elevated to the leadership teams and boards of nonprofit organizations. The NetHope Membership AI workstream is continuing the conversations on the various aspects of AI and working together to identify opportunities for collective action. As an example, work on the Humanitarian AI Code of Conduct, which was begun at the NetHope Global Summit, is continuing.

I saw a very similar picture at this year's Technology Association of Grantmakers (TAG) conference in Nashville: keynotes, presentations, and many discussions on AI. I am very excited about the AI Adoption Framework for Philanthropy, which we test-drove as a draft version during the conference. Version 1.0 will be launched on December 6th.

My takeaways from all of it:

There is immense interest in AI and what it can bring to the sector. From anticipatory action triggered by extreme weather forecasts or people-movement predictions, to detecting sight-threatening conditions and providing care recommendations, to supporting conservation and crisis response efforts, there are numerous ways in which AI can enable nonprofit organizations to deliver their mission and increase their impact in a complex, poly-crisis world. And particularly with generative AI, we are starting to see how AI can bring more efficiency to areas such as fundraising, IT, finance, and other secondary processes. This is badly needed at a time when nonprofit organizations are forced to do more with less given the resource constraints and lack of critical funding. We do need more solutions that are truly relevant and useful for the sector, and the capabilities to implement and use them at scale.

There is also immense concern about the risk areas of AI, such as bias, data privacy, fake information, fraud, cybersecurity, equitable access to the use of AI solutions, as well as the longer-term societal impact. Many of our discussions during Summit were about how we can manage these risks and ensure we use AI responsibly. At minimum, nonprofit organizations need to have the governance in place to put guardrails around the use of AI tools, and solid cross-functional mechanisms to assess risk and decide when to move forward, and when to pause or stop. Toolkits such as NetHope's Artificial Intelligence (AI) Ethics for Nonprofits, and the soon-to-be-released Gender Equitable AI toolkit, will help nonprofits ensure their implementation and use of AI is ethical and equitable. Ideally though, the ecosystem of nonprofits, implementation partners, and funders will align behind a common framework on AI usage.

There are very practical roadblocks for the use of AI, a lot of them linked to a lack of capacity and capability in many nonprofits, and the multitude of solutions and approaches to choose from, with roadmaps and pricing mechanisms that are not clearly defined yet, making the selection of the right solution a challenge. We need to engage with our technology partners to get that clarity so organizations can plan their next moves. The recent events and staff transitions at companies in the space have raised concerns not only about the differing viewpoints with regard to AI safety, but also the viability and stability of key vendors in the generative AI space that many organizations already work with or are exploring. Vendor risk management remains a critical component of IT and procurement governance, as it should be for all technology solutions.

The old truth remains: you need to have good data as the basis of your AI solution if you want it to produce useful output. A good data governance structure is essential to achieve this. NetHope's Data Governance Toolkit was created to support nonprofits with setting up this structure. There is also a lively discussion in our AI Working Group about the potential of sharing datasets between organizations, and maybe we can go one step further and use all that great data we are collecting between us to train a shared large language model.

I, for one, am excited and cautiously optimistic about the possibilities, but we must ensure that we harness AI equitably, ethically, and safely.

NetHope's New Strategic Plan: Digital Transformation & Innovation and Generative AI Program Highlights

NetHope has launched its 2024-2030 strategy. A generation ago, the Digital Divide described the gap in access first to computers, and then to the internet and essential technologies. But today, new digital challenges threaten international NGO mission advancement as organizations are forced to grapple with the widening gap between available resources and growing needs. These new challenges, what NetHope calls the new Digital Divides, represent critical gaps in infrastructure, systems, capabilities, and knowledge. Over the next decade, NetHope will lead collective action to bridge the New Digital Divides: skills and leadership, inclusion, protection, transformation and innovation, and climate adaptation and resilience, leveraging our powerful, proven model for advancing global good.

Bridging the New Digital Divides Spotlight: Digital Transformation and Innovation

The innovation required to meet emerging and existing challenges to the nonprofit sector and the communities we serve only happens when it is made a priority. NetHope works within our network, data, and history of catalyzing collective action to see that complex problems are met with insights, policies, and advice that are context-appropriate, efficient and scalable. NetHope will use our influence, networks, data, and history of catalyzing new and high-potential ideas to ensure complex problems are met with insights, policies, guidance, and technology solutions.

Program Spotlight: Augmenting Generative AI Skills and Practices for Nonprofits

NetHope's work in AI dates back to 2018, beginning as much of our work begins: with a gathering of Members and a collective desire for action. Now, our program focuses on generative AI skills and practice building that addresses the unique needs of nonprofits, their staff, and implementation partners. Training, as well as market scanning for sector-appropriate solutions, and the exchange of knowledge and experiences are important elements of this program. NetHope has also published several resources on AI. Check out NetHope's recent Strategic Conversation: AI Ethics Today along with resources such as NetHope's Guide to Usefulness of Existing AI Solutions in Nonprofit Organizations, The AI Suitability Toolkit for Nonprofits, and the Artificial Intelligence Ethics for Nonprofits Toolkit.

Read the original here:

Amplifying the efforts of nonprofit organizations with AI - World - ReliefWeb


Raising AI literacy could help schools like ISU navigate a disruptive … – WGLT

Roy Magnuson spends a lot of his day staring into the void.

Magnuson is an Illinois State University professor who has a new assignment. He was recently appointed Provost Fellow, tasked with studying disruptive technologies in higher education like generative artificial intelligence (AI). Discussions about AI at ISU are well underway, Magnuson said. AI guidance has already started popping up on syllabi. Formal policy is likely coming.

"There's so much that's unknown," Magnuson said.

Magnuson is a composer, not a machine-learning scholar. He's done a lot of virtual-reality work in music, which is kinda-sorta adjacent to AI.

"My whole background is in music, and the fact that I'm having this conversation with you is both hilarious and also a product of the technology that people are able to go and learn," Magnuson said on WGLT's Sound Ideas. "Me as a person being here is a threat. That people can just go learn. And if information becomes that accessible, valid information with the ability to reason through it, and we know you can trust it, it's vetted, then yes, that's an existential threat to a university if it's structured the way it is. But I think that's also something we can talk about: What is a university? What is the in-person experience?"

Magnuson and others will steer ISU through these uncharted waters. On the surface, AI presents a dangerously easy new option for cheating. But the upsides are easy to see too.

"These are the things they don't make movies about," Magnuson said. "It's super powerful. In a pedagogical sense, lots of people have talked about creating infinitely patient tutors to transfer information to students, that can speak 40 different languages. They are multimodal, so you can literally talk to them or type or put in pictures. That's profoundly powerful for accessibility for students who maybe can't type or cannot see and they want to have things read to them."

If education is only thought of as transferring knowledge from one person to another, then Magnuson said yes, AI is alarming. It can write a 5-paragraph essay in a jiffy.

But that's not the goal of any ISU class fundamentally, and that's not the beauty of a liberal arts education. "We're trying to help them grow into great citizens, parents and spouses, and teach them to be curious critical thinkers, to be ethical, and to be just," Magnuson said.

AI doesn't fundamentally change that goal, he said. But it could speed things up.

"The in-person experience then, speaking from my perspective as a musician, could be much more of what we love to do, which is looking at music and analyzing it and talking through application to their individual careers, or taking music educators to work with students, or therapists working with clients, or composers doing readings in class on what they're working on. We spend so much time getting to that point," Magnuson said. "Because we take so long, because there's so much stuff to go through, to get to the point where they can use it. The dream is we can get there much more quickly. You get to synthesis and the application-learning and experiential-learning much, much earlier in their time."

A first step, Magnuson said, is to raise AI literacy levels for both students and faculty. Some of that work is already underway. ISU's Center for Integrated Professional Development, for example, published a guide to help faculty navigate these early days of AI.

"We don't have our heads in the sand," Magnuson said.


Read the rest here:

Raising AI literacy could help schools like ISU navigate a disruptive ... - WGLT


Breakingviews – AI investments carry whiff of vicious circles past – Reuters

NEW YORK, Nov 29 (Reuters Breakingviews) - Artificial intelligence is spawning some genuinely concerning financial decisions. As technology titans jockey to back hot new startups, they are extracting explicit or implicit promises of revenue in return. The chaotic week of upheaval at OpenAI, the Microsoft-backed (MSFT.O) owner of ChatGPT, provides a window into how these sorts of supposedly virtuous circles have a way of turning vicious.

Mania over programs that emulate, or even surpass, human abilities is in full throttle. After OpenAI within days fired and rehired boss Sam Altman, following some intervention from Microsoft Chief Executive Satya Nadella, the startup looks to be pressing ahead with a planned share sale that could value it at more than $80 billion, according to the Financial Times. A deal would signal that the hype is strong enough to overcome the significant risks just laid bare, partly because of the corporate buttresses. Before the boardroom coup, OpenAI paused subscriptions for its paid ChatGPT Plus service because it couldn't keep up with the surge of new users. This level of exuberance evokes the dot-com era in more ways than one.

Microsoft's injection of $10 billion into OpenAI in January helped kick off the craze. As part of their deal, the ChatGPT operator agreed to exclusively use its new investor's cloud computing services. In the 12 months before they struck the arrangement, OpenAI had spent less than $1 million with the software goliath, according to tech news outfit The Information, with the figure now on pace to exceed $400 million a year on Microsoft's Azure alone.

So-called generative AI, which uses algorithms to produce new text or imagery from the data on which it is trained, has released animal spirits across Silicon Valley. Microsoft, Apple (AAPL.O) and Alphabet (GOOGL.O) are among those handily outperforming the Nasdaq Composite Index (.IXIC) this year. The excitement makes some sense, but there is notable danger embedded in the way the investments are being structured, just as there was in the 1990s when stretching communications networks across the globe was all the rage. Both turbocharge new ideas in ways that simultaneously manufacture fresh demand for existing products. But regulation or other complications could easily slow progress or create extra hurdles for AI adoption, in turn jeopardizing all the extra revenue Big Tech is anticipating and their ballooning valuations.

OpenAI's recent fundraising could make it worth more than double the $30 billion imputed earlier this year. The uplift has been symbiotic. Microsoft, including debt, is now valued at about $2.8 trillion, or more than 10 times the revenue analysts expect for the next year, according to LSEG. The multiple is 25% higher than a year ago. It's partly attributable to the prospect of upselling Microsoft 365 subscriptions with AI features and the 30% growth in its Azure web services division, helped along by OpenAI and its peers.

Other top lines are set to be boosted by AI ventures at sums equivalent to the amount of capital being pumped into them. Amazon (AMZN.O), for example, unveiled plans in September to plow up to $4 billion into Anthropic, which has agreed to spend just as much over the next five years on Amazons cloud computing services, according to the Wall Street Journal. And soon after Amazons deal, Google committed an additional $2 billion to the same startup, which months earlier had signed a contract with Google Cloud worth more than $3 billion.

All this mutual back-scratching is reminiscent of the bubble from a generation ago. Then, new entrants as well as more established companies borrowed more than $1 trillion to install telecommunications cables across ocean floors and beyond. Equipment suppliers including Cisco Systems (CSCO.O), Lucent Technologies and Nortel Networks enjoyed fast top-line growth, and valuations. The problem was that the revenue underpinning the boom wasn't all it was cracked up to be.

The worst form was known as roundtripping. Company A would strike a network capacity deal with Company B, often at the end of a quarter, and then Company B would buy some from Company A at a similar price. Executives from Global Crossing, for example, paid large sums to settle lawsuits related to the practice. Many in the industry were forced to restate their financial results and ultimately went bankrupt.

Legal, but misguided, vendor financing was another problem at the time. Telecom equipment makers provided credit to startups so they could buy the gear they needed. The rationale was that companies like Nortel could put their blue-chip balance sheets to good use. Customers would pay higher interest rates to compensate for the risk, and lock themselves into using a particular supplier's switches and other gizmos. In a business where size matters, competitors that didn't lend put themselves in danger of being left behind.

When the bubble burst, it created a double whammy. Earlier equipment sales went bad, causing defaults and impairing balance sheets, while new orders also dried up, leading to unexpectedly large losses. Network gear manufacturers endured a decline of more than 90% in their collective market capitalization over a decade.

This begs the question of whether history is threatening to repeat itself today with what could be described as vendor equity. Although the latest arrangements are helping puff up valuations on both sides of the transactions, the dangers appear to be more manageable. For companies valued in the trillions, a few billion dollars is a pittance. Much of the invested capital should be returned relatively quickly as AI firms buy back-end services. The industrys mortality rate, like that of so many other fledgling tech ideas, presumably will be high, but if the overall market keeps growing there might even be benefits while riding the wrong horse.

Amazon, for example, seems motivated by the chance to beef up its nascent chipmaking business. The e-commerce giant agreed to sell semiconductors to Anthropic to power its AI-model training efforts. Nvidia (NVDA.O) has at least 80% and as much as 95% of the AI market, according to estimates by analysts. By using Anthropic as a training ground to iterate and improve on its own less-popular alternatives, Amazon might be able to eat into that dominance, a benefit it could reap regardless of Anthropic's fate. If Amazon captured, say, 10% of Nvidia's share, its chips business alone theoretically would be worth more than $100 billion.

The trouble with investing obsessions is that they do more damage as they grow. In telecom, for example, vendor financing accelerated rapidly into the crash. Optimists feared missing out on a rare opportunity, while cynics realized even bad deals would lift stock valuations. It means that as this latest tech craze grows, it pays to keep a close eye on how much tech incumbents and entrepreneurial AI endeavors prop each other up.

Follow @AnitaRamaswamy on X

Follow @rob_cyran on X

Editing by Jeffrey Goldfarb, Sharon Lam, Aditya Sriwatsav and Streisand Neto

Our Standards: The Thomson Reuters Trust Principles.

Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias.

Read more here:

Breakingviews - AI investments carry whiff of vicious circles past - Reuters


Siltronic sees sales boost from AI and electromobility in next five years – Reuters

Nov 30 (Reuters) - Siltronic (WAFGn.DE) expects sales and profitability to grow significantly in the next five years thanks to megatrends such as artificial intelligence, digitalization, and electromobility, the company said in a strategy update on Thursday.

The megatrends "will lead to a strong increase in demand for semiconductors and therefore also for wafers," the company, which provides silicon wafers to the semiconductor industry, said in a statement.

The German silicon wafer maker now expects sales to exceed 2.2 billion euros ($2.41 billion), and its earnings before interest, taxes, depreciation, and amortization (EBITDA) margin to reach the high 30-percent range by 2028.

Compared to this year's sales guidance, the forecast would correspond to an increase of more than 40% in revenue.

However, the company expects sales to be burdened by high inventory levels on the customers' side in the first half of 2024.

($1 = 0.9113 euros)

Reporting by Andrey Sychev, Editing by Linda Pasquini

Our Standards: The Thomson Reuters Trust Principles.

Read the rest here:

Siltronic sees sales boost from AI and electromobility in next five years - Reuters
