
FACT SHEET: Biden-Harris Administration Secures Voluntary … – The White House

Builds on commitments from seven top AI companies secured by the Biden-Harris Administration in July

Commitments are one immediate step and an important bridge to government action; the Biden-Harris Administration is developing an Executive Order on AI to protect Americans' rights and safety

Since taking office, President Biden, Vice President Harris, and the entire Biden-Harris Administration have acted decisively to manage the risks and harness the benefits of artificial intelligence (AI). As the Administration moves urgently on regulatory action, it is working with leading AI companies to take steps now to advance responsible AI. In July, the Biden-Harris Administration secured voluntary commitments from seven leading AI companies to help advance the development of safe, secure, and trustworthy AI.

Today, U.S. Secretary of Commerce Gina Raimondo, White House Chief of Staff Jeff Zients, and senior administration officials are convening additional industry leaders at the White House to announce that the Administration has secured a second round of voluntary commitments from eight companies (Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability) that will help drive safe, secure, and trustworthy development of AI technology.

These commitments represent an important bridge to government action, and are just one part of the Biden-Harris Administration's comprehensive approach to seizing the promise and managing the risks of AI. The Administration is developing an Executive Order and will continue to pursue bipartisan legislation to help America lead the way in responsible AI development.

These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI (safety, security, and trust) and mark a critical step toward developing responsible AI. As the pace of innovation continues to accelerate, the Biden-Harris Administration will continue to take decisive action to keep Americans safe and protect their rights.

Today, these eight leading AI companies commit to:

Ensuring Products are Safe Before Introducing Them to the Public

Building Systems that Put Security First

Earning the Public's Trust

As we advance this agenda at home, the Administration continues to engage on these commitments and on AI policy with allies and partners. In developing these commitments, the Administration consulted with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK. These commitments complement Japan's leadership of the G-7 Hiroshima Process, the United Kingdom's Summit on AI Safety, and India's leadership as Chair of the Global Partnership on AI.

Today's announcement is part of a broader commitment by the Biden-Harris Administration to ensure AI is developed safely and responsibly, to safeguard Americans' rights and safety, and to protect Americans from harm and discrimination.

###

Read the original:

FACT SHEET: Biden-Harris Administration Secures Voluntary ... - The White House

Read More..

Nation’s first dual degree in medicine and AI aims to prepare the … – UTSA

"This unique partnership promises to offer groundbreaking innovation that will lead to new therapies and treatments to improve health and quality of life," said UT System Chancellor James B. Milliken. "We're justifiably proud of the pioneering work being done at UTSA and UT Health San Antonio to educate and equip future medical practitioners on how to best harness the opportunities and address the challenges that AI will present for the field of health care in the years to come."

AI's presence can already be found in a variety of areas of the medical field, including customized patient treatment plans, robotic surgeries and drug dosage. Additionally, UT Health San Antonio and UTSA have several research programs underway to improve health care diagnostics and treatment with the help of AI.

The World Economic Forum predicts that AI could enhance the patient experience by reducing wait times and improving efficiency in hospital health systems, and by aggregating information from multiple sources to predict patient care. AI is also streamlining administrative work such as online scheduling and appointment check-ins, reminder calls for follow-ups and digitized medical records.

"Our goal is to prepare our students for the next generation of health care advances by providing comprehensive training in applied artificial intelligence," said Ronald Rodriguez, M.D., Ph.D., director of the M.D./M.S. in AI program and professor of medical education at the University of Texas Health Science Center at San Antonio. "Through a combined curriculum of medicine and AI, our graduates will be armed with innovative training as they become future leaders in research, education, academia, industry and health care administration. They will be shaping the future of health care for all."

The UTSA M.S. in Artificial Intelligence is a multidisciplinary degree program with three tracks: data analytics, computer science, and intelligent and autonomous systems. The latter is a concentration that trains students with theory and applications. In the AI program, students will have an opportunity to work with emerging technology in the areas of computer science, mathematics, statistics, and electrical and computer engineering. Additionally, they will have the opportunity to conduct research alongside nationally recognized professors in MATRIX: The UTSA AI Consortium for Human Well-being, a research-intensive environment focused on developing forward-looking, sustainable and comprehensive AI solutions that benefit society.

This first-of-its-kind M.D./M.S. program has been several years in the making. Conversations about the innovative program began in 2019 with Ambika Mathur, dean of The UTSA Graduate School, and Robert R. Hromas, M.D., dean of UT Health San Antonio's Long School of Medicine. Together, they worked through the pandemic with their teams to establish a degree pathway and curriculum that would prepare future physicians to lead in the workforce.

UTSA charged Dhireesha Kudithipudi with leading the development of the M.S. in AI curriculum in collaboration with three colleges. Over the course of one year, she closely collaborated with the faculty and chairs from three departments at UTSA and with UT Health San Antonios faculty. This effort resulted in the creation of new courses in AI, which will provide students with a rigorous cross-disciplinary training experience and reduce entry barriers for non-traditional students.

"AI is transforming our world, and UTSA's approach to AI is grounded in transdisciplinary collaboration, underscoring our commitment to generating high-impact solutions to advance human well-being by engaging multiple and diverse audiences," said Mathur. "Through this innovative partnership with UT Health San Antonio, aspiring medical leaders will gain mastery in the emerging technologies that will shape the health care profession for generations to come."

In 2021, a pilot program was introduced to UT Health San Antonio medical students. Two students who applied for and were accepted into the M.D./M.S. program for fall 2023 are projected to graduate in the spring of 2024. For these students, the combined degrees mean multiple possibilities in health care.

"I believe the future of health care will require a physician to navigate the technical and clinical sides of medicine," said Aaron Fanous, a fourth-year medical student. "While in the program, the experience opened my mind to the many possibilities of bridging the two fields. I look forward to using my dual degree so that I can contribute to finding solutions to tomorrow's medical challenges."

Eri Osta is also a fourth-year medical student in the program. "The courses were designed with enough flexibility for us to pick projects from any industry, and medical students were particularly encouraged to undertake projects with direct health care applications," Osta said. "My dual degree will help align a patient's medical needs with technology's potential. I am eager to play a role in shaping a more connected and efficient future for health care."

Medical students who are accepted to the dual degree program will be required to take a leave of absence from their medical education to complete two semesters of AI coursework at UTSA. Students will complete a total of 30 credit hours: nine credit hours in core courses including an internship, 15 credit hours in their degree concentration (Data Analytics, Computer Science, or Intelligent & Autonomous Systems) and six credit hours devoted to a capstone project.

More here:

Nation's first dual degree in medicine and AI aims to prepare the ... - UTSA

Read More..

LastMile AI closes $10M seed round to operationalize AI models – TechCrunch


LastMile AI, a platform designed to help software engineers develop and integrate generative AI models into their apps, has raised $10 million in a seed funding round led by Gradient, Google's AI-focused venture fund.

AME Cloud Ventures, Vercel's Guillermo Rauch, 10x Founders and Exceptional Capital also participated in the round, which LastMile co-founder and CEO Sarmad Qadri says will be put toward building out the startup's products and services and expanding its seven-person team.

"Machine learning, and the broader field of AI, has gone through a few AI winters, oftentimes due to a constraint on computing resources, a constraint on expertise or a constraint on high-quality training data," Qadri told TechCrunch in an email interview. "We plan to democratize generative AI by streamlining the tooling and disparate workflows and simplifying the need for deep technical expertise."

Qadri, along with LastMile's other co-founders Andrew Hoh and Suyog Sonwalkar, were members of Meta's product engineering team prior to launching LastMile. While at Meta, they built tooling, including AI model management, experimentation, benchmarking, comparison and monitoring tools, geared toward machine learning engineers and data scientists.

Qadri says that these tools served as the inspiration for LastMile.

"The recent wave of interest and adoption of AI is being driven by software developers and product teams that are using generative AI as a new part of their toolkit. Yet machine learning developer tooling is still mostly geared towards researchers and core machine learning practitioners," Qadri said. "We want to empower builders by providing a new class of AI developer tools built for software engineers, not machine learning research scientists."

Qadri has a point. Some companies, faced with the immense logistical challenges of adopting AI from scratch, aren't clear on how to leverage all that the tech has to offer.

According to a recent S&P Global survey, around half of IT leaders say that their organizations aren't ready to implement AI and suggest that it may take five years or more to fully build AI into their company's workflows. Meanwhile, about a third say that they're still in the pilot or proof-of-concept stage, outnumbering those who've reached enterprise scale with an AI project.

At the same time, business leaders aren't fatalistic about their opportunities to embrace AI. In a 2022 Gartner survey, 80% of executives said that they think automation can be applied to any business decision. Model management was cited as a top roadblock (40% of organizations had thousands of models to keep tabs on, respondents said), but they indicated that other factors, including AI talent, weren't as big an issue as might be assumed.

LastMile allows customers to create generative AI apps leveraging text- and image-generating models from both open- and closed-source model providers. Developers can personalize these models with their proprietary data, and then incorporate them into their new or existing apps, products and services.

Using LastMile's AI Workbooks module, users can experiment with different models from a single pane of glass. The AI Workflows tool, meanwhile, can chain together different models to build more complex workflows, like an app that transcribes audio to text and then translates that text before applying a synthetic voiceover. And the AI Templates module, the last module in LastMile's AI dev suite, creates reusable development setups that can be shared with team members or the wider LastMile community.
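The chained workflow described above follows a simple pattern: each model's output becomes the next model's input. A minimal sketch, using stand-in stub functions rather than LastMile's actual API:

```python
# Hypothetical sketch of chaining models into a workflow: each stage's output
# feeds the next stage's input. These are stand-in stubs, not LastMile's API.

def transcribe(audio: bytes) -> str:
    """Stub speech-to-text model: pretend any audio decodes to a fixed line."""
    return "hello world"

def translate(text: str, target_lang: str) -> str:
    """Stub translation model: tag the text with its target language."""
    return f"[{target_lang}] {text}"

def synthesize_voice(text: str) -> bytes:
    """Stub text-to-speech model: encode the text as stand-in audio bytes."""
    return text.encode("utf-8")

def run_workflow(audio: bytes, target_lang: str) -> bytes:
    """Chain the three models: audio -> transcript -> translation -> voiceover."""
    transcript = transcribe(audio)
    translated = translate(transcript, target_lang)
    return synthesize_voice(translated)

print(run_workflow(b"raw audio", "fr"))  # b'[fr] hello world'
```

A real workflow would swap each stub for a call to a hosted model, but the chaining structure stays the same.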

"Our goal with LastMile is to provide a single developer platform that encompasses the entire lifecycle of AI app development," Qadri said. "Today, the AI developer journey is fragmented and requires stitching together a number of different tools and providers, and a nuanced understanding of every step, which increases the barrier to entry. We're focused on building a platform that non-machine learning software engineers can use to develop AI-powered apps and workflows, from experimentation and prompt engineering to evaluation, deployment and integration."

Now, LastMile isn't the only company tackling these challenges in the AI tooling, measurement and deployment space.

When asked who he sees as competitors, Qadri mentioned LlamaIndex, a startup offering a framework to assist developers in leveraging the capabilities of LLMs on top of their personal or organizational data. LangChain is another rival in Qadri's eyes: an open-source toolkit that simplifies the creation of apps using large language models along the lines of GPT-4.

But competition or no, Qadri sees a massive opportunity for New York City-based LastMile, which is pre-revenue, to make waves in a nascent but fast-growing space. With the market for AI model operations set to grow to $16.61 billion by 2030, according to one report, he might not be too far off base.

"Enterprises are investigating how to revamp their businesses to incorporate AI in their applications and workflows, but they're encountering last-mile issues that prevent them from getting things into production. For example, how many ChatGPT-based chatbots have you seen incorporated into corporate websites?" Qadri said. "These blockers can be largely solved by better AI developer tools that enable rapid experimentation and evaluation, provide orchestration infrastructure, and deliver monitoring and observability for confidence in production. LastMile AI provides the tooling and platform to assist businesses in confidently incorporating AI in their applications."

Originally posted here:

LastMile AI closes $10M seed round to operationalize AI models - TechCrunch

Read More..

GOP lawmakers sound alarm over AI used to sexually exploit children – Fox News

FIRST ON FOX: A group of 30 House Republicans is demanding to know what the Department of Justice (DOJ) is doing to combat the emergence of AI-generated child pornography on the internet.

"We write to you with grave concern regarding increasing reports of artificial intelligence (AI) being used to generate child sexual abuse materials (CSAM) which are shared across the internet," Rep. Bob Good, R-Va., wrote in a letter to Attorney General Merrick Garland.

"While recognizing the benefits of appropriate uses of AI, including medical research, cybersecurity defense, streamlining public transit, and may other applications, we believe action must be taken to prevent individuals from using AI to generate CSAM."


Rep. Bob Good, R-Va., leads a letter to the DOJ asking what it is doing to combat AI-generated sexually exploitative images of children.

They're asking Garland whether his department has "the necessary authority" to crack down on the growing issue and whether "gaps in the current criminal code" make it harder for law enforcement officials to pursue those who create and possess AI-generated CSAM. The lawmakers are also asking the DOJ to launch an internal inquiry into the troubling material.

"The first reports of AI being used to exploit children for the purpose of generating CSAM surfaced in 2019, when it was revealed that AI could generate obscene, personalized images of minors under the age of 18," they said.


Attorney General Merrick Garland speaks at a press conference in June. (Chip Somodevilla/Getty Images)

The lawmakers cited an October 2020 report by the MIT Technology Review that warned of an AI app being used to digitally "undress" images of women, predominantly underage girls.

But AI technology has only grown more widespread and sophisticated since then, with diffusion model apps like Midjourney and DALL-E making it easy for most online users to generate fake images or alter existing ones. Midjourney has banned words related to human anatomy from prompts in an effort to prevent creation of AI-generated pornography.

The Washington Post reported in June that, according to DOJ officials, using AI technology to create CSAM depicting children who do not exist still violates child pornography laws, but the report did not mention specific incidents of someone being charged for possession of such material.


"This report is deeply concerning, and we seek to understand what steps can be taken to address this perverted application of AI," the lawmakers letter said.


In addition to Good, the letter is also signed by Reps. Ken Buck, R-Colo.; Ben Cline, R-Va.; Anna Paulina Luna, R-Fla.; and Ralph Norman, R-S.C., among others.

Earlier this year, the attorneys general of all 50 states wrote to Congress urging it to expand current rules on child pornography to cover AI and set up "an expert commission to study the means and methods of AI that can be used to exploit children specifically."

Fox News Digital reached out to the DOJ for comment.

See the original post:

GOP lawmakers sound alarm over AI used to sexually exploit children - Fox News

Read More..

If You Missed Nvidias Runup, You May Not Have Missed the AI Trade – Barron’s

Blink and you might have missed Nvidia's 200% gain this year. Thankfully, there are still plenty of other artificial-intelligence opportunities for investors looking to cash in. AI is the gift that will keep on giving for businesses in the years ahead, but for investors, the easy money has already been made.

Shifts in interest rates and inflation aside, the greatest theme driving markets in 2023 has been the boom in enthusiasm for artificial-intelligence-exposed industries and firms. That's no secret to the market, which has caused Nvidia stock (ticker: NVDA), now trading at 40 times sales, to triple. If you were smart, or lucky, enough to have been along for the ride, then sell a third of your stake to take your cost basis off the table and play with house money. After all, the rally is showing signs of losing steam, or at least taking a break. Nvidia stock is down 4% since its blockbuster earnings report on Aug. 23, while the S&P 500 has added 1%.

AI-curious investors can look elsewhere. There are always the hyperscale data-center companies, namely Amazon.com (AMZN), Microsoft (MSFT), and Alphabet (GOOGL). They're the ones buying up as many of Nvidia's chips as they can get their hands on to power various applications of AI. But those stocks haven't been exactly sluggish lately either.

Microsoft, the relative slouch among the group, up only 41% this year, has a hand in both pots. In addition to its Azure cloud-computing business, offering the buzzily named Artificial Intelligence as a Service, the company is about to roll out Microsoft 365 Copilot, an AI assistant for Word, Excel, PowerPoint, Outlook, Teams, and other applications. "Early user feedback has been very positive," Microsoft says.

The company plans to charge $30 a month for Copilot. Even if only 20% of the 160 million users of Office 365 E5, the top enterprise tier, choose to subscribe, the numbers quickly become meaningful for Microsoft, says Nick Frelinghuysen, a portfolio manager at Chilton Trust. That would already amount to $11.5 billion in annual revenue.
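Frelinghuysen's figure follows from straightforward arithmetic, which can be checked directly (the 20% uptake rate is his assumption, not a Microsoft forecast):

```python
# Back-of-the-envelope check of the Copilot revenue estimate cited above.
# The 20% uptake rate is an assumption from the article, not a forecast.
users = 160_000_000      # Office 365 E5 users cited in the article
uptake = 0.20            # assumed share that subscribes to Copilot
price_per_month = 30     # announced Copilot price, dollars per user per month
annual_revenue = users * uptake * price_per_month * 12
print(f"${annual_revenue / 1e9:.1f} billion")  # $11.5 billion
```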


Shares of internet and software companies employing AI, such as Adobe (ADBE), ServiceNow (NOW), and Salesforce (CRM), have also soared. Instead, investors can look at pick-and-shovel opportunities. Frelinghuysen notes that an AI GPU server burns as much as seven times the electricity that a typical data-center server does. That means more demand for the electrical infrastructure that powers the massive buildings housing row after row of servers.

Unfortunately, some have been nearly as strong as Nvidia. Vertiv (VRT), which specializes in electrical equipment for data centers, has already rallied 180% this year. Eaton (ETN), a leader in power-management products that has gained 42% in 2023, might be a better bet, but only just.

Even farther afield, old-school companies can use the technology to become even more efficient. Think United Parcel Service (UPS) using AI to optimize routes and sort packages; Deere (DE) selling farmers subscriptions to predictive software that tells them when best to plant, water, or harvest based on local weather and other inputs; or UnitedHealth Group (UNH) using AI to process claims or improve diagnostics. Those AI applications promise to one day be transformative, but will take years to play out and show up in the numbers.


It's time for a new theme.

Write to Nicholas Jasinski at nicholas.jasinski@barrons.com

Read more:

If You Missed Nvidias Runup, You May Not Have Missed the AI Trade - Barron's

Read More..

The AI Detection Arms Race Is Onand College Students Are … – WIRED

The siren call of AI says, "It doesn't have to be this way." And when you consider the billions of people who sit outside the elite club of writer-sufferers, you start to think: Maybe it shouldn't be this way.

May Habib spent her early childhood in Lebanon before moving to Canada, where she learned English as a second language. "I thought it was pretty unfair that so much benefit would accrue to someone really good at reading and writing," she says. In 2020, she founded Writer, one of several hybrid platforms that aim not to replace human writing, but to help people (and, more accurately, brands) collaborate better with AI.

Habib says she believes there's value in the blank-page stare-down. It helps you consider and discard ideas and forces you to organize your thoughts. "There are so many benefits to going through the meandering, head-busting, wanna-kill-yourself staring at your cursor," she says. "But that has to be weighed against the speed of milliseconds."

The purpose of Writer isn't to write for you, she says, but rather to make your writing faster, stronger, and more consistent. That could mean suggesting edits to prose and structure, or highlighting what else has been written on the subject and offering counterarguments. The goal, she says, is to help users focus less on sentence-level mechanics and more on the ideas they're trying to communicate. Ideally, this process yields a piece of text that's just as human as if the person had written it entirely themselves. "If the detector can flag it as AI writing, then you've used the tools wrong," she says.

The black-and-white notion that writing is either human- or AI-generated is already slipping away, says Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania. Instead, we're entering an era of what he calls "centaur writing." Sure, asking ChatGPT to spit out an essay about the history of the Mongol Empire produces predictably AI-ish results, he says. But start writing, "The details in paragraph three aren't quite right; add this information, and make the tone more like The New Yorker," he says. Then it becomes more of a hybrid work and much better-quality writing.

Mollick, who teaches entrepreneurship at Wharton, not only allows his students to use AI tools, he requires it. "Now my syllabus says you have to do at least one impossible thing," he says. If a student can't code, maybe they write a working program. If they've never done design work, they might put together a visual prototype. "Every paper you turn in has to be critiqued by at least four famous entrepreneurs you simulate," he says.

Students still have to master their subject area to get good results, according to Mollick. The goal is to get them thinking critically and creatively: "I don't care what tool they're using to do it, as long as they're using the tools in a sophisticated manner and using their mind."

Mollick acknowledges that ChatGPT isn't as good as the best human writers. But it can give everyone else a leg up. "If you were a bottom-quartile writer, you're in the 60th to 70th percentile now," he says. It also frees certain types of thinkers from the tyranny of the writing process. "We equate writing ability with intelligence, but that's not always true," he says. "In fact, I'd say it's often not true."

Continued here:

The AI Detection Arms Race Is Onand College Students Are ... - WIRED

Read More..

Is AI the next frontier in preventing gun violence? This Prince … – WTOP

Artificial intelligence seems to be able to do everything these days. Can it also detect a gunman before a gunshot is even fired? A Prince George's County company is betting that it can.

WTOP/John Domen


A Prince George's Co. company said its AI technology could stop the next mass shooting

Artificial intelligence is one of those terms you hear about all the time now. These days it seems it can write a paper, flavor your Coca-Cola and do so many things humans used to do on their own. Can it also detect a gunman before a gunshot is even fired?

A Prince George's County, Maryland, company is betting that it can.

Wave Welcome occupies a small office in the National Harbor area and is led by Vennard Wright, a former chief information officer for Prince George's County and for WSSC Water, among other organizations.

He grew up in the Hillcrest Heights area and lives in Clinton now. He was inspired to come up with the technology back in the spring, when a group of teenage boys rushed onto a school bus and tried to shoot another student.

"We've developed a platform called PerVista, which leverages AI to analyze security cameras' video streams, and the goal is to detect firearms," said Wright.

"So a good example of that would be, if someone's walking up to a school with an AR-15, we can see in real time, by analyzing frame by frame, whether or not there's a firearm detected," he added. "Once a firearm is detected, we notify public safety in real time. And the goal is really to cut down on dispatch time, and to also make sure we're identifying the person who could be the perpetrator."
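The frame-by-frame approach Wright describes follows a common pattern: run a detector on each video frame and alert on the first high-confidence firearm detection. A minimal sketch, with a hypothetical `detect_objects` standing in for a trained model (this is a generic illustration, not PerVista's implementation):

```python
# Generic sketch of frame-by-frame firearm detection, as the article describes.
# detect_objects is a hypothetical stand-in for a trained detection model.

def detect_objects(frame):
    """Hypothetical detector: returns (label, confidence) pairs for a frame."""
    return frame  # stub: frames here are pre-labeled lists of detections

def scan_stream(frames, threshold=0.9):
    """Return the index of the first frame with a high-confidence firearm."""
    for index, frame in enumerate(frames):
        for label, confidence in detect_objects(frame):
            if label == "firearm" and confidence >= threshold:
                return index  # would trigger a real-time alert to public safety
    return None  # no firearm detected above the confidence threshold

frames = [[("person", 0.97)], [("person", 0.95), ("firearm", 0.93)]]
print(scan_stream(frames))  # 1
```

In production, `detect_objects` would run a model on decoded camera frames, and the alert path would notify dispatch rather than return an index; the confidence threshold trades missed detections against false alarms.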

Wright said the technology is 100% accurate when it comes to detecting long guns now.

"We did start with that use case because a lot of school shootings do occur with AR-15s and long guns," he said. "So that was an easier use case to go after. But we are also working on making sure there's a match to smaller guns, like 9 mm, as well. We are training the algorithm to be able to detect shorter guns as well."

The cameras can be set up inside and outside. Drones can also be used to further track a suspect carrying a weapon. Alerts and video are then sent immediately to police, and they can also be sent to other people who might need to know, like security guards or teachers and principals if the technology is deployed in a school.

Even homes and businesses could use the cameras to monitor for the same kinds of threats. Or, to hearken back to the event that inspired this idea, a school bus.

"The way I understood that incident is they walked up to the bus with the gun out. So at that point, the way we're looking at that scenario is immediately we call the police, we let the police know, 'Hey, the bus is at this location, there are three people, here's what they look like,'" said Wright.

"You cut down the amount of time that it takes in order to get the perpetrators," he added. "We're also looking to make sure that people are not on the run for a long time as well, which tends to happen; they're looking for days or weeks for the shooter. Immediately, the police are able to respond and get the people."

Wright said the technology is ready to roll out now. He also hopes that even if the platform can't stop every single shooting, the immediate detection and dispatch to police could reduce the impact and eventually lower the number of shootings that occur.

And as a Maryland native, he's hoping to utilize this in area schools.

"It's unfortunate that a solution like this is needed," he admitted. "I'm also optimistic that by applying technology in the right way, we can start to serve as a deterrent. So we're looking forward to being a big part of the solution. And hopefully by doing that, we will also be able to scale and make some incredible things happen here in Prince George's County."



See the rest here:

Is AI the next frontier in preventing gun violence? This Prince ... - WTOP

Read More..

Salesforce to hire 3,300 staffers as it eyes generative AI opportunity – CIO

After laying off 8,000 staffers in January, Salesforce is now planning to hire at least 3,300 employees. The plan includes rehiring some of the former employees.

Salesforce is looking at a large recruitment drive as it plans to invest in new areas such as generative AI and push some of its popular products, such as the Data Cloud, CEO Marc Benioff and chief operating officer Brian Millham told Bloomberg in an interview.

The company already has made several product enhancements, especially integrating new generative AI features into its Data Cloud.

This week at its annual Dreamforce conference, the company said it has rebuilt its Data Cloud to support generative AI and will begin rolling out the omnipresent chatbot to some customers by year-end.

In August, Salesforce released a new no-code, interface-based AI and generative AI model training tool, dubbed Einstein Studio, as part of its Data Cloud offering.

Earlier in June, Salesforce showcased a new offering, dubbed AI Cloud, which combines its previously announced Slack GPT, Tableau GPT, Apex GPT, MuleSoft GPT, Flow GPT, Service GPT, Marketing GPT, and Commerce GPT along with the new Einstein Trust layer and a prompt engineering tool for training large language models (LLMs).

The new hires, according to Millham, will be divided between sales, engineering, and the team handling the development of its Data Cloud.

The top executives said that some of the new positions are more likely to be filled by what the company terms as boomerang hires. These are essentially employees who worked at Salesforce earlier before moving to other companies.

Salesforce sees boomerang hires as a new success metric, the top executives said, adding that there still might be strategic layoffs in the future.

Strategic layoffs, such as trading non-technical staff for more engineering and technical talent, could become common for most large technology companies. This week, Google-parent Alphabet laid off hundreds of HR employees citing less demand for such staff within the company for the next few quarters.

The plan to hire 3,300 new employees by Salesforce is expected to restore nearly 40% of the staff laid off during the 10% workforce reduction in January.

Salesforce, which had nearly 80,000 global employees as of February 2022, currently employs about 70,000 staffers after eliminating at least 8,000 roles in January, citing reduced customer spending due to macroeconomic uncertainty.

Just two months before the downsizing, the company had decided to cut at least 950 roles despite a relatively successful year financially, with second-quarter revenue rising 22% year on year, driven by rapid adoption of its cloud-based CRM and other sales management tools.

See the original post:

Salesforce to hire 3,300 staffers as it eyes generative AI opportunity - CIO


Intel Celebrates AI Accessibility and Enables the Next Generation of … – Investor Relations :: Intel Corporation (INTC)

Intel is celebrating AI accessibility innovation by next-generation technologists with its AI Global Impact Festival

SANTA CLARA, Calif.--(BUSINESS WIRE)--What's New: Today, Intel announced the global grand-prize winners at its third annual AI Global Impact Festival. The festival brings together future developers and educators who are working to solve real-world problems using artificial intelligence (AI), with the support of policymakers and academic leaders. Students from 26 countries participated in the competition at this year's festival, "Enriching Lives with AI Innovation." Intel's event program focused on building digital readiness for all students and celebrating AI innovations that drive inclusion, accessibility and responsible impact.

"I am constantly amazed by the innovative young technologists who understand the potential of AI to be a force for good. I am excited to celebrate this year's innovative winners. The success of the technology of tomorrow relies on them, as they embody the Intel purpose to improve the life of every person on the planet."

Pat Gelsinger, Intel CEO

Why It Matters: Artificial intelligence has the potential to unlock powerful new possibilities and improve the life of every person on the planet. It can also be integral in helping people with disabilities live independently and participate fully in all aspects of life. This year, centered on Intel's goal of making technology fully inclusive, Intel introduced a new award for projects focused on AI innovation for accessibility. Accessibility was a key pillar of the festival platform, which includes closed captioning, screen readers for those with visual disabilities and translation into more than 120 languages.

Although AI technology has the power to create positive change, there are also potential ethical risks associated with its development. Intel is committed to responsibly advancing AI technology. The company follows a comprehensive responsible AI approach to guard against the misuse of AI.

Students were judged on how well their projects relate to and address potential risks, and the winners' projects went through an ethics audit by Intel's Responsible AI team, inspired by the protocol followed for every company AI project. This year's festival platform also featured a new, self-paced lesson on Responsible AI skills, for which all participants earn a certificate.

Who was Awarded: During the global competition, participants competed for more than $500,000 in cash prizes, certificates, Intel laptops and mentorship opportunities. The following students were named Global Award winners for AI Impact Creator:

For the 13- to 17-year-old age group:

For the 18-year-old+ age group:

For the accessibility award:

About Intel's Role: Intel is committed to bringing AI skills everywhere, regardless of a person's ethnicity, age, gender or background. The AI Global Impact Festival provides opportunities and platforms for future innovators to learn, showcase and celebrate the impact of AI innovations.

Intel has committed to expand digital readiness to reach 30 million people in 30,000 institutions in 30 countries. To date, Intel has expanded its Digital Readiness Programs globally by collaborating with 27 national governments, enabling 23,000 institutions and training more than 5.6 million people worldwide. The festival is part of Intel's 2030 RISE Goals and the company's dedication to using tech as a force for good, underscoring its aim to make technology fully inclusive and to expand digital readiness worldwide.

More Context: Visit the AI Global Impact Festival website.

About Intel

Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore's Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers' greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel's innovations, go to newsroom.intel.com and intel.com.

© Intel Corporation. Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

View source version on businesswire.com: https://www.businesswire.com/news/home/20230914095329/en/

Orly Shapiro, 1-949-231-0897, orly.shapiro@intel.com

Source: Intel

Released Sep 14, 2023 12:00 PM EDT

View post:

Intel Celebrates AI Accessibility and Enables the Next Generation of ... - Investor Relations :: Intel Corporation (INTC)


AI detects eye disease and risk of Parkinson’s from retinal images – Nature.com

Retinal imaging allows researchers and physicians to observe small blood vessels whose condition could hint at a health-care issue. Credit: ipm/Alamy

Scientists have developed an artificial intelligence (AI) tool capable of diagnosing and predicting the risk of developing multiple health conditions, from ocular diseases to heart failure to Parkinson's disease, all on the basis of people's retinal images.


AI tools have been trained to detect disease using retinal images before, but what makes the new tool, called RETFound, special is that it was developed using a method known as self-supervised learning. That means the researchers did not have to analyse each of the 1.6 million retinal images used for training and label them as "normal" or "not normal", for instance. Such labelling is time-consuming and expensive, and is needed during the development of most standard machine-learning models.

Instead, the scientists used a method similar to the one used to train large language models such as ChatGPT. That AI tool harnesses myriad examples of human-generated text to learn how to predict the next word in a sentence from the context of the preceding words. In the same way, RETFound uses a multitude of retinal photos to learn how to predict what missing portions of images should look like.
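In rough outline, this style of pretraining hides random patches of each image and asks a model to reconstruct them. The following is a minimal NumPy sketch of the masking-and-reconstruction setup only; the patch size, mask ratio, `mask_patches` helper and the mean-pixel "model" are illustrative stand-ins, not RETFound's actual architecture or code.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_patches(image, patch=4, mask_ratio=0.75):
    """Split an image into non-overlapping patches and hide a random subset.

    Returns the visible patches, the hidden (target) patches, and the mask.
    """
    h, w = image.shape
    patches = image.reshape(h // patch, patch, w // patch, patch)
    patches = patches.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    n = patches.shape[0]
    hidden = rng.choice(n, size=int(n * mask_ratio), replace=False)
    mask = np.zeros(n, dtype=bool)
    mask[hidden] = True
    return patches[~mask], patches[mask], mask

# Toy "model": predict every hidden patch as the mean of the visible pixels.
# A real masked autoencoder trains an encoder-decoder to make this prediction.
image = rng.random((16, 16))
visible, target, mask = mask_patches(image)
prediction = np.full_like(target, visible.mean())
loss = np.mean((prediction - target) ** 2)  # reconstruction error to minimise
```

Minimising that reconstruction error over millions of images is what forces the network to internalise what a retina looks like, without any human-written labels.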

"Over the course of millions of images, the model somehow learns what a retina looks like and what all the features of a retina are," says Pearse Keane, an ophthalmologist at Moorfields Eye Hospital NHS Foundation Trust in London who co-authored a paper published today in Nature [1] describing the tool. This forms the cornerstone of the model and classifies it as what some call a foundation model, which means that it can be adapted for many tasks.

A person's retinas can offer a window into their health, because they are the only part of the human body through which the capillary network, made up of the smallest blood vessels, can be observed directly. "If you have some systemic cardiovascular disease, like hypertension, which is affecting potentially every blood vessel in your body, we can directly visualize [that] in retinal images," Keane says.


Retinas are also an extension of the central nervous system, sharing similarities with the brain, which means that retinal images can be used to evaluate neural tissue. "The rub is that a lot of the time people don't have the expertise to interpret these scans. This is where AI comes in," Keane says.

Once they had pre-trained RETFound on those 1.6 million unlabelled retinal images, Keane and his colleagues could then introduce a small number of labelled images, say, 100 retinal images from people who had developed Parkinson's and 100 from people who had not, to teach the model about specific conditions. Having learnt from all the unlabelled images what a retina should look like, Keane says, the model can easily learn the retinal features associated with a disease.
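That two-stage recipe, a pretrained encoder kept fixed plus a small classification head trained on a handful of labelled examples, can be sketched as follows. This is an illustration of the pattern only: the "encoder" is a fixed random projection standing in for the real pretrained network, and the images and labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def pretrained_features(images):
    """Stand-in for a frozen pretrained encoder: maps each flattened image
    to an embedding. Here it is just a fixed random projection."""
    proj = np.random.default_rng(42).standard_normal((images.shape[1], 8))
    return images @ proj

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny labelled set: 100 "disease" and 100 "healthy" flattened images.
X = rng.standard_normal((200, 64))
y = np.array([1] * 100 + [0] * 100)

# Fine-tune: train only a logistic-regression head on the frozen features.
feats = pretrained_features(X)
w, b = np.zeros(feats.shape[1]), 0.0
for _ in range(500):                      # plain gradient descent
    p = sigmoid(feats @ w + b)            # predicted disease probability
    grad = p - y
    w -= 0.1 * feats.T @ grad / len(y)
    b -= 0.1 * grad.mean()
```

Because only the small head is trained, a couple of hundred labelled scans can be enough, which is the label efficiency the researchers describe.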

Using unlabelled data to initially train the model unblocks a major bottleneck for researchers, says Xiaoxuan Liu, a clinical researcher who studies responsible innovation in AI at the University of Birmingham, UK. Radiologist Curtis Langlotz, director of the Center for Artificial Intelligence in Medicine and Imaging at Stanford University in California, agrees. "High-quality labels for medical data are extremely expensive, so label efficiency has become the coin of the realm," he says.

The system performed well at detecting ocular diseases such as diabetic retinopathy. On a scale on which 0.5 represents a model that performs no better than a random prediction and 1 represents a perfect model that makes an accurate prediction every time, it scored between 0.822 and 0.943 for diabetic retinopathy, depending on the data set used. When predicting the risk of systemic diseases such as heart attack, heart failure, stroke and Parkinson's, the overall performance was more limited, but still superior to that of other AI models.
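The scale described here matches the area under the ROC curve (AUROC), which equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A small self-contained sketch (the function name and example scores are illustrative):

```python
def auroc(labels, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive example is scored higher than a randomly chosen negative one
    (0.5 = random guessing, 1.0 = perfect ranking). Ties count as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auroc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])   # perfect ranking -> 1.0
auroc([1, 0, 1, 0], [0.2, 0.2, 0.2, 0.2])   # uninformative scores -> 0.5
```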

"RETFound is so far one of the few successful applications of a foundation model to medical imaging," Liu says.

Researchers are now looking ahead to what other types of medical imaging the techniques used to develop RETFound might be applied to. "It will be interesting to see whether these methods generalize to more complex images," Langlotz says, "for example, to magnetic resonance images or computed tomography scans, which are often three- or even four-dimensional."


The authors have made the model publicly available, and hope that groups around the world will be able to adapt and train it to work for their own patient populations and medical settings. "They could potentially take this algorithm and fine-tune it, using data from their own country, to have something that's more optimized for their use," Keane says.

"This is tremendously exciting," Liu says. But using RETFound as the basis for other models to detect diseases comes with a risk, she adds. That's because any limitations embedded in the tool could leak into future models that are built from it. It is now up to the authors of RETFound to ensure its ethical and safe usage, including transparent communication of its limitations, so that it can be a true community asset.

See more here:

AI detects eye disease and risk of Parkinson's from retinal images - Nature.com
