
Dr. Radha Plumb Assumes Role as Department of Defense Chief Digital and AI Officer – Department of Defense

Pentagon Appoints Technical and Strategic Expert to Lead DoD's Acceleration of the Adoption of Data, Analytics, and AI to Generate Decision Advantage

ARLINGTON, Va. In a ceremony held today at the Pentagon, Dr. Radha Plumb was officially sworn in as the Department of Defense (DoD) Chief Digital and Artificial Intelligence Officer (CDAO). Secretary of Defense Lloyd J. Austin III administered the oath of office to Plumb with family and colleagues in attendance. Plumb most recently served as the Deputy Under Secretary of Defense for Acquisition and Sustainment.

"It is an honor to serve in this role at a critical time for our nation. AI and digital innovation provide crucial new technologies and capabilities for our warfighters," said Plumb. "I am looking forward to collaborating with partners from military leaders to industry partners and academia to our allies and partners to ensure we deliver digital and AI technological advancements at speed and scale."

Plumb assumes the role of CDAO having completed her last assignment working on acquisition matters for DoD, including building out and maintaining a robust national security industrial base and supply chain. She brings significant technical expertise in data and AI development, as well as creative acquisition approaches combined with strategic acumen, to advance the CDAO's innovative efforts. Her combination of government and industry experience will provide a unique advantage in driving accelerated adoption of data, analytics, and AI across DoD.

Prior to her work with the DoD, Plumb worked in various industry positions, including at Google, RAND and Facebook. Plumb received her Ph.D. in economics from Princeton University, and at the outset of her career she was an assistant professor at the London School of Economics and a Robert Wood Johnson Health Policy Scholar at Harvard.

About the CDAO: The CDAO became operational in June 2022 and is dedicated to integrating and optimizing artificial intelligence capabilities across the DoD. The office is responsible for accelerating the DoD's adoption of data, analytics, and AI, enabling the Department's digital infrastructure and policy adoption to deliver scalable AI-driven solutions for enterprise and joint use cases, safeguarding the nation against current and emerging threats.

Link:

Dr. Radha Plumb Assumes Role as Department of Defense Chief Digital and AI Officer - Department of Defense

Read More..

Are AI Mammograms Worth the Cost? – The New York Times

Clinics around the country are starting to offer patients a new service: having their mammograms read not just by a radiologist, but also by an artificial intelligence model. The hospitals and companies that provide these tools tout their ability to speed the work of radiologists and detect cancer earlier than standard mammograms alone.

Currently, mammograms identify around 87 percent of breast cancers. They're more likely to miss cancer in younger women and those with dense breasts. They sometimes lead to false positives that require more testing to rule out cancer, and can also turn up precancerous conditions that may never cause serious problems but nonetheless lead to treatment because it's not possible to predict the risk of not treating them.

"It's not a perfect science by any stretch," said Dr. John Lewin, chief of breast imaging at Smilow Cancer Hospital and Yale Cancer Center.

Experts are excited by the prospect of improving the accuracy of screening for breast cancer, which is diagnosed in about 300,000 women each year in the United States. But they also have concerns about whether these A.I. tools will work well across a diverse range of patients and whether they can meaningfully improve breast cancer survival.

Mammograms contain a wealth of information on breast tissues and ducts. Certain patterns, such as bright white spots with jagged edges, may be a sign of cancer. Fine white lines, by comparison, may indicate calcifications that can be benign or may need more testing. Other patterns can be tricky for humans to differentiate from normal breast tissue.

"A.I. models can, in some cases, see what we cannot see," said Dr. Katerina Dodelzon, a radiologist who specializes in breast imaging at NewYork-Presbyterian/Weill Cornell Medical Center.


The rest is here:

Are AI Mammograms Worth the Cost? - The New York Times

Read More..

AI Companies With A Winning Hand: The 2024 CRN AI 100 – CRN

CRN's inaugural AI 100 list breaks down the 100 companies you need to know about in the AI market across five major categories: cloud, security, data and analytics, data center and edge, and software.

Artificial intelligence is revolutionizing the IT industry with the most powerful tech companies on the planet, along with some special unicorn startups, investing significantly in AI and generative AI.

These 100 companies are standing out from the crowd by leading the AI revolution in 2024, providing AI solutions for cloud computing, cybersecurity, data and analytics, edge computing, data centers, PCs and software.

From Nvidia and Microsoft to Dell Technologies and Wiz, these 100 artificial intelligence market leaders are the ones making waves in the boiling-hot AI industry with billions expected to be poured into new AI technology this year.

Before jumping into the list of 100 AI companies, here's what to know about each category.

AI For Cloud

Cloud computing is one of the most popular delivery options for AI and generative AI applications at scale, with companies like AWS, IBM, Google, Microsoft and Oracle making the list as well as several cloud startups.

The rapid advancement in large language models (LLMs) and foundation models (FMs) is driving AI and GenAI capabilities and business use cases, with many cloud companies leading the way.

"Because of its scale and shared services model, cloud technology is best-suited for the delivery of GenAI-enabled applications at scale and the development of general-purpose foundation models," said Gartner analyst Sid Nag in a recent report.

AI For Cybersecurity

Both AI and machine learning technologies are under the hood of just about every major cybersecurity capability out there: from threat detection, to user authentication and access management, to analysis of network traffic and many other key security functions.

GenAI has gone viral in the cybersecurity industry, with countless vendors introducing new capabilities powered by LLMs in the wake of OpenAI's debut of ChatGPT in late 2022. Across the security landscape (including email, collaboration, code security and protecting the use of GenAI itself), products are being enhanced with the help of AI.

Cybersecurity firms securing AI on various fronts range from CrowdStrike and Palo Alto Networks to Wiz and Vectra AI.

AI For Data Center, PC And Edge

AI is powered by processors, servers and data storage devices, while AI-capable PCs and laptops have become one of the hottest topics in 2024.

From data centers to edge computing and end-user PCs, AI is spreading across the IT landscape in the shape of new CPUs, AI-powered data management and AI copilot initiatives to boost AIOps.

In this category, AI superstars like Nvidia, Cisco, Intel, HPE and Supermicro made the list alongside AI startups such as Prosimo and Run:ai.

AI For Data And Analytics

Data is the fuel that AI needs to work. IT companies in the data management space (including suppliers of tools for collecting, managing and preparing huge volumes of data) play a huge role in the AI wave sweeping the industry.

Also participating in the AI tsunami are many data analytics companies, both established vendors and startups, that are incorporating AI and generative AI into their software to go beyond traditional business intelligence and reporting, offering sophisticated AI-powered search and analytics tools and natural language querying capabilities.

Data and analytics all-stars that made CRN's list include Alteryx, Couchbase, Databricks, Dataloop and Informatica, to name a few.

AI For Software

From AI virtual assistants to the automation of administrative IT tasks, AI software companies are critical to the industry.

Early AI software use cases focus on improving business operations, creating new revenue streams and boosting productivity, such as AI for code and content generation.

Research firm IDC predicts enterprises will spend over $38 billion worldwide on generative AI software and related infrastructure hardware and services, with the number reaching $151 billion by 2027.

Companies driving AI for software include the likes of Connectwise, CrushBank, Dataiku, SAP and ServiceNow.

For CRN's inaugural AI 100 list, we highlight the work these 100 top-notch AI providers are doing to power the AI and GenAI world of the future.

The 25 Hottest AI Companies For Data Center And Edge: The 2024 CRN AI 100 Here are the 25 companies leading the AI revolution and innovation race in servers, chips, networking, storage, microprocessors, laptops and PCs in 2024.

The 20 Hottest AI Cloud Companies: The 2024 CRN AI 100 From AWS and Microsoft Azure to Altair, MongoDB and Dynatrace, here are the 20 cloud AI companies you need to know about in 2024.

The 20 Hottest AI Cybersecurity Companies: The 2024 CRN AI 100 The coolest AI cybersecurity companies in 2024 include CrowdStrike, Fortinet, Netskope and Trend Micro.

The 20 Hottest AI Software Companies: The 2024 CRN AI 100 The coolest AI software companies in 2024 include CrushBank, LogicMonitor, MSPbots, Rewst and more.

The 15 Hottest AI Data And Analytics Companies: The 2024 CRN AI 100 Here are 15 data management and data analytics companies, part of the inaugural CRN AI 100, that are playing an outsized role in AI today.

Read the original here:

AI Companies With A Winning Hand: The 2024 CRN AI 100 - CRN

Read More..

Mobile Robots and AI Help Modernize, Streamline Warehouse Workflows – PYMNTS.com

The rise of artificial intelligence (AI) holds the promise to fully digitize all manner of verticals and workflows, and in the age of eCommerce, advanced technology in the service of efficiency can help transform logistics.

Ben Gruettner, vice president of revenue at Robust.AI, told PYMNTS that robots and AI, in tandem with, well, the human touch, can keep up with changing end-market demands and the changing workflows that demand flexibility.

"Generative AI is arriving on the warehouse floor faster than many anticipated," he told PYMNTS, adding that "there's no either/or when it comes to workers and robots."

Used collaboratively, much in the way that warehouse workers today make use of forklifts, pallet jacks and hand trucks, robots can help fill the job gap and help warehouses and fulfillment centers deal with increasingly high demand.

As reported earlier this year, DHL Supply Chain partnered with Robust.AI to develop and deploy a fleet of warehouse robots, specifically Carter, a collaborative mobile robot that the companies said provides flexible warehouse material handling automation.

Through the joint efforts, the robots handle a range of functions, starting with optimizing the picking process. AI helps the robots learn and adapt to real-time warehouse conditions, optimizing workflows and maximizing productivity. The robots' embedded sensors and AI capabilities generate data to improve warehouse layouts, staffing and inventory management.

Gruettner said that solutions like Carter help support the labor companies intend to keep, adding, "We are providing a solution that can be easily deployed into existing operations, aiming to empower and enhance the productivity of warehouse workers within DHL."

AI, he added, helps ensure that the robots adapt as third-party logistics operations change because warehouse volume and customer profiles change throughout the year.

The robots, he told PYMNTS, are "autonomous allies" with "zero-friction adoption."

The remarks from Gruettner come in the wake of separate coverage by PYMNTS, in which Akash Gupta, CEO of GreyOrange, said that only about 10% to 15% of warehouses have mechanized at least some of their processes, and a relatively scant mid-single-digit percentage of warehouses might be deemed heavily automated.

He told PYMNTS that the former, traditional ways of managing inventory in batches are no longer enough to satisfy the demands of omnichannel commerce. AI, he said, can offer up an intelligent software orchestration layer that helps manage the flow of inventory and data.

In a separate interview, Yoav Amiel, chief information officer at freight brokerage platform and third-party logistics company RXO, told PYMNTS, "If you look at the maturity of AI models over the years, if you go back 20 years, AI was more around recognition, and gradually that evolved into coming up with insights and serving as a recommendation engine."

He added, "Today, AI is capable of task completion. If you think about warehouse inventory planning, workforce planning, all of these activities, AI can make these processes much more efficient overall."

The rest is here:

Mobile Robots and AI Help Modernize, Streamline Warehouse Workflows - PYMNTS.com

Read More..

Texas will use computers to grade written answers on this year’s STAAR tests – The Texas Tribune

Sign up for The Brief, The Texas Tribune's daily newsletter that keeps readers up to speed on the most essential Texas news.

Students sitting for their STAAR exams this week will be part of a new method of evaluating Texas schools: Their written answers on the state's standardized tests will be graded automatically by computers.

The Texas Education Agency is rolling out an automated scoring engine for open-ended questions on the State of Texas Assessment of Academic Readiness for reading, writing, science and social studies. The technology, which uses natural language processing, a building block of artificial intelligence chatbots such as GPT-4, will save the state agency about $15 million to $20 million per year that it would otherwise have spent on hiring human scorers through a third-party contractor.

The change comes after the STAAR test, which measures students' understanding of state-mandated core curriculum, was redesigned in 2023. The test now includes fewer multiple choice questions and more open-ended questions known as constructed response items. After the redesign, there are six to seven times more constructed response items.

"We wanted to keep as many constructed open-ended responses as we can, but they take an incredible amount of time to score," said Jose Rios, director of student assessment at the Texas Education Agency.

In 2023, Rios said TEA hired about 6,000 temporary scorers, but this year, it will need under 2,000.

To develop the scoring system, the TEA gathered 3,000 responses that went through two rounds of human scoring. From this field sample, the automated scoring engine learns the characteristics of responses, and it is programmed to assign the same scores a human would have given.
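As a rough illustration of that training step, here is a minimal, hypothetical sketch. TEA has not published its actual model, features, or scoring logic; this toy nearest-neighbor scorer just shows the idea of learning score characteristics from a human-scored field sample and reporting a confidence value alongside each score.

```python
# Hypothetical sketch, not TEA's published system: a toy engine that
# learns from a human-scored field sample by assigning each new response
# the score of its most similar training example, plus a crude confidence.

def tokens(text):
    return set(text.lower().split())

def train(field_sample):
    """field_sample: list of (response_text, human_score) pairs."""
    return [(tokens(text), score) for text, score in field_sample]

def score_response(model, text):
    """Return (score, confidence) for a new response.

    Confidence here is the Jaccard similarity to the closest training
    response: a stand-in for a real model's calibrated probability.
    """
    words = tokens(text)
    best_score, best_sim = None, 0.0
    for sample_words, sample_score in model:
        union = len(words | sample_words) or 1
        sim = len(words & sample_words) / union
        if sim > best_sim:
            best_score, best_sim = sample_score, sim
    return best_score, best_sim

# Hypothetical field sample (the real one had 3,000 twice-scored responses).
model = train([
    ("The author supports the claim with evidence from the text.", 2),
    ("Paragraph three shows how the character changed over time.", 2),
    ("stuff happened i guess", 0),
])
print(score_response(model, "The author gives evidence from the text."))
```

A production system would use a far richer representation than word overlap, but the interface is the same: every automated score comes paired with a confidence that downstream quality control can act on.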

This spring, as students complete their tests, the computer will first grade all the constructed responses. Then, a quarter of the responses will be rescored by humans.

When the computer has low confidence in the score it assigned, those responses will be automatically reassigned to a human. The same thing will happen when the computer encounters a type of response that its programming does not recognize, such as one using lots of slang or words in a language other than English.

"We have always had very robust quality control processes with humans," said Chris Rozunick, division director for assessment development at the Texas Education Agency. "With a computer system, the quality control looks similar."

Every day, Rozunick and other testing administrators will review a summary of results to check that they match what is expected. In addition to low-confidence scores and responses that do not fit the computer's programming, a random sample of responses will also be automatically handed off to humans to check the computer's work.
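The routing rules described here, with low-confidence scores, unrecognized responses and a random audit sample all going to human scorers, can be sketched as follows. The threshold and audit-rate values are hypothetical, since TEA has not published them; only the quarter-of-responses rescore rate comes from the article.

```python
import random

# Hypothetical values: TEA has not published its actual thresholds.
CONFIDENCE_THRESHOLD = 0.75  # below this, the engine's score is "low confidence"
HUMAN_RESCORE_RATE = 0.25    # a quarter of responses get a second, human score
AUDIT_SAMPLE_RATE = 0.05     # hypothetical size of the random QC sample

def route_response(confidence, recognized):
    """Decide whether an auto-scored response is also sent to a human.

    Mirrors the routing described above: unrecognized responses (e.g.
    heavy slang or words in another language), low-confidence scores,
    and a random sample all go to human scorers.
    """
    if not recognized:
        return "human"    # response type the programming does not recognize
    if confidence < CONFIDENCE_THRESHOLD:
        return "human"    # engine has low confidence in its own score
    if random.random() < HUMAN_RESCORE_RATE + AUDIT_SAMPLE_RATE:
        return "human"    # routine rescore or random QC audit
    return "auto"         # the computer's score stands

print(route_response(confidence=0.6, recognized=True))  # → human
```

The design point worth noting is that the human scorers act as a safety net on exactly the cases where an automated model is least trustworthy, rather than being removed from the process entirely.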

TEA officials have been resistant to the suggestion that the scoring engine is artificial intelligence. It may use technology similar to chatbots such as GPT-4 or Google's Gemini, but the agency has stressed that the process will have systematic oversight from humans. It won't learn from one response to the next, but will always defer to its original programming set up by the state.

"We are way far away from anything that's autonomous or can think on its own," Rozunick said.

But the plan has still generated worry among educators and parents in a world still wary of the influence of machine learning, automation and AI.

Some educators across the state said they were caught by surprise at TEA's decision to use automated technology, also known as hybrid scoring, to score responses.

"There ought to be some consensus about, hey, this is a good thing, or not a good thing, a fair thing or not a fair thing," said Kevin Brown, the executive director for the Texas Association of School Administrators and a former superintendent at Alamo Heights ISD.

Representatives from TEA first mentioned interest in automated scoring in testimony to the Texas House Public Education Committee in August 2022. In the fall of 2023, the agency announced the move to hybrid scoring at a conference and during test coordinator training before releasing details of the process in December.

The STAAR test results are a key part of the accountability system TEA uses to grade school districts and individual campuses on an A-F scale. Students take the test every year from third grade through high school. When campuses within a district are underperforming on the test, state law allows the Texas education commissioner to intervene.

The commissioner can appoint a conservator to oversee campuses and school districts. State law also allows the commissioner to suspend and replace elected school boards with an appointed board of managers. If a campus receives failing grades for five years in a row, the commissioner is required to appoint a board of managers or close that school.

With the stakes so high for campuses and districts, there is a sense of uneasiness about a computer's ability to score responses as well as a human can.

"There's always this sort of feeling that everything happens to students and to schools and to teachers and not for them or with them," said Carrie Griffith, policy specialist for the Texas State Teachers Association.

A former teacher in the Austin Independent School District, Griffith added that even if the automated scoring engine works as intended, "it's not something parents or teachers are going to trust."

Superintendents are also uncertain.

"The automation is only as good as what is programmed," said Lori Rapp, superintendent at Lewisville ISD. School districts have not been given a detailed enough look at how the programming works, Rapp said.

The hybrid scoring system was already used on a limited basis in December 2023. Most students who take the STAAR test in December are retaking it after a low score. That's not the case for Lewisville ISD, where high school students on an altered schedule test for the first time in December, and Rapp said her district saw a drastic increase in zeroes on constructed responses.

"At this time, we are unable to determine if there is something wrong with the test question or if it is the new automated scoring system," Rapp said.

The state overall saw an increase in zeroes on constructed responses in December 2023, but the TEA said there are other factors at play. In December 2022, the only way to score a zero was by not providing an answer at all. With the STAAR redesign in 2023, students can receive a zero for responses that may answer the question but lack any coherent structure or evidence.

The TEA also said that students who are retesting will perform at a different level than students taking the test for the first time. "Population difference is driving the difference in scores rather than the introduction of hybrid scoring," a TEA spokesperson said in an email.

For $50, students and their parents can request a rescore if they think the computer or the human got it wrong. The fee is waived if the new score is higher than the initial score. For grades 3-8, there are no consequences on a student's grades or academic progress if they receive a low score. For high school students, receiving a minimum STAAR test score is a common way to fulfill one of the state graduation requirements, but it is not the only way.

Even with layers of quality control, Round Rock ISD Superintendent Hafedh Azaiez said he worries a computer could miss certain things that a human being would not, and that this room for error will impact students who, Azaiez said, are "trying to do his or her best."

"Test results will impact how they see themselves as a student," Brown said, and it can be humiliating for students who receive low scores. With human graders, Brown said, students were rewarded for having their own voice and originality in their writing, and he is concerned that computers may not be as good at rewarding originality.

Julie Salinas, director of assessment, research and evaluation at Brownsville ISD, said she has concerns about whether hybrid scoring allows students the flexibility to respond in ways that demonstrate their full capability and thought process through expressive writing.

Brownsville ISD is overwhelmingly Hispanic. Students taking an assessment entirely in Spanish will have their tests graded by a human. If the automated scoring engine works as intended, responses that include some Spanish words or colloquial, informal terms will be flagged by the computer and assigned to a human so that more creative writing can be assessed fairly.

"The system is designed so that it does not penalize students who answer differently, who are really giving unique answers," Rozunick said.

With the computer scoring now a part of STAAR, Salinas is focused on adapting. The district is incorporating tools with automated scoring into how teachers prepare students for the STAAR test to make sure they are comfortable.

"Our district is on board and on top of the things that we need to do to ensure that our students are successful," she said.

Disclosure: Google, the Texas Association of School Administrators and Texas State Teachers Association have been financial supporters of The Texas Tribune, a nonprofit, nonpartisan news organization that is funded in part by donations from members, foundations and corporate sponsors. Financial supporters play no role in the Tribune's journalism. Find a complete list of them here.


Read more here:

Texas will use computers to grade written answers on this year's STAAR tests - The Texas Tribune

Read More..

The 10 Best AI Courses That Are Worth Taking in 2024 – TechRepublic

Since ChatGPT proved a consumer hit, a gold rush for AI has taken off in Silicon Valley. Investors are intrigued by companies promising generative AI will transform the world, and companies seek workers with the skills to bring them into the future. The frenzy may be cooling down in 2024, but AI skills are still hot in the tech market.

Looking to join the AI industry? Which route into the profession is best for each learner will depend on that person's current skill level and their target skill or job title.

When assessing online courses, we examined the reliability and popularity of the provider, the depth and variety of topics offered, the practicality of the information, the cost and the duration. The courses and certification programs vary a lot, so choose the options that are right for each person or business.

They are listed in order of skill level and, within the skill level categories, alphabetically. In most cases, each provider offers multiple courses in different aspects of generative AI. Explore these generative AI courses to see which might fit the right niche.

A name learners are likely to see on AI courses a lot is Andrew Ng: he is an adjunct professor at Stanford University, founder of DeepLearning.AI and cofounder of Coursera. Ng is one of the authors of a 2009 paper on using GPUs for deep learning, which NVIDIA and other companies are now doing to transform AI hardware. Ng is the instructor and driving force behind AI for Everyone, a popular, self-paced course in which more than one million people have enrolled. AI for Everyone from Coursera contains four modules:

For individuals, a Coursera account is $49-$79 per month with a 7-day free trial, depending on the course and plan. However, the AI for Everyone course can be taken for free; the $79 per month fee provides access to graded assignments and the ability to earn a certificate.

Coursera states the class takes six hours to complete.

This course has no prerequisites.

Are you a C-suite leader looking to shape your company's vision for generative AI? If so, this non-technical course helps business leaders build a top-down philosophy around generative AI projects. It could be useful for sparking conversation between business and technical leaders.

Free if completed within the Coursera 7-day trial. Otherwise, a Coursera account is $49-$79 per month, depending on the course and plan.

This course takes about one hour.

There are no prerequisites for this course.

This is a well-reviewed beginner course that sets itself apart by approaching AI holistically, including its practical applications and potential social impact. It includes hands-on exercises but doesn't require the learner to know how to code, making it a good mix of practical and beginner content. DataCamp's Understanding Artificial Intelligence course is particularly interesting because it includes a section on business and enterprise. Business leaders looking for a non-technical explanation of the infrastructure and skills they need to harness AI might be interested in this course.

This course can be accessed with a DataCamp subscription, which costs $25 per person per month, billed annually. Educators can get a group subscription for free.

Including videos and exercises, this course lasts about two hours.

This course has no prerequisites.

Google Cloud's Introduction to Generative AI Learning Path covers what generative AI and large language models are for beginners. Since it's from Google, it introduces some specific Google applications used to build generative AI: Google Tools and Vertex AI. It includes a section on responsible AI, inviting the learner to consider ethical practices around the generative AI they may go on to create. Completing this learning path awards the Prompt Design in Vertex AI skill badge.

Another option from Google Cloud is the Generative AI for Developers Learning Path.

This course is free.

The path technically contains 8 hours and 30 minutes of content, but some of that content is quizzes. The time it takes for each individual to complete the path may vary.

The path has no prerequisites.

Since this course is taught by an IBM professional, it is likely to include contemporary, real-world insight into how generative AI and machine learning are used today. It is an eight-hour course that covers a wide range of topics around artificial intelligence, including ethical concerns. Introduction to Artificial Intelligence includes quizzes and can contribute to career certificates in a variety of programs from Coursera.

Free if completed within the 7-day Coursera free trial, or $49-$79 per month afterward, depending on the course and plan. Financial aid is available.

Coursera estimates this course will take about eight hours.

There are no prerequisites for this course.

AWS offers a lot of AI-related courses and programs, but we chose this one because it combines fundamentals (the first two courses in the developer kit) with hands-on knowledge and training on specific AWS products. This could be very practical for someone whose organization already works with multiple AWS products but wants to expand into more generative AI products and services. This online, self-guided kit includes hands-on labs and AWS Jam challenges, which are gamified and AI-powered experiences.

The AWS Generative AI Developer Kit is part of the AWS Skill Builder subscription. AWS Skill Builder is accessible with a 7-day trial, after which it costs $29 per month or $449 per year.

The courses take 16 hours and 30 minutes to complete.

This course is appropriate for professionals who have not worked with generative AI before, but it would help to have worked within the AWS ecosystem. In particular, Amazon Bedrock is discussed at such a level that it would be beneficial to have completed the course AWS Technical Essentials or have comparable real-world experience.

Harvard's online professional certificate combines the venerable university's Introduction to Computer Science course with another course tailored to careers in AI: Introduction to Artificial Intelligence with Python. This certification is suitable for people who want to become software developers with a focus on AI. The courses are self-paced, and students receive pre-recorded instruction from Harvard University faculty.

Both courses together cost $466.20 as of the time of writing; this is a discounted price from the usual $518. Learners can take both courses in the certification for free, but the certification itself requires a fee.

These courses are self-paced, but the estimated time to completion is five months at 7-22 hours per week.

There are no prerequisites, although a high-school level of experience with programming basics would likely provide a solid foundation. The Introduction to Computer Science course covers algorithms and programming in C, Python, SQL and JavaScript, as well as CSS and HTML.

"MIT has played a leading role in the rise of AI and the new category of jobs it is creating across the world economy," the description of the program states, summing up the educational legacy behind this course. MIT's AI and machine learning certification course for professionals is taught by MIT faculty who are working at the cutting edge of the field.

This certification program is comparable to a traditional college course, and that level of commitment is reflected in the price.

If a learner completes at least 16 days of qualifying courses, they will be eligible to receive the certificate. Courses are typically taught June, July and August online or on MITs campus.

There is an application fee of $325. The two mandatory courses are:

The remaining required 11 days can be composed of elective classes, which last between two and five days each and cost between $2,500 and $4,700 each.

16 days.

The Professional Certificate Program in Machine Learning & Artificial Intelligence is designed for technical professionals with at least three years of experience in computer science, statistics, physics or electrical engineering. In particular, MIT recommends this program for anyone whose work intersects with data analysis or for managers who need to learn more about predictive modeling.

Completion of the academically rigorous Stanford Artificial Intelligence Professional Program will result in a certification. This program is suitable for professionals who want to learn how to build AI models from scratch and then fine-tune them for their businesses. In addition, it helps professionals understand research results and conduct their own research on AI. This program offers one-to-one time with industry professionals and some flexibility: learners can take all eight courses in the program or choose individual courses.

The individual courses are:

The Stanford Artificial Intelligence Professional Program costs $1,750 per course. Learners who complete three courses will earn a certificate.

Each course lasts 10 weeks at 10 to 15 hours per week. Courses are held on set dates.

Interested professionals can submit an application; applicants are asked to prove competence in the following areas:

Udacity's Artificial Intelligence Nanodegree program equips graduates with practical knowledge about how to solve mathematical problems using artificial intelligence. This class isn't about generative AI models; instead, it teaches the underpinnings of traditional search algorithms, probabilistic graphical models, and planning and scheduling systems. Learners who complete this course will gain experience working with the types of algorithms used in the real world for:

This course costs $249 per month when paid monthly, or $846 up front for the first four months of the subscription, after which it costs $249 per month.

This course lasts about three months.

Learners in this course should have a background in programming and mathematics. The following skills are recommended:

Whether it is worth taking an AI course depends on many factors: the course, the individual and the job market. For instance, getting an AI-focused certification might contribute to getting a salary increase or making a career change. AI courses could help someone learn AI skills that might be a good fit for their abilities, or could be the first step toward a lucrative and life-long career. Educating oneself in a contemporary topic can always have some benefits in terms of practicing new skills.

Some introductory AI courses do not require coding; however, AI is a relatively complex topic in computing, and practitioners will need some programming skills as they progress to more advanced courses and learn how to build and deploy AI models. Most likely, intermediate learners need to be comfortable working in Python.


Some of these courses and certifications include education in basic programming and computer science. More advanced courses and certifications will require learners to already have a college-level knowledge of calculus, linear algebra, probability and statistics, as well as coding.

Read the original:

The 10 Best AI Courses That Are Worth Taking in 2024 - TechRepublic


Beauty Reviews Were Already Suspect. Then Came Generative AI. – The Business of Fashion

For beauty shoppers, it was already hard enough to trust reviews online.

Brands such as Sunday Riley and Kylie Skin are among those to have been caught up in scandals over fake reviews, with Sunday Riley admitting in a 2018 incident that it had tasked employees with writing five-star reviews of its products on Sephora. It downplayed the misstep at the time, arguing it would have been impossible to post even a fraction of the hundreds of thousands of Sunday Riley reviews on platforms around the globe.

Today, however, that's increasingly plausible with generative artificial intelligence.

Text-generating tools like ChatGPT, which hit the mainstream just over a year ago, make it easier to mimic real reviews faster, better and at greater scale than ever before, creating more risk of shoppers being taken in by bogus testimonials. Sometimes there are dead giveaways. "As an AI language model, I don't have a body, but I understand the importance of comfortable clothing during pregnancy," began one Amazon review of maternity shorts spotted by CNBC. But often there's no way to know.

"Back in the day, you would see broken grammar and you'd think, 'That doesn't look right. That doesn't sound human,'" said Saoud Khalifah, a former hacker and founder of Fakespot, an AI-powered tool to identify fake reviews. "But over the years we've seen that drop off. These fake reviews are getting much, much better."

Fake reviews have become an industry in themselves, driven by fraud farms that act as syndicates, according to Khalifah. A 2021 report by Fakespot found roughly 31 percent of reviews across Amazon, Sephora, Walmart, eBay, Best Buy and sites powered by Shopify (which altogether accounted for more than half of US online retail sales that year) to be unreliable.

It isn't just bots that are compromising trust in beauty reviews. The beauty industry already relies heavily on incentivised human reviewers, who receive a free product or discount in exchange for posting their opinion. It can be a valuable way for brands to get new products into the hands of their target audience and boost their volume of reviews, but consumers are increasingly suspicious of incentivised reviews, so brands should use them strategically, and should always explicitly declare them.

Sampling and review syndicators such as Influenster are keen to point out that receiving a free product does not oblige the reviewer to give positive feedback, but it's clear from the exchanges in online communities that many users of these programmes believe they will receive more freebies if they write good reviews. As one commenter wrote in a post in Sephora's online Beauty Insider community, "People don't want to stop getting free stuff if they say honest or negative things about the products they receive for free."

That practice alone can skew the customer rating of a product. On Sephora, for example, the new Ouai Hair Gloss In-Shower Shine Treatment has 1,182 reviews and a star rating of 4.3. But when filtering out incentivised reviews, just 89 remain. Sephora also doesn't recalculate the star rating after removing those reviews. Among just the non-incentivised reviews, the product's rating is 2.6 stars. The issue has sparked some frustration among members of its online community. Sephora declined to comment.
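Assuming the displayed 4.3-star figure is the simple mean over all 1,182 reviews (an assumption; Sephora does not publish how it weights ratings), the numbers quoted above pin down the average rating of the incentivised reviews alone:

```python
# Figures quoted above: 1,182 reviews averaging 4.3 stars overall,
# of which the 89 non-incentivised reviews average 2.6 stars.
total_n, total_avg = 1182, 4.3
organic_n, organic_avg = 89, 2.6

incentivised_n = total_n - organic_n  # 1,093 incentivised reviews
incentivised_avg = (total_n * total_avg - organic_n * organic_avg) / incentivised_n
print(round(incentivised_avg, 2))  # 4.44: incentivised reviews skew far higher
```

In other words, the incentivised reviews average roughly 4.4 stars against 2.6 for the organic ones, which is exactly the skew the commenters complain about.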

But the situation gets even murkier when factoring in the rise in reviews partially created by a human and partially by AI. Khalifah describes these kinds of reviews as a "hybrid monstrosity," where "it's half legit and half not, because AI is being used to fill the gaps within the review and make it look better."

The line between authentic reviews and AI-generated content is itself beginning to blur as review platforms roll out new AI-powered tools to assist their communities in writing reviews. Bazaarvoice, a platform for user-generated content which owns Influenster and works with beauty brands including L'Oréal, Pacifica, Clarins and Sephora, has recently launched three new AI-powered features, including a tool called Content Coach. The company developed the tool based on research showing that 68 percent of its community had trouble getting started when writing a review, according to Marissa Jones, Bazaarvoice senior vice president of product.

Content Coach gives users prompts of key topics to include in their review, based on common themes in other reviews. The prompts for a review of a Chanel eyeliner might include pigmentation, precision and ease of removal, for instance. As users type their review, the topic prompts light up as they are addressed, gamifying the process.

Jones stressed that the prompts are meant to be neutral. "We wanted to provide an unbiased way to give [users] some ideas," she said. "We don't want to influence their opinion or do anything that pushes them one direction or the other."

But even such seemingly innocuous AI nudges as those created by Content Coach can still influence what a consumer writes in a product review, shifting it from a spontaneous response based on considered appraisal of a product to something more programmed that requires less thought.

Fakespot's Khalifah points out that governments and regulators around the globe have been slow to act, given the speed at which the problem of fake reviews is evolving with the advancement of generative AI.

But change is finally on the horizon. In July 2023, the US Federal Trade Commission introduced the Trade Regulation Rule on the Use of Consumer Reviews and Testimonials, a new piece of regulation to punish marketers who feature fake reviews, suppress negative reviews or offer incentives for positive ones.

"Our proposed rule on fake reviews shows that we're using all available means to attack deceptive advertising in the digital age," Samuel Levine, director of the FTC's Bureau of Consumer Protection, said in a release at the time. The rule would trigger civil penalties for violators and should help level the playing field for honest companies.

In its notice of proposed rule-making, the FTC shared comments from industry players and public interest groups on the damage to consumers caused by fake reviews. Amongst these, the National Consumers League cited an estimate that, in 2021, fraudulent reviews cost US consumers $28 billion. The text also noted that the widespread emergence of AI chatbots is likely to make it easier for bad actors to write fake reviews.

In beauty, of course, the stakes are potentially higher, as fake reviews can also mislead consumers into buying counterfeit products, which represent a risk to a shopper's health and wellbeing as well as their wallet.

If the FTC's proposed rule gets the green light, as expected, it will impose civil penalties of up to $51,744 per violation. The FTC could take the position that each individual fake review constitutes a separate violation every time it is viewed by a consumer, establishing a considerable financial deterrent to brands and retailers alike.

With this tougher regulatory stance approaching, beauty brands should get their houses in order now, and see it as an opportunity rather than an imposition. There is huge potential for brands and retailers to take the lead on transparency and build an online shopping experience consumers can believe in.

Continue reading here:

Beauty Reviews Were Already Suspect. Then Came Generative AI. - The Business of Fashion


Students Are Likely Writing Millions of Papers With AI – WIRED

Students have submitted more than 22 million papers that may have used generative AI in the past year, new data released by plagiarism detection company Turnitin shows.

A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent may contain AI-written language in at least 20 percent of their content, with 3 percent of the total papers reviewed getting flagged for having 80 percent or more AI writing. (Turnitin is owned by Advance, which also owns Condé Nast, publisher of WIRED.) Turnitin says its detector has a false positive rate of less than 1 percent when analyzing full documents.
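As a quick sanity check on those figures (using integer arithmetic to avoid rounding surprises), 11 percent of the roughly 200 million papers reviewed matches the 22 million headline number:

```python
papers_reviewed = 200_000_000  # papers run through the detector, per Turnitin
flagged_some_ai = papers_reviewed * 11 // 100   # flagged: >= 20% AI-written content
flagged_mostly_ai = papers_reviewed * 3 // 100  # flagged: >= 80% AI-written content
print(flagged_some_ai, flagged_mostly_ai)  # 22000000 6000000
```

So the 3 percent bucket alone implies roughly six million papers that are mostly AI-written.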

ChatGPT's launch was met with knee-jerk fears that the English class essay would die. The chatbot can synthesize information and distill it near-instantly, but that doesn't mean it always gets it right. Generative AI has been known to hallucinate, creating its own facts and citing academic references that don't actually exist. Generative AI chatbots have also been caught spitting out biased text on gender and race. Despite those flaws, students have used chatbots for research, organizing ideas, and as a ghostwriter. Traces of chatbots have even been found in peer-reviewed, published academic writing.

Teachers understandably want to hold students accountable for using generative AI without permission or disclosure. But that requires a reliable way to prove AI was used in a given assignment. Instructors have tried at times to find their own solutions to detecting AI in writing, using messy, untested methods to enforce rules, and distressing students. Further complicating the issue, some teachers are even using generative AI in their grading processes.

Detecting the use of gen AI is tricky. It's not as easy as flagging plagiarism, because generated text is still original text. Plus, there's nuance to how students use gen AI; some may ask chatbots to write their papers for them in large chunks or in full, while others may use the tools as an aid or a brainstorm partner.

Students also aren't tempted by only ChatGPT and similar large language models. So-called word spinners are another type of AI software that rewrites text, and may make it less obvious to a teacher that work was plagiarized or generated by AI. Turnitin's AI detector has also been updated to detect word spinners, says Annie Chechitelli, the company's chief product officer. It can also flag work that was rewritten by services like spell checker Grammarly, which now has its own generative AI tool. As familiar software increasingly adds generative AI components, what students can and can't use becomes more muddled.

Detection tools themselves have a risk of bias. English language learners may be more likely to set them off; a 2023 study found a 61.3 percent false positive rate when evaluating Test of English as a Foreign Language (TOEFL) exams with seven different AI detectors. The study did not examine Turnitins version. The company says it has trained its detector on writing from English language learners as well as native English speakers. A study published in October found that Turnitin was among the most accurate of 16 AI language detectors in a test that had the tool examine undergraduate papers and AI-generated papers.

See the original post here:

Students Are Likely Writing Millions of Papers With AI - WIRED


How Ukraine is using AI to fight Russia – The Economist

IN THE run-up to Ukraine's rocket attacks on the Antonovsky Bridge, a vital road crossing from the occupied city of Kherson to the eastern bank of the Dnipro River, security officials carefully studied a series of special reports. It was the summer of 2022 and Russia was relying heavily on the bridge to resupply its troops west of the Dnipro. The reports contained research into two things: would destroying the bridge lead the Russian soldiers, or their families back home, to panic? And, more importantly, how could Ukraine's government maximise the blow to morale by creating a particular information environment?

This is how Sviatoslav Hnizdovsky, the founder of the Open Minds Institute (OMI) in Kyiv, describes the work his research outfit did in generating these assessments with artificial intelligence (AI). Algorithms sifted through oceans of Russian social-media content and socioeconomic data on things ranging from alcohol consumption and population movements to online searches and consumer behaviour. The AI correlated any changes with the evolving sentiments of Russian loyalists and liberals over the potential plight of their country's soldiers.

This highly sensitive work continues to shape important Ukrainian decisions about the course of the war, says Mr Hnizdovsky. This includes potential future strikes on Russia's Kerch Bridge, which is the only direct land link between Russia and Crimea.

Ukraine, outgunned by Russia, is increasingly seeking an edge with AI by employing the technology in diverse ways. A Ukrainian colonel involved in arms development says drone designers commonly query ChatGPT as a starting point for engineering ideas, like novel techniques for reducing vulnerability to Russian jamming. Another military use for AI, says the colonel, who requested anonymity, is to identify targets.

As soldiers and military bloggers have wisely become more careful with their posts, simple searches for any clues about the location of forces have become less fruitful. By ingesting reams of images and text, however, AI models can find potential clues, stitch them together and then surmise the likely location of a weapons system or a troop formation. Using this puzzle-pieces approach with AI allows Molfar, an intelligence firm with offices in Dnipro and Kyiv, to typically find two to five valuable targets every day, says Maksym Zrazhevsky, an analyst with the firm. Once discovered, this intelligence is quickly passed along to Ukraine's army, resulting in some of the targets being destroyed.

Targeting is being assisted by AI in other ways. SemanticForce, a firm with offices in Kyiv and Ternopil, a city in the west of Ukraine, develops models that, in response to text prompts, scrutinise online or uploaded text and images. Many of SemanticForce's clients use the system commercially to monitor public sentiment about their brands. Molfar, however, uses the model to map areas where Russian forces are likely to be low on morale and supplies, which could make them a softer target. The AI finds clues in pictures, including those from drone footage, and in soldiers' bellyaching on social media.

It also cobbles together clues about Russian military weaknesses using a sneaky proxy. For this, Molfar employs SemanticForce's AI to generate reports on the activities of Russian volunteer groups that fundraise and prepare care packages for the sections of the front most in need. The algorithms, Molfar says, do a good job of discarding potentially misleading bot posts. (Accounts with jarring political flip-flops are one tipoff.) The firm's analysts sometimes augment this intelligence by using software that disguises the origin of a phone call, so that Russian volunteer groups can be rung by staff pretending to be a Russian eager to contribute. Ten of the company's 45-odd analysts work on targeting, and do so free of charge for Ukrainian forces.

Then there is counter-intelligence. The use of AI helps Ukraine's spycatchers identify people whom Oleksiy Danilov, until recently secretary of the National Security and Defence Council (NSDC), describes as "prone to betrayal." Offers to earn money by taking geolocated pictures of infrastructure and military assets are often sent to Ukrainian phones, says Dmytro Zolotukhin, a former Ukrainian deputy minister for information policy. He recently received one such text himself. People who give this market for intelligence services a shot, he adds, are regularly nabbed by Ukraine's SBU intelligence agency.

Using AI from Palantir, an American firm, Ukrainian counter-intelligence fishes for illuminating linkages in disparate pools of data. Imagine, for instance, an indebted divorcee at risk of losing his flat and custody of his children who opens a foreign bank account and has been detected with his phone near a site that was later struck by missiles. In addition to such dot-connecting, the AI performs social-network analysis. If, say, the hypothetical divorcee has strong personal ties to Russia and has begun to take calls from someone whose phone use suggests a higher social status, then AI may increase his risk score.

The results of AI assessments of interactions among a network's nodes have been impressive for more than a decade. Kristian Gustafson, a former British intelligence officer who advised Afghanistan's interior ministry in 2013, recounts the capture of a courier transporting wads of cash for Taliban bigwigs. Their ensuing phone calls, he says, "lit up the whole diagram." Since then, algorithmic advances for calculating things like betweenness centrality, a measure of influence, make those days look, as another former intelligence officer puts it, "pretty primitive."
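Betweenness centrality, the influence measure mentioned above, credits a node with the fraction of shortest paths between every other pair of nodes that pass through it. A minimal, stdlib-only sketch on an invented toy graph (the "courier" bridging two cliques is an illustration, not data from the article):

```python
from collections import deque
from itertools import combinations

def shortest_paths(graph, s, t):
    """Enumerate all shortest paths from s to t by breadth-first search."""
    queue, found, best = deque([[s]]), [], None
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # every remaining queued path is longer than the shortest
        node = path[-1]
        if node == t:
            found.append(path)
            best = len(path)
            continue
        for nxt in graph[node]:
            if nxt not in path:  # avoid revisiting nodes (no cycles)
                queue.append(path + [nxt])
    return found

def betweenness(graph):
    """Unnormalised betweenness centrality for an undirected graph
    given as {node: set_of_neighbours}."""
    score = {v: 0.0 for v in graph}
    for s, t in combinations(graph, 2):
        paths = shortest_paths(graph, s, t)
        for v in score:
            if paths and v not in (s, t):
                score[v] += sum(v in p for p in paths) / len(paths)
    return score

# Two tight clusters joined only through a single courier: the courier
# lies on every shortest path between the clusters, so it scores highest,
# even though it has the fewest direct contacts.
graph = {
    1: {2, 3}, 2: {1, 3}, 3: {1, 2, "courier"},
    4: {5, 6, "courier"}, 5: {4, 6}, 6: {4, 5},
    "courier": {3, 4},
}
scores = betweenness(graph)
print(scores["courier"], scores[3])  # 9.0 8.0
```

The brute-force enumeration above is exponential in the worst case and only suitable for toy graphs; production tools use optimised algorithms such as Brandes's, but the ranking idea (a low-degree node can still be the network's key broker) is the same one the analysts exploit.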

In addition, network analysis helps Ukrainian investigators identify violators of sanctions on Russia. By connecting data in ship registries with financial records held elsewhere, the software can "pierce the corporate veil," a source says. Mr Zolotukhin says hackers are providing "absolutely enormous" caches of stolen business data to Ukrainian agencies. This is a boon for combating sanctions-busting.

The use of AI has been developing for some time. Volodymyr Zelensky, Ukraine's president, called in November 2019 for a massive boost in the use of the technology for national security. The result is a strategically minded model built and run by the NSDC that ingests text, statistics, photos and video. Called the Centre of Operations for Threats Assessment (COTA), it is fed a wide range of information, some obtained by hackers, says Andriy Ziuz, NSDC's head of staff. The model tracks prices, phone usage, migration, trade, energy, politics, diplomacy and military developments, down to the weapons in repair shops.

Operators at COTA call this model a "constructor." This is because it also ingests output from smaller models such as Palantir's software and Delta, which is battlefield software that supports the Ukrainian army's manoeuvre decisions. COTA's bigger-picture output provides senior officials with guidance on sensitive matters, including mobilisation policy, says Mykola Dobysh, NSDC's chief technologist. Mr Danilov notes that Mr Zelensky has been briefed on COTA's assessments on more than 130 occasions, once at 10am on the day of Russia's full invasion. Access to portions (or "circuits") of COTA is provided to some other groups, including insurers, foreign ministries and America's Department of Energy.

Ukraine's AI effort benefits from its society's broad willingness to contribute data for the war effort. Citizens upload geotagged photos potentially relevant for the country's defence into a government app called Diia (Ukrainian for "action"). Many businesses supply Mantis Analytics, a firm in Lviv, with operations data on things that range from late deliveries to call-centre activity and the setting off of burglar alarms. Recipients of the platform's assessments of societal functioning include the defence ministry and companies that seek to deploy their own security resources in better ways.

How much difference all this will ultimately make is still unclear. Evan Platt of Zero Line, an NGO in Kyiv that provides kit to troops, who spends time at the front studying fighting effectiveness, describes Ukraine's use of AI as "a bright spot." But there are concerns. One is that enthusiasm for certain AI security applications may divert resources that would provide more bang for the buck elsewhere. Excessive faith in AI is another risk, and some models on the market are certainly overhyped. More dramatically, might AI prove to be a net negative for Ukraine's battlefield performance?

A few think so. One is John Arquilla, a professor emeritus at the Naval Postgraduate School in California who has written influential books on warfare and advised Pentagon leaders. Ukraine's biggest successes came early in the war when decentralised networks of small units were encouraged to improvise. Today, Ukraine's AI "constructor" process, he argues, is centralising decision-making, snuffing out creative sparks at the edges. His assessment is open to debate. But at a minimum, it underscores the importance of human judgment in how any technology is used.

See the original post here:

How Ukraine is using AI to fight Russia - The Economist


Intel Takes Aim at Nvidia’s AI Dominance With Launch of Gaudi 3 Chip – Investopedia


Intel (INTC) unveiled its latest artificial intelligence (AI) chip, the Gaudi 3 AI accelerator, which the chipmaker claims outperforms Nvidia's (NVDA) H100, during an event on Tuesday.

The Gaudi 3 accelerator delivers "50% on average better inference and 40% on average better power efficiency" than Nvidia's H100 at "a fraction of the cost," Intel said.

The latest Intel AI chip will be available to some original equipment manufacturers (OEMs), including Dell Technologies (DELL), Hewlett Packard Enterprise (HPE), Lenovo, and Super Micro Computer (SMCI), in the second quarter of 2024.

The announcement comes as the chipmaker works to compete with other semiconductor companies leading the AI boom, including Nvidia and Advanced Micro Devices (AMD).

Intel compared its latest chip to Nvidia's H100, which was first announced in 2022. Nvidia has since unveiled the Blackwell platform, the latest version of its AI-powering tech, which analysts in March called the "most ambitious project in Silicon Valley."

Nvidia's latest chip, the GB200, "provides up to a 30x performance increase compared to the same number of NVIDIA H100 Tensor Core GPUs for LLM inference workloads, and reduces cost and energy consumption by up to 25x," the company said.

Intel shares were up 0.4% at $38.12 as of about 12:45 p.m. ET Tuesday. The stock has lost about one-fifth of its value year to date.

Follow this link:

Intel Takes Aim at Nvidia's AI Dominance With Launch of Gaudi 3 Chip - Investopedia
