
Chat GPT, artificial intelligence challenging education sector – 1News

Not all New Zealand universities have turned on plagiarism tool Turnitin's new artificial intelligence detection capability, saying risks remain.

The University of Auckland has decided against the technology for now.

"Detection is at best a short-term, partial solution and should not be relied upon, as detection methods are not 100% accurate and will be constantly evolving in the arms race between ChatGPT and emerging detection tools," a spokesperson for the university said in a statement.

Like most universities in Aotearoa, Auckland University said staff need to consider how AI technology can be used as part of student learning.

"Staff are looking at modifying assessment questions and structure, so it is more difficult and complex for AI tools like ChatGPT to produce acceptable responses.

"They are also encouraged to consider using more low-stakes, in-person assessments that place importance on the process of producing work, not the product, and a range of in-person assessment opportunities, such as presentations, podcasts, interviews, group work, etc," the spokesperson stated.

Victoria University of Wellington and the University of Otago are also not using Turnitin's AI capability, as neither has yet evaluated the technology.

The University of Canterbury and Massey University are using the tool, calling it just one part of their academic integrity process, while the University of Waikato is using the technology while testing it. Auckland University of Technology isn't using the tool but is considering it.

Turnitin says the tool detects 98% of writing created by artificial intelligence such as ChatGPT.

Turnitin Asia Pacific Regional President James Thorley said artificially-created content is here to stay for the foreseeable future, but so is the need for ensuring honesty in learning institutes.

"I don't ever see a place where people are not wanting to uphold academic integrity, and that will incorporate many aspects, including detection and potentially other matters," he said.

Victoria University of Wellington software engineering senior lecturer Dr Simon McCallum said detection is a challenge as artificial intelligence developments are making writing more "human-like".

"The new systems are coming along so quickly that any detection you come up with is almost obsolete once you release it."

McCallum is urging academics and the Government to equip themselves with an understanding of AI developments and consider how the technology will change multiple sectors and day-to-day life.

"I think there is a role for the Government to take this change very seriously and use it as part of an assessment of what we are trying to do with our education system, what is the objective of our education system, and start funding that transition because if we are going to keep up we need to have that resource, we need to spend that time," he said.

McCallum said he's never seen this rate of improvement in AI before.

"The challenge it presents universities is enormous, and I mean, it's something that we are struggling with," he said.

He thinks universities and high schools should close temporarily so educators have time to workshop how their lessons and assignments should change as a result of AI development.

"The students are racing ahead of them, and if you're in an arms race, being behind and falling further behind is just going to make things much, much worse."

Massey University has decided to counter cheating during online exams with a webcam monitoring system called Remote Proctor Now.

The majority of exams at the university are now held online.

"Due to the emergence of COVID-19 and the resulting lockdown periods, it was necessary to utilise RPNow on a large scale sooner than planned in order to meet student needs, a spokesperson for the university said in a statement."

Some students have concerns about the rollout of the technology, and student association leaders are running a student survey to hear more from them.

"Our students have raised lots of concerns about not actually knowing where their data and information is getting sent to, since the only information we have is that it's getting sent offshore," Te Tira Ahu Pae Pasifika President Aniva-Storm Feau said.

Other issues raised include access to laptops, webcams and quiet spaces in which to sit exams, and concern that neurodiverse students may be penalised by the software, which alerts the university when a student looks away from the screen.

The university said several small trials of Remote Proctor Now since 2014 had drawn positive feedback from students.

"Notably, students liked being able to sit an exam in their own environment," the spokesperson said.

"In most cases, a student will be flagged for looking away from the screen or if another person is heard or visible during the exam. It is highly unlikely this would be the only activity taken into consideration."

The statement also said trained reviewers oversee the recordings, which are "professionally handled".

The university stated it will provide alternatives for students who require equipment, and financial hardship grants may also be available to students.

See original here:
Chat GPT, artificial intelligence challenging education sector - 1News

Read More..

Artificial Intelligence Is Making Inroads Into Airline Operations In India – Simple Flying

Digitization and artificial intelligence are not mere buzzwords today. Businesses across industries are realizing the growing importance of AI and that it's only a matter of time before AI chatbots like ChatGPT feature in everyday operations. The Indian aviation sector also seems keen on the tech, with Air India already announcing its plans to use ChatGPT and IndiGo's Chief Digital and Information Officer, too, recognizing its potential.

The aviation sector is increasingly relying on digital technology and artificial intelligence to optimize everyday operations. In India, Air India seems to be the torchbearer at the moment when it comes to embracing AI, at least when it comes to publicly announcing the use of ChatGPT.

Photo: Nicolas Economou | Shutterstock

Recently, the carrier's CEO, Campbell Wilson, said that Air India would use the latest GPT-4 to improve the customer experience on its website and not just feature it as a gimmicky tool. A source also disclosed to the Economic Times that AI technology could soon be implemented to power the website's FAQ section. Air India's Chief Digital and Technology Officer Satya Ramaswamy also explained how AI could feature extensively in the running of the airline, saying,

"We do see the promise of generative AI primarily because as an airline, we are swamped with data and information. Before pilots embark on one of their long-range journeys such as going from say Mumbai to San Francisco, which takes about 14-15 hours or so, they are given a briefing document which is about 150 pages long. And this is given a few hours before the flight to depart. We are looking at using generative AI to summarise the pilot briefing to extract the most important elements and point it out to the pilots."
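
Air India has not published how such a summariser is built, but the workflow Ramaswamy describes follows a common pattern: split a long document into chunks, summarise each chunk with a large language model, then condense the partial summaries. The sketch below illustrates only that pattern, not Air India's system; the OpenAI Python client, the model name, and the briefing.txt file are all assumptions.

```python
# Minimal sketch of LLM-based briefing summarisation. Assumes the OpenAI Python
# client, a placeholder model name, and a hypothetical briefing.txt file;
# Air India's actual pipeline has not been made public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_chunk(text: str) -> str:
    """Ask the model to pull the operationally relevant items out of one chunk."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; any capable chat model would do
        messages=[
            {"role": "system",
             "content": "You extract safety-critical items from pilot briefing text: "
                        "NOTAMs, weather hazards, runway or routing changes, fuel notes."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

def summarise_briefing(path: str, chunk_chars: int = 12000) -> str:
    """Split a long briefing into chunks, summarise each, then merge the summaries."""
    full_text = open(path, encoding="utf-8").read()
    chunks = [full_text[i:i + chunk_chars] for i in range(0, len(full_text), chunk_chars)]
    partials = [summarise_chunk(c) for c in chunks]
    return summarise_chunk(
        "Combine these partial summaries into one ordered briefing summary:\n\n"
        + "\n\n".join(partials)
    )

if __name__ == "__main__":
    print(summarise_briefing("briefing.txt"))
```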

Air India's bet on AI is not surprising, considering it has committed to spending around $200 million on upgrading its IT systems as well as making other processes digitally advanced, a huge departure from when the airline was run as a state-owned entity.

Photo: Sundry Photography | Shutterstock

In a recent interview at the Asia Aviation Festival, IndiGo's Chief Digital and Information Officer Neetan Chopra also gave his perspective on the potential of AI and digital technology within the aviation space and how India and the APAC region, in particular, are well placed for an AI revolution. He said,

"I see a level of youthfulness in the workplace and country, a level of optimism for the future potential, the economy itself is high growth, and inherently there seems to be an adoption of these digital technologies. And I feel all of these are great ingredients which enable a transformation journey to occur."

Chopra said that there's plenty of scope within airline operations where AI could be of tremendous use, such as forecasting and predictive capability in core areas of aviation: best price, demand patterns, the number of meals to be loaded onto an aircraft, and so on. These are some of the areas that could benefit initially, before AI reaches other areas of running the airline.

Photo: Phuong D. Nguyen/Shutterstock

Chopra is a huge advocate of technology and is bullish about ChatGPT and AI in general. He said that innovations like ChatGPT, despite there being issues (ethics, the ability to risk-manage these algorithms), "create that excitement in enterprises to take note and then see the potential and try things out."

And, if the chief tech officer of the biggest airline in India feels this way, more mainstream use of AI could be expected within the aviation space in the country sooner rather than later.


Source: The Economic Times, World Aviation Festival

Originally posted here:
Artificial Intelligence Is Making Inroads Into Airline Operations In India - Simple Flying

Read More..

Why this Des Moines startup is bringing AI to insurance companies – Des Moines Register

Computers are going to know about customers' life changes before the customers themselves do, if Colby Tunick gets his way.

Tunick, the co-founder of artificial intelligence startup ReFocus AI, moved to Des Moines from San Diego because he believes he's found fertile ground for his product. The company bills itself as a retention tool for clients, using an artificial intelligence algorithm to alert businesses about customers who are likely to leave for a competitor.

Tunick wants to sell the product to a range of companies. But for now, he is focused on insurance, an industry that collects millions of records looking at hundreds of data points, he said.


This information is fuel for a company like ReFocus AI, where staff can input the reams of data into the algorithm to predict future customer behavior. That means there are few places better to roll out ReFocus AI than the Des Moines metro, where about 25,000 employees work at about 850 insurance firms.

And likewise, Tunick said, there are few business clusters in the United States better positioned to quickly add AI tools than Des Moines' dominant industry.

"Who else knows you more than the insurance company?" he recently told the Des Moines Register. "When you sign up for an insurance policy, you're turning over basically everything."

Tunick and his co-founder, Nisar Hundewale, acclimated themselves to Des Moines last year when they enrolled in the Global Insurance Accelerator, a startup program at 321 E. Walnut St. Though he still lives in California, Tunick keeps a desk at the accelerator's office and said he wants to continue to work in town at least once a week. (Hundewale still works mostly in Alexandria, Virginia.)

Tunick said the company wants to hire three software engineers, preferably in Des Moines. The company deepened its local roots April 4, when it became one of 12 businesses to form the latest cohort at BrokerTech Ventures, the city's other insurance technology accelerator.


Dan Israel, managing director of the Global Insurance Accelerator, said he has seen several good AI pitches in recent years. Companies sell products like image recognition software, which tells insurers whether homes have looming problems.

But Israel believes Tunick and Hundewale offer a product that will appeal to even the least cutting-edge customers.

"(Tunick has) a good passion for what they're trying to sell and what they're trying to build," Israel said. "One of the key things we're looking at is, 'Are the founders really trying to solve something?' Instead of just, 'We're a solution to a problem.'"

Tunick said he began mulling how to launch an AI company while working as an associate governmental program analyst at the California Earthquake Authority, a publicly managed firm that sells earthquake insurance policies through other companies.

Tunick said he helped executives and computer programmers communicate, speaking both sides' language. He also sat in on high-level meetings, watching as board members grilled executives about how customers would react to premiums and new product lines.

He said the executives didn't have confident answers, but he believed they would be able to present precise projections if they analyzed several decades' worth of past customer information.

"They didn't know they had the data," he said.


Tunick launched ReFocus AI with a couple of friends while still at the earthquake authority in 2019. While insurance companies have a roster of well-trained actuaries who know how to project profits and risk for their customers, Tunick believed only the largest firms knew how to use that information for other parts of their business, such as sales and customer services.

"Advanced analytics are essentially limited to the Top 10 (U.S. insurance companies)," he said. "That leaves a huge playing field of customers who want this technology. But essentially, because of their size, they can't afford it."

After his friends dropped out of the business, Tunick posted on AngelList, an online job board for startups and investors. There, he received a solicitation from Hundewale.

Holding a doctorate in computer science, Hundewale had worked as a consultant for companies like Walmart and Johnson & Johnson. He said he used customer data to help retailers create targeted ads. He also used computer models to help them determine prices for their products and how much inventory to order at a given time.

But Hundewale said he wanted to be part of a startup, where he could help build a business. When he learned about ReFocus AI, he said he understood how he could take advantage of the deep reams of data that insurers collect.


In Tunick, he also found a good match.

"I wanted another half who knew the business side, would know how to sell the product," he said. "I don't know how to sell it."

Said Tunick: "It could have gone either way, for both of us. When you meet somebody online, you don't know who they are. But we both just jumped in feet first."

Israel, the Global Insurance Accelerator director, said he learned of ReFocus AI from someone else who works in insurance technology. He reached out to Tunick, and the two met at InsureTech Connect, a trade show.

He said he could see the company's potential and badgered Tunick to apply for the accelerator. (Tunick also remained at the earthquake authority until last August.)

"It's not so much that there's a lack of data," Israel said when asked why companies can't build AI tools in-house. "And it's not so much that there's very, very smart people in insurance companies that can use the data. Sometimes, you just don't know what you've got. Or you can't access it the right way."

Tunick said the company has tried to help businesses in several different ways. ReFocus AI has suggested which customers might be most likely to buy extra insurance plans from a company. The company also suggested which customers might be open to spending more for a more generous plan.


But he said that a company he worked with suggested ReFocus AI should dedicate its time to building algorithms that identify customers who are likely to leave. This is the area where smaller and mid-sized companies need the most help, Tunick said he learned.

"They're already doing that work anyway," he said. "We're just helping them do it smarter and faster."
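
ReFocus AI has not disclosed its model or feature set, so the sketch below is only a generic illustration of how a retention model of this kind is commonly built: a classifier trained on historical policyholder records, then used to rank the current book of business by churn risk. The features, the toy data and the scikit-learn model choice are all assumptions.

```python
# Generic churn-prediction sketch with made-up policyholder features;
# not ReFocus AI's actual model, which has not been published.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical historical book of business: one row per policyholder,
# "churned" marks customers who did not renew.
df = pd.DataFrame({
    "tenure_years":     [1, 7, 3, 10, 2, 5, 8, 1, 4, 6],
    "claims_last_year": [2, 0, 1, 0, 3, 0, 1, 2, 0, 0],
    "premium_increase": [0.15, 0.02, 0.10, 0.00, 0.20, 0.05, 0.03, 0.18, 0.07, 0.01],
    "churned":          [1, 0, 1, 0, 1, 0, 0, 1, 0, 0],
})

X, y = df.drop(columns="churned"), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("Holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Score the current book and surface the customers most at risk of leaving,
# which is the list an agent would work through first.
df["churn_risk"] = model.predict_proba(X)[:, 1]
print(df.sort_values("churn_risk", ascending=False).head())
```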

Chad Combs, a vice president of personal lines underwriting at Ohio Mutual Insurance Group, said his company began working with ReFocus AI to improve Ohio Mutual's customer retention tool about a year ago. The new program, which agents will begin using in April, should tell them which customers to focus on.

Combs said ReFocus AI built an "enhanced" version of the tool that the company previously used. He added that, unlike bigger and established AI companies, Tunick and Hundewale built an algorithm to Ohio Mutual's specific requests.

"They're smart people," he said. "But they're nimble. They're small. They can be creative."

See more here:
Why this Des Moines startup is bringing AI to insurance companies - Des Moines Register

Read More..

Artificial Intelligence: 3 Things Travel Executives Should Know Now – Skift Travel News


It's the Year of AI, and artificial intelligence (AI) is everywhere, including in a growing number of products and solutions for the travel industry. But AI isn't a monolith. For all the promises attributed to the concept, it's not always easy for executives and other decision-makers to understand what they should be deploying, how it affects their current systems and processes, and how it will ultimately benefit the bottom line.

By exploring how AI works, its varying practical applications, and how it can exponentially scale data intake and analysis, travel companies can better understand where they should take their AI roadmaps in 2023. Here are three things every travel executive should be thinking about as they embark upon that journey.

The news has been abuzz about ChatGPT, DALL-E, and other so-called generative AI programs that can create new, unique outputs based on specific prompts they're given. It's an exciting space that has deep relevance for the travel industry, but generative uses are still in the early developmental stages. Today, it's important for executives to understand that AI comes in many different forms.

"[Generative AI] is an interesting space, but we are not there yet for airlines," said Kartik Yellepeddi, vice president of ML and AI strategy for FLYR Labs. "You can't expect to generate new pricing strategies out of the blue yet."

In the travel industry, supervised uses of AI are much more controlled than the generative applications that have been popular in the news, Yellepeddi said.

So how does a supervised learning model work? Using airline revenue management as an example, an AI model will label historical outcomes for pricing as good or bad based on how given actions contributed to the ultimate goal of maximizing revenue. The AI can then assess new variables and suggest pricing modifications consistent with those good decisions. Through thousands of inputs and repetitions of this action daily, it trains to do more of the good and less of the bad and becomes smarter as time progresses. And at some point, it's learning from itself.
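
As a rough illustration of that loop, and not FLYR's production system, the sketch below labels invented historical pricing actions as good or bad by their simulated revenue outcome, trains a classifier on those labels, and then scores a set of candidate price changes for today's snapshot of a flight, picking the one most consistent with historically good decisions. Every feature and number here is made up.

```python
# Toy sketch of a supervised pricing loop; the features, the fake revenue rule
# and the model choice are illustrative assumptions, not FLYR's system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Historical snapshots: [days_to_departure, load_factor, competitor_price_gap, price_change]
X_hist = rng.uniform([0, 0.2, -50, -20], [300, 1.0, 50, 20], size=(5000, 4))

# Label each past action "good" (1) if it beat the median revenue outcome,
# here faked with a simple rule plus noise.
revenue_lift = (-0.01 * X_hist[:, 0] + 2.0 * X_hist[:, 1] + 0.02 * X_hist[:, 2]
                - 0.03 * np.abs(X_hist[:, 3]) + rng.normal(0, 0.3, 5000))
y_hist = (revenue_lift > np.median(revenue_lift)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_hist, y_hist)

# For today's snapshot of one flight, score candidate price changes and pick
# the one most consistent with historically "good" decisions.
days_out, load_factor, comp_gap = 45.0, 0.63, -12.0
candidates = np.arange(-20, 21, 5, dtype=float)
contexts = np.column_stack([
    np.full_like(candidates, days_out),
    np.full_like(candidates, load_factor),
    np.full_like(candidates, comp_gap),
    candidates,
])
scores = model.predict_proba(contexts)[:, 1]
print("Recommended price change:", candidates[scores.argmax()])
```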

"AI is data hungry, but the advantage is that it's highly scalable," said Yellepeddi. "The art is in how you design it, and it can theoretically learn any task you give it. If there is a pattern out there, it's able to learn that pattern and recommend what action to take to maximize the reward it's trained to seek."

To understand this better, take the case of optimizing the price for any given flight, which typically opens for sale 300 days before departure. Every day, thousands of variables, such as new bookings, changes in search volume, competitor sales, and pricing changes, affect the flight's potential price and final outcome. AI can analyze this constantly shifting context in a way that's impossible for a human to do independently, providing pricing analysts with a depth of information that was not previously available.

How to offer ancillary products, when to overbook, how to price cargo space, and how to deploy marketing dollars are other ways that airlines and travel companies can take advantage of AI models to improve their decision-making.

"We realized that airline pricing and forecasting was a generic use case and that you can apply the same machine learning technology to a number of other important commercial functions," said Yellepeddi.

These types of practical, day-to-day uses allow companies to dip their toes in the water and deploy AI capabilities while operating in relatively low-risk scenarios.

As travel companies look to take advantage of AI opportunities on a long-term, organization-wide basis, they must be ready to invest time and technology into shifting how they operate.

For example, revenue management systems have been historically built on fixed-growth scenarios, looking broadly at year-over-year changes.

"Historic revenue management systems were tasked to do one thing, price the flight, and now there are 10 or 15 transactions occurring with the same customers during the same trip, from ancillaries to other offers," Yellepeddi said.

The rate of change has significantly accelerated in today's travel environment, and AI can become an asset by being far more dynamic and reactive than humans. Cloud technology allows companies to be more flexible in their data storage, analysis, and application, while legacy systems with fixed servers aren't built to scale in this way. Because fixed servers have fixed costs and fixed capacity, companies are unable to give the AI the freedom to use all the available data, because they have to make predetermined decisions about how much information they can reasonably manage. That, in effect, hinders their ability to scale and take full advantage of running sophisticated AI models.

"Cloud has really changed the landscape," said Yellepeddi. "Most importantly, it allows you to use all the data in the decision-making process."

One of the most important things for executives to consider when using AI is that they'll have to relinquish some level of control and trust the technology.

If there are thousands of data points related to pricing generated every day, analysts might reasonably be able to look at a few hundred of them. It's the responsibility of the AI not only to look at all of those data points but also to flag which ones require human attention to drive meaningful outcomes. Building a deeper level of trust enhances analysts' abilities to use the information to optimize their recommendations.
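
One simple way to picture that division of labour, as an illustration rather than FLYR's implementation, is an escalation rule: every recommendation carries a model confidence and an estimate of the revenue at stake, and only the cases that are uncertain or expensive to get wrong are routed to an analyst. The fields and thresholds below are invented.

```python
# Hypothetical escalation rule: auto-apply confident, low-stakes recommendations
# and queue the rest for an analyst. Thresholds and data are made up.
from dataclasses import dataclass

@dataclass
class Recommendation:
    flight: str
    price_change: float      # suggested fare adjustment, in dollars
    confidence: float        # model confidence in [0, 1]
    revenue_at_stake: float  # estimated revenue affected, in dollars

def needs_human_review(rec: Recommendation,
                       min_confidence: float = 0.8,
                       max_auto_revenue: float = 50_000) -> bool:
    """Escalate when the model is unsure or the decision is expensive to get wrong."""
    return rec.confidence < min_confidence or rec.revenue_at_stake > max_auto_revenue

recs = [
    Recommendation("XX101", +12.0, 0.95, 8_000),
    Recommendation("XX202", -25.0, 0.62, 30_000),
    Recommendation("XX303", +5.0, 0.90, 120_000),
]

for rec in recs:
    route = "analyst queue" if needs_human_review(rec) else "auto-apply"
    print(f"{rec.flight}: {route}")
```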

The strength of AI is not necessarily the ability to be right all the time but rather its ability to react to situations quickly, continuously explore and exploit market opportunities, and learn from its mistakes faster on a larger scale than humans.

From that perspective, an important recent evolution of AI has been improved explainability, that is, being able to show its work. AI models aren't just spitting out decisions; they're now also able to provide information on how they came to those decisions.

"If the good decisions outweigh the bad ones in the end, how it makes the revenue should matter less as long as you have visibility into the decision-making process when needed," Yellepeddi said. "It's important to build that trust to drive adoption of any advanced AI technology."

AI's effect on productivity and its ability to exponentially scale data analysis and decision-making, as well as learn on the job, show its work, and escalate to humans when required, will drive automation, efficiency, and profitability across the travel industry, from revenue management and marketing to cargo, maintenance, and more. By taking advantage of practical commercial opportunities today, executives will also set themselves up to understand and integrate cutting-edge AI applications as they come online in the near future.

For more information about FLYR and its commercial intelligence and optimization platform that leverages AI and deep learning, visit http://www.flyrlabs.com.

This content was created collaboratively by FLYR and Skift's branded content studio, SkiftX.

View post:
Artificial Intelligence: 3 Things Travel Executives Should Know Now - Skift Travel News

Read More..

Paper Acquires Readlee To Boost Literacy Features With Artificial … – T.H.E. Journal

Reading Supports

Educational support platform and tutoring provider Paper has acquired Readlee, whose software uses artificial intelligence and speech-recognition technology to help students improve their reading skills, according to a recent announcement by Paper.

Readlee, founded by two educators in collaboration with Harvard researchers, said in a letter announcing the acquisition that current users will continue to have access through the current school year. Readlee, which has offered educators an always-free plan and a premium plan with more features, said more information about changes to pricing and plans will be announced in the near future.

Readlee will become Paper Reading and will be fully integrated into Paper's educational support system platform, where it will be available to over 3 million students across the United States, according to Paper's news release.

Readlee listens to students as they read aloud and provides immediate feedback, individualized support, and measurable success using the latest in AI, speech recognition, and learning research, particularly capitalizing on scientific research showing that reading aloud improves memory, vocabulary, and confidence.

Statistics show that students who engage in reading-aloud exercises 20 minutes a day are likely to score better than 90% of their peers on standardized tests. Paper Reading will allow students to practice reading aloud from any content in any subject, the company said.

Learn more at Paper.co.

About the Author

Kristal Kuykendall is editor, 1105 Media Education Group. She can be reached at [emailprotected].

See more here:
Paper Acquires Readlee To Boost Literacy Features With Artificial ... - T.H.E. Journal

Read More..

Study sheds light on the dark side of artificial intelligence – Troy Media


To understand how to get artificial intelligence right, we need to know how it can go wrong

Artificial intelligence is touted as a panacea for almost every computational problem these days, from medical diagnostics to driverless cars to fraud prevention.

But when AI fails, it does so "quite spectacularly," says Vern Glaser of the Alberta School of Business. In his recent study, "When Algorithms Rule, Values Can Wither," Glaser explains how human values are often subsumed by AI's efficiency imperative, and why the costs can be high.

"If you don't actively try to think through the value implications, it's going to end up creating bad outcomes," he says.

Glaser cites Microsoft's Tay as one example of bad outcomes. When the chatbot was introduced on Twitter in 2016, it was revoked within 24 hours after trolls taught it to spew racist language.

Then there was the robodebt scandal of 2015, when the Australian government used AI to identify overpayments of unemployment and disability benefits. But the algorithm presumed every discrepancy reflected an overpayment and automatically sent notification letters demanding repayment. If someone didn't respond, the case was forwarded to a debt collector.

By 2019, the program identified more than 734,000 overpayments worth two billion Australian dollars (C$1.8 billion).

"The idea was that by eliminating human judgment, which is shaped by biases and personal values, the automated program would make better, fairer and more rational decisions at much lower cost," says Glaser.

But the human consequences were dire, he says. Parliamentary reviews found "a fundamental lack of procedural fairness" and called the program "incredibly disempowering to those people who had been affected, causing significant emotional trauma, stress and shame," including at least two suicides.

"While AI promises to bring enormous benefits to society, we are now also beginning to see its dark underbelly," says Glaser. In a recent Globe and Mail column, Lawrence Martin points out AI's dystopian possibilities, including autonomous weapons that can fire without human supervision, cyberattacks, deepfakes (a type of artificial intelligence used to create convincing images, audio and video hoaxes) and disinformation campaigns. Former Google CEO Eric Schmidt has warned that AI could easily be used to construct killer biological weapons.

Glaser roots his analysis in French philosopher Jacques Ellul's notion of "technique," offered in his 1954 book The Technological Society, by which the imperatives of efficiency and productivity determine every field of human activity.

"Ellul was very prescient," says Glaser. "His argument is that when you're going through this process of technique, you are inherently stripping away values and creating this mechanistic world where your values essentially get reduced to efficiency.

"It doesn't matter whether it's AI or not. AI, in many ways, is perhaps only the ultimate example of it."

Glaser suggests adherence to three principles to guard against the tyranny of technique in AI. First, recognize that because algorithms are mathematical, they rely on proxies, or digital representations of real phenomena.

One way Facebook gauges friendship, for example, is by how many friends a user has, or by the number of likes received on posts from friends.

"Is that really a measure of friendship? It's a measure of something, but whether it's actually friendship is another matter," says Glaser, adding that the intensity, nature, nuance and complexity of human relationships can easily be overlooked.

"When you're digitizing phenomena, you're essentially representing something as a number. And when you get this kind of operationalization, it's easy to forget it's a stripped-down version of whatever the broader concept is."

For AI designers, Glaser recommends strategically inserting human interventions into algorithmic decision-making and creating evaluative systems that account for multiple values.

"There's a tendency when people implement algorithmic decision-making to do it once and then let it go," he says, but AI that embodies human values requires vigilant and continuous oversight to prevent its ugly potential from emerging.

In other words, AI is simply a reflection of who we are at our best and our worst. Without a good, hard look in the mirror, the latter could take over.

"We want to make sure we understand what's going on so the AI doesn't manage us," he says. "It's important to keep the dark side in mind. If we can do that, it can be a force for social good."

| By Geoff McMaster

This article was submitted by the University of Alberta's Folio online magazine, a Troy Media Editorial Content Provider Partner.


View post:
Study sheds light on the dark side of artificial intelligence - Troy Media

Read More..

Qualcomm’s ‘Cloud AI 100’ Beats Nvidia’s Best Artificial Intelligence … – Times of San Diego

Cards with Cloud AI 100 chips in a data center server. Image from Qualcomm video

Artificial intelligence chips from San Diego's Qualcomm beat those from Nvidia in two out of three measures of power efficiency in a new set of test data published on Wednesday.

Nvidia dominates the market for training AI models with huge amounts of data. But after those AI models are trained, they are put to wider use in what is called inference by doing tasks like generating text responses to prompts and deciding whether an image contains a cat.

Analysts believe that the market for data center inference chips will grow quickly as businesses put AI technologies into their products, but companies such as Google are already exploring how to keep the lid on the extra costs that doing so will add.

One of those major costs is electricity, and Qualcomm has used its history designing chips for battery-powered devices such as smartphones to create a chip called the Cloud AI 100 that aims for parsimonious power consumption.

In testing data published on Wednesday by MLCommons, an engineering consortium that maintains testing benchmarks widely used in the AI chip industry, Qualcomm's AI 100 beat Nvidia's flagship H100 chip at classifying images, based on how many data center server queries each chip can carry out per watt.

Qualcomm's chips hit 197.6 server queries per watt versus 108.4 queries per watt for Nvidia. Neuchips, a startup founded by veteran Taiwanese chip academic Youn-Long Lin, took the top spot with 227 queries per watt.

Qualcomm also beat Nvidia at object detection with a score of 3.2 queries per watt versus Nvidia's 2.4 queries per watt. Object detection can be used in applications like analyzing footage from retail stores to see where shoppers go most often.

Nvidia, however, took the top spot in both absolute performance terms and power efficiency terms in a test of natural language processing, which is the AI technology most widely used in systems like chatbots. Nvidia hit 10.8 samples per watt, while Neuchips ranked second at 8.9 samples per watt and Qualcomm was in third place at 7.5 samples per watt.
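
The per-watt figures quoted above are easier to compare side by side. The short script below simply restates the article's numbers and computes each chip's efficiency relative to the per-task leader; it adds no data beyond that arithmetic.

```python
# MLPerf efficiency figures as reported in the article, in queries (or samples)
# per watt, with each chip expressed as a fraction of the per-task leader.
results = {
    "image classification": {"Qualcomm AI 100": 197.6, "Nvidia H100": 108.4, "Neuchips": 227.0},
    "object detection":     {"Qualcomm AI 100": 3.2, "Nvidia H100": 2.4},
    "natural language":     {"Nvidia H100": 10.8, "Neuchips": 8.9, "Qualcomm AI 100": 7.5},
}

for task, scores in results.items():
    leader = max(scores, key=scores.get)
    print(f"{task}: leader is {leader}")
    for chip, per_watt in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"  {chip}: {per_watt:.1f} per watt ({per_watt / scores[leader]:.0%} of leader)")
```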

Read more here:
Qualcomm's 'Cloud AI 100' Beats Nvidia's Best Artificial Intelligence ... - Times of San Diego

Read More..

World Health Day: How Will AI Impact the Future of Healthcare … – The Weather Channel


A few decades from now, healthcare as we know it will see a fundamental shift. In fact, the transformation is already underway, driven by technological integration and innovation in healthcare. At the forefront of this revolution is the buzzword of the era: Artificial Intelligence (AI).

At this moment in the history of civilisation, it is 'virtually' impossible to visualise modern life without artificial intelligence enabling our day-to-day life in some way or the other. From social media and self-driving cars to classrooms and homes, AI is everywhere!

On this World Health Day, let's take a look at what could be the future of 'Health For All,' with digitisation, emerging technologies and AI leading the way.

At the outset, three megatrends are driving AI innovation in healthcare, as highlighted in this year's World Economic Forum report.

Firstly, what we have before us is a 'data deluge' flooding the medical systems. The doubling time for medical knowledge in 1950 was 50 years; in 2020 it was just 73 days! With some technological help, this immense data, from new findings to day-to-day patient information, can be streamlined to suit our needs.

Moreover, when such data is fed into digital systems, we have a vast repository to train machines to aid with diagnosis and treatment, improving accuracy, reducing errors, providing early detection, and also predicting the risk of life-threatening diseases well in advance. For instance, the most common form of pancreatic cancer has a five-year survival rate of less than 10%; but with earlier detection, it's 50%!

Secondly, these technologies are envisioned to affect not only patient care, but also ease the burdens of healthcare professionals when faced with novel problems they haven't witnessed before. A prime example is the COVID-19 pandemic, which pushed global healthcare systems to the brink during its peak.

And thirdly, we are in the midst of a technological renaissance, the floodgates to which have been opened with the launch of ChatGPT, a language model trained on massive volumes of internet text. And early adopters are already on it! ChatGPT has become a high-level technological assistant to medical professionals, aiding with mundane tasks such as medical paperwork, patient certificates and letters.

But it could also aid in more serious medical activities such as triage, that is, moving people, resources and supplies to where they are needed most. It could help with research studies as well, streamlining tasks like the selection and enrollment of participants in clinical trials.

At the same time, we must remember that there are profound ethical implications associated with such advancements. The first and foremost concerns stem from privacy and confidentiality, the foundation of doctor-patient relationships.

An article published in The Conversation states: "If identifiable patient information is fed into Chat GPT, it forms part of the information that the chatbot uses in future. In other words, sensitive information is out there and vulnerable to disclosure to third parties."

Another concern pertains to the efficiency and quality of such databases. Outdated references won't cut it when it comes to sensitive sectors like healthcare. This calls for equipping such databases with robust designs that provide accurate, real-time references.

Finally, we have the issues of equity and governance looming before us. More often than not, the benefits and risks of emerging technologies tend to be unevenly distributed between countries, especially in the absence of strong global guidelines.

It isn't easy to gauge the exact implications of artificial intelligence and emerging technologies from where we stand, but chances are they will become clearer as their use increases in the future. However, addressing the ethical concerns plaguing the sector should be a priority for governments worldwide going forward.


Original post:
World Health Day: How Will AI Impact the Future of Healthcare ... - The Weather Channel

Read More..

In A.I. Race, Microsoft and Google Choose Speed Over Caution – The New York Times

In March, two Google employees, whose jobs are to review the company's artificial intelligence products, tried to stop Google from launching an A.I. chatbot. They believed it generated inaccurate and dangerous statements.

Ten months earlier, similar concerns were raised at Microsoft by ethicists and other employees. They wrote in several documents that the A.I. technology behind a planned chatbot could flood Facebook groups with disinformation, degrade critical thinking and erode the factual foundation of modern society.

The companies released their chatbots anyway. Microsoft was first, with a splashy event in February to reveal an A.I. chatbot woven into its Bing search engine. Google followed about six weeks later with its own chatbot, Bard.

The aggressive moves by the normally risk-averse companies were driven by a race to control what could be the tech industry's next big thing: generative A.I., the powerful new technology that fuels those chatbots.

That competition took on a frantic tone in November when OpenAI, a San Francisco start-up working with Microsoft, released ChatGPT, a chatbot that has captured the public imagination and now has an estimated 100 million monthly users.

The surprising success of ChatGPT has led to a willingness at Microsoft and Google to take greater risks with their ethical guidelines set up over the years to ensure their technology does not cause societal problems, according to 15 current and former employees and internal documents from the companies.

The urgency to build with the new A.I. was crystallized in an internal email sent last month by Sam Schillace, a technology executive at Microsoft. He wrote in the email, which was viewed by The New York Times, that it was an "absolutely fatal error in this moment to worry about things that can be fixed later."

"When the tech industry is suddenly shifting toward a new kind of technology, the first company to introduce a product is the long-term winner just because they got started first," he wrote. "Sometimes the difference is measured in weeks."

Last week, tension between the industry's worriers and risk-takers played out publicly as more than 1,000 researchers and industry leaders, including Elon Musk and Apple's co-founder Steve Wozniak, called for a six-month pause in the development of powerful A.I. technology. In a public letter, they said it presented "profound risks to society and humanity."

Regulators are already threatening to intervene. The European Union proposed legislation to regulate A.I., and Italy temporarily banned ChatGPT last week. In the United States, President Biden on Tuesday became the latest official to question the safety of A.I.

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today's powerhouses into has-beens and creating the industry's next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT's debut, Microsoft, OpenAI's primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot's occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Ernie. The search giant Baidu unveiled China's first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised live demonstration of the bot was revealed to have been recorded.

Tech companies "have a responsibility to make sure their products are safe before making them public," he said at the White House. When asked if A.I. was dangerous, he said: "It remains to be seen. Could be."

The issues being raised now were once the kinds of concerns that prompted some companies to sit on new technology. They had learned that prematurely releasing A.I. could be embarrassing. Five years ago, for example, Microsoft quickly pulled a chatbot called Tay after users nudged it to generate racist responses.

Researchers say Microsoft and Google are taking risks by releasing technology that even its developers don't entirely understand. But the companies said that they had limited the scope of the initial release of their new chatbots, and that they had built sophisticated filtering systems to weed out hate speech and content that could cause obvious harm.

Natasha Crampton, Microsoft's chief responsible A.I. officer, said in an interview that six years of work around A.I. and ethics at Microsoft had allowed the company to move nimbly and thoughtfully. She added that "our commitment to responsible A.I. remains steadfast."

Google released Bard after years of internal dissent over whether generative A.I.'s benefits outweighed the risks. It announced Meena, a similar chatbot, in 2020. But that system was deemed too risky to release, three people with knowledge of the process said. Those concerns were reported earlier by The Wall Street Journal.

Later in 2020, Google blocked its top ethical A.I. researchers, Timnit Gebru and Margaret Mitchell, from publishing a paper warning that so-called large language models used in the new A.I. systems, which are trained to recognize patterns from vast amounts of data, could spew abusive or discriminatory language. The researchers were pushed out after Ms. Gebru criticized the company's diversity efforts and Ms. Mitchell was accused of violating its code of conduct after she saved some work emails to a personal Google Drive account.

Ms. Mitchell said she had tried to help Google release products responsibly and avoid regulation, but instead "they really shot themselves in the foot."

Brian Gabriel, a Google spokesman, said in a statement that "we continue to make responsible A.I. a top priority, using our A.I. principles and internal governance structures to responsibly share A.I. advances with our users."

Concerns over larger models persisted. In January 2022, Google refused to allow another researcher, El Mahdi El Mhamdi, to publish a critical paper.

Mr. El Mhamdi, a part-time employee and university professor, used mathematical theorems to warn that the biggest A.I. models are more vulnerable to cybersecurity attacks and present unusual privacy risks because they've probably had access to private data stored in various locations around the internet.

Though an executive presentation later warned of similar A.I. privacy violations, Google reviewers asked Mr. El Mhamdi for substantial changes. He refused and released the paper through École Polytechnique.

He resigned from Google this year, citing in part research censorship. He said modern A.I.'s risks "highly exceeded" the benefits. "It's premature deployment," he added.

After ChatGPT's release, Kent Walker, Google's top lawyer, met with research and safety executives on the company's powerful Advanced Technology Review Council. He told them that Sundar Pichai, Google's chief executive, was pushing hard to release Google's A.I.

Jen Gennai, the director of Google's Responsible Innovation group, attended that meeting. She recalled what Mr. Walker had said to her own staff.

"The meeting was Kent talking at the A.T.R.C. execs, telling them, 'This is the company priority,'" Ms. Gennai said in a recording that was reviewed by The Times. "'What are your concerns? Let's get in line.'"

Mr. Walker told attendees to fast-track A.I. projects, though some executives said they would maintain safety standards, Ms. Gennai said.

Her team had already documented concerns with chatbots: They could produce false information, hurt users who become emotionally attached to them and enable tech-facilitated violence through mass harassment online.

In March, two reviewers from Ms. Gennai's team submitted their risk evaluation of Bard. They recommended blocking its imminent release, two people familiar with the process said. Despite safeguards, they believed the chatbot was not ready.

Ms. Gennai changed that document. She took out the recommendation and downplayed the severity of Bard's risks, the people said.

Ms. Gennai said in an email to The Times that because Bard was an experiment, reviewers were not supposed to weigh in on whether to proceed. She said she "corrected inaccurate assumptions, and actually added more risks and harms that needed consideration."

Google said it had released Bard as a limited experiment because of those debates, and Ms. Gennai said continuing training, guardrails and disclaimers made the chatbot safer.

Google released Bard to some users on March 21. The company said it would soon integrate generative A.I. into its search engine.

Satya Nadella, Microsoft's chief executive, made a bet on generative A.I. in 2019 when Microsoft invested $1 billion in OpenAI. After deciding the technology was ready over the summer, Mr. Nadella pushed every Microsoft product team to adopt A.I.

Microsoft had policies developed by its Office of Responsible A.I., a team run by Ms. Crampton, but the guidelines were not consistently enforced or followed, said five current and former employees.

Despite having a transparency principle, ethics experts working on the chatbot were not given answers about what data OpenAI used to develop its systems, according to three people involved in the work. Some argued that integrating chatbots into a search engine was a particularly bad idea, given how it sometimes served up untrue details, a person with direct knowledge of the conversations said.

Ms. Crampton said experts across Microsoft worked on Bing, and key people had access to the training data. The company worked to make the chatbot more accurate by linking it to Bing search results, she added.

In the fall, Microsoft started breaking up what had been one of its largest technology ethics teams. The group, Ethics and Society, trained and consulted company product leaders to design and build responsibly. In October, most of its members were spun off to other groups, according to four people familiar with the team.

The remaining few joined daily meetings with the Bing team, racing to launch the chatbot. John Montgomery, an A.I. executive, told them in a December email that their work remained vital and that "more teams will also need our help."

After the A.I.-powered Bing was introduced, the ethics team documented lingering concerns. Users could become too dependent on the tool. Inaccurate answers could mislead users. People could believe the chatbot, which uses an "I" and emojis, was human.

In mid-March, the team was laid off, an action that was first reported by the tech newsletter Platformer. But Ms. Crampton said hundreds of employees were still working on ethics efforts.

Microsoft has released new products every week, a frantic pace to fulfill plans that Mr. Nadella set in motion in the summer when he previewed OpenAI's newest model.

He asked the chatbot to translate the Persian poet Rumi into Urdu, and then English. "It worked like a charm," he said in a February interview. "Then I said, 'God, this thing.'"

Mike Isaac contributed reporting. Susan C. Beachy contributed research.

More:
In A.I. Race, Microsoft and Google Choose Speed Over Caution - The New York Times

Read More..

How Artificial Intelligence is Shaking Up the Music Industry – The National Law Review


See the original post here:
How Artificial Intelligence is Shaking Up the Music Industry - The National Law Review

Read More..