
FDA Authorizes Marketing of First Cardiac Ultrasound Software That Uses Artificial Intelligence to Guide User – FDA.gov

For Immediate Release: February 07, 2020

Today, the U.S. Food and Drug Administration authorized marketing of software to assist medical professionals in the acquisition of cardiac ultrasound, or echocardiography, images. The software, called Caption Guidance, is an accessory to compatible diagnostic ultrasound systems and uses artificial intelligence to help the user capture images of a patient's heart that are of acceptable diagnostic quality.

The Caption Guidance software is indicated for use in ultrasound examination of the heart, known as two-dimensional transthoracic echocardiography (2D-TTE), for adult patients, specifically in the acquisition of standard views of the heart from different angles. These views are typically used in the diagnosis of various cardiac conditions.

"Echocardiograms are one of the most widely used diagnostic tools in the diagnosis and treatment of heart disease," said Robert Ochs, Ph.D., deputy director of the Office of In Vitro Diagnostics and Radiological Health in the FDA's Center for Devices and Radiological Health. "Today's marketing authorization enables medical professionals who may not be experts in ultrasonography, such as a registered nurse in a family care clinic or others, to use this tool. This is especially important because it demonstrates the potential for artificial intelligence and machine learning technologies to increase access to safe and effective cardiac diagnostics that can be life-saving for patients."

According to the Centers for Disease Control and Prevention, heart disease is the leading cause of death in the United States, killing one out of every four people, or approximately 647,000 Americans each year. The term heart disease refers to several types of heart conditions. The most common type is coronary artery disease, which can cause heart attack. Other kinds of heart disease may involve the valves in the heart, or the heart may not pump well and cause heart failure.

Cardiac diagnostic tests are necessary to identify heart conditions. Among them are electrocardiograms (more widely known as an EKG or ECG), Holter monitors and cardiac ultrasound examinations. The software authorized today is the first to guide users through cardiac ultrasound image acquisition. The Caption Guidance software was developed using machine learning to train the software to differentiate between acceptable and unacceptable image quality. This knowledge formed the basis of an interactive AI user interface that provides prescriptive guidance to users on how to maneuver the ultrasound probe to acquire standard echocardiographic images and video clips of diagnostic quality. The AI interface provides real-time feedback on potential image quality, can auto-capture video clips, and automatically saves the best video clip acquired from a particular view. Importantly, a cardiologist still reviews the acquired images and videos for a final assessment in patient evaluation.
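The training approach described above, learning to separate acceptable from unacceptable images, is at its core binary classification. As a rough illustration only (Caption Guidance's actual model, features, and training data are not public), the sketch below trains a toy logistic-regression classifier on two hypothetical image-quality features, contrast and sharpness:

```python
# Hypothetical sketch only: a binary "acceptable vs. unacceptable" image
# quality classifier. Caption Guidance's real model is proprietary; this
# toy logistic regression just illustrates the framing described above.
import math

def train_quality_classifier(samples, labels, epochs=500, lr=0.5):
    """samples: feature vectors (e.g., [contrast, sharpness]);
    labels: 1 = acceptable, 0 = unacceptable."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - y                       # gradient of log loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def quality_score(w, b, x):
    """Probability that image features x represent an acceptable image."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy training data: high contrast/sharpness -> acceptable.
train_x = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3]]
train_y = [1, 1, 0, 0]
w, b = train_quality_classifier(train_x, train_y)
```

In a guidance system, a score like this would be computed continuously on the live image stream, driving the real-time feedback and auto-capture behavior the release describes.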

The Caption Guidance software currently can be used with a specific FDA-cleared diagnostic ultrasound system produced by Teratech Corporation, with the potential to be used with other ultrasound imaging systems that have technical specifications consistent with the range of ultrasound systems used as part of the development and testing.

In its review of this device application, the FDA evaluated data from two independent studies. In one study, 50 trained sonographers scanned patients, with and without the assistance of the Caption Guidance software. The sonographers were able to capture comparable diagnostic quality images in both settings. The other study involved training eight registered nurses who are not experts in sonography to use the Caption Guidance software and asking them to capture standard echocardiography images, followed by five cardiologists assessing the quality of the images acquired. The results showed that the Caption Guidance software enabled the registered nurses to acquire echocardiography images and videos of diagnostic quality.

The FDA is dedicated to ensuring medical device regulation keeps pace with technological advancements such as today's marketing authorization. This February, the FDA is hosting a public workshop, "Evolving Role of Artificial Intelligence (AI) in Radiological Imaging," to discuss emerging applications of AI in radiological imaging, including AI devices intended to automate the diagnostic radiology workflow, as well as guided image acquisition. Discussions will also focus on best practices for the validation of AI-automated radiological imaging software and image acquisition devices, which is critical to assess safety and effectiveness.

The FDA reviewed the device through the De Novo premarket review pathway, a regulatory pathway for low- to moderate-risk devices of a new type. Along with this authorization, the FDA is establishing special controls for devices of this type, including requirements related to labeling and performance testing. When met, the special controls, along with general controls, provide reasonable assurance of safety and effectiveness for devices of this type. This action creates a new regulatory classification, which means that subsequent devices of the same type with the same intended use may go through FDA's 510(k) premarket process, whereby devices can obtain marketing authorization by demonstrating substantial equivalence to a predicate device.

The FDA granted marketing authorization of the Caption Guidance software to Caption Health Inc.

The FDA, an agency within the U.S. Department of Health and Human Services, protects the public health by assuring the safety, effectiveness, and security of human and veterinary drugs, vaccines and other biological products for human use, and medical devices. The agency also is responsible for the safety and security of our nation's food supply, cosmetics, dietary supplements, products that give off electronic radiation, and for regulating tobacco products.

###


What Role Will (Or Does) Artificial Intelligence Play In Your Life? – Forbes

The role AI plays in your life is a matter of choice (but only to a certain extent).

It doesn't seem too long ago that artificial intelligence (AI) was mostly the stuff of science fiction. Today it seems to be everywhere: in our home appliances, in our cars, in the workplace, even on our wrists.

To some extent, our use of AI is still a matter of personal choice. But because AI is becoming increasingly ubiquitous, we need to make a lot of conscious decisions.

Regardless of the choices we make, we need to stay educated on the evolution of this science. A thoughtful primer on this is Rhonda Scharf's book, Alexa Is Stealing Your Job: The Impact of Artificial Intelligence on Your Future.

My conversation with Rhonda provides some good tips on what we should know and what we can do.

Rodger Dean Duncan: AI today is similar to the introduction of the desktop computer three decades ago. Many people resisted computers and got left behind. What's the best argument for AI today?


Rhonda Scharf: Artificial intelligence is not going away. When the desktop computer was introduced in the 1980s, many people felt it was a fad that would disappear over time.

Hazel, a woman I worked with, was willing to bet her career on it. When the company I worked at insisted we transition to desktops or leave the company, she rolled the dice and called their bluff. She lost. She believed there was no way a company could exist without tried-and-true manual systems and that computers were a big waste of time and money.

We are in precisely that situation again.

If you can write instructions for a task so that someone can follow them, then AI can replicate those actions.

Duncan: So what's the implication?

Scharf: Not only can your company exist without you performing these tasks, it will also (eventually) be more profitable (with fewer errors) because of it.

By refusing to learn about AI, and by refusing to adapt and be flexible, you're rolling the dice that AI will not take over the tasks you currently do. Call yourself Hazel, and you'll soon be out of a job.

AI is alive and well in the workplace, only many people don't realize it. Being naive and refusing to acknowledge what is right under your nose is a recipe for disaster. Take a look around at how much AI we already have in our lives. Artificial intelligence is not going away. Adapt or become unemployed.

Duncan: Most people have grown comfortable with the idea of letting machines replace humans to do monotonous, heavy, repetitive, and dangerous tasks. But the notion of having AI make decisions and predictions about the future often evokes skepticism or even fear. What do you say to people who have such concerns?

Scharf: Movies like 2001: A Space Odyssey, with its AI character HAL 9000, have planted the seeds of fear and mass destruction in our minds. We are afraid of what computers can do on their own. AI learns from its experiences and will make decisions on its own: calculated, logical, and statistically accurate decisions.

What AI doesn't do is make emotional decisions. Take AI stock trading as an example. Without any emotions involved, robo-advisers can determine the optimal price at which to buy and sell specific stocks. They don't get emotionally tied into "one more day" and potentially lose profits. AI can evaluate millions of data points and draw conclusions instantly, in a way that neither humans nor conventional software can. As quickly as the market changes, AI changes its course of action based on the data.

I'm not about to have AI make life-or-death decisions for me. But in the same way we now trust machines to handle monotonous, heavy, repetitive, and dangerous tasks, I will rely on AI to do some heavy thinking and bring me logical conclusions quickly and efficiently.

If you don't want to be left behind, you'd better get educated on AI.

Duncan: What do you tell people who have privacy concerns about AI applications?

Scharf: The privacy concerns are real, but you gave up your privacy when you got your first mobile phone (for some, this was as early as 1996). It could track you. Technically, that impacted your privacy 20-plus years ago.

Once the BlackBerry was introduced in 1999, followed by the iPhone eight years later, your privacy became severely compromised. Your phone knows where you are, and it knows what you're doing. Even if you keep your Bluetooth off, your device and its apps know a lot about you.

If you wear any technology whatsoever, you are giving up your privacy. According to a 2014 study by GlobalWebIndex, 71% of people ages 16 to 24 want wearable tech. That was over five years ago, before we had much wearable technology.

In the same study, 64% of internet users aged 16 to 64 said they've either already used a piece of wearable tech or were keen to do so in the future.

Fast forward five years, and half of Americans use fitness trackers daily. More than 96% of Americans have a cell phone of some kind.

People may say they have privacy concerns, but when it comes to using technology that improves our lives, we forgo privacy for convenience.



These are the exact skills you need to get a job in artificial intelligence – Ladders

Artificial intelligence is all the rage, and there's good money to be made in an industry that's still largely emerging from its infancy. But the problems that AI solves are not easy, and to work in the AI industry you will need a strong and focused set of skills.

Here's the good news: We live in a society where a shocking number of people would rather have a robot boss than a human one. We would rather be led by machines.

This means that most of us are accepting of the idea of artificial intelligence, or AI.

In many sectors, machines have already taken over monotonous jobs. Manufacturing is a prime example. Auto and aerospace manufacturers use machines heavily in their assembly lines. In fact, machines completely transformed the way that our cars are built.

Artificial intelligence isn't just a fad. It's here to stay.

And that means the industry will need a skilled workforce to build, test and deploy more and more artificial brains around the world. Get in early and you'll stand to make a lot of money.

Not to mention help change the world.

If you are interested in a career in artificial intelligence, then youre in the right place.

Artificial intelligence attempts to mimic (and surpass) the power of the human brain using nothing but machines. Machine learning, in which systems improve from data rather than explicit programming, is another common term in AI.

The primary goals of artificial intelligence are:

Artificial intelligence attempts to build machines that think and reason rather than operate in a relatively confined space with pre-built routines, procedures and outcomes. Smart AI systems recognize patterns, remember past events and learn from them, making each subsequent decision smarter, more logical and more organic.

AI is a giant paradigm shift in modern computing and requires a deeply scientific and logical approach to designing computer systems that think and learn. In other words, building robots that aren't just robots.

And believe it or not, AI capabilities are all over the place.

A few examples of artificial intelligence systems include speech recognition (available on many cell phones and smart home devices), email spam blockers, plagiarism checkers, language translation services (like Google Translate) and the auto-pilot system on airplanes.
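Of the examples listed, an email spam blocker is the simplest to sketch. The toy word-frequency filter below is purely illustrative (real spam blockers use far richer features and models); it applies naive Bayes with add-one smoothing, a classic introductory approach:

```python
# Illustrative sketch of one listed example: a naive Bayes spam filter.
# Real spam blockers are far more sophisticated; this shows the basic idea.
from collections import Counter

def train_spam_filter(messages):
    """messages: list of (text, is_spam) pairs."""
    spam_words, ham_words = Counter(), Counter()
    spam_count = ham_count = 0
    for text, is_spam in messages:
        words = text.lower().split()
        if is_spam:
            spam_words.update(words)
            spam_count += 1
        else:
            ham_words.update(words)
            ham_count += 1
    return spam_words, ham_words, spam_count, ham_count

def spam_probability(model, text):
    """Naive Bayes with add-one smoothing over the combined vocabulary."""
    spam_words, ham_words, spam_count, ham_count = model
    vocab = set(spam_words) | set(ham_words)
    p_spam = spam_count / (spam_count + ham_count)  # prior
    p_ham = 1.0 - p_spam
    for w in text.lower().split():
        p_spam *= (spam_words[w] + 1) / (sum(spam_words.values()) + len(vocab))
        p_ham *= (ham_words[w] + 1) / (sum(ham_words.values()) + len(vocab))
    return p_spam / (p_spam + p_ham)

# Tiny hypothetical training corpus.
model = train_spam_filter([
    ("win free money now", True),
    ("claim your free prize", True),
    ("meeting agenda for monday", False),
    ("lunch tomorrow with the team", False),
])
```

The same pattern-learning idea, scaled up enormously, underlies the speech recognition and translation examples as well.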

Companies like Google, Microsoft, Apple, Amazon, Facebook, Accenture, Boeing and so many others are hiring for artificial intelligence roles. AI salaries are typically higher than average because good AI talent can be hard to find.

Artificial intelligence is everywhere in society, and the industry is growing rapidly in 2020. Here is exactly what you need to know to pursue a career in AI.

Artificial intelligence is highly scientific. After all, mimicking the human brain using machines is a very tough problem to solve, much less master. The skills that you will need to pursue AI as a career are varied, but all of them require a great deal of education, training and focus.

That said, there is a wide variety of career types available in AI and machine learning, and they range from higher-level research to low-level programming and implementation.

For example, researchers use their breadth of knowledge in theory and study to reveal new types of systems and capabilities. Researchers hypothesize new or different ways for machines to think and test their research for real-world feasibility.

Algorithm developers take AI research and transform that research into repeatable processes through mathematical formulas that can be implemented using hardware and software.

Software developers and computer scientists use those algorithms to write sophisticated pieces of software that analyze, interpret and make decisions.

Hardware technicians build pieces of equipment (like robots) to interact with the world. Robots use their internal software to move and operate.

Most careers in artificial intelligence require coursework and experience in a variety of math and science-related topics like:

Want a career in AI? Then read. A lot.

Read papers and case studies. Experiment with technologies like Map-Reduce, PHP, MySQL, Postgres and Big Data, especially if you are targeting a computer science-related career in AI. Expose yourself to as many technologies as you can.

Pro tip: Browse through AI job opportunities. Read the job descriptions and especially the requirements to get a feel for specific qualifications that you need for that job.

For example, some might need experience in programming languages like Python or MATLAB. Others, especially in the healthcare industry, need expertise in technologies like Spark and blockchain.

Regardless of the type of job that youre after in artificial intelligence, there is no better way to figure out the exact skills you need than to read job requisitions and stay as up-to-date in the industry as possible.

Use the Job Search tool here on The Ladders to find AI and machine learning jobs.

Though the types of careers in the AI industry are varied, most professionals in AI possess five key skills and capabilities, regardless of their individual roles.

Most AI professionals:

Are highly critical thinkers. They take nothing at face value and are naturally curious. They believe in trial and error and must test and experiment before making a concrete decision.

Like to push the envelope. AI is all about pushing the boundaries. Pegging the capabilities of hardware and software to their max, always looking for more. More ways to improve existing systems. More ideas for inventing new ways to live.

Live naturally curious lives. Always wanting to know more, artificial intelligence pros want to know how things work. They don't just look. They observe. They don't hear. They listen.

Don't get easily overwhelmed. They understand that artificial intelligence is highly technical, but also realize that venturing into uncharted waters is difficult and mysterious. They enjoy the process rather than getting frustrated by it.

Love math and science. AI is highly technical, and it's a natural fit for those who are gifted and interested in hard sciences and mathematics.

Artificial intelligence is not just about replacing the human component of the industry. It's also about making it easier to make decisions based on observable patterns, use logic and reasoning to form conclusions, and build pathways to boost efficiency and production.

It is not an easy discipline, but that's also why salaries in the AI industry are much higher than average. It takes the right type of person with the right skill set to excel.

Are you the type of person who's right for a career in AI? If you have many of these skill sets, then you just might be.


Stats Perform’s Chief Scientist of Artificial Intelligence to Deliver Keynote at AI in Team Sports Conference – Yahoo Finance

Dr. Patrick Lucey to Discuss Interactive Sports Analytics at the Association for the Advancement of Artificial Intelligence 2020 Workshop

Stats Perform, the revolutionary leader in sports AI and data, announced that Chief Scientist Dr. Patrick Lucey will deliver keynote remarks at the Association for the Advancement of Artificial Intelligence (AAAI-20) Workshop in Team Sport in New York on Saturday, February 8.

Dr. Lucey's presentation, "Interactive Sports Analytics," will examine new ways to break down player or team performance using big data and AI software. The presentation will include examples of how coaches can draw up and search for specific plays and, using AI and Stats Perform's decades of tracking and multi-agent trajectory data, simulate likely outcomes specific to a particular opponent and the players involved. In addition, Dr. Lucey will demonstrate the capabilities of new body-pose data made possible through Stats Perform's state-of-the-art AutoSTATS technology.

"We have reached an exciting moment in sports where coaches and analysts can now leverage big data and AI to generate advanced insights on play development and likely outcomes," Dr. Lucey said. "Imagine a coach drawing up an Xs-and-Os play on an iPad, the same way he would on a chalkboard, and simulating likely outcomes based on different sets of offensive and defensive opponents in play. Imagine then being able to search that play and find video of every time a nearly identical play was run. With AI and big data, we are already making that happen at Stats Perform, and I can't wait to meet and discuss this with the illustrious group of researchers at the AAAI Workshop."

The AAAI Workshop in Team Sport is one of the leading conferences for AI in team sports with participation from some of the leading global research institutions. The 34th AAAI Conference will include a research paper and poster track.

About Stats Perform

Stats Perform collects the richest sports data in the world and transforms it through revolutionary artificial intelligence (AI) to unlock the most in-depth insights for media and technology, betting and team performance. With company roots dating back almost 40 years, Stats Perform embraces and solves the dynamic nature of sport be that for digital and broadcast media with differentiated storytelling, tech companies with reliable and fast data to power their innovations, sportsbooks with in-play betting and integrity services, or teams with first-of-its-kind AI analysis software. As the leading sports data and AI company, Stats Perform works with the top global sports media, tech companies, sportsbooks, teams and leagues. For more information, visit StatsPerform.com

View source version on businesswire.com: https://www.businesswire.com/news/home/20200206005783/en/

Contacts

Reed Findlay, 847-583-2642, media.relations@statsperform.com


Artificial intelligence assists in the study of autonomous vehicle performance in winter conditions – Vision Systems Design

In this week's roundup from the Association for Unmanned Vehicle Systems International, which highlights some of the latest news and headlines in unmanned vehicles and robotics: researchers study autonomous vehicle operation in Canadian winters, the foundation is laid for ZM Interactive customers to conduct beyond-line-of-sight drone flights, and an unmanned surface vehicle conducts seabed surveys on offshore wind farm turbines.

Scale AI open-sources data set to help in the development of autonomous vehicles capable of driving in wintry weather

This week, a startup called Scale AI open-sourced Canadian Adverse Driving Conditions (CADC), a data set created with the University of Waterloo and the University of Toronto that contains more than 56,000 images captured in conditions including snow.

The move is designed to help in the development of autonomous vehicles capable of driving in wintry weather, as Scale AI claims that CADC is the first corpus with snowy sensor samples to focus specifically on real-world driving in snowy weather.

"Snow is hard to drive in, as many drivers are well aware. But wintry conditions are especially hard for self-driving cars because of the way snow affects the critical hardware and AI algorithms that power them," explains Scale AI CEO Alexandr Wang in a blog post, via VentureBeat.

"A skilled human driver can handle the same road in all weathers, but today's AV models can't generalize their experience in the same way. To do so, they need much more data."

According to Scale AI, the routes captured in CADC were chosen based on levels of traffic and the variety of objects such as cars, pedestrians, animals, and most importantly, snowfall. Teams of engineers used an autonomous vehicle platform called Autonomoose to drive a Lincoln MKZ Hybrid mounted with a suite of lidar, inertial sensors, GPS, and vision sensors (including eight wide-angle cameras) along 12.4 miles of Waterloo roads.

Combining human work and review with smart tools, statistical confidence checks, and machine learning checks, Scale AI's data annotation platform was used to label each of the resulting camera images, 7,000 lidar sweeps, and 75 scenes of 50-100 frames. The company says that the accuracy is consistently higher than what a human or synthetic labeling technique could achieve independently, as measured against seven different annotation quality areas.

University of Waterloo professor Krzysztof Czarnecki hopes the data set will put the wider research community on equal footing with companies that are testing self-driving cars in winter conditions, including Alphabet's Waymo, Argo, and Yandex.

"We want to engage the research community to generate new ideas and enable innovation," Czarnecki says. "This is how you can solve really hard problems, the problems that are just too big for anyone to solve on their own."

ZM Interactive selects Iris Automation as detect and avoid provider for its UAS

ZM Interactive (ZMI) has selected Iris Automation as the detect and avoid (DAA) provider for its drones, which will allow ZMI customers to conduct beyond visual line of sight (BVLOS) operations.

ZMI manufactures the xFold drone, an industrial, military-grade UAS that comes in various sizes and configurations. Its frame can be changed between x4 (quad), x6 (hexa), x8 (octo) and x12 (dodeca) configurations in minutes, and it has a heavy payload capability of more than 300 pounds, making the UAS suitable for a wide range of commercial, industrial, military and emergency response applications. Some of its use cases include aerial cinematography, 3D mapping and inspections, and cargo delivery.

Having selected Iris Automation as its DAA provider, ZMI will offer the option of equipping its UAS platforms with Iris Automation's Casia system. Described as a turnkey solution, Casia detects, tracks and classifies other aircraft and makes informed decisions about the threat they could potentially pose to the UAS. To avoid collisions, Casia triggers automated maneuvers and alerts the pilot in command of the mission.

"This collaboration between Iris Automation and ZMI allows xFold drone customers to use their drones to their full potential," explains Iris Automation CEO Alexander Harmsen.

"Having drones pre-equipped with the option for advanced BVLOS capabilities is a basic requirement the industry will soon expect to see on all drones out-of-the-box."

Under its partnership with ZMI, Iris says that it will also offer customers with Casia onboard support for Part 107 waiver writing and regulatory approval processes, to help secure the permissions needed to conduct their unique BVLOS operations.

XOCEAN's XO-450 USV conducts seabed surveys for Greater Gabbard Offshore Wind Farm

Considered a first for the offshore wind sector, XOCEAN's XO-450 USV recently conducted seabed surveys on seven of the turbines at the Greater Gabbard Offshore Wind Farm, a joint venture between SSE Renewables and innogy.

To validate data collection before the vessel departed the work locations, experts in the United Kingdom monitored the collected data from shore in real time throughout the survey.

According to XOCEAN, the survey demonstrates the highly flexible and collaborative nature of this technology, which ultimately allows industry experts to have direct access to real time data, from any location.

"We are constantly looking for innovative ways in which we can operate our fleet of renewables assets," says Jeremy Williamson, SSE Renewables Head of Operations.

"XOCEAN's vessel will allow us to carry out our work in a more efficient and, most importantly for SSE Renewables and our partners innogy, in the safest way possible. We're really interested to see how this sort of work can help improve our industry and look forward to working with XOCEAN in future."

XOCEAN says that its USVs offer a number of benefits: operators stay safe because they remain onshore, operations can run efficiently 24 hours a day, seven days a week, and the vessels produce ultra-low emissions. These benefits result in significant economic savings, the company adds.

"Our USV platform has demonstrated itself to be a safe, reliable and low-carbon solution for the collection of ocean data," says James Ives, CEO of XOCEAN.

"We are delighted to be working with SSE and innogy on this ground-breaking project."


AI Tool Created to Study the Universe, Unlock the Mysteries of Dark Energy – Newsweek

An artificial intelligence tool has been developed to help predict the structure of the universe and aid research into the mysteries of dark energy and dark matter.

Researchers in Japan used two of the world's fastest astrophysical simulation supercomputers, known as ATERUI and ATERUI II, to create an aptly named "Dark Emulator" tool, which is able to ingest vast quantities of data and produce analysis of the universe in seconds.

The AI could play a role in studying the nature of dark energy, which seems to make up a large amount of the universe but remains an enigma.

Read more

When observed from a distance, the team noted how the universe appears to consist of clusters of galaxies and massive voids that appear to be empty.

But as noted by NASA, leading models of the universe indicate it is made of entities that cannot be seen. Dark matter is suspected of helping to hold galaxy clusters in place gravitationally, while dark energy is believed to play a role in how the universe is expanding.

According to the researchers responsible for Dark Emulator, the AI tool is able to study possibilities about the "origin of cosmic structures" and how dark matter distribution may have changed over time, using data from some of the top observational surveys conducted about space.

"We built an extraordinarily large database using a supercomputer, which took us three years to finish, but now we can recreate it on a laptop in a matter of seconds," said Associate Prof. Takahiro Nishimichi, of the Yukawa Institute for Theoretical Physics.

"Using this result, I hope we can work our way towards uncovering the greatest mystery of modern physics, which is to uncover what dark energy is. I also think this method we've developed will be useful in other fields such as natural sciences or social sciences."

Nishimichi added: "I feel like there is great potential in data science."

The teams, which included experts from the Kavli Institute for the Physics and Mathematics of the Universe and the National Astronomical Observatory of Japan, said in a media release this week that Dark Emulator had already shown promising results during extensive tests.

In seconds, the tool predicted some of the effects and patterns found in previous research projects, including the Hyper Suprime-Cam Survey and Sloan Digital Sky Survey. The emulator "learns" from huge quantities of data and "guesses outcomes for new sets of characteristics."

As with all AI tools, data is key. The scientists said the supercomputers have essentially created "hundreds of virtual universes" to play with, and Dark Emulator predicts the outcome of new characteristics based on data, without having to start new simulations every time.
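Conceptually, an emulator of this kind is a fast surrogate model fit to a database of precomputed simulation runs. The sketch below is a deliberately simplified stand-in (Dark Emulator's actual methods work over many cosmological parameters and vastly more data): it interpolates a toy one-parameter "simulation" from a small table of precomputed results, so new predictions need no new simulation runs.

```python
# Conceptual sketch of an emulator: fit a cheap surrogate to expensive
# precomputed simulation runs, then predict new parameter settings instantly.
# This toy uses inverse-distance weighting over one parameter; the real
# Dark Emulator uses far richer models over many cosmological parameters.

def run_expensive_simulation(param):
    # Stand-in for a supercomputer run (here just a known function).
    return param ** 2 + 1.0

# Precompute a small "database" of runs, analogous to the three-year
# database the team built on a supercomputer.
database = [(p / 10.0, run_expensive_simulation(p / 10.0)) for p in range(11)]

def emulate(param, k=3):
    """Predict the simulation output for a new parameter without re-running:
    average the k nearest precomputed runs, weighted by inverse distance."""
    nearest = sorted(database, key=lambda row: abs(row[0] - param))[:k]
    weights = [1.0 / (abs(p - param) + 1e-9) for p, _ in nearest]
    total = sum(weights)
    return sum(w * out for w, (_, out) in zip(weights, nearest)) / total
```

Each `emulate` call is effectively instant, which is the whole point: the days-long supercomputer cost is paid once, up front, when building the database.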

Running simulations through a supercomputer without the AI would take days, researchers noted. Details of the initial study were published in The Astrophysical Journal last October. The team said they hope to input data from upcoming space surveys throughout the next decade.

While work on this one study remains ongoing, there is little argument within the scientific community that understanding dark energy remains a key objective.

"Determining the nature of dark energy [and] its possible history over cosmic time is perhaps the most important quest of astronomy for the next decade and lies at the intersection of cosmology, astrophysics, and fundamental physics," NASA says in a fact-sheet on its website.

Read more:
AI Tool Created to Study the Universe, Unlock the Mysteries of Dark Energy - Newsweek

Important role of Artificial Intelligence in education – PC-Tablet

For decades, humanity predicted and waited for what we now know as Artificial Intelligence. These days, you see AI in almost everything, from your cell phone to smart houses. Of course, one thing we are still waiting for is robots so human-like that it will be difficult to tell a real person from an artificial one. However, that sort of AI is mainly fiction.

On the other hand, it is safe to say that the kind of AI we have now influences almost every sphere of our lives, and education is no exception. However, for one reason or another, the role of AI in education has been largely underestimated, so we at CustomWritings.com, a professional essay writing company, decided to change that. We are going to list the main aspects of modern education that AI has improved.

You can't imagine how long it takes a regular teacher to go through all the tests and grade them. That is where AI is more than helpful. Many programs allow teachers to take a break from grading while the system checks all the multiple-choice and fill-in-the-blank tests. When it comes to essay checking, things are a little different, since such programs have not been developed yet, but they are being worked on.
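For objective test items, the underlying mechanism is simple enough to sketch. The question IDs, answer key, and scoring rule below are hypothetical, invented for illustration.

```python
# Hypothetical answer key for a small quiz.
ANSWER_KEY = {
    "q1": "b",               # multiple choice
    "q2": "photosynthesis",  # fill in the blank
    "q3": "d",               # multiple choice
}

def grade(submission, key=ANSWER_KEY):
    """Compare answers case-insensitively; return (score, total)."""
    score = sum(
        1 for q, correct in key.items()
        if submission.get(q, "").strip().lower() == correct
    )
    return score, len(key)

student = {"q1": "B", "q2": " Photosynthesis ", "q3": "a"}
print(grade(student))  # → (2, 3): q3 was wrong
```

Real grading platforms add partial credit, analytics, and fuzzier matching for fill-in-the-blank answers, but the time saved comes from exactly this kind of automatic comparison.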

It is no secret that every student has a particular learning style, and a teacher cannot always provide him or her with the necessary educational approach. That is where various kinds of educational software step in. These days you can choose a program or a system that helps you most, while the teacher's help is still provided when required.

Teachers are not gods, and at times they also make mistakes. When some essential information or concept slips the teacher's mind, the students are bound to make the same mistakes in the upcoming tests. That is where course providers such as Coursera step in. The system notices the patterned mistake and corrects it while notifying both the teacher and the student.

When you do not understand one point in a course, you can't move on to the next. That is why many students turn to tutors, so that everything is explained well. At the same time, not every student can afford a tutor. That is where AI tutoring systems come in. They are not ideal, or especially creative and complex, but they are developed to provide the basic knowledge that will help you move forward from the place where you are stuck.

The constant access to all the miracles of technology now on offer has changed the way we perceive information. It is neither good nor bad, per se; it is different. Such AI involvement in how we take in information may result in something entirely innovative in the future. It will surely affect every sphere, and education is clearly no exception.

It goes without saying that no AI can substitute for a real teacher. However, scientists and developers are working on improving AI educational systems so that they can take the unnecessary load off teachers.

At times it seems there is hardly anything worse than standing in front of the class, unable to answer the question posed. The same goes for the poor grades you get when all your peers seem to succeed at every task. Things like that will not have to happen in the future. When you learn through a dedicated program, you do not have to fail where everyone can see it. Whenever you do fail, the system corrects the mistake and explains why you failed in the first place. This is probably one of the best features AI can bring to the educational process.

Where a person may fail, an AI system need not. The data necessary for successful research, and for the educational process every student requires, is never easy to gather and then sort out accordingly. For a powerful AI system, such a task is not that difficult to carry out. With all the data gathered in one place, you can design the best courses, attract the most promising students, and implement the learning techniques that best fit the needs of modern students.

It very often happens that you can't go to the college you like simply because you can't afford to travel to the city it is located in. With the development of AI, such obstacles may disappear. With ever-developing technology, you will soon be able to access anything from any place, and that will make education attainable for all those who are willing to learn.

To sum up, it is safe to say that AI has already brought many changes to the education system. However, that is clearly not all we can reap from its development; many more changes are waiting in the future.

Love in the time of Artificial Intelligence – Philippine Star

Everybody loves a good love triangle, but here's one that veers away from the usual: the formidable third wheel in the romance between a man and a woman is a hologram!

Netflix's newest and now-streaming K-drama My Holo Love takes on love in the time of Artificial Intelligence (AI).

So-yeon is a woman who distances herself from people because of her face blindness, a disorder described as the inability to recognize even familiar faces. Then she meets someone she can turn to for support, understanding and attention, the ideal qualities one hopes to find in The One, except that he's someone she can't hold and touch. His name is Holo and he's an AI-powered hologram.

Now here comes Nan-do, So-yeon's reclusive next-door neighbor and Holo's genius inventor. While Holo is made in Nan-do's exact likeness, the latter is his polar opposite: lonely and disconnected from the world, and certainly not as sweet and kind.

But Nan-do gets drawn into the interactions between So-yeon and Holo, and becomes his own creation's rival for So-yeon's affection.

Yoon Hyun-min (Tunnel, Witch at Court, Tale of Fairy) plays the dual role of Holo and Nan-do, opposite the new-gen rom-com queen Ko Sung-hee (Suits, While You're Sleeping, Diary of a Night Watchman) as So-yeon.

Can someone really form an emotional attachment to Artificial Intelligence? Is AI capable of changing the human capacity for friendship and, ultimately, love? These are some of the questions the series explores.

True to My Holo Love's tech-inspired theme, the Philippine press interviewed Hyun-min, 34, and Sung-hee, 29, via a video-conference call last Tuesday. Here are excerpts from the almost 40-minute chat:

On the possibility of a human being growing feelings for a robot or an AI-assisted hologram:

Sung-hee: First of all, in our (series), there is a point in time where So-yeon starts to develop feelings for Holo and is very confused. I think it was when I had to act that particular part that I really had to think a lot about the issue you just asked about. Obviously, it's not very easy. However, you can see that the character Holo in our work is very lovable and extremely charming.

Hyun-min: I don't think it's something that is completely impossible. Just the fact that we are holding this press conference, with us being here and you all the way in your respective countries: this is something that I couldn't even imagine when I was a child, but here we are. This is our reality today. The world is changing at such a rapid pace. I feel that in the near future, maybe a human being can have a relationship with an AI, who knows? I am open to all possibilities.

On working and being paired with each other in the series:

Sung-hee: It was great. I think it was one of those times when I felt the closest and most open in terms of having a conversation with a co-actor. And, I think, the kind of chemistry we formed behind the screens is really evident in our portrayal of the characters.

Hyun-min: It was really great for me as well. Sung-hee's positive energy really radiated on set. It didn't only affect me but also members of the staff, the team, everyone. I think we all had a great time, thanks to her wonderful energy. I would love to get the chance to work with her again. And she was a really great partner (fist-bumping Sung-hee).

On finding similarities or differences with their characters:

Hyun-min: Holo is always sweet and kind because he is an AI. Nan-do the developer, due to his childhood trauma, has built walls around himself and disconnected himself from the outside world. He has a hard time being kind at all times and expressing himself. Personally, I feel for Nan-do more of the two, because like Holo, I can always be kind and gentle when I'm working or outside of my home. However, upon returning home... there's that sense of emptiness and depression sometimes. So, I think the imperfect human being that is Nan-do really draws me to this character, and that's why I feel more for him.

Sung-hee: I share similarities, in that both of us act a little differently when working or out among other people than when we are alone, and in that sense, I really related to my character. I also often hear that I appear to be cold. However, So-yeon has a completely different character once you get to know her, so that was another factor that really appealed to me. I do feel that the character So-yeon is a lot nicer, much more lovely and a lot more warmhearted than I actually am (laughs). She's so considerate of other people that sometimes I felt frustrated with the character. So, So-yeon is definitely a character that I related a lot to, but also someone that I learned a lot from.

On preparing for their roles and learning new lessons as actors:

Sung-hee: I was mostly focused on how to really bring to light So-yeon having a relationship with an AI, that is, of course, Holo, and in particular how she comes to develop feelings for him, to love him. Another thing was appearance: how to maximize the cold front that So-yeon puts up, and also how she is a little more laidback when she is alone.

Rather than something I learned about myself, there were so many factors in this series that were a completely new challenge for me. Of course, there were difficulties... but I think I grew a lot as an actress. Because it was a sci-fi genre, I also had to work against a green screen, which was completely new to me. However, I learned a lot in that aspect. So, I feel I can be a little more confident in the future if I were to star in a similar type of series.

Hyun-min: We worked a little over a year on this series, and because I had to play a dual role, it was very difficult. It was also my first-ever dual role, so there were a lot of difficulties that came with that. But throughout the year, I think I focused mostly on my level of concentration and how not to break it. I would say that I focused mostly on the effectiveness and capability of my acting.

Last year, as I was working on this series, I was also very exhausted and a little sensitive because of all the homework. However, looking back now, I think it was truly a valuable experience. The fact that I was able to take on my very first dual role as an actor has become an invaluable asset to me, and it has led me to achieve a lot of personal growth. What I also felt was that the story, a lonely man and woman with a hologram in between, is rather a unique love triangle. I think at the end of the day, we all carry a sense of loneliness. We are all a little bit lonely. But ultimately, only humans can fill that void.

On advice they would give the characters (and which viewers can learn from) on how to avoid loneliness:

Hyun-min: If I were to give advice solely to the character played by Sung-hee here, I would say that if you feel alone or have a sense of loneliness, don't take it all on yourself. Reach out, stretch out your hand, because there is going to be someone who can help you. And of course, to all the journalists out there, if you're ever lonely (laughs)...

Sung-hee: I think I would give similar advice to both Holo and Nan-do. I would say: you are enough. Holo, though he's an AI, is enough; he's a character that is sometimes, in some ways, better than a human being. And to Nan-do, despite all the hurt inside him and the way he isolates himself into deeper loneliness, I still want to say: you are enough. So, know that you are enough and that you are already such a wonderful and amazing human being.

Intel drops work on one of its AI-chip lines in favor of another – Network World

Well, that was short.

Intel is ending work on its Nervana neural network processors (NNP) in favor of an artificial intelligence line it gained in the recent $2 billion acquisition of Habana Labs.

Intel acquired Nervana in 2016 for $408 million and issued its first NNP chip a year later. After the acquisition, Nervana co-founder Naveen Rao was placed in charge of the AI platforms group, which is part of Intel's data platforms group. The Nervana chips were meant to compete with Nvidia GPUs in the AI training and inference space, and Facebook worked with Intel in close collaboration, sharing its technical insights, according to former Intel CEO Brian Krzanich.

For now, Intel has ended development of its Nervana NNP-T training chips and will deliver on current customer commitments for its Nervana NNP-I inference chips; it will move forward with Habana Labs' Gaudi and Goya processors in their place.

There are two parts to neural-network workloads: training, where the computer learns a process, such as image recognition; and inference, where the system puts what it was trained to do to work. Training is far more compute-intensive than inference, and it's where Nvidia has excelled.
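The asymmetry is easy to see in a toy model. This is a minimal sketch, not any vendor's workload: one weight, synthetic data, and illustrative iteration counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-weight model learning y = 2x. The point is the
# difference in work between the two phases, not the model.
x = rng.normal(size=(256, 1))
y = 2.0 * x
w = np.zeros((1, 1))

# Training: repeated forward AND backward passes over the data,
# which is why it is far more compute-intensive than inference.
for _ in range(200):
    pred = x @ w                       # forward pass
    grad = x.T @ (pred - y) / len(x)   # backward pass (gradient)
    w -= 0.1 * grad                    # weight update

# Inference: a single forward pass with the frozen weight.
print((np.array([[3.0]]) @ w).item())  # close to 6.0
```

Training repeats the forward pass, the gradient computation, and the weight update hundreds of times over the whole dataset; inference is the single matrix product at the end.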

Intel said the decision was made after input from customers, and that it is part of strategic updates to its data-center AI acceleration roadmap. "We will leverage our combined AI talent and technology to build leadership AI products," the company said in a statement to me.

"The Habana product line offers the strong, strategic advantage of a unified, highly programmable architecture for both inference and training. By moving to a single hardware architecture and software stack for data-center AI acceleration, our engineering teams can join forces and focus on delivering more innovation, faster to our customers," Intel said.

This outcome from the Habana acquisition wasn't entirely unexpected. "We had thought that they might keep one for training and one for inference. However, Habana's execution has been much better and the architecture scales better. And Intel still gained the IP and expertise of both companies," said Jim McGregor, president of Tirias Research.

The good news is that whatever developers created for Nervana won't have to be thrown out. "The frameworks work on either architecture," McGregor said. "While there will be some loss going from one architecture to another, there is still value in the learning, and I'm sure Intel will work with customers to help them with the migration."

This is the second AI/machine-learning effort Intel has shut down, the first being Xeon Phi. Xeon Phi itself was a bit of a problem child, dating back to Intel's failed Larrabee experiment to build a GPU based on x86 instructions. Larrabee never made it out of the gate, while Xeon Phi lasted a few generations as a co-processor but was ultimately axed in August 2018.

Intel still has a lot of products targeting various AI workloads: Mobileye, Movidius, Agilex FPGAs, and its upcoming Xe architecture. Habana Labs has been shipping its Goya Inference Processor since late 2018, and samples of its Gaudi AI Training Processor were sent to select customers in the second half of 2019.

Why Profits From Amazon’s Cloud Business Could Be About to Soar – Motley Fool

Amazon (NASDAQ:AMZN) reported impressive fourth-quarter results last week, showing strong revenue growth, better-than-expected margins, and strong current-quarter guidance from management. The company reported accelerating growth in revenue from third-party seller services, and announced that free two-hour Amazon Fresh and Whole Foods delivery is available to Prime members in 2,000 cities and towns.

These are certainly exciting developments, but what could be even more important is a subtle change management made to how it accounts for its server assets in its Amazon Web Services (AWS) cloud business. This change suggests AWS could be much more profitable than investors previously thought.

In the company's fourth-quarter press release, management said first-quarter operating income is expected to be between $3 billion and $4.2 billion, compared to $4.4 billion in the year-ago quarter. That guidance includes "approximately $800 million lower depreciation expense due to an increase in the estimated useful life of our servers beginning on January 1, 2020."

Amazon's servers are lasting longer than expected. Until now, the company had depreciated -- that is, recognized the expense of -- its servers over three years. But now it has sufficient evidence that they actually last more than four years. So starting Jan. 1, Amazon is depreciating them over four years. Any time a company depreciates an asset over a longer time period, it reduces the annual depreciation expense that gets recognized.
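A worked example makes the mechanics clear. The server cost below is hypothetical; only the change from a three-year to a four-year useful life comes from Amazon's disclosure.

```python
# Straight-line depreciation: the same server cost spread over a
# longer useful life means less expense recognized each year.
def annual_depreciation(cost, useful_life_years):
    return cost / useful_life_years

server_cost = 12_000  # hypothetical cost of one server, in dollars
old_expense = annual_depreciation(server_cost, 3)  # $4,000 per year
new_expense = annual_depreciation(server_cost, 4)  # $3,000 per year
print(old_expense - new_expense)  # $1,000 less expense per server per year
```

Lower depreciation expense flows straight through to higher reported operating income, even though no cash changes hands.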

Based only on the servers the company owned at year's end, Amazon expects this accounting change to increase its 2020 operating income by $2.3 billion. That alone would be a significant 16% boost from the $14.5 billion of operating income reported last year. Any profit growth from the business's typically rapid expansion would come on top of that.

This has huge implications for how profitable AWS can be. Management said the "majority" of the $2.3 billion relates to AWS. While we don't know exactly what majority means here, AWS' primary assets are data centers and servers, whereas the retail and other business lines have a much more diversified asset base. My educated guess is 85% relates to AWS. If that's the case, it would mean AWS will generate almost $2 billion more profit than it otherwise would have as a result of this accounting change.

What would that mean for AWS profit growth?

Bank of America analyst Justin Post estimates AWS will generate about $45 billion of net sales this year, up from $35 billion in 2019. With the extra $2 billion, even assuming otherwise flat profit margins, AWS would generate almost $14 billion of operating income. That would mean AWS' operating profits would explode higher by about 50% this year, almost double the 26% profit growth rate of last year. It would also mean the whole company would grow operating profit by 32% this year, even if the non-AWS parts of the company don't grow operating profits at all.
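The arithmetic behind those figures can be checked directly. The $35 billion, $45 billion, $2.3 billion, and 85% inputs are quoted above; AWS's roughly $9.2 billion of 2019 operating income is an assumption taken from Amazon's reported results, not stated in this piece.

```python
# Inputs quoted in the article: 2019 AWS sales, the 2020 estimate,
# the $2.3B company-wide benefit, and the author's 85% AWS guess.
# AWS's ~$9.2B of 2019 operating income is an outside assumption.
sales_2019 = 35e9
sales_2020_est = 45e9
op_income_2019 = 9.2e9

margin = op_income_2019 / sales_2019       # roughly 26% operating margin
extra_depreciation_benefit = 0.85 * 2.3e9  # "almost $2 billion" to AWS

# Hold margins flat on the higher sales, then add the benefit.
op_income_2020 = margin * sales_2020_est + extra_depreciation_benefit
growth = op_income_2020 / op_income_2019 - 1

print(round(op_income_2020 / 1e9, 1))  # ~13.8, "almost $14 billion"
print(round(growth * 100))             # ~50, the growth rate in percent
```

Under these assumptions, the numbers land on the article's figures: almost $14 billion of operating income and roughly 50% growth.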

Accelerating profit growth is usually a good thing, but that could be especially true in this case. There's been concern that AWS' net sales growth is slowing, that Microsoft's Azure and Alphabet's Google Cloud Platform both appear to be growing faster than AWS, and that AWS margins could be hurt by the growing competition.

Throw in Microsoft's recent win of the highly coveted cloud contract with the Department of Defense, which Amazon is appealing, and it's not surprising that investors have questions about where AWS is going next. But if Amazon reports 50% operating profit growth for AWS this year, investors are likely to be reassured that AWS is just getting started.

Longer term, the accounting change suggests AWS has the ability to be more profitable than it's ever been. And this improvement hasn't been driven only by happenstance; management has been actively working to extend the useful life of its servers by making its software run more efficiently on the hardware. Given that AWS operated at a 31% operating margin during a more efficient period in the past, the next similarly efficient period could see an operating margin closer to 35% now that depreciation expense is significantly lower. That would be a new all-time high for AWS' profitability, and would further reassure investors that Amazon continues to have a huge, profitable growth engine in AWS.
