
What is Artificial Intelligence as a Service (AIaaS)? | ITBE – IT Business Edge

Software as a Service, or SaaS, is a concept that is familiar to many. Long-time Photoshop users will recall when Adobe stopped selling its product and instead shifted to a subscriber model. Netflix and Disney+ are essentially Movies as a Service, particularly at a time when ownership of physical media is losing ground to media streaming. Artificial Intelligence as a Service (AIaaS) has been growing in market adoption in recent years, but the uninitiated might be asking: what exactly is it?

In a nutshell, AIaaS is what happens when a company develops an AI and licenses its use to another company, most often to solve a very specific problem. For example, Bill owns a company that sells hotdogs through his e-commerce site. While Bill offers a free returns policy for dissatisfied customers, he lacks the time to provide decent customer support, and rarely replies to emails. Separately, a software developer has created a chatbot that can handle most customer inquiries using natural language processing, and often solve the issue or answer a question before human intervention is even required. For a monthly fee, the chatbot is licensed to the hotdog vendor and implemented on his website. Now, the bot is solving 80% of customer issues, leaving Bill with the time to respond to the remaining 20%. But Bill is still too preoccupied making hotdogs, so he subscribes to a service like Flowrite, which uses AI to intelligently write his emails on the fly.

AI is also being put to work analyzing large sets of data to make predictions, streamline information storage, or even detect fraudulent activity. Amazon's recommendation engine, an AI powered by machine learning, is now available as a licensed service to other retailers, video streaming platforms, and even the finance industry. Google's suite of AI services ranges from natural language processing and handwriting recognition to real-time captioning and translation. IBM's groundbreaking AI, Watson, is now being deployed to fight financial crimes, target advertisements based on real-time weather analysis, and analyze data to help hospitals make treatment judgements.


Machine learning AIs improve with time, usage, and development. Some, like YouTube's recommendation engine, have become so sophisticated that it sometimes feels like we have entire television stations tailored perfectly to our interests. Others, like the language model GPT-3, produce entire volumes of text that are nearly indistinguishable from an authentic human source.

Microsoft has even put GPT-3 to use translating conversational language into working computer code, potentially opening up a new frontier in how software can be written in the future, and giving coding novices a fighting chance. Microsoft has also partnered with NVIDIA to create a new natural language generation model, three times as powerful as GPT-3. Improvements in language recognition and generation have obvious carryover benefits for the future development of chatbots, home assistants, and document generation as well.

Industrial giant Siemens has announced it is integrating Google's AIaaS solutions to streamline and analyze data, and predict, for instance, the rate of wear-and-tear of machinery on its factory floor. This could reduce maintenance costs, improve the scheduling of routine inspections, and prevent unexpected equipment failures.

AIaaS is a rapidly growing field, and it will keep finding new niches to fill for years to come.


Understanding the UK Artificial Intelligence commercialisation – GOV.UK

The government is undertaking research to explore how AI R&D is successfully commercialised and brought to market.

The Department for Digital, Culture, Media and Sport (DCMS), along with the Office for Artificial Intelligence and Digital Standards and Internet Governance (DSIG), is leading the research project.

Research consultants Oxford Insights and Cambridge Econometrics have been commissioned to explore the ways technology transfer happens for AI, and are seeking to conduct interviews with those who have knowledge of the industry.

The research aims to increase understanding of the following topics:

Oxford Insights and Cambridge Econometrics would like to speak to individuals with experience and knowledge of the AI development ecosystem, Innovate UK and other funding programmes, Standards Developing Organisations (SDOs), AI patents, AI R&D in the public and private sectors, AI funding and venture capital, and AI policy.

Our interviews will take approximately 45 minutes to 1 hour; however, we are happy to accommodate you if time doesn't permit an interview of this length. We may request your approval to follow up on specific points and themes identified across all our interactions.

Please get in touch with either aisha.naz@dcms.gov.uk or sam.hainsworth@dcms.gov.uk if you have any questions or need clarification. We look forward to working with you.


Beethoven’s Unfinished 10th Symphony Brought to Life by Artificial Intelligence – Scientific American

Teresa Carey: This is Scientific American's 60-Second Science. I'm Teresa Carey.

Every morning at five o'clock, composer Walter Werzowa would sit down at his computer, anticipating a particular daily e-mail. It came from six time zones away, where a team had been working all night (or day, rather) to draft Beethoven's unfinished 10th Symphony, almost two centuries after his death.

The e-mail contained hundreds of variations, and Werzowa listened to them all.

Werzowa: So by nine, 10 o'clock in the morning, it's like... I'm already in heaven.

Carey: Werzowa was listening for the perfect tune: a sound that was unmistakably Beethoven.

But the phrases he was listening to weren't composed by Beethoven. They were created by artificial intelligence: a computer simulation of Beethoven's creative process.

Werzowa: There were hundreds of options, and some are better than others. But then there is that one which grabs you, and that was just a beautiful process.

Carey: Ludwig van Beethoven was one of the most renowned composers in Western music history. When he died in 1827, he left behind musical sketches and notes that hinted at a masterpiece. There was barely enough to make out a phrase, let alone a whole symphony. But that didn't stop people from trying.

In 1988 musicologist Barry Cooper made an attempt. But he didn't get beyond the first movement. Beethoven's handwritten notes on the second and third movements are meager, not enough to compose a symphony.

Werzowa: A movement of a symphony can have up to 40,000 notes. And some of his themes were three bars, like 20 notes. It's very little information.

Carey: Werzowa and a group of music experts and computer scientists teamed up to use machine learning to create the symphony. Ahmed Elgammal, the director of the Art and Artificial Intelligence Laboratory at Rutgers University, led the AI side of the team.

Elgammal: When you listen to music generated by AI to continue a theme of music, usually it's a very short few seconds, and then they start diverging and becoming boring and not interesting. They cannot really take that and compose a full movement of a symphony.

Carey: The team's first task was to teach the AI to think like Beethoven. To do that, they gave it Beethoven's complete works, his sketches and notes. They taught it Beethoven's process, like how he went from those iconic four notes to his entire Fifth Symphony.

[CLIP: Notes from Symphony no. 5]

Carey: Then they taught it to harmonize with a melody, compose a bridge between two sections, and assign instrumentation. With all that knowledge, the AI came as close to thinking like Beethoven as possible. But it still wasn't enough.

Elgammal: The way music generation using AI works is very similar to the way, when you write an e-mail, you find that the e-mail thread predicts what's the next word for you or what the rest of the sentence is for you.

Carey: But let the computer predict your words long enough, and eventually, the text will sound like gibberish.

Elgammal: It doesn't really generate something that can continue for a long time and be consistent. So that was the main challenge in dealing with this project: How can you take a motif or a short phrase of music that Beethoven wrote in his sketch and continue it into a segment of music?
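[For the technically curious: below is a toy Python sketch of the next-token idea Elgammal describes. This little bigram model over note names is emphatically not the project's actual system; the training melody is invented, and the sketch only shows why short continuations can sound plausible while long ones drift.]

import random

# Learn note-to-note transitions from a tiny "training melody".
training_melody = ["G", "G", "G", "Eb", "F", "F", "F", "D", "G", "Eb"]
transitions = {}
for current, nxt in zip(training_melody, training_melody[1:]):
    transitions.setdefault(current, []).append(nxt)

def continue_melody(seed, length=8):
    melody = [seed]
    for _ in range(length):
        # Sample the next note; fall back to any seen note if this one is new.
        choices = transitions.get(melody[-1]) or list(transitions)
        melody.append(random.choice(choices))
    return melody

print(continue_melody("G"))  # short runs stay plausible; long runs wander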

Carey: That's where Werzowa's daily e-mails came in. On those early mornings, he was selecting what he thought was Beethoven's best. And, piece by piece, the team built a symphony.

Matthew Guzdial researches creativity and machine learning at the University of Alberta. He didn't work on the Beethoven project, but he says that AI is overhyped.

Guzdial: Modern AI, modern machine learning, is all about just taking small local patterns and replicating them. And it's up to a human to then take what the AI outputs and find the genius. The genius wasn't there. The genius wasn't in the AI. The genius was in the human who was doing the selection.

Carey: Elgammal wants to make the AI tool available to help other artists overcome writer's block or boost their performance. But both Elgammal and Werzowa say that the AI shouldn't replace the role of an artist. Instead, it should enhance their work and process.

Werzowa: Like every tool, you can use a knife to kill somebody or to save somebody's life, like with a scalpel in a surgery. So it can go any way. If you look at the kids, like kids are born creative. It's like everything is about being creative and having fun. And somehow we're losing this. I think if we could sit back on a Saturday afternoon in our kitchen, and because maybe we're a little bit scared to make mistakes, ask the AI to help us write a sonata, song or whatever, in teamwork, life will be so much more beautiful.

Carey: The team released the 10th Symphony over the weekend. When asked who gets credit for writing it (Beethoven, the AI, or the team behind it), Werzowa insists it is a collaborative effort. But, suspending disbelief for a moment, it isn't hard to imagine that we're listening to Beethoven once again.

Werzowa: I dare to say that nobody knows Beethoven as well as the AI did, as well as the algorithm. I think music, when you hear it, when you feel it, when you close your eyes, it does something to your body. Close your eyes, sit back and be open for it, and I would love to hear what you felt after.

Carey: Thanks for listening. For Scientific American's 60-Second Science, I'm Teresa Carey.

[The above text is a transcript of this podcast.]


Transactions in the Age of Artificial Intelligence: Risks and Considerations – JD Supra

Artificial Intelligence (AI) has become a major focus of, and the most valuable asset in, many technology transactions, and the competition for top AI companies has never been hotter. According to CB Insights, there have been over 1,000 AI acquisitions since 2010. The COVID pandemic interrupted this trajectory, causing acquisitions to fall from 242 in 2019 to 159 in 2020. However, there are signs of a return, with over 90 acquisitions in the AI space as of June 2021, according to the latest CB Insights data. With tech giants helping drive the demand for AI, smaller AI startups are becoming increasingly attractive targets for acquisition.

AI companies have their own set of specialized risks that may not be addressed if buyers approach the transaction with their standard process. AI's reliance on data and the dynamic nature of its insights highlight the shortcomings of standard agreement language and the risks of not tailoring agreements to address AI-specific issues. Sophisticated parties should consider crafting agreements specifically tailored to AI and its unique attributes and risks, which lend the parties a more accurate picture of an AI system's output and predictive capabilities, and can assist the parties in assessing and addressing the risks associated with the transaction. These risks include:

Freedom to use training data may be curtailed by contracts with third parties or other limitations regarding open source or scraped data.

Ownership of training data can be complex and uncertain. Training data may be subject to ownership claims by third parties, be subject to third-party infringement claims, have been improperly obtained, or be subject to privacy issues.

To the extent that training data is subject to use limitations, a company may be restricted in a variety of ways including (i) how it commercializes and licenses the training data, (ii) the types of technology and algorithms it is permitted to develop with the training data and (iii) the purposes to which its technology and algorithms may be applied.

Standard representations on ownership of IP and IP improvements may be insufficient when applied to AI transactions. Output data generated by algorithms, and the algorithms themselves trained from supplied training data, may be vulnerable to ownership claims by data providers and vendors. Further, a third-party data provider may contract that, as between the parties, it owns IP improvements, leaving companies struggling to distinguish ownership of their algorithms prior to using such third-party data from their improved algorithms after such use, as well as their ownership of and ability to use model-generated output data to continue to train and improve their algorithms.

Inadequate confidentiality or exclusivity provisions may leave an AI system's training data inputs and material technologies exposed to third parties, enabling competitors to use the same data and technologies to build similar or identical models. This is particularly the case when algorithms are developed using open-source or publicly available machine learning processes.

Additional maintenance covenants may be warranted because an algorithms competitive value may atrophy if the algorithm is not designed to permit dynamic retraining, or the user of the algorithm fails to maintain and retrain the algorithm with updated data feeds.

In addition to the above, legislative protection in the AI space has yet to fully mature. Until it does, companies should protect their IP, data, algorithms, and models by ensuring that their transactions and agreements are specifically designed to address the unique risks presented by the use and ownership of training data, AI-based technology, and any output data generated by such technology.


SenseTime Co-hosts the 3rd International Artificial Intelligence Fair to Nurture AI Talent and Promote a Collaborative Education Ecosystem -…

Since launching in July this year, the highly anticipated IAIF has attracted 665 project submissions from over 300 schools in 8 countries and regions, with 121 projects from 98 schools selected for the final online presentation and verbal Q&A. During the final competition presentations, the project submissions were reviewed meticulously by 45 professional judges from top-tier universities, enterprises and research institutions, including University of Science and Technology of China, Tsinghua University, Fudan University, Shanghai Jiao Tong University, Nanyang Technological University, Peking University, Chinese University of Hong Kong and Shanghai Technology Art Center. The teaching and evaluation system of martial arts based on body posture recognition and machine learning, by Li Lufei from Shanghai Nanyang Model High School and Wu Keyu from the High School Affiliated to Fudan University, as well as the drone powered by OpenCV for flood fighting and rescue, by Huang Pucheng, Wang Bingyang and Lin Yinhang from Zhejiang Wenling High School, won the grand prize.

In addition, the rehabilitation assessment and training system powered by 3D hand posture verification, by Zhang Yihong from Shanghai World Foreign Language Academy, stood out from many excellent projects and won the first prize. Leveraging 3D hand posture verification, the project aims to deliver a low-cost, easy-to-operate product for patients with hand movement disorders, achieving 89.9% accuracy in the assessment of hand rehabilitation and training.

Lin Junqiu, Deputy Director of the Science and Education Department at the Shanghai Science, Art and Education Center, said, "Artificial intelligence is critical to our future. As we continue to advance technology development, we must cultivate a larger pool of AI talent with even higher levels of expertise and innovation capability. The huge opportunities brought by the AI era will not only facilitate transformative applications across industry verticals and scenarios but also foster optimal collaboration between human beings and artificial intelligence."

Lynn Dai, General Manager of SenseTime's Education Product, said at the final competition, "AI has become an important driving force for technological innovation, and we believe the IAIF can provide an innovative platform for young people to develop their interest in AI. Meanwhile, SenseTime Education is dedicated to nurturing young talent and broadening their horizons with advanced insights from an industry perspective, as well as preparing them for the AI-empowered future."

IAIF is also providing comprehensive services for participants, from scientific innovation training to project incubation, helping them solve practical industrial problems. The IAIF organizing committee hosted a four-week AI training course for students before the final competition. The students from the most outstanding project teams will have the chance to participate in other national or international competitions. In addition, the students from the most outstanding IAIF projects will participate in a roadshow training workshop for startups as part of incubator programmes organized by SenseTime; the company will provide technology for high-potential projects.

"IAIF provided me with a unique opportunity to exchange ideas on this exciting AI topic with participants from different schools around the world," said Wu Keyu, the winner of the grand prize. "Through this competition, I have gained a better understanding of the powerful impact from AI and humans working together to build novel solutions that will create a better tomorrow for human society."

The success of the 3rd International Artificial Intelligence Fair not only marks the formation of the foundations for the AI education ecosystem developed by the Shanghai Xuhui Education Bureau and SenseTime, but also boosts the collaboration among governments, academia, enterprises and industries in AI technology innovation. In the future, SenseTime Education will continue to act as a focal point and a platform for cultivating future AI talents.

About SenseTime

SenseTime is a leading AI software company focused on creating a better AI-empowered future through innovation. Upholding a vision of advancing the interconnection of the physical and digital worlds with AI, driving sustainable productivity growth and seamless interactive experiences, SenseTime is committed to advancing the state of the art in AI research, developing scalable and affordable AI software platforms that benefit businesses, people and society, and attracting and nurturing top talents, shaping the future together.

With our roots in the academic world, we invest in our original and cutting-edge research that allows us to offer and continuously improve industry-leading, full-stack AI capabilities, covering key fields across perception intelligence, decision intelligence, AI-enabled content generation and AI-enabled content enhancement, as well as key capabilities in AI chips, sensors and computing infrastructure. Our proprietary AI infrastructure, SenseCore, allows us to develop powerful and efficient AI software platforms that are scalable and adaptable for a wide range of applications.

Today, our technologies are trusted by customers and partners in many industry verticals including Smart Business, Smart City, Smart Life and Smart Auto.

We have offices in markets including Hong Kong, Mainland China, Taiwan, Macau, Japan, Singapore, Saudi Arabia, the United Arab Emirates, Malaysia, and South Korea, etc., as well as presences in Thailand, Indonesia and the Philippines. For more information, please visit SenseTime's website as well as its LinkedIn, Twitter and Facebook pages.

SOURCE SenseTime


Alation Acquires Artificial Intelligence Vendor Lyngo Analytics – Business Wire

REDWOOD CITY, Calif.--(BUSINESS WIRE)--Alation Inc., the leader in enterprise data intelligence solutions, today announced the acquisition of Lyngo Analytics, a Los Altos, Calif.-based data insights company. The acquisition will elevate the business user experience within the data catalog, scale data intelligence, and help organizations drive data culture. Lyngo Analytics CEO and co-founder Jennifer Wu and CTO and co-founder Joachim Rahmfeld will join the company.

Lyngo Analytics uses a natural language interface to empower users to discover data and insights by asking questions using simple, familiar business terms. Alation offers the most intelligent and user-friendly machine-learning data catalog on the market. And by integrating Lyngo Analytics' artificial intelligence (AI) and machine-learning (ML) technology into its platform, Alation deepens its support for the non-technical user, converting natural language questions into SQL.

The integration lowers the barrier to entry for business users. Now, they can acquire and develop data-driven insights from across an enterprise's broad range of data sources. This means even data consumers without SQL expertise can ask questions in natural language and find data and insights without the support of data analysts. The acquisition will help organizations drive data culture by putting data and analytics into the hands of the masses.
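To illustrate the general shape of natural-language-to-SQL translation, here is a deliberately tiny, rule-based Python sketch. It is not Alation's or Lyngo's technology (their system is ML-driven); the question pattern and the "sales" table are invented for the example:

import re

# Toy natural-language-to-SQL translator: a business question in, SQL out.
def question_to_sql(question: str) -> str:
    # e.g. "total sales by region" -> aggregate with a GROUP BY
    match = re.match(r"total (\w+) by (\w+)", question.lower())
    if match:
        measure, dimension = match.groups()
        return (f"SELECT {dimension}, SUM({measure}) AS total_{measure} "
                f"FROM sales GROUP BY {dimension};")
    raise ValueError("question not understood by this toy translator")

print(question_to_sql("Total sales by region"))
# SELECT region, SUM(sales) AS total_sales FROM sales GROUP BY region;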

Wu will join Alation as Senior Director of Product Management, where she will be responsible for product strategy and delivery for natural language data search, discovery, and exploration experiences. Rahmfeld, who is also a part-time, graduate-level deep learning and natural language processing lecturer at UC Berkeley's Master of Information and Data Science Program, will be Senior Director of AI/ML Research. He will be responsible for Alation's AI and machine learning center of excellence, building both platform and application experiences that leverage AI and ML to enhance Alation's value for business and technical users.

"Alation created the first machine learning data catalog, and we're known for providing the most user-friendly interface on the market," said Raj Gossain, Chief Product Officer, Alation. "With this acquisition, we're building on the best. We're doubling down on key aspects of the platform that will help drive data culture and spur innovation and growth. Jennifer and Joachim developed a unique solution for a complex data and analytics issue, and I'm excited to welcome them to the Alation team."

The acquisition is the latest milestone for Alation, which announced a $110 million Series D funding round and a $1.2 billion market valuation in June 2021. Alation is growing quickly, earning the trust of nearly 300 customers, including leading global brands such as Cisco, Exelon, GE Aviation, Munich Re, NASDAQ, and Pfizer. The company has more than 450 employees globally and is hiring. Recently, Alation was named a leader in The Forrester Wave: Data Governance Solutions, Q3 2021 report and Snowflake's Data Governance Partner of the Year.


About Alation

Alation is the leader in enterprise data intelligence solutions including data search & discovery, data governance, data stewardship, analytics, and digital transformation. Alation's initial offering dominates the data catalog market. Thanks to its powerful Behavioral Analysis Engine, inbuilt collaboration capabilities, and open interfaces, Alation combines machine learning with human insight to successfully tackle even the most demanding challenges in data and metadata management. Nearly 300 enterprises drive data culture, improve decision making, and realize business outcomes with Alation, including AbbVie, American Family Insurance, Cisco, Exelon, Fifth Third Bank, Finnair, Munich Re, NASDAQ, New Balance, Parexel, Pfizer, US Foods and Vistaprint. Headquartered in Silicon Valley, Alation was named to Inc. Magazine's Best Workplaces list and is backed by leading venture capitalists including Blackstone, Costanoa, Data Collective, Dell Technologies, Icon, ISAI Cap, Riverwood, Salesforce, Sanabil, Sapphire, and Snowflake Ventures. For more information, visit alation.com.


Machine Learning Tutorial | Machine Learning with Python …

Machine Learning tutorial provides basic and advanced concepts of machine learning. Our machine learning tutorial is designed for students and working professionals.

Machine learning is a growing technology which enables computers to learn automatically from past data. Machine learning uses various algorithms for building mathematical models and making predictions using historical data or information. Currently, it is being used for various tasks such as image recognition, speech recognition, email filtering, Facebook auto-tagging, recommender system, and many more.

This machine learning tutorial gives you an introduction to machine learning along with the wide range of machine learning techniques such as Supervised, Unsupervised, and Reinforcement learning. You will learn about regression and classification models, clustering methods, hidden Markov models, and various sequential models.

In the real world, we are surrounded by humans who can learn everything from their experiences, and we have computers or machines which work on our instructions. But can a machine also learn from experiences or past data like a human does? This is where machine learning comes in.

Machine learning is a subset of artificial intelligence that is mainly concerned with the development of algorithms that allow a computer to learn on its own from data and past experiences. The term machine learning was first introduced by Arthur Samuel in 1959. In a summarized way, we can define it as follows: machine learning enables a machine to learn automatically from data, improve its performance with experience, and predict outcomes without being explicitly programmed.

With the help of sample historical data, which is known as training data, machine learning algorithms build a mathematical model that helps in making predictions or decisions without being explicitly programmed. Machine learning brings computer science and statistics together to create predictive models. Machine learning constructs or uses algorithms that learn from historical data. The more information we provide, the better the performance.

A machine has the ability to learn if it can improve its performance by gaining more data.

A machine learning system learns from historical data, builds prediction models, and, whenever it receives new data, predicts the output for it. The accuracy of the predicted output depends upon the amount of data: a large amount of data helps build a better model that predicts the output more accurately.
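As a quick illustration, here is a minimal Python example using scikit-learn. The model learns a pattern from historical data and then predicts the output for new, unseen data (the numbers are invented for illustration):

from sklearn.linear_model import LinearRegression

# Historical (training) data: advertising spend in thousands (feature)
# and resulting sales in thousands (label). Values are illustrative.
X_history = [[10], [20], [30], [40], [50]]
y_history = [25, 45, 65, 85, 105]

model = LinearRegression()
model.fit(X_history, y_history)   # the system learns from historical data

# New data arrives: the model predicts the output for it.
print(model.predict([[60]]))      # -> [125.] for this linear pattern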

Suppose we have a complex problem where we need to perform some predictions. Instead of writing code for it, we just need to feed the data to generic algorithms, and with the help of these algorithms, the machine builds the logic as per the data and predicts the output. Machine learning has changed our way of thinking about such problems.

The need for machine learning is increasing day by day. The reason is that machine learning is capable of doing tasks that are too complex for a person to implement directly. As humans, we have limitations: we cannot access huge amounts of data manually. For this we need computer systems, and this is where machine learning makes things easy for us.

We can train machine learning algorithms by providing them huge amounts of data, letting them explore the data, construct models, and predict the required output automatically. The performance of a machine learning algorithm depends on the amount of data, and it can be determined by the cost function. With the help of machine learning, we can save both time and money.

The importance of machine learning can be easily understood by its use cases. Currently, machine learning is used in self-driving cars, cyber fraud detection, face recognition, friend suggestions on Facebook, and more. Various top companies such as Netflix and Amazon have built machine learning models that use vast amounts of data to analyze user interest and recommend products accordingly.

Following are some key points which show the importance of Machine Learning:

At a broad level, machine learning can be classified into three types: supervised learning, unsupervised learning, and reinforcement learning.

Supervised learning is a type of machine learning method in which we provide sample labeled data to the machine learning system in order to train it, and on that basis, it predicts the output.

The system creates a model using labeled data to understand the datasets and learn about each one; once training and processing are done, we test the model by providing sample data to check whether it predicts the correct output.

The goal of supervised learning is to map input data to output data. Supervised learning is based on supervision, much as a student learns under the supervision of a teacher. An example of supervised learning is spam filtering.

Supervised learning can be further grouped into two categories of algorithms: classification and regression.
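For instance, here is a minimal classification example in Python with scikit-learn, using the spam-filtering idea from above (the features and labels are invented for illustration):

from sklearn.tree import DecisionTreeClassifier

# Labeled training data: each email is described by two features
# (number of links, number of spam trigger words) and a label
# (1 = spam, 0 = not spam).
X_train = [[0, 0], [1, 0], [8, 5], [6, 7], [0, 1], [9, 9]]
y_train = [0, 0, 1, 1, 0, 1]

classifier = DecisionTreeClassifier()
classifier.fit(X_train, y_train)              # training under supervision

# Test the model with sample data to check whether it predicts correctly.
print(classifier.predict([[7, 6], [0, 0]]))   # -> [1 0]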

Unsupervised learning is a learning method in which a machine learns without any supervision.

The training is provided to the machine with the set of data that has not been labeled, classified, or categorized, and the algorithm needs to act on that data without any supervision. The goal of unsupervised learning is to restructure the input data into new features or a group of objects with similar patterns.

In unsupervised learning, we don't have a predetermined result. The machine tries to find useful insights from a huge amount of data. It can be further classified into two categories of algorithms: clustering and association.
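Here is a minimal clustering example in Python with scikit-learn. No labels are provided; the algorithm groups similar records on its own (the data is invented for illustration):

from sklearn.cluster import KMeans

# Unlabeled data: customers described by (annual purchases, store visits).
X = [[2, 1], [3, 2], [2, 3], [30, 28], [32, 31], [29, 30]]

# Ask for two clusters; the algorithm discovers the groups itself.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
print(kmeans.fit_predict(X))   # e.g. [0 0 0 1 1 1]: two discovered groups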

Reinforcement learning is a feedback-based learning method, in which a learning agent gets a reward for each right action and a penalty for each wrong action. The agent learns automatically from this feedback and improves its performance. In reinforcement learning, the agent interacts with the environment and explores it. The goal of the agent is to get the most reward points, and in doing so, it improves its performance.

A robotic dog that automatically learns the movement of its arms is an example of reinforcement learning.
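As a small taste of how this works in code, here is a minimal Q-learning sketch in Python. It is a toy stand-in for the trial-and-error learning described above; the environment, rewards, and hyperparameters are invented for illustration:

import random

# An agent on a 1-D track of 5 cells learns to walk right toward a reward
# at the last cell; every wasted step is penalized.
n_states, actions = 5, [-1, +1]          # move left / move right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(500):                      # training episodes
    state = 0
    while state != n_states - 1:
        action = (random.choice(actions) if random.random() < epsilon
                  else max(actions, key=lambda a: q[(state, a)]))
        nxt = min(max(state + action, 0), n_states - 1)
        reward = 10 if nxt == n_states - 1 else -1   # reward vs. penalty
        q[(state, action)] += alpha * (
            reward + gamma * max(q[(nxt, a)] for a in actions)
            - q[(state, action)]
        )
        state = nxt

# After training, the learned policy is to move right (+1) in every state.
print([max(actions, key=lambda a: q[(s, a)]) for s in range(n_states - 1)])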

Some 40-50 years ago, machine learning was science fiction, but today it is part of our daily life. Machine learning is making everyday life easier, from self-driving cars to Amazon's virtual assistant "Alexa". The idea behind machine learning, however, is old and has a long history.

Machine learning research has now advanced greatly, and machine learning is present everywhere around us: in self-driving cars, Amazon Alexa, chatbots, recommender systems, and many more. It includes supervised, unsupervised, and reinforcement learning, with clustering, classification, decision tree, and SVM algorithms, among others.

Modern machine learning models can be used for making various predictions, including weather prediction, disease prediction, stock market analysis, etc.

Before learning machine learning, you should have basic knowledge of a few prerequisites so that you can easily understand the concepts of machine learning, such as fundamental probability and linear algebra, and the ability to code in Python.

Our machine learning tutorial is designed to help beginners and professionals.

We assure you that you will not find any difficulty while learning our machine learning tutorial. But if there is any mistake in this tutorial, kindly report the problem or error via the contact form so that we can improve it.


Machine learning in the cloud is helping businesses innovate – MIT Technology Review

In the past decade, machine learning has become a familiar technology for improving the efficiency and accuracy of processes like recommendations, supply chain forecasting, developing chatbots, image and text search, and automated customer service functions, to name a few. Machine learning today is becoming even more pervasive, impacting every market segment and industry, including manufacturing, SaaS platforms, health care, reservations and customer support routing, natural language processing (NLP) tasks such as intelligent document processing, and even food services.

Take the case of Domino's Pizza, which has been using machine learning tools created to improve efficiencies in pizza production. "Domino's had a project called Project 3/10, which aimed to have a pizza ready for pickup within three minutes of an order, or have it delivered within 10 minutes of an order," says Dr. Bratin Saha, vice president and general manager of machine learning services for Amazon AI. "If you want to hit those goals, you have to be able to predict when a pizza order will come in. They use predictive machine learning models to achieve that."

The recent rise of machine learning across diverse industries has been driven by improvements in other technological areas, says Saha, not the least of which is the increasing compute power in cloud data centers.

"Over the last few years," explains Saha, "the amount of total compute that can be thrown at machine learning problems has been doubling almost every four months. That's 5 to 6 times more than Moore's Law." As a result, a lot of functions that once could only be done by humans, things like detecting an object or understanding speech, are being performed by computers and machine learning models.


The current machine learning use cases that help companies optimize the value of their data to perform tasks and improve products are just the beginning, Saha says.

"Machine learning is just going to get more pervasive," he says. "Companies will see that they're able to fundamentally transform the way they do business. They'll see they are fundamentally transforming the customer experience, and they will embrace machine learning."


Laurel Ruma: From MIT Technology Review, I'm Laurel Ruma. This is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is machine learning in the cloud. Across all industries, the exponential increase in data collection demands faster and novel ways to analyze data, but also to learn from it to make better business decisions. This is how machine learning in the cloud helps fuel innovation for enterprises, from startups to legacy players.

Two words for you: data innovation. My guest is Dr. Bratin Saha, vice president and general manager of machine learning services for Amazon AI. He has held executive roles at NVIDIA and Intel. This episode of Business Lab is produced in association with AWS. Welcome, Bratin.

Dr. Bratin Saha: Thank you for having me, Laurel. It's great to be here.

Laurel: Off the top, could you give some examples of how AWS customers are using machine learning to solve their business problems?

Bratin: Let's start with the definition of what we mean by machine learning. Machine learning is a process where a computer and an algorithm can use data, usually historical data, to understand patterns, and then use that information to make predictions about the future. Businesses have been using machine learning to do a variety of things, like personalizing recommendations, improving supply chain forecasting, making chatbots, using it in health care, and so on.

For example, Autodesk was able to use the machine learning infrastructure we have for their chatbots to improve their ability to handle requests by almost five times. They were able to use the improved chatbots to address more than 100,000 customer questions per month.

Then there's NerdWallet. NerdWallet is a personal finance startup that did not previously personalize the recommendations it gave to customers based on their preferences. They're now using AWS machine learning services to tailor the recommendations to what a person actually wants to see, which has significantly improved their business.

Then we have customers like Thomson Reuters. Thomson Reuters is one of the world's most trusted providers of answers, with teams of experts. They use machine learning to mine data to connect and organize information to make it easier for them to provide answers to questions.

In the financial sector, we have seen a lot of uptake in machine learning applications. One company, for example, a payment service provider, was able to build a fraud detection model in just 30 minutes.

The reason I'm giving you so many examples is to show how machine learning is becoming pervasive. It's going across geos, going across market segments, and being used by companies of all kinds. I have a few other examples I want to share to show how machine learning is also touching industries like manufacturing, food delivery, and so on.

Domino's Pizza, for example, had a project called Project 3/10, where they wanted to have a pizza ready for pickup within three minutes of an order, or have it delivered within 10 minutes of an order. If you want to hit those goals, you have to be able to predict when a pizza order will come in. They use machine learning models to look at the history of orders. Then they use the machine learning model that was trained on that order history. They were then able to use that to predict when an order would come in, and they were able to deploy this to many stores, and they were able to hit the targets.
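[Demand prediction of this kind is, at heart, supervised learning on order history. A minimal Python sketch with invented numbers follows; Domino's actual features and models are not public beyond what's described in this conversation.]

from sklearn.ensemble import RandomForestRegressor

# Hypothetical order history: (hour of day, is_weekend) -> orders received.
X_history = [[11, 0], [12, 0], [18, 0], [19, 0], [12, 1], [18, 1], [19, 1]]
y_orders = [10, 25, 40, 45, 35, 60, 65]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_history, y_orders)

# Predict demand for a weekend at 7 p.m. so the kitchen can start prep early.
print(model.predict([[19, 1]]))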

Machine learning has become pervasive in how our customers are doing business. It's starting to be adopted in virtually every industry. We have several hundred thousand customers using our machine learning services. One of our machine learning services, Amazon SageMaker, has been one of the fastest growing services in AWS history.

Laurel: Just to recap, customers can use machine learning services to solve a number of problems. Some of the high-level problems would be a recommendation engine, image search, text search, and customer service, but then, also, to improve the quality of the product itself.

I like the Domino's Pizza example. Everyone understands how a pizza business may work. But if the goal is to turn pizzas around as quickly as possible, to increase that customer satisfaction, Domino's had to be in a place to collect data, be able to analyze that historic data on when orders came in, how quickly they turned around those orders, how often people ordered what they ordered, et cetera. That was what the prediction model was based on, correct?

Bratin: Yes. You asked a question about how we think about machine learning services. If you look at the AWS machine learning stack, we think about it as a three-layered service. The bottom layer is the machine learning infrastructure.

What I mean by this is when you have a model, you are training the model to predict something. Then the predictions are where you do this thing called inference. At the bottom layer, we provide the most optimized infrastructure, so customers can build their own machine learning systems.

Then there's a layer on top of that, where customers come and tell us, "You know what? I just want to be focused on the machine learning. I don't want to build a machine learning infrastructure." This is where Amazon SageMaker comes in.

Then there's a layer on top of that, which is what we call AI services, where we have pre-trained models that can be used for many use cases.

So, we look at machine learning as three layers. Different customers use services at different layers, based on what they want, based on the kind of data science expertise they have, and based on the kind of investments they want to make.

The other part of our view goes back to what you mentioned at the beginning, which is data and innovation. Machine learning is fundamentally about gaining insights from data, and using those insights to make predictions about the future. Then you use those predictions to derive business value.

In the case of Domino's Pizza, there is data around historical order patterns that can be used to predict future order patterns. The business value there is improving customer service by getting orders ready in time. Another example is Freddy's Frozen Custard, which used machine learning to customize menus. As a result of that, they were able to get a double-digit increase in sales. So, it's really about having data, and then using machine learning to gain insights from that data. Once you've gained insights from that data, you use those insights to drive better business outcomes. This goes back to what you mentioned at the beginning: you start with data and then you use machine learning to innovate on top of it.

Laurel: What are some of the challenges organizations have as they start their machine learning journeys?

Bratin: The first thing is to collect data and make sure it is structured well: clean data that doesn't have a lot of anomalies. Then, because machine learning models typically get better if you can train them with more and more data, you need to continue collecting vast amounts of data. We often see customers create data lakes in the cloud, on Amazon S3, for example. So, the first step is getting your data in order and then potentially creating data lakes in the cloud that you can use to feed your data-based innovation.
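[Feeding such a data lake can be as simple as landing files in S3 under a structured prefix. A minimal Python sketch with the AWS SDK (boto3) follows, assuming credentials are configured; the bucket name and paths are placeholders.]

import boto3

s3 = boto3.client("s3")

# Land a raw transaction export under a partitioned prefix so it can be
# cataloged and fed to training jobs later.
s3.upload_file(
    Filename="orders-2021-10-01.csv",
    Bucket="my-company-datalake",
    Key="raw/orders/year=2021/month=10/orders-2021-10-01.csv",
)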

The next step is to get the right infrastructure in place. That is where some customers say, "Look, I want to just build the whole infrastructure myself," but the vast majority of customers say, "Look, I just want to be able to use a managed service because I don't want to have to invest in building the infrastructure and maintaining the infrastructure, and so on."

The next step is to choose a business case. If you haven't done machine learning before, then you want to get started with a business case that leads to a good business outcome. What can often happen with machine learning is that you see it's cool and do some really cool demos, but those don't translate into business outcomes, so you start experiments and you don't really get the support that you need.

Finally, you need commitment because machine learning is a very iterative process. You're training a model. The first model you train may not get you the results you desire. There's a process of experimentation and iteration that you have to go through, and it can take you a few months to get results. So, putting together a team and giving them the support they need is the final part.

If I had to put this in terms of a sequence of steps, it's important to have data and a data culture. It's important in most cases for customers to choose to use a managed service to build and train their models in the cloud, simply because you get storage a lot easier and you get compute a lot easier. The third is to choose a use case that is going to have business value, so that your company knows this is something that you want to deploy at scale. And then, finally, be patient and be willing to experiment and iterate, because it often takes a little bit of time to get the data you need to train the models well and actually get the business value.

Laurel: Right, because it's not something that happens overnight.

Bratin: It does not happen overnight.

Laurel: How do companies prepare to take advantage of data? Because, like you said, this is a four-step process, but you still have to have patience at the end to be iterative and experimental. For example, do you have ideas on how companies can think about their data in ways that make them better prepared to see success, perhaps with their first experiment, and then perhaps be a little bit more adventurous as they try other data sets or other ways of approaching the data?

Bratin: Yes. Companies usually start with a use case where they have a history of having good data. What I mean by a history of having good data is that they have a record of transactions that have been made, and most of the records are accurate. For example, you don't have a lot of empty record transactions.

Typically, we have seen that the level of data maturity varies between different parts of a company. You start with the part of a company where the data culture is a lot more prevalent. You start from there so that you have a record of historical transactions that you stored. You really want to have fairly dense data to use to train your models.

Laurel: Why is now the right time for companies to start thinking about deploying machine learning in the cloud?

Bratin: I think there is a confluence of factors happening now. One is that machine learning over the last five years has really taken off. That is because the amount of compute available has been increasing at a very fast rate. If you go back to the IT revolution, the IT revolution was driven by Moore's Law. Under Moore's Law, compute doubled every 18 months.

Over the last few years, the amount of total compute has been doubling almost every four months. That's five times more than Moore's Law. The amount of progress we have seen in the last four to five years has been really amazing. As a result, a lot of functions that once could only be done by humans, like detecting an object or understanding speech, are being performed by computers and machine learning models. As a result of that, a lot of capabilities are getting unleashed. That is what has led to this enormous increase in the applicability of machine learning: you can use it for personalization, you can use it in health care and finance, you can use it for tasks like churn prediction, fraud detection, and so on.

One reason that now is a good time to get started on machine learning in the cloud is just the enormous amount of progress in the last few years that is unleashing these new capabilities that were previously not possible.

The second reason is that a lot of the machine learning services being built in the cloud are making machine learning accessible to a lot more people. Even if you look at four to five years ago, machine learning was something that only very expert practitioners could do and only a handful of companies were able to do because they had expert practitioners. Today, we have more than a hundred thousand customers using our machine learning services. That tells you that machine learning has been democratized to a large extent, so that many more companies can start using machine learning and transforming their business.

Then comes the third reason, which is that you have amazing capabilities that are now possible, and you have cloud-based tools that are democratizing these capabilities. The easiest way to get access to these tools and these capabilities is through the cloud because, first, it provides the foundation of compute and data. Machine learning is, at its core, about throwing a lot of compute on data. In the cloud, you get access to the latest compute. You pay as you go, and you don't have to make upfront huge investments to set up compute farms. You also get all the storage and the security and privacy and encryption, and so onall of that core infrastructure that is needed to get machine learning going.

Laurel: So Bratin, how does AWS innovate to help organizations with machine learning, model training, and inference?

Bratin: At AWS, everything we do works back from the customer: figuring out how we reduce their pain points and how we make it easier for them to do machine learning. At the bottom of the stack of machine learning services, we are innovating on the machine learning infrastructure so that we can make it cheaper and faster for customers to do machine learning. There we have two AWS innovations. One is Inferentia and the other is Trainium. These are custom chips that we designed at AWS that are purpose-built for inference, which is the process of making machine learning predictions, and for training. Inferentia today provides the lowest-cost inference instances in the cloud. And Trainium, when it becomes available later this year, will provide the most powerful and the most cost-effective training instances in the cloud.

We have a number of customers using Inferentia today. Autodesk uses Inferentia to host their chatbot models, and they were able to improve the cost and latencies by almost five times. Airbnb has over four million hosts who welcome more than 900 million guests in almost every country. Airbnb saw a two-times improvement in throughput by using the Inferentia instances, which means that they were able to serve almost twice as many requests for customer support than they would otherwise have been able to do. Another company called Sprinklr develops a SaaS customer experience platform, and they have an AI-driven unified customer experience management platform. They were able to deploy the natural language processing models in Inferentia, and they saw significant performance improvements as well.

Even internally, our Alexa team was able to move their inferences over from GPUs to Inferentia-based systems, and they saw more than a 50% improvement in cost due to these Inferentia-based systems. So, we have that at the lowest layer of the infrastructure. On top of that, we have the managed services, where we are innovating so that customers become a lot more productive. That is where we have SageMaker Studio, which is the world's first integrated development environment for machine learning, offering tools like debuggers and profilers and explainability, and a host of other tools, like a visual data preparation tool, that make customers a lot more productive. At the top of it, we have AI services, where we provide pre-trained models for use cases like search and document processing (Kendra for search, Textract for document processing, image and video recognition), where we are innovating to make it easier for customers to address these use cases right out of the box.

Laurel: So, there are some benefits, for sure, for machine learning services in the cloud, like improved customer service, improved quality, and, hopefully, increased profit. But what key performance indicators are important for the success of machine learning projects, and why are these particular indicators so important?

Bratin: We are working back from the customer, working back from the pain points based on what customers tell us, and inventing on behalf of the customers to see how we can innovate to make it easier for them to do machine learning. One part of machine learning, as I mentioned, is predictions. Often, the big infrastructure cost in machine learning is in the inference. That is why we came out with Inferentia; Inferentia-based instances are today the most cost-effective machine learning instances in the cloud. So, we are innovating at the hardware level.

We also announced Trainium. That will be the most powerful and the most cost-effective training instance in the cloud. So, we are first innovating at the infrastructure layer so that we can provide customers with the most cost-effective compute.

Next, we have been looking at the pain points of what it takes to build an ML service. You need data collection services, you need a way to set up a distributed infrastructure, you need a way to set up an inference system and be able to auto scale it, and so on. We have been thinking a lot about how to build this infrastructure and innovation around the customers.

Then we have been looking at some of the use cases. So, for a lot of these use cases, whether it be search, or object recognition and detection, or intelligent document processing, we have services that customers can directly use. And we continue to innovate on behalf of them. I'm sure we'll come up with a lot more features this year and next to see how we can make it easier for our customers to use machine learning.

Laurel: What key performance indicators are important for the success of machine learning projects? We talked a little bit about how you like to improve customer service and quality, and of course increase profit, but to assign a KPI to a machine learning model, that's something a bit different. And why are they so important?

Bratin: To assign the KPIs, you need to work back from your use case. So, let's say you want to use machine learning to reduce fraud. Your overall KPI is, what was the reduction in fraud detection? Or let's say you want to use it for churn reduction. You are running a business, your customers are coming, but a certain number of them are churning off. You want to then start with, how do I reduce my customer churn by some percent? So, you start with the top-level KPI, which is a business outcome that you want to achieve, and how to get an improvement in that business outcome.

Let's take the churn prediction example. At the end of the day, what is happening is you have a machine learning model that is using data and the amount of training it had to make certain predictions around which customer is going to churn. That boils down, then, to the accuracy of the model. If the model is saying 100 people are going to churn, how many of them actually churn? So, that becomes a question of accuracy. And then you also want to look at how well the machine learning model detected all the cases.

So, there are two aspects of quality that you're looking for. One is, of the things that the model predicted, how many of them actually happened? Let's say this model predicted these 100 customers are going to churn. How many of them actually churn? And let's just say 95 of them actually churn. So, you have a 95% precision there. The other aspect is, suppose you're running this business and you have 1,000 customers. And let's say in a particular year, 200 of them churned. How many of those 200 did the model predict would actually churn? That is called recall, which is, given the total set, how much is the machine learning model able to predict? So, fundamentally, you start from this business metric, which is what is the outcome I want to get, and then you can convert this down into model accuracy metrics in terms of precision, which is how accurate was the model in predicting certain things, and then recall, which is how exhaustive or how comprehensive was the model in detecting all situations.

So, at a high level, these are the things you're looking for. And then you'll go down to lower-level metrics. The models are running on certain instances on certain pieces of compute: what was the infrastructure cost and how do I reduce those costs? These services, for example, are being used to handle surges during Prime Day or Black Friday, and so on. So, then you get to those lower-level metrics, which is, am I able to handle surges in traffic? It's really a hierarchical set of KPIs. Start with the business metric, get down to the model metrics, and then get down to the infrastructure metrics.
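[The precision and recall Saha describes are straightforward to compute. A short Python sketch recreates his churn numbers with made-up labels: 1,000 customers, 200 actual churners, 100 flagged by the model, 95 of them correctly.]

from sklearn.metrics import precision_score, recall_score

# 1 = churned, 0 = stayed. The first 200 customers actually churn.
y_true = [1] * 200 + [0] * 800
# The model flags 100 customers: 95 true positives and 5 false positives;
# the remaining 105 actual churners are missed.
y_pred = [1] * 95 + [0] * 105 + [1] * 5 + [0] * 795

print(precision_score(y_true, y_pred))   # 95 / (95 + 5) = 0.95
print(recall_score(y_true, y_pred))      # 95 / 200      = 0.475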

Laurel: When you think about machine learning in the cloud in the next three to five years, what are you seeing? What are you thinking about? What can companies do now to prepare for what will come?

Bratin: I think machine learning will get more pervasive, because customers will see that they're able to fundamentally transform the way they do business. Companies will see that they are fundamentally transforming the customer experience, and they will embrace machine learning. We have seen that at Amazon as well: we have a long history of investing in machine learning. We have been doing this for more than 20 years, and we have changed how we serve customers with amazon.com, Alexa, Amazon Go, and Prime. And now, with AWS, we are taking the knowledge we have gained over the past two decades of deploying machine learning at scale and making it available to our customers. So, I do think we will see a much more rapid uptake of machine learning.

Then we'll see a lot of broad use cases like intelligent document processing: a lot of paper-based processing will become automated, because a machine learning model is now able to scan those documents and infer information from them, and semantic information at that, not just the syntax. If you think of paper-based processes, whether it's loan processing or mortgage processing, a lot of that will get automated. We are also seeing businesses get a lot more efficient through personalization and through forecasting: supply chain forecasting, demand forecasting, and so on.

We are seeing a lot of uptake of machine learning in health. GE, for example, uses a machine learning service for radiology: it uses machine learning to scan radiology images to determine which ones are more serious, so that those patients can be seen early. We are also seeing potential and opportunity for using machine learning in genomics for precision medicine. So, I do think a lot of innovation is going to happen with machine learning in health care.

We'll see a lot of machine learning in manufacturing. A lot of manufacturing processes will become more efficient, get automated, and become safer because of machine learning.

So, in the next five to 10 years, pick any domain. Take sports: the NFL, NASCAR, and the Bundesliga are all using our machine learning services. The NFL uses Amazon SageMaker to give its fans a more immersive experience through Next Gen Stats. The Bundesliga uses our machine learning services to make a range of predictions and provide a much more immersive experience. Same with NASCAR: NASCAR has a lot of historical data from its races, and it's using that to train models to provide a much more immersive experience to its viewers, because it can predict much more easily what's going to happen. So, across sports, entertainment, financial services, health care, and manufacturing, I think we'll see a lot more uptake of machine learning, making the world a smarter, healthier, and safer place.

Laurel: What a great conversation. Thank you very much, Bratin, for joining us on Business Lab.

Bratin: Thank you. Thank you for having me. It was really nice talking to you.

Laurel: That was Dr. Bratin Saha, Vice President and General Manager of Machine Learning Services for Amazon AI, whom I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That's it for this episode of Business Lab. I'm your host, Laurel Ruma. I'm the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. You can also find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you'll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

Read the rest here:
Machine learning in the cloud is helping businesses innovate - MIT Technology Review


Learn about machine learning and the fundamentals of AI with free Raspberry Pi course – Geeky Gadgets

On this four-week course from the Raspberry Pi Foundation, you'll learn about different types of machine learning and use online tools to train your own AI models. You'll delve into the problems that ML can help to solve, discuss how AI is changing the world, and think about the ethics of collecting data to train an ML model. For teachers and educators it's particularly important to have a good foundational knowledge of AI and ML, as they need to teach young people what these technologies are and how they impact their lives. (We've also got a free seminar series about teaching these topics.)

The first week of the course will guide you through how you can use machine learning to label data, whether to work out if a comment is positive or negative or to identify the contents of an image. Then you'll look at algorithms that create models giving a numerical output, such as predicting house prices based on information about the house and its surroundings. You'll also explore other types of machine learning that are designed to discover connections and groupings in data that humans would likely miss, giving you a deeper understanding of how machine learning can be used.
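
For readers who want to see those three flavours side by side in code, here is a minimal sketch using scikit-learn's toy datasets. The course itself uses online tools rather than code, so the models and datasets below are illustrative choices, not the course's materials.

```python
# The three kinds of machine learning described above, in miniature:
# classification (labelling data), regression (numerical output), and
# clustering (finding groupings without labels).

from sklearn.cluster import KMeans
from sklearn.datasets import load_iris, make_blobs, make_regression
from sklearn.linear_model import LinearRegression, LogisticRegression

# 1. Labelling data: predict which category each sample belongs to.
X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("classification accuracy:", clf.score(X, y))

# 2. Numerical output: e.g. predicting a price from known features.
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
reg = LinearRegression().fit(X, y)
print("regression R^2:", round(reg.score(X, y), 3))

# 3. Groupings humans might miss: cluster unlabelled points.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((labels == k).sum()) for k in range(3)])
```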

To register for the course for free, jump over to the official course page by following the link below.

Source: RPiF

Here is the original post:
Learn about machine learning and the fundamentals of AI with free Raspberry Pi course - Geeky Gadgets


Sleepy Hollow Teen Receives National Scholarship for Development of New Machine Learning Techniques – River Journal Staff

Owen Dugan

Owen Dugan awarded $10,000 as a 2021 Davidson Fellow Scholarship Winner

The Davidson Fellows Scholarship Program has announced the 2021 scholarship winners. Among the honorees is 18-year-old Owen Dugan of Sleepy Hollow. Only twenty students across the country are recognized as scholarship winners each year.

"I am honored to be a Davidson Fellow, to have my work nationally recognized, and to join the Davidson Fellows community," said Dugan.

For his project, Dugan developed several new techniques to improve and expand the scope of OccamNet, a new interpretable neural network architecture, with the goal of increasing adoption of interpretable and reliable machine learning techniques.

"The 2021 Davidson Fellows Scholarship recipients have risen to the challenges of a global pandemic to complete significant projects within their fields of study," said Bob Davidson, founder of the Davidson Institute. "To be awarded this recognition, these students have shown immense skill and work ethic, and they should be commended as they continue their educational and research journeys while continuing to work to solve some of the world's most vexing problems."

The 2021 Davidson Fellows were honored during a virtual ceremony in September 2021. The ceremony can be viewed online at https://www.davidsongifted.org/gifted-programs/fellows-scholarship/fellows/fellows-ceremony/.

The Davidson Fellows Scholarship program offers $50,000, $25,000, and $10,000 college scholarships to students 18 or younger who have completed significant projects with the potential to benefit society in the fields of science, technology, engineering, mathematics, literature, and music. The Davidson Fellows Scholarship has provided more than $8.6 million in scholarship funds to 386 students since its inception in 2001, and has been named one of the most prestigious undergraduate scholarships by U.S. News & World Report. It is a program of the Davidson Institute, a national nonprofit organization headquartered in Reno, Nev., that supports profoundly gifted youth.

Read the original here:
Sleepy Hollow Teen Receives National Scholarship for Development of New Machine Learning Techniques - River Journal Staff
