
Artificial intelligence is struggling to cope with how the world has changed – ZDNet

From our attitude towards work to our grasp of what two metres look like, the coronavirus pandemic has made us rethink how we see the world. But while we've found it hard to adjust to the new reality, it's been even harder for the narrowly designed artificial intelligence models that have been created to help organisations make decisions. Based on data that described the world before the crisis, these models won't be making correct predictions anymore, pointing to a fundamental problem in the way AI is being designed.

David Cox, IBM director of the MIT-IBM Watson AI Lab, explains that faulty AI is particularly problematic in the case of so-called black box predictive models: those algorithms which work in ways that are not visible, or understandable, to the user. "It's very dangerous," Cox says, "if you don't understand what's going on internally within a model in which you shovel data on one end to get a result on the other end. The model is supposed to embody the structure of the world, but there is no guarantee that it will keep working if the world changes."

The COVID-19 crisis, according to Cox, has once again highlighted what AI experts have argued for decades: that algorithms should be more explainable.

SEE: How to implement AI and machine learning (ZDNet special report) | Download the report as a PDF (TechRepublic)

For example, if you were building a computer program that was a complete black box, aimed at predicting what the stock market would be like based on past data, there is no guarantee it's going to continue to produce good predictions in the current coronavirus crisis, he argues.

What you actually need to do is build a broader model of the economy that acknowledges supply and demand, understands supply-chains, and incorporates that knowledge, which is closer to something that an economist would do. Then you can reason about the situation more transparently, he says.

"Part of the reason why those models are hard to trust with narrow AIs is because they don't have that structure. If they did it would be much easier for a model to provide an explanation for why they are making decisions. These models are experiencing challenges now. COVID-19 has just made it very clear why that structure is important," he warns.

It's important not only because the technology would perform better and gain in reliability, but also because businesses would be far less reluctant to adopt AI if they trusted the tool more. Cox pulls out his own statistics on the matter: while 95% of companies believe that AI is key to their competitive advantage, only 5% say they've extensively implemented the technology.

While the numbers differ from survey to survey, the conclusion has been the same for some time now: there remains a significant gap between the promise of AI and its reality for businesses. And part of the reason that industry is struggling to deploy the technology boils down to a lack of understanding of AI. If you build a great algorithm but can't explain how it works, you can't expect workers to incorporate the new tool in their business flow. "If people don't understand or trust those tools, it's going to be a lost cause," says Cox.

Explaining AI is one of the main focuses of Cox's work. The MIT-IBM Watson Lab, which he co-directs, comprises 100 AI scientists across the US university and IBM Research, and is now in its third year of operation. The Lab's motto, which comes up first thing on its website, is self-explanatory: "AI science for real-world impact".

Back in 2017, IBM announced a $240 million investment over ten years to support research by the firm's own researchers, as well as MIT's, in the newly-founded Watson AI Lab. From the start, the collaboration's goal has had a strong industry focus, with an idea to unlock the potential of AI for "business and society". The lab's focus is not on "narrow AI", which is the technology in its limited format that most organizations know today; instead the researchers should be striving for "broad AI". Broad AI can learn efficiently and flexibly, across multiple tasks and data streams, and ultimately has huge potential for businesses. "Broad AI is next," is the Lab's promise.

The only way to achieve broad AI, explains Cox, is to bridge research and industry. The reason that AI, like many innovations, remains stubbornly stuck in the lab is that the academics behind the technology struggle to identify and respond to the real-world needs of businesses. Incentives are misaligned; the result is that organizations see the potential of the tool, but struggle to use it. AI exists and it is effective, but it is still not designed for business.

SEE: Developers say Google's Go is 'most sought after' programming language of 2020

Before he joined IBM, Cox spent ten years as a professor at Harvard University. "Coming from academia and now working for IBM, my perspective on what's important has completely changed," says the researcher. "It has given me a much clearer picture of what's missing."

The partnership between IBM and MIT is a big shift from the traditional way that academia functions. "I'd rather be there in the trenches, developing those technologies directly with the academics, so that we can immediately take it back home and integrate it into our products," says Cox. "It dramatically accelerates the process of getting innovation into businesses."

IBM has now expanded the collaboration to some of its customers through a member program, which means that researchers in the Lab benefit from the input of players from different industries. From Samsung Electronics and Boston Scientific to banking company Wells Fargo, companies in various fields and locations can explain their needs and the challenges they encounter to the academics working in the Watson AI Lab. In turn, the members can take the intellectual property generated in the Lab and run with it even before it becomes an IBM product.

Cox is adamant, however, that the MIT-IBM Watson AI Lab was also built with blue-sky research compatibility in mind. The researchers in the lab are working on fundamental, cross-industry problems that need to be solved in order to make AI more applicable. "Our job isn't to solve customer problems," says Cox. "That's not the right use for the tool that is MIT. There are brilliant people at MIT that can have a hugely disruptive impact with their ideas, and we want to use that to resolve questions like: why is it that AI is so hard to use or to have an impact in business?"

Explainability of AI is only one area of focus. But there is also AutoAI, for example, which consists of using AI to build AI models, and would let business leaders engage with the technology without having to hire expensive, highly skilled engineers and software developers. Then there is the issue of data labeling: according to Cox, up to 90% of a data science project consists of meticulously collecting, labeling and curating the data. "Only 10% of the effort is the fancy machine-learning stuff," he says. "That's insane. It's a huge inhibitor to people using AI, let alone to benefiting from it."

SEE: AI and the coronavirus fight: How artificial intelligence is taking on COVID-19

Doing more with less data, in fact, was one of the key features of the Lab's latest research project, dubbed CLEVRER, in which an algorithm can recognize objects and reason about their behavior in physical events from videos. This model is a neuro-symbolic one, meaning that the AI can learn unsupervised, by looking at content and pairing it with questions and answers; ultimately, it requires far less training data and manual annotation.

All of these issues have been encountered one way or another not only by IBM, but by the companies that signed up to the Lab's member program. "Those problems just appear again and again," says Cox, and that's whether you are operating in electronics, med-tech or banking. Hearing similar feedback from all areas of business only emboldened the Lab's researchers to double down on the problems that mattered.

The Lab has about 50 projects running at any given time, carefully selected every year by both MIT and IBM on the basis that they should be both intellectually interesting and effectively tackle the problem of broad AI. Cox maintains that within this portfolio, some ideas are very ambitious and can even border on blue-sky research; they are balanced, on the other hand, with other projects that are more likely to provide near-term value.

Although more prosaic than the idea of preserving purely blue-sky research, putting industry and academia in the same boat might indeed be the most pragmatic way to accelerate the adoption of innovation and make sure AI delivers on its promise.

See the original post:
Artificial intelligence is struggling to cope with how the world has changed - ZDNet


A New Way To Think About Artificial Intelligence With This ETF – MarketWatch

Among the myriad thematic exchange traded funds investors have to consider, artificial intelligence products are numerous and some are catching on with investors.

Count the ROBO Global Artificial Intelligence ETF (THNQ) as the latest member of the artificial intelligence ETF fray. THNQ, which debuted earlier this week, comes from a good gene pool: its stablemate, the ROBO Global Robotics and Automation Index ETF (ROBO), was the original and remains one of the largest robotics ETFs.

That's relevant because artificial intelligence and robotics are themes that frequently intersect with each other. Home to 72 stocks, the new THNQ follows the ROBO Global Artificial Intelligence Index.

Adding to the case for A.I., even with a new product such as THNQ, is that the technology has hundreds, if not thousands, of applications supporting its growth.

Companies developing AV technology are mainly relying on machine learning or deep learning, or both, according to IHS Markit. A major difference between machine learning and deep learning is that, while deep learning can automatically discover the feature to be used for classification in unsupervised exercises, machine learning requires these features to be labeled manually with more rigid rulesets. In contrast to machine learning, deep learning requires significant computing power and training data to deliver more accurate results.
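That distinction is easiest to see side by side. The sketch below is illustrative only (the scikit-learn digits dataset and the feature choices are assumptions, not from the article): classic machine learning is handed features a person has engineered, while the deep-learning-style model is handed raw pixels and left to discover its own features.

```python
# Illustrative sketch: hand-engineered features (classic ML) versus learning
# directly from raw inputs (the deep-learning approach). Dataset and feature
# choices are assumptions for demonstration.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                       # small 8x8 grayscale digit images
images, labels = digits.images, digits.target

# Classic ML: a person decides which features matter (row/column intensity
# profiles here) and the model only ever sees those.
hand_features = np.hstack([images.mean(axis=1), images.mean(axis=2)])

# Deep-learning style: hand over the raw pixels and let the hidden layer
# discover its own internal features during training.
raw_pixels = images.reshape(len(images), -1)

for name, X, model in [
    ("hand-engineered features", hand_features, LogisticRegression(max_iter=2000)),
    ("raw pixels", raw_pixels, MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)),
]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    model.fit(X_tr, y_tr)
    print(name, round(model.score(X_te, y_te), 3))
```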

Like its stablemate ROBO, THNQ offers wide reach with exposure to 11 sub-groups. Those include big data, cloud computing, cognitive computing, e-commerce and other consumer angles, and factory automation, among others. Of course, semiconductors are part of the THNQ fold, too.

The exploding use of AI is ushering in a new era of semiconductor architectures and computing platforms that can handle the accelerated processing requirements of an AI-driven world, according to ROBO Global. To tackle the challenge, semiconductor companies are creating new, more advanced AI chip engines using a whole new range of materials, equipment, and design methodologies.

While THNQ is a new ETF, investors may do well not to focus on that, but rather on the fact that the AI boom is in its nascent stages.

"Historically, the stock market tends to under-appreciate the scale of opportunity enjoyed by leading providers of new technologies during this phase of development," notes THNQ's issuer. "This fact creates a remarkable opportunity for investors who understand the scope of the AI revolution, and who take action at a time when AI is disrupting industry as we know it and forcing us to rethink the world around us."

The new ETF charges 0.68% per year, or $68 on a $10,000 investment. That's in line with rival funds.


Read the original:
A New Way To Think About Artificial Intelligence With This ETF - MarketWatch


How Is Artificial Intelligence Combatting COVID-19? – Gigabit Magazine – Technology News, Magazine and Website

Chris Gannatti, head of research at ETF specialist WisdomTree, explains how artificial intelligence is being used to tackle Covid-19.

Artificial Intelligence (AI) is proliferating more widely than ever before, having the potential to influence many aspects of daily life. Crisis periods, like we have seen with the Covid-19 pandemic, are often catalysts for the deployment of new innovations and technologies more quickly. The power of AI is being harnessed to tackle the Covid-19 pandemic, whether that be to better understand the rate of infection or by tracing and quickly identifying infections. While AI has been associated with the future and ideas such as the development of driverless cars, its legacy could be how it has impacted the world during this crisis. It is likely that AI is already playing a major part in the early stages of vaccine development - the uses of artificial intelligence are seemingly endless.

AI was already growing quickly and being deployed in ever more areas of our data-driven world.

Covid-19 has accelerated some of these deployments, bringing greater comfort and familiarity to the technology. To really understand how AI is making a difference, it is worth looking at some examples which illustrate the breadth of activities being carried out by AI during the pandemic.

Rizwan Malik, the lead radiologist at Royal Bolton Hospital, run by the UK's National Health Service (NHS), designed a conservative clinical trial to help obtain initial readings of X-rays for patients faster. Waiting for specialists could sometimes take up to six hours. He identified a promising AI-based chest X-ray system and then set up a test to run over six months. For all chest X-rays handled by his trainees, the system would offer a second opinion. He would then check whether the system's conclusion matched his own; if it did, he would phase the system in as a permanent check on his trainees. As Covid-19 hit, the system became an important way to identify certain characteristics unique to Covid-19 that were visible on chest X-rays. While not perfect, the system did represent an interesting case study in the use of computer vision in medical imagery.

A great example of the collaborative efforts that can be inspired during times of crisis involved three organisations coming together to release the Covid-19 Open Research Dataset. This includes more than 24,000 research papers from peer-reviewed journals and other sources.


The National Library of Medicine at the National Institutes of Health provided access to existing scientific publications; Microsoft used its literature curation algorithms to find relevant articles; and research non-profit the Allen Institute for Artificial Intelligence converted them from web pages and PDFs into a structured format that can be processed by algorithms.

Many major cities affected by Covid-19 were faced with a very real problem - getting the right care to the people who needed it without allowing hospitals to become overrun. Helping people to self-triage, therefore staying away from the hospital unless absolutely necessary, was extremely important. Providence St. Joseph Health System in Seattle built an online screening and triage tool that could rapidly differentiate between those potentially really sick with Covid-19 and those with less life-threatening ailments. In its first week of operation, it served 40,000 patients.

The Covid-19 pandemic has pushed the unemployment rate in the US to 14.7%. This has led to unprecedented numbers of people filing unemployment claims and asking questions of different state agencies. Texas, which has received millions of these claims since early March, is using artificial intelligence-driven chatbots to answer questions from unemployed residents in need of benefits.

Other states, like Georgia and South Carolina, have reported similar activity. To give a sense of scale, the system that has been deployed in Texas can handle 20,000 concurrent users. Think of how much staff would be required to deal with 20,000 inquiries at the same time. These are but four of many, many ways in which AI has been deployed to help in the time of the Covid-19 pandemic. While we continue to hope for cures and vaccinations, which AI will help in developing, we expect to see more innovative uses of AI that will benefit society over the long-term.

How you can slow the spread of coronavirus:

Wash your hands with soap and water often; do this for at least 20 seconds

Use hand sanitiser gel if soap and water are not available

Wash your hands as soon as you get home

Cover your mouth and nose with a tissue or your sleeve (not your hands) when you cough or sneeze

Put used tissues in the bin immediately and wash your hands afterwards

SOURCE: Funds Europe

Visit link:
How Is Artificial Intelligence Combatting COVID-19? - Gigabit Magazine - Technology News, Magazine and Website


The Expanding Role Of Artificial Intelligence In Tax – Forbes

Watch Benjamin Alarie, co-founder and CEO of Blue J Legal, discuss the expanding role of artificial intelligence in tax with Tax Notes Federal contributing editor Benjamin Willis.

Here are some highlights

On machine learning and tax law

Benjamin Alarie: When we talk about machine learning and artificial intelligence of the law, what we're doing is talking about collecting the raw materials, the rulings, the cases, the legislation, the regs, all that information, and bringing it to bear on a particular problem. We're synthesizing all of those materials to make a prediction about how a new situation would likely be decided by the courts.

. . . Law should be predictable. We have lots of data out there in the form of rulings, in the form of judgments that we can collect as good examples of how the courts have decided these matters in the past. And we can reverse engineer, using machine learning methods, how the courts are mapping the facts of different situations into outcomes. And we can do this in a really elegant way that leads to high-quality prediction. So predictions of 90 percent or better accuracy about how the courts are going to make those decisions in the future, which is incredibly valuable to taxpayers, to tax administrations, and to anyone who's looking for certainty, predictability and fairness in the application of law.

On the availability of artificial intelligence

Benjamin Alarie: We're doing a lot to make this technology available throughout industry. Law firms are increasingly seeing this as one of the tools that they need to have in order to practice tax as effectively as possible. Academic programs see using this kind of technology [as] a huge boost for their graduates who are going to go into practice being familiar already with the leading tools for how to leverage machine learning and artificial intelligence. Accounting firms are also quite interested in this approach too because it has huge implications in terms of speeding up research [and] doing quality assurance . . .

On the moldability of results

Benjamin Alarie: You can play with different dimensions. You can swap out that assumption of fact, swap in a different assumption of fact, and see how that's likely to influence the results. So, then you can do scenario testing to really get comfortable with how much risk there is in a particular situation as the one providing a new opinion or providing advice to a client. That's really reassuring. You might say, "Okay, I need to get this to 80 percent probability. I'm not willing to bite off more than that . . ." Or you might be like, "Well, I have a really risk-loving client. I just need to get to 51 percent . . ." [Machine learning] allows you to really calibrate the amount of risk that you're taking on, depending on the risk appetite of the client and your comfort as the practitioner.
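Blue J Legal's actual models aren't described here, but the "swap out an assumption of fact and see how the probability moves" workflow Alarie describes can be sketched with any probabilistic classifier. Everything below (the fact features, the tiny training set, the choice of logistic regression) is an illustrative assumption, not the company's method.

```python
# Toy sketch of fact-to-outcome prediction with scenario testing.
# Features, data and model choice are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row encodes facts of a past case, e.g.
# [written_contract, worker_sets_hours, worker_owns_tools, exclusivity]
past_cases = np.array([
    [1, 0, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
])
# 1 = court found "employee", 0 = court found "independent contractor"
outcomes = np.array([1, 1, 0, 0, 0, 1])

model = LogisticRegression().fit(past_cases, outcomes)

# New situation: predict how a court would likely decide.
scenario = np.array([[1, 0, 1, 0]])
print("P(employee):", model.predict_proba(scenario)[0, 1])

# Scenario testing: swap one assumed fact and see how the risk shifts.
alternative = scenario.copy()
alternative[0, 3] = 1          # now assume exclusivity
print("P(employee) with exclusivity:", model.predict_proba(alternative)[0, 1])
```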

Benjamin Willis, contributing editor with Tax Notes Federal, and Benjamin Alarie, co-founder and CEO of Blue J Legal, discuss the expanding role of artificial intelligence and machine learning in the government, academia and tax practice.

On artificial intelligence and the courts

Benjamin Alarie: [Machine learning] is a great tool to encourage settlement between the parties, and so I think we're increasingly seeing that phenomenon where the party with the really strong position is using this to support their argument. They say, "Don't take our word for it. We ran it on this independent system . . . Here's the report from the system saying that we have a 95 percent or better chance of winning this case. Are you still sure you don't want to enter into terms of settlement?" That's often very convincing to the other side, who then run their analysis through the same system and say, "Okay . . . It's not nearly as strong as we thought it might be. Maybe we should talk about settling this," and that saves judges from having to contend with cases that really aren't the best use of their time, because it's pretty clear how those cases should get decided.

On artificial intelligence and low-income taxpayers

Benjamin Alarie: There are early adopters at these low income taxpayer clinics across the country who are interested in using technology to allow them to give faster advice to the low income taxpayers . . . Folks understand increasingly how to use the software and how it can materially assist their clientele and so the goal is to learn from those early adopters and to figure out how to position the software to help as much as possible in other clinics where maybe we don't have early adopters present, but who could still genuinely, really benefit from this.

Go here to read the rest:
The Expanding Role Of Artificial Intelligence In Tax - Forbes


Artificial Intelligence (AI) Is Nothing Without Humans – E3zine.com

AI is not just a fad. It's a technology that's set to last. However, only companies that know how to leverage its full potential will succeed.

Leveraging AI's full potential doesn't mean developing a pilot project in a vacuum with a handful of experts, which, ironically, is often called an accelerator project. Companies need a tangible idea as to how artificial intelligence can benefit them in their day-to-day operations.

For this to happen, one has to understand how these new AI colleagues work and what they need to successfully do their jobs.

An example of why this understanding is so crucial is lead management in sales. Instead of sales teams wasting their time on someone who will never buy anything, AI is supposed to determine which leads are promising and at what moment salespeople can make their move to close the contract. CEOs are usually very taken with that idea; sales staff, not so much.

Experienced salespeople know that it's not that easy. It's not only hard facts like name, address, industry or phone number that are important. Human salespeople consider many different factors, such as relationships, past conversations, customer satisfaction, experience with products, the current market situation, and more.

Make no mistake: if the data are available in a set framework, AI will also leverage them, searching for patterns, calculating behavior scores and match scores, and finally indicating whether the lead is promising or not. AI can make sense of the data, but it will never see more than the data.
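As a concrete, entirely hypothetical illustration of that pattern search: a lead-scoring model typically turns CRM fields and interaction history into a probability that the lead will convert. The field names and training data below are invented for the sketch; they are not any particular vendor's system.

```python
# Hypothetical lead-scoring sketch: CRM features in, conversion probability out.
# Field names and training data are invented for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

history = pd.DataFrame({
    "emails_opened":      [0, 3, 8, 1, 12, 5],
    "meetings_held":      [0, 1, 2, 0, 3, 1],
    "days_since_contact": [90, 20, 5, 60, 2, 15],
    "support_tickets":    [2, 0, 1, 3, 0, 1],
    "converted":          [0, 0, 1, 0, 1, 1],   # did the lead buy?
})

X = history.drop(columns="converted")
y = history["converted"]

# Learn which behavioral patterns preceded a purchase in past leads.
scorer = GradientBoostingClassifier(random_state=0).fit(X, y)

new_lead = pd.DataFrame([{"emails_opened": 6, "meetings_held": 1,
                          "days_since_contact": 10, "support_tickets": 0}])
print("behavior score:", scorer.predict_proba(new_lead)[0, 1])
```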

The real challenge with AI is therefore the data. Without data, artificial intelligence solutions cannot learn. Data have to be collected and clearly structured to be usable in sales and service.

Without enough data to draw conclusions from, all decisions that AI makes will be unreliable at best. Meaning that in our example, there's no AI without CRM. That's not really new, I know. However, CRM systems now have to be interconnected with numerous touchpoints (personal conversations, ERP, online shops, customer portals, websites and others) to aggregate reliable customer data. Best case: all of this happens automatically. Entrusting a human with this task makes collecting data laborious, inconsistent and faulty.

To profit from AI, companies need to understand where it makes sense to implement it and how they should train it. There's one problem, however: the thought patterns of AI are often so complex, and take so many different pieces of information and patterns into consideration, that one can't understand why and how it made a decision.

In conclusion, AI is not a universal remedy. It's based on things we already know. Its recommendations and decisions are more error-prone than many would like them to be. Right now, AI has more of a supporting role than an autonomous one. It can help us in our daily routine, take care of monotonous tasks, and leave the important decisions to others.

However, we shouldn't underestimate AI either. In the future, it will gain importance as it grows more autonomous each day. Artificial intelligence often reaches its limits when interacting with humans. When interacting with other AI solutions in clearly defined frameworks, it can often already make the right decisions today.

Read the rest here:
Artificial Intelligence (AI) Is Nothing Without Humans - E3zine.com


Five Important Subsets of Artificial Intelligence – Analytics Insight

As a simple definition, Artificial Intelligence is the ability of a machine or computer device to imitate human intelligence (cognitive processes), learn from experience, adapt to the latest data and perform human-like activities.

Artificial Intelligence executes tasks intelligently, yielding enormous accuracy, flexibility, and productivity for the entire system. Tech chiefs are looking for ways to implement artificial intelligence technologies in their organizations to remove obstacles and add value; for example, AI is already firmly established in the banking and media industries. There is a wide array of techniques within the space of artificial intelligence, such as linguistics, bias, vision, robotics, planning, natural language processing and decision science. Let us learn about some of the major subfields of AI in depth.

ML is maybe the most applicable subset of AI to the average enterprise today. As explained in the Executive's Guide to Real-World AI, a recent research report conducted by Harvard Business Review Analytic Services, ML is a mature technology that has been around for quite a long time.

ML is a part of AI that enables computers to self-learn from data and apply that learning without human intervention. When confronting a situation in which a solution is hidden in a huge data set, ML is a go-to. ML excels at processing that data, extracting patterns from it in a fraction of the time a human would take and delivering otherwise out-of-reach knowledge, says Ingo Mierswa, founder and president of the data science platform RapidMiner. ML powers risk analysis, fraud detection and portfolio management in financial services; GPS-based predictions in travel; and targeted marketing campaigns, to list a few examples.

Joining cognitive science and machines to perform tasks, the neural network is a part of artificial intelligence that draws on neuroscience (the branch of biology concerned with the nerves and nervous system of the human brain). Imitating the human brain, which contains an enormous number of neurons, and coding brain-like neurons into a system or a machine is what the neural network does.

Neural networks and machine learning combined tackle many intricate tasks with ease, and a large number of these tasks can be automated. NLTK is the go-to library for NLP; master its modules and you'll be a proficient text analyzer in no time. Other Python libraries include pandas, NumPy, TextBlob, matplotlib and wordcloud.
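A minimal sketch of the kind of text analysis these libraries enable (the sample sentence is invented, and NLTK's tokenizer and tagger models need a one-time download, as noted in the comments):

```python
# Minimal NLTK sketch: tokenize a sentence, tag parts of speech,
# and count word frequencies. Requires one-time data downloads.
import nltk

nltk.download("punkt")                        # tokenizer models
nltk.download("averaged_perceptron_tagger")   # part-of-speech tagger model

text = "Artificial intelligence executes tasks intelligently, yielding accuracy and productivity."

tokens = nltk.word_tokenize(text)
print(nltk.pos_tag(tokens))                   # e.g. [('Artificial', 'JJ'), ('intelligence', 'NN'), ...]

freq = nltk.FreqDist(w.lower() for w in tokens if w.isalpha())
print(freq.most_common(3))
```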

An explainer article by AI software organization Pathmind offers a useful analogy: think of a set of Russian dolls nested one inside another. Deep learning is a subset of machine learning, and machine learning is a subset of AI, which is an umbrella term for any computer program that does something smart.

Deep learning utilizes so-called neural networks, which learn from processing the labeled data provided during training and use this answer key to determine which attributes of the input are needed to construct the correct output, according to one explanation given by DeepAI. Once a sufficient number of examples have been processed, the neural network can begin to process new, unseen inputs and successfully return accurate results.
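That training loop can be sketched in a few lines. The example below is a toy illustration (it assumes TensorFlow/Keras is installed, and the random arrays stand in for a real labeled dataset): a small network is fit on labeled examples, then used to score a new, unseen input.

```python
# Minimal deep-learning sketch: a small neural network learns from labeled
# examples, then scores a new, unseen input. Data are random placeholders.
import numpy as np
import tensorflow as tf

# Labeled training data: 64-dimensional inputs, binary labels.
X_train = np.random.rand(200, 64).astype("float32")
y_train = (X_train.mean(axis=1) > 0.5).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, verbose=0)

# Once enough examples have been processed, the network can score new inputs.
new_sample = np.random.rand(1, 64).astype("float32")
print("predicted probability:", float(model.predict(new_sample)[0, 0]))
```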

Deep learning powers product and content recommendations for Amazon and Netflix. It works in the background of Google's voice- and image-recognition algorithms. Its ability to analyze large amounts of high-dimensional data makes deep learning particularly well suited to supercharging preventive maintenance systems.

Robotics has emerged as a very hot field of artificial intelligence, and a fascinating area of research and development that mostly focuses on designing and building robots. Robotics is an interdisciplinary field of science and engineering that combines mechanical engineering, electrical engineering, computer science and many other disciplines. It covers the design, manufacture, operation and use of robots, and deals with the computer systems needed for their control, intelligent behavior and data processing.

Robots are often deployed to carry out tasks that might be difficult for people to perform consistently. Major robotics applications include the assembly line in automobile manufacturing and moving large objects in space for NASA. Artificial intelligence researchers are also developing robots that use machine learning to interact at a social level.

Have you ever tried learning a new language by labeling the items in your home with the local-language and translated words? It seems to be an effective vocabulary builder, since you see the words again and again. The same is true of computers powered by computer vision. They learn by labeling or classifying the various objects they come across and working out their meanings, but at a much faster pace than people (like those robots in science-fiction movies).

The OpenCV library enables the processing of images by applying mathematical operations to them. Remember that elective subject from engineering days called Fuzzy Logic? That approach is used in image processing, making it much easier for computer vision specialists to fuzzify or soften readings that can't be placed in a crisp Yes/No or True/False classification. OpenTLD is used for video tracking, the process of locating a moving object (or objects) using a camera video stream.
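A minimal OpenCV sketch of that idea (the file path is a placeholder): load an image, smooth it, then contrast a crisp binary threshold with an adaptive threshold that makes a softer, locally varying decision.

```python
# Minimal OpenCV sketch (the file path is a placeholder): load an image,
# blur it, and compare a crisp binary threshold with an adaptive one.
import cv2

image = cv2.imread("sample.jpg")                      # placeholder path; returns None if missing
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Crisp yes/no classification of every pixel...
_, binary = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)

# ...versus a softer, locally adaptive decision that tolerates uneven lighting.
adaptive = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 11, 2)

cv2.imwrite("binary.png", binary)
cv2.imwrite("adaptive.png", adaptive)
```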


See the original post:
Five Important Subsets of Artificial Intelligence - Analytics Insight


Artificial Intelligence Markets in IVD, 2019-2024: Breakdown by Application and Component – GlobeNewswire

Dublin, May 15, 2020 (GLOBE NEWSWIRE) -- The "Artificial Intelligence Markets in IVD" report has been added to ResearchAndMarkets.com's offering.

This report examines selected AI-based initiatives, collaborations, and tests in various in vitro diagnostic (IVD) market segments.

Artificial Intelligence Markets in IVD contains the following important data points:

The past few years have seen extraordinary advances in artificial intelligence (AI) in clinical medicine. More products have been cleared for clinical use, more new research-use-only applications have come to market and many more are in development.

In recent years, diagnostics companies - in collaboration with AI companies - have begun implementing increasingly sophisticated machine learning techniques to improve the power of data analysis for patient care. The goal is to use developed algorithms to standardize and aid interpretation of test data by any medical professional irrespective of expertise. This way AI technology can assist pathologists, laboratorians, and clinicians in complex decision-making.

Digital pathology products and diabetes management devices were the first to come to market with data interpretation applications. The last few years have seen the use of AI interpretation apps extended to a broader range of products including microbiology, disease genetics, and cancer precision medicine.

This report will review some of the AI-linked tests and test services that have come to market and others that are in development in some of the following market segments:

Applications of AI are evolving that predict outcomes such as diagnosis, death, or hospital readmission; that improve upon standard risk assessment tools; that elucidate factors that contribute to disease progression; or that advance personalized medicine by predicting a patient's response to treatment. AI tools are in use and in development to review data and to uncover patterns in the data that can be used to improve analyses and uncover inefficiencies. Many enterprises are joining this effort.

The following are among the companies and institutions whose innovations are featured in Artificial Intelligence Markets in IVD:

Key Topics Covered

Chapter 1: Executive Summary

Chapter 2: Artificial Intelligence In Diagnostics Markets

Chapter 3: Market Analysis: Artificial Intelligence in Diagnostics

For more information about this report visit https://www.researchandmarkets.com/r/vw8l7u

Research and Markets also offers Custom Research services providing focused, comprehensive and tailored research.

See the original post:
Artificial Intelligence Markets in IVD, 2019-2024: Breakdown by Application and Component - GlobeNewswire


This Man Created A Perfect AC/DC Song By Using Artificial Intelligence – Kerrang!

While we've long been enjoying some weird and wonderful mash-ups courtesy of the internet's most hilarious and creative YouTubers, clearly it's too much effort to be letting humans do all the work these days. As such, satirist Funk Turkey has handed the task of creating new material over to artificial intelligence, using robots to make a pretty ace AC/DC song.

The track in question, Great Balls, came about using lyrics.rip to generate the words, before Funk channeled his best Brian Johnson to sing this hilarious mish-mash of lyrics ("Wasn't the dog a touch too young to thrill?" sorry, what?), and then backed it all with suitably AC/DC-esque instrumentation.
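lyrics.rip's internals aren't described in the piece, but tools of this kind are commonly simple Markov-chain text generators: they learn which word tends to follow which in a lyric corpus, then walk that chain to spit out new lines. A toy sketch, with an invented placeholder corpus rather than any real lyrics:

```python
# Toy Markov-chain lyric generator, offered as an assumption about how tools
# like lyrics.rip typically work; the corpus lines are invented placeholders.
import random
from collections import defaultdict

corpus = [
    "the road is long and the night is loud",
    "the night is young and the amps are loud",
    "roll the dice and ride the road",
]

# Learn which word follows which across the corpus.
chain = defaultdict(list)
for line in corpus:
    words = line.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = chain.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```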

Read this next: Classic album covers redesigned for social distancing

Of course, there's hopefully real AC/DC material on the way at some point soon, with Twisted Sister vocalist Dee Snider revealing in December 2019 that all four surviving members have reunited for a new record, and, "It's as close as you can get to the original band."

Until then, though, here's Great Balls to tide us over:

In fairness, lyrics.rip is actually a pretty great little tool. We tried the same thing for Green Day to see what fine words would come out now, to get Billie Joe Armstrong to perform them:

An ambulance that's turning on the way across town
cause you feeling sorry for that your whining eyes
When September ends
Here comes the waiting
Just roamin for yourself
Are we are the silence with the brick of my way to search the story of my memory rests,
but never forgets what I bleeding from the brick of my heads above the stars
Are the waiting
My heads above the brick of self-control
To live?
My heads above the innocent can never last
To search the

Okay then.

Read this next: An exhaustive look at the phenomenon of celebrity cameos in music videos

Posted on May 15th 2020, 1:29pm

View original post here:
This Man Created A Perfect AC/DC Song By Using Artificial Intelligence - Kerrang!


TSA Issues Road Map to Tackle Insider Threat With Artificial Intelligence – Nextgov

The Transportation Security Administration is planning to increase and share the information it collects, including information gleaned from employees, with other federal agencies and the private sector in an effort to prevent insiders from perpetrating various forms of harmful malfeasance.

Artificial intelligence, probabilistic analytics and data mining are among the tools the agency lists in a document issued today that loosely outlines the problem and the plan to create an Insider Threat Mitigation Hub.

"The Insider Threat Roadmap defines the common vision for the Transportation Systems Sector that insider threat is a community-wide challenge, since no single entity can successfully counter the threat alone," TSA Administrator David Pekoske wrote in an opening message.

In July 2019, a surveillance camera at the Miami International Airport captured footage of an airline mechanic sabotaging a plane's navigation system with a simple piece of foam. The TSA road map describes this incident, along with a number of others dating back to 2014 spanning a range of activities including terrorism, subversion and attempted or actual espionage, to stress the need for a layered strategy of overall transportation security.

A TSA press release identified three parts of that strategy as promoting data-driven decision making to detect threats; advancing operational capability to deter threats; and maturing capabilities to mitigate threats to the transportation sector.

Under the first objective, TSA plans to develop and maintain insider threat risk indicators, which could include behavioral, physical, technological or financial attributes that might expose malicious or potentially malicious insiders.

"We must identify key information sources, and ensure they are accurate and available for use in informing risk mitigation activities," the document adds.

For the second objective, the document describes information-sharing plans with other federal agencies and industry.

"We will establish an Insider Threat Mitigation Hub to elevate insider threat to the enterprise level and enable multiple offices, agencies, and industry entities to share perspectives, expertise, and data to enhance threat detection, assessment, and response across the TSS," the document reads. "This capability will allow us to fuse together disparate information points to identify intricate patterns of conduct that may be unusual or indicative of insider threat activity and drive enhanced insider threat mitigation efforts."

Meeting the third objective would require seeking out "the appropriate technology to improve detection and mitigation of insider threat," TSA writes, and expanding it throughout the agency's supply chain.

TSA pre-empted concerns usually associated with massive data collection practices by including the protection of privacy and civil liberties among the guiding principles it said would accompany its efforts.

The rest is here:
TSA Issues Road Map to Tackle Insider Threat With Artificial Intelligence - Nextgov


Cloud Hosting Companies Help Customers Contain Costs Winning Trust and Loyalty – EnterpriseAI


Gartner predicts that through 2020, 80 percent of organizations will overshoot their cloud IaaS budgets. Gartner attributes cloud cost management problems to three main causes:

Our own research bears this out. In a recent cloud services survey we conducted, almost 70 percent of respondents reported overspending on their cloud budgets by 25 percent or more.

In short: Many organizations are spending too much on their cloud bills because optimizing cloud applications to bring down spend is very difficult.

Like any business, cloud hosting companies want to drive revenue. But customers who feel they're being fleeced may well switch to other providers that will help optimize their cloud usage. Savvy cloud hosting companies are doing more than simply pocketing the spiraling bills of the service providers they host. They're helping customers reduce costs by helping them optimize cloud applications. This act of customer service will improve customer relations and loyalty, and pay dividends in the long run.

Today, most DevOps teams operate a Continuous Integration/Continuous Deployment (CI/CD) pipeline, where cloud applications are iterated and developed in rapid cycles on an ongoing basis. However, while application teams pay very close attention when they are spinning up an app, once it's running, they typically perform only minimal optimization efforts. The post-release portion of the delivery pipeline is generally neglected.

Why? Because cloud app optimization is extremely complicated. Rather than tweaking the internal machinery of a cloud app so that it consumes only what it needs, teams leave the controls alone and stock up on fuel. That is, for peace of mind they over-provision more AWS (or other) cloud resources than they need, so the cloud app can't possibly be caught short.

It's hard to blame them. The complexity of cloud optimization is a very real problem. Even a simple five-container application can have more than 255-trillion resource and basic parameter permutations. As a result, most optimization tools focus on code and the app layer (UI, database schema etc.), but don't go much deeper.

However, software companies developing advanced technology that fully leverages machine learning and deep reinforcement learning can take optimization to the next level. They can embrace a full view of the entire infrastructure: compute, memory, cache, storage, network (bandwidth and latency), thread management, job placement, database config, application runtime, Java garbage collector and more. These tools can monitor parameters such as requests per second, or response time, while tweaking settings like VM instance type, CPU shares, thread count, garbage collection and memory pool sizes. They can ensure they select the right instance, the right number of instances and the right settings in each instance.
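Opsani's own algorithms aren't detailed here, and reinforcement-learning optimizers are far more sophisticated, but the core loop (propose a configuration, measure a performance-per-cost metric, keep the best) can be sketched as a simple random search. The instance names, prices and the measurement stub below are invented placeholders.

```python
# Toy configuration search over instance type / CPU shares / memory pool,
# scoring each candidate by performance per dollar. All names, prices and
# the measure_p95_latency() stub are invented placeholders, not Opsani's
# actual ML/reinforcement-learning optimizer.
import random

INSTANCE_PRICES = {"c5.large": 0.085, "m5.large": 0.096, "r5.large": 0.126}  # $/hr, illustrative

def measure_p95_latency(instance, cpu_shares, memory_mb):
    # Stand-in for deploying the configuration and measuring real traffic.
    base = {"c5.large": 120, "m5.large": 140, "r5.large": 150}[instance]
    return base - 0.02 * cpu_shares - 0.01 * memory_mb + random.gauss(0, 5)

def score(config):
    latency_ms = measure_p95_latency(**config)
    requests_per_sec = 1000.0 / max(latency_ms, 1.0)
    return requests_per_sec / INSTANCE_PRICES[config["instance"]]   # performance per dollar

best, best_score = None, float("-inf")
for _ in range(50):                       # random search over the configuration space
    candidate = {
        "instance": random.choice(list(INSTANCE_PRICES)),
        "cpu_shares": random.choice([256, 512, 1024]),
        "memory_mb": random.choice([512, 1024, 2048]),
    }
    s = score(candidate)
    if s > best_score:
        best, best_score = candidate, s

print("best config:", best, "score:", round(best_score, 1))
```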

Traditional APM (application performance monitoring) tools act at the deployment level and are focused on utilization and cost. They can look at CPU and/or memory usage, but all they really do is utilization monitoring and cost-cutting. The new wave of cloud optimization tools, by contrast, takes in the entirety of application performance, not just the footprint of the elements. These tools can focus not on cost but on a sophisticated performance metric.

The results of next-gen cloud optimization can be striking. Example: Ancestry spent two years migrating a database of over 20 million members from data centers into AWS. After implementing cloud optimization, Ancestry.com saw a 50-100 percent increase in resource utilization and up to a 50 percent decrease in cost.

Ross Schibler is co-founder and CEO of Opsani, an AIOps and cloud optimization company.


Link:
Cloud Hosting Companies Help Customers Contain Costs Winning Trust and Loyalty - EnterpriseAI
