
VIEW: Digitisation in pathology and the promise of artificial intelligence – CNBCTV18

The COVID-19 pandemic has had a profound impact across industries, and on healthcare in particular: every aspect of it is undergoing change, from diagnosis to treatment and through the entire continuum of care. This has created an urgency in the healthcare industry to look for innovative solutions and has given a boost to the faster, more efficient application of technologies such as Artificial Intelligence (AI) and Deep Learning. Pathology is one area that stands to benefit greatly from these applications.

Pathologists today spend a significant amount of time observing tissue samples under a microscope, and they face resource shortages, the growing complexity of requests, and workflow inefficiencies alongside the growing burden of disease. Their work underpins every aspect of patient care, from diagnostic testing and treatment advice to the use of cutting-edge genetic technologies. They also have to work in multidisciplinary teams of doctors, scientists and healthcare professionals to diagnose, treat and prevent illness. With the increasing emphasis on sub-specialisation, taking a second opinion from specialists means shipping glass slides across laboratories, sometimes to another country. This means reduced efficiency and delayed diagnosis and treatment. The current situation has disrupted this workflow.

Digitisation in pathology

Digitisation in pathology has enabled increases in efficiency and speed and an enhanced quality of diagnosis. Recent technological advances have accelerated its adoption, similar to the digital transformation that radiology departments have experienced over the last decade. Digital pathology has enabled the conversion of the traditional glass slide into a digital image, which can then be viewed on a monitor, annotated, archived and shared digitally across the globe for consultation based on organ sub-specialisation. With digitisation, a vast data set has become available, supporting new insights for pathologists, researchers, and pharmaceutical development teams.

The promise of AI

The availability of vast data is enabling the use of Artificial Intelligence methods to further transform the diagnosis and treatment of diseases at an unprecedented pace. Human intelligence assisted by artificial intelligence can provide a well-balanced view that neither could achieve on its own. The evolution of deep learning neural networks and the improvement in accuracy for image pattern recognition over the last few years have been staggering. Similar to how we learn from experience, a deep learning algorithm performs a task repeatedly, each time adjusting itself a little to achieve more accurate outcomes.
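As a minimal illustration of this learn-by-repetition idea, the toy Python sketch below fits a single parameter by gradient descent; the data, learning rate and number of passes are invented purely for demonstration and are not from the article.

```python
import numpy as np

# Toy task: learn y = 3x from noisy examples by repeating the task and
# nudging the parameter a little after every pass (an "epoch").
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + rng.normal(scale=0.1, size=200)

w = 0.0      # model parameter, starts out wrong
lr = 0.1     # learning rate: how big each correction is

for epoch in range(50):
    pred = w * x
    grad = np.mean(2 * (pred - y) * x)   # gradient of mean squared error
    w -= lr * grad                       # small improvement each repetition
    if epoch % 10 == 0:
        print(f"epoch {epoch:2d}  w={w:.3f}  mse={np.mean((pred - y) ** 2):.4f}")
```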

Computational Pathology is the approach to diagnosis that incorporates multiple sources of data (e.g., pathology, radiology, clinical, molecular and lab operations), uses mathematical models to generate diagnostic inferences, and presents clinically actionable knowledge to customers. Computational Pathology systems are able to correlate patterns across multiple inputs from the medical record, including genomics, enhancing a pathologist's diagnostic capabilities and enabling a more precise diagnosis. This allows pathologists to eliminate tedious and time-consuming tasks and focus more on interpreting data and detailing the implications for a patient's diagnosis.

AI applications that can augment a pathologist's cognitive ability and save time include, for example, identifying the sections of greatest interest in biopsies, finding metastases in the lymph nodes of breast cancer patients, counting mitoses for cancer grading, and measuring tumours point-to-point. The ultimate goal going forward is to integrate all these tools and algorithms into the existing workflow and make it seamless and more efficient.

The Challenge

However, Artificial Intelligence in pathology is quite complex. The IT infrastructure required in terms of data storage, network bandwidth and computing power is significantly higher than in radiology. Digitisation of Whole Slide Images (WSI) in pathology generates large numbers of gigapixel-sized images, and processing them needs high-performance computing. Training a deep learning network on a whole slide image at full resolution can be very challenging. With the increase in processing power from GPUs, there is promise in training deep learning networks successfully, starting with smaller regions of interest.
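As a rough sketch of the regions-of-interest idea, the snippet below tiles a large image array into fixed-size patches that fit in GPU memory; the dimensions, stride and tissue-filter threshold are illustrative assumptions, and a real gigapixel slide would be read lazily through a dedicated WSI library rather than held in a NumPy array.

```python
import numpy as np

def extract_patches(slide: np.ndarray, patch_size: int = 256, stride: int = 256):
    """Yield (row, col, patch) tiles from a large slide image array."""
    h, w, _ = slide.shape
    for r in range(0, h - patch_size + 1, stride):
        for c in range(0, w - patch_size + 1, stride):
            yield r, c, slide[r:r + patch_size, c:c + patch_size]

# Illustrative "slide" (a real whole slide image is closer to 100,000 x 100,000 pixels).
slide = np.random.randint(0, 255, size=(4096, 4096, 3), dtype=np.uint8)

# Keep only informative patches (e.g., enough tissue, not blank background)
# before sending them to the deep learning model for training.
tissue_patches = [p for _, _, p in extract_patches(slide) if p.mean() > 40]
print(f"{len(tissue_patches)} candidate training patches")
```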

Another key requirement for training deep learning algorithms is large amounts of labelled data. For supervised learning, a ground truth must first be included in the dataset to provide appropriate diagnostic context, and this is time-consuming. Obtaining adequately labelled data from experts is key.

Digitisation in pathology, supported by appropriate IT infrastructure, is enabling pathologists to work remotely without waiting for glass slides to be delivered, while maintaining social distancing norms. The promise of Artificial Intelligence will only further accelerate the seamless integration of algorithms into the existing workflow. These unprecedented times have raised many challenges, but they are also providing us with a chance to accelerate the application of AI and, in turn, to achieve the quadruple aim: enhancing the patient experience, improving health outcomes, lowering the cost of care, and improving the work-life of care providers.


Artificial Intelligence and Its Partners – Modern Diplomacy

Digitalization and the development of artificial intelligence (AI) bring up many philosophical and ethical questions about the role of man and robot in the nascent social and economic order. How real is the threat of an AI dictatorship? Why do we need to tackle AI ethics today? Does AI provide breakthrough solutions? We ask these and other questions in our interview with Maxim Fedorov, Vice-President for Artificial Intelligence and Mathematical Modelling at Skoltech.

On 13 July, Maxim Fedorov chaired the inaugural Trustworthy AI online conference on AI transparency, robustness and sustainability hosted by Skoltech.

Maxim, do you think humanity already needs to start working out a new philosophical model for existing in a digital world whose development is determined by artificial intelligence (AI) technologies?

The fundamental difference between today's technologies and those of the past is that they hold up a mirror of sorts to society. Looking into this mirror, we need to answer a number of philosophical questions. In times of industrialization and production automation, the human being was a productive force. Today, people are no longer needed in the production of the technologies they use. For example, innovative Japanese automobile assembly plants barely have any people on the floor, with all the work done by robots. The manufacturing process looks something like this: a driverless robot train carrying component parts enters the assembly floor, and a finished car comes out. This is called discrete manufacturing: the assembly of a finite set of elements in a sequence, a task which robots manage quite efficiently. The human being is gradually being ousted from the traditional economic structure, as automated manufacturing facilities generally need only a limited number of human specialists. So why do we need people in manufacturing at all? In the past, we could justify our existence by the need to earn money or consume, or to create jobs for others, but now this is no longer necessary. Digitalization has made technologies a global force, and everyone faces philosophical questions about their personal significance and role in the modern world, questions we should be answering today, and not in ten years when it will be too late.

At the last World Economic Forum in Davos, there was a lot of discussion about the threat of the "digital dictatorship" of AI. How real is that threat in the foreseeable future?

There is no evil inherent in AI. Technologies themselves are ethically neutral. It is people who decide whether to use them for good or evil.

Speaking of an AI dictatorship is misleading. In reality, technologies have no subjectivity, no "I". Artificial intelligence is basically a structured piece of code and hardware. Digital technologies are just a tool. There is nothing mystical about them either.

My view as a specialist in the field is that AI is currently a branch of information and communications technology (ICT). Moreover, AI does not even live in an individual computer. For a person from the industry, AI is a whole stack of technologies that are combined to form what is called weak AI.

We inflate the bubble of AI's importance and erroneously impart this technology stack with subjectivity. In large part, this is done by journalists, people without a technical education. They discuss an entity that does not actually exist, giving rise to the popular meme of an AI that is alternately the Terminator or a benevolent super-being. This is all fairy tales. In reality, we have a set of technological solutions for building effective systems that allow decisions to be made quickly based on big data.

Various high-level committees are discussing strong AI, which will not appear for another 50 to 100 years (if at all). The problem is that when we talk about threats that do not exist and will not exist in the near future, we are missing some real threats. We need to understand what AI is and develop a clear code of ethical norms and rules to secure value while avoiding harm.

Sensationalizing threats is a trend in modern society. We take a problem that feeds people's imaginations and start blowing it up. For example, we are currently destroying the economy around the world under the pretext of fighting the coronavirus. What we are forgetting is that the economy has a direct influence on life expectancy, which means that we are robbing many people of years of life. Making decisions based on emotion leads to dangerous excesses.

As the philosopher Yuval Noah Harari has said, millions of people today trust the algorithms of Google, Netflix, Amazon and Alibaba to dictate to them what they should read, watch and buy. People are losing control over their lives, and that is scary.

Yes, there is the danger that human consciousness may be robotized and lose its creativity. Many of the things we do today are influenced by algorithms. For example, drivers listen to their sat navs rather than relying on their own judgment, even if the route suggested is not the best one. When we receive a message, we feel compelled to respond. We have become more algorithmic. But it is ultimately the creator of the algorithm, not the algorithm itself, that dictates our rules and desires.

There is still no global document to regulate behaviour in cyberspace. Should humanity perhaps agree on universal rules and norms for cyberspace first before taking on ethical issues in the field of AI?

I would say that the issue of ethical norms is primary. After we have these norms, we can translate them into appropriate behaviour in cyberspace. With the spread of the internet, digital technologies (of which AI is part) are entering every sphere of life, and that has led us to the need to create a global document regulating the ethics of AI.

But AI is a component part of information and communications technologies (ICT). Maybe we should not create a separate track for AI ethics but join it with the international information security (IIS) track? Especially since IIS issues are being actively discussed at the United Nations, where Russia is a key player.

There is some justification for making AI ethics a separate track, because, although information security and AI are overlapping concepts, they are not embedded in one another. However, I agree that we can have a separate track for information technology and then break it down into sub-tracks where AI would stand alongside other technologies. It is a largely ontological problem and, as with most problems of this kind, finding the optimal solution is no trivial matter.

You are a member of the international expert group under UNESCO that is drafting the first global recommendation on the ethics of AI. Are there any discrepancies in how AI ethics are understood internationally?

The group has its share of heated discussions, and members often promote opposing views. For example, one of the topics is the subjectivity and objectivity of AI. During the discussion, a group of states clearly emerged that promotes the idea of subjectivity and is trying to introduce the concept of AI as a quasi-member of society. In other words, attempts are being made to imbue robots with rights. This is a dangerous trend that may lead to a sort of technofascism, inhumanity of such a scale that all previous atrocities in the history of our civilization would pale in comparison.

Could it be that, by promoting the concept of robot subjectivity, the parties involved are trying to avoid responsibility?

Absolutely. A number of issues arise here. First, there is an obvious asymmetry of responsibility. Let us give the computer rights, and if its errors lead to damage, we will punish it by pulling the plug or formatting the hard drive. In other words, the responsibility is placed on the machine and not its creator. The creator gets the profit, and any damage caused is someone else's problem. Second, as soon as we give AI rights, the issues we are facing today with regard to minorities will seem trivial. It will lead to the thought that we should not hurt AI but rather educate it (I am not joking: such statements are already being made at high-level conferences). We will see a sort of juvenile justice for AI. Only it will be far more terrifying. Robots will defend robot rights. For example, a drone may come and burn your apartment down to protect another drone. We will have a techno-racist regime, but one that is controlled by a group of people. This way, humanity will drive itself into a losing position without having the smallest idea of how to escape it.

Thankfully, we have managed to remove any inserts relating to quasi-members of society from the groups agenda.

We chose the right time to create the Committee for Artificial Intelligence under the Commission of the Russian Federation for UNESCO, as it helped to define the main focus areas for our working group. We are happy that not all countries support the notion of the subjectivity of AI in fact, most oppose it.

What other controversial issues have arisen in the working groups discussions?

We have discussed the blurred border between AI and people. I think this border should be defined very clearly. Then we came to the topic of "human-AI relationships", a term which implies the whole range of relationships possible between people. We suggested that "relationships" be changed to "interactions", which met opposition from some of our foreign colleagues, but in the end, we managed to sort it out.

Seeing how advanced sex dolls have become, the next step for some countries would be to legalize marriage with them, and then it would not be long before people start asking for church weddings. If we do not prohibit all of this at an early stage, these ideas may spread uncontrollably. This approach is backed by big money, the interests of corporations and a different system of values and culture. The proponents of such ideas include a number of Asian countries with a tradition of humanizing inanimate objects. Japan, for example, has a tradition of worshipping mountain, tree and home spirits. On the one hand, this instills respect for the environment, and I agree that, being a part of the planet, part of nature, humans need to live in harmony with it. But still, a person is a person, and a tree is a tree, and they have different rights.

Is the Russian approach to AI ethics special in any way?

We were the only country to state clearly that decisions on AI ethics should be based on a scientific approach. Unfortunately, most representatives of other countries rely not on research, but on their own (often subjective) opinion, so discussions in the working group often devolve to the lay level, despite the fact that the members are highly qualified individuals.

I think these issues need to be thoroughly researched. Decisions on this level should be based on strict logic, models and experiments. We have tremendous computing power, an abundance of software for scenario modelling, and we can model millions of scenarios at a low cost. Only after that should we draw conclusions and make decisions.

How realistic is the fight against the subjectification of AI if big money is at stake? Does Russia have any allies?

Everyone is responsible for their own part. Our task right now is to engage in discussions systematically. Russia has allies with matching views on different aspects of the problem. And common sense still prevails. The egocentric approach we see in a number of countries that is currently being promoted, this kind of self-absorption, actually plays into our hands here. Most states are afraid that humans will cease to be the centre of the universe, ceding our crown to a robot or a computer. This has allowed the human-centred approach to prevail so far.

If the expert group succeeds at drafting recommendations, should we expect some sort of international regulation on AI in the near future?

If we are talking about technical standards, they are already being actively developed at the International Organization for Standardization (ISO), where we have been involved with Technical Committee 164 Artificial Intelligence (TC 164) in the development of a number of standards on various aspects of AI. So, in terms of technical regulation, we have the ISO and a whole range of documents. We should also mention the Institute of Electrical and Electronics Engineers (IEEE) and its report on Ethically Aligned Design. I believe this document is the first full-fledged technical guide on the ethics of autonomous and intelligent systems, which includes AI. The corresponding technical standards are currently being developed.

As for the United Nations, I should note the Beijing Consensus on Artificial Intelligence and Education that was adopted by UNESCO last year. I believe that work on developing the relevant standards will start next year.

So the recommendations will become the basis for regulatory standards?

Exactly. This is the correct way to do it. I should also say that it is important to get involved at an early stage. This way, for instance, we can refer to the Beijing agreements in the future. It is important to make sure that AI subjectivity does not appear in the UNESCO document, so that it does not become a reference point for this approach.

Let us move from ethics to technological achievements. What recent developments in the field can be called breakthroughs?

We haven't seen any qualitative breakthroughs in the field yet. Image recognition, orientation, navigation, transport, better sensors (which are essentially the sensory organs for robots): these are the achievements we have so far. In order to make a qualitative leap, we need a different approach.

Take the chemical universe, for example. We have researched approximately 100 million chemical compounds. Perhaps tens of thousands of these have been studied in great depth. And the total number of possible compounds is 10^60, which is more than the number of atoms in the Universe. This chemical universe could hold cures for every disease known to humankind or some radically new, super-strong or super-light materials. There is a multitude of organisms on our planet (such as the sea urchin) with substances in their bodies that could, in theory, cure many human diseases or boost immunity. But we do not have the technology to synthesize many of them. And, of course, we cannot harvest all the sea urchins in the sea, dry them and make an extract for our pills. But big data and modelling can bring about a breakthrough in this field. Artificial intelligence can be our navigator in this chemical universe. Any reasonable breakthrough in this area will multiply our income exponentially. Imagine an AIDS or cancer medicine without any side effects, or new materials for the energy industry, new types of solar panels, etc. These are the kind of things that can change our world.

How is Russia positioned on the AI technology market? Is there any chance of competing with the United States or China?

We see people from Russia working in the developer teams of most big Asian, American and European companies. A famous example is Sergey Brin, co-founder and developer of Google. Russia continues to be a donor of human resources in this respect. It is both reassuring and disappointing because we want our talented guys to develop technology at home. Given the right circumstances, Yandex could have dominated Google.

As regards domestic achievements, the situation is somewhat controversial. Moscow today is comparable to San Francisco in terms of the number, quality and density of AI development projects. This is why many specialists choose to stay in Moscow. You can find a rewarding job, interesting challenges and a well-developed expert community.

In the regions, however, there is a concerning lack of funds, education and infrastructure for technological and scientific development. All three of our largest supercomputers are in Moscow. Our leaders in this area are the Russian Academy of Sciences, Moscow State University and Moscow Institute of Physics and Technology organizations with a long history in the sciences, rich traditions, a sizeable staff and ample funding. There are also some pioneers who have got off the ground quickly, such as Skoltech, and surpassed their global competitors in many respects. We recently compared Skoltech with a leading AI research centre in the United Kingdom and discovered that our institution actually leads in terms of publications and grants. This means that we can and should do world-class science in Russia, but we need to overcome regional development disparities.

Russia has the opportunity to take its rightful place in the world of high technology, but our strategy should be to overtake without catching up. If you look at our history, you will see that whenever we have tried to catch up with the West or the East, we have lost. Our imitations turned out wrong, were laughable and led to all sorts of mishaps. On the other hand, whenever we have taken a step back and synthesized different approaches, Asian or Western, without blindly copying them, we have achieved tremendous success.

We need to make a sober assessment of what is happening in the East and in the West and what corresponds to our needs. Russia has many unique challenges of its own: managing its territory, developing the resource industries and continuous production. If we are able to solve these tasks, then later we can scale up our technological solutions to the rest of the world, and Russian technology will be bought at a good price. We need to go down our own track, not one that is laid down according to someone elses standards, and go on our way while being aware of what is going on around us. Not pushing back, not isolating, but synthesizing.

From our partner RIAC


Artificial intelligence isn't destroying jobs, it's making them more inclusive – The Globe and Mail

A new world of work is on the horizon, driven by artificial intelligence. By 2025, the World Economic Forum predicts that 52 per cent of total task hours across existing jobs will be performed by machines. By 2030, up to 800 million jobs could be replaced by technology altogether.

That said, the outlook is far from bleak. Rather than eliminating positions, technology is expected to bring about a net positive number of jobs over the coming decade. But a fact equally as important (and often overlooked) is that artificial intelligence presents an opportunity for a more socioeconomically inclusive career start.

Throughout much of the past century, a person's success in life could be largely attributed to their socioeconomic circumstances at birth. Studies have shown that children born into middle-class homes have greater access to opportunities that are more highly correlated with successful occupational outcomes, such as good schools and financial support. As a result, these children are far more likely to succeed in primary school, high school and post-secondary education.


These advantages are compounded when it comes to hiring for jobs out of post-secondary school. Resumes, in this way, mirror our privilege.

The criteria for success in the future of work, however, presents an opportunity for a fairer system to assess job fit: skills.

If machine intelligence becomes a large source of expertise (e.g., cancer-screening detection, market research analytics and driving, just to name a few), people will need to adapt and change their skill sets to remain employable. A recent white paper published by IBM rated adaptability as the most important skill that executives will be hiring for in the future. Moreover, as technology continues to advance, our technical skills continue to depreciate (by approximately 50 per cent every five years).

As a result of all of these changes, we will have to upskill (the process of learning new skills or teaching workers new skills). We'll have to learn and unlearn throughout the majority of our working lives. This changes the formula from front-loading education early in life to a life of continuous learning. It also places skills, like the adaptability mentioned above, more centrally as the currency of labour.

As the CEO of Upwork, one of the fastest-growing gig platforms in the world, wrote two years ago, "What matters to me is not whether someone has a computer science degree, but how well they can think and how well they can code." The CEO of JPMorgan Chase, Jamie Dimon, echoed a similar sentiment, stating that "the reality is, the new world of work is about skills, not necessarily degrees."

Of course, degrees will still have value. It will also take some time to readjust our job-fit assessment infrastructures. However, paths that do not include a four-year post-secondary degree will also be included in the job-fit assessment as skills become central. This can make room for more inclusive opportunities for career advancement.

Having a more inclusive job-fit assessment infrastructure, however, will not happen automatically. There are many challenges that governments and employers will have to overcome, and actions they will need to take.


The adoption of advanced technologies in the workforce will revolutionize work. In fact, our very definition of what it means to work may change. How governments and employers respond to these changes will have a large impact on whether this results in positive gains for more people. We have the potential to build a future that works for more people than it currently does, and it is up to us to make it happen.

Sinead Bovell is a futurist and founder of WAYE (Weekly Advice for Young Entrepreneurs), an organization aiming to educate young entrepreneurs on the intersection of business, technology, and the future. She is the Leadership Lab columnist for August 2020.

This column is part of Globe Careers' Leadership Lab series, where executives and experts share their views and advice about the world of work. Find all Leadership Lab stories at tgam.ca/leadershiplab and guidelines for how to contribute to the column here.



Artificial Intelligence in Healthcare: Beyond disease prediction – ETHealthworld.com

By Monojit Mazumdar, Partner, and Krishatanu Ghosh, Manager, Deloitte India. In Deloitte Centre for Health Solutions' 2020 survey, conducted in January 2020, 83% of respondents mentioned Artificial Intelligence and Machine Learning (AI/ML) as one of their top two priorities.

Conventional wisdom has it that physicians cannot work from home. In healthcare, the traditional use of AI has been in disease detection and prediction. AI engines have generally been efficient at spotting anomalies in CT scans to detect the onset of a disease.

Does it need to remain restricted to detection only? Consider a specific scenario. Many Type 1 diabetes patients now use a Continuous Glucose Monitor (CGM) to get a near real-time reading of their blood sugar levels to determine insulin dosage. These commercially available devices pull the data and load it into a cloud-based data store at regular intervals.

Physicians look at the data during review and suggest adjustments to food and dosage. A simple AI algorithm can take this further by recommending a precise set of treatment adjustments for physicians to validate.
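As a toy illustration only (not clinical guidance), such an algorithm might turn the most recent CGM readings into a draft note for the physician to validate; the thresholds and rules below are placeholder assumptions invented for this sketch.

```python
from statistics import mean

def draft_recommendation(readings_mg_dl: list) -> str:
    """Turn recent CGM readings into a draft note for physician review.

    Thresholds are illustrative placeholders, not medical advice.
    """
    avg = mean(readings_mg_dl)
    trend = readings_mg_dl[-1] - readings_mg_dl[0]
    if avg > 180 and trend > 0:
        return "Glucose persistently high and rising: review basal insulin dose."
    if avg < 80:
        return "Glucose trending low: review carbohydrate intake and dosage."
    return "Glucose within target range: no change suggested."

# Example: readings pulled from the cloud-based data store at regular intervals.
print(draft_recommendation([190, 205, 215, 230, 238]))
```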

Since routine visits are getting deferred, this simple intervention has the potential to increase both precision and accuracy of the treatment process for all conditions that require timely and routine physician visits.

This opens up the possibility of AI being used as a recommendation tool rather than a detection-only model. This single change has the ability to transform the entire business model of physical healthcare. Rather than being facilities that physically host healthcare professionals along with patients, hospitals and clinics may start operating as digitally driven operations nerve centres.

An AI-based scheduling service may listen to the patient's condition through a chatbot or voice application. It can ask a series of questions, look at the clinical records of the patient in the system and prepare a basic hypothesis for diagnosis based on data.

It can then schedule an appointment with the most competent physician available depending on the urgency. Before the appointment, the AI engine may prepare a complete briefing with potential diagnoses and recommended treatments. It can answer a set of follow-on questions and allow the recommendations to be overridden.

In case a diagnostic intervention is required, the AI-driven scheduler should be able to arrange for an agent to collect samples and add the results to the patient's dossier. After the tele- or video consultation, a personal yet non-intrusive voice AI service may conduct regular follow-ups, remind the patient about medication and other recommended treatment, and flag any future treatment recommendations. The AI engine can sharpen these recommendations by constantly analysing the data stream from devices that monitor the patient and by consulting physicians.

While this sounds futuristic, the technology components are commercially available. With strong and progressively cheaper data networks, communication has just become easier. Cloud-based storage and delivery of information has cut the cost of computing infrastructure to a fraction. AI can process data faster as advanced hardware gains speed. Finally, a situation forced by the pandemic has changed our mindset to believe things can be equally good, if not better, in a remote mode.

Through efficient sharing of this data with suppliers, typical gaps between demand and supply can be bridged as well. The most important component of making the system work, the need for healthcare professionals, can be calibrated too, and with the increasing load on the healthcare system, a changing model of treatment aided by AI seems to be a good option for the future.

DISCLAIMER: The views expressed are solely of the author and ETHealthworld.com does not necessarily subscribe to it. ETHealthworld.com shall not be responsible for any damage caused to any person/organisation directly or indirectly.


Artificial Intelligence (AI) in the Freight Transportation Industry Market – Global Industry Growth Analysis, Size, Share, Trends, and Forecast 2020 …

Global Artificial Intelligence (AI) in the Freight Transportation Industry Market 2020 report focuses on the major drivers and restraints for the global key players. It also provides analysis of the market share, segmentation, revenue forecasts and geographic regions of the market.

The Artificial Intelligence (AI) in the Freight Transportation Industry market research study is an extensive evaluation of this industry vertical. It includes substantial information such as the current status of the Artificial Intelligence (AI) in the Freight Transportation Industry market over the projected timeframe. The basic development trends which this marketplace is characterized by over the forecast time duration have been provided in the report, alongside the vital pointers like regional industry layout characteristics and numerous other industry policies.

Request a sample Report of Artificial Intelligence (AI) in the Freight Transportation Industry Market at: https://www.marketstudyreport.com/request-a-sample/2833612?utm_source=Algosonline.com&utm_medium=AN

The Artificial Intelligence (AI) in the Freight Transportation Industry market research report is inclusive of myriad pros and cons of the enterprise products. Pointers such as the impact of the current market scenario on investors have been provided. Also, the study enumerates the enterprise competition trends in tandem with an in-depth scientific analysis of the downstream buyers as well as the raw material.

Unveiling a brief of the competitive scope of Artificial Intelligence (AI) in the Freight Transportation Industry market:

Unveiling a brief of the regional scope of Artificial Intelligence (AI) in the Freight Transportation Industry market:

Ask for Discount on Artificial Intelligence (AI) in the Freight Transportation Industry Market Report at: https://www.marketstudyreport.com/check-for-discount/2833612?utm_source=Algosonline.com&utm_medium=AN

Unveiling key takeaways from the Artificial Intelligence (AI) in the Freight Transportation Industry market report:

For More Details On this Report: https://www.marketstudyreport.com/reports/covid-19-outbreak-global-artificial-intelligence-ai-in-the-freight-transportation-industry-market-report-development-trends-threats-opportunities-and-competitive-landscape-in-2020

Related Reports:

1. COVID-19 Outbreak-Global Liposomes Drug Delivery Industry Market Report-Development Trends, Threats, Opportunities and Competitive Landscape in 2020. Read More: https://www.marketstudyreport.com/reports/covid-19-outbreak-global-liposomes-drug-delivery-industry-market-report-development-trends-threats-opportunities-and-competitive-landscape-in-2020

2. COVID-19 Outbreak-Global Radio Frequency (RF) Cable Industry Market Report-Development Trends, Threats, Opportunities and Competitive Landscape in 2020. Read More: https://www.marketstudyreport.com/reports/covid-19-outbreak-global-radio-frequency-rf-cable-industry-market-report-development-trends-threats-opportunities-and-competitive-landscape-in-2020

Related Report : https://www.marketwatch.com/press-release/Automated-Parcel-Delivery-Terminals-Market-2020-08-06

Contact Us: Corporate Sales, Market Study Report LLC | Phone: 1-302-273-0910 | Toll Free: 1-866-764-2150 | Email: [emailprotected]


Drones, blockchain, bots, artificial intelligence: the new auditors on the block – Economic Times

Experts say that apart from the jazzy tech like drones, some of the auditors are also using artificial intelligence and bots for auditing.

Auditors fear that at a time when they are working from home and unable to hit the ground, technology could be the only solution that could give them comfort as the fear of fraud increases due to movement restrictions and inability to do physical checks.

Mumbai: Though change came late to the musty world of auditing, it has finally arrived. Thanks to Covid-19, some of the top firms are using drones, robotics, artificial intelligence and blockchain technology to complete their auditing assignments during the pandemic. The eye in the sky that is the drone will now be used to cross-check whether inventory in a power company's financials tallies with the actual position of the stock of coal on

By ET Bureau



Hey software developers, you're approaching machine learning the wrong way – The Next Web

I remember the first time I ever tried to learn to code. I was in middle school, and my dad, a programmer himself, pulled open a text editor and typed out a Java "Hello World" program on the screen.

"Excuse me?" I said.

"It prints 'Hello World,'" he replied.

"What's public? What's class? What's static? What's..."

"Ignore that for now. It's just boilerplate."

But I was pretty freaked out by all that so-called boilerplate I didn't understand, and so I set out to learn what each one of those keywords meant. That turned out to be complicated and boring, and pretty much put the kibosh on my young coder aspirations.

It's immensely easier to learn software development today than it was when I was in high school, thanks to sites like codecademy.com, the ease of setting up basic development environments, and a general sway towards teaching high-level, interpreted languages like Python and JavaScript. You can go from knowing nothing about coding to writing your first conditional statements in a browser in just a few minutes. No messy environment setup, installations, compilers, or boilerplate to deal with: you can head straight to the juicy bits.

This is exactly how humans learn best. First, we're taught core concepts at a high level, and only then can we appreciate and understand under-the-hood details and why they matter. We learn Python, then C, then assembly, not the other way around.

Unfortunately, lots of folks who set out to learn Machine Learning today have the same experience I had when I was first introduced to Java. They're given all the low-level details up front (layer architecture, back-propagation, dropout, etc.) and come to think ML is really complicated, that maybe they should take a linear algebra class first, and give up.

That's a shame, because in the very near future, most software developers effectively using Machine Learning aren't going to have to think or know about any of that low-level stuff. Just as we (usually) don't write assembly or implement our own TCP stacks or encryption libraries, we'll come to use ML as a tool and leave the implementation details to a small set of experts. At that point, after Machine Learning is democratized, developers will need to understand not implementation details but instead best practices in deploying these smart algorithms in the world.

Today, if you want to build a neural network that recognizes your cat's face in photos or predicts whether your next Tweet will go viral, you'd probably set off to learn either TensorFlow or PyTorch. These Python-based deep learning libraries are the most popular tools for designing neural networks today, and they're both under five years old.

In its short lifespan, TensorFlow has already become way, way more user-friendly than it was five years ago. In its early days, you had to understand not only Machine Learning but also distributed computing and deferred graph architectures to be an effective TensorFlow programmer. Even writing a simple print statement was a challenge.

Just earlier this fall, TensorFlow 2.0 officially launched, making the framework significantly more developer-friendly. Here's what a Hello-World-style model looks like in TensorFlow 2.0:
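(The original embedded snippet is not reproduced here; below is a representative sketch modelled on the standard TensorFlow 2.0 beginner example, which matches the Dropout, Dense layers and sparse_categorical_crossentropy referenced in the next paragraph.)

```python
import tensorflow as tf

# Load a small benchmark dataset (handwritten digits) and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define, compile, train and evaluate a tiny feed-forward network.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```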

If you've designed neural networks before, the code above is straightforward and readable. But if you haven't, or you're just learning, you've probably got some questions. Like, what is Dropout? What are these dense layers, how many do you need and where do you put them? What's sparse_categorical_crossentropy? TensorFlow 2.0 removes some friction from building models, but it doesn't abstract away designing the actual architecture of those models.

So what will the future of easy-to-use ML tools look like? It's a question that everyone from Google to Amazon to Microsoft and Apple is spending clock cycles trying to answer. Also, disclaimer: it is what I spend all my time thinking about as an engineer at Google.

For one, we'll start to see many more developers using pre-trained models for common tasks; i.e., rather than collecting our own data and training our own neural networks, we'll just use Google's/Amazon's/Microsoft's models. Many cloud providers already do something like this. For example, by hitting a Google Cloud REST endpoint, you can use pre-trained neural networks for tasks like labeling images, transcribing speech, or analyzing the sentiment of text.
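As an illustration, a call to one of these endpoints can look roughly like the sketch below, which asks the Cloud Vision API for image labels; the API key and image file are placeholders, and the exact request schema should be checked against Google's current documentation.

```python
import base64
import requests

API_KEY = "YOUR_API_KEY"          # placeholder: supply your own key
with open("cat.jpg", "rb") as f:  # placeholder image file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Ask the pre-trained vision model for the top 5 labels it sees in the image.
body = {
    "requests": [{
        "image": {"content": image_b64},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}
resp = requests.post(
    f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}",
    json=body,
)
for label in resp.json()["responses"][0].get("labelAnnotations", []):
    print(label["description"], round(label["score"], 3))
```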

You can also run pre-trained models on-device, in mobile apps, using tools like Google's ML Kit or Apple's Core ML.

The advantage of using pre-trained models over a model you build yourself in TensorFlow (besides ease of use) is that, frankly, you probably cannot personally build a model more accurate than one that Google researchers, training neural networks on a whole Internet of data and tons of GPUs and TPUs, could build.

The disadvantage to using pre-trained models is that they solve generic problems, like identifying cats and dogs in images, rather than domain-specific problems, like identifying a defect in a part on an assembly line.

But even when it comes to training custom models for domain-specific tasks, our tools are becoming much more user-friendly.

Screenshot of Teachable Machine, a tool for building vision, gesture, and speech models in the browser.

Google's free Teachable Machine site lets users collect data and train models in the browser using a drag-and-drop interface. Earlier this year, MIT released a similar code-free interface for building custom models that runs on touchscreen devices, designed for non-coders like doctors. Microsoft and startups like lobe.ai offer similar solutions. Meanwhile, Google Cloud AutoML is an automated model-training framework for enterprise-scale workloads.

As ML tools become easier to use, the skills that developers hoping to use this technology (but not become specialists) will need are going to change. So if you're trying to plan for where, Wayne-Gretzky-style, the puck is going, what should you study now?

What makes Machine Learning algorithms distinct from standard software is that they're probabilistic. Even a highly accurate model will be wrong some of the time, which means it's not the right solution for lots of problems, especially on its own. Take ML-powered speech-to-text algorithms: it might be okay if, occasionally, when you ask Alexa to "turn off the music," she instead sets your alarm for 4 AM. It's not okay if a medical version of Alexa thinks your doctor prescribed you Enulose instead of Adderall.

Understanding when and how models should be used in production is, and will always be, a nuanced problem. It's especially tricky in high-stakes cases.

Take medical imaging. We're globally short on doctors, and ML models are often more accurate than trained physicians at diagnosing disease. But would you want an algorithm to have the last say on whether or not you have cancer? Same thing with models that help judges decide jail sentences. Models can be biased, but so are people.

Understanding when ML makes sense to use, as well as how to deploy it properly, isn't an easy problem to solve, but it's one that's not going away anytime soon.

Machine Learning models are notoriously opaque. That's why they're sometimes called "black boxes." It's unlikely you'll be able to convince your VP to make a major business decision with "my neural network told me so" as your only proof. Plus, if you don't understand why your model is making the predictions it is, you might not realize it's making biased decisions (i.e., denying loans to people from a specific age group or zip code).

It's for this reason that so many players in the ML space are focusing on building "Explainable AI" features: tools that let users more closely examine what features models are using to make predictions. We still haven't entirely cracked this problem as an industry, but we're making progress. In November, for example, Google launched a suite of explainability tools as well as something called Model Cards, a sort of visual guide for helping users understand the limitations of ML models.

Google's Facial Recognition Model Card shows the limitations of this particular model.

There are a handful of developers good at Machine Learning, a handful of researchers good at neuroscience, and very few folks who fall in that intersection. This is true of almost any sufficiently complex field. The biggest advances we'll see from ML in the coming years likely won't come from improved mathematical methods but from people with different areas of expertise learning at least enough Machine Learning to apply it to their domains. This is mostly the case in medical imaging, for example, where the most exciting breakthroughs, like being able to spot pernicious diseases in scans, are powered not by new neural network architectures but instead by fairly standard models applied to a novel problem. So if you're a software developer lucky enough to possess additional expertise, you're already ahead of the curve.

This, at least, is what I would focus on today if I were starting my AI education from scratch. Meanwhile, I find myself spending less and less time building custom models from scratch in TensorFlow and more and more time using high-level tools like AutoML and AI APIs and focusing on application development.

This article was written by Dale Markowitz, an Applied AI Engineer at Google based in Austin, Texas, where she works on applying machine learning to new fields and industries. She also likes solving her own life problems with AI, and talks about it on YouTube.


Ensighten Launches Client-Side Threat Intelligence Initiative and Invests in Machine Learning – WFMZ Allentown

MENLO PARK, Calif., Aug. 6, 2020 /PRNewswire/ -- Ensighten, the leader in client-side website security and privacy compliance enforcement, today announced increased investment into threat intelligence powered by machine learning. The new threat intelligence will focus specifically on client-side website threats with a mandate of discovering new methods as well as actively monitoring ongoing attacks against organizations.

Client-side attacks such as web skimming are now one of the leading threat vectors for data breaches and with a rapid acceleration of the digital transformation, businesses are facing a substantially increased risk. With privacy regulations, including the CCPA and GDPR, penalizing organizations for compromised customer data, online businesses of all sizes are facing significant security challenges due to the number of organized criminal groups using sophisticated malware.

"We have seen online attacks grow in both intensity and complexity over the past couple of years, with major businesses having their customers' data stolen," said Marty Greenlow, CEO of Ensighten. "One of the biggest challenges facing digital security is that these attacks happen at the client side in the customers' browser, making them very difficult to detect and often run for significant periods of time. By leveraging threat intelligence and machine learning, our customers will benefit from technology which dynamically adapts to the growing threat." Ensighten already provides the leading client-side website security solution to prevent accidental and malicious data leakage, and by expanding its threat intelligence, not only will it benefit its own technology, but also the security community in general. "We are a pioneer in website security, and we need to continue to lead the way," said Greenlow.

Ensighten's security technology is used by the digital marketing and digital security teams of some of the world's largest brands to protect their website and applications against malicious threats. This new threat intelligence initiative will enable further intelligence-driven capabilities and machine learning will drive automated rules, advanced data analytics, and more accurate identification. "Threat intelligence has always been part of our platform," said Jason Patel, Ensighten CTO, "but this investment will allow us to develop some truly innovative technological solutions to an issue that is unfortunately not only happening more regularly but is also growing in complexity."

Additional Resources

Learn more at http://www.ensighten.com or email info@ensighten.com

About Ensighten

Ensighten provides security technology to prevent client-side website data theft to the world's leading brands, protecting billions of online transactions. Through its cloud-based security platform, Ensighten continuously analyzes and secures online content at the point where it is most vulnerable: in the customer's browser. Ensighten threat intelligence focuses on client-side website attacks to provide the most comprehensive protection against web skimming, JavaScript Injection, malicious adware and emerging methods.


Introducing The AI & Machine Learning Imperative – MIT Sloan


The AI & Machine Learning Imperative offers new insights from leading academics and practitioners in data science and artificial intelligence. The Executive Guide, published as a series over three weeks, explores how managers and companies can overcome challenges and identify opportunities by assembling the right talent, stepping up their own leadership, and reshaping organizational strategy.

Leading organizations recognize the potential for artificial intelligence and machine learning to transform work and society. The technologies offer companies strategic new opportunities and integrate into a range of business processes, including customer service, operations, prediction, and decision-making, in scalable, adaptable ways.

As with other major waves of technology, AI requires organizations and managers to shed old ways of thinking and grow with new skills and capabilities. The AI & Machine Learning Imperative, an Executive Guide from MIT SMR, offers new insights from leading academics and practitioners in data science and AI. The guide explores how managers and companies can overcome challenges and identify opportunities across three key pillars: talent, leadership, and organizational strategy.


The series launches Aug. 3, and summaries of the upcoming articles are included below. Sign up to be reminded when new articles launch in the series, and in the meantime, explore our recent library of AI and machine learning articles.

In order to achieve the ultimate strategic goals of AI investment, organizations must broaden their sights beyond creating augmented intelligence tools for limited tasks. To prepare for the next phase of artificial intelligence, leaders must prioritize assembling the right talent pipeline and technology infrastructure.

Recent technical advances in AI and machine learning offer genuine productivity returns to organizations. Nevertheless, finding and enabling talented individuals to succeed in engineering these kinds of systems can be a daunting challenge. Leading a successful AI-enabled workforce requires key hiring, training, and risk management considerations.

AI is no regular technology, so AI strategy needs to be approached differently than regular technology strategy. A purposeful approach is built on three foundations: a robust and reliable technology infrastructure, a specific focus on new business models, and a thoughtful approach to ethics. Available Aug. 10.

CFOs who take ownership of AI technology position themselves to lead an organization of the future. While AI is likely to impact business practices dramatically in the future across the C-suite, it's already having an impact today, and the time for CFOs to step up to AI leadership is now. Available Aug. 12.

To remain relevant and resilient, companies and leaders must strive to build business models in a way that ensures three key components are working together: AI that enables and powers a centralized data lake of enterprise data, a marketplace of sellers and partners that make individualized offers based on the intelligence of the data collected and powered by AI, and a SaaS platform that is essential for users. Available Aug. 17.

Acquiring the right AI technology and producing results, while critical, aren't enough. To gain value from AI, organizations need to focus on managing the gaps in skills and processes that impact people and teams within the organization. Available Aug. 19.


Ally MacDonald (@allymacdonald) is a senior editor at MIT Sloan Management Review.


Who Does the Machine Learning and Data Science Work? – Customer Think

A survey of over 19,000 data professionals showed that nearly two-thirds of respondents said they analyze data to influence product/business decisions. Only a quarter of respondents said they do research to advance the state of the art of machine learning. Different data roles have different work activity profiles, with Data Scientists engaging in a wider range of work activities than other data professionals.

We know that data professionals, when working on data science and machine learning projects, spend their time on a variety of different activities (e.g., gathering data, analyzing data, communicating to stakeholders) to complete those projects. Today's post focuses on the broad work activities (or projects) that make up their roles at work, including "Build prototypes to explore applying machine learning to new areas" and "Analyze and understand data to influence product or business decisions." Toward that end, I will use the data from the recent Kaggle survey of over 19,000 data professionals in which respondents were asked a variety of questions about their analytics practices, including their job title, work experience and the tools and products they use.

The survey respondents were asked to "Select any activities that make up an important part of your role at work: (Select all that apply)." On average, respondents indicated that two (median) of the activities make up an important part of their role. The entire list of activities is shown in Figure 1.
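As a sketch of how a multi-select question like this can be tallied, the snippet below uses pandas with a hypothetical CSV export in which each activity option is its own column; the real Kaggle export uses its own question codes, so the file and column names here are assumptions.

```python
import pandas as pd

# Hypothetical export: one row per respondent, one column per activity option,
# cells contain the activity text when selected and are empty otherwise.
df = pd.read_csv("kaggle_survey_responses.csv")
activity_cols = [c for c in df.columns if c.startswith("activity_")]

# Median number of activities selected, overall and by job title.
df["n_activities"] = df[activity_cols].notna().sum(axis=1)
print(df["n_activities"].median())
print(df.groupby("job_title")["n_activities"].median())

# Percentage of respondents selecting each activity, broken out by job title.
selected = df[activity_cols].notna()
profile = selected.groupby(df["job_title"]).mean().mul(100).round(1)
print(profile)
```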

Figure 1. Activities that Make Up Important Parts of Data Professionals' Roles

The top work activity was somewhat practical in nature, helping the company improve how it runs the business: analyzing data to influence products and decisions. The work activity with the lowest endorsement was more theoretical in nature: doing research that advances the state of the art of machine learning.

Next, I examined whether there were differences across data roles (as indicated by respondents' job titles) with respect to work activities. I looked at five different job titles for this analysis. The results revealed a couple of interesting findings (see Figure 2):

First, respondents who self-identified as Data Scientists, on average, indicated that they are involved in 3 (median) activities at work compared to the other respondents who are involved in 2 job activities.

Second, we see that the profile of work activities varies greatly across different data roles. While many of the respondents indicated that analysis and understanding of data to influence products/decisions was the top activity for them, a top activity for Research Scientists was doing research that advances the state of the art of machine learning. Additionally, the top activity for Data Engineers was building and/or running the data infrastructure.

Figure 2. Typical work activities vary across different data roles.

The top work activity for data professional roles appears to be very practical and necessary to run day-to-day business operations. These top work activities included influencing business decisions, building prototypes to expand machine learning to new areas and improving ML models. The bottom activity was more about long-term understanding of machine learning reflected in conducting research to advance the state of the art of machine learning.

Different data roles possess different activity profiles. Top work activities tend to be associated with the skill sets of different data roles. Building/Running data infrastructure was the top activity for Data Engineers; doing research to advance the field of machine learning was a top activity for Research Scientists. These results are not surprising as we know that different data professionals have different skill sets. In prior research, I found that data professionals who self-identified as Researchers have a strong math/statistics/research skill set. Developers, on the other hand, have strong programming/technology skills. And data professionals who were Domain Experts have strong business-domain knowledge. Data science and machine learning work really is a team sport. Getting data teams with members who have complementary skill sets will likely improve the success rate of data science projects.

Remember that data professionals have their unique skill set that makes them a better fit for some data roles than others. When applying for data-related positions, it might be useful to look at the type of work activities for which you have experience (or are competent) and apply for the positions with corresponding job titles. For example, if you are proficient in running a data infrastructure, you might consider focusing on Data Engineer jobs. If you have a strong skill set related to research and statistics, you might be more likely to get a call back when applying for Research Scientist positions.
