
COVID-19 Is Driving a Cloud Computing Surge That Will Only Continue | Opinion – Newsweek

Tech companies are booming. In earnings reports from the end of July, Amazon, Apple, Alphabet and Microsoft all beat expectations and posted rising revenues across a range of products and services. Amazon had the most impressive showing, with explosive earnings of $10.30 per share against the roughly $1.50 predicted by analysts. The COVID-19 pandemic is reshaping every industry on the planet, and for Big Tech, that appears to be a positive for profit.

One of the fastest-growing services many of these companies offer is enterprise cloud computing, and coronavirus restrictions may be pushing demand for these products even higher. Overall revenue for the top cloud platforms has soared over the first two quarters of 2020. Amazon Web Services, the world's leading cloud computing provider, posted 33 percent and 29 percent growth in its first and second earnings reports of 2020. Microsoft Azure and Google Cloud had even greater increases of around 50 percent each quarter.

The increase in cloud revenue for Big Tech is an amplification of a decade-long trend. Companies are spending more on outsourcing their computing needs: in the last 10 years, enterprise spending on cloud services has increased by over 8,500 percent, from $1.1 billion in 2009 to $96.4 billion in 2019. Spending on cloud storage and IT infrastructure is quickly replacing spending on non-cloud solutions, the physical servers and computers owned by a company.


COVID-19 is proving to be the ultimate test for cloud providers and business cloud users. Employees across many industries are now spread throughout the world due to office closures and travel restrictions, yet remote work is expected to match normal office productivity. So far, cloud computing is meeting the challenge, giving companies easy access to the storage, tools, IT and other services that are now almost essential for a modern business.

Cloud computing for businesses can be broken down into three main services: infrastructure (IaaS), software (SaaS) and platform (PaaS). Salesforce, a company founded with cloud enterprise services at the core of its business, specializes in SaaS by providing software and productivity tools over the cloud. Slack has a similar business model, housing its software and productivity tools on its own servers for users to access anywhere. IaaS is another popular form of cloud computing, in which companies like Amazon, Microsoft, Google and IBM own and manage servers for clients to use, primarily for storage and complex networking. This is the most common solution for companies looking to save on the costs of buying and maintaining their own servers and networking at scale. PaaS is a step up from IaaS, giving clients access to complete hardware over the cloud for intensive projects like application development. From a broad perspective, large, expensive in-house hardware and software are replaced with similar capabilities delivered via the internet.

Look for the dominant services (SaaS and IaaS) to continue charting new territory for businesses looking to further digitize and stabilize their presence online. Whether it's for more effective communication with employees, access to essential applications and storage databases, or to expand business from physical sources to digital, cloud computing will play a key role in reshaping businesses and industries during a long COVID-19 economic recovery.


Manuel Moerbach is president and CEO of Statista Inc.

The views expressed in this article are the writer's own.


Asia Pacific Personal Cloud Market Industry Analysis and Market Forecast (2019-2026) _ Hosted Types, Revenues, User Type, and Geography. – Galus…

The Asia Pacific Personal Cloud Market is expected to reach US$ 9,580 Mn by 2026 from US$ XX Mn in 2018, at a CAGR of XX%.

The Asia Pacific personal cloud market is segmented by hosted type, revenue, user type, and geography. Based on hosted type, the market includes hosting from the provider's premises and from the user's premises. By revenue, the market is classified into direct and indirect revenues. On the basis of user type, the market consists of individuals, small enterprises, and medium enterprises. Geographically, the market spans China, Japan, India, South Korea, Australia, and others. The report analyzes the revenue impact of the COVID-19 pandemic on market leaders, market followers and market disrupters, and this is reflected in the analysis.

REQUEST FOR FREE SAMPLE REPORT: https://www.maximizemarketresearch.com/request-sample/8425

The current economic and political challenges faced by China have led to some optimism in its IT sector. Over the past few years, cloud storage has taken hold of the Chinese IT industry and driven its growth. Pervasive, rapid internet growth has exploded as more people gain access to mobile devices. The personal cloud has sprung up to serve an unmet need for sufficient storage and data-sharing points for mobile users. Behind the boom lies a growing level of attention from both Chinese and international players. Given the market's size, every technology vendor hopes to hold a share of it.

Unlike in other markets, personal cloud services in the Asia Pacific region allow users to share information, raising concerns among regulators. In a bid to clean up the sector, national authorities in different countries have embarked on drives to purge illegal content from the internet, resulting in the exit of six major services. Moreover, the absence of a clear business model continues to stand between companies and profitability.

China, India, Japan, and South Korea are some of the key countries that have been the major contributors to the growth of the Asia Pacific Personal Cloud market. China's tech-savvy consumers and their eagerness to embrace new technological innovations are forerunners of growth for the Personal Cloud market in the Asia Pacific region.

Key players operating in the market include Nihao Cloud, Alibaba Cloud, Meituan Open Services, Tencent Cloud, UCloud, QingCloud, Huawei Enterprise Cloud, NetEase Cloud, Western Digital, Elephant Cloud, Baidu Cloud, E Cloud, Grand Cloud, KS Cloud, Microsoft, IBM Corporation, and Google Inc.

The objective of the report is to present a comprehensive analysis of the Asia Pacific Personal Cloud Market, including all the stakeholders of the industry. The past and current status of the industry, with forecasted market size and trends, is presented in the report, with analysis of complicated data in simple language. The report covers all aspects of the industry, with a dedicated study of key players that includes market leaders, followers and new entrants by region. PORTER, SVOR and PESTEL analyses, with the potential impact of microeconomic factors by region on the market, are presented in the report. External as well as internal factors that are expected to affect the business positively or negatively have been analyzed, giving decision makers a clear, forward-looking view of the industry.

The report also helps in understanding the dynamics and structure of the Asia Pacific Personal Cloud Market by analyzing the market segments and projecting the market size. A clear representation of the competitive analysis of key players by type, price, financial position, product portfolio, growth strategies, and regional presence makes the report an investor's guide.

DO INQUIRY BEFORE PURCHASING REPORT HERE: https://www.maximizemarketresearch.com/inquiry-before-buying/8425

The scope of Asia Pacific Personal Cloud Market:

Asia Pacific Personal Cloud Market by Hosted Types:

Hosted from provider's premises
Hosted from user's premises

Asia Pacific Personal Cloud Market by Revenues:

Direct Revenues
Indirect Revenues

Asia Pacific Personal Cloud Market by User Type:

Individual
Small Enterprises
Medium Enterprises

Asia Pacific Personal Cloud Market by Geography:

China
Japan
India
South Korea
Australia
Others

Key Players Operated in Asia Pacific Personal Cloud Market:

Nihao Cloud, Alibaba Cloud, Meituan Open Services, Tencent Cloud, UCloud, QingCloud, Huawei Enterprise Cloud, NetEase Cloud, Western Digital, Elephant Cloud, Baidu Cloud, E Cloud, Grand Cloud, KS Cloud, Microsoft, IBM Corporation, Google Inc.

Browse Full Report with Facts and Figures of Asia Pacific Personal Cloud Market Report at: https://www.maximizemarketresearch.com/market-report/asia-pacific-personal-cloud-market/8425/

About Us:

Maximize Market Research provides B2B and B2C market research on 20,000 high growth emerging technologies & opportunities in Chemical, Healthcare, Pharmaceuticals, Electronics & Communications, Internet of Things, Food and Beverages, Aerospace and Defense and other manufacturing sectors.

Contact info:

Name: Vikas Godage

Organization: MAXIMIZE MARKET RESEARCH PVT. LTD.

Email: [emailprotected]

Contact: +919607065656/ +919607195908

Website: www.maximizemarketresearch.com


3 Predictions For The Role Of Artificial Intelligence In Art And Design – Forbes

Christie's made the headlines in 2018 when it became the first auction house to sell a painting created by AI. The painting, named "Portrait of Edmond de Belamy," ended up selling for a cool $432,500, but more importantly, it demonstrated that intelligent machines are now perfectly capable of creating artwork.

3 Predictions For The Role Of Artificial Intelligence In Art And Design

It was only a matter of time, I suppose. Thanks to AI, machines have been able to learn more and more human functions, including the ability to see (think facial recognition technology), speak and write (chatbots being a prime example). Learning to create is a logical step on from mastering the basic human abilities. But will intelligent machines really rival humans' remarkable capacity for creativity and design? To answer that question, here are my top three predictions for the role of AI in art and design.

1. Machines will be used to enhance human creativity (enhance being the key word)

Until we can fully understand the brain's creative thought processes, it's unlikely machines will learn to replicate them. As yet, there's still much we don't understand about human creativity: the inspired ideas that pop into our brain seemingly out of nowhere, the "eureka!" moments of clarity that stop us in our tracks. Much of that thought process remains a mystery, which makes it difficult to replicate the same creative spark in machines.

Typically, then, machines have to be told what to create before they can produce the desired end result. The AI painting that sold at auction? It was created by an algorithm that had been trained on 15,000 pre-20th century portraits, and was programmed to compare its own work with those paintings.

The takeaway from this is that AI will largely be used to enhance human creativity, not replicate or replace it, a process known as "co-creativity." As an example of AI improving the creative process, IBM's Watson AI platform was used to create the first-ever AI-generated movie trailer, for the horror film Morgan. Watson analyzed visuals, sound, and composition from hundreds of other horror movie trailers before selecting appropriate scenes from Morgan for human editors to compile into a trailer. This reduced a process that usually takes weeks down to one day.

2. AI could help to overcome the limits of human creativity

Humans may excel at making sophisticated decisions and pulling ideas seemingly out of thin air, but human creativity does have its limitations. Most notably, we're not great at producing a vast number of possible options and ideas to choose from. In fact, as a species, we tend to get overwhelmed and less decisive the more options we're faced with! This is a problem for creativity because, as American chemist Linus Pauling, the only person to have won two unshared Nobel Prizes, put it, "You can't have good ideas unless you have lots of ideas." This is where AI can be of huge benefit.

Intelligent machines have no problem coming up with infinite possible solutions and permutations, and then narrowing the field down to the most suitable options, the ones that best fit the human creative's vision. In this way, machines could help us come up with new creative solutions that we couldn't possibly have come up with on our own.

For example, award-winning choreographer Wayne McGregor has collaborated with the Google Arts & Culture Lab to come up with new, AI-driven choreography. An AI algorithm was trained on thousands of hours of McGregor's videos, spanning 25 years of his career, and as a result, the program came up with 400,000 McGregor-like sequences. In McGregor's words, the tool "gives you all of these new possibilities you couldn't have imagined."

3. Generative design is one area to watch

Much like in the creative arts, the world of design will likely shift toward greater collaboration between humans and AI. This brings us to generative design, a cutting-edge field that uses intelligent software to enhance the work of human designers and engineers.

Very simply, the human designer inputs their design goals, specifications, and other requirements, and the software takes over to explore all possible designs that meet those criteria. Generative design could be utterly transformative for many industries, including architecture, construction, engineering, manufacturing, and consumer product design.
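To make the idea concrete, here is a minimal, hypothetical sketch of goal-driven design exploration: the "designer" supplies a load requirement, and the program samples candidate dimensions, keeps the ones that meet the spec, and ranks them by material used. The strength and material formulas are toy stand-ins, not real engineering models.

```python
import random

random.seed(0)

# Hypothetical design goal: a beam that supports at least 500 units of load
# using the least material. Width and height are the free design parameters.
def strength(width, height):
    return width * height ** 2          # toy stiffness model

def material(width, height):
    return width * height               # toy cross-section area

# The "software takes over": sample many candidate designs, keep those that
# meet the specification, and rank them by material used.
candidates = [(random.uniform(1, 10), random.uniform(1, 10)) for _ in range(5000)]
feasible = [c for c in candidates if strength(*c) >= 500]
best = min(feasible, key=lambda c: material(*c))

print(strength(*best) >= 500)  # True: the chosen design meets the requirement
```

Real generative-design tools replace the random sampling with far smarter search and simulation, but the shape of the process, goals in, ranked feasible designs out, is the same.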

In one exciting example of generative design, renowned designer Philippe Starck collaborated with software company Autodesk to create a new chair design. Starck and his team set out the overarching vision for the chair and fed the AI system questions like, "Do you know how we can rest our bodies using the least amount of material?" From there, the software came up with multiple suitable designs to choose from. The final design, an award-winning chair named "AI," debuted at Milan Design Week in 2019.

Machine co-creativity is just one of 25 technology trends that I believe will transform our society. Read more about these key trends, including plenty of real-world examples, in my new books, Tech Trends in Practice: The 25 Technologies That Are Driving The 4th Industrial Revolution and The Intelligence Revolution: Transforming Your Business With AI.


Catalyst of change: Bringing artificial intelligence to the forefront – The Financial Express

Artificial Intelligence (AI) has been much talked about over the last few years. Several interpretations of the potential of AI and its outcomes have been shared by technologists and futurologists. With the focus on the customer, the possibilities range from predicting trends to recommending actions to prescribing solutions.

The potential for change due to AI applications is energised by several factors. The first is the concept of AI itself, which is not a new phenomenon. Researchers, cognitive scientists and hi-tech experts working with complex data for decades in domains such as space, medicine and astrophysics have used data to derive deep insights, predict trends and build futuristic models.

AI has now moved out of the realm of research labs and into the commercial world and everyday life thanks to three key levers. Innovation and technological advancement in hardware, telecommunications and software have been the catalysts in bringing AI to the forefront and pushing beyond the frontiers of data and analytics.

What was once seen as a big breakthrough, analysing data as if-else-then scenarios, transitioned to machine learning, with the capability to deal with hundreds of variables, though mostly structured data sets. Handcrafted techniques using algorithms did find ways to convert unstructured data to structured data, but there are limits to the volumes of data machine learning can handle.

With 80% of data being unstructured, and with the realisation that the real value of data analysis emerges only when structured and unstructured data are synthesised, deep learning arrived; it is capable of handling thousands of factors and drawing inferences from tens of billions of data points (voice, image, video and queries) each day. Techniques for determining patterns in unstructured data, such as multilingual text, multimodal speech and vision, have been maturing, making recommendation engines more effective.

Another important factor aiding the rapid adoption of AI is the evolution of hardware. CPUs (central processing units) are versatile and designed for handling sequential code, not for massively parallel problems. This is where GPUs (graphical processing units), hitherto considered primarily for applications such as gaming, are now being deployed to serve commercial establishments, governments and other domains dealing with gigantic volumes of data, supporting their needs for parallel processing in areas such as smart parking, retail analytics and intelligent traffic systems. Such compute-intensive functions, which require massive problems to be broken into smaller ones that can be parallelised, are finding efficient hardware and hosting options in the cloud.

Therefore, the key drivers of this major transition are the evolution of hardware and hosting on the cloud; sophisticated tools and software to capture, store and analyse data; and a variety of devices that keep us always connected and generate humongous volumes of data. These dimensions, along with advances in telecommunications, will continue to evolve, making it possible for commercial establishments, governments and society to arrive at solutions that deliver superior experiences for the common man. Whether it is agriculture, health, decoding crimes, transportation or maintenance of law and order, we have already started seeing the play of digital technologies, and the democratisation of AI will soon become a reality.

The writer is chairperson, Global Talent Track, a corporate training solutions company



Artificial Intelligence: How realistic is the claim that AI will change our lives? – Bangkok Post

Artificial Intelligence: How realistic is the claim that AI will change our lives?

Artificial Intelligence (AI) stakes a claim on productivity, corporate dominance, and economic prosperity with Shakespearean drama. AI will change the way you work and spend your leisure time, and it even lays claim to your identity.

First, an AI primer.

Let's define intelligence, before we get onto the artificial kind. Intelligence is the ability to learn. Our senses absorb data about the world around us. We can take a few data points and make conceptual leaps. We see light, feel heat, and infer the notion of "summer."

Our expressive abilities provide feedback, i.e., our data outputs. Intelligence is built on data. When children play, they engage in endless feedback loops through which they learn.

Computers, too, are deemed intelligent if they can compute, conceptualise, see and speak. A particularly fruitful area of AI is getting machines to enjoy the same sensory experiences that we have. Machines can do this, but they require vast amounts of data. They do it by brute force, not cleverness. For example, they determine that an image shows a cat by breaking the pixel data into little steps and repeating until done.

Key point: What we do and what machines do is not so different, but AI is more about data and repetition than it is about reasoning. Machines figure things out mathematically, not visually.

AI is a suite of technologies (machines and programs) that have predictive power, and some degree of autonomous learning.

AI consists of three building blocks:

An algorithm is a set of rules to be followed when solving a problem. The speed and volume of data that can be fed into algorithms are more important than the "smartness" of the algorithms themselves.

Let's examine these three parts of the AI process:

The raw ingredient of intelligence is data. Data is learning potential. AI is mostly about creating value through data. Data becomes a core business asset when insights can be extracted from it. The more you have, the more you can do. Companies with a Big Data mind-set don't mind filtering through lots of low-value data. The power is in the aggregation of data.

Building quality datasets for input is critical too, so human effort must first be spent obtaining, preparing and cleaning data. The computer does the calculations and provides the answers, or output.

Conceptually, Machine Learning (ML) is the ability to learn a task without being explicitly programmed to do so. ML encompasses algorithms and techniques that are used in classification, regression, clustering or anomaly detection.

ML relies on feedback loops. The data is used to make a model, and then test how well that model fits the data. The model is revised to make it fit the data better, and repeated until the model cannot be improved anymore. Algorithms can be trained with past data to find patterns and make predictions.
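That fit-test-revise loop can be sketched in a few lines. The toy data and learning rate below are invented for illustration; the loop refits a straight line and stops once the error no longer improves, exactly the "repeat until the model cannot be improved anymore" described above.

```python
# Hypothetical past observations: y is roughly 2*x + 1.
data = [(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.2), (4, 9.0)]

def loss(w, b):
    """Mean squared error of the model y = w*x + b on the data."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

w, b, lr = 0.0, 0.0, 0.02
prev = float("inf")
while prev - loss(w, b) > 1e-9:   # stop when no meaningful improvement remains
    prev = loss(w, b)
    # Feedback loop: nudge the parameters against the error gradient.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w, b = w - lr * gw, b - lr * gb

print(round(w, 2), round(b, 2))  # converges near the underlying trend y ≈ 2x + 1
```

The fitted line can then be used to predict y for x values never seen in the training data, which is the bridge to the next section.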

Key point: AI expands the set of tools that we have to gain a better grasp of finding trends or structure in data, and make predictions. Machines can scale way beyond human capacity when data is plentiful.

Prediction is the core purpose of ML. For example, banks want to predict fraudulent transactions. Telecoms want to predict churn. Retailers want to predict customer preferences. AI-enabled businesses make their data assets a strategic differentiator.
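As a toy illustration of prediction from past data (not any bank's actual system), a transaction can be scored by how far it sits from a customer's usual spending pattern; the history and threshold below are invented:

```python
import statistics

# Hypothetical transaction history for one card (amounts in dollars).
history = [12.5, 9.9, 15.0, 11.2, 13.8, 10.4, 14.1, 12.0]

def fraud_score(amount, past):
    """Distance from the customer's usual spend, in standard deviations."""
    mu = statistics.mean(past)
    sigma = statistics.stdev(past)
    return abs(amount - mu) / sigma

# Predict: anything more than 3 standard deviations out is suspicious.
print(fraud_score(13.0, history) > 3)   # False: an ordinary purchase
print(fraud_score(950.0, history) > 3)  # True: flagged for review
```

Production fraud models learn far richer patterns (merchant, location, timing), but the principle is the same: past data defines "normal," and predictions flag deviations from it.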

Prediction is not just about the future; it's about filling in knowledge gaps and reducing uncertainty. Prediction lets us generalise, an essential form of intelligence. Prediction and intelligence are tied at the hip.

Let's examine the wider changes unfolding.

AI increases our productivity. The question is how we distribute the resources. If AI-enhanced production only requires a few people, what does that mean for income distribution? All the uncertainties are on how the productivity benefits will be distributed, not how large they will be.

Caution:

ML is already pervasive on the internet. Will the democratisation of access brought on by the internet continue to favour global monopolies? Unprecedented economic power rests in a few companies (you can guess which ones) with global reach. Can the power of channelling our collective intelligence continue to be held by these companies, which are positioned to influence our private interests with their economic interests?

Nobody knows if AI will produce more wealth or economic precariousness. Absent various regulatory measures, it is inevitable that it will increase inequality and create new social gaps.

Let's examine the impact on everyone.

As with all technology advancements, there will be changes in employment: the number of people employed, the nature of jobs and the satisfaction we will derive from them. However, with AI all classes of labour are under threat, including management. Professions involving analysis and decision-making will become the province of machines.

New positions will be created, but nobody really knows if new jobs will sufficiently replace former ones.

We will shift more to creative or empathetic pursuits. To the extent of income shortfall, should we be rewarded for contributing in our small ways to the collective intelligence? Universal basic income is one option, though it remains theoretical.

Our consumption of data (mobile phones, web-clicks, sensors) provides a digital trail that is fed into corporate and governmental computers. For governments, AI opens new doors to perform surveillance, predictive policing, and social shaming. For corporates, it's not clear whether surveillance capitalism, the commercialisation of your personal data, will be personalised to you, or for you. Will it direct you where they want you to go, rather than where you want to go?

How will your data be a measure of you?

The interesting angle emerging is whether we will be hackable. That's when the AI knows more about you than you know about yourself. At that point you become completely influenceable, because you can be made to think and react as directed by governments and corporates.

We do need artificial forms of intelligence because our prediction abilities are limited, especially when handling big data and multiple variables. But for all its stunning accomplishments, AI remains very specific. Learning machines are circumscribed to very narrow areas of learning. The DeepMind system that wins systematically at Go can't eat soup with a spoon or predict the next financial crisis.

Filtering and personalisation engines have the potential to both accommodate and exploit our interests. The degree of change will be propelled, and restrained, by new regulatory priorities. The law always lags behind technology, so expect the slings and arrows of our outrageous fortune.

Author: Greg Beatty, J.D., Business Development Consultant. For further information please contact gregfieldbeatty@gmail.com

Series Editor: Christopher F. Bruton, Executive Director, Dataconsult Ltd, chris@dataconsult.co.th. Dataconsult's Thailand Regional Forum provides seminars and extensive documentation to update business on future trends in Thailand and in the Mekong Region.


3 Ways Artificial Intelligence Is Transforming The Energy Industry – OilPrice.com

Back in 2017, Bill Gates penned a poignant online essay to graduating college students around the world in which he tapped artificial intelligence (AI), clean energy, and biosciences as the three fields he would spend his energies on if he could start all over again and wanted to make a big impact in the world today.

It turns out that the Microsoft co-founder was right on the money.

Three years down the line and deep in the throes of the worst pandemic in modern history, AI and renewable energy have emerged as some of the biggest megatrends of our time. On the one hand, AI is powering the fourth industrial revolution and is increasingly being viewed as a key strategy for mastering some of the greatest challenges of our time, including climate change and pollution. On the other hand, there is a widespread recognition that carbon-free technologies like renewable energy will play a critical role in combating climate change.

Consequently, stocks in the AI, robotics, and automation sectors as well as clean energy ETFs have lately become hot property.

From utilities employing AI and machine learning to predict power fluctuations and cost optimization to companies using IoT sensors for early fault detection and wildfire powerline/gear monitoring, here are real-life cases of how AI has continued to power an energy revolution even during the pandemic.

Top uses of AI in the energy sector

Source: Intellias

#1. Innowatts: Energy monitoring and management

The Covid-19 crisis has triggered an unprecedented decline in power consumption. Not only has overall consumption suffered, but there have also been significant shifts in power usage patterns, with sharp decreases by businesses and industries while domestic use has increased as more people work from home.

Houston, Texas-based Innowatts is a startup that has developed an automated toolkit for energy monitoring and management. The company's eUtility platform ingests data from more than 34 million smart energy meters serving 21 million customers, including major U.S. utility companies such as Arizona Public Service Electric, Portland General Electric, Avangrid, Gexa Energy, WGL, and Mega Energy. Innowatts says its machine learning algorithms can analyze the data to forecast several critical data points, including short- and long-term loads, variances, weather sensitivity, and more.


Innowatts estimates that without its machine learning models, utilities would have seen inaccuracies of 20% or more on their projections at the peak of the crisis, thus placing enormous strain on their operations and ultimately driving up costs for end-users.
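The idea of a weather-sensitive load forecast can be illustrated with a deliberately simple model (an illustration only, not Innowatts' actual algorithms): fit load against temperature on past observations, then project forward. All numbers below are hypothetical.

```python
# Hypothetical daily observations: (temperature in °C, load in MWh).
obs = [(20, 410), (24, 455), (28, 510), (31, 560), (35, 620), (26, 480)]

n = len(obs)
mean_t = sum(t for t, _ in obs) / n
mean_l = sum(l for _, l in obs) / n

# Least-squares fit of the model: load = base + sensitivity * temperature.
sensitivity = (sum((t - mean_t) * (l - mean_l) for t, l in obs)
               / sum((t - mean_t) ** 2 for t, _ in obs))
base = mean_l - sensitivity * mean_t

def forecast(temp):
    """Predicted load (MWh) for a day with the given forecast temperature."""
    return base + sensitivity * temp

print(round(forecast(30)))  # expected load for a 30 °C day
```

Real utility models fold in many more drivers (day of week, holidays, behaviour shifts like the pandemic's work-from-home effect), which is precisely where the 20 percent accuracy gap cited above comes from.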

#2. Google: Boosting the value of wind energy

A while back, we reported that proponents of nuclear energy were using the pandemic to highlight its strong points vis-à-vis the shortcomings of renewable energy sources. To wit, wind and solar are the least predictable and consistent of the major power sources, while nuclear and natural gas boast the highest capacity factors.

Well, one tech giant has figured out how to employ AI to iron out those kinks.

Three years ago, Google announced that it had reached 100% renewable energy for its global operations, including its data centers and offices. Today, Google is the largest corporate buyer of renewable power, with commitments totaling 2.6 gigawatts (2,600 megawatts) of wind and solar energy.

In 2017, Google teamed up with DeepMind, its sister company under Alphabet, to search for a solution to the highly intermittent nature of wind power. Using DeepMind's AI platform, Google deployed ML algorithms across 700 megawatts of wind power capacity in the central United States, enough to power a medium-sized city.

DeepMind says that by using a neural network trained on widely available weather forecasts and historical turbine data, it can now predict wind power output 36 hours ahead of actual generation. Consequently, this has boosted the value of Google's wind energy by roughly 20 percent.
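The economics behind that uplift can be sketched with back-of-the-envelope numbers (all hypothetical, not DeepMind's or Google's figures): output committed a day ahead earns a better price than unscheduled output sold on the spot market.

```python
# Hypothetical market prices and output for one day of a wind farm.
day_ahead_price = 42.0   # $/MWh for scheduled, predictable delivery
spot_price = 35.0        # $/MWh for unscheduled output

actual_output = 700.0    # MWh actually generated

# Without a forecast, everything sells at the spot price.
unscheduled_revenue = actual_output * spot_price

# With a (slightly imperfect) 36-hour forecast, most output can be committed
# in advance; the small gap between forecast and actual sells at spot.
forecast_output = 680.0
scheduled_revenue = (forecast_output * day_ahead_price
                     + (actual_output - forecast_output) * spot_price)

uplift = scheduled_revenue / unscheduled_revenue - 1
print(f"value uplift: {uplift:.0%}")  # ≈ 19%, in the ballpark of the figure above
```

The better the forecast, the more output can be committed at the premium price, which is why forecast accuracy translates directly into revenue.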

A similar model can be used by other wind farm operators to make smarter, faster and more data-driven optimizations of their power output to better meet customer demand.

DeepMind uses trained neural networks to predict wind power output 36 hours ahead of actual generation

Source: DeepMind

#3. Wildfire powerline and gear monitoring

In June, California's biggest utility, Pacific Gas & Electric, found itself in deep trouble. The company pleaded guilty over the tragic 2018 wildfire that left 84 people dead and saddled PG&E with hefty penalties: $13.5 billion in compensation to people who lost homes and businesses, plus a $2 billion fine from the California Public Utilities Commission for negligence.

It will be a long climb back to the top for the fallen giant: its stock crashed nearly 90% following the disaster, even though the company emerged from bankruptcy in July.

Perhaps the loss of lives and livelihoods could have been averted if PG&E had invested in an AI-powered early-detection system.

Source: CNN Money

One such system comes from a startup called VIA, based in Somerville, Massachusetts. VIA says it has developed a blockchain-based app that can predict when vulnerable power transmission gear, such as transformers, might be at risk in a disaster. VIA's app makes better use of energy data sources, including smart meters and equipment inspections.

A comparable product comes from the Korean firm Alchera, which uses AI-based image recognition in combination with thermal and standard cameras to monitor power lines and substations in real time. The AI system is trained to watch the infrastructure for abnormal events such as falling trees, smoke, fire and even intruders.

Beyond utilities, oil and gas producers have also been integrating AI into their operations.

By Alex Kimani for Oilprice.com


Visit link:
3 Ways Artificial Intelligence Is Transforming The Energy Industry - OilPrice.com


Toward a machine learning model that can reason about everyday actions – MIT News

The ability to reason abstractly about events as they unfold is a defining feature of human intelligence. We know instinctively that crying and writing are means of communicating, and that a panda falling from a tree and a plane landing are variations on descending.

Organizing the world into abstract categories does not come easily to computers, but in recent years researchers have inched closer by training machine learning models on words and images infused with structural information about the world, and how objects, animals, and actions relate. In a new study at the European Conference on Computer Vision this month, researchers unveiled a hybrid language-vision model that can compare and contrast a set of dynamic events captured on video to tease out the high-level concepts connecting them.

Their model did as well as or better than humans at two types of visual reasoning tasks: picking the video that conceptually best completes the set, and picking the video that doesn't fit. Shown videos of a dog barking and a man howling beside his dog, for example, the model completed the set by picking the crying baby from a set of five videos. The researchers replicated their results on two datasets for training AI systems in action recognition: MIT's Multi-Moments in Time and DeepMind's Kinetics.

"We show that you can build abstraction into an AI system to perform ordinary visual reasoning tasks close to a human level," says the study's senior author Aude Oliva, a senior research scientist at MIT, co-director of the MIT Quest for Intelligence, and MIT director of the MIT-IBM Watson AI Lab. "A model that can recognize abstract events will give more accurate, logical predictions and be more useful for decision-making."

As deep neural networks become expert at recognizing objects and actions in photos and video, researchers have set their sights on the next milestone: abstraction, and training models to reason about what they see. In one approach, researchers have merged the pattern-matching power of deep nets with the logic of symbolic programs to teach a model to interpret complex object relationships in a scene. Here, in another approach, researchers capitalize on the relationships embedded in the meanings of words to give their model visual reasoning power.

"Language representations allow us to integrate contextual information learned from text databases into our visual models," says study co-author Mathew Monfort, a research scientist at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "Words like 'running,' 'lifting,' and 'boxing' share some common characteristics that make them more closely related to the concept 'exercising,' for example, than 'driving.'"

Using WordNet, a database of word meanings, the researchers mapped the relation of each action-class label in Moments and Kinetics to the other labels in both datasets. Words like sculpting, carving, and cutting, for example, were connected to higher-level concepts like crafting, making art, and cooking. Now when the model recognizes an activity like sculpting, it can pick out conceptually similar activities in the dataset.
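The mapping step can be illustrated with a toy stand-in for WordNet. The hand-built hypernym table below is hypothetical (a real implementation would query WordNet's hypernym relations instead), but it shows how walking up the concept hierarchy lets a system decide that two action labels are conceptually related:

```python
# Hypothetical, hand-built stand-in for a WordNet-style hypernym map:
# each action label points to its more abstract parent concept.
HYPERNYMS = {
    "sculpting": "crafting",
    "carving": "crafting",
    "cutting": "crafting",
    "crafting": "making",
    "running": "exercising",
    "lifting": "exercising",
    "boxing": "exercising",
}

def ancestors(label):
    """Walk up the hypernym chain, collecting every more abstract parent."""
    out = []
    while label in HYPERNYMS:
        label = HYPERNYMS[label]
        out.append(label)
    return out

def related(a, b):
    """Two labels are conceptually related if their ancestor sets overlap."""
    set_a = set(ancestors(a)) | {a}
    set_b = set(ancestors(b)) | {b}
    return bool(set_a & set_b)
```

With this table, "sculpting" and "carving" come out related (both roll up to "crafting"), while "sculpting" and "running" do not.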

This relational graph of abstract classes is used to train the model to perform two basic tasks. Given a set of videos, the model creates a numerical representation for each video that aligns with the word representations of the actions shown in the video. An abstraction module then combines the representations generated for each video in the set to create a new set representation that is used to identify the abstraction shared by all the videos in the set.
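The two steps above can be sketched in miniature. The 3-dimensional "word vectors" below are made up for illustration (real systems learn high-dimensional language embeddings), and the abstraction module is reduced to a simple average followed by a nearest-word lookup:

```python
import numpy as np

# Hypothetical word vectors for a few action concepts; the values are
# invented so that "vocalizing" sits near the vocal actions.
word_vecs = {
    "barking":    np.array([0.9, 0.1, 0.0]),
    "howling":    np.array([0.8, 0.2, 0.0]),
    "crying":     np.array([0.7, 0.3, 0.1]),
    "vocalizing": np.array([0.8, 0.2, 0.05]),  # the shared abstraction
    "driving":    np.array([0.0, 0.1, 0.9]),
}

def set_representation(video_embeddings):
    """Toy 'abstraction module': average the per-video embeddings."""
    return np.mean(video_embeddings, axis=0)

def shared_abstraction(video_embeddings, vocab):
    """Return the vocabulary word whose vector lies closest to the set."""
    rep = set_representation(video_embeddings)
    return min(vocab, key=lambda w: np.linalg.norm(vocab[w] - rep))

# Pretend each video's embedding already aligns with its action's word vector.
videos = [word_vecs["barking"], word_vecs["howling"], word_vecs["crying"]]
concept = shared_abstraction(videos, word_vecs)
```

Here the set of a barking dog, a howling man, and a crying baby maps to "vocalizing" rather than "driving", mirroring the paper's set-abstraction task in toy form.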

To see how the model would do compared to humans, the researchers asked human subjects to perform the same set of visual reasoning tasks online. To their surprise, the model performed as well as humans in many scenarios, sometimes with unexpected results. In a variation on the set completion task, after watching a video of someone wrapping a gift and covering an item in tape, the model suggested a video of someone at the beach burying someone else in the sand.

"It's effectively covering, but very different from the visual features of the other clips," says Camilo Fosco, a PhD student at MIT who is co-first author of the study with PhD student Alex Andonian. "Conceptually it fits, but I had to think about it."

Limitations of the model include a tendency to overemphasize some features. In one case, it suggested completing a set of sports videos with a video of a baby and a ball, apparently associating balls with exercise and competition.

A deep learning model that can be trained to think more abstractly may be capable of learning with less data, the researchers say. Abstraction also paves the way toward higher-level, more human-like reasoning.

"One hallmark of human cognition is our ability to describe something in relation to something else, to compare and to contrast," says Oliva. "It's a rich and efficient way to learn that could eventually lead to machine learning models that can understand analogies and are that much closer to communicating intelligently with us."

Other authors of the study are Allen Lee from MIT, Rogerio Feris from IBM, and Carl Vondrick from Columbia University.


The fourth generation of AI is here, and it's called Artificial Intuition – The Next Web

Artificial Intelligence (AI) is one of the most powerful technologies ever developed, but it's not nearly as new as you might think. In fact, it has undergone several evolutions since its inception in the 1950s. The first generation of AI was descriptive analytics, which answers the question, "What happened?" The second, diagnostic analytics, addresses, "Why did it happen?" The third and current generation is predictive analytics, which answers the question, "Based on what has already happened, what could happen in the future?"

While predictive analytics can be very helpful and save time for data scientists, it is still fully dependent on historical data. Data scientists are therefore left helpless when faced with new, unknown scenarios. In order to have true artificial intelligence, we need machines that can think on their own, especially when faced with an unfamiliar situation. We need AI that can not just analyze the data it is shown, but express a "gut feeling" when something doesn't add up. In short, we need AI that can mimic human intuition. Thankfully, we have it.

What is Artificial Intuition?

The fourth generation of AI is artificial intuition, which enables computers to identify threats and opportunities without being told what to look for, just as human intuition allows us to make decisions without specifically being instructed on how to do so. It's similar to a seasoned detective who can enter a crime scene and know right away that something doesn't seem right, or an experienced investor who can spot a coming trend before anybody else. The concept of artificial intuition is one that, just five years ago, was considered impossible. But now companies like Google, Amazon and IBM are working to develop solutions, and a few companies have already managed to operationalize it.

How Does It Work?

So, how does artificial intuition accurately analyze unknown data without any historical context to point it in the right direction? The answer lies within the data itself. Once presented with a current dataset, the complex algorithms of artificial intuition are able to identify any correlations or anomalies between data points.

Of course, this doesn't happen automatically. First, instead of building a quantitative model to process the data, artificial intuition applies a qualitative model. It analyzes the dataset and develops a contextual language that represents the overall configuration of what it observes. This language uses a variety of mathematical models, such as matrices, Euclidean and multidimensional space, linear equations and eigenvalues, to represent the big picture. If you envision the big picture as a giant puzzle, artificial intuition is able to see the completed puzzle right from the start, and then work backward to fill in the gaps based on the interrelationships of the eigenvectors.

In linear algebra, an eigenvector of a linear transformation is a nonzero vector that changes by at most a scalar factor (its direction does not change) when that transformation is applied to it. The corresponding eigenvalue is the factor by which the eigenvector is scaled. In concept, this provides a guidepost for spotting anomalies: any eigenvectors that do not fit correctly into the big picture are flagged as suspicious.
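One common way to make the "doesn't fit the big picture" idea concrete (not necessarily what any particular vendor does) is to eigendecompose the data's covariance matrix and flag points with a large residual off the dominant eigenvector. The toy "transaction" data below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy transaction features: 200 normal points lying near a 1-d line in
# 3-d space, plus one point that breaks the pattern.
t = rng.normal(size=200)
normal = np.column_stack([t, 2 * t, -t]) + rng.normal(scale=0.05, size=(200, 3))
outlier = np.array([[3.0, -6.0, 3.0]])          # wrong direction entirely
data = np.vstack([normal, outlier])

# Eigendecomposition of the covariance matrix: the eigenvector with the
# largest eigenvalue captures the structure shared by normal points.
centered = data - data.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
top = eigvecs[:, -1]                            # np.linalg.eigh sorts ascending

# Points with a large residual off that eigenvector don't fit the picture.
proj = centered @ np.outer(top, top)
residual = np.linalg.norm(centered - proj, axis=1)
flagged = int(np.argmax(residual))              # index of the worst-fitting point
```

The 201st point (index 200) is flagged because it sits far from the subspace the dominant eigenvector spans, which is the geometric reading of "does not fit correctly into the big picture."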

How Can It Be Used?

Artificial intuition can be applied to virtually any industry, but it is currently making considerable headway in financial services. Large global banks are increasingly using it to detect sophisticated new financial cybercrime schemes, including money laundering, fraud and ATM hacking. Suspicious financial activity is usually hidden among thousands upon thousands of transactions, each with its own set of connected parameters. By using extremely complicated mathematical algorithms, artificial intuition rapidly identifies the five most influential parameters and presents them to analysts.

In 99.9 percent of cases, when analysts see the five most important parameters and their interconnections out of hundreds, they can immediately identify the type of crime being committed. So artificial intuition is able to produce the right type of data, identify it, detect crime with a high level of accuracy and a low level of false positives, and present it in a way that is easily digestible for analysts.
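The article doesn't describe the banks' actual algorithms, but the "surface the five most anomalous parameters" step can be approximated with simple z-scores. The parameter names and data below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical transaction record with 50 parameters; names are made up.
features = [f"param_{i}" for i in range(50)]
population = rng.normal(size=(10_000, 50))       # historical transactions

# A suspicious transaction: mostly ordinary, but five parameters are extreme.
suspicious = rng.normal(size=50)
for i in (3, 11, 27, 30, 42):
    suspicious[i] = 8.0

# Score each parameter by how far the transaction sits from the population
# (absolute z-score), then surface the top five for the analyst.
mu = population.mean(axis=0)
sigma = population.std(axis=0)
zscores = np.abs((suspicious - mu) / sigma)
top5 = [features[i] for i in np.argsort(zscores)[-5:][::-1]]
```

A production system would rank interacting parameter combinations rather than single columns, but the presentation step is the same: reduce hundreds of parameters to a short, digestible shortlist.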

By uncovering these hidden relationships between seemingly innocent transactions, artificial intuition is able to detect and alert banks to the unknown unknowns (previously unseen and therefore unexpected attacks). Not only that, but the data is explained in a way that is traceable and logged, enabling bank analysts to prepare enforceable suspicious activity reports for the Financial Crimes Enforcement Network (FinCEN).

How Will It Affect the Workplace?

Artificial intuition is not intended to serve as a replacement for human instinct. It is just an additional tool that helps people perform their jobs more effectively. In the banking example outlined above, artificial intuition isn't making any final decisions on its own; it's simply presenting an analyst with what it believes to be criminal activity. It remains the analyst's job to review the identified transactions and confirm the machine's suspicions.

AI has certainly come a long way since Alan Turing first presented the concept back in the 1950s, and it is not showing any sign of slowing down. Previous generations were just the tip of the iceberg. Artificial intuition marks the point when AI truly became intelligent.


Published September 3, 2020 17:00 UTC


Artificial Intelligence Is Helping to Spot California Wildfires – GovTech

(TNS) As 12,000 lightning strikes pummeled the Bay Area this month, igniting hundreds of fires, fire spotters sprang into action.

Their arsenal of tools includes thermal imagery collected by space satellites; real-time feeds from hundreds of mountaintop cameras; a far-flung array of weather stations monitoring temperature, humidity and winds; and artificial intelligence to munch and crunch the vast data troves to pinpoint hot spots.

For decades, wildfires in remote regions were spotted by people in lookout towers who scanned the horizon with binoculars for smoke, a tough and tedious job. They reported potential danger by telephone, carrier pigeon or Morse code signals flashed with a mirror.

Now, fire spotting has gone high tech. And the technology is getting exponentially better and faster, trained by a growing body of data about wildfires. It's making firefighters more nimble and keeping them safer. The only question is whether silicon-powered progress can keep up with the climate change-fueled flames.

Tech has also made fire spotting more democratic. Anyone can go online to see the satellite and camera images, while interactive maps display the conflagrations' locations. Footage from some of the mountaintop cameras went viral this month as they transmitted apocalyptic images of the raging flames that ultimately burned them in the CZU Lightning Complex fires.

"It's Netflix for fire," said Graham Kent, who runs the AlertWildfire.org system, which has about 550 cameras in California, a number he hopes to double by 2022. The cameras capture a still image every second to make time-lapse videos, using near-infrared technology for nighttime viewing. "They give an intimate sense of what's going on. There's a primal sense, like we're still living in caves; everyone fears fire."

The network of cameras, backed by a consortium of the University of Nevada at Reno, UC San Diego and the University of Oregon, allows authorized personnel such as fire command teams to rotate, pan and zoom to zero in on suspicious plumes of smoke. The AlertWildfire system is adding some mobile cameras: a trailer with a 30-foot tower that can be positioned anywhere it's needed.

The images from the cameras and satellites, along with footage captured by piloted and unpiloted aircraft, and weather station data, are vital components in the rapidly advancing technology for fire spotting.

"The new technology is helping us fight more-aggressive fires more aggressively, with a calculated level of safety," said Brice Bennett, a spokesman for Cal Fire. "Fire-line commanders utilize intelligence from all these different inputs. Situational awareness is paramount: fully understanding the events unfolding around you, not just what's directly in front of your face but what will occur in the next 12 hours."

The boots-on-the-ground crews use the detailed data to get information even while they're en route, he said. The digital maps can show where the hottest spots are, for instance, so they know which areas to avoid and where to construct fire lines.

"We can use this information to understand where fires are spreading, where they're most active, and to get rapid alerts for wildfires," said Scott Strenfel, manager of meteorology and fire science at PG&E. "It's pretty exciting with all this technology coming together. The earlier you can spot a fire, the earlier you can take suppression action."

During fire season, PG&E staffs its new Wildfire Safety Operations Center around the clock. Analysts in the room at the company's San Francisco headquarters watch big-screen monitors displaying data-packed maps and information flowing in from a variety of sources.

The company used to spend a couple of million dollars a year on a smoke patrol program. Every afternoon during fire season, seven pilots would fly set patterns (similar to a lawn mower's path) over heavily forested areas in its service territory, looking for smoke. But satellite advances meant it could get similar information for a tenth of the cost, with continuous coverage, Strenfel said.

Even in a test version last year, the satellite system detected an early-morning grass fire on Mount Diablo in July 2019 about 15 minutes before the first 911 calls came in, he said. PG&E now has systems in place to notify local fire agencies when its technology spots fires.

Technology comes into play after fires as well. "We map burn severity to see how much damage resulted from the fire, so resource management can stabilize the landscape and mitigate hazards like flash floods," said Brad Quayle, a program manager at the Forest Service's Geospatial Technology and Applications Center, which uses satellites and other technologies to detect and monitor fire activity.

Technology also helps authorities decide whether and when to evacuate locals.

"A fire is a dynamic situation, with high winds, dry fuels, proximity to populations, especially in California," said Everett Hinkley, national remote sensing program manager at the Forest Service. "We can provide rapid updates to infer the direction and speed of those wildfires to help people calling the evacuation orders."

Although satellites have been used in fire spotting for about 20 years, a new generation of satellites and onboard tools have dramatically improved their aptitude for the task.

"Weather satellites have thermal channels that can be used for fires, but they're optimized to look at cloud temperatures, (which are) very cold, not for very high temperatures," said Vincent Ambrosia, associate program manager for wildfires at the NASA Ames Research Center in Mountain View. Newer satellites with spectral sensors and advanced optics now provide finer spatial resolution and faster data processing.

There are two types of satellites: Polar orbiter satellites are closer to Earth and provide higher-resolution images, but capture them only twice a day. Geosynchronous or geostationary satellites stay over a specific geographic area, providing images about every five minutes, but must fly about 22,000 miles above the Earth to synchronize with its orbit, so the images are more coarse.

Researchers have lengthy lists of tech improvements they hope to see in the near future.

One is unpiloted aircraft that can stay aloft for months at a time, perhaps 100,000 feet above the ground, "providing persistent surveillance of a fire event, allowing (firefighters) to make real-time decisions," Ambrosia said. "It's the same as the resources that support troops on the ground in battle scenarios."

Quayle likewise said he'd like to see long-endurance, high-altitude platforms that can serve the purpose of a satellite but fly in the atmosphere.

Several private companies are working on options such as solar-powered aircraft or high-altitude airships like dirigibles, he said, estimating that deployment is between one and five years out.

He'd also like to see satellites built specifically for fire detection, something now being developed in Canada, which is replete with remote, fire-prone forests. That satellite system is probably five years out from completion and launch, he said, noting that the rest of the world can share it.

While some have speculated that the smaller drones flown by hobbyists could be deployed, they lack the power and range to fly high enough to usefully spot fires. But their technology, too, could improve over time.

Another future upgrade is for computers to get even better at reading the data via improved artificial intelligence, to cut down on false positives. "We need better machine learning to process this data overload, because you can't put enough analysts in front of screens to handle it all," Hinkley said.

Despite all the high-tech wizardry, many fires are initially reported through a traditional system: 911 calls. Blazes increasingly occur near populated areas, so there are essentially millions of potential spotters on the ground.

"The 911 calls in many places will be the first notification," Strenfel said.

But calls to 911 can mean a deluge of information without the specifics that firefighters need, so the satellites and cameras come into play to home in on exact locations.

"In cases like we just went through, with the lightning causing 500 fires all at once, and many people calling, that information can be overwhelming," Strenfel said. "The satellite detection systems (show) where these fires are in real time."

Kent from AlertWildfire said similar things about his camera network.

"When a 911 call comes in, authorities can turn to a camera and see the ignition phase of that fire," he said. Cameras can also triangulate a fire's exact location. Under normal circumstances, they can see 20 miles in the daytime and 40 miles at night, if there aren't obstacles. But he's seen fires caught by cameras as far away as 100 miles in the daytime and 160 miles at night.
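Triangulating a fire from two camera sightings is basic plane geometry: intersect the two sight lines. A minimal sketch follows, using a flat-earth approximation, compass bearings in degrees clockwise from north, and hypothetical tower positions in kilometers:

```python
import math

def triangulate(cam1, bearing1_deg, cam2, bearing2_deg):
    """Intersect two bearing rays (degrees clockwise from north) from two
    camera positions (x = east, y = north) to estimate a fire's location."""
    # Convert compass bearings to unit direction vectors.
    d1 = (math.sin(math.radians(bearing1_deg)), math.cos(math.radians(bearing1_deg)))
    d2 = (math.sin(math.radians(bearing2_deg)), math.cos(math.radians(bearing2_deg)))
    # Solve cam1 + t1*d1 == cam2 + t2*d2 for t1 via the 2-d cross product.
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(cross) < 1e-9:
        raise ValueError("bearings are parallel; no unique intersection")
    dx, dy = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / cross
    return (cam1[0] + t1 * d1[0], cam1[1] + t1 * d1[1])

# Two hypothetical towers 10 km apart both sighting the same smoke plume.
fire = triangulate((0.0, 0.0), 45.0, (10.0, 0.0), 315.0)  # -> roughly (5, 5)
```

Real camera networks refine this with terrain models and more than two sight lines, but the core calculation, intersecting bearings from known positions, is the same one the old Osborne Fire Finder towers performed by hand.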

Sometimes traditional ways reemerge.

Cal Fire's Amador-El Dorado Unit recently refurbished two dilapidated lookout towers and now staffs them during fire season with community volunteers.

Armed with a two-way radio, binoculars and an Osborne Fire Finder (a topographic paper map with sighting apertures to help gauge a fire's distance and location), the volunteers have spotted 85 smokes since June 1, seven of them first reports, said Diana Swart, a spokeswoman for the unit.

"These human volunteers get up in that tower with their old-fashioned Fire Finders from the early 1900s," she said. "In these very rural wooded areas, fires otherwise may not be noticed until they get very large. Having a person out there who's actively looking is key."

©2020 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.



Artificial intelligence expert moves to Montreal because it’s an AI hub – Montreal Gazette

Irina Rish, now a renowned expert in the field of artificial intelligence, first became drawn to the topic as a teenager in the former Soviet republic of Uzbekistan. At 14, she was fascinated by the notion that machines might have their own thought processes.

"I was interested in math in school and I was looking at how you improve problem solving and how you come up with algorithms," Rish said in a phone interview Friday afternoon. "I didn't know the word yet (algorithm) but that's essentially what it was. How do you solve tough problems?"

She read a book introducing her to the world of artificial intelligence and that kick-started a lifelong passion.

"First of all, they sounded like just mind-boggling ideas, that you could recreate in computers something as complex as intelligence," said Rish. "It's really exciting to think about creating artificial intelligence in machines. It kind of sounds like sci-fi. But the other interesting part of that is that you hope that by doing so, you can also better understand the human mind and hopefully achieve better human intelligence. So you can say AI is not just about computer intelligence but also about our intelligence. Both goals are equally exciting."
