
Will the next Mozart or Picasso come from artificial intelligence? No, but here’s what might happen instead – Ladders

As artificial intelligence has been slowly becoming more and more of a mainstream term, there has been a question rumbling in the art community:

Will AI replace creativity?

It's a fantastic question, to tell you the truth, and it certainly shows what sorts of problems we're wrestling with as a society in today's day and age.

First, it's important to consider what our definition of art is in the first place. A very broad definition within the art world would be, "Anything created by a human to please someone else." That's what makes something art. In this sense, photography is an art. Videography is an art. Painting, music, drawing, sculpture, all of these things are done to evoke an emotion, to please someone else: created by one human, and enjoyed by another.

Stage one: AI became a trendy marketing phrase used by everyone from growth hackers to technologists, with the intention of getting more eyeballs on their work, faster. So the term AI actually made its way into the digital art world faster than the technology itself, since people would use the term to make what they were building seem more cutting-edge than anything else in the space, regardless of whether or not it was actually utilizing true artificial intelligence.

Stage two: Companies saw the potential artificial intelligence had in being able to provide people (in a wide range of industries) with tools to solve critical problems. For example, we use data science at Skylum to help photographers and digital content creators be more efficient when performing complex operations, like retouching photos, replacing backgrounds, etc. We use AI to make the process of creating the art more efficient, automating the boring or tedious tasks so that artists can focus more time and energy on the result instead of the process.

There's a great article in Scientific American titled, "Is Art Created by AI Really Art?" And the answer is both yes and no.

It's not that artificial intelligence will fundamentally replace human artists. It's that AI will lower the barrier to entry in terms of skill, and give the world access to more creative minds because of what can be easily achieved using digital tools. Art will still require a human vision; however, the way that vision is executed will become easier, more convenient, less taxing, and so on.

For example, if you are only spending one day in Paris, and you want to capture a particular photograph of the Eiffel Tower, that day might not be the best day for your photo. The weather might be terrible, there might be thousands of people around, etc. Well, you can use artificial intelligence to not only remove people from the photograph but even replace the Eiffel Tower with a higher-resolution picture of the tower (from a separate data set), or change the sky, the weather, etc.

The vision is yours, but suddenly you are not limited by the same constraints to execute your vision.

Digital art tools are built to make the process as easy as possible for the artist. If you consider the history of photography as an art, back in the film days, far more time was spent developing film than actually taking pictures. This is essentially the injustice technologists are looking to solve. The belief in the digital art community is that more time shouldn't be spent doing all the boring things required for you to do what you love. Your time should be spent doing what you love and executing your vision, exclusively.

Taking this a step further, a photographer today usually spends only 20-30% of their time giving a photo the look and feel they want, but 70% of their time selecting an object in Photoshop or whichever program they're using, cutting things out, creating a mask, adding new layers, etc. In this sense, the artist is more focused on the process of creating their vision, which is what creates a hurdle for other artists and potentially very creative individuals to even get into digital art creation. They have to learn these processes and these skills in order to participate, when in actuality, they may be highly capable of delivering a truly remarkable result, if only they weren't limited, either by their skills, their environment, or some other challenge.

So, artificial intelligence isn't here to replace the everyday artist. If anything, the goal of technology is to allow more people to express their own individual definition of art.

There may be more Mozarts and Picassos in our society than we realize.

This article first appeared on Minutes Magazine.


What Jobs Will Artificial Intelligence Affect? – EHS Today

It's impossible to ignore the fact that advances in artificial intelligence (AI) are changing how we do our current jobs. But what has captured even more interest is how the increasing capability of this technology will affect future jobs.

Many studies have been undertaken to determine the specific effects on particular jobs and sectors, but this information has been hard to capture.

To add further research to this topic, the Brookings Institution issued a report on Nov. 20 presenting a new method of analyzing this issue.

By employing a novel technique developed by Stanford University Ph.D. candidate Michael Webb, the new report establishes job "exposure levels" by analyzing the overlap between AI-related patents and job descriptions, the report said. In this way, the research homes in on the impacts of AI specifically, and does it by studying empirical statistical associations as opposed to expert forecasting.

The technique Webb used quantifies the overlap between the text of AI patents and the text of job descriptions, which can identify the kinds of tasks and occupations likely to be affected by particular AI capabilities.
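The mechanics of such an overlap measure can be sketched in a few lines. The snippet below is not Webb's actual method or data; it is a toy illustration, with hypothetical patent and job-description texts, of how vocabulary overlap between AI patents and a job description could yield an "exposure" score.

```python
# Toy sketch of patent/job-description overlap scoring (not Webb's method).
import re

def tokens(text):
    """Lowercase word set, ignoring very short words."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def exposure(job_description, patent_texts):
    """Fraction of a job description's vocabulary that appears in the patent corpus."""
    job = tokens(job_description)
    patents = set().union(*(tokens(p) for p in patent_texts))
    return len(job & patents) / len(job) if job else 0.0

# Hypothetical toy inputs, for illustration only.
patents = [
    "A system for classifying medical images using a trained neural network.",
    "Method for predicting loan default risk from transaction records.",
]
radiologist = "Interpret medical images and classify findings in patient records."
landscaper = "Plant trees, mow lawns, and maintain garden beds."

# The radiologist description shares more vocabulary with AI patents
# than the landscaper description does.
print(exposure(radiologist, patents) > exposure(landscaper, patents))  # True
```

The real study works at the scale of the whole patent corpus and the O*NET occupation database, but the principle is the same: more textual overlap, more exposure.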

"We find that Webb's AI measures depict a very different range of impacts on the workforce than those from robotics and software. Where the robotics and software that dominate the automation field seem mostly to involve routine or rule-based tasks, and thus lower- or middle-pay roles, AI's distinctive capabilities suggest that high-wage occupations will be some of the most exposed," the report noted.

"Patents are useful here because they provide timely predictions of the commercial relevance of specific technological applications. Occupational descriptions are also useful because they provide detailed insight into economic activities at the scale of the whole economy."

Findings

Based on these conclusions, the report says that we have a lot to learn about AI, and that these are extremely early days in our inquiries. What's coming may not resemble what we have experienced or expect to experience.

Society should get ready for a very different pattern of impact than those that accompanied the broad adoption of robotics and software. While the last waves of automation led to increases in inequality and wage polarization, it's not clear that AI will have the same effects.


The next generation of user experience is artificially intelligent – ZDNet

These days, in any discussion about enterprise computing, the action is at the front end -- delivering superior user or customer experiences and user interfaces. Artificial intelligence-based technologies are providing developers and IT teams the power they need to deliver, while reducing the repetitive, manual tasks that have characterized UX, CX and UI.

"Not only does enhanced automation help deliver the right information on demand, but it also incorporates natural language processing to get smarter about the questions being asked," relates Chris McNabb, CEO of Dell Boomi. I recently had the opportunity to chat with McNabb, who talked about the urgency of focusing on UX as a key part of enterprise computing initiatives. "You can't increase productivity without ease of use, without being smarter, and getting pervasive intelligence into your user experience," he says.

In today's digital era, the challenge has extended well beyond the data and application integration challenges enterprises have been wrestling with over the past two decades. "It's the engagement side as well that matters," he points out. "How do I engage customers, partners, prospects, and employees in a way that gives them world-class services that can make a difference in my business? Successful transformation lives both in data and in engagement."

AI, in all its forms, is taking on the UX experience for enterprises. "I think AI holds tremendous potential," McNabb says. "AI allows computer systems, for instance, to read human X-rays at a much higher or more granular read than humans can. That's a great use for AI." The potential is also seen in re-orienting work within enterprises, "predicting and dynamically creating information for people on the fly, based on knowledge that's in your platform," he continues. "How do you align that user experience to make it faster and easier for people to get their jobs done? Not, 'what is this component? What is this object? What is all the software engineering terminology?'"

While there has been a lot of concern about the looming AI skills shortage, McNabb believes the inherently automated nature of AI will help mitigate this. The skills are most needed for creating and training data models, he explains. "It is a complicated deal to train models; you need experts to establish the model, to establish the training method for that model. But if you look at how the training occurs, you don't need a tremendous amount of knowledge and experience to do that." With natural language processing, for instance, "if I just keep feeding it phrases, and keep asking it questions, the model will train itself."

In Dell Boomi's own employment of the technology, "what ends up happening for us in our use of machine learning is that the training does occur by our 9,000 customers," McNabb explains. "Every time somebody asks it a question, we validate whether the response came back good or bad, and that model gets smarter and smarter and smarter."
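That feedback loop can be illustrated with a toy model. The sketch below is not Boomi's system; `FeedbackBot`, its candidate answers, and the scoring scheme are all invented for illustration. It shows how validated good/bad responses alone can steer a model toward better answers.

```python
# Toy feedback-trained answer picker (invented for illustration).
from collections import defaultdict

class FeedbackBot:
    def __init__(self):
        # score[question_word][answer] grows with positive feedback
        self.score = defaultdict(lambda: defaultdict(float))

    def answer(self, question, candidates):
        """Pick the candidate answer with the highest accumulated score."""
        words = question.lower().split()
        return max(candidates,
                   key=lambda a: sum(self.score[w][a] for w in words))

    def feedback(self, question, answer, good):
        """A validated response nudges every question word toward (or away
        from) the answer, so the model 'trains itself' from usage."""
        for w in question.lower().split():
            self.score[w][answer] += 1.0 if good else -1.0

bot = FeedbackBot()
candidates = ["restart the agent", "check the license"]

# First try is an uninformed guess; the validation teaches the bot.
bot.answer("agent will not start", candidates)
bot.feedback("agent will not start", "restart the agent", good=True)

# After feedback, the same question maps to the validated answer.
print(bot.answer("agent will not start", candidates))  # restart the agent
```

At Boomi's scale the same principle applies across thousands of customers, with every validated interaction acting as a labeled training example.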


Emotion Artificial Intelligence Market Business Opportunities and Forecast from 2019-2025 | Eyesight Technologies, Affectiva – The Connect Report

The report examines the global Emotion Artificial Intelligence market with an eye to its growth and development, the trade chain, import and export data, and supply and demand.

The Global Emotion Artificial Intelligence Market report contains a valuable body of knowledge that illuminates the most important sectors of the Emotion Artificial Intelligence market. The information in the report delivers comprehensive detail about the industry that is comprehensible not just for a professional but also for a layperson. The worldwide report provides information on every aspect of the market, including reviews of the final product and the key factors driving or hampering market growth.

IBM, Microsoft, Eyesight Technologies, Affectiva, NuraLogix, gestigon GmbH, Crowd Emotion, Beyond Verbal, nViso, Cogito Corporation, Kairos

Sample Copy of the Report here: http://www.marketresearchglobe.com/request-sample/1049195

North America, United States, Asia-Pacific, Central & South America, Middle East & Africa

Get an Attractive Discount on the Report at: http://www.marketresearchglobe.com/check-discount/1049195

Ask Questions to an Expert at: http://www.marketresearchglobe.com/send-an-enquiry/1049195

Customization of this Report: This Emotion Artificial Intelligence report can be customized to the customer's requirements. Please contact our sales professional (sales@marketresearchglobe.com); we will ensure you obtain a report that works for your needs.


Meet the power players behind Microsoft’s Azure cloud – Business Insider

Microsoft's cloud business is on the rise and the Redmond, Washington-based company has assembled a team of high-powered executives to upend its rivals.

Microsoft Azure has long been considered the No. 2 cloud provider versus dominant Amazon Web Services, but that perception has started to change.

"Azure is the primary growth engine for the company and positions them to have a leading market share in a potentially multitrillion-dollar opportunity in the future of computing," RBC Capital analyst Alex Zukin said.

To be sure, Microsoft still has a lot of catching up to do. Gartner in a report released over the summer pegged the 2018 market share for AWS at 47.8% and that of Microsoft Azure at 15.5%. But Microsoft has scored some significant wins and recent moves indicate the company is prioritizing the cloud above all else.

Perhaps most significant is the company's recent win of a $10 billion cloud computing contract with the Pentagon. AWS was considered the frontrunner, but experts say the win puts Microsoft in the same league as AWS.

"It signals to the market Microsoft is no longer a runner-up and can be viewed as a leader in the category where they can surpass AWS in certain areas," Zukin said.

To lead that charge, Microsoft has assembled a team of high-powered executives to guide its all-important cloud strategy. We spoke with insiders and experts who said that these were the 19 power players to watch within Microsoft's cloud business.

Meet Microsoft's ace cloud team:


Hyperscale Data Center Market – Global Outlook and Forecast 2019-2024: Adoption of Cloud-based Services and & Big Data Driving Hyperscale Data…

DUBLIN--(BUSINESS WIRE)--The "Hyperscale Data Center Market - Global Outlook and Forecast 2019-2024" report has been added to ResearchAndMarkets.com's offering.

The hyperscale data center market is expected to grow at a CAGR of over 9% during the period 2018-2024

Hyperscale construction in terms of area and power will be high in the US, the UK, Germany, China & Hong Kong, Ireland, Brazil, Canada, the Netherlands, Singapore, Japan, South Korea, Australia, India, France, Denmark, Sweden, and Norway. The adoption of cloud-based services and big data is driving the hyperscale data center surge.

Tax incentives offered by regulatory agencies worldwide are likely to play an important role in the development of hyperscale facilities construction. A majority of development over the past few years has been concentrated in regions that offer tax incentives. These tax breaks yield high savings for service operators. For instance, Google negotiated a 100%, 15-year sales tax exemption for a $600-million data center in New Albany, Ohio, in 2018.

Similarly, Facebook is likely to receive about $150 million through property tax incentives for building a facility in Utah. Tax incentives are being offered to grow the digital economy through multi-million-dollar investments. Many developing countries are looking to attract investors by providing incentives and land for development during the forecast period. Tax incentives, which are a major criterion for the site selection process, help to generate business opportunities for local sub-contractors. Hence, the availability of attractive tax incentives is expected to drive the hyperscale data center market.

Hyperscale Data Center Market: Segmentation

This research report includes detailed segmentation by IT infrastructure, electrical infrastructure, mechanical infrastructure, general construction, and geography. The demand for servers in the cloud environment is likely to grow during the forecast period as service providers expand their presence globally. The server market is expected to witness demand for servers with multicore processors. Storage capacity will grow as the average number of virtual machines per physical server continues to grow. The US is likely to witness growth in the server segment, and shipments are projected to increase during the forecast period.

The use of Lithium-ion UPS systems will continue to grow among hyperscale data center operators during the forecast period. Vendors are continually innovating their UPS solutions to increase efficiency and reduce cost. Diesel generators are likely to witness growth in the US. However, gas and bi-fuel generators are expected to witness steady growth due to the increased awareness of carbon emissions, especially in the US.

The adoption of switchgears will grow because of the increased construction of large and mega facilities that require medium and high-voltage switchgears. The adoption of basic rack PDUs is expected to decline with the higher adoption of metered, monitored, switched, and metered-by outlet PDUs.

The use of indirect evaporative coolers and air/water-side economizers is likely to continue, as most hyperscale facilities are being developed in countries that experience a cold climate for more than 4,000 hours per year. Facilities in Southeast Asia, China, India, the Middle East, Africa, and Latin America are likely to prefer chilled water systems.

A majority of the existing development in the US is being carried out in locations that offer free cooling for a minimum of 4,000 hours. Facilities established in the southwestern US incorporate energy-efficient water-based cooling systems, with on-site water treatment plants saving a minimum of 30% of the water consumed. In the US, a majority of states provide tax incentives for data centers, including job-based tax incentives.

The growing hyperscale construction will be a major boost to contractors and sub-contractors operating in the market. Most projects established in MEA are of greenfield development type.

Key Vendor Analysis

Competition among cloud service providers to establish multiple cloud regions and increase the customer base for their service offerings is increasing investment in hyperscale facilities construction. The market for infrastructure suppliers is becoming more competitive year over year. Infrastructure suppliers are continuously innovating their product portfolios to increase their revenue shares. Competition will be high among infrastructure providers supplying mission-critical and high-performance infrastructure solutions.

Schneider Electric, Eaton, Vertiv, and ABB are leading the electrical infrastructure market. Cummins, Caterpillar, and MTU On Site Energy have a strong presence in the generator market.

Market Dynamics

Market Growth Enablers

Market Growth Restraints

Market Opportunities & Trends

Key Company Profiles

Other Prominent Vendors

For more information about this report visit https://www.researchandmarkets.com/r/kwkkeg


Inside Intel’s billion-dollar transformation in the age of AI – Fast Company

As I walked up to the Intel visitor center in Santa Clara, California, a big group of South Korean teenagers ran from their bus and excitedly gathered round the big Intel sign for selfies and group shots. This is the kind of fandom you might expect to see at Apple or Google. But Intel?

Then I remembered that Intel is the company that put the silicon in Silicon Valley. Its processors and other technologies provided much of the under-the-hood power for the personal computer revolution. At 51 years old, Intel still has some star power.

But it's also going through a period of profound change that's reshaping the culture of the company and the way its products get made. As ever, Intel's main products are the microprocessors that serve as the brains of desktop PCs, laptops and tablets, and servers. They're wafers of silicon coated with millions or billions of transistors, each of which has an on and off state corresponding to the binary ones-and-zeros language of computers.

Since the 1950s, Intel has achieved a steady increase in processor power by jamming ever more transistors onto that piece of silicon. The pace was so steady that Intel cofounder Gordon Moore could make his famous 1965 prediction that the number of transistors on a chip would double every two years. Moore's Law held true for many years, but Intel's transistor-cramming approach has reached a point of diminishing returns, analysts say.


Meanwhile, the demand for more processing power has never been greater. The rise of artificial intelligence, which analysts say is now being widely used in core business processes in almost every industry, is pushing the demand for computing power into overdrive. Neural networks require massive amounts of computing power, and they perform best when teams of computers share the work. And their applications go far beyond the PCs and servers that made Intel a behemoth in the first place.

"Whether it's smart cities, whether it's a retail store, whether it's a factory, whether it's a car, whether it's a home, all of these things kind of look like computers today," says Bob Swan, Intel's CEO since January 2019. The tectonic shift of AI and Intel's ambitions to expand have forced the company to change the designs and features of some of its chips. The company is building software, designing chips that can work together, and even looking outside its walls to acquire companies that can bring it up to speed in a changed world of computing. More transformation is sure to come as the industry relies on Intel to power the AI that will increasingly find its way into our business and personal lives.

Today, its mainly big tech companies with data centers that are using AI for major parts of their business. Some of them, such as Amazon, Microsoft, and Google, also offer AI as a cloud service to enterprise customers. But AI is starting to spread to other large enterprises, which will train models to analyze and act upon huge bodies of input data.

This shift will require an incredible amount of computation. And AI models' hunger for computing power is where the AI renaissance runs head-on into Moore's Law.

For decades, Moore's 1965 prediction has held a lot of meaning for the whole tech industry. Both hardware makers and software developers have traditionally linked their product road maps to the amount of power they can expect to get from next year's CPUs. Moore's Law "kept everyone dancing to the same music," as one analyst puts it.

Moore's Law also implied a promise that Intel would continue figuring out, year after year, how to deliver the expected gain in computing power in its chips. For most of its history, Intel fulfilled that promise by finding ways to wedge more transistors onto pieces of silicon, but it's gotten harder.

"We're running out of gas in the chip factories," says Moor Insights & Strategy principal analyst Patrick Moorhead. "It's getting harder and harder to make these massive chips, and make them economically."


It's still possible to squeeze larger numbers of transistors into silicon wafers, but it's becoming more expensive and taking longer to do so, and the gains are certainly not enough to keep up with the requirements of the neural networks that computer scientists are building. For instance, the biggest known neural network in 2016 had 100 million parameters, while the largest so far in 2019 has 1.5 billion parameters, an order of magnitude larger in just a few years.

That's a very different growth curve than in the previous computing paradigm, and it's putting pressure on Intel to find ways to increase the processing power of its chips.

However, Swan sees AI as more of an opportunity than a challenge. He acknowledges that data centers may be the primary Intel market to benefit, since they will need powerful chips for AI training and inference, but he believes that Intel has a growing opportunity to also sell AI-compatible chips for smaller devices, such as smart cameras and sensors. For these devices, it's the small size and power efficiency, not the raw power of the chip, that makes all the difference.

"There's three kinds of technologies that we think will continue to accelerate: One is AI, one is 5G, and then one is autonomous systems, things that move around that look like computers," says Swan, Intel's former CFO who took over as CEO when Brian Krzanich left after allegations of an extramarital affair with a staffer in 2018.

We're sitting in a large, nondescript conference room at Intel's headquarters. On the whiteboard at the front of the room, Swan draws out the two sides of Intel's businesses. On the left side is the personal computer chip business, from which Intel gets about half of its revenue now. On the right is its data center business, which includes the emerging Internet of Things, autonomous car, and network equipment markets.

"We expand [into] this world where more and more data [is] required, which needs more processing, more storage, more retrieval, faster movement of data, analytics, and intelligence to make the data more relevant," Swan says.

Rather than taking a 90-something-percent share of the $50 billion data center market, Swan is hoping to take a 25% share of the larger $300 billion market that includes connected devices such as smart cameras, futuristic self-driving cars, and network gear. It's a strategy that he says "starts with our core competencies, and requires us to invent in some ways, but also extends what we already do." It might also be a way for Intel to bounce back from its failure to become a major provider of technology to the smartphone business, where Qualcomm has long played an Intel-like role. (Most recently, Intel gave up on its major investment in the market for smartphone modems and sold off the remains to Apple.)

The Internet of Things market, which includes chips for robots, drones, cars, smart cameras, and other devices that move around, is expected to reach $2.1 trillion by 2023. And while Intel's share of that market has been growing by double digits year over year, IoT still contributes only about 7% of Intel's overall revenue today.

The data center business contributes 32%, the second-largest chunk behind the PC chip business, which contributes about half of total revenue. And it's the data center that AI is impacting first and most. That's why Intel has been altering the design of its most powerful CPU, the Xeon, to accommodate machine learning tasks. In April, it added a feature called DL Boost to its second-generation Xeon CPUs, which offers greater performance for neural nets with a negligible loss of accuracy. It's also the reason that the company will next year begin selling two new chips that specialize in running large machine learning models.
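The general idea behind such reduced-precision features can be sketched with NumPy. This is not Intel's DL Boost implementation; it is a generic int8 quantization example showing why cheaper 8-bit arithmetic costs only a small amount of accuracy on a matrix-vector product.

```python
# Toy int8 quantized inference (generic sketch, not Intel's DL Boost).
import numpy as np

def quantize(x):
    """Map float values to int8 with a per-tensor scale."""
    peak = np.abs(x).max()
    scale = peak / 127.0 if peak > 0 else 1.0
    return np.round(x / scale).astype(np.int8), scale

rng = np.random.default_rng(0)
weights = rng.standard_normal((64, 64)).astype(np.float32)
inputs = rng.standard_normal(64).astype(np.float32)

# Full-precision reference result for one layer.
reference = weights @ inputs

# Quantized path: int8 storage, wider int32 accumulator, rescale at the end.
qw, sw = quantize(weights)
qi, si = quantize(inputs)
approx = (qw.astype(np.int32) @ qi.astype(np.int32)) * (sw * si)

# The relative error stays small: the "negligible loss of accuracy"
# traded for much cheaper 8-bit multiply-accumulate hardware.
rel_err = np.abs(approx - reference).max() / np.abs(reference).max()
print(rel_err < 0.1)  # True
```

In hardware, the win comes from packing more 8-bit multiply-accumulates per cycle than 32-bit ones, which is the kind of instruction-level trick features like DL Boost exploit.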

By 2016, it had become clear that neural networks were going to be used for all kinds of applications, from product recommendation algorithms to natural language bots for customer service.

Like other chipmakers, Intel knew it would have to offer its large customers a chip whose hardware and software were purpose-built for AI, which could be used to train AI models and then draw inferences from huge pools of data.

At the time, Intel was lacking a chip that could do the former. The narrative in the industry was that Intel's Xeon CPUs were very good at analyzing data, but that the GPUs made by Intel's rival in AI, Nvidia, were better for training, an important perception that was impacting Intel's business.

So in 2016, Intel went shopping and spent $400 million on a buzzy young company called Nervana that had already been working on a ripping-fast chip architecture designed for training AI.

It's been three years since the Nervana acquisition, and it's looking like it was a smart move by Intel. At a November event in San Francisco, Intel announced two new Nervana Neural Network Processors: one designed for running neural network models that infer meaning from large bodies of data, the other for training the networks. Intel worked with Facebook and Baidu, two of its larger customers, to help validate the chip design.

Nervana wasn't the only acquisition Intel made that year. In 2016, Intel also bought another company, called Movidius, that had been building tiny chips that could run computer vision models inside things such as drones or smart cameras. Intel's sales of the Movidius chips aren't huge, but they've been growing quickly, and they address the larger IoT market Swan's excited about. At its San Francisco event, Intel also announced a new Movidius chip, which will be ready in the first half of 2020.

Intel Nervana NNP-I for inference [Photo: courtesy of Intel Corporation]

Many of Intel's customers do at least some of their AI computation on regular Intel CPUs inside servers in data centers. But it's not so easy to link those CPUs together so they can tag-team the work that a neural network model needs. The Nervana chips, on the other hand, each contain multiple connections so that they easily work in tandem with other processors in the data center, Nervana CEO and founder Naveen Rao tells me.

"Now I can start taking my neural network and I can break it apart across multiple systems that are working together," Rao says. "So we can have a whole rack [of servers], or four racks, working on one problem together."
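The kind of splitting Rao describes can be illustrated conceptually. The sketch below is not Nervana's software; it simply shows that a sequential network's layers can be partitioned across several "devices" and still produce the same result as running on one machine, with the activation hand-off standing in for the chip-to-chip interconnect.

```python
# Toy model-parallel split of a network's layers across "devices"
# (conceptual sketch only, not Nervana's stack).
import numpy as np

rng = np.random.default_rng(1)
layers = [rng.standard_normal((32, 32)) for _ in range(8)]

def forward(x, weight_stack):
    """Run a stack of linear + ReLU layers."""
    for w in weight_stack:
        x = np.maximum(w @ x, 0.0)
    return x

# Single-machine reference: one device runs all eight layers.
x = rng.standard_normal(32)
reference = forward(x, layers)

# "Rack" version: four devices each own two layers; activations are
# handed off device-to-device where the interconnect would sit.
devices = [layers[i:i + 2] for i in range(0, 8, 2)]
activation = x
for device_layers in devices:
    activation = forward(activation, device_layers)

print(np.allclose(activation, reference))  # True
```

The hard engineering problem, of course, is making that hand-off fast enough that four racks really do behave like one big processor, which is what the dedicated interconnects on the Nervana parts are for.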

Naveen Rao, Intel corporate vice president and general manager of the Intel Artificial Intelligence Products Group, displays an Intel Neural Network Processor for inference during his keynote address Tuesday, November 12, 2019, at Intel's AI Summit in San Francisco. [Photo: Walden Kirsch/Intel Corporation]

In 2019, Intel expects to see $3.5 billion in revenue from its AI-related products. Right now, only a handful of Intel customers are using the new Nervana chips, but they're likely to reach a far wider user base next year.

The Nervana chips represent the evolution of a long-held Intel belief that a single piece of silicon, a CPU, could handle whatever computing tasks a PC or server needed to do. This widespread belief began to change with the gaming revolution, which demanded the extreme computational muscle needed for displaying complex graphics on a screen. It made sense to offload that work to a graphics processing unit, a GPU, so that the CPU wouldn't get bogged down with it. Intel began integrating its own GPUs with its CPUs years ago, and next year it will release a free-standing GPU for the first time, Swan tells me.

That same thinking also applies to AI models. A certain number of AI processes can be handled by the CPU within a data center server, but as the work scales up, it's more efficient to offload it to another specialized chip. Intel has been investing in designing new chips that bundle together a CPU and a number of specialized accelerator chips in a way that matches the power and workload needs of the customer.

"When you're building a chip, you want to put a system together that solves a problem, and that system [often] requires more than a CPU," Swan says.


In addition, Intel now relies far more on software to drive its processors to higher performance and better power efficiency. This has shifted the balance of power within the organization. According to one analyst, software development at Intel is now "an equal citizen" with hardware development.

In some cases, Intel no longer manufactures all its chips on its own, an epoch-shifting departure from the company's historical practice. Today, if chip designers call for a chip that some other company might fabricate better or more efficiently than Intel, it's acceptable for the job to be outsourced. The new Nervana chip for training, for example, is manufactured by the semiconductor fabricator TSMC.

Intel has outsourced some chip manufacturing for logistical and economic reasons. Because of capacity limitations in its most advanced chip fabrication processes, many of its customers have been left waiting for their orders of new Intel Xeon CPUs. So Intel outsourced the production of some of its other chips to other manufacturers. Intel sent a letter to its customers earlier this year to apologize for the delay and lay out its plans for catching up.

All these changes are challenging long-held beliefs within Intel, shifting the companys priorities, and rebalancing old power structures.


In the midst of this transformation, Intel's business is looking pretty good. Its traditional business of selling chips for personal computers is down 25% from five years ago, but sales of Xeon processors to data centers are "rocking and rolling," as analyst Mike Feibus says.

Some of Intel's customers are already using the Xeon processors to run AI models. If those workloads grow, they may consider adding on the new Nervana specialized chips. Intel expects the first customers for these chips to be hyperscalers, or large companies that operate massive data centers: the Googles, Microsofts, and Facebooks of the world.

It's an old story that Intel missed out on the mobile revolution by ceding the smartphone processor market to Qualcomm. But the fact is that mobile devices have become vending machines for services delivered to your phone via the cloud's data centers. So when you stream that video to your tablet, it's likely an Intel chip is helping serve it to you. The coming of 5G might make it possible to run real-time services such as gaming from the cloud. A future pair of smart glasses might be able to instantly identify objects using a lightning-fast connection to an algorithm that's running in a data center.

All of that adds up to a very different era than when the technological world revolved around PCs with Intel inside. But as AI models grow ever more complex and versatile, Intel has a shot at being the company best equipped to power them, just as it has powered our computers for almost a half-century.

See original here:
Inside Intel's billion-dollar transformation in the age of AI - Fast Company


Machine Learning Answers: If Nvidia Stock Drops 10% A Week, What's The Chance It'll Recoup Its Losses In A Month? – Forbes

Jen-Hsun Huang, president and chief executive officer of Nvidia Corp., gestures as he speaks during the company's event at the 2019 Consumer Electronics Show (CES) in Las Vegas, Nevada, U.S., on Sunday, Jan. 6, 2019. CES showcases more than 4,500 exhibiting companies, including manufacturers, developers and suppliers of consumer technology hardware, content, technology delivery systems and more. Photographer: David Paul Morris/Bloomberg

We found that if Nvidia stock drops 10% or more in a week (5 trading days), there is a solid 36% chance it'll recover 10% or more over the next month (about 20 trading days).

Nvidia stock has seen significant volatility this year. While the company has been impacted by the broader correction in the semiconductor space and the trade war between the U.S. and China, the stock is being supported by a strong long-term outlook for GPU demand amid growing applications in Deep Learning and Artificial Intelligence.

Considering the recent price swings, we started with a simple question that investors could be asking about Nvidia stock: given a certain drop or rise, say a 10% drop in a week, what should we expect for the next week? Is it very likely that the stock will recover the next week? What about the next month or a quarter? You can test a variety of scenarios on the Trefis Machine Learning Engine to calculate, if Nvidia stock dropped, what's the chance it'll rise.

For example, after a 5% drop over a week (5 trading days), the Trefis machine learning engine says the chances of an additional 5% drop over the next month are about 40%. That's quite significant, and helpful to know for someone trying to recover from a loss. Knowing what to expect for almost any scenario is powerful. It can help you avoid rash moves. Given the recent volatility in the market and the mix of macroeconomic events (including the trade war with China and interest rate easing by the U.S. Fed), we think investors can prepare better.
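Trefis hasn't published its exact methodology, but the underlying idea, estimating a conditional probability from historical price moves, can be sketched in a few lines. Everything below is illustrative: the function name, the thresholds, and the random-walk price series are our own stand-ins, not Trefis data or code.

```python
import random

def recovery_odds(prices, drop=0.05, rise=0.05, window=5, horizon=20):
    """Estimate P(gain of `rise` or more within `horizon` trading days,
    given a fall of `drop` or more over the preceding `window` days)."""
    hits, events = 0, 0
    for t in range(window, len(prices) - horizon):
        past_return = prices[t] / prices[t - window] - 1
        if past_return <= -drop:                      # conditioning event: the weekly drop
            events += 1
            future_high = max(prices[t + 1:t + horizon + 1])
            if future_high / prices[t] - 1 >= rise:   # did it recover within the horizon?
                hits += 1
    return hits / events if events else None          # None: the drop never occurred

# Toy price series: a random walk standing in for real quotes
random.seed(0)
prices = [100.0]
for _ in range(1000):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.02)))

print(recovery_odds(prices))
```

On real data you would feed in a daily closing-price history instead of the simulated walk; the same counting logic answers each of the drop/rise scenarios discussed below by changing the `drop`, `rise`, and `horizon` parameters.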

Below, we also discuss a few scenarios and answer common investor questions:

Question 1: Does a rise in Nvidia stock become more likely after a drop?

Answer:

Not really.

Specifically, chances of a 5% rise in Nvidia stock over the next month:

= 40% after Nvidia stock drops by 5% in a week.

versus,

= 44.5% after Nvidia stock rises by 5% in a week.

Question 2: What about the other way around, does a drop in Nvidia stock become more likely after a rise?

Answer:

No.

Specifically, chances of a 5% decline in Nvidia stock over the next month:

= 40% after Nvidia stock drops by 5% in a week

versus,

= 27% after Nvidia stock rises by 5% in a week

Question 3: Does patience pay?

Answer:

According to data and the Trefis machine learning engine's calculations, largely yes!

Given a drop of 5% in Nvidia stock over a week (5 trading days), while there is only about a 28% chance that Nvidia stock will gain 5% over the subsequent week, there is a more than 58% chance this will happen within 6 months.

The table below shows the trend:

[Table omitted; source: Trefis]

Question 4: What about the possibility of a drop after a rise if you wait for a while?

Answer:

After seeing a rise of 5% over 5 days, the chances of a 5% drop in Nvidia stock are about 30% over the subsequent quarter of waiting (60 trading days). However, this chance drops slightly to about 29% when the waiting period is a year (250 trading days).


See the rest here:

Machine Learning Answers: If Nvidia Stock Drops 10% A Week, What's The Chance It'll Recoup Its Losses In A Month? - Forbes


Measuring Employee Engagement with A.I. and Machine Learning – Dice Insights

A small number of companies have begun developing new tools to measure employee engagement without requiring workers to fill out surveys or sit through focus groups. HR professionals and engagement experts are watching to see if these tools gain traction and lead to more effective cultural and retention strategies.

Two of these companies, Netherlands-based KeenCorp and San Francisco's Cultivate, glean data from day-to-day internal communications. KeenCorp analyzes patterns in an organization's (anonymized) email traffic to gauge changes in the level of tension experienced by a team, department or entire organization. Meanwhile, Cultivate analyzes manager email (and other digital communications) to provide leadership coaching.

These companies are likely to pitch to a ready audience of employers, especially in the technology space. With IT unemployment hovering around 2 percent, corporate and HR leaders can't help but be nervous about hiring and retention. When competition for talent is fierce, companies are likely to add more and more sweeteners to each offer until they reel in the candidates they want. Then there's the matter of retaining those employees in the face of equally sweet counteroffers.

That's why businesses pour so much effort and money into keeping their workers engaged. Companies spend more than $720 million annually on engagement, according to the Harvard Business Review. Yet their efforts have managed to engage just 13 percent of the workforce.

Given the competitive advantage tech organizations enjoy when their teams are happy and productive (not to mention the money they save by keeping employees in place), engagement and retention are critical. But HR can't create and maintain an engagement strategy if it doesn't know the workforce's mindset. So companies have to measure, and they measure primarily through surveys.

Today, many experts believe surveys don't provide the information employers need to understand their workforce's attitudes. Traditional surveys have their place, they say, but more effective methods are needed. They see the answer, of course, in artificial intelligence (A.I.) and machine learning (ML).

"One issue with surveys is they only capture a part of the information, and that's the part that the employee is willing to release," said KeenCorp co-founder Viktor Mirovic. When surveyed, respondents often hold back information, he explained, leaving unsaid data that has an effect similar to unheard data.

"I could try to raise an issue that you may not be open to because you have a prejudice," Mirovic added. If tools don't account for what's left unsaid and unheard, he argued, they provide an incomplete picture.

As an analogy, Mirovic described studies of combat aircraft damaged in World War II. By identifying where the most harm occurred, designers thought they could build safer planes. However, the study relied on the wrong data, Mirovic said. Why? Because they only looked at the planes that came back. The aircraft that presumably suffered the most grievous damage (those that were shot down) weren't included in the research.

None of this means traditional surveys don't provide value. "I think the traditional methods are still useful," said Alex Kracov, head of marketing for Lattice, a San Francisco-based workforce management platform that focuses on small and mid-market employers. "Sometimes just the idea of starting to track engagement in the first place, just to get a baseline, is really useful and can be powerful."

For example, Lattice itself recently surveyed its 60 employees for the first time. "It was really interesting to see all of the data available and how people were feeling about specific themes and questions," he said. Similarly, Kracov believes that newer methods such as pulse surveys, which are brief studies conducted at regular intervals, can prove useful in monitoring employee satisfaction, productivity and overall attitude.

Whereas surveys require an employee's active participation, the up-and-coming tools don't ask them to do anything more than their work. When KeenCorp's technology analyzes a company's email traffic, it's looking for changes in the patterns of word use and compositional style. Fluctuations in the product's index signify changes in collective levels of tension. When a change is flagged, HR can investigate to determine why attitudes are in flux and then proceed accordingly, either solving a problem or learning a lesson.
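KeenCorp's index is proprietary, but the general pattern, computing a per-period linguistic score over batches of messages and flagging periods that deviate sharply from the baseline, can be sketched. The marker word list and z-score threshold below are invented for illustration and are not KeenCorp's method.

```python
import statistics

# Illustrative "tension" vocabulary; a real product would learn its own features
TENSION_MARKERS = {"urgent", "deadline", "blame", "problem", "escalate"}

def tension_index(messages):
    """Share of tokens in a batch of messages that are tension markers."""
    tokens = [w.lower().strip(".,!?") for m in messages for w in m.split()]
    if not tokens:
        return 0.0
    return sum(1 for w in tokens if w in TENSION_MARKERS) / len(tokens)

def flag_shifts(weekly_batches, z_threshold=2.0):
    """Return the weeks whose index deviates sharply from the overall baseline."""
    index = [tension_index(batch) for batch in weekly_batches]
    mean, spread = statistics.mean(index), statistics.pstdev(index)
    if spread == 0:
        return []  # perfectly flat history: nothing to flag
    return [week for week, v in enumerate(index) if abs(v - mean) / spread > z_threshold]

weeks = [["all fine here", "shipping on schedule"]] * 9
weeks.append(["urgent deadline problem", "urgent escalate"])
print(flag_shifts(weeks))  # → [9]
```

The point of the sketch is the workflow rather than the word list: score each period the same way, then let a simple statistical test surface the anomalies for HR to investigate.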

"When I ask you a question, you have to think about the answer," Mirovic said. "Once you think about the answer, you start to include all kinds of other attributes. You know, you're my boss or you've just given me a raise or you're married to my sister. Those could all affect my response. What we try to do is go in as objectively as possible, without disturbing people as we observe them in their natural habitats."

Read more:

Measuring Employee Engagement with A.I. and Machine Learning - Dice Insights


Amazon Wants to Teach You Machine Learning Through Music? – Dice Insights

Machine learning has rapidly become one of those buzzwords embraced by companies around the world. Even if they don't fully understand what it means, executives think that machine learning will magically transform their operations and generate massive profits. That's good news for technologists, provided they actually learn the technology's fundamentals, of course.

Amazon wants to help with the learning aspect of things. At this year's AWS re:Invent, the company is previewing the DeepComposer, a 32-key keyboard that's designed to train you in machine learning fundamentals via the power of music.

No, seriously. "AWS DeepComposer is the world's first musical keyboard powered by machine learning to enable developers of all skill levels to learn Generative AI while creating original music outputs," reads Amazon's ultra-helpful FAQ on the matter. "DeepComposer consists of a USB keyboard that connects to the developer's computer, and the DeepComposer service, accessed through the AWS Management Console." There are tutorials and training data included in the package.

Generative AI, the FAQ continues, "allows computers to learn the underlying pattern of a given problem and use this knowledge to generate new content from input (such as image, music, and text)." In other words, you're going to play a really simple song like "Chopsticks," and this machine-learning platform will use that seed to build a four-hour Wagner-style opera. Just kidding! Or are we?

Jokes aside, the idea that a machine-learning platform can generate lots of data based on relatively little input is a powerful one. Of course, Amazon isn't totally altruistic in this endeavor; by serving as a training channel for up-and-coming technologists, the company obviously hopes that more people will turn to it for all of their machine learning and A.I. needs in future years. Those interested can sign up for the preview on a dedicated site.
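DeepComposer itself is built on generative adversarial networks, which are far beyond a few lines of code. But the core idea, learn a pattern from a seed and then produce new content that follows it, can be shown with a deliberately tiny stand-in: a first-order Markov chain over note names. The seed melody and every name here are invented for illustration; this is not how DeepComposer works internally.

```python
import random
from collections import defaultdict

def train(seed_notes):
    """Record which note follows which in the seed melody."""
    transitions = defaultdict(list)
    for a, b in zip(seed_notes, seed_notes[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length, rng=None):
    """Sample a new melody whose note-to-note moves all appear in the seed."""
    rng = rng or random.Random(42)
    melody = [start]
    while len(melody) < length:
        options = transitions.get(melody[-1])
        if not options:  # dead end: the seed never continued from this note
            break
        melody.append(rng.choice(options))
    return melody

seed = ["C", "C", "G", "G", "A", "A", "G", "F", "F", "E", "E", "D", "D", "C"]
print(generate(train(seed), "C", 8))
```

The output is "new" in that the exact sequence never appears in the seed, yet every transition in it does; a GAN makes the same seed-to-novel-output move, just with a learned model of musical structure instead of a lookup table.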

This isn't the first time that Amazon has plunged into machine-learning training, either. Late last year, it introduced AWS DeepRacer, a model racecar designed to teach developers the principles of reinforcement learning. And in 2017, it rolled out the AWS DeepLens camera, meant to introduce the technology world to Amazon's take on computer vision and deep learning.


For those who master the fundamentals of machine learning, the jobs can prove quite lucrative. In September, the IEEE-USA Salary & Benefits Survey suggested that engineers with machine-learning knowledge make an annual average of $185,000. Earlier this year, meanwhile, Indeed pegged the average machine learning engineer salary at $146,085, and its job growth between 2015 and 2018 at 344 percent.

If you're not interested in Amazon's version of a machine-learning education, there are other channels. For example, OpenAI, the sorta-nonprofit foundation (yes, it's as odd as it sounds), hosts what it calls Gym, a toolkit for developing and comparing reinforcement learning algorithms; it also has a set of models and tools, along with a very extensive tutorial in deep reinforcement learning.

Google likewise has a crash course, complete with 25 lessons and 40+ exercises, that's a good introduction to machine learning concepts. Then there's Hacker Noon and its interesting breakdown of machine learning and artificial intelligence.

Once you have a firmer grasp on the core concepts, you can turn to Bloomberg's Foundations of Machine Learning, a free online course that teaches advanced concepts such as optimization and kernel methods. A lot of math is involved.

Whatever learning route you take, it's clear that machine learning skills have incredible value right now. Familiarizing yourself with this technology, whether via traditional lessons or a musical keyboard, can only help your career in tech.

Excerpt from:

Amazon Wants to Teach You Machine Learning Through Music? - Dice Insights
