
How to Use Machine Learning to Drive Real Value – eWeek

Continuously connected customers with multiple devices and an endless number of interaction touchpoints aren't easy to engage. They're on a multi-dimensional journey and can appear to a brand at any time, on any channel.

It's not surprising, then, that consumers give brands low marks for their ability to deliver an exceptional customer experience. According to a recent Harris Poll survey, only 18 percent of consumers rated brands' ability to deliver an exceptional experience as excellent.

Even when customer data is well managed, successfully engaging the connected consumer and delivering highly personalized experiences requires advanced analytical tools. Artificial intelligence and machine learning are now being applied by innovative businesses to create real-time, personalized experiences at scale with models that intelligently orchestrate offerings throughout the customer journey.

It's easy to get caught up in the hype surrounding AI and machine learning, with business leaders chasing shiny objects: an AI application that might have little to do with critical business goals.

When paired with a persistent, real-time, single customer record, AI and automated machine learning platforms can be utilized to meet those business goals, increase revenue and fundamentally change the way brands communicate with customers.

In this eWEEK Data Points article, George Corugedo, Chief Technology Officer and co-founder of customer engagement hub maker RedPoint Global, suggests several truths about machine learning that every business leader should keep in mind when thinking about customer records.

The ultimate goal of machine learning shouldn't be a flashy, futuristic example but instead a system that drives revenue and results for the business. The result of effective machine learning isn't likely a robot, chatbot or facial recognition tool; it's machine learning-driven programs that are embedded behind the scenes, driving intelligent decisions for optimized customer engagement.

Organizations need many models running and working in real time to truly make machine learning work for their needs. For future-forward organizations, intelligence and analysis needs to be embedded, so instead of using one model, multiple in-line analytic models can incrementally adjust and find opportunities for growth. These fleets of ML models can optimize business functions and drive associated revenues.

Today's consumer is omnichannel. Businesses must forego the traditional channel-specific "batch and blast" approach that sufficed when customer choice was limited and the buying journey followed a mostly straight-line path. Today's customer journey is dynamic, and the learning applied to the customer relationship should be, as well. Machine learning is particularly well-suited to solving these multidimensional problems.

News flash: Machine-learning models age and can quickly become stale. For this reason, organizations must consistently rebuild and retrain models using today's data. In fact, models should be developed, updated and even applied in real time on an ongoing testing basis so that businesses can truly capitalize on the moment of interaction. This is most effective in a closed-loop system that continually looks for incremental changes to optimize.
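
The closed-loop idea lends itself to a simple sketch. The Python fragment below is purely illustrative, not RedPoint's product: the `load_recent_data` and `deploy` helpers are hypothetical, and the model is retrained only when its live performance drifts below a floor.

```python
# Illustrative closed-loop retraining sketch; load_recent_data() and deploy() are hypothetical.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

AUC_FLOOR = 0.70  # retrain when live performance drifts below this threshold

def closed_loop_step(model, load_recent_data, deploy):
    X, y = load_recent_data()                              # freshest labelled interactions
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])   # score the model on today's data
    if auc < AUC_FLOOR:                                    # the model has gone stale
        model = LogisticRegression(max_iter=1000).fit(X, y)  # rebuild on current data
        deploy(model)                                      # push the refreshed model live
    return model, auc
```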

When models are configured correctly, they will run 24/7 looking for opportunities within the data, set up and managed by marketers. These systems can be set once and guided to produce the specific business metrics needed. With every record tracked in the system, insights are pulled easily, and the recommendations can be made automatically. Businesses should focus on producing continually updated data and let the automation tools use machine learning to drive greater revenue.

Machine learning has the power to fully transform an enterprise. Therefore, it's natural for business leaders to get lost in the hype and lose sight of the real value it can deliver day-to-day. The truth is, the real value of machine learning is that it allows businesses to try new things, amplify creative strengths, reveal new discoveries and enable collaboration across the organization. However, these benefits will only be realized once organizations get past the hype and are willing to walk into the weeds.

If you have a suggestion for an eWEEK Data Points article, email cpreimesberger@eweek.com.

Can the planet really afford the exorbitant power demands of machine learning? – The Guardian

There is, alas, no such thing as a free lunch. This simple and obvious truth is invariably forgotten whenever irrational exuberance teams up with digital technology in the latest quest to change the world. A case in point was the bitcoin frenzy, where one could apparently become insanely rich by mining for the elusive coins. All you needed was to get a computer to solve a complicated mathematical puzzle and lo! you could earn one bitcoin, which at the height of the frenzy was worth $19,783.06. All you had to do was buy a mining kit (or three) from Amazon, plug it in and become part of the crypto future.

The only problem was that mining became progressively more difficult the closer we got to the maximum number of bitcoins set by the scheme, and so more and more computing power was required. Which meant that increasing amounts of electrical power were needed to drive the kit. Exactly how much is difficult to calculate, but one estimate published in July by the Judge Business School at the University of Cambridge suggested that the global bitcoin network was then consuming more than seven gigawatts of electricity. Over a year, that's equal to around 64 terawatt-hours (TWh), which is 8 TWh more than Switzerland uses annually. So each of those magical virtual coins turns out to have a heavy environmental footprint.
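
The annual-energy figure follows from straightforward arithmetic; the quick check below assumes a flat 7 GW draw, which is why it lands a little under the quoted 64 TWh ("more than seven gigawatts" closes the gap).

```python
# Back-of-the-envelope: continuous power draw converted to annual energy.
power_gw = 7.0                     # Cambridge estimate: more than 7 GW, continuously
hours_per_year = 365 * 24          # 8,760 hours
annual_twh = power_gw * hours_per_year / 1_000   # GWh -> TWh
print(f"~{annual_twh:.0f} TWh per year")          # ~61 TWh at exactly 7 GW
```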

At the moment, much of the tech world is caught up in a new bout of irrational exuberance. This time, it's about machine learning, another one of those magical technologies that change the world, in this case by transforming data (often obtained by spying on humans) into, depending on whom you talk to, information, knowledge and/or massive revenues.

As is customary in these frenzies, some inconvenient truths are overlooked, for example, warnings by leaders in the field such as Ali Rahimi and James Mickens that the technology bears some resemblances to an older speciality called alchemy. But that's par for the course: when you've embarked on changing the world (and making a fortune in the process), why let pedantic reservations get in the way?

Recently, though, a newer fly has arrived in the machine-learning ointment. In a way, it's the bitcoin problem redux. OpenAI, the San Francisco-based AI research lab, has been trying to track the amount of computing power required for machine learning ever since the field could be said to have started in 1959. What it's found is that the history divides into two eras. From the earliest days to 2012, the amount of computing power required by the technology doubled every two years; in other words, it tracked Moore's law of growth in processor power. But from 2012 onwards, the curve rockets upwards: the computing power required for today's most-vaunted machine-learning systems has been doubling every 3.4 months.
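
The gulf between the two eras is easier to see as annual growth factors: doubling every two years compounds to roughly 1.4x per year, while doubling every 3.4 months compounds to more than 11x per year.

```python
# Annual growth factor implied by a doubling period given in months.
def annual_factor(doubling_months: float) -> float:
    return 2 ** (12 / doubling_months)

print(f"{annual_factor(24):.2f}x per year")   # Moore's-law era: ~1.41x
print(f"{annual_factor(3.4):.1f}x per year")  # post-2012 era: ~11.6x
```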

This hasn't been noticed because the outfits paying the bills are huge tech companies. But the planet will notice, because the correspondingly enormous growth in electricity consumption has environmental consequences.

To put that in context, researchers at Nvidia, the company that makes the specialised GPU processors now used in most machine-learning systems, came up with a massive natural-language model that was 24 times bigger than its predecessor and yet was only 34% better at its learning task. But here's the really interesting bit. Training the final model took 512 V100 GPUs running continuously for 9.2 days. "Given the power requirements per card," wrote one expert, "a back of the envelope estimate put the amount of energy used to train this model at over 3x the yearly energy consumption of the average American."
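
That back-of-the-envelope estimate can be reproduced with two assumptions that are mine, not the article's: roughly 300 W per V100 card under load, and around 10,600 kWh as the yearly electricity consumption of an average American household.

```python
# Rough reproduction of the training-energy estimate (assumed figures, not from the article).
gpus = 512
watts_per_gpu = 300                 # assumed typical V100 draw under load
days = 9.2

kwh = gpus * watts_per_gpu * days * 24 / 1_000   # ~33,900 kWh for the run
household_kwh_per_year = 10_600                  # assumed average US household electricity use
print(f"{kwh / household_kwh_per_year:.1f}x")    # ~3.2x, consistent with the "over 3x" claim
```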

You don't have to be Einstein to realise that machine learning can't continue on its present path, especially given the industry's frenetic assurances that tech giants are heading for an "AI everywhere" future. Brute-force cloud computing won't achieve that goal. Of course smarter algorithms will make machine learning more resource-efficient (and perhaps also less environmentally damaging).

Companies will learn to make trade-offs between accuracy and computational efficiency, though that will have unintended, and antisocial, consequences too. And, in the end, if machine learning is going to be deployed at a global scale, most of the computation will have to be done in users' hands, i.e. in their smartphones.

This is not as far-fetched as it sounds. The new iPhone 11, for example, includes Apple's A13 chip, which incorporates a unit running the kind of neural network software behind recent advances in natural language processing and interpreting images. No doubt other manufacturers have equivalent kit.

In preparation for the great day of AI Everywhere, I just asked Siri: "Is there such a thing as a free lunch?" She replied: "I can help you find a restaurant if you turn on location services." Clearly, the news that there is no such thing hasn't yet reached Silicon Valley. They'll get it eventually, though, when Palo Alto is underwater.

Capital idea
The Museum of Neoliberalism has just opened in Lewisham, London. It's a wonderful project and website; my only complaint is that neoliberalism isn't dead yet.

Who needs humans?
This Marketing Blog Does Not Exist is a blog entirely created by AI. Could you tell the difference between it and a human-created one? Not sure I could.

All the right notes
There's a lovely post about Handel by Ellen T Harris on the Bank of England's blog, Bank Underground. The German composer was a shrewd investor, but it was The Messiah that made him rich.

New InfiniteIO Platform Reduces Latency and Accelerates Performance for Machine Learning, AI and Analytics – Business Wire

AUSTIN, Texas--(BUSINESS WIRE)--InfiniteIO, the world's fastest metadata platform to reduce application latency, today announced the new Application Accelerator, which delivers dramatic performance improvements for critical applications by processing file metadata independently from on-premises storage or cloud systems. The new platform provides organizations across industries the lowest possible latency for their mission-critical applications, such as AI/machine learning, HPC and genomics, while minimizing disruption to IT teams.

"Bandwidth and I/O challenges have been largely overcome, yet reducing latency remains a significant barrier to improving application performance," said Henry Baltazar, vice president of research at 451 Research. "Metadata requests are a large part of file system latency, making up the vast majority of requests to a storage system or cloud. InfiniteIO's approach to abstracting metadata from file data offers IT managers a nondisruptive way to immediately accelerate application performance."

As unstructured data has grown exponentially, requests for file metadata (information such as file attributes and access privileges) have also skyrocketed to become a major bottleneck for application performance. InfiniteIO's latest release, built on the InfiniteIO Metadata Engine (IME) architecture, responds to file metadata requests directly from the network instead of the network-attached storage (NAS) or cloud storage system. InfiniteIO's metadata abstraction can reduce latency from seconds to microseconds for all files in a hybrid cloud environment. This results in faster access to data and speeds up application performance.
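
As a conceptual illustration only, and not InfiniteIO's actual implementation, the pattern of answering metadata requests from a fast in-memory layer while file I/O still goes to the backing store can be sketched like this:

```python
# Conceptual sketch: serve metadata from a fast cache, pass file I/O to the backend.
import os
import time

class MetadataAccelerator:
    def __init__(self, backend_path: str):
        self.backend_path = backend_path
        self._cache = {}                      # path -> (stat result, timestamp)

    def stat(self, path: str, ttl: float = 60.0):
        hit = self._cache.get(path)
        if hit and time.time() - hit[1] < ttl:
            return hit[0]                     # answered without touching the backing store
        result = os.stat(os.path.join(self.backend_path, path))
        self._cache[path] = (result, time.time())
        return result

    def read(self, path: str) -> bytes:
        # Actual file data still comes from the backing store.
        with open(os.path.join(self.backend_path, path), "rb") as f:
            return f.read()
```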

"Reducing latency is the last frontier in improving application performance. The tech industry has been focused on making incremental performance improvements with faster storage and file systems, when the biggest opportunity is in removing the file system latency created by processing metadata requests," said Mark Cree, CEO of InfiniteIO. "Separating metadata processing from file I/O significantly decreases application latency, which translates into reduced product development cycles and greater worker productivity."

Turbocharging Data-intensive Applications

The Application Accelerator allows organizations to go faster and easily implement innovation, with minimal to no disruption to existing IT operations:

InfiniteIO today also released new software features that continue to simplify and accelerate tiering of cold data from primary NAS systems to lower-cost cloud storage, such as the ability to scan 1 billion files in a day and a new API for cloud usage charge-back. The IME architecture never recalls metadata back from the cloud, increasing performance and avoiding cloud egress charges. Robust policies automatically tier files so that even rarely accessed information is available on-demand, without disruption or performance compromises.

Exhibiting at SC19

Attendees of the Supercomputing 19 show in Denver can meet InfiniteIO application acceleration experts in the Startup Café inside the exhibit hall from Nov. 18 to 21, 2019. Find more information and book a meeting at https://infinite.io/supercomputing.

Availability

Additional Information

About InfiniteIO

InfiniteIO provides the lowest possible latency for file metadata, enabling applications to run faster, reduce development cycles, and increase data productivity. Based in Austin, Texas, InfiniteIO independently processes file metadata to simultaneously accelerate application performance and hybrid-cloud data tiering for global enterprises, research organizations and media companies. Learn more at http://www.infinite.io or follow the company on Twitter @infiniteio and LinkedIn.

The Cerebras CS-1 computes deep learning AI problems by being bigger, bigger, and bigger than any other chip – TechCrunch

Deep learning is all the rage these days in enterprise circles, and it isn't hard to understand why. Whether it is optimizing ad spend, finding new drugs to cure cancer, or just offering better, more intelligent products to customers, machine learning and particularly deep learning models have the potential to massively improve a range of products and applications.

The key word though is potential. While we have heard oodles of words sprayed across enterprise conferences the last few years about deep learning, there remain huge roadblocks to making these techniques widely available. Deep learning models are highly networked, with dense graphs of nodes that don't fit well with the traditional ways computers process information. Plus, holding all of the information required for a deep learning model can take petabytes of storage and racks upon racks of processors in order to be usable.

There are lots of approaches underway right now to solve this next-generation compute problem, and Cerebras has to be among the most interesting.

As we talked about in August with the announcement of the company's Wafer Scale Engine (the world's largest silicon chip, according to the company), Cerebras' theory is that the way forward for deep learning is to essentially just get the entire machine learning model to fit on one massive chip. And so the company aimed to go big, really big.

Today, the company announced the launch of its end-user compute product, the Cerebras CS-1, and also announced its first customer, Argonne National Laboratory.

The CS-1 is a complete solution product designed to be added to a data center to handle AI workflows. It includes the Wafer Scale Engine (or WSE, i.e. the actual processing core) plus all the cooling, networking, storage, and other equipment required to operate and integrate the processor into the data center. It's 26.25 inches tall (15 rack units), and includes 400,000 processing cores, 18 gigabytes of on-chip memory, 9 petabytes per second of on-die memory bandwidth, 12 gigabit ethernet connections to move data in and out of the CS-1 system, and sucks just 20 kilowatts of power.
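
Dividing those headline figures down to a per-core level gives a rough sense of the design (simple arithmetic on the published specs):

```python
# Per-core arithmetic on the headline CS-1 specs quoted above.
cores = 400_000
on_chip_bytes = 18e9          # 18 GB of on-chip memory
bandwidth_bytes_s = 9e15      # 9 PB/s of on-die memory bandwidth

print(f"{on_chip_bytes / cores / 1e3:.0f} KB of memory per core")           # ~45 KB
print(f"{bandwidth_bytes_s / cores / 1e9:.1f} GB/s of bandwidth per core")  # ~22.5 GB/s
```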

A cross-section look at the CS-1. Photo via Cerebras

Cerebras claims that the CS-1 delivers the performance of more than 1,000 leading GPUs combined, a claim that TechCrunch hasn't verified, although we are intently waiting for industry-standard benchmarks in the coming months when testers get their hands on these units.

In addition to the hardware itself, Cerebras also announced the release of a comprehensive software platform that allows developers to use popular ML libraries like TensorFlow and PyTorch to integrate their AI workflows with the CS-1 system.

In designing the system, CEO and co-founder Andrew Feldman said that "We've talked to more than 100 customers over the past year and a bit," in order to determine the needs for a new AI system and the software layer that should go on top of it. "What we've learned over the years is that you want to meet the software community where they are rather than asking them to move to you."

I asked Feldman why the company was rebuilding so much of the hardware to power their system, rather than using already existing components. "If you were to build a Ferrari engine and put it in a Toyota, you cannot make a race car," Feldman analogized. "Putting fast chips in Dell or [other] servers does not make fast compute. What it does is it moves the bottleneck." Feldman explained that the CS-1 was meant to take the underlying WSE chip and give it the infrastructure required to allow it to perform to its full capability.

A diagram of the Cerebras CS-1 cooling system. Photo via Cerebras.

That infrastructure includes a high-performance water cooling system to keep this massive chip and platform operating at the right temperatures. I asked Feldman why Cerebras chose water, given that water cooling has traditionally been complicated in the data center. He said, "We looked at other technologies: freon. We looked at immersive solutions, we looked at phase-change solutions. And what we found was that water is extraordinary at moving heat."

A side view of the CS-1 with its water and air cooling systems visible. Photo via Cerebras.

Why, then, make such a massive chip, which, as we discussed back in August, has huge engineering requirements to operate compared to smaller chips that have better yield from wafers? Feldman said that it massively reduces communication time by using locality.

In computer science, locality is placing data and compute in the right places within, let's say, a cloud, in a way that minimizes delays and processing friction. By having a chip that can theoretically host an entire ML model on it, there's no need for data to flow through multiple storage clusters or ethernet cables; everything that the chip needs to work with is available almost immediately.

According to a statement from Cerebras and Argonne National Laboratory, Cerebras is helping to power research in "cancer, traumatic brain injury and many other areas important to society today" at the lab. Feldman said that "It was very satisfying that right away customers were using this for things that are important and not for 17-year-old girls to find each other on Instagram or some shit like that."

(Of course, one hopes that cancer research pays as well as influencer marketing when it comes to the value of deep learning models).

Cerebras itself has grown rapidly, reaching 181 engineers today according to the company. Feldman says that the company is heads-down on customer sales and additional product development.

It has certainly been a busy time for startups in the next-generation artificial intelligence workflow space. Graphcore just announced this weekend that it was being installed in Microsoft's Azure cloud, while I covered the funding of NUVIA, a startup led by the former lead chip designers from Apple who hope to apply their mobile backgrounds to solve the extreme power requirements these AI chips force on data centers.

Expect ever more announcements and activity in this space as deep learning continues to find new adherents in the enterprise.

AI-based ML algorithms could increase detection of undiagnosed AF – Cardiac Rhythm News

A joint press release from Bristol Myers Squibb and Pfizer has highlighted the findings of an artificial intelligence (AI)-based machine learning (ML) technique that has been shown in a test database to exhibit greater predictive performance than other currently available risk prediction models for atrial fibrillation (AF). The data from the UK study were published in PLoS ONE.

The study found that the algorithms, developed using routine patient records, have the potential to enrich the patient population for targeted screening. According to the joint statement, the next stage is to test the algorithm in routine clinical practice and quantify its impact in terms of the number of AF cases identified, and the associated potential cost savings in the earlier detection of AF.

Current methods for AF detection, such as opportunistic pulse checking in those aged 65 years and over, mean that around 100 people are screened to identify one person with AF. The study found that adopting the AI algorithm could reduce this number to one in nine. It tested whether AI was more accurate than existing risk prediction models, using the health records of nearly three million people.
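
Put another way, the screening yield implied by those two figures rises from about 1% to about 11%:

```python
# Screening yield implied by the numbers-needed-to-screen quoted above.
nns_pulse_check = 100   # ~100 people screened per AF case found today
nns_algorithm = 9       # ~9 screened per case with the ML-enriched pool
print(f"{1 / nns_pulse_check:.0%} vs {1 / nns_algorithm:.0%}")   # 1% vs 11%
```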

Commenting in the press release, Mark O'Neill (St Thomas' Hospital and King's College London, UK), one of the study authors, says: "This AI technique represents quite an astonishing leap in precision. The implications are huge, especially because ML can be so easily and affordably used in routine clinical practice with the potential to transform the diagnosis of AF. If we can find and treat people living unwittingly with AF, we can do a much better job of preventing complications like stroke and heart disease."

The press release states that the ML algorithm is potentially more precise than routine practice because it not only looks at risk factors but also at how they change, and can spot complex relationships between risk predictors that cannot be readily identified by humans, such as subtle changes in blood pressure prior to diagnosis or frequency of GP visits.
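
The release does not describe the model itself, but the general pattern it alludes to (features that capture change over time feeding a non-linear classifier) can be sketched as follows. This is a hypothetical illustration using scikit-learn with invented column names, not the published algorithm.

```python
# Hypothetical illustration of longitudinal risk features feeding a classifier.
# NOT the published algorithm; column names are invented for the sketch.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def build_features(records: pd.DataFrame) -> pd.DataFrame:
    """records: one row per GP visit, with 'patient_id', 'visit_date',
    'systolic_bp' and an 'af_diagnosed' label on the final row per patient."""
    g = records.sort_values("visit_date").groupby("patient_id")
    return pd.DataFrame({
        "bp_latest": g["systolic_bp"].last(),
        "bp_trend": g["systolic_bp"].last() - g["systolic_bp"].first(),  # change, not just level
        "visit_count": g.size(),                                          # frequency of GP contact
        "label": g["af_diagnosed"].last(),
    })

def train(records: pd.DataFrame) -> GradientBoostingClassifier:
    feats = build_features(records)
    X, y = feats.drop(columns="label"), feats["label"]
    return GradientBoostingClassifier().fit(X, y)
```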

In 2007, Pfizer and Bristol-Myers Squibb entered into a global alliance to commercialise the oral anticoagulant apixaban.

4 Popular Types of Website Hosting That You Need to Know – India West

Website hosting services essentially act as a place to house your website when you publish it online. But not all web hosting services are the same, and as a matter of fact there are several distinct types out there.

Before you get stuck in and start to look at the storage capacity, bandwidth, and other miscellaneous features that web hosts provide, you should first determine the type of web hosting service you require. The good news is that in general there are four types that are the most popular, and they should be at the top of your list.

As its name suggests, shared hosting involves sharing a server. If you choose to use it, your website will be housed on a server alongside hundreds or possibly even thousands of others.

Because so many users share the same server, the cost of shared hosting is cheaper than any other type of hosting service. However, at the same time, all those websites are going to be sharing the same RAM, CPU, bandwidth and other server resources, so performance will be lower.

Although many shared hosting packages claim to offer unlimited storage or bandwidth, it is often an empty promise. The truth is that once you go over the fine print, you'll find that shared hosting is more limited than any other option.

Technically VPS hosting involves sharing a server too, but the difference is that each website is hosted in its own dedicated virtualized space that acts as its own server. That means that although the physical server is shared, each website will have its own distinct set of resources to work with.

For all intents and purposes, VPS hosting acts like a dedicated server and gives you full control over the virtualized server. The resources that are available can be scaled up or down as required, too, subject to certain limitations.

Generally speaking, there is no significant disadvantage to this type of website hosting, other than that the cost is higher and it can be more difficult to set up. At times, however, hosting providers oversell their VPS servers, which can result in users not getting the resources that were promised.

Overall, however, VPS hosting is a great middle ground between shared hosting and a dedicated server.

If you use a dedicated server, you'll essentially be exclusively leasing an entire physical server. Simply put, you'll have full control over the server, and can decide exactly what you want to do with it.

Although this is generally the most expensive type of hosting service, it is also the one that has the potential to provide the best performance, flexibility, and scalability. It will enable you to customize the server based on your requirements and optimize it to fit your needs.

Aside from the cost, the main downside of hosting your website on a dedicated server is that you need the technical expertise to set up and maintain the server. It is possible to find managed dedicated servers or hire a server administrator but both will cost even more.

In many ways cloud hosting has a lot in common with VPS hosting, except that your website and data will be hosted on a network of servers. Effectively, it combines the network of servers into a virtualized machine and allocates resources as required.

One of the main reasons why cloud hosting has become popular in recent years is that it is both scalable and flexible. Most providers will bill you based on the resources that you use, and you can scale it up or down as necessary.

Additionally, the built-in redundancy that comes with having your website hosted on several servers can help to reduce the risk of downtime and data loss. Suffice to say, unlike other types of hosting, you won't have to worry about having a single point of failure.

The disadvantages of cloud hosting are that its costs can add up if you aren't careful, and at times security can be a concern. Both these issues can be mitigated as long as you're careful, however.

Final Words

Make no mistake, there are a few other types of web hosting services that are relatively popular, such as managed hosting and co-location. However, the four listed above are far more popular, and odds are you should be able to find the type that you need among them.

After you determine the type of website hosting that you feel is the best fit, you can then start to look at the various packages that are available. Ideally your goal should be to find one that matches your requirements and provides both the resources and features that you need.

(sponsored post)

Is IT prepared for life in the fast lane of multi-cloud? – Enterprise Times

The time has passed when organisations were content with gradually moving data and applications out of costly on-premises data centres and into the cloud. Now, the strategic imperative is to create IT environments that can support emerging technologies such as IoT, edge computing, AI and 5G. It has businesses opening the floodgates to migrate everything to a multi-cloud utopia ASAP.

At first there was a "cloud first" strategy. This was where new applications and workloads were developed in the cloud and legacy applications were slowly migrated over. "Cloud NOW!" is the new mandate, where on-premises legacy and mission-critical applications are all subject to migration.

Businesses are laser-focused on getting to a resilient, high performance, future-proof cloud infrastructure ASAP. But they shouldn't gloss over the challenges along the way. Forrester research indicates that 50 percent of digital transformation efforts stall. One of the primary reasons is lack of preparation. Depending on your viewpoint, digital transformation is either your end goal, or the foundation you need for next generation technology. Either way, to benefit from what's next, we need to better prepare.

Making the decision to move to the cloud is just the start. It throws up a wealth of questions to address before the project can succeed. Over the past decade, iland has been helping customers answer those questions in relation to cloud hosting, cloud-based disaster recovery and cloud backup solutions. Ultimately, the "multi-cloud or bust" drive will likely happen.

We will adopt multiple clouds. But, we should focus on adopting one cloud strategy first, so the multi-cloud movement doesn't inflict unnecessary pain, risk or excessive cost. We should also make sure we are in a position to anticipate risk factors before they become complications. It's about getting your house in order before exposing your cloud infrastructure to the added pressures of multiple clouds or emerging technology.

In our experience, businesses must address six key areas for their multi-cloud ambitions to be successful:

You need visibility and control over your data before you innovate. However, the proliferation of cloud services makes it easy for anyone with a credit card to set up a SaaS-based application or storage site. The moment they do, the corporate data that they're sharing and storing goes beyond the business's control. In today's data protection environment, this constitutes an unacceptable security and compliance risk that must be prevented.

To solve this problem, you need a two-prong approach. First, identify why business units take matters into their own hands. Is it down to impatience with IT backlogs? Lack of respect for data protection policies? Demand for facilities you didn't offer or support? Once you address this question, you must implement a corporate cloud policy that restricts new, unauthorized cloud-based resources to give IT and security personnel full visibility over what data is in the cloud and where.

Cloud transition is all about finding the right fit. This is more than preventing over- or under-provisioning resources. It's also about getting your critical data, existing applications and workloads transitioned securely to the cloud with minimal disruption, and without rewriting them. It's also about finding a cloud that fits your broad business needs, including those indicated by shadow IT cloud services that you want to eventually control.

If, for example, you operate a VMware environment on-premises, it makes sense to choose a cloud service provider with the expertise to support migration of virtualised applications and ensure the specific performance you need in the cloud. Getting this right early in your migration gives you the best chance of realising your goals.

In today's high-risk cybersecurity environment, security and compliance are more than something you check off. View them through the lens of business needs and risks. For example, there may be a geographic stipulation regarding data storage for GDPR compliance, or industry-specific security standards to meet if you operate in finance or healthcare.

Offering compliant cloud services is merely table-stakes for CSPs. Security environments are constantly changing. This requires businesses to develop ongoing, constructive CSP partnerships for expertise to support scheduled audits, security breaches or emerging threats. You need to know that you can call someone for informed counsel when the stakes are high to make the right decisions for your business.

As mentioned earlier, many businesses are pushing a "cloud now" strategy for critical applications to achieve the end goals. But when planning how to get there, it is important to look at how to manage the journey. Some providers offer software and infrastructure but will not manage the migration process itself. Others offer a concierge service to migrate customer data to the cloud.

Your approach depends on how much in-house resource and expertise you need to support your project. A fully managed concierge approach is likely a better bet if you are already stretched. Consultancy fees can be costly and add up if you run into issues part-way through your migration. It's better to have your migration journey mapped out and managed to avoid nasty surprises.

Aim for a multi-layered approach to data protection that safeguards data based on its value and vulnerability. Mission-critical applications need a backup plan and disaster recovery solution that includes an off-premises backup. Use a mirror site to keep this separate from your production environment. This also ensures that data is protected based on its value and associated risk.

Visibility and management are often a common barrier when migrating to the cloud. However, it is possible to achieve the same level of familiarity, visibility and control over a cloud environment as over an on-premises infrastructure, with security, costs and performance across the full stack available for management and reporting. This is the final key to gaining the benefits of moving to the cloud.

The urgency to transition fully to the cloud is understandable. But unless you focus on getting your house in order first, you could find that your projects can't achieve your stated goals. Get one cloud (and associated operational policies) done right before you deliberately adopt multiple clouds. It's a case of more haste, less speed. Focusing on key aspects of the migration journey and looking at how your business governs cloud use, security, data protection and compliance will pay off in the long term.

iland is a global cloud service provider of secure and compliant hosting for infrastructure (IaaS), disaster recovery (DRaaS), and backup as a service (BaaS). They are recognised by industry analysts as a leader in disaster recovery. The award-winning iland Secure Cloud Console natively combines deep layered security, predictive analytics and compliance to deliver unmatched visibility and ease of management for all of iland's cloud services. Headquartered in Houston, Texas and London, UK, iland delivers cloud services from its data centres throughout the Americas, Europe, Australia and Asia.

How managed services and the cloud is driving innovation in asset heavy industries – CIO Australia

Over a third (34 per cent) of executives say keeping up with shifting customer demands is a major challenge in enterprise, making it the second top barrier to business, according to a Business Reality Check, presented by American Express, and developed by The Economist Intelligence Unit.

"Rising incomes, middle class bulges, ageing populations, and next-gen millennials are re-shaping the customer base around the globe. And technology has raised consumer expectations for curated experiences," the report states.

Organisations are looking to attack this challenge through Big Data, AI, and the Internet of Things (IoT), which can be tuned to everything from fine-grained sentiment analysis, to product mix optimisation and business risk management. To fully embrace what these technologies promise, organisations must first modernise their ERP platform so that it can be the foundation on which advanced, data-driven applications can be built.

PanAust Limited Head of Information Systems, Steven Ivanovski, said the mining company chose iTelligence, NTT DATA Business Solutions [part of the NTT Group] as its SAP hosting and BASIS partner because it offered a highly competitive, innovative and quality hosting product which met its requirements of working with a trusted partner who could deliver scalability, flexibility and reliability.

PanAust's Ivanovski said, "As an IT department we want to be great at providing transformational business services and innovation. As such, when we look at working out what we want to leverage partners for, it is in the areas in which they are best of class. This in turn allows us to focus on how we can best service the business needs. The reliability of our SAP hosting solution allows us to do that."

According to NTT DATA, organisations that maintain legacy ERP systems face significant challenges. NTT DATA Head of Innovation Jacqueline Gabb explained, "The majority of legacy ERP systems have been built for business as it was at the time of implementation. Making changes can be complex, time consuming, and costly, which is the opposite of what is needed. Systems need to be flexible and agile to adapt quickly to meet new customer expectations, support new opportunities, and provide a platform for business growth."

Some of these challenges include:

With a modern ERP system in place, organisations can then build out their business warehouse and reporting environment. "What we are seeing is that customers quickly come to the realisation that while modernising ERP brings many benefits to an organisation, without a parallel focus on modern data platforms, companies are falling short of true digital transformation," said Jon Carroll of NTT DATA Business Solutions. He further explained that modern data platforms enable AI and ML, and combine all types of data in a cost-effective and flexible manner. "Modern data platforms help organisations to democratise their data, turn data into insights and turn insight into knowledge."

In other words, organisations need to take a fluid and flexible approach to transformation, towards the eventual goals of Big Data and AI. The traditional, structured approach of building layer on layer is no longer nimble enough for the competitive environment. NTT DATA instead advocates for organisations to find platforms and solutions that allow them to build enough of a step to quickly move up it, towards the top of the proverbial stairs.

By moving to SAP's next-generation digital core with SAP S/4HANA and running it in Azure, all the organisation's data (transactional, unstructured, external) comes together to provide in-depth analysis and insight, in real time, and from a single, cohesive source across the organisation. Additionally, Azure offers easy-to-use tools that support a full range of analytic workflow capabilities for business-centric, agile, self-service analytics.

For enterprises, the value in approaching a partner such as NTT DATA for an ERP modernisation project is its experience in provisioning both Azure and SAP. Together with NTT Limited, customers can leverage Azure on-demand resources as more than just a deployment option. The Azure modern data platform offering features deeply expanded capabilities and an integrated toolset for enterprise and government to develop, iterate on and stand up new business models. Organisations gain the ability to try new things quickly, test them quickly with fast POCs and fast-fail iteration, and rapidly determine their success potential. Azure offers everything for app development from API management to managed databases (such as Azure Cosmos DB) and Azure Container Services to event-driven serverless capabilities with support for a wide variety of development toolsets.

This roadmap to digital transformation and advanced AI involves moving through the ERP and SAP HANA modernisation steps early, as they provide the foundation for much of the subsequent innovation and technology applications. For more information on how to take those first steps, or how NTT DATA works with both Microsoft Azure and SAP, click here.

Salesforce and Microsoft Team Up to Improve Your Productivity – Tech.co

Two of the biggest names in business productivity are joining forces: Salesforce and Microsoft have announced a partnership this week that should produce a powerful means of improving team productivity and meeting business needs.

To say that Salesforce is one of the most popular business software providers would be a staggering understatement. It's been the number one CRM platform for five years running, with its market share more than double its closest competitor. Plus, with reviews regularly singing its praises, there's no denying the brand has a firm hold on the CRM market.

Now, by partnering with Microsoft, Salesforce will further establish itself as the go-to CRM for pretty much any business looking for a boost.

Salesforce and Microsoft are indeed teaming up with the goal of making the CRM platform even more effective. The partnership boils down to the fact that Microsoft Azure will now be the public cloud provider for Salesforce Marketing Cloud, with additional integrations that connect Salesforce's Sales Cloud and Service Cloud with Microsoft Teams.

"At Salesforce, we're relentlessly focused on driving trust and success for our customers," said Marc Benioff and Keith Block, co-CEOs of Salesforce, in a press release. "We're excited to expand our partnership with Microsoft and bring together the leading CRM with Azure and Teams to deliver incredible customer experiences."

Effectively, this collaborative effort will optimize Salesforce Marketing Cloud's performance, thanks to Microsoft's robust Azure cloud hosting service. As for the other benefits of this partnership, it's safe to say your business could seriously take advantage of the improvements.

Because so many businesses are already utilizing Salesforce CRM and Microsoft Teams platforms, this partnership will save a lot of time for a wide range of companies, by giving sales and service users the ability to search, view and share Salesforce records directly within Teams. Additionally, these integrations will be made available to Salesforce Sales and Service Clouds in late 2020. Some companies are already pretty excited about the deal.

"Marriott has more than 7,200 properties spanning 134 countries and territories, so driving efficiency and collaboration is critical," said Brian King, Global Officer of Digital Distribution, Revenue Strategy, and Global Sales at Marriott International, in the same press release. "The combination of Salesforce and Microsoft enables our teams to work better together to enhance the guest experience at every touchpoint."

Considering integrations between Salesforce and Microsoft Outlook are already so popular for creating, communicating, and collaborating between users, this team up will be welcome news to the countless users of the popular CRM platform. But if your business isn't already on board, how can you take advantage of this news today?

Now is the time that many businesses begin reviewing their operations and planning for the new year. If that's the case for your company, this is the perfect time to consider investing in software such as Salesforce. Salesforce is one of the most popular, best-reviewed CRM platforms on the market, and it comes in at a reasonable price.

However, we have to admit, not every company is going to need the full extent of Salesforce's robust feature catalog, the endless integrations, or the helpful marketing tools. Tech.co has done a whole bunch of research on CRM platforms across the industry, which means that finding one that fits your particular business' needs should be a breeze. Just take a look at our helpful quotes tool to get some up-to-date pricing options, and you'll be on your way to a productive, well-run business in no time.

Learn more about the best CRM platforms with Tech.co

Business Cloud VoIP & UC Services Market: Clear Understanding of The Competitive Landscape and Key Product Segments – Heraldresearch

Continuous adoption of cloud services by the government and business sector is the key factor that contributes to the growth of the global business cloud VoIP & UC services market. Business cloud VoIP is a cloud-based voice over internet protocol (VoIP) technology which delivers voice and multimedia over Internet Protocol (IP) networks. Business cloud VoIP and UC is the convergence of cloud hosting and VoIP technology that allows next-generation calling features and integrated mobile access. For example, a Unified Communications (UC) solution utilizing Microsoft's hosted messaging and collaboration tools such as Microsoft Office Communications Server, Microsoft Exchange, and Microsoft SharePoint, integrated with hosted voice over IP (VoIP) phone service, helps improve the fluidity of office collaboration and communication. Unified communication includes various components such as calling (voice, video), messaging (voice, video, email, instant messaging), conferencing (voice, web, and video), mobility, collaboration, and communication-enabled business processes (CEBP).

Business Cloud VoIP & UC Services Market: Market Dynamics

Download a sample copy of this report: https://www.futuremarketinsights.com/reports/sample/REP-GB-4714

The business sector is rapidly migrating to cloud solutions; rising interest in cloud-based communications solutions and growing enterprise awareness of hosted VoIP solutions, coupled with the rising importance of collaborative work environments to enhance productivity, are the prominent factors contributing to the growth of the global business cloud VoIP & UC services market.

Additionally, a growing preference among organizations across various industries to eliminate the redundant infrastructure of traditional communication systems, increase overall business productivity, and reduce the costs of communications accelerates the growth of the global business cloud VoIP & UC services market. Convergence of the internet of things and unified communications, a hybrid approach to hosted VoIP, and delivery of unified communications as a service (UCaaS) are some of the notable trends in the global business cloud VoIP & UC services market. However, the technological complexity of implementing unified communication across the organization and the challenge of identifying the technologies, services, and features suited to specific business needs are restraints likely to deter the progression of the global business cloud VoIP & UC services market.

Business Cloud VoIP & UC Services Market: Segmentation

The global business cloud VoIP & UC services market is segmented on the basis of component, vertical, end-user type, and region.

Request for customization: https://www.futuremarketinsights.com/customization-available/rep-gb-4714

Business Cloud VoIP & UC Services Market: Segmentation on the basis of component

Business Cloud VoIP & UC Services Market: Segmentation on the basis of vertical

Business Cloud VoIP & UC Services Market: Segmentation on the basis of End-user Type

Business Cloud VoIP & UC Services Market: Regional Outlook

Among all regions, the cloud VoIP & UC services market in North America is expected to dominate, owing to expanding SIP trunk infrastructure and a high priority placed on communications-enabled business processes. In terms of revenue, Asia-Pacific is identified as the fastest-growing cloud VoIP & UC services market, driven by increasing demand for cost-effective enterprise communication solutions and an expanding mobile and remote workforce.

Download table of contents with figures & tables: https://www.futuremarketinsights.com/askus/rep-gb-4714

Business Cloud VoIP & UC Services Market: Competition Landscape

Key players in the global business cloud VoIP & UC services market include Cisco, Verizon Enterprise, AT&T Inc., EarthLink, Inc., XO Communications, LLC, Evolve IP, LLC, Momentum Telecom, West Unified Communications Services Inc., New Horizon Communications, 8x8, Inc., and BroadSoft.
