
Top Hybrid Cloud Providers – Datamation

It's likely that hybrid cloud providers will see a robust market for years into the future. However, there is some debate about what, exactly, constitutes a hybrid cloud. The term has several definitions, but all are broadly similar in the final analysis: a hybrid cloud is a composition of two or more clouds working together while remaining distinct from each other. The clouds can either come from a public cloud provider (Salesforce, Amazon) or be a private cloud that you maintain in your own data center.

This is the generally accepted definition. What varies is the connection between a cloud provider and a company's own services. For most IT professionals, providers, and service companies, a company using Salesforce or Microsoft Azure alongside its own internal systems is running a hybrid cloud.

Not so, says Gartner, the leading IT market research firm. It defines the hybrid cloud as the coordinated use of two or more cloud environments operating in harmony. That could mean two public clouds, like ServiceNow and NetApp, or a public and private cloud connection.

"When we talk about hybrid cloud, we expect it will have all the attributes of cloud computing, so you access it as a service. It's scalable and elastic, you pay per use. Those are the fundamental definitions of cloud. If you have cloud plus non-cloud with coordination between them, that's great but it's not cloud. You are getting cloud on one side not on the whole experience. We call that hybrid IT," said Ed Anderson, research vice president with the firm.

Equally off the beaten path is Gartner's view on the top hybrid cloud providers. The reflexive answer would be the typical thought leaders: Amazon, Microsoft, Google, and IBM. But Gartner views hybrid providers not as the service providers but as consultancies that will come to your business and help connect you. Microsoft and IBM have on-site consulting services, but Google and Amazon do not, to any great degree, he notes.

"If you called up Amazon and said 'I need a hybrid cloud,' they would say 'hybrid with what?' Well I want it to work with VMware,' they would say we have this capability or talk to a partner. In general that's not their offering services," said Anderson.

That's why his choices of the top hybrid providers might come as a bit of a surprise, but these are the firms that will show up, hold your hand, and make the connections between the public cloud and your internal systems, whether they follow the private cloud model or traditional IT.

Microsoft is one of the few vendors to offer a true hybrid cloud solution because it has a little bit of everything. It has built massive data centers around the U.S. and globally, providing the compute and storage capacity anyone from a small business to a giant enterprise would want.

Where Microsoft has surpassed market leader Amazon is in its popular on-premises applications, which it has successfully tuned for the cloud, giving it an enormous lead over the rest. Amazon, Google, and IBM have no answer to Office 365 and SharePoint, and Microsoft has made the most of that.

It also offers SQL Server and Dynamics as on-demand cloud apps, with the same features as the on-premises version. And everything is built on Windows Server, which is widely used, making migration easy.

Microsoft also has a services division, its MVPs, that Amazon and Google cannot match. IBM, of course, has its Global Services organization, which is more than capable of helping with cloud migration projects.

Anderson described IBM as a "really holistic provider" offering components across the board, from general hosting to bare metal provisioning, shared multi-tenant public cloud, hosted private cloud, and its services business can help you deploy a private cloud in your data center.

IBM offers multiple entry points to the cloud, including its SoftLayer public cloud services (IaaS), Bluemix private cloud (PaaS) and Cloud Managed Services (SaaS) solutions. Bluemix is a development environment supporting multiple languages, including Java, Ruby, and Python. CMS supports analytics, SAP, and Oracle, among other SaaS offerings.

SoftLayer, which has turned into one of IBM's smartest acquisitions, gave the company a kick-start in cloud services. Its specialty is bare metal provisioning, meaning you provide the operating environment. It offers you nothing but the hardware. AWS, Azure and Google Compute all provide the operating environment for you. Oracle is the only other major bare metal provider, and it's a much newer entry in the field.

Accenture is the largest IT services provider in the world and has a huge array of managed services for cloud brokerage and coordinated use. "They are a first-class outsourcer with a lot of consulting capability," said Anderson.

Accenture has built what it calls a cloud management platform, a managed service in which it sets up a portal and brings all of your IT and external cloud partners into it. The portal provides a single place to request and provision services, a dashboard for viewing all of your services in one place, and integrations on the back end.

While it serves all major cloud providers, Accenture also has a subsidiary called Avanade, a joint venture between Accenture and Microsoft dating back to 2000. This group specializes in Microsoft products and has particularly deep expertise in them.

Selling its services business, which was built around EDS, the services giant it acquired in 2008, may seem counterintuitive, but it worked out well for HP Enterprise, because HPE was doing broad outsourcing and consulting.

With the divestiture of the services business last year, HPE built a new services group that functions more like a technical services business, helping clients with the implementation of cloud services and with managed services.

HPE's strength is in integrating its own data center products (compute, storage, and networking) with public cloud services. Anderson said the company has particularly deep expertise around VMware and the managed services surrounding it, along with coordinating between internal systems and the offerings from Microsoft and Amazon.

CSC has been through quite a bit of change. In 2015 it spun off the sizable business that handled federal IT into a new company, called CSRA, and last year it acquired the HPE services business. It also bought a company called ServiceMesh, which provides hybrid cloud integration and brokerage capabilities.

CSC was early to the hybrid market, starting almost a decade ago when it began offering platforms for clients to build SaaS, PaaS and IaaS services. It was an early adopter of VMware and told clients it could set up a shared environment with a public cloud provider or help them set up a dedicated hosted private cloud based on VMware.

What really helped it, said Anderson, was a single rate card across environments. "It didn't matter what the deployment was; you paid for it the same way." However, he adds that CSC hasn't changed its strategy much recently to become more of a cloud integrator, broker, and services provider.


Your cloud choice: Succeed slowly or fail fast | InfoWorld


If you want proof that what I've been advising for years is true, check out this research on cloud computing from The Register: Planning pays off when it comes to cloud migration and deployments.

The Register found that most companies don't yet have meaningful cloud adoption, nor have they truly embraced the idea of the cloud-first enterprise. The Register's report also shows that cloud adoption has been gradual over the past five years; it didn't find the market explosion that many in the press have been writing about.

Finally, the report recommended that you get your internal systems aligned with public cloud systems. That means planning better cloud management and application, data, and platform integrations.

The report underscored what IT should already understand: Cloud computing is an incremental process for most enterprises. It takes time to get the resources aligned and more time to do something meaningful with them. My rule of thumb is that enterprises typically underestimate the amount of time by a factor of two.

Cloud migrations and deployments are hard, or at least harder than most people believe. Why? Because cloud computing is a systemic change in how we do computing. As you move up to 30 years' worth of internal systems to the cloud, you'd better know what you're doing.

Unfortunately, many enterprises, especially in the United States, are run by short-term thinkers, egged on by the rosy scenarios painted by the tech press, cloud providers, and, dare I say, analyst reports.

When they decide to move to the cloud, it's within an aggressive timeframe that's largely unrealistic. They've set themselves up for failure.

No surprise, then, that most enterprises fail to meet their own expectations. It's those that spend the time doing upfront planning that typically do the best, fall short the least, and even sometimes meet or exceed their expectations.

My advice remains consistent: View this as the greatest IT shift since the initial automation of systems in the 1970s and 1980s. There needs to be a great deal of planning and understanding that occurs before you make the move. Only then can you find your path to success. Better a slower path to success than a fast one to failure.

David S. Linthicum is a consultant at Cloud Technology Partners and an internationally recognized industry expert and thought leader. Dave has authored 13 books on computing and also writes regularly for HPE Software's TechBeacon site.


Manufacturing Sectors Are Rapidly Moving to Cloud Computing – Manufacturing & Engineering Magazine

With the technological revolution drawing nearer, manufacturers are increasingly opting to use cloud computing as their first step towards the future. More than half of European manufacturers are now adopting a cloud-first approach, and a report released by ServiceNow, The Cloud Computing Tipping Point, estimated that 86% of the manufacturing sector is expected to complete the shift within the next twenty-four months.

This is higher than in other business sectors such as financial services, where adoption is estimated to reach 77% over the next two years. The popularity of cloud computing is taking a lot of stress off manufacturers across the globe. Due to the nature of the work, many companies and employees are under constant pressure to increase accuracy, capitalize on internal intelligence, and improve process speed at a competitive pace. High-quality technology is needed in modern manufacturing, and the cloud could give people the breathing space they need.

Cloud-based strategies offer companies the chance to bring their own intelligence and knowledge into sales situations without having to sift through data held in on-site systems. Once people understand how the system works, the cloud tends to be quicker than physical alternatives, as well as easier to customize and access.

Manufacturing industries are at various stages of adopting automated self-service, but by automating customer service, support and enquiries can also shift online, meaning parts of managerial roles can move online too, alongside the pricing and content platforms that cloud and IoT systems make easier than ever to use, access, and control.

Using cloud-based systems streamlines key areas of the manufacturing business, freeing up more time to invest in new products and, in turn, sales.


Cloud Computing Chips Changing – SemiEngineering

An explosion in cloud services is making chip design for the server market more challenging, more diverse, and much more competitive.

Unlike datacenter number crunching of the past, the cloud addresses a broad range of applications and data types. So while a server chip architecture may work well for one application, it may not be the optimal choice for another. And the more those tasks become segmented within a cloud operation, the greater that distinction becomes.

This has set off a scramble among chipmakers to position themselves to handle more applications using more configurations. Intel still rules the datacenter (a banner it wrested away from IBM with the introduction of commodity servers back in the 1990s), but increasingly the x86 architecture is being viewed as just one more option outside of its core number-crunching base. Cloud providers such as Amazon and Google already have started developing their own chip architectures. And ARM has been pushing for a slice of the server market based upon power-efficient architectures.

ARM's push, in particular, is noteworthy because it is starting to gain traction in a number of vendors' server plans. Microsoft said last month it would use ARM server chips in its Azure cloud business to cut costs. "This seemed like a dream just a couple of years ago, but a lot of people are putting money into it big time right now," said Kam Kittrell, product management group director in the Digital & Signoff Group at Cadence. "As time goes on, what we'll see is that instead of just having a general-purpose server farm that runs at different frequencies but basically has a different chip in it (depending on whether it's high performance or not), you're going to see a lot of different types of compute farms for the cloud."

Fig. 1: ARM-based server rack. Source: ARM

Whether ARM-based servers will succeed just because they use less power than an x86 chip for specific workloads isn't entirely clear. Unlike consumer devices, which typically run in cycles of a couple of years, battles among server vendors tend to move in slow motion, sometimes over a decade or more. But what is certain is that inside large datacenters, power expended for a given workload is a competitive metric. Powering and cooling thousands of server racks is expensive, and the ability to dial power up and down quickly and dynamically can save millions of dollars per year. Already, Google and Nvidia have publicly stated that a different architecture is required for machine learning and neural networking.

"In looking at the power/performance tradeoffs, and how to target the designs properly, there are two distinct things that cloud has accelerated in both the multicore and networking space. What is common between these chips is that they are pushing whatever the bleeding edge of technology is, such as 7nm," Kittrell said. "You've got to meet the performance, there's no question. But you've also got to take into account the dynamic power in the design. At 65nm we got used to power being dictated by leakage, all the way through 28nm. At 28nm, which was the end of the planar transistor, dynamic power became more dominant. So now you're having to study the workloads on these chips in order to understand the power. Even today, datacenters use 2% of the power in the United States, so they are a humongous consumer. And when it comes to power, it's not just how much power the chip uses, it's the HVAC needed to keep the datacenter cool. In essence, you've got to keep the dynamic power under target workloads under control, and the area has to be absolutely as small as possible. Once you start replicating these things, it can make a tremendous difference in the cost of the chip overall. The more switching nodes you put in there, the more power it consumes overall."
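
For context, the dynamic (switching) power Kittrell refers to is usually approximated by the standard CMOS relation below; this is textbook background rather than anything specific to the article or to Cadence's tools.

$$ P_{\mathrm{dynamic}} \approx \alpha \, C \, V^{2} \, f $$

Here, α is the switching activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency. Because the activity factor depends on what the chip is actually running, dynamic power can only be pinned down by profiling representative workloads, which is exactly why workload study has become part of targeting these designs.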

Slicing up the datacenter

Changes have been quietly infiltrating datacenters for some time. While there are still racks of servers humming along in most major data centers, a closer look reveals that not all of them are the same. There are rack servers, networking servers, and storage servers, and even within those categories the choices are becoming more granular.

"While there is still a need for enterprise data centers to have a general server/traditional server primarily based on Intel Xeon-core-based processors with a separate NIC card connecting to external networking where the switching and routing occur, we see in these large-scale cloud datacenters that they have a number of specific applications that they feel can be optimized for those applications within the cloud, within that data center," said Ron DiGiuseppe, senior strategic marketing manager in the Solutions Group at Synopsys.

As an example, DiGiuseppe pointed to Microsoft's Project Olympus initiative under its Azure business, which defines a server targeting different applications such as web services. "Microsoft is large scale. They are estimating that 50% of their data center capacity is allocated to web server applications. And obviously, every cloud data center would be different. But they wanted to have servers that can be optimized for the web applications. They announced last month that they have five different configurations of servers targeting segment-optimized applications."

Another example would be database services, he said: very fast, low-latency search and indexing for databases, such as for financial applications. With that in mind, system architectures are being optimized for those applications, and semiconductor suppliers are architecting their chips to have acceleration capabilities tied to the end applications, adding features to support those different segmented applications.

That could include a 64-bit ARM processor-based server chip or an Intel Xeon-based server chip in a database services application, where database access is accelerated by adding very close non-volatile storage, such as NAND flash SSDs attached through PCI Express, which connects directly to the processor using the NVMe protocol. The goal is to minimize latency when storing and accessing data.

Seeing through the fog

While equipping the datacenters is one trajectory, a second one is reducing the amount of data that floods into a datacenter. There is increasing interest in using the network fabric to do at least some of the signal processing, data processing, and DSP processing to extract patterns and information from the data. So rather than pushing all of this data up through the pipe into the cloud, the better option is to refine that data so only a portion needs to be processed in the cloud servers.

This requires looking at the compute equation from a local perspective, and it opens up even more opportunities for chipmakers. Warren Kurisu, director of product management in the embedded division at Mentor, a Siemens Business, said current engagements are focused on working with companies that build solutions for local processing, local data intelligence, and local analytics so that the cloud datacenters are not flooded with reams of data that clog up the pipes.

One of the key areas of focus here involves intelligent gateways for everything from car manufacturing to breakfast cereal and pool chemicals. "It requires multicore processors in the gateway that can enable a lot of the fog processing, a lot of data processing in the gateway," he said. "And that adds yet another element, which is security."

"Security is the number one question, so we have made a very big focus on being able to create a gateway that leverages hardware security built into the chip and the board, and to establish a complete software chain of trust so that anything that gets loaded and run on that gateway, any piece of software, is authenticated and validated through cryptology, through certificates and other things," Kurisu said. "But you need some processing power to do just that sort of stuff. There needs to be some sort of hardware security available. One of our key demonstration platforms is on the NXP i.MX6 processor, which has a high-assurance boot feature in it. The high-assurance boot basically has a key burned into the silicon, and we can leverage that key to establish that chain of trust. If that hardware mechanism isn't enabled in the system, then we can leverage things like secure elements that might be on the board, which would do the same thing. There would be some sort of crypto element there, or a key used to establish the whole chain of trust."

A change in thinking

The key to success comes down to thinking about these chip designs very holistically, Kurisu added, "because when it comes to cloud in the datacenter, and if you think about Microsoft Azure or Amazon Web Services or any of the others, the types of capabilities that are available from the cloud datacenter down to the actual embedded device, these things need to work in tandem. If you have a robot controller, and you need to do a firmware update, and you want to initiate that from the cloud, how that gets enabled on the end device is tied very explicitly to how the operation is invoked from the cloud side. What is your cloud solution? That's going to drive what the embedded solution is. You've got to think of it as a system, and in that way the stuff that happens in the datacenter is very closely related to the things that might seem very disconnected on the edge. How the IoT strategy is implemented is tied together, so it all has to be considered together."
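
As a rough illustration of the chain-of-trust check Kurisu describes, the sketch below verifies a software image against a trusted RSA public key before it is allowed to run. It uses Python's cryptography package; the file names and helper are hypothetical, not anything from Mentor's actual platform.

```python
# Minimal sketch: verify a software image against a trusted public key
# before allowing it to run on a gateway. Assumes an RSA key whose trust
# is ultimately anchored in hardware (e.g., its hash fused into the SoC).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def verify_image(image_path: str, sig_path: str, pubkey_path: str) -> bool:
    """Return True only if the image's signature checks out."""
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    if verify_image("app.bin", "app.bin.sig", "vendor_pub.pem"):
        print("Signature valid: extending the chain of trust to app.bin")
    else:
        print("Signature invalid: refusing to load app.bin")
```

On a real gateway the root public key would itself be validated against the hash fused into the silicon before being trusted, which is what anchors the chain in hardware rather than in a file on disk.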

It also could have an impact on chip designs within this market, opening doors for some new players that have never even considered tapping into it in the past.


IBM Misses Estimates in 20th Straight Quarterly Sales Drop – MSPmentor

IBM's revenue fell short of analysts' projections, marking a 20th consecutive quarterly decline as growth in new businesses like cloud services and artificial intelligence failed to make up for slumping legacy hardware and software sales.

That sales miss, the first in more than a year, could temper estimates for a return to revenue growth by early 2018. For years, Chief Executive Officer Ginni Rometty has been investing in higher-growth areas and moving away from older products like computers and operating system software. Even as she has shed units to cut costs and made acquisitions to bolster technology and sales, the legacy products are still a drag.

Sales in the first quarter fell 2.8 percent from a year earlier to $18.2 billion, IBM said in a statement Tuesday. That was a bigger drop than the 1.3 percent decline in the previous quarter. Analysts had expected $18.4 billion on average. The shares fell as much as 4.3 percent to $162.71 in late trading.

Part of the sales miss stemmed from IBM's technology services and cloud platforms segment, where revenue declined for the first time in three quarters. That group helps clients move applications onto cloud servers and manage workloads through multi-year deals of $500 million to $1 billion. Some of those contracts were expected to get signed in the first quarter but didn't go through, Chief Financial Officer Martin Schroeter said in an interview.

Had they been completed, revenue from the Global Technology Services group would have been better, Schroeter said. "When we do get those done in April, May or June, they'll start to deliver."

Profit, adjusting for some items, was $2.38 a share. Analysts expected $2.35 a share on average, according to data compiled by Bloomberg.

International Business Machines Corp. is aiming to reach $40 billion in sales in the new growth businesses by next year, which would require about a 21 percent jump from 2016. The company said it was ahead of pace to reach that target. Included in this group are all the products and services related to cloud, analytics, security and mobile technology.

The companys cognitive solutions segment, which houses much of the software and services in the newer businesses, has shown the most promise in recent quarters. Sales in cognitive solutions, which includes the Watson artificial intelligence platform and machine learning, grew for the fourth quarter in a row.

As part of its transformation, the company is working to sell more software that works over the internet, where customers pay as they use the tools. IBM has spent billions over the last few years building and buying the products and cloud data centers needed to support this type of business, a move that's eroded overall profitability. Gross margin shrank from a year earlier for the sixth straight quarter.

IBM's systems segment, home to legacy businesses like the mainframe and operating systems software business, posted a 17 percent drop in sales. That compared with a 22 percent drop during the same period last year. IBM doesn't expect growth in the area, but Rometty has said the company can still extract value from the business.


How robots will retool the server industry – ITProPortal

Larry Ellison, founder of Oracle, summed up the concept of cloud computing very succinctly: "All it is, is a computer attached to a network." Ellison and Oracle have gone on to embrace both open source and cloud technologies, including OpenStack, but the basic premise, that it starts with a physical server and a network, still holds true.

The server industry is going through massive change, driven in the main by advances in open source software, networking, and automation. The days of monolithic on-site server rooms filled with rack space, blinking lights, and buzzing air-con are gone. However, the alluring simplicity of this concept is not quite how it works in the real world.

Organisations that want to run a private cloud on premises, or a hybrid with public cloud, must first master bare metal servers and networking, and this is causing a major transition in the datacentre.

Instead, large organisations are deploying hybrid clouds, running on multiple smaller servers distributed across far-flung sites around the globe. These are being deployed and managed, as demand dictates, by robots, rather than IT administrators.

There is minimal human interaction, as the whole whirligig turns on slick IT automation, designed and directed by an IT technician on a dashboard on a laptop in any physical location.

Suddenly, traditional IT infrastructure is less gargantuan, and less costly, if no less important. But servers remain part of a bigger solution, residing in software. It is crucial that CIOs also make use of their existing hardware to take full advantage of the opportunities the cloud offers, rather than just tearing it out, ripping it up, and starting again.

It does not make sense to renew that infrastructure at great expense; such squandering ultimately hinders progress. New architectures and business models are emerging that will streamline the relationship between servers and software, and make cloud environments more affordable to deploy.

What do robots bring to the party?

The automation of data centres to do in minutes what teams of IT administrators used to do in days does present a challenge to organisations.

Reducing human interaction may raise fears of job losses, but IT directors and CIOs will instead find that they can redeploy the workforce to focus on higher-value tasks, giving staff more time to interact with their infrastructure and enabling them to extract real value from their cloud architectures.

In addition, automation opens up the field to new and smaller players. Rather than requiring an organisation to spend a great deal of time and money on specialist IT consultancy, automation and modelling allows smaller organisations to take advantage of the opportunities offered by cloud, and offer their service more effectively.

For example, imagine you are a pharmaceutical company analysing medical trial data. Building a Hadoop big data cluster to analyse this data set could previously have taken ten working days. Through software modelling on bare metal, that deployment can be reduced to minutes, allowing analysts to do what they need to do more quickly, find the trends or results from a trial, and bring new drugs to market faster.
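
The article doesn't name the modelling tool, but given the author's affiliation a reasonable assumption is Canonical's Juju. The sketch below shows roughly what "minutes instead of days" looks like in that case; the bundle file name is hypothetical, and the CLI is driven from Python only to keep the example self-contained.

```python
# Sketch: stand up a modelled Hadoop cluster on already-registered bare
# metal using the Juju CLI. Assumes a Juju controller is bootstrapped and
# machines are available to it; "hadoop-bundle.yaml" is a hypothetical
# bundle describing the cluster model.
import subprocess


def run(cmd: list[str]) -> None:
    """Run a CLI command and fail loudly if it errors."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # Deploy the whole modelled cluster (name node, workers, etc.) in one step.
    run(["juju", "deploy", "./hadoop-bundle.yaml"])
    # Watch the model converge instead of configuring each node by hand.
    run(["juju", "status"])
```

Growing the cluster for a larger trial then becomes a matter of adding units to the same model rather than re-architecting anything by hand.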

Deployment and Expansion

The emergence of big data, big software, and the internet of things is changing how data centre operators design, deploy, and manage their servers and networks. Big software is a term we at Canonical have coined to describe a new era of cloud computing. Where applications were once primarily composed of a couple of components on a few machines, modern workloads like Cloud, Big Data & IoT are made up of multiple software components and integration points across thousands of physical and virtual machines.

The traditional practice of delivering scale on a limited number of very large machines is being replaced by a new methodology, where scale is achieved via the deployment of many servers across many environments.

This represents a major shift in how data centres are deployed today, and it presents administrators with a more flexible way to drive value from cloud deployments while also reducing operational costs. A new era of software (web, Hadoop, MongoDB, ELK, NoSQL) is enabling them to make more of their existing hardware. Indeed, the tools available to CIOs for leveraging bare metal servers are frequently overlooked.

Beyond this, new software and faster networking is starting to allow IT departments to take advantage of new workload benefits from distributed, heterogeneous architectures. But we are at a tipping point, as much of the new server software and technology takes hold, and comes to light.

OpenStack has been established as a public cloud alternative for enterprises wishing to manage their IT operations as a cost-effective private or hybrid cloud environment. Containers have brought new efficiencies and functionality over traditional virtual machine (VM) models, and service modelling brings new flexibility and agility to both enterprises and service providers.

Meanwhile, existing hardware infrastructure can be leveraged to deliver application functionality more effectively. What happens from here, in the next three-to-five years, will determine how end-to-end solutions are architected for the next several decades.

Next Generation Solutions

Presently, each software application has different server demands and resource utilisation. Many IT organisations tend to over-build to compensate for peak load, or else over-provision VMs to ensure enough capacity for the future. The next generation of hardware, using automated server provisioning, will ensure today's IT professionals don't have to perform capacity planning in five years' time.

With the right provisioning tools, they can develop strategies for creating differently configured hardware and cloud archetypes to cover all classes of applications within their current environment and existing IT investments.

This way, it is possible for administrators to make the most of their hardware by having the ability to re-provision systems for the needs of the data centre. For instance, a server that was transcoding video 20 minutes ago is a Kubernetes worker node now, a Hadoop MapReduce node later, and something else entirely after that.

These next generation solutions, affording a high degree of automation, bring new tools, efficiencies, and methods for deploying distributed systems in the cloud. The IT sector is in a transition period, between the traditional scale-up models of the past and the scale-out architecture of the future, where solutions are delivered on disparate clouds, servers, and environments simultaneously.

Mark Baker, OpenStack Product Manager, Canonical

Image Credit: Scanrail1 / Shutterstock


Microsoft tools coalesce for serverless computing – InfoWorld

Microsoft's adoption of serverless computing is a big piece of Azure maturing as a platform. There's a lot going on here, as architectures and services evolve to take advantage of the unique capabilities of the cloud, and as we as users and developers migrate away from traditional server architectures.

Mark Russinovich, Microsoft's CTO of Azure, has a distinct view on the evolution of the cloud as a platform. "Infrastructure as a service [IaaS] is table stakes," he said at an Azure serverless computing event at Microsoft's Redmond, Wash., headquarters last week. "Platform as a service [PaaS] is the next step, offering runtimes and developing on them, an API and an endpoint, where you consume services. That's where we are today, where we still define the resources we use when we build cloud applications."

Then comes serverless computing. "Serverless is the next generation of computing, the point of maximum value," Russinovich said.

What he's talking about is abstracting applications from the underlying servers, where code is event-driven and scales on demand, charged by the operation rather than by the resources used. As he said, "I don't have to worry about the servers. The platform gives me the resources as I need them." That's the real definition of serverless computing: the servers and OS are still there, but as a user and a developer you don't need to care about them.

You can look at it as a logical evolution of virtualization. As the public cloud has matured, it's gone from one relatively simple type of virtual machine and one specific type of underlying hardware to specialized servers that can support IaaS implementations for all kinds of use cases, such as high-performance computing servers with massive GPUs for parallel processing and for scientific computing working with numerical methods, or arrays of hundreds of tiny servers powering massive web presences.

That same underlying flexibility powers the current generation of PaaS, where applications and code run independently of the underlying hardware while still requiring you to know what the underlying servers can do. To get the most out of PaaS (that is, to get the right fit for your code), you still need to choose servers and storage.

With serverless computing, you can go a step further, concentrating only on the code you're running, knowing that it's ephemeral and that you're using it to process and route data from one source to another application. Microsoft's serverless implementations have an explicit lifespan, so you don't rely on them being persistent, only on them being there when you need them. If you try to use a specific instance outside that limited life, you get an error message, because the application and its hosting container will be gone.
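
To make the event-driven, pay-per-operation model concrete, here is a minimal sketch of an HTTP-triggered function using Azure Functions' Python programming model; the route name and payload fields are invented for illustration, and C# or JavaScript versions would look much the same.

```python
# Minimal sketch of an event-driven serverless function: it exists only
# for the duration of each request, and the platform provisions and
# bills the underlying resources per invocation.
import json

import azure.functions as func

app = func.FunctionApp()


@app.route(route="enrich", auth_level=func.AuthLevel.ANONYMOUS)
def enrich(req: func.HttpRequest) -> func.HttpResponse:
    """Take a JSON payload, add a field, and pass it on."""
    payload = req.get_json()
    payload["processed"] = True
    return func.HttpResponse(
        json.dumps(payload), mimetype="application/json", status_code=200
    )
```

Nothing in the code names a VM, an operating system, or a scaling rule; the platform supplies those per invocation, which is the distinction Russinovich draws between serverless computing and earlier PaaS.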

Nir Mashkowski, principal group manager for Azure App Service, noted three usage patterns for Azure's serverless offerings.

The first, and most common, pattern is what he calls brownfield implementations. They are put together by enterprises as part of an overall cloud application strategy, using Azure Functions and Logic Apps as integration tools, linking old apps with new, and on-premises systems with the cloud.

The second pattern is greenfield implementations, which are typically the province of startups, using Azure Functions as part of a back-end platform, that is, as the switches and routers moving data from one part of an application to another.

The third pattern is for internet of things applications. It is a combination of the two, using Azure Functions to handle signals from devices, triggering actions in response to specific inputs.

For enterprises wanting a quick on-ramp to serverless computing, Azure Functions' closely related sibling Logic Apps is an intriguing alternative. Drawing on the same low-code foundations as the more business-focused Flow, it gives you a visual designer with support for conditional expressions and loops. (You can even run the designer inside Visual Studio.)

Like Azure Functions, Logic Apps is event-triggered and can be used to coordinate a sequence of Azure functions. Wrapping serverless code in a workflow adds more control, especially if it's used to apply conditions to a trigger, for example, launching one function if a trigger value is at the low end of a range and another if it's at the high end.

Russinovich described three organizations working with serverless computing:

One of the more interesting aspects of both Azure Functions and Logic Apps is that they're not limited to running purely in the cloud. Functions themselves can be developed and tested locally, with full support in Visual Studio, and both Azure Functions and Logic Apps will be supported by on-premises Azure Stack hybrid cloud systems.

Inside the Azure datacenters, its serverless options are all containerized for rapid deployment. That same model will come to your own servers, with Azure Functions able to run on any server, taking advantage of containers for rapid deployment.

Currently, Azure Functions is based on the full .Net Framework release, so there's a minimum requirement of Windows Server Core as a host. But that's going to change over the next few months with an open source release based on .Net Core and the upcoming .Net Standard 2.0 libraries. With those in hand, you'll be able to run Azure Functions in containers based on Windows Server Nano, as well as on .Net Core running on Linux. You'll be able to migrate code from on-premises to hybrid cloud and to the public cloud, depending on the workload and on the billing model you choose.

Such a cross-platform serverless solution that runs locally and in the cloud starts looking very interesting, giving you the tools to build and test on-premises, then scale up to running on Azure (or even on Linux servers running on Amazon Web Services).

There's a lot to be said for portability, and by working with REST and JSON as generic input and output bindings, Microsoft's containerized serverless implementation appears to avoid the cloud lock-in of its AWS and Google competitors while still giving you direct links to Azure services.
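
One way to read that portability point in code is to keep the business logic as a plain function that consumes and produces JSON-shaped data, with the serverless binding reduced to a thin wrapper. The sketch below is illustrative only: the function names are invented, and it reuses the same Python programming model as the earlier example.

```python
# Sketch: cloud-agnostic core logic plus a thin platform binding.
# Only the wrapper knows about Azure; the core takes and returns plain
# dicts (i.e., JSON), so it could be rehosted behind another trigger.
import json

import azure.functions as func


def route_order(order: dict) -> dict:
    """Pure business logic: no cloud SDK types in or out."""
    priority = "high" if order.get("value", 0) > 1000 else "normal"
    return {"order_id": order.get("id"), "queue": priority}


app = func.FunctionApp()


@app.route(route="orders", auth_level=func.AuthLevel.ANONYMOUS)
def orders(req: func.HttpRequest) -> func.HttpResponse:
    """Thin Azure-specific binding around the portable core."""
    result = route_order(req.get_json())
    return func.HttpResponse(json.dumps(result), mimetype="application/json")
```

Because the only contract between the two halves is JSON, rehosting the core behind a different trigger, on Azure Stack or another platform, means rewriting just the wrapper.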


Opinion: Why the cloud is good news for peak – eDelivery

System downtime for retailers can be disastrous, its effects rippling through the supply chain, so how can cloud computing help, particularly during the peak period, asks David Jack, CIO of MetaPack.

It is widely acknowledged that the tricky peak season, with its huge demands on retail systems, has become a much more efficient operation over the last two years. It's no coincidence that during that period, more retailers have moved at least part of their IT infrastructure onto the cloud, facilitating easier scalability to cope with the rapid rise in order volumes and deliveries.

But in the fast-moving world of technology, where cloud computing has been adopted readily by many market segments, the retail industry has been noticeably reticent.

There are a number of reasons for this, not least what retailers stand to lose from even the slightest glitch in their systems. An hour of downtime for a retailer selling online can result in millions of pounds' worth of lost revenue. With the exception of financial services and high-frequency trading, this is not the case for most other markets, where downtime is annoying and disruptive but not immediately damaging to the bottom line.

In retail, there has been a tendency for organisations to rely on systems that are under their own control and can be managed internally or maintained in a traditional data centre. The perceived stability and security of data centre service level agreements and contracts, by comparison with those offered by cloud platforms, were, for a long time, reassuring for retailers.

So, understandably, CIOs have taken an "if it ain't broke, don't fix it" approach, unwilling to disrupt the entire operation in order to move from a stable data centre-hosted environment. Without a compelling reason, such as price or better systems integration, why would they make such a major change?

Data storage costs

When serious retail cloud computing broke cover around five years ago, many early adopters were attracted by the seemingly low costs of hosting their systems. However, when it came to storing or transferring data, the same appealing rates did not apply. For eCommerce operations, particularly with the rapid growth in omnichannel retailing, the cost of data storage and movement was, and still is, a huge consideration, and this alone skewed the economic drivers for migrating to the cloud.

Data was also a challenge for another reason: residency. For eCommerce retailers operating in mainland Europe or in the UK, data protection regulations, or even their own terms and conditions, dictate where that data can be held and managed. Because cloud platforms can sometimes blur the jurisdictions in which data actually resides, especially during failover scenarios, there could be serious ramifications for a retailer that simply can't be sure where its data is.

Maturing cloud

So, what has prompted the shift in retailer attitudes that has seen many more migrating their systems onto the cloud in the last two years? Put simply, cloud vendors have pivoted and have developed their solutions to meet the exacting needs of high transaction operations, not just online retailing, but financial trading systems too.

A new generation of specialist cloud application and platform providers are able to offer tougher platforms, multi-level authentication and improved security. They have taken seriously the doubts of organisations with sensitive data and the criticisms regarding insufficient cloud computing contracts. In addition, cloud offerings have matured, so pricing, not just for hosting systems, but also for flexible data storage has become much more competitive.

A secondary market for monitoring, alerting and instrumenting in the cloud has evolved, allowing retailers to quickly identify and address issues that arise, and giving them the control they want to ensure their service to customers is always-on and always delivering optimum performance.

The other major factor is peer and supplier influence. We know from our own experience that when we moved our services out of data centres and onto the cloud, it was challenging to convince our retail customers that it was safe. But they have benefited from our change, and they see other retailers moving in the same direction. As a result, the conversations we are increasingly having with retailers revolve around them gaining those same benefits for their own operations.

The Peak question

But one of the most convincing reasons we are seeing greater take-up of cloud computing in retail is the peak season. Still the most challenging time in the eCommerce calendar, those few pre-Christmas weeks of trading require phenomenal levels of agility in the retail supply chain. The difficulties of Black Friday in 2014, in particular, alarmed retailers to such a degree that major changes had to be made. What we have seen over the subsequent two peak seasons is that those retailers who are on a cloud platform have had the flexibility to scale up their services in the approach to peak, or to specific promotions, and to scale down easily when online traffic falls away. The cloud gives retailers the ability to manage 100 times the traffic load for a few seconds, in a way that data centre-based systems simply could not.

As an industry, eCommerce is still in the early stages where cloud adoption is concerned. It is still primarily the largest retailers who have made that change, and it is much harder for the smaller operations that are already under pressure to deliver between 40 and 100% year-on-year growth. Increasingly, however, cloud hosting costs will become even more competitive, levels of security will be sufficiently high across the board, and the ability to scale to meet customer demands will be so compelling that wider adoption becomes just a matter of time.

David Jack is CIO of MetaPack

Image credit: Fotolia and MetaPack


Maharashtra schools raise Rs 216 crore via crowd funding – The Indian Express

Written by Alifiya Khan | Pune | Updated: April 18, 2017 6:55 am

Over 1,000 Zila Parishad schools in Dhule district of Maharashtra have turned digital.

An LCD projector worth Rs 78,000, a Lenovo laptop, and Wi-Fi-enabled classrooms are the latest additions to this Marathi-medium government school at Dhekusim, a village of 1,800 people in Jalgaon's Ambaner taluka, some 360 km northeast of Mumbai.

Suresh Patil, headmaster of the zila parishad school, says the changes, including a 2,000-square-foot compound wall and revamped classrooms, came not through government intervention but from funds raised from villagers. "When we reached out to people and sought help to improve the school, the villagers donated about Rs 5.5 lakh. As of today, at least 10 students have left private schools to join our school. Our total number of students has gone up from 42 to 78," Patil says.

This isn't an isolated story in Maharashtra. Data available with the Maharashtra State Council of Education Research and Training (MSCERT) shows that between July 2015 and December 2016, school teachers in the state managed to raise a whopping Rs 216 crore from the public, funds that have been utilised for revamping classrooms, building new toilets, and digital initiatives, among other things. According to the MSCERT data, Ahmednagar district has ensured the maximum public participation, having raised over Rs 30 crore, followed by Pune (Rs 19.82 crore), Solapur (Rs 19.03 crore), Aurangabad (Rs 15.59 crore) and Nashik (Rs 14.80 crore).

So what inspired these Maharashtra government schools to seek funds? Nand Kumar, principal secretary, state education department, says one of the reasons could be the Pragat Shaikshanik programme rolled out by the state government two years ago.

"One of the components of the programme was the compulsory reporting of public participation. Until now, schools didn't actively seek outside funds. But now, since the programme documents public participation, teachers have started actively reaching out to the community, encouraging them to contribute and also seeking corporate help. There has also been an overall improvement in the quality of education, reflected in many recent surveys, due to which villagers have started investing in zila parishad schools," he says.

While in most districts, the change is being brought about by the community as a whole, Dhule district, said to be the first in the state to have 100 per cent digital classrooms in its government schools, might owe its transformation to a 35-year-old investment banker, Harshal Vibhandik.

"I live in New York, and two years ago I came back to my hometown in Dhule. I had Rs 9 lakh with me and, after visiting a few schools, I decided to digitise nine of them. But then the villagers wanted to pitch in too, and that is when the 70:30 funding idea came to me, with villagers raising the majority of the money," Vibhandik says.

An idea that started off with nine schools soon covered all 1,103 zila parishad schools of Dhule, with the banker's friends from abroad, donors, villagers, and local NGOs all pitching in.

Vibhandik, who is working from India for now, says he also travels to other districts. "People are investing in the idea, villagers are investing in it. It is amazing to see the change," says Vibhandik.

While public participation has cut down on the long wait for government funds and elevated the image of zila parishad schools, it has had an unexpected outcome: the autonomy it provides to teachers to secure funds has led to some innovative methods.

At the Kardelwadi zila parishad school in Pune's Shirur taluka, every donated item, from the table fan to the blackboard dusters, carries the name of its sponsor. Primary school teachers Dattatrey and Bebinanda Sakat say they figured out that the practice encourages more people to donate. "It gives them a sense of fulfillment and it doesn't really take anything from us. So we started writing the names of the donors. Now, for almost anything we need, we just turn to the villagers and they get it for us instantly. So from computers to science labs, our small village school with less than 100 students has almost everything that a big city school would. This truly is a people's movement for education," says Bebinanda.



Frexit Discussion Following French Elections Could Drive Bitcoin Demand – newsBTC

France may start contemplating a Frexit referendum soon after the presidential elections, and demand for Bitcoin may increase in the following days.

In the coming days, Bitcoin could be looking at yet another opportunity to surge beyond record price levels. The reason this time is not Brexit or the US presidential elections, but a combination of both circumstances: France. The upcoming French elections and the potential for France to exit the European Union have become among the most widely discussed topics in the past few days.

With the French elections about a week away, there is speculation that Front National leader Marine Le Pen, one of the contestants in the election, will garner the most votes in the first round. Le Pen has been vocal about both reforms in France and her anti-EU stance on various matters. Should the speculation turn out to be true, with Marine Le Pen gaining a huge majority in the elections, the global markets are bound to go into shock.

A Le Pen win in the upcoming French presidential elections would be expected to move the country towards Frexit. Reports suggest that other presidential contestants may harbor similar Frexit designs, especially Francois Fillon. The candidates have promised to improve the country's job market and economy, which they believe have fallen victim to increased globalization influenced by France's position in the European Union.

However, the other side of the argument states that France may not be able to sustain its separation from the European Union economically, as the impact on its future independent currency would be devastating. Meanwhile, the potential exit of France from the European Union would leave it weakened, both politically and economically. France, one of the founding members of the EU, is also an attractive market for many European companies, who would, in turn, end up losing free access, just as in the case of Brexit.

The accompanying uncertainty regarding the value of the franc and the euro following a Frexit would see increased demand for Bitcoin among investors in Europe looking for ways to safeguard the value of their earnings. As more people opt for Bitcoin over other traditional assets, the cryptocurrency's value will be driven further upwards, making it profitable for both new and existing owners.
