
How Edge Is Different From Cloud And Not – The Next Platform

As the dominant supplier of commercial-grade open source infrastructure software, Red Hat sets the pace, which is why IBM was willing to shell out an incredible $34 billion to acquire the company. It is no surprise, then, that Red Hat has its eyes on the edge, that amorphous and potentially substantial collection of distributed computing systems that everyone is figuring out how to chase.

To get a sense of what Red Hat thinks about the edge, we sat down with Joe Fernandes, vice president and general manager of core cloud platforms at what amounts to the future of IBM's software business. Fernandes has been running Red Hat's cloud business for nearly a decade, starting with CloudForms and moving through the evolution of OpenShift from a proprietary (but open source) platform to one that has become the main distribution of the Kubernetes cloud controller used by enterprises, meaning those who can't or won't roll their own open source software products.

Timothy Prickett Morgan: Is the edge different, or is it just a variation on the cloud theme?

Joe Fernandes: For Red Hat, the edge is really an extension of our core strategy, which is open hybrid cloud and which is around providing a consistent operating environment for applications that extends from the datacenter across multiple public clouds and now out at the edge. Linux is definitely the foundation of that, and Linux for us is of course Red Hat Enterprise Linux, which we see running in all footprints.

It is not just about trying to get into the core datacenter. It's about trying to deal with the growing opportunity at the edge, and I think it's not just important for Red Hat. Look at what Amazon is doing with Outposts, what Microsoft is doing with Azure Stack, and what Google is doing with Anthos, trying to put out cloud appliances for on-premises use. This hybrid cloud is as strategic for any of them as it is for any of us.

TPM: What is your projection for how much compute is on the edge and how much is in the datacenter? If you added up all of the clock cycles, how is it going to balance out?

Joe Fernandes: It is very workload driven. Generally, the advice we always give to clients is that you should always centralize what you can because at the core is where you have the most capacity in terms of infrastructure, the most capacity in terms of your SREs and your ops teams, and so forth. As you start distributing out to the edge, then you are in constrained environments and you are also not going to have humans out there managing things. So centralize what you can and distribute what you must, right.

That being said, specific workloads do need to be distributed. They need to be closer to the sources of data that they are operating upon. We see alignment of the trends around AI and machine learning with the trends around edge, and that's where we see some of the biggest demand. That makes sense because people want to process data close to where it is being generated, and they can't incur either the cost or the latency of sending that data back to their datacenter or even the public cloud regions.

And it is not specific to one vertical. It's certainly important for service providers and 5G deployments, but it's also important for auto companies doing autonomous vehicles, where those vehicles are essentially data generating machines on wheels that need to make quick decisions.

TPM: As far as I can tell, cars are just portable entertainment units. The only profit anybody gets from a car is all the extra entertainment stuff we add. The rest of the price covers commissions for dealers and the bill of materials for the parts in the car.

Joe Fernandes: At last year's Red Hat Summit, we had both BMW and Volkswagen talking about their autonomous vehicle programs, and this year we received an award from Ford Motor Company, which also has major initiatives around autonomous driving as well as electrification. They'll be speaking at this year's Red Hat Summit. Another edge vertical is retail, allowing companies to make decisions in stores to the extent that they still have physical locations.

TPM: I didn't give much thought to the Amazon store that has something ridiculous like 1,700 cameras, where you walk in, you grab stuff, you walk out, and it watches everything you do and takes your money electronically. This is looking pretty attractive this week is my guess. And I thought it was kind of bizarre two months ago, not shopping as I know and enjoy it. And I know we're not going to have a pandemic for the rest of our lives, but this could be the way we do things in the future. My guess is that people are going to be less inclined to do all kinds of things that seemed very normal only one or two months ago.

Joe Fernandes: Exactly. The other interesting vertical for edge is financial services, which has branch offices and remote offices. The oil and gas industry is interested in edge deployments close to where they are doing exploration and drilling, and the US Department of Defense is also thinking about remote battlefield and control of ships and planes and tanks.

The thing that those environments have in common is Linux. People aren't running these edge platforms on Windows Server, and they are not using mainframes or Unix systems. It is obviously all Linux, and it puts a premium on performance and security, on which Red Hat has obviously made its mark with RHEL. People are interested in driving open systems anyway, and moving to containers and Kubernetes, and Linux is the foundation of this.

TPM: Are containers a given for edge at this point? I think they are, except where bare metal is required.

Joe Fernandes: I don't think that containers are a prerequisite. But certainly, just like the rest of the Linux deployments, it is going in the direction of containers. The reason is portability, having that same environment to package and deploy and manage at the edge as you do in the datacenter and in the cloud. On bare metal, containers can run directly on Linux; you don't need to have a virtualization layer in between.

TPM: Well, when I say bare metal, I mean not even a container. It's Linux. That's it.

Joe Fernandes: I think that the distinction between bare metal Linux and bare metal Linux containers is really about whether those workloads are packaged as container images or as something like RPMs or Debian packages, and whether you need orchestration, that is, orchestrated containers. Right. And again, that's very workload specific. We certainly see folks asking us about environments that are really small, where you might not do orchestration because you're not running more than a single container or a small number of containers. In that case, it's just Linux on metal.

TPM: OK, but you didn't answer my question yet, and that is really my fault, not yours. So, to circle back: How much compute is at the edge and how much is on premises or in the cloud? Do you think it will be 50/50? What's your guess?

Joe Fernandes: I don't think it'll be 50/50 for some time. I think in the range of 10 percent to 20 percent in the next couple of years is possible, and I would put that at 10 percent or less because there is just a ton of applications running in the core datacenter and a ton running out in the public cloud. People are still making that shift to cloud.

But again, it'll be very industry specific. I think the adoption of edge compute using analytics and AI/ML is just now taking off. For the auto makers doing autonomous vehicles, there is no other choice. It is a datacenter on wheels that needs to make life and death decisions on where to turn and when to brake, and in that market, the aggregate edge compute will be the majority at these companies pretty darn quick. You will see edge compute adoption go to 50 percent or more in some very specific areas, but if you took the entire population of IT, it's probably still going to be in the single digits.

TPM: Does edge require a different implementation of Linux, say a cut-down version? Do you need a JEOS-type thing like we used to have in the early days of server virtualization? Do you need a special, easier, more distributed version of OpenShift for Kubernetes? What's different?

Joe Fernandes: With Linux, the valuable thing is the hardware compatibility that RHEL provides. But we certainly see demand for Linux on different footprints. So, for example, RHEL on Arm devices or RHEL with GPU enablement.

When it comes to OpenShift, obviously Kubernetes is a distributed system, where the cluster is the computer, while Linux is focused on individual servers. What we are seeing is demand for smaller clusters, with OpenShift enabled on three-node clusters, which is sort of the minimum to have a highly available control plane because etcd, which is core to Kubernetes, requires three nodes for quorum. But in that situation, we may put the control plane and the applications on the same three machines, whereas in a larger setup, you would have a three-node OpenShift control plane and then at least two separate machines running your actual containers so that you have HA for the apps. Obviously those application clusters will grow to tens or even hundreds of nodes. But at the edge, the premium is on size and power, so three nodes might be as much space as you're going to get in the rack out at the edge.
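As an aside, the three-node floor Fernandes mentions follows directly from etcd's quorum rule, which a quick sketch makes plain. The arithmetic below is standard Raft majority math, not anything specific to OpenShift:

```python
# Quorum arithmetic for an etcd cluster: a majority of members must be
# reachable for the cluster to accept writes. Shown only to illustrate why
# three nodes is the usual minimum for a highly available control plane.

def quorum(members: int) -> int:
    """Smallest majority of `members`."""
    return members // 2 + 1

def failures_tolerated(members: int) -> int:
    """How many members can be lost while still keeping quorum."""
    return members - quorum(members)

for n in (1, 2, 3, 5):
    print(f"{n} members: quorum={quorum(n)}, tolerates {failures_tolerated(n)} failure(s)")

# Output:
# 1 members: quorum=1, tolerates 0 failure(s)
# 2 members: quorum=2, tolerates 0 failure(s)   <- two nodes is no better than one
# 3 members: quorum=2, tolerates 1 failure(s)   <- the three-node minimum described above
# 5 members: quorum=3, tolerates 2 failure(s)
```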

TPM: Either that or you might end up having to put your control plane on a bunch of embedded microcontroller type systems and compacting that part down.

Joe Fernandes: Actually, we see a kind of progression. So there are standard clusters as small as you can get them. So maybe it's a control plane with one or two nodes. And then the next step we've moved into is where the control plane and app nodes are the same three machines. And then you get into what I'd call distributed nodes, where you might have a control plane shared across five or ten or twenty edge locations that are running applications and talk back to that shared control plane. You have to worry about connectivity to the control plane.

TPM: If you lose the control plane or your connectivity to it, all it should mean is that you can't change the configuration of the compute cluster at the edge.

Joe Fernandes: Not exactly, because Kubernetes is a declarative system, so it thinks it needs to start up containers on another node or start a new node. In a case where you might have intermittent connectivity, we need to make it more tolerant so it doesn't actually start that process unless the node doesn't reconnect for some amount of time. And then the next step beyond that is clusters that have two nodes or a single node, and at that point the control plane, if it exists, is not HA, so you're focusing on high availability some other way.
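To make that tolerance concrete, here is a minimal, purely illustrative sketch of a grace-period check before rescheduling. This is not Kubernetes code; the function names and the ten-minute value are assumptions chosen for the example:

```python
from typing import Optional
import time

# Illustrative only: a declarative controller normally reacts to a node going
# unreachable by rescheduling its workloads elsewhere. For edge sites with
# intermittent links, a grace period avoids churning workloads on every blip.
GRACE_PERIOD_SECONDS = 600  # arbitrary example value, not a Kubernetes default

def should_reschedule(last_heartbeat: float, now: Optional[float] = None) -> bool:
    """Reschedule a node's workloads only after it has been silent past the grace period."""
    now = time.time() if now is None else now
    return (now - last_heartbeat) > GRACE_PERIOD_SECONDS

now = time.time()
print(should_reschedule(now - 120, now))   # False: a two-minute blip is tolerated
print(should_reschedule(now - 1200, now))  # True: twenty minutes offline, treat the node as gone
```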

TPM: You can do virtual machines on a slightly beefier server and have software resilience, but you have the potential of having a hardware resilience issue.

Joe Fernandes: Maybe their resiliency is between edge locations.

TPM: What happens with OpenStack at this point, if anything? AT&T obviously has been widely deploying OpenStack at the edge, with tens of thousands of baby datacenters planned, all linked by and controlled by OpenStack. Is this going to be something like use OpenShift where you can, use OpenStack where you must?

Joe Fernandes: We certainly see Red Hat OpenStack deployed at the edge. There's an architecture that we put out called the distributed compute node architecture, which customers are adopting. It is relevant for customers that have virtualized application workloads and also want an open solution, and so I think you will continue to see Red Hat OpenStack at the edge, and you will continue to see vSphere at the edge, too.

For example, in telco, OpenStack has a big footprint where companies have been creating virtualized network functions, or VNFs, for a number of years and that has driven a lot of our business for OpenStack in telco because a lot of the companies we work with, like Verizon and others, they wanted an open platform to deploy VNFs.

TPM: These telcos are not going to suddenly just decide, to hell with it, and containerize all this and get rid of VMs and server virtualization?

Joe Fernandes: It's not going to be an either/or, but we now see a new wave of containerized network functions, or CNFs, particularly around 5G deployments. So the telcos are coming around to containers, but like every other vertical, they don't all switch overnight. Just because Kubernetes has been out for five years now doesn't mean the VMs are gone.

TPM: Is the overhead for containers a lot less than VMs? It must be, and that must be a huge motivator.

Joe Fernandes: Remember that the overhead of a VM includes the operating system that runs inside the guest. With a container, you are not virtualizing the hardware, you are virtualizing just the process. You can make a container as small as the process that it runs, but you can only make a VM as small as the operating system.

TPM: We wouldn't have done all this VM stuff if we could have just figured out containers to start with.

Joe Fernandes: You know, Red Hat Summit is coming up in a few weeks and we will be providing an update on KubeVirt, which allows Kubernetes to manage standard virtual machines along with containers. In the past year or more, we have been talking about it strictly in terms of what we are doing in the community to enable it. But it has not been something that we can sell and support. This is the year it's ready for primetime, and that presents an opportunity to have a converged management plane. You could have Kubernetes directly on bare metal, managing both container workloads and VM workloads, and also managing the transition as more of those workloads move from VMs to containers. You won't have to switch environments or have that additional layer and so forth.

TPM: And I fully expect people to do that. I've got nothing against OpenStack. Five years ago, when we started The Next Platform, it was not obvious whether the future control plane and management and compute metaphor would be Mesos or OpenStack or Kubernetes. And for a while there, Mesos looked like it was certainly better than OpenStack because of some of its mixed workload capabilities and the fact that it could run Kubernetes better than OpenStack could. But if you can get KubeVirt to work and it provides Kubernetes essentially the same functionality that you get from OpenStack in terms of managing the VMs, then I think we're done. It is emotional for me to just put a nail in the coffin like that.

Joe Fernandes: The question is: Is it going to put a nail not just in OpenStack, but in VMware, too?

TPM: VMware is an impressive legacy environment in the enterprise, and it generates more than $8 billion in sales for Dell. There is a lot of inertia with legacy environments I mean, there are still System z mainframes out there doing a lot of useful work and providing value to IT organizations and their businesses. I have seen so many legacy environments in my life, but this may be the last big one I see this decade.

Joe Fernandes: You have covered vSphere 7.0 and Project Pacific, so look at the contrast in strategy. We're taking Kubernetes and trying to apply it to standard VM workloads as a cloud native environment. What VMware has done is take Kubernetes and wrap it back around the vSphere stack to keep people on the old environment that they've been on for the last decade.

Read the original post:
How Edge Is Different From Cloud And Not - The Next Platform


TSMC, Cadence and Others Benefit as Chip Design Activity Surges – TheStreet

We're in a golden age for chip R&D activity, and that makes it a pretty good time to be one of the companies providing the software, intellectual property and/or manufacturing plants needed by chip engineers to bring new products to market.

That thought came to mind as I read an Axios report about how Alphabet/Google (GOOGL) is working on a processor that could power its 2021 Pixel phones, along with future Chromebooks. The processor, codenamed Whitechapel, is said to feature 8 ARM CPU cores, rely on Samsung's next-gen, 5-nanometer, manufacturing process and contain dedicated circuitry for improving Google Assistant's performance.

To date, Google has relied on Qualcomm's (QCOM) processors to power its Pixel phones. And on the whole, Qualcomm has done a pretty good job of addressing the needs of the high-end and mid-range Android phone markets.

Nonetheless, Google feels that it's worth its while to develop its own smartphone processor. Just as Google has felt it's worth developing an image-processing chip for Pixel phones, and to develop proprietary silicon (its Tensor Processing Units, or TPUs) for accelerating AI/deep learning workloads within its data centers.

Apple (AAPL) , meanwhile, now relies on its own processors to power nearly its entire product line -- Macs are the exception, but perhaps not for long -- and has also developed a number of other chips for its hardware.

Amazon.com (AMZN) has designed a slew of chips that can be found inside of AWS data centers; Microsoft (MSFT) has created chip designs for both Azure data centers and its HoloLens headsets; Tesla (TSLA) is using a home-grown processor to power its third-gen Autopilot system; and Huawei's HiSilicon unit now develops chips for everything from phones to IoT devices to cloud servers.

The list goes on and on. In many cases, this surge in chip design activity among companies that aren't first and foremost chip developers simply stems from a wish to use chips that are cheaper and/or can deliver better performance/power efficiency than what third-party offerings can provide.

But other factors can also be at play. These include an explosion in the number of workloads -- in everything from smartphones and IoT devices to cars and cloud data centers -- for which no off-the-shelf silicon optimized for that particular workload happens to exist. Some of the custom ASICs developed by AWS and Azure to help underpin their data center infrastructures arguably fit this description.

In other cases, a company might be motivated by a wish for greater independence from and/or negotiating leverage with third-party suppliers. A lot of Huawei/HiSilicon's chip design work falls into this category, particularly in the wake of U.S. export restrictions, and so might some of the attempts by cloud giants to develop their own server CPUs and AI accelerators.

Regardless of a company's motivation, this uptick in chip design activity among the likes of hardware makers and cloud giants, together with attempts by independent chip developers to address a wider variety of workloads, is a positive for electronic design automation (EDA) software giants Cadence Design Systems (CDNS) and Synopsys (SNPS). It also spells more licensing opportunities for chip IP giant ARM, which is owned by Japan's SoftBank and eyeing a 2023 IPO, as well as more business for independent chip developers that work with others on custom ASICs, such as Broadcom (AVGO).


See original here:
TSMC, Cadence and Others Benefit as Chip Design Activity Surges - TheStreet


AMD takes another crack at Intel’s server stronghold with more Epyc silicon – The Register

AMD is once again hoping to muscle in on Intel's bread and butter with a new line of second-generation Epyc processors aimed squarely at the HPC, cloud, and enterprise markets.

The three server-class chips are designated 7Fx2: the eight-core 180W 7F32 clocked at 3.7-3.9GHz with 128MB of L3 cache; the 16-core 240W 7F52 clocked at 3.5-3.9GHz with 256MB of L3; and the 24-core 240W 7F72 clocked at 3.2-3.7GHz with 192MB of L3.


"The enterprise is really a core area for us and we are really going after it," AMD server business senior VP and general manager Dan McNamara told reporters on a briefing call.

AMD hopes each of the 7nm Zen-2-based chips will match or undercut Chipzilla on both price and performance: the trio are pitted against comparable Intel Xeon Gold or Platinum lines.

Specifically, the eight-core 7F32 costs $2,100 apiece, when ordering 1,000 at a time, and will take on Intel's eight-core Xeon Gold 6250 and 6244. In the middle, the 24-core 7F72 is AMD's answer to the 24-core Xeon Gold 6248R and Platinum 8268, and costs $2,450 apiece. At the high end, in price terms at least, the 16-core 7F52 is gunning for the 16-core Xeon Gold 6242 and 6246R models, supports a maximum of 4TB of 3200MHz RAM, compared to Intel's 1TB at 2933MHz, and is priced at $3,100 apiece for orders of at least 1,000 units. Yes, the part numbers are a bit odd.

Among the vendors planning to integrate the new Epyc chips at launch are HPE with Nutanix, Dell, and Supermicro, which will all be selling rack-mount servers equipped with the new AMD chips. IBM, meanwhile, says it will bundle the new Epyc gear into its bare-metal cloud units.

AMD said the chips were aimed at the HPC and cloud markets, although enterprise is going to be a particular focus.

While AMD has seen gains in the desktop space and Intel has been plagued by shortages, getting into the enterprise and data center markets, where Intel holds a near-monopoly on x86 and focuses much of its efforts, will be a different situation for AMD.

One hope AMD has for its enterprise push is that, as many companies move to handle a remote workforce, data center rebuilds will come into fashion and the market for the Epyc chips will explode. "If you think about the learnings from today, the working from home and virtual desktops, everyone is thinking about how you modernize your data center," McNamara said.

"I think the enterprise is really in this transformation phase."

AMD expects the first Epyc 7Fx2 servers to arrive in the second quarter of this year.


View post:
AMD takes another crack at Intel's server stronghold with more Epyc silicon - The Register


There’s a free alternative to Zoom, but it has a catch – TechRadar

Demand for online collaboration tools and video conferencing software has boomed in recent weeks, with big players like Microsoft and Google chasing market share from the likes of Zoom and WebEx.

While Zoom customers grew from 10 million to 200 million in a few months, Microsoft Teams and Google Hangouts relaxed the rules to aid first responders, schools and enterprises, with a view to onboarding them onto their respective platforms.

The latest to join the bandwagon is Twilio, which has announced three months of free access to its video platform for those working in healthcare, education and nonprofit organisations that are involved in the Covid-19 crisis response.

Twilio aims to offer a solution for front line health workers by letting them screen people for Covid-19 online, as well as assist other patients remotely. It also aims to offer an alternative solution to schools offering remote education to students and NGO workers fighting the pandemic.

The company has limited the free offer to new signups before June 30, and to existing users who require additional video services.

While there are quite a few video conferencing solutions available, bandwidth appears to be stretched in several countries. Microsoft has already reported that its cloud servers are facing a lot of stress due to the unexpected increase in usage of the Teams application.

Zoom, on the other hand, had been a preferred choice among adopters because of its easy-to-use interface and free offerings to various sectors. However, reports about security vulnerabilities, the lack of end-to-end encryption of video calls, and its Chinese connections with regard to routing calls through offshore locations have raised questions about the platform's security.

Even though the company has publicly apologized for these issues and has announced various steps to control the damage, it still faces a class-action lawsuit.

See the article here:
There's a free alternative to Zoom, but it has a catch - TechRadar


VMware’s New vRealize IT Ops Suite Runs in the Cloud and Supports Kubernetes – Data Center Knowledge

VMware has moved vRealize, its suite of IT infrastructure operations tools, to the cloud.

The company announced this week availability of vRealize Operations version 8.1, sold through a SaaS (Software-as-a-Service) model. vROps is now a cloud-based overseer of workload performance.

Related: VMware Bakes Kubernetes into vSphere 7, Fleshes Out Tanzu

"This is a comprehensive release across our cloud management portfolio," Teruna Gandhi, senior director for product marketing for VMware's cloud management business unit, told DCK.

The release will also make vROps's long-standing features, such as continuous performance optimization (predictive analytics applied to capacity planning data) and intelligent remediation (faster troubleshooting using logs and metrics correlation), available for vSphere 7.0, which, released last month, is the first version of vSphere with native Kubernetes support.

Related: VMware Adds More Hybrid Cloud Features to Network Virtualization Tech

Users with some infrastructure assets on Amazon Web Services will be able to integrate metrics served from AWS APIs with those from on-premises workloads, Gandhi told us.

With vROps 8.1, VMware aims to make its cloud-based operations center just as functional as it would be on-premises, a feature that in two short months has progressed from a nice convenience into a necessity, with companies relying on remote workers more than ever before during the Covid-19 pandemic.

"Everything that you can do around continuous performance optimization, workload balancing and optimization, as well as capacity and cost management, reclamation, show-back, all of those capabilities are available both on-premises and as SaaS," said Gandhi.

Reclamation refers to a feature of capacity planning where underutilized and orphaned resources, especially storage (byproducts of relocating workloads and changing VM priorities), may be reclaimed and put back to use alongside more active resources.

Show-back refers to the vROps tools and resources that can report cost savings from a resource reclamation strategy to operators on a department-by-department basis.

VMware is touting operational consistency between vROps on-premises and vROps Cloud, including for cloud-native applications.

"When a customer goes to the Workload Platform, they can start on-boarding the new constructs that vSphere 7 with Kubernetes has instituted, like supervisor clusters, namespaces, and pods," Gandhi said. "vROps is able to auto-discover them and bring them into its inventory."

In other words, the new vROps recognizes these new classes of workload in their new native context. But they're located in a vCenter just like any other workload class, with vROps applying predictive analytics to determine capacity consumption and time remaining (an estimate of how long before the resources attributed to a workload are consumed). "This capability is the bread-and-butter of what vROps does for capacity management," she explained.

With vROps 8.1 installed, Kubernetes clusters appear in context on the expanded layout of vSphere's Inventory Dashboard. They are auto-discovered, so there's no need for a separate configuration file or manifest to formally declare their existence. An operator can now drill down from the abstract level of the cluster to the components of physical infrastructure. Here, an operator can determine whether any operating issues or alerts have been discovered, and whether a cluster is performing according to established KPIs.

This does not mean vROps can be thought of as a container monitoring platform, Gandhi warned us. vROps is not an alternative to Prometheus, which provides metrics from deep inside containers. Rather, she advised, vROps's metrics should be interpreted as relative measurements of workload performance from a server-side perspective (as opposed to a client-side performance monitor, such as New Relic). "It is making sure the infrastructure that is supporting my container environment is healthy, up and running. But it's not actually monitoring the guest container or the guest cluster itself."

"The beauty of what vROps does is if the customer is running both virtual machines and Kubernetes workloads in the same vCenter deployment, it's able to get that full inventory and provide that holistic view across the board," she said.

Any tool that has visibility into an enterprise's VM deployment platform must also have an agent residing on that platform. In vROps Cloud's case, Gandhi explained, its operator installs a kind of reverse proxy agent. A secure connection is then established between this proxy and VMware vCenter, which feeds performance data to the cloud-based analytics engine.

Shouldn't this connection introduce latency into the management process that wasn't there before? Not significantly, she told us, since performance data is only being polled every five minutes.

As enterprise data centers and colocation facilities adapt their shifts and reschedule workers amid the coronavirus pandemic, IT operations executives and managers appear to be flooding vendors such as VMware with requests for remote functionality. It just so happens that much of everyday management was already taking place at a distance. Now it needs to take place from home with the same assurances as before.

"Employers across the globe are sending their employees who are typically working from office locations to the home," noted Sanjay Uppal, senior VP and general manager of VMware's VeloCloud business unit, during a press briefing last week. "And this is really driving demand through the roof."

It's a statement that brings home just how soon IT departments came to the realization that much of their expertise can be used just as effectively off-site as on.

Read more:
VMware's New vRealize IT Ops Suite Runs in the Cloud and Supports Kubernetes - Data Center Knowledge


Data centre evolution: the time for automation is here – ITWeb

Robert Graham, Technical Team Lead at Obsidian Systems.

The need to save on day-to-day operational expenses and minimise physical and carbon footprints is driving the move to virtual data centres.

Even more so now, with the COVID-19 pandemic, this shows that Africa's cloud services space is evolving and that business leaders recognise the need to move their workloads to the cloud, as on-premises data centres give way to automated or virtual options, either on-premises or in the cloud.

This is according to Robert Graham, Technical Team Lead at leading open source technology and services provider Obsidian Systems.

Today, the discussion around cloud and data centre strategy is tailored around automation.

"The question then arises of why one would want to automate your data centre if it is so easy to deploy a server with the click of a button. It is not like the old days, where you physically had to rack-and-stack a server room, lay the network cables and configure network switches, and only after a few days or even months were you able to utilise these servers within your physical data centre. For data centre automation, one has to look at ways to enable the developers and operations engineers to easily deploy, configure and maintain an application or service within your data centre, whether this is an on-premises or cloud data centre," Graham adds.

"Automation means freeing your human resources to focus on building solutions instead of wasting time on troubleshooting and manually building an environment," says Graham.

"You would want to be able to do this in a trusted and predictable way, utilising tools that will allow you to define your infrastructure as code. With infrastructure as code, you can continuously test your environment before it is deployed or upgraded. Without infrastructure as code, one runs the risk of creating a snowflake environment that requires a great number of person-hours for manual tasks, which can lead to human error and fatigue," he continues.
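To make the infrastructure-as-code idea concrete, the sketch below shows the declare-diff-apply loop such tools implement, in a tool-agnostic way; the state format and resource names are invented for illustration and are not tied to any particular product:

```python
# A toy illustration of "infrastructure as code": desired state is declared as
# data, compared against actual state, and only the differences are acted on.
# Real tools (Terraform, Ansible and so on) apply the same loop to real APIs.

desired = {
    "web-01": {"cpu": 4, "memory_gb": 16, "packages": {"nginx"}},
    "db-01":  {"cpu": 8, "memory_gb": 64, "packages": {"postgresql"}},
}

actual = {
    "web-01": {"cpu": 4, "memory_gb": 8, "packages": {"nginx"}},
    # db-01 does not exist yet
}

def plan(desired: dict, actual: dict) -> list[str]:
    """Produce a human-readable list of changes needed to reach the desired state."""
    changes = []
    for name, spec in desired.items():
        if name not in actual:
            changes.append(f"create {name} with {spec}")
            continue
        for key, value in spec.items():
            if actual[name].get(key) != value:
                changes.append(f"update {name}: set {key} to {value}")
    return changes

for change in plan(desired, actual):
    print(change)
# Prints:
# update web-01: set memory_gb to 16
# create db-01 with {'cpu': 8, 'memory_gb': 64, 'packages': {'postgresql'}}
```

Because the desired state is just data, it can be reviewed and tested in a pipeline before anything is deployed, which is the "continuously test your environment" point made above.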

Graham believes that, generally, physical data centres are unlikely to disappear soon. However, the idea of a data centre has certainly changed.

"A great number of organisations around the world are moving towards the concept of a virtual data centre. This allows companies to focus more on the product and service they want to deliver than on maintaining an on-premises data centre. A question that does come to mind is whether increased automation in the data centre will result in job losses. One should rather look at this challenge from another angle, and ask how we can utilise the skills of the engineers that spent countless hours in physical data centres in a more productive way. It mostly depends on the willingness of a person to learn new ways of doing things."

Graham raises operating systems as an example. A couple of years ago it required a lot of skill to deploy a server properly; today anyone can deploy a server with a choice of operating system with little to no skill. Where the skill comes into play is in configuring that particular server to perform, as well as it can, the task it was deployed for. With this knowledge, an engineer should be able to make use of an automation tool that will give them the ability to configure the server as required. With any form of automation, you will always need someone to maintain the automation tool.

Advantages to automation

Obsidian says that among the main benefits of automation in the data centre is that systems perform the boring, repetitive tasks, which in the end gives you predictability and minimises human error.

This will give your engineers and developers more time to focus on building exciting new systems and services, rather than wasting unnecessary time on getting things to work as they do on the developer's machine.

"With automation in place, especially in the area of infrastructure-as-code, the engineers can be empowered to easily recover or replicate an environment somewhere else, which at the end of the day will save the company money through reduced downtime of their services. Another way to automate the data centre is to build your systems in such a way that they can automatically fail over to another site or cloud environment. This will also reduce alert fatigue for your engineers on call," Graham adds.

But Obsidian underlines that the main aim of automation should always be to make the engineer's life easier.

Graham continues: "Saying this, some engineers are not for automation, as they feel they are losing control. This dichotomy brings us to the point where one has to decide what is better for the organisation. When we look at the modern data centre and compare it with the ones from a decade or two ago, some of the old data centres already had some form of automation by means of robotic arms swapping backup tapes in and out during the daily backup process. Even this form of automation has become redundant due to today's fast network and Internet capabilities and more affordable storage. This just shows us that automation within the data centre will be an ever-evolving process, and as technology evolves, so will our data centres."

The best approach to automation

Obsidian says there are a few steps that must be taken to help an organisation begin its automation journey.

First, a business has to identify a few people to form part of the automation team. Then, identify a small component that can be automated to ensure a quick win.

"This will build confidence in the team and the rest of the organisation that automation is an achievable option. This can be anything from automating the deployment of infrastructure for a small project utilising a tool like Terraform by HashiCorp, to deploying one piece of software, such as the backup tool, to your development environment with a tool like Chef, Ansible, Puppet or SaltStack," says Graham.

"It is of utmost importance that the automation team works closely with the existing infrastructure and development teams to identify other tasks and processes that can be automated within your data centre. Slowly but surely, more people will start to see the success of automation. From a development point of view, look at DevOps tools that will give your team the ability to continuously test and deploy their code. Tools like GitLab, Jenkins and Bamboo are great for this."

Automation will keep on improving as technology improves, according to Graham.

He believes the future data centre will strongly rely on automation to continuously improve the service a company delivers, whether this is for an on-premises or cloud-based data centre.

"The important thing is to start your automation journey as soon as possible, for this will allow you to clear out some technical debt," he adds.

Read the original here:
Data centre evolution the time for automation is here - ITWeb


Shift to work-from-home: Most IT pros worried about cloud security – Help Net Security

As most companies make the rapid shift to work-from-home to stem the spread of COVID-19, a significant percentage of IT and cloud professionals are concerned about maintaining the security of their cloud environments during the transition, according to a survey conducted by Fugue.

The survey found that 96% of cloud engineering teams are now 100% distributed and working from home in response to the crisis, with 83% having completed the transition or in the process of doing so.

Of those that are making the shift, 84% are concerned about new security vulnerabilities created during the swift adoption of new access policies, networks, and devices used for managing cloud infrastructure remotely.

"What our survey reveals is that not only does cloud misconfiguration remain the number one cause of data breaches in the cloud, but the rapid global shift to 100% distributed teams is creating new risks for organizations and opportunities for malicious actors," said Phillip Merrick, CEO of Fugue.

"Knowing your cloud infrastructure is secure at all times is already a major challenge for even the most sophisticated cloud customers, and the current crisis is compounding the problem."

Because cloud misconfiguration exploits can be so difficult to detect using traditional security analysis tools, even after the fact, 84% of IT professionals are concerned that their organization has already suffered a major cloud breach that they have yet to discover (39.7% highly concerned; 44.3% somewhat concerned). 28% state that they've already suffered a critical cloud data breach that they are aware of.

In addition, 92% are worried that their organization is vulnerable to a major cloud misconfiguration-related data breach (47.3% highly concerned; 44.3% somewhat concerned).

Over the next year, 33% believe cloud misconfigurations will increase and 43% believe the rate of misconfiguration will stay the same. Only 24% believe cloud misconfigurations will decrease at their organization.

Preventing cloud misconfiguration remains a significant challenge for cloud engineering and security teams. Every team operating on cloud has a misconfiguration problem, with 73% citing more than 10 incidents per day, 36% experiencing more than 100 per day, and 10% suffering more than 500 per day. 3% had no idea what their misconfiguration rate is.

The top causes of cloud misconfiguration cited are a lack of awareness of cloud security and policies (52%), a lack of adequate controls and oversight (49%), too many cloud APIs and interfaces to adequately govern (43%), and negligent insider behavior (32%).

Only 31% of teams are using open source policy-as-code tooling to prevent misconfiguration from happening, while 39% still rely on manual reviews before deployment.
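As an illustration of what lightweight policy-as-code checks look like in practice (the survey does not prescribe any particular tool), the sketch below evaluates resource definitions against two simple rules before deployment. The configuration format and the rules are invented for the example, though real tools such as Open Policy Agent apply the same principle:

```python
# A toy pre-deployment policy check: evaluate resource definitions against
# simple rules and fail the pipeline if any rule is violated. The dict format
# here is invented for illustration; real policy-as-code tools apply the same
# idea to actual infrastructure templates.

def check_policies(resource: dict) -> list[str]:
    violations = []
    if resource.get("type") == "object_storage" and resource.get("public_read", False):
        violations.append(f"{resource['name']}: object storage must not allow public reads")
    if resource.get("type") == "security_group":
        for rule in resource.get("ingress", []):
            if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") == 22:
                violations.append(f"{resource['name']}: SSH must not be open to the world")
    return violations

resources = [
    {"name": "logs-bucket", "type": "object_storage", "public_read": True},
    {"name": "app-sg", "type": "security_group",
     "ingress": [{"cidr": "0.0.0.0/0", "port": 22}]},
]

for r in resources:
    for v in check_policies(r):
        print("POLICY VIOLATION:", v)
```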

Respondents cited a number of critical misconfiguration events they've suffered, including object storage breaches (32%), unauthorized traffic to a virtual server instance (28%), unauthorized access to database services (24%), overly broad Identity and Access Management permissions (24%), unauthorized user logins (24%), and unauthorized API calls (25%).

Cloud misconfiguration was also cited as the cause of system downtime events (39%) and compliance violation events (34%).

While malicious actors use automation tools to scan the internet to find cloud misconfigurations within minutes of their inception, most cloud teams still rely on slow, manual processes to address the problem.

73% use manual remediation once alerting or log analysis tools identify potential issues, and only 39% have put some automated remediation in place. 40% of cloud teams conduct manual audits of cloud environments to identify misconfiguration.

A reliance on manual approaches to managing cloud misconfiguration creates new problems, including human error in missing or miscategorizing critical misconfigurations (46%) and when remediating them (45%).

43% cite difficulties in training team members to correctly identify and remediate misconfiguration, and 39% face challenges in hiring enough cloud security experts. Issues such as false positives (31%) and alert fatigue (27%) were also listed as problems teams have encountered.

The metric for measuring the effectiveness of cloud misconfiguration management is Mean Time to Remediation (MTTR), and 55% think their ideal MTTR should be under one hour, with 20% saying it should be under 15 minutes.

However, 33% cited an actual MTTR of up to one day, and 15% said their MTTR is between one day and one week. 3% said their MTTR is longer than one week.
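For teams that want to track the MTTR figures quoted above, the metric itself is simply the average time from detection to remediation. A minimal sketch, using made-up incident timestamps:

```python
from datetime import datetime, timedelta

# Minimal MTTR calculation: mean of (remediated - detected) across incidents.
# The incident list below is invented sample data for illustration.

incidents = [
    (datetime(2020, 4, 1, 9, 0),  datetime(2020, 4, 1, 9, 40)),   # 40 minutes
    (datetime(2020, 4, 2, 14, 0), datetime(2020, 4, 3, 14, 0)),   # 1 day
    (datetime(2020, 4, 5, 8, 0),  datetime(2020, 4, 5, 8, 10)),   # 10 minutes
]

total = sum(((fixed - found) for found, fixed in incidents), timedelta())
mttr = total / len(incidents)
print(f"MTTR: {mttr}")  # about 8 hours 17 minutes for this sample
```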

With cloud misconfiguration rates at such high levels and a widespread reliance on manual processes to manage it, the costs are predictably high for cloud customers. 49% of cloud engineering and security teams are devoting more than 50 man hours per week managing cloud misconfiguration, with 20% investing more than 100 hours on the problem.

When asked what they need to more effectively and efficiently manage cloud misconfiguration, 95% said tooling to automatically detect and remediate misconfiguration events would be valuable (72% very valuable; 23% somewhat valuable).

Others cited the need for better visibility into cloud infrastructure (30%), timely notifications on dangerous changes (i.e., drift) and misconfiguration (28%), and improved reporting to help prioritize remediation efforts (8%).

Cloud security is about preventing the misconfiguration of cloud resources such as virtual servers, networks, and Identity and Access Management (IAM) services. Malicious actors exploit cloud misconfiguration to gain access to cloud environments, discover resources, and extract data.

The National Security Agency states that misconfiguration of cloud resources remains the most prevalent cloud vulnerability and can be exploited to access cloud data and services.

With the cloud, there's no perimeter that can be defended, exploits typically don't traverse traditional networks, and legacy security tools generally aren't effective. Because developers continuously build and modify their cloud infrastructure, the attack surface is highly fluid and expanding rapidly. Organizations widely recognized as cloud security leaders can fall victim to their own cloud misconfiguration mistakes.

With the Shared Responsibility Model, cloud providers such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform are responsible for the security of the cloud, and the customer is responsible for the security in the cloud.

While cloud providers can educate and alert their customers about potentially risky misconfigurations and good security practices, they can't prevent their customers from making misconfiguration mistakes.

See the rest here:
Shift to work-from-home: Most IT pros worried about cloud security - Help Net Security


Stealthbits enhances security and threat protection, expands cloud platform coverage – Help Net Security

Stealthbits Technologies, a customer-driven cybersecurity software company focused on protecting an organization's sensitive data and the credentials attackers use to steal that data, announced the release of StealthAUDIT 10.0, its flagship platform for auditing, governance, and access management across dozens of IT and data resources.

Security professionals struggle to keep up with and defend against the wide variety of tactics, techniques, and procedures (TTPs) attackers can use to infiltrate networks, elude detection, compromise credentials, and escalate privileges on their way to compromising enterprise data repositories.

Correspondingly, there is an ever-increasing number of storage platforms and repositories available to house the data security professionals need to protect, both on-premises and in the cloud.

With the launch of StealthAUDIT 10.0, Stealthbits continues to broaden their security and threat protection capabilities, as well as expand cloud platform coverage to ensure data is protected wherever it lives.

Headlining StealthAUDIT 10.0's impressive list of enhancements is the addition of Shadow Access Rights analysis, which enables users to proactively and explicitly identify attack paths bad actors can take to move laterally, escalate privileges, compromise entire domains, and gain access to sensitive data.

"While some risks are simple to identify, others lurk beneath the surface and exist due to the right, albeit toxic, combination of permissions and conditions," explained Stealthbits Field CTO Gerrit Lansing.

"The correlation of highly detailed and comprehensive views into object-level permissions within Active Directory, as well as environmental conditions such as the location of sensitive information and open administrative sessions across desktop and server infrastructure, allows for visualization of real and viable attack paths bad actors look to exploit, helping to simplify an administrator's understanding of threats and how to remediate them," Lansing continued.

StealthAUDIT 10.0 provides expanded support for two of the cloud's most widely used object storage and database platforms, with solutions for Amazon Simple Storage Service (S3) and Azure SQL.

"Our goal is to protect data wherever it may reside," said Stealthbits GM of Products Jeff Warren. "Amazon S3 and Azure SQL are among the most popular places for organizations to store data in the cloud, serving the needs of thousands of enterprises globally," Warren continued.

StealthAUDIT 10.0 has been enhanced to provide comprehensive platform support for AWS S3 Buckets and Azure SQL databases, providing an aggregated, normalized, single pane of glass view of user access, activity, and sensitive data across their entire environment.

Among the more noticeable enhancements in StealthAUDIT 10.0 is an overhaul of the product's web reporting, governance, and console interfaces, in an effort to provide greater ease of use and a more modern aesthetic.

"While these interfaces will feel fresh and new, they will still be instantly familiar to users, eliminating any learning curve that often comes with these sorts of enhancements. The redesign also brings the added benefit of a consistent look and feel across all web-based interfaces within the Stealthbits portfolio, which users will enjoy and appreciate as they navigate across products," said Adam Laub, Stealthbits CMO.

Other important enhancements in StealthAUDIT 10.0 include speed optimizations for faster scanning of sensitive data, improved reporting, and streamlined query-building for ease of use and increased user satisfaction.

Excerpt from:
Stealthbits enhances security and threat protection, expands cloud platform coverage - Help Net Security


Interconnection: Understanding the Types of Connections, and When to Use Them – Data Center Frontier

An aerial view of the CoreSite VA3 data center campus in Reston, Virginia. (Photo: CoreSite)

A new special report from CoreSite and Data Center Frontier explores how colocation can be the nervous system of today's modern digital businesses. The fourth entry in this special report series offers a better understanding of interconnection, its different types, and the reasons for using a colocation provider for interconnection services.


Interconnection refers to physical and virtual data connections between companies that enable the rapid exchange of data. Colocation and interconnection have been inseparable since the early days of the industry, but the interconnection component is growing as connectivity becomes an important driver of the colocation decision for cost, security and scalability reasons.

Colocation provides a central meeting place for networks, clouds and enterprises to host their physical infrastructure and interconnection enables them to efficiently exchange traffic with one another. Selected IT services from business partners can be physically located in a colocation facility for maximum performance.

This approach takes advantage of the fact that colocation facilities are optimized to house multiple customers. Those customers can connect with each other over many different mediums including Ethernet to enable data exchange at extremely high speeds and at low cost without the latency penalty of public networks or the cost of private leased lines from an on-premises data center. Accessing the cloud via dedicated connections within a colocation facility can cut bandwidth costs by 60% compared to using the public Internet.

Interconnection has grown in importance as cloud usage has expanded and diversified. Companies now use an average of 2.3 infrastructure-as-a-service providers and two platform-as-a-service providers. A Skyhigh Networks analysis of anonymized data from over 600 enterprises in 2016 found that the number of cloud-based applications in use had nearly tripled over a three-year period to an average of 1,427, a figure that is no doubt higher today.

All cloud and many SaaS providers use on-ramps such as Amazon Web Services Direct Connect or Microsoft Azure ExpressRoute to enable customers to establish dedicated connections between their networks and the cloud providers. Most of these on-ramps are actually routed through colocation facilities in order to reach the largest number of customers. Colocation providers partner with the big cloud platform providers to enable customers to establish private, low-latency network links between their dedicated infrastructure and cloud infrastructure providers for optimal performance of a hybrid IT environment. Customers benefit from reduced data transfer costs, improved security and optimal network performance along with the ability to connect to multiple cloud providers through a single interconnection point.

For example, the CoreSite Open Cloud Exchange enables direct, private, virtual connectivity into AWS, Microsoft Azure, Google Cloud Platform and Oracle Cloud. Customers have the flexibility to select speeds and connectivity options that meet their needs and to scale bandwidth as needed, all from a central console.

Customers using multiple cloud providers can enjoy access speeds equivalent to that of dedicated on-ramps along with the flexibility to switch sessions easily between cloud providers and transfer data with reduced egress charges, which can run as high as 12 cents per gigabyte.
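To put that egress figure in context, a back-of-the-envelope comparison helps; other than the 12 cents per gigabyte cited above, every number below is a hypothetical placeholder:

```python
# Rough egress cost comparison: public-internet egress at the cited $0.12/GB
# versus a hypothetical reduced rate over a dedicated connection. The monthly
# volume and the reduced rate are invented examples, not published pricing.

EGRESS_PER_GB_INTERNET = 0.12   # rate cited in the article
EGRESS_PER_GB_DIRECT   = 0.02   # assumed reduced rate via a dedicated connection
monthly_transfer_gb    = 50_000 # hypothetical 50 TB/month of outbound traffic

print(f"Public internet egress:   ${monthly_transfer_gb * EGRESS_PER_GB_INTERNET:,.0f}/month")  # $6,000
print(f"Dedicated-connect egress: ${monthly_transfer_gb * EGRESS_PER_GB_DIRECT:,.0f}/month")    # $1,000
```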

PEERING EXCHANGE

In a peering exchange, two networks connect and exchange traffic with each other without having to pay a third party such as a telecom operator or Internet service provider to carry that traffic across the Internet. The operator can adjust routing to avoid bottlenecks and optimize performance. Peering essentially keeps traffic local to avoid the latency caused by multiple Internet hops.

CROSS CONNECT

A cross connect is the equivalent of running a fiber or copper cable between each company's servers. A physical, hardwired cable is provisioned between two different termination locations within the colocation data center, enabling high-performance, dedicated connectivity, excellent reliability and minimal latency.

INTER-SITE CONNECTIVITY

Inter-site connectivity provides communications between campuses in the colocation provider's network. This enables customers to access all the providers they need in an interconnection-dense facility and to grow without constraints within campus facilities. A variety of carrier-grade transport options are provided, as well as common-carrier access to other regional interconnection hubs that are ideal for remote locations.

BLENDED IP

A blended IP service provides the convenience of the colocation provider working with a variety of upstream carriers and ISPs to create a highly reliable, SLA-backed solution based on a fully redundant network architecture that provides the best performance across providers.

In addition, virtualized services like the CoreSite Open Cloud Exchange provide a platform for accessing multiple public cloud providers and connecting distributed deployments across multiple markets. Using inter-site connectivity, customers can build multi-region cloud architectures from a single port with no long-term commitments.

One of the most revolutionary concepts introduced by the cloud has been pay-as-you-go pricing. Today's colocation services enable the same model to be applied to interconnection services, so that IT resources can be provisioned and scaled up and down in a cloud-like manner with no long-term commitments.

Major colocation providers operate multiple large data centers with high-speed pipes between them. For latency-sensitive applications that require processing to be as close to the point of delivery as possible, such as edge computing and IoT environments, this distributed architecture offers customers more choices about how to deploy infrastructure. Instead of isolating IT resources in a single data center, compute and storage can be distributed across multiple colocation facilities for maximum speed and redundancy.

Interconnection enables data exchange between two or more entities or partners at the fastest possible speed by combining high-performance networks with physical proximity. Customers avoid having to work around the unpredictability of the public Internet and data communication costs are minimal.


There are other reasons to consider interconnection services from a colocation provider:


Download the full report, How Colocation Can Be the Nervous System of Digital Business, courtesy of CoreSite and Data Center Frontier, which explores how colocation might just be the answer to remaining competitive in today's markets.

See the article here:
Interconnection: Understanding the Types of Connections, and When to Use Them - Data Center Frontier


Calculating the true cost of cloud – ITWeb

Darren Bak, Synthesis Technologies.

There is no question that cloud computing is the way of the future, and that having access to the cloud can benefit organisations of every type and size. But how many businesses have really considered the true costs associated with doing business in the cloud? With uncertain economic times and a slew of different cloud and on-prem options available, it has become critical for organisations embarking on a cloud journey to thoroughly examine the total cost of ownership of cloud versus their existing infrastructure before making a move.

"The public cloud isn't the silver bullet that everyone is saying it is," says Joshua Grunewald, cloud hosting manager at Saicom. "It has a purpose for specific systems or applications, which are very specific to any particular business. It may be a 95% fit for one business, a 5% fit for another, or a 50/50. IT leaders need to know what they need and how to make use of what's available."

"When it comes to auditing your current costs, it is about making sure that you have factored absolutely everything in, from infrastructure to shared costs and day-to-day operational costs," he adds. "A lot of people forget the support contracts and agreements that you have to renew on an annual basis. This is generally a capex cost and will have to be depreciated over a certain period of time, which means there are opportunity costs. A good checklist would include hardware, support and maintenance agreements, software and licensing, and staff resources to manage the infrastructure (including shared resource time from other teams). It would also include overheads such as power, cooling, security, fire suppression and environment control, and any associated support contracts."

The easiest way to conduct an audit is to use a software tool, of which there are a variety on the market to choose from, says Barry Kemp, head of IaaS at Vox. A key aspect is to have all the right information from the business to input into the tool, for example the cost of electricity consumption, HR costs for the IT team, rent, the hardware currently being used by the business, and where it is in its depreciation cycle, in order to build a holistic picture of the business's infrastructure cost.

Darren Bak, head solutionist at Synthesis Technologies, says most organisations do not have an up-to-date inventory management data store, and therefore the best approach to data collection is an automated one. TSO Logic (now part of AWS) can automatically gather the infrastructure information and generate a business case for you. However, gathering infrastructure costs alone does not give you a full picture. Application, people, third-party and migration costs all need to be gathered in order to produce a well-rounded cloud business case.

It is surprising how many businesses underestimate their costs for owned infrastructure, says Andrew Cruise, MD of Routed. The main reasons are firstly their inability to apportion shared costs correctly, and secondly their inability to value risk and business agility. It is trivial to cost infrastructure initially by looking at the capital expense of purchasing or financing the hardware, and often the software. These are usually individual line items in an income statement or ledgers on a balance sheet, easy to audit and ascertain.

But digging deeper, these costs are augmented by the costs of the server room estate, power, cooling and security, and by operating expenses such as human resources. Often these items are bulk line items in a business's income statement, such as power, cooling and rent. As for human capital, the resources invariably have multiple responsibilities, including infrastructure administration and management, and it's difficult to separate out the infrastructure costs. To audit these costs, businesses need to identify all contributing costs to the infrastructure and separate out the shared costs in a reasonable manner.

Alternatively, it's possible that some costs are simply not borne by the business at all, which in this case increases operational risk, such as a lack of security or understaffing. This too is difficult to value.

Once you have your infrastructure cost, it is fairly easy to determine the total cost of ownership, which will include HR costs, says Kemp. Free software tools, such as those Azure provides, can be used, and there are some paid ones that do a deeper dive into what the company's current hardware looks like. Once again, it is important to have the right data to put into the model to receive an accurate set of outputs.

For Grunewald it isn't so simple. If you browse the internet for five minutes around TCO, you will come across multiple articles talking about calculating total cost of ownership in many different ways. Is there really one correct or all-encompassing way to calculate this? I don't believe there is. What will work for one business will possibly not work for another. Calculating the total cost of ownership is really about assessing the long-term value of IT investments within an organisation, and infrastructure, whether on premises or in the cloud, is no different.

An effective cloud business case is made up of migration costs, TCO, cost optimisations and value benefits, says Bak. Analysing the intangible benefits will provide a more rounded business case to justify a cloud migration and create urgency to get ready for cloud. The majority of cloud business cases focus on cost savings; however, this is not the most compelling benefit. Business agility, productivity and resilience are far more so, but because they are intangible and therefore complex to calculate, they are often just ignored. Business strategy and objectives should drive any cloud strategy. Cloud adoption tends to be far more successful when executive sponsors see cloud as an enabler of business agility, productivity and resilience, not just cost savings.
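
One way to picture those four components together is a simple roll-up. The sketch below uses entirely hypothetical three-year figures and is not a method prescribed by Bak.

# Hypothetical three-year cloud business case roll-up (all figures invented).
migration_costs = 150_000          # one-off cost to move and convert workloads
on_prem_tco = 900_000              # current-state total cost of ownership
cloud_tco = 750_000                # projected cloud total cost of ownership
cost_optimisations = 80_000        # e.g. right-sizing, reserved capacity
intangible_benefits = 200_000      # agility, productivity, resilience (hardest to estimate)

net_benefit = (on_prem_tco - cloud_tco) + cost_optimisations + intangible_benefits - migration_costs
print(f"Indicative three-year net benefit: ${net_benefit:,.0f}")  # -> $280,000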

Speaking of the hyperscale cloud providers, Cruise says there are plenty of online calculators available to assist with assessing the costs of running infrastructure in their respective clouds. However, traditional IT workloads are often not suited to these hyperscale clouds, as they demand high resource usage such as storage IOPS and network data transfers. It's non-trivial to measure these on premises and then input them into calculators to determine the overall cost. Migration costs need to be taken into account and tend to be grossly underestimated. Many businesses make a sweeping decision to migrate to hyperscale cloud without assessing how much time it will take to convert their workloads into the native virtualisation format, let alone think about rearchitecting the applications in a cloud-native fashion.

Then there's the question of shadow IT, and how to get a handle on its true cost. According to Cruise, shadow or stealth IT usually refers to departments utilising hyperscale cloud providers (AWS, Azure, GCP) without central management. Costing this is relatively simple. For these services there must be invoices, which need to be discovered and aggregated. Of course, it's the discovery that is tricky; it's not called stealth IT for nothing. However, if a cloud migration is formally undertaken, the need for shadow IT recedes and all of these costs should be folded into the whole project.

Cost savings forecast in a business case are not always realised, mainly due to development teams not having a culture of cost transparency and accountability, adds Bak. The shift when migrating to cloud means application teams are now responsible for their spend and have the control and ability to optimise costs. This culture shift is not a priority in most organisations, and that leads to bill shock at the end of the month. The unintended consequence is that developers have free rein to use whatever cloud service they want without consideration of cost and, most importantly, of how the design of the solution impacts cost.

In a world where services are quite easy to consume, costing shadow IT can be very difficult unless one has the right processes in place. There are a few ways that this can be controlled, all of which should be employed to successfully get a handle on shadow IT, comments Grunewald. IT should be sitting with each department of the business and discussing its needs; understanding how to deliver services to each part of the business, and doing so successfully, will go a long way toward controlling shadow IT. Educating the business on how resources or services are consumed is also key to empowering it, which will deter it from trying to find alternatives and introducing unknowns and risks into the IT environment.

Kemp believes shadow IT is another problem that can be solved with software. Most of the cloud vendors have some sort of cost management capability built in that can show businesses if they have unused resources in the cloud or have spun up a virtual machine (VM) that is sitting idle. Robust governance policies are very important. These policies provide guidelines in terms of who has permission to spin up VMs or turn on services, and combined with an approval process they ensure the business has a handle on its cloud costs. In the days of capex, the IT team would have to motivate for a hardware expense, which meant it had to analyse whether it really needed that particular piece of kit. With cloud, because it is so easy, IT teams may spin up a server and give it very little thought, which leads to bill shock at the end of the month. The IT director is aware of the extra server, but doesn't necessarily conduct the analysis to find out if it is the right option and whether there is a better, more cost-effective way.
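
As a sketch of the kind of check such governance relies on, the snippet below flags idle VMs from exported utilisation data. The VM names, utilisation figures, costs and threshold are all hypothetical; in practice a cost-management tool would supply this data.

# Hypothetical utilisation export; a real cost-management tool would provide this data.
vms = [
    {"name": "app-01",   "avg_cpu_pct": 55, "monthly_cost": 300},
    {"name": "test-03",  "avg_cpu_pct": 2,  "monthly_cost": 180},
    {"name": "batch-07", "avg_cpu_pct": 1,  "monthly_cost": 450},
]

IDLE_THRESHOLD_PCT = 5  # assumed cut-off below which a VM is treated as idle

idle_vms = [vm for vm in vms if vm["avg_cpu_pct"] < IDLE_THRESHOLD_PCT]
wasted_spend = sum(vm["monthly_cost"] for vm in idle_vms)
print(f"{len(idle_vms)} idle VMs, roughly ${wasted_spend}/month of avoidable spend")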

Over and above shadow IT, there are other hidden costs that are hard to uncover and quantify, says Kemp. It is usually the small things that add up to big costs. An example is storage: in Azure it is charged according to the amount of storage a business uses, but it is also charged according to the number of transactions, and it is very difficult to work out beforehand how many transactions there will be. Cloud providers also charge for data coming out of their cloud, which makes it difficult to budget, as it depends on the application being used. Some applications may only push data into the cloud once, while others may need to send it out multiple times, and then the business has to budget for the extra bandwidth cost.
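
To illustrate how those small per-unit charges compound, here is a hypothetical monthly estimate. None of the rates below are published Azure prices; they are placeholders for the kinds of line items Kemp describes.

# Hypothetical monthly storage bill; all rates and volumes are invented placeholders.
storage_gb = 5_000
storage_rate_per_gb = 0.02          # assumed monthly $/GB stored
transactions = 50_000_000
rate_per_10k_transactions = 0.004   # assumed $ per 10,000 storage operations
egress_gb = 2_000
egress_rate_per_gb = 0.08           # assumed $/GB leaving the cloud

monthly_cost = (
    storage_gb * storage_rate_per_gb
    + (transactions / 10_000) * rate_per_10k_transactions
    + egress_gb * egress_rate_per_gb
)
print(f"Estimated monthly storage-related spend: ${monthly_cost:,.2f}")  # -> $280.00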

Month-end bill shock from the hyperscale cloud providers has been well documented, so many of the hidden costs are now known, adds Cruise. Quantifying them is tricky, but not impossible. Less understood is the cost of "stickiness", or being captive in any one particular cloud. The hyperscale cloud providers differentiate by offering specialist functions and services, which businesses begin to rely on and which can hold them hostage in the long run. Businesses should bear in mind that there are alternatives when it comes to infrastructure.

If you take like-for-like and move what you have on-prem into the cloud, it is generally going to be more expensive, adds Kemp. An assessment of the on-prem infrastructure is crucial so that the servers can be resized for the cloud. Most of the assessment tools will recommend that a smaller-sized VM can be moved into the cloud, which means the business ends up saving money. With cloud, the business can budget for what it needs tomorrow, and if it needs more resources later in the week it can simply increase its requirements. The cloud requires a different mindset: although you budget a year in advance, you definitely don't have to budget three years in advance on your spending. Another aspect to consider is managing costs across the different clouds. With more multinational datacentres set to arrive this year, cost management across different clouds can be a challenge.
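
A simple way to see the effect of the right-sizing Kemp describes is to compare a like-for-like move against resized VMs. The VM count and monthly prices below are hypothetical.

# Hypothetical comparison of a like-for-like migration versus right-sized VMs.
vm_count = 40
like_for_like_monthly = 350   # assumed $/month for a cloud VM matching the on-prem spec
right_sized_monthly = 220     # assumed $/month after the assessment recommends a smaller size

like_for_like_total = vm_count * like_for_like_monthly
right_sized_total = vm_count * right_sized_monthly
print(f"Like-for-like: ${like_for_like_total:,.0f}/month")
print(f"Right-sized:   ${right_sized_total:,.0f}/month")
print(f"Saving:        ${like_for_like_total - right_sized_total:,.0f}/month")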

For Bak, the most challenging aspect of cloud adoption in terms of hidden costs is, unsurprisingly, non-technical in nature. Upskilling your current IT teams on the new technology, creating new job roles and families, driving the change in culture and communicating this across the organisation is often underestimated and realised too late. From a technical perspective, your networking costs can become expensive; however, this can be mitigated by having experienced solution architects and cloud engineers with a deep and broad understanding of all the cloud services, how they work and, in particular, the cost metrics associated with them. Another hidden cost is the guardrails enabled to secure your AWS account, such as AWS Config for asset management, Amazon GuardDuty for threat detection, KMS for managing encryption keys, AWS CloudTrail for monitoring and alerting, and VPC Flow Logs for monitoring network traffic. These costs are often overlooked when designing applications in the cloud and are therefore not contained in the cost estimates.

So do the benefits of cloud outweigh those of keeping everything on-prem? Definitely, says Kemp. Simply put, there is less for the business to worry about, such as load shedding, which is a massive challenge for companies, as well as connectivity. Headaches such as these go out the window. However, companies have to guard against thinking that they will save on their HR costs when they take their servers off-site. Moving to the cloud means that the business still requires people to run those services, even though they will be delivering more high-value tasks.

Not always, says Grunewald. A company needs to properly consider what its needs are and deploy its environments accordingly. This is why, with a greater knowledge of what is out there, many companies are employing hybrid or multi-cloud strategies. Companies can deploy into the hyperscalers only what needs to be in the public cloud, keeping everything else on-prem or in a private cloud.

Most cultural changes, such as Agile, DevOps and the like, are impractical and realistically not achievable without cloud, says Bak. Similarly, cloud is the enabler for initiatives such as digital transformation, AI and big data. Without cloud, all these initiatives are slow and expensive, and they hinder innovative thinking. Time to market in highly competitive sectors is the difference between customer attrition and true increases in market capitalisation. And while the cost of implementing cloud has meant that CIOs prefer to stick with the status quo, laggard organisations are now forced to go to cloud as the threat of disruption becomes a reality, especially in the fintech space, where start-ups are far more agile and productive.

At the beginning of your journey, you need to answer the core reasons for going to cloud, concludes Bak. The easiest way to stimulate conversation with technical executives is by asking this simple question: what would happen if you did nothing? For financial areas, use the term opportunity costs; for marketing, sales and business areas, call them intangible benefits. These mutually agreed reasons should drive all decisions about technology, process, tools and priority.

See more here:
Calculating the true cost of cloud - ITWeb
