Category Archives: Cloud Hosting

Microsoft Signs Deal to Put More Apps on Huawei’s Cloud in China – eWeek

The Redmond, Wash.-based software maker Microsoft and Chinese tech giant Huawei announced today that they have formed a strategic partnership that will bring more of Microsoft’s enterprise applications to the Huawei cloud ecosystem.

The companies signed a memorandum of understanding, an early step in establishing the partnership, during a ceremony at the Huawei Connect 2017 conference in Shanghai, China. Huawei already serves up some Microsoft products on its cloud, including Windows Server and a relational database service that supports SQL Server. In a Sept. 5 media advisory, Huawei said the expanded partnership will “bring more Microsoft enterprise-level products online.”

Alain Crozier, CEO of Microsoft China, echoed his company’s focus on helping to make digital transformation-enabling technologies more accessible to businesses.

“The fourth industrial revolution, driven by technology innovation, is creating opportunities for customers to achieve more across nearly every industry,” he said in a statement. “As a global leader in enterprise IT, Huawei is a strategic partner for Microsoft in the mission to empower organizations as they transform.”

However, the partnership is not without risks, cautioned Marty Puranik, CEO of cloud hosting provider Atlantic.Net. Puranik said that hardware makers such as Huawei, which is best known for manufacturing networking products, smartphones and other devices, have a spotty track record when it comes to sustaining a successful cloud business. “The partnership is interesting, because Huawei is traditionally a hardware maker that competed with Cisco,” Puranik told eWEEK in email remarks.

“Cisco tried ‘intercloud’ which was something like this, but failed. So it will be interesting to see if this succeeds. Other hardware makers like HP also tried doing public cloud and failed, so traditional hardware makers have not had success in doing this,” Puranik wrote, noting that the skills used in hardware manufacturing often don’t translate well into the realm of the public cloud.

There’s a bit more upside for the Microsoft half of the partnership, Puranik predicted.

“Huawei will have a big footprint in China, and obviously gets Microsoft penetration inside of a huge market (and inside the great firewall). So Huawei could be a great partner,” he said. “This is a win for Microsoft to help reach more enterprise-level customers in China.”

Still, there remains some uncertainty. The memorandum of understanding is an initial step in formalizing the partnership, meaning that it may take some time before additional Microsoft products show up on Huawei’s cloud, said Puranik.

Microsoft, meanwhile, already enjoys a big lead in the worldwide cloud application market.

Last week, Synergy Research Group released its latest analysis of the software-as-a-service market, stating that Microsoft is well ahead of its rivals. The analyst firm credited the software giant’s $26 billion acquisition of LinkedIn as part of the reason it has been able to fend off Salesforce, Adobe, Oracle, SAP and other cloud software providers.

Read the original post:
Microsoft Signs Deal to Put More Apps on Huawei’s Cloud in China – eWeek

Cloud Hosting Providers CommsTrader’s UK Channel Guide 2017 – CommsTrader

What are Cloud Hosting Providers?

Cloud hosting is arguably the most important technological advancement of the decade, and it has drastically changed the way businesses communicate across a number of different platforms.

With Cloud Hosting (data centre hosting), businesses can save money, improve efficiency and collaboration, streamline their communications, minimise security threats, engage better with their customers, and gain much greater control and flexibility over their business communications.

With Cloud Hosting providers (or vendors), channel partners and resellers are given complete cloud packages moulded together by specialists who use their years of experience and knowledge in the industry to create a failsafe platform for mission-critical hosting.

The CommsTrader Guide is a service that provides cutting-edge news on all the latest products that the UK Unified Communications (UC) top vendors have to offer and explains what benefits they bring to businesses.

In the guide, readers can find an alphabetised list of all the latest product groups, each containing a short yet informative overview of what they are, what they do and what types of businesses would benefit most from investing in them.

Here’s a list of wholesale cloud hosting companies in the UK, in alphabetical order.

If you’re a data centre provider that offers channel partners and service providers a wholesale model and would like us to add your company to this list, please contact us.

Interested in Unified Communications? Sign up to our newsletter here.

See the article here:
Cloud Hosting Providers CommsTrader’s UK Channel Guide 2017 – CommsTrader

Huawei announces Microsoft cloud apps partnership – ZDNet

Chinese networking giant Huawei has announced signing a memorandum of understanding (MoU) for strategic cooperation on cloud services with Microsoft.

Huawei Cloud already provides Windows Server and Microsoft RDS for SQL Server, with Huawei now set to bring more Microsoft enterprise products online.

According to Huawei Cloud BU and IT Product Line president Zheng Yelai, Huawei and Microsoft will “build an open and win-win ecosystem”.

“As a global leader in enterprise IT, Huawei is a strategic partner for Microsoft in the mission to empower organisations as they transform,” Microsoft China CEO Alain Crozier added.

“Our increased collaboration will drive innovation as we build a seamless platform to benefit customers through industry-leading technology. Together, we are confident that we will lead, and win, in the era of digital transformation.”

Joint market expansion and marketing activities are also planned between Huawei and Microsoft.

“The two companies will initiate in-depth cooperation on the public cloud to provide the optimal experience of cloud services for enterprises and help enterprises implement digital transformation,” Huawei said.

“Joint innovation by the two parties provide customers with trusted hosting and various enterprise applications as a service.”

The news follows Huawei in April saying it was preparing to expand its public cloud offering outside of its home market of China, after setting up a new cloud business unit.

Huawei’s rotating CEO Eric Xu at the time said an “all-cloud ICT infrastructure” is key to enabling the digital transformation for enterprises, with Huawei looking to help telcos build cloud-computing capabilities and digitise their operational systems.

“Beginning in 2017, Huawei will focus on public services,” Xu said in April.

“We will invest heavily in building an open and trusted public cloud platform, which will be the foundation of a Huawei cloud family [that] will include public clouds we develop together with operators, and public clouds that we operate on our own.”

Clement Teo, Ovum’s principal analyst, at the time commented that Huawei’s small United States cloud footprint meant it was missing out on a large revenue stream.

“US will be important for them and, at least, in the long-term, they will need to be in the market,” Teo said.

Catherine Du, Huawei’s director of marketing for enterprise, told ZDNet that Huawei planned to approach the US enterprise market “step by step”.

Huawei made just 44.1 billion yuan in the Americas region in FY16, out of its total revenue of 521.6 billion yuan for the financial year.

Microsoft has had a strong focus on cloud, in July taking the wraps off its enterprise and business cloud services Microsoft 365 bundles.

The Microsoft 365 Enterprise E3 and E5 offerings contain Windows 10 Enterprise, Office 365 Enterprise, and Enterprise Mobility + Security, which includes Azure Active Directory Premium, Intune, Azure Information Protection Premium, Cloud App Security, and Advanced Threat Analytics.

Microsoft 365 Business includes Office 365 Business Premium plus security and management features for Office apps and Windows 10 devices via a single console.

The Redmond-based tech giant then attributed its strong financial results in July to growth in its cloud business segment; Q4 revenue in Intelligent Cloud was $7.4 billion, a year-on-year increase of 11 percent.

Within that, server products and cloud services revenue increased by 15 percent, driven by Azure revenue growth of 97 percent.

Microsoft plans on reaching an annualised commercial cloud run rate of $20 billion by the end of its 2018 financial year, last month acquiring Cycle Computing to bolster its Azure software development capabilities.

In July, Microsoft also detailed plans to turn Azure into an artificial intelligence (AI) cloud.

Huawei is similarly focused on AI, earlier this week unveiling its Kirin 970 chipset with built-in AI and calling it the “future of smartphones”.

The Kirin 970 mobile AI computing platform is powered by an 8-core CPU and a 12-core GPU, which Huawei said delivers 25x greater performance and 50x greater efficiency compared to a quad-core Cortex-A73 CPU cluster. It has a dedicated neural processing unit (NPU), and was built using a 10nm advanced process with 5.5 billion transistors contained in an area of just one square centimetre.

According to Huawei Consumer Business Group CEO Richard Yu, the company’s mobile AI is made up of a combination of cloud AI and on-device AI.

Huawei has additionally provided the Kirin 970 as an open platform for mobile AI developers and partners.

“Huawei is committed to developing smart devices into intelligent devices by building end-to-end capabilities that support coordinated development of chips, devices, and the cloud,” Yu said on Saturday.

Read the original here:
Huawei announces Microsoft cloud apps partnership – ZDNet

Trump Effect Could Cost US Cloud Providers Over $10 Billion: Report – Web Host Industry Review

Cloud computing companies in the U.S. could lose more than $10 billion by 2020 as a result of the Trump administration’s reputation regarding data privacy, according to Swiss hosting company Artmotion.

A whitepaper published by Artmotion suggests that the growth rate in U.S. cloud revenue relative to the rest of the world will decline significantly more than previously forecast by IDC.

See also: Tech Goes From White House to Doghouse in Trump’s Washington

IDC’s Worldwide Public Cloud Services Spending Guide predicts that the U.S. will account for 60 percent of cloud revenue worldwide through 2020. The same research, however, suggests revenue growth in the U.S. will be lower than that in all seven other regions analyzed by IDC, and according to Artmotion it does not take into account the sharply falling confidence businesses have in the capacity of U.S. companies to protect the privacy of data in the cloud.

While these figures may be concerning for U.S. service providers already, they don’t take full account of the scale of the disapproval of President Trump’s actions since taking office, according to Mateo Meier, CEO of Artmotion.

Artmotion’s own research shows that half of U.S. and U.K. citizens feel online data privacy is less secure under President Trump. Further, 24 percent are most concerned about their own government, while only 20 percent consider the Russian government most concerning, and 15 percent fear the Chinese government. Both Russia and China were considered a greater threat to data privacy by Americans in Artmotion’s 2015 survey.

Artmotion, which has seen a 14 percent increase in revenue from U.S. companies from 2016 to 2017, estimates the total loss in revenue to U.S. cloud companies will be $1 billion this year, over $3 billion in each of 2018 and 2019, and $2.7 billion in 2020.

The whitepaper was released before the U.S. Department of Justice took the unprecedented step of demanding logs containing the IP addresses of visitors to the anti-Trump website disruptj20.org.

The study cites survey results released by Pew Research Center, which show confidence in the U.S. president’s handling of international affairs falling from 64 percent at the end of Obama’s term to only 22 percent at the beginning of Trump’s.

“(A)ny government, legislative and regulatory uncertainty is likely to make organizations think twice about where they host their data,” Meier writes in the whitepaper.

Artmotion reported a 45 percent increase in revenue immediately following the PRISM program revelations of 2013, though the loss of business confidence in U.S. data privacy protections did not reach the $35 billion impact through 2016 estimated by the Information Technology and Innovation Foundation at the time.

Part of the reason for this may be slower than expected cloud adoption by European businesses.

IDC research indicates that Western European cloud adoption is about to catch up to that of U.S. businesses, just as the EU seeks clarification from the U.S. about the impact of an early President Trump executive order on the EU-US Privacy Shield.

Read the rest here:
Trump Effect Could Cost US Cloud Providers Over $10 Billion: Report – Web Host Industry Review

Cloud Native The Perfect Storm for Managed SD-WAN Services – Network World

We are excited to announce today that Silver Peak has joined MEF. With 130+ members, MEF has launched new SD-WAN initiatives intended to address implementation challenges and help service providers accelerate managed SD-WAN service deployments. Some of this work involves defining SD-WAN use cases, and a key use case revolves around connecting distributed enterprises and users to cloud-hosted SaaS applications and IaaS.

Enterprise CIOs continue to accelerate the pace of corporate digital transformation initiatives, often including plans to migrate enterprise applications to the cloud. Cloud-first is often the preferable choice for hosting new applications, enabling enterprises to securely connect users to applications from anywhere and across any type of WAN service.

The migration from data center-hosted to cloud-hosted applications is a perfect storm for building an SD-WAN. The best-in-class SD-WAN solutions and services take an application-aware approach that focuses on performance and availability.

Internet-Destined Traffic on the Rise

Two years ago, as an analyst, one of the questions I always asked of my enterprise clients was: what percentage of your application traffic is destined for the internet? At the time, the answer was less than 50%. Today, based on many recent conversations with Tier 1 service providers and distributed enterprises, I estimate that the percentage of internet-bound traffic has increased to 85%.

However, using broadband Internet services for the enterprise WAN, particularly for accessing cloud-native applications, poses additional challenges for enterprises that are concerned with the security, performance and visibility of their applications and network.

In fact, according to a recent Frost & Sullivan SD-WAN blog, 43% of enterprises cited improved cloud connectivity, delivering a better SaaS experience, as the second most compelling reason to deploy an SD-WAN solution.

Ensuring High-Quality Cloud Connectivity

So how do enterprise IT managers ensure an equivalent customer experience in performance, security and visibility for cloud-hosted SaaS applications as for data center-hosted applications, irrespective of the underlying network connectivity?

A best-in-class SD-WAN cloud connect use case can address the performance, security and visibility challenges for both on-net and off-net sites and across any network service including broadband.

Today, managed service providers offer either private MPLS or Ethernet cloud connect services for enterprises to connect on-net branch sites to a limited set of SaaS and IaaS providers. There are four key challenges that may limit the opportunity to fully address the managed cloud connect market:

1. With existing service provider cloud connect offerings, traffic from enterprise users at off-net sites must be backhauled to the nearest on-net provider PoP. This can introduce latency and adversely affect SaaS application performance.

2. The complexity of identifying and securing all of the enterprise’s SaaS application traffic requires additional resources and security policy flexibility to integrate a secure web gateway, enterprise branch firewall, or network-based security service.

3. The requirement to identify, manage and prioritize trusted applications vs. personal web applications (YouTube, Facebook or Netflix) on the first packet is difficult once an application flow has been directed to a specific path.

4. There are incremental expense, time and capital resource requirements to establish direct cloud connect peering relationships with every SaaS provider and for all SaaS data center sites.

Silver Peak SD-WAN Addresses Cloud Challenges

The Silver Peak Unity EdgeConnectSP SD-WAN solution addresses a full spectrum of key requirements for developing and deploying a compelling SD-WAN cloud connect service that can address all four key challenges:

1. Dynamic and secure steering of cloud-destined application traffic to any SaaS provider

2. Policy-based automated local internet breakout for trusted cloud applications with First-packet iQ that identifies and classifies applications on the first packet of each connection

3. High-performance SaaS optimization which calculates the round-trip latency and automatically selects the optimal cloud connect path for 50+ SaaS applications

4. Simple security service chaining to secure web gateways and industry-leading next generation firewalls to support granular security policies for SaaS and web-based applications.

5. Minimized requirement to backhaul off-net cloud-destined application traffic to the nearest service provider MPLS PoP

By taking advantage of these advanced EdgeConnectSP features and capabilities, service providers can extend their existing cloud connect services beyond the MPLS private cloud connect use cases and offer an advanced SD-WAN Cloud Connect service. This creates an enormous opportunity for service providers to offer tiered, managed cloud connect services that enable SLAs for public, private on-net and off-net deployments of SaaS applications.

As a new MEF member, Silver Peak looks forward to contributing to and enhancing the service provider market opportunity for new on-demand, tiered managed SD-WAN services.

Read the original here:
Cloud Native The Perfect Storm for Managed SD-WAN Services – Network World

Cost Reduction Strategies on Java Cloud Hosting Services – InfoQ.com

Key Takeaways

Cloud resources can be expensive, especially when you are forced to pay for resources that you don’t need; on the other hand, resource shortages cause downtimes. What’s a developer to do? In this article we will discuss techniques for determining the golden medium that lets you pay for just the resources you actually consume, without being limited as your application capacity requirements scale.

The first step to any solution of course is admitting that you have a problem. Below are some details on the issue that many cloud users face.

Almost every cloud vendor offers the ability to choose from a range of different VM sizes. Choosing the right VM size can be a daunting task: too small, and you can trigger performance issues or even downtimes during load spikes; over-allocate, and all unused resources are wasted during normal load or idle periods. Does this scenario look familiar from your own cloud-hosted applications?

And when the project starts growing horizontally, the resource inefficiency issue replicates in each instance, and so, the problem grows proportionally.

In addition, if you need to add just a few more resources to the same VM, the only way out with most current cloud vendors is to double your VM size. See the sample AWS offering below.


Exacerbating the problem, you incur downtime when you move: stopping the current VM, performing all the steps of application redeployment or migration, and then dealing with the inevitable associated challenges.

This shows that VMs are not particularly flexible or efficient in terms of resource usage and limit adjustment under variable loads. Such lack of elasticity leads directly to overpaying.

If scale out is not helping to use resources efficiently, then we need to look inside our VMs for a deeper understanding of how vertical scaling can be implemented.

Vertical scaling optimizes memory and CPU usage of any instance, according to its current load. If configured properly, this works perfectly for both monoliths, as well as microservices.

Setting up vertical scaling inside a VM, adding or removing resources on the fly without downtime, is a difficult task. VM technologies provide memory ballooning, but it’s not fully automated: it requires tooling to monitor memory pressure in the host and guest OS and then activate scaling up or down as appropriate. This doesn’t work well in practice, as memory sharing must be automatic in order to be useful.

Container technology unlocks a new level of flexibility thanks to its out-of-the-box automatic resource sharing among containers on the same host, with the help of cgroups. Resources that are not consumed within the limit boundaries are automatically shared with other containers running on the same hardware node.

And unlike VMs, the resource limits in containers can be easily scaled without reboot of the running instances.

As a result, the resizing of the same container on the fly is easier, cheaper and faster than moving to larger VMs.

There are two types of containers: application and system containers. An application container (such as Docker or rkt) typically runs as little as a single process, whereas a system container (LXD, OpenVZ) behaves like a full OS and can run full-featured init systems such as systemd, SysVinit and OpenRC, which allow processes to spawn other processes such as openssh, crond or syslogd together inside a single container. Both types support vertical scaling with resource sharing for higher efficiency.

Ideally, on new projects you want to design around application containers from the ground up, as it is relatively easy to create the required images using publicly available Docker templates. But there is a common misconception that containers are good only for greenfield applications (microservices and cloud-native). Experience and real-world use cases prove it is possible to migrate existing workloads from VMs to containers without rewriting or redesigning applications.

For monolithic and legacy applications, it is preferable to use system containers, so that you can reuse the architecture, configuration and so on that were implemented in the original VM design: use standard network configurations like multicast, run multiple processes inside a container, avoid issues with incorrect memory limit determination, write to the local file system and keep it safe across container restarts, troubleshoot issues and analyze logs in an already established way, use a variety of SSH-based configuration tools, and be liberal in relying on other important “old school” practices.

To migrate from VMs, monolithic application topology should be decomposed into small logical pieces distributed among a set of interconnected containers. A simple representation of the decomposition process is shown in the picture below.

Each application component should be placed inside an isolated container. This approach can simplify the application topology in general, as some specific parts of the project may become unnecessary within a new architecture.

For example, Java EE WebLogic Server consists mainly of three kinds of instances required for running in a VM: administration server, node manager and managed server. After decomposition, we can get rid of the node manager role, which is designed as a VM agent to add and remove managed server instances, as now they will be added automatically by the container and attached directly to the administration server using the container orchestration platform and a set of WLST (WebLogic Server Scripting Tool) scripts.

To proceed with migration, you need to prepare the required container images. For system containers, that process might be a bit more complex than for application containers, so either build it yourself or use an orchestrator like Jelastic with pre-configured system container templates.

And finally, deploy the project itself and configure the needed interconnections.

Now each container can be scaled up and down on the fly with no downtime. A container is much thinner than a virtual machine, so this operation takes much less time than scaling a VM. And the horizontal scaling process becomes very granular and smooth, as a container can easily be provisioned from scratch or cloned.

For scaling Java vertically, it is not sufficient to just use containers; you also need to configure the JVM properly. Specifically, the garbage collector you select should provide memory shrinking at runtime.

Such a GC compacts all the live objects together, removes garbage objects, and uncommits and releases unused memory back to the operating system. In contrast, with a non-shrinking GC or non-optimal JVM start options, a Java application holds all committed RAM and cannot be scaled vertically according to the application load. Unfortunately, the default garbage collector in JDK 8, Parallel GC (-XX:+UseParallelGC), does not shrink the heap and so does not solve the issue of inefficient RAM usage by the JVM. Fortunately, this is easily remedied by switching to Garbage-First (-XX:+UseG1GC).

Let’s see the example below. Even if your application has low RAM utilization (blue in the graph), the unused resources cannot be shared with other processes or other containers, as the memory is fully allocated to the JVM (orange).

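The gap between committed and used heap can be observed from inside the application itself. A minimal sketch using the standard `Runtime` API (the class name and printed format are my own, for illustration):

```java
// Shows the difference between the RAM the JVM has reserved ("committed",
// the orange line in the graph) and the RAM the application actually uses
// (the blue line).
public class HeapCommitVsUse {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long committed = rt.totalMemory();       // heap reserved by the JVM
        long used = committed - rt.freeMemory(); // heap actually in use
        System.out.println("Committed: " + committed / (1024 * 1024)
                + " MB, used: " + used / (1024 * 1024) + " MB");
        // With a non-shrinking collector, 'committed' never goes back down,
        // so the difference cannot be shared with other containers.
    }
}
```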

However, the good news for the Java ecosystem is that as of JDK 9, the modern shrinking G1 garbage collector is enabled by default. One of its main advantages is the ability to compact free memory space without lengthy GC pause times and to uncommit unused heap.

If you use a JDK release lower than 9, use the following parameter to enable G1: -XX:+UseG1GC

The following two parameters configure the vertical scaling of memory resources: -Xmx, which sets the maximum heap (the scaling limit), and -Xms, which sets the initial heap size.

Also, the application should periodically invoke a Full GC, for example via System.gc(), during a low-load or idle stage. This process can be implemented inside the application logic or automated with the help of the external Jelastic GC Agent.
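As a sketch of what such periodic invocation could look like inside application logic (the class name and the demo sleep are mine; this is not the Jelastic agent itself), a scheduled executor can request a Full GC at fixed intervals:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Periodically requests a Full GC so a shrinking collector such as G1
// can release unused heap back to the OS during idle periods.
public class IdleGcScheduler {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        Runnable fullGc = () -> {
            System.gc(); // request a Full GC (a hint, honored by most collectors)
            long usedMb = (Runtime.getRuntime().totalMemory()
                    - Runtime.getRuntime().freeMemory()) / (1024 * 1024);
            System.out.println("Full GC requested; used heap now ~" + usedMb + " MB");
        };
        // Fire immediately, then every 300 seconds (the interval used below).
        scheduler.scheduleAtFixedRate(fullGc, 0, 300, TimeUnit.SECONDS);
        Thread.sleep(1000); // for the demo, let the first run fire, then exit
        scheduler.shutdown();
        System.out.println("Idle GC scheduler stopped");
    }
}
```

In a real service the scheduler would run for the life of the application, ideally gated on a low-load check rather than firing unconditionally.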

In the graph below, we show the result of activating the following JVM start options, measured over time deltas of about 300 seconds:

-XX:+UseG1GC -Xmx2g -Xms32m


This graph illustrates the significant improvement in resource utilization compared to the previous sample. The reserved RAM (orange) increases slowly corresponding to the real usage growth (blue). And all unused resources within the Max Heap limits are available to be consumed by other containers or processes running in the same host, and not wasted by standing idle.

This proves that a combination of container technology and G1 provides the highest efficiency in terms of resource usage for Java applications in the cloud.

The last (but not least) important step is to choose a cloud provider with a “pay per use” pricing model in order to be charged only based on consumption.

Cloud computing is very often compared to electricity usage, in that it provides resources on demand and offers a “pay as you go” model. But there is a major difference – your electric bill doesn’t double when you use a little more power!

Most cloud vendors provide a “pay as you go” billing model, which means that it is possible to start with a smaller machine and then add more servers as the project grows. But as we described above, you cannot simply choose a size that precisely fits your current needs and will scale with you, without extra manual steps and possible downtimes. So you keep paying for the limits: for a small machine at first, then for one double the size, and ultimately for several underutilized VMs after scaling horizontally.

In contrast to that, a “pay as you use” billing approach considers the load on the application instances at a present time, and provides or reclaims any required resources on the fly, which is made possible thanks to container technology. As a result, you are charged based on actual consumption and are not required to make complex reconfigurations to scale up.


But what if you are already locked into a vendor with running VMs, paying for the limits, and not ready to change that? There is still a possible workaround to increase efficiency and save money: take a large VM, install a container engine inside, and migrate the workloads from all of the small VMs. Your applications will then run inside containers within the VM, a kind of “layer cake”, but this helps to consolidate and compact used resources, as well as to release and share unused ones.

Realizing the benefits of vertical scaling helps to quickly eliminate a set of performance issues, avoid the unnecessary complexity of rashly implemented horizontal scaling, and decrease cloud spend regardless of application type, whether monolith or microservice.

Ruslan Synytsky is CEO and co-founder of Jelastic, delivering multi-cloud Platform-as-a-Service for developers. He designed the core technology of the platform that runs millions of containers in a wide range of data centers worldwide. Synytsky worked on building highly-available clustered solutions, as well as enhancements of automatic vertical scaling and horizontal scaling methods for legacy and microservice applications in the cloud. Rich in technical and business experience, Synytsky is actively involved in various conferences for developers, hosting providers, integrators and enterprises.

See the original post:
Cost Reduction Strategies on Java Cloud Hosting Services – InfoQ.com

President Trump Could Cost US Cloud Computing Providers More Than $10 billion by 2020 – The Data Center Journal

The U.S. cloud computing industry stands to lose more than $10 billion by 2020 as a result of President Trump’s increasingly shaky reputation on data privacy, according to the latest research from secure data center experts Artmotion.

Growth for U.S. cloud computing providers is already thought to be slowing. Although IDC’s latest Worldwide Public Cloud Services Spending Guide suggests that the US will generate more than 60% of total worldwide cloud revenues through 2020, the country is expected to experience the slowest growth rate of the eight regions in the analysis.

However, this forecast slowdown does not factor in the effect that President Trump’s controversial record on data privacy has had on business confidence in the U.S. as a data hosting location. This coincides with a rapid increase in people expressing unfavorable opinions about the U.S. more generally. In fact, the latest study from the Pew Research Center highlights that just 22% of people have confidence in President Trump to do the right thing when it comes to international affairs.

As a result of this growing uncertainty, Artmotion’s new analysis suggests that U.S. cloud providers will experience further slowing of growth in the next three years, creating estimated losses of $10.1 billion for the industry between 2017 and 2020.

Mateo Meier, CEO of Artmotion, commented: “In a market that is still expected to grow significantly in the next few years, it is vital that U.S. service providers continue to attract new customers in order to retain market share. Despite the U.S.’s current dominance of the global cloud computing market, there is no certainty that the status quo will be maintained. Perhaps the key reason for US cloud providers to be fearful is that this isn’t the first time we’ve been here.

Edward Snowden's revelations about PRISM and the NSA's mass surveillance techniques were hugely damaging to U.S. cloud companies. They also encouraged many businesses to completely rethink their data strategies, rather than continuing to trust that U.S. cloud providers would guarantee the levels of data security and privacy they need. The impact that President Trump could have needs to be understood in that context.

Artmotion's full analysis is available as a free download here.

President Trump Could Cost US Cloud Computing Providers More Than $10 billion by 2020 was last modified: August 25th, 2017 by Press Release

Link:
President Trump Could Cost US Cloud Computing Providers More Than $10 billion by 2020 – The Data Center Journal

Why 2017 is the Year to Understand Cloud Computing – Business 2 Community

The cloud has become a major buzzword in business for very good reason. Small businesses and large enterprises alike can take advantage of cloud computing to build and expand the computer-based infrastructure behind the scenes. Follow this guide to better understand what cloud computing is, how it works, and how you can take advantage of it.

In the old world of web servers and internet infrastructure, websites and other online assets were typically limited to one main server, or a few servers linked by tools called load balancers, to process and send data, whether for a customer-facing website or an internal-facing application. The advent of content delivery networks (CDNs) powered up those servers to host and serve data from the edge of the network for faster serving and sometimes lower costs.

As computing demand exploded with the rise of the smartphone and high-speed internet, consumer and business needs downstream of those servers continue to creep upward. Cloud computing has emerged as the best option to handle an array of computing needs for startups and small businesses, thanks to the ability to start at a low cost and scale, almost infinitely, as demand grows. Advances in cloud technology at Amazon, Google, Microsoft, IBM, Oracle, and other major cloud providers are making cloud computing more desirable for all businesses.

When cloud computing first emerged, large enterprises were the only businesses able to afford the cost of elastic, flexible computing power. Now, however, those costs are often a drop in the bucket, even for small businesses.

For example, I use the cloud to store and serve videos for Denver Flash Mob, a side hustle business I run with my wife. Our monthly bill is typically around a dollar or two, and heavy months lead to a bill around five bucks. No big deal! My lending startup Money Mola is also cloud-based, with the costs of running both a development server and a public-facing server coming to around $30 per month.

The first time I logged into Amazon Web Services (AWS), it seemed like I needed a computer science degree to use it! I had a hard time doing even basic tasks beyond uploading and sharing videos. Thankfully, Amazon has since made AWS much easier to use, though it is not without its challenges.

I'm a pretty techy guy, so my skill set is a bit more advanced than the average computer user's. I have set up AWS to send outgoing transactional emails, automatically back up websites, and more, on my own. If you are willing and able to hire a cloud expert, the possibilities of the cloud are endless. Anything from web hosting to artificial intelligence and big data analysis can run in the cloud.


The most basic way to get started with cloud computing is website and computer backups. If you use WordPress for your website, setting up cloud backups is simple with one of a handful of plugins like UpdraftPlus. If you can use the WordPress dashboard, you can set up cloud backups with UpdraftPlus. It is quick and easy, with out-of-the-box support for services like AWS, Dropbox, Google Drive, and Rackspace Cloud. The paid plugin version adds access to Microsoft OneDrive and Azure, Google Cloud Storage, and other options.

I run several backups of both my laptop and my web-based assets. If my home were burglarized or burned down, the cloud has me covered. If my laptop is stolen, I have a backup at home and in the cloud. Redundant backups are not optional; they are a must in 2017.

In addition to safe, secure backups, the cloud can reach far corners of the planet. Using cloud-based CDNs, you know your customers will get every video and web page they want at near-instant speeds.

Let's say your business has a popular video you want to share around the world. With a cloud CDN, you upload your video to the web once. Then the CDN takes over and creates copies of that video file in data centers around the world. Whenever a customer clicks to view that video, they are served a copy from the data center closest to their location.

Thanks to the power of a CDN, you don't have to send viewers in Australia, London, Bangkok, and Buenos Aires a video from your web server in Texas. Each one gets a local copy, so they get their video even faster, offering a better customer experience. App-based businesses can even run multiple versions of their app in data centers around the world, ensuring every user has the same great experience.
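The "closest data center wins" idea above can be sketched in a few lines of code. This is a toy illustration only: real CDNs route requests via DNS resolution and anycast rather than explicit distance math, and the edge locations listed here are made up for the example.

```python
import math

# Hypothetical edge data centers holding copies of the uploaded video:
# (name, latitude, longitude)
EDGES = [
    ("texas", 31.0, -100.0),
    ("london", 51.5, -0.1),
    ("sydney", -33.9, 151.2),
    ("bangkok", 13.8, 100.5),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_edge(user_lat, user_lon):
    """Serve the viewer from whichever edge data center is closest."""
    return min(EDGES, key=lambda e: haversine_km(user_lat, user_lon, e[1], e[2]))[0]
```

With this sketch, a viewer in Melbourne would be served from the Sydney copy rather than the Texas origin, which is the whole point of pushing content to the edge.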

It doesn't matter what your business does; there is some way the cloud can help you achieve better results. The cloud is only going to grow and become more prominent in business. Older computing methods will go the way of the fax machine. If you want serious computing success with scalability and flexibility, the cloud is your best option.

Read the original post:
Why 2017 is the Year to Understand Cloud Computing – Business 2 Community

What You NEED To Look For In A Cloud Hosting SLA – TG Daily (blog)

In the modern world of business IT, the cloud is king; that's just a fact. According to a recent survey, 95% of all businesses are using either public or private cloud hosting services, and the vast majority of businesses contract with at least six different cloud computing providers.

This makes sense, of course. Cloud computing is inexpensive, reliable, and available even to SMEs (small-to-midsize enterprises), which often could not afford expensive, on-site IT infrastructure.

However, not all cloud hosting companies are the same. As the cloud becomes more and more important to critical business operations, a robust Service Level Agreement (SLA) is essential for any business with a cloud hosting partner.

Essentially, an SLA is a legally-binding document that defines performance standards, uptime, and customer support standards between a cloud provider and a business.

In this document, things such as expected network uptime, Mean Time Between Failure (MTBF), data throughput, and server/software performance are defined in plain language.

The requirements for both the hosting provider and the customer are also defined, as are the next steps that can be taken if either party fails to uphold its end of the contract.

An SLA is the single most important document you'll sign when choosing a new cloud hosting partner. So here's what you should look for before signing one.

Cloud hosting SLAs are complicated documents, but there are some simple things you can look for to ensure you're signing an SLA from a reputable company.

System Uptime: This is the single most important guarantee you can get in your SLA. Any reputable cloud hosting company should offer system uptime of 99.9% or higher, and have clear guarantees of compensation in case it fails to meet the uptime standard outlined in the SLA.

Clear Support and Response Time Guidelines: Your SLA should include guarantees for both the level of customer support and the response times of support staff. Try to choose a cloud hosting provider that offers 24/7 customer support and has a clear policy for fast, reliable response times.

Detailed Definitions of Data Ownership and Management: Any SLA you sign should include details about data ownership. The agreement must make clear that your company still owns any data hosted by the third party.

Your SLA should include language that makes your data ownership explicit, as well as detailed next steps for retrieving proprietary data in case you must break the service contract.

Clearly Defined System Security: An SLA should always include a set of security standards that are clearly defined and testable by you or a third party.

Your SLA should also allow you to take additional security precautions if desired, such as using a third-party auditing service to ensure your data is properly protected.

Steps That Can Be Taken in Case of Non-Compliance or Disputes: If your cloud hosting provider fails to uphold its SLA, there must be proper legal steps your company can take to exit the contract or obtain compensation from the provider.

A clear strategy for resolving conflicts should be defined, as should a clear exit strategy that can be implemented if the terms of the contract are breached.
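The uptime guarantee above is worth quantifying before you sign: an uptime percentage converts to a maximum allowed downtime, and the difference between "nines" is larger than it looks. A quick back-of-the-envelope sketch:

```python
def downtime_per_year_hours(uptime_percent):
    """Hours of downtime per 365-day year permitted by an uptime guarantee."""
    return (1 - uptime_percent / 100) * 365 * 24

# 99.9% ("three nines") still permits nearly 9 hours of outage per year,
# while 99.99% cuts the allowance to under an hour.
for nines in (99.9, 99.99):
    print(nines, round(downtime_per_year_hours(nines), 2))
```

Knowing these numbers makes it easier to judge whether a provider's compensation clause is proportionate to the outages its guarantee actually allows.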

Any reputable cloud hosting company in Canada should be willing to create an SLA with these terms, and if you find that your potential partner is unwilling to create a comprehensive SLA for any reason, walk away. You should never enter a contract with a cloud hosting provider without an SLA; the risks are simply too great.

An SLA is a multifunctional legal document. It protects both you and your cloud hosting partner, and ensures that your business relationship is mutually beneficial.

For this reason, you should only do business with reputable companies that offer robust SLAs. If you follow these tips and understand the basics behind SLAs, you're sure to find success when searching for a cloud hosting partner in Canada!

Read more from the original source:
What You NEED To Look For In A Cloud Hosting SLA – TG Daily (blog)
