
Is the cloud really safe? – IT-Online

Optimal cloud security requires a distinct way of thinking about IT infrastructure, writes Ray Pompon, principal threat evangelist at F5 Labs.

Back in the day, the theft and loss of backup tapes and laptops were a primary cause of data breaches.

That all changed when systems were redesigned and data at rest was encrypted on portable devices.

Not only did we use technology to mitigate a predictable human problem, we also increased the tolerance of failure.

A single lapse, such as leaving a laptop in a car, doesn't have to compromise an organisation's data. We need the same level of failure tolerance, with access controls and IT security, in the cloud.

In the cloud, all infrastructure is virtualised and runs as software. Services and servers are not fixed but can shrink, grow, appear, disappear, and transform in the blink of an eye. Cloud services aren't the same as those anchored on-premises. For example, AWS S3 buckets have characteristics of both file shares and web servers, but they are something else entirely.

Practices differ too. You don't patch cloud servers; they are replaced with new software versions. There is also a distinction between the credentials used by an operational instance (like a virtual computer) and those that are accessible by that instance (the services it can call).

Cloud computing requires a distinct way of thinking about IT infrastructure.

A recent study by the Cyentia Institute shows that organisations using four different cloud providers have one-quarter the security exposure rate. Organisations with eight clouds have one-eighth the exposure. Both data points could speak to cloud maturity, operational competence, and the ability to manage complexity. Compare this to "lift and shift" cloud strategies, which result in over-provisioned deployments and expensive exercises in wastefulness.

So how do you determine your optimal cloud defence strategy?

Before choosing your deployment model, it is important to note that there isn't one definitive type of cloud out there.

The National Institute of Standards and Technology's (NIST) definition of cloud computing lists three cloud service models: infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS). It also lists four deployment models: private, community, public, and hybrid.

Here's a quick summary of how it all works through a security lens:

* Software-as-a-Service (SaaS) cloud is an application service delivered by the cloud. Most of the infrastructure is managed by the provider. Examples include Office 365, Dropbox, Gmail, Adobe Creative Cloud, Google G Suite, DocuSign, and Shopify. Here, you are only responsible for your logins and data. Primary threats include phishing, credential stuffing, and credential theft. These can be controlled via solutions such as multi-factor authentication, application configuration hardening, and data-at-rest encryption (if available).

* Platform-as-a-Service (PaaS) cloud is a platform on which to build applications that are then delivered by the cloud. The provider manages the platform infrastructure, but you build and run the applications. Examples include AWS S3 buckets, Azure SQL Database, Force.com, OpenShift, and Heroku. Here, you are responsible for your logins, your data, and the applications you build. In addition to SaaS threats (access attacks), there is a need to secure the application itself against web app attacks. In this model, you are likely to have exposed APIs and service interfaces that could leak data if left unsecured (a minimal storage-hardening check is sketched after this list). Controls include user/role rights management processes, secure API gateways, web app security, web application firewalls, bot scrapers, and all the referenced SaaS controls.

* Infrastructure-as-a-Service (IaaS) cloud is a platform to build virtual machines, networks, and other computing infrastructure. The provider manages the infrastructure below the operating system, and you build and run everything from the machine and network up. Examples include AWS EC2, Linode, Rackspace, Microsoft Azure, and Google Compute Engine. You are responsible for the operating systems, networking, and servers, as well as everything in the PaaS and SaaS models. In addition to the threats targeting SaaS and PaaS models, the main security concerns are exploited software vulnerabilities in the OS and infrastructure, as well as network attacks. This calls for hardening of virtualised servers, networks, and services infrastructure. You'll need all the above-mentioned controls, plus strong patching, system hardening, and network security controls.

* On-Premises/Not Cloud is the traditional server in a rack, whether it's in a room in your building or in a colocation (colo) facility. You're responsible for pretty much everything. There are fewer worries about physical security, power, and HVAC, but there are concerns related to network connectivity and reliability, as well as resource management. In addition to threats to networks, physical location, and hardware, you'll have to secure everything else mentioned above.
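To make one of these controls concrete, here is a minimal sketch (Python with boto3, assuming AWS credentials are already configured) that lists S3 buckets whose public access block is missing or incomplete, one of the simpler ways a PaaS storage service can end up exposed. It is illustrative only, not a complete hardening check.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_missing_public_access_block():
    """Return bucket names that do not block all forms of public access."""
    risky = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            # All four flags must be True for the bucket to be fully locked down.
            if not all(cfg.values()):
                risky.append(name)
        except ClientError as err:
            # No public access block configuration at all is also treated as risky.
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                risky.append(name)
            else:
                raise
    return risky

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print(f"Bucket {name} does not fully block public access")
```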

If you have a hybrid cloud deployment, you'll have to mix and match these threats and defences. In that case, an additional challenge is to unify your security strategy without having to separately monitor and configure different controls across different models and environments. Other, specific organisational proficiencies integral to reducing the chances of a cloud breach include:

Technical skills and strategy

* A strong understanding of cloud technology, including its deployment models, advantages, and disadvantages at the IT executive/management level.

* A deep understanding of the operating modes and limitations of associated controls.

* Comprehensive service portfolio management, including tracking environment, applications, deployed platforms, and ongoing IT projects.

* Risk assessments and threat modelling, including understanding possible breach impacts and failure modes for each key service.

Access control processes

* Defined access and identity roles for users, services, servers, and networks.

* Defined processes to correct erroneous, obsolete, duplicate, or excessive user and role permissions.

* Methods for setting and changing access control rules across all data storage elements, services, and applications.

* Automated lockdown of access to all APIs, logins, interfaces, and file transfer nodes as they are provisioned.

* Centralised and standardised management of secrets for encryption and authentication.

Observability

* Defined and monitored single-path-to-production pipeline.

* Inventory of all cloud service objects, data elements, and control rules.

* Configuration drift detection/change control auditing (a minimal drift check is sketched after this list).

* Detailed logging and anomaly detection.
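As a simple illustration of configuration drift detection, the sketch below compares a live configuration snapshot against an approved baseline and reports every difference. The keys and values are hypothetical; in practice both dictionaries would be populated from your cloud provider's APIs or a configuration database.

```python
def detect_drift(baseline: dict, current: dict) -> list:
    """Return human-readable drift findings between a baseline and a live snapshot."""
    findings = []
    for key, expected in baseline.items():
        actual = current.get(key, "<missing>")
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    # Settings present in the live snapshot but absent from the baseline are also drift.
    for key in current.keys() - baseline.keys():
        findings.append(f"{key}: unexpected setting {current[key]!r}")
    return findings

# Hypothetical example: a hardened web-tier baseline vs. what is actually deployed.
baseline = {"ssh_port": 22, "public_ip": False, "tls_min_version": "1.2"}
current = {"ssh_port": 22, "public_ip": True, "tls_min_version": "1.2", "debug_mode": True}

for finding in detect_drift(baseline, current):
    print(finding)
```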

Adherence to secure standards

* Guardrails to ensure secure standards are chosen by default, including pre-security certified libraries, frameworks, environments and configurations.

* Audit remediation and hybrid cloud governance tools.

* Automated remediation (or deletion) of non-compliant instances and accounts (see the sketch after this list).

* Automated configuration of new instances that includes secure hardening to latest standard.
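And as a sketch of automated remediation, the following assumes an AWS environment in which compliant instances carry a hypothetical hardening-baseline tag applied by the provisioning pipeline; anything running without it is reported and, outside of dry-run mode, stopped rather than deleted.

```python
import boto3

ec2 = boto3.client("ec2")
REQUIRED_TAG = "hardening-baseline"  # hypothetical tag applied by the provisioning pipeline

def stop_noncompliant_instances(dry_run: bool = True):
    """Stop running instances that lack the required hardening tag."""
    noncompliant = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    noncompliant.append(instance["InstanceId"])
    if noncompliant and not dry_run:
        ec2.stop_instances(InstanceIds=noncompliant)
    return noncompliant

if __name__ == "__main__":
    print("Non-compliant instances:", stop_noncompliant_instances(dry_run=True))
```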

Strategy and priority decisions should come before the technology choices. Don't go to the cloud for the sake of it. A desired goal and a robust accompanying strategy will show the way and illuminate where deeper training and tooling are needed.


When the Cloud Falls to Earth. Is It Time for Your Organization to Consider Cloud Repatriation? – Data Economy

For many of today's applications and workloads, cloud computing offers the enterprise a host of advantages over traditional data centers, including lower operational and capital expenditures, improved time to market, and the ability to dynamically adjust provisioning to meet changing needs globally. Consequently, there has been a massive shift to cloud migration over the past decade, with cloud computing trends showing significant year-over-year growth since it was first introduced, and Cisco predicting that by 2021 cloud data centers will process 94 percent of all workloads. According to MarketsandMarkets, the global cloud computing market is projected to surge at a compound annual growth rate (CAGR) of 18 percent to reach approximately $623.3 billion by 2023, up from $272 billion in 2018.
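As a quick sanity check on that arithmetic, compounding the 2018 base at the quoted growth rate lands very close to the 2023 projection:

```python
# Compound annual growth: value_n = value_0 * (1 + CAGR) ** years
base_2018 = 272.0   # $ billion, as quoted
cagr = 0.18         # 18 percent
years = 2023 - 2018

projection_2023 = base_2018 * (1 + cagr) ** years
print(f"${projection_2023:.1f}bn")  # ~$622.3bn, close to the quoted $623.3bn (the CAGR is rounded)
```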

Today, however, we are seeing more companies bringing workloads back into their data centers or edge environments after having them run in the cloud for several years, because they didn't originally fully understand their suitability in a cloud environment. 451 Research has referred to this dynamic as "cloud repatriation", and a recent survey found that 20 percent of cloud users had already moved at least one of their workloads from the public cloud to a private cloud, and another 40 percent planned to do so in the near future.

All of this raises a deceptively simple question: how do I know when a workload would be better off running in or outside of the cloud?

When Latency, Availability and Control Are Key

As with any IT decision, an inadequately researched, planned and tested process is likely to cause setbacks for enterprise end-users when the organization at large faces uncertainty over whether to move an application or workload out of the public cloud and return it to an on-premises data center or edge environment.

Very often, moving an application or workload from the cloud makes good business sense when critical operational benchmarks are not being met. This might mean inconsistent application performance, high network latency due to congestion, or concerns about data security. For example, we know of one Fortune 500 financial services firm that was pursuing an initiative to move its applications and data to the public cloud and only later discovered that its corporate policy prohibited placement of personally identifiable information (PII) and other sensitive data beyond their internal network/firewall. Although many security standards are supported by public cloud providers, because of its internal policy, the financial organization opted to keep its data on-premises.


Some companies, such as Dropbox, have chosen to migrate from the public cloud to benefit their bottom line. While cost is but one criterion for leaving, it is a major one. In the wake of leaving the cloud, Dropbox was able to save nearly $75 million over two years.

Generally speaking, applications that are latency sensitive or have datasets which are large and require transport between various locations for processing are prime candidates for repatriation. Consider smart cities and IoT-enabled systems, which create enormous amounts of data. While cloud computing provides a strong enabling platform for these next-gen technologies because it provides the necessary scale, storage and processing power, edge computing environments will be needed to overcome limitations in latency and the demand for more local processing.

Additionally, if your applications and databases require very high availability or redundancy, they may be best suited to private or hybrid clouds. Repatriation also provides improved control over the applications and enables IT to better plan for potential problems.

Yes, moving to the cloud means a decrease in rack space, power usage and IT requirements, which results in lower installation, hardware, and upgrade costs. Moreover, cloud computing does liberate IT staff from ongoing maintenance and support tasks, freeing them to focus on building the business in more innovative ways. And yet, while many businesses are attracted to the gains associated with public or hybrid cloud models, they often do not fully appreciate the strategy necessary to optimize their performance. Fortunately, there are tools to assist IT teams to better understand how their cloud infrastructure is performing.

Demystifying Cloud Decision-Making

No matter the shape of an organization's cloud (public, private or hybrid), data center management solutions can provide IT staff with greater visibility and real-time insight into power usage, thermal consumption, server health and utilization. Among the key benefits are better operational control, infrastructure optimization and reduced costs.

Before any organization moves its data to the public cloud, the IT staff needs to understand how its systems perform internally. The unique requirements of its applications, including memory, processing power and operating systems, should determine what it provisions in the cloud. Data center management solutions collect and normalize data to help teams understand their current implementation on-premises, empowering them to make more informed decisions as to what is necessary in a new cloud configuration.
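To make the decision process concrete, here is a purely illustrative sketch that scores workloads as repatriation candidates using a few of the signals discussed above: observed latency, data moved out of the cloud, and sustained utilization. The workload names and thresholds are hypothetical, and real figures would come from the management tooling described here.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    p99_latency_ms: float        # observed end-user latency
    monthly_egress_tb: float     # data moved out of the cloud per month
    avg_cpu_utilization: float   # 0.0 - 1.0, sustained average

def repatriation_score(w: Workload) -> int:
    """Count how many hypothetical warning signs a workload shows."""
    score = 0
    if w.p99_latency_ms > 100:        # latency-sensitive apps suffer in a distant region
        score += 1
    if w.monthly_egress_tb > 50:      # heavy egress drives up cloud bills
        score += 1
    if w.avg_cpu_utilization > 0.70:  # steady, predictable load amortizes on-prem hardware well
        score += 1
    return score

workloads = [
    Workload("erp-reporting", 45, 2, 0.20),
    Workload("plant-telemetry", 180, 120, 0.85),
]
for w in workloads:
    flag = "consider repatriation" if repatriation_score(w) >= 2 else "likely fine in the cloud"
    print(f"{w.name}: {flag}")
```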

Intel Data Center Manager is a software solution that collects and analyzes the real-time health, power, and thermals of a variety of devices in data centers. Providing the clarity needed to improve data center reliability and efficiency, including identifying underlying hardware issues before they impact uptime, these tools bring invaluable insight to increasingly cloudy enterprise IT environments, demystifying the question of on-premises, public and hybrid cloud decision-making.

Here are some factors to consider when making a decision about embarking on a course of cloud repatriation:

If you answered yes to a majority of the questions above, it might be time to consider cloud repatriation.


Germany Healthcare Cloud Computing Market analysis and forecast 2019-2025 edited by leading research firm – WhaTech Technology and Markets News

Germany Healthcare Cloud Computing Market Size, Share & Trends Analysis Report, by Application (Clinical Information Systems and Nonclinical Information Systems), by Deployment Type (Private Cloud, Public Cloud and Hybrid Cloud), Forecast Period (2019-2025). The Germany healthcare cloud computing market is anticipated to grow at a CAGR of around 21.2% during the forecast period.

Germany's sophisticated economy is one of the major drivers of advancing digitalization in all areas of personal life and business.

Germany's well-established infrastructure and consumer base are driving demand for cloud services in the country. According to one estimate, 26% of German companies do not currently use or plan to use cloud services in their operations.


This indicates the huge market potential that Germany's private sector offers for cloud adoption. Germany's cloud market is attractive to international, regional, and domestic cloud service providers.

Further, the German government has encouraged and effectively implemented an EU or German domestic data infrastructure without needing to legislate for it, relying instead on pressure from German companies and consumers to store information on cloud servers located domestically. The rising adoption of electronic health records (EHR) in the country is also augmenting the growth of the Germany healthcare cloud computing market.

The Germany healthcare cloud computing market is segmented on the basis of application and deployment type. Based on application, the market is segmented into clinical information systems and nonclinical information systems.

Based on deployment type, the market is segmented into private cloud, public cloud and hybrid cloud.

The key players that are active in the market include CareCloud Corp., Cisco Inc., Deutsche Telekom, Dell Inc., GE Healthcare, IBM Corp., Merge Healthcare Inc., Microsoft Corp., Oracle Corp. and Siemens Healthineers. The market players are contributing considerably to market growth through the adoption of various strategies, including new product launches, mergers and acquisitions, collaborations with governments, funding of start-ups, and technological advancements, to stay competitive in the market.






Reasons to consider hyperconverged infrastructure in the data centre – Small Business


By 2023, 70% of enterprises will be running some form of hyperconverged infrastructure



Demand for on-premises data centre equipment is shrinking as organisations move workloads to the cloud. But on-prem is far from dead, and one segment that is thriving is hyperconverged infrastructure (HCI).

HCI is a form of scale-out, software-integrated infrastructure that applies a modular approach to compute, network and storage capacity. Rather than silos with specialised hardware, HCI leverages distributed, horizontal blocks of commodity hardware and delivers a single-pane dashboard for reporting and management. Form factors vary: enterprises can choose to deploy hardware-agnostic hyperconvergence software from vendors such as Nutanix and VMware, or an integrated HCI appliance from vendors such as HP Enterprise, Dell, Cisco, and Lenovo.

The market is growing fast. By 2023, Gartner projects 70% of enterprises will be running some form of hyperconverged infrastructure, up from less than 30% in 2019. And as HCI grows in popularity, cloud providers such as Amazon, Google and Microsoft are providing connections to on-prem HCI products for hybrid deployment and management.

So why is it so popular? Here are some of the top reasons.

A traditional data centre design comprises separate storage silos with individual tiers of servers and specialised networking spanning the compute and storage silos. This worked in the pre-cloud era, but it is too rigid for the cloud era. It's untenable for IT teams to take weeks or months to provision new infrastructure so the dev team can produce new apps and get to market quickly, said Greg Smith, vice president of product marketing at Nutanix.

HCI radically simplifies data centre architectures and operations, reducing the time and expense of managing data and delivering apps, he said.

HCI software, such as that from Nutanix or VMware, is deployed the same way in both a customer's data centre and cloud instances; it runs on bare metal instances in the cloud exactly the same as it does in a data centre. HCI is the best foundation for companies that want to build a hybrid cloud. They can deploy apps in their data centre and meld it with a public cloud, Smith said.

Because it's the same on both ends, I can have one team manage an end-to-end hybrid cloud, with confidence that whatever apps run in my private cloud will also run in that public cloud environment, he added.

HCI allows you to consolidate compute, network, and storage into one box, and grow this solution quickly and easily without a lot of downtime, said Tom Lockhart, IT systems manager with Hastings Prince Edward Public Health in Bellville, Ontario, Canada.

In a legacy approach, multiple pieces of hardware, such as a server, a Fibre Channel switch, host bus adapters, and a hypervisor, have to be installed and configured separately. With hyperconvergence, everything is software-defined. HCI uses the storage in the server, and the software almost entirely auto-configures and detects the hardware, setting up the connections between compute, storage, and networking.

Once we get in on a workload, [customers] typically have a pretty good experience. A few months later, they try another workload, then another, and they start to extend it out of their data centre to remote sites, said Chad Dunn, vice president of product management for HCI at Dell.

They can start small and grow incrementally larger but also have a consistent operating model experience, whether they have 1,000 nodes or three nodes per site across 1,000 sites, whether they have 40 terabytes of data or 40 petabytes. They have consistent software updates where they dont have to retrain their people because its the same toolset, Dunn added.

By starting small, customers find they can reduce their hardware stack to just what they need, rather than overprovision excessive capacity. Moving away from the siloed approach also allows users to eliminate certain hardware.

Josh Goodall, automation engineer with steel fabricator USS-POSCO Industries, said his firm deployed HCI primarily for its ability to do stretched clusters, where the hardware cluster is in two physical locations but linked together. This is primarily for use as a backup, so if one site went down, the other could take over the workload. In the process, though, USS-POSCO got rid of a lot of expensive hardware and software. We eliminated several CPU [software] licenses, we eliminated the SAN from the other site, we didn't need SRM [site recovery management] software, and we didn't need Commvault licensing. We saved between $25,000 and $30,000 on annual license renewals, Goodall said.

To run a traditional three-tiered environment, companies need specialists in compute, storage, and networking. With HCI, a company can manage its environment with general technology consultants and staff rather than the more expensive specialists.

HCI has empowered the storage generalist, Smith said. You don't have to hire a storage expert or a network expert. Everyone has to have infrastructure, but HCI makes the actual maintenance of that infrastructure a lot easier than in a typical scenario, where a deep level of expertise is needed to manage each of those three skill sets.

Lockhart of Hastings Prince Edward Public Health said adding new compute/storage/networking is also much faster when compared to traditional infrastructure. An upgrade to our server cluster took 20 minutes with no downtime, versus hours of downtime with an interruption in service using the traditional method, he said.

Instead of concentrating on infrastructure, you can expand the amount of time and resources you spend on workloads, which adds value to your business. When you don't have to worry about infrastructure, you can spend more time on things that add value to your clients, Lockhart added.

Key elements of hyperconvergence products are their backup, recovery, data protection, and data deduplication capabilities, plus analytics to examine it all. Disaster recovery components are managed from a single dashboard, and HCI monitors not only the on-premises storage but also cloud storage resources. With deduplication, compression rates can be as high as 55:1, and backups can be done in minutes.

USS-POSCO Industries is an HP Enterprise shop and uses HPE's SimpliVity HCI software, which includes dedupe, backup, and recovery. Goodall said he gets about 12-15:1 compression on mixed workloads, and that has eliminated the need for third-party backup software.
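For a sense of what a 12-15:1 data reduction means in practice, a quick back-of-the-envelope calculation (the 40TB figure is purely illustrative):

```python
logical_backup_tb = 40.0  # hypothetical protected data set
for ratio in (12, 15):
    physical_tb = logical_backup_tb / ratio
    print(f"{ratio}:1 reduction -> roughly {physical_tb:.1f} TB on disk")
# 12:1 -> ~3.3 TB and 15:1 -> ~2.7 TB of physical capacity for 40 TB of logical backups
```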

More importantly, recovery timeframes have dropped. The best recent example is a Windows update that messed up a manufacturing line, and the error wasn't realised for a few weeks. In about 30 minutes, I rolled through four weeks of backups, updated the system, rebooted and tested a 350GB system. Restoring just one backup would have been a multi-hour process, Goodall said.

HCI products come with a considerable amount of analytics software to monitor workloads and find resource constraints. The monitoring software is consolidated into a single dashboard view of system performance, including negatively impacted performance.

Hastings recently had a problem with a Windows 7 migration, but the HCI model made it easy to get performance info. It showed that workloads, depending on time of day, were running out of memory, and there was excessive CPU queuing and paging, Lockhart said. We had the entire [issue] written up in an hour. It was easy to determine where problems lie. It can take a lot longer without that single-pane-of-glass view.

Goodall said he used to spend up to 50% of his time dealing with storage issues and backup matrixes. Now he spends maybe 20% of his time dealing with it, and most of his time tackling and addressing legacy systems. And his apps perform better under HCI. We've had no issues with our SQL databases; if anything, we've seen huge performance gains due to the move to full SSDs [instead of hard disks] and the data dedupe, reducing reads and writes in the environment.

IDG News Service



2020s: The decade that tears down LANs, WANs, VPNs and Firewalls – ITProPortal

The walls of the corporate workplace will become fluid for enterprises over this decade. This will be a movement driven by the way you want to work and the birth of the fully internet-driven workplace; put another way, it's the death of the legacy corporate network, which naturally means the death of traditional network security. It's a dramatic improvement that will restyle the way we all connect, rewrite how IT leaders help you access work, and reshape entire technology markets where legacy infrastructure companies will struggle.

This movement will drive jobs closer to workers' lives as part of a monumental reckoning with connectivity, mobility, cloud, and the way we all want to work. Mobility, BYOD, or whatever you may call it, may be commonplace in Silicon Valley, large cosmopolitan cities, and some verticals like high tech, but outside of these fairly early adopters, it is not mainstream. We do already see this happening, though, in pockets. However, the 5G era is going to dramatically speed the adoption of this new way of working, and that will in turn speed the demise of the traditional corporate network.

The fallout from this change in the way we work will be extreme. Here are four of my predictions for the 2020 decade.

Any time you connect to the internet, there is an IP address to connect you, often through a firewall. A firewall is like a door that protects a house or a castle. Every firewall with an internet-facing IP address is an attack surface that creates significant business risk. New approaches and technologies will evolve this decade to mitigate this risk.

As more applications sit in the public cloud and more offices use the internet to connect to the cloud or SaaS applications, the attack surface is drastically increasing. As you connect 100x more applications, data, devices, and people to the internet, what happens to the attack surface? It increases 100-fold.

Think of it this way. If you want 100 friends to be able to reach you, you can publish your phone number on a website. Now they can call you, but so can robocallers. This is precisely what happens when you publish applications on a public cloud and use the internet to reach them. Your users can access them but so can a million hackers who can discover vulnerabilities or launch a DDoS attack.

How do you solve it? Suppose you hired a phone operator and gave him the names of your 100 friends. Your friends would be able to quickly reach you when they call the operator, but a robocaller that tries to connect to you would be denied by the operator and wouldn't be able to bother you. A similar approach works for protecting your enterprise, and it starts by preventing exposure of your enterprise's user/branch traffic or applications/servers to the internet. This approach replaces the castle-and-moat legacy model with a digital exchange, somewhat like a sophisticated phone switchboard. Your applications remain invisible behind the exchange. Users connect to the exchange, which then connects them to their applications. In this model, the user, the offices, and the applications are never exposed to the internet. This approach for secure access to applications will become widely used in the coming decade.

Internet connectivity improved so much in the past decade that enterprises started to dump the private, expensive wide area networks (WANs) that connected various offices to the data centre. Frederik Janssen, head of global infrastructure at Siemens, is a pioneer and a thought leader who coined the phrase "the internet is the new corporate network" several years ago. What he meant was that Siemens' business was being done everywhere (the office, coffee shops, airports, hotels) and the internet had become the de facto transport for all traffic.

With the widespread use of 5G in the 2020s, local area networks (LANs) will also disappear. Today, while sitting in our office, we look for Wi-Fi to access the internet, which securely connects us through routers or firewalls sitting at the company's perimeter. But when every PC or mobile phone is equipped with ultrafast 5G, would you ever connect to Wi-Fi in your office? No way; you will use direct 5G connections and bypass traditional routers and firewalls. And, if there is no WAN or LAN in your control, then there is no use case for firewalls. The traffic from your 5G devices will connect the right people to the right applications (through a digital services exchange), and this will deliver faster, more secure, and more reliable access to apps and services.

There are countless stories about VPNs being the launch pad for devastating malware/ransomware attacks. This is happening because firewalls and VPNs were built for the network-centric world, where apps resided solely in the data centre and a security perimeter around the castle was all you needed. With so many organisations moving toward a perimeter-less model, traditional network security based on the castle-and-moat approach, which is how firewalls fundamentally protect, is no longer relevant. They give enterprises a false sense of security. New approaches are being developed that use business policy engines to act like the previously mentioned digital services exchange to enforce security and provide better enterprise security.

Today, to provide a user access to applications, they are connected to the so-called trusted corporate network. Once on the network, the user can see more than they should. This was acceptable when you controlled the network, but with the internet being the corporate network, putting users on a network to access applications is dangerous. If a user machine becomes infected, the malware can laterally traverse the network and infect all the servers on the network. Maersk, a massive shipping company, faced that issue about 18 months ago, highlighting the danger of putting users and applications on the same network. A better approach to this problem is badly needed.

Many CISOs also manage physical security, so I like using an office metaphor to illustrate zero trust. If I am visiting an office, I get stopped at reception, which checks my ID, confirms my appointment, and issues me a badge. They could direct me to the elevators and tell me to head up to the sixth floor for my appointment. But this rarely happens anymore because I could simply wander around the company to do whatever I want, wherever I want. In contrast, a zero trust approach would have someone escort me directly to the conference room and take me back to the front desk after my meeting.

Gartner's ground-breaking research note on zero trust network access (ZTNA) states how enterprises should provide users access to the specific applications they need; instead of granting access to a network, ZTNA provides access only to those applications a user is authorised to use. This approach provides security for the world of cloud that's far better than trying to create lots of network segments to achieve application segmentation.

At a high level, think of ZTNA like this: it starts with an assumption that you trust nobody; you can establish a level of trust based on authentication, device posture, and other factors, but you'll still only trust users with the applications they are specifically authorised to use. Any other activity would be highly suspicious.
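A minimal sketch of that policy logic, with hypothetical users, posture checks and application lists: the broker grants a connection only when the user is authenticated, the device passes a posture check, and the requested application is on that user's authorised list; everything else is denied by default.

```python
# Hypothetical ZTNA-style policy check: deny by default, allow per application.
AUTHORISED_APPS = {
    "alice": {"payroll", "crm"},
    "bob": {"crm"},
}

def device_posture_ok(device: dict) -> bool:
    """A stand-in posture check: disk encrypted and OS patch level recent enough."""
    return device.get("disk_encrypted", False) and device.get("patch_age_days", 999) <= 30

def broker_connection(user: str, authenticated: bool, device: dict, app: str) -> bool:
    """Return True only if every trust factor checks out for this specific app."""
    if not authenticated:
        return False
    if not device_posture_ok(device):
        return False
    return app in AUTHORISED_APPS.get(user, set())

# Alice on a healthy laptop reaches payroll; the same request from Bob is denied.
laptop = {"disk_encrypted": True, "patch_age_days": 7}
print(broker_connection("alice", True, laptop, "payroll"))  # True
print(broker_connection("bob", True, laptop, "payroll"))    # False
```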

These are not simple incremental changes; these megashifts will bring tons of opportunities and challenges to businesses. Technologies such as cloud, mobility, IoT, and machine learning are upending many large global brands while giving rise to new businesses at a pace never seen before. They are also disrupting large, incumbent technology providers while creating new giants.

Jay Chaudhry, CEO, Chairman & Founder, Zscaler


Ransomware protection is killer app for Datrium DRaaS – Blocks and Files

A ransomware attack is a disaster. When ransomware infects an organisation's IT systems, stored and backup data are encrypted and made unavailable.

The IT system is unable to function and in many cases that means the organisation cannot function either until it remedies the attack. In essence there are two ways to do this: paying the ransom to decrypt the files or getting clean files restored from a disaster recovery (DR) facility.

Affordable and fast DR is a good way to defeat a ransomware infestation. Datrium, a hyperconverged systems vendor, has recognised this and in August 2019 launched its own DRaaS (disaster recovery as a service), incorporating home-grown HCI system backup technologies.

Historically, disaster recovery has been a hugely expensive but relatively niche aspect of customer storage and system buying strategy. But the massive increase in ransomware attacks in recent years has expanded the DR vulnerability surface. At the same time availability of the public cloud to provide a form of remote DR facility has brought costs tumbling.

A September 2016 FBI alert said: New ransomware variants are emerging regularly. Cyber security companies reported that in the first several months of 2016, global ransomware infections were at an all-time high. Within the first weeks of its release, one particular ransomware variant compromised an estimated 100,000 computers a day.

Data protection vendor Acronis reported the Spring 2017 WannaCry outbreak afflicted over 200,000 computers in over 150 countries. Global costs were estimated to total $8bn.

A second FBI alert in October 2019 said: Ransomware attacks are becoming more targeted, sophisticated, and costly, even as the overall frequency of attacks remains consistent. Since early 2018, the incidence of broad, indiscriminant ransomware campaigns has sharply declined, but the losses from ransomware attacks have increased significantly, according to complaints received by IC3 and FBI case information.

Although state and local governments have been particularly visible targets for ransomware attacks, ransomware actors have also targeted health care organizations, industrial companies, and the transportation sector.

Indeed ransomware is now so prevalent that automated failover to a recovery site is becoming table stakes for all data protection suppliers. In that sense ransomware recovery is a killer feature, and suppliers without this capability will be in trouble.

Many data protection suppliers already offer DR facilities, including Cohesity, Commvault, Dell EMC, Druva, Rubrik and Zerto. And more are sure to follow.

Datrium's background is somewhat different. Founded in 2012, the company is a venture-backed startup that has raised $165m to date, including $60m in the most recent round in September 2018.

Datrium pioneered a middle way between converged and hyperconverged systems with hyperconverged nodes running storage controller software that linked them to a shared storage box. However, it faced enormous competition and the HCI market consolidated rapidly around two leading suppliers: Dell EMC, with VxRail, and Nutanix.

Datrium then moved into unified hybrid cloud computing and protecting its DVX systems, specifically backup to the cloud. The company announced Cloud DVX in August 2018, claiming up to 10 times lower AWS costs for cloud backup, and CloudShift, a SaaS-based disaster recovery orchestration service for VMware.

This hit the market as the necessity of dealing with ransomware became even more pressing, and Datrium realised it had a potential killer app for VMware users.

CEO Tim Page told Blocks & Files in a phone interview that Datrium has gained 60 new accounts in under two months since launching its disaster recovery as a service. DR is catapulting our business revenues upwards.

He said the reason for this is that Datrium's DRaaS preserves the VMware environment, is affordable and is lightning fast, failing over in minutes when an attack takes place.

Datrium offers DR as a Service (DRaaS) using the VMware Cloud on AWS. In other words, it protects VMware virtual machines (VMs) by spinning up DR copies in AWS. Page told me the time between attack detection and recovery should be as short as possible, i.e. the DR copy VMs should be spun up quickly.

He said backups, even air-gapped backups such as tape, are inferior to a DR facility. It takes time to restore backup files, and the ransomware infestation must be removed from the affected IT site. With a DR facility in place, the victim can use clean files while the ransomware is found and removed and infected files are deleted. Post clean-up, the DR facility can fail back to the main site.

Datrium stores immutable backup snapshots in Amazon's S3 storage, which lowers cost, but in a form that means they can be immediately spun up, without rehydration or conversion, as VMs running in the VMware cloud. Admin staff at the ransomware-infected customer just switch from one VMware environment to another; there is no difference.

Immutability means that the snapshotted data cannot be altered subsequently. Any ransomware infection after the date the snapshot was taken will not infect that snapshot.

Datrium offers a short RTO (Recovery Time Objective) because it has selectable restore points. This short RTO is made feasible by automating the recovery process, which can involve hundreds or thousands of separate operational steps to get a large suite of VMs up and running in the right order.

With the orchestration routine in place, the DRaaS facility is told via a mouse click to fail over to the cloud DR site when a ransomware attack or other disaster happens, and that takes just minutes. DR recovery can then start a few minutes later at the source site.

Backed-up VMs exist in a timeline. Some time before an attack, with its file locking-by-encryption and ransom notification, ransomware infects a system and starts encrypting files. This event can be located by checking file activity records.

In a recent incident a Midwest US municipality was attacked (the town is unwilling to reveal its identity, Datrium said). The IT department had backed up its VMs to a Datrium DVX system but without the DRaaS option in place. Admin staff and Datrium consultants checked the incoming snapshots to the target DVX system and found a sudden size increase:

The highlighted snapshots had sizes of 23.6GiB, 80.2GiB, and 80.7GiB, while prior and subsequent snapshots were 6.1GiB and 3.6GiB in size. This enlargement was caused by Ryuk ransomware encrypting files.
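A minimal sketch of how such a jump could be flagged automatically from nothing more than a series of snapshot sizes. The pre-attack sizes below are illustrative, the three large values mirror the incident described above, and the three-times-median threshold is an arbitrary choice:

```python
from statistics import median

def flag_suspect_snapshots(sizes_gib, window=5, factor=3.0):
    """Flag snapshots whose size jumps well above the trailing median,
    a pattern consistent with ransomware rewriting data wholesale."""
    suspects = []
    for i in range(window, len(sizes_gib)):
        baseline = median(sizes_gib[i - window:i])
        if sizes_gib[i] > factor * baseline:
            suspects.append((i, sizes_gib[i], baseline))
    return suspects

sizes = [5.8, 6.4, 5.9, 6.2, 6.1, 23.6, 80.2, 80.7, 6.1, 3.6]
for idx, size, baseline in flag_suspect_snapshots(sizes):
    print(f"snapshot {idx}: {size} GiB vs trailing median {baseline} GiB")
```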

To combat the attack, a prior snapshot from a day earlier was used and powered up on a quarantined network. It was verified malware-free by a security team and became a so-called recovery golden copy.

The recovery team restored individual VMs in priority order and verified each one was clean with an anti-virus scanner before restoring the next one. This took almost two days to complete. A mass update restoration of all their VMs would have taken less time and a DRaaS option would have been quicker again.

Datrium initially provided cloud backup for its own on-premises DVX semi-hyperconverged system (semi, because the storage repository was separate from the compute nodes). It extended this to source systems from Dell EMC, NetApp, Nutanix, Pure Storage and others, and also to VMware running in AWS.

Datrium can provide DR with failover to VMware Cloud on AWS so long as the source site is a VMware site. Datrium uses its own backed up VMs and data from the source site.

VMware is accommodating Kubernetes and containers and Page pointed out that as VMware embraces Kubernetes we can do so too.

He said Datrium DRaaS will work with Microsoft Azure cloud by the end of 2020.

And what about the rising tide of cloud-native applications that do not use VMware? We have a CSS login for bare metal servers, Page said. He suggested Datrium could develop this ability to back up bare metal Kubernetes environments to the public cloud, and reinstantiate containers there for DR, in the same way as it spins up VMs today.

As long as ransomware infections exist, Datrium should prosper by offering a simple and fast recovery option, viable both for virtual machines and containerised environments.


Sunday Digest: Bitcoin Price Crash and Other News – Bitcoinist

It's been hard to avoid the coronavirus this week, at least in the news, as the US confirmed its first death this morning. If it turns into an all-out pandemic, you'll be glad that you sealed yourself away and stocked up on bitcoin (you did do that, right?).

What can be said? Not a great week for the bitcoin price, at the risk of stating the obvious. So how did it play out?

After gains last weekend, Monday trading saw these wiped out after $10k was rejected again. Analysts predicted a pullback, and price continued to fall on Tuesday, breaking $9.5k, with some expecting a drop to as low as $8,300.

Kraken's Director of Business Development, Dan Held, remained bullish, however, believing that bitcoin could be on the edge of a super-cycle that would quickly take it to $100k and beyond. Meanwhile, a cash crunch in Lebanon propelled the bitcoin price to an inflated $15k between local peer-to-peer sellers.

However, on global markets, losses continued into Wednesday, breaking $9k support, dragging down most altcoins, and threatening to wipe out the year's gains so far.

Stock-to-Flow proponent PlanB predicted that bitcoin would not fall below $8,200 during this crash, and we all hoped he was right, although prices didn't seem to stop falling during early trading on Thursday.

Thankfully bitcoin did then find some support at $8,600, bouncing up as high as $8,900, before tailing off again on Friday. This time solid support was found at $8.5k, and the price has been trading in a range between this level and $8,800 ever since.

However, at a macro level, there are concerns that bitcoin is still in a bear market. Certainly, the bulls must defend key levels to stem the recent flow.

The Bank of England's chief cashier voiced her support for a central bank digital currency this week, saying that governments must act fast to avoid losing out to tech giants and projects such as Facebook's Libra.

Meanwhile, China's proposed CBDC faced delays due to the coronavirus outbreak, although officials were still aiming to launch a pilot in 2020.

Although Canada has decided that it doesn't need a CBDC at the moment, it has put in place a blueprint for one, just in case.

YouTube continued its assault on channels related to cryptocurrency, this week suspending Ivan on Tech until after the May halving event.

Craig Wright faces more questions about the purported Tulip Trust and Bonded Courier, as the estate of Dave Kleiman wants to expose what it says are just more lies. Meanwhile, Binance CEO, CZ, called Wright a fraud in no uncertain terms earlier in the week.

Amid major losses for Ethereum this week, the community also came to blows over the approval of a controversial change to its mining algorithm. The Programmatic Proof of Work (ProgPoW) algorithm is supposed to reduce the advantage of ASIC hardware but caused a big discussion online as to its implications.

Exchanges OKEx and Bitfinex were both hit by DDoS attacks this week, with OKEx CEO Jay Hao offering a bounty for information on the attackers.

Bitfinex claims that it has repaid $100 million of the $700 million it borrowed from Tether last year. And Paolo Ardoino, CTO of Bitfinex & Tether, will be speaking at the Digital Asset Summit next month, marking the first time that a Bitfinex/Tether exec has spoken at an industry event.

The SEC denied Wilshire Phoenix's application for a Bitcoin ETF, which was disappointing, but not entirely unexpected given its track record.

Warren Buffett appeared on CNBC this week, telling the world once again that he will never own cryptocurrency.

This caused Justin Sun, who Buffett said was very polite at their recent charity lunch date, to call foul play. After all, the joy of blockchain meant that he could show the bitcoin and Tron he gifted Buffett at the dinner had not been moved.

To which Buffett responded that he had given away the phone on which Sun had loaded the wallets to his favored charity, Glide.

Surely it's time to stop trying to teach the old dog new tricks already.

What was your favorite bitcoin or crypto story of this week? Let us know in the comments below!



Bitcoin Rallies After Biggest Weekly Drop Since November – CoinDesk


Bitcoin (BTC) is flashing green on Monday, having suffered a double-digit price drop last week.

The top cryptocurrency is currently trading over $8,740, representing a 1.9 percent gain on a 24-hour basis, according to CoinDesk's Bitcoin Price Index.

The news will be welcomed by bitcoin's bulls, as prices fell by 13 percent in the seven days to March 1, registering the biggest weekly loss since the third week of November. Back then, the cryptocurrency dropped by 18.7 percent over the same period.

Equity markets across the globe also fell sharply last week as investors shunned risk on fears the coronavirus pandemic will cause a serious slowdown in the global economy.

The S&P 500, Wall Street's benchmark stock index, fell for the seventh straight day on Friday. The sell-off wiped out five months of the rally from 2,855 to 3,393. Even so, the index outperformed bitcoin on a weekly basis with an 11 percent drop.

Despite last week's drop, bitcoin is still outperforming both gold and the S&P 500 on a year-to-date basis with 20 percent gains. Meanwhile, ethereum's ether (ETH) token, the second-largest cryptocurrency by market value, has rallied 74 percent so far this year.

Liquidity source?

Bitcoin fell last week as investors liquidated their holdings to fund margin calls triggered by the stock market crash, according to billionaire investor and Galaxy Digital founder Michael Novogratz.

A margin call occurs when the value of the investor's leveraged account drops below the minimum margin requirement. The investor is then required to bring in additional capital or securities to build the account back up to the minimum margin requirement.
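A simplified numerical illustration of that mechanic, with hypothetical figures:

```python
# Hypothetical leveraged position: $100k of stock bought with $50k of the
# investor's own capital and $50k borrowed from the broker.
position_value = 100_000.0
loan = 50_000.0
maintenance_margin = 0.30   # equity must stay at or above 30% of the position value

# The market falls 30%.
position_value *= 0.70
equity = position_value - loan
required = maintenance_margin * position_value

print(f"equity ${equity:,.0f} vs required ${required:,.0f}")
if equity < required:
    shortfall = required - equity
    print(f"margin call: deposit ${shortfall:,.0f} or sell assets to cover it")
```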

Essentially, Novogratz thinks bitcoin served as a source of liquidity last week, having apparently found a role as a safe haven in January, when prices rose by 30 percent amid the U.S.-Iran tensions and the beginnings of the coronavirus outbreak in China.

Currently, there is no evidence to prove that margin call-related selling fueled bitcoin's price drop. That possibility, however, cannot be ruled out altogether, as perceived safe havens like gold are often used as a source of liquidity.

Gold, a classic anti-risk asset, fell by 4 percent last week (also the biggest weekly loss since November), reportedly due to margin calls.

At press time, the metal is changing hands at $1,610 per ounce, up 3 percent from Friday's low of $1,563.

As bitcoin also gains ground, the bulls need a close above Sunday's high to maintain the rally.

Daily chart

Bitcoin created a doji candle on Sunday, signaling seller exhaustion. A UTC close above Sunday's high of $8,756 would validate the doji candle and confirm an end of the pullback from the recent highs above $10,500.

A bullish close could bring additional gains towards the former support-turned-resistance of the head-and-shoulders neckline at $9,500.

Alternatively, if prices find acceptance under Sunday's low of $8,410, a bearish doji continuation pattern would be confirmed and a deeper drop to $8,213 (the Feb. 24 low) and possibly to $8,000 may be seen.

Disclosure: The author holds no cryptocurrency at the time of writing



Bitcoin Price Analysis: BTC/USD recovery picks up the pace above $8,600 – FXStreet

Bitcoin price is having a positive start to the week's trading. The Asian session on Monday is characterized by increased buying entries amid the building bullish momentum. BTC/USD is up a subtle 1% while trading at $8,614. Bitcoin price has advanced upwards from an opening price of $8,525.75, although it also recorded an intraday low of $8,485.52.

The daily chart shows Bitcoin trading under the moving averages, where the 200-day SMA is limiting upward movements at $8,760.46 and the 50-day SMA is holding the ground at $9,276. The 61.8% Fibonacci level of the last swing, from a high of $10,537 to a low of $6,438.93, will also cap gains towards $9,000.

Technically examining the BTC/USD daily chart, while the digital asset's downside is strongly protected, upward movements remain limited. Therefore, the most probable direction for the price is sideways. The RSI, for instance, is at 37.93 with gradual movement towards 50 (the average zone).

According to confluence levels, Bitcoin's first support is seen at $8,533, highlighted by the Bollinger Band 15-minute lower curve and the previous low on the 1-hour chart. The second support, which is also major support, is $8,444.57, where the previous low 4-hour, the Bollinger Band 1-hour lower curve and the Bollinger Band 4-hour lower curve converge. Other minor support areas include $8,355, $8,266 and $8,087.

On the upside, initial resistance is holding the ground at $8,623, as shown by the SMA 100 15-minutes, the previous high 1-hour and the Bollinger Band 1-hour middle curve. The next hurdles likely to give the bulls a hard time on the journey to $9,000 are $8,979, $9,069 and $9,158.


Bitcoin is Showing Early Signs of Fresh Rally, But 100 SMA is the Key – newsBTC

Bitcoin is likely forming a short term bottom near $8,400 against the US Dollar. However, BTC price must climb above the 100 hourly SMA and $8,800 to start a fresh increase.

This past week, we saw a steady decline below the $9,000 support in bitcoin against the US Dollar. BTC price made a couple of attempts to bounce back, but it failed and settled below the 100 hourly simple moving average.

On the downside, it seems like the price is finding significant buying interest near the $8,500 and $8,400 levels. The recent low was formed near $8,403 and the price is currently correcting higher.

It is trading above the $8,450 and $8,500 levels. More importantly, there was a break above a key bearish trend line with resistance near $8,575 on the hourly chart of the BTC/USD pair. It has opened the doors for more gains above the $8,600 level.

Bitcoin Price

The pair is now trading near the 50% Fib retracement level of the recent decline from the $8,821 high to $8,403 low. On the upside, an initial resistance is seen near the $8,660 level and the 100 hourly simple moving average.

The 61.8% Fib retracement level of the recent decline from the $8,821 high to $8,403 low is also near the $8,660 level to prevent gains. If bitcoin price climbs above the $8,700 level and the 100 hourly SMA, there are chances of a decent upward move.
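As a quick check on those levels, the retracement prices follow directly from the quoted swing high and low:

```python
swing_high = 8821.0
swing_low = 8403.0
move = swing_high - swing_low

for ratio in (0.5, 0.618):
    level = swing_low + ratio * move
    print(f"{ratio:.1%} retracement: ${level:,.0f}")
# 50.0% retracement: $8,612 (the area the pair is now trading near)
# 61.8% retracement: $8,661 (the ~$8,660 resistance cited above)
```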

The next key resistance is near the $8,800 area, above which the bulls are likely to take control. In the stated bullish scenario, the price is likely to recover above $9,000 or even $9,200 in the coming sessions.

If bitcoin fails to climb above the $8,700 level or the 100 hourly SMA, there is a risk of another bearish reaction. On the downside, there are two key supports: $8,500 and $8,400.

A downside break below the $8,403 low may perhaps lead the price towards the next set of important supports near $8,200 and $8,000 (as discussed in one of the analyses using the daily chart).

Technical indicators:

Hourly MACD: The MACD is slowly moving in the bullish zone.

Hourly RSI (Relative Strength Index): The RSI for BTC/USD is back above the 50 level.

Major Support Levels: $8,500, followed by $8,400.

Major Resistance Levels: $8,700, $8,800 and $9,000.
