Review: How NeuVector protects containers throughout their lifecycle – CSO Online

Businesses and organizations of all sizes are finally embracing cloud computing. Even holdout organizations like some large government agencies are starting to deploy private clouds, or hybrids that contain some mix of private and public cloud infrastructure. The benefits of cloud computing are numerous and well-known at this point. They include near infinite expandability, having an external provider worry about maintaining the base infrastructure, and the ability to spin up new servers or services in just a few seconds.

But the most advanced enterprises are taking cloud computing a step further, into the realm of containerization. The concept of containers is a pretty brilliant one because it provides all the benefits of cloud computing, like infinite expandability, but also provides individual control over each container, which behaves in many ways like a fully-operational and independent virtual machine.

A container can be created to fill almost any need, from a tiny microservice to a full operating system. And because each container has all the resources that it needs within its perimeter, it can easily be transported to other computing environments, such as moving from a development cloud to a production environment. Some large enterprise networks might deploy, move or modify thousands of containers every day.

Unfortunately, cybersecurity has been slow to catch up with advancements in containerization, and most traditional security products have very little visibility into containers running in the cloud. The closed and independent nature of containers means that cybersecurity scanning from the outside will yield limited results. And because most cybersecurity programs have little to no insight about how containers should and do work, even if a container is successfully scanned, the scanning program may not understand if the container is operating properly or not.

There are other issues as well. Containers can expand if they need more resources and might even be deployed or destroyed nearly instantly as needed. Most cybersecurity programs, especially things like scanners that check the network on a schedule, will almost always be operating with old information. And because containers are part of a network of other containers, they need to coordinate with one another and with container orchestration software like Kubernetes or Docker. That causes an explosion of so-called east-west internal traffic, which is often not monitored by cybersecurity defenses.

The NeuVector container security platform was created specifically to safeguard containerized environments. In fact, it's deployed as a privileged container itself within the environment that it will be protecting. From its position within the containerized environment, it can monitor all Layer 7 network traffic, including that moving between containers and the host orchestration software. In this way, it can protect against attacks made against individual containers or the entire environment.
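
For readers who want a concrete picture of what "deployed as a privileged container" means in practice, the sketch below uses the Kubernetes Python client to roll out a generic, hypothetical security agent as a privileged DaemonSet (one pod per node, sharing the host network). This is not NeuVector's actual manifest or API; the image name, namespace, and labels are placeholders for illustration only.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (in-cluster config would also work).
config.load_kube_config()

agent = client.V1Container(
    name="security-agent",
    image="example.com/security-agent:latest",  # hypothetical image
    security_context=client.V1SecurityContext(privileged=True),
)

pod_template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "security-agent"}),
    spec=client.V1PodSpec(containers=[agent], host_network=True),
)

daemon_set = client.V1DaemonSet(
    api_version="apps/v1",
    kind="DaemonSet",
    metadata=client.V1ObjectMeta(name="security-agent"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels={"app": "security-agent"}),
        template=pod_template,
    ),
)

# One privileged agent pod lands on every node, where it can observe traffic
# between the containers scheduled there and the orchestrator.
client.AppsV1Api().create_namespaced_daemon_set(namespace="kube-system", body=daemon_set)
```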

The rest is here:
Review: How NeuVector protects containers throughout their lifecycle - CSO Online

Read More..

AWS re:Invent 2019 – Predictions And A Wishlist – Forbes

With less than a week to go, the excitement and anticipation are building up for the industry's largest cloud computing conference - AWS re:Invent.

As an analyst, I have been attempting to predict the announcements from re:Invent (2018, 2017) with decent accuracy. But with each passing year, it's becoming increasingly tough to predict the year-end news from Vegas. Amazon is venturing into new areas that are least expected by the analysts, customers, and its competitors.

AWS Ground Station is an example of how creative the teams at Amazon can get in conceiving new products and services. Announced at last years re:Invent, AWS Ground Station is a managed service to control satellite communications and process the downlink data. This service took everyone by surprise. I wonder if AWS is planning to launch a managed air traffic controller (ATC) service for airports all over the world to manage air traffic.

Along with a few predictions, I want to share my wishlist for re:Invent 2019. This is based on my personal experience of using AWS combined with the feedback from my customers.

Here is what I expect to see from AWS re:Invent 2019:

Streamlined Developer Experience

AWS now has multiple compute services in the form of EC2 (IaaS), Beanstalk (PaaS), Lambda (FaaS) and Container Services offered through ECS, Fargate and EKS (CaaS). Modern applications rely on more than one compute model for execution. For example, it's a common practice to deploy containerized applications on EKS while running event-driven code in Lambda. Legacy applications continue to target Amazon EC2 instances.

Within the container services, ECS, EKS and Fargate use different approaches and patterns of deployment. Fargate is a node-less environment while EKS is a managed Kubernetes service.

Amazon customers need a better experience of deploying and managing applications on AWS. A unified framework that abstracts underlying compute services will simplify packaging and deploying applications. This framework may be an extension of CloudFormation, Kubernetes YAML, Cloud Development Kit (CDK) and Serverless Application Model (SAM).
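
The CDK already hints at what such a unified experience could feel like: one program that synthesizes CloudFormation for whatever compute model the application needs. The sketch below assumes the CDK v1 Python bindings and a local lambda/ directory of handler code; the stack and resource names are illustrative only, not a predicted AWS product.

```python
from aws_cdk import core
from aws_cdk import aws_lambda as _lambda


class UnifiedAppStack(core.Stack):
    """One stack definition that could grow to mix Lambda, containers, and EC2."""

    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Event-driven piece of the application, packaged from ./lambda
        _lambda.Function(
            self, "OrderHandler",
            runtime=_lambda.Runtime.PYTHON_3_8,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),
        )


app = core.App()
UnifiedAppStack(app, "unified-app")
app.synth()  # emits a CloudFormation template ready for `cdk deploy`
```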

It's time for Amazon to simplify consuming the compute services of AWS.

Kubernetes-based Hybrid Cloud

Going by the industry trend, I expect AWS to leverage Kubernetes for hybrid cloud. There may be a new control plane to manage EKS clusters running in the cloud along with the non-EKS clusters running on-premises. Customers will be able to deploy applications and apply configuration settings to managed EKS clusters, EKS clusters running on AWS Outposts and even unmanaged Kubernetes clusters running within the data center. I doubt if AWS will go the Google Anthos and Azure Arc way to support multi-cloud environments, but integrating Kubernetes clusters with a single control plane will help enterprise customers. This will also enable VMware environments running Project Pacific and PKS deployments to seamlessly integrate with Amazon EKS.
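
There is no such unified AWS control plane to demonstrate yet, but the underlying idea can be approximated today from the client side: iterate over kubeconfig contexts for EKS, Outposts, and on-premises clusters and push the same configuration to each. A minimal sketch with the Kubernetes Python client follows; the context names and settings are hypothetical.

```python
from kubernetes import client, config

# Hypothetical kubeconfig contexts: cloud EKS, EKS on Outposts, on-prem cluster.
CONTEXTS = ["eks-us-east-1", "eks-outposts", "onprem-dc1"]

settings = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="fleet-settings"),
    data={"log_level": "info", "feature_flags": "hybrid-rollout"},
)

for ctx in CONTEXTS:
    # Point the client at one cluster at a time and apply the same config.
    config.load_kube_config(context=ctx)
    api = client.CoreV1Api()
    api.create_namespaced_config_map(namespace="default", body=settings)
    print(f"Applied fleet-settings to {ctx}")
```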

App Model for Kubernetes

With its newfound love for open source, Amazon may announce an OSS project targeting Kubernetes, much along the lines of Knative and Rudr. Google is investing in Knative as the serverless platform for Kubernetes while Microsoft is building Rudr as the application layer of Kubernetes.

New Managed Services for VMware and Outposts

Additional managed services such as DynamoDB and Lambda may become available on VMware Cloud on AWS and Outposts.

Finally, there will be new EC2 instances optimized for deep learning training and inference. New instance types based on ARM processors may also be announced this year.

AutoML for Custom Machine Learning Models

Though AWS has a solid ML PaaS in the form of SageMaker and a set of AI services, it lacks AutoML - a capability that simplifies training deep learning models on custom datasets. Amazon Rekognition Custom Labels is a step towards AutoML for vision. I expect AWS to add AutoML support for video, text classification, translation, and even tabular data.

Custom Processor for ML Training & Inference

Currently, AWS relies on NVIDIA GPUs for training deep learning models. For inference, it uses both NVIDIA GPUs for Elastic Inference and a purpose-built machine learning chip, AWS Inferentia.

With its investments in Project Nitro, Amazon is expected to build an Application Specific Integrated Circuit (ASIC) optimized for training and inference. The chips based on the ASIC will power a new family of EC2 instances and also a subset of AWS Outposts.

AWS will be able to offer a cheaper training and inferencing service compared to GPU-based environments. AWS may fork Apache MXNet, TensorFlow, and PyTorch to build optimized versions of frameworks targeting the ASIC.

Hosted ML Project Management

To support managing, tracking, and sharing ML projects, SageMaker may get a hosted ML management tool. This will be modeled on the lines of Databricks MLflow and Azure ML Services. Even those ML experiments running outside of SageMaker may consume the service through an API and SDK.

Autonomous Systems based on Reinforcement Learning

Amazon is one of the first public cloud providers to bet on reinforcement learning. It launched the highly successful DeepRacer device and a racing competition last year.

This year, AWS may get a managed reinforcement learning platform to build autonomous systems. The new deep reinforcement learning service will enable domain experts in the fields of medical, automobile, energy, and manufacturing to build sophisticated models.

Amazon Comprehend for Legal and Finance

After launching Comprehend Medical, I expect AWS to extend Comprehend to the legal and finance domains. This service will enable customers to extract domain-specific terminology and nomenclature from unstructured data. Law firms, accounting professionals, and stockbroking companies will benefit from this service.

Amazon Transcribe with Custom Speaker Identification

Amazon Transcribe, the speech-to-text service, may be able to identify and recognize speakers based on custom datasets with voice clips. This enhancement will enable developers to build user profiles and personalisation based on speech and voice recognition.

Cheaper Jupyter Notebooks on GPU-backed Spot Instances

The only way customers launch Jupyter Notebooks on AWS is through the Deep Learning AMI or Amazon SageMaker. AWS may make it easier and cheaper to launch Jupyter Notebooks by hosting them on Spot Instances backed by GPUs. Though it may not be free, the service will be modeled around Google Colab.
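
Until such a managed offering exists, a team can approximate it by requesting a GPU-backed Spot Instance and bootstrapping Jupyter through user data. The boto3 sketch below shows the shape of that request; the AMI ID, key pair, and instance type are placeholders, and a real setup would add security groups, authentication, and persistent storage.

```python
import base64
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Bootstrap script that starts a notebook server on the instance (simplified).
user_data = """#!/bin/bash
jupyter notebook --ip=0.0.0.0 --no-browser
"""

response = ec2.request_spot_instances(
    InstanceCount=1,
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",   # hypothetical Deep Learning AMI ID
        "InstanceType": "p3.2xlarge",         # GPU-backed instance type
        "KeyName": "notebook-key",            # hypothetical key pair
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)

print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```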

ONNX Support for SageMaker Neo

Amazon is one of the founding members of the Open Neural Network Exchange (ONNX) initiative. ONNX aims to bring interoperability to deep learning frameworks by enabling developers to import and export models from one framework to the other. Amazon SageMaker Neo is a runtime based on Apache TVM to run machine learning models across the cloud and edge. AWS may finally announce the support for ONNX for SageMaker Neo. This makes it possible to build deployment pipelines that target a variety of edge environments including mobile phones and desktops.

AIOps for CloudWatch

Amazon is enhancing CloudWatch to support modern observability patterns. With AIOps, CloudWatch may be able to detect anomalies based on the logs that are ingested. It can even perform root cause analysis (RCA) when the workload experiences disruption.
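
Parts of this are already visible in the CloudWatch API: metric-based anomaly detection can be enabled today with boto3, as in the hedged sketch below (the function name, band width, and thresholds are illustrative). Log-based anomaly detection and automated RCA would be the new ground.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

metric = {
    "Namespace": "AWS/Lambda",
    "MetricName": "Errors",
    "Dimensions": [{"Name": "FunctionName", "Value": "order-handler"}],  # hypothetical
}

# Train an anomaly detection model on the metric's historical values.
cloudwatch.put_anomaly_detector(Stat="Sum", **metric)

# Alarm whenever the metric escapes the expected band (width factor 2).
cloudwatch.put_metric_alarm(
    AlarmName="order-handler-error-anomaly",
    ComparisonOperator="GreaterThanUpperThreshold",
    EvaluationPeriods=3,
    ThresholdMetricId="band",
    Metrics=[
        {"Id": "errors",
         "MetricStat": {"Metric": metric, "Period": 300, "Stat": "Sum"}},
        {"Id": "band", "Expression": "ANOMALY_DETECTION_BAND(errors, 2)"},
    ],
)
```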

GPU-enabled Data Pipelines

Currently, none of the data processing and ETL services on AWS support parallelized processing taking advantage of GPUs. Much on the lines of NVIDIA Rapids, AWS may enable GPU support for data processing pipelines. This may get extended to query processing engines powering Amazon RDS, Amazon Aurora, and Redshift.

SaaS Service for IoT

Amazon has a wide range of services related to IoT and Edge. However, it lacks a SaaS-based IoT offering to easily connect and manage devices. There may be a new IoT service designed on the lines of Azure IoT Central.

I am looking forward to re:Invent 2019! Stay tuned as I continue to bring the analysis and commentary from the biggest cloud event of the year.

Read this article:
AWS re:Invent 2019 - Predictions And A Wishlist - Forbes

Read More..

VMware Reports Earnings Today. Here's What to Expect. – Barron's

VMware will report its earnings after the close of trading on Tuesday. It's an opportunity for investors to learn more about how management sees the risks from the accelerating shift to cloud computing.

Since VMware (ticker: VMW) last disclosed its results on Aug. 22, the company's stock has risen about 14%, which is above the S&P 500's 7% return in the same period.

Here is a snapshot of Wall Street's expectations and some recent history:

On Friday, Barron's suggested the rise of a new technology trend called Kubernetes will accelerate the shift to cloud computing. This may lead to slower demand for on-premises software and equipment, putting VMware's core server-virtualization business at risk over time.

Wall Street analysts are predicting that the company will report fiscal third-quarter adjusted earnings of $1.43 per share and $2.41 billion of revenue, according to FactSet.

On Friday, RBC Capital Markets analyst Matthew Hedberg reiterated his Outperform rating for VMware shares.

"We expect VMW to report solid results when it reports 11/26," he wrote. "In terms of the demand environment, our reseller survey was generally above FY/19 averages."

Other Wall Street analysts are mixed on VMware. About 53% have ratings of Buy or the equivalent on the stock, while 43% have Hold ratings, according to FactSet. Their average price target for the stock is $174.88, while VMware closed at $168.73 on Monday.

Management has scheduled a conference call for 4:30 p.m. Eastern time on Tuesday to discuss the results with analysts and investors.

Write to Tae Kim at tae.kim@barrons.com

Link:
VMware Reports Earnings Today. Here's What to Expect. - Barron's

Read More..

HiveIO Top 5 IT Predictions For 2020 – RTInsights

Five predictions for cloud computing and artificial intelligence in 2020.

We are living in a post-migration era of cloud computing, according to HiveIO VP of Product, Toby Coleridge, who provided RT Insights with a list of five predictions for cloud computing and artificial intelligence in 2020.

Influx of AI in Healthcare

The ethics debate around large companies harvesting medical data for artificial intelligence systems may rage on for another decade, but HiveIO sees healthcare organizations as the catalysts for this evolution.

"As a result of data's increasing perceived value, healthcare organizations will go to greater lengths in collecting data to meet end-user demands in 2020," said Coleridge. By capturing more personal data, healthcare organizations will be able to more accurately assist patients and predict their needs.

Coleridge believes that synergy between fitness brands, like Fitbit and Apple, and healthcare organizations will occur in the next five to ten years. Biometric data will be sent to a doctor's office, which may run AI-assisted diagnostics to recognize any problems at an earlier stage.

Automation of diagnostics will remove a lot of tedious work for the doctor, while also spotting problems quicker. The only issue, currently, is the lack of clear connections between healthcare providers and technology companies, although there are some signs the big four want to break into the healthcare market.

Educators Use IT To Meet Student Needs

Digital native students require, according to Coleridge, immediate gratification and a deeper level of knowledge and understanding. To meet this demand, educational facilities will continue to adopt virtual desktop infrastructure (VDI) systems, which allow students to work from remote locations, save the school money in updating and upgrading systems, and improve security with a centralized interface.

"Starting in 2020, we will see a shift in the entire education system and VDI will be a key enabler for this," said Coleridge.

On-Premise vs Cloud: It's Not An Either-Or

While the vast majority of workloads will be processed in cloud data centers, on-premises infrastructure is still relevant and will remain necessary for some companies.

"In 2020, we will see the conversation around data storage shift from choosing cloud or on-premise, to deciding which applications an organization should run on-premise," said Coleridge. "It's not a matter of selecting one or the other, but rather, determining how both contribute to a comprehensive IT strategy."

Cisco predicts that over 90 percent of workloads will be processed in the cloud by 2021, but the other 5-10 percent of usage will remain on-premise, and it's not likely that number will fall rapidly in the next few years.

Cloud Migration Stage Will Pass

One reason why that percentage won't drop is that the cloud migration stage is over, according to Coleridge. Cloud migration will decrease in 2020, for the first time since analysts predicted major migration.

"This is because most organizations interested in implementing a cloud strategy have already done so," said Coleridge. "We will now begin to see the migration focus on automation in cloud and edge computing."

How Much Can We Store?

A topic rarely spoken about in the cloud computing world is the limit for storage. More and more data centers are being built around the world, and more information than ever is stored on hard drives, remaining there for decades. Coleridge expects we will start to see storage constraints in the next five to 10 years, which will force the major data storage providers to build tools that discard raw data while keeping primary themes.

View original post here:
HiveIO Top 5 IT Predictions For 2020 - RTInsights

Read More..

Adoption of Cloud-Native Architecture, Part 1: Architecture Evolution and Maturity – InfoQ.com

Key Takeaways

Architecture stabilization gaps and anti-patterns can emerge as part of a hasty microservices adoption.

Understanding the caveats and pitfalls of historic paradigm shifts should enable us to learn from previous mistakes and position our organizations to thrive at the latest technology waves.

It's important to know the pros and cons of different architectural styles like monolithic apps, microservices, and serverless functions.

Architecture evolution follows a repeating cycle: an initial stage of not knowing best practices in the new paradigm accelerates technical debt; as the industry develops new patterns to address the gaps, teams adopt the new standards and patterns.

Consider the architecture patterns as strategies that favor rapid technological evolution while protecting the business apps from volatility.

Technology trends such as microservices, cloud computing, and containerization have been escalating so quickly in recent years that most of these technologies are now part of the day-to-day duties of top IT engineers, architects, and leaders.

We live in a cloud-enabled world. However, being cloud-enabled does not mean being cloud-native. In fact, it's not only possible but dangerous to be cloud-enabled without being cloud-native.

Before we examine these trends and discuss what architectural and organizational changes corporations should implement to take full advantage of a cloud-enabled world, it is important to look at where we have been, where we are, and where we are going.

Understanding the caveats and pitfalls of the historic paradigm shifts should allow us to learn from previous mistakes and position our organizations to thrive on the latest waves of this technology.

As we briefly walk through this evolution, well be exploring the concept of anti-patterns, which are common responses to a recurring problem that are usually ineffective and risk being counterproductive.

This article series will describe the anti-patterns mentioned.

For the last 50 years or so, software architecture and application hosting models have experienced major transformation from mainframes to microservices and serverless.

Figure 1 shows this evolution of architecture models and the paradigms they promoted.

Figure 1: Architecture evolution from mainframe to cloud and microservices

Back in the 70s and 80s, mainframes were the way of computing. Mainframes are based on a centralized data storage and computing model, with basic client terminals used for data entry and data display on primitive screens.

The original mainframe computers used punch cards and most of the computation happened within batch processes. There was no online processing and latency was at 100% as nothing was processed in real time.

Some evolution happened within the mainframe paradigm with the introduction of online processing and user interface terminals. The overall paradigm of a massive central unit of processing contained within the four walls of a single organization still had a "one size fits all" approach, however, and that was only partially able to supply the capabilities needed by most business applications.

Client/server architecture put most of the logic on the server side and some of the processing on the client. Client/server was the first attempt in distributed computing to replace the mainframe as the primary hosting model for business applications.

In the first few years of this architecture, the development community was still writing software for client/server using the same procedural, single-tier principles that they had used for mainframe development, which resulted in anti-patterns like spaghetti code and the blob. This organic growth of software also resulted in other anti-patterns like big ball of mud. The industry had to find ways to stop teams from following these bad practices and so had to research what was necessary to write sound client/server code.

This research effort mapped out several anti-patterns and best-practice design and coding patterns. It introduced a major improvement called object-oriented programming (OOP), which had inheritance, polymorphism, and encapsulation features, along with paradigms to deal with decentralized data (as opposed to a mainframe with one version of the truth) and guidance for how industry could cope with the new challenges.

The client/server model was based on three-tier architecture consisting of presentation (UI), business logic, and data tiers. But most of the applications were written using two-tier models with a thick client encapsulating all presentation, business, and data-access logic, directly hitting the database. Although the industry had started to discuss the need to separate presentation from business from data access, that practice didn't really become vital until the advent of Internet-based applications.

In general, this model was an improvement over the mainframe limitations, but the industry soon ran into its limitations like needing to install the client application on every users computer and an inability to scale at a fine-grained level as a business function.

During the mid-90s, the Internet revolution occurred and a completely new paradigm arrived with it. Web browsers became the client software while web and application servers hosted all the processing and logic. The World-Wide Web (www) paradigm promoted a true three-tier architecture with presentation (UI) code hosted on web servers, business logic (API) on application servers, and the data stored in database servers.

The development community started to migrate from thick (desktop) clients to thin (web) clients, driven mainly by ideas like service-oriented architecture (SOA) that reinforced the need for a three-tiered architecture and fueled by improvements to client-side technologies and the rapid evolution of web browsers. This move sped up time to market and required no installation of the client software. But developers were still creating software as tightly coupled designs, leading to jumble and other anti-patterns.

The industry in response came up with evolved three-tiered architectures and practices such as domain-driven design (DDD), enterprise integration patterns (EIP), SOA, and loosely coupled techniques.

The first decade of the 21st century saw a major transformation in application hosting when hosting became available as a service in the form of cloud computing. Application use cases requiring capabilities like distributed computing, network, storage, compute, etc., became much easier to provision with cloud hosting at a reasonable cost compared to traditional infrastructure. Also, consumers were taking advantage of elasticity of the resources to scale up and down based on the demand. They only needed to pay for the storage and compute resources that they used.

The elastic capabilities introduced in IaaS and PaaS allow for a single instance of a service to scale as needed, eliminating duplication of instances for the sake of scalability. However, these capabilities cannot compensate for the duplication of instances for other purposes, such as having multiple versions, or as a byproduct of monolith deployments.

The appeal of cloud-based hosting is that the dev and ops teams no longer had to worry about server infrastructure. It offered three different hosting options:

PaaS became the sweet spot among the cloud options because it allows developers to host their own custom business application without having to worry about provisioning or maintaining the underlying infrastructure.

Even though cloud hosting encouraged modular application design and deployment, many organizations found it enticing to lift and shift legacy applications that had not been designed to work on an elastic distributed architecture directly to the cloud, resulting in a somewhat modern anti-pattern called "monolith hell".

To address these challenges, the industry came up with new architecture patterns like microservices and 12-factor apps.

Moving to the cloud also presented industry with the challenges of managing the application dependencies on third-party libraries and technologies. Developers started struggling with too many options and not enough criteria for selecting third-party tools, and we started seeing some dependency hell.

Dependency hell can occur at different levels:

Library-based dependency hell is a packaging challenge and the latter two are design challenges. A future article in this series will examine these dependency-hell scenarios in more detail and offer design patterns for avoiding the unintended consequences to prevent any proliferation of technologies.

Software design practices like DDD and EIP have been available since 2003 or so and some teams then had been developing applications as modular services, but traditional infrastructure like heavyweight J2EE application servers for Java applications and IIS for .NET applications didn't help with modular deployments.

With the emergence of cloud hosting and especially PaaS offerings like Heroku and Cloud Foundry, the developer community had everything it needed for true modular deployment and scalable business apps. This gave rise to the microservices evolution. Microservices offered the possibility of fine-grained, reusable functional and non-functional services.

Microservices became more popular in 2013 - 2014. They are powerful, and enable smaller teams to own the full-cycle development of specific business and technical capabilities. Developers can deploy or upgrade code at any time without adversely impacting the other parts of the systems (client applications or other services). The services can also be scaled up or down based on demand, at the individual service level.

A client application that needs to use a specific business function calls the appropriate microservice without requiring the developers to code the solution from scratch or to package the solution as library in the application. The microservices approach encouraged a contract-driven development between service providers and service consumers. This sped up the overall time of development and reduced dependency among teams. In other words, microservices made the teams more loosely coupled and accelerated the development of solutions, which are critical for organizations, especially the business startups.

Microservices also help establish clear boundaries between business processes and domains (e.g., customer versus order versus inventory). They can be developed independently within that vertical modularity known as the "bounded context" in the organization.
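
As a toy illustration of a single bounded context, the sketch below is a hypothetical inventory microservice written with Flask: it owns its own data and exposes one narrow contract that consumer teams code against, without sharing libraries or databases.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# The inventory domain owns this data; other services only see the HTTP contract.
_STOCK = {"sku-123": 42, "sku-456": 0}


@app.route("/inventory/<sku>", methods=["GET"])
def get_stock(sku):
    """The service's entire public contract: stock level for one SKU."""
    if sku not in _STOCK:
        return jsonify({"error": "unknown sku"}), 404
    return jsonify({"sku": sku, "quantity": _STOCK[sku]})


if __name__ == "__main__":
    app.run(port=8080)
```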

This evolution also accelerated the evolution of other good practices like DevOps, and it provided agility and faster time to market at the organization level. Each development team would own one or more microservices in its domain and be responsible for the whole process of designing, coding, deploying to production as well as post-production support and maintenance.

However, similar to the previous architecture models, the microservices approach ran into its own issues.

Legacy applications that had not been designed as microservices from bottom-up started being cannibalized in attempts to force them into a microservices architecture, leading to the anti-pattern known as monolith hell. Other attempts tried to artificially break monolithic applications into several microservices even though these resulting microservices were not isolated in terms of functionality and still heavily depended on other microservices broken out of the same monolithic application. This is the anti-pattern called microliths.

It's important to note that monoliths and microservices are two different patterns, and the latter is not always a replacement for the former. If we are not careful, we can end up creating tightly coupled, intermingled microservices. The right option depends on the business and scalability requirements of an application's functionality.

Another undesired side effect of the microservices explosion is the so-called "Death Star" anti-pattern. Microservices proliferation without a governance model in terms of service interaction and service-to-service security (authentication and authorization) often results in a situation where any service can willy-nilly call any other service. It also becomes a challenge to monitor how many services are being used by different client applications without decent coordination of those service calls.

Figure 2 shows how organizations like Netflix and Twitter ran into this nightmare scenario and had to come up with new patterns to cope with a "death by Death Star" problem.

Figure 2: Death Star architectures due to microservices explosion without governance

Although the examples depicted in figure 2 might look like extreme cases that only happen to giants, do not underestimate the exponential destructive power of cloud anti-patterns. The industry must learn how to operate a weapon that is massively larger than anything the world has seen before. "Great power involves great responsibility," said Franklin D. Roosevelt.

Emerging architecture patterns like service mesh, sidecar, service orchestration, and containers can be effective defense mechanisms against malpractices in the cloud-enabled world.

Organizations should understand these patterns and drive adoption sooner rather than later.

With the emergence of cloud platforms, especially the container orchestration technologies like Kubernetes, service mesh has been gaining attention. A service mesh is the bridge between application services that adds additional capabilities like traffic control, service discovery, load balancing, resilience, observability, security, and so on. It allows the applications to offload these capabilities from application-level libraries and allows developers to focus on business logic.

Some service mesh technologies like Istio also support features like chaos injection so that developers can test the resilience and robustness of their application and its potentially dozens of interdependent microservices.

Service mesh fits nicely on top of platform as a service (PaaS) and container as a service (CaaS), and enhances the cloud-adoption experience with the above-mentioned common platform services.

A future article will delve into the service-mesh-based architectures with discussion on specific use cases and comparison of solutions with and without service mesh.

Another trend that has received a lot of attention in the last few years is serverless architecture, also known as serverless computing. Serverless goes a step further than the PaaS model in that it fully abstracts server infrastructure from the application developers.

In serverless, we write business services as functions and deploy those functions to the cloud infrastructure. Some examples of serverless technologies are AWS Lambda, Spring Cloud Function, Google Cloud Functions, and Microsoft Azure Functions.
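
As an example of the granularity involved, here is a minimal, hypothetical customer-notification function written in the handler style that AWS Lambda and similar platforms expect; the event fields are invented for illustration.

```python
import json


def handler(event, context):
    """Fine-grained business function: acknowledge a notification request."""
    customer_id = event.get("customer_id")
    message = event.get("message", "Your order has shipped.")

    # A real function would hand the message to a notification service here.
    return {
        "statusCode": 200,
        "body": json.dumps({"customer_id": customer_id, "queued": True, "message": message}),
    }
```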

The serverless model sits in between PaaS and SaaS in the cloud-hosting spectrum, as shown in the diagram below.

Figure 3: Cloud computing, containers, service mesh, and serverless

In a similar conclusion to the discussion of monolithic versus microservices, not all solutions should be implemented as functions. Also, we should not replace all microservices with serverless functions, just as we shouldn't replace or break down all monolithic apps into microservices. Only fine-grained business and technical functions like user authentication or customer notification should be designed as serverless functions.

Depending on our application functionality and non-functional requirements like performance, scalability, and transaction boundaries, we should choose the appropriate monolith, microservices, or serverless model for each specific use case. It's typical that we may need to use all three of these patterns in a solution architecture.

If not designed properly, serverless solutions can end up becoming nanoliths, where each function is tightly coupled with other functions or microservices and cannot operate independently.

Complementary trends like container technologies came out around the same time as microservices to help with deploying the services and apps in microserver environments that offered true isolation of business services and scalability of individual services. Container technologies like Docker, containerd, rkt, and Kubernetes can complement the microservices development very well. Nowadays, we cannot mention one - microservices or containers - without the other.

As mentioned earlier, it's important to know the pros and cons of the three architectural styles: monolithic apps, microservices, and serverless functions. A written case study on monolith versus microservices describes in detail one decision to avoid microservices.

Table 1 highlights the high-level differences between these three options.

Note: Sometimes teams artificially break down related functions into microservices and experience the limitations of the microservices model.

Table 1: Service architecture models and when to use or avoid them. For example, with serverless the application is completely shut down when there's no traffic, and dev teams don't have to care about the underlying infrastructure.

It's important for us to keep an eye on the anti-patterns that may develop in our software architecture and code over time. Anti-patterns not only cause technical debt but, more importantly, they could drive subject-matter experts out of the organization. An organization could find itself with only the people who don't bother about the architecture deviations or anti-patterns.

After the brief history above, let's focus on the stabilization gaps and anti-patterns that can emerge as part of a hasty microservices adoption.

Specific factors like the team structure in an organization, the business domains, and the skillsets in a team determine which applications should be implemented as microservices and which should remain as monolith solutions. But we can look at some general considerations for choosing to design a solution as a microservice.

The Eric Evans book, Domain-Driven Design (DDD), transformed how we develop software. Eric promoted the idea of looking at business requirements from a domain perspective rather than from one based on technology.

The book considers microservices to be a derivation of the aggregate pattern. But many software development teams are taking the microservices design concept to the extreme, by attempting to convert all of their existing apps into microservices. This has led to anti-patterns like monolith hell, microliths, and others.

Following are some of the anti-patterns that architecture and dev teams need to be careful about:

We'll look in more detail at each of these anti-patterns in the next article.

To address the stabilization gaps and anti-patterns found in different application hosting models, the industry has come up with evolved architecture patterns and best practices to close the gaps.

These architecture models, stabilization gaps and patterns are summarized in the table below.

Table 2: Application hosting models, anti-patterns, and patterns

Figure 4 shows all these architecture models, the stabilization gaps in the form of anti-patterns, and the evolved design patterns and best practices.

Figure 4: Architecture evolution and application-hosting models

Figure 5 lists the steps of architecture evolution, including the initial stage of not knowing the best practices in the new paradigm, which accelerates the technical debt. As the industry develops new design patterns to address the stabilization gaps, teams adopt the new standards and patterns in their architecture.

Figure 5: Architecture models and adoption of new patterns

IT leaders must protect their investment against the ever-growing rapid transformation of technologies while providing a stable array of business applications running on a constantly evolving and optimizing technological foundation. IT executives across the globe have been dealing with this problem more and more frequently.

They and we should embrace the evolution of technology but not at the price of constant instability of the apps supporting the business. Disciplined, systematic architecture should be able to deliver just that. Consider the patterns discussed in this article series as strategies that favor rapid technological evolution while protecting the business apps from volatility. Let's explore how that can be done in the next article.

See the rest here:
Adoption of Cloud-Native Architecture, Part 1: Architecture Evolution and Maturity - InfoQ.com

Read More..

Cloud, AI, and personalisation: Key issues to consider – www.computing.co.uk

According to analysts at IDC, worldwide spending on AI systems will hit $35.8 billion by the end of 2019, a year-on-year increase of 44 percent. Much of that growth will come from applications of AI in the cloud and online, because of what IDC calls a "natural, evolutionary symbiosis between AI and the internet".

However, parallel to that growth are rising public concerns over a broad range of related issues: privacy, transparency, liability, security, bias, and the unknown workings of so-called 'black box' solutions.

This is partly due to public worries over the security of their private data on platforms such as Facebook - especially in the wake of the Cambridge Analytica scandal. Those concerns have hardly been helped by more recent stories, such as the November 2019 news that the Facebook mobile app has been tracking users' faces as they look at their feeds.

Policy matters

At a recent Westminster eForum event on UK AI policy and skills in London, one speaker raised his own concerns about the extent to which our lives are becoming influenced and managed by offshore algorithms, largely written in Silicon Valley and run in Californian data centres, such as Facebook's, Google's, or Amazon's.

These algorithms increasingly make decisions about what we watch and read, based on our previous likes and dislikes, implying that we are being streamed like primary school children into different groups - often without us knowing - in order to sell us advertising.

That speaker was none other than Roger Taylor, Chair of the government's new Centre for Data Ethics and Innovation. He said, "We now have a media world in which anyone can put something out there and a Californian algorithm decides whether or not to distribute it to every household, or only to certain households, in our country. And there is no mechanism in that process where anybody has any degree of real social accountability."

Despite the UK's Data Protection Act 2018 and Europe's General Data Protection Regulation (GDPR), which came into force in May 2018, nearly half of UK consumers (48 percent) have no idea how brands are using their data, according to a recent survey by the Chartered Institute of Marketing (CIM).

As a result, canny enterprises are beginning to recognise that ethics, data protection and consumer rights could be real competitive differentiators in terms of winning users' loyalty and trust.

A personal approach?

Personalisation is part of this particular knot of challenges. While personalising content, such as information feeds, to individual users might be useful or help to create a more direct or loyal relationship between a service and its users, it may screen out other data that might have been of equal interest to that person.

More, personalisation implies underlying trust, privacy, and ethical concerns: clearly a platform is learning about each user, but what does it do with that data? Who is it shared with? And to what end?

As the Internet of Things grows, with greater intelligence, AI, and inference abilities being embedded into smart devices, those fears can only deepen. For example, last year a Consumers' Association survey found that one smart TV sent information to 700 different IP addresses in just 15 minutes - invisibly to the user.

The personal enterprise

But what about the use of cloud services and AI within the enterprise itself? A Computing Research survey of 150 IT leaders across every type of medium to large enterprise in the UK found that access to AI and automation capabilities was either a major or significant motivation for shifting back-office applications into the cloud for 64 percent of respondents. A further 19 percent regarded it as important.

Nearly as many respondents said that AI and automation access had been achieved either extremely or very successfully in the cloud, with a further 25 percent indicating some success.

Gaining customer and employee insights are of similar importance to respondents, according to the survey. Customer insights were cited as major or significant motivations by over 60 percent of IT leaders, with 20 percent indicating some importance. Meanwhile, 57 percent acknowledged a major or significant motivation in gaining employee insights, with 23 percent seeing this as important.

However, when it came to moving back-office functions such as Finance, Accounting, and Human Capital Management (HCM) into the cloud, personalisation was not a massive driving factor for IT leaders, found the Computing survey.

Fifty-eight percent of respondents identified it as either very important or important, but those figures were significantly smaller than the responses for business insights, reliability, customer service, the applications always being up to date, security, or the overall user experience, among other factors.

This article is from Computing's Cloud ERP Spotlight, hosted in association with Workday.

See the original post:
Cloud, AI, and personalisation: Key issues to consider - http://www.computing.co.uk

Read More..

Global Medical Device Security Solutions Market 2020-2024 | Increasing Demand for Cloud-Based Solutions to Boost the Market Growth | Technavio -…

LONDON--(BUSINESS WIRE)--The global medical device security solutions market size is expected to grow by USD 301.04 million during 2020-2024, according to the latest market research report by Technavio. Request a free sample report

Healthcare organizations are gradually moving toward creating a connected hospital infrastructure with the aid of IoT to provide timely and improved care. IoT is increasingly leveraged in the healthcare industry through various applications, including telemedicine, connected imaging, medication management, and inpatient monitoring. However, the increasing use of connected medical devices coupled with increasing deployment of IoT has made computer systems more vulnerable to cybersecurity threats. This is prompting stakeholders in the healthcare sector to increase their focus on improving network security and forming robust healthcare IT infrastructure. Thus, with the growing adoption of IoT and connected devices in healthcare industry, the demand for medical device security solutions is expected to rise considerably during the forecast period.

To learn more about the global trends impacting the future of market research, download a free sample: https://www.technavio.com/talk-to-us?report=IRTNTR40164

As per Technavio, the increasing demand for cloud-based solutions will have a positive impact on market growth over the forecast period. This research report also analyzes other significant trends and market drivers that will influence market growth over 2020-2024.

Medical Device Security Solutions Market: Increasing Demand for Cloud-based Solutions

The deployment of cloud computing in the healthcare industry has increased considerably in recent years as it offers business agility, privacy, and security at lower costs. Cloud computing quickens the access of electronic medical records and enables the storage of clinical statistical data related to hospitals and clinics. Furthermore, factors including the rising need to comply with regulations, growing penetration of high-speed networks, and rising digital awareness are increasing the adoption of cloud-based solutions in the healthcare sector. With more healthcare organizations upgrading to cloud-based systems, the demand for cloud-based medical device security solutions is anticipated to rise considerably during the forecast period.

"Growing concerns about healthcare data, stringent government regulations, and rising demand for self-medication and homecare medical devices are expected to boost the medical device security solutions market growth during the forecast period," says a senior analyst at Technavio.

Register for a free trial today and gain instant access to 17,000+ market research reports.

Technavio's SUBSCRIPTION platform

Medical Device Security Solutions Market: Segmentation Analysis

This market research report segments the medical device security solutions market by device (wearable and external medical devices, hospital medical devices, and internally embedded medical devices) and geography (APAC, Europe, North America, MEA, and South America).

North America led the market in 2019, followed by Europe, APAC, South America, and MEA, respectively. The growth of the market in North America can be attributed to the vast adoption of wired and wireless networked medical devices. Factors such as rising geriatric population, growing demand for better healthcare facilities, and increasing cyberattacks on healthcare organizations are leading the region to contribute the highest incremental growth during the forecast period.

Technavio's sample reports are free of charge and contain multiple sections of the report, such as the market size and forecast, drivers, challenges, trends, and more. Request a free sample report

Some of the key topics covered in the report include:

Market Landscape

Market Sizing

Five Forces Analysis

Market Segmentation

Geographical Segmentation

Market Drivers

Market Challenges

Market Trends

Vendor Landscape

About Technavio

Technavio is a leading global technology research and advisory company. Their research and analysis focus on emerging market trends and provides actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions.

With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies, spanning across 50 countries. Their client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.

If you are interested in more information, please contact our media team at media@technavio.com.

View post:
Global Medical Device Security Solutions Market 2020-2024 | Increasing Demand for Cloud-Based Solutions to Boost the Market Growth | Technavio -...

Read More..

Cloud computing IaaS in Life Science Market Global Industry Demand, Scope and Strategic Outlook, Growth Analysis, Business Opportunities and Future…

The Global Cloud computing IaaS in Life Science Market report gives CAGR values along with their fluctuations for the forecast period. The report contains end-to-end analysis and estimation of various market-related factors that are extremely important for better decision-making. The Cloud computing IaaS in Life Science report gives a complete explanation of the market definition, market segmentation, competitive analysis, and key developments in the industry. This market research report is framed with advanced tools for gathering, recording, evaluating, and analysing market data. The report contains information that can be essential when it comes to dominating the market or making a mark in the market as a new entrant.

The study also covers R&D status, channel capabilities, and regional growth. What's more, the report offers market estimates and market share for the forecast period. The research report holds information sourced from the primary and secondary research of industry specialists and from in-house databases. The report covers geographical spread, market shares, key strategies, development frameworks, and the different financial structures of the industry.

Key players cited in the report:

Cleardata Networks, Dell Global Net Access (GNAX), Carecloud Corporation, Vmware, Carestream Health, IBM Corporation, Iron Mountain, Athenahealth, and Oracle Corporation

Request your Sample PDF Report: @ https://www.verifiedmarketresearch.com/download-sample/?rid=4625

Competitive Landscape

Key players of the global Cloud computing IaaS in Life Science market are profiled on the basis of various factors, which include recent developments, business strategies, financial strength, weaknesses, and main business. The Cloud computing IaaS in Life Science report offers a special assessment of top strategic moves of leading players, such as mergers and acquisitions, collaborations, new product launches, and partnerships.

Cloud computing IaaS in Life Science Market: Scope of the Report

Along with the market overview, which comprises the market dynamics, the chapter includes a Porter's Five Forces analysis, which explains the five forces: namely buyers' bargaining power, suppliers' bargaining power, threat of new entrants, threat of substitutes, and degree of competition in the Cloud computing IaaS in Life Science Market. It explains the various participants, such as system integrators, intermediaries and end-users within the ecosystem of the market. The report also focuses on the competitive landscape of the Cloud computing IaaS in Life Science Market.

Table of Content

1 Introduction of Global Cloud computing IaaS in Life Science Market

2 Executive Summary

3 Research Methodology of Verified Market Research

4 Global Cloud computing IaaS in Life Science Market Outlook

5 Global Cloud computing IaaS in Life Science Market, By Deployment Model

6 Global Cloud computing IaaS in Life Science Market, By Solution

7 Global Cloud computing IaaS in Life Science Market, By Vertical

8 Global Cloud computing IaaS in Life Science Market, By Geography

Overview, North America, U.S., Canada, Mexico, Europe, Germany, U.K., France, Rest of Europe, Asia Pacific, China, Japan, India, Rest of Asia Pacific

9 Global Cloud computing IaaS in Life Science Market Competitive Landscape

10 Company Profiles

Complete Report is Available @ https://www.verifiedmarketresearch.com/product/

Additional Offerings:

Econometric modeling

Acquisition, divestment, and investment analysis

Analysis of business plans

Patent analysis

Positioning and targeting analysis

Demand forecasting

Analysis of product and application potential

Highlights of TOC:

Market Overview: It starts with the product overview and scope of the global Cloud computing IaaS in Life Science market and later gives consumption and production growth rate comparisons by application and product respectively. It also includes a glimpse of the regional study and Cloud computing IaaS in Life Science market size analysis for the review period 2014-2026.

Company Profiles: Each company profiled in the report is assessed for its market growth keeping in view vital factors such as price, Cloud computing IaaS in Life Science market gross margin, revenue, production, markets served, main business, product specifications, applications, and introduction, areas served, and production sites.

Manufacturing Cost Analysis: It includes industrial chain analysis, manufacturing process analysis, the proportion of manufacturing cost structure, and the analysis of key raw materials.

Market Dynamics: Readers are provided with a comprehensive analysis of Cloud computing IaaS in Life Science market challenges, influence factors, drivers, opportunities, and trends.

Market Forecast: Here, the Cloud computing IaaS in Life Science report provides consumption forecast by application, price, revenue, and production forecast by product, consumption forecast by region, production forecast by region, and production and revenue forecast.

This report can be customized to meet your requirements. Please connect with our representative, who will ensure you get a report that suits your needs.

We also offer customization of the report based on specific client requirements:

Free country-level analysis for any 5 countries of your choice. Free competitive analysis of any 5 key market players. Free 40 analyst hours to cover any other data point.

About Us:

Verified Market Research has been providing Research Reports, with up-to-date information and in-depth analysis, for several years now, to individuals and companies alike that are looking for accurate Research Data. Our aim is to save your Time and Resources, providing you with the required Research Data, so you can only concentrate on Progress and Growth. Our Data includes research from various industries, along with all necessary statistics like Market Trends, or Forecasts from reliable sources.

Contact Us: Mr. Edwyne Fernandes

Call: +1 (650) 781 4080

Email: [emailprotected] https://www.linkedin.com/company/verified-market-research

Read the original here:
Cloud computing IaaS in Life Science Market Global Industry Demand, Scope and Strategic Outlook, Growth Analysis, Business Opportunities and Future...

Read More..

How much cloud does an IT disaster recovery plan need? – TechTarget

To avert disaster, organizations look at all sorts of combinations of cloud-based and on-premises resources. It's understandable to an extent, but every move to advance your resiliency comes with complications, costs and catches.

A solid IT disaster recovery plan will almost certainly include some coverage from a cloud provider. Valuable information and mission-critical applications in your data center are protected to whatever degree your organization can keep them safe. And that will be fine -- until it's not. In a moment of crisis, where do you turn? Unless you've got a reasonably sophisticated second data center elsewhere, the business is in considerable peril.

Those moments of emergency are why cloud computing is so appealing for disaster recovery (DR), as cloud expert Brian Kirsch explains in this handbook's lead article. The idea is that you sync your data and have some VMs ready to go in an environment managed by a trusted cloud provider. If your data center fails, then those cloud-based resources come riding to the rescue -- right?

Kirsch explains that this premise is correct, but only if you've gone to the trouble -- and the expense -- to do things properly. Your IT disaster recovery plan needs to be thorough enough that you are sure your data is not just protected from whatever afflicted your on-premises environment, but also quickly recoverable from its off-premises safe haven. Doing this is neither simple nor cheap.
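
As a small, hedged example of the "sync your data" half of that work, the boto3 sketch below copies EBS snapshots from a primary region to a second region that would serve as the recovery site. The region names are placeholders, and a real plan would filter snapshots, avoid re-copying, and verify that the copies can actually be restored.

```python
import boto3

PRIMARY_REGION = "eu-west-1"    # hypothetical production region
DR_REGION = "eu-central-1"      # hypothetical recovery region

source = boto3.client("ec2", region_name=PRIMARY_REGION)
target = boto3.client("ec2", region_name=DR_REGION)

# Copy every snapshot we own into the recovery region.
snapshots = source.describe_snapshots(OwnerIds=["self"])["Snapshots"]
for snapshot in snapshots:
    copy = target.copy_snapshot(
        SourceRegion=PRIMARY_REGION,
        SourceSnapshotId=snapshot["SnapshotId"],
        Description=f"DR copy of {snapshot['SnapshotId']}",
    )
    print(f"{snapshot['SnapshotId']} -> {copy['SnapshotId']}")
```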

DR in the cloud is possible, and, in most cases, it is perfectly sensible. Just don't expect it to be easy.

Read the original post:
How much cloud does an IT disaster recovery plan need? - TechTarget

Read More..

Moving to the cloud – are you doing it right? – TechRadar

The benefits of choosing the cloud are clear; cloud services can offer much-needed flexibility, scalability and increased productivity for businesses. However, for companies considering moving IT infrastructure to the cloud, ongoing planning and execution is essential in ensuring success.

Russell Barley is a Cloud Architect at 1&1 IONOS.

When it comes to cloud migration, the first decision businesses need to make is choosing the right solution. Public cloud, where a third party provider hosts and looks after cloud storage management of data, is often deemed the most popular choice. However, no two businesses are the same, and every workload, employee and team has different requirements.

To create a successful cloud strategy, all businesses should explore hybrid and multi-cloud options that can adapt to suit business needs. Enterprise Cloud, a corporate cloud solution that offers a modern IaaS platform, could be the right fit. As it's highly available, secure, reliable, and has fast software-defined networking, it's often a good choice for businesses.

However, before making any decision, businesses must ensure they invest the time in truly understanding the productivity and efficiency benefits.

The role of any cloud architect is to lead the effort in making sure that the right plan and steps are put in place to make a migration process as effective as possible. A key part of successful migration is checking and confirming that the cloud is working hard for the business's needs.

From an infrastructure perspective, the migration process is straightforward: analyse workloads, servers and applications; decide on the resources and cloud databases needed and deploy them; monitor the resources, and scale up or down accordingly.

Then, assess the load cycle of the applications and understand what's required. If there are seasonal peaks, or if applications don't run out of hours, map the details and architecture for those usage cycles. Shut down systems when they're not needed, and scale systems during on- and off-peak times. If that sounds like an intensive maintenance job, put automation tools and cloud logging in place to do it.
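
A simple example of that kind of automation, sketched with boto3: stop every running EC2 instance carrying a hypothetical "office-hours" schedule tag, which a scheduler (cron, a serverless function, or similar) could invoke each evening. The tag names are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")


def stop_office_hours_instances(tag_key="Schedule", tag_value="office-hours"):
    """Stop running instances that are only needed during working hours."""
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": f"tag:{tag_key}", "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        instance["InstanceId"]
        for reservation in reservations
        for instance in reservation["Instances"]
    ]

    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids


if __name__ == "__main__":
    print("Stopped:", stop_office_hours_instances())
```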

We can also take it one step further. Is there a need for the underlying operating system to be present for all your business applications anymore? Is containerisation a better solution for applications and workloads? Is there a requirement to reduce the consumption of resources? If so, maybe a business should explore Managed Kubernetes, and utilise the host of PaaS offerings that cloud providers have available.

For EU businesses, choosing an EU cloud hosting provider can help ensure that business data is secure. While the EU has long been at the forefront of data protection worldwide and has directed much of its legislative effort towards ensuring consumer data protection through GDPR, the rights of European Cloud users are now being called into question.

This is due to the US Cloud Act, passed by Congress in early 2018, which allows US courts to grant warrants to law enforcement agencies forcing the disclosure of customer data stored outside of the US.

This applies to EU providers that are subsidiaries of US companies and EU-based companies that have a subsidiary in the US. The legislation directly contradicts GDPR, and can leave companies facing the situation of breaching the US CLOUD Act on one hand, or GDPR on the other, when asked to disclose protected data.

With that in mind, for EU companies, the current solution is to choose cloud providers based in the EU that do not store or process data anywhere other than European datacentres. Providers that are subject to EU law must only act in accordance with GDPR, and therefore there is no danger of being obliged to disclose personal data on the basis of the CLOUD Act.

One of the most common cloud migration strategies is to migrate everything in a lift and shift approach, and then to re-factor, re-architect and re-strategise. However, businesses often migrate, but then embark on other projects, and don't fully complete the transformation. The end result can be a poorly designed infrastructure, limited cloud storage and functionality, and a cost that is higher than expected or often budgeted for.

Another mistake made is taking all requirements and rules forward from the pre-cloud infrastructure. It's important to re-ask every question that helped lead to the current infrastructure, as most decisions were made within the boundaries of that set-up. Re-evaluate the choices, and take the opportunity to re-align your technology to business needs and data strategies.

In addition, over-provisioning - where businesses choose increased cloud capacity that isn't immediately needed - is a common issue, as companies act as if they own the hardware themselves. While over-provisioning isn't necessarily a bad thing, it's important to knowingly make that choice to avoid unnecessary expenditure, and to continuously re-adjust resources depending on what's needed.

Finally, if a business has a multi-cloud strategy, a priority focus will be to avoid vendor lock-in. Managed Kubernetes can help, but it's essential to make workloads as portable as possible, should there be a desire to utilise multiple cloud providers.

When it comes to cloud migration, preparation, planning and ongoing modifications are essential to ensure your business feels the full benefits of the cloud. A business must truly embrace cloud technologies, and shouldn't be afraid to explore multi- and hybrid-cloud options to make cloud-based systems and services the best they can be.

Russell Barley is a Cloud Architect at 1&1 IONOS.

See more here:
Moving to the cloud – are you doing it right? - TechRadar

Read More..