
What is cloud native? – BusinessCloud

Today, people are talking more and more about cloud-native apps and cloud-native development. But what does this actually mean? Many applications are hosted on public cloud resources, but this alone doesn't mean they're cloud-native.

A definition of cloud-native

A cloud-native app is one which has been designed solely for the cloud. This means cloud computing has been leveraged in every element of its design, which separates it from an application that's merely been lifted and shifted (moved onto the cloud as-is) or cloud-enabled (partially built on cloud).

This cloud-first design approach brings many benefits, incorporating the latest technologies and practices, such as DevOps, microservices and containers.

This approach is whats known as cloud-native development.

Elements of cloud-native development

DevOps

DevOps is a development and delivery philosophy which holds that the dev and ops teams should be integrated for faster, more reliable releases.

In practice, this means utilizing cloud automation technologies to reduce manual work and guard against human error.

The end goal of DevOps is to reach continuous integration/continuous delivery (CI/CD), in which new features can be tested and deployed with minimal human input.
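
To make the idea concrete, here is a minimal, hypothetical sketch in Python of the gate at the heart of a CI/CD pipeline: nothing ships unless the test suite passes. The `tests/` path and the pytest runner are assumptions for illustration, not a prescribed setup.

```python
import subprocess
import sys

def run_tests() -> bool:
    # Run the test suite; a non-zero exit code marks the build as failing.
    # Assumes pytest is installed and tests live under tests/.
    result = subprocess.run([sys.executable, "-m", "pytest", "tests/"])
    return result.returncode == 0

def deploy() -> None:
    # Placeholder deploy step: a real pipeline would push an image or
    # call the hosting provider's deployment API here.
    print("Tests passed; deploying new build...")

if __name__ == "__main__":
    if run_tests():
        deploy()
    else:
        sys.exit("Tests failed; deployment blocked.")
```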

Microservices

Microservices is a cloud architecture approach in which applications are made up of many smaller cloud-based services. This is in contrast to monolithic architectures, in which all elements of an application are hosted centrally and are generally inseparable.

Microservice applications are more costly to develop in the short run but, once built, offer far more flexibility and scaling options than their monolithic counterparts.
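
As an illustration, a microservice can be as small as a single-purpose HTTP endpoint. The sketch below, using only Python's standard library, shows a hypothetical pricing service that would be deployed and scaled independently of the rest of an application; the service name and payload are invented for the example.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class PriceHandler(BaseHTTPRequestHandler):
    # One narrowly scoped responsibility: quoting prices. Other services
    # (catalog, checkout, auth) would run as separate deployments.
    def do_GET(self):
        body = json.dumps({"sku": "demo-sku", "price_usd": 9.99}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each microservice listens on its own port and can be scaled on its own.
    HTTPServer(("0.0.0.0", 8080), PriceHandler).serve_forever()
```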

Serverless

Serverless applications are not, as the name suggests, wholly serverless. Instead, the application's requests are handled by servers provisioned on an on-demand, per-request basis, as opposed to running on permanent cloud servers.

This comes with a host of benefits, including reduced latency, faster time to market, and lower production costs, since the traditional costs of infrastructure provisioning are bypassed completely.
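
For example, on a function-as-a-service platform such as AWS Lambda, the developer uploads only a handler function; the platform provisions a worker per request and tears it down when idle. The sketch below assumes the common Lambda proxy-integration request shape; the field names are illustrative of that convention rather than taken from this article.

```python
import json

def handler(event, context):
    # The platform invokes this function once per request; there is no
    # permanently running server to provision, patch, or pay for at idle.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```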

Containers

Containerised applications run on containers, a newer alternative to the traditional cloud server, the VM.

Containers are much lighter than VMs in terms of required compute resources and are more environment agnostic, meaning they can be deployed on a greater range of infrastructures without undergoing change.

This means that, in many situations, containers offer increased flexibility and lower total cost.
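
As a small illustration of that portability, the hypothetical snippet below uses the Docker SDK for Python to run an image that would run unchanged on a laptop, a VM, or a cloud host; it assumes a local Docker daemon and the `docker` package are available.

```python
import docker  # pip install docker; assumes a local Docker daemon is running

client = docker.from_env()

# The image carries its own userland, so it runs identically across hosts;
# unlike a VM, it shares the host kernel rather than booting its own OS.
output = client.containers.run(
    "python:3.12-slim",
    ["python", "-c", "print('hello from a container')"],
    remove=True,  # delete the container once it exits
)
print(output.decode().strip())
```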

Why cloud-native applications and cloud-native deployments will be key moving forward

Although some of the above cloud-native deployment methods are mutually exclusive, in general, these approaches confer many of the same advantages, which are already indispensable in today's competitive landscape.

Compared to traditionally designed applications, and even to many cloud-enabled or lifted-and-shifted applications, cloud-native is:

More scalable

Solely cloud-based applications, and particularly microservice-based applications, can scale more easily. In the case of microservices, the ability to scale one service without scaling another is especially impactful, avoiding the cost of scaling every other element in tandem, as a monolithic application would require.

Easier to manage

Cloud-native infrastructure is more geared towards automation and reduced management costs. Serverless is the most obvious example of this trend, with applications being uploaded as functions only, and provisioning taken care of automatically.

Quicker pace of development

Cloud-native applications are better suited to DevOps, which seeks to automate testing, building and deployment. This, in turn, leads to a shorter overall time to market.

More reliable

Many cloud-native technologies are able to cope with faults far better than traditional technologies. Kubernetes, one of the most widely used container orchestration tools, automatically detects and heals non-functional containers within a cluster.

Cloud-native technologies also allow faults to be more easily isolated.

Trends for the future

Increasingly, cloud-native is seen as the required benchmark for competitive development. And although many may cut corners with cloud-ready or cloud-enabled, the clear advantages of true cloud-native development and applications will ensure cloud-native becomes the norm in the years to come.


Akash Network, the World’s First Decentralized Cloud Computing Marketplace and the First DeCloud for DeFi, Develops Critical IBC Relayer for…

"This ability for chains to transact and interoperate will be revolutionary for the industry"

Essential to launching the IBC protocol and the only way users will be able to use IBC, the Relayer is the user interface that enables all transfers and transactions on IBC. In development for over three years, IBC is the flagship feature of the Cosmos Network. For crypto and blockchain, where interoperability and composability are essential for continued growth for decentralized sectors like DeFi, IBC is the most promising and production-ready solution.

Akash will be one of the first networks in the world to integrate with IBC and IBC Relayer, through the early March 2021 launch of Akash MAINNET 2, the first viable decentralized cloud alternative to centralized cloud providers such as Amazon Web Services, Microsoft Azure, and Google Cloud.

For media inquiries, please contact Kelsey Ruiz at (916) 412-8709 or kelsey(at)akash(dot)network.

About Akash Network:

Akash Network is developing the world's first and only decentralized cloud computing marketplace, enabling any data center and anyone with a computer to become a cloud provider by offering their unused compute cycles in a safe and frictionless marketplace. Akash DeCloud greatly accelerates scale, efficiency, and price performance for DeFi, decentralized organizations, and high-growth industries like machine learning/AI.

Through Akash's platform, developers can easily and securely access cloud compute at a cost currently 2x-3x lower than centralized cloud providers (AWS, Google Cloud, and Microsoft Azure). The platform achieves these benefits by integrating advanced containerization technology with a unique staking model to accelerate adoption. For more information, visit: https://akash.network/

SOURCE Akash Network

https://akash.network/


The complexities of moving to the cloud | Industry Trends | IBC – IBC365

Media organisations are moving at different paces and there is no one-size-fits-all technical approach. While some have already moved lock, stock and barrel to the cloud, others face investment, training and technology dilemmas about how best to proceed.

"Cloud is a one-way street and something broadcast CTOs have to embrace," says Baskar Subramanian, Co-founder, Amagi. "It's a question of what to move and how fast."

The poster child for this is Discovery, which began its wholesale move to AWS Cloud in 2016.

"Previously we'd have to buy a load of servers and install them, we'd need a file transfer system, and we wouldn't know what the return on investment would be over time," explains Simon Farnsworth, CTO Broadcast Technology & Operations, at an SDVI-hosted webinar. "Now, we can very accurately cost things like major new projects. It has become a lot less emotional and more binary since we can accurately predict cost."

He estimates Discovery's cloud-based supply chain has already saved the company $100m. Cloud is also claimed to have shaved $1bn in synergies from Discovery's 2018 acquisition of Scripps Networks.

"Historically, [when Discovery entered] new territories we had siloed ops teams with siloed tech stacks and siloed workflows, but we're able to standardise that now," Farnsworth says. "All the content for Discovery+ is in the cloud and it's just a question of feeding it through to our own operated platforms or to affiliates. We need to be fast. What [cloud] has allowed us to do is generate the same amount of content while investing a truckload in new product."

Comcast Technology Solutions, perhaps the largest service provider in the world, is about to make a major acceleration in moving its own supply chain (though not yet including Sky) to the cloud.

"We want to move into the cloud for flexibility and speed," explains Bart Spriester, VP and GM of Content and Streaming. "To provide services to spin up and down, and we need it to be usage based. We need to remove the integration lead time of on-prem solutions and remove capital approval cycles and slow software deployment."

In 2020 the company syndicated 66 million minutes of content to partners like Cox and Rogers and more than 170 affiliates. "With this volume we need to take a lot of friction out of the system," Spriester says. "We think there will be a huge benefit to moving this out to public cloud infrastructure."

The benefits of moving the supply chain to the cloud are clear. They include the ability to scale up and down rapidly and to only pay for resources when required. Enterprise-scale operations can be run with greater efficiency and accuracy than before.

"Changing from a custom on-premises environment, where different processes are done on different vendors' kit, to using common tools in an open source environment gives a much more consistent view of the operational state of the platform," explains Tony Jones, principal technologist, MediaKind. "The way you build and configure systems is declarative, meaning that you state what you want a system component to be and the system works out how to get there."

It is the deterministic behaviour of systems in a cloud environment that means broadcasters can predict with far greater certainty exactly what operating a service should cost.

Add to that a microservices approach to development and deployment and broadcasters can upgrade equipment and introduce new features far faster, more economically and more flexibly than before.

"If broadcasters want to have a healthy future in competition with SVOD vendors they have got to think along those lines," urges Jones.

One destination, many paths

These arguments may be well known, but getting there is not straightforward for the majority of broadcasters. In the negative column are cost, complexity and cultural inertia.

"People are on different pathways," says Peter Sykes, Strategic Technology Development Manager at Sony Europe. "At one end you have more traditional organisations making the move from SDI to IP as a first step, while others are now moving to combine IP with cloud. Media companies know they have to reach new audiences but can't increase resources and in some cases are having to reduce capital outlays."

This financial squeeze is one reason for a phased migration to cloud. Many broadcasters put their toe in first by moving disaster recovery operations. This has sped up since Covid-19 underlined the necessity for business continuity.

Another step might be to take less critical workflows like media processing and VOD to cloud. For others it makes sense to move complete sub-systems into a public cloud environment rather than a component-based approach. These systems are typically operated as-a-service by external providers.

"They could move a complete broadcast chain encompassing playout, compression and multiplexing or ABR packaging as one functional unit," Jones says. "There's not really any value to the broadcaster to build that themselves, but if they choose to take it prepackaged it's an operationally easier environment and there's just one [vendor] to talk to if there's a problem."

The pace also differs depending on delivery technology. "The traditional DTH anchor of broadcast delivery is moving slower than OTT DTC service launches, which are more likely to be cloud deployments," says Richard Mansfield, MediaKind's Streaming Director. "Broadcasters not ready to migrate their entire infrastructure are making this their first step."

Arguably, the biggest issues hindering broadcaster moves to the cloud surround skills and mindset.

"The primary issue is cultural mindset more so than technology," says Subramanian. "It's a question of being comfortable with a particular way of doing things and a reluctance to do things differently."

Broadcasters used to plugging in individual components using SDI, or its IP version SMPTE 2110, face difficulties in working out how to apply that to the cloud.

"Imagine you picked a handful of vendors, one of whom deploys into AWS virtual machines, one deploys into a Kubernetes environment and another one into a Kubernetes service in a cloud provider," posits Jones. "How you integrate that as a complete system is a nightmare and probably beyond most broadcast engineers."

"Not only do different vendor applications need to interface together, but you also need to consider whether the deployment environments they work in are compatible with each other," he adds. "A lot of legacy software apps that were built to run on premises have been adapted for the cloud but are not cloud native. There are no standards for this deployment. It's a wild west."

"We have seen some big network operators that have been able to grasp that change, but it does take quite a big investment."

IT training needed

Related to this is the need for a whole new set of IT skills required of broadcast engineers.

"Broadcasters launching OTT services in the cloud are often doing so using an IT team," says Mansfield. "In the long run, this separation is insane. They are essentially doing the same thing as the broadcast team but delivering to a different output medium. To be successful those teams need to be merged together as one operation."

For Subramanian the answer lies in better education about the total cost of ownership of cloud workloads. The finance, operations and tech departments are all used to a capex model which, when suddenly shifted to an opex-driven model, catches them off guard. In some senses the cloud complicates life because there are so many different pieces of the puzzle.

In one simple illustration, buying a server for on-premises use versus putting a server on the cloud cannot be compared like for like. "With the cloud model you need to consider the networking gear, the data centre, the air con, power and performance," he says.

Aside from Kubernetes, which is an orchestration layer adopted by all major cloud platforms, Subramanian agrees that the internet is fracturing away from the broadcast safety net of unified standards. He doesn't think this is a problem. "There will be a plurality of standards that we all need to support, including NDI, SRT, RIST and Zixi, but this multiplicity breeds innovation."

"Fundamentally what is missing from the whole ecosystem is better education to create business models. We have seen customers cross that bridge once they understand the significant benefits."

Somehow, Discovery seems to have done this. "We managed to flip the [internal] conversation from finance looking at the bottom line to looking at metrics," explains Farnsworth. "How much volume is flowing through? What is the reliability like? What is the cost, so we can start delivering KPIs? It's a much more straightforward conversation and means we can concentrate on creating a better consumer experience rather than how we make it work technically."


Hundreds of Thousands Immigration and COVID Records Exposed in Jamaica – Security Boulevard

Jamaica just experienced a massive data breach that exposed the immigration and COVID-19 records of hundreds of thousands of people who visited the island over the past year. Much of the information found on the exposed server was from Americans.

According to TechCrunch, the Jamaican government contractor Amber Group left a storage server on Amazon Web Services (AWS) unprotected and without a password. The server was set to public, which enabled anyone to access the data. The unprotected data consisted of 70,000 COVID lab results, 425,000 immigration records, 250,000 quarantine orders, and 440,000 images of travelers' signatures.

Anyone traveling to Jamaica had to download Amber Group's app to report COVID results before being allowed entry. Those who tested positive in Jamaica had to use the app to monitor symptoms and allow the government to track their whereabouts to ensure they were quarantining. All of that data from the app, over 1.1 million records, was exposed.

When handling sensitive and private information like Amber Group was, there is no room for mistakes like leaving a server unprotected. Organizations need to adopt cloud security strategies to protect their data and provide automated real-time remediation to catch risks.
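
The article doesn't name the specific storage service, but taking Amazon S3 as an example, the kind of guardrail that prevents a world-readable bucket can be set in a few lines with boto3. The bucket name below is hypothetical.

```python
import boto3  # pip install boto3; assumes AWS credentials are configured

s3 = boto3.client("s3")
bucket = "example-travel-records"  # hypothetical bucket name

# Block every form of public access on the bucket: ACL-based and
# policy-based alike, for both existing and future configurations.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```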

DivvyCloud by Rapid7 protects your cloud and container environments from misconfigurations, policy violations, threats, and IAM challenges. With automated, real-time remediation, DivvyCloud by Rapid7 customers achieve continuous security and compliance, and can fully realize the benefits of cloud and container technology.

*** This is a Security Bloggers Network syndicated blog from DivvyCloud authored by Shelby Matthews. Read the original post at: https://divvycloud.com/blog-covid-records-exposed-in-jamaica/?utm_source=rss&utm_medium=rss&utm_campaign=blog-covid-records-exposed-in-jamaica


Oracle Adds New Hybrid Cloud Offering – Database Trends and Applications

Oracle has expanded its hybrid cloud portfolio with Oracle Roving Edge Infrastructure, a new offering that brings core infrastructure services to the edge with Roving Edge Devices (REDs): ruggedized, portable, scalable server nodes.

The new service is part of Oracle's comprehensive hybrid cloud portfolio, which provides customers with more flexibility and control over their cloud deployments than other vendors. Global customers across the financial services, public sector, healthcare, logistics, and communications industries are using Oracle's hybrid cloud solutions to support their cloud transformations without the trade-offs in scale, data sovereignty, and control that they have had to make in the past.

"Customers want choice when it comes to running workloads in the cloud. Each customer has different requirements based on data sovereignty, scale, or wanting the full experience of a public cloud on-premises with all of Oracle's cloud services. Oracle Roving Edge Infrastructure is the latest example, delivering core infrastructure services to remote locations," said Clay Magouyrk, executive vice president, Oracle Cloud Infrastructure. "Oracle's hybrid cloud portfolio essentially delivers a cloud region wherever and however a customer needs it."

Oracle Roving Edge Infrastructure enables customers to operate cloud applications and workloads in the field, including machine learning inference, real-time data integration and replication, augmented analytics, and query-intensive data warehouses. In addition, it delivers cloud computing and storage services at the edge of networks for government and enterprise organizations, enabling low-latency processing closer to the point of data generation and ingestion, which provides timely insights into data.

The new offering is a fully mobile, connection-independent extension of customers' Oracle Cloud Infrastructure (OCI) tenancy, with a similar interface and workflow to provide a consistent, unified experience. An Oracle RED device is equipped with high-performance hardware, including 40 OCPUs, an NVIDIA T4 Tensor Core GPU, 512GB RAM, and 61TB of storage, and can be clustered into groups of 5 to 15 nodes, starting at $160 per node per day.
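
A quick back-of-envelope from the figures above, assuming the quoted $160 per node per day applies uniformly across the 5-to-15-node cluster range:

```python
# Daily and 30-day cost for the smallest and largest supported RED clusters,
# using the $160/node/day figure quoted above.
RATE_PER_NODE_PER_DAY = 160

for nodes in (5, 15):
    daily = nodes * RATE_PER_NODE_PER_DAY
    print(f"{nodes} nodes: ${daily:,}/day, ${daily * 30:,}/30 days")
```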

For more information about the Oracle Roving Edge Infrastructure, go to http://www.oracle.com/cloud/roving-edge-infrastructure.


United States Virtualization Security Market Report 2021 – Growth, Trends, COVID-19 Impact, and Forecasts to 2026 – ResearchAndMarkets.com – Business…

DUBLIN--(BUSINESS WIRE)--The "United States Virtualization Security Market - Growth, Trends, COVID-19 Impact, and Forecasts (2021 - 2026)" report has been added to ResearchAndMarkets.com's offering.

The US virtualization security market is expected to register a CAGR of about 14.4% during the forecast period 2021-2026.

In the current business scenario, enterprises in the United States are out to transform their businesses to realize their full digital potential. One of the key technologies powering this digital transformation is the shift to virtual infrastructure. Virtualization provides organizations the ability to process large amounts of data as well as access to better processing capabilities.

In the United States, virtual infrastructures, such as hypervisors, virtual machines, and web servers, are prone to several web threats, such as Trojan.Script.Generic and Hoax.Loss.Script.gen. Therefore, virtualization security solutions ensure a significant security level to restrict cyber-attack incidents in virtual infrastructures.

The US market for virtualization security is projected to grow substantially as enterprises shift toward cloud and cloud-based technologies as part of their IT model. In 2019, a survey conducted by Nutanix showed that more than 80% of respondents prefer cloud-based technologies as an efficient operating model.

As enterprises move to the virtual environment to modernize their conventional infrastructures, they face numerous security and integration issues that can affect system or server performance as well as costs. When software applications are moved to the wrong virtual environment during migration, the result is decreased software performance if the environment is under-equipped or increased enterprise costs if it is over-equipped.

As per a report published by the Ponemon Institute and Siemens in October 2019, the utility industry has faced worsening effects of virtual migration, with more than 50% of organizations reporting significant operational data loss or at least one shutdown per year, and more than 25% impacted by mega cyberattacks.

According to representatives of FireEye and CrowdStrike, which regularly track malicious web activity, Iran has launched a significant number of offensive cyberattacks against the United States government and critical infrastructure as trade tensions have increased between the two nations. These increasing cyberattacks on the country's government IT infrastructure are expected to drive demand for virtualization security in the coming years.

The COVID-19 pandemic has created several exploitation opportunities for advanced persistent threat (APT) groups and cybercriminals. In April 2020, the US Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA) published a joint alert on the details of security threats in the United States. These government bodies are observing an increasing use of COVID-19-related themes by cybercriminals.

Presently, the use of teleworking technologies, such as virtual private networks (VPNs), has increased, which amplifies the threat to individuals as well as organizations. The increasing cyberattacks in this time of crisis are expected to accentuate the usage of virtualization security solutions by enterprises to fend off such attacks.

Competitive Landscape

The United States virtualization security market is dominated by players like VMware, Sophos Ltd., and Cisco Systems, which provide virtualization security solutions used by major end-user enterprises.

The players operating in the country account for a significant share of the market and are focusing on expanding their customer base. These players are primarily focusing on investment in developing new solutions, strategic collaborations, and other organic and inorganic growth strategies to gain a competitive edge during the forecast period.

Key Topics Covered:

1 INTRODUCTION

1.1 Study Assumptions and Market Definition

1.2 Scope of the Study

2 RESEARCH METHODOLOGY

3 EXECUTIVE SUMMARY

4 MARKET INSIGHT

4.1 Market Overview

4.2 Porter's Five Forces Analysis

4.3 Market Drivers

4.4 Market Challenges

5 IMPACT OF COVID-19 ON THE UNITED STATES VIRTUALIZATION SECURITY MARKET

6 MARKET SEGMENTATION

6.1 Virtualization Type

6.1.1 Hardware Virtualization

6.1.2 Software Virtualization

6.2 Component

6.2.1 Solutions

6.2.2 Services

6.3 Deployment

6.3.1 On-Premise

6.3.2 Cloud

6.4 Type of Virtual Infrastructure

6.4.1 Hypervisor

6.4.2 Virtual Machines

6.4.3 Web Servers

6.4.4 Other Type of Virtual Infrastructures

6.5 End User

6.5.1 IT and Telecom

6.5.2 Cloud Service Providers

6.5.3 Data Centers

6.5.4 BFSI

6.5.5 Healthcare

6.5.6 Government

6.5.7 Other End Users

7 COMPETITIVE LANDSCAPE

7.1 Company Profiles

7.1.1 VMware Inc.

7.1.2 Trend Micro Incorporated

7.1.3 Sophos Ltd

7.1.4 Hytrust Inc.

7.1.5 Juniper Networks Inc.

7.1.6 10ZiG Technology

7.1.7 Cisco Systems Inc.

7.1.8 Centrify Corporation

8 INVESTMENT ANALYSIS

9 FUTURE OUTLOOK OF THE MARKET

For more information about this report visit https://www.researchandmarkets.com/r/3uc5fa


Nexion joins the ASX boards with global ambitions for its hybrid cloud technology – Stockhead

Special Report: With strong organic revenue growth and global partnerships in place, Nexion is poised to scale up as it joins the ASX boards.

Technology company Nexion (ASX:NNG) is looking to maximise its market opportunity in global cloud computing services with its listing on the ASX this Thursday.

The company is joining the boards after closing out an oversubscribed IPO with an $8m raise, which it increased from $5m to meet demand from institutional investors.

Speaking with Stockhead ahead of the listing, Nexion Group CEO Paul Glass discussed how the company has filled a crucial niche in the market for cloud services, which is already a $US130bn global industry.

While big players such as Amazon Web Services (AWS) and Microsoft Azure provide the software infrastructure, getting cost-efficient access isn't always a simple process.

"A lot of companies want to store data in the cloud. But often, they have legacy infrastructure to run their core delivery systems that's not compatible with a public cloud," Glass explained.

"Because you can't physically put a server in AWS. And most importantly, the size of that server: if you put it in a public cloud it would just cost you too much."

"Our solution allows customers to access public cloud but also bring their own physical resources and put them into servers in our cloud."

Effectively, Nexion leverages its own infrastructure as a point of entry to create a hybrid cloud services approach, with a mixture of software and hardware.

The Perth-based company saw business surge in its home market after launching in 2017, due to its clients' relative isolation from major cloud services hubs.

And in the wake of its ASX listing, the company is building out capacity to expand to new markets globally that are in need of a hybrid cloud solution.

"That's where the adoption of hybrid cloud is prevalent, and it's now the fastest-growing cloud industry," Glass said.

He added that Nexion's service adds an extra level of bespoke security by providing a network access point in closer proximity to the actual clients it serves, along with customised customer cyber security solutions.

"So that's the essence of where Nexion started. For example, we have a customer in WA who has 16-odd existing sites; they don't want to pay to change their legacy infrastructure in the data centre. But they still want the security and bandwidth guarantee," he said.

Founded just four years ago, Nexion gained traction almost immediately and has since driven triple-digit percentage gains in top line revenue growth.

With a viable business model now established, Glass said it marks the right time to access public capital markets, primarily with a view to pursuing the company's global growth strategy.

"Nexion was never set up with just the Australian market in mind. We were always set up to be a global telecommunications company," Glass said.

In that sense, a public listing gives Nexion an increased level of awareness in markets as it pursues a global partnerships strategy, he added.

"The second reason is that public funds give us the capital base to rapidly deploy our OneCloud nodes across different regions. We're growing rapidly but we have aspirations to scale up to a global solution on the back of our agreement with Aryaka."

Headquartered in California, Aryaka is a global SD-WAN network provider with offices in 65 countries.

In the December quarter, Nexion completed a key strategic partnership which Glass said was the first of its kind in the world.

"It allows Nexion to build and operate an Aryaka point of presence," Glass said.

"So for example, every WA company that wants to connect to offices globally now has the ability to connect via Nexion from Perth, directly to one of the Aryaka hubs around the world."

"If you're a CIO and you're paying high fees for traditional MPLS circuits, all of a sudden your information doesn't have to go via Sydney or Singapore. It goes directly from Perth to the rest of the world, so in that sense it really is a great achievement."

That combination, strong organic growth with an established global partnership, is why Nexion's public offer was so heavily oversubscribed in such a short period of time, Glass said.

"I really think the market's ready for this," he said.

"If you look at the tech industry right now, in my view there's a lot of concept stocks with applications that still need to be developed."

"We're not a concept stock, we're a strong stock with recurring revenue and line of sight to continue our global expansion."

"So when it came to marketing and understanding our hybrid model and our agreements with companies like IBM and Aryaka, that really underpinned the strength of what we were offering."

This article was developed in collaboration with Nexion, a Stockhead advertiser at the time of publishing.

This article does not constitute financial product advice. You should consider obtaining independent advice before making any financial decisions.



Kubernetes backup demands new IT practices and tools – TechTarget

Widespread adoption of containers has made Kubernetes a fixture in most organizations, which means Kubernetes backup is now a critical IT task.

The biggest challenge associated with Kubernetes backup is that Kubernetes environments are vastly different from traditional IT infrastructures. Backup methods that organizations have long relied on cannot protect Kubernetes clusters or the applications that run on Kubernetes. Although most backup application vendors now offer some degree of support for Kubernetes backup, those tools tend to be somewhat immature compared to traditional backup tools.

Here's a look at some top Kubernetes backup best practices, challenges and tool options.

One of the first best practices for Kubernetes backups is to focus on applications, not on servers. In a Kubernetes environment, there is no static mapping of applications to specific servers or VMs. Backups need to work in this highly dynamic model.

Agility is one of the main reasons organizations use containers. An organization can create a containerized application in a development environment and then move it to the production environment on premises or in the cloud. This underscores that organizations tend to adopt and retire containerized applications far more frequently than server-based applications.

A Kubernetes backup tool should be able to automatically detect newly deployed applications. Backup admins should use tags or a similar mechanism to identify applications, because some Kubernetes backup tools can back up and restore applications based on their tags.

Tags are just key-value pairs applied to various types of objects. The tag's only purpose is to make it easier for an admin to find certain objects.

In some organizations, admins assign tags to applications as a way of identifying them. Kubernetes environments often refer to tags as labels.
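
As a minimal sketch of label-driven selection, the snippet below uses the official Kubernetes Python client to find every pod carrying a given label, the same mechanism a label-aware backup tool would use to follow an application around the cluster. The label value is hypothetical.

```python
from kubernetes import client, config  # pip install kubernetes

# Load credentials from the local kubeconfig (as kubectl does).
config.load_kube_config()
v1 = client.CoreV1Api()

# Select workloads by label rather than by server or VM: the selector
# matches the application wherever the scheduler happens to place it.
pods = v1.list_pod_for_all_namespaces(label_selector="app=checkout")
for pod in pods.items:
    print(pod.metadata.namespace, pod.metadata.name)
```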

There are a few critical things to look for in a Kubernetes backup tool. First, containers do much more now than host stateless applications. A Kubernetes backup product must be able to back up persistent volumes used by applications. The backup tool also needs to protect all other application components, such as configuration data and databases.

A backup product also needs the ability to restore a backup to a different Kubernetes cluster, if necessary. Clusters can and sometimes do fail, and when that happens, it is important to be able to recover an application to a cluster that is still functional.

An organization must be comfortable with whatever backup tool it uses. Some of the tools to back up Kubernetes environments are command-line utilities that require a deep understanding of the Kubernetes environment. Examples of these tools include Velero and Stash, which, when used together, can perform Kubernetes backups. There is nothing wrong with using these types of tools with the necessary technical expertise. However, organizations that adopt a simpler, GUI-based backup tool, such as those offered by Portworx and Druva, can prevent mistakes that may leave data unprotected.
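
As a rough illustration of the command-line style of a tool like Velero, the hypothetical Python wrapper below shells out to two of its core commands: creating a backup of one application's namespace, then restoring from it. The namespace and backup names are invented, and a real cross-cluster restore also requires pointing the CLI at the target cluster's context first.

```python
import subprocess

# Back up a single application's namespace with Velero...
subprocess.run(["velero", "backup", "create", "checkout-backup",
                "--include-namespaces", "checkout"], check=True)

# ...then restore from that backup (run against the target cluster when
# recovering onto a different, still-functional cluster).
subprocess.run(["velero", "restore", "create",
                "--from-backup", "checkout-backup"], check=True)
```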


And So It Begins: Quantum Physicists Create a New Universe With Its Own Rules – The Daily Galaxy – Great Discoveries Channel

Albert Einstein was fond of saying that "imagination is everything. It is the preview of life's coming attractions." What if our world, our universe, following Einstein's insight, is the result of a quantum-physics experiment performed by some ancient hyper-advanced alien civilization? A civilization that, as astrophysicist Paul Davies speculates, may exist beyond matter.

In The Eerie Silence: Renewing Our Search for Alien Intelligence, Davies writes: "Thinking about advanced alien life requires us to abandon all our presuppositions about the nature of life, mind, civilization, technology and community destiny. In short, it means thinking the unthinkable. Five hundred years ago the very concept of a device manipulating information, or software, would have been incomprehensible. Might there be a still higher level, as yet outside all human experience?"

Within the infinite space of current cosmology, suggests University of Chicago theoretical physicist Dan Hooper, there are inevitably an infinite number of universes that are indistinguishable from our own. Yet some of the regions within the multiverse, says Hooper, are likely to be alien worlds with unknown forces and new forms of matter, along with more or fewer than three dimensions of space: worlds utterly unlike anything we can imagine.

Challenging 100-Year-Old Notions at the Quantum Level

As if on cue, reports Finland's Aalto University, a team of physicists has used an IBM quantum computer to explore an overlooked area of physics, challenging 100-year-old cherished notions about information at the quantum level and creating new quantum equations that describe a universe with its own peculiar set of rules. For example, by looking in the mirror and reversing the direction of time, you should see the same version of you as in the actual world. In their new paper they created a toy universe that behaves according to these new rules.

Beyond Fundamental Equations

The rules of quantum physics, which govern how very small things behave, use mathematical operators called Hermitian Hamiltonians. Hermitian operators have underpinned quantum physics for nearly 100 years, but recently theorists have realized that it is possible to extend its fundamental equations to make use of operators that are not Hermitian. The new equations describe a universe with its own peculiar set of rules: for example, by looking in the mirror and reversing the direction of time, you should see the same version of you as in the actual world.
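
In standard notation, the distinction reads as follows; the parity-time (PT) symmetric form in the second pair of equations reflects the framework named in the paper cited at the end of this article.

```latex
% Hermitian quantum mechanics: H equals its own conjugate transpose,
% guaranteeing real energies and unitary, information-preserving evolution.
H = H^{\dagger}, \qquad U(t) = e^{-iHt/\hbar}, \qquad U^{\dagger}U = \mathbb{1}

% The PT-symmetric extension drops Hermiticity but keeps invariance under
% combined parity (P) and time reversal (T):
H \neq H^{\dagger}, \qquad [\mathcal{PT},\, H] = 0
```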

New Rules of Non-Hermitian Quantum Mechanics

The researchers made qubits, the part of the quantum computer that carries out calculations, behave according to the new rules of non-Hermitian quantum mechanics. They demonstrated experimentally a couple of exciting results which are forbidden by regular Hermitian quantum mechanics. The first discovery was that applying operations to the qubits did not conserve quantum information, a behavior so fundamental to standard quantum theory that it results in currently unsolved problems like Stephen Hawking's black hole information paradox.

Enter Entanglement

The second exciting result came when they experimented with two entangled qubits. Entanglement is a type of correlation that appears between qubits, as if they shared a magic connection that makes them behave in sync with each other. Einstein was famously very uncomfortable with this concept, referring to it as "spooky action at a distance". Under regular quantum physics, it is not possible to alter the degree of entanglement between two particles by tampering with one of the particles on its own. However, in non-Hermitian quantum mechanics, the researchers were able to alter the level of entanglement of the qubits by manipulating just one of them: a result that is expressly off-limits in regular quantum physics.
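
A textbook way to state the forbidden result: for a maximally entangled pair, no operation applied to one qubit alone can change the amount of entanglement in standard quantum mechanics.

```latex
% A maximally entangled two-qubit (Bell) state:
|\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)

% In Hermitian quantum mechanics, a local unitary U_A on one qubit leaves
% the Schmidt coefficients, and hence the entanglement entropy, unchanged:
(U_A \otimes \mathbb{1})\,|\Phi^{+}\rangle \quad\Rightarrow\quad S = \ln 2 \ \text{(unchanged)}
```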

Einstein's Spooky Action Becomes Spookier

"The exciting thing about these results is that quantum computers are now developed enough to start using them for testing unconventional ideas that have been only mathematical so far," said lead researcher Sorin Paraoanu. "With the present work, Einstein's spooky action at a distance becomes even spookier. And, although we understand very well what is going on, it still gives you the shivers."

Source: "Quantum simulation of parity-time symmetry breaking with a superconducting quantum processor". The work was performed under the Finnish Center of Excellence in Quantum Technology (QTF) of the Academy of Finland.

The Daily Galaxy, Max Goldberg, via Aalto University and Nature


Quantum Theory May Twist Cause And Effect Into Loops, With Effect Causing The Cause – ScienceAlert

Causality is one of those difficult scientific topics that can easily stray into the realm of philosophy.

Science's relationship with the concept started out simply enough: an event causes another event later in time. That had been the standard understanding of the scientific community up until quantum mechanics was introduced.

Then, with the introduction of the famous "spooky action at a distance" that is a side effect of the concept of quantum entanglement, scientists began to question that simple interpretation of causality.

Now, researchers at the Université Libre de Bruxelles (ULB) and the University of Oxford have come up with a theory that further challenges that standard view of causality as a linear progression from cause to effect.

In their new theoretical structure, cause and effect can sometimes take place in cycles, with the effect actually causing the cause.

The quantum realm itself, as it is currently understood, is inherently messy.

There is no true understanding of things at that scale, which can be thought of better as a set of mathematical probabilities rather than actualities. These probabilities do not exactly lend themselves well to the idea of a definite cause-and-effect interaction between events either.

The researchers further muddied the waters using a tool known as a unitary transformation.

Simply put, a unitary transformation is a fudge used to simplify some of the math that is necessary to understand complex quantum systems. Using it makes solving the famous Schrödinger equation achievable using real computers.

To give a more complete explanation requires delving a bit into the "space" that quantum mechanics operates in.

In quantum mechanics, time is simply another dimension that must be accounted for, similarly to how the usual three dimensions of linear space are accounted for. Physicists usually use another mathematical tool called a Hamiltonian to solve Schrödinger's equation.

A Hamiltonian, though a mathematical concept, is often time-dependent. However, it is also the part of the equation that is changed when a unitary transformation is introduced.

As part of that action, it is possible to eliminate the time dependency of the Hamiltonian, so that, instead of requiring time to go in a certain direction (i.e., for action and reaction to take place linearly), the model turns more into a circle than a straight line, with action causing reaction and reaction causing action.
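
For the mathematically inclined, the step described above is the standard change of frame; this is textbook quantum mechanics rather than a formula quoted from the paper itself.

```latex
% Schrodinger equation with a time-dependent Hamiltonian:
i\hbar\,\partial_t\,|\psi(t)\rangle = H(t)\,|\psi(t)\rangle

% A unitary change of frame |\tilde{\psi}\rangle = U(t)|\psi\rangle
% evolves under the transformed Hamiltonian
\tilde{H}(t) = U H U^{\dagger} + i\hbar\,(\partial_t U)\,U^{\dagger}

% and U can sometimes be chosen so that \tilde{H} no longer depends on t.
```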

If all this isn't confusing enough, this model has some implications that are extremely difficult to conceive of (and to be clear, at a macro level, it is just a model).

One important facet is that this finding has little to no relevance to everyday cause and effect.

The causes and effects that would be cyclical in this framework "are not local in spacetime", according to the press release from ULB, so they are unlikely to have any impact on day-to-day life.

Even if it doesn't have any everyday impact now, this framework could hint at a combined theory of quantum mechanics and general relativity that has been the most sought-after prize in physics for decades.

If that synthesis is ever fully realized, there will be more implications for everyday life than just the existential questions of whether we are actually in control of our own actions or not.

This article was originally published by Universe Today. Read the original article.
