
6 use cases for Docker containers — and when to pass – TechTarget

By now, you've probably heard all about Docker containers -- the latest, greatest way to deploy applications.

But which use cases does Docker support? When should or shouldn't you use Docker as an alternative to VMs or other application deployment techniques?

Let's answer these questions.

Docker containers are lightweight application hosting environments. Like VMs, they are designed to be easily portable between different computers and isolate workloads.

However, one of the main differences between Docker and VMs is that Docker containers share OS resources with the server that hosts the Docker containers. VMs use a virtualized guest OS instead.

Because sharing an OS consumes fewer resources than running standalone guest OSes on top of a host OS, Docker containers are more efficient, and admins can run more containers on a single host server than VMs. Docker containers also typically start faster than VMs because they don't boot a complete OS.
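
To make the contrast concrete, here is a minimal sketch using the Docker SDK for Python (the docker package); the image, container count, and command are illustrative assumptions, not something prescribed by the article. Because each container shares the host kernel instead of booting a guest OS, the whole batch comes up in seconds:

    # Minimal sketch using the Docker SDK for Python (pip install docker).
    # Assumes a local Docker daemon is running; the alpine image and the
    # sleep command are illustrative choices only.
    import docker

    client = docker.from_env()

    # Start five lightweight containers from the same image. No guest OS
    # boots here: each container shares the host kernel.
    containers = [
        client.containers.run("alpine", "sleep 30", detach=True)
        for _ in range(5)
    ]

    for c in containers:
        c.reload()  # refresh state from the daemon
        print(c.short_id, c.status)  # typically "running" within seconds

    for c in containers:
        c.remove(force=True)  # stop and clean up the demo containers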

Docker is only one of several container engines available, but there is some ambiguity surrounding the term Docker containers.

Technically speaking, the most important aspect of Docker is its runtime, which is the software that executes containers. In addition to Docker's runtime, which is the basis for containerd, modern containers can also be executed by runtimes like CRI-O and Linux Containers.

Most modern container runtimes can run any modern container, if the container conforms with the Open Container Initiative standards. But Docker was the first major container runtime to gain widespread adoption, and people still use Docker as a shorthand for referring to containers in general -- like how Xerox can be used to refer to any type of photocopier. Thus, when people talk about Docker containers, they are sometimes referring to any type of container, not necessarily containers designed to work with Docker alone.

That said, the nuances and semantics in this regard are not important for understanding Docker use cases. Almost any use case that Docker supports can also be supported by other mainstream container runtimes. We call them Docker use cases throughout this article, but we're not strictly speaking about Docker alone here.

Docker containers can deploy virtually any type of application. But they lend themselves particularly well to certain use cases and application formats.

Applications designed using a microservices architecture are a natural fit for Docker containers. This is because developers can deploy each microservice in a separate container and then integrate the containers to build out a complete application using orchestration tools, like Docker Swarm and Kubernetes, and a service mesh, like Istio or VMware Tanzu.

Technically speaking, you could deploy microservices inside VMs or bare-metal servers as well. But containers' low resource consumption and fast start times make them better suited to microservices apps, where each microservice can be deployed -- and updated -- separately.

The ability to test applications inside Docker containers and then deploy them into production using the same containers is another major Docker use case.

When developers test applications in the same environment where the applications will run in production, they don't need to worry as much that configuration differences between the test environment and the production environment will lead to unanticipated problems.

Docker comes in handy for developers who are in the early stages of creating an app and want a simple way to build and run it for testing purposes. By creating Docker container images for the app and executing them with Docker or another runtime, developers can test the app on a local development PC without running it directly on the host OS. They can also apply application configuration settings that differ from those of the host OS.

This is advantageous because application testing would otherwise require setting up a dedicated testing environment. Developers might do that when applications mature and they need to start testing them systematically. But, if you're just starting out with a new code base, spinning up a Docker container is a convenient way to test things without the work of creating a special dev/test environment.
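
As a sketch of that workflow (assuming Docker, the docker Python package, a Dockerfile in the current directory, and a pytest-based suite; the image tag is our own choice), a developer could build and test an image locally like this:

    # Hypothetical local build-and-test loop with the Docker SDK for Python.
    import docker

    client = docker.from_env()

    # Build an image from the Dockerfile in the current directory.
    image, _build_logs = client.images.build(path=".", tag="myapp:test")

    # Run the test suite inside a throwaway container; remove=True deletes
    # the container after it exits, leaving nothing behind on the dev PC.
    output = client.containers.run("myapp:test", "pytest", remove=True)
    print(output.decode())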

Docker containers are portable, which means they can move easily from one server or cloud environment to another with minimal configuration changes required.

Teams working with multi-cloud or hybrid cloud architectures can package their application once using containers and then deploy it to the cloud or hybrid cloud environment of their choice. They can also rapidly move applications between clouds, or from on-premises environments into the cloud and back.

The same Docker container can typically run on any version of Linux without the need to apply special configurations based on the Linux distribution or version. Because of this, Docker containers have been used by projects like Subuser as the basis for creating an OS-agnostic application deployment solution for Linux.

That's important because there is not generally a lot of consistency between Linux distributions when it comes to installing applications. Each distribution or family of distributions has its own package management system, and an application packaged for one distribution -- Ubuntu or Debian, for example -- cannot typically be installed on another, like RHEL, without special effort. Docker solves this problem because the same Docker image can run on all of these systems.

That said, there are limitations to this Docker use case. Docker containers created for Linux can't run on Windows and vice versa, so Docker is not completely OS-agnostic.

The efficiency of Docker containers relative to VMs makes Docker a handy option for teams that want to reduce how much they spend on infrastructure. By taking applications running in VMs and redeploying them with Docker, organizations will likely reduce their total resource consumption.

In the cloud, that translates to lower IaaS costs and a lower cloud computing bill. On premises, teams can host the same workloads with fewer servers, which also translates to lower costs.

While Docker comes in handy for many use cases, it's not the best choice for every application deployment scenario.

Common reasons not to use Docker include the following:


CIOs across Europe add their VOICE to chorus of calls to regulate cloud gatekeepers – The Register

Industry bodies representing thousands of CIOs and tech leaders across Europe have thrown their weight behind calls to rein in some of the iffier software licensing practices of the cloud giants.

A letter sent to the European Parliament reiterates the harm being done by certain vendors, as flagged up by Professor Frédéric Jenny in a report commissioned by the Cloud Infrastructure Service Providers in Europe (CISPE).

Findings in the report included pricing for Microsoft's Office productivity suite being higher when bought for use on a cloud that wasn't Azure, and the disappearance of "Bring Your Own License" deals, making it expensive to migrate on-premises software anywhere but Microsoft's cloud.

Oracle also took heat for its billing practices, which could differ between its own and third-party clouds.

The letter, seen by The Register and sent to MEPs on 5 November, states:

"The studys sample included businesses of all sizes, all cloud users, including members of our organizations, seeking to digitalise their operations to improve service, cost and choice to their customers. Its findings provide evidence on the wide variety of unfair practices being deployed to deprive the members of our organisations of choice, and as a result our customers of innovative and more effective products.

"Proving the illegality of these unfair practices currently requires long and expensive investigations under existing competition laws. The timescale and resources required coupled with the absence of workable alternative solutions and the potential retaliatory measures feared by many if they speak out - simply means that many enterprises will simply accept the onerous and unfair terms instead of seeking any legal resolution," the leter adds.

As for the signatories of the letter, VOICE (from Germany) represents over 400 public-sector or corporate CIOs. France's CIGREF accounts for 150 large users, including Airbus and Thales. The Netherlands' CIO Platform represents more than 130 members, and Belgium's BELTUG accounts for over 1800 CIOs and digital tech leaders.

Dr Hans-Joachim Popp of the German VOICE CIO group told The Register:

"We are standing with the 'back to the wall' as far as the free choice of cloud providers is concerned. E.g. in Germany we are not in the position to fulfil mandatory legal requirements, since we are depending on the co-operation of mostly American cloud providers (who cannot be GDPR compliant when sticking to the public cloud approach because of the cloud act and other legal instruments of the US government)."

Dr Popp also highlighted the need for a requirement in the Digital Markets Act (DMA) for a stable set of common API and operation standards to ensure compatibility across cloud providers, governed by a committee drawn from "both sides of the table."

"By now," he said, "these standards are fully proprietary and can be changed randomly (for marketing reasons). The use of special features of a cloud provider locks you in to this provider with almost no way out."

As for the quality issues cropping up in the wares of the tech giants, he said: "We would pay for good quality but certainly not for randomly forced, frequent updates.

"You would kick out a car manufacturer, who calls you in for an urgent inspection every single week," he noted, drily.

Iffy updates aside, the CISPE report also criticised "ever-changing" licensing practices and "deliberately vague terms" that all contributed to increased costs once customers were "locked-in" to the vendor's cloud infrastructure.

The lengthy legal process required to deal with such behaviour under existing competition laws is indeed a problem, hence the call for proscribing the activities in the forthcoming DMA.

"It is therefore essential that these unfair software licensing practices and behaviours are considered and added to the ex-ante requirements of gatekeepers in the DMA," thundered the letter. Companies dominating the legacy world of enterprise software in Europe should be "fully identified as gatekeepers in the final legislation," it adds.

The DMA seeks to regulate the tech giants with regard to their behaviour as gatekeepers. It has a few hurdles to go before its eventual implementation.

We have asked Microsoft and Oracle to comment.


SoftBank Corp. and Honda start use case verification of technologies to reduce collisions involving pedestrians and vehicles using 5G SA and Cellular…

SoftBank Corp. (SoftBank) and Honda R&D Co., Ltd. (Honda) announced they have started a use case-based verification of technologies to reduce collisions between pedestrians and vehicles using a 5G standalone mobile communication system (5G SA)*1 and a cellular V2X communication system (cellular V2X)*2 in an effort to realize a society where both pedestrians and vehicles can enjoy mobility safely and with total peace of mind.

Using SoftBank's 5G SA experimental base station installed at Honda's Takasu Proving Ground (located in Takasu Town, Hokkaido Prefecture) and Honda's recognition technology, SoftBank and Honda are conducting technology verifications for the following three use cases:

In an environment where a pedestrian can be seen from the moving vehicle, and when the vehicle's on-board camera recognizes the risk of a collision such as the pedestrian entering the roadway, the vehicle sends an alert to the pedestrian's mobile device directly or via an MEC server.*3 This will enable the pedestrian to take evasive action to prevent a possible collision with the vehicle.

In an environment where a pedestrian cannot be seen from the moving vehicle due to obstacles such as parked cars along roadsides, the vehicle checks with mobile devices and other vehicles nearby about the presence or absence of a pedestrian in an area with poor visibility. If there is a pedestrian present, the system notifies the pedestrian of the approaching vehicle and also notifies the vehicle of the pedestrian from the pedestrian's mobile device. When there is a second vehicle in a position to see the pedestrian in the area with poor visibility, that vehicle notifies the other vehicle of the pedestrian. These high-speed data communications between the moving vehicle, pedestrians, and other vehicles will help prevent collisions.

The moving vehicles send information about the areas with poor visibility to the MEC server, and the MEC server organizes the information and notifies vehicles driving in the vicinity. When a vehicle receives the notification and approaches an area with poor visibility, it checks with the MEC server about the presence or absence of pedestrians. If there is a pedestrian present, the MEC server sends an alert to the vehicle and the pedestrian. These high-speed data communications between the MEC server, vehicles, and pedestrians will help prevent collisions. In this use case, it is possible to send information about an area with poor visibility to vehicles that are not equipped with a camera-based recognition function, which makes it possible to prevent collisions between vehicles and pedestrians regardless of whether vehicles have recognition functions.
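
The second and third use cases reduce to a presence check followed by notifications to both parties. The toy Python sketch below illustrates that flow; every class name and message shape here is our own assumption, not the actual SoftBank/Honda protocol:

    # Toy sketch of the low-visibility check-and-notify flow. All names and
    # message shapes are assumptions for illustration.
    from dataclasses import dataclass

    @dataclass
    class BlindSpot:
        zone_id: str
        pedestrian_ids: list  # pedestrians whose devices reported presence

    def approach_blind_spot(vehicle_id, spot, mec):
        """Vehicle queries the MEC server when nearing an occluded zone."""
        if spot.pedestrian_ids:
            for ped in spot.pedestrian_ids:
                mec.alert(ped, f"vehicle {vehicle_id} approaching {spot.zone_id}")
            mec.alert(vehicle_id, f"pedestrian(s) present in {spot.zone_id}")

    class ConsoleMEC:
        def alert(self, recipient, message):
            print(f"ALERT -> {recipient}: {message}")

    approach_blind_spot("car-42", BlindSpot("crosswalk-7", ["phone-9"]), ConsoleMEC())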

SoftBank and Honda had already been working together on technology verification for 5G-based connected vehicles by setting up a 5G experimental base station at the Takasu Proving Ground. Through this new initiative, SoftBank and Honda aim to realize a cooperative society where pedestrians and drivers can enjoy mobility safely and with total peace of mind by utilizing network technology that will be created by connecting pedestrians and vehicles. To this end, SoftBank and Honda will pursue technological verification with a view to linking 5G SA and cellular V2X, with the goal to complete it before the end of fiscal year 2021 (the year ending March 31, 2022).

[Notes]

*1 Standalone 5G is a cutting-edge technology that combines new dedicated 5G core equipment with 5G base stations, unlike the conventional non-standalone system, which combines 4G core equipment with 5G base stations.

*2 A communication standard established by 3GPP (a standardization organization that formulates standards for mobile communication systems); a technology that uses mobile networks for vehicle-to-vehicle, vehicle-to-infrastructure, vehicle-to-network and vehicle-to-pedestrian communications.

*3 MEC stands for Multi-access Edge Computing, a technology that optimizes and accelerates communications compared to cloud servers by deploying data processing functions in locations close to terminals, such as base stations.


Atom Computing: A Quantum Computing Startup That Believes It Can Ultimately Win The Qubit Race – Forbes


Atom Computing describes itself as a company obsessed with building the world's most scalable quantum computers out of optically trapped neutral atoms. The company recently revealed it had spent the past two years secretly building a quantum computer using strontium atoms as its units of computation.

Benjamin Bloom and Jonathan King founded the company, headquartered in Berkeley, California, in 2018 with $5M in seed funds. Bloom received his PhD in physics from the University of Colorado, while King received a PhD in chemical engineering from the University of California, Berkeley.

Atom Computing received $15M in Series A funding from investors Venrock, Innovation Endeavors, and Prelude Ventures earlier this year. The company also received three grants from the National Science Foundation.

Atom Staff

Rob Hays, a former Intel and Lenovo executive, was recently named CEO of the company. Atom Computing's staff of quantum physicists and design engineers fully complements quantum-related disciplines and applications. This month Atom Computing signaled its continued momentum by adding two quantum veterans to key positions within the company:

Qubit technologies

While traditional computers use magnetic bits to represent a one or a zero for computation, quantum computers use quantum bits, or qubits, which can represent a one, a zero, or a superposition of both at once.
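
In the standard notation, a qubit's state is a weighted combination of the two basis states rather than a number strictly between them; as a brief math aside (ours, not the article's):

    |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,

where |\alpha|^2 and |\beta|^2 are the probabilities of measuring a zero or a one.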

Today's quantum computers use several different technologies for qubits. But regardless of the technology, a common requirement is that qubits must be scalable, high quality, and capable of fast quantum interaction with each other.

IBM uses superconducting qubits on its huge fleet of about twenty quantum computers. Although Amazon doesn't yet have a quantum computer, it plans to build one using superconducting hardware. Honeywell and IonQ both use trapped-ion qubits made from a rare earth metal called ytterbium. In contrast, PsiQuantum and Xanadu use photons of light.

Atom Computing chose a different technology: nuclear-spin qubits made from neutral atoms. Phoenix, the name of Atom's first-generation, gate-based quantum computer platform, uses 100 optically trapped qubits.

Atom Computing's quantum platform

First-Generation Quantum Computer, Phoenix - Berkeley, California

The Phoenix platform uses a specific type of nuclear-spin qubit created from an isotope of strontium, a naturally occurring element. Strontium is a neutral atom: at the atomic level, neutral atoms have equal numbers of protons and electrons. However, isotopes of strontium have varying numbers of neutrons, and these differences produce different energy levels in the atom. Atom Computing uses the isotope strontium-87 and takes advantage of its unique energy levels to create spin qubits.

Qubits need to remain in a quantum state long enough to complete computations. The length of time that a qubit can retain its quantum state is its coherence time. Since Atom Computing's neutral atom qubits are natural rather than manufactured, no adjustments are needed to compensate for differences between qubits. That contributes to their stability and relatively long coherence time of more than 40 seconds, compared to about a millisecond for superconducting systems or a few seconds for ion-trapping systems. Moreover, a neutral atom has little affinity for other atoms, making the qubits less susceptible to noise.

Neutral atom qubits offer many advantages that make them suitable for quantum computing. Here are just a few:

How neutral atom quantum processors work


The Phoenix quantum platform uses lasers as proxies for high-precision, wireless control of the strontium-87 qubits. Atoms are trapped in a vacuum chamber using optical tweezers controlled by lasers at very specific wavelengths, creating an array of highly stable qubits captured in free space.

First, a beam of hot strontium moves the atoms into the vacuum chamber. Next, multiple lasers bombard each atom with photons to slow its momentum to a near motionless state, causing its temperature to fall to near absolute zero. This process is called laser cooling; it eliminates the requirement for cryogenics and makes it easier to scale qubits.

Then, optical tweezers are formed in a glass vacuum chamber, where qubits are assembled and optically trapped in an array. One advantage of neutral atoms is that the processor's array is not limited to any specific shape, and it can be either 2D or 3D. Additional lasers create a quantum interaction between the atoms (called entanglement) in preparation for the actual computation. After initial quantum states are set and circuits are established, the computation is performed.

The heart of Phoenix, showing where Atom Computing's qubits entangle. (First-Generation Quantum Computer, Phoenix - Berkeley, California)

Going forward

Atom Computing is working with several technology partners. It is also running tests with a small number of undisclosed customers. The Series A funding has allowed it to expand its research and begin working on the second generation of its quantum platform. It's a good sign that CEO Rob Hays believes Atom Computing will begin generating revenue in mid-2023.

Atom Computing is a young and aggressive company with promising technology. I spoke with Denise Ruffner shortly after she joined Atom. Her remarks seem to reflect the optimism of the entire company:

"I am joining the dream team - a dynamic CEO with experience in computer development and sales, including an incredible Chief Product Officer, as well as a great scientific team. I am amazed at how many corporations have already reached out to us to try our hardware. This is a team to bet on."



Creating Dynamic Symmetry in Diamond Crystals To Improve Qubits for Quantum Computing – SciTechDaily

By Matthew Hutson, MIT Department of Nuclear Science and Engineering, November 15, 2021

Instrumentation setup in the Quantum Engineering Group at MIT to study dynamical symmetries with qubits in diamond crystals. Credit: Guoqing Wang/MIT

MIT researchers develop a new way to control and measure energy levels in a diamond crystal; could improve qubits in quantum computers.

Physicists and engineers have long been interested in creating new forms of matter, those not typically found in nature. Such materials might find use someday in, for example, novel computer chips. Beyond applications, they also reveal elusive insights about the fundamental workings of the universe. Recent work at MIT both created and characterized new quantum systems demonstrating dynamical symmetry: particular kinds of behavior that repeat periodically, like a shape folded and reflected through time.

"There are two problems we needed to solve," says Changhao Li, a graduate student in the lab of Paola Cappellaro, a professor of nuclear science and engineering. Li published the work recently in Physical Review Letters, together with Cappellaro and fellow graduate student Guoqing Wang. "The first problem was that we needed to engineer such a system. And second, how do we characterize it? How do we observe this symmetry?"

Concretely, the quantum system consisted of a diamond crystal about a millimeter across. The crystal contains many imperfections caused by a nitrogen atom next to a gap in the lattice, a so-called nitrogen-vacancy center. Just like an electron, each center has a quantum property called a spin, with two discrete energy levels. Because the system is a quantum system, the spins can be found not only in one of the levels, but also in a combination of both energy levels, like Schrödinger's theoretical cat, which can be both alive and dead at the same time.

Dynamical symmetries, which play an essential role in physics, are engineered and characterized by a cutting-edge quantum information processing toolkit. Credit: Courtesy of the researchers

The energy level of the system is defined by its Hamiltonian, whose periodic time dependence the researchers engineered via microwave control. The system was said to have dynamical symmetry if its Hamiltonian was the same not only after every time period t but also after, for example, every t/2 or t/3, like folding a piece of paper in half or in thirds so that no part sticks out. Georg Engelhardt, a postdoc at the Beijing Computational Science Research Center, who was not involved in this work but whose own theoretical work served as a foundation, likens the symmetry to guitar harmonics, in which a string might vibrate at both 100 Hz and 50 Hz.
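
Stated as formulas (our paraphrase of the description above): a driven Hamiltonian with period T is ordinarily periodic,

    H(t + T) = H(t),

and has a dynamical symmetry of order n when the stronger condition

    H(t + T/n) = H(t), \qquad n = 2, 3, \ldots

also holds; the t/2 and t/3 cases above correspond to n = 2 and n = 3.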

To induce and observe such dynamical symmetry, the MIT team first initialized the system using a laser pulse. Then they directed various selected frequencies of microwave radiation at it and let it evolve, allowing it to absorb and emit the energy. "What's amazing is that when you add such driving, it can exhibit some very fancy phenomena," Li says. "It will have some periodic shake." Finally, they shot another laser pulse at it and measured the visible light that it fluoresced, in order to measure its state. The measurement was only a snapshot, so they repeated the experiment many times to piece together a kind of flip book that characterized its behavior across time.

"What is very impressive is that they can show that they have this incredible control over the quantum system," Engelhardt says. "It's quite easy to solve the equation, but realizing this in an experiment is quite difficult."

Critically, the researchers observed that the dynamical symmetry of the Hamiltonian, the harmonics of the system's energy levels, dictated which transitions could occur between one state and another. "And the novelty of this work," Wang says, "is also that we introduce a tool that can be used to characterize any quantum information platform, not just nitrogen-vacancy centers in diamonds. It's broadly applicable." Li notes that their technique is simpler than previous methods, which require constant laser pulses to drive and measure the system's periodic movement.

One engineering application is in quantum computers, systems that manipulate qubits, bits that can be not only 0 or 1, but a combination of 0 and 1. A diamond's spin can encode one qubit in its two energy levels.

Qubits are delicate: they easily break down into a simple bit, a 1 or a 0. Or the qubit might become the wrong combination of 0 and 1. These tools for measuring dynamical symmetries, Engelhardt says, can be used "as a sanity check that your experiment is tuned correctly and with a very high precision." He notes the problem of outside perturbations in quantum computers, which he likens to a de-tuned guitar. By tuning the tension of the strings, adjusting the microwave radiation such that the harmonics match some theoretical symmetry requirements, one can be sure that the experiment is perfectly calibrated.

The MIT team already has their sights set on extensions to this work. "The next step is to apply our method to more complex systems and study more interesting physics," Li says. They aim for more than two energy levels: three, or 10, or more. With more energy levels they can represent more qubits. "When you have more qubits, you have more complex symmetries," Li says. "And you can characterize them using our method here."

Reference: "Observation of Symmetry-Protected Selection Rules in Periodically Driven Quantum Systems" by Guoqing Wang, Changhao Li and Paola Cappellaro, 29 September 2021, Physical Review Letters. DOI: 10.1103/PhysRevLett.127.140604

This research was funded, in part, by the National Science Foundation.


Don Kahle: Quantum quandaries are emerging – The Register-Guard

Don Kahle | Register-Guard

The challenges and opportunities ahead as computers become ever more powerful are coming more clearly into view. Count me among those who are not surprised at how well novelist and humorist Douglas Adams anticipated them. Artists and comics often speak the truth before anyone else.

The bandwagon is getting fuller by the day. Former Google CEO Eric Schmidt has partnered with Henry Kissinger to author a book about the challenges. Kissinger is 98 years old. He's strategizing the coming clash with artificial intelligence as if it will be the last war any of us will ever wage.

He might be right.

I've written almost every year about how we are leaving behind the Age of Enlightenment, without any confidence about what will replace it. Kissinger, Schmidt, Elon Musk and others are warning us that AI will sneak up on us if we're not careful, rewriting the rules for civilization without our consent.

I hope the next epoch is organized around empathy, a decidedly human trait that's beyond the ken of calculations. As futurists become realists, it's beginning to look like emergent properties may be the frontier we're entering. It's very like what Adams anticipated in his Hitchhiker's Guide to the Galaxy in 1978.

Leave aside artificial intelligence for a moment. Consider the consequences of quantum computing. IBM announced this week it has built a quantum computer that couldn't be matched with a conventional computer unless that computer was larger than our planet.

Although Adams anticipated that factoid quite accurately, it's not the interesting part. According to IBM CEO Arvind Krishna, this super-duper-computer is not adept at computations in the traditional sense. That would be too easy. This quantum computer won't calculate as much as ruminate.

We've used computers to solve problems but not to wonder how a problem could be solved. Big difference!

To review, Adams's characters asked the most powerful computer to give them the answer to life, the universe, and everything. The answer was 42. Understanding the question was exponentially more complicated, keeping his characters busy for three more volumes.

Kissinger and Schmidt posit that computers soon will give us answers before we understand the questions. Alexander Fleming discovered penicillin before anyone understood how microbes and cells operate inside the human body. We learned what works before we learned why.

With that in mind, mysteries abound. How do starlings execute their mesmerizing murmurations, flying like a three-dimensional marching band producing an amazing halftime show, but without a conductor? How can humans reverse or adapt to global warming? And everything in between.

Finding answers to unimaginably complex problems will be the easy part. Thoroughly understanding the questions being posed will be new. If quantum computing fulfills its promise, we may soon send it searching for the emergent properties behind self-organizing cities, coordinated starling flight patterns, and human consciousness.

Each is beyond the scope of calculations. The results emerge as if by magic. The whole is literally greater than the sum of its (calculated) parts. We may soon be envisioning the most hopeful future for our planet since 1650 - and terrifyingly so.

Don Kahle (fridays@dksez.com) writes a column each Friday for The Register-Guard and archives past columns at www.dksez.com.


4 key threats to the new central bank digital currencies – World Economic Forum

With G7 officials recently endorsing principles for central bank digital currencies (CBDC), and over 80 countries launching some form of initiative related to CBDC, it seems their widespread deployment is a matter of time. CBDC is a digital form of central bank money that can be accessible to the general public; essentially, it consists of individuals and firms having access to transaction and savings accounts with their home country's central bank. The central banks of the Bahamas, China and Nigeria have all implemented early CBDC programmes, with more expected in the future. If successful, CBDC could help policy-makers achieve goals around payment efficiency, financial inclusion, banking and payment competitiveness, access to safe central bank money in the era of digital payments, and more.

Yet like any digital payment system, CBDC is vulnerable to cybersecurity attack, account and data breaches and theft, counterfeiting, and even farther-off challenges related to quantum computing. For citizens to be comfortable adopting CBDC, they will need to be confident in its security. Ultimately, it will not be successful if it does not carefully consider and invest in a robust cybersecurity strategy. Decision-makers should look to cybersecurity best practices such as those published by the US National Institute of Standards and Technology (NIST) and the Microsoft STRIDE model. This article, which summarizes key points from the World Economic Forum's new white paper on CBDC Technology Considerations, lays out additional imperative considerations for CBDC cybersecurity.

How can we make sure CBDC is secure for decades to come? We discuss four major dimensions to its cybersecurity below:

CBDC access credentials are needed for accessing and transferring funds. Such credentials could be given in the form of a passphrase that could be easily communicated even on paper, or a hardware token that stores the private keys. Regardless of the form, the threat of theft and credential loss is significant, meaning account funds and data could be compromised.

Theft can be physical or virtual, especially in the case of passphrases. Given the arsenal of modern attackers, techniques such as social engineering, side-channel attacks and malware could be used to extract credentials from a CBDC user's device. Moreover, if passphrases or hardware tokens are lost or damaged due to fire, water or natural calamities, CBDC users should not simply lose all their funds and data. Therefore, the system should have built-in credential recovery mechanisms.

If a CBDC is based on blockchain technology, it might use a multi-signature (multi-sig) wallet, where at least two other trusted parties hold credentials to the same wallet (this could be the central bank itself and/or family members or other contacts of the end user). The drawback of multi-sig wallets is that they are less user-friendly, since any transfer requires coordination with at least one other party. Such security-usability trade-offs are common even today in internet banking, where two-factor authentication (2FA) is extremely common. If the CBDC is based on traditional technology, a privileged authority could simply update a database entry with new credentials.
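
As a toy illustration of the k-of-n approval idea in Python (not an actual CBDC design; a real wallet would use public-key signatures rather than the shared-secret HMACs used here for brevity):

    # Toy k-of-n approval check in the spirit of a multi-signature wallet.
    import hmac
    import hashlib

    def sign(secret, transfer):
        return hmac.new(secret, transfer, hashlib.sha256).digest()

    def approved(transfer, signatures, secrets, k):
        """Require valid signatures from at least k of the n registered holders."""
        valid = sum(
            1
            for holder, sig in signatures.items()
            if holder in secrets
            and hmac.compare_digest(sig, sign(secrets[holder], transfer))
        )
        return valid >= k

    secrets = {"owner": b"s1", "bank": b"s2", "family": b"s3"}
    tx = b"pay merchant 25.00"
    sigs = {"owner": sign(b"s1", tx), "bank": sign(b"s2", tx)}
    print(approved(tx, sigs, secrets, k=2))  # True: 2-of-3 threshold met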

Over 80 countries are launching some form of initiative related to CBDC. (Image: BIS)

One concern is that central bank or government insiders, law enforcement and other agents may have roles that allow privileged actions, such as the freezing or withdrawal of funds in CBDC accounts without the user's consent. These capabilities are in line with today's compliance procedures in regulated payment systems. Though such roles are likely to be a functional requirement of a CBDC, it is possible for them to enable malicious insiders to abuse the system. As with other types of information security, the central bank and any intermediaries involved should have and execute a cybersecurity risk-management plan covering such privileges. Multi-party mechanisms, such as those employed by multi-signature wallets, or other protections could increase the difficulty of such attacks.

If the CBDC operates on blockchain technology, where nodes include non-central bank entities that have powers to validate or invalidate transactions, malicious validator nodes can pose security threats. They could also undermine the central bank's monetary authority and independence by accepting or rejecting transactions contrary to the central bank's intention. Thus, it is generally not recommended for non-central bank nodes to have transaction validation powers unless absolutely necessary.

Depending on the consensus protocol used, non-central bank nodes with privileged power could declare transactions as invalid, essentially blocking them from being accepted by the network and creating a denial-of-service attack for CBDC users and censorship of their transactions.

Collusion by non-central bank nodes could also enable double-spending attacks, a form of counterfeiting where the CBDC is spent multiple times illegitimately. The nodes may also decide to fork the distributed ledger, creating a different track and view of the ledger of transactions that disagrees with the central bank's. CBDC end users could try to spend funds from their wallets in multiple places, also constituting digital counterfeiting. The risk of double-spend is higher if the CBDC in question has offline capability, depending on the technology with which it operates; in this scenario, double-spend transactions could be sent to offline entities without the high-security validation process that would normally occur online.

By imposing limits on spending and transaction frequency while the CBDC user is offline, the impact of such attacks would be reduced. Further, once a device that has been conducting transactions comes back online, compliance software could sync and validate any transactions that occurred during the offline period.
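
A short Python sketch of those mitigations; the limits, field names and ledger interface are assumptions for illustration:

    # Sketch of the offline mitigations described above: a wallet caps the
    # amount and number of transactions it will make while offline, then
    # replays the queue for full validation once connectivity returns.
    OFFLINE_SPEND_LIMIT = 100.0   # currency units
    OFFLINE_TX_LIMIT = 5          # max transactions while offline

    class OfflineWallet:
        def __init__(self, balance):
            self.balance = balance
            self.pending = []         # transactions awaiting online validation
            self.offline_spent = 0.0

        def pay_offline(self, payee, amount):
            if len(self.pending) >= OFFLINE_TX_LIMIT:
                return False  # frequency cap reached
            if self.offline_spent + amount > OFFLINE_SPEND_LIMIT:
                return False  # spending cap reached
            if amount > self.balance:
                return False
            self.balance -= amount
            self.offline_spent += amount
            self.pending.append((payee, amount))
            return True

        def sync(self, ledger):
            """Back online: submit queued transactions for full validation."""
            for tx in self.pending:
                ledger.validate(tx)
            self.pending.clear()
            self.offline_spent = 0.0

    wallet = OfflineWallet(balance=250.0)
    print(wallet.pay_offline("kiosk-1", 60.0))  # True
    print(wallet.pay_offline("kiosk-2", 60.0))  # False: exceeds offline cap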

Quantum computing will ultimately impact all financial services as it compromises major data encryption methodologies and cryptographic primitives used for protecting access, confidentiality and integrity of data stored and transmitted. CBDC is no exception. Therefore, the threat of emerging quantum computers, which can compromise the cryptography employed to secure CBDC accounts, must be taken into account during technology design. For instance, central banks should consider the vulnerability of certain primitives to forthcoming quantum computing. Moreover, quantum computers in the future might be able to break the cryptography in the CBDC system without detection.


Cybersecurity, technical resilience and sound technical governance are the most important elements of CBDC technical design. Failure to implement a robust cybersecurity strategy and consider the risks introduced above could compromise citizen data and funds, the success of the CBDC programme, the central bank's reputation and broader opinion of the new currency. Based on past experiences of cybersecurity failures, the bar for security is not only about keeping the bad guys out or minimizing unauthorized account access. It must be comprehensive and consider the full spectrum of risks, ensuring that the system works as it was designed and that its integrity remains intact. Only then will CBDC be successful in achieving its goals.

Written by

Sebastian Banescu, Senior Research Engineer / Security Auditor, Quantstamp

Ben Borodach, Vice-President, Strategy and Operations, Team8

Ashley Lannquist, Project Lead, Blockchain and Distributed Ledger Technology, World Economic Forum

The views expressed in this article are those of the author alone and not the World Economic Forum.


What is Cloud Storage | IBM

An introduction to the important aspects of cloud storage, including how it works, its benefits, and the different types of cloud storage that are available.

Cloud storage allows you to save data and files in an off-site location that you access either through the public internet or a dedicated private network connection. Data that you transfer off-site for storage becomes the responsibility of a third-party cloud provider. The provider hosts, secures, manages, and maintains the servers and associated infrastructure and ensures you have access to the data whenever you need it.

Cloud storage delivers a cost-effective, scalable alternative to storing files on on-premise hard drives or storage networks. Computer hard drives can only store a finite amount of data. When users run out of storage, they need to transfer files to an external storage device. Traditionally, organizations built and maintained storage area networks (SANs) to archive data and files. SANs are expensive to maintain, however, because as stored data grows, companies have to invest in adding servers and infrastructure to accommodate increased demand.

Cloud storage services provide elasticity, which means you can scale capacity as your data volumes increase or dial down capacity if necessary. By storing data in a cloud, your organization saves by paying for storage technology and capacity as a service, rather than investing in the capital costs of building and maintaining in-house storage networks. You pay only for the capacity you use. While your costs might increase over time to account for higher data volumes, you don't have to overprovision storage networks in anticipation of increased data volume.

Like on-premise storage networks, cloud storage uses servers to save data; however, the data is sent to servers at an off-site location. Most of the servers you use are virtual machines hosted on a physical server. As your storage needs increase, the provider creates new virtual servers to meet demand.

For more information on virtual machines, see Virtual Machines: A Complete Guide.

Typically, you connect to the storage cloud either through the internet or a dedicated private connection, using a web portal, website, or a mobile app. The server with which you connect forwards your data to a pool of servers located in one or more data centers, depending on the size of the cloud provider's operation.

As part of the service, providers typically store the same data on multiple machines for redundancy. This way, if a server is taken down for maintenance or suffers an outage, you can still access your data.

Cloud storage is available in private, public and hybrid clouds.

As with any other cloud-based technology, cloud storage offers some distinct advantages. But it also raises some concerns for companies, primarily over security and administrative control.

The pros of cloud storage include the following:

Cloud storage cons include the following:

There are three main types of cloud storage: block, file, and object. Each comes with its set of advantages:

Traditionally employed in SANs, block storage is also common in cloud storage environments. In this storage model, data is organized into large volumes called "blocks." Each block represents a separate hard drive. Cloud storage providers use blocks to split large amounts of data among multiple storage nodes. Block storage resources provide better performance over a network thanks to low IO latency (the time it takes to complete a connection between the system and client) and are especially suited to large databases and applications.
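
A simplified Python sketch of that splitting; the tiny block size and node names exist only to keep the demo readable:

    # Illustrative sketch of the block model: a payload is cut into
    # fixed-size blocks and spread round-robin across storage nodes.
    BLOCK_SIZE = 4  # bytes; real systems use much larger blocks

    def split_into_blocks(data, nodes):
        placement = {node: [] for node in nodes}
        blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
        for i, block in enumerate(blocks):
            placement[nodes[i % len(nodes)]].append((i, block))
        return placement

    layout = split_into_blocks(b"customer visit records", ["node-a", "node-b", "node-c"])
    for node, blocks in layout.items():
        print(node, blocks)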

Used in the cloud, block storage scales easily to support the growth of your organizations databases and applications. Block storage would be useful if your website captures large amounts of visitor data that needs to be stored.

Block Storage: A Complete Guide provides a wealth of information on block storage.

The file storage method saves data in the hierarchical file and folder structure with which most of us are familiar. The data retains its format, whether residing in the storage system or in the client where it originates, and the hierarchy makes it easier and more intuitive to find and retrieve files when needed. File storage is commonly used for development platforms, home directories, and repositories for video, audio, and other files.

In the video Block Storage vs. File Storage, Amy Blea compares and contrasts these two cloud storage options:

Block Storage vs. File Storage (04:03)

Object storage differs from file and block storage in that it manages data as objects. Each object includes the data in a file, its associated metadata, and an identifier. Objects store data in the format it arrives in and make it possible to customize metadata in ways that make the data easier to access and analyze. Instead of being organized in files or folder hierarchies, objects are kept in repositories that deliver virtually unlimited scalability. Since there is no filing hierarchy and the metadata is customizable, object storage allows you to optimize storage resources in a cost-effective way.
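
A minimal Python sketch of the object model, with data, customizable metadata and a generated identifier kept together in a flat repository:

    # Sketch of the object model: data, metadata, and a unique identifier
    # stored together, with no folder hierarchy.
    import uuid

    class ObjectStore:
        def __init__(self):
            self.repo = {}  # flat namespace keyed by object ID

        def put(self, data, **metadata):
            object_id = str(uuid.uuid4())
            self.repo[object_id] = {"data": data, "metadata": metadata}
            return object_id

        def get(self, object_id):
            return self.repo[object_id]

    store = ObjectStore()
    oid = store.put(b"...jpeg bytes...", content_type="image/jpeg", camera="drone-2")
    print(oid, store.get(oid)["metadata"])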

Check out "IBM Cloud Object Storage: Built for business" to learn more about benefits of object storage:

IBM Cloud Object Storage: Built for business (04:10)

A variety of cloud storage services is available for just about every kind of business, from sole proprietorships to large enterprises.

If you run a small business, cloud storage could make sense, particularly if you don't have the resources or skills to manage storage yourself. Cloud storage can also help with budget planning by making storage costs predictable, and it gives you the ability to scale as the business grows.

If you work at a larger enterprise (e.g., a manufacturing company, financial services, or a retail chain with dozens of locations), you might need to transfer hundreds of gigabytes of data for storage on a regular basis. In these cases, you should work with an established cloud storage provider that can handle your volumes. In some cases, you may be able to negotiate custom deals with providers to get the best value.

Cloud storage security is a serious concern, especially if your organization handles sensitive data like credit card information and medical records. You want assurances your data is protected from cyber threats with the most up-to-date methods available. You will want layered security solutions that include endpoint protection, content and email filtering and threat analysis, as well as best practices that comprise regular updates and patches. And you need well-defined access and authentication policies.

Most cloud storage providers offer baseline security measures that include access control, user authentication, and data encryption. Ensuring these measures are in place is especially important when the data in question involves confidential business files, personnel records, and intellectual property. Data subject to regulatory compliance may require added protection, so you need to check that your provider of choice complies with all applicable regulations.

Whenever data travels, it is vulnerable to security risks. You share the responsibility for securing data headed for a storage cloud. Companies can minimize risks by encrypting data in motion and using dedicated private connections (instead of the public internet) to connect with the cloud storage provider.

Data backup is as important as security. Businesses need to back up their data so they can access copies of files and applications and prevent interruptions to business if data is lost due to cyberattack, natural disaster, or human error.

Cloud-based data backup and recovery services have been popular from the early days of cloud-based solutions. Much like cloud storage itself, you access the service through the public internet or a private connection. Cloud backup and recovery services free organizations from the tasks involved in regularly replicating critical business data to make it readily available should you ever need it in the wake of data loss caused by a natural disaster, cyber attack or unintentional user error.

Cloud backup offers the same advantages to businesses as storage: cost-effectiveness, scalability, and easy access. One of the most attractive features of cloud backup is automation. Asking users to continually back up their own data produces mixed results since some users always put it off or forget to do it. This creates a situation where data loss is inevitable. With automated backups, you can decide how often to back up your data, be it daily, hourly or whenever new data is introduced to your network.

Backing up data off-premise in a cloud offers an added advantage: distance. A building struck by a natural disaster, terror attack, or some other calamity could lose its on-premise backup systems, making it impossible to recover lost data. Off-premise backup provides insurance against such an event.

Cloud storage servers are virtual servers: software-defined servers that emulate physical servers. A physical server can host multiple virtual servers, making it easier to provide cloud-based storage solutions to multiple customers. The use of virtual servers boosts efficiency because physical servers otherwise typically operate below capacity, which means some of their processing power is wasted.

This approach is what enables cloud storage providers to offer pay-as-you-go cloud storage, and to charge only for the storage capacity you consume. When your cloud storage servers are about to reach capacity, the cloud provider spins up another server to add capacity, or makes it possible for you to spin up an additional virtual machine on your own.

Virtualization: A Complete Guide offers a complete overview of virtualization and virtual servers.

If you have the expertise to build your own virtual cloud servers, one of the options available to you is open source cloud storage. Open source means the software used in the service is available to users and developers to study, inspect, change and distribute.

Open source cloud storage is typically associated with Linux and other open source platforms that provide the option to build your own storage server. Advantages of this approach include control over administrative tasks and security.

Cost-effectiveness is another plus. While cloud-based storage providers give you virtually unlimited capacity, it comes at a price. The more storage capacity you use, the higher the price gets. With open source, you can continue to scale capacity as long as you have the coding and engineering expertise to develop and maintain a storage cloud.

Different open source cloud storage providers offer varying levels of functionality, so you should compare features before deciding which service to use. Some of the functions available from open source cloud storage services include the following:

As mentioned, cloud storage helps companies cut costs by eliminating in-house storage infrastructure. But cloud storage pricing models vary. Some cloud storage providers charge monthly per-gigabyte costs, while others charge fees based on stored capacity. Fees vary widely; you may pay USD 1.99 or USD 10 for 100 GB of storage monthly, depending on the provider you choose. Additional fees for transferring data from your network to the storage cloud are usually included in the overall service price.

Providers may charge additional fees on top of the basic cost of storage and data transfer. For instance, you may incur an extra fee every time you access data in the cloud to make changes or deletions, or to move data from one place to another. The more of these actions you perform on a monthly basis, the higher your costs will be. Even if the provider includes some base level of activity in the overall price, you will incur extra charges if you exceed the allowable limit.
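
A worked example of how such a schedule adds up, in Python; every rate below is hypothetical, not any provider's published pricing:

    # Hypothetical pricing schedule: per-GB storage plus per-operation fees
    # beyond a free tier. All numbers are invented for illustration.
    PER_GB_MONTH = 0.02        # storage, USD per GB per month
    PER_OPERATION = 0.0004     # USD per access/change/delete beyond the free tier
    FREE_OPERATIONS = 10_000   # operations included in the base price

    def monthly_cost(stored_gb, operations):
        ops_billed = max(0, operations - FREE_OPERATIONS)
        return stored_gb * PER_GB_MONTH + ops_billed * PER_OPERATION

    # 500 GB stored, 25,000 operations: 500*0.02 + 15,000*0.0004 = $16.00
    print(f"${monthly_cost(500, 25_000):.2f}")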

Providers may also factor the number of users accessing the data, how often users access data, and how far the data has to travel into their charges. They may charge differently based on the types of data stored and whether the data requires added levels of security for privacy purposes and regulatory compliance.

Cloud storage services are available from dozens of providers to suit all needs, from those of individual users to multinational organizations with thousands of locations. For instance, you can store emails and passwords in the cloud, as well as files like spreadsheets and Word documents for sharing and collaborating with other users. This capability makes it easier for users to work together on a project, which explains why file transfer and sharing are among the most common uses of cloud storage services.

Some services provide file management and syncing, ensuring that versions of the same files in multiple locations are updated whenever someone changes them. You can also get file management capability through cloud storage services. With it, you can organize documents, spreadsheets, and other files as you see fit and make them accessible to other users. Cloud storage services also can handle media files, such as video and audio, as well as large volumes of database records that would otherwise take up too much room inside your network.

Whatever your storage needs, you should have no trouble finding a cloud storage service to deliver the capacity and functionality you need.

IBM Cloud Storage offers a comprehensive suite of cloud storage services, including out-of-the-box solutions, components to create your own storage solution, and standalone and secondary storage.

Benefits of IBM Cloud solutions include:

You also can take advantage of IBM's automated data backup and recovery system, which is managed through the IBM Cloud Backup WebCC browser utility. The system allows you to securely back up data in one or more IBM cloud data centers around the world.

Storage software is predicted to overtake storage hardware by 2020, by which time it will need to manage 40 zettabytes (40 sextillion bytes) of data. Check out IBM's report Hybrid storage for the hybrid cloud.

Build storage skills through courses within the Cloud Architect Professional and the Cloud SRE Professional curriculums.

Get started with an IBM Cloud account today.

Original post:
What is Cloud Storage | IBM

Read More..

iCloud vs pCloud: Which is the best cloud storage service? – Macworld UK

iCloud may well be the default choice for keeping documents, photos and data synced across all of your Apple products, but there are plenty of alternatives available. One you may not be that familiar with is pCloud, which offers a wide range of features and compatibility with not only your iOS and macOS devices, but also Windows and Android. We take a look at how the two services compare.

On the surface, iCloud looks like a normal online storage service along the lines of Google Drive, Dropbox or OneDrive. But because iCloud is Apple's own product, it integrates far more deeply into macOS, iOS and iPadOS, giving it some unique abilities.

As well as providing a virtual drive on your device that you can use to store files, iCloud can also sync your contacts, messages, calendars, notes, and email. It should be noted that these features work only with Apple apps, and you'll need to grant permissions first and ensure that any device you want to sync is signed in with the same Apple ID.

You should also remember that as these files are synced, any changes made on one device will be reflected on all others. So don't delete a contact on your iPhone and expect it to still appear on your Mac.

Read our What is iCloud? guide for more details on how it works and what capabilities are on offer.


pCloud is more of a traditional backup-and-sync service. Once installed, you can create folders or use the existing ones within the pCloud Drive to copy and store files in the cloud. These are synced across your devices via the apps.

Alongside the pCloud Drive you can also select different folders from your Mac which will be automatically backed up to the pCloud servers. This is a handy way to create a continuous backup of important data, without having to move or alter your existing folder layout.

The service also provides the ability to back up other cloud services, such as Google Photos, Google Drive, OneDrive, Dropbox and Facebook.

As you can see, there are some similarities between the two services, but even the implementation of these can be a little different. Here's how they compare:

iCloud is baked into Apple's software and as such covers the extra data we outlined above. You don't have to download any apps, as everything is part of iOS, macOS and iPadOS.

In many ways iCloud is a suite of services rather than just a cloud storage facility, with iCloud Drive and iCloud Photo Library being there to store your documents, files, photos and videos. iCloud is available on iPhone, iPad, Mac, Apple Watch and Apple TV.

pCloud doesn't have the access privileges granted to iCloud, so it acts as a separate app that you can use to store your files on whichever platform you desire. This makes it a better option if you regularly move between Apple and non-Apple devices. There are apps available for iPhone, iPad, Mac, Windows, Linux and Android.

Both have web-based versions you can use, but the better user experience is definitely found in the apps.

In many ways, iCloud is essentially an extension of the storage on your device. This is not a backup - it's a way to store all your files in the cloud so you can view them on all your devices. It's also a great way to save space. For example, you can store high-res versions of your photos in iCloud rather than on your iPhone; you'll still be able to view the photos on your device, but as lower-res versions that won't take up as much space. As mentioned above, this is a file-syncing service rather than a backup, so deleting a photo on one device deletes it on all of them.

The iCloud Drive folder acts as a synced drive with the iCloud servers, so whatever you put in there will be available across all your devices. The only limitation is that you'll need enough iCloud storage capacity to be able to store all the data (we discuss the costs below).

iCloud also links straight into Apple's iWork apps (Pages, Numbers and Keynote) so you can use it as your document storage online. While you can access iCloud through the web version, which makes it available for non-Apple users, it's not great. So, really, iCloud is purpose-built to be Apple-only.

On the other hand, pCloud is a classic 'virtual' drive, meaning you have a drive folder on your device into which you can drag or send various files, all of which will then be synced up to the pCloud servers.

By default, pCloud has a variety of folders in place where you can store Music, Pictures, and Videos alongside documents and any other files you want to back up. You can also select various folders on your Mac that will be backed up automatically in real time.

While the maximum file size available on iCloud is 50GB, pCloud doesn't have any restrictions. pCloud also provides full support for file versioning, meaning it retains previous versions of a file in case you want to return to an earlier iteration.

iCloud doesn't really support this, which makes it less useful if you work on documents that go through multiple versions and could contain previous information you want to access.

pCloud's standard versioning window is 15 days, but if you sign up to a paid tier (see below) this can be extended to up to a year.
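Conceptually, versioning amounts to keeping timestamped revisions and pruning those that fall outside the retention window. The sketch below illustrates that retention logic with an invented data layout; it is not pCloud's actual implementation.

```python
# Illustrative version-retention logic: keep only revisions newer than the
# retention window (15 days on pCloud's standard plan, per the article).
# The revision structure is invented; real services store this server-side.
from datetime import datetime, timedelta

def prune_versions(versions, retention_days=15):
    """Drop file revisions older than the retention window."""
    cutoff = datetime.now() - timedelta(days=retention_days)
    return [v for v in versions if v["saved_at"] >= cutoff]

history = [
    {"rev": 1, "saved_at": datetime.now() - timedelta(days=30)},
    {"rev": 2, "saved_at": datetime.now() - timedelta(days=3)},
]
print(prune_versions(history))  # only rev 2 survives a 15-day window
```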

Security is also different for the two services. iCloud offers end-to-end encryption only for elements such as the Keychain (passwords), payment information and health data; other areas (Mail, Notes, iCloud Drive and so on) are securely transferred to Apple's servers, where they are then encrypted. This means Apple can technically see those files in an unencrypted format.

pCloud offers client-side encryption via its Crypto Folder, though it is a paid extra. Any data you store on the service is accessible only by you, as the encryption key is held on your device rather than the pCloud servers.
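To illustrate what client-side encryption means in practice - the key never leaves your machine, so the provider stores only ciphertext - here is a short sketch using Python's third-party cryptography package. This is conceptual only, not pCloud's actual Crypto implementation.

```python
# Sketch of client-side encryption: data is encrypted before upload and the
# key stays on the device, so the provider only ever sees ciphertext.
# Uses the third-party `cryptography` package; not pCloud's actual code.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # kept locally, never uploaded
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"contents of a private file")
# ...only `ciphertext` would be sent to the storage provider...

assert cipher.decrypt(ciphertext) == b"contents of a private file"
```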

With the introduction of iCloud+ (the paid tiers of iCloud) towards the end of 2021, Apple did add Private Relay to the feature set. This protects your browsing from any prying eyes, so long as you use Safari. There's also Hide My Email, which allows you to instantly create disposable email addresses you can use when signing up to things online, so your real one is never exposed.

You get a free allocation with both services. In addition, you can pay for the following:

iCloud offers a free 5GB tier, then you can move up to these monthly paid tiers of iCloud+:

pCloud gives you a base free allocation of 2GB but this is immediately upgraded by completing basic tasks such as downloading the apps and saving a file, so that you end up with 7GB. This can be extended further by recommending the service to friends, with an extra 1GB for any that sign up, topping out at 10GB of free storage.

There are fewer options when it comes to storage amounts, with these options available:

There's also the option to buy a lifetime subscription with the following one-time costs:

If you want the Crypto storage option, which uses the ultra-safe client-side encryption, it costs an additional £4.29/$4.99/€4.99 per month, or you can buy a lifetime subscription for £107/$125/€125.

Both services give you the ability to save specific files and folders, but neither creates full system backups you can use to recover from things like a hard disk crash. For that you'll need to read our best backup software for Mac roundup.

If you're looking for a seamless way to keep your various Apple devices in sync, then iCloud makes life very easy indeed. Thanks to its deep system integration, your contacts, calendars, passwords, photos, files and other important data are all automatically backed up to the iCloud servers, where they can be accessed via apps or the web. The free allocation is pitiful, but if you're happy to pay the (we think quite reasonable) monthly subscription then it becomes an excellent solution.

pCloud is probably a better option if you're looking to permanently store large numbers of files in the cloud. It's not particularly cheap, but if you think long-term, then the lifetime subscriptions do offer good value.

Read the original here:
iCloud vs pCloud: Which is the best cloud storage service? - Macworld UK

Read More..

Media And Entertainment Digital Storage Growth Swells With Increased Cloud And Remote Services – Forbes

Projected Growth in Cloud Storage Revenue for Professional Media and Entertainment from Coughlin Associates Report

Coughlin Associates released its seventeenth report focusing on digital storage in all aspects of professional media and entertainment. The report includes results from a 2021 survey of M&E professionals on their digital storage needs.

As a result of changes in the economics of storage devices, higher-performance solid-state storage is playing an increasing role as primary storage. The cloud, and hybrid storage that includes the cloud, assumed a new importance for many workflows during the Covid-19 pandemic, and use of cloud storage will continue to grow in the media and entertainment market after the pandemic passes. The growth of cloud storage revenue for M&E applications from the 2022 report is shown above.

Some additional highlights from the report:

The Covid-19 pandemic had a big impact on content creation during 2020 and 2021, except for broadcast acquisition

Spending for digital cinema in 2021 and during the next few years will be impacted by the pandemic

The creation, distribution and conversion of video content is a huge demand driver for storage device and systems manufacturers

As image resolution increases and as stereoscopic VR video becomes more common, storage requirements explode

The development of 4K TV and other high-resolution venues in the home and on mobile devices will drive demand for digital content, especially as enabled by HEVC (H.265) and VVC (H.266) compression and even more efficient compression standards that will enable 8K and higher resolution and frame rate workflows.

Activity to create capture and display devices for 8K x 4K content is under way, with planned implementation in common media systems this decade

Active archiving will drive increased use of HDD storage for archiving applications, supplementing tape for long term archives

Flash memory dominates cameras and is finding wider use in post-production and content distribution systems

The growth in storage capacities will result in a total media and entertainment storage revenue growth of about 2.1 X between 2020 and 2026 (from $9.1B to $19.2B)

Between 2020 and 2026 we expect about a 3.2 X increase in the required digital storage capacity used in the entertainment industry and about a 4.4 X increase in storage capacity shipped per year (from about 69EB to 304EB)

In 2020 content distribution is estimated at 44% of total storage revenue, followed by archiving and preservation at 32%, post-production at 5% (due to the impact of COVID) and content acquisition at 19%.

In 2026 the projected revenue distribution is 37% content distribution, 23% post production, 23% content acquisition and 17% archiving and preservation.

By 2026 we expect about 59% of archived content to be in near-line and object storage, up from 50% in 2020

In 2020 we estimate that about 72% of the total storage media capacity shipped for all the digital entertainment content segments was in HDDs, with digital tape at about 21%, optical discs at about 3% and flash at about 4%

By 2026, tape's share of shipped capacity will be reduced to about 12%, HDDs' shipped capacity will be about 76%, optical disc capacity will be down to about 0.3% and flash's share will be at about 11%

Media revenue is expected to increase about 1.8 X from 2020 to 2026 ($1.3B to $2.3B).

Although no longer the biggest driver of digital storage growth, the digital conversion of film, video tape and other analog formats and its long-term digital preservation is still a significant driver for archived content

Over 141 Exabytes of new digital storage will be used for digital archiving and content conversion and preservation by 2026

Storage in remote clouds is playing an important role in enabling collaborative workflows, content distribution and in archiving

Overall cloud storage capacity for media and entertainment is expected to grow over 13.8 X between 2020 and 2026 (10.1EB to 140EB)

Overall object storage capacity for media and entertainment is expected to grow about 5.6 X between 2020 and 2026 (17.1EB to 96.5EB)

Cloud storage revenue will be about $3.3B by 2026

By our estimates, professional media and entertainment storage capacity represents about 4.9% of total shipped storage capacity in 2020.

In 2020 professional media and entertainment consumed about 15% of all tape capacity shipments, 6% of all hard disk drive shipments and 2% of all flash memory shipments. We estimate that media and entertainment spending was about 10% of total storage revenue in 2020.

The media and entertainment industry is a significant driver of digital storage growth and development across all types of storage media and storage technology. In particular, remote M&E workflows are driving cloud-based storage growth.
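As a rough sense-check, the report's 2020-2026 multipliers can be converted into the compound annual growth rates they imply. The short calculation below is our own arithmetic, not the report's:

```python
# Converting the report's 2020-2026 growth multipliers into implied
# compound annual growth rates (our arithmetic, not from the report).
def implied_cagr(multiple, years=6):
    return multiple ** (1 / years) - 1

for label, multiple in [("Total M&E storage revenue", 2.1),
                        ("Required storage capacity", 3.2),
                        ("Cloud storage capacity", 13.8)]:
    print(f"{label}: {implied_cagr(multiple):.1%} per year")
# Total M&E storage revenue: 13.2% per year
# Required storage capacity: 21.4% per year
# Cloud storage capacity: 54.9% per year
```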

Read the rest here:
Media And Entertainment Digital Storage Growth Swells With Increased Cloud And Remote Services - Forbes

Read More..