Category Archives: Cloud Storage
What if there was AirBnb for the Cloud? – Capacity Media
80% of disk drives worldwide are only 20% full, says Ben Golub, CEO of innovative cloud provider Storj.
This fact is the very reason Storj exists.
Storj taps into this existing storage capacity from tens of thousands of nodes worldwide, eliminating the need to build new drives and data centres, he tells Capacity.
And alongside the obvious sustainability benefits this provides, it allows Storj to offer cloud storage for a tenth of the price of legacy providers. Golub says that the carbon emissions saved by using this model can reach up to 83%.
Storing data securely across tens of thousands of nodes worldwide eliminates a single point of failure, and because of parallelism, reduces congestion. This enables Storj to deliver upload and download speeds that are consistently high compared to players like AWS, Golub claims. The distributed model is also able to meet requirements that cloud users desire, such as speed, reliability, security, global accessibility and low cost.
However, industries that handle highly sensitive data like healthcare and finance need to meet certain compliance requirements that limit where data can be stored, Golub says.
This has inspired Storj to launch Storj Select, a new enterprise feature that delivers customisation over which of the thousands of nodes specific data is stored on. With Storj Select, customer data is only stored on points of presence which meet customer-specified qualifications.
CloudWave, a provider of healthcare data security solutions, was revealed yesterday as one of Storj Select's first customers.
"The more than 280 hospitals that we work with across six countries are focused on protecting patients. Our role is to protect their patient data from cyber attacks," CloudWave said.
"Storj Select's uniquely customisable compliance features, speed and security are a key component of our cloud strategy."
The idea for Storj Select came directly from feedback from enterprises that were interested in switching to its cloud storage platform, but couldn't because they needed to meet specific compliance requirements.
The new feature allows Storj to store data only on the storage nodes that meet those qualifications, such as SOC 2, a voluntary compliance standard that governs how data should be protected.
"Storj Select opens the door for us to serve more customers like CloudWave, which is now a Storj customer because of the existence of Storj Select," Golub says.
"As a disruptive startup, we have to be agile to meet our customers' needs," he continues. "There are several aspects of our business model and architecture that allow us to bring this type of service online very quickly."
Firstly, because Storj uses existing capacity that's already spinning, powered and cooled in data centres, it doesn't face the same supply chain challenges as other providers when it comes to sourcing the physical hardware to bring a multi-petabyte storage service online.
Storj was also able to leverage globally distributed facilities that already had SOC 2 certifications in place, further reducing the time to market.
"It was a coordinated effort to build the product capabilities, source the supply, and onboard our first enterprise customer, but 90% of what we needed was already in place," Golub says.
Golub also says that the product capabilities Storj built enable it to be much more dynamic in how it selects and restricts the nodes that will store a particular data set.
"This enabled us to very rapidly bring a tier of storage online that stores data only in SOC 2 Type 2 data centres in the US," he says.
How does it work?
Instead of storing data in a few centralised data centres that are vulnerable to outages and cyber attacks, Storj encrypts data and splits it into small pieces, including redundancy, that are then stored across its global network, Golub explains.
When a download request is made, only a small subset of the fastest pieces is needed to reconstitute the file. This distributed architecture is designed to protect against outages, ransomware, and data compromise.
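Storj's production network uses Reed-Solomon erasure coding across many nodes; the minimal sketch below is not that scheme, just a toy single-parity layout (any k of k+1 pieces rebuild the file) to illustrate the reconstruct-from-a-subset idea Golub describes. The function names and parameters are illustrative only.

```python
from functools import reduce

def split_with_parity(data: bytes, k: int) -> list[bytes]:
    """Split data into k equal data pieces plus one XOR parity piece."""
    piece_len = -(-len(data) // k)                     # ceiling division
    padded = data.ljust(piece_len * k, b"\0")          # pad so pieces align
    pieces = [padded[i * piece_len:(i + 1) * piece_len] for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*pieces))
    return pieces + [parity]                           # n = k + 1 pieces total

def rebuild(pieces: list, k: int, original_len: int) -> bytes:
    """Rebuild the file from any k of the k+1 pieces (at most one missing)."""
    missing = [i for i, p in enumerate(pieces) if p is None]
    assert len(missing) <= 1, "single-parity toy can only survive one loss"
    if missing:
        present = [p for p in pieces if p is not None]
        recovered = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*present))
        pieces[missing[0]] = recovered
    return b"".join(pieces[:k])[:original_len]         # drop parity and padding

if __name__ == "__main__":
    blob = b"patient-imaging-archive-0001" * 100
    pieces = split_with_parity(blob, k=4)
    pieces[2] = None                                   # simulate a lost node
    assert rebuild(pieces, k=4, original_len=len(blob)) == blob
```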
Storj Select takes this one step further and enhances data security by ensuring that customer data is only stored on nodes that meet the specific compliance requirements of that customer.
"The distribution of data on the network is critical to the performance, durability and security of data stored on the network," Golub continues.
All of the processes for selecting nodes to store a particular object, distributing the encrypted pieces of erasure-coded data to nodes, auditing the nodes and repairing data are fully automated.
"Those processes are highly scalable, both horizontally, by adding more nodes and resources, and vertically, by scaling up the resources of a particular node," he explains.
The Storj Select service does require an initial manual process to set placement rules or geofenced regions and choose where the data in particular buckets or projects will be stored.
But once a placement rule is created and applied, everything after that is fully automated.
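Storj has not published the schema of a Storj Select placement rule, so the following is a hypothetical sketch of what "store pieces only on nodes matching customer-specified qualifications" could look like as a simple node filter. Every field name (country, certifications, and so on) is an assumption for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Hypothetical node metadata; Storj's real node records differ.
    node_id: str
    country: str
    certifications: set = field(default_factory=set)

@dataclass
class PlacementRule:
    # Hypothetical Storj Select-style rule, e.g. "US-only, SOC 2 Type 2 facilities".
    allowed_countries: set
    required_certifications: set

    def matches(self, node: Node) -> bool:
        return (node.country in self.allowed_countries
                and self.required_certifications <= node.certifications)

def eligible_nodes(nodes: list, rule: PlacementRule) -> list:
    """Restrict piece placement to nodes that satisfy the rule."""
    return [n for n in nodes if rule.matches(n)]

if __name__ == "__main__":
    fleet = [
        Node("n1", "US", {"SOC2-Type2"}),
        Node("n2", "DE", {"SOC2-Type2", "ISO27001"}),
        Node("n3", "US", set()),
    ]
    soc2_us = PlacementRule(allowed_countries={"US"},
                            required_certifications={"SOC2-Type2"})
    print([n.node_id for n in eligible_nodes(fleet, soc2_us)])  # ['n1']
```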
"That automation ensures we achieve our eleven nines of durability with billions of objects stored on the Storj network, in tens of billions of pieces distributed over tens of thousands of nodes in over 100 countries," Golub says.
What's next for Storj?
"We're continuing to innovate and adapt Storj to better meet the needs of our growing customer base. Additionally, we have some exciting new integrations, customers, partnerships and sustainability initiatives coming down the pipeline that we'll be sure to keep you apprised of!" Golub concludes.
See the original post here:
What if there was AirBnb for the Cloud? - Capacity Media
DNA storage finally reaches mainstream (well, sort of) but it will cost you a whopping $1000 per KB (yes, kilobyte) – TechRadar
DNA storage is finally within reach, with the promise of being able to store billions of terabytes of data in a tiny volume, for hundreds of years and at a minimal cost. In a tantalizing breakthrough, French firm Biomemory has rolled out credit-card-sized storage devices that can store one kilobyte of data each on DNA.
For $1,000, you get two identical Biomemory cards; yes, they are very expensive - the first commercially available hard drive (IBM 350-1) retailed for $34,500 for 3.75MB - and slow; but this is primarily a proof of concept. The DNA on the card is dried; it has to be rehydrated and read by a sequencing machine (Biomemory has partnered with a third party to get it done).
DNA storage has been in the realm of science (and a bit of fiction) for decades but the past two years have seen a flurry of announcements (from giants like Microsoft and Seagate to smaller players like Catalog or Biomemory) that have made it one of the most likely candidates to help quench the global thirst for data storage.
Storing bytes in DNA could pave the way for near unlimited data storage capacity at extremely low acquisition cost, with market-leading operational costs (both to store and retrieve data) and a shelf-life counted in multiple decades.
For now, though, there are some serious hurdles to clear before DNA can make its way up the storage food chain as the go-to replacement for tape, then hard drives. Other than price, there's the speed at which data is stored and retrieved (currently in the low MB/s) and the still relatively unproven nature of the medium.
It's been almost a decade since Seagate told us that the world would be running out of storage capacity by 2016 (spoiler alert: it never happened), but generative AI and machine learning will probably help us reach that crunch point much faster than expected.
Biomemory wants to launch a 100PB enterprise-focused DNA card by 2026, which is just a few years away. It took hard drives about 60 years to grow in size by a factor of 1,000,000 to 4TB.
The French startup wants to smash through this, achieving a growth factor of 100 trillion (that's a one followed by 14 zeros) in under 36 months. If it manages to achieve that, it will be one of the biggest technological feats of all time and Biomemory will become one of the hottest startups ever.
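A quick sanity check of the arithmetic behind those growth factors, using the figures quoted above:

```python
KB = 10**3        # decimal (SI) units, as storage vendors use
PB = 10**15

current_capacity = 1 * KB       # today's Biomemory card
target_capacity = 100 * PB      # planned enterprise card for 2026

growth_factor = target_capacity / current_capacity
print(f"{growth_factor:.0e}")   # 1e+14, i.e. 100 trillion

# For comparison, the hard drive jump the article mentions:
# IBM 350 at 3.75 MB to a 4 TB drive is roughly a factor of a million.
print(f"{4 * 10**12 / (3.75 * 10**6):,.0f}")  # 1,066,667
```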
Continued here:
DNA storage finally reaches mainstream (well, sort of) but it will cost you a whopping $1000 per KB (yes, kilobyte) - TechRadar
HPE goes all-in for AI with new hybrid cloud compute, storage products – Network World
At its annual HPE Discover event in Barcelona Thursday, Hewlett Packard Enterprise (HPE) unveiled a range of new and enhanced hybrid cloud products -- including storage, development tools, and rack-scale hardware co-engineered with Nvidia -- aimed at easing the development and deployment of generative AI systems for enterprises.
"We think the next decade is going to require full stack thinking from hardware all the way up through the software layers, as organizations lean into the deployment of AI applications," Evan Sparks, HPE's vice president and general manager for AI Solutions and Supercomputing Cloud told a news conference. "These applications are extremely computationally intensive. They require data as a first-class input and lots of lots of data. And our view is that this new architecture will require more thinking in terms of scale-up processing; they need to do workloads that are hybrid by design."
The announcement came a day after HPE announced its fourth-quarter earnings, in which revenue and profit were down overall and down for the traditional categories of storage and computing, but up for high-performance computing, AI, and intelligent edge. While overall revenue declined 7% year over year to $7.4 billion, and traditional compute sales for servers were down 31% to $2.6 billion, intelligent edge revenue jumped 41% to $1.4 billion and HPC and AI products were up 37% to $1.2 billion.
HPE GreenLake for File Storage, an all-flash data platform, is adapting to better support growing and evolving large-scale AI workloads, including those based on generative AI and large language models (LLMs). The platform has been upgraded for AI model training and tuning. The improvements include increased performance density, integration with Nvidia's new Quantum-2 InfiniBand for better GPU-focused computing, and significantly boosted throughput and capacity.
HPE previously said it is launching a cloud-based service powered by its supercomputers to support advanced AI applications. Initially focusing on LLMs, HPE plans to extend this service to specialized AI applications in areas including climate modeling, healthcare, finance, manufacturing, and transportation. Part of the HPE GreenLake portfolio, these services will be distinct in being exclusively cloud-based, unlike other GreenLake offerings that are on-premises or hybrid.
HPE and Nvidia are collaborating to develop a computing hardware and software platform specifically for generative AI tuning and inferencing -- the process where an AI model makes predictions and decisions on new data -- in enterprise data centers. The idea is to make it easier for businesses, regardless of their size, to adapt and use AI. This new offering is designed to be user-friendly, allowing companies to customize AI models with private data and deploy them from local devices to cloud-based systems.
The hardware component of the platform is based on a rack-scale architecture, where computing components are managed at the level of an entire rack of servers as a single entity to optimize resources. It includes HPE ProLiant Compute DL380a servers incorporating Nvidia L40S GPUs and Nvidia BlueField-3 DPUs (data processing units).
HPE's Machine Learning Development Environment Software has been updated to a managed service as a cloud-based solution for AI model training. The service, also included in its new AI tuning platform, is intended to help businesses quickly and safely start or advance their generative AI projects. It simplifies the training process of AI models, supposedly making it faster to develop them, and it's designed to adapt to future needs, reducing the strain on management and processing resources. Additionally, the software includes new features for generative AI to enable quick testing and prototyping of models.
The new AI platform also includes HPE Ezmeral Software, which has been updated to make it easier and quicker for businesses to handle data, analytics, and AI tasks. The software platform, which allows users to run cloud-native or non-cloud-native applications in containers, is designed to work smoothly across various cloud environments. Key updates include a more efficient hybrid data lakehouse, which is now better optimized for GPU and CPU usage, facilitating easier data management and analysis. Also, integration with HPE Machine Learning Development Environment Software is meant to improve model training and tuning. The software also features better GPU allocation management, enhancing performance across different workloads. Moreover, Ezmeral now supports more third-party tools, such as Whylogs for monitoring models and Voltron Data for faster, GPU-accelerated data queries.
Customers will be able to order the new AI platform and the various updated components in the first quarter of 2024.
HPE also announced a GreenLake Flex Solution for Digital Twin built on Nvidia OVX-certified HPE ProLiant Gen11 servers with Nvidia L40S GPUs. The solution is meant to allow users to create, simulate, and optimize virtual assets and processes. It's a scalable, multi-GPU framework that provides the necessary infrastructure, software, and services to leverage the benefits of industrial digitalization. The digital twin features AI-accelerated infrastructure and Nvidia Omniverse Enterprise, designed to enable rapid insights from all available data. The company says the solution combines a public cloud's flexibility with a private cloud's security.
A digital twin is a virtual model of a real-life object or system. The technology, which started with individual items, now covers larger subjects such as buildings, factories, and whole cities, and some extend the concept to people and processes, making the idea of digital twins much broader.
View post:
HPE goes all-in for AI with new hybrid cloud compute, storage products - Network World
Amazon S3 storage picks up speed at re:Invent 2023 – TechTarget
The glut of generative AI products at AWS can only work if customers are sitting on a trove of usable data, leading the public cloud provider to introduce new storage services it sees as vital to AI training.
At AWS re:Invent 2023 this week, AWS expanded its cloud storage offerings with almost a dozen new object, file and block services.
File storage in the AWS cloud gained the majority of these services, including additions to the AWS native version of NetApp's data platform, ONTAP. A few colder storage tiers for block and file rounded out the additions, further indicating the vendor's big data play, according to industry analysts.
But it was the additions to its object storage service that took center stage, with AWS CEO Adam Selipsky introducing a new Amazon S3 object storage tier during his opening keynote Tuesday. The new addition brings a higher-performance tier to the traditionally slower storage format.
Selipsky joked that storage "surely can't be an area that's ready for reinvention," but he went on to state that generative AI will need performance gains at every part of the tech stack.
Overall, the additions are incremental improvements to the AWS storage portfolio, said Dave Raffo, an analyst at Futurum Group. Compared with the vendor's headliner products such as Amazon Q, a generative AI assistant and a response to Microsoft's Copilot, these storage offerings aren't going to make any IT administrators transition to AWS.
Instead, the additions are meant to keep existing customers in AWS and help them further grow their data footprint. Supporting massive amounts of customer data to feed AI development is the current marketing play for storage vendors, Raffo said, and that message could once again change in several months.
"The announcements look incremental and are adding features to those [storage services]," he said.
Selipsky personally introduced Amazon S3 Express One Zone during his opening keynote.
Selipsky and AWS materials said the service will provide "single-digit millisecond data access" for customer applications using object storage but will be limited to operating in a single availability zone. The service mostly benefits the processing of large amounts of smaller objects, according to the vendor, which can lead to reduced runtime costs for other processing services.
AWS said use cases include AI/machine learning training, media processing and high-performance computing. The service starts at $0.016 per GB, compared with the $0.023 or $0.022 per GB for S3 Standard tiering.
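Using the per-GB-month rates quoted above, a back-of-the-envelope monthly storage cost comparison looks like the sketch below; request, retrieval and transfer charges (which often dominate for small-object workloads) are deliberately left out, and real S3 Standard pricing is tiered, so treat this purely as an illustration.

```python
# Rough monthly storage cost at the per-GB-month rates quoted in the article.
RATES_PER_GB_MONTH = {
    "S3 Standard (first tier)": 0.023,
    "S3 Standard (next tier)": 0.022,
    "S3 Express One Zone": 0.016,
}

dataset_gb = 10_000  # e.g. a 10 TB training corpus

for tier, rate in RATES_PER_GB_MONTH.items():
    print(f"{tier:26s} ${dataset_gb * rate:,.2f}/month")
```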
Analyst briefings on the service indicated the Express One Zone uses flash storage, rather than the traditional hard drives commonly used by object storage services, Raffo said.
Amazon S3, a common standard of object storage across cloud services, wasn't used with databases or other applications demanding high performance in the past due to its slower access speeds, said Scott Sinclair, an analyst at TechTarget's Enterprise Strategy Group.
Object storage was considered useful for less frequently accessed data or repository storage. Now, through the advent of cheaper flash memory, enterprises are seeing object storage as a way to maximize available storage and collect massive data sets for AI/ML development, Sinclair said.
Many cloud customers have already built up significant reserves of data ripe for AI training and are unlikely to pay the costs of moving it out, he said.
"Data has gravity," Sinclair said. "Once you get 2 to 4 petabytes in the clouds, it's not going anywhere."
Effective AI pipelines, at least those looking to maintain some speed for a business, are now looking to become all flash, he added.
"People need flash all throughout their data pipelines," Sinclair said. "It isn't small, fast storage or big, slow storage [anymore]. Bottom line: Everything needs to be fast."
Increased speed for cloud storage still isn't the cutting edge of performance AWS may tout it as, said Marc Staimer, president of Dragon Slayer Consulting.
Object storage isn't the best fit for databases or high-performance computing, which make use of block storage, and on-premises or colocated services can provide sub-millisecond access speeds, he said. Faster object storage can benefit generative AI development, for example by expediting retrieval-augmented generation, a technique that grounds model responses in retrieved data.
"This is not a wow," Staimer said. "I'd call it high-performance object storage. If you really want performance, you're not using S3. You'll be using different storage."
About half of the storage product and capabilities debuting at re:Invent were related to file storage services offered by AWS, including Amazon Elastic File System (EFS) and Amazon FSx for OpenZFS.
Amazon EFS now offers replication failback, enabling replication of changes from backup to primary file systems, and an increase in total IOPS for the service. Amazon FSx for OpenZFS now has the ability to send snapshots from one file storage system in a customer's AWS account to another.
File storage customers in AWS have a new option for colder storage with the Amazon EFS Archive tier. The service targets data that customers expect to access about once a quarter and is priced at $0.008 per GB per month, compared with the EFS Standard tier at $0.30 or EFS Infrequent Access at $0.016.
For Elastic Block Storage (EBS) customers, the AWS Backup service adds snapshot management capabilities with Snapshots Archive. EBS snapshots with retention periods of at least 90 days can be set to automatically move into the colder Snapshots Archive as part of the snapshot's lifecycle.
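The automatic lifecycle transition described above is configured through AWS's snapshot lifecycle tooling; as a minimal related sketch, a single existing snapshot can also be moved to (and restored from) the archive tier directly via the EC2 ModifySnapshotTier and RestoreSnapshotTier APIs, shown here with boto3. The snapshot ID and region are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Move one existing EBS snapshot into the archive tier.
# 'snap-0123456789abcdef0' is a placeholder ID.
resp = ec2.modify_snapshot_tier(
    SnapshotId="snap-0123456789abcdef0",
    StorageTier="archive",
)
print(resp["TieringStartTime"])

# Restoring later (temporarily for a few days, or permanently via
# PermanentRestore=True) uses the companion API:
ec2.restore_snapshot_tier(
    SnapshotId="snap-0123456789abcdef0",
    TemporaryRestoreDays=3,
)
```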
The AWS Backup service now offers automatic restore test and validation as well, enabling customers to frequently test their recovery process in case of disaster.
Amazon FSx for NetApp ONTAP, an AWS native file service built on NetApp's technology, also gained new additions, including a scale-out file system capability, multiple availability zones for virtual private clouds, and storage management for FlexGroup volumes with the AWS console.
The new ONTAP scale-out file system capability increases the total storage NetApp's platform can manage within AWS. A scale-out ONTAP system can now handle up to 1 pebibyte of data, a little more space than a traditional petabyte, compared with the 192 tebibytes of a scale-up system.
Other improvements include increased throughput for reads and writes, as well as IOPS increases. The option is limited to a single availability zone, compared with the multiple zones of scale-up systems.
AWS has showcased NetApp as a file system partner in the past few years, as creating and managing file storage systems are challenging, Sinclair said. Considering NetApp's storage legacy, and its business with other public clouds, Sinclair said he believes AWS benefits far more from courting NetApp than vice versa.
"Building a good enterprise file system is incredibly difficult," Sinclair said. "When you're looking at third-party storage options in AWS, having it with an AWS interface is a differentiation."
Tim McCarthy is a journalist living in the North Shore of Massachusetts. He covers cloud and data storage news.
Read more:
Amazon S3 storage picks up speed at re:Invent 2023 - TechTarget
Get to know the different storage options for your irreplaceable digital images – Digital Camera World
As photographers, having a safe place to back up our images is essential, especially if clients are dependent on our material.
Create your own archive
Every photographer works with their own archive system: this could be chronologically ordered or structured according to different themes or projects. As long as it makes sense to you and the material can be accessed quickly, there is no right or wrong system. Remember to import photos regularly to back them up and save time with your organisation.
External hard drives are a solution for photographers who travel a lot and need access to their images with or without an internet connection. In general, keep an eye on these three key features: speed, storage size and safety.
Apart from backing up essential data, organizing files is also important for many professional photographers. Modern camera technology allows us to produce huge amounts of digital files; however, even if they are safely stored, these images are of little use if we do not save them to an archive with a proper filing system. After all, we don't want our valuable work and precious memories to be hidden away in folders where they can't be shared with the world.
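As a concrete example of the chronological filing approach mentioned above, the sketch below copies images from an import folder into an archive organised by year and month. The folder paths are placeholders, it keys off file modification time rather than EXIF capture dates, and it copies rather than moves so the originals remain as a backup.

```python
from pathlib import Path
from datetime import datetime
import shutil

SOURCE = Path("~/Pictures/import").expanduser()     # placeholder paths
ARCHIVE = Path("~/Pictures/archive").expanduser()

IMAGE_TYPES = {".jpg", ".jpeg", ".png", ".cr3", ".nef", ".arw", ".dng"}

def archive_by_date(source: Path, archive: Path) -> None:
    """File images into archive/YYYY/MM based on modification time."""
    for path in source.iterdir():
        if path.suffix.lower() not in IMAGE_TYPES:
            continue
        taken = datetime.fromtimestamp(path.stat().st_mtime)
        dest_dir = archive / f"{taken:%Y}" / f"{taken:%m}"
        dest_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(path, dest_dir / path.name)     # copy, don't move

if __name__ == "__main__":
    archive_by_date(SOURCE, ARCHIVE)
```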
A NAS drive works as a local cloud storage system, customised to your project's needs. NAS systems aren't usually supplied with hard disks inside, so you will need to add your own drives. The advantage here is that it works through a WLAN connection, meaning that everyone who uses this specific network can access files via a password. For example, the WD My Cloud EX2 Ultra NAS drive is an excellent system, which is even equipped with a dual bay for RAID setups.
A RAID drive is one of the safest storage solutions around. It operates with two or more hard disks that work like mirrors, creating an exact copy of the data. Systems are available in different expansion levels, including the TerraMaster D5 Thunderbolt 3, which works with five disks. That means you can make more than one copy of your files, so there isn't the same risk of losing material through technical issues or failure.
A great solution when it comes to saving not only current projects but also your whole photography archive, a desktop drive can often be more affordable than other solutions. It also operates offline, which ensures a high level of security, as no additional third party can access your files. The WD My Book Duo, for example, is an excellent option: it operates offline, has impressive storage of up to 24TB and features two bays for RAID support.
Photographers often lead busy lives, travelling around and requiring access to their images wherever they are. One simple and practical solution is a portable SSD: their size, weight and robust housing mean you can take them with you wherever you go. However, most devices contain only a single drive, so you may need an additional backup device. The LaCie Rugged RAID Pro is an advanced solution that automatically provides a backup but is more expensive than other SSDs.
There are lots of cloud-based storage services competing to offer you a place to back up your files. The advantage here is that you can access files from any device, wherever you are, and you can easily share projects with clients or the rest of the team. Google Drive operates with the same security protocol as other Google services. One downside of this storage system is that the rate at which data uploads depends on your internet speed, so uploading large files can often take a long time.
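To put that upload-speed caveat into rough numbers: upload time is essentially file size divided by uplink bandwidth. The connection speeds below are illustrative, not measurements of any particular service.

```python
# Rough upload-time estimate: size / uplink bandwidth (illustrative speeds).
def upload_hours(size_gb: float, uplink_mbps: float) -> float:
    size_megabits = size_gb * 1000 * 8      # decimal GB -> megabits
    return size_megabits / uplink_mbps / 3600

for uplink in (10, 50, 500):                # Mbps
    print(f"{uplink:>4} Mbps uplink: {upload_hours(250, uplink):5.1f} h for 250 GB")
```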
Original post:
Get to know the different storage options for your irreplaceable digital images - Digital Camera World
8 Best Indoor Security Cameras (2023): For Homes and Apartments – WIRED
This feature-packed camera from Eufy is likely overkill for most folks, but it's an impressively versatile device. It combines a regular camera with a 130-degree field of view that can go up to 4K with a 2K telephoto lens that provides 3x optical zoom. It also has 360-degree pan and 75-degree tilt controls, AI tracking that works well, and support for up to four preset positions it can patrol through, including the default it returns to after tracking a subject out of frame. There's also a privacy mode, two-way audio, and onboard AI to detect people, pets, and sounds. You can record locally with a microSD card up to 128 GB (not included), hook it up to a HomeBase 3 (sold separately), or subscribe for cloud storage from $3 per month.
The footage is crisp and detailed enough to zoom in on, though bright areas like sun streaming in a window can appear blown out. Because the frame rate is 15 fps, fast-moving subjects sometimes appear blurry. The motion detection is reliable, and you can set up privacy zones in the app. Notifications are swift and come with a thumbnail if you don't mind uploading to the cloud (it is optional). I sometimes noticed a slight lag on the live feed, and sound quality could be better. There is no HomeKit support, but you get Alexa and Google Assistant, though the camera was very slow and sometimes failed to load for me via Google.
Note: We stopped recommending the brand after a security researcher identified cloud uploads from a supposedly local-only device, a report warned of unencrypted video streaming, and a bug in May 2021 exposed some camera feeds to other Eufy users. After initial denials, parent company Anker acknowledged and fixed the issues, overhauled its policies, and instituted a bug bounty program. We spoke with third-party security researcher Ralph Echemendia, who was hired to conduct an audit, and have decided to start testing Eufy cameras again.
Read the rest here:
8 Best Indoor Security Cameras (2023): For Homes and Apartments - WIRED
AWS Launches High-Speed Amazon S3 Express One Zone – Datanami
Amazon Web Services (AWS) is a major player in the public cloud space. However, with several emerging competitors, AWS needs to keep evolving and innovating to continue enjoying the largest slice of the cloud pie.
At its annual re:Invent conference in Las Vegas, AWS announced the general availability of Amazon S3 Express One Zone, a new storage class that delivers better performance and speed than the standard edition. Amazon claims Express One Zone offers a 10x performance improvement over the standard S3 service and is capable of handling hundreds of thousands of requests per second with consistent single-digit millisecond latency.
The new Amazon S3 Express One Zone is tailor-made for data-intensive applications, such as AI/ML training, high-performance computing, and financial modeling. With its extremely low latency, the new class works well for objects of any size, including very large numbers of small objects.
According to AWS, Express One Zone offers a 50 percent reduction in data request costs and a 60 percent reduction in compute costs compared to the standard S3 storage class. This means that Spot and On-Demand compute resources are used more efficiently and can be shut down earlier; deprovisioning instances sooner helps lower the associated compute costs and overall processing bill.
Launched in 2006, the Amazon Simple Storage Service (S3) is one of the most popular cloud storage services, with millions of customers around the globe. Over the years, S3 has gone through several upgrades, evolving from humble internet storage for personal use to the backbone of the modern data stack at many data-driven enterprises.
With the new Amazon S3 Express One Zone, you get a new bucket type and a new authentication model optimized for AI/ML use cases and other data-intensive workloads. The new bucket type is specific to this storage class and can support hundreds of thousands of requests per second. It has a hierarchical namespace and stores object key names in a directory-like manner to deliver outstanding efficiency.
Traditionally, Amazon S3 users have been able to choose specific AWS regions to store S3 data. With the new storage class, users can also choose to co-locate One Zone data in the same availability zone as compute resources.
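For readers who want to see roughly what this looks like in practice, here is a hedged boto3 sketch of creating a directory bucket co-located with compute in a chosen availability zone and writing an object to it. It assumes a recent boto3/botocore release with S3 Express One Zone support; the availability zone ID, bucket name and key are placeholders, and the exact configuration keys should be checked against current AWS documentation before relying on them.

```python
import boto3

# Sketch: create an S3 Express One Zone "directory bucket" and write to it.
# Requires a recent boto3/botocore; the AZ ID and bucket name are placeholders,
# and directory bucket names must end with the "--<az-id>--x-s3" suffix.
s3 = boto3.client("s3", region_name="us-east-1")

bucket = "ml-training-shard--use1-az5--x-s3"   # placeholder name and AZ
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={
        "Location": {"Type": "AvailabilityZone", "Name": "use1-az5"},
        "Bucket": {"Type": "Directory", "DataRedundancy": "SingleAvailabilityZone"},
    },
)

# Reads and writes use the normal object APIs; recent SDKs handle the
# session-based authentication for directory buckets transparently.
s3.put_object(Bucket=bucket, Key="shards/batch-000.npz", Body=b"...")
obj = s3.get_object(Bucket=bucket, Key="shards/batch-000.npz")
print(obj["Body"].read())
```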
"Millions of customers rely on Amazon S3 for everything from low-cost archival storage to petabyte-scale data lakes, and they want to expand their use to support their most performance-intensive applications where every millisecond counts," said James Kirschner, general manager of Amazon S3 at AWS.
"Amazon S3 Express One Zone delivers the fastest data access speed for the most latency-sensitive applications and enables customers to make millions of requests per minute for their highly accessed datasets, while also reducing request and compute costs."
In 2021, AWS launched S3 Object Lambda and that same year added S3 Glacier Instant Retrieval to its platform. The launch of Amazon S3 Express One Zone marks the most significant update to Amazon S3 in recent years. It also signals the continued longevity and popularity of Amazon S3. Cloudera, Colorfront, ChaosSearch, and Pinterest are among the customers and partners already using Amazon S3 Express One Zone.
See original here:
AWS Launches High-Speed Amazon S3 Express One Zone - Datanami
Dell places multi-cloud storage navigation in its APEX Blocks and … – Blocks & Files
Dell is providing automated deployment and monitoring of its public cloud block storage with the APEX Navigator for Multicloud Storage, and has file storage support coming.
APEX is a set of services through which Dell supplies its compute, storage and networking gear on a public cloud-like subscription model. Navigator is a software agent, an automation engine, for setting up things like a VMware cluster or multi-cloud storage, which can then be deployed and managed automatically.
Magi Kapoor, director, Product Management at Dell Technologies, said in a blog that B&F saw pre-publication: "This is just the beginning. We're also previewing the integration of Dell APEX Navigator with Dell APEX File Storage for AWS (integration availability expected in the first half of 2024). Dell is committed to expanding our support for more storage offerings, more public cloud providers and more regions."
APEX Navigator for Multicloud Storage is a SaaS tool providing centralized management of Dell storage software across multiple public clouds. It enables ITOps and storage admins to deploy, configure, and manage Dell storage in public clouds, with monitoring and data mobility across on-premises and public clouds.
Dell says deployment is a four-stage process with simple configuration and automated provisioning of underlying public cloud resources, and automated deployment of its storage software.
The multicloud storage Navigator has a zero trust-based security approach with role-based access control (RBAC), single sign-on (SSO) and federated identity, featuring control over roles, permissions, groups, certificates, and keys. It provides APIs so it can integrate with other automation tools such as Terraform. The software uses Dell's CloudIQ to monitor storage health, with a traffic light status display.
Data placement within the on-premises and multi-cloud environment can be adjusted as needs demand.
APEX Navigator for Multicloud Storage is initially available with APEX Block Storage for AWS with a 90-day evaluation. It will be available for quoting on November 30 and will be generally available in December in the US. Dell will show it off at the AWS re:Invent conference starting on November 27 in Las Vegas.
Dell APEX File Storage for AWS, based on Dell's PowerScale scale-out OneFS software, has a December 13 update coming.
The main competition for APEX File Storage for AWS comes from NetApp and Qumulo, with APEX Navigator providing functionality that overlaps with NetApp's BlueXP and the Qumulo One control plane. HPE, which OEMs VAST Data software, has multi-cloud file storage, and HPE GreenLake will no doubt face up to APEX Navigator in the multicloud file storage management area as well.
Read the original here:
Dell places multi-cloud storage navigation in its APEX Blocks and ... - Blocks & Files
An introduction to hybrid multi-cloud storage – TechTarget
Much of the discussion around cloud storage has been on multi-cloud and hybrid cloud strategies, with the assumption that most IT teams choose one or the other. However, many organizations take a hybrid multi-cloud approach in which they use a private cloud in conjunction with multiple public clouds.
Hybrid multi-cloud storage can help organizations maximize their flexibility and store data in the way that best meets their requirements. It's a relatively new concept, so admins should know what it means, how to best use it and what's available from vendors.
Cloud platforms are typically categorized as either public clouds or private clouds. Public clouds are usually what people are referring to when they say "the cloud." Public clouds include platforms such as AWS, Google Cloud Platform (GCP) and Microsoft Azure. The platforms offer a variety of services that users can access over the internet. The services are typically categorized as software as a service (SaaS), platform as a service (PaaS) or infrastructure as a service (IaaS), although other variations exist.
A private cloud is a platform that an organization implements for its own private use. The platform might be deployed on premises or hosted with a third-party provider that maintains the environment for the organization's exclusive use. A private cloud delivers services in much the same way as a public cloud, although it's typically smaller in scope and geared toward the organization's specific requirements. An organization has more control over a private cloud than a public cloud, but that control comes with additional costs and administrative overhead.
Many organizations use a multi-cloud strategy for their storage requirements. At its most basic, multi-cloud means that an organization employs more than one cloud service. Beyond that, the exact meaning of multi-cloud gets a little fuzzy, with some slightly different interpretations.
At its core, a hybrid cloud is one that consists of both public and private clouds. It typically includes a management component for orchestrating the services between platforms, with the goal of creating a unified platform. As with multi-cloud, however, the meaning of hybrid cloud can get a little muddy, with several slightly different meanings.
Although the exact definitions vary, the idea persists that a hybrid cloud includes both public and private clouds. What's not clear, however, is the exact number of each or whether hybrid cloud can be considered a type of multi-cloud.
To help clarify this issue, some vendors and industry pundits have adopted the term hybrid multi-cloud, which describes an environment that includes at least one private cloud and at least two public clouds. As with a basic hybrid cloud, the environment relies on a management component to orchestrate the services between platforms.
Another way to describe a hybrid multi-cloud is that it includes multiple public clouds. Some sources might expand on this definition to include multiple private clouds or traditional on-premises resources, but the basic idea remains the same: It has at least one private cloud component and at least two public cloud components.
With a hybrid multi-cloud, organizations can distribute their data across multiple cloud platforms. This enables them to gain the benefits of each platform without being locked into any one of them. For example, an organization might store archive data on Amazon S3 object storage, maintain document data in Google Firestore to support a web app hosted on GCP and use its on-premises private cloud for highly sensitive data.
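A minimal sketch of the placement decision described in that example: classify each write and dispatch it to the platform that matches its class. The backends here are stand-in functions rather than real AWS, GCP or private-cloud clients.

```python
from typing import Callable

# Stand-in backends; real code would call boto3, the Firestore client,
# or an on-premises storage API instead of these placeholder functions.
def store_s3_archive(key: str, data: bytes) -> None:
    print(f"archive   -> Amazon S3:        {key}")

def store_firestore(key: str, data: bytes) -> None:
    print(f"app data  -> Google Firestore: {key}")

def store_private_cloud(key: str, data: bytes) -> None:
    print(f"sensitive -> private cloud:    {key}")

BACKENDS: dict = {
    "archive": store_s3_archive,
    "app": store_firestore,
    "sensitive": store_private_cloud,
}

def put(key: str, data: bytes, classification: str) -> None:
    """Route a write to the cloud that matches the data's classification."""
    BACKENDS[classification](key, data)

put("2023/invoices.tar", b"...", "archive")
put("orders/1042", b"...", "app")
put("hr/payroll.db", b"...", "sensitive")
```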
Organizations that take a hybrid multi-cloud approach can see several important benefits.
Despite the many advantages that hybrid multi-cloud promises, these environments have challenges. Decision-makers should carefully weigh the tradeoffs and disadvantages.
Hybrid multi-cloud is not a one-size-fits-all strategy, but a number of common situations can benefit from this approach.
Many vendors now offer products of various types that support a hybrid multi-cloud storage strategy.
Read the original post:
An introduction to hybrid multi-cloud storage - TechTarget
Save 50% on Adobe Creative Cloud with this can’t-miss Black Friday … – PCWorld
Adobe dominates the creative software market thanks to the raw power of Photoshop, Premiere, and After Effects. Unsurprisingly, its software subscriptions aren't cheap: its full Creative Cloud plan normally runs $60 per month. But right now, thanks to early Black Friday deals, you can get 50 percent off for a year. And the best part? You don't have to pay for all 12 months upfront to get the sale price.
With previous Creative Cloud deals (like one we saw just last month), you had to shell out for the whole subscription term, but this is a discount on the monthly subscription plan. For one year, you'll pay just $30 per month before the price reverts to the normal rate. However, if you don't mind prepaying for a year upfront, you can get an even bigger discount: Newegg is selling a 12-month subscription for $290 right now. (Students and teachers can get it even cheaper at $130!)
As part of the Creative Cloud plan, you'll have access to almost all of Adobe's software lineup, including the big guns: Photoshop, Premiere, Illustrator, Lightroom, InDesign, After Effects, and even Acrobat Pro. You get access to the web version of Photoshop, too, as well as 100GB of cloud storage, Adobe fonts, and credits for Adobe's Firefly generative image tool.
While excellent free alternatives exist for Photoshop and Premiere, sometimes there's no substituting the real thing, especially if your work demands it. With this deal, you can massively soften the pain of that expense.
Black Friday deal: Get Adobe Creative Cloud for 50 percent off
More here:
Save 50% on Adobe Creative Cloud with this can't-miss Black Friday ... - PCWorld