Category Archives: Cloud Storage

Protecting and Managing Sensitive Customer Data with Skyflow and Cloud Storage Security | Amazon Web Services – AWS Blog

By Ashok Mahajan, Sr. Partner Solutions Architect, Startups AWS
By Ed Casmer, CTO, Cloud Storage Security
By Gokhul Srinivasan, Sr. Partner Solutions Architect, Startups AWS
By Sean Falconer, Head of Marketing, Skyflow

Securing personally identifiable information (PII) while maintaining compliance can be a daunting task for organizations. Despite best intentions, PII often finds itself scattered across various repositories such as databases, data warehouses, log files, and backups. This makes the maintenance of robust security and compliance measures an uphill battle.

File management only adds to the complexity, requiring stringent security measures, strict access controls, and compliance-oriented storage practices. The risk of data loss and malware threats intensifies further when organizations receive files from external sources such as customers. Organizations must scan such external files for viruses and malware before processing them to mitigate potential threats.

To minimize risk and de-scope existing upstream and downstream systems, organizations use Skyflow, which is available in AWS Marketplace. Skyflow Data Privacy Vault delivers security, compliance, and data residency for your Amazon Web Services (AWS) workloads.

Skyflow, an AWS Partner, uses Cloud Storage Security (CSS) to automatically and asynchronously scan uploaded files for malicious code and malware. CSS is an AWS Specialization Partner with the Security Competency, and it helps to further protect your infrastructure and ease the burden of sensitive file management.

In this post, we'll show how to secure PII using Skyflow Data Privacy Vault and add malware protection using Cloud Storage Security on AWS.

Skyflow is a software-as-a-service (SaaS) offering that supports multi-tenant and single-tenant deployment models. Skyflow Data Privacy Vault isolates, protects, and governs access to sensitive customer data, which is transformed by the vault into opaque tokens that serve as references to this data. The non-sensitive tokens can be safely stored in any application storage systems or used in data warehouses.

A Skyflow vault can keep sensitive data in a specific geographic location, and tightly controls access to this data. Other systems only have access to non-sensitive tokenized data.

In the example below, a phone number (555-1212) is collected by a frontend application. This phone number, along with any other PII, is transformed by the vault, which is isolated outside of your company's existing infrastructure.

Any downstream services (such as a database) store only the token representation of the data (e.g. ABC123), and are removed from the scope of compliance. The token representation can preserve formatting as needed and be consistently generated to not break analytics and machine learning (ML) workflows.
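The tokenization behavior described above can be sketched in a few lines. The snippet below is an illustrative stand-in, not Skyflow's actual algorithm: the `tokenize_phone` helper and demo key are assumptions, used only to show how a deterministic, format-preserving token can replace a phone number while keeping analytics joins consistent.

```python
import hmac
import hashlib

SECRET_KEY = b"vault-demo-key"  # illustrative only; a real vault manages keys for you

def tokenize_phone(phone: str) -> str:
    """Deterministically map a phone number to a format-preserving token.

    Illustrative sketch only: a real data privacy vault also stores the
    mapping, encrypts the original value, and enforces access policies.
    """
    digest = hmac.new(SECRET_KEY, phone.encode(), hashlib.sha256).hexdigest()
    digits = "".join(str(int(c, 16) % 10) for c in digest)
    # Preserve the original formatting (dashes, length) so downstream
    # systems that expect a phone-shaped value keep working.
    out, i = [], 0
    for ch in phone:
        if ch.isdigit():
            out.append(digits[i])
            i += 1
        else:
            out.append(ch)
    return "".join(out)

token = tokenize_phone("555-1212")
print(token)                                 # same shape as the input
assert tokenize_phone("555-1212") == token   # consistent, so joins still work
```

Because the mapping is deterministic, the same phone number always yields the same token, which is what lets analytics and ML workflows operate on tokens without seeing the underlying PII.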

Figure 1: Reducing compliance and security scope with a data privacy vault.

A data privacy vault serves as core infrastructure for PII, and Skyflow Data Privacy Vault provides this core infrastructure as a service which includes compute, storage, and network. The core architectural block is simplified to an API call, and Skyflow uses polymorphic encryption which combines multiple forms of encryption to secure PII and make it usable. This allows you to perform operations over fully encrypted data.

You can build any PII-specific workload on a Skyflow vault for data sharing, analytics, and encrypted operations. This way, you could find all records with the same area code, or calculate the average income of your customers, without decrypting the data or exposing yourself, your employees, or your infrastructure to PII.

While a data privacy vault isn't a database, Skyflow Data Privacy Vault was designed to have some similar properties. For example, a Skyflow vault supports a schema that can consist of tables, columns, and rows (see image below).

Figure 2: Vault schema with four tables.

The vault is specially designed to support the full lifecycle of sensitive data, and it understands the structure of PII and its uses. For example, a Skyflow vault understands a social security number as a data type, not simply a string. This means the vault natively supports use cases like showing only the last four digits of a social security number based on the roles and policies you set up, or securely sharing the full social security number with a third-party identity verification vendor.

The vault not only transforms sensitive data into non-sensitive data, but also tightly controls access to sensitive data through a zero-trust model in which no user account or process has access to data unless it's granted by explicit access control policies. These policies are built from the bottom up, granting access to specific columns and rows of PII. This allows you to control who sees what, when, where, for how long, and in what format.
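As a rough illustration of bottom-up, column-level policies like the SSN masking described above, here is a minimal sketch. The `Policy` structure, role names, and redaction modes are invented for the example; they are not Skyflow's actual policy model.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """Hypothetical policy record: which column a role may read, and how."""
    role: str
    column: str
    redaction: str  # "plain" or "masked"

POLICIES = [
    Policy("support_agent", "ssn", "masked"),
    Policy("identity_verifier", "ssn", "plain"),
]

def read_column(role: str, column: str, value: str) -> str:
    for p in POLICIES:
        if p.role == role and p.column == column:
            if p.redaction == "plain":
                return value
            if p.redaction == "masked":
                return "***-**-" + value[-4:]
    # Zero-trust default: no matching policy means no access at all.
    raise PermissionError(f"{role} may not read {column}")

print(read_column("support_agent", "ssn", "123-45-6789"))      # ***-**-6789
print(read_column("identity_verifier", "ssn", "123-45-6789"))  # 123-45-6789
```

The key point is the default-deny branch: access exists only where a policy explicitly grants it, which is what "zero trust" means in this context.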

To store, manage, and retrieve data with Skyflow, you can use the APIs directly or use software development kits (SDKs). Skyflow supports both frontend and backend SDKs; your needs and where you choose to integrate will determine which SDK to use.

To learn more about the Skyflow SDKs and APIs, check out the documentation.

To demonstrate secure file storage and management through Skyflow, let's look at how this solution de-scopes both the frontend and backend application from touching the sensitive documents.

The following architecture diagram illustrates the file upload flow with Skyflow, the AWS services mentioned above, and CSS.

Figure 3: Example of file upload processed through Skyflow and CSS.

To control access to the customer's vault, policies are created in Skyflow to allow programmatic writes into the vault table for client records.

Read and update access must be restricted to the single record owned by the currently logged-in user. Skyflow customers can use an authentication service like Auth0, so the customer application knows who the user is based on the Auth0 token.

The Skyflow vault respects the identity of the user and restricts access based on this identity. To support this requirement, customers use Skyflow's context-aware authorization.

Programmatic access to Skyflow APIs is controlled through a service account created within your Skyflow account. The service account's roles, and the policies attached to those roles, decide the level of access a service account has to a vault. The creation of Skyflow roles, policies, and service accounts is controlled programmatically through Skyflow's management APIs or through Skyflow Studio, Skyflow's web-based vault administration portal (see image below).

Figure 4: Example of creating a policy from Skyflow Studio.

Context-aware authorization lets your backend insert an additional claim for end-user context into the JWT used for authentication. You can use any string that uniquely identifies the end user, such as the token provided by Auth0 after a client successfully logs in.

After the additional claim is added, the vault verifies the request and returns a bearer token with the context identifier. The diagram in Figure 5 below illustrates authentication with contextual information for the Skyflow customer and data retrieval.

Figure 5: Context-aware authorization flow diagram using an Auth0 token for context.

Using the returned bearer token with the context restriction, the frontend customer application can retrieve the PII and files owned by the currently logged-in user, and only that user (Step 6).

Further, the time-to-live (TTL) on the bearer token can be controlled, so the token can be set to live only long enough to retrieve the record for the client.
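To make the context-claim and TTL ideas concrete, here is a minimal, self-contained HS256 JWT builder. The claim name `ctx` and the signing scheme are assumptions for illustration; consult Skyflow's documentation for the actual claim set and key requirements.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_context_jwt(signing_key: bytes, service_account: str,
                     end_user_ctx: str, ttl_seconds: int = 60) -> str:
    """Sketch of a signed JWT carrying an end-user context claim."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {
        "sub": service_account,
        "ctx": end_user_ctx,       # e.g. the Auth0 user identifier
        "iat": now,
        "exp": now + ttl_seconds,  # short TTL limits how long the token lives
    }
    signing_input = (b64url(json.dumps(header).encode()) + "."
                     + b64url(json.dumps(payload).encode()))
    sig = hmac.new(signing_key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

token = make_context_jwt(b"demo-key", "svc-account-1", "auth0|abc123")
print(token.count("."))  # 2: header.payload.signature
```

Setting `exp` close to `iat` realizes the TTL control mentioned above: the token stays valid only long enough to retrieve the client's record.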

When collecting and managing sensitive data like files containing PII, it's best practice to take the entire application infrastructure out of security and compliance scope, including the frontend.

Skyflow Elements provides a secure way to collect and reveal sensitive data including files. It offers several benefits, including complete programmatic isolation from your frontend applications, end-to-end encryption, tokenization, and the ability to customize the look and feel of the data collection form.

When users interact with Skyflow Elements, various components work together to collect and reveal sensitive data. Here's how it works:

After uploading a file, Skyflow automatically scans the file for viruses leveraging the CSS integration within the vault. You can retrieve the status of a scan using the Get Status Scan API.

If the file doesn't contain a virus, a status of SCAN_CLEAN is returned and the file is available for download or in-page retrieval. Otherwise, a status of SCAN_INFECTED is returned and the file is moved into quarantine.
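A client that uploads a file might poll for the scan verdict along these lines. The `get_scan_status` function is a hypothetical stand-in for the real authenticated API call; only the SCAN_CLEAN/SCAN_INFECTED statuses come from the text.

```python
import time

def get_scan_status(file_id: str) -> str:
    """Stand-in for the vault's scan-status API call (hypothetical).

    A real implementation would issue an authenticated HTTP request;
    the statuses below match those described in the text.
    """
    return "SCAN_CLEAN"

def wait_for_scan(file_id: str, poll_seconds: float = 1.0,
                  max_attempts: int = 30) -> str:
    """Poll until the asynchronous scan finishes, then act on the verdict."""
    for _ in range(max_attempts):
        status = get_scan_status(file_id)
        if status == "SCAN_CLEAN":
            return "download"      # safe to retrieve or embed the file
        if status == "SCAN_INFECTED":
            return "quarantined"   # the vault has quarantined the file
        time.sleep(poll_seconds)   # still scanning; try again
    raise TimeoutError(f"scan of {file_id} did not finish")

print(wait_for_scan("file-123"))
```

Polling is appropriate here because the scan is asynchronous: the upload returns immediately while CSS inspects the file in the background.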

To reveal an uploaded file, the file is embedded into the web frontend as an iframe, so the file never touches the customer's servers.

Skyflow enables a business to offload the security, privacy, and compliance responsibilities of sensitive file and PII handling so it can focus resources on its core business.

In this post, we discussed the challenges businesses face with managing sensitive customer data. We reviewed how to secure personally identifiable information (PII) using Skyflow Data Privacy Vault and add malware protection using Cloud Storage Security (CSS) on AWS.

We also showed how Skyflow Data Privacy Vault can securely collect, manage, and use sensitive data. Skyflow integrates with CSS to support automatic virus and malware detection and protection for files.

To learn more, contact Skyflow or try out Skyflow in AWS Marketplace. For additional information regarding Cloud Storage Security, check out CSS in AWS Marketplace.

Read the rest here:
Protecting and Managing Sensitive Customer Data with Skyflow and Cloud Storage Security | Amazon Web Services - AWS Blog

Hitachi Vantara Announces Availability of Virtual Storage Platform One, Providing the Data Foundation for Unified … – PR Newswire

Unbreakable hybrid cloud platform seamlessly integrates structured and unstructured data, redefining data management efficiency and flexibility for enterprises

SANTA CLARA, Calif., April 16, 2024 /PRNewswire/ -- As businesses face unprecedented challenges managing data among the proliferation of generative AI, cloud technologies, and exponential data growth, Hitachi Vantara, the data storage, infrastructure, and hybrid cloud management subsidiary of Hitachi, Ltd. (TSE: 6501), today announced the availability of Hitachi Virtual Storage Platform One. The hybrid cloud platform is poised to transform how organizations manage and leverage their data in today's rapidly evolving technological landscape.

To learn more about Hitachi Vantara Virtual Storage Platform One, visit: https://www.hitachivantara.com/en-us/products/storage-platforms/data-platform

With organizations struggling to scale data and modernize applications across complex, distributed, multi-cloud infrastructure, the need for a comprehensive data management solution across all data types has never been more critical. Results from a recent TDWI Data Management Maturity Assessment (DMMA) found that while 71% of IT experts agreed their organization values data, only 19% said a strong data management strategy was in place. Complicating matters are rising cyberattacks, which leave business leaders increasingly worried about security and resiliency: a recent survey showed 68% of IT leaders are concerned about whether their organization's data infrastructure is resilient enough.

Virtual Storage Platform One products available now include:

Virtual Storage Platform One simplifies infrastructure for mission-critical applications, with a focus on data availability and strong data resiliency and reliability measures, including mitigation of risks such as downtime, productivity losses, and security threats.

"Virtual Storage Platform One is transformational in the storage landscape because it unifies data and provides flexibility regardless of whether your data is in an on-premises, cloud, or software-defined environment," said Octavian Tanase, chief product officer, Hitachi Vantara. "Additionally, the platform is built with resiliency in mind, guaranteeing 100% data availability, modern storage assurance, and effective capacity across all its solutions, providing organizations with simplicity at scale and an unbreakable data foundation for hybrid cloud."

Virtual Storage Platform One tackles the challenges of modern data management by eliminating the constraints of data silos, allowing every piece of information to work cohesively, and providing the flexibility to scale up or down with ease. Additionally, Virtual Storage Platform One SDS Cloud is available in AWS Marketplace, a digital catalog with thousands of software listings from independent software vendors that makes it easy to find, test, buy, and deploy software that runs on Amazon Web Services (AWS). This offers businesses seamless integration and accessibility to leverage Hitachi Vantara's data management solutions within AWS.

A New Era of Data Management

At the heart of Virtual Storage Platform One lies a unified data ecosystem that seamlessly integrates block and file storage, eliminating data silos and fragmented landscapes. Powered by the Hitachi Storage Virtualization Operating System (SVOS), Virtual Storage Platform One ensures every piece of information is collected, integrated, and accessible from any device or location, making it easier for organizations to access and view their data and use it to fuel their business.

"For more than a decade, we've forged a partnership with Hitachi Vantara, finding great satisfaction in their services, expertise, and product durability," said Deniz Armen Aydın, cloud data storage technologies manager, Garanti BBVA. "The introduction of Virtual Storage Platform One promises to revolutionize data management and efficiency within Garanti BBVA, igniting anticipation for the cutting-edge automation and resiliency features it offers."

The hybrid cloud platform sets itself apart from competitors with key differentiators that redefine the data management landscape:

Additional Virtual Storage Platform One products will be available later this year. For more information about Virtual Storage Platform One and its suite of solutions, please visit https://www.hitachivantara.com/VirtualStoragePlatformOne.

Additional Resources

Connect With Hitachi Vantara

About Hitachi Vantara

Hitachi Vantara is transforming the way data fuels innovation. A wholly owned subsidiary of Hitachi Ltd., Hitachi Vantara provides the data foundation the world's leading innovators rely on. Through data storage, infrastructure systems, cloud management and digital expertise, the company helps customers build the foundation for sustainable business growth. To learn more, visit http://www.hitachivantara.com.

About Hitachi, Ltd.

Hitachi drives Social Innovation Business, creating a sustainable society through the use of data and technology. We solve customers' and society's challenges with Lumada solutions leveraging IT, OT (Operational Technology) and products. Hitachi operates under the business structure of "Digital Systems & Services" - supporting our customers' digital transformation; "Green Energy & Mobility" - contributing to a decarbonized society through energy and railway systems, and "Connective Industries" - connecting products through digital technology to provide solutions in various industries. Driven by Digital, Green, and Innovation, we aim for growth through co-creation with our customers. The company's consolidated revenues for fiscal year 2022 (ended March 31, 2023) totaled 10,881.1 billion yen, with 696 consolidated subsidiaries and approximately 320,000 employees worldwide. For more information on Hitachi, please visit the company's website at https://www.hitachi.com.

HITACHI is a trademark or registered trademark of Hitachi, Ltd. All other trademarks, service marks, and company names are properties of their respective owners.

SOURCE Hitachi Vantara

Read the original:
Hitachi Vantara Announces Availability of Virtual Storage Platform One, Providing the Data Foundation for Unified ... - PR Newswire

From the NAB Floor | Amove by Jose Antunes – ProVideo Coalition


Visit link:
From the NAB Floor | Amove by Jose Antunes - ProVideo Coalition

Backblaze introduces Event Notifications for enhanced workflow automation Blocks and Files – Blocks and Files

Backblaze has added Event Notification data change alerts to its cloud storage so that such events can be dealt with faster by triggering automated workflows.

The fast-growing B2 Cloud Storage provides S3-compatible storage for less money than Amazon S3 and with no egress charges. AWS offers Simple Queue Service (SQS), designed for microservices, distributed systems, and serverless applications, enabling customers to connect components together using message queues. An S3 storage bucket can be configured to send notifications for specific events, such as object creation, to SQS; queue-reading services then inform upstream applications to trigger processing.
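The S3-to-SQS wiring described above is configured with a bucket notification document. The sketch below builds one in the shape boto3's `put_bucket_notification_configuration` expects; the queue ARN, bucket name, and `uploads/` prefix are placeholders.

```python
def s3_to_sqs_notification_config(queue_arn: str) -> dict:
    """Build an S3 bucket notification configuration targeting an SQS queue.

    The structure matches what boto3's put_bucket_notification_configuration
    accepts; values here are illustrative placeholders.
    """
    return {
        "QueueConfigurations": [
            {
                "QueueArn": queue_arn,
                "Events": ["s3:ObjectCreated:*"],  # fire on any object creation
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": "uploads/"}
                        ]
                    }
                },
            }
        ]
    }

cfg = s3_to_sqs_notification_config("arn:aws:sqs:us-east-1:123456789012:my-queue")
# Applied (not run here) with:
# boto3.client("s3").put_bucket_notification_configuration(
#     Bucket="my-bucket", NotificationConfiguration=cfg)
print(cfg["QueueConfigurations"][0]["Events"])
```

Once applied, every object created under `uploads/` produces a message on the queue, which downstream consumers read to trigger processing.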

Gleb Budman, Backblaze CEO and chairperson, said: "Companies increasingly want to leverage best-of-breed providers to grow their business, versus being locked into the traditional closed cloud providers. Our new Event Notifications service unlocks the freedom for our customers to build their cloud workflows in whatever way they prefer."

This statement was a direct shot at AWS, as evidenced by an allied customer quote from Oleh Aleynik, senior software engineer and co-founder at CloudSpot, who said: "With Event Notifications, we can eliminate the final AWS component, Simple Queue Service (SQS), from our infrastructure. This completes our transition to a more streamlined and cost-effective tech stack."

Event Notifications can be triggered by data upload, update, or deletion, with alerts sent to users or external cloud services. Backblaze says this supports the expanding use of serverless architecture and specialized microservices across clouds, not just its own.

It can trigger services such as provisioning cloud resources or automating transcoding and compute instances in response to data changes. This can accelerate content delivery and responsiveness to customer demand with automated asset tracking and streamlined media production. It also helps IT security teams monitor and respond to changes, with real-time notifications about changes to important data assets.
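A receiving service might dispatch such notifications to automated workflows along these lines. The payload shape and field names below are assumptions for illustration; Backblaze's actual Event Notifications schema may differ, so check their documentation.

```python
import json

# Hypothetical event payload; the real schema may use different field names.
SAMPLE_EVENT = json.dumps({
    "eventType": "ObjectCreated",
    "bucketName": "media-masters",
    "objectName": "episodes/ep42.mov",
})

# Map event types to the automated workflows mentioned in the text.
HANDLERS = {
    "ObjectCreated": lambda e: f"transcode {e['objectName']}",
    "ObjectDeleted": lambda e: f"alert security team about {e['objectName']}",
}

def handle_event(raw: str) -> str:
    """Dispatch an incoming event notification to the matching workflow."""
    event = json.loads(raw)
    handler = HANDLERS.get(event["eventType"])
    if handler is None:
        return "ignored"  # unknown event types are safely skipped
    return handler(event)

print(handle_event(SAMPLE_EVENT))  # transcode episodes/ep42.mov
```

The dispatch-table pattern keeps each workflow (transcoding, security alerts, asset delivery) independent, which suits the cross-cloud, best-of-breed setups the article describes.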

Event Notifications is now available in private preview with general availability planned later this year. Interested parties can join a private preview waiting list.

Read this article:
Backblaze introduces Event Notifications for enhanced workflow automation Blocks and Files - Blocks and Files

The 10 Coolest Storage Component Vendors: The 2024 Storage 100 – CRN

As part of CRN's 2024 Storage 100, here are 10 storage component vendors taking their offerings to new heights.

The software-defined storage and data recovery/observability/resiliency sections of the Storage 100 emphasized the value that software adds to the storage features and services offered to businesses small and large.

However, the focus on software should not by any means take away from the value of the hardware. No matter how wonderful and valuable storage software is, it still requires a solid base of hardware on which to run.

That hardware may be an industry-standard server or a purpose-built appliance. Or it may be purpose-built hardware stacked in a cloud provider's data center. But it needs components: flash storage, SSDs, hard disk drives, memory. So for all the focus on software, hardware still counts.


Apacer

Chia-Kun Chang

CEO

Apacer develops a wide range of digital storage and sharing products and services, including SSDs and DRAM. Its SSD lineup includes PCIe, SATA and PATA, industrial, eMMC and specialty models, while its DRAM lines include embedded memory, server and workstation memory, specialized memory, and memory with specific characteristics including ruggedized, wide temperature and lead-free.

Kingston

John Tu

Co-Founder, CEO

Kingston, one of the world's leading manufacturers of memory products, was an early pioneer in developing memory modules for computers. The company has since expanded to offer a wide range of memory and memory cards, SSDs, USB flash drives, memory card readers, and embedded and industrial embedded flash and DRAM components.

Kioxia

Hideyuki Namekawa

President, CEO, Kioxia America

Kioxia is a leading global developer and manufacturer of flash memory and SSDs. The company, which was spun out of Toshiba as Toshiba Memory Corp. before getting its current name, produces a wide range of memory and SSDs for both business and personal computing requirements, in addition to enterprise, data center and client storage applications.

Micron Technology

Sanjay Mehrotra

President, CEO

Micron is one of the world's largest producers of computer memory, as well as a major developer of flash storage technologies. Its memory products include DRAM modules and components as well as high-bandwidth memory and CXL modules for data center memory expansion. The company also develops a wide range of data center, client and industrial SSDs.

Pliops

Ido Bukspan

CEO

Pliops develops what it calls extreme data processors. These XDPs combine multiple data and storage technologies including a hardware-based storage engine, in-line transparent compression, RAID 5+ data protection, and built-in application integration into a single device that works with any server or SSD to improve application performance while cutting overall infrastructure cost.

Samsung

Kye Hyun Kyung

President and CEO, Device Solutions Division

Samsung is one of the world's largest producers of semiconductor components and products, including DRAM components and modules and SSDs for PC, data center, enterprise and consumer applications. The company is also a major provider of semiconductor foundry services. In addition, Samsung develops microprocessor, image sensor, display, security and power technologies.

ScaleFlux

Hao Zhong

Co-Founder, CEO

ScaleFlux builds what it calls a "better SSD" by embedding computational storage technology into its flash drives. Its system-on-a-chip technology is behind the company's Computational Storage Engine technology that embeds intelligent storage processing capabilities into NVMe SSDs, which the company says helps reduce data movement, enhance performance and improve efficiency.

Seagate Technology

Dave Mosley

CEO

Seagate manufactures external and internal SSDs and hard drives for cloud, edge, data center and personal storage. The company also develops integrated mass storage for business and personal use, including arrays and expansion devices for managed block storage and hybrid storage applications as well as the Lyve edge-to-cloud storage service.

Solidigm

David Dixon, Kevin Noh

Co-CEOs

Solidigm, founded when Korea-based SK hynix acquired Intel's NAND and SSD business, is a major developer of SSDs for data center and client device use. Its data center SSD line ranges from standard-endurance SATA drives to high-end models with varying performance and capacity levels. Its consumer line includes both PCIe 3.0 and PCIe 4.0 NVMe SSDs.

Western Digital

David Goeckeler

CEO

Western Digital has long been a leader in the development of hard drive, SSD, flash drive and memory card technologies, as well as NAS and other storage products. However, the company is currently in the process of separating into two independent, publicly traded companies by year end, one focused on hard drives and the other on flash storage.

Follow this link:
The 10 Coolest Storage Component Vendors: The 2024 Storage 100 - CRN

Stateful Cloud Services at Neon Navigating Design Decisions and Trade-Offs: Q&A with John Spray – InfoQ.com

At QCon London, John Spray, a storage engineering lead @neon.tech, discussed the often-overlooked complexities of stateful cloud service design, using Neon Serverless Postgres as a case study. His session was part of the Cloud-Native Engineering track on the first day of the conference.

In his talk, Spray discussed the key considerations for data management and storage within modern IT infrastructures. He addressed questions about data localization and replication, the optimal strategies for storing data, and determining the necessary number of copies to maintain data integrity and availability.

Spray also tackled the challenge of ensuring service availability during the initialization of a new node with an empty cache drive and discussed strategies for efficiently scaling services that rely on local disk storage. His analysis further extended to assessing Kubernetes' influence in this domain and the financial ramifications of attaining data durability across multiple availability zones or regions, underpinning the talk's focus on balancing cost, performance, and reliability.

InfoQ interviewed John before his talk at QCon London.

InfoQ: To ensure data durability and service availability, especially for databases like Neon Serverless Postgres, could you discuss the trade-offs in choosing between synchronous and asynchronous data replication methods?

John Spray: Where practical, synchronous replication is usually preferred, as it is easier for the user to reason about by avoiding the "time travel" problem of async systems when switching from the primary to a secondary location. For example, internally within Neon, we use fully synchronous replication between Safekeeper nodes: latency stays within ~1 ms, and the resulting behavior is what the user expects (i.e., nothing changes from their point of view if one of our nodes is lost).

Asynchronous replication allows the primary to proceed regardless of the secondary's responsiveness. This is obviously useful over high latency links, but it is also important when the secondary is subject to high loads, such as read-intensive analytics workloads. Ensuring that a primary can maintain high performance irrespective of the secondary's workload is a valuable tool for building robust architectures.

Neon's Read Replica endpoints add one further level of isolation. Because the read replica can read directly from our disaggregated storage backend, the primary doesn't even have to transmit updates to the replica, so any number of replicas may be run without putting extra load on the primary Postgres instance.

InfoQ: Additionally, how do recovery strategies differ when utilizing local disk storage versus block or object storage, and what factors should influence the choice of one over the others?

Spray: It's not quite clear what kind of recovery is meant; I'll assume we're discussing recoveries from infra failures.

Within Neon, we provide data durability through a combination of an initial 3x cross-AZ replication of users' incoming writes (WAL) to our "Safekeeper" service and later writes to object storage (also replicated across AZs), where this data can later be read via our "Pageserver" service.

This provides a useful example of the contrast between local disk and object storage for recovering from failures:

Using replicas on a local disk is more expensive than object storage for primary storage, but we accept that cost to provide a lower latency for our users' writes. Failure strategies when using object storage are more flexible; for example, we can avoid holding extra warm copies of objects in the local disk cache for a less active database. This enables finer-grained optimization of how much hardware resource we use per tenant, compared with designs that must always maintain 3+ local disk copies of everything (we only keep three local disk copies of the most recent writes).

Using replicated/network block devices such as EBS can simplify some designs. Still, we avoid it because of the poor cost/durability trade-off: EBS volumes are only replicated within a single AZ (users typically expect their databases to be durable against an AZ failure).

InfoQ: Deploying stateful services across multiple availability zones or regions is crucial for high availability, but often has significant cost implications. Could you share insights on how organizations can balance the cost and performance when designing multi-region deployments for stateful services?

Spray: Multi-AZ deployments are a frequent source of "bill shock" for cloud users: replicating a high rate of writes between storage instances in two or more AZs can have a comparable cost to the underlying storage instances.

Therefore, cross-AZ replication traffic should be limited to what is essential: incoming writes with tight latency requirements. Avoid using cross-AZ replication for later data management/housekeeping: Get the data into object storage as soon as possible so you can access it from different AZs without the egress costs.

How can we mitigate this?

Similar issues apply to multi-region deployments, but there is less scope for mitigation: moving data longer distances over fiber optic cable has an intrinsic cost. For industries with a regulatory requirement for cross-region replication for durability, this is simply a cost of doing business. Others should carefully consider whether the business benefit of having a presence in a remote region is sufficient to justify the cost of replicating data inter-region.

InfoQ: Are there specific patterns or Kubernetes features that can help minimize costs while maintaining or enhancing service performance and data durability?

Spray: I'll cover this in some detail in my talk. The short version is that one must be careful, as there are pitfalls when using Kubernetes StatefulSets, and consider how node replacements in managed Kubernetes services will impact the maintenance of your service. Kubernetes is still sometimes the right tool for the job, but using it requires more careful thought for stateful services than for typical stateless use cases.


Read the rest here:
Stateful Cloud Services at Neon Navigating Design Decisions and Trade-Offs: Q&A with John Spray - InfoQ.com

Backblaze Enables Automated Cloud Workflows With Event Notifications – GlobeNewswire

SAN MATEO, Calif., April 15, 2024 (GLOBE NEWSWIRE) -- Backblaze, Inc. (Nasdaq: BLZE), the leading specialized cloud storage platform, announced the launch of Event Notifications, a service that instantly notifies users and external cloud services whenever data changes in Backblaze B2 Cloud Storage. The new release gives businesses the freedom to build automated workloads across the different best-of-breed cloud platforms they use or want to use, saving time and money and improving end user experiences.

With Backblaze's Event Notifications, data changes (like uploads, updates, or deletions) can automatically trigger other actions including transcoding video files, sending reports to IT teams, spooling up data analytics, delivering finished assets to end users, and many others. Prior to this announcement, companies often had to settle for notification features that restricted them to one platform or tied them to legacy intermediary tools like AWS messaging services. This solution from Backblaze enables them to break free from the restrictive and expensive intermediaries and orchestrate activity between their preferred solution providers on their own terms.

"Companies increasingly want to leverage best-of-breed providers to grow their business, versus being locked into the traditional closed cloud providers," said Gleb Budman, Backblaze CEO and Chairperson of the Board. "Our new Event Notifications service unlocks the freedom for our customers to build their cloud workflows in whatever way they prefer."

The new service can send notifications to any designated external endpoint, eliminating unnecessary complexity and overhead.

"With Event Notifications, we can eliminate the final AWS component, Simple Queue Service (SQS), from our infrastructure. This completes our transition to a more streamlined and cost-effective tech stack," said Oleh Aleynik, Senior Software Engineer and Co-Founder at CloudSpot. "It's not just about simplifying operations; it's about achieving full independence from legacy systems and future-proofing our infrastructure."

This new service supports the expanding use of serverless architecture and specialized microservices across clouds, highlighting Backblaze's commitment to helping forward-thinking organizations operate their way, easily and affordably in an open cloud environment. For more information about Event Notifications, visit the Backblaze blog.
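To illustrate the kind of external endpoint such notifications can target, here is a minimal sketch of a webhook receiver. The payload field names used below (`events`, `eventType`, `objectName`) are assumptions for illustration only, not Backblaze's documented schema.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def summarize_events(payload: dict) -> list[str]:
    # Field names ("events", "eventType", "objectName") are illustrative
    # assumptions; consult the Backblaze B2 documentation for the real schema.
    return [
        f"{e.get('eventType', '?')}: {e.get('objectName', '?')}"
        for e in payload.get("events", [])
    ]

class EventHandler(BaseHTTPRequestHandler):
    """Receives POSTed event notifications and acts on them."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        for line in summarize_events(payload):
            print(line)  # swap in transcoding, reporting, analytics, etc.
        self.send_response(200)
        self.end_headers()

# To run: HTTPServer(("", 8080), EventHandler).serve_forever()
```

A queue-free setup like this is what the CloudSpot quote above describes: the bucket posts directly to your own endpoint, with no SQS-style intermediary in between.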

Event Notifications is available in private preview today with general availability planned later this year. Organizations can join the private preview waiting list.

About Backblaze

Backblaze makes it astonishingly easy to store, use, and protect data. The Backblaze Storage Cloud provides a foundation for businesses, developers, IT professionals, and individuals to build applications, host content, manage media, back up and archive data, and more. With over three billion gigabytes of data storage under management, the company currently works with more than 500,000 customers in over 175 countries. Founded in 2007, the company is based in San Mateo, CA. For more information, please go to http://www.backblaze.com.

Press Contact: Jeanette Foster Communications Manager, Backblaze jfoster@backblaze.com

Continue reading here:
Backblaze Enables Automated Cloud Workflows With Event Notifications - GlobeNewswire

Back up your devices with a 5TB 2-year subscription for $120 – Mashable

TL;DR: Through April 21, get two years and 5TB of cloud storage for just $119.97. If you want one central backup for all your devices, try ElephantDrive and get 5TB of backup.

Losing important files is frustrating at best, and devastating at worst. Whether it's an essay for school, a project for work, or the monthly budget you've been working on for hours, all it takes is a spilled cup of coffee or a corrupted save to lose your work.

Even if you have physical data storage, cloud backups add another layer of insurance that's much harder to lose. ElephantDrive is a secure cloud storage solution that you can use to back up and sync files across your devices, and a 5TB 2-year plan is only $119.97 for a limited time.

ElephantDrive gives you a whole lot of room to work with, but this cloud backup is about quality as well as quantity. Tools like the Everywhere Folder let you manage your files across all synced devices. If you like to bring your work home with you, that means you can access your work files without transferring them manually, and it works on Windows, Mac, Linux, iOS, and Android devices.

If all your files meet in one cloud hub, security is a must. That's why all your data on ElephantDrive is protected with AES 256-bit encryption before it leaves your device, but you can still share files by creating links. You can even give your links a password.

This subscription is only available to new users.

Give your important files some backup.

Mashable Deals

Until April 21 at 11:59 p.m. PT, you can get a 2-year 5TB subscription to ElephantDrive for $119.97. No coupon needed.

StackSocial prices subject to change.

Read this article:
Back up your devices with a 5TB 2-year subscription for $120 - Mashable

Google fans rage "I'm so angry" as two popular app benefits are being phased out – the first will vanish… – The US Sun

SOME Google One users have been frustrated with the service as two popular app features are being phased out.

Google One is a subscription service developed by Google that offers expanded cloud storage and other benefits.


Now, two well-liked features are being retired: free shipping on photo prints and the Google One VPN.

Google One users are expressing their frustrations on social media after the company's announcement.

"Received this message from Google One about changes to my subscription (located in US)," one thread reads.

"If they're 'expanding access' of some features to all users AND phasing out other benefits all that's left is cloud storage and a 3% Google store discount which is a joke," they added.

The user reasoned that Google needs to adjust its subscriptions if all they're going to offer is "bare bones features."

"Between this and Google Podcast app being shut down, I'm officially over it," they said.

Since going live three days ago, the thread has amassed dozens of upvotes and responses.

"I'm so angry at them for removing the VPN," one person remarked.

"Instead of improving it or implementing useful stuff they are giving us photo editing tools no one cares about and making things free for all Google Photos users?" they continued.

"Truly someone explain to me what is the point of a Google One subscription other than storage, I couldn't care less about their photo/video editing AI tools," they concluded.

"Man, I JUST found out about the VPN and it's been great to have. I don't need anything crazy," a second person said.

"Just want to be able to listen to Soundcloud while at work and it was nice to have this easily installed on all my devices," they added.

"We are going to strip away features and not drop the price or increase your storage space," another user echoed.

"I'll be stopping my subscription when it runs out. Already looking for more private Google Photo alternatives," they added.


However, not all users are upset about the changes, saying that the platform is mostly for storage.

"Google One was always mostly about the storage you need for Photos and Drive, which are still reasonable," one person said.

"All other perks are there just to sweeten the deal," they added.

The free photo shipping benefit is set to vanish on May 15 for users in Canada, the UK, the US, and the EU.

It's a major blow to Google One users who relied on the perk to get their memories delivered to their doorsteps for free.

Google One's built-in VPN is also being phased out later this year.

VPN stands for Virtual Private Network, and it's a tool you can use to browse the internet safely.

A VPN extends a private network across a public network, allowing users to share and receive data without the prying eyes of nefarious third parties.

Google One starts at $1.99 a month, and increases based on a user's storage needs.

Here is the original post:
Google fans rage "I'm so angry" as two popular app benefits are being phased out – the first will vanish... - The US Sun

Google Photos is making it easier to free up space for your pictures and videos – here’s how – Tom’s Guide

It's really easy to fill up your phone's storage with photos and videos. In a time when microSD card support is rarer and rarer, it means the cloud can be a lifesaver. But what's to stop you from eating through your storage allowances in the exact same way? That's where Google Photos' storage saver feature comes in.

The goal of storage saver is to free up space in your Google cloud storage by reducing the quality of backed-up photos, lowering the amount of storage they need in the process. So far this feature has only been available on desktop, but it looks like it'll be making the jump to Android in the near future.

Android Authority spotted this during an APK teardown of the latest version of the Google Photos Android app. The code references Google's storage saver feature, with dialog text mentioning users being able to choose the quality of photos that are backed up to the cloud.

However, reducing the quality is a permanent change, so while you will save storage space, it means your photos won't look as detailed as they were when you took them. This is exactly how storage saver works on the web right now. Presumably that means photos will be reduced to 16MP, and videos downgraded to 1080p.

It's also worth noting that this change covers everything backed up to your Google Photos account. So there's no picking and choosing which files get downgraded while keeping some at their original quality. While it would be very nice to do that, it's not something Google has offered at the time of writing. Google also limits compression to once per day, which should be fine for most people.
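To make the arithmetic concrete, here is a small sketch of how a 16MP cap could be applied while preserving aspect ratio. Google has not published its exact resampling rule, so this is an illustration of the reported cap only.

```python
import math

MAX_PIXELS = 16_000_000  # the widely reported 16MP cap for storage saver photos

def capped_dimensions(width: int, height: int) -> tuple[int, int]:
    """Scale dimensions down so the pixel count fits under MAX_PIXELS,
    preserving aspect ratio. Images already under the cap are untouched."""
    pixels = width * height
    if pixels <= MAX_PIXELS:
        return width, height
    scale = math.sqrt(MAX_PIXELS / pixels)
    return int(width * scale), int(height * scale)

# A 12MP photo (4000x3000) stays as-is; a 48MP photo (8000x6000) shrinks
# to roughly 4618x3464, just under 16MP.
```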


Judging from the code, this change just means Android users will be able to tell Google Photos to compress photos and videos on the cloud from their phones, rather than having to log into Google Photos in a web browser. That could prove useful if you don't mind losing some quality to save storage space. After all, Google's 15GB free allowance isn't a lot.

Of course, if you need to keep all your photos and videos in their original form, then you'll want to pay up for the right amount of storage. Google offers up to 5TB of storage as part of Google One, but you may prefer to use one of the other best cloud storage services instead.


Or, alternatively, if you'd rather not be locked in with a subscription, or upload to the cloud, the best external hard drives give you a way to keep everything backed up locally. Just make sure to back everything up regularly, since it can't be done automatically.

The rest is here:
Google Photos is making it easier to free up space for your pictures and videos here's how - Tom's Guide