Category Archives: Cloud Storage

Google Cloud’s AI Hypercomputer cloud infrastructure gets new GPUs, TPUs, optimized storage and more – SiliconANGLE News

Google Cloud is revamping its AI Hypercomputer architecture with significant enhancements across the board to support rising demand for generative artificial intelligence applications that are becoming increasingly pervasive in enterprise workloads.

At Google Cloud Next 24 today, the company announced updates to almost every layer of the AI Hypercomputer cloud architecture, with new virtual machines powered by Nvidia Corp.'s most advanced graphics processing units among the most significant revelations. In addition, it unveiled enhancements to its storage infrastructure for AI workloads, plus the underlying software for running AI models, and more flexible consumption options with its Dynamic Workload Scheduler service.

The updates were announced by Mark Lohmeyer, vice president and general manager of Compute and ML Infrastructure at Google Cloud. He explained that generative AI has gone from almost nowhere just a couple of years ago to becoming widespread across a wide range of enterprise applications encompassing text, code, videos, images, voice, music and more, placing incredible strains on the underlying compute, networking and storage infrastructure that supports it.

To support the increasingly powerful generative AI models being adopted across the enterprise today, Google Cloud has announced the general availability of what it says is its most powerful and scalable tensor processing unit to date. It's called the TPU v5p, and it has been designed with a single purpose in mind: to train and run the most demanding generative AI models.

TPU v5p is built to deliver enormous computing power, with a single pod containing 8,960 chips running in unison, more than twice as many as in a TPU v4 pod. According to Lohmeyer, the TPU v5p delivers some impressive performance gains, with twice as many floating point operations per second and three times more high-bandwidth memory on a per-chip basis, resulting in vastly improved overall throughput.
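A quick back-of-the-envelope check of those pod-level numbers; the TPU v4 pod size of 4,096 chips is background knowledge, not a figure from this article:

```python
# Rough arithmetic behind the quoted gains; the v4 pod size is an assumption
# drawn from public TPU v4 documentation, not from this article.
v4_chips, v5p_chips = 4096, 8960
flops_gain_per_chip = 2.0   # "twice as many floating point operations per second"
hbm_gain_per_chip = 3.0     # "three times more high-bandwidth memory"

pod_scale = v5p_chips / v4_chips                 # ~2.19x more chips per pod
pod_flops_gain = pod_scale * flops_gain_per_chip # ~4.4x raw pod-level FLOPS

print(f"~{pod_scale:.2f}x chips per pod, ~{pod_flops_gain:.1f}x raw pod-level FLOPS")
```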

To enable customers to train and serve AI models running on large-scale TPU clusters, Google is adding support for the TPU v5p virtual machines on Google Kubernetes Engine, its cloud-hosted service for running software containers.

As an alternative, customers can also use the latest hardware from Nvidia to train their generative AI models on Google Cloud. Besides its TPU family, it's also providing access to Nvidia's H100 GPUs through its new A3 family of VMs. The A3 Mega VM will become generally available next month, and one of its main advantages will be support for confidential computing, which refers to techniques that can protect the most sensitive data from unauthorized access even while it's being processed. This is a key development, Lohmeyer said, as it will provide a way for generative AI models to access data that was previously deemed too risky for them to process.

"Character.AI is using Google Cloud's Tensor Processing Units and A3 VMs running on Nvidia's H100 Tensor Core GPUs to train and infer LLMs faster and more efficiently," said Character Technologies Inc. Chief Executive Noam Shazeer. "The optionality of GPUs and TPUs running on the powerful AI-first infrastructure makes Google Cloud our obvious choice as we scale to deliver new features and capabilities to millions of users."

More exciting, perhaps, is what Google Cloud has in store for later in the year. Though it hasn't said when, the company confirmed that it's planning to bring Nvidia's recently announced but not yet released Blackwell GPUs to its AI Hypercomputer architecture. Lohmeyer said the Blackwell GPUs will be made available in two configurations, with VMs powered by both the HGX B200 and GB200 NVL72 GPUs. The former is designed for the most demanding AI workloads, while the latter is expected to support a new era of real-time large language model inference and massive-scale training for trillion-parameter-scale models.

More powerful compute is just one part of the infrastructure equation when it comes to supporting advanced generative AI workloads. Enterprises also need access to more capable storage systems that keep their data as close as possible to the compute instances that power them. The idea is that this reduces latency to train models faster, and with today's updates, Google Cloud claims its storage systems are now among the best in the business, with improvements that maximize GPU and TPU utilization, resulting in superior energy efficiency and cost optimization.

Today's updates include the general availability of Cloud Storage FUSE, a file-based interface for Google Cloud Storage that gives AI and machine learning applications file-based access to its cloud storage resources. According to Google Cloud, GCS FUSE delivers an increase in training throughput of 2.9 times compared with its existing storage systems, with model serving performance showing a 2.2-times improvement.
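Because Cloud Storage FUSE exposes a bucket as an ordinary directory, existing file-based training code can read objects without any SDK changes. A minimal sketch of that pattern, assuming a bucket has already been mounted with the gcsfuse tool; the bucket name, mount point and file suffix here are illustrative, not from the article:

```python
# Assumes the bucket was mounted beforehand, for example:
#   gcsfuse my-training-data /mnt/gcs
# Bucket name, mount point and file suffix are illustrative only.
import os

MOUNT_POINT = "/mnt/gcs"

# Ordinary file I/O is routed through Cloud Storage FUSE to the bucket.
sample_paths = sorted(
    os.path.join(MOUNT_POINT, name)
    for name in os.listdir(MOUNT_POINT)
    if name.endswith(".tfrecord")
)

for path in sample_paths[:3]:
    with open(path, "rb") as f:
        chunk = f.read(64)   # reads as if from local disk
    print(path, len(chunk), "bytes")
```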

Other enhancements include support for caching, now in preview, within Parallelstore, a high-performance parallel file system that's optimized for AI and high-performance computing workloads. With its caching capabilities, Parallelstore enables up to 3.9 times faster training and 3.7 times greater training throughput compared with traditional data loaders.

The company also announced AI-focused optimizations to the Filestore service, which is a network file system that enables entire clusters of GPUs and TPUs to simultaneously access the same data.

Lastly, there's the new Hyperdisk ML service, which delivers block storage and is available now in preview. With this, Google Cloud claims it can accelerate model load times by up to 12 times compared with alternative services.

A third part of the generative AI equation is the open-source software that's used to support many of these models, and Google Cloud hasn't ignored this either. It's offering a range of updates across its software stack that it says will help simplify developer experiences and improve performance and cost efficiencies.

The software updates include the debut of MaxDiffusion, a new high-performance and scalable reference implementation for diffusion models that generate images. In addition, the company announced a range of new open models available now in MaxText, such as Gemma, GPT3, Llama 2 and Mistral.

The MaxDiffusion and MaxText models are built on a high-performance numerical computing framework called JAX, which is integrated with the OpenXLA compiler to optimize numerical functions and improve model performance. The idea is that these components ensure the most effective implementation of these models, so developers can focus on the math.
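For readers unfamiliar with that stack, the basic pattern is that a model is written as plain Python/NumPy-style code and jax.jit hands it to the XLA compiler, which fuses and optimizes it for the available TPU or GPU. A minimal, illustrative example; this is not code from MaxText or MaxDiffusion:

```python
import jax
import jax.numpy as jnp

# jit compiles this function through XLA for the available accelerator
# (TPU, GPU or CPU) the first time it is called with these shapes.
@jax.jit
def attention_scores(q, k):
    return jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]), axis=-1)

key = jax.random.PRNGKey(0)
q = jax.random.normal(key, (8, 64))
k = jax.random.normal(key, (8, 64))
print(attention_scores(q, k).shape)  # (8, 8)
```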

In addition, Google announced support for the latest version of the popular PyTorch AI framework, PyTorch/XLA 2.3, which will debut later this month.

Lastly, the company unveiled a new LLM inference engine called JetStream. It's an open-source offering that's throughput- and memory-optimized for AI accelerators such as Google Cloud's TPUs. According to Lohmeyer, it will provide three times higher performance per dollar on Gemma 7B and other open AI models.

"As customers bring their AI workloads to production, there's an increasing demand for a cost-efficient inference stack that delivers high performance," he explained. "JetStream helps with this need and offers support for models trained with both JAX and PyTorch/XLA, and includes optimizations for popular open models such as Llama 2 and Gemma."

The final ingredient for running generative AI on Google's cloud stack is the Dynamic Workload Scheduler, which delivers resource management and job scheduling capabilities to developers. The main idea is that it improves access to AI computing capacity while providing tools to optimize spending on these resources.

With today's update, Dynamic Workload Scheduler now provides two start modes: flex start mode, for enhanced obtainability with optimized economics, and calendar mode, for more predictable job start times and durations. Both modes are now available in preview.

According to Lohmeyer, flex start jobs will be queued to run as soon as possible, based on resource availability. This will make it easier for developers to access the TPU and GPU resources they need for workloads with more flexible start times. As for calendar mode, this provides short-term reserved access to AI compute resources including TPUs and GPUs. Users will be able to reserve co-located GPUs for a period of up to 14 days, up to eight weeks in advance. Reservations will be confirmed, and the capacity will become available on the requested start date.

"Dynamic Workload Scheduler improved on-demand GPU obtainability by 80%, accelerating experiment iteration for our researchers," said Alex Hays, a software engineer at Two Sigma Inc. "Leveraging the built-in Kueue and GKE integration, we were able to take advantage of new GPU capacity in Dynamic Workload Scheduler quickly and save months of development work."


Originally posted here:
Google Cloud's AI Hypercomputer cloud infrastructure gets new GPUs, TPUs, optimized storage and more - SiliconANGLE News

Why won’t Google increase its free 15GB cloud storage? – Pocket-lint


It seems like everyone and their dog has a Google account nowadays. It's the most popular email service around, with over a billion daily users, but its usefulness doesn't end there. It's used as a hub for all the Google services, allows easy syncing of Google Chrome between devices, and enables hundreds of other quality-of-life features.

One of the handiest perks of having a Google account is 15GB of free cloud storage available on Google Drive. Sure, that storage is shared between your Gmail, Google Drive, and Google Photos, but it's still useful for keeping your backup, email attachments, and a few documents around and ready to share online.

The 15GB limit between all the Google services was introduced back in 2013, and the bar has not been raised since. On the contrary, over the years, the company opted to remove some of the advantages that its cloud storage offered, such as unlimited photo backup for Google Pixel users, essentially making it a worse deal than it used to be all these years ago.

That raises a question: why doesn't the free storage tier change? Over the years, prices of storage have gone down significantly, so Google should -- at least theoretically -- be able to offer much more storage to Gmail users. Unfortunately, it's not as simple as that, and there are a few good reasons why the company is sticking to its 15GB limit.

Let's talk about the expenses first. It's true that storage prices have fallen significantly in the last few years, with both hard drives and SSDs coming down in price per gigabyte. However, that doesn't account for the growth of Google itself or the rising prices of electricity and server space, all of which contribute to the significantly increasing cost of maintaining the cloud storage that the company offers.

In the blog post from 2021 in which Google announced the end of unlimited photo storage, the company mentioned that users add more than 4.3 million GB to Google servers every day. This number increases significantly every year even without making the free storage tier bigger, so the operating costs for Google are tremendous. The biggest and most obvious reason the company doesn't make its free storage tier bigger, then, is cost.

Plus, 15GB is still one of the bigger allowances around, so Google doesn't see the need to compete in this space anymore, and doing the bare minimum is usually preferable for giant companies looking to minimize their costs.

Speaking of doing the bare minimum: Most users really do not need more than 15GB of free storage.

For tech enthusiasts, 15GB of storage might feel like a pittance, but for a casual user who's only backing up some photos from their Android phone and getting a few emails a day, 15GB is really much more than enough. That's especially true if you only use your Google account for Gmail. Seeing as the maximum attachment size is 25MB, you could store 600 emails with the biggest attachment possible before running out of space.
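The 600-email figure is simple arithmetic, reproduced here as a sanity check (using decimal gigabytes, which is how the article counts):

```python
# 15 GB quota divided by Gmail's 25 MB attachment cap.
quota_mb = 15 * 1000
attachment_mb = 25
print(quota_mb // attachment_mb)  # 600
```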

That's quite an unrealistic scenario, though, so let's look at something more day-to-day.

I got my personal Gmail account around 2010, and since then I have probably deleted no more than 50 emails. I use this account for almost everything, with tens of emails every day that end up simply rotting in the inbox -- a terrible habit, I know, but who has the time to take care of their inbox? What's the result? Over these years, with more than 10,000 unread emails and probably even more read ones, my Gmail has grown to 1.74GB. I could be as disorganized as I want for the rest of my life, and my Gmail account still wouldn't touch the free 15GB limit.

Of course, that's different if you want to use Google Photos as your backup or Google Drive to share and store some files, but for the most basic uses, 15GB of free cloud storage really is enough for most people.

Ultimately, though, the reason Google doesn't want to give you more free cloud storage is really simple: it wants to make money selling you this service. Especially now that cloud storage is getting even more popular and widespread, it's difficult to imagine Google taking a step back and offering more free storage, considering the push toward Google One.

Of course, it's not all bad in the paid cloud storage world. I know because I've been using Google One for a while now. The cheapest tier is quite affordable at $1.99 per month and gets you not only 100GB of cloud storage across Google services, but some additional goodies as well. We're talking about the ability to share your storage space with up to five people, as well as more editing tools in Google Photos.

However, the real fun starts when you choose the highest-priced Google One plan, called AI Premium. Not only does it include 2TB of cloud storage, but more importantly, it also lets you use Google Gemini Advanced. It's an improved Gemini AI model that works as a standalone chatbot and is also available in Google Docs, Gmail, and other Google services if you buy this highest tier of the Google One subscription.

So, ultimately, you shouldn't expect Google to offer more free cloud storage any time soon, as doing so would significantly harm the company's business and discourage users from buying the services that Google wants to push.

You really shouldn't worry that much about the lack of free cloud storage. Ultimately, relying on Google's (or anyone else's, for that matter) cloud solution is neither particularly safe nor the best practice if you value the safety of your data. Instead, if you feel like 15GB is not enough for you, look into getting your own network-attached storage, or maybe even setting up your own cloud storage solution. It would let you create a cloud storage service that's not only much more spacious than the ones offered by Google or other companies, but also, ultimately, much more affordable in the long run.

Read more from the original source:
Why won't Google increase its free 15GB cloud storage? - Pocket-lint

Google Cloud NEXT 2024: The hottest news, in brief – The Stack

Google Cloud's first Arm-based CPU for the data centre, a host of new compute and storage services that dramatically improve generative AI performance, a security-centric Chrome offering, and a flurry of enterprise-focused Workspace updates that take the fight to Microsoft 365.

Also, AI in everything, including Gemini and Vertex AI in the BigQuery data warehouse (with fine-tuning) in public preview, for "seamless preparation and analysis of multimodal data such as documents, audio and video files." (NB: vector search came to BigQuery in preview in February.)

Those were among the updates set to get serious airtime at Google Cloud NEXT in Las Vegas this week. The Stack will share more considered analysis of the news in the coming days, along with interviews with executives and customers, but here's an early sample from a blockbuster set of press releases, GitHub repositories and blogs...

"Unlike traditional email and productivity solutions, Gmail and Workspace were built from the very beginning on a cloud-native architecture, rooted in zero-trust principles, and augmented with AI-powered threat defenses."

So said Google in the wake of the CSRB's blistering indictment of Microsoft's security, which noted pointedly that Redmond had designed its consumer MSA identity infrastructure more than 20 years ago.

Workspace, Google's suite of collaboration and productivity applications, has approximately 10 million paying users. That makes it a minnow compared to the 300 million-plus paid seats Office 365 boasted back in 2022.

It could be more of a threat to Microsoft.

A series of new features unveiled today may make it one. They include a new $10/user AI Security add-on that will let Workspace admins automatically classify and protect sensitive files and data using privacy-preserving AI models and Data Loss Prevention (DLP) controls trained for their organization. A Google spokesperson told The Stack that Google is "extending DLP controls and classification labels to Gmail in beta."

Pressed for detail, they told us that these will include:

Also coming soon: Experimental support for post-quantum cryptography (PQC) in client-side encryption [with partners] Thales and Fortanix

A new generative AI service called Google Vids, baked into Google Workspace, may get more headlines. That's a video, writing, production, and editing assistant that will work in-browser and sit alongside Docs, Sheets, and Slides from June. It is less a serious competitor for Premiere Pro and more a templating assistant that pieces together your first draft with suggested scenes from stock videos, images, and background music. (The Stack has clarified that users can also upload their own video, not just use stock...)

Other Workspace updates today:

Chat: Increased member capacity of up to 500,000 in Spaces for those bigger enterprise customers. Also new: GA messaging interoperability with Slack and Teams through Google-funded Mio, and various AI integrations and enhancements across Docs, Sheets etc.

NVIDIA CEO Jensen Huang anticipates over $1 trillion in data center spending over the next four years as infrastructure is heavily upgraded for more generative AI-centric workloads. This isn't just a case of plumbing in more GPUs; Google Cloud is showcasing some real innovations here.

It boasted of "significant enhancements at every layer of our AI Hypercomputer architecture [including] performance-optimized hardware, open software and frameworks."

Top of the list and hot off the press:

Various other promises of faster, cheaper compute also abound. But it's storage and caching where GCP's R&D work really shines. (Important for generative AI, where storage is a huge bottleneck for most models.)

A standout is the preview release of Hyperdisk ML, a block storage service optimised for AI inference/serving workloads that Google Cloud says accelerates model load times by up to 12X compared to common alternatives, with read-only, multi-attach, and thin provisioning.

Hyperdisk ML lets users spin up 2,500 instances that access the same volume and delivers up to 1.2 TiB/s of aggregate throughput per volume: over 100X greater performance than Microsoft Azure Ultra SSD and Amazon EBS io2 Block Express. In short, its volumes are heavily optimised, managed network storage devices located independently from VMs, so users can detach or move Hyperdisk volumes to retain data even after deleting VMs.

"Hyperdisk performance is decoupled from size, so you can dynamically update the performance, resize your existing Hyperdisk volumes or add more Hyperdisk volumes to a VM to meet your performance and storage space requirements," Google boasts, although there are some limitations...

Other storage/caching updates:

Chrome Enterprise Premium is a turbocharged version of Chrome Enterprise with new...

Yes, we agree, this sounds rather good too.

More details and pricing in a standalone piece soon.

Follow this link:
Google Cloud NEXT 2024: The hottest news, in brief - The Stack

Neo4j Partners with Google Cloud to Launch New GraphRAG Capabilities for GenAI Applications – Datanami


Read the original post:
Neo4j Partners with Google Cloud to Launch New GraphRAG Capabilities for GenAI Applications - Datanami

Google Photos on Android seems primed to pick up a ‘recover storage’ option – Android Central

A new option hidden within the code for the Google Photos app teases a familiar space-saving function.

According to PiunikaWeb, courtesy of AssembleDebug, the latest version of Photos (6.78) contains information regarding an upcoming "Recover Storage" option. The feature was discovered within the "Account Storage" section, under "Manage Storage." Upon tapping, the Android app showed an addition to the page that would let users "convert photos to Storage saver."

Google's description says the saver will "recover some storage" by reducing the quality of your previously cloud-saved items to save space. This applies to all of the photos and videos a user has saved via the cloud.

A subsequent page states Photos will not touch the original quality of items stored in Gmail, Drive, or YouTube. Additionally, other items on a user's Pixel device may not be roped into this either.

The publication states Google's continued development of Recover Storage has brought in more information about photo/video compression. The company will seemingly warn users in-app that compressing their older items to a reduced quality "can't be reversed."

Users should also be prepared to wait a while as the app does its thing, which could take a few days.


If this feature sounds familiar, it's because the web-based version of Photos already offers this space-saving option. The good thing is that compressing your older media won't affect your future uploads, as stated on its support page. So, if you're running out of space (again), you can always try to compress your files again.


There's speculation that Google could roll out its Recover Storage option to Android users soon, as its functionality seems nearly done. Moreover, it seems it will arrive for iOS devices in conjunction with Android.

Yesterday (Apr. 10), the company announced that a few powerful AI editing tools will soon arrive in Photos for free. Beginning May 15, all users can utilize Magic Eraser, Photo Unblur, Portrait Light, and a few more without a subscription. Eligible devices include those running Android 8 and above, Chromebook Plus devices, and iOS 15 and above.


Go here to see the original:
Google Photos on Android seems primed to pick up a 'recover storage' option - Android Central

HYCU Wins Google Cloud Technology Partner of the Year Award for Backup and Disaster Recovery – GlobeNewswire

Boston, Massachusetts, April 09, 2024 (GLOBE NEWSWIRE) -- HYCU, Inc., a leader in data protection as a service and one of the fastest growing companies in the industry, today announced that it has received the 2024 Google Cloud Technology Partner of the Year award for Backup and DR. HYCU is being recognized for its achievements in the Google Cloud ecosystem, helping joint customers do more with less by leveraging HYCU's R-Cloud platform, which runs natively with Google Cloud to provide core data protection services, including enterprise-class automated backup and granular recovery across Google Cloud and other IaaS, DBaaS, PaaS, and SaaS services.

"Google Cloud's Partner Awards celebrate the transformative impact and value that partners have delivered for customers," said Kevin Ichhpurani, Corporate Vice President, Global Ecosystem and Channels at Google Cloud. "We're proud to announce HYCU as a 2024 Google Partner Award winner and recognize their achievements enabling customer success from the past year."

HYCU currently provides backup and recovery for the broadest number of IaaS, DBaaS, PaaS, and SaaS services for Google Cloud. This support includes Google Workspace, BigQuery, CloudSQL, AlloyDB, Cloud Functions, Cloud Run, and App Engine, with enhanced capabilities for GKE, in addition to Google Cloud services including Google Compute Engine, Google Cloud Storage, Google Cloud VMware Engine, and SAP on Google. With the HYCU R-Cloud platform, HYCU can now help customers protect more Google Cloud services than any other provider in the industry. HYCU recently announced it has passed the 70-SaaS-integration milestone.

"In a year when the threat landscape evolved to put companies at an even higher risk of data loss due to cyber threats, HYCU built an industry-leading solution on Google Cloud to help customers extend purpose-built data protection to more of the Google Cloud services and SaaS applications that their businesses rely on," said Simon Taylor, Founder and CEO, HYCU, Inc. "HYCU's innovation has also helped drive more growth for Google through double-digit Google Marketplace GTV YoY. And more HYCU customers recognized the value of HYCU R-Cloud, leveraging its full power for data protection across Google Cloud, on-prem, and SaaS, with all data backups stored securely using Google Cloud Storage. All of us at HYCU are both excited and proud to be named a Partner of the Year. It is yet another milestone as we look to solve the world's modern data protection challenges."

Since the HYCU R-Cloud platform was released on Google Cloud, customers have been able to benefit from R-Graph, the first visualization tool designed to help map a company's entire data estate, including on-premises, Google Cloud and SaaS data. As the industry's first cloud-native platform for data protection, HYCU R-Cloud enables the build and release of enterprise-grade data protection for new data sources quickly and efficiently. This has enabled HYCU to extend data protection to dozens of new Google Cloud services and SaaS applications in the past twelve months, and to leverage Google Cloud Storage to securely store backups.

For more information on HYCU R-Cloud, visit: https://www.hycu.com/r-cloud, follow us on X (formerly Twitter), connect with us on LinkedIn, Facebook, Instagram, and YouTube.

HYCU is showcasing its solution during Google Cloud Next from April 9th through the 11th in Las Vegas at booth #552. Attendees can learn more about HYCU's modern data protection approach firsthand.

# # #

About HYCU

HYCU is the fastest-growing leader in the multi-cloud and SaaS data protection as a service industry. By bringing true SaaS-based data backup and recovery to on-premises, cloud-native and SaaS environments, the company provides unparalleled data protection, migration, disaster recovery, and ransomware protection to thousands of companies worldwide. As an award-winning and recognized visionary in the industry, HYCU solutions eliminate complexity, risk, and the high cost of legacy-based solutions, providing data protection simplicity to make the world safer. With an industry-leading NPS score of 91, customers experience frictionless, cost-effective data protection, anywhere, everywhere. HYCU has raised $140M in VC funding to date and is based in Boston, Mass. Learn more at http://www.hycu.com.

The rest is here:
HYCU Wins Google Cloud Technology Partner of the Year Award for Backup and Disaster Recovery - GlobeNewswire

Sovereign cloud services pick up steam as Rackspace unveils new public sector platform – ITPro

Rackspace Technology has become the latest firm to tap sovereign cloud services as a product offering, in this case specifically aimed at supporting workloads in the UK public sector and other regulated services.

Aligning itself with the National Cyber Security Centre (NCSC) and other UK regulatory bodies, Rackspace's UK Sovereign Services platform will provide dedicated compute and storage Pods for various sectors.

With segregation between each pod, Rackspace said the platform will provide a cost-effective hosting solution designed to deliver high levels of data and workload security for customers.

As the platform focuses on public sector cloud workloads, this separation is key to ensuring bodies from UK healthcare, government, and law enforcement don't operate on overlapping compute or disk workloads.

Government agencies need to adhere stringently to regulatory compliance measures, so operating with sovereign cloud services helps to ensure they can do this more easily while simultaneously benefiting from a reduction in cyber security risk.

"Digital independence and sovereignty within the UK have become key requirements of the public sector and many other regulated industries," said Rick Martire, general manager for sovereign services at Rackspace.

"This truly digital Sovereign offering allows for the UK public sector to achieve cost savings and compliance all through a single provider, without compromising on security or performance," he added.


Cloud sovereignty has been a recurring talking point in both the UK and European Union (EU) over the last year or so, with major companies announcing various plans to introduce dedicated regional cloud services in a bid to adhere to regulations.

While Rackspace's new offering puts the question of cloud service sovereignty more firmly into the UK's market rather than the EU's, it still underlines the increasing global focus on sovereignty in cloud computing.

Microsoft was a notable early voice in the space, revealing plans for its EU data boundary solution that would see a staggered rollout take place in January 2023.

The idea behind this data boundary, set to first be established for public sector and commercial customers, was to allow users to store and process customer data within the European Union (EU).

Microsoft then upped its commitment to sovereign cloud requirements in January 2024, extending EU processing capabilities to include data found in system-generated logs.

More recently, IBM announced the launch of a data center in Germany which promised to ensure a greater level of data sovereignty for European customers, followed swiftly by Oracles unveiling of its Sovereign Cloud region.

Oracle heralded its new region as paving the way for public and private sector organizations across the EU to gain a greater level of control over both data privacy and sovereignty requirements.

Nvidia also looked to the EU as a focus, partnering with Scaleway to drive the increased availability of sovereign infrastructure in the region.

The rest is here:
Sovereign cloud services pick up steam as Rackspace unveils new public sector platform - ITPro

Removing the hefty price tag: cloud storage without the climate cost – DatacenterDynamics

Tackling the sustainability issue

As of 2022, over half of all corporate data resides in the cloud, meaning demand for cloud storage has never been higher. Like a falling domino, this has triggered severe energy consumption throughout the data center industry, resulting in substantial greenhouse gas (GHG) emissions.

Disturbingly, the European Commission estimates that by 2030, EU data center energy use will increase from 2.7 percent to 3.2 percent of the Union's total demand. This would put the industry's emissions almost neck and neck with pollution from the EU's international aviation.

Yet, it must be remembered that cloud storage is still far more sustainable than the alternatives.

It's crucial to put the energy used by cloud storage into context and consider the savings it can make elsewhere. Thanks to sharing services and file storage, teams can collaborate and work wherever they are, removing the need for large offices and everyday commuting.

This means businesses can downsize their workspaces as well as reduce the environmental impact caused by employees traveling. In fact, it's estimated that working from home four days a week can reduce nitrogen dioxide emissions by around 10 percent.


Besides this, cloud storage reduces dependence on physical, on-premises servers. For small and medium-sized businesses (SMBs), having on-site servers or their own data centers can be expensive, whilst running and cooling the equipment requires a lot of energy, which means more CO2 emissions.

Cloud servers, on the other hand, offer a more efficient alternative. Unlike on-premises servers that might only be used to a fraction of their capacity, cloud servers in data centers can be used much more effectively. They often operate at much higher capacities, thanks to virtualization technology that allows a single physical server to act as multiple virtual ones.

Each virtual server can be used by different businesses, meaning fewer physical units are needed overall. This means less energy is required to power and cool, leading to a reduction in overall emissions.
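As a rough illustration of that consolidation argument, the arithmetic below uses utilisation figures that are assumptions chosen for the example, not numbers from the article:

```python
# Illustrative consolidation arithmetic; utilisation figures are assumed.
workloads = 40                 # independent business workloads
on_prem_utilisation = 0.15     # typical fraction of capacity used on a dedicated box
virtualised_utilisation = 0.60 # achievable when workloads share virtualised servers

dedicated_servers = workloads  # one under-used machine per workload
shared_servers = round(workloads * on_prem_utilisation / virtualised_utilisation)

print(dedicated_servers, "dedicated servers vs roughly", shared_servers, "shared ones")
```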

In addition, on-premises servers often have higher storage and computing capacity than needed just to handle occasional spikes in demand, which is an inefficient use of resources. Cloud data centers, by contrast, combine large amounts of equipment to manage these spikes more efficiently.

In 2022, the average power usage effectiveness of data centers improved. This indicates that cloud providers are using energy more efficiently and helping companies reduce their carbon footprint with cloud storage.

Importantly, there are ways to further improve the sustainability of services like cloud storage, which could translate to energy savings of 30-50 percent through greening strategies. So, how can businesses make the sustainable transition from normal cloud storage to green cloud storage? Well, we believe there are three fundamental steps.

Firstly, businesses should consider location. This means picking a cloud storage provider that's close to a power facility, because distance matters: if electricity travels a long way between generation and use, a percentage is lost. In addition, data centers located in underwater environments or cooler climates can reduce the energy required for cooling.

Next, businesses should ask providers what they're doing to minimize their environmental impact. For example, powering their operations with solar, wind, or biofuels reduces reliance on fossil fuels and so lowers GHG emissions. Some facilities house large battery banks to store renewable energy and ensure a continuous, eco-friendly power supply.

Last but certainly not least, technology offers a powerful avenue for enhancing the energy efficiency of cloud storage. Some providers have been investing in algorithms, software, and hardware designed to optimize energy use. For instance, introducing AI and machine learning algorithms or frequency scaling can drastically improve how data centers manage power consumption and cooling.

This is illustrated by Google's use of its DeepMind AI, which reduced its data center cooling bill by 40 percent: a prime example of how intelligent systems can contribute towards greater sustainability.

With the world warming up at an accelerating rate, selecting a cloud storage provider that demonstrates a clear commitment to sustainability can have a significant impact. In fact, major cloud providers like Google, Microsoft, and Amazon have already taken steps to make their cloud services greener, such as by pledging to move to 100 percent renewable sources of energy.

Undeniably, the cloud is reshaping the nature of business as we know it, but this digital growth risks an unpredictable future with serious environmental consequences. Yet businesses shouldn't have to choose between the Earth and innovation.

Instead, it's a balancing act, and the answer lies in green cloud storage. By choosing providers powered by renewable energy, efficient data centers, and innovative technologies, businesses can reap the rewards of the cloud without incurring a harmful energy penalty on the planet.

There's no time to waste. We must act now. Businesses have an obligation to choose green cloud storage and be part of the solution, not the problem. By making the switch today, we can ensure the cloud remains a convenient sanctuary, not a climate change culprit.

Continue reading here:
Removing the hefty price tag: cloud storage without the climate cost - DatacenterDynamics

How to open HEIC images on your Android phone or tablet – Android Police

When you capture pictures on your iPhone, the device stores them in the High-Efficiency Image Container (HEIC, or HEIF) format. Compared to JPEGs and PNGs, HEIC produces high quality at a smaller file size. Android 10 introduced compatibility with the format, so you can open it on your Google Pixel and other phones. Older devices and some apps may not support it.

In those cases, you must convert the images to a supported format before uploading them. With Google Photos, cloud storage services, and third-party apps, you can easily view and convert them. Here's how.

You can open HEIC files on Android if your device runs Android 10 or a newer version of the operating system (OS). HEIC is the default format for capturing images on iPhones and iPads running iOS 11 or iPadOS, and on Macs running macOS High Sierra or newer. Before Apple introduced HEIC support in 2017, its devices used the JPG format.

If you send a HEIC image to the latest Android devices, it retains the format and doesn't automatically change to another format. Google Photos and Files by Google are among the few apps that can open it. Cloud Storage apps like Drive and Dropbox also work, or you can install dedicated HEIC viewer apps on the Play Store.

If you use an Android device from a third-party manufacturer, it should have its own gallery app. You'll see a broken image icon, an error message, or other signs if the device and its built-in apps don't support HEIC.

Upload or back up HEIC images to Google Photos to view them in their original format. The app doesn't convert them, even when you download them to your device. Likewise, the website version retains the default format. Conversion isn't necessary since HEIC is already a small size. Also, Photos focuses on being a cloud storage service where any mobile user can stash pictures and videos and then access them at any time.

Create a shareable link for your photos when you want others to see them. It's the most convenient way to distribute access, as long as everyone has internet access to view the link. If you send the files to others on WhatsApp, Instagram, and other social media apps, the file automatically converts to JPG.

Editing the image within Photos and saving a copy also changes it to JPG. However, you must alter the image before storing it as a copy. You can resize it slightly or apply filters at a minimal level. Samsung Gallery can also open HEIC files on Galaxy devices. The app used to have an option to convert them to JPG. Samsung has since removed it with the One UI 4 update. You can move any images on the app to Google Photos.

Files by Google also has built-in HEIC support, and you can view images without separate conversion software. Plus, it displays all local files on your device, and you can view them offline. Navigate to the folder where the HEIC file is, or search for it. Then tap or click it to view it.

You can edit the image within the app. It has similar tools to Google Photos, including Crop, Adjust, and Filters. Files doesn't have an in-built option to convert the file. However, you can share it with Photos or photo editing apps. Samsung's My Files app works similarly, although it's exclusive to Galaxy smartphones and tablets. It should be preinstalled if you own any of those devices.

Most cloud storage apps have HEIC support and provide web and app interfaces to access the files. Google Drive doesn't automatically convert HEIC files when you download them. But you can create a shareable link for others to view them through an internet connection.

You can also use Dropbox. It provides an option to upload HEIC files as JPG. Recently, some users complained that this option was missing, and the company hasn't officially stated that it removed it. OneDrive is another solution, although you won't find any options to upload HEIC photos in another format.

Third-party apps may provide extensive features beyond basic HEIC support. However, some charge a fee. If you use one, download it from the Google Play Store, as it's the safest app source for your Android device.

HEIC is a newer file format than JPG, PNG, and other image types. Though it offers better image compression and quality, not all devices and apps can open it. If you're an Apple user and frequently share such files with other devices, it's worth switching your default image capture settings to JPG. There isn't much quality difference.

With the iOS 11 update, you can capture and store photos on your iPhone or iPad in JPG format. Use the steps below to do it:

HEIC files aren't a headache if you own an iPhone, but they will be if you share them with Android users. If their devices don't have the latest OS versions or apps that support the format, they can't view the files. Consider setting up Google Drive on your iPhone or a Google Photos account. This way, you only need to share links to the files and save storage space.
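For anyone who ends up with HEIC files on a desktop instead, a scripted conversion is another option. Here is a minimal sketch using the third-party Pillow and pillow-heif libraries; neither is mentioned in the article, and the file names are illustrative:

```python
# pip install pillow pillow-heif   (third-party libraries; an assumption, not from the article)
from pillow_heif import register_heif_opener
from PIL import Image

register_heif_opener()               # lets Pillow open .heic files

img = Image.open("IMG_0001.heic")    # illustrative file name
img.convert("RGB").save("IMG_0001.jpg", quality=90)
```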

See original here:
How to open HEIC images on your Android phone or tablet - Android Police

Podcast: What is distributed cloud storage and what are its benefits? – ComputerWeekly.com

In this podcast, we look at distributed cloud storage with Enrico Signoretti, vice-president of product and partnerships at Cubbit.

We talk about how storage has shifted to hybrid and multicloud modes and how distributed cloud storage separates the control plane from data to provide data retention in multiple locations, on-site and in multiple clouds.

Signoretti also talks about how organisations that need to retain control over data (over costs and location, for example) can achieve that with distributed cloud, as well as about the workloads to which it is best suited.

Enrico Signoretti: So, I can start with why it is important right now and then delve into what it is and what it does.

It is important because we live in a moment where companies are shifting from traditional models, at the beginning, [to] just cloud, and then we discovered hybrid cloud, so keeping some of your IT stuff on-premise and some in the public cloud.

Then we were talking more and more about multicloud; most large enterprises have multiple clouds and multiple applications running in different environments.

So, from this point of view, a distributed cloud is a model that's totally different to what we're used to seeing in the market. The big hyperscalers do everything in single datacentres. So yes, you see the cloud, but everything is running in one datacentre or a set of closely located datacentres.

With the model of distributed cloud you separate the control plane from the data plane; something that happened in the past when we were talking about software-defined.

So, the service provider keeps control of this control plane . . . but resources can be used and deployed everywhere. They could be in the same public cloud environment that I mentioned before, or in your datacentre. So, you are building this distributed cloud.

More so, when it comes to storage, when we talk about geo-distributed cloud, it means these resources are really distributed geographically, meaning that you can have some of your data in France maybe and other segments of the data in Italy or Germany, or even more distributed than that.

This is the main concept, and it's really important for everybody because it removes a lot of obstacles when it is time to work with multicloud.
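To make the idea concrete, here is a toy sketch of the data-plane side: an object is split into segments, each segment is assigned to a different geography, and a separate map (the control plane's job) records where everything lives. This is purely an illustration of the concept, not Cubbit's implementation:

```python
# Toy illustration of geo-distributed placement; the regions, segment size and
# round-robin policy are assumptions for the example, not Cubbit's actual design.
from itertools import cycle

REGIONS = ["france", "italy", "germany"]     # locations mentioned in the discussion

def place_segments(data: bytes, segment_size: int = 4):
    """Split an object into segments and record where each one is stored."""
    placement = []                           # control-plane map: (offset, region, segment)
    region_iter = cycle(REGIONS)
    for offset in range(0, len(data), segment_size):
        placement.append((offset, next(region_iter), data[offset:offset + segment_size]))
    return placement

for offset, region, segment in place_segments(b"example object payload"):
    print(f"offset {offset:2d} -> {region}: {segment!r}")
```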

Signoretti: The main benefit of distributed cloud is control. You can have control at several levels. When you start thinking about distributed cloud there is no lock-in because you have the possibility to choose where you put your data.

There is data sovereignty as well, or as we can call it, data independence. It's not only data sovereignty that you achieve; you achieve control over all the layers and all aspects of data management.

And this is very important because even though most of the hyperscalers are very quick to respond to the new regulations popping up here in Europe, and also in the US, it's still a complex world, and for many organisations in Europe giving your data to this kind of organisation is not feasible.

The idea here is that with distributed cloud you have this level of sovereignty that you need but also control on cost, control on policies that are applied on this data management.

Maybe if we think about a comparison between the three models (on-premises, public cloud and distributed cloud), you can see that distributed cloud sits in the middle. On the one hand, you keep control of the entire stack, and on the other hand, you have the flexibility of the public cloud.

So, matching these two, you can have a very efficient infrastructure that is deployed and managed by your organisation but still keeping all the advantages of public cloud.

Signoretti: You have to think of distributed cloud still as cloud. So, if you have a low latency, high-performance workload for which you usually need the CPU [central processing unit] very close to the storage, thats not for distributed cloud.

In that case, its way better to choose something that is on-premise or in the same cloud.

From my point of view, all other workloads are fine from backup, disaster recovery, collaboration and even big data lakes to store huge amounts of data for AI [artificial intelligence] and ML [machine learning].

In most cases you can have good throughput; it's just the low latency that's not there, but the same goes for the public cloud. These are probably the use cases that are best suited to distributed cloud.

Read more from the original source:
Podcast: What is distributed cloud storage and what are its benefits? - ComputerWeekly.com