
Datafy raises $6 million Seed round led by Insight Partners to optimize cloud storage – CTech

Datafy, which has developed a cloud storage management platform, announced on Wednesday the completion of a $6 million Seed funding round led by global software investor Insight Partners.

The cloud storage sector is experiencing rapid growth, driven by an exponential increase in data generation from rising AI adoption and the broader shift to cloud technologies. The global cloud storage market is projected to grow from $132 billion in 2024 to $665 billion by 2032. Datafy offers up to 50% savings on storage costs and provides a self-optimizing, developer-independent solution.

Datafy's flagship product, focused on EBS (Elastic Block Store) on the AWS cloud, simplifies cloud storage management by auto-scaling storage so that provisioned capacity tracks actual usage at minimal cost.
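The article doesn't describe Datafy's internals, but the kind of EBS right-sizing such a platform automates can be sketched with the AWS SDK. In this minimal sketch, the volume ID, the 80% utilization threshold, and the 20% growth step are illustrative assumptions, not details from the announcement.

```python
# A minimal sketch (not Datafy's implementation) of automated EBS
# right-sizing, using boto3. Volume ID, threshold, and growth step
# are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def grow_volume_if_needed(volume_id: str, used_pct: float, current_gib: int) -> None:
    """Expand an EBS volume online when utilization crosses a threshold."""
    if used_pct > 80.0:
        new_size = int(current_gib * 1.2)
        # EBS volumes can be grown without downtime; shrinking requires
        # migrating data to a smaller volume, which is the harder part
        # a managed platform has to automate.
        ec2.modify_volume(VolumeId=volume_id, Size=new_size)
        print(f"resizing {volume_id}: {current_gib} GiB -> {new_size} GiB")

grow_volume_if_needed("vol-0123456789abcdef0", used_pct=85.0, current_gib=500)
```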

Datafy was founded by Zivan Ori (CEO), Yoav Ilovich (CPO), and Ziv Serlin (COO). Ori and Serlin previously founded E8 Storage, which was sold to Amazon in 2019. After the acquisition, the two led a development group in the field of cloud storage at Amazon's R&D center in Israel. Ilovich, a graduate of the IDF's Talpiot unit where he also met Ori, has led product teams for more than 15 years, including VP Product positions at Taboola and at Pagaya.

"Our mission is clear - to give Finops and Devops teams the control they deserve with no effort or big changes to the system, said Ori. With Datafy, we're not just saving money; we're transforming how businesses manage their data in the cloud. Todays funding news is the next step in our journey as we continue to grow."


Google Photos is making it easier to free up space for your pictures and videos: here's how – Tom's Guide

It's really easy to fill up your phone's storage with photos and videos. In a time when microSD card support is rarer and rarer, the cloud can be a lifesaver. But what's to stop you eating through your storage allowances in the exact same way? That's where Google Photos' storage saver feature comes in.

The goal of storage saver is to free up space in your Google cloud storage by reducing the quality of backed-up photos, lowering the amount of storage they need in the process. So far this feature has only been available on desktop, but it looks like it'll be making the jump to Android in the near future.

Android Authority spotted this during an APK teardown of the latest version of the Google Photos Android app. The code references Google's storage saver feature, with dialog text mentioning users being able to choose the quality of photos that are backed up to the cloud.

However, reducing the quality is a permanent change, so while you will save storage space, your photos won't look as detailed as they did when you took them. Which is exactly how storage saver works on the web right now. Presumably that means photos will be reduced to 16MP, and videos downgraded to 1080p.
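Google hasn't published the exact transcode pipeline, but the 16MP cap can be illustrated with a rough sketch using Pillow; the area-based downscale, file names, and JPEG quality setting are assumptions for illustration only.

```python
# Rough illustration of capping an image at ~16 megapixels, in the spirit
# of storage saver. Not Google's actual code; names and settings are made up.
import math
from PIL import Image

MAX_PIXELS = 16_000_000  # the ~16MP cap mentioned above

def cap_resolution(src: str, dst: str) -> None:
    img = Image.open(src)
    w, h = img.size
    if w * h > MAX_PIXELS:
        scale = math.sqrt(MAX_PIXELS / (w * h))  # preserve aspect ratio
        img = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    img.save(dst, quality=85)  # illustrative JPEG quality

cap_resolution("IMG_0001.jpg", "IMG_0001_saver.jpg")
```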

It's also worth noting that this change covers everything backed up to your Google Photos account. So there's no picking and choosing which files get downgraded, while keeping some at their original quality. While it would be very nice to do that, it's not something Google has offered at the time of writing. Google also limits compression to once per day, which should be fine for most people.


Judging from the code, this change just means Android users will be able to tell Google Photos to compress photos and videos on the cloud from their phones, rather than having to log into Google Photos in a web browser. Which could prove useful if you don't mind losing some quality to save storage space. After all, Google's 15GB free allowance isn't a lot.

Of course, if you need to keep all your photos and videos in their original form, then you'll want to pay up for the right amount of storage. Google offers up to 5TB of storage as part of Google One, but you may prefer to use one of the other best cloud storage services instead.


Or, alternatively, if you'd rather not be locked into a subscription, or upload to the cloud, the best external hard drives give you a way to keep everything backed up locally. Just make sure to back everything up regularly, since it can't be done automatically.


DigitalGlue to highlight new creative.space //CLOUD and //EDGE-X storage solutions at NAB 2024 – NewscastStudio

DigitalGlue will debut its latest storage solutions, the creative.space //CLOUD and the //EDGE-X storage server, at the NAB Show 2024. Tailored for the content creation industry, these offerings are designed to provide scalable, secure, and cost-effective data management for businesses and creative professionals. Attendees can experience these solutions firsthand at booth SL9081 and apply for the chance to win 10 TB of //CLOUD storage.

creative.space //CLOUD: Scalable and Affordable Storage for Creative Teams

DigitalGlue is introducing a cloud-hosting option to the award-winning creative.space platform as a compelling option for creative teams needing an off-site collaboration solution. By leveraging patented UltraIO technology, creative.space's //CLOUD storage servers provide unprecedented performance, data protection, and efficiency through the ability to offload CPU tasks to GPUs. //CLOUD customers have access to a dedicated node that provides the same features and experience as DigitalGlue's on-premises systems, including desktop mounting, link sharing, HTTPS transfers, and more. While users have the option to stream data over the internet for remote editing, DigitalGlue also provides the option to host Mac Studio workstations for screen-sharing access, networked to cloud storage with 10 GbE or higher connectivity. This provides a separation between the user and the data for added security, while also leveraging the new high-performance mode option in macOS Sonoma for remote editing at the highest quality over low-bandwidth internet connections. Offered at only $195/month for 10 TB, this solution stands out for its affordability and scalability, making it an ideal choice for creative teams.

//EDGE-X: Compact and Efficient Storage Server

The //EDGE-X server is an all-flash SSD-based storage server featuring all of the functionality of the creative.space platform. Its compact form factor and lack of spinning disks make it the ideal solution for on-set storage, including mounting directly on a tripod. Productions can ingest directly from cameras from vendors such as RED and Blackmagic Design over a network connection using the creative.space web app, instead of having to shuttle capture cards. The //EDGE-X is adaptable for many use cases, easily integrating with the creative.space //CLOUD. The //EDGE-X is available for $250/month for 15TB, under a 5-year contract paid annually, offering an efficient solution for creative professionals.

Combined Offering for Comprehensive Data Management

DigitalGlue also provides a bundled solution that includes 15TB of creative.space //CLOUD storage and the //EDGE-X server for a total of $445/month, based on a 5-year contract paid annually. This package is crafted to offer creative teams a comprehensive set of tools for efficient digital asset management, enhancing their ability to collaborate and produce content effectively.

No Hidden Fees and a Unified User Experience

The creative.space platform delivers a consistent user experience across desktop and web applications, with features such as desktop mounting, media browsing, and file transfers. This uniform approach ensures a transparent pricing model, with fixed monthly or annual rates and no additional user access or task-specific fees.

Launching at NAB 2024: A New Era of Content Creation Collaboration

DigitalGlue is proud to introduce the creative.space //CLOUD and //EDGE-X server at NAB 2024 in booth SL9081 and offer attendees the chance to win 10TB of //CLOUD storage. These products aim to transform the way creative teams manage and collaborate on digital assets. By offering a mix of on-premises and //CLOUD storage solutions, these products are set to streamline content creation workflows, addressing the industry's need for secure, accessible, and cost-effective data storage solutions.


Google Cloud’s AI Hypercomputer cloud infrastructure gets new GPUs, TPUs, optimized storage and more – SiliconANGLE News

Google Cloud is revamping its AI Hypercomputer architecture with significant enhancements across the board to support rising demand for generative artificial intelligence applications that are becoming increasingly pervasive in enterprise workloads.

At Google Cloud Next '24 today, the company announced updates to almost every layer of the AI Hypercomputer cloud architecture, with new virtual machines powered by Nvidia Corp.'s most advanced graphics processing units, one of the most significant revelations. In addition, it unveiled enhancements to its storage infrastructure for AI workloads, plus the underlying software for running AI models, and more flexible consumption options with its Dynamic Workload Scheduler service.

The updates were announced by Mark Lohmeyer, vice president and general manager of Compute and ML Infrastructure at Google Cloud. He explained that generative AI has gone from almost nowhere just a couple of years ago to widespread use across enterprise applications encompassing text, code, videos, images, voice, music and more, placing incredible strain on the underlying compute, networking and storage infrastructure that supports it.

To support the increasingly powerful generative AI models being adopted across the enterprise today, Google Cloud has announced the general availability of what it says is its most powerful and scalable tensor processing unit to date. It's called the TPU v5p, and it has been designed with a single purpose in mind: to train and run the most demanding generative AI models.

TPU v5p is built to deliver enormous computing power, with a single pod containing 8,960 chips running in unison, more than twice as many as in a TPU v4 pod. According to Lohmeyer, the TPU v5p delivers some impressive performance gains, with twice as many floating point operations per second and three times more high-bandwidth memory on a per-chip basis, resulting in vastly improved overall throughput.

To enable customers to train and serve AI models running on large-scale TPU clusters, Google is adding support for the TPU v5p virtual machines on Google Kubernetes Engine, its cloud-hosted service for running software containers.

As an alternative, customers can also use the latest hardware from Nvidia to train their generative AI models on Google Cloud. Besides its TPU family, it's also providing access to Nvidia's H100 GPUs through its new A3 family of VMs. The A3 Mega VM will become generally available next month, and one of its main advantages will be support for confidential computing, which refers to techniques that can protect the most sensitive data from unauthorized access even while it's being processed. This is a key development, Lohmeyer said, as it will provide a way for generative AI models to access data that was previously deemed too risky for them to process.

"Character.AI is using Google Cloud's Tensor Processing Units and A3 VMs running on Nvidia's H100 Tensor Core GPUs to train and infer LLMs faster and more efficiently," said Character Technologies Inc. Chief Executive Noam Shazeer. "The optionality of GPUs and TPUs running on the powerful AI-first infrastructure makes Google Cloud our obvious choice as we scale to deliver new features and capabilities to millions of users."

More exciting, perhaps, is what Google Cloud has in store for later in the year. Though it hasn't said when, the company confirmed that it's planning to bring Nvidia's recently announced but not yet released Blackwell GPUs to its AI Hypercomputer architecture. Lohmeyer said the Blackwell GPUs will be made available in two configurations, with VMs powered by both the HGX B200 and GB200 NVL72 GPUs. The former are designed for the most demanding AI workloads, while the latter is expected to support a new era of real-time large language model inference and massive-scale training for trillion-parameter-scale models.

More powerful compute is just one part of the infrastructure equation when it comes to supporting advanced generative AI workloads. In addition, enterprises also need access to more capable storage systems that keep their data as close as possible to the compute instances that power them. The idea is that this reduces latency to train models faster, and with today's updates, Google Cloud claims its storage systems are now among the best in the business, with improvements that maximize GPU and TPU utilization, resulting in superior energy efficiency and cost optimization.

Today's updates include the general availability of Cloud Storage FUSE, a file-based interface for Google Cloud Storage that lets AI and machine learning applications access cloud storage resources as if they were local files. According to Google Cloud, GCS FUSE delivers an increase in training throughput of 2.9 times compared with its existing storage systems, with model serving performance showing a 2.2-times improvement.
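To illustrate what the file-based interface buys you: once a bucket is mounted with the gcsfuse CLI, objects read like ordinary files, so training data loaders need no GCS-specific code. A minimal sketch; the bucket name, mount point, and directory layout below are assumptions.

```python
# Assumes the bucket has been mounted first, e.g.:
#   gcsfuse my-training-bucket /mnt/gcs
# After that, objects under the bucket behave like local files.
import os

MOUNT_POINT = "/mnt/gcs"  # wherever gcsfuse mounted the bucket

def iter_training_shards(subdir: str = "shards"):
    shard_dir = os.path.join(MOUNT_POINT, subdir)
    for name in sorted(os.listdir(shard_dir)):
        with open(os.path.join(shard_dir, name), "rb") as f:
            yield f.read()  # each read streams through the FUSE layer

for shard in iter_training_shards():
    ...  # feed shard bytes into the input pipeline
```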

Other enhancements include support for caching, in preview, within Parallelstore, a high-performance parallel file system that's optimized for AI and high-performance computing workloads. With its caching capabilities, Parallelstore enables up to 3.9 times faster training times and 3.7 times higher training throughput, compared to traditional data loaders.

The company also announced AI-focused optimizations to the Filestore service, which is a network file system that enables entire clusters of GPUs and TPUs to simultaneously access the same data.

Lastly, there's the new Hyperdisk ML service, which delivers block storage and is available now in preview. With this, Google Cloud claims it can accelerate model load times by up to 12 times compared to alternative services.
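The announcement doesn't show provisioning code; as a hedged sketch, the google-cloud-compute Python client can create a volume whose disk type selects the Hyperdisk ML tier. The project, zone, size, and the "hyperdisk-ml" type string follow Google's usual disk-type naming but are assumptions here, not details from the article.

```python
# Hedged sketch: provisioning a Hyperdisk ML volume to hold model weights.
# Project, zone, size, and the disk-type string are illustrative assumptions.
from google.cloud import compute_v1

PROJECT, ZONE = "my-project", "us-central1-a"

def create_model_disk(name: str, size_gb: int) -> None:
    disk = compute_v1.Disk(
        name=name,
        size_gb=size_gb,
        type_=f"zones/{ZONE}/diskTypes/hyperdisk-ml",  # assumed type name
    )
    client = compute_v1.DisksClient()
    # insert() returns a long-running operation; result() blocks until done.
    client.insert(project=PROJECT, zone=ZONE, disk_resource=disk).result()

create_model_disk("llm-weights", 500)
```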

A third part of the generative AI equation is the open-source software that's used to support many of these models, and Google Cloud hasn't ignored these either. It's offering a range of updates across its software stack that it says will help simplify developer experiences and improve performance and cost efficiencies.

The software updates include the debut of MaxDiffusion, a new high-performance and scalable reference implementation for diffusion models that generate images. In addition, the company announced a range of new open models available now in MaxText, such as Gemma, GPT3, Llama 2 and Mistral.

The MaxDiffusion and MaxText models are built on a high-performance numerical computing framework called JAX, which is integrated with the OpenXLA compiler to optimize numerical functions and improve model performance. The idea is that these components ensure the most effective implementation of these models, so developers can focus on the math.

In addition, Google announced support for the latest version of the popular PyTorch AI framework, PyTorch/XLA 2.3, which will debut later this month.

Lastly, the company unveiled a new LLM inference engine called JetStream. It's an open-source offering that's throughput- and memory-optimized for AI accelerators such as Google Cloud's TPUs. According to Lohmeyer, it will provide three times higher performance per dollar on Gemma 7B and other open AI models.

"As customers bring their AI workloads to production, there's an increasing demand for a cost-efficient inference stack that delivers high performance," he explained. "JetStream helps with this need and offers support for models trained with both JAX and PyTorch/XLA, and includes optimizations for popular open models such as Llama 2 and Gemma."

The final ingredient for running generative AI on Google's cloud stack is the Dynamic Workload Scheduler, which delivers resource management and job scheduling capabilities to developers. The main idea is that it improves access to AI computing capacity while providing tools to optimize spending on these resources.

With today's update, Dynamic Workload Scheduler now provides two starting modes: flex start mode, for enhanced obtainability with optimized economics, and calendar mode, for more predictable job start times and durations. Both modes are now available in preview.

According to Lohmeyer, flex start jobs will be queued to run as soon as possible, based on resource availability. This will make it easier for developers to access the TPU and GPU resources they need for workloads with more flexible start times. As for calendar mode, this provides short-term reserved access to AI compute resources, including TPUs and GPUs. Users will be able to reserve co-located GPUs for a period of up to 14 days, up to eight weeks in advance. Reservations will be confirmed, and the capacity will become available on the requested start date.

"Dynamic Workload Scheduler improved on-demand GPU obtainability by 80%, accelerating experiment iteration for our researchers," said Alex Hays, a software engineer at Two Sigma Inc. "Leveraging the built-in Kueue and GKE integration, we were able to take advantage of new GPU capacity in Dynamic Workload Scheduler quickly and save months of development work."


A 30,000TB tower powered by a 70-year-old technology: Spectra Logic proves that data tape still has a place in an AI ... – TechRadar

Spectra Logic has introduced the Spectra Cube tape library, a cloud-optimized system for on-premise, hybrid cloud, and IaaS environments that is designed to be quickly deployed, dynamically scaled, and easily serviced without tools or downtime.

The Spectra Cube library is managed by the company's recently announced LumOS library management software, which provides secure local and remote management and monitoring.

The tower is compatible with LTO-6, LTO-7, LTO-8, and LTO-9 technology generations and will reportedly support LTO-10 when it becomes available. LTO-6 support allows users to read old tapes all the way back to LTO-4 with an LTO-6 tape drive. The solution features high tape cartridge exchange performance, a TeraPack Access Port for easy tape handling, and drive interfaces including Fibre Channel and SAS.

With a capacity-on-demand expansion model, the Spectra Cube allows for additional tape slots and drives to be enabled via software without downtime. The library offers up to 30PB of native capacity and supports up to 16 partitions for shared or multi-tenant environments.
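As a rough sanity check on those numbers, using the published native (uncompressed) LTO cartridge capacities, which are standard LTO specs rather than figures from the article:

```python
# Native (uncompressed) capacity per LTO cartridge, in TB, and how many
# LTO-9 cartridges a 30PB library implies. Decimal units throughout.
NATIVE_TB = {"LTO-6": 2.5, "LTO-7": 6.0, "LTO-8": 12.0, "LTO-9": 18.0}

library_pb = 30
cartridges = library_pb * 1000 / NATIVE_TB["LTO-9"]
print(f"~{cartridges:.0f} LTO-9 cartridges for {library_pb}PB native")  # ~1667
```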

"As cloud data continues to grow rapidly, the escalating costs of public cloud storage have forced a reckoning, leading to significant interest in moving data to more economical locations including on-prem clouds and hybrid clouds, said Matt Ninesling, senior director of tape portfolio management at Spectra Logic.

"Compared to typical public cloud options, Spectra Cube solutions can cut the costs of cold storage by half or more, while providing better data control and protection from existential threats like ransomware."

The price of a fully-fledged Spectra Cube library ranges from under $60,000 to over $500,000 depending on configuration, number of tape drives, amount of media, and other additions to the base library.



Why won’t Google increase its free 15GB cloud storage? – Pocket-lint


It seems like everyone and their dog has a Google account nowadays. It's the most popular email service around, with over a billion daily users, but its usefulness doesn't end there. It's used as a hub for all the Google services, allows easy syncing of Google Chrome between devices, and enables hundreds of other quality-of-life features.

One of the handiest perks of a Google account is 15GB of free cloud storage on Google Drive. Sure, that storage is shared between your Gmail, Google Drive, and Google Photos, but it's still useful for keeping your backup, email attachments, and a few documents around and ready to share online.

The 15GB limit across all the Google services was introduced back in 2013, and the bar has not been raised since. On the contrary, over the years the company has removed some of the advantages its cloud storage offered, such as unlimited photo backup for Google Pixel users, essentially making it a worse deal than it was all those years ago.

That raises the question: Why doesn't the free storage tier change? Over the years, prices of storage have gone down significantly, so Google should -- at least theoretically -- be able to offer much more storage to Gmail users. Unfortunately, it's not as simple as that, and there are a few good reasons the company is sticking to its 15GB limit.

Let's talk about the expenses first. It's true that storage has become significantly cheaper in the last few years, with both hard drives and SSDs coming down in price per gigabyte. However, this doesn't take into account the growth of Google itself and the rising prices of electricity and server space, all of which contribute to the significantly increasing cost of maintaining the cloud storage the company offers.

In the 2021 blog post announcing the end of unlimited photo storage, Google mentioned that users add more than 4.3 million GB to its servers every day. This number increases significantly every year even without making the free storage tier bigger, so the operating costs for Google are tremendous. So the biggest and most obvious reason the company doesn't make its free storage tier bigger is cost.
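To put that daily figure in perspective, a one-line annualization (decimal units; the arithmetic is mine, not a figure from the blog post):

```python
# 4.3 million GB/day, annualized: roughly 1.57 exabytes per year.
gb_per_day = 4.3e6
print(f"{gb_per_day * 365 / 1e9:.2f} EB/year")  # 1.57 EB/year
```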

Plus, 15GB is still one of the bigger allowances around, so Google doesn't see the need to compete in this space anymore, and doing the bare minimum is usually preferable for giant companies looking to minimize their costs.

Speaking of doing the bare minimum: Most users really do not need more than 15GB of free storage.

For tech enthusiasts, 15GB of storage might feel like a pittance, but for a casual user who's only backing up some photos from their Android phone and getting a few emails a day, 15GB is really much more than enough. That's especially true if you only use your Google account for Gmail. Seeing as the maximum attachment size is 25MB, you could easily store 600 emails with the biggest attachment possible before running out of space.
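That 600-email figure is just the allowance divided by the attachment cap (decimal units, ignoring message bodies and overhead):

```python
# 15GB allowance / 25MB maximum attachment = 600 maximal emails.
allowance_mb = 15 * 1000
max_attachment_mb = 25
print(allowance_mb // max_attachment_mb)  # 600
```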

That's quite an unrealistic scenario, though, so let's look at something more day-to-day.

I got my personal Gmail account around 2010, and ever since, I have probably never deleted more than 50 emails. I use this account for almost everything, with tens of emails every day that end up simply rotting in the inbox -- a terrible habit, I know, but who has the time to take care of their inbox? What's the result? Over these years, with more than 10,000 unread emails and probably more read than that, my Gmail has grown to 1.74GB. I could be as disorganized as I want for the rest of my life, and my Gmail account wouldn't touch the free 15GB limit anyway.

Of course, that's different if you want to use Google Photos as your backup or Google Drive to share and store some files, but for the most basic uses, 15GB of free cloud storage really is enough for most people.

Ultimately, though, the reason Google doesn't want to give you more free cloud storage is really simple: It wants to make money selling you this service. Especially now that cloud storage is getting even more popular and widespread, it's difficult to imagine Google taking a step back and offering more free storage, considering the push toward using Google One.

Of course, it's not all bad in the paid cloud storage world. I know because I've been using Google One for a while now. The cheapest tier is quite affordable at $1.99 per month and gets you not only 100GB of cloud storage across Google services, but some additional goodies as well. We're talking about the ability to share your storage space with up to five people, as well as more editing tools in Google Photos.

However, the real fun starts when you choose the highest-priced Google One plan, called AI Premium. Not only does it include 2TB of cloud storage, but more importantly, it also lets you use Google Gemini Advanced. It's an improved Gemini AI model that works as a standalone chatbot and is also available in Google Docs, Gmail, and other Google services if you buy the highest tier of Google One subscription.

So, ultimately, you shouldn't expect Google to offer more free cloud storage any time soon, as it would significantly harm the company's business and discourage users from buying the services that Google wants to push.

You really shouldn't worry that much about the lack of free cloud storage, though. Relying on Google's (or anyone else's, for that matter) cloud solution isn't the best practice if you value the safety of your data anyway. Instead, if you feel like 15GB is not enough for you, look into getting your own Network-Attached Storage, or maybe even setting up your own cloud storage solution. That would let you create cloud storage that's much more spacious than what Google or other companies offer, and, ultimately, much more affordable in the long run.


Google Cloud NEXT 2024: The hottest news, in brief – The Stack

Google Cloud's first Arm-based CPU for the data centre, a host of new compute and storage services that dramatically improve generative AI performance, a security-centric Chrome offering, and a flurry of enterprise-focused Workspace updates that take the fight to Microsoft 365.

Also, AI in everything, including Gemini and Vertex AI in data warehouse BigQuery (with fine-tuning) in public preview, for "seamless preparation and analysis of multimodal data such as documents, audio and video files." (NB: Vector search came to BigQuery in preview in February.)

Those were among the updates set to get serious airtime at Google Cloud NEXT in Las Vegas this week. The Stack will share more considered analysis of the news in the coming days, along with interviews with executives and customers, but here's an early sample from a blockbuster set of press releases, GitHub repositories and blogs...

"Unlike traditional email and productivity solutions, Gmail and Workspace were built from the very beginning on a cloud-native architecture, rooted in zero-trust principles, and augmented with AI-powered threat defenses."

So said Google pointedly in the wake of the CSRB's blistering indictment of Microsoft's security, which noted that Redmond had designed its consumer MSA identity infrastructure more than 20 years ago.

Workspace, Google's suite of collaboration and productivity applications, has approximately 10 million paying users. That makes it a minnow compared to the 300 million+ paid seats Office 365 boasted back in 2022.

It could be more of a threat to Microsoft.

A series of new features unveiled today may make it one. They include a new $10/user AI Security add-on that will let Workspace admins automatically classify and protect sensitive files and data using privacy-preserving AI models and Data Loss Prevention (DLP) controls "trained for their organization." A Google spokesperson told The Stack that "we're extending DLP controls and classification labels to Gmail in beta."

Pressed for detail, they told us that these will include:

Also coming soon: Experimental support for post-quantum cryptography (PQC) in client-side encryption [with partners] Thales and Fortanix

A new generative AI service called Google Vids, baked into Google Workspace, may get more headlines. That's a video, writing, production, and editing assistant that will work in-browser and sit alongside Docs, Sheets, and Slides from June. Less a serious competitor to Premiere Pro and more a templating assistant that pieces together your first draft with suggested scenes from stock videos, images, and background music. (The Stack has clarified that users can also upload their own video, not just use stock...)

Other Workspace updates today:

Chat: Increased member capacity of up to 500,000 in Spaces for those bigger enterprise customers. Also new: GA messaging interoperability with Slack and Teams through Google-funded Mio, and various AI integrations and enhancements across Docs, Sheets etc.

NVIDIA CEO Jensen Huang anticipates over $1 trillion in data center spending over the next four years as infrastructure is heavily upgraded for more generative AI-centric workloads. This isn't just a case of plumbing in more GPUs: Google Cloud is showcasing some real innovations here.

It boasted "significant enhancements at every layer of our AI Hypercomputer architecture [including] performance-optimized hardware, open software and frameworks"...

Top of the list and hot off the press:

Various other promises of faster, cheaper compute also abound. But it's storage and caching where GCP's R&D work really shines. (Important for generative AI, because it is a HUGE bottleneck for most models.)

A standout is the preview release of Hyperdisk, a block storage service optimised for AI inference/serving workloads that Google Cloud says accelerates model load times up to 12X compared to common alternatives, with read-only, multi-attach, and thin provisioning.

Hyperdisk lets users spin up 2,500 instances to access the same volume and delivers up to 1.2 TiB/s of aggregate throughput per volume: over 100X greater performance than Microsoft Azure Ultra SSD and Amazon EBS io2 Block Express. In short, its volumes are heavily optimised, managed network storage devices located independently from VMs, so users can detach or move Hyperdisk volumes to keep data even after deleting VMs.

"Hyperdisk performance is decoupled from size, so you can dynamically update the performance, resize your existing Hyperdisk volumes or add more Hyperdisk volumes to a VM to meet your performance and storage space requirements," Google boasts, although there are some limitations...

Other storage/caching updates:

Chrome Enterprise Premium is a turbocharged version of Chrome Enterprise with new....

Yes, we agree, this sounds rather good too.

More details and pricing in a standalone piece soon.


Google Photos on Android seems primed to pick up a ‘recover storage’ option – Android Central

A new option hidden within the code for the Google Photos app teases a familiar space-saving function.

According to PiunikaWeb, courtesy of AssembleDebug, the latest 6.78 version of Photos contains information regarding a coming "Recover Storage" option. The feature was discovered within the "Account Storage" section, under "Manage Storage." Upon tapping, the Android app showed an addition to the page that would let users "convert photos to Storage saver."

Google's description says the saver will "recover some storage" by reducing the quality of your previously cloud-saved items to save space. This method involves all of a user's photos and videos they've saved via the cloud.

A subsequent page states Photos will not touch the original quality of items stored in Gmail, Drive, or YouTube. Additionally, other items on a user's Pixel device may not be roped into this either.

The publication states Google's continued development of Recover Storage has brought in more information about photo/video compression. The company will seemingly warn users in-app that compressing their older items to a reduced quality "can't be reversed."

Users should also be prepared to wait a while as the app does its thing, which could take a few days.


If this feature sounds familiar, it's because the web-based version of Photos already offers this space-saving option. The good thing is that compressing your older media won't affect your future uploads, as stated on its support page. So, if you're running out of space (again), you can always try to compress your files again.


There's speculation that Google could roll out its Recover Storage option to Android users soon, as its functionality seems nearly done. Moreover, it seems it will arrive for iOS devices in conjunction with Android.

Yesterday (Apr. 10), the company announced that a few powerful AI editing tools will soon arrive in Photos for free. Beginning May 15, all users can utilize Magic Eraser, Photo Unblur, Portrait Light, and a few more without a subscription. Eligible devices include those running Android 8 and above, Chromebook Plus devices, and iOS 15 and above.



HYCU Wins Google Cloud Technology Partner of the Year Award for Backup and Disaster Recovery – GlobeNewswire

Boston, Massachusetts, April 09, 2024 (GLOBE NEWSWIRE) -- HYCU, Inc., a leader in data protection as a service and one of the fastest growing companies in the industry, today announced that it has received the 2024 Google Cloud Technology Partner of the Year award for Backup and DR. HYCU is being recognized for its achievements in the Google Cloud ecosystem, helping joint customers do more with less by leveraging HYCU's R-Cloud platform, which runs natively on Google Cloud to provide core data protection services, including enterprise-class automated backup and granular recovery across Google Cloud and other IaaS, DBaaS, PaaS, and SaaS services.

"Google Cloud's Partner Awards celebrate the transformative impact and value that partners have delivered for customers," said Kevin Ichhpurani, Corporate Vice President, Global Ecosystem and Channels at Google Cloud. "We're proud to announce HYCU as a 2024 Google Partner Award winner and recognize their achievements enabling customer success from the past year."

HYCU currently provides backup and recovery for the broadest set of IaaS, DBaaS, PaaS, and SaaS services for Google Cloud. This support includes Google Workspace, BigQuery, CloudSQL, AlloyDB, Cloud Functions, Cloud Run, and App Engine, with enhanced capabilities for GKE, in addition to Google Cloud services including Google Compute Engine, Google Cloud Storage, Google Cloud VMware Engine, and SAP on Google. With the HYCU R-Cloud platform, HYCU can now help customers protect more Google Cloud services than any other provider in the industry. HYCU recently announced it has passed the 70-SaaS-integration milestone.

"In a year when the threat landscape evolved to put companies at an even higher risk of data loss due to cyber threats, HYCU built an industry-leading solution on Google Cloud to help customers extend purpose-built data protection to more of the Google Cloud services and SaaS applications that their businesses rely on," said Simon Taylor, Founder and CEO, HYCU, Inc. "HYCU's innovation has also helped drive more growth for Google through double-digit Google Marketplace GTV growth YoY. And more HYCU customers recognized the value of HYCU R-Cloud to leverage the full power of R-Cloud for data protection across Google Cloud, on-prem, and SaaS, with all data backups stored securely using Google Cloud Storage. All of us at HYCU are both excited and proud to be named a Partner of the Year. It is yet another milestone as we look to solve the world's modern data protection challenges."

Since the HYCU R-Cloud platform was released on Google Cloud, customers have benefited from R-Graph, the first tool designed to visualize a company's entire data estate, including on-premises, Google Cloud, and SaaS data. As the industry's first cloud-native platform for data protection, HYCU R-Cloud enables enterprise-grade data protection to be built and released for new data sources quickly and efficiently. This has enabled HYCU to extend data protection to dozens of new Google Cloud services and SaaS applications in the past twelve months, and to leverage Google Cloud Storage to securely store backups.

For more information on HYCU R-Cloud, visit: https://www.hycu.com/r-cloud, follow us on X (formerly Twitter), connect with us on LinkedIn, Facebook, Instagram, and YouTube.

HYCU is showcasing its solution during Google Cloud Next from April 9th through the 11th in Las Vegas at booth #552. Attendees can learn more about HYCU's modern data protection approach firsthand.

# # #

About HYCU

HYCU is the fastest-growing leader in the multi-cloud and SaaS data protection as a service industry. By bringing true SaaS-based data backup and recovery to on-premises, cloud-native and SaaS environments, the company provides unparalleled data protection, migration, disaster recovery, and ransomware protection to thousands of companies worldwide. As an award-winning and recognized visionary in the industry, HYCU solutions eliminate complexity, risk, and the high cost of legacy-based solutions, providing data protection simplicity to make the world safer. With an industry-leading NPS score of 91, customers experience frictionless, cost-effective data protection, anywhere, everywhere. HYCU has raised $140M in VC funding to date and is based in Boston, Mass. Learn more at http://www.hycu.com.


Early OpenAI investor bets on alternative to Sam Altman's approach to AI – Semafor

Each major breakthrough in AI has occurred by removing human involvement from part of the process. Before deep learning, machine learning involved humans labeling data meticulously so that algorithms could then understand the task, deciphering patterns and making predictions. But now, deep learning obviates the need for labeling. The software can, in essence, teach itself the task.

But humans have still been needed to build the architecture that told a computer how to learn. Large language models like ChatGPT came from a breakthrough in architecture known as the transformer. It was a major advance that allowed a deep learning method called neural networks to keep improving as they grew to unfathomably large sizes. Before the transformer, neural networks plateaued after reaching a certain size.

That is why Microsoft and others are spending tens of billions on AI infrastructure: It is a bet that bigger will continue to mean better.

The big downside of this kind of neural network, though, is that the transformer is imperfect. It tells the model to predict the next word in a sentence based on how groups of letters relate to one another. But there is nothing inherent in the model about the deeper meaning of those words.
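A toy example makes the point concrete: the bigram "model" below learns next-word prediction purely from co-occurrence counts in a tiny corpus, and nothing in it represents whether a continuation is true. (This illustrates next-token prediction in general, not the transformer architecture itself; the corpus and names are invented.)

```python
# Next-token prediction from raw co-occurrence statistics alone.
from collections import Counter, defaultdict

corpus = "the sky is blue . the sky is green .".split()

bigrams: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    # The most frequent continuation wins; ties break by insertion order.
    return bigrams[word].most_common(1)[0][0]

# "blue" and "green" are equally plausible continuations of "is" here;
# the model has no notion of which sentence is true.
print(predict_next("is"))
```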

It is this limitation that leads to what we call hallucinations; transformer-based models don't understand the concept of truth.

Morgan and many other AI researchers believe if there is an AI architecture that can learn concepts like truth and reasoning, it will be developed by the AI itself, and not humans. "Now, humans no longer have to describe the architecture," he said. "They just describe the constraints of what they want."

The trick, though, is getting the AI to take on a task that seems to exist beyond the comprehension of the human brain. The answer, he believes, has something to do with a mathematical concept known as category theory.

Increasingly popular in computer science and artificial intelligence, category theory can turn real-world concepts into mathematical formulas, which can be converted into a form of computer code. Symbolica employees, along with researchers from Google DeepMind, published a paper on the subject last month.

The idea is that category theory could be a method to instill constraints in a common language that is precise and understandable to humans and computers. Using category theory, Symbolica hopes its method will lead to AI with guardrails and rules baked in from the beginning. In contrast, foundation models based on transformer architecture require those factors to be added on later.
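To make that less abstract, here is a toy sketch of "constraints as composition rules": objects, typed morphisms between them, and a composition operation that refuses to connect arrows whose types don't line up. It is purely illustrative, not Symbolica's actual formalism, and every name in it is invented.

```python
# A toy category: objects are labels, morphisms are typed functions,
# and composition is only defined when source/target objects match.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Morphism:
    src: str       # source object
    dst: str       # target object
    fn: Callable   # the underlying map

def compose(g: Morphism, f: Morphism) -> Morphism:
    """g after f, defined only when f's target equals g's source."""
    if f.dst != g.src:
        raise TypeError(f"cannot compose {f.src}->{f.dst} with {g.src}->{g.dst}")
    return Morphism(f.src, g.dst, lambda x: g.fn(f.fn(x)))

tokenize = Morphism("Text", "Tokens", str.split)
count = Morphism("Tokens", "Int", len)

pipeline = compose(count, tokenize)  # Text -> Int, checked when built
print(pipeline.fn("constraints checked before anything runs"))  # 5
```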

Morgan said it will be the key to creating AI models that are reliable and don't hallucinate. But like OpenAI, it's aiming big in hopes that its new approach to machine learning will lead to the holy grail: software that knows how to reason.

Symbolica, though, is not a direct competitor to foundation model companies like OpenAI and views its core product as bespoke AI architectures that can be used to build AI models for customers.

That is an entirely new concept in the field. For instance, Google did not view the transformer architecture as a product. In fact, it published the research so that anyone could use it.

Symbolica plans to build customized architectures for customers, which will then use them to train their own AI models. "If they give us their constraints, we can just build them an architecture that meets those constraints and we know it's going to work," Morgan said.

Morgan said the method will lead to interpretability, a buzzword in the AI industry these days that means the ability to understand why models act the way they do. The lack of interpretability is a major shortcoming of large language models, which are so vast that it is extremely challenging to understand how, exactly, they came up with their responses.

The limitation of Symbolica's models, though, is that they will be more narrowly focused on specific tasks compared to generalist models like GPT-4. But Morgan said that's a good thing.

"It doesn't make any sense to train one model that tries to be good at everything when you could train many, tinier models for less money that are way better than GPT-4 could ever be at a specific task," he said.

(Correction: An earlier version of this article incorrectly said that some Symbolica employees had worked at Google DeepMind.)
