Category Archives: Cloud Hosting

You Can Get a Lifetime of iBrave Web Hosting on Sale for $80 Right Now – Lifehacker

We may earn a commission from links on this page. Deal pricing and availability subject to change after time of publication.

You can get a lifetime subscription to iBrave Cloud Web Hosting on sale for $79.99 right now (reg. $899.10) using the promo code ENJOY20 through April 16 at 11:59 p.m. PT, though prices can change at any time. iBrave's lifetime web hosting subscription gives you access to a control panel equipped with 80 one-click install apps for platforms like WordPress, Magento, and Joomla, and it comes with daily backups, unlimited SSD storage, monthly bandwidth, MySQL databases (limited to 1024 megabytes), and custom email addresses. It doesn't include domain names, but you can buy a new domain or use an existing one you already own.

See the original post here:
You Can Get a Lifetime of iBrave Web Hosting on Sale for $80 Right Now - Lifehacker

Future-proof your business: cloud storage without the climate cost – CloudTech News

With over half of all corporate data held in the cloud as of 2022, demand for cloud storage has never been higher. This has triggered extreme energy consumption throughout the data centre industry, leading to hefty greenhouse gas (GHG) emissions.

Worryingly, the European Commission now estimates that by 2030, EU data centre energy use will increase from 2.7% to 3.2% of the Union's total demand. This would put the industry's emissions almost on par with pollution from the EU's international aviation.

Despite this, it must be remembered that cloud storage is still far more sustainable than the alternatives.

Why should we consider cloud storage to be sustainable?

It's important to put the energy used by cloud storage into context and consider the savings it can make elsewhere. Thanks to file storage and sharing services, teams can collaborate and work wherever they are, removing the need for large offices and everyday commuting.

As a result, businesses can downsize their workspaces as well as reduce the environmental impact caused by employees travelling. In fact, it's estimated that working from home four days a week can reduce nitrogen dioxide emissions by around 10%.

In addition, cloud storage reduces reliance on physical, on-premises servers. For small and medium-sized businesses (SMBs), having on-site servers or their own data centres can be expensive, whilst running and cooling the equipment requires a lot of energy, which means more CO2 emissions.

Cloud servers, on the other hand, offer a more efficient alternative. Unlike on-premises servers that might only be used to a fraction of their capacity, cloud servers in data centres can be used much more effectively. They often operate at much higher capacities, thanks to virtualisation technology that allows a single physical server to act as multiple virtual ones.

Each virtual server can be used by different businesses, meaning fewer physical units are needed overall. This means less energy is required to power and cool the hardware, leading to a reduction in overall emissions.
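
As a rough illustration of this consolidation effect, the sketch below uses hypothetical workload counts and utilisation figures (not numbers from any provider) to show how packing virtual servers onto shared hosts shrinks the physical fleet and, with it, the power and cooling required.

```python
# Illustrative only: hypothetical utilisation figures, not measurements from the article.
# Compares how many physical servers are needed when each business runs its own
# under-utilised machine versus sharing virtualised hosts in a data centre.

import math

workloads = 120                  # hypothetical number of business workloads
avg_utilisation_on_prem = 0.15   # each dedicated on-prem server runs at ~15% capacity
target_utilisation_cloud = 0.65  # consolidated cloud hosts run at ~65% capacity

# One dedicated server per workload on-premises.
on_prem_servers = workloads

# With virtualisation, workloads are packed onto fewer hosts until the
# target utilisation is reached.
cloud_servers = math.ceil(workloads * avg_utilisation_on_prem / target_utilisation_cloud)

print(f"On-premises servers: {on_prem_servers}")
print(f"Virtualised cloud hosts: {cloud_servers}")
print(f"Reduction in physical machines: {1 - cloud_servers / on_prem_servers:.0%}")
```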

Furthermore, on-premises servers often have higher storage and computing capacity than needed just to handle occasional spikes in demand, which is an inefficient use of resources. Cloud data centres, by contrast, pool large amounts of equipment to manage these spikes more efficiently.

In 2022, the average power usage effectiveness (PUE) of data centres improved. This indicates that cloud providers are using energy more efficiently and helping companies reduce their carbon footprint with cloud storage.
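
Power usage effectiveness is simply total facility energy divided by the energy consumed by the IT equipment itself, so a lower PUE means less overhead spent on cooling and power distribution. The short sketch below works through the calculation with invented figures to show how a PUE improvement translates into saved energy.

```python
# Power usage effectiveness (PUE) = total facility energy / IT equipment energy.
# The figures below are hypothetical and only illustrate how a lower PUE
# translates into less overhead energy for the same IT load.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

it_load_kwh = 1_000_000  # hypothetical annual IT energy use

older_facility = pue(total_facility_kwh=1_800_000, it_equipment_kwh=it_load_kwh)      # PUE 1.8
efficient_facility = pue(total_facility_kwh=1_200_000, it_equipment_kwh=it_load_kwh)  # PUE 1.2

overhead_saved_kwh = (older_facility - efficient_facility) * it_load_kwh
print(f"PUE improved from {older_facility:.1f} to {efficient_facility:.1f}, "
      f"saving {overhead_saved_kwh:,.0f} kWh of cooling and power overhead per year.")
```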

A sustainable transition: three steps to create green cloud storage

Importantly, there are ways to further improve the sustainability of services like cloud storage, which could translate to energy savings of 30-50% through greening strategies. So, how can ordinary cloud storage be turned into green cloud storage? We believe there are three fundamental steps.

Firstly, businesses should carefully consider location. This means choosing a cloud storage provider that's close to a power facility, because distance matters: if electricity travels a long way between generation and use, a proportion is lost. In addition, data centres located in cooler climates or underwater environments can cut down on the energy required for cooling.

Next, businesses should quiz green providers about what they're doing to reduce their environmental impact. For example, powering their operations with wind, solar or biofuels minimises reliance on fossil fuels, thereby lowering GHG emissions. Some facilities will house large battery banks to store renewable energy and ensure a continuous, eco-friendly power supply.

Last but certainly not least, technology offers powerful ways to enhance the energy efficiency of cloud storage. Some providers have been investing in algorithms, software and hardware designed to optimise energy use. For example, introducing frequency scaling or AI and machine learning algorithms can significantly improve how data centres manage power consumption and cooling.

For instance, Google's use of its DeepMind AI has reduced its data centre cooling bill by 40%, a prime example of how intelligent systems can work towards greater sustainability.
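
As a loose illustration of the frequency-scaling idea mentioned above, the toy sketch below maps server load to a clock speed and a rough power estimate. The thresholds and the simple cubic power model are invented for illustration and do not reflect how any particular data centre controller works.

```python
# Toy illustration of frequency scaling: drop the CPU clock when load is low so the
# server draws less power. Real data centres use far more sophisticated controllers,
# including the ML-based approaches described in the article.

FREQ_STEPS_GHZ = [1.2, 2.0, 2.8, 3.5]

def pick_frequency(cpu_load: float) -> float:
    """Map utilisation (0.0-1.0) to the lowest clock step that can serve the load."""
    if cpu_load < 0.25:
        return FREQ_STEPS_GHZ[0]
    if cpu_load < 0.50:
        return FREQ_STEPS_GHZ[1]
    if cpu_load < 0.75:
        return FREQ_STEPS_GHZ[2]
    return FREQ_STEPS_GHZ[3]

def relative_power(freq_ghz: float) -> float:
    """Dynamic CPU power grows steeply with frequency (and voltage); use a cubic proxy."""
    return (freq_ghz / FREQ_STEPS_GHZ[-1]) ** 3

for load in (0.1, 0.4, 0.9):
    f = pick_frequency(load)
    print(f"load={load:.0%} -> {f} GHz, ~{relative_power(f):.0%} of peak dynamic power")
```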

At a time when the world is warming up at an accelerating rate, selecting a cloud storage provider that demonstrates a clear commitment to sustainability can have a significant impact. In fact, major cloud providers like Google, Microsoft and Amazon have already taken steps to make their cloud services greener, such as by pledging to move to 100 per cent renewable sources of energy.

Cloud storage without the climate cost

The cloud's impact on businesses is undeniable, but our digital growth risks an unsustainable future with serious environmental consequences. However, businesses shouldn't have to choose between innovation and the planet.

The answer lies in green cloud storage. By embracing providers powered by renewable energy, efficient data centres, and innovative technologies, businesses can reap the cloud's benefits without triggering a devastating energy tax.

The time to act is now. Businesses have a responsibility to choose green cloud storage and be part of the solution, not the problem. By making the switch today, we can ensure the cloud remains a convenient sanctuary, not a climate change culprit.

Check out the upcoming Cloud Transformation Conference, a free virtual event for business and technology leaders to explore the evolving landscape of cloud transformation. Book your free virtual ticket to deep dive into the practicalities and opportunities surrounding cloud adoption. Learn more here.

Tags: climate, costs, Storage, sustainability

Original post:
Future-proof your business: cloud storage without the climate cost - CloudTech News

On Cloud Computing And Learning To Say No – Hackaday

Do you really need that cloud hosting package? If you're just running a website, no matter whether large or very large, you probably don't and should settle for basic hosting. This is the point that [Thomas Millar] argues, taking the reader through an example of a big site like Business Insider and their realistic bandwidth needs.

From a few stories on Business Insider, the HTML itself comes down to about 75 kB compressed, so for their approximately 200 million visitors a month they'd churn through 30 TB of bandwidth for the HTML, assuming two articles read per visitor.

This comes down to 11 MB/s of HTML, which can be generated dynamically even with slow interpreted languages, or, as [Thomas] says, would allow for the world's websites to be hosted on a system featuring a single 192-core AMD Zen 5-based server CPU. So what's the added value here? The reduction in latency and, of course, increased redundancy from having the site served from 2-3 locations around the globe. Rather than falling into the trap of edge cloud hosting and the latency of inter-datacenter calls, databases should ideally be located on the same physical hardware and synchronized between datacenters.
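
As a quick sanity check, the short sketch below reruns that arithmetic with the figures quoted above (75 kB of compressed HTML per article, roughly 200 million monthly visitors, two articles read per visitor):

```python
# Back-of-the-envelope check of the bandwidth figures quoted above.

visitors_per_month = 200_000_000
articles_per_visitor = 2
html_kb_per_article = 75

monthly_bytes = visitors_per_month * articles_per_visitor * html_kb_per_article * 1_000
seconds_per_month = 30 * 24 * 3600

print(f"Monthly HTML transfer: {monthly_bytes / 1e12:.0f} TB")                        # ~30 TB
print(f"Average throughput:    {monthly_bytes / seconds_per_month / 1e6:.1f} MB/s")   # ~11.6 MB/s
```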

In this scenario [Thomas] also sees no need for Docker, scaling solutions and virtualization, massively cutting down on costs and complexity. For those among us who run large websites (in the cloud or not), do you agree or disagree with this notion? Feel free to touch off in the comments.

Original post:
On Cloud Computing And Learning To Say No - Hackaday

Seekr finds the AI computing power it needs in Intel's cloud – CIO

Intel's cloud gives developers access to thousands of the latest Intel Gaudi AI accelerator and Xeon CPU chips, combined to create a supercomputer optimized for AI workloads, Intel says. It is built on open software, including Intel's oneAPI, to support the benchmarking of large-scale AI deployments.

After it began evaluating cloud providers in December, Seekr ran a series of benchmarking tests before committing to the Intel Developer Cloud and found it resulted in 20% faster AI training and 50% faster AI inference than the metrics the company could achieve on premises with current-generation hardware.

"Ultimately for us, it comes down to: Are we getting the latest-generation AI compute, and are we getting it at the right price?" Clark says. "Building [AI] foundation models at multibillion-parameter scale takes a large amount of compute."

Intel's Gaudi 2 AI accelerator chip has previously received high marks for performance. The Gaudi 2 chip, developed by Intel-acquired Habana Labs, outperformed Nvidia's A100 80GB GPU in tests run in late 2022 by AI company Hugging Face.

Seekr's collaboration with Intel isn't all about performance, however, says Clark. While Seekr needs cutting-edge AI hardware for some workloads, the cloud model also enables the company to limit its use to just the computing power it needs in the moment, he notes.

"The goal here is not to use the extensive AI compute all of the time," he says. "Training a large foundation model versus inferencing on a smaller, distilled model take different types of compute."

Follow this link:
Seekr finds the AI computing power it needs in Intel's cloud - CIO

Competition under threat as cloud giants selectively invest in startups, watchdog says – TechRadar

In a recent address at the 72nd Antitrust Law Spring Meeting in Washington DC, UK Competition and Markets Authority (CMA) CEO Sarah Cardell delved into the potential impact of the current AI landscape on competition and consumer protection.

Emphasizing AI's transformative benefits, Cardell implied that tech giants like Amazon, Google, and Microsoft have been selectively investing in specific startups.

Her speech, recorded via speaker's notes, highlighted the need for proactive measures to ensure fair, open, and effective competition in the AI landscape.

Reflecting on the CMA's ongoing scrutiny of the cloud and AI industry, Cardell outlined a series of risks that current practices pose.

Concerns were raised about tech giants controlling critical inputs (such as compute and data) for foundation model development, potentially restricting access for other companies. Such restriction could lead to incumbent firms protecting their existing positions from disruption, which Cardell fears might even lead to market power in other markets beyond AI.

The CMA's CEO also noted that partnerships involving key players in the AI landscape, such as the big three, could reinforce their existing positions of market power and dominance, making it even harder for smaller companies to reach the top.

To address these concerns, the CMA has already committed to enhancing its merger review process to assess the implications of partnerships and arrangements and to monitor current and emerging partnerships more closely, including that of Microsoft and OpenAI.


Finally, the CMA has plans to examine AI accelerator chips and their impact on the foundation model value chain.

As the AI landscape continues to evolve, it's clear that the CMA remains committed to its existing investigations into dominant companies and encouraging competition.

Read the rest here:
Competition under threat as cloud giants selectively invest in startups, watchdog says - TechRadar

Google is now authorized to host classified data in the cloud – Nextgov/FCW

Google Public Sector achieved a major milestone Tuesday for its U.S. government customers, announcing Defense Department authorization for its cloud platform to host secret and top secret classified data.

The accreditation instantly makes Google's cloud offering more competitive with rivals Amazon Web Services, Microsoft and Oracle as they vie for billions of dollars' worth of business within the Defense Department and intelligence agencies.

"We're thrilled to announce another significant milestone for Google Public Sector: the authorization of Google Distributed Cloud Hosted to host Top Secret and Secret missions for the U.S. Intelligence Community, and Top Secret missions for the Department of Defense," Leigh Palmer, the company's vice president of delivery and operations, said at the Google Cloud Next conference in Las Vegas. "This authorization underscores Google Public Sector's commitment to empowering government agencies with secure, cutting-edge technology."

Google Distributed Cloud is the company's air-gapped solution built to meet the U.S. government's most stringent security standards. The suite of accredited tools includes capabilities like compute and storage, data analytics, machine learning and artificial intelligence, and does not need to be connected to the public internet to function.

According to Palmer, Google developed its air-gapped cloud with a security-first approach, leveraging zero trust principles, Google best practices and the latest federal guidelines in application and hardware security, cryptography and cybersecurity.

The accreditation represents the culmination of a pivot back to defense work for Google, which in 2018 opted not to continue controversial AI work it was doing under a Pentagon program called Project Maven in part over employee concerns. In 2022, the tech giant formed a new division, Google Public Sector, in part to target a growing government market that spends more than $100 billion on technology each year.

The Defense Department and intelligence agencies represent a significant portion of that spending, and the ability to host secret and top secret government data now allows Google Public Sector to compete for task orders against Amazon Web Services, Microsoft and Oracle on two multi-billion dollar contracts: the Central Intelligence Agency's C2E contract and the Pentagon's Joint Warfighting Cloud Capability contract.

Even before the accreditation, Google Public Sector performed work for the Army, Defense Innovation Unit and Air Force.

"Google Cloud is committed to being a trusted partner and enabling public sector agencies to achieve their goals with the highest levels of security and innovation," Palmer said.

See the original post:
Google is now authorized to host classified data in the cloud - Nextgov/FCW

Amazon CEO says GenAI may be the biggest technology transformation since the cloud – TechRadar

In his annual year-end letter to shareholders, Amazon CEO Andy Jassy highlighted the significance of generative artificial intelligence not just for the company's profits but for the entire technological landscape.

Jassy likened its impact to that of the advent of the cloud, reflecting a growing recognition of the power of GenAI among tech workers.

The news came as the company reported 12% year-on-year revenue growth to a staggering $575 billion.

Amazon Web Services (AWS), Amazon's cloud division that manages the generative AI side of operations, reported slightly higher revenue growth of 13% year over year. The division's $91 billion in revenue accounted for 15.8% of the company's total.

In the letter, Jassy stated: "Generative AI may be the largest technology transformation since the cloud (which itself is still in the early stages), and perhaps since the Internet."

Jassy also commented on GenAI's comparative simplicity, sharing that while moving from on-prem to the cloud requires a large migration effort, generative AI can be layered on top of existing work in the cloud.

He added: "The amount of societal and business benefit from the solutions that will be possible will astound us all."


The journey towards harnessing generative AI's full potential isn't without its challenges, though. Jassy acknowledged the technology's appetite for computing resources, software services, and infrastructure.

Looking ahead, the CEO touched upon the importance of collaboration and diversity in the AI landscape, adding that "the vast majority [of GenAI applications] will ultimately be built by other companies."

Regarding the cloud computing business, the company's last full financial year started off with widespread cost-reducing efforts, including layoffs, but by the end, things started to look up thanks to investments in in-house components.

More broadly, though, Amazon's CEO stated that the company is "not done lowering our cost to serve," indicating that further efficiency measures, including layoffs, could be on the cards. Amazon's layoffs in the past three months have affected only a few hundred employees, making them significantly smaller than previous efforts.

Go here to read the rest:
Amazon CEO says GenAI may be the biggest technology transformation since the cloud - TechRadar

Cloud Native Computing Foundation Announces the Winners of its First CloudNativeHacks Hackathon – PR Newswire

First-ever hackathon showcases innovative and unique solutions to pressing sustainability challenges

PARIS, March 22, 2024 /PRNewswire/ -- KubeCon + CloudNativeCon Europe -- The Cloud Native Computing Foundation (CNCF), which builds sustainable ecosystems for cloud native software, today announced the first, second, and third-place winners of CloudNativeHacks.

During KubeCon + CloudNativeCon Europe 2024, CNCF, in collaboration with the United Nations, hosted its first-ever hackathon, CloudNativeHacks, to focus on advancing the delivery of the UN Sustainable Development Goals (SDGs). Sponsored by Heroku, the aim was for individuals and teams to develop a proof of concept to help support these development goals, working together to solve pressing issues and contribute meaningfully to creating a better, more sustainable world.

CloudNativeHacks Winners

First place: Team Urban Unity - Carolina Lindqvist and Syed Ali Raza Zaidi, which addresses SDG 11: Sustainable Cities and Communities and SDG 17: Partnerships for the Goals.

Team Urban Unity, from Switzerland and the UK, developed a proof of concept for a platform that democratizes urban planning policies. They created a map where urban planners can drop a pin if they want to create a new building; local neighbors who would perhaps prefer a park can then provide feedback about it. It is a platform for the people and run by the people.

Second place: Team Forrester - Radu-Stefan Zamfir, Alex-Andrei Cioc, George-Alexandru Tudurean, which addresses SDG 13: Climate Action and SDG 15: Life on Land.

Team Forrester, from Romania, developed an app that spreads awareness and handles automatic detection and monitoring of deforestation globally, leveraging AI, open source software, and publicly available data such as satellite imagery.

Third place: Team Potato - Inhwan Hwang, Sungjin Hong, and Myeonghun Yu, which addresses SDG 5: Gender Equality and SDG 11: Sustainable Cities and Communities.

Team Potato from Korea developed a project that creates a crowd-guarded route, a collaborative map using luminance to gauge the safety of a chosen walking path.

"As we celebrate ten years of Kubernetes, it has been an honor to see #TeamCloudNative come together to use cloud native technologies to help create a more sustainable future," said Arun Gupta, Vice President and General Manager, Open Ecosystem at Intel and Chairperson of the Governing Board for CNCF. "I am so proud of the participants and want to congratulate the winners."

"Congratulations to the winners of the first-ever CloudNativeHacks event," said Priyanka Sharma, Executive Director of the Cloud Native Computing Foundation. "It was inspiring to see the diverse and innovative ideas and I am thrilled that cloud native technologies were the building blocks for creating applications that help impact our world for generations to come."

"As a technology that accelerates the development of applications, it is great to support the first ever CloudNativeHacks and see applications that help with the sustainability of our planet built in just two days," said Bob Wise, CEO of Heroku. "We look forward to seeing how these applications can change the future."

The hackathon was presided over by a panel of judges from the cloud native community and the United Nations, including:

Winners received $10,000, $5,000, and $2,500 respectively.


About Cloud Native Computing Foundation

Cloud native computing empowers organizations to build and run scalable applications with an open source software stack in public, private, and hybrid clouds. The Cloud Native Computing Foundation (CNCF) hosts critical components of the global technology infrastructure, including Kubernetes, Prometheus, and Envoy. CNCF brings together the industry's top developers, end users, and vendors, and runs the largest open source developer conferences in the world. Supported by more than 800 members, including the world's largest cloud computing and software companies, as well as over 200 innovative startups, CNCF is part of the nonprofit Linux Foundation. For more information, please visit http://www.cncf.io.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page. Linux is a registered trademark of Linus Torvalds.

Media Contact: Jessie Adams-Shore, The Linux Foundation, [emailprotected]

SOURCE Cloud Native Computing Foundation

See the original post:
Cloud Native Computing Foundation Announces the Winners of its First CloudNativeHacks Hackathon - PR Newswire

What Is Cloud Security? – CrowdStrike

Cloud security definition

Cloud security is a discipline of cybersecurity focused on the protection of cloud computing systems. It involves a collection of technologies, policies, services, and security controls that protect an organization's sensitive data, applications, and environments.

Cloud computing, commonly referred to as the cloud, is the delivery of hosted services like storage, servers, and software through the internet. Cloud computing allows businesses to reduce costs, accelerate deployments, and develop at scale.

Cloud security goals:


As companies continue to transition to a fully digital environment, the use of cloud computing has become increasingly popular. But cloud computing comes with cybersecurity challenges, which is why understanding the importance of cloud security is essential in keeping your organization safe.

Over the years, security threats have become incredibly complex, and every year, new adversaries threaten the field. In the cloud, all components can be accessed remotely 24/7, so not having a proper security strategy puts gathered data in danger all at once. According to the CrowdStrike 2024 Global Threat Report, cloud environment intrusions increased by 75% from 2022 to 2023, with a 110% year-over-year increase in cloud-conscious cases and a 60% year-over-year increase in cloud-agnostic cases. Additionally, the report revealed that the average breakout time for interactive eCrime intrusion activity in 2023 was 62 minutes, with one adversary breaking out in just 2 minutes and 7 seconds.

Cloud security should be an integral part of an organization's cybersecurity strategy, regardless of its size. Many believe that only enterprise-sized companies are victims of cyberattacks, but small and medium-sized businesses are some of the biggest targets for threat actors. Organizations that do not invest in cloud security face immense issues, including potentially suffering a data breach and failing to stay compliant when managing sensitive customer data.


An effective cloud security strategy employs multiple policies and technologies to protect data and applications in cloud environments from every attack surface. Some of these technologies include identity and access management (IAM) tools, firewall management tools, and cloud security posture management tools, among others.

Organizations also have the option to deploy their cloud infrastructures using different models, which come with their own sets of pros and cons.

The four available cloud deployment models are:

Public cloud: This type of model is the most affordable, but it is also associated with the greatest risk, because a breach in one account puts all other accounts at risk.

Private cloud: The benefit of this deployment model is the level of control it provides individual organizations. Additionally, it provides enhanced security and ensures compliance, making it the most leveraged model by organizations that handle sensitive information. However, it is expensive to use.

Hybrid cloud: The biggest benefit from this deployment model is the flexibility and performance it offers.

Most organizations use a third-party CSP such as Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure to host their data and applications. Strong cloud security involves shared responsibility between these CSPs and their customers.

It is important not to rely only on security measures set by your CSP; you should also implement security measures within your organization. Though a solid CSP should have strong security to protect from attackers on their end, if there are security misconfigurations, privileged access exploitations, or some form of human error within your organization, attackers can potentially move laterally from an endpoint into your cloud workload. To avoid issues, it is essential to foster a security-first culture by implementing comprehensive security training programs to keep employees aware of cybersecurity best practices, common ways attackers exploit users, and any changes in company policy.

The shared responsibility model outlines the security responsibilities of cloud providers and customers based on each type of cloud service: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS).

This table breaks down the shared responsibility by cloud service model:

SaaS: Misconfigurations, workloads and data

PaaS: Endpoints, user and network security, and workloads

IaaS: Endpoints, user and network security, workloads, and data

The dynamic nature of cloud security opens up the market to multiple types of cloud security solutions, which are considered pillars of a cloud security strategy. These core technologies include:

It is essential to have a cloud security strategy in place. Whether your cloud provider has built-in security measures or you partner with the top cloud security providers in the industry, you can gain numerous benefits from cloud security. However, if you do not employ or maintain it correctly, it can pose challenges.

The most common benefits include:

Unlike traditional on-premises infrastructures, the public cloud has no defined perimeters. The lack of clear boundaries poses several cybersecurity challenges and risks.

Failure to properly secure each of these workloads makes the application and organization more susceptible to breaches, delays app development, compromises production and performance, and puts the brakes on the speed of business.

In addition, organizations using multi-cloud environments tend to rely on the default access controls of their cloud providers, which can become an issue in multi-cloud or hybrid cloud environments. Insider threats can do a great deal of damage with their privileged access, knowledge of where to strike, and ability to hide their tracks.

To address these cloud security risks, threats, and challenges, organizations need a comprehensive cybersecurity strategy designed around vulnerabilities specific to the cloud. Read this post to understand 12 security issues that affect the cloud.

Though cloud environments can be open to vulnerabilities, there are many cloud security best practices you can follow to secure the cloud and prevent attackers from stealing your sensitive data.

Some of the most important practices include:

Why embrace Zero Trust?

The basic premise of the Zero Trust principle in cloud security is to not trust anyone or anything in or outside the organization's network. It ensures the protection of sensitive infrastructure and data in today's world of digital transformation. The principle requires all users to be authenticated, authorized, and validated before they get access to sensitive information, and they can easily be denied access if they don't have the proper permissions.
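
As a minimal sketch of that per-request flow (authenticate, then authorize, then validate context), the hypothetical example below denies access unless every check passes. The token store, roles, and device-posture check are stand-ins for illustration, not any vendor's actual API.

```python
# A minimal Zero Trust-style gate: every request is authenticated, authorized,
# and validated before data is returned, regardless of where it originates.
# All names, roles, and checks here are hypothetical.

from dataclasses import dataclass

@dataclass
class Request:
    token: str               # e.g. a signed session or OIDC token (stand-in)
    role: str                # role claimed by the caller
    device_compliant: bool   # posture check result (patched, managed device, etc.)
    resource: str

VALID_TOKENS = {"token-abc"}                    # stand-in for real token verification
PERMISSIONS = {"analyst": {"reports"},          # which roles may read which resources
               "admin": {"reports", "billing"}}

def handle(req: Request) -> str:
    if req.token not in VALID_TOKENS:                          # authenticate
        return "401 Unauthorized"
    if req.resource not in PERMISSIONS.get(req.role, set()):   # authorize
        return "403 Forbidden"
    if not req.device_compliant:                               # validate context/posture
        return "403 Forbidden (device not compliant)"
    return f"200 OK: {req.resource} data"

print(handle(Request("token-abc", "analyst", True, "reports")))   # 200 OK
print(handle(Request("token-abc", "analyst", True, "billing")))   # 403 Forbidden
```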

CrowdStrike has redefined security with the world's most complete CNAPP that secures everything from code to cloud and enables the people, processes, and technologies that drive modern enterprise.

With a 75% increase in cloud-conscious attacks in the last year, it is essential for your security teams to partner with the right security vendor to protect your cloud, prevent operational disruptions, and protect sensitive information in the cloud. CrowdStrike continuously tracks 230+ adversaries to give you industry-leading intelligence for robust threat detection and response.

The CrowdStrike Falcon platform contains a range of capabilities designed to protect the cloud. CrowdStrike Falcon Cloud Security stops cloud breaches by consolidating all the critical cloud security capabilities that you need into a single platform for complete visibility and unified protection. Falcon Cloud Security offers cloud workload protection; cloud, application, and data security posture management; CIEM; and container security across multiple environments.

Get a free, no obligation Cloud Security Risk Review for instant and complete visibility into your entire cloud estate, provided through agentless scanning with zero impact to your business.

Read the original post:
What Is Cloud Security? - CrowdStrike

Nvidia Blackwell GPUs to be offered via AWS, Microsoft, Google, Oracle and others – DatacenterDynamics

On the back of Nvidia announcing its latest Blackwell line of GPUs, the hyperscale cloud providers have all announced plans to offer access to them later this year.

Oracle, Amazon, Microsoft, and Google have all said they will offer access to the new GPUs through their respective cloud platforms at launch. Lambda and NexGen, both GPU-cloud providers, have said they will soon be offering access to Blackwell hardware.

The launch of the H100 Hopper GPU saw niche cloud providers including CoreWeave and Cirrascale get first access, with H100 instances coming to the big cloud platforms later.

Malaysian conglomerate YTL, which recently moved into developing data centers, is also set to host and offer access to a DGX supercomputer.

Singaporean telco Singtel is also set to launch a GPU cloud service later this year.

Applied Digital, a US company previously focused on hosting cryptomining hardware, has also announced it will host Blackwell hardware.

Oracle said it plans to offer Nvidia's Blackwell GPUs via its OCI Supercluster and OCI Compute instances. OCI Compute will adopt both the Nvidia GB200 Grace Blackwell Superchip and the Nvidia Blackwell B200 Tensor Core GPU.

Oracle also said Nvidia's Oracle-based DGX Cloud cluster will consist of GB200 NVL72 systems combining 72 Blackwell GPUs and 36 Grace CPUs with fifth-generation NVLink. Access will be available through GB200 NVL72-based instances.

"As AI reshapes business, industry, and policy around the world, countries and organizations need to strengthen their digital sovereignty in order to protect their most valuable data," said Safra Catz, CEO of Oracle.

"Our continued collaboration with Nvidia and our unique ability to deploy cloud regions quickly and locally will ensure societies can take advantage of AI without compromising their security."

Google announced its adoption of the new Nvidia Grace Blackwell AI computing platform. The company said Google is adopting the platform for various internal deployments and will be one of the first cloud providers to offer Blackwell-powered instances.

The search and cloud company also said the Nvidia H100-powered DGX Cloud platform is now generally available on Google Cloud. The company said it will bring Nvidia GB200 NVL72 systems, which combine 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink, to its cloud infrastructure in future.

"The strength of our long-lasting partnership with Nvidia begins at the hardware level and extends across our portfolio - from state-of-the-art GPU accelerators, to the software ecosystem, to our managed Vertex AI platform," said Google Cloud CEO Thomas Kurian.

"Together with Nvidia, our team is committed to providing a highly accessible, open, and comprehensive AI platform for ML developers."

Microsoft also said it will be one of the first organizations to bring the power of Nvidia Grace Blackwell GB200 and advanced Nvidia Quantum-X800 InfiniBand networking to the cloud and will be offering them through its Azure cloud service.

Microsoft also announced the general availability of its Azure NC H100 v5 virtual machine (VM), based on the Nvidia H100 NVL platform, which is designed for midrange training and inferencing.

"Together with Nvidia, we are making the promise of AI real, helping drive new benefits and productivity gains for people and organizations everywhere," said Satya Nadella, chairman and CEO, Microsoft.

"From bringing the GB200 Grace Blackwell processor to Azure to new integrations between DGX Cloud and Microsoft Fabric, the announcements we are making today will ensure customers have the most comprehensive platforms and tools across every layer of the Copilot stack, from silicon to software, to build their own breakthrough AI capability."

Blackwell hardware is also coming to Amazon Web Services (AWS). The companies said AWS will offer the GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs on its cloud platform.

AWS will offer the Blackwell platform, featuring GB200 NVL72, with 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink. The cloud provider also plans to offer EC2 instances featuring the new B100 GPUs deployed in EC2 UltraClusters. GB200s will also be available on Nvidia's DGX Cloud within AWS.

"The deep collaboration between our two organizations goes back more than 13 years, when together we launched the world's first GPU cloud instance on AWS, and today we offer the widest range of Nvidia GPU solutions for customers," said Adam Selipsky, CEO at AWS.

"Nvidia's next-generation Grace Blackwell processor marks a significant step forward in generative AI and GPU computing. When combined with AWS's powerful Elastic Fabric Adapter networking, Amazon EC2 UltraClusters' hyper-scale clustering, and our unique Nitro System's advanced virtualization and security capabilities, we make it possible for customers to build and run multi-trillion parameter large language models faster, at massive scale, and more securely than anywhere else."

In its own announcement, GPU cloud provider Lambda Labs said it would be one of the first companies to deploy the latest Blackwell hardware.

The GB200 Grace Blackwell Superchip and B200 and B100 Tensor Core GPUs will be available through Lambda's On-Demand & Reserved Cloud, and Blackwell-based DGX SuperPODs will be deployed in Lambda's AI-Ready Data Centers.

NexGen, a GPU cloud and Infrastructure-as-a-Service provider, also announced it would be among the first cloud providers to offer access to Blackwell hardware.

The company said it will provide these services as part of its AI Supercloud, which is itself planned for Q2 2024.

"Being one of the first Elite Cloud Partners in the Nvidia Partner Network to offer Nvidia Blackwell-powered products to the market marks a major milestone for our business," said Chris Starkey, CEO of NexGen Cloud.

"Through Blackwell-powered solutions, we will be able to equip customers with the most powerful GPU offerings on the market, empowering them to drive innovation, whilst achieving unprecedented efficiencies. This will help unlock new opportunities across industries and enhance the way we use AI both now and in the future."

Malaysia's YTL, which is developing data centers in Johor, is moving to become an AI cloud provider.

The company this week announced the formation of YTL AI Cloud, a specialized provider of GPU-based computing. The new unit will deploy and manage one of the world's most advanced supercomputers on Nvidia's Grace Blackwell-powered DGX Cloud.

The YTL AI Supercomputer will reportedly surpass more than 300 exaflops of AI compute.

The supercomputer will be located in a facility at the 1,640-acre YTL Green Data Center Campus, Johor. The site will reportedly be powered via 500MW of on-site solar capacity.

YTL Power International Managing Director, Dato Seri Yeoh Seok Hong, said: "We are proud to be working with Nvidia and the Malaysian government to bring powerful AI cloud computing to Malaysia.

"We are excited to bring this supercomputing power to the Asia Pacific region, which has been home to many of the fastest-growing cloud regions and many of the most innovative users of AI in the world.

In the US, Applied Digital also said it would be "among the pioneering cloud service providers" offering Blackwell GPUs. Further details weren't shared.

Applied develops and operates next-generation data centers across North America to cater to high-performance computing (HPC). It was previously focused on hosting cryptomining hardware. The company also has a cloud offering through Sai Computing.

"Applied Digital demonstrates a profound commitment to driving generative AI, showcasing a deep understanding of its transformative potential. By seamlessly integrating infrastructure, Applied breathes life into generative AI, recognizing the critical role of GPUs and supporting data center infrastructure in its advancement," said Wes Cummins, CEO and chairman of Applied Digital.

Singaporean telco Singtel announced it will be launching its GPU-as-a-Service (GPUaaS) in Singapore and Southeast Asia in the third quarter of this year.

At launch, Singtel's GPUaaS will be powered by Nvidia H100 Tensor Core GPU-powered clusters that are operated in existing upgraded data centers in Singapore. In addition, Singtel - like everyone else - will be among the world's first to deploy GB200 Grace Blackwell Superchips.

Bill Chang, CEO of Singtel's Digital InfraCo unit and Nxera regional data center business, said: "We are seeing keen interest from the private and public sectors which are raring to deploy AI at scale quickly and cost-effectively.

"Our GPUaaS will run in AI-ready data centers specifically tailored for intense compute environments with purpose-built liquid-cooling technologies for maximum efficiency and lowest PUE, giving them the flexibility to deploy AI without having to invest and manage expensive data center infrastructure.

Original post:
Nvidia Blackwell GPUs to be offered via AWS, Microsoft, Google, Oracle and others - DatacenterDynamics