
Cloud Production Offers an Option for Sustainability in Media and Content Creation – Sports Video Group

Energy efficiency, resource sharing, dynamic scaling enhance environmental awareness

In this time of increasing awareness of environmental sustainability, players within the media-tech industry are seeking innovative solutions to reduce their carbon footprint. In one approach, the media- and content-creation sector is moving toward remote production in the cloud, a process that was accelerated by the pandemic.

Cloud production, leveraging the vast capabilities of cloud service providers, introduces a paradigm shift in how media is created, managed, and distributed. One of the primary sustainability advantages lies in cloud providers' reliance on renewable energy sources and their drive to maximize energy efficiency within the infrastructure. Many cloud providers invest in technologies, both hardware and software, that operate with high energy efficiency, reducing operational costs while contributing to a smaller overall carbon footprint.

Some more-sustainable data centers have incorporated cooling systems that use outdoor air and direct evaporative cooling to save both energy and water.

Resource sharing is a compelling factor driving sustainability in cloud production. A multi-tenant model enables multiple clients to efficiently share the same infrastructure, optimizing resource utilization, particularly in software-as-a-service applications. This collaborative approach minimizes the need for individual organizations to maintain excess capacity, leading to a more efficient use of computing resources.

Dynamic scaling, up or down, allows organizations to provision resources based on demand. This elasticity ensures that computing power is used only when necessary, reducing energy consumption during periods of lower demand and improving operational efficiency by avoiding unnecessary resource consumption.
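As a rough illustration of that scaling logic, here is a minimal Python sketch; the target utilization, instance limits, and metrics source are all hypothetical stand-ins for whatever a real cloud autoscaler would expose.

```python
# Hypothetical demand-based scaling loop: a stand-in metrics query
# drives a proportional scale-up/scale-down decision.
import random

TARGET_UTILIZATION = 0.60          # aim to keep instances ~60% busy
MIN_INSTANCES, MAX_INSTANCES = 1, 16

def current_utilization() -> float:
    # Placeholder for a real metrics query (e.g. average CPU load).
    return random.uniform(0.0, 1.0)

def desired_count(instances: int, utilization: float) -> int:
    # Proportional rule: size the fleet so observed load lands on target.
    raw = instances * utilization / TARGET_UTILIZATION
    return max(MIN_INSTANCES, min(MAX_INSTANCES, round(raw)))

instances = 4
for _ in range(3):
    util = current_utilization()
    instances = desired_count(instances, util)
    print(f"utilization={util:.0%} -> run {instances} instance(s)")
```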

The cloud production model can address the issue of hardware-lifecycle impact. Traditionally, organizations managing on-premises production face challenges related to frequent hardware upgrades and replacement, contributing to electronic waste. Cloud providers are responsible for infrastructure maintenance, including upgrades. By managing the lifespan of equipment, cloud providers help reduce electronic-waste generation.

Despite the advantages, cloud providers ultimately operate data centers, sometimes very large facilities with a very large physical footprint. Running 24 hours per day, seven days per week, these facilities use a great deal of energy, and, according to recent estimates, cloud computing generates 1.55% of all global carbon emissions. Understanding this impact is critical to improving the situation. The energy used to run these facilities and the water used to cool them are both being scrutinized in an effort to reduce their environmental footprint. By using renewable energy sources and innovative cooling technologies, cloud providers are actively moving toward more-sustainable operations.

Production in the cloud represents a growing sustainable option in the media- and content-creation industry. Energy efficiency, resource sharing, dynamic scaling, and a commitment to green initiatives by cloud providers offer a more environmentally conscious approach to content creation. This shift not only aligns with global sustainability goals but also positions the industry as a responsible steward of the planet's resources. Though not yet perfect, cloud production is reshaping the future of media in a more sustainable manner as technology continues to advance.

Note: The views expressed here are solely those of the author and do not represent positions of SVG.


Ampere to integrate its CPUs with Qualcomm’s cloud-based AI chips for running large models – SiliconANGLE News

Chipmaker Ampere Computing LLC, which makes powerful Arm-based processors for data center servers, said today it's partnering with Qualcomm Inc. on a new joint solution that will enable enhanced artificial intelligence inference at lower costs.

The new partnership was announced today during an annual update that saw Ampere detail its 2024 product roadmap, which is focused on advancing high-performance, power-efficient computing for cloud and AI workloads. Aside from the partnership with Qualcomm, one of the major highlights to look forward to is the launch of its new AmpereOne platform, with the first product being a 12-channel, 256-core central processing unit built on an advanced N3 process node.

Ampere is an agnostic, Arm-based chip developer that has forged strong partnerships with public cloud infrastructure giants such as Amazon Web Services Inc., Google Cloud and Microsoft Corp., helping them to design their own proprietary data center processors. For instance, Ampere worked with AWS to design that company's Graviton chips.

The company's main backer is another cloud infrastructure provider, Oracle Corp., which has invested more than $400 million in the company. Oracle was one of the first cloud companies to adopt Ampere's Altra central processing units in an effort to increase its competitiveness in the cloud computing industry.

Ampere has seen a lot of success in the cloud, in contrast to other Arm chip manufacturers such as Qualcomm, Marvell Technologies Inc., Advanced Micro Devices Inc. and Samsung Electronics Co. Ltd., which all have so far failed to make much impact in the market for Arm-based data center chips.

Ampere's Altra CPUs are said to be customized to run real-time AI workloads such as chatbots, data analytics and video content analysis, offering rapid inference capabilities at a fraction of the cost of Nvidia Corp.'s powerful graphics processing units, which power the majority of AI applications today.

In today's update, Ampere explained that it will work with Qualcomm to create a joint, cloud-based AI processor that combines its most advanced Ampere Altra CPUs with Qualcomm's low-powered Cloud AI 100 Ultra inference cards for data center servers.

According to Ampere Chief Product Officer Jeff Wittich, the new offering is designed to enable large language model inference for some of the most powerful generative AI models currently available. "We've taken our Ampere CPUs and paired them with AI 100 Ultra accelerators from Qualcomm," he told SiliconANGLE in an interview. "They will be deployed via Supermicro servers initially and eventually come to others."

Wittich explained that AI is driving enormous demand for lower-cost data center processors, as companies are becoming increasingly concerned about the high costs of running workloads on GPUs, which are also becoming more difficult to procure.

"For AI training, most companies have GPUs and that's fine, but they're too power-hungry and expensive for inference workloads," Wittich explained. "A lot of companies are saying they cannot continue to deploy lots of high-powered and expensive GPUs. We can help solve that problem."

Wittich said CPUs are ideal for many LLM models, with Ampere's regular chips more than able to cater to the smaller ones of 7 billion parameters. And soon, they'll also be able to handle much bigger LLMs of up to 70 billion parameters. "For those, that's where Qualcomm's solutions come in," he said.

The combined Ampere/Qualcomm offering will be enhanced by the launch of Ampere's most advanced AmpereOne CPU, which the company claims can deliver 40% greater performance than any existing CPU available today, even without any exotic platform designs. The 192-core, 12-channel memory chip will go into production soon and is expected to become available later this year.

Ampere said its efforts in the AI industry are being validated by strong customer adoption. One of its newest customers is Meta Platforms Inc., which is now running its Llama 3 LLM on Ampere CPUs in Oracle's cloud platform. In an update, the company showed data that illustrates how Llama 3 running on a 128-core Ampere Altra chip with no GPU delivers the same performance as an Nvidia A10 GPU paired with an x86 CPU, while using two-thirds less energy.

Last year, Ampere announced that it had become a founding member of the new AI Platform Alliance, which sees a number of chipmakers combine their expertise to develop more sophisticated platforms for AI compute. The Alliance's latest initiative will see Ampere leverage the open interface technology in its chips to incorporate rival chipmakers' intellectual property into future CPUs. This suggests that the collaboration with Qualcomm might just be the first of many more in the coming years.

Ampere Chief Executive Renee James said the increasing power requirements and energy challenges associated with AI mean that the company's low-power Arm chip designs are becoming more relevant than ever.

"We started down this path six years ago because it is clear it is the right path," she said. "Low power used to be synonymous with low performance. But Ampere has proven that isn't true. We have pioneered the efficiency frontier of computing and delivered performance beyond legacy CPUs in an efficient computing envelope."

James said the use of power-hungry GPUs for running AI workloads is unsustainable in the long term, as the demands of AI models grow exponentially. What's more, Nvidia is reportedly struggling to keep up with enterprise demand for its specialized processors. "We believe that the future data center infrastructure has to consider how we retrofit existing air-cooled environments with upgraded compute," she said.

In addition, James stated her belief that the industry needs to build more sustainable data centers that do not put excessive strain on existing energy grids. "That is what we enable at Ampere," she said.

With reporting from Robert Hof



Triumphs in the Cloud: Celebrating the 2024 Cloud Computing Awards with Business Awards UK – InvestorsObserver


HALIFAX, United Kingdom, May 16, 2024 (GLOBE NEWSWIRE) --

Business Awards UK is proud to announce the winners and finalists of the 2024 Cloud Computing Awards, celebrating companies that have made significant advancements in cloud technology, addressing critical digital challenges and pushing the boundaries of innovation.

Business Awards UK 2024 Cloud Computing Awards Winners

Business Awards UK 2024 Cloud Computing Awards Finalists

Charting New Heights: Exploring Advances in Cloud Computing

The 2024 Cloud Computing Awards signify more than just recognition of technological innovation; they underscore a transformative shift across various industries driven by cloud technology. This year has seen exceptional progress in areas crucial for organisational resilience and efficiency, including disaster recovery solutions that allow businesses to maintain continuity even in the face of global disruptions. The enhanced capabilities for robust cloud infrastructure have proven indispensable, ensuring businesses can operate seamlessly, regardless of geographical or physical constraints.

Moreover, the push towards sophisticated cloud management software reflects a growing need for tools that not only optimise performance but also safeguard data integrity and compliance across platforms. These developments are crucial in a time when cyber threats are becoming more sophisticated, and regulatory demands more stringent. The advancements in cloud technology have also facilitated better resource management, enabling companies to leverage real-time analytics and machine learning for more informed decision-making and strategic planning.

As the sector evolves, the role of cloud computing in enhancing operational agility cannot be overstated. Companies now have the capability to deploy scalable solutions rapidly, adjust to market demands with unprecedented flexibility, and significantly reduce time to recovery after incidents, thereby minimising impact and ensuring business as usual even under challenging conditions.

This year's Cloud Computing Awards highlight not only the achievements of individual companies but also the collective stride towards a more resilient, efficient, and innovative future. As businesses continue to navigate a landscape marked by rapid technological change and increasing digital threats, the role of cloud computing as a cornerstone of digital strategy becomes more evident and essential.

For more information about the 2024 Cloud Computing Awards, please contact Business Awards UK.

Company Details:

Organization: Business Awards UK

Contact Person: Mark Byrne, Director

Email: mark@business-awards.uk

Website: https://business-awards.uk

Contact Number: +44 1422 771042

Country: United Kingdom

City: HALIFAX

The information provided does not constitute endorsement of any activities or claims mentioned in the press release. Neither KISS PR, nor its distribution partners, are responsible for the validity or accuracy of the information provided. Decisions based on the content of the press release are at the reader's own risk. For further inquiries about the company or the content issued, please contact the source company directly. Details about the source company are included in the press release.


From Cards to Clouds: A Family Tree of Developer Tools – The New Stack

What with the 60th birthday of BASIC and SQL turning 50, I felt inspired to look back into software development’s past. Then when I saw Ian Miell’s diagram that he originally made for a presentation (he’s a partner at Container Solutions), I could immediately see how it would make a great device to hang some history on.

Not every tool has been placed in the diagram — only those that Ian thought represented a considered advancement. For example, Ansible, a configuration tool I’m quite familiar with, is missing. Many developers in mid-career today will doubtless see Kubernetes as the recognizable final result in the “cloud native” tree. But this post is more about what came before. So let’s jump back.

While punch cards sound truly arcane, they were used in our school back in the 1980s. Pupils in very early Computer Studies classes wrote instructions with punch cards in a language called CESIL (Computer Education in Schools Instruction Language). These were sent to a mainframe to be processed, with the results coming back on a printout. Needless to say, very few kids got anything to run. And computers remained uncool.

While Graphical User Interfaces (GUIs) like Microsoft Windows helped democratize who could use computing among the populace in general, it was the shell script where programmers first saw how a process could be controlled by a sequence of commands, and how this was a separate domain from the program code itself.

One of the first quiet revolutions was to stop thinking in terms of a sequence of commands. The conceptual jump from sequential coding was the declarative form — not that anyone used that term. This was only possible when there was enough memory available and system space to both separate the concept of what needed to be done, and how to do it. SQL is a good example of a declarative language because we state what we want to create or see, but make no mention of exactly how or where (or even why) it should happen. This started the path of computers being a tool for computing, yet with both things retaining a subtly separate identity. It also went slightly against the idea of the “programmer” as a worker who robotically entered lines of code, and ushered us toward the age of the “developer”.
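To make that contrast concrete, here is a minimal sketch in Python, using the standard library's sqlite3 module and an invented jobs table: the imperative version spells out the loop and the filter, while the declarative SQL states only the rows we want and leaves the "how" to the engine.

```python
# Declarative vs. imperative, side by side, on an invented table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (name TEXT, status TEXT)")
conn.executemany("INSERT INTO jobs VALUES (?, ?)",
                 [("build", "ok"), ("test", "failed"), ("deploy", "ok")])

# Imperative: we spell out the iteration and the filtering ourselves.
failed = [row for row in conn.execute("SELECT name, status FROM jobs")
          if row[1] == "failed"]

# Declarative: we state the condition; the engine plans the execution.
also_failed = list(conn.execute(
    "SELECT name, status FROM jobs WHERE status = 'failed'"))

assert failed == also_failed
print(also_failed)  # [('test', 'failed')]
```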

In the early 90s when I first wanted to build an executable program using the C language, I needed Make. It was both a declarative tool, and one of the earliest software production automation tools. As we remembered when looking at Zig, C needs to bring the source code together, include header files, compile the language into object code, and then link up the required libraries into a single executable format. So there was a chain of events that needed to be done, and these were inferred from the instructions and the type of file target.
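Make's core inference, rebuilding a target only when it is missing or older than its sources, can be sketched in a few lines of Python. This is an illustration of the idea rather than Make itself, and the file names are invented:

```python
# Timestamp-driven rebuild check, the heart of what Make infers
# from a rule's target and prerequisites.
import os
import subprocess

def needs_rebuild(target: str, sources: list[str]) -> bool:
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(src) > target_mtime for src in sources)

sources = ["main.c", "util.c"]   # invented source files
if needs_rebuild("app", sources):
    # The "recipe": compile and link, as a Makefile rule would.
    subprocess.run(["cc", "-o", "app", *sources], check=True)
```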

Looking across the diagram from make, a tar file was one of the first organizational attempts to make portable sets of files for deployment. I would have seen this first in a zip file, but it introduced the same concept — it was used to make a target system look like the development system. This was an early look into configuration management.
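That pack-and-unpack idea is easy to reproduce with Python's standard library tarfile module; the paths here are invented. Bundle a set of files on the development machine, extract them on the target, and the two now look alike:

```python
# Pack-and-unpack sketch of tar-style deployment, using invented paths.
import tarfile

# On the development machine: bundle the files to deploy.
with tarfile.open("release.tar.gz", "w:gz") as tar:
    for path in ["app", "config.ini"]:
        tar.add(path)

# On the target machine: unpack, recreating the same layout.
with tarfile.open("release.tar.gz", "r:gz") as tar:
    tar.extractall("deploy/")
```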

Source control (or version control, to the right of tar on the diagram) took quite some time to become relevant. Files and storage were expensive, and programs were small. But as the size and time investment grew longer, and the concept of collaboration became commonplace, tooling was needed. CVS (Concurrent Versioning System) was the first recognized client-server system that tracked changes in a code repository. I remember a conversation with my team about moving from SVN to Git. Git was not a simple sell, because it had the three basic steps of adding, committing and pushing code, versus the two of previous source control systems. Git treated your local machine as a valid repository.
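The three-step workflow mentioned above is easy to see in action. Here is a hedged sketch that drives it from Python in a throwaway directory, assuming the git CLI is installed; the inline user.name/user.email settings are only there so the commit succeeds in a clean environment:

```python
# Git's add/commit(/push) steps, driven from a temporary directory.
import pathlib
import subprocess
import tempfile

repo = pathlib.Path(tempfile.mkdtemp())

def git(*args: str) -> None:
    subprocess.run(["git", *args], cwd=repo, check=True)

git("init")                                  # the laptop is a full repository
(repo / "hello.txt").write_text("hello\n")
git("add", "hello.txt")                      # step 1: stage
git("-c", "user.name=demo",
    "-c", "user.email=demo@example.com",
    "commit", "-m", "first commit")          # step 2: commit locally
# Step 3 would be `git push`, which needs a remote; it is the extra
# step that SVN users had to get used to.
```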

Working with scripts — or recipes — for any of the main configuration managers (Ansible, Chef or Puppet) meant that by the 2000s developers had to be fully cognizant of a pipeline. This brought them closer to other parts of the production process, like Quality Assurance (QA), as they had a position for testing further down the pipeline.

The “distributed” part of Git that mattered wasn’t that it didn’t need a central storage location — most organizations still ran one with BitBucket, GitLab or GitHub — it was that the “source of truth” could be distributed to branches reasonably well. The differences between the “main branch” and the current “release branch” could be methodically understood. This was a major technique for maintaining sanity while collaborating. Branches could be coupled with environments, like Staging, Testing and Production.

Java, the major language in this period, used Maven for dependency management to pull down missing artifacts. In an attempt to resolve everything, it would often pull down what felt like the entire internet to make sure your local repository had everything it needed to build your project.

Jenkins, the successful result of a project fork, was the key to the success of Continuous Integration/Continuous Delivery (CI/CD). It automated the process of pulling code from source control, building it, then delivering it to an environment, perhaps for automated testing. I remember someone creating physical traffic lights to show whether our central build was working or not. Trying to leave work on Friday evening with the traffic lights at red was bad, and got people into the habit of not checking in breaking changes at the end of the week.
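At its heart, that automation is a watch-build-report loop. A toy version, with the repository path and build command as invented placeholders, might look like this:

```python
# Toy CI loop: watch a repo for new commits, rebuild, report status.
import subprocess
import time

REPO = "/path/to/checkout"          # placeholder working copy
BUILD_CMD = ["make", "test"]        # placeholder build/test command

def head_commit() -> str:
    out = subprocess.run(["git", "rev-parse", "HEAD"], cwd=REPO,
                         check=True, capture_output=True, text=True)
    return out.stdout.strip()

last_built = None
while True:
    subprocess.run(["git", "pull", "--ff-only"], cwd=REPO, check=True)
    commit = head_commit()
    if commit != last_built:
        result = subprocess.run(BUILD_CMD, cwd=REPO)
        light = "green" if result.returncode == 0 else "red"
        print(f"{commit[:8]}: traffic light is {light}")
        last_built = commit
    time.sleep(60)
```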

Finally, we come to the dawning of the cloud era. When you literally can’t touch your infrastructure, it becomes even more important to state in advance what you want to do. We are now in the 2010s. As teams of developers were now usually handed laptops, there was the need to somehow capture the Linux experience onto Windows or (if you were lucky) a Mac. I remember using early Virtual Machines (VMs) like Oracle’s VirtualBox. If we tried using a Linux OS with a GUI, the VM would have to handle the tricky bits like making sure your laptop’s mouse worked correctly within, say, Ubuntu.

The principle of isolation was exercised in VMs, and finally honed in the container, which didn’t try to abstract an entire physical machine.

Docker has been the lynchpin of cloud adoption as it allows the developer to commune with the container, without worrying so much about where the container is. The responsibility of worrying about the overall infrastructure could then shift elsewhere. Instead of a project, we now had an image.

Today we are at the bottom of the diagram, where the focus has shifted to container management, orchestration, scaling and monitoring.

The cloud has presented us with new opportunities, and a host of different problems. One company, Amazon, has succeeded in controlling the mindset of cloud development — our artifacts or components are now EC2 and S3. Developers have been introduced to the vagaries of the internet, from peak capacity to the geography and legality of data storage.

Right now, we await further repercussions of Generative AI. One could argue that the concept of “cloud native” is no longer so prevalent, as the GitOps flagship Weaveworks is no longer active.

But the diagram isn’t necessarily about cloud, or declarative programming, or even about DevOps. I’ve worked with most of the tools mentioned here without thinking in these terms. It is, of course, about a meandering journey as we spend more time on working out how to spend as little time as possible dealing with the results of changes.

The history here is also as much about the value of collaboration, as well as the continuing search for the Source of Truth. And yet the future will still consist of a developer saying the words “but it works on my machine” for some time to come.



FedRAMP board launched to support safe, secure use of cloud services in government – GSA

May 14, 2024

New board will replace Joint Authorization Board and govern FedRAMP provisional authorization

WASHINGTON – The U.S. General Services Administration has announced a new board that will serve as the official governing body for the Federal Risk and Authorization Management Program (FedRAMP), replacing the Joint Authorization Board (JAB).

The FedRAMP program's commitment is to make it safe and easy for the U.S. government to take full advantage of cloud services to help agencies deliver on their missions.

"The new board brings a wealth of technology, cybersecurity, and engineering expertise from federal agency executives who will work to champion the vision of FedRAMP and make that vision successful," said Eric Mill, GSA's executive director for cloud strategy. "The board will approve and help guide FedRAMP policies, bring together the federal community to create a robust authorization ecosystem, and be a critical partner to the FedRAMP program in our shared goal of a more streamlined customer experience and stronger federal cybersecurity."

The Office of Management and Budget established the FedRAMP Board and appointed the inaugural board members under the FY23 National Defense Authorization Act. "The FedRAMP Board intentionally comprises members from across government, bringing diverse perspectives from the frontlines of cyber and IT modernization efforts," stated Deputy Federal Chief Information Officer Drew Myklegard. "By harnessing their collective expertise, the board will play a vital role in adapting the FedRAMP Program to address the evolving cyber landscape and enable the accelerated adoption of secure cloud technologies across the government."

The inaugural board members are:

The initial focus of the board is to ensure a smooth transition from the JAB and any work in progress. The board is also focused on defining metrics for the program, engaging with agencies to perform joint-agency and single-agency authorizations, and working with FedRAMP to continuously monitor FedRAMP-authorized cloud products and services.

For more information about FedRAMP, please visit fedramp.gov.

###

About GSA: GSA provides centralized procurement and shared services for the federal government, managing a nationwide real estate portfolio of nearly 370 million rentable square feet, overseeing about $100 billion in products and services via federal contracts, and delivering technology services that serve millions of people across dozens of federal agencies. GSA's mission is to deliver the best customer experience and value in real estate, acquisition, and technology services to the government and the American people. For more information, visit GSA.gov and follow us at @USGSA.


Ampere Scales AmpereOne Product Family to 256 Cores, Announces Joint Work with Qualcomm Cloud AI … – PR Newswire

New Ampere CPU will provide 40% more performance than any CPU currently on the market

SANTA CLARA, Calif., May 16, 2024 /PRNewswire/ -- Ampere Computing today released its annual update on upcoming products and milestones, highlighting the company's continued innovation and invention around sustainable, power-efficient computing for the Cloud and AI. The company also announced that it is working with Qualcomm Technologies, Inc. to develop a joint solution for AI inferencing using Qualcomm Technologies' high-performance, low-power Qualcomm Cloud AI 100 inference solutions and Ampere CPUs.

Semiconductor industry veteran and Ampere CEO Renee James said the increasing power requirements and energy challenge of AI is bringing Ampere's silicon design approach around performance and efficiency into focus more than ever. "We started down this path six years ago because it is clear it is the right path," James said. "Low power used to be synonymous with low performance. Ampere has proven that isn't true. We have pioneered the efficiency frontier of computing and delivered performance beyond legacy CPUs in an efficient computing envelope."

James continued to highlight the growing problem of the rapid advance to AI: energy. "The current path is unsustainable. We believe that the future data center infrastructure has to consider how we retrofit existing air-cooled environments with upgraded compute, as well as build environmentally sustainable new data centers that fit the available power on the grid. That is what we enable at Ampere."

Chief Product Officer Jeff Wittich shared Ampere's vision for what the company is referring to as "AI Compute", which incorporates traditional cloud native capabilities all the way to AI. "Our Ampere CPUs can run a range of workloads from the most popular cloud native applications to AI. This includes AI integrated with traditional cloud native applications, such as data processing, web serving, media delivery, and more."

James and Wittich also both highlighted the company's upcoming new AmpereOne platform, announcing that a 12-channel, 256-core CPU is ready to go on the N3 process node.

Along with updates on the company's direction and vision, this year's update included several news highlights:

About Ampere

Ampere is a modern semiconductor company designing the future of cloud computing with the world's first Cloud Native Processors. Built for the sustainable Cloud with the highest performance and best performance per watt, Ampere processors accelerate the delivery of all cloud computing applications. Ampere Cloud Native Processors provide industry-leading cloud performance, power efficiency and scalability. For more information visit Ampere Computing.

*For more information on performance claims, see footnotes here.

Qualcomm Cloud AI is a product of Qualcomm Technologies, Inc., and/or its subsidiaries. Qualcomm is a trademark or registered trademark of Qualcomm Incorporated.

Media Contact: Alexa Korkos, Ampere Computing, [emailprotected]

SOURCE Ampere


Rafay’s PaaS Now Supports GPU Workloads for AI/ML in the Cloud – The New Stack

Platform as a Service (PaaS) provider Rafay has extended its Kubernetes management platform to better support enterprise AI and ML workloads, with a focus on GPU resource management, democratizing access to ML pipelines, and assisting with model testing and selection.

"The new capabilities make compute resources for AI instantly consumable by developers and data scientists with enterprise-grade guardrails," said Haseeb Budhani, co-founder and CEO of Rafay Systems.

Rafay is a Kubernetes company that helps customers manage their environments, including Kubernetes, CI/CD pipelines and deployment platforms.

The company noticed customers deploying AI workloads on Kubernetes using Rafay’s product, and identified three gaps they could address, Budhani told The New Stack. The first gap is efficiently consuming and sharing expensive GPU resources. Rafay extended its existing PaaS to provide GPU resources to internal customers, with features like time limits and cost management.

“What we saw happen was our customers were deploying AI workloads on Kubernetes, and using our product to do it, unbeknownst to us,” Budhani said.

The second gap is democratizing access to machine learning (ML) pipelines beyond just data scientists. Rafay introduced an AI/ML workbench on top of their platform to make consuming these pipelines easier for everyone in an enterprise.

The third gap is testing and selecting the best ML models. Rafay added an “LLM playground” layer between the PaaS and ML workbench to allow users to quickly test and select the best models for their needs, Budhani said.

Rafay’s newly added support for GPU workloads helps enterprises and managed service providers power a new GPU-as-a-service experience for internal developers and customers.

Rafay’s new AI Suite provides standards-based pipelines for machine learning operations (MLOps) and large language model operations (LLMOps) to quicken the development and deployment of AI applications.

Moreover, as the global GPU-as-a-service market is expected to reach $17.2 billion by 2030, organizations are seeking scalable solutions to connect their data scientists and developers to accelerated computing infrastructure.

Rafay’s PaaS now addresses issues like environment standardization, self-service consumption of compute, secure use of multitenant environments, cost optimization, and auditability for GPU-based workloads.

“GPU-accelerated workloads are a growing part of enterprise portfolios and organizations need scalable tools to manage them,” said Justin Warren, founder and principal analyst at PivotNine. “Customers also want to maintain tight control over the sovereignty of sensitive data, a challenge that is only growing in complexity. It’s good to see Rafay providing enterprises with options beyond the narrow vision of a few major cloud providers.”

The new features for GPU workloads include developer and data scientist self-service, AI-optimized user workspaces, GPU matchmaking, and GPU virtualization.

“Beyond the multicluster matchmaking capabilities and other powerful PaaS features that deliver a self-service compute consumption experience for developers and data scientists, platform teams can also make users more productive with turnkey MLOps and LLMOps capabilities available on the Rafay platform,” Budhani said in a statement. “This announcement makes Rafay a must-have partner for enterprises, as well as GPU and sovereign cloud operators, looking to speed up modern application delivery.”

NTT DATA has been an early user of Rafay’s new AI capabilities and has collaborated with the Rafay team to help deliver its new GPU support and AI Suite to market.

“Rafay’s approach satisfies users responsible for application development and management, making it easy to cross-collaborate within enterprises’ security and budget boundaries,” said Mike Jones, vice president of partners and alliances at NTT DATA, in a statement.



AWS to invest €7.8bn in European Sovereign Cloud – DatacenterDynamics

Amazon Web Services (AWS) will invest €7.8 billion ($8.47bn) in its sovereign cloud infrastructure in Germany through 2040.

The Sovereign Cloud region will be AWS' first and will be located in the state of Brandenburg, Germany. It is expected to launch by the end of 2025.

"This investment reinforces our commitment to offer customers the most advanced set of sovereignty controls, privacy safeguards, and security features available in the cloud. We're investing heavily in new local talent and infrastructure, which will help provide the operational sovereignty our customers require," said Max Peterson, vice president of Sovereign Cloud at AWS.

"This is an exciting milestone, and we're looking forward to the ways that our customers and partners across Europe will drive further innovation with the AWS European Sovereign Cloud."

AWS expects that this will bring around 2,800 full-time equivalent jobs to Germany each year, including construction, facility maintenance, engineering, telecommunications, and other jobs within the broader local economy.

Additional high-skilled permanent positions will be created to build and operate the Sovereign Cloud, including software engineers, systems developers, and solutions architects.

"High performing, reliable, and secure infrastructure is the most important prerequisite for an increasingly digitalized economy and society. Brandenburg is making progress here. In recent years, we have set on a course to invest in modern and sustainable data center infrastructure in our state, strengthening Brandenburg as a business location," said Prof. Dr. Jörg Steinbach, Brandenburg's Minister of Economic Affairs, Labour and Energy. "State-of-the-art data centers for secure cloud computing are the basis for a strong digital economy. I am pleased Amazon Web Services (AWS) has chosen Brandenburg for a long-term investment in its cloud computing infrastructure for the AWS European Sovereign Cloud."

AWS first announced plans for a European Sovereign Cloud region in October 2023.

In September 2023, Virtus Data Centres revealed that it was planning a 300MW data center campus in Brandenburg. Google has a cloud region in Brandenburg.

Oracle has launched two sovereign cloud regions in Frankfurt, Germany, and Madrid, Spain. Digital Realty is the host partner for the EU Sovereign Cloud region location in Madrid, and Equinix is the host partner for the region location in Frankfurt.

Microsoft and Google also have sovereign cloud offerings but are being marketed through partners. In 2021, Orange and Capgemini launched France-based cloud company Bleu to sell Microsoft Azure services from local data centers. In January 2024, Bleu confirmed that it would launch at the end of the year.

Google has partnered with T-Systems in Germany, Thales in France, and Proximus in Belgium and Luxembourg. A leaked report suggests Google views its trusted partner cloud initiative as its "most important program" and believes it can corner a $100 billion market in Europe and Asia via data sovereignty-compliant clouds.

In April 2024, Vultr launched a sovereign and private cloud offering that would be deployed in its 32 data center locations across six continents.


The end of centralized data? Samsung teams with Expanso on distributed processing – VentureBeat


Today, Samsung Next, the venture capital arm of Samsung Electronics, announced it is making a strategic investment in Seattle-based Expanso, a startup looking to power distributed data processing.

The companies expect this move to help both of them, streamlining how Samsung handles the vast swathe of data coming from its globally distributed workloads and providing Expanso with more reach and growth.

"Samsung is a global company with products and services that span every aspect of customers' lives," David Aronchick, CEO of Expanso, said in a statement. "Teaming up with Samsung Next is a no-brainer, and is core to our mission of tackling those pervasive data challenges. Our role is more about bridging gaps and enabling our customers to freely choose how and where to handle their varied workloads, all while ensuring they maintain full control over their data."

Today, enterprises are leaning heavily towards globally distributed workloads and generating data across various environments, including cross-regions, cross-cloud, IoT and edge devices and even on unreliable networks.


Usually, teams tend to extract insights from this data by moving all of it across networks through complex ETL pipelines and centralizing everything in a cloud data platform.

The approach works well (allowing for BI/AI applications), but it consumes a great deal of time and money.

Even building custom peer-to-peer computing solutions doesn't work, as it increases developer time and maintenance costs and forces teams to relearn and reinvent 40 years of distributed computing knowledge.

Expanso takes a decentralized approach to this challenge by providing enterprises with Bacalhau, an open-source distributed data processing platform.

The offering runs on the distributed systems organizations have already deployed (or plan to) and schedules data processing jobs against the data right where it's generated, be it in the cloud or on the edge.

This enables teams to analyze data in situ, reducing the operational overhead of replicating data centers or managing data movement between clouds.

It also increases the speed of data jobs while bolstering security.
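The in-situ idea is simple to sketch in generic Python (this is not Bacalhau's actual API; the log lines and sites are invented): each site reduces its own raw data to a small aggregate, and only the aggregates travel over the network.

```python
# Generic "process data where it lives" sketch: raw logs stay put,
# only small per-site summaries are combined centrally.
from collections import Counter

def summarize_locally(log_lines: list[str]) -> Counter:
    # Runs at the edge site; raw lines never leave it.
    return Counter(line.split()[0] for line in log_lines if line)

site_a = ["ERROR disk full", "INFO ok", "ERROR timeout"]
site_b = ["INFO ok", "WARN slow"]

# Only these tiny Counters cross the network, not the raw logs.
global_view = summarize_locally(site_a) + summarize_locally(site_b)
print(global_view)  # e.g. Counter({'ERROR': 2, 'INFO': 2, 'WARN': 1})
```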

Now, with the strategic backing of Samsung Next, Expanso's Bacalhau is expected to support the global distributed workloads of the electronics major, which has platforms spanning from mobile devices and computers all the way up to heavy industrial deployments.

This will also be critical as the company, and the industry at large, continues to witness a surge in global data generation driven by emerging technologies such as generative AI and 5G networks.

"It would be hard to imagine a better fit for Expanso's distributed computing architecture, in terms of providing faster answers, more reliable execution, and integrated security, even over spotty networks. There is no doubt that partnering with Samsung will be a wonderful catalyst for all of our use cases," Aronchick told VentureBeat.

While it remains unclear how exactly Samsung will leverage Bacalhau across its environments, Andy Duong, an investor at Samsung Next, did indicate some applications, including video processing on the smallest edge devices and sharing sensitive petabyte (or larger) datasets between enormous organizations.

"We're excited to invest in Expanso for many reasons, including the potential for new applications driven by real-time data analysis, more efficient resource utilization than in traditional data centers, and the ability to empower users with greater data privacy control. We believe an approach that recognizes both data gravity and the underused power of distributed devices, while giving developers the same reliability and verifiability as running in a traditional data center, will be part of every organization's IT infrastructure in the future," Duong told VentureBeat.

According to Aronchick, since the launch of Bacalhau's public demo last year, the platform has been used to run over 2 million jobs across use cases. The exact revenue stats remain unclear, but some of the company's clients are heavyweights such as the U.S. Navy, University of Maryland, Prelinger Labs and WeatherXM.

"From distributed ETL, to log processing, to Edge ML (including video processing), we have seen customers take our platform and use it in tons of ways we could never have imagined. For each significant case, we have been able to build out solutions and open-source the project so that other folks can do the exact same thing. This means that whether you are in retail, manufacturing, transportation, or high-traffic web/mobile applications, you can use Bacalhau from Expanso quickly and easily, with a lot of the basics taken care of for you," he added.

Moving ahead, the company plans to focus on adapting its platform to all the ways people are already using it with easier frameworks, integration into more services, and better getting-started examples. This, the CEO hopes, will also shake off many of the reservations about building distributed computing architectures.



Canalys adds to narrative of surging cloud spending | Microscope – ComputerWeekly.com

There has already been some research indicating that cloud spending has defied economic gravity and grown in the first few months of 2024.

Adding to findings from the likes of the Cloud Industry Forum (CIF) and Daisy Corporate Services is an analysis of Q1 from market watcher Canalys.

The analyst found that global cloud spending increased by 21% in Q1, with the top hyperscalers, AWS, Microsoft Azure and Google Cloud, accounting for 66% of total spending.

The cloud market is feeling the impact of artificial intelligence (AI), which is driving a fair amount of the current public cloud investment as users look for increased storage and computing power.

As a result, Canalys has seen customers change their focus from concentrating on optimising cloud budgets to investing in projects that involve AI integration.

The first quarter saw a number of enterprise customers entering into long-term commitments with the large hyperscalers, indicating that some significant projects were being planned.

"The convergence of AI and cloud represents a transformative juncture, reshaping how businesses approach technology for innovation and growth," said Alex Smith, vice-president at Canalys. "Businesses must navigate the complexities of optimising costs associated with AI infrastructure, including compute resources, storage and data processing, while ensuring that investments in AI technologies yield tangible returns."

Canalys has also noted that AI is causing customers to consider more than just their cloud choices, and many are reevaluating their entire technology stacks.

As AI starts to roll out across more of the infrastructure, there are opportunities for more workloads and data to be moved to the cloud.

That movement spells an opportunity for the channel, with many customers finding the transition of workloads far from straightforward and often costly, opening the door for a trusted advisor that can smooth the process. "Looking ahead, cloud service providers as a whole will endeavour to capitalise on this trend by embedding AI in their products and solutions, making AI integration not something novel, but the norm," said Smith.

All of the top three hyperscalers have been working hard on increasing channel activity, but they are also pitching slightly different offerings to the market, making customers opt for different flavours of public cloud.

"There is significant variation in the strategies of the top three hyperscalers, reflected in their differing growth rates," said Canalys analyst Yi Zhang. "Microsoft's end-to-end portfolio is proving to be a strong competitive moat, while Google's strength in AI is giving it a strong tailwind. However, AWS's recent US$4bn investment in Anthropic for generative AI and its ongoing AI integration in its cloud services underscores a determination to stay ahead of the pack as business priorities shift to AI."

Earlier this week, CIF shared the findings from its "Tough times, but innovation springs internal" whitepaper, revealing that spending had continued among UK users.

The attractions of cloud continued to be the flexibility it gave around spending and the agility to spin up and down requirements.
