Category Archives: Cloud Computing
Oracle and NVIDIA to Deliver Sovereign AI Worldwide – NVIDIA Blog
At GTC today, Oracle and NVIDIA announced an expanded collaboration to deliver sovereign AI solutions to customers around the world. Oracle's distributed cloud, AI infrastructure, and generative AI services, combined with NVIDIA's accelerated computing and generative AI software, are enabling governments and enterprises to deploy AI factories.
These AI factories can run cloud services locally, and within a country's or organization's secure premises with a range of operational controls, supporting sovereign goals of diversifying and boosting economic growth.
"As AI reshapes business, industry, and policy around the world, countries and organizations need to strengthen their digital sovereignty in order to protect their most valuable data," said Safra Catz, CEO of Oracle. "Our continued collaboration with NVIDIA and our unique ability to deploy cloud regions quickly and locally will ensure societies can take advantage of AI without compromising their security."
"In an era where innovation will be driven by generative AI, data sovereignty is a cultural and economic imperative," said Jensen Huang, founder and CEO of NVIDIA. "Oracle's integrated cloud applications and infrastructure, combined with NVIDIA accelerated computing and generative AI services, create the flexibility and security nations and regions require to control their own destiny."
Turnkey Solutions to Help Customers Meet Data Sovereignty
The combination of NVIDIA's full-stack AI platform with Oracle's Enterprise AI, deployable across OCI Dedicated Region, Oracle Alloy, Oracle EU Sovereign Cloud, and Oracle Government Cloud, offers customers a state-of-the-art AI solution that provides greater control over operations, location, and security to help support digital sovereignty.
Countries across the globe are increasingly investing in AI infrastructure that can support their cultural and economic ambitions. Across 66 cloud regions in 26 countries, customers can access more than 100 cloud and AI services spanning infrastructure and applications to support IT migration, modernization, and innovation.
The companies' combined offerings can be deployed via the public cloud or in a customer's data center in specific locations, with flexible operational controls. Oracle is the only hyperscaler capable of delivering AI and full cloud services locally, anywhere. OCI services and pricing are consistent across deployment types to simplify planning, portability, and management.
Oracle's cloud services leverage a range of NVIDIA's stack, including NVIDIA accelerated computing infrastructure and the NVIDIA AI Enterprise software platform, which includes the newly announced NVIDIA NIM inference microservices, built on the foundation of NVIDIA inference software such as NVIDIA TensorRT, NVIDIA TensorRT-LLM, and NVIDIA Triton Inference Server.
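NIM microservices expose an OpenAI-compatible HTTP endpoint, so existing client code can query a self-hosted model with minimal changes. The following is a minimal sketch, not taken from the announcement: the endpoint URL, model name, and API key are placeholder assumptions for a locally deployed NIM container.

```python
# Minimal sketch of querying a self-hosted NIM inference microservice through
# its OpenAI-compatible API. The URL, model name, and key are illustrative
# assumptions; substitute the values of the NIM container actually deployed.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local NIM endpoint
    api_key="not-used",                   # many local deployments ignore the key
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",      # example model; use whatever the NIM serves
    messages=[{"role": "user", "content": "Explain sovereign AI in one paragraph."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```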
Sovereign AI Pioneers
Avaloq, a leader in wealth management technology, selected OCI Dedicated Region to bring a complete OCI cloud region into its own data center.
"OCI Dedicated Region aligns with our commitment to ensure maximum control over data residency while providing access to the latest cloud infrastructure," said Martin Büchi, chief technology officer at Avaloq. "This supports us as we continue to drive the digital transformation of banks and wealth managers."
TEAM IM, a leading New Zealand information management services provider, chose Oracle Alloy to build New Zealand's first locally owned and operated hyperscale cloud, known as TEAM Cloud.
"Organizations in New Zealand are increasingly eager to harness the power of the cloud while safeguarding the integrity of their data within their own shores by leveraging a unique hyperscale cloud solution," said Ian Rogers, chief executive officer of TEAM IM. "With Oracle Alloy and the possibility of integrating the NVIDIA AI platform into our cloud services, we've been able to become a cloud services provider that can assist public sector, commercial and iwi organizations in navigating the intricacies of the digital landscape and optimizing their digital transformations."
e& UAE, the telecom arm of e& group, is collaborating with Oracle to enhance its AI capabilities and intends to deploy NVIDIA H100 Tensor Core GPU clusters within its OCI Dedicated Region.
"OCI will enable us to deploy NVIDIA H100 GPU clusters within our own OCI Dedicated Region, hosted at e& UAE data centers," said Khalid Murshed, chief technology and information officer (CTIO) of e& UAE. "This type of localization will allow us to accelerate AI innovation across the UAE and helps us develop new Gen AI applications and use cases at scale. This is in line with e& UAE's transformation efforts to pioneer innovation and shape the future of technology with our focus on driving excellence in AI to provide unparalleled customer experiences."
OCI Supercluster and OCI Compute Boosted with NVIDIA Grace Blackwell
To help customers address the ever-increasing needs of AI models, Oracle plans to take advantage of the latest NVIDIA Grace Blackwell computing platform, announced today at GTC, across OCI Supercluster and OCI Compute. OCI Supercluster will become significantly faster with new OCI Compute bare metal instances, ultra-low-latency RDMA networking, and high-performance storage. OCI Compute will adopt both the NVIDIA GB200 Grace Blackwell Superchip and the NVIDIA Blackwell B200 Tensor Core GPU.
The NVIDIA GB200 Grace Blackwell Superchip will power a new era of computing. GB200 delivers up to 30X faster real-time large language model (LLM) inference, 25X lower TCO, and requires 25X less energy compared to the previous generation of GPUs, supercharging AI training, data processing, and engineering design and simulation. NVIDIA Blackwell B200 Tensor Core GPUs are designed for the most demanding AI, data analytics, and high-performance computing (HPC) workloads.
NVIDIA NIM and CUDA-X microservices, including NVIDIA NeMo Retriever for retrieval-augmented generation (RAG) inference deployments, will also help OCI customers bring more insight and accuracy to their generative AI copilots and other productivity tools using their own data.
NVIDIA Grace Blackwell Comes to DGX Cloud on OCI
To meet escalating customer demand for increasingly complex AI models, the companies are adding NVIDIA Grace Blackwell to NVIDIA DGX Cloud on OCI. Customers will be able to access new GB200 NVL72-based instances through this co-engineered supercomputing service designed for energy-efficient training and inference in an era of trillion-parameter LLMs.
The full DGX Cloud cluster buildout will include more than 20,000 GB200 accelerators and NVIDIA CX8 InfiniBand networking, providing a highly scalable and performant cloud infrastructure. Each GB200 NVL72 system combines 72 Blackwell GPUs and 36 Grace CPUs connected by fifth-generation NVLink.
Availability
Oracle and NVIDIA's sovereign AI solutions are available immediately. To learn more, go to the Oracle sovereign AI page.
Read more here:
Oracle and NVIDIA to Deliver Sovereign AI Worldwide - NVIDIA Blog
Rackspace Launches SPOT, an Innovative Open Market Auction for Cloud Servers with Managed Kubernetes – Datanami
SAN ANTONIO, March 22, 2024 – Rackspace Technology, a leading end-to-end hybrid, multicloud, and AI technology solutions company, recently introduced Rackspace SPOT, the world's only open market auction for cloud servers.
Rackspace SPOT offers instant online sign-up and a customer-driven bidding process that allows users to set compute market prices, with capacity delivered as turnkey, fully managed Kubernetes clusters.
"Rackspace SPOT offers a unique open market auction model that provides cost-effective cloud infrastructure solutions for small to medium enterprises, startups, digital companies, and developers," said Lance Weaver, Rackspace Technology Chief Product and Technology Officer, Private Cloud Business Unit. "With high-availability Kubernetes clusters and market-based dynamic pricing, organizations can scale resources up or down as needed and obtain cloud infrastructure at the best possible price. The transparency of the auction process provides information to compare prices and choose the most cost-effective options."
Spot was co-developed by teams at Rackspace Technology and Platform9, the leading independent provider of SaaS-managed Kubernetes.
"We are pleased to contribute to Spot, a major innovation in the cloud computing market," said Sirish Raghuram, Co-founder and Chief Growth Officer at Platform9. "Spot features several Platform9 innovations, such as a new centrally hosted Kubernetes control plane that lowers costs to customers. This allows users to deploy a fully managed Kubernetes cluster from as little as $0.72 per month, which is two orders of magnitude cheaper than comparable alternatives."
Rackspace SPOT Key Features
Rackspace SPOT Key Benefits
Kubernetes Solutions
Leverage Kubernetes without the complexity of managing all the moving parts; you don't have to worry about configuring high availability, handling upgrades, or troubleshooting.
In addition, SPOT comes out of the box with two storage classes: Solid-State Drive (SSD) and Serial ATA (SATA) for persistent storage, a network-policy-capable Calico Container Network Interface (CNI), and load balancers.
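As a rough illustration of how those building blocks are consumed once a cluster is provisioned, the sketch below uses the Kubernetes Python client to request persistent storage and an externally exposed service. The storage-class name, namespace, and labels are assumptions for illustration, not names documented by Rackspace SPOT; check the classes actually exposed by the cluster before using them.

```python
# Sketch: request SSD-backed persistent storage and a LoadBalancer service on a
# managed Kubernetes cluster, assuming kubeconfig access is already set up.
from kubernetes import client, config

config.load_kube_config()  # reads the kubeconfig downloaded for the cluster
core = client.CoreV1Api()

# Request persistent storage; "ssd" is an assumed storage-class name.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        storage_class_name="ssd",
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

# Expose a workload externally through the cluster's load balancer support.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-web"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "demo-web"},  # hypothetical label on an existing deployment
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
core.create_namespaced_service(namespace="default", body=svc)
```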
Rich Add-Ons
Bring your own favorite K8s Helm charts or Operators today while our engineers are hard at work to include these out of the box.
To learn more about Rackspace SPOT, click here.
About Rackspace Technology
Rackspace Technology is a leading end-to-end hybrid, multicloud, and AI technology services company. We design, build, and operate our customers' cloud environments across all major technology platforms, irrespective of technology stack or deployment model. We partner with our customers at every stage of their cloud journey, enabling them to modernize applications, build new products, and adopt innovative technologies.
Source: Rackspace
Read more here:
Rackspace Launches SPOT, an Innovative Open Market Auction for Cloud Servers with Managed Kubernetes - Datanami
AWS: Revolutionising motorsports through cloud computing – British GT
SRO Motorsports Group has cultivated a longstanding, successful collaboration with Amazon Web Services (AWS), the world's most comprehensive and broadly adopted cloud. Spanning from 2019 to the present day, and beyond, AWS's presence across the global Fanatec GT World Challenge platform has only continued to grow, with AWS becoming the presenting entitlement sponsor in 2020.
SRO Motorsports Group and AWS are growing together ahead of another thrilling season, expanding their collaboration across the Fanatec GT World Challenge America, Europe, Asia, and Australia championships. AWS continues to stand as a beacon of innovation, sustainability, and global expansion in the realm of global motorsports.
AWS, serving as the designated global technology provider for SRO, extends its cutting-edge array of cloud services, artificial intelligence (AI), and machine learning throughout all SRO series worldwide. This significant collaboration offers an exceptional experience for teams, drivers, and fans, raising the bar for engagement and innovation in motorsports. AWS has the potential to revolutionize various aspects of SRO's operations through advanced cloud computing capabilities, with the aim of enhancing the efficiency, safety, and entertainment quotient of SRO racing events.
AWS's robust data analytics tools also enable SRO Motorsports Group to gather and analyze large amounts of data during races. This data encompasses crucial metrics such as car telemetry, driver performance, track conditions, and fan engagement. By harnessing AWS's machine learning algorithms, SRO can collect actionable insights from this data and as a result assist in facilitating real-time decision-making for teams, optimizing race strategies, and enhancing the overall competitiveness of the sport.
Additionally, sensors located across circuits capture timely data on track conditions, weather patterns, and environmental factors, all of which is seamlessly integrated within AWS's cloud computing infrastructure. This enables officials to optimize track management, make adjustments on the fly, and adapt event logistics. By leveraging AWS's capabilities, SRO Motorsports Group can ensure smoother race operations and minimize disruptions, thereby enhancing the overall experience for teams, drivers, and spectators alike.
AWS's contribution extends beyond the confines of the track, breaking down barriers in the digital, virtual, and real-life realms of motorsport to enrich the experience of fans around the world. Through AWS-powered video streaming and content delivery solutions, SRO Motorsports Group delivers high-definition, low-latency broadcasts of its events to global audiences. Moreover, AWS's personalized recommendation engines and interactive fan engagement platforms enhance viewer immersion, fostering a deeper connection between fans and the sport.
The collaboration between SRO and AWS has already seen significant achievements, including the RaceVision Powered by AWS concept, offering real-time insights and on-screen graphics that bring fans closer to the action with data like top speeds and cornering prowess. This collaboration has also been instrumental in advancing SRO's e-sports initiatives, leveraging AWS's virtual machines and media services to host race servers and livestream events with professional broadcast quality.
Looking ahead, the collaboration is committed to exploring new, innovative methods that utilize AWS's extensive set of global cloud-based products to further enhance GT racing around the world. AWS will also continue to play a pivotal role in SRO's Balance of Performance (BOP) criteria, a cornerstone of modern GT competition, by assuming data hosting responsibilities.
Follow this link:
AWS: Revolutionising motorsports through cloud computing - British GT
Microsoft and NVIDIA announce major integrations to accelerate generative AI for enterprises everywhere – Stories – Microsoft
REDMOND, Wash., and SAN JOSE, Calif., March 18, 2024 – At GTC on Monday, Microsoft Corp. and NVIDIA expanded their longstanding collaboration with powerful new integrations that leverage the latest NVIDIA generative AI and Omniverse technologies across Microsoft Azure, Azure AI services, Microsoft Fabric and Microsoft 365.
"Together with NVIDIA, we are making the promise of AI real, helping drive new benefits and productivity gains for people and organizations everywhere," said Satya Nadella, chairman and CEO, Microsoft. "From bringing the GB200 Grace Blackwell processor to Azure, to new integrations between DGX Cloud and Microsoft Fabric, the announcements we are making today will ensure customers have the most comprehensive platforms and tools across every layer of the Copilot stack, from silicon to software, to build their own breakthrough AI capability."
"AI is transforming our daily lives, opening up a world of new opportunities," said Jensen Huang, founder and CEO of NVIDIA. "Through our collaboration with Microsoft, we're building a future that unlocks the promise of AI for customers, helping them deliver innovative solutions to the world."
Advancing AI infrastructure
Microsoft will be one of the first organizations to bring the power of NVIDIA Grace Blackwell GB200 and advanced NVIDIA Quantum-X800 InfiniBand networking to Azure, delivering cutting-edge trillion-parameter foundation models for natural language processing, computer vision, speech recognition and more.
Microsoft is also announcing the general availability of its Azure NC H100 v5 virtual machine (VM) series, based on the NVIDIA H100 NVL platform. Designed for midrange training and inferencing, the NC series offers customers two classes of VMs, ranging from one to two NVIDIA H100 94GB PCIe Tensor Core GPUs, and supports NVIDIA Multi-Instance GPU (MIG) technology, which allows customers to partition each GPU into up to seven instances, providing flexibility and scalability for diverse AI workloads.
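For readers who want to see what MIG looks like from inside such a VM, here is a small, hedged sketch that only inspects the current state using the nvidia-ml-py (pynvml) bindings. It assumes the package and NVIDIA driver are installed; the actual partitioning into instances is performed separately by an administrator (for example with nvidia-smi), not by this snippet.

```python
# Sketch: report whether MIG is enabled on each visible GPU using pynvml.
# Read-only: this does not create or modify MIG instances.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        try:
            current, pending = pynvml.nvmlDeviceGetMigMode(handle)
            status = "enabled" if current == 1 else "disabled"
        except pynvml.NVMLError:
            status = "not supported"
        print(f"GPU {i} ({name}): MIG {status}")
finally:
    pynvml.nvmlShutdown()
```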
Healthcare and life sciences breakthroughs
Microsoft is expanding its collaboration with NVIDIA to transform healthcare and life sciences through the integration of cloud, AI and supercomputing technologies. By harnessing the power of Microsoft Azure alongside NVIDIA DGX Cloud and the NVIDIA Clara suite of microservices, healthcare providers, pharmaceutical and biotechnology companies, and medical device developers will soon be able to innovate rapidly across clinical research and care delivery with improved efficiency.
Industry leaders such as Sanofi and the Broad Institute of MIT and Harvard, industry ISVs such as Flywheel and SOPHiA GENETICS, academic medical centers like the University of Wisconsin School of Medicine and Public Health, and health systems like Mass General Brigham are already leveraging cloud computing and AI to drive transformative changes in healthcare and to enhance patient care.
Industrial digitalization
NVIDIA Omniverse Cloud APIs will be available first on Microsoft Azure later this year, enabling developers to bring increased data interoperability, collaboration, and physics-based visualization to existing software applications. At NVIDIA GTC, Microsoft is demonstrating a preview of what is possible using Omniverse Cloud APIs on Microsoft Azure. Using an interactive 3D viewer in Microsoft Power BI, factory operators can see real-time factory data overlaid on a 3D digital twin of their facility to gain new insights that can speed up production.
NVIDIA Triton Inference Server and Microsoft Copilot
NVIDIA GPUs and NVIDIA Triton Inference Server help serve AI inference predictions in Microsoft Copilot for Microsoft 365. Copilot for Microsoft 365, soon available as a dedicated physical keyboard key on Windows 11 PCs, combines the power of large language models with proprietary enterprise data to deliver real-time contextualized intelligence, enabling users to enhance their creativity, productivity and skills.
From AI training to AI deployment
NVIDIA NIM inference microservices are coming to Azure AI to turbocharge AI deployments. Part of the NVIDIA AI Enterprise software platform, also available on the Azure Marketplace, NIM provides cloud-native microservices for optimized inference on more than two dozen popular foundation models, including NVIDIA-built models that users can experience at ai.nvidia.com. For deployment, the microservices deliver prebuilt, run-anywhere containers powered by NVIDIA AI Enterprise inference software including Triton Inference Server, TensorRT and TensorRT-LLM to help developers speed time to market of performance-optimized production AI applications.
About NVIDIA
Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling industrial digitalization across markets. NVIDIA is now a full-stack computing infrastructure company with data-center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.
About Microsoft
Microsoft (Nasdaq "MSFT" @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.
For more information, press only:
Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [emailprotected]
Natalie Hereth, NVIDIA Corporation, [emailprotected]
Note to editors: For more information, news and perspectives from Microsoft, please visit Microsoft Source at http://news.microsoft.com/source. Web links, telephone numbers and titles were correct at time of publication but may have changed. For additional assistance, journalists and analysts may contact Microsoft's Rapid Response Team or other appropriate contacts listed at https://news.microsoft.com/microsoft-public-relations-contacts.
NVIDIA forward-looking statements
Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, performance, features, and availability of NVIDIA's products and technologies, including NVIDIA Grace Blackwell Superchip, NVIDIA DGX Cloud, NVIDIA Omniverse Cloud APIs, NVIDIA AI and Accelerated Computing Platforms, and NVIDIA Generative AI Microservices; the benefits and impact of NVIDIA's collaboration with Microsoft, and the features and availability of its services and offerings; AI transforming our daily lives, the way we work and opening up a world of new opportunities; and building a future that unlocks the promise of AI for customers and brings transformative solutions to the world through NVIDIA's continued collaboration with Microsoft are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; NVIDIA's reliance on third parties to manufacture, assemble, package and test NVIDIA's products; the impact of technological development and competition; development of new products and technologies or enhancements to NVIDIA's existing products and technologies; market acceptance of NVIDIA's products or NVIDIA partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of NVIDIA's products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.
Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.
© 2024 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, DGX, NVIDIA Clara, NVIDIA NIM, NVIDIA Omniverse, NVIDIA Triton Inference Server, and TensorRT are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and/or other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.
Nutanix sues Tessell over claims founders used IP to build rival products with strikingly similar features – ITPro
Nutanix has announced it has filed a lawsuit against database-as-a-service platform Tessell amid allegations that its founders covertly built competing products while still employed at Nutanix.
The cloud computing giant filed the lawsuit in the US District Court in San Jose, which alleges that sensitive company information, as well as valuable resources, were used illegally by Tessell's founders to launch the business.
Nutanix's filing alleged that Tessell founders Bala Kuchibhotla, Kamal Khanuja, and Bakul Banthia covertly designed, built, exhibited, and secured financing for a future Tessell product while still at Nutanix.
Moreover, the lawsuit claimed that Nutanix source code, servers, and other resources were used by the Tessell founders when creating the product, and that they took the source code across to Tessell after leaving Nutanix.
"KKB [Tessell's founders] used Nutanix facilities, equipment, services, and even the Nutanix Era source code when developing the Tessell product," the filing reads.
"KKB planned, developed, obtained initial financing for, and demonstrated prototypes of the competing product, all using Nutanix computers and while they were employed by Nutanix. One of the Tessell prototypes they demonstrated actually ran on Nutanix servers."
The source code claimed to have been stolen by the group was for Nutanix's Era database management software suite, now referred to as the Nutanix Database Service (NDB).
The complaint also argued the group tried to remove all indications of Nutanix branding from the Tessell prototype, and further cover their tracks by wiping their company devices.
"When planning their departures, KKB took all or a substantial portion of the Era source code, saved it to private accounts and devices, and Tessell later incorporated Era source code into its product. KKB then wiped their laptops to cover their tracks, an effort that was initially successful."
Nutanix claimed the speed with which Tessell was able to launch its competing product, as well as a very similar feature-set, was enough for the company to open a probe into potential IP infringements.
"When Tessell launched its product in late 2022, however, the speed with which it came to market with features strikingly similar to Era caused Nutanix to commence a full-fledged forensic investigation. As a result, Nutanix uncovered the theft of proprietary code and technology described herein."
The complaint argued that Tessell's founders' access to Nutanix source code and resources meant they were able to release their competing product just 18 months after the company was founded, with support for major databases such as Oracle, Microsoft SQL Server, MongoDB, MySQL, and PostgreSQL.
"Tessell was able to release a robust commercial product in such a relatively short period of time because it was founded by a group of Nutanix engineers who were instrumental in developing the Era product, and who had access to all of the key technology and source code embedded in Era."
Nutanix said it is seeking the return of stolen intellectual property, an injunction to prevent further infringement, restitution, and money damages from Tessell.
It also announced it would be commencing separate arbitration proceedings against the Tessell founders concerning the violation of their employee agreements as Nutanix staff.
ITPro has approached Nutanix and Tessell for comment.
Is Edge Computing Living Up to Its Promise in the Australian Market? – TechRepublic
Edge computing is often predicted to be a technology that will revolutionise enterprise computing in Australia. By capturing, processing, analysing and storing data at the edge, instead of in a central location, it promises fast real-time computing at the edge and lower costs.
While the edge is yet to live up to these promises, adoption is taking place. Utpal Mangla, general manager for industry edge cloud & IBM Distributed Cloud Platform, said the edge is being rolled out for specific use cases globally, including quality control in manufacturing.
Growth in AI may propel the trend, as some use cases find their way to the edge. Mangla argues a hybrid cloud by design approach is the best option for enterprises looking to prepare for a future where they need to straddle on-premises, cloud and edge computing.
Australia should be a prime candidate for edge computing growth. With a population of 25 million living on a landmass the size of the U.S. or Europe, the potential for processing data at the edge, rather than centralised data centres or the cloud, appears compelling for businesses.
SEE: The current state of edge computing.
But just like elsewhere, the edge has not taken off at the pace some had envisioned.
"The [global] industry has been saying for five, seven, eight years that it was coming and would change the world," IBM's Mangla told TechRepublic. "Has it panned out the way analysts and businesses have seen it? Probably in terms of monetisation, it hasn't reached that scale yet."
Telecommunications companies like Telstra have put billions into 5G infrastructure. The hope was 5G's high bandwidth, low latency and capabilities like network slicing would make the promise of the edge real. Monetisation was to be achieved through business customers.
However, a study commissioned by Telstra in 2022-23 described adoption as "nascent" at the time, with just 25% of businesses using edge computing. It found leaders were exploring cloud adoption, and provider investments were fast-tracking customer journeys and advancing hybrid cloud plans.
An Accenture survey of 2,100 C-level execs across 18 countries found 83% still believe edge will be essential to remain competitive in the future. However, only 65% were using edge computing, and 50% of these were only ad hoc or tactical users of edge computing (Figure A).
Slower than expected edge growth was in part due to the hype of predictions themselves.
"Some of it was a bit overblown. It always required building use cases, it required building businesses, it required putting things in place to make that potential of edge computing happen," Mangla said.
However, there are other barriers the edge computing industry is dealing with in deployment.
SEE: Consider these edge computing best practices.
While billed as a way to rationalise costs through a reduction in data transfer costs, the edge has actually turned out to be more expensive in some cases. Telstra's State of Cloud, Edge and Security 2022-23 survey found 46% of businesses were put off by low cost-to-benefit ratios.
Forty-two per cent found business cases and outcomes not compelling enough (Figure B), which was put down to insufficient understanding of the benefits of edge in terms of ROI and competitive differentiation. Businesses were also experiencing edge strategy and skills gaps.
The depth of the edge computing market also means it is challenged by the lack of a standard edge computing stack and APIs (Figure C). This has made it challenging to develop and deploy edge applications that can work across different edge computing devices and platforms.
Use cases for edge computing have tended to be vertical-specific. With differences across industries and use cases, this means investment is also fragmented, at least until individual industries determine consistent architectural approaches to capturing the value of the edge.
Edge computing is growing in Australia and around the world, even if this may appear underwhelming due to inflated expectations. Mangla noted that edge computing is actively being deployed by IBM clients, in specific industry use cases across the globe.
A prime example is on the industrial floor. Mangla said many organisations with manufacturing operations that want top-class quality control and inspection are augmenting manual inspections by deploying edge computing for visual inspection on the shopfloor.
Epicor, a provider of manufacturing execution systems in the Australian and regional market, is just one technology vendor that is actively working with local manufacturers to supercharge operations through smart factory initiatives that further automate operations.
IBM has worked with automotive clients using the edge to monitor stock levels in car parking lots during supply chain disruptions. The edge is also powering autonomous driving features for automakers, allowing split-second decisions without the latency of cloud connectivity.
Australian miners are leading edge adopters. Iron ore miner Newcrest is using intelligent edge to pull data from downstream sensors monitoring tonnes tipped, apron feeder speeds and weightometres to control the volume of ore delivered to crushed ore bins upstream.
McKinsey & Company research indicates the industries looking at the potential of edge computing include everything from aerospace and defence, the chemicals industry, and electric power, natural gas and utilities, to financial services, healthcare systems and retail.
"Australia is ripe for growth in edge computing," Mangla said. During a recent visit to Sydney, he noted there were startups looking to run artificial intelligence visual recognition technology for industries at the edge, and New Zealand companies looking at the edge for their manufacturing operations.
SEE: Australia is well-positioned to capitalise on the future influence of AI.
Mangla said edge computing could be accelerated by AI because some use cases suit the edge: it excels where decisions need to be made quickly, at that point in time, and where it is best to avoid waiting for data to be transferred to centralised locations.
Organisations looking to make the most of edge computing, as well as the rest of their cloud infrastructure, should take a hybrid cloud by design approach, Mangla said. While much of the market had stumbled organically into a mix of on-premises and cloud infrastructure and services, he said this would be better managed with an intentional embrace of hybrid cloud in the future.
"When you think about strategy, there is always going to be more than one cloud," he said. "You will always have something, the crown jewels, that you will never put in a cloud environment; you will always have that in an on-prem or in a private cloud. Australian Government agencies like the ATO, they will never put anything on the public cloud."
"As new applications get built, make sure that application can seamlessly operate across the entire landscape. As you start building it intentionally over the next 10 to 15 years, as you do app modernisation, you need a hybrid cloud by design mindset."
Read the original here:
Is Edge Computing Living Up to Its Promise in the Australian Market? - TechRepublic
Deploy A Production-Ready Streamlit App with Cloud Run and Cloud Build – Towards Data Science
If you're a data scientist, you're probably familiar with Streamlit. You've probably used it once to prototype a demo, share a dashboard, or even build a more complex app.
In this post, youll learn how to deploy Streamlit applications quickly and cost-effectively on the Google Cloud Platform.
To do that, you'll need:
If you're unfamiliar with some of these tools, everything will be covered below, and links will be provided to learn more.
Without further ado, let's have a look.
Cloud Run is a Google Cloud service that provides a fully managed platform to run your code in a container.
Whether you're building a web server, an API, or running processing jobs, Cloud Run automatically scales your containers up and down from zero.
This means you only pay when your code is running.
With Cloud Run, the workflow is simple. You write your code in your favorite programming language (Python, Go, NodeJS, or Java), package it into a container, and push it for deployment.
Then, Google Cloud provides everything the app needs to run in production.
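To make that workflow concrete, here is a minimal sketch of a Streamlit app and the commands that would typically accompany it. The file name, service name, and region are illustrative assumptions; the one Cloud Run-specific detail worth remembering is that the container must listen on the port Cloud Run injects via the PORT environment variable.

```python
# app.py - a minimal Streamlit app used to illustrate the deployment workflow.
import streamlit as st

st.title("Hello from Cloud Run")
name = st.text_input("Your name", "world")
st.write(f"Hello, {name}!")

# Run locally with:
#   streamlit run app.py
#
# In the container image, start Streamlit on Cloud Run's injected port, e.g.:
#   streamlit run app.py --server.port=$PORT --server.address=0.0.0.0
#
# Then build and deploy with Cloud Build (service name and region are placeholders):
#   gcloud run deploy streamlit-demo --source . --region europe-west1
```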
Read more here:
Deploy A Production-Ready Streamlit App with Cloud Run and Cloud Build - Towards Data Science
Windows 365 with Copilot: What you need to know – Laptop Mag
Microsoft has gone all-in on Copilot, from putting AI integration into their new Surface devices and Microsoft 365 to Windows 11. The company has also added several much-needed features to its Windows 365 cloud computing platform.
All Windows 11 users will be able to access Copilot from the Windows taskbar or the Copilot key on compatible devices. For Windows 365, users now have access to multiple security updates, virtual GPUs, and AI Cloud PC resizing.
Windows 365 is Microsoft's cloud PC platform for business and consumer purposes. As a cloud computing system, it is available on all platforms. The new announcement from Microsoft mostly focuses on security updates, the Windows App, Windows 365 GPU, and AI Cloud PC resizing solutions.
In terms of security updates, Windows 365 now supports a set of new features designed to protect against data leaks and data loss. Both Windows 365 and Azure Virtual Desktop support single sign-on and passwordless authentication, as well as third-party IDP support, to keep your data secure. Windows 365 and Azure Virtual Desktop also support Cloud PC encryption as well as watermarking, screen capture protection, and tamper protection, so you can be sure your virtual data is protected from unauthorized access or manipulation. These updates are now generally available to all Windows 365 users.
Windows 365 also offers Windows 365 GPU support for access to powerful GPU-enabled Cloud PCs. Ideal for graphic design, image and video rendering, 3D modeling, data processing, and visualization, Windows 365 GPU allows users to run apps using the local GPU resource. This was one of the most requested updates to Windows 365, as it allows for far more versatility in the cloud computing platform. Windows 365 GPU is now in public preview.
For those looking to use Windows on their devices, the Windows App is now available via the Microsoft Store, online, or in TestFlight for Apple devices. The Windows App functions as a portal to Windows 365, Azure Virtual Desktop, Remote Desktop, Remote Desktop Services, and Microsoft Dev Box. The Windows App is now in public preview.
Lastly, Windows 365 will be using AI to offer Cloud PC resizing recommendations to help reduce costs and simplify security and management of Windows 365 Cloud PCs. This feature is now in public preview.
See the original post:
Windows 365 with Copilot What you need to know - Laptop Mag
Navigating the Evolving Cloud Landscape: Strategies for Success in 2024 – CXOToday.com
By Shrikant Navelkar
Due to the significant advancements in the IT industry, the adoption of cloud computing has rapidly shifted from being a mere concept to an essential requirement for companies of all sizes across various sectors. However, merely selecting a cloud service provider and a compatible architecture is not enough. To fully leverage the potential of this technology, a thorough reassessment of existing operations, processes, and business culture may be necessary.
According to Gartner, it is projected that seventy percent of enterprise workloads will migrate to the cloud by 2024. Despite this, three out of four organizations lack a suitable cloud strategy. Regardless of their stage in the cloud journey, every business requires a well-defined cloud strategy. A comprehensive cloud strategy should be business-driven, addressing key issues of what and why, and aligning closely with the organization's overarching business objectives.
What is a cloud strategy?
A cloud strategy refers to a company's plan for adopting and using cloud computing services and resources. It can include the following key elements: cloud adoption roadmap, cloud service model evaluation, cloud provider evaluation, cloud governance, and application modernization. Overall, a cloud strategy enables a company to adopt cloud computing in a systematic, well-defined, and governed manner.
Here are the trending cloud strategies for 2024:
Democratization of AI
Cloud infrastructure has a big role in democratizing AI by providing accessible, scalable, and cost-effective resources for individuals and organizations of all sizes. Through on-demand access to AI tools, frameworks, and services, coupled with elastic scalability and pay-as-you-go pricing models, cloud platforms enable experimentation, innovation, and deployment without the need for extensive upfront investment. This accessibility lowers barriers to entry, allowing smaller businesses, startups, and individual developers to participate in AI-driven innovation.
Optimization of Hybrid and Multi-Cloud Environments
Organizations are increasingly adopting a multi-hybrid cloud approach, which combines public, private, and edge cloud environments, to leverage the strengths of each while mitigating their respective limitations. However, if its usage is not optimized, it can lead to wasted resources, time, and funds. Hence, optimizing the use of multi-hybrid cloud is emerging as a significant trend in cloud computing, as it allows for flexibility, security, compliance, and cost-efficiency, enabling businesses to address their diverse needs effectively.
Making the cloud sustainable
Considering the growing environmental awareness, major cloud service providers such as Amazon, Microsoft, Oracle, and Google have each committed to achieving net-zero emissions, not only within their own operations but also to assist their customers in reducing their carbon footprints. Furthermore, these cloud service providers have outlined plans to procure 100 percent of the energy for their operations from renewable sources. Thus, the push towards a greener and more sustainable cloud is emerging as a significant trend in 2024.
Revolutionizing Data Management with Edge Computing
Edge computing is emerging as a pivotal trend in cloud computing due to its ability to process data closer to its source, reducing latency, enhancing bandwidth efficiency, and bolstering privacy and security by keeping sensitive data local. This approach enables applications to operate offline, ensuring uninterrupted functionality in remote or unstable environments, while also offering scalability and flexibility to adapt to diverse use cases. As the Internet of Things (IoT) expands and the need for real-time data analytics intensifies, edge computing is poised to revolutionize how data is managed, paving the way for innovation and efficiency across industries. A generic sketch of this local-processing pattern appears after the list of trends below.
Simplification of Cloud Computing
Simplifying the cloud for non-technical people, and making it more user-friendly, is a rapidly rising trend. No-code and low-code tools are allowing non-technical people to develop applications that would previously have required a trained software engineer. Additionally, many cloud providers are offering drag-and-drop features and natural language tools that minimize or eliminate the need for technical skills. This trend is democratizing the use of cloud tools and services.
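The sketch below, referenced in the edge computing trend above, is a generic, hypothetical illustration of that pattern: aggregate sensor readings locally, decide at the edge whether anything needs immediate attention, and send only a compact summary upstream. The endpoint URL, threshold, and payload shape are assumptions for illustration and do not describe any specific vendor's API.

```python
# Sketch: process sensor data at the edge and forward only a summary to the cloud,
# saving bandwidth and avoiding a round trip for every raw sample.
import statistics
import requests

CLOUD_ENDPOINT = "https://example.com/api/telemetry"  # hypothetical endpoint

def read_sensor_window(n=600):
    """Placeholder for reading n samples from a local sensor."""
    return [20.0 + 0.01 * i for i in range(n)]  # dummy data

samples = read_sensor_window()
summary = {
    "count": len(samples),
    "mean": statistics.mean(samples),
    "max": max(samples),
    "min": min(samples),
}

# Decide locally, at the edge, whether anything needs immediate attention...
if summary["max"] > 75.0:  # illustrative threshold
    summary["alert"] = "threshold exceeded"

# ...and send only the compact summary upstream instead of every raw sample.
requests.post(CLOUD_ENDPOINT, json=summary, timeout=5)
```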
In conclusion, as we navigate the ever-evolving landscape of cloud computing in 2024, it's evident that embracing innovation and staying agile are key to achieving success in this dynamic environment. By adopting strategies that prioritize flexibility, scalability, security, and sustainability, organizations can harness the full potential of cloud technologies to drive growth, streamline operations, and deliver exceptional value to their customers.
(The author is Director, Clover Infotech, and the views expressed in this article are his own)
Here is the original post:
Navigating the Evolving Cloud Landscape: Strategies for Success in 2024 - CXOToday.com
MiTAC and TYAN Present Cutting-Edge Server Solutions for Cloud Computing at CloudFest 2024 – PR Newswire
RUST, Germany, March 19, 2024 /PRNewswire/ -- Today, the subsidiary of MiTAC Holdings Corp. (TSE: 3706), MiTAC Computing Technology, and its server brand TYAN, the leading manufacturer in server platform design worldwide, unveil their latest cloud server platforms at CloudFest 2024, Booth #H12. These platforms feature the newest processors from the AMD EPYC 9004 series, AMD Ryzen 7000 series, and 5th Gen Intel Xeon Scalable, making them ideal for next-generation data centers and cloud computing.
MiTAC Showcases an Advanced Cloud and Datacenter Solution Supported by 5th Gen Intel Xeon Scalable Processors
For over a decade, MiTAC and Intel have been vital strategic partners, working together to create top-tier server products for cloud computing and next-gen demanding workloads. Last year, Intel transferred the rights to develop and sell products utilizing Intel Datacenter Solution Group (DSG) designs to MiTAC. Today, MiTAC showcases one of its DSG product lineups tailored for cloud, enterprise, and AI workloads at CloudFest. The Intel Server System M50FCP1UR204, presented by MiTAC, represents the 1U solution from the Intel Server M50FCP Family (formerly Fox Creek Pass). Featuring 5th/4th Gen Intel Xeon Scalable Processors, the Intel Server M50FCP Family is a full-featured, performance-optimized 2-socket server platform, including both 1U and 2U options, that can be configured to support a wide range of mainstream needs. With powerful compute capabilities, built-in accelerators, and high-speed I/O and memory bandwidth, the Intel Server System M50FCP Family is an ideal choice for data-intensive mainstream workloads.
TYAN's New Cloud Computing Servers Leverage the Power of AMD EPYC 9004 and Ryzen 7000 Series Processors
"Scalability, high availability, reliability, performance, energy efficiency, support for virtualization, and network connectivity are among the most crucial aspects that cloud computing servers should possess to meet the demands of modern enterprises," said Eric Kuo, Vice President of Server Infrastructure Business Unit, MiTAC Computing Technology Corporation. "By harnessing the capabilities of AMD's new CPUs, I believe TYAN cloud platforms could offer these features and more."
TYAN's Transport CX TD76-B8058, designed in a 2U 4-node format, harnesses the power of AMD EPYC 9004 processors for high-density cloud server deployment. Each server node offers 16 DDR5-4800 DIMM slots, 4 hot-swappable E1.s drive bays, 2 NVMe M.2 slots, 1 OCP v3.0 LAN mezzanine slot, and 1 standard PCIe 5.0 x16 slot. With front I/O for the shortest network cable routing, rear-accessible, hot-swappable cooling fans, and redundant 80+ Titanium power supplies, it ensures optimal system serviceability.
For cloud service providers looking for entry-level servers, TYAN presents the Transport CX GX40-B8016. It's a compact 1U server ideal for front-end portal and edge computing tasks in datacenters. It comes with 4 DDR5-4800 UDIMM slots, supports up to 4 SATA drives, 2 NVMe M.2 slots, and one PCIe 5.0 x16 slot. Powered by AMD Ryzen 7000 Series Processors with AMD 3D V-Cache technology, the Transport CX GX40-B8016 offers the low power consumption and manageability valued by cloud service providers.
TYAN Thunder CX TD76-B5658: Optimized for High-Density Cloud Server Deployment
Beyond the above, TYAN is also exhibiting its new Intel Emerald Rapids platform this time. The Thunder CX TD76-B5658 is a high-density 4-node server designed for CSPs, with a mainstream 2U form factor. It features shared system fans, redundant 80+ Titanium power supplies, and hot-swappable modules for easy maintenance. It also includes front I/O for easy network cable management. Each node is powered by a single 5th/4th Gen Intel Xeon Scalable processor and supports PCIe 5.0 x16, OCP v3.0 LAN mezzanine, and NVMe M.2 SSDs.
About MiTAC Computing Technology Corp.
MiTAC Computing Technology Corp., a MiTAC Holdings Corp. (TSE: 3706) subsidiary, specializes in cloud and edge computing solutions and has over 30 years of design and manufacturing expertise. With a strong focus on large-scale data centers, the company offers flexible and customized supply models for various systems and applications. Its product lineup includes TYAN servers, 5G ORAN servers, high-performance AI servers, and data center products. Intel's Datacenter Solutions Group (DSG) transitioned its business to MiTAC in July 2023, allowing MiTAC to expand its product offerings with cutting-edge total cost of ownership solutions for next-gen data center equipment.
MiTAC Computing Technology Official website: http://www.mitacmct.com MiTAC DSG website: https://datacentersolutions.mitacmct.com/
About TYAN
TYAN, as a leading server brand of MiTAC Computing Technology Corporation under MiTAC Holdings Corp. (TSE: 3706), designs, manufactures and markets advanced x86 and x86-64 server/workstation board technology, platforms and server solution products. Its products are sold to OEMs, VARs, System Integrators and Resellers worldwide for a wide range of applications. TYAN enables its customers to be technology leaders by providing scalable, highly-integrated, and reliable products for a wide range of applications such as server appliances and solutions for HPC, hyper-scale/data center, server storage, AI and security appliance markets. For more information, please visit TYAN's website at http://www.tyan.com or MiTAC Computing Technology Corporation website at http://www.mitacmct.com
SOURCE MiTAC Computing Technology Corporation