Microsoft to enact new cloud outsourcing and hosting licensing changes which still don’t address core customer… – ZDNet

Credit: Microsoft

On August 29, Microsoft went public with promised cloud outsourcing and hosting changes which officials first outlined earlier this year. These changes, which will take effect on October 1, 2022, still don't address some of the core customer and partner complaints which led to Microsoft revising its policies in these areas.

Microsoft introduced outsourcing restrictions in 2019, resulting in customers paying more to run Microsoft software in non-Microsoft cloud environments. Customers who had been using AWS and Google Cloud as dedicated hosts for running Windows Server and clients were affected directly, but some of them didn't realize the extent of the impact until their contracts with Microsoft were up for renewal this year. Microsoft's changes around its bring-your-own-license terms made their contracts more expensive if they wanted to run Microsoft software on anything but Azure.

Some European partners and customers took their complaints to European antitrust authorities. Microsoft responded with a set of "European Cloud Principles", which officials said would level the playing field -- to some extent -- for partners and customers who wanted to run Microsoft software on certain non-Microsoft cloud infrastructures.

What those principles didn't include was what many customers cared most about: The ability to run Microsoft software on Amazon Web Services, Google and Alibaba. They focused on customers who wanted to move their software licenses to other clouds outside of those "Listed Providers."

"This is just them circling back and lightening up the Cloud Solution Providers (CSPs) rules. It doesn't change the complexities and limitations that affect the 'listed providers': Amazon, Google, and Alibaba, and their joint customers with Microsoft," said Directions on Microsoft analyst Wes Miller. "While this is good news for a set of providers, there's no change to the complex and encumbered rules that affect those three providers and customers. "

Earlier this year, Microsoft officials said they would address some of the complaints from European cloud vendors about restrictive cloud licensing policies that resulted in customers paying more to run Microsoft software in non-Microsoft cloud environments. The list of changes Microsoft outlined today, company officials said, will make it easier for customers to bring their software to partners' clouds; to ensure partners have access to the products they need to sell cost-effective solutions that customers want, and to empower partners to build solutions with speed and scale.

Specifically, Microsoft is adding a new Flexible Virtualization benefit which officials said will allow customers with Software Assurance or subscription licenses to use their own licensed software to build and run solutions on any infrastructure except the Listed Providers (AWS, Google, Alibaba). Customers who want to run their own Microsoft-licensed software on those providers' infrastructure will have to buy the licenses from those providers. And any user with a Microsoft 365 F3, E3 or E5 license will be able to virtualize Windows 10 or 11 on their own servers or outsourcers' servers -- as long as those outsourcers are not AWS, Google or Alibaba -- without any additional licenses required. (Currently, customers need a VDA add-on license to virtualize qualifying Windows 10 or 11 editions.)

Microsoft also is adding a new Windows Server virtual core licensing option as part of the Flexible Virtualization benefit, which will allow Windows Server to be licensed on a virtual core basis rather than the physical core basis that exists today. Microsoft officials said this change will help with moving Windows Server workloads to the cloud.

Microsoft's August 29 blog post outlines in more detail these coming hosting/outsourcing licensing changes, along with a few additional ones.

Teradata takes on Snowflake and Databricks with cloud-native platform – VentureBeat

Database analytics giant Teradata has announced cloud-native database and analytics support. Teradata already had a cloud offering that ran on top of infrastructure-as-a-service (IaaS), enabling enterprises to run workloads across cloud and on-premise servers. The new service supports software-as-a-service (SaaS) deployment models that will help Teradata compete against companies like Snowflake and Databricks.

The company is launching two new cloud-native offerings. VantageCloud Lake extends the Teradata Vantage data lake to a more elastic cloud deployment model. Teradata ClearScape Analytics helps enterprises take advantage of new analytics, machine learning and artificial intelligence (AI) development workloads in the cloud. The combination of cloud-native database and analytics promises to streamline data science workflows, support ModelOps and improve reuse from within a single platform.

Teradata was an early leader in advanced data analytics capabilities that grew out of a collaboration between the California Institute of Technology and Citibank in the late 1970s. The company optimized techniques for scaling analytics workloads across multiple servers running in parallel. Scaling across servers provided superior cost and performance properties compared to other approaches that required bigger servers. The company rolled out data warehousing and analytics on an as-a-service basis in 2011 with the introduction of the Teradata Vantage connected multicloud data platform.

"Our newest offerings are the culmination of Teradata's three-year journey to create a new paradigm for analytics, one where superior performance, agility and value all go hand-in-hand to provide insight for every level of an organization," said Hillary Ashton, chief product officer of Teradata.

Teradata's first cloud offerings ran on specially configured servers on cloud infrastructure. This allowed enterprises to scale applications and data across on-premise and cloud servers. However, the data and analytics scaled at the server level. If an enterprise needed more compute or storage, it had to provision more servers.

This created an opening for new cloud data storage startups like Snowflake to take advantage of new architectures built on containers, meshes and orchestration techniques for more dynamic infrastructure. Enterprises took advantage of the latest cloud tooling to roll out new analytics at high speed. For example, Capital One rolled out 450 new analytics use cases after moving to Snowflake.

Although these cloud-native competitors improved many aspects of scalability and flexibility, they lacked some aspects of governance and financial controls baked into legacy platforms. For example, after Capital One moved to the cloud, it had to develop an internal governance and management tier to enforce cost controls. Capital One also created a framework to streamline the user analytics journey by incorporating content management, project management and communication within a single tool.

This is where the new Teradata offerings promise to shine. They promise to combine the new kinds of architectures pioneered by cloud-native startups with the governance, cost controls and simplicity of a consolidated offering.

"Snowflake and Databricks are no longer the only answer for smaller data and analytics workloads, especially in larger organizations where shadow systems are a significant and growing issue, and scale may play into workload management concerns," Ashton said.

The new offering also takes advantage of Teradata's R&D investments in smart scaling, allowing users to scale based on actual resource utilization rather than simple static metrics. It also promises a lower total cost of ownership and direct support for more kinds of analytics processing. For example, ClearScape Analytics includes a query fabric, governance and financial visibility. This also promises to simplify predictive and prescriptive analytics.

ClearScape Analytics includes in-database time series functions that streamline the entire analytics lifecycle, from data transformation and statistical hypothesis tests to feature engineering and machine learning modeling. These capabilities are built directly into the database, improving performance and eliminating the need to move data. This can help reduce the cost and friction of analyzing a large volume of data from millions of product sales or IoT sensors. Data scientists can code analytics functions into prebuilt components that can be reused by other analytics, machine learning, or AI workloads. For example, a manufacturer could create an anomaly detection algorithm to improve predictive maintenance.
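
To make the shape of such a workflow concrete, here is a minimal Python sketch of the kind of rolling-baseline anomaly detection described above. It deliberately does not use Teradata's ClearScape APIs -- the pandas/NumPy pipeline, the column names (sensor_id, ts, reading) and the z-score threshold are illustrative assumptions -- but it shows the sort of logic a manufacturer might push into the database for predictive maintenance.

```python
# Generic illustration of a time-series anomaly-detection workflow of the kind
# ClearScape Analytics pushes into the database. This is NOT Teradata's API;
# column names ("sensor_id", "ts", "reading") and the z-score threshold are
# hypothetical.
import numpy as np
import pandas as pd

def detect_anomalies(readings: pd.DataFrame, window: int = 24, threshold: float = 3.0) -> pd.DataFrame:
    """Flag readings that deviate sharply from their rolling per-sensor baseline."""
    df = readings.sort_values(["sensor_id", "ts"]).copy()
    grouped = df.groupby("sensor_id")["reading"]
    df["rolling_mean"] = grouped.transform(lambda s: s.rolling(window, min_periods=1).mean())
    df["rolling_std"] = grouped.transform(lambda s: s.rolling(window, min_periods=1).std())
    df["z_score"] = (df["reading"] - df["rolling_mean"]) / df["rolling_std"].replace(0, np.nan)
    df["is_anomaly"] = df["z_score"].abs() > threshold
    return df

# Example with synthetic IoT sensor data: the final spike gets flagged.
np.random.seed(42)
ts = pd.date_range("2022-08-01", periods=200, freq="H")
readings = pd.DataFrame({
    "sensor_id": ["pump-1"] * 200,
    "ts": ts,
    "reading": np.append(np.random.normal(50, 1, 199), 90.0),
})
flagged = detect_anomalies(readings)
print(flagged.loc[flagged["is_anomaly"], ["sensor_id", "ts", "reading", "z_score"]])
```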

"Predictive models require more exploratory analysis and experimentation. Despite the investment in tools and time, most predictive models never make it into production," said Ashton. New ModelOps capabilities include support for auditing datasets, code tracking, model approval workflows, monitoring model performance and alerting when models become non-performing. This can help teams schedule model retraining when models start to lose accuracy or show bias.
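
To illustrate what that monitoring loop looks like in practice, here is a generic, hypothetical sketch -- it is not Teradata's ModelOps API, and the accuracy floor, window size and retraining trigger are placeholder assumptions -- showing how a team might track a deployed model's rolling accuracy and raise an alert that schedules retraining.

```python
# Hypothetical sketch of a ModelOps-style monitoring loop: track live accuracy,
# alert when a deployed model degrades, and schedule retraining. Not Teradata's
# ModelOps API; the thresholds are placeholders.
import random
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelMonitor:
    """Tracks rolling accuracy of a deployed model and alerts when it degrades."""
    name: str
    accuracy_floor: float = 0.85   # alert (and schedule retraining) below this
    window: int = 500              # number of recent predictions to score
    _hits: List[bool] = field(default_factory=list)

    def record(self, predicted, actual) -> None:
        self._hits.append(predicted == actual)
        self._hits = self._hits[-self.window:]

    def rolling_accuracy(self) -> float:
        return sum(self._hits) / len(self._hits) if self._hits else 1.0

    def check(self) -> bool:
        acc = self.rolling_accuracy()
        if acc < self.accuracy_floor:
            print(f"[ALERT] {self.name}: accuracy {acc:.1%} is below the "
                  f"{self.accuracy_floor:.0%} floor; scheduling retraining.")
            return False
        return True

# Simulate a model whose accuracy drifts from ~95% down to ~70%.
random.seed(0)
monitor = ModelMonitor("churn-model")
for i in range(1000):
    drifted = i > 500
    correct = random.random() < (0.70 if drifted else 0.95)
    monitor.record(predicted=1, actual=1 if correct else 0)
monitor.check()   # prints an alert because the recent window has degraded
```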

"What sets Teradata apart is that it can serve as a one-stop shop for enterprise-grade analytics, meaning companies don't have to move their data," Ashton said. "They can simply deploy and operationalize advanced analytics at scale via one platform."

Ultimately, it is up to the market to decide if these new capabilities will allow the legacy data pioneer to keep pace or even gain an edge against new cloud data startups.

Dell Servers with NVIDIA DPUs and VMware vSphere 8 Announced – StorageReview.com

NVIDIA used VMware Explore to announce a new data center solution with Dell Technologies bringing AI training, AI inference, data processing, data science, and zero-trust security capabilities to enterprises globally. The solution combines Dell PowerEdge servers with NVIDIA BlueField DPUs, NVIDIA GPUs, and NVIDIA AI Enterprise software and is optimized for the newly announced VMware vSphere 8 enterprise workload platform.

NVIDIA has already added the technology to the NVIDIA LaunchPad hands-on lab to allow enterprises to experience the combination of these technologies and get access to hardware and software for end-to-end workflows in AI, data science, and more.

Manuvir Das, head of Enterprise Computing at NVIDIA, said:

"AI and zero-trust security are powerful forces driving the world's enterprises to rearchitect their data centers as computing and networking workloads are skyrocketing. VMware vSphere 8 offloads, accelerates, isolates, and better secures data center infrastructure services onto the NVIDIA BlueField DPU and frees the computing resources to process the intelligence factories of the world's enterprises."

Dell's Travis Vigil, senior vice president, portfolio and product management, Infrastructure Solutions Group, added:

"Dell and NVIDIA's long tradition of collaborating on next-generation GPU-accelerated data centers has already enabled massive breakthroughs. Now, through a solution that brings NVIDIA's powerful BlueField DPUs along with NVIDIA GPUs to our PowerEdge server platform, our continued collaboration will offer customers performance and security capabilities to help organizations solve some of the world's greatest challenges."

vSphere on BlueField DPUs will unlock hardware innovation helping customers meet the throughput and latency needs of modern distributed workloads. vSphere will enable this by offloading and accelerating network and security infrastructure functions onto DPUs from CPUs.

The BlueField DPUs will be managed through the iDRAC BMC with a special cable, allowing full out-of-band management versus a traditional edge-card installation. This means that the DPU in this deployment wont usually be a customer-installed device, but one that you select when the server is configured.

Customers running applications that demand high network bandwidth and fast cache access -- such as in-memory databases -- can expect to reduce the number of cores required but achieve better performance. Reducing the number of cores will also improve TCO. Offloading to DPUs can also result in a higher transaction rate with lower latency by leveraging freed CPU cores and better cache locality, all while benefitting from vSphere DRS and vMotion.

By running infrastructure services on DPUs and isolating them from the workload domain, vSphere on DPUs will boost infrastructure security. Additionally, now in beta, NSX Distributed Firewall will offload to DPUs to scale customers' security operations by securing East-West traffic at line rate without the need for software agents.

vSphere 8 will dramatically accelerate AI and machine learning applications by doubling the virtual GPU devices per VM, delivering a 4x increase in passthrough devices, and adding vendor device groups that allow binding of high-speed networking devices and the GPU.

Krish Prasad, VMware's senior vice president and general manager of the Cloud Platform Business Unit, explained:

"Dell PowerEdge servers built on the latest VMware vSphere 8 innovations, and accelerated by NVIDIA BlueField DPUs, provide next-generation performance and efficiency for mission-critical enterprise cloud applications while better protecting enterprises from lateral threats across multi-cloud environments."

As NVIDIA-Certified Systems, the Dell PowerEdge servers can run the NVIDIA and VMware AI-Ready Enterprise Platform, featuring the NVIDIA AI Enterprise software suite and VMware vSphere.

NVIDIA AI Enterprise is a comprehensive, cloud-native suite of AI and data analytics software optimized to enable organizations to use AI on familiar infrastructure. It is certified for any deployment, from the enterprise data center to the public cloud, and includes global enterprise support to keep AI projects on track.

An upcoming release of NVIDIA AI Enterprise will support new capabilities introduced in VMware vSphere 8, including the ability to support larger multi-GPU workloads, optimize resources and efficiently manage the GPU lifecycle.

With NVIDIA LaunchPad, enterprises can get access to a free hands-on lab of VMware vSphere 8 running on the NVIDIA BlueField-2 DPU.

Dell servers with vSphere 8 on NVIDIA BlueField-2 DPU will be available later in the year.

NVIDIA AI Enterprise with VMware vSphere is now available and can be experienced in the NVIDIA LaunchPad hands-on lab.

NAS vs. server: Which storage option should you choose? – TechTarget

Next to computer processing and applications, data storage is one of the most important IT activities. Data storage ensures all information created by a private user, small business or multinational corporation can be stored in a safe and secure location for future retrieval and use. Data storage technology is also essential from a DR perspective, as properly backed up files and information can help businesses recover operations if a disruption affects their ability to conduct business.

Before choosing a storage option, IT administrators should examine how NAS and server-based storage address planning, evaluating, selecting and implementing data storage.

NAS provides cost-effective and easy-to-implement options to increase storage capacity. As the term network-attached storage implies, the storage device attaches to a network -- most often, Ethernet or other TCP/IP-based networks -- and can be launched quickly into production. Other devices on the network use NAS storage capabilities. NAS devices are freestanding devices that typically have at least two bays into which storage modules are inserted. The more bays there are, the more storage can be implemented.

NAS devices typically come with their own OS and network interface software, so devices can be easily connected to an existing LAN, powered up and quickly placed into service. NAS devices are typically file-based, as opposed to server-based devices that can be either block- or file-based. This makes them compatible with most OSes. Capacities can range from a few terabytes to dozens of terabytes. NAS is ideal for individual users and SMBs that need easy-to-use storage with flexibility, convenience and moderate investments.

Server-based storage typically connects to a primary file server; uses the file handling functions of the server and its processing power; and connects either directly to the primary server(s) or via a network, such as Ethernet or a SAN designed for high-capacity data transfers among users and storage arrays. Other servers, such as application servers, coexist in the infrastructure.

Server storage is the vehicle of choice for large organizations because capacities can be dramatically expanded by adding more capacity to existing servers -- known as scale-up storage -- or by adding more physical storage servers to the infrastructure -- called scale-out storage. Server-based storage can support block and file storage formats, which makes it ideal for larger organizations with a variety of storage requirements.

Unlike NAS, storage servers are typically implemented in different forms, such as standalone towers or rack-based devices. In these situations, IT management must ensure the servers have enough power, are physically secure and are properly cooled. These and other criteria -- such as additional floor space to place server racks -- typically make server storage more expensive than NAS. Given the need to potentially manage dozens and possibly hundreds of storage arrays, the need for experienced storage technicians is significant; NAS devices are more of a DIY platform.

While the focus of this article is NAS and server-based storage, cloud-based storage offerings can accommodate virtually any storage requirement. NAS and server-based requirements can be implemented in cloud environments and offer the following advantages:

Cloud storage can serve as primary storage, secondary storage for specific operational requirements and backup storage for DR planning.

The table below provides a more detailed comparison of NAS and server storage based on specific criteria. It's important to note that, before any storage offering is selected, user and storage requirements -- now and in the near to long term -- must be defined, as well as the type of storage activities that will need to be supported, for example, primary storage or data backups.

Depending on the business requirements, both NAS and server-based storage can occupy the storage infrastructure. Both can be used for primary file storage for specific applications and backup storage for DR.

NAS can work alongside server storage, potentially for specialized requirements, such as secure data storage and retrieval.

Server storage can work alongside NAS; the key is to define the requirements and select the appropriate storage offering.

Apple eases subscription path to Xcode Cloud to keep devs in the ecosystem DEVCLASS – DevClass

Apple has opened subscriptions for Xcode Cloud, a continuous integration and delivery (CI/CD) service designed to work with Xcode, the official IDE for macOS and iOS development.

Xcode Cloud was introduced in June and is an add-on subscription for developers who are already signed up to the Apple Developer Program.

The cost starts at $14.99 per month for up to 25 compute hours, though this basic plan is free until the end of December 2022. A fee of $99 per year is still required for the developer program itself. Further compute hours are available at extra cost, for example 250 hours for $99 per month, and can now be obtained via the Apple Developer App.
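
For a rough sense of what those tiers work out to per compute hour, here is a quick calculation using only the figures quoted above, assuming each monthly fee covers exactly the stated number of compute hours:

```python
# Effective price per compute hour for the Xcode Cloud tiers quoted above,
# assuming each monthly fee buys exactly the stated number of compute hours.
tiers = {
    "25 hours for $14.99/month": (14.99, 25),
    "250 hours for $99/month": (99.00, 250),
}
for name, (price, hours) in tiers.items():
    print(f"{name}: about ${price / hours:.2f} per compute hour")
# 25-hour plan:  about $0.60 per compute hour
# 250-hour plan: about $0.40 per compute hour
```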

Xcode Cloud is based on workflows defined in Xcode. The core actions in a workflow are build, analyze, test, and archive. The service also supports post-actions, such as distributing a new version of an app, and custom build scripts. When a build completes, the artifacts (output from the build) are stored online for 30 days so they can be downloaded, for example by App Store Connect, the web-based tools Apple offers for managing apps in its Store, including those for iPhone, iPad, Mac and Watch. There is also a service called TestFlight, which is for distributing preview releases to testers.

Apple considers these three services -- Xcode, TestFlight, and App Store Connect -- the core elements of its CI/CD system.

The service works in conjunction with a git repository, which must be one of either Bitbucket, GitHub or GitLab, though self-managed instances are supported as well as cloud-hosted. Xcode Cloud clones a repository temporarily onto its own servers, though Apple says: "It doesn't store your source code and securely handles any stored data -- for example, your derived data -- and keeps it private."

Xcode Cloud is all about keeping developers within the Apple ecosystem. CI/CD is widely adopted, and without Xcode Cloud devs will use competing systems such as GitHub Actions or CircleCI. The advantage of Xcode Cloud is its integration.

"I liked that with a single git push I could compile, archive, deploy to TestFlight, and send for beta review. I even pushed a fix from my iPhone using Working Copy one time while I was on a train," said one developer on Hacker News.

Developers who work entirely with Apple products may be pleased, but the company seems uninterested in scenarios such as cross-platform development, or developing web applications on a Mac, or using an IDE other than Xcode. Another disappointment is that Apple's cloud build service does not enable development of Mac or iOS software from non-Mac computers.

Public cloud to double to $90b by 2025 in nation – China Daily

China's next wave of cloud migration is expected to be spearheaded by critical industrial and manufacturing sectors. [Photo/VCG]

China's next wave of cloud migration is expected to be spearheaded by critical industrial and manufacturing sectors, and the country's public cloud market will more than double from $32 billion in 2021 to $90 billion by 2025, said global management consulting firm McKinsey & Company.
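
McKinsey's figures imply a compound annual growth rate that the article does not state explicitly. A quick sketch of the arithmetic, assuming the $32 billion and $90 billion figures bracket a four-year span from 2021 to 2025:

```python
# Implied CAGR for China's public cloud market from the figures above:
# $32 billion in 2021 growing to $90 billion by 2025, treated as a
# four-year span (an assumption about how the endpoints are counted).
start, end, years = 32.0, 90.0, 4
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 29.5% per year
```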

According to the latest report from McKinsey, despite a relatively late start, China has made enormous progress in terms of cloud migration speed and has become the world's second-largest cloud market.

Over the next few years, the speed of cloud migration in China will be broadly in line with the rest of the world, with a 19-percentage-point increase expected in IT workloads shifting to the cloud between 2021 and 2025.

However, China differs from other countries in its high proportion of private cloud, which is expected to reach 42 percent by 2025, compared with 36 percent for the public cloud.

McKinsey's survey suggested that only 11 percent of the companies surveyed plan to be mostly on the public cloud. The remainder will continue to use a private cloud with traditional servers or use a hybrid cloud.

"Cloud adoption is strongly correlated with digital transformation. By 2025, 78 percent of all IT workloads will be on cloud in China," said Kai Shen, partner at McKinsey. "But when we look across the cloud adoption of business use cases with P&L impact, we find that adoption rates are much lower at between 0 percent to 25 percent."

P&L is an indicator that can show a company's ability to increase its profit, either by reducing costs and expenses or increasing sales.

"It demonstrates that Chinese companies still have enormous opportunities to develop, adopt and scale use of cloud, for example in dynamic pricing and personalization, digital twins and three-dimensional simulations, sales forecasting and inventory optimization," he said.

In terms of industries, the report also pointed out that sectors with numerous tech-savvy and digital-native companies, such as e-commerce and education, have already shifted a significant portion of their IT workloads to the cloud in China.

Labor-intensive industrial and manufacturing sectors, on the contrary, have not done that. But that could quickly change given the latest national policy guidance, it added.

The future potential of the cloud – IT-Online

Despite an increased awareness by decision-makers of cloud computing, many organisations around the world are yet to fully adopt it.

By Frik van der Westhuizen, CEO of EQPlus

Research shows that 41% of companies in the EU used cloud computing last year, predominantly for email and file storage. Meanwhile, in EMEA, revenue from cloud-related services is expected to approach $400 billion by 2025, up from $330 billion in 2020. With so much potential in the cloud, decision-makers will have no choice but to embrace it.

Improving cost management and making businesses more agile for a digitally driven market have been the most common benefits of going the cloud route for some time.

While these are still relevant today, the cloud also provides a platform for the development of advanced technologies that can foster innovation within an organisation. For example, the cloud supports automation that, when combined with no-code applications, makes it possible for non-technical people to create a range of digital services catering to an increasingly sophisticated customer base.

Evolving environment

But while the likes of artificial intelligence (AI) and automation can help businesses across industry sectors drive innovation, they also provide malicious users with the means to perpetrate more sophisticated cyberattacks. This results in a perpetual chicken-and-egg situation when it comes to organisational defences: a company must use the cloud to defend itself while hackers exploit those same solutions for attack.

By 2025, it is anticipated that 80% of companies globally will be using the cloud. Of those, 84% will leverage a multi-cloud approach to benefit from specific service provider advantages. As part of this, the cloud provides the means to more easily create tailored solutions that address virtually any business need.

This modular approach can be seen in the increasing adoption of Kubernetes and containers to provide more agile and affordable ways for businesses to deliver micro-services.

Data at scale

Linking the cloud, big data and the Internet of Things (IoT) will provide business and technology leaders with even more opportunities to grow. The increasing number of IoT devices generates more data, which cloud and edge computing services can analyse at scale. Doing so faster than was possible before empowers organisations to adapt rapidly to market demands.

The exponential growth of data has resulted in companies struggling to make sense of it. This presents massive untapped potential at a time when even the smallest iterative change can make a significant difference to competitive advantage. The cloud provides the means to not only store this data, but also analyse it at speed. When combined with AI and automation, decision-makers can access improved insights that can create much-needed differentiation.

Sustainability

One thing which South African companies will look to exploit when it comes to the cloud is its ability to help position sustainable business practices. The uncertainties around the stability of the national electricity grid mean businesses can turn to the cloud as a safe haven for their mission-critical systems.

There is therefore no need to run powerful (and energy expensive) on-premises servers. Instead, the cloud provides the means to optimise physical technology resources. The cloud does not compromise on flexibility, the ability to scale as needed, and the availability of resources on-demand. Companies can remain focused on realising their strategic mandate and worry less about energy efficient on-premises resources.

The cloud is here to stay. But that does not mean it will stagnate. If anything, it will continue to drive innovation and the adoption of more sophisticated solutions to harness new opportunities in a rapidly evolving market.

Underwater data centres are coming. Can they slash CO2 emissions and make the Internet faster? – Euronews

The first US commercial subsea data centre is set to touch the seabed of the Pacific Ocean by the end of this year.

The Jules Verne Pod is scheduled for installation near Port Angeles, on the northwestern coastline of the United States, and could revolutionise how servers are run.

The pod, which is similar in size to a 6-metre shipping container, will accommodate 800 servers and be just over nine metres underwater. The innovation is intended to reduce carbon emissions by 40 per cent.

"We are environmentally conscious and we take full advantage of sustainability opportunities, including energy generation, construction methods and materials," Maxie Reynolds, founder of Subsea Cloud, the company making the pods, told Euronews Next.

In February, Subsea Cloud said their first 10 pods would aim to offset more than 7,683 tons of CO2 in comparison with an equivalent land-based centre by reducing the need for electrical cooling.

At the time, Subsea said their data centres would be aimed at healthcare, finance, and the military in the US.

Data centres are used to centralise shared information technology (IT) operations and are essential for the running of our daily lives; the Cloud, Google, and Meta all have data centres that they use to run their products.

Data centres are currently built on land, sometimes in rural areas far from big populated areas.

The Jules Verne Pod comes after a previous government project by Subsea and will be followed by the Njord01 pod in the Gulf of Mexico and the Manannan pod in the North Sea.

The Gulf of Mexico pod depth will likely be around 250m, while the North Sea depth will likely be around 200m.

A seabed data centre is also planned by Chinese company Highlander off the coastal city of Sanya, on Hainan Island.

Subsea says the seabed data centres cost 90 per cent less to make compared with land-based operations.

"The savings are the result of a smaller bill of materials, less complexities in terms of deployment and maintenance, too," said Reynolds.

"It's complex and costly to put in the infrastructure in metropolitan areas and actually in rural areas too. There are land rights and permits to consider and labour is slower and can be more expensive."

For example, Reynolds said installing and burying a subsea cable takes about 18 minutes and costs about 1,700 for each mile (1.6 km) of cable. On land, it would take about 14 days and cost about 165,000 per mile.
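
Taking Reynolds' per-mile figures at face value, the cost and time ratios are easy to work out. This small sketch uses only the numbers quoted above, and the comparison holds in whatever currency the costs are denominated:

```python
# Ratio of land-based to subsea cable installation cost and time, using the
# per-mile figures quoted above. The units cancel, so the comparison does not
# depend on the currency.
subsea_cost_per_mile = 1_700
land_cost_per_mile = 165_000
subsea_minutes_per_mile = 18
land_minutes_per_mile = 14 * 24 * 60      # 14 days

print(f"Land costs ~{land_cost_per_mile / subsea_cost_per_mile:.0f}x as much per mile")
print(f"Land takes ~{land_minutes_per_mile / subsea_minutes_per_mile:.0f}x as long per mile")
# roughly 97x the cost and about 1,120x the time
```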

The feasibility of underwater data storage was proved by Microsoft in 2020.

In 2018, the software giant launched Project Natick, dropping a data centre containing 855 servers to the seabed off Orkney, an archipelago on the northeastern coast of Scotland.

Two years later, the centre was reeled up to reveal that only eight servers were down, when the average on land in the same time frame would have been 64.

As well as reducing costs and their environmental footprint, underwater data centres could also provide a faster internet connection.

Subsea claims that latency - or data lag - can be reduced by up to 98 per cent with its underwater pods.

"Latency is a byproduct of distance, so the further these data centres get away from metropolitan areas, the more of it is introduced," said Reynolds.

Around 40 per cent of the world's population lives within 100 km of a coast, and in major urban coastal centres like Los Angeles, Shanghai, and Istanbul, the installation of Subsea's data centres could vastly improve how people use their devices.

Signals travel at 200km/millisecond and the average data centre is 400km away from an internet user, meaning a round trip takes 40 milliseconds. This could be reduced by up to 20 times to 2 milliseconds by Subsea's pods due to the reduced distance.

"My competitors are the inefficient, wasteful, timeworn data centres that create long-lasting business, environmental and societal problems from the day they are built," said Reynolds.

"Players in the subsea data centre space are, for now at least, long-distance allies rather than competitors. We need one another if we are to reshape and redesign the industry for the better."

Bringing AWS-Style DPU Offload To The VMware Base – The Next Platform

Databases and datastores are by far the stickiest things in the datacenter. Companies make purchasing decisions that end up lasting for one, two, and sometimes many more decades because it is hard to move off a database or datastore once it is loaded up and feeding dozens to hundreds of applications.

The second stickiest thing in the datacenter is probably the server virtualization hypervisor, although this stickiness is more subtle in its inertia.

The choice of hypervisor depends on the underlying server architecture, of course, but inevitably the management tools that wrap around the hypervisor and its virtual machines end up automating the deployment of systems software (like databases and datastores) and the application software that rides on top of them. Once an enterprise has built all of this automation with VMs running across clusters of systems, it is absolutely loath to change it.

But server virtualization has changed with the advent of the data processing unit, or DPU, and VMware has to change with the times, which is what the Project Monterey effort with Nvidia and Intel is all about.

The DPU offload model enhances the security of platforms -- particularly network and storage access -- while at the same time lowering the overall cost of systems by dumping the network, storage, and security functions that would have been done on the server onto that DPU, thus freeing up CPU cores on the server that would have been burdened by such work.

Offload is certainly not a new concept to HPC centers, but the particular kind of offload the DPU is doing is inspired by the Nitro family of SmartNICs created by Amazon Web Services, which have evolved into full-blown DPUs with lots of compute of their own. The Nitro cards are central to the AWS cloud, and in many ways, they define the instances that AWS sells.

We believe, as do many, that in the fullness of time all servers will eventually have a DPU to better isolate applications from the control plane of the cluster that provides access to storage, networking, and other functions. DPUs will be absolutely necessary in any multi-tenant environment, but technical and economic benefits will accrue to those using DPUs on even single-node systems.

With the launch of the ESXi 8 hypervisor and its related vSphere 8 management tools, Nvidia and VMware have worked to get much of the VMware virtualization stack to run on the Arm-based server cores of the BlueField-2 DPU line, virtualizing the X86 systems that the DPU is plugged into. Conceptually, this forms the next generation of VMware's Cloud Foundation stack.

With the Nitro DPUs and a homegrown KVM hypervisor (which replaced a custom Xen hypervisor that AWS used for many years), AWS was able to reduce the amount of server virtualization code running on the X86 cores in its server fleet down to nearly zero. Which is the ultimate goal of Project Monterey as well. But as with the early Nitro efforts at AWS, shifting the hypervisor from the CPUs to the DPU took time and happened in steps, and Kevin Deierling, vice president of marketing for Ethernet switches and DPUs at Nvidia, admits to The Next Platform that this evolution will take time for Nvidia and VMware as well.

"I think it is following that similar pattern, where initially you will see some code running on the X86 and then a significant part being offloaded to the BlueField DPUs," Deierling explains. "Over time, I think you will see more and more of that being offloaded, accelerated, and isolated to the point where, effectively, it's a true bare metal server model where nothing is running on the X86. But today, there's still some software running out there."

The BlueField-2 DPU includes eight 64-bit Armv8 Cortex-A72 cores for local compute as well as two acceleration engines, a PCI-Express 4.0 switch, a DDR4 memory interface, and a 200 Gb/sec ConnectX-6dx network interface controller. That NIC interface can speak 200 Gb/sec Ethernet or 200 Gb/sec InfiniBand, as all Nvidia and prior Mellanox NICs for the past many generations can. That PCI-Express switch is there to provide endpoint and root complex functionality, and we are honestly still trying to sort out what that means.

It is not clear how many cores the vSphere 8 stack is taking to run on a typical X86 server without a DPU or how many cores are cleared up by running parts of the vSphere 8 stack on the BlueField-2 DPU. But Deierling did illustrate the principle by showing the effect of offloading virtualized instances of the NGINX Web and application server from the X86 CPUs to the BlueField-2.

In this case, NGINX was running on a two-socket server with a total of 36 cores; eight of those cores were running NGINX, and their work could be offloaded to the Arm cores on the BlueField-2 DPU, with various security and networking functions related to the Web server also accelerated. The performance of NGINX improved, and the latency of Web transactions dropped. Nvidia presented its own calculation of the return on investment for this scenario.

Deierling says that using the DPU offered a near-immediate payback and made the choice of adding a DPU to systems a no-brainer.

We don't know which editions of the vSphere 8 stack -- Essentials, Standard, Enterprise Plus, and Essentials Plus -- are certified to offload functions to the BlueField-2 DPU, and we don't know what a BlueField-2 DPU costs either. So it is very hard for us to reckon specifically what ROI running virtualization on the DPU might bring.

But even if the economics of the DPU were neutral -- if the cost of the X86 cores freed up was the same as the cost of the BlueField-2 DPU -- it still makes sense to break the application plane from the control plane in a server to enhance security and to accelerate storage and networking. And while the benefits of enhanced security and storage and networking acceleration will be hard to quantify, they might even be sufficient for IT organizations to pay a premium for a DPU instead of just using a dumb NIC or a SmartNIC.
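
As a rough illustration of that break-even logic, here is a back-of-the-envelope sketch. The per-core and per-DPU prices are hypothetical placeholders -- not Nvidia, Dell or VMware pricing -- and the 8-of-36-core figure is taken from the NGINX example above; this is not Nvidia's actual ROI model.

```python
# Back-of-the-envelope DPU offload economics. Prices are hypothetical
# placeholders, not vendor list prices; the 8-of-36 core figure comes from
# the NGINX example described in the article.
cores_per_server = 36
cores_freed_by_offload = 8        # cores no longer tied up on NGINX/infrastructure work
cost_per_x86_core = 300.0         # assumed amortized cost per core
dpu_cost = 2000.0                 # assumed DPU price

value_of_freed_cores = cores_freed_by_offload * cost_per_x86_core
net = value_of_freed_cores - dpu_cost
print(f"Share of server freed up: {cores_freed_by_offload / cores_per_server:.0%}")
print(f"Value of freed CPU cores: {value_of_freed_cores:,.0f}")
print(f"Net versus DPU cost:      {net:,.0f}")
# The sign of the net depends entirely on the assumed prices; the harder-to-quantify
# part of the ROI is the extra throughput, lower latency, and isolation of the
# control plane discussed above.
```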

Here is one case in point that Deierling brought up just as an example. For many years, hyperscalers and cloud builders did not have security across the east-west traffic between the tens to hundreds of thousands of servers interlinked in their regions, which constitute their services. The DPU was invented in part to address this issue, encrypting data in motion across the network as application microservices chatter. A lot of hyperscalers and cloud builders as well as other service providers, enterprise datacenters, and HPC centers similarly are not protecting data in transit between compute nodes. It has just been too expensive and definitely was not off the shelf.

With Project Monterey, Nvidia and VMware are suggesting that organizations run VMware's NSX distributed firewall and NSX IDS/IPS software on the BlueField-2 DPU on every server in the fleet. (The latter is an intrusion detection system and intrusion prevention system.) The idea here is that no one on any network can be trusted, outside the main corporate firewall and inside of it, and the best way to secure servers and isolate them when there are issues is to wrap the firewall around each node instead of just around each datacenter.

The NSX software can make use of the Accelerated Switching and Packet Processing (ASAP2) deep packet inspection technology that is embedded in the Nvidia silicon, which is used to offload packet filtering, packet steering, cryptography, stateful connection tracking, and inspection of Layer 4 through Layer 7 network services to the BlueField-2 hardware.

The first of the server makers out the door with the combined VMware stack and Nvidia BlueField-2 is Dell, which has certified configurations of its PowerEdge R650 and R750 rack servers and its VxRail hyperconverged infrastructure with the Nvidia DPUs and vSphere 8 preinstalled to offload as much work as possible to those DPUs. These systems will be available in November. Pricing is obviously not available now; hopefully it will be when the systems start shipping, so we can figure out the real ROI of DPU offload for server virtualization. The numbers matter here. In a way, the ROI will pay for enhanced security for those who have to justify the added complexity and cost. Those who want the enhanced security at nearly any cost won't care as much about the DPU ROI. The trick for VMware and Nvidia is to price this low enough that it is indeed a no-brainer.

Cloud Performance Management Market Worth $3.9 Billion By 2027 Exclusive Report by MarketsandMarkets – Benzinga

Chicago, Aug. 30, 2022 (GLOBE NEWSWIRE) -- The Cloud Performance Management Market is projected to grow from USD 1.5 billion in 2022 to USD 3.9 billion by 2027, at a Compound Annual Growth Rate (CAGR) of 17.6% during the forecast period, according to a new report by MarketsandMarkets. The major factors driving the growth of the Cloud Performance Management market include the increasing demand for AI, big data, and cloud solutions.

Browse in-depth TOC on "Cloud Performance Management Market": 233 Tables, 47 Figures, 225 Pages

Large Enterprises segment to hold the highest market size during the forecast period

Organizations with more than 1,000 employees are categorized as large enterprises. The traction of cloud performance management in large enterprises is said to be higher than in SMEs, as they are adopting cloud performance management solutions to improve business operational efficiency across regions.

The increasing deployment of SaaS offerings such as customer relationship management, human capital management, enterprise resource management, and other financial applications creates an advantageous environment for cloud monitoring adoption, particularly in large organisations looking to improve their overall cloud systems, strengthen cloud monitoring, and sustain themselves amid intense competition. Large enterprises introspect and retrospect on implementing best practices to ensure effective performance management. CMaaS (Cloud-Monitoring-as-a-Service) is a popular software solution for large businesses seeking a fully managed cloud monitoring service for cloud and virtualized environments. These solutions are provided by third-party providers and are monitored 24 hours a day by IT experts with access to the most recent APM technologies and services.

Banking, Financial Services, and Insurance to record the fastest growth during the forecast period

The BFSI vertical is crucial as it deals with financial data. Economic changes significantly affect this vertical. Regulatory compliances and the demand for new services have created an environment where financial institutions are finding cloud computing more important than ever to stay competitive. A recent worldwide survey on public cloud computing adoption in BFSI states that 80% of financial institutions are considering hybrid and multi-cloud strategies to avoid vendor lock-in. Such strategies give these critical financial institutions the much-needed flexibility to switch to alternate public cloud operators in case of an outage, avoiding any interruptions in their services.

New competitors, new technologies, and new consumer expectations are impacting the BFSI sector. Digital transformation provides organizations access to new customer bases and offers enhanced visibility into consumer behaviour through advanced analytics, which helps organizations in creating targeted products for their customers. Most banks are adopting cloud performance management solutions owing to their benefits, such as configuration management and infrastructure automation to increase stability, security, and efficiency. The BFSI business is expected to hold a significant share of the cloud performance management market due to different advantages offered by cloud-based technologies, such as improved performance, reduced total cost of ownership, improved visibility, and standard industry practices. Cloud performance management is adopted for mission-critical industry verticals, such as BFSI, extensively to improve revenue generation, increase customer insights, contain costs, deliver market-relevant products quickly and efficiently, and help monetize enterprise data assets.

Asia Pacific is projected to register the highest CAGR during the forecast period

The Asia Pacific region comprises emerging economies, such as China, Japan, Australia and New Zealand, and the rest of Asia Pacific. The demand for managed cloud and professional services is growing, particularly in countries with a mature cloud landscape, such as Japan. This is due to the increasing migration of complex Big Data and workloads such as enterprise resource planning (ERP) to cloud platforms. The expansion of open source technologies, as well as advancements in API-accessible single-tenant cloud servers, also helps to promote acceptance of managed private cloud providers. Furthermore, with the rise of the Internet of Things (IoT), the cloud is becoming increasingly important in enabling the development and delivery of IoT applications. To deal with the data explosion, more businesses in Asia-Pacific are redesigning their networks and deploying cloud services.

The huge amount of data leads to complexity in managing workloads and applications manually, which acts as a major factor in the adoption of cloud performance management solutions among enterprises in this region. Also, the affordability and ease of deployment of cloud performance management solutions act as driving factors for the adoption of cloud technologies among enterprises. The increasing trend toward cloud-based solutions is expected to trigger the growth of the cloud performance management market in this region. Integration of the latest technologies, such as AI, analytics, and ML, drives the demand for cloud performance management solutions in the region. The availability of advanced and reliable cloud infrastructure presents attractive opportunities for cloud-based technologies. An increase in investments in Asia Pacific by giant cloud providers such as Google is a driver for the growth of the CPM market in this region. Strong technological advancements and government initiatives have driven the cloud performance management market. Increases in urbanization, technological innovation, and government support for the digital economy with suitable policies and compliance (regulations) have also driven the cloud performance management market.

Market Players

Some prominent players across all service types profiled in the Cloud Performance Management Market study include Microsoft (US), IBM (US), HPE (US), Oracle (US), VMware (US), CA Technologies (US), Riverbed (US), Dynatrace (US), AppDynamics (US), BMC Software (US).
