
Cloud Computing Optical Component Market 2021: Applications, Market Assessments, Key Players Analysis, Trends, And Forecasts To 2027 – Energy Siren


The Cloud Computing Optical Component market study is an analytical report that incorporates a close study of the market. The analysis covers market trends, market shares, revenue, and competitors, and estimates the size of the marketplace for organizations around the world. Every segment is thoroughly studied to give buyers and stakeholders a more realistic picture. The research offers a full study of the value chain to present a holistic picture of the Cloud Computing Optical Component industry. Industry participants, from raw material suppliers to end users, are examined in the value chain study. Primary and secondary research was used to produce the market estimates and predictions, and the report includes forecasts based on detailed research as well as an estimate of the industry's evolution based on previous studies. The research provides intensive market analysis for the period under consideration. The market is split into various segments, each of which includes an in-depth study of the competition as well as a list of the major players.

Get a Sample Report of Cloud Computing Optical Component Market @ https://www.intelligencemarketreport.com/report-sample/140638

Major Key Players Included in this report are:

The report covers top manufacturers, revenue, and pricing, as well as industry sales channels, traders, dealers, distributors, research findings, company strengths, and innovations. It details the size, growth, supply, demand, share, innovations, and current developments of every segment. A well-prepared business report also recommends corrective steps if a company fails to meet its goals, and it paints a picture of upcoming business openings and market factors.

Cloud Computing Optical Component Market Segmentation

In this evaluation, the market has been divided into several segments, including product type, application, end user, and geography. Every market segment is assessed in terms of CAGR, market share, and future growth potential. The regional analysis in the study identifies a promising region that is projected to produce opportunities in the global market in the coming years. This segmented analysis will help readers, stakeholders, and industry participants gain an in-depth perspective of the worldwide market and its growth potential in the following years.

Cloud Computing Optical Component Market Report Scope

Cloud Computing Optical Component Market Segmentation, By Type

Cloud Computing Optical Component Market Segmentation, By Application

Competitive Scenario

The study illuminates the competitive environment of the market, helping readers to gauge competitiveness on both a regional and a global scale. Market researchers have also forecasted each global market leader's future prospects, considering vital factors such as operating regions, production, and product range. From inception to growth, the company profiles cover all of the key areas of the market. The market is thoroughly investigated in terms of product offerings, major financial issues, SWOT analysis, innovations, and methodologies.

Do you have any specific requirement about this report?

Ask your query @ https://www.intelligencemarketreport.com/send-an-enquiry/140638

Regional Overview

The fundamental and secondary drivers of global business, as well as the top economies, market share, trends, and regional market conditions, are all examined in this report. A complete analysis of value and volume at the global, business, and regional levels is included in the global Cloud Computing Optical Component market study. In a similar vein, the study estimates the global market size based on historical data and projected outcomes.

Regional Coverage

Table of Contents - Major Key Points

Buy 1-User PDF of Cloud Computing Optical Component Market @ https://www.intelligencemarketreport.com/checkout/140638

Contact Us: Akash Anand, Head of Business Development & Strategy, [emailprotected], Phone: +44 20 8144 2758

See the rest here:
Cloud Computing Optical Component Market 2021: Applications, Market Assessments, Key Players Analysis, Trends, And Forecasts To 2027 - Energy Siren

Read More..

Why a Wider Net is Now Needed to Find the Right Software Engineer – Entrepreneur

Opinions expressed by Entrepreneur contributors are their own.

Over the past two years, the world has evolved to a new normal after the COVID-19 pandemic paralyzed or otherwise destabilized economies, including governments closing down economic systems through strict measures. From the cessation of international flights to the enforcement of lockdowns, businesses had to close their premises and employees had to transition to working from home. As a result, browsing traffic and the use of other online tools (such as video conferencing and streaming services) shot up drastically, with a corollary increase in traffic for IT firms and cloud computing. Every firm geared up to hire more IT specialists to boost capacity to handle demand. According to the economic data analytics advisors, Emsi, cloud computing job postings grew more than 90% between 2017 and 2020, a whopping four times more than the overall tech job growth! Neo4j Inc., a cloud database company, almost doubled its employees during the pandemic.

It follows that getting hold of the best cloud computing talent is imperative for such IT companies to stay competitive, but it hasn't been easy; there's an acute shortage of cloud-computing engineers, largely due to the high demand.

Related: Cloud Computing Will Be a Goldmine in the Post Covid-Era

Competition for top talent in cloud computing is not only happening among talented engineers but also at the top executive levels of some of the biggest IT firms in the industry. Microsoft recently hired former Amazon senior vice president, Charlie Bell, further compounding the rivalry between the two tech giants. Last year, Amazon also saw its former vice president of marketing, Brian Hall, join Google's cloud unit.

It is also increasingly difficult for IT firms to retain individuals with cloud computing skills. Most software engineers quickly get two or three strong offers with great deals on the table, including very competitive salaries. In a move to retain their workforces, some companies in Washington state, a key business storefront for various technology giants (including Amazon and Microsoft) and home to large cloud development centers, have had to enforce non-compete agreements that bar employees from joining or accepting offers from competing firms.

It can be quite difficult to compete with large firms in this realm if you're a small to mid-sized company, but there might be a solution to this dilemma: outsource help from nearshore software development companies (nearshore referring to countries in relatively close proximity), and perhaps accrue additional benefits in the process.

First, working with a software company from a nearby country (for U.S. businesses, English-speaking Latin America is a good place to start) widens the net, meaning you'll expand your potential employee base and increase the chances of getting top-notch talent. Nearshoring also means that a business can greatly reduce (or eliminate) software operation costs. Having such an outsourcing team in the same time zone also means traveling back and forth for in-person project meetings will be a relatively easy proposition.

Related: The Most Popular Countries for Low-Cost Software Development Outsourcing

The rest is here:
Why a Wider Net is Now Needed to Find the Right Software Engineer - Entrepreneur

Read More..

VMware salaries revealed: How much the cloud computing firm is paying engineers, managers, and salespeople as it navigates a future without Dell -…


VMware is navigating a new future as a standalone company - and it's expanding its ranks to do so.

The cloud computing giant finalized its $9.6 billion spinout from parent company Dell this week, not long after new executives took the helm. VMware CEO Raghu Raghuram, who assumed the position in May, is aiming to generate more revenue by selling software-as-a-subscription and growing the company's security and edge computing units. VMware recently hired - and is continuing to hire - a slew of new engineers, salespeople, and analysts to execute that vision.

To find out how much VMware is paying new hires, Insider analyzed salaries of approved H-1B visas published by the US Office of Foreign Labor Certification. This data is drawn from 951 approved visa applications for VMware workers hired in the last 12 months.

The data comes with some caveats: It's based on pay to foreign workers whose visas were sponsored by VMware, and includes only base pay, not total compensation, which can include bonuses or stock. However, it still provides a rare glimpse into salary ranges that are usually kept private.
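For readers curious how raw visa-disclosure records turn into the per-title figures listed below, here is a minimal sketch of the kind of aggregation involved. The file layout and column names are assumptions for illustration only, not the actual OFLC schema or Insider's methodology.

# Hypothetical sketch: turning H-1B disclosure records into median base pay by
# job title and state. Column names (EMPLOYER_NAME, JOB_TITLE, WORKSITE_STATE,
# WAGE_RATE_OF_PAY_FROM) are assumptions and may not match the real OFLC layout.
import csv
from collections import defaultdict
from statistics import median

def salary_ranges(path, employer="VMWARE"):
    buckets = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if employer not in row["EMPLOYER_NAME"].upper():
                continue
            key = (row["JOB_TITLE"].title(), row["WORKSITE_STATE"])
            pay = float(row["WAGE_RATE_OF_PAY_FROM"].replace("$", "").replace(",", ""))
            buckets[key].append(pay)
    # Median base pay per (title, state) pair, highest first.
    return sorted(((t, s, median(p)) for (t, s), p in buckets.items()), key=lambda x: -x[2])

if __name__ == "__main__":
    for title, state, pay in salary_ranges("lca_disclosures.csv"):
        print(f"{title} ({state}): ${pay:,.0f}")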

A VMware spokesperson did not immediately respond to a request for comment.

Do you work at VMware? Got a tip? Contact this reporter securely via email at aholmes@businessinsider.com or via the encrypted-messaging app Signal at 706-347-1880 using a nonwork phone.


Staff engineer 2 (California): $276,995

Staff engineer (California): $248,400

Staff engineer (Texas): $229,900

Staff engineer, VMware cloud (California): $225,000

Systems engineer (California): $185,000

Lead applications developer (California): $165,774

Member of technical staff (California): $160,000

Application support engineer (Georgia): $80,059


Lead application analyst (California): $188,020

Senior business systems analyst (California): $166,799

Advisory data analyst (California): $152,963

Business systems analyst (California): $150,000

Programmer analyst (Texas): $132,500

Business systems analyst (Texas): $95,716


Senior business strategy manager (California): $242,000

Service management business process architect (California): $167,360

Senior manager, advanced analytics and business intelligence (California): $159,411

Business systems analyst, finance (California): $150,000

IT business systems analyst (California): $146,000

Business and data analysis manager (Georgia): $100,000

Business system analyst (Texas): $95,716


Senior director, experience platform and engineering (California): $310,000

Senior director (California): $296,707

Group manager, cloud products (California): $246,400

Senior manager, R&D (California): $245,057

Director of engineering (California): $241,920

Senior product line marketing manager (California): $240,000

Read more from the original source:
VMware salaries revealed: How much the cloud computing firm is paying engineers, managers, and salespeople as it navigates a future without Dell -...

Read More..

IIT Jodhpur along with WileyNXT invites applications for its first-ever online PG Diploma with academic credits in Data Engineering and Cloud…


New Delhi [India], November 10 (ANI/NewsVoir): Indian Institute of Technology, Jodhpur (IIT Jodhpur) along with its knowledge partner, WileyNXT, Wiley's innovative bridge learning solution, has announced the launch of its first online Post Graduate Diploma Program in Data Engineering and Cloud Computing with academic credits.

For this exclusive, unique, and specialised PG Diploma by IIT-Jodhpur, Wiley is also facilitating a Career Assurance Program for successful candidates. After completion of the program, Wiley will help these PG Diploma holders find relevant jobs. Developed by IIT Jodhpur in collaboration with Wiley Innovation Advisory Council (WIAC) industry practitioners, the program is set to commence its first batch from December 5, 2021.

The 12-month live online IIT-Jodhpur PG Diploma program includes 600+ learning hours led by the esteemed IIT Jodhpur faculty and practical sessions led by Wiley's industry experts. The PGD program in Data Engineering & Cloud Computing is carefully designed for current as well as prospective software and technology professionals aspiring to a high-growth career in the domain. It aims to hone in-demand skills like Big Data Engineering, Cloud Computing and Machine Learning along with tools and technologies like Python and SQL. In addition, the program also focuses on specialisations such as retail and financial analytics.

The International Data Corporation (IDC) report states that Indian Public Cloud Services Market revenue is set to reach $9.5 billion by 2025, growing at a compound annual growth rate (CAGR) of 21.5 per cent. Another credible industry report states an increase in the salary of Data Engineers by 4.7%, from $113,249 in 2019 to $118,621 in 2020. For Cloud Engineers, there was a growth of 6.3% from 2019 to an average salary of $136,479. This unprecedented growth reflects the increasing importance of data and cloud engineering roles.
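As a rough illustration of the arithmetic behind those figures, the sketch below back-computes the implied baselines. The assumption that the 21.5 per cent CAGR runs over the five years from 2020 to 2025 is ours, not IDC's; the output is indicative only.

# Back-of-the-envelope checks on the growth figures quoted above (illustrative only).

def cagr(start, end, years):
    # Compound annual growth rate between two values.
    return (end / start) ** (1 / years) - 1

# Data engineer salaries cited: $113,249 (2019) -> $118,621 (2020).
print(f"Data engineer year-on-year growth: {cagr(113_249, 118_621, 1):.1%}")   # ~4.7%

# A cloud engineer average of $136,479 in 2020 after 6.3% growth implies the 2019 base.
print(f"Implied 2019 cloud engineer average: ${136_479 / 1.063:,.0f}")

# A market reaching $9.5B in 2025 at a 21.5% CAGR implies roughly this 2020 base.
print(f"Implied 2020 market size: ${9.5 / 1.215 ** 5:.1f}B")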

Commenting on the launch, Philip Kisray, SVP and GM, International Education, Wiley said, "When it comes to innovation in education and research, Wiley has always been a leader leading from the forefront. With our esteemed partner IIT-Jodhpur, we are extremely delighted to have begun a revolution in the online education space by introducing our first online PG Diploma with academic credits. We are certain that this program will prove to be a great success among all our prospective learners and candidates."

Dr. Gaurav Bhatnagar, Associate Professor, Department of Mathematics, Indian Institute of Technology, Jodhpur said, "Industry experts suggest that for every 3 data scientists in an organisation, there is a need for 10 data engineers. Alongside, businesses are increasingly depending on Cloud services and models, and it is therefore that the IT sector is always on the lookout for competitive and specialised skills. Attuned to the market demand, we are happy to collaborate with our knowledge partner WileyNXT and launch an online PG Diploma with specialisation in Data Engineering and Cloud Computing. Given the benefits associated with the program, such as academic credits and placement assurance, we are hopeful that the program will enable professionals to create the desired impact in their respective careers."

Dr. Dip Sankar Banerjee, Assistant Professor, Department of Computer Science and Engineering, Indian Institute of Technology, Jodhpur said, "Data engineering and cloud computing are among the most sought-after skills in the industry. In the current decade, data science and cloud computing have directly generated the highest number of jobs in the IT and services industry. However, there is a huge skill gap that persists amidst the existing workforce. With our exclusive and unique PG Diploma program powered by WileyNXT, we aim to contribute our bit in filling the gap by training skilled talent and providing assured placements to them. The main targets of the course will be to train the next generation of professionals in the necessary theoretical and hands-on skills with sufficient emphasis on real-world applications and problems."

Aspirants with a bachelor's degree in engineering or science (4-year program) or a master's degree in science, MCA or a related field, with a minimum of 50% score/CGPA of 5.0 on a scale of 10 (with corresponding proportional requirements), are eligible for the course. Additionally, the applicant must have a minimum of 2 years of work experience (after the qualifying degree) in industry/Research & Development laboratories/Academic Institutions.

IIT-Jodhpur & WileyNXT have begun inviting applications for the program and the deadline to apply is November 30, 2021. The competitive program has a rigorous screening process. Qualifying in the written test conducted by IIT Jodhpur or its appointed partner is a prerequisite to get enrolled in the program. On successful completion of the program, the candidates will be provided with a certification of Post Graduate Diploma by IIT Jodhpur and a digital certificate by WileyNXT.

For any further information on the Data Engineering & Cloud Computing program, click here.

Indian Institute of Technology Jodhpur was established in 2008, to foster technology education and research in India. The institute is committed to technological thought and action to benefit the economic development of India. Scholarship in teaching and learning; Scholarship in research and creative accomplishments; and relevance to industry are three driving forces for us at IIT Jodhpur.

IIT Jodhpur functions from its sprawling residential Permanent Campus of 852 acres on National Highway 65, North-Northwest of Jodhpur towards Nagaur. This campus is meticulously planned and envisioned to stand as a symbol of academics. A large parcel of the Permanent Campus (of about 182 acres) is set aside for the development of a Technology Park to strengthen institute-industry interactions. The institute is committed to a multidisciplinary approach of technology development. Hence, it has established state-of-the-art laboratories for basic research and has organized its academic degree activities through Departments and its coordinated research through Centers for Technologies.

Follow IIT Jodhpur on:

Twitter: IIT Jodhpur

Facebook: IIT Jodhpur

LinkedIn: IITJodhpur

WileyNXT is Wiley's innovative learning solution built to bridge the skill gap. It offers professional learning and education programs in new and emerging technologies, which are relevant for today's workforce. WileyNXT programs are designed by the Wiley Innovation Advisory Council (WIAC), a body comprising 40+ industry and academia leaders.

This story is provided by NewsVoir. ANI will not be responsible in any way for the content of this article. (ANI/NewsVoir)

This story is auto-generated from a syndicated feed. ThePrint holds no responsibility for its content.


Continue reading here:
IIT Jodhpur along with WileyNXT invites applications for its first-ever online PG Diploma with academic credits in Data Engineering and Cloud...

Read More..

QCI Qatalyst Selected by BMW Group and Amazon Web Services as a Finalist in the Quantum Computing Challenge – HPCwire

LEESBURG, Va., Nov. 10, 2021 – Quantum Computing Inc., a leader in bridging the power of classical and quantum computing, announced that its Qatalyst ready-to-run quantum software was selected as one of three finalists for the second and final round of the BMW Group and Amazon Web Services (AWS) Quantum Computing Challenge for the Vehicle Sensor Placement use case.

The Quantum Computing Challenge invited the quantum community to apply innovations in quantum computing to real world problems in industrial applications. The use case problems presented in the challenge represent critical commercial applications that demonstrate the real-world value of quantum computing.

BMW stated that its goal with the challenge is to tap into additional innovative power, inspire new thinking, and create opportunities for quantum builders to work with BMW on meaningful business problems.

The Vehicle Sensor Placement use case challenges participants to find optimal configurations of sensors for a given vehicle so that it can reliably detect obstacles in different driving scenarios, using quantum computing or nature-inspired optimization approaches. The number of sensors per car is expected to increase significantly as autonomous driving becomes more common. Vehicles need sensors to gather data from as large a portion of their surroundings as possible, but each sensor adds additional cost, so sensor placement has to be optimized; classically, this is often tackled with genetic algorithms. The goal of the challenge is to use quantum computing techniques to optimize the positions of sensors, enabling maximum coverage while keeping costs to a minimum.
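As a purely illustrative sketch of the nature-inspired approach mentioned above, the toy genetic algorithm below picks a subset of candidate sensor positions that maximises coverage of sample points while penalising the number of sensors used. The grid, coverage radius, and weights are invented for this example and bear no relation to BMW's actual formulation or to Qatalyst.

# Toy genetic algorithm for a sensor-placement-style problem: maximise coverage
# of sample points while penalising sensor count. All numbers are invented.
import random

random.seed(0)

CANDIDATES = [(x, y) for x in range(-2, 3) for y in (-1, 0, 1)]   # candidate mounting points
TARGETS = [(random.uniform(-6, 6), random.uniform(-6, 6)) for _ in range(200)]  # points to cover
RADIUS, COST_WEIGHT = 4.0, 5.0

def covered(sensor, point):
    return (sensor[0] - point[0]) ** 2 + (sensor[1] - point[1]) ** 2 <= RADIUS ** 2

def fitness(mask):
    sensors = [c for c, on in zip(CANDIDATES, mask) if on]
    coverage = sum(any(covered(s, t) for s in sensors) for t in TARGETS)
    return coverage - COST_WEIGHT * len(sensors)   # reward coverage, penalise sensor count

def evolve(pop_size=40, generations=100, mutation=0.05):
    pop = [[random.random() < 0.5 for _ in CANDIDATES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(CANDIDATES))                  # one-point crossover
            child = [g ^ (random.random() < mutation) for g in a[:cut] + b[cut:]]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("sensors used:", sum(best), "fitness:", fitness(best))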

"This Challenge is yet another step in showcasing quantum computing's potential for commercial applications and real-world business problem solving," said Bob Liscouski, CEO of QCI. "We are pleased that we have been selected to participate in the final level of competition, and our team will work hard to demonstrate the power of Qatalyst. Regardless of the final outcome, we believe that the applications for quantum computing will significantly increase over the coming years, and QCI is well positioned to be a key player."

About Quantum Computing Inc.

Quantum Computing Inc. (QCI) (Nasdaq: QUBT) is focused on accelerating the value of quantum computing for real-world business solutions. The company's flagship product, Qatalyst, is the first software to bridge the power of classical and quantum computing, hiding complexity and empowering SMEs to solve complex computational problems today. QCI's expert team in finance, computing, security, mathematics and physics has over a century of experience with complex technologies; from leading edge supercomputing innovations, to massively parallel programming, to the security that protects nations. Connect with QCI on LinkedIn and @QciQuantum on Twitter. For more information about QCI, visit http://www.quantumcomputinginc.com.

About the BMW Group Quantum Computing Challenge

The BMW Group Quantum Computing Challenge is open to participants from research groups and companies worldwide. The challenge is organized into two rounds. In the first round, participants need to submit a well-documented concept proposal for one of four use case challenges. In the second and final round, teams with the top three submissions in each use case will be asked to build out their solutions. The final, virtual presentation to the competition's judging panel, including domain experts from BMW and AWS, will take place in December. The winners will be announced at the Q2B quantum computing industry conference (Dec. 7-9).

Source: Quantum Computing Inc. (QCI)

Here is the original post:
QCI Qatalyst Selected by BMW Group and Amazon Web Services as a Finalist in the Quantum Computing Challenge - HPCwire

Read More..

European server sales sink to 4-year low: Cloud, software-defined and chip shortage blamed – The Register

Server sales across the European channel fell to their lowest level in four years over the third quarter of 2021, as the long-awaited recovery in infrastructure spending failed to show up, with shrinking volumes reported for 18 countries.

The numbers collated by Context show 91,021 servers were sold via distribution in calendar Q3, with hefty double-digit declines recorded in some of the largest countries that consume the systems. It is estimated Context captures up to 60 per cent of the total server market volumes in the region*.

"We were waiting for a rebound with recovery from COVID-19 but it looks like it is not happening for now," Gurvan Meyer, enterprise business analyst at Context told The Register.

The reasons? The pandemic has sped up customers' switch to a hybrid IT environment, software-defined infrastructure played a part too, and so did the ongoing supply chain wobbles.

Meyer said he believes that "infrastructure management has progressed impressively in the last two to three years, businesses have become more efficient in terms of using their hardware resources; there was quite a lot of over-provisioning in the past and IT teams have caught up."

Sales to end users in Germany slid 3.9 per cent year on year; they sank almost 26 per cent in the UK and dropped 6.4 per cent in France.

Context also compared unit sales in Q3 2021 with those made in Q3 2019, before the pandemic began: Germany was down 30.3 per cent in those two years, the UK was down 24.6 per cent, and France was down 11.2 per cent. A further 15 countries reported double-digit drops in year-on-year comparison.

Meyer told us: "Some countries are more advanced in their digital transformation than others, and countries are structurally different (service economy in the UK versus industrial economy in Germany) and I tend to think that the rather soft market we see in the UK, for example, is down partly to the fact that the UK is slightly more advanced in terms of hybrid infrastructure than, let's say, Italy."

As for worldwide infrastructure-as-a-service spending, Canalys estimated Q3 expansion of 35 per cent to $49.9bn, with AWS, Microsoft and Google accounting for 61 per cent of the entire market. Yet even this corner of the tech industry isn't immune to the crippling shortages affecting multiple industries.

"Overall computer demand is outgrowing chip manufacturing capabilities, and infrastructure expansion may become limited for the cloud service providers," said Blake Murray, research analyst.

We asked the market watcher for European-specific stats but it seems they are not yet at a stage to be made public.

Canalys said the impact of the global chip shortages on the cloud giants is "imminent" as data centre component makers are seeing lead times extended and prices rising. Just last week, for example, data centre networking outfit Arista said the lead times to secure certain parts were stretching to 80 weeks.

Glenn O'Donnell, veep and research director at Forrester, said the auto industry has become the poster child for chip shortages.

"The impact extends far beyond autos home appliances, consumer electronics, medical devices, farm equipment, and even toys are all affected. It is hitting corporate IT hard, as data centre equipment, cloud services, PC, and even Apple struggle to get these essential parts," he said.

Computacenter, one of Europe's largest resellers, said in September that customers were recommencing projects but that getting hold of enough kit was the issue, not demand.

"The ongoing supply shortages in the industry has risen to the top of our challenges," said colourful CEO Mike Norris.

* Some customers, including trade clients, buy servers direct and do not use distribution, so their figures aren't tracked by Context.

Follow this link:
European server sales sink to 4-year low: Cloud, software-defined and chip shortage blamed - The Register

Read More..

Amazon Cloud can save the world: AWS plays the green card – Blocks and Files

AWS is proclaiming that businesses in Europe can reduce energy use by nearly 80 per cent when they run their applications on the AWS Cloud instead of operating their own datacentres. The claim is found in an AWS-commissioned report by 451 Research.

We learn that companies could potentially further reduce carbon emissions from an average workload by up to 96 per cent once AWS meets its goal to be powered by 100 per cent renewable energy, a target the company is on a path to achieve by 2025. The 451 Researchers found that, compared to the computing resources of the average European company, cloud servers are roughly three times more energy efficient, and AWS datacentres are up to five times more energy efficient.

Chris Wellise, director of sustainability at AWS, is quoted in the blog: "AWS is proud to collaborate with businesses and governments to help meet their sustainability goals. We believe we have responsibilities to the communities where we operate, and to us, that means sustainability and environmental stewardship."

Surely every business understands it has responsibilities to the communities in which it operates. Amazon is just one of many businesses hurriedly running a green climate change banner up its corporate flagpole as it tries to ride on the back of customers climate change concerns to boost its own business.

AWS claims, via the 451 Researchers, that moving a megawatt (MW) of a typical compute workload from a European organisation's datacentre to the AWS Cloud could reduce carbon emissions by up to 1,079 metric tonnes of carbon dioxide per year. So AWS is effectively saying, if you want your European compute operations to emit less carbon then move them to AWS, increase AWS's profits and increase its carbon emissions.

Amazon's total carbon emissions were the equivalent of 60.64 million metric tonnes of carbon dioxide in 2020. That was 19 per cent more than the 51.17 million metric tonnes it emitted in 2019, which was itself 15 per cent higher than its 2018 total. This data comes from its own annual Sustainability Report.
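As a quick sanity check of those percentages (figures in millions of metric tonnes of CO2-equivalent), the lines below reproduce the year-on-year growth and back-calculate the implied 2018 figure; the 2018 number is derived from the stated 15 per cent rise, not taken from the report.

# Sanity check of the year-on-year emissions figures cited above.
emissions = {2019: 51.17, 2020: 60.64}   # million metric tonnes CO2e, from the report

growth_2020 = emissions[2020] / emissions[2019] - 1
print(f"2019 -> 2020 growth: {growth_2020:.1%}")         # ~18.5 per cent, reported as 19 per cent

implied_2018 = emissions[2019] / 1.15                     # back-calculated, not a reported figure
print(f"Implied 2018 emissions: {implied_2018:.1f} Mt")   # roughly 44.5 Mt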

Amazon says it is the world's largest corporate buyer of renewable energy and invites organisations to join The Climate Pledge, a commitment to becoming net-zero carbon by 2040, ten years ahead of the Paris Agreement. Amazon co-founded The Climate Pledge. It is not known if Blue Origin, the space tourism company whose rockets and rocket-building activities emit CO2, and which was founded by Amazon founder and ex-CEO Jeff Bezos, has signed up as well.

Amazon itself, the ecommerce behemoth, has only committed to making 50 per cent of all its shipments net-zero carbon by 2030, five years after its AWS-powered-by-100-per-cent-renewable-energy goal. Amazon actually aims to reach net-zero carbon emissions across all its operations by 2040, ten years after that. As part of this it intends to use only zero-carbon-fuelled ocean shipping by 2040.

Amazon revenues were $386 billion in 2020 and it made a profit of $21.3 billion. With this amount of financial firepower at its disposal the company could move faster to net-zero status in its activities if it optimised them for sustainability over profitability. As a tiny example, the money spent on the 451 Research report could have been spent instead on reducing Amazons own carbon emissions but that wouldnt have provided such a good marketing opportunity for this self-interested business.

Continue reading here:
Amazon Cloud can save the world: AWS plays the green card – Blocks and Files

Read More..

Red Hat Extends Foundation for Multi-Cloud Transformation and Hybrid Innovation with Version 8.5 of Red Hat Enterprise Linux – Database Trends and…

Red Hat recently announced the general availability of Red Hat Enterprise Linux 8.5, the latest version of its enterprise Linux platform. The new release provides new capabilities to meet evolving and complex IT needs, from enhanced cloud-native container innovations to extending Linux skills with system roles, on whatever footprint customers require.

"Linux is the common language spoken across nearly every public cloud, private cloud, edge deployment and data center," said Gunnar Hellekson, general manager, Red Hat Enterprise Linux, Red Hat. "Red Hat Enterprise Linux 8.5 reinforces the role of the world's leading enterprise Linux platform in the multi-cloud ecosystem, providing new capabilities to meet evolving and complex IT needs, from enhanced cloud-native container innovations to extending Linux skills with system roles, on whatever footprint our customers require."

According to Red Hat, recent studies indicate that organizations are realizing that using public cloud exclusively may not be economically feasible for long-term scale. At the same time, it notes, Gartner predicts that by 2026, public cloud spending will exceed 45% of all enterprise IT spending, up from less than 17% in 2021.

Red Hat says it has long championed a hybrid multi-cloud world, where customers can choose the environment and technologies that build on a flexible, more consistent foundation.

The updated platform extends Red Hat Insights services, builds on existing container management capabilities and makes it easier for IT teams to set up workload-specific systems wherever they may exist across a multi-cloud world.

Red Hat Insights, Red Hat's predictive analytics service for identifying and remediating potential system issues, is available by default through almost all Red Hat Enterprise Linux subscriptions. With the launch of Red Hat Enterprise Linux 8.5, Insights adds new capabilities around vulnerability, compliance and remediation, helping organizations more effectively manage Red Hat Enterprise Linux environments across multicloud and hybrid cloud environments, even when it comes to nuanced security or compliance scenarios.

According to Red Hat, containers are a crucial component of modern DevOps implementations, which in turn are key to the adoption of multi-cloud and hybrid cloud strategies. Supporting these strategies, Red Hat Enterprise Linux 8.5 offers:

In addition, Red Hat says, as modern IT environments spread across multiple public clouds, virtualized environments, private clouds, on-premise servers and edge devices, the IT operations experience is becoming more complex. To help address complexity and to extend the existing skills of both new and experienced IT operations teams, Red Hat Enterprise Linux 8.5 adds support for new Red Hat Enterprise Linux system roles. System roles are preset configurations for Red Hat Enterprise Linux systems, enabling IT teams to more easily support specific workloads from the cloud to the edge. Red Hat Enterprise Linux 8.5 now includes:

In addition to these capabilities, Red Hat Enterprise Linux 8.5 also adds support for OpenJDK 17 and .NET 6 for developers seeking to modernize and build next-generation applications. The Red Hat Enterprise Linux web console has also been enhanced, making it possible to manage live kernel patching operations and manage overall performance. And, finally, enhancements to Image Builder introduce broader support for creating customized Red Hat Enterprise Linux images on bare metal for edge deployments and for assembling images that have distinct file systems to meet organization-specific internal standards and security compliance requirements.

For more information, read the Red Hat Enterprise Linux 8.5 release notes or view the product documentation for Red Hat Enterprise Linux 8.5.

Go here to read the rest:
Red Hat Extends Foundation for Multi-Cloud Transformation and Hybrid Innovation with Version 8.5 of Red Hat Enterprise Linux - Database Trends and...

Read More..

AMD Deepens Its Already Broad Epyc Server Chip Roadmap – The Next Platform

The hyperscalers, cloud builders, HPC centers, and OEM server manufacturers of the world who build servers for everyone else all want, more than anything else, competition between component suppliers and a regular, predictable, almost boring cadence of new component introductions. This way, everyone can consume on a regular schedule and those ODMs and OEMs who actually manufacture the twelve million servers (and growing) consumed each year can predict demand and manage their supply chains.

As many wise people have said, however, IT organizations buy roadmaps, they don't buy point products, because they have to manage risk and get as much of it out of their products and organizations as they possibly can.

AMD left the server business for all intents and purposes in 2010 after Intel finally got a good 64-bit server chip design out the door with the Nehalem Xeon E5500 architecture that came out in early 2009, largely copied from AMD's wildly successful Opteron family of chips. AMD's early Opterons were innovative, sporting 64-bits, HyperTransport interconnect, and multiple cores on a die, and essentially made Intel look like a buffoon for pushing only 32-bit Xeons and trying to get the enterprise to adopt 64-bit Itanium chips. But by 2010, AMD had been delayed on delivering several generations of Opterons and had made an architectural fork that did not pan out. When Intel pulled back on Itanium and designed many generations of competitive 64-bit Xeon server chips, AMD was basically pushed out of the datacenter. But by 2015, Intel had been slowing the pace of innovation and driving up prices, and the market was clamoring for more competition, and AMD reorganized itself and got to work creating what has become its Epyc comeback, this time once again coinciding with Intel leaving its own flanks exposed for attack because of delays in its 10 nanometer and 7 nanometer chip making processes.

Intel, under the guiding hand of chief executive officer Pat Gelsinger, is getting its chip manufacturing house in order and also getting back to a predictable and more rapid cadence of performance and feature enhancements, and that means AMD has to do the same thing. And as part of its Data Center Premier event this week, the top brass at AMD unrolled the roadmap and showed that they were not only going to be sticking to a regular cadence and flawless execution for the Epyc generations, but were going to be deepening the Epyc roadmap to include different variations and SKUs to chase very specific portions of the server market and very precise workloads.

Ahead of the keynote by Lisa Su, AMD's president and chief executive officer, Mark Papermaster, the company's chief technology officer, and Forrest Norrod, general manager of AMD's Datacenter and Embedded Solutions Group, walked through the deepening roadmap for the Epyc server chips. This was done in the context of the unveiling of the Milan-X Epyc 7003 with 3D V-Cache, which boosts performance by 50 percent on many HPC and AI workloads and which is coming out in the first quarter of 2022, and the Aldebaran Instinct MI200 GPU accelerator, which is starting to ship now and notably in the 1.5 exaflops Frontier supercomputer being installed at Oak Ridge National Laboratory. Milan-X and Instinct MI200 were the highlights of the AMD event this week, to be sure, but they were not the only things that AMD talked about on its roadmap, and there is other chatter we need to bring into the picture as well that pushes this roadmap even further than AMD itself did this week.

"Both of them are the culmination of a lot of work over the last four years to start broadening our product portfolio in the datacenter," Norrod explained, referring to Milan-X and Aldebaran. "So particularly on the CPU side, you should think about the first three stops in Italy, and that we are sort of on one train, barreling down the road to get to market relevance in a reasonable footprint with one socket, one fundamental part. It has long been our belief that as we pass a certain point, particularly given the increasing workload complexity in the datacenter, that we were going to have to begin broadening our product offerings, still always being mindful of how do we do it in such a way that we preserve our execution fidelity. And we need to make it really easy for customers to adopt the more workload specific products. That is a central theme of what we talked about: workload specificity, having products that are tuned for particular segments of the datacenter market. And by doing so, we make sure that we can continue to offer leadership performance and leadership TCO in each one of those segments."

Norrod made no specific promises, but said that we should expect the broadening and deepening of the portfolio of chips and products with AMD compute GPUs as well.

In her keynote address, Su carved the datacenter up into four segments, and explained how AMD would be targeting each of them with unique silicon.

"General purpose computing covers the broadest set of mainstream workloads, both on-prem and in the cloud," Su explained. "Socket-level performance is an important consideration for these workloads. Technical computing includes some of the most demanding workloads in the datacenter. And here, per-core performance matters the most for these workloads. Accelerated computing is focused on the forefront of human understanding, addressing scientific fields like climate change, materials research, and genomics, and highly parallel and massive computational capability is really the key. And with cloud-native computing, maximum core and thread density are needed to support hyperscale applications. To deliver leadership compute across all these workloads, we must take a tailored approach focused on innovations in hardware, software, and system design."

With that, let's take a look at the Epyc roadmap that Su, Norrod, and Papermaster talked about and then look at the augmented and extended one that we put together to give you an even fuller picture.

Here's the Epyc roadmap they all talked about:

You can see that the Milan-X chip has been added, and so has another chip in the Genoa series, called Bergamo and sporting the Zen 4c core, a variant of the forthcoming Zen 4 core and a different packaging of the compute chiplets than the standard Genoa parts will have. But that's not all you get.

There is also the Trento variant of the Milan processor, which will be used as the CPU host to the MI200 GPU accelerators in the Frontier system. And then there will be a second generation of 5 nanometer Epyc processors, and we have caught wind of a high core count version code-named Turin, which, now that we see the more revealing AMD server chip roadmap, looks very much like a follow-on to Bergamo, not to Genoa. Which implies a different follow-on to Genoa for which we do not yet have a codename. (Might we suggest Florence? Maybe Venice after that?)

Anyway, here is our extended version of AMD's Epyc roadmap:

Let's walk through this.

Milan-X, as we know from this week, will be comprised of a couple of SKUs of the Milan chip with two banks of L3 cache stacked up on top of the native L3 cache on the die, tripling the total L3 cache to boost performance. We know from the presentations that there is a 16-core variant and a 64-core variant, and we presume there might be a few more variants with 24 cores and 32 cores, possibly 48 cores with all of them getting the proportional amount of extra L3 cache (3X more per core) added.

With Trento, what we have heard is that the I/O and memory hub chiplet on the Milan processor complex has been enhanced in two ways. The first is that the Infinity Fabric 3.0 interconnect is supported on the I/O hub, which means the Trento chip can share memory coherently with any Instinct MI200 accelerators attached to it. This is a necessary feature for Frontier because Oak Ridge had coherent CPU-GPU memory on the prior Summit supercomputer based on IBM Power9 CPUs and Nvidia V100 GPU accelerators. The other enhancement with the Trento I/O and memory hub chiplet is rumored to be support for DDR5 main memory on the controllers. For all we know, the Trento hub chiplet also supports PCI-Express 5.0 controllers and also the CXL accelerator protocol, which might be useful in Frontier.

Milan, Milan-X, and Trento all fit into the SP3 server socket, which tops out at a 400 watt TDP.

With the Genoa and Bergamo chips, AMD is moving to the 5 nanometer chip etching processes from Taiwan Semiconductor Manufacturing Co., and Papermaster said that at the same ISO frequency, this process delivers twice the transistor density and twice the transistor power efficiency while also boosting the switching performance of the transistors by 25 percent. To be super clear: This is not a Milan to Genoa statement, but a 7 nanometer process to 5 nanometer process statement, and how this results in server chip performance depends on architecture and how AMD turns the dials on the frequency and voltage curves. AMD is also moving to a larger SP5 socket for these processors.

Genoa is based on the Zen 4 core, and Bergamo is based on the Zen 4c core, which has the same instructions per clock (IPC) improvements over the Zen 3 core in the Milan family of chips and the same microarchitecture, so there are no software tweaks necessary to use it, but it has a different point on the optimization curve for frequency and voltage and has some optimizations in the cache hierarchy that make Bergamo more suited to having more compute chiplets, or CCDs, in the Epyc package. That Zen 4 core IPC uplift is expected to be in the range of 29 percent compared to the Zen 3 core, so this is going to be a big change in single-thread performance as well as throughput performance with Genoa. Bergamo will take throughput performance to an even higher extreme, but will sacrifice some per-thread performance to get there.

The Genoa Epyc 7004 will have 96 Zen 4 cores across four banks of three compute tiles, for a total of a dozen compute tiles, and an I/O and memory hub that supports DDR5 memory, PCI-Express 5.0 controllers, and the CXL protocol on top of that for linking accelerators, memory, and storage to the compute complex. Genoa is launching sometime in 2022; we don't have much clarity as to when because AMD is timing itself to keep ahead of Intel, which keeps changing its launch dates for Sapphire Rapids and Granite Rapids Xeon SPs.

There are a couple of ways to get to the 128 Zen 4c cores that Bergamo will offer. Instead of twelve 8-core compute tiles in the Genoa, the Bergamo chip could employ eight 16-core tiles. The die could also have twelve 12-core tiles, and then dud back some of the cores on each tile to dial the core count all the way back to 128 total cores in the Bergamo package. The latter seems equally likely as the former, but if both processors have twelve memory controllers, as is rumored, then it will be the latter scenario. The Trento I/O and memory hub supports eight compute chiplets and the Genoa I/O and memory hub supports twelve compute chiplets, so AMD could go either way to get to Bergamo, but again, if it used the Trento I/O and memory hub, then Bergamo would be relegated to only eight memory controllers and that would cause a compute to memory capacity and bandwidth imbalance. It looks like Bergamo will use the Genoa I/O and memory hub, therefore, and have some partially dudded cores so it maxes out at 128 cores instead of 144 cores. All Papermaster said is that Bergamo has a different physical design and a different chiplet configuration from Genoa, so everyone is guessing at this point.

The Bergamo chip will plug into the same SP5 socket as Genoa, which is what the hyperscalers and cloud builders care about. Bergamo will be available in the first half of 2023 according to Su, but Norrod initially said that it could be end of 2022 to early 2023 for the launch, and then backed off to say early 2023. It's not clear why this will take so long to come to market. It could be that the hyperscalers and cloud builders only recently talked AMD into taking the risk and incurring the extra cost of making a special SKU of the Genoa processor.

After that come kickers to Genoa and Bergamo, and it is looking like the Bergamo kicker is in fact the rumored 256-core Turin processor based on the future Zen 5c core that has been rumored recently.

We don't think the stock, general purpose kicker to Genoa would jump from 96 cores to 256 cores, but jumping to 192 cores would be reasonable. And so that is what we think will be in the Genoa kicker, which is labeled with ??? in our extended roadmap above. (We will call it Florence until we are told otherwise.) This chip might have four compute tiles, each with twelve Zen 5 cores, in each core complex, and four core complexes on the package to reach that theoretical 192 cores in the general purpose Epyc 7005. The Turin hyperscale variant would have 256 cores and a thermal design point of a whopping 600 watts, so people are saying. The compute tile here could be based on 16 Zen 5c cores, packed into a four-tile compute complex, with four of these on the package.
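For what it is worth, here is the core-count arithmetic behind that speculation laid out as a trivial script; like the paragraphs above, these tile and complex configurations are our guesses, not confirmed specifications.

# Core-count arithmetic for the rumoured and speculative Epyc configurations above.
configs = {
    "Genoa (12 tiles x 8 Zen 4 cores)":               12 * 8,       # 96
    "Bergamo option A (8 tiles x 16 Zen 4c cores)":   8 * 16,       # 128
    "Bergamo option B (12 tiles x 12 cores, full)":   12 * 12,      # 144, fused down to 128
    "Florence? (4 complexes x 4 tiles x 12 Zen 5)":   4 * 4 * 12,   # 192
    "Turin (4 complexes x 4 tiles x 16 Zen 5c)":      4 * 4 * 16,   # 256
}

for name, cores in configs.items():
    print(f"{name}: {cores} cores")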

We think there will be Genoa-X and Florence-X variants with stacked 3D V-Cache, and there is even a possibility to see Bergamo-X and Turin-X variants that also have enhanced L3 caches. Why not?

There is talk that the Epyc 7005s will be based on TSMC's 3 nanometer processes, but we think AMD will try to get two generations of chips out of 5 nanometers, with the Genoa kicker and Turin based on a refined 5 nanometer process, much as Rome was a first pass at 7 nanometers and Milan is a second pass. This is particularly the case if TSMC is having delays with its 3 nanometer processes, as was rumored two months ago. The Epyc 7005s are probably a late 2024 to early 2025 product; again, it will depend on a lot of moving parts and how well or poorly Intel is doing, and whatever else is happening in the server space at that time. The 10 exaflops generation of supercomputers will require these CPUs.

We strongly suspect that the Genoa kicker and the Turin processor will fit into the same SP5 server socket as Genoa and Bergamo. Server makers freak out if you do a socket change with every generation.

See original here:
AMD Deepens Its Already Broad Epyc Server Chip Roadmap - The Next Platform

Read More..

Wiwynn Showcases High Performance OCP OAI Server and Immersion Cooling Solutions at OCP Global Summit 2021 – Yahoo Finance

Solutions from cloud to edge; optimizations for AI training, low PUE and edge environment

TAIPEI, Nov. 8, 2021 /PRNewswire/ -- Wiwynn (TWSE: 6669), an innovative cloud IT infrastructure provider for data centers, announced that it will exhibit at the OCP Global Summit 2021, November 9-10, showing its Open Compute Project (OCP) based cloud and edge servers, in addition to the Habana OCP Accelerator Module (OAM) based Open Accelerator Infrastructure (OAI) platform. It will also showcase the world's first two-phase immersion cooled edge server and OAI server to address the surging power consumption and demand for low PUE in datacenters.


"We are excited to exhibit at OCP Global Summit and demonstrate our latest development in server, storage and high-performance OAI server. By integrating the latest CPU platforms with cutting-edge compute acceleration, 48V DC-in, and advanced cooling technologies, our offerings bring the most optimized performance to applications from cloud to edge," said Dr. Sunlai Chang, Wiwynn's President. "We are committed to the vibrant community and will continue to innovate for workload optimization while contributing to sustainable development for the datacenters."

As the major server partner for hyperscale datacenters and the leading OCP Solution Provider, Wiwynn will exhibit its next-generation OCP-based 1P/2P servers using processor platforms, including x86 and ARM, to address the needs of diverse workload optimization. Wiwynn's field-proven two-phase immersion cooling solution, designed for hyperscale datacenters to save up to 90% cooling energy, will be presented in response to the surging power consumption and demand for low PUE in datacenters.
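To put the cooling-energy claim in context, here is a rough, illustrative calculation of how a 90% cut in cooling energy shows up in PUE (power usage effectiveness, total facility energy divided by IT equipment energy); the baseline overhead split is our assumption, not a Wiwynn figure.

# Illustrative PUE arithmetic; the baseline split below is assumed, not vendor data.
def pue(it_load, cooling, other_overhead):
    return (it_load + cooling + other_overhead) / it_load

baseline = pue(it_load=1.0, cooling=0.40, other_overhead=0.10)
immersion = pue(it_load=1.0, cooling=0.40 * 0.10, other_overhead=0.10)   # 90% cooling cut

print(f"Baseline PUE:  {baseline:.2f}")    # 1.50
print(f"Immersion PUE: {immersion:.2f}")   # 1.14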

For its edge offering, Wiwynn's OCP openEDGE based solutions, EP100 and ES200, are perfect for the central unit/distributed unit (CU/DU) of 5G open radio access network (RAN), MEC, and 5GC software, as well as platforms for AI edge applications such as the 5G smart factory. Considering the diverse edge environment, Wiwynn will unveil the world's first two-phase immersion cooling edge platform, EP200, with 2000W cooling capability within 2U height. The compact and integrated design allows edge servers like ES200 to operate without air-conditioning facilities in harsh environments while still having massive computing capability.


For AI/deep learning (DL) training, Wiwynn will showcase its latest OCP Accelerator Module (OAM) based OCP Accelerator Infrastructure (OAI) server, SV600G4. It is one of the Wiwynn collaborations with Habana Labs, an industry-leading developer of purpose-built deep learning AI processors. SV600G4 integrates the server motherboard and the Universal Baseboard (UBB) that adopts the fully connected OAM architecture with 100Gb/s OAM interlink, and features eight Habana Gaudi AI training processors. In addition to air cooling, Wiwynn optimized the system to support liquid cooling options for high density deployment. For the OCP event, Wiwynn has partnered with LiquidStack, an industry-leading data center thermal management company, to demonstrate the world's first OAI server cooled by a 2-phase liquid immersion DataTank delivering 3kW of compute power per RU.

"We are excited to collaborate with Wiwynn on the development of their high-performance OAI solution and benefit from access to solutions optimized for both air cooling and liquid cooling," said Eitan Medina, chief business officer of Habana Labs. "Habana is committed to bringing increased operational efficiencies to our data center customers. With Wiwynn's experience in cloud datacenters and design capabilities in system integration, thermal and advanced cooling, their cooling innovations can be catalysts for Habana's drive to datacenter adoption of our purpose-built deep learning solutions."

In addition to the showcase at booth #C2, Wiwynn will have speakers at Expo Hall Stage Talks and Executive Tracks to present the Company's offerings of "Cloud to Edge 2.0" and outlook for the future technology trend. Wiwynn will also present in eight engineering workshops to deep dive topics regarding OCP Accelerator Infrastructure (OAI), DC-SCM, Open System Firmware (OSF), modular BMC, system management, immersion cooling, and liquid cooling.

Come and explore the Synergy of Edge and Cloud together.

Wiwynn's OCP Summit 2021 Event page

OCP Global Summit

About Wiwynn

Wiwynn is an innovative cloud IT infrastructure provider of high-quality computing and storage products, plus rack solutions for leading data centers. We aggressively invest in next generation technologies for workload optimization and best TCO (Total Cost of Ownership). As an OCP (Open Compute Project) solution provider and platinum member, Wiwynn actively participates in advanced computing and storage system designs while constantly implementing the benefits of OCP into traditional data centers.

For more information, please visit the Wiwynn website or contact sales@wiwynn.com. Follow Wiwynn on Facebook and Linkedin for the latest news and market trends.


View original content to download multimedia: https://www.prnewswire.com/news-releases/wiwynn-showcases-high-performance-ocp-oai-server-and-immersion-cooling-solutions-at-ocp-global-summit-2021-301418424.html

SOURCE Wiwynn

Read the rest here:
Wiwynn Showcases High Performance OCP OAI Server and Immersion Cooling Solutions at OCP Global Summit 2021 - Yahoo Finance

Read More..