Category Archives: Cloud Servers

This breakthrough tech could solve Microsoft’s AI power consumption woes and is 1,000x more energy-efficient – Windows Central


Generative AI is a resource-hungry form of technology. While it's been leveraged to achieve impressive feats across medicine, education, computing, and more, its power demands are alarmingly high. According to a recent report, Microsoft and Google's electricity consumption surpasses the power usage of over 100 countries.

The high power demand is holding the technology back from realizing its full potential. Even billionaire Elon Musk says we might be on the precipice of the most significant technological breakthrough in AI, yet there won't be enough electricity to power its advances by 2025.

OpenAI CEO Sam Altman has shown interest in exploring nuclear fusion as an alternative power source for the company's AI advances. On the other hand, Microsoft has partnered with Helion to start generating nuclear energy for its AI efforts by 2028.

A paper published in Nature may offer a silver lining that could help Microsoft facilitate its AI efforts. Researchers have developed a new prototype chip, dubbed computational random-access memory (CRAM), that could cut AI's power-hungry demands by more than 1,000 times, translating to 2,500x energy savings in one of the simulations shared.


As you may know, traditional AI processes shuttle data back and forth between logic and memory, which heavily contributes to their high power consumption. The CRAM approach instead keeps data within the memory array, eliminating most of those energy-hungry transfers.
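To make that intuition concrete, here is a minimal back-of-the-envelope sketch in Python. The energy-per-bit constants are purely illustrative placeholders, not figures from the Nature paper; the point is only that when moving a bit costs far more than operating on it, keeping computation inside the memory array removes the dominant term.

```python
# Toy model of why in-memory computing (CRAM-style) saves energy.
# All constants are illustrative placeholders, NOT values from the paper.

BITS_PROCESSED = 1e12          # bits handled by a hypothetical workload

ENERGY_MOVE_PER_BIT = 10e-12   # joules to shuttle one bit between memory and logic (illustrative)
ENERGY_OP_PER_BIT = 0.01e-12   # joules to perform one logic operation on a bit (illustrative)

# Conventional architecture: every bit is moved to the logic unit and back.
conventional = BITS_PROCESSED * (2 * ENERGY_MOVE_PER_BIT + ENERGY_OP_PER_BIT)

# In-memory architecture: the operation happens where the bit already lives.
in_memory = BITS_PROCESSED * ENERGY_OP_PER_BIT

print(f"conventional: {conventional:.3f} J")
print(f"in-memory:    {in_memory:.3f} J")
print(f"ratio:        {conventional / in_memory:,.0f}x")
```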

With AI progressing at its current rate, tools like ChatGPT and Microsoft Copilot could be consuming as much electricity as a small country uses in a whole year by 2027. However, the researchers behind the CRAM model believe it could achieve energy savings of up to 2,500 times compared to traditional methods.


The CRAM model isn't a new phenomenon. According to Professor Jian-Ping Wang, the senior author of the paper:

"Our initial concept to use memory cells directly for computing 20 years ago was considered crazy."

CRAM leverages the spin of electrons to store data, rather than the electrical charges used by traditional memory. It also offers high speed and low power consumption while being environmentally friendly.

Ulya Karpuzcu, a co-author of the paper, further stated:

"As an extremely energy-efficient digital-based in-memory computing substrate, CRAM is very flexible in that computation can be performed in any location in the memory array. Accordingly, we can reconfigure CRAM to best match the performance needs of a diverse set of AI algorithms."

While the researchers have yet to determine how far they can push this model in terms of scalability, it shows great promise. It could solve AI's most significant deterrent: high power consumption.

Go here to see the original:
This breakthrough tech could solve Microsoft's AI power consumption woes and is 1,000x more energy-efficient - Windows Central

Authority Backlinks Service on Cloud Hosting Platforms Launched by LinkDaddy – Newsfile

July 23, 2024 11:31 PM EDT | Source: Plentisoft

Miami, Florida--(Newsfile Corp. - July 23, 2024) - LinkDaddy's latest updates help business owners get their marketing content placed on top cloud hosting sites, where it can improve their search engine rankings or help them rank for a larger selection of keywords.


More information about how backlinking can improve search rankings and updated marketing techniques from LinkDaddy can be found at https://linkdaddy.com/cloud-authority-backlinks

Business owners commonly use the LinkDaddy content and backlinking service to expand their targeted marketing areas, reach new demographics, or improve the search rankings for new products or services. LinkDaddy is now able to place marketing content on 15 popular hosting services, including several of the most highly ranked options.

Although content can be hosted nearly anywhere online, LinkDaddy limits its hosting to servers with exceptionally high domain authority. This helps to build credibility with the search algorithms, as each new piece of content gives the client's business a boost to its own domain authority.

As Tony Peacock, LinkDaddy CEO, says: "We craft high-quality content tailored to your specific keywords. This content is designed to resonate with your target audience and align with your website's niche."

While many marketing techniques provide short-term results, cloud backlinking has been shown to provide long-term and cumulative benefits. As each new piece of content with backlinks goes live, and the search engine algorithms find it on high-authority sites, client brands will be moved further up in the search results.

Clients can choose from 3 different packages on the LinkDaddy website, with each package containing a unique list of high-authority hosting options. Each client will receive content specific to their brand, products, and services, along with a personalized HTML page on a popular service, and will have their marketing content posted on up to 5 different, highly reputable hosting services.

Tony Peacock clarifies that "Using the Cloud Stacking method helps your content show up more in search engine results. When your content ranks higher, it's easier for your audience to find it. This can lead to more people visiting your site, more engagement, and ultimately, more conversions."

More information about building backlinks with LinkDaddy and using content to improve search rankings can be found at https://linkdaddy.com/cloud-authority-backlinks/

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/217420

SOURCE: Plentisoft

Continue reading here:
Authority Backlinks Service on Cloud Hosting Platforms Launched by LinkDaddy - Newsfile

[News] Tencent Cloud Releases Self-developed Server OS, Supporting China's Top Three CPU Brands – TrendForce

Due to challenges in exporting high-performance processors based on x86 and Arm architectures to China, the country is gradually adopting domestically designed operating systems.

According to industry sources cited by Tom's Hardware, Tencent Cloud recently launched the TencentOS Server V3 operating system, which supports China's three major processors: Huawei's Kunpeng CPUs based on Arm, Sugon's Hygon CPUs based on x86, and Phytium's FeiTeng CPUs based on Arm.

The operating system optimizes CPU usage, power consumption, and memory usage. To optimize the operating system and domestic processors for data centers, Tencent has collaborated with Huawei and Sugon to develop a high-performance domestic database platform.

Reportedly, TencentOS Server V3 can run GPU clusters, aiding Tencent's AI operations. The latest version of the operating system fully supports NVIDIA GPU virtualization, enhancing processor utilization for resource-intensive services such as Optical Character Recognition (OCR). This innovative approach reduces the cost of purchasing NVIDIA products by nearly 60%.

TencentOS Server is already running on nearly 10 million machines, making it one of the most widely deployed Linux operating systems in China. Other companies, such as Huawei, have also developed their own operating systems, like OpenEuler.


(Photo credit: Tencent Cloud)

Read the rest here:
[News] Tencent Cloud Releases Self-developed Server OS, Supporting China's Top Three CPU Brands - TrendForce

Cutting An IoT Fan Free Of The Cloud – Hackaday

The cloud is supposed to make everything better. You can control things remotely, with the aid of a benevolent corporation and their totally friendly servers. However, you might not like those servers, and you might prefer to take personal control of your hardware. If that's the case, you might like to follow the story of [ouaibe] and their quest to free a fan from the cloud.

The unit in question was a tower fan from Dreo. [ouaibe] noted that there was already a project to control the fans using Home Assistant, but pure lower-level local control was the real goal here. Work began on pulling apart the Dreo Android app to determine how it talked to the fan, eventually turning up an on-board webserver, but making little progress beyond that. The next step was to disassemble the unit entirely. That turned up multiple PCBs inside, with one obviously for wireless communication and another hosting a Sino Wealth microcontroller. Dumping the firmware followed, along with reverse engineering the webserver, and finally establishing a custom ESPHome integration to fully control the fan.

[ouaibe] has shared instructions on how to cut your own fan from the cloud, though notes that the work won't be extended to other Dreo products any time soon. In any case, it's a great example of just how much work it can take to fully understand and control an IoT device that's tethered to a commercial cloud server. It's not always easy, but it can be done!

See the original post here:
Cutting An IoT Fan Free Of The Cloud - Hackaday

Surge in AI server demand from cloud service providers: TrendForce – InfotechLead.com

TrendForce's latest industry report reveals a sustained high demand for advanced AI servers from major cloud service providers (CSPs) and brand clients, projected to continue into 2024.

The expansion in production by TSMC, SK Hynix, Samsung, and Micron has alleviated shortages in the second quarter of 2024, significantly reducing the lead time for NVIDIA's flagship H100 solution from 40-50 weeks to less than 16 weeks.

Key Insights:

AI Server Shipments: AI server shipments in Q2 are estimated to rise by nearly 20 percent quarter-over-quarter, with an annual forecast now at 1.67 million units, representing a 41.5 percent year-over-year growth.

Budget Priorities: Major CSPs are prioritizing budgets towards AI server procurement, overshadowing the growth of general servers. The annual growth rate for general server shipments is a mere 1.9 percent, with AI servers expected to account for 12.2 percent of total server shipments, a 3.4 percentage point increase from 2023.

Market Value: AI servers are significantly boosting revenue growth, with their market value projected to exceed $187 billion in 2024 (a 69 percent growth rate), comprising 65 percent of the total server market value.

Regional Developments:

North America and China: North American CSPs like AWS and Meta are expanding proprietary ASICs, while Chinese companies Alibaba, Baidu, and Huawei are enhancing their ASIC AI solutions. This trend will likely increase the share of ASIC servers in the AI server market to 26 percent in 2024, with GPU-equipped AI servers holding about 71 percent.

Market Dynamics:

AI Chip Suppliers: NVIDIA dominates the GPU-equipped AI server market with a nearly 90 percent share, whereas AMD holds about 8 percent. When considering all AI chips used in AI servers (GPU, ASIC, FPGA), NVIDIA's market share is around 64 percent for the year.

Future Outlook: Demand for advanced AI servers is anticipated to remain robust through 2025, driven by NVIDIA's next-generation Blackwell platform (GB200, B100/B200), which will replace the Hopper platform. This shift is expected to boost demand for CoWoS and HBM technologies, with TSMC's CoWoS production capacity estimated to reach 550-600K units by the end of 2025, growing by nearly 80 percent.

Memory Advancements: Mainstream AI servers in 2024 will feature 80 GB of HBM3, with future chips like NVIDIA's Blackwell Ultra and AMD's MI350 expected to incorporate up to 288 GB of HBM3e by 2025. The overall HBM supply is projected to double by 2025, fueled by sustained demand in the AI server market.

Conclusion:

The AI server market is experiencing unprecedented growth, with significant contributions to revenue and technological advancements. As major CSPs and tech giants continue to invest heavily in AI infrastructure, the industry is set for transformative developments through 2025.

View original post here:
Surge in AI server demand from cloud service providers: TrendForce - InfotechLead.com

Microsoft Servers Are Back After The World's Biggest Outage - Here's What The Hell Happened – Pedestrian.TV

Thousands of businesses across Australia and the rest of the world are recovering after a massive IT outage caused chaos on Friday afternoon. But what exactly caused the outage, and is it likely to happen again?

"We're aware of an issue with Windows 365 Cloud PCs caused by a recent update to CrowdStrike Falcon Sensor software," Microsoft said in a statement on X on Friday.

Massive disruptions wreaked havoc on everything from radio and television to banks and grocery stores after cybersecurity firm CrowdStrike pushed out a faulty content update to Windows servers. Servers running on Mac and Linux systems were not impacted by the outage.

CrowdStrike, an American cybersecurity firm that offers a range of cloud-based security services to 538 of the Fortune 1000 companies, launched the new update to its Falcon software on Friday, which caused a malfunction that disabled software worldwide. Ironically, the software is designed to protect against disruptions and crashes.

"This system was sent an update and that update had a software bug in it and it caused an issue with the Microsoft operating system," CrowdStrike's CEO, George Kurtz, told the US Today Show.

"We identified this very quickly and remediated the issue, and as systems come back online, as they're rebooted, they're coming up and they're working, and now we are working with each and every customer to make sure we can bring them back online."

"But that was the extent of the issue in terms of a bug that was related to our update."

If that wasn't enough, Microsoft's own Azure cloud services also faced a major outage, causing even further issues for businesses. The two outages were unrelated, so I guess it was just a bad day for Microsoft.

The issue that prompted a blue screen of death for millions of users across the country was *not* the result of a cyberattack or hack, so you don't have to worry about an ongoing threat to your security.

"This is not a security incident or cyberattack. The issue has been identified, isolated and a fix has been deployed," Kurtz wrote on X. "We refer customers to the support portal for the latest updates and will continue to provide complete and continuous updates on our website."

Kurtz added that customers remain fully protected, while apologising for the inconvenience and disruption.

Australian cybersecurity leader Alastair MacGibbon told the ABC that the issue wasn't malicious.

"This is all about communication. This is about just reassuring the public that this doesn't appear to be a malicious act," MacGibbon said.

"Of course, in slower time, it would be to try to understand how you could build systems to reduce the likelihood of this happening again."

"You wouldn't be calling this a near miss. It's certainly a hit, but it's a hit that wasn't malicious. And as a consequence, we'll learn more from it and there'll be plenty of raking over the coals by government agencies and corporates all around the world."

At this point it's probably easier to list the businesses that weren't affected by the outage.

Low-cost airline Jetstar cancelled all Australia and New Zealand flights as a result of the outage, with flights only resuming at 2am on Saturday morning. Things should be largely back to normal today, but brace for delays if you're heading to the airport.

Jetstar said flights on Saturday "are currently planned to operate as scheduled. Please proceed to the airport as usual."

"There may be a small number of flights impacted due to operational reasons. If your flight is impacted, we will communicate directly to you using the contact details on your booking," a statement on the Jetstar website read.

The outage also hit the airwaves, causing Triple J host Abby Butler to manually play the station's theme music out of her phone.

Self-serve checkouts and EFTPOS facilities at supermarkets and petrol stations also went down, causing chaos, with some stores forced to close while others went cash-only.

Many major banks, including CommBank and ANZ, also had to close, which made getting cash out virtually impossible.

Rideshare services and delivery apps like Uber and DoorDash also faced issues, which were likely caused by payment system outages.

The outage is being described as perhaps the biggest in history, but thankfully, it looks like it is already mostly resolved.

The Deputy Secretary from the Home Affairs Cyber and Infrastructure Security Centre says the issue should self-resolve within the coming hours and days.

"There is no reason to panic, CrowdStrike are on it, it is not a cybersecurity incident and we're working as fast as we can to resolve the incident," he said on X.

Most stores and services seem to be operating as normal on Saturday morning, with social media users reporting that even the Jetstar desk at Sydney Airport didn't look too manic.

It should go without saying that anyone catching a flight today should probably allow some extra time to avoid an airport-induced headache.

Read the original post:
Microsoft Servers Are Back After The World's Biggest Outage - Here's What The Hell Happened - Pedestrian.TV

Amazon Graviton4 server CPU shown beating AMD and Intel processors in multiple benchmarks – TechSpot

In context: Amazon's AWS Graviton line of Arm-based server CPUs is designed by subsidiary Annapurna Labs. Amazon introduced the processors in 2018 for the Elastic Compute Cloud. These custom silicon chips, featuring 64-bit Neoverse cores, power AWS's A1 instances tailored for Arm workloads like web services, caching, and microservices.

Amazon Web Services has landed a haymaker with its latest Graviton4 processor. The chips are exclusive to AWS's cloud servers, but the folks at Phoronix have somehow managed to get their hands on a unit to give us a peek at their performance potential.

Graviton4 packs 96 Arm Neoverse V2 cores, each with 2MB of L2 cache. The chip also rocks 12 channels of DDR5-5600 RAM, giving it stupid amounts of memory bandwidth to flex those cores. Positioning this offering for R8g instances, AWS promises up to triple the vCPUs and RAM compared to the previous R7g instances based on Graviton3. The company also claims 30 percent zippier web apps, 40 percent faster databases, and at least 40 percent better Java software performance.
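As a rough sanity check on that bandwidth claim, the theoretical peak can be computed directly from the published memory configuration; the sketch below assumes the usual 8-byte (64-bit) DDR5 channel width, and real sustained bandwidth will land below this peak.

```python
# Theoretical peak memory bandwidth for Graviton4's published configuration.
channels = 12
transfers_per_second = 5600e6   # DDR5-5600: 5600 MT/s per channel
bytes_per_transfer = 8          # assuming a standard 64-bit DDR5 channel

peak_bw = channels * transfers_per_second * bytes_per_transfer
print(f"Theoretical peak memory bandwidth: {peak_bw / 1e9:.1f} GB/s")  # ~537.6 GB/s
```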

However, the real story lies in those benchmarks, which the publication ran on Ubuntu 24.04. In heavily parallelized HPC workloads like miniFE (finite element modeling) and Xcompact3d (complex fluid dynamics), Graviton4 demolished not just its predecessors but even AMD's EPYC 'Genoa' chips.

One particularly impressive showing was in the ACES DGEMM HPC benchmark, where the 96-core Graviton4 metal instance scored a staggering 71,131 points, smoking the second-place 96-core AMD EPYC 9684X at 53,167 points.
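For anyone curious to get a rough, unscientific feel for dense matrix-multiply throughput on their own instance, a few lines of NumPy will do; this is only a sketch, not the ACES DGEMM benchmark Phoronix actually ran.

```python
# Quick-and-dirty double-precision matrix-multiply (GEMM) throughput check.
import time
import numpy as np

n = 4096
a = np.random.rand(n, n)  # float64 by default
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                      # dispatched to the system BLAS (DGEMM)
elapsed = time.perf_counter() - start

flops = 2 * n**3               # multiply-adds in an n x n x n GEMM
print(f"{flops / elapsed / 1e9:.1f} GFLOPS in {elapsed:.2f} s")
```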

In code compilation, the Graviton4 significantly outpaced the Ampere Altra Max 128-core flagship but lagged behind the Xeon and EPYC processors with their varying core counts. However, it beat the EPYC 9754 in the Timed LLVM Compilation test.

The surprises kept coming with workloads not necessarily associated with Arm chips. Graviton4 demolished the competition in 7-Zip compression. Cryptography is another strong suit, with the Graviton4 nearly tripling its predecessor's performance in algorithms like ChaCha20.
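Readers who want a quick local sense of ChaCha20 throughput can time it with the widely used Python cryptography package; this is a casual sanity check rather than the OpenSSL-based benchmark used in the testing.

```python
# Rough ChaCha20 throughput check using the 'cryptography' package.
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

key = os.urandom(32)                  # 256-bit key
nonce = os.urandom(16)                # 128-bit nonce (counter + nonce, per this API)
data = os.urandom(64 * 1024 * 1024)   # 64 MiB of random plaintext

encryptor = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()

start = time.perf_counter()
ciphertext = encryptor.update(data)
elapsed = time.perf_counter() - start

print(f"ChaCha20: {len(data) / elapsed / 1e6:.0f} MB/s")
```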

After testing over 30 different workloads, Phoronix concluded that the Graviton4 is hands down the fastest Arm server processor to date. It's giving current Intel and AMD chips a considerable run for their money across various tasks.

Of course, this silicon arms race will only heat up further with new chips like Intel's Granite Rapids and AMD's Turin on the horizon. For now, AWS has a performance monster on its hands with Graviton4.

Image credit: Phoronix

See original here:
Amazon Graviton4 server CPU shown beating AMD and Intel processors in multiple benchmarks - TechSpot

AWS Graviton4 Benchmarks Prove To Deliver The Best ARM Cloud Server Performance – Phoronix


This week AWS announced that Graviton4 has reached general availability with the new R8g instances, after Amazon originally announced its Graviton4 ARM64 server processors last year as being built atop Arm Neoverse-V2 cores. I eagerly fired up some benchmarks myself and was surprised by the generational uplift compared to Graviton3. At the same vCPU counts, the new Graviton4 cores roughly match Intel Sapphire Rapids performance while being able to tango with AMD EPYC "Genoa" and consistently showing terrific generational uplift.

Graviton4 reached general availability this week, initially powering the new R8g instances. Graviton4-based R8g instances are promoted as offering up to 30% better performance than the prior-generation Graviton3-based R7g instances. Graviton3 CPUs sported 64 Neoverse-V1 cores, while Graviton4 has 96 Neoverse-V2 cores based on the Armv9.0 ISA. The Neoverse-V2 cores in Graviton4 have 2MB of L2 cache per core, and the chip pairs them with twelve-channel DDR5-5600 memory and other improvements over prior Graviton ARM64 processors.

AWS promotes Graviton4 as offering up to 30% faster performance within web applications, 40% faster performance for databases, and 40%+ greater performance for Java software.

Being curious about the Graviton4 performance, I fired up some fresh AWS instances to compare the R8g instance to other same-sized instances. The "16xlarge" size was used across all testing, providing 64 vCPUs and 512GB of memory per instance. The instances tested for today's article included:

Graviton2 - r6g.16xlarge
Graviton3 - r7g.16xlarge
Graviton4 - r8g.16xlarge
AMD EPYC 9R14 - r7a.16xlarge
Intel Xeon 8488C - r7i.16xlarge

All instances were tested using Ubuntu 24.04 with the Linux 6.8 kernel and stock GCC 13.2 compiler.
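For anyone wanting to reproduce a similar setup, launching one of these instances from Python with boto3 looks roughly like the sketch below; the AMI ID, key pair, and security group are placeholders rather than anything used in this testing.

```python
# Minimal sketch for launching an r8g.16xlarge test instance with boto3.
# The AMI ID, key pair and security group below are placeholders, not values
# from this article; substitute an Ubuntu 24.04 arm64 AMI for your region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # placeholder: Ubuntu 24.04 arm64 AMI
    InstanceType="r8g.16xlarge",                 # Graviton4, 64 vCPUs / 512GB
    KeyName="my-key-pair",                       # placeholder key pair name
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder security group
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])
```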

It would have been interesting to compare to Ampere Computing's cloud ARM64 server processors, but that isn't really feasible, unfortunately. With Ampere Altra (Max) in the cloud, as with Google's T2A Tau instances, only up to 48 vCPUs are available. And even then, Ampere Altra makes use of DDR4 memory and Neoverse-N1 cores... AmpereOne is of course the more direct competitor, albeit still not to be found. We still don't have our hands on any AmpereOne hardware, nor any indications from Ampere Computing when they may finally send out review samples. Oracle Cloud was supposed to be GA by now with their AmpereOne cloud instances, but those remain unavailable as of writing, and Ampere Computing hasn't been able to provide any other access to AmpereOne for performance testing. Thus it's still MIA for what may be the closest ARM64 server processor competitor to Graviton4.

Let's see how Graviton4 looks -- and its performance per dollar in the AWS cloud -- compared to prior Graviton instances and the AMD EPYC and Intel Xeon competition. The performance per dollar values were based on the on-demand hourly rates.
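For clarity, performance per dollar here simply means a benchmark score divided by the instance's on-demand hourly rate, as in the sketch below; the scores and prices shown are placeholders, not this article's results or current AWS pricing.

```python
# Performance-per-dollar = benchmark score / on-demand hourly rate.
# Scores and hourly rates below are placeholders for illustration only.
instances = {
    "r7g.16xlarge (Graviton3)": {"score": 100.0, "usd_per_hour": 3.50},
    "r8g.16xlarge (Graviton4)": {"score": 140.0, "usd_per_hour": 3.80},
}

for name, d in instances.items():
    ppd = d["score"] / d["usd_per_hour"]
    print(f"{name}: {ppd:.1f} score per dollar-hour")
```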

Page 1 - Introduction
Page 2 - HPC Benchmarks
Page 3 - Crypto Benchmarks, srsRAN + More
Page 4 - Code Compilation + 7-Zip
Page 5 - Ray-Tracing, Digital Signal Processing, OpenSSL
Page 6 - Database Workloads - ClickHouse, PostgreSQL, RocksDB
Page 7 - Blender + Conclusion

Read the original:
AWS Graviton4 Benchmarks Prove To Deliver The Best ARM Cloud Server Performance - Phoronix

Cohesity goes epic on AMD Epyc – ComputerWeekly.com

Data security and management company Cohesity is following (if not leading) the infrastructure efficiency efforts being seen across the wider technology industry with recent work focused on energy-efficient computing.

Cohesity Data Cloud now supports AMD Epyc CPU-powered servers.

Epyc (pronounced "epic") is AMD's brand of multi-core x86-64 microprocessors based on the company's Zen microarchitecture.

The two firms have collaborated to make sure users can deploy Cohesity Data Cloud on AMD Epyc CPU-based all-flash and hybrid servers from Dell, Hewlett Packard Enterprise (HPE) and Lenovo.

Reminding us that organisations face challenges from ransomware and cyberattacks to stringent regulatory requirements, IT constraints, tight budgets and tough economic conditions, Cohesity says that to solve these challenges, companies need to take advantage of technology that is best suited to their specific requirements.

"Customers each have unique needs but a common goal: securing and gaining insight from their data. They trust Cohesity, in part, because we strive to offer the largest ecosystem with the most choices to suit their preferences," said John Davidson, group vice president, Americas sales, Cohesity.

By supporting AMD Epyc CPU-powered servers, Davidson says his firm is opening up new options for users to customise and modernise their datacentre.

"[Customers can] increase performance and deliver energy, space and cost savings so they can execute their data security and management strategy on their preferred hardware configurations," he added.

All-flash servers have become an increasingly popular choice for organisations with high-demand applications and workloads, stringent power budgets for their datacentres, or increasing storage capacity requirements and little physical space within their datacentre.

As Supermicro notes here: "All-flash data storage refers to a storage system that uses flash memory for storing data instead of spinning hard disk drives. These systems contain only solid-state drives (SSDs), which use flash memory for storage. They are renowned for their speed, reliability, low energy consumption and reduced latency, making them ideal for data-intensive applications and workloads."

Cohesity now offers AMD-powered all-flash servers from HPE to modernise customer datacentres and meet the requirements of green initiatives through the greater density, performance and cost savings all-flash servers provide over traditional servers.

Single-socket 1U HPE servers based on AMD Epyc can reduce the number of required nodes and power costs by up to 33% when compared with dual-socket 2U servers based on other CPUs.

Cohesity's AI-powered data security and management capabilities are now generally available on AMD-powered all-flash servers from HPE and hybrid servers from Dell and Lenovo.

Excerpt from:
Cohesity goes epic on AMD Epyc - ComputerWeekly.com

Oracle Drops After Musk's xAI Shifts Away From Cloud Deal – BNN Bloomberg

(Bloomberg) -- Oracle Corp. dropped as much as 4.8% after Elon Musk said his artificial intelligence startup would rely less on cloud technology from the software maker, jeopardizing a potentially lucrative revenue stream.

In a post Tuesday on his social network X, Musk said his company, xAI, decided to build a system to train AI models internally because "our fundamental competitiveness depends on being faster than any other AI company." The Information reported earlier that the companies had ended talks on a potential $10 billion cloud agreement.

Oracle Chairman Larry Ellison said last September that Oracle had a deal to provide cloud infrastructure to Musk's xAI to train models. Ellison didn't release the value or the duration of the contract at that time. In his post, Musk said that xAI's Grok 2 model was trained on 24,000 Nvidia Corp. H100 chips from Oracle and is probably ready to release next month.

"Musk's decision to build AI-training infrastructure internally underscores the expansion challenges for cloud providers despite the availability of capital," wrote Anurag Rana, an analyst at Bloomberg Intelligence. "We believe these issues extend beyond Oracle and could also vex Microsoft and AWS, not just because of a shortage in specialized chips, but also power."

In May, the Information reported that Oracle and xAI were close to a deal to expand their relationship. Musk's startup would have spent about $10 billion to rent cloud servers from Oracle for a period of years, the Information reported then, citing a person involved in the talks. Those talks have now ended, the publication reported before Musk's posts.

Oracle's shares hit an intraday low of $138 after the report. The stock closed at $145.03 Monday, having gained 38% this year.


2024 Bloomberg L.P.

Go here to read the rest:
Oracle Drops After Musk's xAI Shifts Away From Cloud Deal - BNN Bloomberg