Category Archives: Cloud Servers

Northern Data Group to launch Europe's first cloud service with Nvidia H200 GPUs – Blocks & Files

Datacenter services outfit Northern Data Group will be the first cloud provider in Europe to offer the use of the recently launched powerful H200 GPU hardware from Nvidia.

Germany-headquartered Northern Data is made up of the Taiga Cloud, Ardent Data Centers, and Peak Mining business units. It will be Taiga Cloud that offers the GPUs in datacenters in the final quarter of this year, said Northern Data.

Northern Data Group is an Elite Partner of Nvidia. Taiga's energy-efficient cloud is said to be powered by Europe's largest cluster of Nvidia A100 Tensor Core and H100 Tensor Core GPUs, which helps organizations accelerate AI and ML innovation on demand, according to the company.

Taiga is deploying the new GPUs in partnership with Supermicro, and the move will help further build up Northern Data's high-performance computing solutions, while complementing its existing GenAI technologies available to customers.

The first full island of over 2,000 Nvidia H200 GPUs will deliver 32 petaFLOPS of performance, it is promised. They will utilize BlueField-3 data processing units (DPUs) and Nvidia Mellanox CX7 NICs. The Nvidia reference architecture configuration will give customers access to more bandwidth and faster, more efficient data storage access, said the provider.

The GPUs will be accommodated in one of Northern Data Group's European datacenters, powered by carbon-free, renewable energy, and have a power usage effectiveness ratio of less than 1.2, it is promised.

"Our GenAI platform is constantly evolving and we are proud to collaborate with Supermicro to be the first in Europe to offer access to Nvidia H200 GPUs," said Aroosh Thillainathan, founder and CEO, Northern Data Group. "This is testament to our commitment to continually offer industry-leading, next-generation solutions."

"We are collaborating with Northern Data Group to expand its GenAI cloud offering to include Supermicro GPU servers based on the latest Nvidia H200 HGX GPUs," added Vik Malyala, president and managing director, EMEA, and SVP of technology and AI at Supermicro.

"We are fully committed to delivering the most performant and energy-efficient AI infrastructure and solutions, and this collaboration will accelerate availability and bring the best value for customers in Europe."

Last December, Ardent Data Centers revealed a €110 million ($119 million) investment to expand its ability to provide colocation services for HPC and AI compute power across both Europe and the US.

It signed Letters of Intent with two datacenter sites in the US, and was the preferred bidder on a strategic site in the UK. The assets will boost capacity to help address the surging demand for compute power needed for GenAI applications. The anchor tenant for the extra capacity will be sister company Taiga Cloud.

Read the rest here:
Northern Data Group to launch Europe's first cloud service with Nvidia H200 GPUs Blocks and Files - Blocks & Files

Understanding serverless and serverful computing in the cloud era – TechRadar

Two distinct methods have emerged in cloud computing: serverless and serverful computing. Serverless computing represents a significant departure from traditional approaches, offering exciting possibilities for innovation, operational streamlining, and cost reduction. But what exactly does it involve, and how does it differ from the established serverful model?

Serverless computing introduces an approach where you, the developer, only worry about the code you need to run, not the infrastructure around it. This narrowing of focus simplifies operational management and cuts expenses by pushing the server management tasks elsewhere. As a concept, it's similar to Business Process Outsourcing (BPO) or contracting out Facilities Management. You're concentrating on the areas where you have IP or can build value and letting someone else own those non-core processes that extract value.

In contrast, serverful computing is how most organizations have consumed the cloud: you are accountable for managing and overseeing servers, which in turn gives you the most control and customization options.

Empowering IT professionals and developers with knowledge of these approaches and their inherent tradeoffs is crucial for devising an effective cloud strategy. Your expertise and understanding are key to choosing the right approach for your business.


UK Director for Cloud Computing Technology and Strategy at Akamai.

Serverful computing, or traditional server-based computing, involves a hands-on approach to deploying applications. In this model, you are responsible for managing the servers that run your applications, which includes provisioning servers, updating operating systems, scaling resources to meet demand, access control, and ensuring high availability and fault tolerance.

This approach provides more control over your IT infrastructure. You can customize almost every aspect of your environment to suit your application. For example, you can deploy additional security controls or software, tune the kernel to get maximum performance, or use specific operating systems needed to support aspects of your application stack, all of which aren't readily achievable in a serverless environment.

On the other hand, serverless computing removes most of the complexity of managing cloud infrastructure by abstracting it away. With this abstraction, you avoid directly managing cloud servers and instead hire backend computing in an as-a-service model. There are still servers, but you no longer need to worry about them; the provider ensures they're available, patched, compliant, and secure.


Serverless and Event-Driven computing are often used interchangeably, but whilst they overlap, there are some crucial differences.

Serverless computing can be used to implement event-driven architectures because it can automatically scale to handle a varying number of events and only charges for actual execution time. For instance, a serverless function can be triggered by an event such as an HTTP request or a message in a queue.
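
For illustration, a minimal sketch of such a function, written as an AWS Lambda-style Python handler behind an HTTP trigger, might look like the following. The event fields shown are assumptions; the exact event shape varies by provider and trigger type.

```python
import json

def handler(event, context):
    """A minimal serverless function: the platform invokes it once per event
    (e.g. an HTTP request routed through an API gateway, or a queue message).
    No provisioning or scaling logic appears in the application code."""
    # For an HTTP-style event the payload usually arrives as a JSON body;
    # the structure of `event` depends on the provider and the trigger.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

The same handler could equally be wired to a queue or a schedule; only the trigger configuration changes, not the code.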

Not all event-driven architectures are serverless, and not all serverless functions are event-driven. Event-driven systems can also be built using traditional serverful infrastructure, and serverless functions can perform scheduled tasks or be invoked directly by an API rather than being driven by events.

There is no one-size-fits-all approach, and you may find that you use both options even within a single application. In an HR system, storing employee records in a serverful database is practical to support complex or long-running queries, such as payroll processing. However, multi-stage and ad-hoc time-off requests are well-suited for a serverless application.

Serverless computing offers two primary advantages: simplicity and an execution-based cost model. By adopting serverless, businesses can manage their infrastructure more easily, as the cloud provider takes care of server provisioning, scaling, and maintenance. This approach allows developers to focus on writing and deploying applications without the burden of managing underlying servers.

Serverless computing also enhances efficiency and resource utilization, as businesses only incur costs for the actual computing power used, and only when it is used. Business leaders can plan more simply because they can reason that "each transaction costs X and we expect Y transactions, so our bill this month will be Z."
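
To make that execution-based model concrete, here is a back-of-the-envelope sketch. The traffic figures and unit prices are assumptions for illustration, not any provider's actual tariff.

```python
# Rough serverless cost model:
#   bill ~ requests * per-request fee + (requests * duration * memory) * per-GB-second fee
requests_per_month  = 5_000_000
avg_duration_s      = 0.2         # average execution time per invocation
memory_gb           = 0.5         # memory allocated to the function
price_per_request   = 0.0000002   # assumed per-invocation price
price_per_gb_second = 0.0000167   # assumed per-GB-second compute price

gb_seconds = requests_per_month * avg_duration_s * memory_gb
bill = requests_per_month * price_per_request + gb_seconds * price_per_gb_second
print(f"Estimated monthly bill: ${bill:,.2f}")   # about $9.35 for this workload
```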

When used on platforms with open standards, for example, NATS.io instead of a hyperscaler's data-streaming solution, this transaction-based model can significantly reduce expenses and unlock new opportunities for innovation, freeing developers and managers to concentrate on building high-quality applications rather than dealing with infrastructure complexities.

On the other hand, serverful computing provides businesses with greater control and customisation over their infrastructure. By managing your own servers, you can tailor the environment to meet specific needs and ensure high performance, reliability, and security. This approach is beneficial for applications that require consistent and long-term resource allocation, as it allows for fine-tuning and optimization that serverless models may not offer.

Additionally, serverful computing enables direct oversight of the hardware and software stack, enabling detailed monitoring and troubleshooting. This hands-on control can be crucial for enterprises with stringent regulatory requirements or those needing to handle sensitive data securely.

While serverless computing offers compelling benefits, it also presents challenges that businesses must navigate. On a smaller scale, being serverless is a highly efficient way to consume cloud computing services. When demand begins to ramp up, it can rapidly become costly, especially if platform lock-in is a factor. Think of it like taking a taxi versus buying a car. A taxi ride once a week is a cheap way to get home from the office, but taking a taxi to and from the office every day, to and from your kids' school for drop-off and pick-up, and to the shops at the weekend for groceries quickly becomes outrageously costly compared to buying a car.
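
The taxi-versus-car point reduces to a simple break-even calculation. The per-million-request cost is carried over from the earlier sketch and the flat server price is an assumed figure, not a real quote.

```python
# At what volume does a flat-price, always-on server beat per-invocation billing?
cost_per_million_requests = 9.35      # serverless cost from the earlier sketch
server_flat_cost_per_month = 150.00   # assumed price of an instance you run yourself

break_even = server_flat_cost_per_month / cost_per_million_requests
print(f"Break-even at roughly {break_even:.0f} million requests per month")
# Below that volume the 'taxi' (serverless) is cheaper; well above it, the 'car'
# (a serverful instance) usually wins, before even counting egress or lock-in costs.
```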

To mitigate these risks, companies need to establish a culture of cost monitoring, open standards, and vendor evaluation. Choosing vendors with low or no egress fees can help control expenses, and using open standards ensures the app's portability. This avoids the technical debt that comes from becoming overly reliant on a single provider's proprietary services or APIs, a reliance that hinders flexibility and increases migration complexity down the line, potentially resulting in significant refactoring costs.

Balancing the advantages of serverless computing with these challenges requires careful planning and strategic decision-making to ensure long-term success in the cloud environment.

The decision here is how you manage the tradeoffs inherent in serverful and serverless computing: control vs consume, open standards vs proprietary, fixed costs vs dynamic cost base. Looking ahead to the next six months and beyond, serverless and serverful computing are poised to continue evolving in response to changing business needs.

While offering simplicity and cost-effectiveness, serverless computing remains constrained by factors such as speed and latency, much like other cloud-based services. However, many providers have built Edge and Distributed platforms that deliver more sophisticated serverless offerings, bringing computing power closer to end-users, mitigating latency issues and enhancing overall performance.

In contrast, serverful computing will maintain its relevance, particularly for applications requiring more significant control over infrastructure, higher performance, or specific regulatory or security requirements. There will always be a place for both serverless and serverful cloud computing. As cloud technology continues to mature, we may see advancements in serverful computing that improve automation, scalability, and resource optimization, further enhancing its appeal in certain use cases.

Ultimately, the future of cloud computing lies in striking the right balance between serverless and serverful approaches, leveraging the strengths of each to optimize performance, efficiency, security, and agility in an increasingly digital world.



Link:
Understanding serverless and serverful computing in the cloud era - TechRadar

Cloud Server Market Analysis, Size, Growth, Competitive Strategies, and Worldwide Demand – openPR


Get Free Exclusive PDF Sample Copy of This Research @ https://www.advancemarketanalytics.com/sample-report/33229-global-cloud-server-market?utm_source=OpenPR&utm_medium=Vinay

Major players profiled in the study are: IBM (United States), Rackspace Technology, Inc. (United States), Microsoft Corporation (United States), Google LLC (United States), HP (United States), Dell (United States), Oracle (United States), Lenovo (China), Sugon (China), Inspur (China), Cisco Systems, Inc. (United States)

Scope of the Report of Cloud Server: The Cloud Server market is expected to grow in the future due to increasing business demand for maximum flexibility of resources and high demand for secure servers from enterprises. No hardware deployment, mobility, and lower costs are the major factors for the growth of the cloud server market globally. Cloud servers avoid the hardware issues seen with physical servers, and they are likely to be the most stable option for businesses looking to keep their IT budget down, which is boosting the market.

In Feb 2020, SiteGround, the largest independent web hosting platform trusted by the owners of more than two million domains, announced that it will be moving a big part of its infrastructure to Google Cloud.

The Global Cloud Server Market segments and Market Data Break Down are illuminated below: by Application (Production, Development & Testing, Disaster Recovery, Others), Industry Vertical (BFSI, Retail, Government, Healthcare and Life Sciences, Telecommunication and IT, Travel and Hospitality, Transportation and Logistics, Others), Deployment Model (Public Cloud, Private Cloud, Hybrid Cloud)

Market Opportunities: Increasing Awareness about Cloud Server among Developing Economies; Rising Number of Small and Medium-sized Enterprises Globally

Market Drivers: Increase in Demand for Secure Servers due to Cyber-Attacks and System Downtime; Growing Demand for Quick Access to Real-Time Data and to Deploy Large Projects

Market Trend: Increased Usage of Virtualization Infrastructure and Adoption of Cloud-based Services among Business Enterprises

What can be explored with the Cloud Server Market Study? Gain Market Understanding; Identify Growth Opportunities; Analyze and Measure the Global Cloud Server Market by Identifying Investment across various Industry Verticals; Understand the Trends that will drive Future Changes in Cloud Server; Understand the Competitive Scenarios; Track Right Markets; Identify the Right Verticals

Region Included are: North America, Europe, Asia Pacific, Oceania, South America, Middle East & Africa

Country Level Break-Up: United States, Canada, Mexico, Brazil, Argentina, Colombia, Chile, South Africa, Nigeria, Tunisia, Morocco, Germany, United Kingdom (UK), the Netherlands, Spain, Italy, Belgium, Austria, Turkey, Russia, France, Poland, Israel, United Arab Emirates, Qatar, Saudi Arabia, China, Japan, Taiwan, South Korea, Singapore, India, Australia and New Zealand etc.

Get Up to 30% Discount on This Premium Report @ https://www.advancemarketanalytics.com/request-discount/33229-global-cloud-server-market?utm_source=OpenPR&utm_medium=Vinay

Strategic Points Covered in Table of Content of Global Cloud Server Market:
Chapter 1: Introduction, market driving force, product objective of study and research scope of the Cloud Server market
Chapter 2: Exclusive Summary - the basic information of the Cloud Server Market
Chapter 3: Displaying the Market Dynamics - Drivers, Trends and Challenges & Opportunities of the Cloud Server
Chapter 4: Presenting the Cloud Server Market Factor Analysis, Porter's Five Forces, Supply/Value Chain, PESTEL analysis, Market Entropy, Patent/Trademark Analysis
Chapter 5: Displaying the market by Type, End User and Region/Country 2016-2022
Chapter 6: Evaluating the leading manufacturers of the Cloud Server market, which consists of its Competitive Landscape, Peer Group Analysis, BCG Matrix & Company Profile
Chapter 7: Evaluating the market by segments, by countries and by Manufacturers/Companies with revenue share and sales by key countries in these various regions (2024-2030)
Chapters 8 & 9: Displaying the Appendix, Methodology and Data Source

Finally, the Cloud Server Market report is a valuable source of guidance for individuals and companies.

Read Detailed Index of full Research Study at @ https://www.advancemarketanalytics.com/buy-now?format=1&report=33229?utm_source=OpenPR&utm_medium=Vinay

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as North America, Middle East, Africa, Europe, LATAM, or Southeast Asia.

Contact Us: Craig Francis (PR & Marketing Manager) AMA Research & Media LLP Unit No. 429, Parsonage Road Edison, NJ New Jersey USA - 08837 Phone: +1(201) 7937323, +1(201) 7937193 sales@advancemarketanalytics.com

About Author: Advance Market Analytics is a global leader in the market research industry, providing quantified B2B research to Fortune 500 companies on high-growth emerging opportunities that will impact more than 80% of worldwide companies' revenues. Our analysts track high-growth studies with detailed statistical and in-depth analysis of market trends and dynamics that provide a complete overview of the industry. We follow an extensive research methodology coupled with critical insights related to industry factors and market forces to generate the best value for our clients. We provide reliable primary and secondary data sources, and our analysts and consultants derive informative and usable data suited for our clients' business needs. The research study enables clients to meet varied market objectives, from global footprint expansion to supply chain optimization and from competitor profiling to M&As.

This release was published on openPR.

Link:
Cloud Server Market Analysis, Size, Growth, Competitive Strategies, and Worldwide Demand - openPR

Apple Is Preparing M5 with SoIC for Future Macs and AI Servers – The Mac Observer

At WWDC last month, Apple finally announced its AI plans, which are expected to be the powerhouse of upcoming software updates including iOS 18, macOS 15 Sequoia, and so on.

That's why Apple is gearing up to use more advanced and powerful chips, such as the M5 with SoIC (System on Integrated Chip) packaging, to boost performance for both Macs and cloud-based AI servers. And once again, Apple will partner with Taiwan-based TSMC to meet its requirements, according to a report from Economic Daily.

As analyst Jeff Pu suggests, Apple's timeline may see AI servers equipped with the M4 chip by late 2025, potentially superseding the current M2 Ultra chips used in AI cloud servers and originally designed for Mac devices. Meanwhile, hints about Apple's upcoming M5 chip have already surfaced in official code, corroborating reports that work is ongoing alongside TSMC's 3nm process.

Apple's AI cloud servers currently use several linked M2 Ultra chips, which were initially meant only for desktop Macs. When Apple switches to the M5 with SoIC, its advanced dual-purpose design suggests Apple is preparing for the future by integrating its supply chain to support AI across computers, cloud servers, and software.

TSMC introduced SoIC in 2018, letting chips stack up in 3D. This setup handles electricity and heat better than traditional flat designs. Apple and TSMC are teaming up to develop an even more advanced version of SoIC. Currently, it's in testing, but the companies plan to start mass production of the M5 for new Macs and AI servers by the second half of 2025.


Read more from the original source:
Apple Is Preparing M5 with SoIC for Future Macs and AI Servers - The Mac Observer

Apple is reportedly making M5 chips for both Mac and cloud AI use – Pocket-lint


Apple is switching to a dual-purpose focus with its upcoming M5 processor, aiming to power both Macs and the servers running Apple Intelligence, according to the Chinese-language Economic Daily, by way of DigiTimes. The chip is reportedly in small-scale trial production, using a more advanced SoIC (System on Integrated Chip) packaging technology created in tandem with long-time Apple partner TSMC. Beyond just exploiting 3D architecture, like existing SoICs, the M5 allegedly incorporates thermoplastic carbon fiber composite molding technology.


The significance of that last point is uncertain, but to date, most of Apple's M-series chips have been geared towards Macs. With Apple Intelligence slated to launch alongside iOS 18, iPadOS 18, and macOS Sequoia this fall, the company may need future servers to be maximally efficient to cope with millions of generative AI prompts every day. Its current AI servers are believed to be equipped with the M2 Ultra, also found in the Mac Pro and higher-end versions of the Mac Studio.

Apple and TSMC's goal for the M5 is to have it enter mass production between 2025 and 2026, DigiTimes says. If so, the first M5 Macs will likely ship towards the end of 2025 rather than earlier, since Apple has yet to put the M4 in anything but the 11- and 13-inch iPad Pros. The first M4 Macs are expected to arrive by the end of 2024, and could be announced as soon as Apple's September iPhone event.

The M series is optimized for Apple's various software platforms. They're more efficient at associated tasks than an AMD or Intel chip would be, which can mean speed boosts in some cases, and less memory usage in others. A knock-on benefit may be reduced power consumption, which is extremely important in the context of cloud servers. Apple datacenters are estimated to have consumed 2.344 billion kWh of electricity in 2023 alone, which is not only expensive but an obstacle to Apple's environmental sustainability goals. The company is going to have to ramp up its renewable power projects to support Apple Intelligence, and may be hoping that the M5 will take some of the edge off.
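
For a sense of scale, that annual figure can be converted into an average continuous draw and an indicative electricity bill. The dollar-per-kWh rate below is an assumption for illustration, not a reported Apple cost.

```python
annual_kwh = 2.344e9            # estimated Apple datacenter consumption for 2023 (from above)
hours_per_year = 8760

avg_power_mw = annual_kwh / hours_per_year / 1000   # kWh per hour = kW, then convert to MW
assumed_rate_per_kwh = 0.08                         # assumed industrial electricity rate, $/kWh
annual_cost = annual_kwh * assumed_rate_per_kwh

print(f"Average draw: ~{avg_power_mw:.0f} MW, continuously")          # ~268 MW
print(f"Indicative power bill: ~${annual_cost / 1e6:.0f}M per year")  # ~$188M at the assumed rate
```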

Over the decades, Apple has gradually brought more and more chip design in-house, even if it's reliant on firms like TSMC to actually manufacture parts. Some other examples include the A-series processors used in iPhones and iPads, and its W-, U-, and H-series wireless chips.

Read the original here:
Apple is reportedly making M5 chips for both Mac and cloud AI use - Pocket-lint

Storage news ticker – July 1 – Blocks & Files

Cloud-native data protector Alcion has joined Carnegie Mellon's CyLab Venture Network as an inaugural startup partner. Several members of its leadership team, including co-founders Niraj Tolia and Vaibhav Kamra, are alumni of CMU.

Aston University researchers in the UK are part of a team that has sent data at a record rate of 402 terabits per second using commercially available optical fiber. This beats their previous record, announced in March 2024, of 301 terabits, or 301,000,000 megabits, per second using a single, standard optical fiber. The international research team included Professor Wladek Forysiak and Dr Ian Phillips, who are members of the University's Aston Institute of Photonic Technologies (AIPT). It was led by the Photonic Network Laboratory of the National Institute of Information and Communications Technology (NICT), based in Tokyo, Japan, and also included Nokia Bell Labs of the US.

The speed was achieved by using a wider spectrum, six wavelength bands (O, E, S, C, L, and U) rather than the previous four, which increased capacity for data sharing. Aston University contributed specifically by building a set of U-band Raman amplifiers for the longest-wavelength part of the combined spectrum, where conventional doped fiber amplifiers are not presently available from commercial sources.

French DNA storage startup Biomemory has hired Dr Dominique Loque as its chief scientific officer. He was previously head of R&D at Neoplants, a synthetic biology tool maker for plant metabolic engineering, and worked at Bayer Crop Science before that. Biomemory has also added Chrystelle Roger, Steffen Hellmold, Olivier Lauvray, Thibault Vieille, and Raphaël Maunier to its Advisory Board.

Erfane Arwani, CEO of Biomemory, said: "In ten years, we anticipate the addressable market for DNA data storage to be around $10 billion. It will be essential to collaborate with the largest players in the data storage sector to meet the growing demand for high-density storage in datacenters while preserving our planet from excessive energy consumption."

Cirata, the rebranded active data replicator WANdisco, says that General Motors is proceeding to phase 2 under an existing master license agreement. Phase 2 will be transacted on the Azure Marketplace, which includes a 4PB data allowance and accompanying maintenance and support services, with a contract value of $490,733.

Cirata has released Data Migrator 2.5, which includes native integration with the Databricks Unity Catalog. It can be used to automate large-scale transfer of data and metadata from existing data lakes to cloud storage and database targets, even while changes are being made by the application at the source. Data Migrator 2.5 also supports IBM's General Parallel File System (GPFS), the cluster file system used as storage for IBM Spectrum Scale data lakes. It initiates data transfer from a source GPFS file system as changes occur, without disruption to the storage environment, and can be used for one-off cloud migrations, disaster recovery processes, and continuous data migration use cases. Migration targets supported include the Hadoop Distributed File System, Alibaba Cloud Object Storage Service, Amazon S3, Azure Data Lake Storage Gen 2, Google Cloud Storage, IBM Cloud Object Storage, and Oracle Object Store.

Data lakehouse supplier Cloudera has appointed Francisco Mateo-Sidron as SVP of sales for the EMEA region. He was previously SVP and GM for EMEA at Qlik for almost three years, with 25 years at SAP before that.

Data protector Druva has made a February 2024 Gartner Enterprise Backup and Recovery Software Solutions Voice of the Customer document available for download. Naturally, it features strongly in the report, being one of four vendors recognized as a Customers' Choice amongst the 11 suppliers mentioned. Download it here (registration required).

Data pipeline supplier Fivetran has been hired by customer service platform Intercom. It says Intercom can now centralize data from all its third-party sources, instantly placing reliable and secure data into the hands of analysts and other key business decision makers, while freeing up the company's data engineers so they have more time to focus on business-critical initiatives.

The FMS 2024 conference announced that keynote sessions will be provided by FADU, Kioxia, KOVE, Micron, NEO Semiconductor, Phison, Samsung, Silicon Motion, SK hynix, and Western Digital. Jay Kramer, FMS keynote master of ceremonies, said: "Attendees will hear about the latest memory and storage technology advances empowering emerging applications driven by AI, big data analytics, quantum computing, and 5G infrastructure."

Streaming data lake supplier Hydrolix has launched its Powered by Hydrolix Partner Program, saying partners can offer customers hot access to four times more data at a quarter of the cost. It was inspired, designed, and launched in the wake of a successful partnership with Akamai. Hydrolix joined the Akamai Qualified Cloud Compute Partner program in November 2023 and together the companies built the TrafficPeak managed observability service that is used by more than 100 companies worldwide.

Huawei Central reports that Huawei and China's Wuhan Xinxin (XMC) have a collaboration to build high bandwidth memory (HBM) chips, which are used to provide GPUs with fast access to more data than can be held in x86 server-socket DRAM. Currently, SK hynix and Samsung have a near 50:50 share of the HBM market, with Nvidia the dominant GPU supplier. Micron has entered the market as well and wants to grow its 3-4 percent share. GenAI training is sending GPU demand, and thus HBM demand, through the roof.

Reuters reports that NAND and SSD supplier Kioxia is to file a preliminary IPO application for its shares to be listed on the Tokyo Stock Exchange. Kioxia declined to comment. A consortium led by Bain has majority ownership of Kioxia, which has a NAND fabrication joint venture with Western Digital. Kioxia previously attempted an IPO in 2020 and a merger with Western Digital in 2023, which foundered due to opposition from Bain consortium member SK hynix. Now generative AI is helping to lead NAND market revenues out of a cyclical trough, and an IPO looks possible again. Kioxia's latest IPO plans emerged initially in April.

Data management supplier Reltio has appointed Mihir Shah to its advisory board. Shah was CEO at Storcentric, leaving in April. Storcentric was a mini-conglomerate operation, acquiring Vexata, Nexsan, and Drobo, and was itself bought by Serene Investment Management in February 2023. He became the founder and managing partner at Altos Equity and a board member at Other World Computing.

Reltio announced the latest release of its Connected Data Platform with enhancements for its AI-powered offerings: Customer 360 Data Product, Multidomain MDM, and Entity Resolution. There are new velocity packs, additional audience-segmentation capabilities for Customer 360 Data Product, premium business continuity service featuring cross-regional disaster recovery and enhanced data resiliency, and expanded GenAI and ML capabilities.

Data protector and cyber resiliency business Rubrik has been named the Healthcare and Life Sciences 2024 Microsoft Partner of the Year.

Wedbush analyst Matt Bryson says Seagate appears to be continuing to reduce the forward velocity of HAMR component requirements (seemingly pushing out a meaningful production ramp), while continuing to lift procurement of parts for 18-22TB conventional drives.

Reuters reports that the SK Group will invest ₩103 trillion ($74.6 billion) through 2028 to strengthen its AI memory chips business, which includes building HBM chips. Wedbush analyst Matt Bryson suggests 80 percent of this investment is slated to be allocated towards HBM.

NAND supplier and SK hynix subsidiary Solidigm, working with the Storage Review media outlet, announced a new world record for the calculation of pi, surpassing the records they previously set in March 2024 and March 2023. They calculated pi to 202 trillion digits, nearly doubling their most recent record of 105 trillion digits set earlier this year in March. Their system used nearly 1.5PB of storage across 28 Solidigm D5-P5336 61.44TB NVMe SSDs in a Dell PowerEdge server powered by Intel Xeon CPUs. The calculation spanned an uninterrupted period of 85 days, about three months.

Swissbit's new N2000 (Gen3 PCIe) and N3000 (Gen4 PCIe) DRAM-less SSD product families each have four lanes and are available in M.2 2230/2242/2280 form factors. The N2000 series offers storage capacities from 60 to 480GB with TLC technology. The spectrum of the N2600 with pSLC ranges from 20 to 160GB and offers an increased service life. The features are complemented by standard integrated security functions such as AES256 encryption, Crypto Erase, and TCG Opal 2.0. The N2000 family is ideal for green IoT applications such as charging stations, industrial applications, embedded systems, data logging, and boot OS scenarios.

The N3000 series is available in M.2 2242/2280 form factors with TLC technology and capacities ranging from 240 to 1,920GB. The same applies to the N3002 PLP series, which is available in the M.2 2280 form factor. The N3602 PLP series with pSLC technology, which is available in the M.2 2280 form factor with storage capacities ranging from 80 to 320GB, is suitable for particularly high endurance requirements. They have security functions such as AES256, Crypto Erase, and TCG Opal 2.0, and are suited for industrial applications, networking systems such as routers or switches, as well as edge servers.

Swissbit offers a solderable alternative to the N2000 in the form of the E2000 PCIe BGA family. The E2000 and E2600 series offer full PCIe SSD performance in a multi-chip design measuring just 3.2 square centimeters. The ultra-compact BGAs also support an ambient temperature range from -40 to 85°C and are available with storage capacities ranging from 60 to 480GB (E2000, TLC) and 20 to 160GB (E2600, pSLC).

Data warehouser Teradata will offer its VantageCloud Lake on Google Cloud, featuring updates that are designed to leverage the strengths of both Teradata and Google Cloud. It says key components of VantageCloud Lake are its ClearScape Analytics and a connected ecosystem that, for the Google Cloud edition, include seamless integrations with Google Cloud's AI technologies, Vertex AI platform, and Gemini models. For example, the Teradata Customer Complaint Analyzer uses Gemini models to provide a richer and more comprehensive view of customer complaints. This joint solution can automatically categorize, analyze, and identify insights from text and audio-based customer complaints more efficiently than traditional methods, which only analyze text.

Teradata VantageCloud Lake on Google Cloud is now generally available. ClearScape Analytics integration with the Vertex AI platform is now generally available, and the solution's enhancements with Gemini models will be available for private preview in the first half of 2025.

View original post here:
Storage news ticker July 1 Blocks and Files - Blocks & Files

Lenovo adopts Chinese Loongson CPUs for cloud servers 16-core Loongson 3C5000 chips necessary to rebuff US … – Tom’s Hardware

This week, Chinese CPU developer Loongson published 105 programs from 53 developers that natively support its 5000- and 6000-series processors based on the proprietary LoongArch architecture. As the list revealed, Lenovo has quietly deployed Loongson's processors in its datacenters and is running cloud services on them, reports The Register. The scale of the deployment is unclear, but the revelation highlights Lenovo's commitment to using Chinese CPUs.

For now, Lenovo offers three software packages that support Loongson's LoongArch-based platforms: Wentian WxSphere Server Virtualization System Software V8.0 (16-core 3C5000L/3C5000), Wentian WxCloud Cloud Computing Management Platform V3.0 (16-core 3C5000L/3C5000), and Wentian WxStack Hyper-converged System Software V8.0 (quad-core 3A6000). For Lenovo, this is enough to deploy Loongson's 5000-series CPUs commercially for its cloud services and prepare to deploy the next-generation Loongson 6000-series processors.

Loongson has quietly gained traction in China with mini PCs aimed at the channel, NAS, and the education sector. These moves align with China's increasing urgency to replace Western technology with homegrown solutions, driven by policy objectives and necessity due to U.S.-led sanctions.

Deploying 16-core 3C5000 processors for cloud services is something new, but it shows that Lenovo is confident in these CPUs and their successors, which will feature up to 128 cores. Lenovo's support for Loongson's architecture is crucial in making Chinese hardware a viable alternative to existing enterprise technologies. This support is expected to challenge companies like AMD and Intel, especially given China's vast market, which includes major telecommunications companies with extensive customer bases.

It is unclear whether it makes much financial sense to use 16-core CPUs for cloud services nowadays, as there are more powerful equivalents from traditional x86 CPU vendors specifically architected for such workloads. However, Lenovo needs to learn how Loongson's CPUs behave with its instances today and try out the next-generation DragonChain microarchitecture-based processors that will be rolling out over the next couple of years.

Notably, Lenovo's software stack is not the only cloud platform in China to support Loongson's processors; there are ten more platforms from various vendors, so there are more Loongson-based cloud deployments in the country.


Read the original:
Lenovo adopts Chinese Loongson CPUs for cloud servers 16-core Loongson 3C5000 chips necessary to rebuff US ... - Tom's Hardware

Is Apple going to give us the XServe replacement we need? – XDA Developers


There was plenty announced at WWDC this year, from new versions of iOS and macOS, to the AI-ification of everything. But hidden in Apple's plethora of announcements was Private Cloud Compute (PCC), a technical model for offloading AI processing capabilities to the cloud in a privacy-focused manner. There's a lot to unpack about PCC, but one thing did catch our eye. It runs exclusively on Apple Silicon, making use of several semi-proprietary technologies like Secure Enclave in the cloud.

Apple has made servers before, but they haven't shipped a new one for well over a decade. With this new market for Apple Silicon-based cloud computing, could Apple be about to finally announce a replacement for the classic XServe?


The original XServe marked Apple's entry into the enterprise server market. Released in 2002, XServe was a series of rack-mountable servers running a custom version of macOS. Capable of running either individually or in clusters, it was especially favored by creatives and educational institutes due to the collaborative features and easy onboarding for specific sectors. This server OS (at the time known as Mac OS X Server) was relatively unique in the enterprise space in that it provided a graphical user interface not just to set up the operating system, but also to set up and enable core functionalities like hosting web pages, email servers, or databases.

XServe had several generations, even making the switch from PowerPC to Intel alongside the rest of the Mac lineup, and was initially very popular. The creative sector in particular loved the XServe, with its easy setup, seamless integrations with existing Mac products, and applications like Final Cut Server all making it perfect for scaling up creativity.

An XServe cluster in use at NASA (Source: Wikimedia Commons)

Ultimately the original XServe was dropped in 2010 as its popularity faded. Less and less effort was put into maintaining the bespoke applications and GUIs that made the XServe popular, and eventually, the dedicated version of Mac OS X Server was rolled into mainline macOS. This spelled the end for XServe, and it was announced that any further development would cease.

In the aftermath of XServe's demise, a frustrated customer emailed Steve Jobs, only to be told "Hardly anyone was buying them."

In theory, the Mac Pro Server and Mac Mini Server replaced the XServe. In practicality though, this ushered in a decade of pain for anyone trying to run Mac OS X in the data center. The 'server' OS was bundled into mainline Mac OS X, and the server-specific components were made available as a download through the newly launched Mac App Store. Development continued on the server elements, and it still exists now in some form.

The hardware for the Mac Pro Server and Mac Mini Server didn't last long though. The Mac Pro Server was discontinued in 2013, after barely a single refresh from its late 2010 launch. Ouch. The fate of the Mac Mini Server wasn't much better, and it was discontinued in 2014. There was no clear replacement for either of these products, and running Mac OS X in the data center became even harder. In the wake of this, we got the trashcan Mac Pro, which, well, the less said the better. It was everything you didn't want in a server rack - circular, badly cooled, and difficult to repair.


Fast-forward to today and macOS is a nightmare for the data center. Due to the way Apple's ecosystem works for certain things, like Xcode or iOS app development, developers are required to build jobs on Apple hardware or run certain code tests. To do this at scale, big companies often need access to tens or hundreds of macOS machines in the cloud.

The combination of developers and businesses needing access to macOS in the cloud, the difficulty of virtualizing and the lack of any appropriate form factors, has led to a whole side industry of Mac Mini hosting. You can rent Macs in the cloud either through dedicated providers like macincloud.com, or more recently they've also become available through AWS. Depending on the provider (or which spec you select), these will likely be Apple Silicon Mac Minis or Mac Pros mounted in a data center somewhere.

As we mentioned at the start, Apple is now running its PCC at scale, on Apple Silicon hardware. Apple doesn't mention what hardware they're using for this. The modern Mac Pro has a rack mount version, but it's prohibitively expensive and is more intended for professionals in the music space than for data center use. The Mac Mini is also far from ideal, and wasn't designed for data center utilization. The Mac Studio is probably the most reasonable candidate, but again, it is a far cry from the rack-mountable hardware we're used to in the data center.

This raises the question - does Apple have some more hardware in the pipeline? And could we finally see a worthy successor to XServe? Cloud hosting for Macs has been a nightmare for years, and it's a problem Apple is surely aware of (as it's running more and more of its silicon in the cloud right now). Apple is almost certainly never going to compete directly for cloud compute, but what has changed since the discontinuation of XServe is the size of Apple's utilization. The company is using more and more of its hardware in the cloud, and building additional software services like PCC on top of its proprietary hardware. Given Apple's track record for software support, this is the company likely to commit to years of supporting Apple Silicon in the data center.

Whether we will ever actually get new, proper data center hardware from Apple, we don't know. It's fully possible that Apple identifies its market as mostly internal, and already has specialized hardware that won't ever be released for public consumption. But if any of this potentially novel server hardware does see the light of day, you can guess it won't be by halves, and we'll be doing our utmost to dig into every detail of it. That said, it's unlikely I'll be using Apple Silicon to replace my NAS any time soon.


Go here to see the original:
Is Apple going to give us the XServe replacement we need? - XDA Developers

Data waste: According to study, a significant portion of stored server data is never accessed – TechSpot

Why it matters: Making cloud servers and data centers more efficient is a crucial step in the push to increase sustainability and reduce carbon emissions. However, one company has started drawing attention to what it calls "data wastage": the retention of large amounts of data that no one accesses, and the factors making it difficult to cut back.

Up to 70 or 80 percent of the data that some companies store goes unused, according to enterprise data infrastructure provider NetApp. The company has recently started bringing attention to the issue to help clients meet sustainability goals.

NetApp Chief Technology Evangelist Matt Watts recently told The Register that storage comprises 15 to 20 percent of data center power consumption. Furthermore, a national survey indicates that cloud usage of the UK's power grid could grow from 2.5 percent to six percent by the end of this decade.

NetApp provides data tools for clients like Microsoft, Amazon, and Google. Last April, it published a report on the difficulties of tackling data wastage. Watts initially reported that around 41 percent of stored data is never accessed, but he has since revised the number to as much as twice that in certain areas.

Most leading IT figures believe that cutting back on unwanted data could help reduce carbon emissions, but many companies lack the resources. Some don't have enough time or find differentiating between valuable and unwanted data too daunting.

Obviously, companies that handle clients' data also don't want to cause trouble by deleting information someone might need. Watts highlighted disagreement and confusion in some organizations about whether IT departments are the owners or simply the caretakers of the data they manage, which can present an additional roadblock.

NetApp's BlueXP classification tool, part of a service that gives client companies unified control over diverse data infrastructures, was a significant factor in revealing the scale of data wastage. It retrieved metadata from its data center clients, showing who owned their stored files and when customers last accessed their information.
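
As a rough illustration of that kind of metadata-driven analysis, the sketch below flags files not accessed within a given window. It is not NetApp's tooling, the "/data" path is just an example, and on filesystems mounted with noatime or relatime the access times it reads will understate real usage.

```python
import os
import time

def find_cold_files(root: str, days: int = 365):
    """Yield (path, size) for files whose last-access time is older than `days`."""
    cutoff = time.time() - days * 86400
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip files that vanished or are unreadable
            if st.st_atime < cutoff:
                yield path, st.st_size

if __name__ == "__main__":
    cold_bytes = sum(size for _, size in find_cold_files("/data", days=365))
    print(f"Data not accessed in the last year: {cold_bytes / 1e12:.2f} TB")
```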

The issue comes amid NetApp's controversial decision to remove BlueXP's support for services like Google Cloud, Amazon S3, and OneDrive. Watts said that NetApp intends to focus on its internal storage systems instead so the company could stand out.

Read the rest here:
Data waste: According to study, a significant portion of stored server data is never accessed - TechSpot

HPE’s Neil MacDonald: Running AI In Public Cloud Will ‘Crush’ IT Budgets – CRN

"The target customers for this [HPE Private Cloud for AI] are very firmly enterprise customers who are seeking to embrace generative AI and who recognize that running AI workloads at scale in the public cloud will crush their IT budgets," said MacDonald, the general manager of Compute, HPC (High Performance Compute) and AI for HPE.

Hewlett Packard Enterprise Executive Vice President Neil MacDonald says that HPE Private Cloud for AI has a big GenAI advantage over the public cloud, which he proclaims will crush the IT budgets of enterprise companies.

"The target customers for this [HPE Private Cloud for AI] are very firmly enterprise customers who are seeking to embrace generative AI and who recognize that running AI workloads at scale in the public cloud will crush their IT budgets," said MacDonald, the general manager of Compute, HPC (High Performance Compute) and AI for HPE, in a press conference ahead of HPE Discover.

"AI is the most data- and compute-intensive workload of our generation, and managing that data and the governance around it and the protection of that data and IP leaves a lot of enterprises to want to embrace generative AI but to do it in a private cloud model," said MacDonald.

Manuvir Das, vice president of enterprise computing for Nvidia, who joined the HPE press conference on the new Nvidia AI Computing By HPE portfolio, said he also sees the private cloud cost advantages for customers.

"It's the age-old debate that if you have the wherewithal to stand up your own infrastructure, you can get a far superior TCO than relying on a cloud service," he said.

Furthermore, Das said there are benefits for customers looking to keep their private data on-premises. "The AI workload is particularly interesting because it's so driven by data," he said. "And if you think about an enterprise company, you are really accessing the private data of your company that really represents the secret sauce, the IP of your company. So the question is, would you prefer sending all of that data into the cloud, or would you rather keep that all under your control?"

Finally, there are latency issues with regard to moving data from on-premises to the public cloud, said Das. "If you have petabytes of your enterprise data that you're now extracting insight from, do you want to move the data to the compute? Or do you want to move the compute to where the data is? So I think these are the reasons why a private solution is quite desirable. And of course, you know, both options exist and every customer will decide for themselves which option they prefer."

MacDonald also weighed in on a number of other issues including why HPE is going all in with Nvidia on the Nvidia AI Computing By HPE portfolio rather than AMD, and why HPE has a liquid cooling advantage over competitors.

Below is an excerpt from the press conference question-and-answer session with MacDonald.

If you think about enterprise AI success, generative AI relies not just on accelerator silicon, but also on fabrics, on system design, on models, on software tooling, on the optimizations of those models at runtime. And so we are thrilled to be working closely with Nvidia, with a very strong set of capabilities that together enable us to have our enterprise customers be able to move forward much more quickly on their enterprise [AI] journeys.

It's key to notice that this HPE private cloud AI is not a reference architecture that would place the burden on the customer to assemble their AI infrastructure by cobbling together piece parts, whether those are GPUs or pieces of software or different connectivity.

HPE and Nvidia have done the hard work for customers like co-developing a turnkey AI private cloud that is up and running in three clicks. And that goes much beyond a question simply of an accelerator.

So the target customers for this are very firmly enterprise customers who are seeking to embrace generative AI and who recognize that running AI workloads at scale in the public cloud will crush their IT budgets.

AI is the most data- and compute-intensive workload of our generation, and managing that data and the governance around it and the protection of that data and IP leaves a lot of enterprises to want to embrace generative AI but to do it in a private cloud model.

So our target customers for HPE Private Cloud AI are those enterprises around the world who are all coming to grips with how to gain the productivity benefits of generative AI in their operations and want to do that on-prem with greater efficiency and greater control.

At HPE we are currently supporting a few customers with GPU as a service that supports large language model work and other AI workloads. This spans thousands of GPUs that we're providing to customers via the cloud. That's currently an offer available to select pilot customers. We're working to refine the offering and we'll share further details later this year.

Not yet. The growth in the energy and thermal intensity of accelerator silicon and in CPU silicon continues to accelerate. As a result, within our product portfolios and across our servers, we offer a variety of systems today that encompass traditional air-cooled, 70 percent DLC systems and 100 percent DLC systems. And we continue to evolve our technologies around liquid cooling as we move forward.

But as these systems are becoming more accelerator-rich, and therefore more thermally and energy challenged, it becomes increasingly pervasive to deploy either partial direct liquid cooling or in the most [high]-performance systems 100 percent direct liquid cooling. So across the portfolio today, we have all of that and you will see increasing use of direct liquid cooling as we continue to work with Nvidia on future systems and future silicon.

So you referred to rear door heat exchanger technologies that you see on the [slide on the] extreme left. There are also advanced systems for recirculating air around air-cooled systems and leveraging water supply as the heat exchanger without requiring different server technology. That's also on the left [hand side of the slide].

In the middle [of the slide] is classic direct liquid cooling, with the fluid removing the heat from the system going to an external CDU as the point of exchange in the system and then on the right, you see the same kinds of technologies, but at greater scale, with 100 percent DLC [direct liquid cooling] and no reliance on air cooling at all. So we do all of that today in products that are shipped at scale to customers around the world and we build on decades of experience in direct liquid cooling across the spectrum.

We continue to innovate in the cooling space. That includes all forms of advanced cooling technologies, which we continue to assess and bring to the portfolio. We're very familiar with and aware of what can and can't be done with immersion-based technologies, both single-phase and dual-phase, and the optimum solutions for the design points across this range today remain the solutions that you see in front of you.

We continue to provide the direct liquid cooling that we've been providing in the past, which is single phase.

In essence, when you're dealing with 100 percent direct liquid cooling, you are extracting all of the heat using liquid, and you design the system very, very specifically to enable you to do that by pulling the fluid through the system and all the points of rejection of heat.

So this is something that we're doing at scale that underpins the two exascale systems in the world that have been delivered. And that same technology is increasingly relevant within our broader portfolio. Today we have systems deployed leveraging our servers for AI workloads. Taking advantage of all of the range of cooling that you see in front of you, including 100 percent direct liquid cooling, is a very strong capability of HPE.

Direct liquid cooling is not about designing a server as much as it is about designing an entire system, including the integration with all of the circulation, the CPUs, the control systems, etc.

Read the original post:
HPE's Neil MacDonald: Running AI In Public Cloud Will 'Crush' IT Budgets - CRN