
IBC 2023: Grass Valley to Demonstrate Its Cloud-Native Playout X … – Sports Video Group

Grass Valley will be demonstrating its full-featured, cloud-first playout solution, Playout X, at stand 9-A01 at IBC2023, September 15-18, 2023, at the RAI Exhibition and Conference Center in Amsterdam.

Playout X is a complete, broadcast-grade solution that, in addition to clip-based and live event handling, also includes dynamic insertion of rich graphics and subtitles and 32 programmable stereo audio channels (64 mono). Playout X supports live stream formats including SDI, ST 2110, NDI, SRT and RIST, as well as a multiformat timeline with back-to-back interlaced, progressive, SD, HD and UHD content in any format. It also supports Free Ad-supported Streaming Television (FAST) signaling with comprehensive SCTE-104 and SCTE-35 support, allowing the triggering of downstream advertising layers and, in turn, the monetization of free OTT streaming services.

According to Steve Hassan, Senior Director, Playout at Grass Valley, "We have re-architected the playout services hosted by AMPP to provide industry-leading application density and reduced end-to-end latency compared with other solutions in the marketplace. This, combined with the latest support for Linux operating systems in AMPP, provides Playout X customers reduced total cost of ownership (TCO) through reduced hosting costs without any compromise to functionality."

Built with a cloud-centric microservices architecture on Grass Valley's AMPP Ecosystem, Playout X allows media and entertainment companies to deploy new channels in minutes, such as for live special event coverage, and collapse them again when the event is over, with Software as a Service (SaaS) pricing to ensure you only pay for what you use. The edge compute architecture offered by AMPP provides further flexibility, allowing customers to deploy both on on-premises COTS servers in a traditional master control environment and in the cloud on the platform provider of the customer's choosing.

Read the original here:
IBC 2023: Grass Valley to Demonstrate Its Cloud-Native Playout X ... - Sports Video Group


OVHcloud debuts comprehensive carbon calculator for customers – ComputerWeekly.com

French Infrastructure-as-a-Service (IaaS) provider OVHcloud is rolling out an exhaustive carbon calculator to its customers to help make it easier for them to track the Scope 1, Scope 2 and Scope 3 emissions generated by their cloud usage.

Eight months in the making, the tool is accessible via the OVHcloud customer panel, and according to the company will provide users with a granular level of detail about the carbon footprint of their cloud infrastructure.

The tool was co-developed with Sopra Steria, and its results are location-based, meaning they factor in the different energy mixes that OVHcloud draws on to power its datacentres, which vary depending on where they are situated.

The tool "takes into account the estimated electrical consumption of servers from OVHcloud datacentre monitoring and maps them to their carbon equivalent, taking into account the cooling and networking equipment, as well as freight, manufacturing, end of life and waste management, to provide a complete picture of the actual carbon footprint," the company said in a statement.
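The underlying location-based arithmetic is simple even if the real methodology is far more detailed: energy drawn is multiplied by the carbon intensity of the local grid, with embodied (Scope 3) emissions amortized on top. The Python sketch below illustrates that idea with made-up figures; it is not OVHcloud's calculator or methodology.

```python
# Illustrative only: grid intensities and server figures below are assumptions,
# not OVHcloud data. A location-based estimate multiplies energy drawn by the
# carbon intensity of the local grid, then adds amortized embodied emissions.

GRID_INTENSITY_KG_PER_KWH = {"FR": 0.05, "DE": 0.35, "PL": 0.70}  # hypothetical values

def monthly_footprint_kg(server_kwh: float, region: str,
                         pue: float = 1.3, embodied_kg_per_month: float = 20.0) -> float:
    """Rough Scope 2 + Scope 3 estimate for one server over one month."""
    site_energy = server_kwh * pue                      # include cooling/networking overhead
    scope2 = site_energy * GRID_INTENSITY_KG_PER_KWH[region]
    return scope2 + embodied_kg_per_month               # manufacturing, freight, end of life

print(f"{monthly_footprint_kg(300, 'FR'):.1f} kg CO2e in France")
print(f"{monthly_footprint_kg(300, 'PL'):.1f} kg CO2e in Poland")
```

The same server workload yields very different footprints depending on the grid it runs on, which is why a location-based calculator has to know where each datacentre sits.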

The company first went public with details of its planned carbon calculator tool in spring 2023, with OVHcloud's pledge that it would provide Scope 1, Scope 2 and Scope 3 emissions data from launch, resulting in it garnering comparisons with a similar tool from Amazon Web Services (AWS).

The latter's take on the same technology, launched in March 2022, has seen the public cloud giant come in for criticism for failing to provide a comprehensive enough view of its customers' carbon footprint, on account of the fact that it does not track Scope 3 emissions.

In response, AWS confirmed to Computer Weekly in May 2023 that it was working to provide its customers with Scope 3 data from early 2024.

OVHcloud CEO Michel Paulin said that, coupled with the work the company has done to improve the energy and water efficiency of its datacentres, it has sustainability rooted in its DNA, and constantly challenges itself to improve the carbon footprint of its entire operations.

"We are more than ever aware of the importance for our customers of calculating their carbon footprint as accurately as possible," he said.

"We are therefore extremely happy to give them a precise reading and understanding of it, all with a single click of the mouse."

Fabienne Mathey-Girbig, executive director of corporate responsibility and sustainable development at Sopra Steria, said the calculator will enable businesses to easily understand the environmental impact of their cloud activities.

"As a major tech player in Europe, we have a key role to play in supporting a more sustainable digital landscape across the entire value chain, including employees, clients, suppliers and partners," she said.

"We are proud of the trust placed in us to play an active role in this decarbonisation initiative."

Link:
OVHcloud debuts comprehensive carbon calculator for customers - ComputerWeekly.com


Gigamon's Precryption to block attacks hiding behind encryption – CSO Online

With promises of unprecedented visibility into encrypted traffic across virtual machines (VM) and container workloads, deep observability company Gigamon has launched a new "Precryption" technology.

Gigamon's GigaVUE 6.4 will deploy the Precryption technology to enable IT and security teams to conduct encryption-centric threat detection, investigation, and response across the hybrid cloud infrastructure.

"There's encryption everywhere now, including traffic or lateral movement within all virtualized and containerized environments, which is a good thing because it provides confidentiality for all of our information," said Michael Dickman, chief product officer at Gigamon. "The danger is that attackers can use encryption to hide their own movement and their own attacks, making it look like just another encrypted traffic flow, and that goes undetected."

The new Precryption technology will be delivered as part of Gigamon's existing licenses and will be charged per usage (e.g., terabytes).

The new Precryption technology by Gigamon leverages Linux's Extended Berkeley Packet Filter (eBPF) technology to insert custom observability programs into workload networks and bring the captured traffic back to a centralized location.

eBPF is a flexible technology in the Linux kernel that allows users to write and load custom programs that run within the kernel space. eBPF programs are typically used for network packet filtering, monitoring, and other kernel-level tasks, but their use cases have expanded to various aspects of system observability and control.
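To illustrate the general technique (not Gigamon's implementation), the sketch below uses the bcc Python bindings to attach a user-space eBPF probe to OpenSSL's SSL_write. Because the probe fires before the library encrypts the buffer, plaintext-side metadata (here just the process name and byte count) can be exported without touching any keys; the event fields and library resolution shown are illustrative assumptions.

```python
from bcc import BPF

# eBPF program compiled and loaded into the kernel by bcc.
prog = r"""
#include <uapi/linux/ptrace.h>

struct event_t {
    u32 pid;
    u64 len;
    char comm[16];
};
BPF_PERF_OUTPUT(events);

// Fires on every call to OpenSSL's SSL_write, i.e. before the data is encrypted.
int probe_ssl_write(struct pt_regs *ctx, void *ssl, const void *buf, int num) {
    struct event_t ev = {};
    ev.pid = bpf_get_current_pid_tgid() >> 32;
    ev.len = num;
    bpf_get_current_comm(&ev.comm, sizeof(ev.comm));
    events.perf_submit(ctx, &ev, sizeof(ev));
    return 0;
}
"""

b = BPF(text=prog)
# Attach a user-space probe to libssl; bcc resolves the library path for us.
b.attach_uprobe(name="ssl", sym="SSL_write", fn_name="probe_ssl_write")

def handle(cpu, data, size):
    ev = b["events"].event(data)
    print(f"pid={ev.pid} comm={ev.comm.decode('utf-8', 'replace')} plaintext_bytes={ev.len}")

b["events"].open_perf_buffer(handle)
print("Tracing SSL_write... Ctrl-C to stop")
while True:
    try:
        b.perf_buffer_poll()
    except KeyboardInterrupt:
        break
```

A production tap would also read the buffer contents and handle the receive path, but even this minimal probe shows why no decryption keys are needed: the data is observed on the plaintext side of the TLS boundary.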

Simply put, "Gigamon's new technology allows network traffic to be inspected by capturing traffic before encryption or after decryption using eBPF," said Christopher Steffen, vice president of research at EMA. "It doesn't require encryption keys and doesn't need to perform resource-intensive decryption."

"With the new tech, you don't actually have to manage, track or use keys," Dickman said. "There's no computing needed for an additional overlay of secondary decryption because that's how decryption usually works where you interrupt a traffic stream, and then decrypt it and re-encrypt, which is quite expensive, compute-wise."

The latest GigaVUE release adds a few other capabilities beyond the Precryption technology to support visibility and decryption across a host of environments.

With the new "Cloud SSL decryption" capability, Gigamon looks to extend classic on-premises decryption capabilities to virtual and cloud platforms. "Application Metadata Intelligence" is another capability that allows for the detection of vulnerabilities and suspicious activities across both managed and unmanaged hosts.

Most significant and integral to Gigamon's Precryption is the "Universal Cloud Tap" capability, which serves as a single, executable tap across platforms, allowing control and configuration of eBPF. "UCT is how we pull out visibility to network data in containers as well as VMs in a very efficient manner," Dickman said.

Gigamon's latest capabilities are well received by analysts who deem it long overdue. "So many organizations have network encryption requirements, but many do not have a method of adhering to these requirements of implementing network encryption while retaining the ability to monitor network traffic," Steffen said. "Precryption solves this problem, allowing security and network administrators to deliver on encryption controls while maintaining their ability to protect company resources by not losing visibility on their internal and external network traffic."

See the original post:
Gigamons Precryption to block attacks hiding behind encryption - CSO Online


When a Quantum Computer Is Able to Break Our Encryption, It Won’t … – Lawfare

On Oct. 23, 2019, Google published a groundbreaking scientific research article announcing one of the holy grails of quantum computing research: For the first time ever, a quantum computer had solved a mathematical problem faster than the world's fastest supercomputer. In order to maximize impact, the Google team had kept the article tightly under wraps in the lead-up to publication; unusually, they had not posted a preprint to the arXiv preprint server.

The article sank with barely a ripple in the expert academic community.

That wasn't because anyone disputed the significance of the Google team's milestone. Many experts still consider Google's demonstration to be the most important milestone in the history of quantum computing, comparable to the Wright brothers' first flight in 1903. But most experts in the field had already read the article. A month earlier, a NASA employee who was involved with the research had accidentally posted a draft of the article on NASA's public website. It was online for only a few hours before being taken back down, but that was long enough. Schrödinger's cat was out of the bag.

This anecdote illustrates a fact with important policy implications: It is very difficult to keep groundbreaking progress in quantum computing secret.

One of the most important quantum computing algorithms, known as Shor's algorithm, would allow a large-scale quantum computer to quickly break essentially all of the encryption systems that are currently used to secure internet traffic against interception. Today's quantum computers are nowhere near large enough to execute Shor's algorithm in a practical setting, and the expert consensus is that these cryptanalytically relevant quantum computers (CRQCs) will not be developed until at least the 2030s.
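To see concretely why a fast factoring routine breaks today's public-key encryption, consider the toy RSA example below. The numbers are deliberately tiny so that ordinary trial division works; Shor's algorithm is what would make the same factoring step tractable for real 2,048-bit keys. This is only an illustration of the principle, not of any specific deployed system.

```python
# Toy RSA: anyone who can factor the public modulus can recompute the private key.
p, q = 61, 53
n = p * q             # public modulus
e = 17                # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)   # private exponent (modular inverse, Python 3.8+)

m = 42                # a "message"
c = pow(m, e, n)      # encrypt with the public key

def factor(n: int) -> tuple[int, int]:
    """Trial division: feasible only because n is tiny; Shor's algorithm
    would make this step fast for real key sizes."""
    f = 2
    while n % f:
        f += 1
    return f, n // f

pp, qq = factor(n)                      # attacker recovers p and q
dd = pow(e, -1, (pp - 1) * (qq - 1))    # ...and therefore the private key
assert pow(c, dd, n) == m               # ...and can decrypt the ciphertext
```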

Although the threat is not yet imminent, the consequences of a hostile actor's execution of Shor's algorithm could be incredibly dire. Encryption is at the very bedrock of most cybersecurity measures. A hostile actor who could read encrypted information transmitted over the internet would gain access to an immeasurable amount of critically sensitive information: from personal information such as medical or criminal records, to financial information such as bank account and credit card numbers, to cutting-edge commercial research and development, to classified national security information. The U.S. National Security Agency has said that the impact of adversarial use of a quantum computer "could be devastating to [National Security Systems] and our nation."

Fortunately, preemptive countermeasures are already being put into place. The U.S. National Institute of Standards and Technology (NIST) is standardizing new post-quantum cryptography (PQC) protocols that are expected to resist attacks from both standard and quantum computers. Upgrading communications systems to use post-quantum cryptography will be a long, complicated, and expensive process that will extend over many years. The U.S. government has already begun the process: In May 2022, President Biden issued National Security Memorandum 10, which gives directives to all U.S. government agencies regarding the U.S. governments transition to post-quantum cryptography. Recognizing the long timelines that this transition will require, the memorandum sets the goal of mitigating as much of the quantum risk as is feasible by 2035.

Several experts have stated that one of the most important factors that will determine the severity of the threat posed by a CRQC is whether or not the public knows of the CRQC's existence. As soon as the existence of the CRQC becomes public knowledge, or is even considered plausible, and the threat becomes concrete, most vulnerable organizations will immediately move to upgrade all their communications systems to post-quantum cryptography. This forced transition may well be very expensive, chaotic, and disruptive, but it will fairly quickly neutralize most attack vectors (with one important exception mentioned below). The true nightmare scenario would be if a hostile actor (such as a criminal or terrorist organization or a hostile foreign government) covertly operated a CRQC over a long time period before PQC becomes universal, allowing the actor to collect a huge amount of sensitive information undetected.

Fortunately, it is extremely unlikely that any organization will develop a CRQC in secret, for at least four interrelated reasons.

First, anyone trying to develop a high-performance quantum computer will face stiff competition from commercial industry. Quantum computers have the potential to enable many commercial applications that have nothing to do with decryption, such as drug design, materials science, and numerical optimization. While there is huge uncertainty in the pace of technology development and the timelines for useful applications, some people have predicted that quantum computers could deliver over a trillion dollars in economic value over the next decade. Many private companies are racing to produce state-of-the-art quantum computers in order to profit from these applications, and there is currently no clear technical industry leader. Moreover, these companies are collectively extremely well funded: U.S. quantum computing startups alone have raised over $1.2 billion in venture capital, and that total does not include other major players such as national laboratories, large self-funding companies, or non-U.S. companies.

In the near term, these companies face some incentives to publicize their technical capabilities and other incentives to keep them proprietary. But in the long run, companies need to advertise their capabilities at a reasonable level of technical detail in order to attract customers. The closer the state of the art in commercial industry comes to the technical performance required to execute Shors algorithm, the clearer the threat will become to potential targets, and the more urgently they will prioritize upgrading to PQC.

Any organization attempting to secretly develop a CRQC would therefore need enormous financial resources in order to compete with the well-funded and competitive commercial industry, and it would need to stay far ahead of that industry in order to keep the element of surprise.

The second reason that a CRQC is unlikely to be developed in secret is that a relatively small number of people are at the cutting edge of quantum computing development in industry or academia, and they are well known within the expert community. Any organization attempting to secretly develop a CRQC would need to acquire world-class talent, and if many of the greatest technical experts suddenly left their organizations or stopped publishing in the technical literature, then that fact would immediately be fairly evident, just as it was during the Manhattan Project. (However, this point may become less relevant in the future as the commercial industry matures. As the pool of expert talent grows and more information becomes business proprietary, public information about the top technical talent may decrease.)

Third, a CRQC might be physically difficult to hide. It's extremely difficult to estimate the physical resources that will be required to operate a CRQC, but my recent research suggests that a CRQC might plausibly draw 125 megawatts of electrical power, which is a significant fraction of the total power produced by a typical coal-fired power plant. A device that requires its own dedicated power plant would leave considerable evidence of its existence. Certain very capable organizations (such as national governments) might be able to conceal such a project, but doing so would not be easy and could well be impossible for smaller organizations.

The fourth reason has to do with the relative resources required for various quantum computing applications. As with most technical questions regarding the future of quantum computers, there is a huge amount of uncertainty here. But there is fairly strong theoretical evidence that many commercial applications of quantum computers will be significantly technically easier to implement than Shor's algorithm. There is already very active research into the question of whether even today's crude quantum computers, known as noisy intermediate-scale quantum computers, might be able to deliver practical applications in the near future, although we don't yet know for sure.

In a more conservative technical scenario, all useful quantum applications might require a technically challenging hardware stabilization process known as quantum error correction, which has very high hardware requirements. But even in this scenario, there is evidence that some commercial applications of quantum computers (like the scientific modeling of chemical catalysis) will require lower hardware resources than Shor's algorithm does. For example, one recent analysis estimated that computationally modeling a chemical catalyst used for direct air carbon capture would require only 20 percent as many qubits as executing Shor's algorithm would. (A qubit is the basic building block of a quantum computer and one of the simplest ways to quantify its hardware performance.)

These analyses imply that commercial applications of quantum computing will very likely become technically feasible before decryption does. Unless an organization attempting to develop a CRQC is far more technically advanced than the commercial sector (which is unlikely, given the potentially huge economic value mentioned above), commercial companies will probably beat the organization to applications, and they will announce their success. Even in the unlikely event that an organization does manage to develop a CRQC before the commercial industry develops a commercially useful quantum computer, that organization will face an enormously high opportunity cost of not using its CRQC for commercial applications that could deliver billions of dollars of value. Even if the organization were government sponsored, its government sponsor would face an enormous economic incentive to use its quantum computer for commercial applications rather than for intelligence collection.

What this means for policymakers is that the ultimate worst-case scenario, in which a hostile actor secretly deploys a CRQC for many years against totally unsuspecting victims, is highly unlikely. This does not in any way lessen the importance of quickly upgrading all critical communications systems to post-quantum cryptography, however, since doing so defends against harvest-now-decrypt-later attacks, in which a CRQC is deployed retroactively against saved encrypted data that was intercepted previously.

Operators of communications systems that transmit highly sensitive information should already be preparing to upgrade those systems' cryptography to PQC, and they should perhaps develop contingency plans for even further accelerating that adoption if signs arise that CRQCs are approaching unexpectedly quickly. But policymakers should also understand that the commercial applications of quantum computers will probably emerge well before intelligence-collection applications do. This conclusion may carry implications regarding appropriate national-security-related policies such as export controls and outbound investment restrictions, as well as the broader balance of risks and benefits around quantum computers.

Finally, policymakers and cybersecurity analysts should avoid messaging that emphasizes the risk that CRQCs developed in secret could be imminent or already operational (unless, of course, they have additional information that runs counter to the points raised above). There is already more than enough reason to upgrade our communications systems to resist attacks from quantum computers as soon as possible. Even if completely unexpected attacks from a black-swan quantum computer are unlikely, attacks from known or suspected quantum computers would already be plenty bad enough.

Follow this link:
When a Quantum Computer Is Able to Break Our Encryption, It Won't ... - Lawfare


Implement Object-Level Encryption and Policy With OpenTDF – DevOps.com

With an exponential increase in data being generated, stored and shared online, the issue of data security no longer belongs to IT alone. Software and application developers have been put on notice along with business owners and legal teams worldwide to not only maintain data privacy but to build more secure products. There's no easy solution. However, there are resources readily available for devs to build security into the fabric of their products. One is called OpenTDF, an open source project that lets you integrate encryption and data policy controls into your new and existing apps to safeguard your data, and the sharing of it, for you as well as your users.

TDF, the Trusted Data Format, was originally developed by the United States National Security Agency (NSA). It's an open standard for object-level encryption that keeps data protected and under the data owner's control, wherever it's created or shared. TDF includes cryptographically secured metadata that ensures consistent policy control throughout the data life cycle. Picture this: You can grant, revoke or turn off data access at any time, even if the data has left your network or application.
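As a rough illustration of the object-level idea (not the actual TDF wire format or the OpenTDF API), the Python sketch below encrypts a payload with AES-GCM and binds a hypothetical policy document to the ciphertext as authenticated associated data, so the policy cannot be altered without breaking decryption:

```python
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical policy document; real TDF policies are richer and attribute-driven.
policy = {"owner": "alice@example.com", "dissem": ["bob@example.com"]}
aad = json.dumps(policy, sort_keys=True).encode()   # metadata bound to the ciphertext

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"sensitive payload", aad)

# Decryption succeeds only with the original, untampered policy metadata:
plaintext = AESGCM(key).decrypt(nonce, ciphertext, aad)
# AESGCM(key).decrypt(nonce, ciphertext, b'{"owner": "mallory"}')  # would raise InvalidTag
```

In OpenTDF itself, the payload key is mediated by the key access service (KAS) described below, which checks the policy and the requester's attributes before releasing it; that mediation is what allows access to be granted or revoked even after the data has left your network.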

OpenTDF is an open source project that evolves the open TDF specification and provides a blueprint for getting started. There are a multitude of example applications that demonstrate the implementation logic, as well as streaming video and IoT use cases.

The OpenTDF project is based on Kubernetes and OCI containers, and there is a quick start guide to get you up and running with a development environment. The quick start process will install supplemental services like Keycloak as well as project-specific services like the key access service (KAS) and Abacus (an ABAC front end for configuration and management of attribute-based access control). Once you've completed a quick start installation, you'll have a basic OpenTDF cluster with a Keycloak identity provider, PostgreSQL data store and a single entry point at localhost with an Nginx ingress controller.

There's an architectural diagram available on GitHub showing all of the services and service interactions.

Several SDKs are available for building on the OpenTDF framework, including JavaScript, Python, C++, and Java. The client SDKs generally include basic examples for identity auth and creating a TDF-protected encrypted object.

Have you ever wondered how to make data access secure and simple? Using OpenTDF, developers can create that experience for their users. Let's walk through a sample web application that uses OpenTDF to encrypt and upload data to cloud storage.

The application we'll be using is called OpenTDF Secure Remote Storage. It's a React-based example that shows developers how to create encrypted data streams. These streams allow you to upload and download files from S3-compatible remote data stores while maintaining data protection. You can even remove encryption if needed.

To make things easy, we'll be using OpenTDF's client-web SDK, which authenticates against Keycloak using OpenID Connect (OIDC). Keep in mind that this example runs on your local machine. It's not designed for cloud or enterprise services.

Prerequisites: You'll need an S3-compatible object store, such as an Amazon S3 bucket. (You can create one for free here.)

First, install two CLI tools: Kind and Tilt. These will be used to deploy the OpenTDF services to your local machine. If you're on macOS, you can install them with a simple Homebrew command: brew install kind tilt.

Next, you will need the sample code on your local machine. You can either download the zip or clone the OpenTDF GitHub Repository using the following command: git clone git@github.com:opentdf/opentdf.git. This will create a directory called opentdf in your current location.

Now, navigate to the root directory of the sample application: cd opentdf/examples/secure-remote-storage. To deploy OpenTDF, youll need a local Kubernetes cluster. Use the Kind CLI to create one: kind create cluster --name opentdf.

Finally, start the application using Tilt: tilt up. This will launch the necessary OpenTDF services.

1. To begin, go to http://localhost:65432/secure-remote-storage in your web browser. This is where the Secure Remote Storage webpage is hosted.

2. Now, it's time to log in. Use the following credentials (defined in the bootstrap config file): Username: user1, Password: testuser123

3. Then, choose a file from your computer to upload. It can be anything: a text file, a PDF, or an image. Let your creativity flow! If you don't have a file handy, no worries. You can download an image by right-clicking on it and selecting Save As from this link.

4. Next, tell the application where to store your encrypted file. Provide the necessary JSON object that defines your S3-compatible object store. You can refer to the prerequisites for more details. (Optionally, you can save the configuration for future uploads. Just give it a name and click Save. This way, you won't need to define the object store again in the future.)

5. Now, it's time to encrypt and upload. Click the encrypt and upload button, and watch the magic happen!

What's happening behind the scenes? When you click encrypt and upload, the application uses the OpenTDF API to convert your selected file into a .tdf file. It applies AES-GCM encryption and attaches access controls to ensure that only authorized users, like you (in this case, testuser123), can access the data. Even if your data store is public, your data remains secure.

6. Ready to view and download your uploaded file? The table on the webpage lists all the files you've successfully uploaded. Each file has a download button next to it. Click that button, and the hosted file will be downloaded and decrypted on your local file system.

Excited to explore more? Now that you've seen OpenTDF in action, you can dive into the source code of this application. Use this sample application as a starting point to integrate OpenTDF into your own secure applications.

From a secure webcam app to a privacy-forward menstrual tracking app and more, the possibilities are endless with OpenTDF, and the future is in developers' hands. By building on OpenTDF, the valuable data flowing through your applications will be protected forever.

Learn more about OpenTDF at openTDF.io, and get the full quick-start guide, including more detailed instructions and other sample apps at the OpenTDF GitHub.

Cassandra Zimmerman, technical product manager at Virtru, contributed to this article.

Read the original:
Implement Object-Level Encryption and Policy With OpenTDF - DevOps.com


MLPerf Releases Latest Inference Results and New Storage … – EnterpriseAI

MLCommons this week issued the results of its latest MLPerf Inference (v3.1) benchmark exercise. Nvidia was again the top performing accelerator, but Intel (Xeon CPU) and Habana (Gaudi1 and 2) performed well. Google provided a peek at its new TPU (v5e) performance. MLCommons also debuted a new MLPerf Storage (v0.5) benchmark intended to measure storage performance under ML training workloads. Submitters in the first Storage run included: Argonne National Laboratory (ANL), DDN, Micron, Nutanix, and Weka.

Digging through the latest Inference results (more than 12,000 performance and 5,000 power inferencing results from 120 systems) is a challenge. There were a more modest 28 results in the storage category. From a usefulness perspective, MLCommons provides direct access to results spreadsheets that permit potential system users/buyers to drill down into specific system configurations and benchmark tests for comparison. (Links to Inference Datacenter and Edge v3.1 results and Storage v0.5 results)

In the past, HPCwire has tended to try to cover the full exercise in a single article. The rising number of results and the introduction of a new category make this less tenable. Instead, we'll present a broad overview in this article and drill deeper into some vendor-specific results in separate articles (Nvidia and Intel/Habana). By now, you may be familiar with the MLPerf release cadence, which is twice yearly for training and inference, with each released in alternate quarters: inference results are released in spring and (early) fall; training results are released in winter and summer. The HPC Training benchmark is released just once yearly, close to the annual SC conference.

Broadly, inferencing and training are the foundational pieces of ML applications, with training deemed the more computationally intense of the two (think of training LLMs with trillions of parameters). Inferencing, though, is the volume workhorse, sitting behind every chatbot and similar applications.

MLPerf Inference v3.1 introduced two new benchmarks to the suite. The first is a large language model (LLM) using the GPT-J reference model to summarize CNN news articles; it garnered results from 15 different submitters, reflecting the rapid adoption of generative AI. The second change is an updated recommender, meant to be more representative of industry practices, using the DLRM-DCNv2 reference model and larger datasets; it had 9 submissions. These new tests, says MLCommons, help advance AI by ensuring that industry-standard benchmarks represent the latest trends in AI adoption to help guide customers, vendors, and researchers.

In a pre-briefing, David Kanter, MLCommons executive director, said, "We added our first-generation recommender a couple of years ago and are now updating it. The LLM (inference) benchmark is brand new and reflects the explosion of interest in what people are calling generative AI, large language models." An LLM had been added to the MLPerf Training benchmark in the spring (see HPCwire coverage, MLPerf Training 3.0 Showcases LLM; Nvidia Dominates, Intel/Habana Also Impress).

No ML benchmarking effort today would be complete without LLM coverage, and MLCommons (the parent organization for MLPerf) now has it.

It's important to understand that large language models operate on tokens. A token is typically a piece of a word. An LLM simply takes a set of tokens as input and predicts the next token. Now, you can chain this together to actually build a predicted sentence. In practice, LLMs are used in a wide variety of applications. You can use them in search and in generating content, like essays or summaries. "Summarization is what we do here," said Kanter.

The MLPerf LLM inference benchmark is quite different from the training benchmark, he emphasized.

"One of the critical differences is the inference LLM is fundamentally performing a generative task. It's writing fairly lengthy sentences, multiple sentences, [but] it's also actually a different and smaller model," he said. "Many folks simply don't have the compute or the data to really support a really large model. The actual task we're performing with our inference benchmark is text summarization. So we feed in an article and then tell the language model to summarize the article."
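To make the token-chaining idea concrete, here is a minimal Python sketch of a greedy generation loop; predict_next_token is a hypothetical stand-in for a trained model's forward pass, not part of any MLPerf code.

```python
def predict_next_token(tokens: list[int]) -> int:
    """Hypothetical stand-in for one forward pass of a trained model:
    returns the id of the most likely next token given the context."""
    raise NotImplementedError

def generate(prompt_tokens: list[int], max_new_tokens: int, eos_id: int) -> list[int]:
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        nxt = predict_next_token(tokens)   # one inference step per generated token
        tokens.append(nxt)
        if nxt == eos_id:                  # stop once the model emits end-of-sequence
            break
    return tokens
```

Because every generated token requires another pass through the model, generative benchmarks like the GPT-J summarization task stress systems quite differently from single-shot tasks such as ResNet-50 image classification.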

As is MLCommons practice, submitting organizations are invited to submit brief statements on their submissions. These range in quality from pure marketing to providing more granular technical descriptions of a submission's distinguishing features. Given the high number of results, a fast review of the vendor statements can be informative in conjunction with consulting the spreadsheet.

Both inference and storage submitter statements are appended to the end of this article. As examples, here are a few snippets from vendor statements in the MLPerf Inference v3.1 exercise:

Azure promoted its cloud offering against on-premises alternatives, highlighting access to H100 instances: "Azure was the only submitter to publish results for virtual machines in the cloud, while matching the performance of on premises and bare metal offerings. This has been possible thanks to innovative technologies including: AI supercomputing GPUs: Equipped with eight NVIDIA H100 Tensor Core GPUs, these VMs promise significantly faster AI model performance than previous generations, empowering businesses with unmatched computational power; Next-generation computer processing unit (CPU): Understanding the criticality of CPU performance for AI training and inference, we have chosen the 4th Gen Intel Xeon Scalable processors as the foundation of these VMs, ensuring optimal processing speed."

CTuning Foundation, the non-profit ML tool developer, noted that it "delivered the new version of the open-source MLCommons CM automation language, CK playground and modular inference library (MIL) that became the 1st and only workflow automation enabling mass submission of more than 12000 performance results in a single MLPerf inference submission round with more than 1900 power results across more than 120 different system configurations."

Google touted its new TPU v5e. TPU v5e systems use multiple accelerators linked together by a high-speed interconnect and can be configured with a topology ranging from 1x1 to 16x16 (256 chips), giving the user the flexibility to choose the system that best meets their needs. This wide range of topology options offered by TPU systems allows users to run and scale AI inference workloads cost-effectively, without compromising on performance."

In this submission, Google Cloud used a TPU v5e system with a 2x2 topology (4 TPU chips) to run the 6-billion-parameter GPTJ benchmark. This benchmark demonstrates both the ease of scaling and the cost-efficiency offered by the TPU v5e systems for inference of large language models. Users can easily add more TPU v5e instances to achieve higher total queries per second (QPS), while maintaining the same performance per dollar advantage.

HPE reported, "In the datacenter category, HPE Cray systems with eight (8) NVIDIA GPUs led our portfolio in performance, delivering more than 340,000 samples per second throughput for ResNet-50 Computer Vision, and more than 28,000 samples per second throughput for Bert 99.0 NLP. HPE also submitted for the first time the newly available HPE ProLiant DL380a Gen11 and HPE ProLiant DL320 Gen11 servers with NVIDIA H100 and L4 GPUs. The HPE ProLiant DL380a Gen11 with four (4) NVIDIA H100 GPUs is ideal for NLP and LLM inference. The HPE ProLiant DL320 Gen11 with four (4) NVIDIA L4 GPUs is a 1U server positioned for computer vision inference."

Intel discussed Gaudi2 accelerators, 4th Gen Intel Xeon Scalable processors and Intel Xeon CPU Max Series. Gaudi2 performance on both GPT-J-99 and GPT-J-99.9 for server queries and offline samples are 78.58/second and 84.08/second, respectively. These outstanding inference performance results complement our June training results and show continued validation of Gaudi2 performance on large language models. Performance and model coverage will continue to advance in the coming benchmarks as Gaudi2 software is updated continually with releases every six to eight weeks.

Intel remains the only server CPU vendor to submit MLPerf results. Our submission for 4th Gen Intel Xeon Scalable processors with Intel AMX validates that CPUs have great performance for general purpose AI workloads, as demonstrated with MLPerf models, and the new and larger DLRM v2 recommendation and GPT-J models.

You get the general flavor. It's necessary to dig into the spreadsheet for meaningful comparisons.

The new storage benchmark (v0.5) has been in the works for two years. MLCommons says it's the first open-source AI/ML benchmark suite that measures the performance of storage for ML training workloads. The benchmark was created through a collaboration spanning more than a dozen leading industry and academic organizations and includes a variety of storage setups, including parallel file systems, local storage, and software-defined storage. The MLPerf Storage benchmark will be an effective tool for purchasing, configuring, and optimizing storage for machine learning applications, as well as for designing next-generation systems and technologies.

Although it's being introduced along with the latest inference results, storage performance in ML is typically a more sensitive system element in training. MLCommons notes, "Training neural networks is both a compute and data-intensive workload that demands high-performance storage to sustain good overall system performance and availability. For many customers developing the next generation of ML models, it is a challenge to find the right balance between storage and compute resources while making sure that both are efficiently utilized."

MLPerf Storage is intended to help overcome this problem by accurately modeling the I/O patterns posed by ML workloads, providing the flexibility to mix and match different storage systems with different accelerator types. The new benchmark reports results in sample/s and MB/s. Of course, the choice of storage hardware, protocol/filesystem, and network all influence performance.

The MLPerf Storage benchmark suite is built on the codebase of DLIO, a benchmark designed for I/O measurement in high performance computing, adapted to meet current storage needs.

Talking about the motivation and goals for the new benchmark, Kanter said, "I'd heard about pretty large hyperscalers, who deployed really large training clusters, that could not hit their peak utilization because they didn't have enough storage. That [suggested] there's fundamentally a hard problem in storage and one that's underappreciated. Most hyperscalers that are buying 1,000s, or tens of 1,000s, of accelerators also have engineers on staff to design proper storage subsystems."

"The key accomplishment is we created a tool that represents ML training IO patterns, that doesn't require having any compute or accelerators," said Kanter. "That's important, because if you want to size a storage subsystem for 1,000 accelerators, you don't want to have to buy 1,000 accelerators. Another interesting thing is it's a dynamic tool that is coupled to compute. The metric for MLPerf storage is how many samples per second can be streamed out, for a given compute utilization; so we model a compute subsystem. If your storage falls behind too much, the compute subsystem will be idle, and we only allow 10% idle due to storage."

"If the storage system is too slow, you can't run the benchmark," said Kanter. Obviously, these are early days for MLPerf Storage, and it will take some time for the community to take its full measure. There are already plans for additions. Given its newness, it's best to look through MLCommons' documentation. (Link to MLPerf Storage Benchmark Rules)
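As a rough illustration of the sizing question behind that 10% idle rule, the back-of-the-envelope sketch below estimates the sustained throughput a storage system would need for a hypothetical cluster; all of the numbers are assumptions for illustration, not MLPerf results or rules.

```python
# Illustrative sizing arithmetic; every figure below is an assumption.
samples_per_sec_per_accel = 3000      # rate at which one accelerator consumes samples
sample_size_mb = 0.15                 # average size of one training sample
num_accelerators = 1000
min_utilization = 0.90                # benchmark allows at most 10% accelerator idle time

required_mb_per_sec = (samples_per_sec_per_accel * num_accelerators
                       * min_utilization * sample_size_mb)
print(f"Storage must sustain roughly {required_mb_per_sec / 1000:.0f} GB/s")
# -> roughly 405 GB/s for this hypothetical 1,000-accelerator cluster
```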

Link to MLCommons, https://mlcommons.org/en/

ASUSTeK

ASUSTeK recently benchmarked its new AI servers using the MLPerf Inference v3.1 suite, aiming to highlight its performance across varied deep learning tasks. Our results exhibit our system's competency in inferencing some of the most demanding models with remarkable efficiency.

In the modern era of AI, speed and efficiency in deploying machine learning models to production are paramount. Enter ASUS GPU Server portfolios - designed to redefine the standards of inference, as validated by our recent MLPerf Inference benchmarks. Harness the power of AI frameworks like TensorFlow, PyTorch, and more. ASUS servers are not just about raw power; they're about smart power. Optimized software-hardware integrations ensure that you get the most out of every tensor operation. Power doesn't have to come at the cost of the planet. ASUS GPU servers not only boast top-tier performance metrics but do so with impressive energy efficiency ratings, as highlighted in the MLPerf power efficiency results. Seamlessly scale your AI workloads. With our multi-GPU configurations and optimizations in hardware and software, ASUS GPU servers are built to handle increasing data demands, ensuring you're always ahead of the curve.

System Configuration:

Hardware: ASUS flagship AI Server ESC8000A-E12 with Dual AMD Genoa CPU up to 8 NVIDIA H100 GPUs, and ESC4000A-E12 with Dual AMD Genoa CPU up to 8 L4 GPUs

The results signify the DL system's enhanced performance and capability to address contemporary deep learning challenges, making it an apt choice for researchers and industries requiring accelerated inferencing workloads.

Azure

Microsoft Azure announced the general availability of the ND H100 v5-series for Generative AI at scale. These series of virtual machines vary in sizes ranging from eight to thousands of NVIDIA H100 GPUs interconnected by NVIDIA Quantum-2 InfiniBand networking. Azure was the only submitter to publish results for virtual machines in the cloud, while matching the performance of on premises and bare metal offerings. This has been possible thanks to innovative technologies including:

The ND H100 v5 is now available in the East United States and South Central United States Azure regions. Enterprises can register their interest in access to the new VMs or review technical details on the ND H100 v5 VM series at Microsoft Learn.

CTuning

As a founding member of MLCommons, cTuning.org is committed to democratizing MLPerf benchmarks and making them accessible to everyone to deliver the most efficient AI solutions while reducing all development, benchmarking and optimization costs.

We are proud to deliver the new version of the open-source MLCommons CM automation language, CK playground and modular inference library (MIL) that became the 1st and only workflow automation enabling mass submission of more than 12000 performance results in a single MLPerf inference submission round with more than 1900 power results across more than 120 different system configurations from different vendors (different implementations, all reference models and support for DeepSparse Zoo, Hugging Face Hub and BERT pruners from the NeurIPS paper, main frameworks and diverse software/hardware stacks) in both open and closed divisions!

This remarkable achievement became possible thanks to open and transparent development of this technology as an official MLCommons project with public Discord discussions, important feedback from Neural Magic, TTA, One Stop Systems, Nutanix, Collabora, Deelvin, AMD and NVIDIA, and contributions from students, researchers and even school children from all over the world via our public MLPerf challenges. Special thanks to cKnowledge for sponsoring our developments and submissions, to One Stop Systems for showcasing the 1st MLPerf results on Rigel Edge Supercomputer, and to TTA for sharing their platforms with us to add CM automation for DLRMv2 available to everyone.

Since it's impossible to describe all the compelling performance and power-efficient results achieved by our collaborators in a short press release, we will make them available with various derived metrics (power efficiency, cost, etc.) and reproducibility reports at the MLCommons CK playground (x.cKnowledge.org), github.com/mlcommons/ck_mlperf_results and github.com/mlcommons/ck/blob/master/docs/news-mlperf-v3.1.md shortly after official release.

We continue enhancing the MLCommons CM/CK technology to help everyone automatically co-design the most efficient end-to-end AI solutions based on their requirements and constraints. We welcome all submitters to join our public MLCommons Task Force on Automation and Reproducibility if you want to automate your future MLPerf submissions at scale.

Connect Tech Inc

As a new member of MLCommons, Connect Tech ran performance and accuracy benchmarks in the Inference: Edge category in its recent MLPerf submission. Using Connect Tech's feature-rich Hadron carrier board with the NVIDIA Jetson Orin NX, a high-performance, energy-efficient platform, the company showcased remarkable levels of performance across various AI workloads.

Connect Tech additionally supports NVIDIA Jetson Orin NX with Photon and Boson carrier boards, and system devices like Polaris and Rudi-NX. By deploying on Connect Tech's production-ready hardware, customers can take immediate advantage of Jetson Orin NX for performance improvements and enhanced user experience with robotics and other edge AI applications.

Connect Tech's involvement in MLCommons signifies more than just technical achievement. It reflects the company's commitment to pushing the envelope of what's possible in the world of AI at the edge. The seamless integration of Connect Tech's hardware with NVIDIA's cutting-edge technology presents engineers and scientists with the tools to drive AI and machine learning innovations across diverse industries, including robotics, industrial automation, and healthcare.

Connect Tech is a hardware design and manufacturing company, specializing in rugged, small form factor solutions. As an Elite NVIDIA Jetson ecosystem partner, Connect Tech designs carrier boards, enclosures, and embedded systems for each Jetson generation. With a rich history of innovation, Connect Tech integrates edge AI solutions within various industries, empowering engineers and scientists to harness the potential of machine learning.

Connect Tech remains at the forefront as the world delves deeper into AI and machine learning. Navigating the complex landscape of embedded AI computing is made easier by using NVIDIA and Connect Tech's innovative products.

Dell

Enterprise IT is bracing for the most transformative technology trend in decades: generative AI. Dell Technologies is ready to meet this demand with the world's broadest Generative AI solutions portfolio from desktop to edge to data center to cloud, all in one place.

For the MLPerf inferencing v3.1 benchmark testing, Dell submitted 230 results, including the new GPT-J and DLRMv2 benchmark results, across 20 system configurations. Dell Technologies works with customers and collaborators, including NVIDIA, Intel, and Qualcomm, to optimize performance and efficiency, boosting inferencing workloads, including generative AI.

The Dell PowerEdge XE accelerated server family continues to deliver tremendous performance gains across several benchmarks. Here are some of the latest highlights:

Generate higher quality, faster time-to-value predictions and outputs while accelerating decision-making with powerful solutions from Dell Technologies. Take a test drive in one of our worldwide Customer Solution Centers. Collaborate with our Innovation Lab and tap into one of our Centers of Excellence.

Fujitsu

Fujitsu offers a fantastic blend of systems, solutions, and expertise to guarantee maximum productivity, efficiency, and flexibility delivering confidence and reliability. Since 2020, we have been actively participating in and submitting to inference and training rounds for both data center and edge divisions.

In this round, Fujitsu demonstrated the performance of PRIMERGY CDI with four A100-PCIe-80GB GPUs installed in an external PCIe BOX and measured the benchmark program only for the data center closed division. Fujitsu Server PRIMERGY CDI is expertly engineered to deploy the necessary resources according to each customer's unique workload, releasing them when no longer needed. CDI stands for Composable Disaggregated Infrastructure, a next-generation technology that supports the diversification of data processing. This results in an efficient operation that maximizes resource utilization, while providing user-friendly services that eliminate the drawbacks of traditional physical servers.

As demonstrated by the impressive results of this round, the PRIMERGY CDI confirms that even with GPUs mounted in an external PCIe BOX, it delivers outstanding performance and remarkable scalability for PCIe components.

Our purpose is to make the world more sustainable by building trust in society through innovation. With a rich heritage of driving innovation and expertise, we are dedicated to contributing to the growth of society and our valued customers. Therefore, we will continue to meet the demands of our customers and strive to provide attractive server systems through the activities of MLCommons.

GIGABYTE

Giga Computing Technology, a wholly owned subsidiary of GIGABYTE, is the enterprise unit that split off from GIGABYTE to design, manufacture, and sell servers, server motherboards, immersion solutions, and workstations. As the GIGABYTE brand is widely recognized, Giga Computing will continue to use and promote it, including at expos where we will join as GIGABYTE. Although the company name has changed, our customers can still expect the same quality and services as before. Giga Computing strives to do better, and that includes a greater push for efficiency and cooling with immersion and DLC technology, as well as providing public AI benchmarks.

As one of the founding members of MLCommons, GIGABYTE has continued to support the community's efforts in benchmarking server solutions for various AI training & inference workloads. In the latest round of MLPerf Inference v3.1, Giga Computing submitted a powerful GIGABYTE system for platforms: Intel Xeon & NVIDIA H100 SXM5, and the results speak for themselves while showing great efficiency as measured in performance/watt. We did find that our system achieved excellent performance in some tests such as rnnt-Server and bert99-offline. We would have liked to have more benchmarks, but due to resource limitations we were not able to; however, we found that our partners NVIDIA, Qualcomm, and Krai chose our GIGABYTE servers to do their own testing.

Google

Google Cloud recently launched an expansion to its AI infrastructure portfolio - Cloud TPU v5e - and is proud to announce its performance results in this round of MLPerf Inference (data center category). TPU v5e systems use multiple accelerators linked together by a high-speed interconnect and can be configured with a topology ranging from 1x1 to 16x16 (256 chips), giving the user the flexibility to choose the system that best meets their needs. This wide range of topology options offered by TPU systems allows users to run and scale AI inference workloads cost-effectively, without compromising on performance.

In this submission, Google Cloud used a TPU v5e system with a 2x2 topology (4 TPU chips) to run the 6-billion-parameter GPTJ benchmark. This benchmark demonstrates both the ease of scaling and the cost-efficiency offered by the TPU v5e systems for inference of large language models. Users can easily add more TPU v5e instances to achieve higher total queries per second (QPS), while maintaining the same performance per dollar advantage.

We are looking forward to seeing what Google Cloud customers achieve with the new TPU v5e systems.

HPE

HPE successfully submitted results in partnership with Intel, NVIDIA, Qualcomm, and Krai. HPE demonstrated a range of high-performing inference systems for both the datacenter and edge in Computer Vision, natural language processing (NLP), and large language models (LLM).

In the datacenter category, HPE Cray systems with eight (8) NVIDIA GPUs led our portfolio in performance, delivering more than 340,000 samples per second throughput for ResNet-50 Computer Vision, and more than 28,000 samples per second throughput for Bert 99.0 NLP.

HPE also submitted for the first time the newly available HPE ProLiant DL380a Gen11 and HPE ProLiant DL320 Gen11 servers with NVIDIA H100 and L4 GPUs. The HPE ProLiant DL380a Gen11 with four (4) NVIDIA H100 GPUs is ideal for NLP and LLM inference. The HPE ProLiant DL320 Gen11 with four (4) NVIDIA L4 GPUs is a 1U server positioned for computer vision inference. The HPE ProLiant DL380a Gen11 showed strong inference performance using 4th Gen. Intel Xeon Scalable Processors in CPU-only inference scenarios. The HPE ProLiant DL385 Gen10 Plus v2 with eight (8) Qualcomm Cloud AI 100 Standard accelerators remained well balanced for over-network inference compared to offline datacenter performance. Qualcomm Cloud AI 100 Standard is ideal for both computer vision and NLP inference.

In the Edge category, HPE Edgeline e920d powered by four (4) Qualcomm Cloud AI 100 Standard accelerators remains one of the lowest latency systems in the Edge category for SingleStream and MultiStream inference scenarios. The HPE Edgeline e920d also achieved strong performance improvements in throughput and energy efficiency.

Many thanks to Krai's collaboration in achieving high performance and energy efficiency for Qualcomm Cloud AI 100 accelerators.

IEI

IEI Industry Co., LTD is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world's top 3 server manufacturers. Through engineering and innovation, IEI delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data center, AI, and deep learning.

In MLCommons Inference v3.1, IEI submitted the NF5468M6 system.

NF5468M6 is a highly versatile 4U AI server supporting between 4 and 16 NVIDIA single and double-width GPUs, making it ideal for a wide range of AI applications including AI cloud, IVA, video processing and much more. NF5468M6 offers ultra-high storage capacity and the unique function of switching topologies between Balance, Common and Cascade in one click, which helps to flexibly adapt to various needs for AI application performance optimization.

Intel

Intel is pleased to report MLPerf Inference v3.1 performance results for our Gaudi2 accelerators, 4th Gen Intel Xeon Scalable processors and Intel Xeon CPU Max Series. These results reinforce Intel's commitment to delivering the full spectrum of products to address wide-ranging customer AI requirements.

Gaudi2 performance on both GPT-J-99 and GPT-J-99.9 for server queries and offline samples are 78.58/second and 84.08/second, respectively. These outstanding inference performance results complement our June training results and show continued validation of Gaudi2 performance on large language models. Performance and model coverage will continue to advance in the coming benchmarks as Gaudi2 software is updated continually with releases every six to eight weeks.

Intel remains the only server CPU vendor to submit MLPerf results. Our submission for 4th Gen Intel Xeon Scalable processors with Intel AMX validates that CPUs have great performance for general purpose AI workloads, as demonstrated with MLPerf models, and the new and larger DLRM v2 recommendation and GPT-J models.

The results confirm that 4th Gen Intel Xeon Scalable processor with optimized data pre-processing, modeling and deployment tools and optimizations, is an ideal solution to build and deploy general purpose AI workloads with the most popular open source AI frameworks and libraries.

For the GPT-J 100-word summarization task of a news article of approximately 1,000 to 1,500 words, 4th Gen Intel Xeon processors summarized two paragraphs per second in offline mode and one paragraph per second in real-time server mode.

This is the first time we've submitted MLPerf results for our Intel Xeon CPU Max Series, which provides up to 64GB of high-bandwidth memory. For GPT-J, it was the only CPU able to achieve 99.9% accuracy, which is critical for usages for which the highest accuracy is of paramount importance.

With our ongoing software updates, we expect continued advances in performance and productivity, and reporting new training metrics with the November training cycle.

For more details, please see MLCommons.org.

Notices & Disclaimers

Performance varies by use, configuration and other factors. Learn more at http://www.Intel.com/PerformanceIndex .

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary.

Follow this link:
MLPerf Releases Latest Inference Results and New Storage ... - EnterpriseAI

Read More..

WhatsApp testing automatic security code verification for end-to-end encryption – Zee Business

Meta-owned WhatsApp is reportedly rolling out an 'automatic security code verification' feature for end-to-end encryption to a limited number of beta testers on Android.

With this feature, the app will try to automatically verify that messages are end-to-end encrypted without requiring any user intervention, according to WABetaInfo.

This process, called Key Transparency, enhances the overall security and privacy of users' conversations by checking whether they are using a secure connection.

However, WhatsApp still provides the manual verification feature in case automatic verification fails or is unavailable.

According to the report, this feature is especially useful in situations where traditional QR code scanning or manual verification is difficult, such as when users need to easily verify encryption remotely.

By automating security code verification, the report said, WhatsApp hopes to provide added security and convenience for its users, verifying end-to-end encryption with no additional effort on their part.
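
Conceptually, a security code is a short fingerprint derived from both parties' public identity keys, and verification, whether manual or automatic, amounts to both sides computing the same fingerprint and comparing it. The sketch below is a simplified, hypothetical illustration of that idea; it does not reflect WhatsApp's actual key-transparency protocol, code format, or key types.

```python
# Simplified illustration of security-code comparison; not WhatsApp's actual scheme.
import hashlib
import hmac

def security_code(my_identity_key: bytes, peer_identity_key: bytes) -> str:
    """Derive a short fingerprint from both parties' public identity keys.
    Sorting makes the code identical no matter which side computes it."""
    material = b"".join(sorted([my_identity_key, peer_identity_key]))
    digest = hashlib.sha256(material).digest()
    # Render 30 digits in groups of five, loosely in the spirit of a numeric security code.
    digits = "".join(f"{b % 10}" for b in digest)[:30]
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))

def codes_match(code_a: str, code_b: str) -> bool:
    """Constant-time comparison of the two rendered codes."""
    return hmac.compare_digest(code_a.encode(), code_b.encode())

if __name__ == "__main__":
    alice_key = bytes.fromhex("11" * 32)   # stand-in public keys for illustration
    bob_key = bytes.fromhex("22" * 32)
    print(security_code(alice_key, bob_key))
    print(codes_match(security_code(alice_key, bob_key),
                      security_code(bob_key, alice_key)))  # True: order-independent
```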

Meanwhile, WhatsApp is reportedly working on bringing "third-party chat" support on Android in order to comply with the new European Union (EU) regulations.

The new feature will offer users the ability to communicate with each other using different apps.

For instance, someone from the Signal app could send a message to a WhatsApp user, even without a WhatsApp account.

The feature comes just days after the European Commission confirmed that Meta fits the definition of a "gatekeeper" under the EU's Digital Markets Act (DMA), which mandates that communication software like WhatsApp allow interoperability with third-party messaging apps by March 2024.

See the rest here:
WhatsApp testing automatic security code verification for end-to-end encryption - Zee Business

Read More..

Verification of Integrity and Data Encryption (IDE) for CXL Devices – Design and Reuse

Continuing our series of IDE blogs, Why IDE Security Technology for PCIe and CXL? and Verification of Integrity and Data Encryption (IDE) for PCIe Devices, this blog focuses on IDE verification considerations for CXL devices.

With the growth of AI, ML, and deep learning, security features are an increasing focus for SoCs targeting high-performance computing (HPC), data analytics, automotive, and similar applications. CXL is rapidly becoming the interconnect of choice for these workloads, which raises security concerns when mission-critical data is transmitted. The CXL specification therefore incorporates IDE security features with a flow similar to PCIe IDE; in fact, the operational model for FLITs transmitted over CXL.io semantics is the same as for PCIe.

For CXL.cachemem, AES-GCM with a 256-bit key is used for data encryption and integrity, with a 96-bit MAC for data protection. However, CXL.cachemem supports only Link IDE. Let's take a closer look at key verification aspects of CXL.cachemem IDE.
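
To make those cipher parameters concrete, the hedged sketch below uses the Python cryptography package to encrypt a payload with AES-256-GCM and truncate the authentication tag to 96 bits, mirroring the key and MAC sizes mentioned above. It illustrates the primitive only, not the CXL.cachemem IDE flit format, key programming, or MAC aggregation flow.

```python
# Illustrates AES-256-GCM with a 96-bit (12-byte) truncated tag; not the CXL IDE flit flow.
# Requires the 'cryptography' package.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt(key: bytes, plaintext: bytes, aad: bytes) -> tuple[bytes, bytes, bytes]:
    iv = os.urandom(12)                                   # 96-bit IV
    enc = Cipher(algorithms.AES(key), modes.GCM(iv)).encryptor()
    enc.authenticate_additional_data(aad)                 # integrity-only header data
    ct = enc.update(plaintext) + enc.finalize()
    return iv, ct, enc.tag[:12]                           # truncate MAC to 96 bits

def decrypt(key: bytes, iv: bytes, ct: bytes, aad: bytes, tag96: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key),
                 modes.GCM(iv, tag=tag96, min_tag_length=12)).decryptor()
    dec.authenticate_additional_data(aad)
    return dec.update(ct) + dec.finalize()                # raises InvalidTag on mismatch

if __name__ == "__main__":
    key = os.urandom(32)                                  # 256-bit key
    iv, ct, tag = encrypt(key, b"mission-critical payload", b"flit header")
    print(decrypt(key, iv, ct, b"flit header", tag))
```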

Click here to read more ...

Read the original post:
Verification of Integrity and Data Encryption (IDE) for CXL Devices - Design and Reuse

Read More..

How 5G is Creating New Opportunities for Tech Professionals? – Analytics Insight

A look at how 5G technology is opening new opportunities for tech professionals in the IT sector

Different professionals will see different opportunities. Communications and software engineers will design, develop, implement, and operate 5G communication systems. Electronics experts will design the essential circuits for 5G devices and infrastructure, and will also build 5G-enabled IoT networks with millions of endpoints worldwide.

5G networks deliver faster and more dependable communications, opening the door to new prospects such as the Internet of Things (IoT), autonomous driving, fixed wireless internet, and smoother video streaming. For business customers, 5G will enable new services such as connected car fleets, remote health diagnostics, smart factory automation and safety applications, and remote operation of mining, drilling, and other hazardous work. It will also help bring broadband to isolated and rural homes.

Universities should embed specialized courses to equip tech professionals with the skills needed to harness growing opportunities. These courses should cover topics such as cloud network architecture, integration of 4G/5G networks, and training and certifications for developing use cases like Hybrid Remote Teaching, Smart Manufacturing, and Drone Logistics. Moreover, these courses should be developed broadly and incorporate interdisciplinary approaches spanning various technologies and applications.

The introduction of 5G technology has brought new levels of connectivity and creativity. Beyond delivering faster internet connections on our devices, 5G is significantly impacting many industries and presenting exciting new prospects for tech professionals. In this article, we'll examine how 5G technology is reshaping the future of the IT sector and explore its disruptive potential.

One of the most significant implications of 5G is its role in accelerating the Internet of Things (IoT) revolution. With ultra-low latency and massive device connectivity, 5G enables seamless communication between IoT devices. It creates many opportunities for tech professionals to develop, deploy, and manage IoT solutions that can revolutionize industries like healthcare, agriculture, transportation, and more.

5G's low-latency capabilities are driving the adoption of edge computing. Tech professionals are now tasked with building the infrastructure and applications that leverage edge computing to process data closer to the source. This enhances real-time decision-making and reduces the load on centralized cloud servers, creating a more efficient and responsive digital ecosystem.

5G's high-speed, low-latency networks are a perfect match for AR and VR applications. Tech experts can now explore new frontiers in immersive technologies, from designing lifelike virtual experiences to developing AR applications for industries like education, gaming, healthcare, and remote collaboration.

The COVID-19 pandemic accelerated the adoption of telemedicine, but 5G takes it to the next level. Healthcare tech professionals are leveraging 5Gs capabilities to enhance remote patient monitoring, conduct high-definition telehealth consultations, and even perform surgeries remotely using robotic systems. This opens up exciting career opportunities in health tech and telemedicine.

The automotive industry is undergoing a massive transformation with the introduction of autonomous vehicles. 5Gs low latency and high reliability are critical for enabling vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication systems. Tech professionals are at the forefront of developing the software, algorithms, and cybersecurity measures required for safe and efficient autonomous transportation.

The tech industry increasingly focuses on environmental sustainability, and 5G can contribute to this cause. Professionals in green tech are exploring how 5G can be harnessed to optimize resource management, reduce energy consumption, and create eco-friendly solutions.

More:
How 5G is Creating New Opportunities for Tech Professionals? - Analytics Insight

Read More..

Key Players Hewlett Packard, IBM, and More Compete in the … – GlobeNewswire

Dublin, Sept. 15, 2023 (GLOBE NEWSWIRE) -- The "Clustering Software Market: Global Industry Trends, Share, Size, Growth, Opportunity and Forecast 2023-2028" report has been added to ResearchAndMarkets.com's offering.

The global clustering software market reached a size of USD 2.8 billion in 2022, and it is expected to grow to USD 3.5 billion by 2028, exhibiting a Compound Annual Growth Rate (CAGR) of 3.2% during the period from 2023 to 2028.

Clustering software refers to various software applications that connect, coordinate, and manage multiple distributed servers. It enables these servers to collectively perform computing and administrative tasks such as load balancing, node failure detection, and failover assignment.

This technology divides complex software into smaller, manageable subsystems and facilitates efficient data management over large networks, providing fault-tolerant responses. It plays a crucial role in various industries, including telecom, aerospace, academics, life sciences, and defense.
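
As a toy illustration of the coordination tasks listed above, namely load balancing, node failure detection, and failover assignment, the sketch below round-robins requests across nodes and drops any node whose health check fails out of rotation. It is a deliberately simplified model, not a representation of any vendor's clustering product.

```python
# Toy cluster coordinator: round-robin load balancing with health-check-based failover.
# Illustrative only; real clustering software adds quorum, state replication, and more.
import itertools
from typing import Callable, Dict, List

class MiniCluster:
    def __init__(self, nodes: List[str], health_check: Callable[[str], bool]):
        self.nodes = nodes
        self.health_check = health_check
        self.alive: Dict[str, bool] = {n: True for n in nodes}
        self._rr = itertools.cycle(nodes)

    def detect_failures(self) -> None:
        """Mark nodes that fail their health check as down (node failure detection)."""
        for node in self.nodes:
            self.alive[node] = self.health_check(node)

    def pick_node(self) -> str:
        """Round-robin over healthy nodes only (load balancing + failover assignment)."""
        for _ in range(len(self.nodes)):
            node = next(self._rr)
            if self.alive[node]:
                return node
        raise RuntimeError("no healthy nodes available")

if __name__ == "__main__":
    down = {"node-b"}                                   # pretend node-b has failed
    cluster = MiniCluster(["node-a", "node-b", "node-c"],
                          health_check=lambda n: n not in down)
    cluster.detect_failures()
    print([cluster.pick_node() for _ in range(4)])      # node-b is never selected
```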

Market Dynamics

Several factors are driving the growth of the global clustering software market:

Key Market Segmentation

The report provides an analysis of the global clustering software market's key trends and forecasts at the global, regional, and country levels from 2023 to 2028. It categorizes the market based on:

Competitive Landscape

Key players in the global clustering software market include Hewlett Packard Enterprise Company, IBM Corporation, Fujitsu, Microsoft Corporation, NEC Corp., Oracle, Red Hat, Broadcom, Inc., VMware, and others.

Key Questions Answered in This Report

The report answers important questions related to the global clustering software market:

Key Attributes:

For more information about this report visit https://www.researchandmarkets.com/r/lzfi5i

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

View original post here:
Key Players Hewlett Packard, IBM, and More Compete in the ... - GlobeNewswire

Read More..