
Meta Platforms Is Determined To Make Ethernet Work For AI – The Next Platform

We said it from the beginning: There is no way that Meta Platforms, the originator of the Open Compute Project, wanted to buy a complete supercomputer system from Nvidia in order to advance its AI research and move newer large language models and recommendation engines into production. Meta Platforms, which has Facebook as its core platform, likes to design and build its own stuff, but got caught flat-footed by the lack of OAM-compatible GPU and matrix accelerators and had no choice but to buy an N-1 generation DGX SuperPOD system using InfiniBand interconnects between nodes.

And now, as Meta Platforms looks ahead to the future of AI inside the social network and the interconnect underpinning the compute engines it must lash together at incredible scale to compete against its hyperscaler and cloud builder rivals, it is back to Ethernet interconnects. This is why Meta Platforms is one of the founding companies behind the Ultra Ethernet Consortium, a buddy movie collection of Ethernet ASIC suppliers and switch makers who do not really want to cooperate with each other but who are being compelled by the Internet titans and their new AI upstart competition to figure out a way to not only make Ethernet as good as InfiniBand for AI and HPC networking, but make it stretch to the scale they need to operate. For Meta Platforms, that would be around 32,000 compute engines today, then hundreds of thousands of devices, and then over 1 million devices at some point in the not-too-distant future.

What unites these companies (Broadcom, Cisco Systems, and Hewlett Packard Enterprise for switch ASICs, and soon Marvell we think; Microsoft and Meta Platforms among the titans; and Cisco, HPE, and Arista Networks among the switch makers) is a common enemy: InfiniBand.

The enemy of my enemy is my ally.

The math is very simple. In the early 2010s, when the hyperscalers and cloud builders were really starting to build massive infrastructure, the networking portion of any distributed system represented less than 10 percent of the cost of that overall system, including switches, network interfaces, and cables. As the first generation of 100 Gb/sec gear came out, the costs were very high because the design was not right, and soon networking represented 15 percent or more of the cost of a cluster. With the advent of affordable 100 Gb/sec Ethernet and now the advance to 200 Gb/sec and 400 Gb/sec speeds, the cost is now down below 10 percent again, but only on the front end network where applications run. For AI training and inference infrastructure among the hyperscalers and cloud builders, Nvidia will tell you plain and simple that the network represents 20 percent of the cluster cost. InfiniBand, explains Nvidia co-founder and chief executive officer Jensen Huang, delivers 20 percent better performance at scale than Ethernet at the same bandwidth, however, so InfiniBand is effectively free.

Well, no. It is not free. You still have to come up with the cash, and it is 20 percent of the cost of the cluster, which is impressive when you think of the very high cost of GPU compute engines compared to the overall cost of a Web infrastructure cluster based on CPUs. The cost of InfiniBand networking for AI systems, node for node, must be enormously higher than what Ethernet (admittedly at a lower bandwidth) cost on other infrastructure clusters that run databases, storage, and applications.

And this was why Ethernet with RDMA over Converged Ethernet (a kind of low latency Ethernet that borrows many ideas from InfiniBand) was on display at the Networking @ Scale 2023 event hosted by Meta Platforms. The company talked about how it has been using Ethernet for modest-sized AI training and inference clusters and how its near term plans were to scale to systems with 32,000 GPUs sharing data, a 16X improvement in scale over the initial 2,000 GPU clusters it had been using to create and train its LLaMA 1 and LLaMA 2 models. (The Research Super Computer system that Meta Platforms bought from Nvidia topped out at 16,000 GPUs, with most of them being Nvidia's Ampere A100 GPUs and a relatively small share of them being the more recent and more capacious Hopper H100 modules.)

Meta Platforms knows a thing or two about building datacenter-scale networks, given that its applications serve over 3 billion people on Earth (that's roughly 40 percent of the population on the planet). But, as the Networking @ Scale presentations showed, scaling AI is a whole lot more troublesome than scaling PHP or Python applications and the various middleware, databases, and storage that underpin them to keep us up to date on our social networks. (Can you even tell if the feeds are slightly behind the actual posts on a social application? No, you can't.)

"AI models are growing 1,000X every two to three years," explained Rajiv Krishnamurthy, director of software engineering for the Network Infrastructure group at the company. "And we have observed this internally at Meta, and I think that seems to be a secular trend based on whatever we are observing in industry, too. And that number is difficult to grok. So from a physical perspective, this translates into tens of thousands of GPU cluster sizes, which means that they are generating exaflops of compute. This is backed by exabytes of data storage. And from a networking perspective, you are looking at manipulating about terabits per second of data. The workloads themselves, they are finicky. By that, people understand that typical AI HPC workloads have very low latency requirements and also, from a packet perspective, they cannot tolerate losses."

Meta Platforms wants to have production clusters for AI training that scale 2X beyond the Nvidia RSC machine it acquired in January 2022 and ramped up throughout all of last year to its full complement of 16,000 GPUs. And then, before too long, it will be talking about 48,000 GPUs, then 64,000 GPUs, and so on...

Like other hyperscalers who actually run their own applications at scale, Meta Platforms has to balance the needs of large language models (LLMs) against the needs of recommendation engines (Reco in some of the slides at the Networking @ Scale event) that are also powered by AI. LLMs need to store models and weights to do inference, but recommendation engines also need to store massive amounts of embeddings (usually at least terabytes of data) in memory, which is a set of data that has salient characteristics about us and the zillions of objects it is recommending so it can make correlations and therefore recommend the next thing that might be useful or interesting to us.

Architecting a system that can do LLM training (that's using LLaMA 2 at Meta Platforms at this point) and inference as well as Reco training and inference (in this case, the homegrown Deep Learning Recommendation Model, or DLRM) is very difficult, and one might even say impossible given the divergent requirements of these four workloads, as Jongsoo Park, a research scientist at the AI Systems division of Meta Platforms, showed in this spider graph:

LLMs need three orders of magnitude more compute than reco engines, says Park, needing about 1 teraflops of compute for every sentence that is processed against a datastore of hundreds of billions of sentences and therefore trillions of tokens. This training is distributed across the cluster, but so is the inference, which is now busting out beyond an eight GPU server node to clusters with 16, 24, and even 32 GPUs. Park sized up the compute needs for these four distinct workloads as such:

Imagine, if you will, walking into the office of the CEO and CFO and explaining that you have this wonderful hyperrelational database thingamabob and it could answer questions in plain American, but it needs on the order of 1 petaflops to process one sentence of your corpus of enterprise data and it would need 10 petaflops of oomph to start talking within one second of asking a question. You would be laughed out of the boardroom. But, if you say generative AI, then they will probably come up with the money because everybody thinks they can be a hyperscaler. Or borrow some of their iron and frameworks at the very least.

Love this table that Park showed off:

This table shows the interplay of LLaMA model generation, model size (parameter count), dataset size (tokens), and the aggregate zettaflops needed to complete the training on the hardware shown. Add parameters and you need either more GPUs or more time, and it scales linearly. Add more tokens and you need either more GPUs or more time, and it scales linearly. Scale up parameters and tokens together, and you need exponentially more GPUs or more time or both.
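As a rough illustration of that linear scaling, the widely used approximation of about 6 FLOPs per parameter per token can turn parameter and token counts into GPU-hours. A minimal sketch; the peak-FLOPS and utilization numbers below are illustrative assumptions, not figures from Park's table:

```python
def training_gpu_hours(params: float, tokens: float,
                       peak_flops_per_gpu: float, utilization: float) -> float:
    """Rough GPU-hours to train a dense transformer (~6 * params * tokens FLOPs)."""
    total_flops = 6.0 * params * tokens               # forward + backward estimate
    sustained = peak_flops_per_gpu * utilization      # FLOPs/sec actually achieved
    return total_flops / sustained / 3600.0

# Doubling parameters (or tokens) doubles GPU-hours; doubling both quadruples them.
base = training_gpu_hours(34e9, 2e12, 312e12, 0.3)    # A100 BF16 peak is ~312 TFLOPS
assert abs(training_gpu_hours(68e9, 2e12, 312e12, 0.3) / base - 2.0) < 1e-9
assert abs(training_gpu_hours(68e9, 4e12, 312e12, 0.3) / base - 4.0) < 1e-9
```

The assertions are the point: each input scales the cost linearly on its own, and together they compound.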

Park said that this GPU cluster running LLaMA2 34B with 2,000 A100 GPUs was the largest Ethernet RoCE network in the world as far as he knew, and you can see how if you doubled up the parameter count to LLaMA2 70B, it would probably take 1 million GPU hours to complete against a 2 trillion token dataset and that InfiniBand is about 15 percent faster at the same 200 Gb/sec port speed used in the clusters.

This is just the beginning. Meta Platforms needs to ramp up its parameter scale, but it can't do so until it can scale up its back-end AI network and also get its hands on 32,000 of Nvidia's H100 GPUs. We presume that Meta Platforms has done its penance with Nvidia by touting the RSC system for the past year and a half and will revert to using PCI-Express versions of Hopper and build its own systems from here on out.

With 32,000 H100s yielding about 30 percent of peak performance in production at FP8 quarter precision floating point math, Park says Meta Platforms will be able to train a LLaMA2 model with 65 billion parameters in a day. Lots of things will have to change to make this happen, including increasing the training token batch beyond 2,000 and making that scale across more than a few thousand GPUs. The global training batch size will also have to be maintained across 32,000 GPUs, using what he called 3D parallelism (a combination of data parallel, tensor parallel, and pipeline parallel techniques) to spread the work out across the GPUs. Park says data parallelism is running out of steam because the parameters and data sizes are getting so large, so there is no way to get around this issue.
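To make the 3D parallelism idea concrete, here is a minimal sketch of how a GPU count factors into data, tensor, and pipeline degrees. The 8-way tensor and 16-way pipeline split is a hypothetical example, not Meta Platforms' published configuration:

```python
def grid(total_gpus: int, tensor: int, pipeline: int) -> tuple[int, int, int]:
    """Return (data, tensor, pipeline) parallel degrees; data parallelism gets the rest."""
    assert total_gpus % (tensor * pipeline) == 0, "degrees must divide the GPU count"
    data = total_gpus // (tensor * pipeline)
    return data, tensor, pipeline

# Hypothetical split: 8-way tensor parallel (inside a node), 16-way pipeline across nodes.
dp, tp, pp = grid(32768, tensor=8, pipeline=16)
assert (dp, tp, pp) == (256, 8, 16)   # every GPU sits in exactly one (dp, tp, pp) slot
```

The product of the three degrees always equals the GPU count, which is why growing the cluster forces growth in at least one parallelism dimension.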

As for latency, Meta Platforms looks at time to first token and then the average response time for each successive token. The first token should come in under 1 second, which is why it is taking more than eight GPUs to do inference, and then each successive token should come within 50 milliseconds. (An eyeblink is around 200 milliseconds, which has been the attention span of a human being since the Internet was commercialized and widely distributed.)
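Those two numbers define a simple latency budget for a streamed response. A minimal sketch, with an illustrative token count:

```python
def response_time_sec(n_tokens: int, ttft: float = 1.0, per_token: float = 0.05) -> float:
    """Time to stream an n-token answer: time-to-first-token plus 50 ms per extra token."""
    return ttft + (n_tokens - 1) * per_token

# A 100-token answer arrives in just under six seconds at ~20 tokens/sec steady state.
assert response_time_sec(100) == 1.0 + 99 * 0.05
assert round(1 / 0.05) == 20          # steady-state decode rate implied by the budget
```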

There are subtleties with inference that we were not aware of, and these also have compute and networking needs that are at odds with each other, which is driving system architects to distraction:

The inference stages are prefill and decode. The prefill stage is about understanding the prompts, which means processing tens of thousands of tokens in a parallel fashion through large messages on the order of hundreds of megabytes. The time to first token is a few seconds and you need hundreds of GB/sec to feed the prompts into the inference engine. The decode stage is all about latency. One token is output at a time, with each output token being fed back into the transformer model to generate the next token.
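The decode stage's feedback loop can be sketched with a toy stand-in for the model; everything here is illustrative, not a real transformer:

```python
def toy_model(tokens: list[int]) -> int:
    """Stand-in next-token predictor: any deterministic function of the context."""
    return (sum(tokens) + len(tokens)) % 1000

def generate(prompt: list[int], n_new: int) -> list[int]:
    context = list(prompt)       # prefill: the whole prompt is processed in one batch
    out = []
    for _ in range(n_new):       # decode: strictly sequential, one token per step
        nxt = toy_model(context)
        out.append(nxt)
        context.append(nxt)      # each output token is fed back as input
    return out

assert len(generate([1, 2, 3], n_new=4)) == 4
```

The structure shows why the two stages stress the system differently: prefill is one big parallel pass over the prompt, while decode is an inherently serial loop whose latency compounds token by token.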

Petr Lapukhov drilled down into the AI networks at Meta Platforms a bit more. Lapukhov was a senior network engineer at Microsoft working on LAN and WAN issues for the Bing search engine, has been at Meta Platforms for the past decade as a network engineer, and most recently has been focused on AI systems and their network topologies.

Here is how the Meta Platforms AI systems have evolved over time and a relatively short period of time at that:

In the old days of only a couple of years ago, DLRM training and inference could be done on a single node. Then, with its first generation of Ethernet RoCE clusters, Meta could cluster multiple nodes together, but the cluster size was fairly limited. To get the kind of scale it needed, it had to move to InfiniBand and Ethernet RoCE v2, and the former had a financial problem and the latter had some technical problems, but the company has made do up until now.

Starting with the basic building blocks, an eight-way GPU server based on Nvidia accelerators can deliver 450 GB/sec of bandwidth across the devices with tens of accelerators inside of a node, according to Lapukhov. Model parallel traffic runs over the in-node interconnect, in this case NVLink but it could also be PCI-Express switching infrastructure. From here, models have to scale with data parallelism across thousands of nodes (with tens of thousands of aggregate GPU compute engines) using some form of RDMA (either InfiniBand or Ethernet RoCE) and you can deliver on the order of 50 GB/sec of bandwidth between the nodes with a reasonable number of network interface cards.
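A back-of-the-envelope way to see why that in-node versus inter-node bandwidth gap matters is the standard ring all-reduce cost model, in which each rank moves roughly 2*(n-1)/n times the gradient size. The parameter count and group sizes below are assumptions for illustration:

```python
def ring_allreduce_sec(grad_bytes: float, n_ranks: int, bw_bytes_per_sec: float) -> float:
    """Bandwidth-only ring all-reduce time: each rank moves 2*(n-1)/n * grad_bytes."""
    return 2.0 * (n_ranks - 1) / n_ranks * grad_bytes / bw_bytes_per_sec

grads = 70e9 * 2                                    # hypothetical 70B params in BF16
in_node = ring_allreduce_sec(grads, 8, 450e9)       # NVLink-class, 450 GB/sec
inter_node = ring_allreduce_sec(grads, 1024, 50e9)  # RDMA-class, 50 GB/sec
assert inter_node > in_node    # the slower inter-node hop dominates at scale
```

This ignores latency and overlap entirely, but it captures the architectural point: the roughly 9X bandwidth gap between NVLink and the RDMA fabric is why model-parallel traffic stays inside the node while data-parallel traffic goes across it.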

For Ethernet AI networks, Meta Platforms is using the same Clos topology that it uses for its datacenter-scale front end network for applications and not the fat tree topology generally favored by those using InfiniBand in AI training and HPC clusters.

To get to 32,256 GPUs (the charts from Meta Platforms are imprecise), the company puts two servers in a rack, each with eight Nvidia H100 GPUs. This is not particularly dense, as racks go, but it is no less dense than what Nvidia itself is doing with its DGX H100 clusters. This means there are 2,016 racks that need to be connected, like this:

If you look at this carefully, it is really eight clusters of 4,096 GPUs each cross-linked in two tiers of networking.

Each rack has a pair of servers with a total of sixteen GPUs and a top of rack switch. It is not clear how many ports there are in the servers or switches, but there had better be one uplink port per GPU, which means eight ports per server. (This is what Nvidia does with its DGX designs.) There are a total of 2,016 of these TORs in the whole enchilada. That is a fairly large number of switches as networks go.

These top of rack switches are cross connected into a cluster using eighteen cluster switches (what you might call a spine), which works out to 144 switches across the full cluster. And then there are another eighteen aggregation switches with a 7:1 oversubscription taper that link the eight sub-clusters to each other. That's 2,178 switches to interlink 4,032 nodes, a 1.85:1 node-to-switch ratio, thanks to the bandwidth needs of those data hungry GPUs.
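The switch arithmetic above can be reproduced in a few lines (counts as described in the text; port-level details remain unknown):

```python
gpus = 32256
gpus_per_rack = 16                       # two 8-GPU servers per rack
tors = gpus // gpus_per_rack             # 2,016 top-of-rack switches
cluster_switches = 8 * 18                # eighteen "spine" switches per sub-cluster
aggregation = 18                         # links the eight sub-clusters, 7:1 taper
total_switches = tors + cluster_switches + aggregation
nodes = gpus // 8                        # 4,032 two-per-rack servers
assert total_switches == 2178
assert round(nodes / total_switches, 2) == 1.85
```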

This table by Lapukhov was cool, and it showed that the sub-cluster granularity as far as the AI models were concerned is really on the order of 256 to 512 GPUs:

And this shows how the collective operations that underpin AI are mapped onto the network:

The gist is this, and it is not surprising. As you make larger fabrics to span more GPUs, you add more layers to the network, and that means more latency, which will have the effect of lowering the utilization of the GPUs at least some of the time when they are waiting for collective operations to finish being propagated around the cluster. But fully sharded data parallel all-gather operations tend to send small messages (usually 1 MB or smaller), and if you can handle small messages well, you can do tensor parallelism with fine-grained overlapping of communication and computation.

Sounds like someone needs big fat NUMA nodes for inference and training. . . . which is exactly what NVLink does and what NVSwitch extends.

So what does this look like in the Meta Platforms datacenters? Well, here is what the front-end datacenter fabric looks like:

A datacenter is carved up into four rooms, and there is some aggregation networking in each room and then the core network that lashes together the rooms in its own area at the center of the datacenter. To add AI to server rooms, the cluster training switches (CTSW) and rack training switches (RTSW) are added to the same rooms as the other application servers and can be interleaved with application servers. Across four data halls, Meta Platforms can house tens of thousands of reasonably tightly coupled GPUs:

Here is a 3D representation of the network planes if this makes it easier to visualize:

Back in the old days, Meta Platforms was using 100 Gb/sec Ethernet and RoCE v1 with some success:

With the shift to Ethernet RoCE v2, which had much-improved latency and packet protection features, Meta Platforms had eight ports of 200 Gb/sec going into each server (whew!) and cross-coupled these with rack and cluster switches using 400 Gb/sec ports.

In the second generation of its AI fabric, which is presumably what is helping Arista Networks make so much money from Meta Platforms, the social network has moved to 400 Gb/sec downlinks to the hosts for each GPU and is still running the higher levels of the network undersubscribed to keep the bits moving without any obstructions.

Echoing our "supply win versus design win" observation that has driven a lot of datacenter infrastructure sales since the beginning of the coronavirus pandemic, Lapukhov laid it right out there when asked what is the most important property of an AI fabric.

"So funny enough, the most important property is buildability," Lapukhov said. "Will you have the materials on time to build your fabric? I know it's controversial, it's very unusual to say this thing. But what we found out is that building a large system requires you to get a lot of components on time in one place and test them. So from my perspective, you have Ethernet and InfiniBand as two poles, but they solve the problem in different ways. Ethernet offers you an open ecosystem, multiple vendors, and easier supply sources to get your hardware. InfiniBand offers you the pedigree of technology used in HPC clusters, but there is only one supplier as of today. So the answer is, whatever you can make work on the timescale you need. So for us, for the longest time, it was Ethernet. We built many fabrics on Ethernet because this is technology we are familiar with, with good supply, and we have had devices to deploy on time. And that took precedence. We have been building clusters with InfiniBand as far back as three years ago. So as of today, we allow our technologists to deploy both InfiniBand and Ethernet. And once again, I'll reiterate: the most important property is building the fabric you can build on time for your GPUs to arrive and use in the datacenter."

Exactly. And it will be like this for many more years to come, we think. But if the Ultra Ethernet Consortium has its way, Ethernet will be a lot more like InfiniBand and will have multiple suppliers, thus giving all hyperscalers and cloud builders and ultimately you more options and more competitive pressure to reduce prices on networking. Don't expect it to get much below 10 percent of the cost of a cluster, though, not as long as GPUs stay costly. And ironically, as the cost of GPUs falls, the share of the cluster cost that comes from networking will rise, putting even more pressure on InfiniBand.

It is a very good thing for Nvidia right now that it has such high performance GPUs and higher performance InfiniBand networking. Make hay while that AI sun is shining.


How to do the AI image trend on Instagram – Android Authority

If you're a regular Instagram user, you may have seen people with unusually artistic profile images, looking as if they've been painted or drawn. Most of these people aren't commissioning art. We'll explain what's really going on in the guide below, and how you can get in on the action if reality isn't enough.

QUICK ANSWER

Use tools like Lensa, NightCafe, or TikTok filters to generate AI images, then upload one as your profile picture. Some tools, including Lensa, charge a fee.

JUMP TO KEY SECTIONS

To cut to the chase, it's people taking advantage of generative AI apps to enhance or stylize their profile pictures. Whereas most AI image generators create purely synthetic content based on prompts, some of the images you see on Instagram use real selfie photos as their source material, so they should at least partly resemble the people who use them.

It's worth mentioning here that selfie-based generators depend on well-lit, close-up photos with uncovered faces, and the better the source material, the better the output will be. You may end up taking new photos to make them work, in which case there might not be much reason to turn to AI. There's also a chance that you won't like the way AI stylizes you, even if the output is visually acceptable.

On a psychological level, there's a risk that AI images can lead to a distorted body image. They depict an idealized or exaggerated version of ourselves that we can never achieve, so bear that in mind with your own avatar, or the ones you see online.

The app of choice for AI images on Instagram seems to be Lensa, and it'll cost you to generate profile pictures, even if you sign up for a trial subscription (they'll just cost less than the normal price). We mention the app here because of its popularity and convenience. If you want to save cash, it's absolutely worth hunting down free options (such as a few listed below).

Here's how to use Lensa to make images for Instagram:

If you want images based on selfie photos, your options are relatively limited, but alternatives to Lensa exist.


CrowdStrike Fal.Con 2023: CrowdStrike Brings AI and Cloud … – TechRepublic

At CrowdStrike Fal.Con 2023, CrowdStrike announced a new Falcon Raptor release with generative-AI capabilities and the acquisition of Bionic.

At CrowdStrike's annual Fal.Con show in Las Vegas this week, the company announced a series of enhancements to its Falcon security platform, including a new Raptor release with generative-AI capabilities. The company also announced the acquisition of Bionic to add cloud application security to its portfolio.

Jump to:

CrowdStrike Falcon covers endpoint security, Extended Detection and Response, cloud security, threat intelligence, identity protection, security/IT Ops and observability. The new Raptor release adds petabyte-scale, fast data collection, search and storage to keep up with generative AI-powered cybersecurity and stay ahead of cybercriminals. It's being rolled out gradually to existing CrowdStrike customers beginning in September 2023.

The key elements of the Raptor release are:

"Raptor eliminates security noise and reduces the time analysts take to chase down incidents," said Raj Rajamani, head of products at CrowdStrike, when I interviewed him at Fal.Con.

In earlier versions of Falcon, data existed in multiple backends, which increased the possibility of blind spots that could be exploited by hackers. Raptor provides a single data plane to bring the data together in the CrowdStrike platform.

"There is no longer a need for security analysts to go to different points to try to correlate CrowdStrike and third-party data, as everything is stitched together by Charlotte AI to reduce the time needed for triage and analysis," said Rajamani.

This is achieved by decoupling the data from the compute power needed to compile, process and analyze it. Rajamani said this can take query response times down from hours to seconds and larger queries from days to a few hours.

As CrowdStrike Falcon consists of multiple modules that broadly address the security landscape, it competes on multiple fronts. On the EDR side, its main competitors are Microsoft and SentinelOne. On cloud security, it lines up against the likes of Microsoft and Palo Alto Networks. For identity protection, its primary competitor is probably Microsoft. Rajamani said that CrowdStrike has an advantage over Microsoft and others through its ability to build a unified data plane using a single agent and console for all security-related data.

"Others solve parts of the security puzzle but struggle to bring it all together without a 360-degree view," he said. "The sum of the parts is greater than the whole."

The other big announcement at CrowdStrike's Fal.Con was an agreement to acquire Application Security Posture Management (ASPM) vendor Bionic. This extends CrowdStrike's cloud native application protection platform (CNAPP) to deliver risk visibility and protection across all cloud infrastructure, applications and services.

The crowded cloud-native software platform marketplace is led by PingSafe, Aqua Security, Palo Alto Networks, Orca and many others; the addition of ASPM from Bionic should give CrowdStrike an edge. ASPM adds app-level visibility to infrastructure, and it solves problems such as being able to detect which applications (even legacy applications) are operating within the enterprise and what databases and servers these apps are touching. This is accomplished without an agent.

Rajamani likened it to the difference between an X-ray (CNAPP) and an MRI (ASPM). The addition of Bionic provides CrowdStrike with the ability to detect a wider range of potential issues.

"The integration of Bionic means we can greatly reduce the number of alerts to enable analysts to zero in on the ones that matter," said Rajamani. "As a result, CrowdStrike will be the first cybersecurity company to deliver complete code-to-runtime cloud security from one unified platform."


SAP Announces New Generative AI Assistant Joule – iTWire

SAP Software has announced Joule, a natural-language, generative AI copilot that it says will transform the way business runs.

Built directly into the solutions that power mission-critical processes, SAP says Joule is a copilot that truly understands business and will be embedded throughout SAP's cloud enterprise portfolio, delivering proactive and contextualised insights from across the breadth and depth of SAP solutions and third-party sources.

"By quickly sorting through and contextualizing data from multiple systems to surface smarter insights, Joule helps people get work done faster and drive better business outcomes in a secure, compliant way. Joule delivers on SAP's proven track record of revolutionary technology that drives real results," notes SAP.

"With almost 300 million enterprise users around the world working regularly with cloud solutions from SAP, Joule has the power to redefine the way businesses and the people who power them work," said Christian Klein, CEO and member of the Executive Board of SAP SE. "Joule draws on SAP's unique position at the nexus of business and technology and builds on our relevant, reliable, responsible approach to Business AI. Joule will know what you mean, not just what you say."

SAP says Joule will be embedded into SAP applications from HR to finance, supply chain, procurement and customer experience, as well as into SAP Business Technology Platform, and that Joule transforms the SAP user experience: it's like tapping your smartest colleague on the shoulder.

"Employees simply ask a question or frame a problem in plain language and receive intelligent answers drawn from the wealth of business data across the SAP portfolio and third-party sources, retaining context. Imagine, for example, a manufacturer asking Joule for help understanding sales performance better. Joule can identify underperforming regions, link to other data sets that reveal a supply chain issue, and automatically connect to the supply chain system to offer potential fixes for the manufacturer's review. Joule will continuously deliver new scenarios for all SAP solutions. For example, in HR it will help write unbiased job descriptions and generate relevant interview questions," explains SAP.

"As generative AI moves on from the initial hype, the work to ensure measurable return on investment begins," said Phil Carter, Group Vice President, Worldwide Thought Leadership Research, IDC. "SAP understands that generative AI will eventually become part of the fabric of everyday life and work and is taking the time to build a business copilot that focuses on generating responses based on real-world scenarios and to put in place the necessary guardrails to ensure it's also responsible."

SAP announced that Joule will be available with SAP SuccessFactors solutions and the SAP Start site later this year, and with SAP S/4HANA Cloud, public edition early next year. SAP Customer Experience and SAP Ariba solutions, along with SAP Business Technology Platform, will follow, with many other updates across the SAP portfolio to be announced at the SuccessConnect event on October 2-4, the SAP Spend Connect Live event on October 9-11, the SAP Customer Experience LIVE event on October 25, and the SAP TechEd conference on November 2-3.



China Accuses US of Hacking Huawei Servers as Far Back as 2009 – Slashdot

China accused the U.S. of infiltrating Huawei servers beginning in 2009, part of a broad-based effort to steal data that culminated in tens of thousands of cyber-attacks against Chinese targets last year. From a report: The Tailored Access Operations unit of the National Security Agency carried out the attacks in 2009, which then continuously monitored the servers, China's Ministry of State Security said in a post on its official WeChat account on Wednesday. It didn't provide details of attacks since 2009. Cyberattacks are a point of tension between Washington and Beijing, which has accused its political rival of orchestrating attacks against Chinese targets ever since Edward Snowden made explosive allegations about U.S. spying. Washington and cybersecurity researchers have said the Asian country has sponsored attacks against the West.

The ministry's accusations emerged as the two countries battle for technological supremacy. Huawei in particular has spurred alarm in Washington since the telecom leader unveiled a smartphone powered by an advanced chip it designed, which was made by Semiconductor Manufacturing International Corp. That's in spite of years-long U.S. sanctions intended to cut Huawei off from the American technology it needs to design sophisticated chips and phones. The U.S. has been "over-stretching" the concept of national security with its clampdown on Chinese enterprises, Foreign Ministry spokeswoman Mao Ning told reporters at a regular press briefing in Beijing on Wednesday. "What we want to tell the US is that suppression and containing of China will not stop China's development. It will only make us more resolved in our development," Mao said.

See the original post here:
China Accuses US of Hacking Huawei Servers as Far Back as 2009 - Slashdot

Read More..

Nozomi Networks discovers flaws in Bently Nevada protection systems – iTWire

OT security specialist Nozomi Networks has identified three vulnerabilities in the Baker Hughes Bently Nevada 3500 rack model used to detect and prevent anomalies in rotating machinery such as turbines, compressors, motors, and generators.

Nozomi warns that the most serious of the three vulnerabilities may allow an attacker to bypass the authentication process and obtain complete access to the device by delivering a malicious request.

According to Nozomi, "the development of a patch is not planned due to legacy limitations."

The initial discovery was made by reverse engineering the proprietary protocol used by the device, and Nozomi has confirmed that all of these vulnerabilities affect firmware versions up to and including the latest, 5.05, of the /22 TDI Module (both USB and serial versions).

Nozomi suggests the following measures to mitigate the issues.

1. RUN mode vs CONFIG mode: PLCs and control systems often implement physical keys to either put the device in RUN mode or in CONFIG mode. The latter is typically used by technicians during maintenance activities to enable writing permission of new configurations on the device. One common misconfiguration that might occur is to either forget to put back the device into RUN mode after a maintenance activity or opt for a default always-on CONFIG mode to facilitate remote changes. A best practice is to make sure that devices are always kept in RUN mode whenever possible.

2. Network segmentation: Design and implement proper network segmentation strategies to prevent unauthorised parties from interacting with critical assets. This is especially recommended for legacy solutions that are no longer actively supported by vendors.

3. Strong and unique passwords: Make sure to guarantee uniqueness in conjunction with robustness when choosing credentials. The former property is often underestimated but could provide defence in those scenarios where credentials extracted from a vulnerable machine or component could be easily reused over fully patched systems sharing the same credentials.

4. Non-default enhanced security features: Check your device manual for security features that are not enabled by default. Often, these additional features could strongly reduce the likelihood or the impact of a specific vulnerability and mitigate 'hard-to-patch' situations. With respect to Bently Nevada devices, Nozomi Networks recommends customers review the various security levels made available through the configuration utility and choose the one that matches specific needs and security policy.
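Parts of this checklist can be automated against an asset inventory. The sketch below is purely illustrative Python, not a real Bently Nevada or Nozomi API: the `devices` structure, its field names, and the `audit` function are all assumptions. It flags units left in CONFIG mode (mitigation 1) and credentials reused across devices (mitigation 3).

```python
# Illustrative audit of a hypothetical device inventory against two of
# the mitigations above: devices left in CONFIG mode, and password reuse.
from collections import defaultdict

def audit(devices):
    findings = []
    # Mitigation 1: flag devices not returned to RUN mode after maintenance.
    for d in devices:
        if d["mode"] != "RUN":
            findings.append(f'{d["name"]}: left in {d["mode"]} mode')
    # Mitigation 3: flag credentials shared by more than one device.
    by_password = defaultdict(list)
    for d in devices:
        by_password[d["password_hash"]].append(d["name"])
    for names in by_password.values():
        if len(names) > 1:
            findings.append(f'password reused across: {", ".join(names)}')
    return findings

# Hypothetical inventory data for illustration only.
devices = [
    {"name": "rack-01", "mode": "RUN",    "password_hash": "aa11"},
    {"name": "rack-02", "mode": "CONFIG", "password_hash": "aa11"},
    {"name": "rack-03", "mode": "RUN",    "password_hash": "bb22"},
]
for finding in audit(devices):
    print(finding)
```

In a real deployment the inventory would come from an OT asset-management system rather than a hard-coded list, and password hashes would be compared, never plaintext credentials.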


Read the original here:
Nozomi Networks discovers flaws in Bently Nevada protection systems - iTWire

Read More..

Genetically engineering associations between plants and diazotrophs could lessen dependence on synthetic fertilizer – Phys.org


Nitrogen is an essential nutrient for plant growth, but the overuse of synthetic nitrogen fertilizers in agriculture is not sustainable.

In a review article publishing in the journal Trends in Microbiology on September 26, a team of bacteriologists and plant scientists discuss the possibility of using genetic engineering to facilitate mutualistic relationships between plants and nitrogen-fixing microbes called "diazotrophs." These engineered associations would help crops acquire nitrogen from the air by mimicking the mutualisms between legumes and nitrogen-fixing bacteria.

"Engineering associative diazotrophs to provide nitrogen to crops is a promising and relatively quickly realizable solution to the high cost and sustainability issues associated with synthetic nitrogen fertilizers," writes the research team, led by senior author Jean-Michel Ané of the University of Wisconsin-Madison.

Diazotrophs are species of soil bacteria and archaea that naturally "fix" atmospheric nitrogen into ammonium, a source that plants can use. Some of these microbes have formed mutualistic relationships with plants whereby the plants provide them with a source of carbon and a safe, low-oxygen home, and in return, they supply the plants with nitrogen. For example, legumes house nitrogen-fixing microbes in small nodules on their roots.

However, these mutualisms only occur in a small number of plants and a scant number of crop species. If more plants were able to form associations with nitrogen fixers, it would lessen the need for synthetic nitrogen fertilizers, but these sorts of relationships take eons to evolve naturally.

How to enhance nitrogen fixation in non-legume crops is an ongoing challenge in agriculture. Several different methods have been proposed, including genetically modifying plants so that they themselves produce nitrogenase, the enzyme that nitrogen fixers use to convert atmospheric nitrogen into ammonium, or engineering non-legume plants to produce root nodules.

An alternative method, the topic of this review, would involve engineering both plants and nitrogen-fixing microbes to facilitate mutualistic associations. Essentially, plants would be engineered to be better hosts, and microbes would be engineered to release fixed nitrogen more readily when they encounter molecules that are secreted by the engineered plant hosts.

"Since free-living or associative diazotrophs do not altruistically share their fixed nitrogen with plants, they need to be manipulated to release the fixed nitrogen so the plants can access it," the authors write.

The approach would rely on bi-directional signaling between plants and microbes, something that already occurs naturally. Microbes have chemoreceptors that allow them to sense metabolites that plants secrete into the soil, while plants are able to sense microbe-associated molecular patterns and microbe-secreted plant hormones. These signaling pathways could be tweaked via genetic engineering to make communication more specific between pairs of engineered plants and microbes.

The authors also discuss ways to make these engineered relationships more efficient. Since nitrogen fixation is an energy-intensive process, it would be useful for microbes to be able to regulate nitrogen fixation and only produce ammonium when necessary.

"Relying on signaling from plant-dependent small molecules would ensure that nitrogen is only fixed when the engineered strain is proximal to the desired crop species," the authors write. "In these systems, cells perform energy-intensive fixation only when most beneficial to the crop."

Many nitrogen-fixing microbes could provide additional benefits to plants beyond nitrogen fixation, including promoting growth and stress tolerance. The authors say that future research should focus on "stacking" these multiple benefits. However, since these processes are energy-intensive, the researchers suggest developing microbial communities made up of several species that each provide different benefits to "spread the production load among several strains."

The authors acknowledge that genetic modification is a complex issue, and the large-scale use of genetically modified organisms in agriculture would require public acceptance. "There needs to be transparent communication between scientists, breeders, growers, and consumers about the risks and benefits of these emerging technologies," the authors write.

There's also the issue of biocontainment. Because microbes readily exchange genetic material within and between species, measures will be needed to prevent the spread of transgenic material into native microbes in surrounding ecosystems. Several such biocontainment methods have been developed in the laboratory, for example, engineering the microbes so that they are reliant on molecules that are not naturally available, meaning that they will be restricted to the fields in which the engineered host plants are present, or wiring the microbes with "kill switches."

The authors suggest that these control measures might be more effective if they are layered, since each measure has its limitations, and they stress the need to test these engineered plant-microbe mutualisms under the variable field conditions in which crops are grown.

"The practical use of plant-microbe interactions and their laboratory-to-land transition are still challenging due to the high variability of biotic and abiotic environmental factors and their impact on plants, microbes, and their interactions," the authors write.

"Trials in highly-controlled environments such as greenhouses often translate poorly to field conditions, and we propose that engineered strains should be tested more readily under highly replicated field trials."

More information: Chakraborty et al., Scripting a new dialogue between diazotrophs and crops, Trends in Microbiology (2023). DOI: 10.1016/j.tim.2023.08.007

Read the original here:

Genetically engineering associations between plants and diazotrophs could lessen dependence on synthetic fertilizer - Phys.org

Read More..

Tesla’s Engineering Under Scrutiny Because of the Cybertruck and Alleged Teardowns – autoevolution

It is ironic that similar processes can bring diverse conclusions. Sandy Munro has torn down a few Tesla vehicles and was more often fascinated by the company's engineering solutions than by the flaws he and his team discovered. Another teardown report is not so favorable to Tesla. A Cybertruck assessment also puts the battery electric vehicle (BEV) maker's engineering under scrutiny.

In this engineer's words, "their structures up to the Model 3 are quite inefficient and don't have great rigidity. The dimensional variation is shocking (far beyond even SBU, IYKYK)." IYKYK means "if you know, you know," which is probably the most precise use this acronym has ever had. In a quick search to learn what SBU means, I found two suitable meanings: "stratigraphic boundary uncertainty" and "sequential build-up." There are probably more meanings, but I obviously don't know what the author meant, only that SBU represents loose dimensional tolerance control.

The engineer did not stop there. BlueSilverWave also wrote that "the hang-on parts are generally relatively poorly performing on their own. They can't touch our structural or powertrain durability tests." They also said that ride and handling are bad, ergonomics fails to meet package targets, and that the noise, vibration, and harshness (NVH) level is poor, as well as sound quality. The engineer joked that "we pay JD Power far too much to find out just how bad the quality numbers are (hilariously bad)."


The engineer's conclusion was that "Teslas just aren't very good" and that "it really makes you question the customer sometimes." BlueSilverWave added that "Musk's genius is in two very closely related areas: getting investors to give him an unlimited checkbook" and "getting customers to believe they're doing something new, novel, and important, in a way that lets him walk past screwing up things that legacy players get right as an inevitability."

We can't say customers fail to notice that. It is more likely that they prefer to overlook the flaws. I wrote in 2020 about a Tesla fan who said the company sold prototypes, not production vehicles. Ironically, Pete Gruber confirmed in 2021 that this was probably the case with the Roadster, considering how many design flaws the car presented. Buyer complaints are also increasing as Tesla reaches regular customers instead of its own advocates and investors.


If that was not enough, a recent article from Fast Company said the Cybertruck would face several issues in reaching production because of its flat body panels. If you use regular steel to obtain stamped parts, they have to present curves to retain their shape and avoid vibration. Adrian Clarke said the Cybertruck body panels were prone to that, which causes discomfort to occupants and may also bring build issues, such as loosening bolts. The problem with the professional car designer's observations is that the Cybertruck adopts a thick stainless steel that cannot be stamped. It may only be folded, creating the first difficulty in manufacturing this vehicle. It may be the case that this harder stainless steel does not vibrate as much as regular steel.


Lately, we have been hearing that Tesla is giving signs that deliveries for its electric unibody pickup truck are close. The BEV maker closed its Kato Road battery manufacturing facility and excluded the Model Y with 4680 cells from its website. That would be a sign that it would focus on making the Cybertruck. Another one is that some reservation holders are no longer able to edit their configurations. What if it is the other way around and production has been delayed (again)? We'll only know for sure when Tesla sets a delivery date. Even if it is confirmed, Tesla may have decided to deliver prototypes that comply with regulations, which is pretty weird. Its engineering should still be under scrutiny, perhaps more visibly than ever, as Musk's concerns about sub-10-micron tolerances demonstrate.

Original post:

Tesla's Engineering Under Scrutiny Because of the Cybertruck and Alleged Teardowns - autoevolution

Read More..

Missouri S&T Springfield engineering professor honored with ASEE … – Missouri S&T News and Research

SPRINGFIELD, Mo. - Since Missouri S&T began a cooperative engineering program with Missouri State University in Springfield, Missouri, in 2008, the program's faculty members have regularly been recognized for excellence.

Earlier this month, Dr. Rohit Dua, an associate teaching professor of electrical and computer engineering, was awarded the American Society for Engineering Education (ASEE) Midwest Section's Outstanding Service Award.

"This is a special award for me to receive," Dua says. "I am happy to serve our profession in a variety of ways, and it means a lot to have my peers recognize me for my efforts."

Dua also received this award in 2018. He has been an active member of ASEE since 2014 and has served as program chair for the ASEE Midwest Section annual conference for the past two years. He is slated to hold this position for the 2024 conference as well.

The ASEE Midwest Section includes members from Missouri, Arkansas, Kansas, Nebraska and Oklahoma.

In addition to his service directly to ASEE, Dua also coordinates engineering outreach events for K-12 students.

"We have a laboratory set up to teach middle- and high-school students some basic engineering and electrical principles," he says. "After taking part in these activities, I have seen some of these same students be successful in the cooperative engineering program. It is incredible to inspire these young students and see their reactions when learning about engineering."

Dua says the cooperative engineering program is unique in that most of the students are from Southwest Missouri, and this allows them to remain in the area instead of moving to Rolla.

Students complete their engineering courses through S&T at the Robert W. Plaster Free Enterprise Center in downtown Springfield and take their other courses through Missouri State University. Their engineering degree is awarded from Missouri S&T.

"Missouri S&T is the best engineering school in the state," he says. "In Springfield, we have excellent Missouri S&T faculty members and laboratories. This is a fantastic resource for Southwest Missouri."

"Students can earn degrees in electrical engineering, civil engineering or mechanical engineering," he says. They also have several paid internship opportunities available, as well as research opportunities. Dua regularly supports students as part of the university's Opportunities for Undergraduate Research Experience program.

Dua has been a member of the Springfield faculty since 2010. Prior to that, he was an assistant professor at New York Institute of Technology. He earned a Ph.D. in electrical engineering from Missouri S&T in 2006 and a bachelor's degree in electrical engineering from Pune University in India.

He has won multiple awards from Missouri S&T and MSU for his project-based teaching style, including both Missouri S&T's Faculty Achievement Award and Experiential Learning Award, as well as the Teaching Award for Excellence in High-Impact Practices awarded by MSU's Faculty Center for Teaching and Learning. For more information about the Cooperative Engineering Program, visit missouristate.edu/EGR.

Missouri University of Science and Technology (Missouri S&T) is a STEM-focused research university of over 7,000 students. Part of the four-campus University of Missouri System and located in Rolla, Missouri, Missouri S&T offers 101 degrees in 40 areas of study and is among the nation's top 10 universities for return on investment, according to Business Insider. For more information about Missouri S&T, visit http://www.mst.edu.

View original post here:

Missouri S&T Springfield engineering professor honored with ASEE ... - Missouri S&T News and Research

Read More..

Steensma named Royal Academy of Engineering Visiting Professor … – Washington University in St. Louis

Joe Steensma, a professor of practice at the Brown School at Washington University in St. Louis, has been named a Royal Academy of Engineering Visiting Professor at The Engineering & Design Institute London (TEDI).

The professorship is part of the Royal Academy of Engineering's visiting professorship initiative, which aims to enhance the learning experience of U.K. engineering students through providing them with additional mentorship and industry networking opportunities, as well as through the development of innovative engineering curricula.

Steensma will spend about 30 days a year for the next three years supporting TEDI-London through programming, events and expertise.

A scientist and entrepreneur who has founded and led several businesses focused on public health, Steensma joined the faculty at the Brown School to help commercialize some of the innovative products and services the school has developed. He teaches classes in biostatistics, environmental health and the public health implications of climate change.

Read this article:

Steensma named Royal Academy of Engineering Visiting Professor ... - Washington University in St. Louis

Read More..