Category Archives: Cloud Servers

What Is Server Backup and What Are the Benefits? – Server Watch

A server backup is a proactive process of creating duplicate copies of critical data stored on a server or networked computer system. These duplicate copies, known as backups, are stored separately from the original data source, ensuring their availability even if the primary data becomes compromised, corrupted, or lost due to various factors such as hardware failures, human error, cyberattacks, or natural disasters.

Server backups can encompass various types of data, ranging from entire server images to specific files and folders. The choice of what to back up and how often depends on the specific needs and priorities of the organization or individual.

Server backup solutions come in various forms, including local backups and cloud backups, each with its advantages and considerations. Moreover, the frequency of backups can vary, with options such as daily, hourly, or even real-time continuous backups to ensure that data is as up-to-date as possible.

In this article, we will unpack the concept of server backup, explain what it entails, and examine the invaluable benefits it brings in safeguarding your data against these alarming scenarios.

Despite the integral role that backup solutions play in modern data protection, recent statistics paint a concerning picture: 14% of all data is not backed up at all, and a staggering 58% of recovery attempts fail.

These figures highlight a critical gap in data security, leaving businesses' most valuable assets exposed and at risk of being lost forever in the event of an outage or cyberattack.

Here are the primary reasons highlighting the paramount importance of server backups:

Server backups are instrumental in the preservation of vital data. They encompass a wide range of digital assets, including essential business documents, customer records, and the intricate settings and configurations that keep servers and applications running smoothly.

Without backups, the risk of losing this precious information to unforeseen events such as hardware failures or human error is a constant threat.

In the event of data loss or system failures, server backups act as a dependable means of data restoration. This is a critical component in minimizing downtime and sustaining business continuity. Swift and efficient data recovery not only saves time but also prevents substantial financial and operational setbacks.

Beyond data recovery, server backups fortify data security. They provide an additional layer of defense against cyberthreats, particularly ransomware and malware. By isolating backups from the primary network, they reduce susceptibility to malicious attacks.

In the face of a cyberattack, having secure backups ensures that data can be restored without yielding to ransom demands, thereby thwarting potential data breaches and financial losses.

Implementing server backups yields a multitude of advantages, making them an indispensable part of modern data management and cybersecurity strategies.

Data backup strategies encompass various methods to cater to diverse needs and scenarios. These different types of data backups provide flexibility and options for preserving data efficiently. Some of the most common types include full backups, incremental backups, differential backups, and mirror backups.

A full backup, also known as a level 0 backup, involves copying all the data in a system or specific storage location to a backup destination. It creates an exact replica of the entire dataset, making it the most comprehensive and time-consuming backup method. Full backups serve as a foundation for other backup strategies.

Incremental backups focus on capturing changes made to data since the last backup. Initially, a full backup is performed, and subsequent backups only record new or modified files.

This approach conserves storage space and reduces backup time. However, the restoration process requires the full backup and all incremental backups since that point, which can be time-consuming.
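To make the mechanics concrete, here is a minimal Python sketch of an incremental pass, assuming a simple directory tree and a recorded checkpoint timestamp; it is an illustration of the concept, not production backup tooling.

    import os
    import shutil

    def incremental_backup(source_dir, backup_dir, last_backup_time):
        """Copy only files modified since the previous backup finished (a toy incremental pass)."""
        copied = []
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                src = os.path.join(root, name)
                if os.path.getmtime(src) > last_backup_time:
                    rel = os.path.relpath(src, source_dir)
                    dst = os.path.join(backup_dir, rel)
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.copy2(src, dst)  # copy2 preserves timestamps and metadata
                    copied.append(rel)
        return copied

    # The caller records time.time() when a full or incremental pass completes and
    # passes that value in as last_backup_time on the next run.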

Differential backups, like incremental backups, start with a full backup. However, they capture all changes made since the last full backup, regardless of how many differential backups have occurred in the interim. This simplifies the restoration process compared to incremental backups but requires more storage space over time.

A mirror backup, also known as a 1:1 backup or a real-time backup, duplicates data in real time to a separate storage device or location. This method provides immediate redundancy and can be crucial for minimizing data loss in high-availability environments. However, it can be resource-intensive and costly due to the constant data synchronization.

Snapshot backups capture a point-in-time image of data, creating a read-only copy of the entire dataset. These snapshots are lightweight and don't consume as much storage space as full backups. They are particularly useful for creating consistent copies of data for development, testing, or data analysis purposes, but they're not as helpful for full data recovery in case of a major loss event.

Continuous data protection (CDP) is an advanced backup method that captures every change made to data in real time or at short intervals, allowing for granular recovery. It offers near-zero recovery point objectives (RPOs), ensuring minimal data loss during restoration. CDP solutions are often used in mission-critical applications where data integrity and availability are paramount.

When it comes to data backup strategies, two primary approaches stand out: local backup and cloud backup. Each method has its own set of advantages and considerations, making it essential to choose the one that aligns with your specific needs and priorities.

The sections below compare various aspects of local backup and cloud backup.

Local backups, also known as on-premises backups, involve the practice of storing data on physical devices located at the same site as the source data. These physical devices can take various forms, such as external hard drives, network-attached storage (NAS) devices, or even tape drives.

Cloud backups, a core component of cloud computing, have transformed the landscape of data protection and management. They involve securely storing data on remote servers maintained by cloud service providers.

Cloud computing has revolutionized the way data is stored, accessed, and protected, and cloud-based backup solutions offer several distinct advantages over traditional methods.

Offline backups, also known as cold backups or offline data archives, involve storing copies of your data in a location that is disconnected from the network and not constantly accessible. This practice serves as a critical safeguard against various data loss scenarios. Here's why it's so important:

Cyberattacks, such as ransomware, are becoming increasingly sophisticated and can easily target and encrypt data stored on connected devices or NAS. Offline backups, being physically disconnected from the network, are immune to such attacks.

In the event of a ransomware attack, having offline backups ensures that you have clean, uninfected copies of your data to restore from.

Human errors, software bugs, or system glitches can result in the accidental deletion or corruption of data. If your backups are constantly connected to the network or live on the same system, these errors can propagate to your backup copies.

Offline backups, on the other hand, provide a static and protected snapshot of your data, reducing the risk of such errors affecting your backup.

Physical threats like fires, floods, or hardware failures can lead to the destruction of your primary data and any online backups stored in the same location.

Offline backups, stored in a physically separate and secure location, act as a disaster recovery mechanism. They remain unaffected by local catastrophes, ensuring data availability even in the face of such events.

Some data needs to be retained for extended periods, often for compliance or historical reasons. Offline backups offer a cost-effective solution for archiving data for the long term. These backups can be stored on offline media like tape drives, which are well-suited for preserving data over extended periods.

Online backups often require a robust and continuous internet connection for data synchronization and accessibility. In situations where the internet is slow, unreliable, or unavailable, offline backups provide a reliable fallback for data retrieval and restoration.

In industries with strict data privacy and compliance regulations, offline backups can help maintain data sovereignty and security. By keeping sensitive data offline, organizations can better control access and protect it from unauthorized breaches.

Imagine a vast ocean stretching as far as the eye can see, teeming with information, insights, and opportunities. This metaphorical ocean of data represents the ever-expanding volume of digital information generated by businesses, governments, and individuals worldwide.

Just as sailors navigate the complexities of the open sea, IT professionals must navigate this sea of data to unlock its potential while safeguarding against its inherent risks. This section explores the strategies and considerations for successfully managing and making the most of this data-rich landscape.

The server backup market has evolved significantly over the years to address the ever-increasing data volumes, diverse IT infrastructures, and evolving cybersecurity threats. This section explores the dynamics, trends, and key players shaping the server backup market.

The server backup landscape is undergoing a profound transformation, with a particular spotlight on the cloud-based backup services sector. As businesses increasingly rely on digital operations and cloud services, the demand for robust server backup solutions has surged.

Projections indicate that the cloud-based backup services market is poised for substantial growth, starting from an estimated value of $4,439.6 million in 2023 and set to ascend to a remarkable $19,811.3 million by 2033. This surge reflects a compelling compound annual growth rate (CAGR) of 16.1% anticipated throughout the forecast period.

Understanding the forces propelling this domain forward not only informs strategic decision-making but also helps ensure the safeguarding of critical data assets. The primary trends and drivers shaping server backup solutions and practices range from ever-increasing data volumes and diverse IT infrastructures to evolving cybersecurity threats.

Several companies dominate the server backup market, offering a wide range of solutions catering to diverse business needs.

When it comes to disaster recovery planning, backups play a pivotal role in safeguarding critical data and ensuring business continuity. They are the safety net that allows organizations to rebound from catastrophic events.

Integrating backups into an effective disaster recovery plan involves a series of deliberate steps, from data assessment and strategy design through rigorous, ongoing testing.

Server backups offer a lifeline in the face of data loss, system failures, cyberattacks, and unforeseen disasters. They provide a safety net, ensuring that critical data can be retrieved and operations can resume, even in the most challenging circumstances. In the evolving landscape of data management and protection, server backups are the time-tested, reliable, and battle-tested means by which organizations fortify their digital assets.

Securing your digital assets with server backups begins with meticulous planning, including data assessment, strategy design, and rigorous testing. It also involves implementing redundancy, encryption, and multi-location storage to mitigate risks effectively. It necessitates continuous monitoring, regular updates, and employee training to maintain the integrity of your backup systems. Finally, it also entails embracing cloud-based and offsite backups to extend your safety net to the cloud.
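As a small illustration of the redundancy and multi-location points above, the hedged Python sketch below copies a backup archive to more than one destination and verifies each copy with a checksum; the paths are hypothetical, and encryption and monitoring are deliberately left out for brevity.

    import hashlib
    import shutil
    from pathlib import Path

    def sha256(path, chunk_size=1 << 20):
        """Stream a file and return its SHA-256 digest."""
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def replicate_backup(archive, destinations):
        """Copy one backup archive to several locations and verify each copy against the source."""
        expected = sha256(archive)
        for dest_dir in destinations:
            target = Path(dest_dir) / Path(archive).name
            shutil.copy2(archive, target)
            if sha256(target) != expected:
                raise RuntimeError(f"Checksum mismatch for {target}")
        return expected

    # Hypothetical usage: replicate to a local NAS mount and an offsite mount.
    # replicate_backup("backups/db-2023-10-01.tar.gz", ["/mnt/nas", "/mnt/offsite"])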

For help securing your servers against data loss and theft, you can turn to one of these top server backup solutions.

See the original post here:
What Is Server Backup and What Are the Benefits? - Server Watch

AI powered greenhouse outperforms Dutch growers – hortidaily.com

It comes as no surprise to Dutch greenhouse owner Frank van Kleef that an AI-powered greenhouse outperforms the benchmark Dutch grower - after all, there's a reason he left his own greenhouse company to join the British/Dutch company, Optimal, which is pioneering AI-powered greenhouse cultivation. In their commercial-scale greenhouse in Westdorpe, Optimal's system is already achieving higher yields and higher Brix levels while also saving energy and reducing CO2 emissions. "The computer is always consistent in its decisions," Frank explains.

Frank van Kleef & Dave Hunter

"There are not enough skilled growers available to service the greenhouses that are being built around the world," says Optimal's Founder and CEO, Dave Hunter. "This problem needs to be addressed if the greenhouse industry is to expand. All greenhouses worldwide need operational knowledge, consistent results, and an efficient way to run their facilities. If this can be done at scale, then greenhouses will thrive globally and become critical to humanity's food security in the face of climate change."

Leveraging Grower Expertise
To gain a deeper understanding of the industry and the challenges it faces, Dave joined forces with Frank van Kleef. Frank was one of the owners of Royal Pride, which became one of the leading greenhouse companies in the world, operating 70 ha of tomatoes and cucumbers. He now focuses full-time on developing the Optimal system and actively participates in the Westdorpe demonstration greenhouse. Frank's role is not that of a traditional grower but more of a supervisor who assesses processes and evaluates results in terms of quantity and quality. He does this meticulously, as the greenhouse's produce is sold through his social enterprise, Boeregoed.

Predictive Control System
Combining Frank's knowledge with AI, the Optimal team first developed a predictive control system. A digital twin of the greenhouse is continuously updated with real-time information and weather forecasts. Using advanced optimization software, the inputs to the greenhouse are re-planned each minute (e.g., heating pipe temperatures and vent positions). Optimal then applies these inputs to the greenhouse via an interface with the process computer. Dave explains, "We can predict the full impact of decisions before they are made and anticipate the effects of weather changes in a way that no human grower can do."
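To illustrate the shape of such a loop (this is a toy sketch, not Optimal's software; the thermal model is deliberately trivial and the sensor and setpoint functions are hypothetical stand-ins), a minute-by-minute re-planning cycle might look something like this in Python:

    def simulate_air_temperature(current_temp, pipe_temp, outside_temp, minutes=1):
        """Toy digital twin: a first-order thermal model of the greenhouse air."""
        heating_gain = 0.05 * (pipe_temp - current_temp)
        heat_loss = 0.02 * (current_temp - outside_temp)
        return current_temp + (heating_gain - heat_loss) * minutes

    def plan_pipe_temperature(current_temp, target_temp, outside_forecast):
        """Pick the heating-pipe setpoint whose predicted outcome lands closest to the target."""
        candidates = range(30, 71, 5)  # candidate pipe temperatures in degrees Celsius
        return min(
            candidates,
            key=lambda pipe: abs(
                simulate_air_temperature(current_temp, pipe, outside_forecast) - target_temp
            ),
        )

    # Re-plan every minute from fresh sensor data and the forecast. read_air_temperature(),
    # read_forecast() and apply_setpoint() are hypothetical stand-ins for the interface to
    # the process computer:
    # while True:
    #     setpoint = plan_pipe_temperature(read_air_temperature(), 21.0, read_forecast())
    #     apply_setpoint(setpoint)
    #     time.sleep(60)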

Optimal's predictive control system uses a digital twin to optimise the inputs to the greenhouse each minute

Frank explains the biggest advantage lies not just in optimizing the greenhouse environment but in preventing errors. "When you present the same situation to a grower ten times, they'll act the same way seven times and differently three times. That's human nature. A computer doesn't do that. It always makes the best choice." Dave adds that the system never sleeps. "It never over- or under-irrigates, and you don't need to adjust the climate because it's already been well-managed in advance. You don't need to react to increased evaporation because the system has already accounted for that. This results in a better overall plant health, a more resilient crop, and thus a better quality and yield."

Software and Operational Knowledge
However, running a greenhouse entails more than just a control system, even if it's an exceptional one. Dave explains, "What we offer our customers is the operation of their greenhouses on their behalf, ensuring optimal efficiency and reliability season after season."

To achieve this, all tasks inside and outside the facility are documented in software-based Standard Operating Procedures. Greenhouse workers and operators are trained to follow these procedures, and a smartphone app guides them throughout their workday, providing instructions on what tasks need to be performed, how to perform them, and what to do when encountering irregularities.

Sam Baker, Optimal's Operational Excellence Lead, with PC showing an example of a Standard Operating Procedure

Labor schedules
The system also generates labor schedules, offering insights into labor requirements and worker performance. Daily and weekly feedback sessions are conducted to assess the team's performance. Dave states, "Labour is a critical factor in greenhouse operations, with significant implications for cultivation. Standardizing it allows us to achieve optimal and predictable results."

Dave Hunter, CEO and founder, using the smartphone app to collect data on the crop

Optimal Performance and Real-World Deployment
So, what are the results of this comprehensive system? In their demonstration greenhouse, the team has achieved impressive results, including a 13% higher yield, a 14% higher Brix level, and energy savings of 27%, along with a 20% reduction in CO2 emissions compared to a benchmark Dutch greenhouse.

The Optimal team has also conducted various experiments involving ten different tomato and cucumber varieties. "We conduct experiments here ourselves and use this data to continually improve our performance," Dave explains. "Over the last five months, we've been running a low-energy experiment to optimize performance in high-energy-cost environments, such as those experienced in Europe over the last two years."

A basket of Optimal's tasty cucumbers and specialist tomato varieties.

Real-World Deployment and Future Expansion
Currently, discussions are underway with leading greenhouse owners in the United States, the UK, and other locations. Dave notes that the system easily adapts to different climatic conditions, and there are no minimum technical requirements or specific greenhouse needs to get started. Dave explains, "We can work in any greenhouse: we install additional sensors and connect them to cloud servers and are up and running in days." The team is also exploring expansion into strawberries, peppers, and leafy greens.

When asked what he's most proud of, Dave emphasizes that Optimal's success lies in the synergy between humans and machines, a complex but rewarding endeavor. "Our engineers understood that AI and predictive control models would result in better greenhouse outcomes: they view a greenhouse as a system that can be controlled through simulations and optimization. However, it's vital to align people, the system, and the hardware to work together. That's where Frank and the other experienced operators in Optimal's team have added tremendous value."

For more information:
Optimal
contact@optimal.ag
http://www.optimal.ag

Link:
AI powered greenhouse outperforms Dutch growers - hortidaily.com

AppFactor drags legacy enterprise apps to the cloud through automation – TechCrunch

Technical debt is often the unsung villain of the enterprise, crippling companies seeking to modernize as they realize just how much legacy lives in their stack. And as with most kinds of debt, there is usually interest to pay, too.

This is something that fledgling U.K. startup AppFactor is setting out to solve, with a platform that helps enterprises automatically re-architect their legacy applications, readying them for deployment into a new cloud-native home.

AppFactor was formally incorporated in mid-2021; however, CEO and founder Keith Neilson has only been working on it full-time since January, recently closing a pre-seed round of funding he says weighed in at north of 1 million ($1.3 million).

Presenting onstage today as part of the Startup Battlefield at TechCrunch Disrupt, Neilson showcased AppFactor's technology and laid out his startup's mission in a space ripe for change. TechCrunch caught up with Neilson ahead of time to get the lowdown on the scale of the problem as he sees it, and what exactly AppFactor is doing to address it.

To outsiders, some technical debt might be obvious through exposure to bugs or laggy systems. Or, perhaps, the amount of time it takes the company to improve existing products and introduce new features.

Those on the inside, meanwhile, might have a better idea of their technical debt when they see that their IT budget spend is weighted disproportionately toward maintenance versus building shiny new things. Data from consultancy McKinsey suggests that technical debt could account for as much as 40% of companies' total IT budget, while a separate report from Stripe indicates that developers on average spend a third of their working week addressing existing technology issues rather than writing new code.

But it's not always easy to get a clear picture of the level of technical debt a company has, because it might span multiple areas and domains within an organization. This opaque underbelly may include things like overly complex, duplicate or downright bad code; a lack of automated testing; security vulnerabilities; and general poor design.

"The big challenge enterprises have is that they've built and architected enterprise-grade applications at a [specific] point in time, and business requirements and processes change the environments around these applications, and the applications and their dependencies evolve over time," Neilson said.

Thus, technical debt, as McKinsey notes, can perhaps best be viewed as a sort of tax a company pays on all internal development that's focused on fixing myriad legacy technology infrastructures. This includes new libraries and frameworks, or integration points and dependency changes as companies fine-tune their stack. Ultimately, it amounts to a whole mish-mash of complexity that snowballs over time to create an unwieldy mess.

A typical example of a legacy enterprise application might involve an older Microsoft SQL database; some middleware layer; and a .NET front end, requiring a mixture of physical and virtual infrastructure to function. The running processes, libraries, dependencies and general componentry permeating the application and infrastructure would require significant manual spadework just to figure out what is what, as teams attempt a lift-and-shift transformation to a more cloud-native form.

And that, essentially, is what AppFactor is setting out to offer. It scans the company's IT environment to identify all their apps and respective dependencies, divorces the virtual and physically hosted apps from their current environment, and rebuilds each component and app layer into separate containers ready for their new home, be that a modern cloud architecture such as Kubernetes or a managed database service.

"All of this is generated and driven by the product [AppFactor], so you can rapidly move your existing application estates over to the latest cloud technology within days, not months and years," Neilson said.

AppFactor consists of three core components: a scanner/analyzer that's deployed to servers to collect the data needed to unearth their applications and dependencies; an orchestrator, which essentially controls the scanner/analyzer's behavior, including the IP range and target systems; and the overarching AppFactor SaaS platform that handles all the data analysis, machine learning (ML) processes and services that generate visual mappings, containerization tasks and more.
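AppFactor has not published its scanner internals, but as a rough idea of the kind of raw data such an agent might start from, here is a hedged Python sketch that maps running processes to their open network connections using the psutil library; real discovery tooling goes much further (packages, libraries, file handles, configuration and so on).

    # pip install psutil
    import psutil

    def snapshot_network_dependencies():
        """Map running processes to their open network connections (a crude dependency sketch)."""
        names = {p.pid: p.info["name"] for p in psutil.process_iter(["name"])}
        edges = []
        for conn in psutil.net_connections(kind="inet"):
            if conn.raddr and conn.pid:  # only connections with a known remote endpoint and owner
                edges.append({
                    "process": names.get(conn.pid, "unknown"),
                    "local": f"{conn.laddr.ip}:{conn.laddr.port}",
                    "remote": f"{conn.raddr.ip}:{conn.raddr.port}",
                    "status": conn.status,
                })
        return edges

    if __name__ == "__main__":
        for edge in snapshot_network_dependencies():
            print(edge)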

The company says it has been working with some commercial clients, including U.K.-based enterprise software company Civica. To date, only the "discover and assess" facet of its platform has been commercially available. However, the company is also gearing up to launch its app modernization module in November. This means customers will have the ability not only to find suitable candidates for modernization, with all the relevant reporting and analytics, but eventually to enact the transformation itself.

Perhaps one of the more interesting features of the platform (from a slick bells-and-whistles perspective, at least) is a tool that enables users to visualize app dependencies through a 3D visualization engine. Eventually, this could be used to visualize entire environments.

"Currently it's at more of an infrastructure and process level view, but clearly there is scope to go deeper down, which we plan on building out," Neilson said.

Curiously, AppFactor is also making this available for VR headsets, with the company demoing this functionality via an Oculus at its TC Disrupt booth.

"One of the toughest activities upfront that can help de-risk any [app] changes is being able to account, view and understand dependencies, be it across infrastructure, architecture or code," Neilson said. "This view is about being able to view and interact with the makeup and anatomy of our application estates in a granular and powerful way. Some of these systems are incredibly complex, with communications, libraries, files, services, processes and more happening in a magnitude of places, across multiple environments, so this is a really powerful way of being able to intuitively understand, validate and reaffirm knowledge, which empowers any future evolution of the application and its attributes."

Current app modernization tooling is substantively manual, and therefore resource-intensive. It might involve using a command-line tool like Docker, which requires significant continuous testing, and even then might not cover the full gamut of dependencies due to the manual nature of running the tool. And the likes of Google's Migrate for Anthos, which resulted from its Velostrata acquisition five years ago, and AWS's App2Container make it somewhat easier for enterprises to convert virtual machines (VMs) to containers. However, these are still very manual and command-line based, don't necessarily provide extensive visibility into dependencies and don't support physical infrastructure-based apps.

There are other similar-ish services out there too focused on helping companies transition from monolithic software to microservices, such as venture-backed Vfunction.

The ultimate goal of each of these services is to help companies reduce their technical debt and get with the times, albeit adopting slightly different approaches en route.

"We believe there are four pillars of technical debt: infrastructure, architecture, code and dependencies," Neilson said. "We also believe that there are many applications that do not suit microservices, so our vision is to allow the attributes of an enterprise app to dictate the optimum architecture pattern."

To achieve this, AppFactor says it's developing machine learning classifications to help generate the patterns necessary to transform more complex, multi-host apps. Essentially, it's about creating fingerprinting techniques to identify what complex or bespoke apps are made of.

"We are using a trained data model to build this, and it employs a number of attributes and datapoints that can help identify application patterns," Neilson said.

Additionally, Neilson said that they're experimenting with a number of other AI use cases, including using large language models (LLMs) to generate the YAML (a human-readable data serialization language for creating configuration files) for Kubernetes deployments.

"We have some [other] future use cases around code generation, but we're not there yet," Neilson added.
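Neilson did not detail how that generated YAML is checked, but any pipeline that lets a model emit Kubernetes configuration needs a validation step before anything reaches a cluster. Below is a minimal Python sketch of that idea; generate_deployment_yaml() is a hypothetical placeholder, not an AppFactor API.

    # pip install pyyaml
    import yaml

    REQUIRED_TOP_LEVEL = ("apiVersion", "kind", "metadata", "spec")

    def validate_manifest(manifest_text):
        """Parse model-generated YAML and check it at least looks like a Kubernetes object."""
        doc = yaml.safe_load(manifest_text)
        if not isinstance(doc, dict):
            raise ValueError("generated manifest is not a YAML mapping")
        missing = [key for key in REQUIRED_TOP_LEVEL if key not in doc]
        if missing:
            raise ValueError(f"manifest is missing required fields: {missing}")
        return doc

    # generate_deployment_yaml() is a hypothetical stand-in for the model call that
    # produces the manifest text; the validation gate is the point of the sketch.
    # manifest = validate_manifest(generate_deployment_yaml("containerize the .NET front end"))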

Read the original here:
AppFactor drags legacy enterprise apps to the cloud through automation - TechCrunch

Google Mourns Veteran Engineer Luiz Andre Barroso Who Invented … – Slashdot

Brazilian engineer Luiz Andre Barroso, who ripped up the rulebook at Google, has died. His radical ideas for data centers laid the foundations for cloud computing. Wired: Luiz Andre Barroso had never designed a data center before Google asked him to do it in the early 2000s. By the time he finished his first, he had overturned many conventions of the computing industry, laying the foundations for Silicon Valley's development of cloud computing.

Barroso, a 22-year veteran of Google who unexpectedly died on September 16 at age 59, built his data centers with low-cost components instead of expensive specialized hardware. He reimagined how they worked together to develop the concept of "the data center as a computer," which now underpins the web, mobile apps, and other internet services.

Jen Fitzpatrick, senior vice president of Google's infrastructure organization, says Barroso left an indelible imprint at the company whose contributions to the industry are countless. "We lost a beloved friend, colleague and respected leader," she writes in a statement on behalf of the company.

Read the original post:
Google Mourns Veteran Engineer Luiz Andre Barroso Who Invented ... - Slashdot

Meta Platforms Is Determined To Make Ethernet Work For AI – The Next Platform

We said it from the beginning: There is no way that Meta Platforms, the originator of the Open Compute Project, wanted to buy a complete supercomputer system from Nvidia in order to advance its AI research and move newer large language models and recommendation engines into production. Meta Platforms, which has Facebook as its core platform, likes to design and build its own stuff, but got caught flat-footed by the lack of OAM-compatible GPU and matrix accelerators and had no choice but to buy an N-1 generation DGX SuperPOD system using InfiniBand interconnects between nodes.

And now, as Meta Platforms looks ahead to the future of AI inside the social network and the interconnect underpinning the compute engines it must lash together at incredible scale to compete against its hyperscaler and cloud builder rivals, it is back to Ethernet interconnects. This is why Meta Platforms is one of the founding companies behind the Ultra Ethernet Consortium, a buddy movie collection of Ethernet ASIC suppliers and switch makers who do not really want to cooperate with each other but who are being compelled by the Internet titans and their new AI upstart competition to figure out a way not only to make Ethernet as good as InfiniBand for AI and HPC networking, but to make it stretch to the scale they need to operate. For Meta Platforms, that would be around 32,000 compute engines today, then hundreds of thousands of devices, and then over 1 million devices at some point in the not too distant future.

What unites these companies, Broadcom, Cisco Systems, and Hewlett Packard Enterprise for switch ASICs (and soon Marvell, we think), Microsoft and Meta Platforms among the titans, and Cisco, HPE, and Arista Networks among the switch makers, is a common enemy: InfiniBand.

The enemy of my enemy is my ally.

The math is very simple. In the early 2010s, when the hyperscalers and cloud builders were really starting to build massive infrastructure, the networking portion of any distributed system represented less than 10 percent of the cost of that overall system, including switches, network interfaces, and cables. As the first generation of 100 Gb/sec gear came out, the costs were very high because the design was not right, and soon networking was representing 15 percent or more of the cost of a cluster. With the advent of affordable 100 Gb/sec Ethernet and now the advance to 200 Gb/sec and 400 Gb/sec speeds, the cost is now down below 10 percent again, but only on the front-end network where applications run. For AI training and inference infrastructure among the hyperscalers and cloud builders, Nvidia will tell you plain and simple that the network represents 20 percent of the cluster cost. InfiniBand, explains Nvidia co-founder and chief executive officer Jensen Huang, delivers 20 percent better performance at scale than Ethernet at the same bandwidth, however, so InfiniBand is effectively free.

Well, no. It is not free. You still have to come up with the cash, and it is 20 percent of the cost of the cluster, which is impressive when you think of the very high cost of GPU compute engines compared to the overall cost of a Web infrastructure cluster based on CPUs. InfiniBand networking for AI systems, node for node, must be enormously more expensive than the Ethernet, admittedly at a lower bandwidth, that was used on other infrastructure clusters to run databases, storage, and applications.

And this was why Ethernet with RDMA over Converged Ethernet, a kind of low-latency Ethernet that borrows many ideas from InfiniBand, was on display at the Networking @ Scale 2023 event hosted by Meta Platforms. The company talked about how it has been using Ethernet for modest-sized AI training and inference clusters and how its near-term plans were to scale to systems with 32,000 GPUs sharing data, enabling a factor of 16X improvement in scale over the initial 2,000 GPU clusters it had been using to create and train its LLaMA 1 and LLaMA 2 models. (The Research Super Computer system that Meta Platforms bought from Nvidia topped out at 16,000 GPUs, with most of them being Nvidia's Ampere A100 GPUs and a relatively small share of them being the more recent and more capacious Hopper H100 modules.)

Meta Platforms knows a thing or two about building datacenter-scale networks, given that its applications serve over 3 billion people on Earth, roughly 40 percent of the population on the planet. But, as the Networking @ Scale presentations showed, scaling AI is a whole lot more troublesome than scaling PHP or Python applications and the various middleware, databases, and storage that underpin them to keep us up to date on our social networks. (Can you even tell if the feeds are slightly behind the actual posts on a social application? No, you can't.)

"AI models are growing 1,000X every two to three years," explained Rajiv Krishnamurthy, director of software engineering for the Network Infrastructure group at the company. "And we have observed this internally at Meta, and I think that seems to be a secular trend based on whatever we are observing in industry, too. And that number is difficult to grok. So from a physical perspective, this translates into tens of thousands of GPU cluster sizes, which means that they are generating exaflops of compute. This is backed by exabytes of data storage. And from a networking perspective, you are looking at manipulating about terabits per second of data. The workloads themselves, they are finicky. By that, people understand that typical AI HPC workloads have very low latency requirements, and also, from a packet perspective, they cannot tolerate losses."

Meta Platforms wants to have production clusters for AI training that scale 2X beyond the Nvidia RSC machine it acquired in January 2022 and ramped up throughout all of last year to its full complement of 16,000 GPUs. And then, before too long, it will be talking about 48,000 GPUs, then 64,000 GPUs, and so on.

Like other hyperscalers who actually run their own applications at scale, Meta Platforms has to balance the needs of large language models (LLMs) against the needs of recommendation engines (Reco in some of the slides at the Networking @ Scale event) that also rely on AI. LLMs need to store models and weights to do inference, but recommendation engines need to store massive amounts of embeddings, usually at least terabytes of data, in memory as well, which is a set of data that has salient characteristics about us and the zillions of objects being recommended, so the engine can make correlations and therefore recommend the next thing that might be useful or interesting to us.

Architecting a system that can do LLM training (that's using LLaMA 2 at Meta Platforms at this point) and inference as well as Reco training and inference (in this case, the homegrown Deep Learning Recommendation Model, or DLRM) is very difficult, and one might even say impossible given the divergent requirements of these four workloads, as Jongsoo Park, a research scientist at the AI Systems division of Meta Platforms, showed in this spider graph:

LLMs need three orders of magnitude more compute than reco engines, says Park, needing about 1 teraflops of compute for every sentence that is processed, against a datastore of hundreds of billions of sentences and therefore trillions of tokens. This training is distributed across the cluster, but so is the inference, which is now busting out beyond an eight-GPU server node to clusters with 16, 24, and even 32 GPUs. Park sized up the compute needs for these four distinct workloads as such:

Imagine, if you will, walking into the office of the CEO and CFO and explaining that you have this wonderful hyperrelational database thingamabob and it could answer questions in plain American, but it needs on the order of 1 petaflops to process one sentence of your corpus of enterprise data and it would need 10 petaflops of oomph to start talking within one second of asking a question. You would be laughed out of the boardroom. But, if you say generative AI, then they will probably come up with the money because everybody thinks they can be a hyperscaler. Or borrow some of their iron and frameworks at the very least.

Love this table that Park showed off:

This table shows the interplay of LLaMA model generation, model size (parameter count), dataset size (tokens), and the aggregate zettaflops needed to complete the training on the hardware shown. Add parameters and you need either more GPUs or more time, and it scales linearly. Add more tokens and you need either more GPUs or more time, and it scales linearly. Scale up parameters and tokens, and you need exponentially more GPUs or more time or both.
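That linear scaling in both parameters and tokens falls out of the common approximation that training cost is roughly 6 x parameters x tokens floating point operations. As a back-of-the-envelope sketch (the peak throughput and utilization figures below are our assumptions, not numbers from the talk), the arithmetic looks like this:

    def training_gpu_hours(params, tokens, peak_flops, utilization):
        """Back-of-the-envelope cost using the ~6 * parameters * tokens approximation."""
        total_flops = 6 * params * tokens
        sustained_flops = peak_flops * utilization
        return total_flops / sustained_flops / 3600

    # Assumptions (ours, not the article's): A100 BF16 peak ~312 teraflops, ~45% utilization.
    hours = training_gpu_hours(params=70e9, tokens=2e12, peak_flops=312e12, utilization=0.45)
    print(f"~{hours / 1e6:.1f} million A100-hours")  # ~1.7 million; the result is very
                                                     # sensitive to the assumed utilization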

Park said that this GPU cluster running LLaMA2 34B with 2,000 A100 GPUs was the largest Ethernet RoCE network in the world as far as he knew. You can see how, if you doubled up the parameter count to LLaMA2 70B, it would probably take 1 million GPU hours to complete against a 2 trillion token dataset, and that InfiniBand is about 15 percent faster at the same 200 Gb/sec port speed used in the clusters.

This is just the beginning. Meta Platforms needs to ramp up its parameter scale, but it can't do so until it can scale up its back-end AI network and also get its hands on 32,000 of Nvidia's H100 GPUs. We presume that Meta Platforms has done its penance with Nvidia by touting the RSC system for the past year and a half and will revert to using PCI-Express versions of Hopper and build its own systems from here on out.

With 32,000 H100s yielding about 30 percent of peak performance in production at FP8 quarter-precision floating point math, Park says Meta Platforms will be able to train a LLaMA2 model with 65 billion parameters in a day. Lots of things will have to change to make this happen, including increasing the training token batch beyond 2,000 and making that scale across more than a few thousand GPUs. The global training batch size will also have to be maintained across 32,000 GPUs, using what he called 3D parallelism, a combination of data parallel, tensor parallel, and pipeline parallel techniques, to spread the work out across the GPUs. Park says data parallelism is running out of steam because the parameters and data sizes are getting so large, so there is no way to get around this issue.
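As a hedged illustration of how those three parallelism dimensions multiply out (the specific degrees below are assumptions for the example, not Meta Platforms' published configuration):

    def partition_gpus(total_gpus, tensor_parallel, pipeline_parallel):
        """Split a GPU count into (data, tensor, pipeline) parallel dimensions."""
        model_parallel = tensor_parallel * pipeline_parallel
        if total_gpus % model_parallel:
            raise ValueError("GPU count must be divisible by tensor x pipeline degree")
        return total_gpus // model_parallel, tensor_parallel, pipeline_parallel

    # Hypothetical layout for a 32,768-GPU job: 8-way tensor parallel inside a server,
    # 16-way pipeline parallel across servers, and the remainder as data-parallel replicas.
    dp, tp, pp = partition_gpus(32768, tensor_parallel=8, pipeline_parallel=16)
    global_batch = 4096                     # example global batch size (an assumption)
    print(dp, tp, pp, global_batch // dp)   # 256 replicas x 8 x 16, 16 sequences per replica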

As for latency, Meta Platforms looks at time to first token and then the average response time for each successive token. The first token should come in under 1 second, which is why it is taking more than eight GPUs to do inference, and then each successive token should come in within 50 milliseconds. (An eyeblink is around 200 milliseconds, which is the attention span of a human being since the Internet was commercialized and widely distributed.)

There are subtleties with inference that we were not aware of, and these also have compute and networking needs that are at odds with each other, which is driving system architects to distraction:

The inference stages are prefill and decode. The prefill stage is about understanding the prompts, which means processing tens of thousands of tokens in a parallel fashion through large messages on the order of hundreds of megabytes. The time to first token is a few seconds and you need hundreds of GB/sec to feed the prompts into the inference engine. The decode stage is all about latency. One token is output at a time, with each output token being fed back into the transformer model to generate the next token.
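Those two stages compose into an end-to-end response time in a straightforward way. A small worked example using the latency budgets quoted earlier (under 1 second to first token, 50 milliseconds per subsequent token):

    def response_time_seconds(output_tokens, time_to_first_token=1.0, per_token=0.050):
        """Total time for a streamed reply: prefill latency plus one decode step per token."""
        return time_to_first_token + output_tokens * per_token

    for tokens in (100, 250, 500):
        print(tokens, "tokens ->", round(response_time_seconds(tokens), 1), "seconds")
    # 100 tokens -> 6.0 seconds, 250 -> 13.5 seconds, 500 -> 26.0 seconds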

Petr Lapukhov drilled down into the AI networks at Meta Platforms a bit more. Lapukhov was a senior network engineer at Microsoft working on LAN and WAN issues for the Bing search engine, has been at Meta Platforms for the past decade as a network engineer, and most recently has been focused on AI systems and their network topologies.

Here is how the Meta Platforms AI systems have evolved over time, and a relatively short period of time at that:

In the old days of only a couple of years ago, DLRM training and inference could be done on a single node. Then, with its first generation of Ethernet RoCE clusters, Meta could cluster multiple nodes together, but the cluster size was fairly limited. To get the kind of scale it needed, it had to move to InfiniBand and Ethernet RoCE v2, and the former had a financial problem and the latter had some technical problems, but the company has made do up until now.

Starting with the basic building blocks, an eight-way GPU server based on Nvidia accelerators can deliver 450 GB/sec of bandwidth across the devices, with tens of accelerators inside of a node, according to Lapukhov. Model parallel traffic runs over the in-node interconnect, in this case NVLink, but it could also be PCI-Express switching infrastructure. From here, models have to scale with data parallelism across thousands of nodes (with tens of thousands of aggregate GPU compute engines) using some form of RDMA (either InfiniBand or Ethernet RoCE), and you can deliver on the order of 50 GB/sec of bandwidth between the nodes with a reasonable number of network interface cards.

For Ethernet AI networks, Meta Platforms is using the same Clos topology that it uses for its datacenter-scale front-end network for applications, and not the fat tree topology generally favored by those using InfiniBand in AI training and HPC clusters.

To get to 32,256 GPUs (the charts from Meta Platforms are imprecise), the company puts two servers in a rack, each with eight Nvidia H100 GPUs. This is not particularly dense, as racks go, but it is no less dense than what Nvidia itself is doing with its DGX H100 clusters. This means there are 2,000 racks that need to be connected, like this:

If you look at this carefully, it is really eight clusters of 4,096 GPUs each cross-linked in two tiers of networking.

Each rack has a pair of servers with a total of sixteen GPUs and a top of rack switch. It is not clear how many ports there are in the servers or switches, but there had better be one uplink port per GPU, which means eight ports per server. (This is what Nvidia does with its DGX designs.) There are a total of 2,016 of these TORs in the whole enchilada. That is a fairly large number of switches as networks go.

These top of rack switches are cross connected into a cluster using eighteen cluster switches (what you might call a spine), which works out to 144 switches across the full cluster. And then there are another eighteen aggregation switches with a 7:1 oversubscription taper that link the eight sub-clusters to each other. That's 2,178 switches to interlink 4,032 nodes. That is a 1.85:1 ratio, thanks to the bandwidth needs of those data-hungry GPUs.
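Those counts hang together arithmetically. As a quick sanity check of the figures in the preceding paragraphs (all taken from the article's description, not from Meta Platforms directly):

    racks = 2016                      # two servers of eight GPUs apiece per rack
    gpus = racks * 16                 # 32,256 GPUs in total
    tor_switches = racks              # one top-of-rack switch per rack
    cluster_switches = 8 * 18         # eighteen "spine" switches per 4,096-GPU sub-cluster
    aggregation_switches = 18         # tapered 7:1 layer joining the eight sub-clusters

    switches = tor_switches + cluster_switches + aggregation_switches
    nodes = racks * 2
    print(f"{gpus} GPUs, {switches} switches, {nodes} nodes, "
          f"{nodes / switches:.2f} nodes per switch")
    # 32256 GPUs, 2178 switches, 4032 nodes, 1.85 nodes per switch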

This table by Lapukhov was cool, and it showed that the sub-cluster granularity as far as the AI models were concerned is really on the order of 256 to 512 GPUs:

And this shows how the collective operations that underpin AI are mapped onto the network:

The gist is this, and it is not surprising. As you make larger fabrics to span more GPUs, you add more layers to the network, and that means more latency, which will have the effect of lowering the utilization of the GPUs at least some of the time when they are waiting for collective operations to finish being propagated around the cluster. But fully sharded data parallel all-gather operations tend to send small messages, usually 1 MB or smaller, and if you can handle small messages well, you can do tensor parallelism with fine-grained overlapping of communication and computation.

Sounds like someone needs big fat NUMA nodes for inference and training... which is exactly what NVLink does and what NVSwitch extends.

So what does this look like in the Meta Platforms datacenters? Well, here is what the front-end datacenter fabric looks like:

A datacenter is carved up into four rooms, and there is some aggregation networking in each room and then the core network that lashes together the rooms in its own area at the center of the datacenter. To add AI to server rooms, the cluster training switches (CTSW) and rack training switches (RTSW) are added to the same rooms as the other application servers and can be interleaved with them. Across four data halls, Meta Platforms can house tens of thousands of reasonably tightly coupled GPUs:

Here is a 3D representation of the network planes if this makes it easier to visualize:

Back in the old days, Meta Platforms was using 100 Gb/sec Ethernet and RoCE v1 with some success:

With the shift to Ethernet RoCE v2, which had much-improved latency and packet protection features, Meta Platforms had eight ports of 200 Gb/sec going into each server (whew!) and cross-coupled these with rack and cluster switches using 400 Gb/sec ports.

In the second generation of its AI fabric, which is presumably what is helping Arista Networks make so much money from Meta Platforms, the social network has moved to 400 Gb/sec downlinks to the hosts for each GPU and is still running the higher levels of the network undersubscribed to keep the bits moving without any obstructions.

Echoing our "supply win versus design win" observation that has driven a lot of datacenter infrastructure sales since the beginning of the coronavirus pandemic, Lapukhov laid it right out there when asked what is the most important property of an AI fabric.

"So funny enough, the most important property is buildability," Lapukhov said. "Will you have the materials on time to build your fabric? I know it's controversial, it's very unusual to say this thing. But what we found out is that building a large system requires you to get a lot of components on time in one place and test them. So from my perspective, you have Ethernet and InfiniBand as two poles, but they solve the problem in different ways. Ethernet offers you an open ecosystem, multiple vendors, and easier supply sources to get your hardware. InfiniBand offers you the pedigree of technology used in HPC clusters, but there is only one supplier as of today. So the answer is, whatever you can make work on the timescale you need. So for us, for the longest time, it was Ethernet. We built many fabrics on Ethernet because this is technology we are familiar with, good supply, and we have had devices to deploy on time. And that took precedence. We have been building clusters with InfiniBand as far back as three years ago. So as of today, we allow our technologists to deploy both InfiniBand and Ethernet. And once again, I'll reiterate: the most important property is building the fabric you can build on time for your GPUs to arrive and use in the datacenter."

Exactly. And it will be like this for many more years to come, we think. But if the Ultra Ethernet Consortium has it the Meta Platforms way, Ethernet will be a lot more like InfiniBand and will have multiple suppliers, thus giving all hyperscalers and cloud builders, and ultimately you, more options and more competitive pressure to reduce prices on networking. Don't expect it to get much below 10 percent of the cost of a cluster, though, not as long as GPUs stay costly. And ironically, as the cost of GPUs falls, the share of the cluster cost that comes from networking will rise, putting even more pressure on InfiniBand.

It is a very good thing for Nvidia right now that it has such high performance GPUs and higher performance InfiniBand networking. Make hay while that AI sun is shining.

Read more from the original source:
Meta Platforms Is Determined To Make Ethernet Work For AI - The Next Platform

How to do the AI image trend on Instagram – Android Authority

If you're a regular Instagram user, you may have seen people with unusually artistic profile images, looking as if they've been painted or drawn. Most of these people aren't commissioning art. We'll explain what's really going on in the guide below, and how you can get in on the action if reality isn't enough.

QUICK ANSWER

Use tools like Lensa, NightCafe, or TikTok filters to generate AI images, then upload one as your profile picture. Some tools, including Lensa, cost a fee.

To cut to the chase, it's people taking advantage of generative AI apps to enhance or stylize their profile pictures. Whereas most AI image generators create purely synthetic content based on prompts, some of the images you see on Instagram use real selfie photos as their source material, so they should at least partly resemble the people who use them.

It's worth mentioning here that selfie-based generators depend on well-lit, close-up photos with uncovered faces, and the better the source material, the better the output will be. You may end up taking new photos to make them work, in which case there might not be much reason to turn to AI. There's also a chance that you won't like the way AI stylizes you, even if the output is visually acceptable.

On a psychological level, there's a risk that AI images can lead to a distorted body image. They depict an idealized or exaggerated version of ourselves that we can never achieve, so bear that in mind with your own avatar, or the ones you see online.

The app of choice for AI images on Instagram seems to be Lensa, and it'll cost you to generate profile pictures, even if you sign up for a trial subscription; they'll just cost less than the normal price. We mention the app here because of popularity and convenience. If you want to save cash, it's absolutely worth hunting down free options (such as a few listed below).

Here's how to use Lensa to make images for Instagram:

If you want images based on selfie photos, your options are relatively limited, but alternatives to Lensa exist.

Visit link:
How to do the AI image trend on Instagram - Android Authority

SAP Announces New Generative AI Assistant Joule – iTWire

SAP Software has announced Joule, a natural-language, generative AI copilot that it says will transform the way business runs.

Built directly into the solutions that power mission-critical processes, SAP says Joule is a copilot that truly understands business, and it will be embedded throughout SAP's cloud enterprise portfolio, delivering proactive and contextualised insights from across the breadth and depth of SAP solutions and third-party sources.

"By quickly sorting through and contextualizing data from multiple systems to surface smarter insights, Joule helps people get work done faster and drive better business outcomes in a secure, compliant way. Joule delivers on SAP's proven track record of revolutionary technology that drives real results," notes SAP.

"With almost 300 million enterprise users around the world working regularly with cloud solutions from SAP, Joule has the power to redefine the way businesses and the people who power them work," said Christian Klein, CEO and member of the Executive Board of SAP SE. "Joule draws on SAP's unique position at the nexus of business and technology and builds on our relevant, reliable, responsible approach to Business AI. Joule will know what you mean, not just what you say."

SAP says Joule will be embedded into SAP applications from HR to finance, supply chain, procurement and customer experience, as well as into SAP Business Technology Platform, and that Joule transforms the SAP user experience: it's like tapping your smartest colleague on the shoulder.

"Employees simply ask a question or frame a problem in plain language and receive intelligent answers drawn from the wealth of business data across the SAP portfolio and third-party sources, retaining context. Imagine, for example, a manufacturer asking Joule for help understanding sales performance better. Joule can identify underperforming regions, link to other data sets that reveal a supply chain issue, and automatically connect to the supply chain system to offer potential fixes for the manufacturer's review. Joule will continuously deliver new scenarios for all SAP solutions. For example, in HR it will help write unbiased job descriptions and generate relevant interview questions," explains SAP.

"As generative AI moves on from the initial hype, the work to ensure measurable return on investment begins," said Phil Carter, Group Vice President, Worldwide Thought Leadership Research, IDC. "SAP understands that generative AI will eventually become part of the fabric of everyday life and work and is taking the time to build a business copilot that focuses on generating responses based on real-world scenarios and to put in place the necessary guardrails to ensure it's also responsible."

SAP announced that Joule will be available with SAP SuccessFactors solutions and the SAP Start site later this year, and with SAP S/4HANA Cloud, public edition early next year. SAP Customer Experience and SAP Ariba solutions, along with SAP Business Technology Platform, will follow, with many other updates across the SAP portfolio to be announced at the SuccessConnect event on October 2-4, the SAP Spend Connect Live event on October 9-11, the SAP Customer Experience LIVE event on October 25 and the SAP TechEd conference on November 2-3.

Follow this link:
SAP Announces New Generative AI Assistant Joule - iTWire

CrowdStrike Fal.Con 2023: CrowdStrike Brings AI and Cloud … – TechRepublic

At CrowdStrike Fal.Con 2023, CrowdStrike announced a new Falcon Raptor release with generative-AI capabilities and the acquisition of Bionic.

At CrowdStrike's annual Fal.Con show in Las Vegas this week, the company announced a series of enhancements to its Falcon security platform, including a new Raptor release with generative-AI capabilities. The company also announced the acquisition of Bionic to add cloud application security to its portfolio.

Jump to:

CrowdStrike Falcon covers endpoint security, Extended Detection and Response, cloud security, threat intelligence, identity protection, security/IT Ops and observability. The new Raptor release adds petabyte-scale, fast data collection, search and storage to keep up with generative AI-powered cybersecurity and stay ahead of cybercriminals. It's being rolled out gradually to existing CrowdStrike customers beginning in September of 2023.

"Raptor eliminates security noise and reduces the time analysts take to chase down incidents," said Raj Rajamani, head of products at CrowdStrike, when I interviewed him at Fal.Con.

In earlier versions of Falcon, data existed in multiple backends, which increased the possibility of blind spots that could be exploited by hackers. Raptor provides a single data plane to bring the data together in the CrowdStrike platform.

"There is no longer a need for security analysts to go to different points to try to correlate CrowdStrike and third-party data, as everything is stitched together by Charlotte AI to reduce the time needed for triage and analysis," said Rajamani.

This is achieved by decoupling the data from the compute power needed to compile, process and analyze it. Rajamani said this can take query response times down from hours to seconds and larger queries from days to a few hours.
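As a generic illustration of that decoupling, and not CrowdStrike's actual implementation, the minimal sketch below keeps telemetry in a shared set of newline-delimited JSON partition files and scales query compute independently by changing the worker count; the directory and field names are hypothetical placeholders.

import json
import os
from concurrent.futures import ProcessPoolExecutor

DATA_DIR = "telemetry_partitions"  # hypothetical directory of partition files

def scan_partition(args):
    """Scan one partition file and return events whose `field` equals `value`."""
    path, field, value = args
    matches = []
    with open(path) as fh:
        for line in fh:
            event = json.loads(line)
            if event.get(field) == value:
                matches.append(event)
    return matches

def query(field, value, workers=8):
    """Fan the query out over every partition; more workers, same data."""
    tasks = [(os.path.join(DATA_DIR, name), field, value) for name in os.listdir(DATA_DIR)]
    results = []
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for matches in pool.map(scan_partition, tasks):
            results.extend(matches)
    return results

if __name__ == "__main__":
    # e.g. pull every event recorded for one host across all partitions
    print(len(query("hostname", "web-01")))

The point of the sketch is that the data never moves: a slow query is answered faster by adding workers, not by re-collecting or copying the telemetry.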

As CrowdStrike Falcon consists of multiple modules that broadly address the security landscape, it competes on multiple fronts. On the EDR side, its main competitors are Microsoft and SentinelOne. On cloud security, it lines up against the likes of Microsoft and Palo Alto Networks. For identity protection, its primary competitor is probably Microsoft. Rajamani said that CrowdStrike has an advantage over Microsoft and others through its ability to build a unified data plane using a single agent and console for all security-related data.

"Others solve parts of the security puzzle but struggle to bring it all together without a 360-degree view," he said. "The sum of the parts is greater than the whole."

The other big announcement at CrowdStrike's Fal.Con was an agreement to acquire Application Security Posture Management (ASPM) vendor Bionic. This extends CrowdStrike's cloud-native application protection platform (CNAPP) to deliver risk visibility and protection across all cloud infrastructure, applications and services.

The crowded cloud-native software platform marketplace is led by PingSafe, Aqua Security, Palo Alto Networks, Orca and many others; the addition of ASPM from Bionic should give CrowdStrike an edge. ASPM adds app-level visibility to infrastructure: it can detect which applications, including legacy applications, are operating within the enterprise and which databases and servers those apps are touching. This is accomplished without an agent.

Rajamani likened it to the difference between an X-ray (CNAPP) and an MRI (ASPM). The addition of Bionic provides CrowdStrike with the ability to detect a wider range of potential issues.

"The integration of Bionic means we can greatly reduce the number of alerts to enable analysts to zero in on the ones that matter," said Rajamani. As a result, CrowdStrike will be the first cybersecurity company to deliver complete code-to-runtime cloud security from one unified platform.

Go here to see the original:
CrowdStrike Fal.Con 2023: CrowdStrike Brings AI and Cloud ... - TechRepublic

China Accuses US of Hacking Huawei Servers as Far Back as 2009 – Slashdot

China accused the U.S. of infiltrating Huawei servers beginning in 2009, part of a broad-based effort to steal data that culminated in tens of thousands of cyberattacks against Chinese targets last year. From a report: The Tailored Access Operations unit of the National Security Agency carried out the attacks in 2009 and then continuously monitored the servers, China's Ministry of State Security said in a post on its official WeChat account on Wednesday. It didn't provide details of attacks since 2009. Cyberattacks are a point of tension between Washington and Beijing, which has accused its political rival of orchestrating attacks against Chinese targets ever since Edward Snowden made explosive allegations about U.S. spying. Washington and cybersecurity researchers have said the Asian country has sponsored attacks against the West.

The ministry's accusations emerged as the two countries battle for technological supremacy. Huawei in particular has spurred alarm in Washington since the telecom leader unveiled a smartphone powered by an advanced chip it designed, which was made by Semiconductor Manufacturing International Corp. That's in spite of years-long U.S. sanctions intended to cut Huawei off from the American technology it needs to design sophisticated chips and phones. The U.S. has been "over-stretching" the concept of national security with its clampdown on Chinese enterprises, Foreign Ministry spokeswoman Mao Ning told reporters at a regular press briefing in Beijing on Wednesday. "What we want to tell the US is that suppression and containing of China will not stop China's development. It will only make us more resolved in our development," Mao said.

See the original post here:
China Accuses US of Hacking Huawei Servers as Far Back as 2009 - Slashdot

Nozomi Networks discovers flaws in Bently Nevada protection systems – iTWire

OT security specialist Nozomi Networks has identified three vulnerabilities in the Baker Hughes Bently Nevada 3500 rack model, which is used to detect and prevent anomalies in rotating machinery such as turbines, compressors, motors, and generators.

Nozomi warns that the most serious of the three vulnerabilities may allow an attacker to bypass the authentication process and obtain complete access to the device by delivering a malicious request.

According to Nozomi, "the development of a patch is not planned due to legacy limitations."

The initial discovery was made by reverse engineering the proprietary protocol used by the device, and Nozomi has confirmed that all of these vulnerabilities affect firmware versions of the /22 TDI Module (both USB and serial versions) up to and including 5.05.

Nozomi suggests the following measures to mitigate the issues:

1. RUN mode vs CONFIG mode: PLCs and control systems often implement physical keys to put the device either in RUN mode or in CONFIG mode. The latter is typically used by technicians during maintenance activities to enable write permission for new configurations on the device. One common misconfiguration is to forget to put the device back into RUN mode after a maintenance activity, or to opt for a default always-on CONFIG mode to facilitate remote changes. A best practice is to make sure that devices are always kept in RUN mode whenever possible.

2. Network segmentation: Design and implement proper network segmentation strategies to prevent unauthorised parties from interacting with critical assets. This is especially recommended for legacy solutions that are no longer actively supported by vendors (a simple reachability check is sketched after this list).

3. Strong and unique passwords: Choose credentials that are both strong and unique. Uniqueness is often underestimated, but it provides defence in scenarios where credentials extracted from a vulnerable machine or component could otherwise be reused against fully patched systems that share the same credentials (the sketch after this list also shows one way to generate such credentials).

4. Non-default enhanced security features: Check your device manual for security features that are not enabled by default. These additional features can strongly reduce the likelihood or the impact of a specific vulnerability and mitigate 'hard-to-patch' situations. With respect to Bently Nevada devices, Nozomi Networks recommends that customers review the various security levels made available through the configuration utility and choose the one that matches their specific needs and security policy.
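To make points 2 and 3 concrete, here is a minimal sketch, not taken from Nozomi's advisory, of how an operator might spot-check from a corporate workstation that critical OT assets are unreachable and generate a unique, strong credential per device. The device names, IP addresses and port are hypothetical placeholders for your own environment.

import secrets
import socket
import string

# Hypothetical OT assets: device name -> (management IP, TCP port)
CRITICAL_ASSETS = {
    "bently-3500-rack-01": ("10.20.30.40", 4000),
    "bently-3500-rack-02": ("10.20.30.41", 4000),
}

def reachable(host, port, timeout=2.0):
    """Return True if a TCP connection from this host succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def unique_password(length=20):
    """Generate a strong random password; call once per device so no two match."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # Run from a workstation on the corporate (non-OT) network:
    # every asset that answers is a segmentation gap worth investigating.
    for name, (ip, port) in CRITICAL_ASSETS.items():
        status = "REACHABLE - check segmentation" if reachable(ip, port) else "blocked (expected)"
        print(f"{name:24} {ip}:{port}  {status}")

    # One unique credential per device, never reused across assets.
    for name in CRITICAL_ASSETS:
        print(f"new credential for {name}: {unique_password()}")

Printing credentials to stdout is for illustration only; in practice they would go straight into a password manager or vault, and the reachability check would be run from a host on the corporate side of the segmentation boundary.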

Read the original here:
Nozomi Networks discovers flaws in Bently Nevada protection systems - iTWire