
Revolutionising retail efficiency through the power of the cloud – Cape Business News

Neil Coetzee, Head of Engineering at redPanda Software

Technology has become not just a tool but a catalyst for transformation, bringing greater efficiencies and improving the customer experience. Central to this is the cloud, which combines advanced technologies, edge solutions, and the ability to integrate with the existing retail ecosystem.

Historically, retailers have stuck to traditional deployment methods, for instance, manually updating Point-of-Sale (PoS) systems outside of working hours. This not only presented a risk, with limited after-hours support, but was also expensive, time-consuming, and lacked real-time testing. Invariably, when the store opened the next day, there would be teething issues to iron out. This not only had a detrimental effect on profits, but also negatively impacted the customer experience.

Enter the concept of blue/green deployments. This strategy allows retailers to run parallel application versions in identical environments, enabling seamless transitions between them. In retail, that means changes can be deployed during production hours without interrupting the shopping experience. Instead of an overnight manual PoS update, we now have automated deployments that are tested rigorously, ensuring operational effectiveness.
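To make the idea concrete, here is a minimal sketch of the cut-over step in Python. It assumes a hypothetical health-check endpoint on each environment and a router/load-balancer API that decides which environment receives live traffic; the URLs and field names are placeholders, not any specific product's API.

```python
# Illustrative blue/green switch for a retail PoS API.
# The endpoints and ROUTER_API URL are hypothetical placeholders; in practice
# this would be your load balancer's or DNS provider's own API.
import requests

BLUE = "https://pos-blue.example-retailer.internal"
GREEN = "https://pos-green.example-retailer.internal"
ROUTER_API = "https://router.example-retailer.internal/active-backend"

def healthy(base_url: str) -> bool:
    """Run a smoke test against an environment before sending it live traffic."""
    try:
        resp = requests.get(f"{base_url}/health", timeout=5)
        return resp.status_code == 200 and resp.json().get("status") == "ok"
    except requests.RequestException:
        return False

def promote(new_env: str, fallback_env: str) -> None:
    """Point live traffic at new_env only if it passes its health check."""
    if not healthy(new_env):
        raise RuntimeError(f"{new_env} failed health checks; keeping {fallback_env} live")
    # A single, reversible switch: stores keep trading while the flip happens.
    requests.put(ROUTER_API, json={"backend": new_env}, timeout=5).raise_for_status()

if __name__ == "__main__":
    promote(GREEN, fallback_env=BLUE)
```

If the new version misbehaves, the same call points traffic back at the previous environment, which is the property that makes daytime deployments safe.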

Until recently, enterprise-class retail solutions could only be afforded by the largest stores. However, the emergence of major cloud providers like AWS and GCP in South Africa and the rest of the continent means that even small-scale retailers at an individual store level can leverage powerful cloud-based solutions. The cloud has essentially levelled the playing field, making world-class technology accessible to retailers of all sizes.

Mobilising the in-store experience

The ubiquity of mobile devices, including mobile printers, scanners, and WiFi devices, has reshaped the retail environment. Gone are the days of tethered printers and wired devices, each with their own set of challenges. With cloud integration, these wireless devices can communicate seamlessly, streamlining operations and enhancing the customer experience without relying on an on-premises server.

Historically, local retailers have faced significant connectivity challenges. Yet, with cloud advancements, especially in edge solutions, the dependency on international connectivity has diminished. Now, even if undersea cables fail as was the case recently, retailers can still tap into in-country cloud solutions.

There is also the opportunity to harness the potential of AI within the retail store environment. With cloud solutions, retailers can effortlessly integrate AI-driven managed services. Instead of creating and configuring bespoke applications that must be hosted within their on-premises environments, the cloud offers plug-and-play solutions that provide instant benefits.

This also extends to the cybersecurity segment. At a time when cyber threats are constant, cloud deployments offer unparalleled security. Traditional non-cloud systems, often plagued with outdated security solutions, stand vulnerable. In contrast, cloud deployments require retailers to adhere to stringent security standards set by the cloud providers, thereby safeguarding retailer data and customer information.

Disaster recovery and load shedding resilience

The unpredictability of load shedding can wreak havoc on traditional in-store servers. However, with the cloud, disaster recovery becomes streamlined. Retailers can quickly restore operations with the click of a button, bypassing prolonged downtimes.

Furthermore, while in-store servers might require uninterrupted power supplies (UPS) to function during load shedding, mobile devices, now integral to modern retail, rely on the cloud. This means a diminished dependency on in-store servers, pushing retailers towards a more resilient and agile operational model.

Deploying the cloud within an extensive retail environment remains a costly exercise. The retailer must understand the use case for it, how the cloud can be made fit for purpose, and the benefits it can provide to the store environment.

In navigating the challenges and opportunities of the modern retail landscape, cloud solutions offer an unmatched combination of agility, resilience, and efficiency. As we at redPanda Software have seen, the integration of the cloud into retail operations is not just a step forward; it's a leap into the future.


Desktop as a Service (DaaS) Working and Importance | Spiceworks – Spiceworks News and Insights

Desktop as a service or DaaS is a cloud-based desktop service where vendors stream desktops, operating systems, and application images remotely to users' devices and look after provisioning, maintenance, upgrades, and other routine tasks, all through a one-stop-shop subscription offering.

Figure: DaaS providing full abstraction for virtual desktops (source: TIBCO).

The modern-day workforce needs to respond to emails and interact with coworkers from anywhere and on any device. The pandemic showed that remote teams can succeed regardless of location or distance. However, many businesses still strive to meet the technological requirements of their hybrid and remote employees. Here, desktop-as-a-service, also known as DaaS, can be immensely beneficial.

Desktop-as-a-service is a comprehensive cloud-based virtual desktop solution. It provides multiple user services, including operating systems, apps, data storage, and user preferences, all in a single bundle. It is an efficient, secure, and cost-effective desktop virtualization strategy.

DaaS liberates businesses from dependence on physical hardware for their operating systems and productivity software. Instead, enterprises can utilize DaaS to connect to virtual desktops from a cloud service provider over the internet. Cloud providers can provide this service by distributing and administering virtual desktops through their own data centers.

Three key traits define DaaS solutions, which are:

IT administrators deploy operating systems and apps on every employee's device in a typical desktop deployment approach. Here, administrators frequently invest excessive time and resources in mundane tasks such as installing software, managing enhancements and updates, and trying to safeguard devices.

This traditional terminal deployment model is also unsuitable for an increasingly mobile and remote workforce. Many employees now work at home or while traveling, using various devices such as desktops, laptops, and/or mobile phones.

Organizations must, therefore, provide a robust and standardized user experience across all these devices to help these employees work efficiently. DaaS allows individuals to enjoy access to the same data and apps, irrespective of the device they use. As such, desktop as a service is the perfect fit for large organizations trying to cut down on IT costs and resources. This model is also suitable for companies with a mid-sized to large remote or mobile workforce.

Is DaaS simply desktop virtualization? The answer is no; virtualization is a part of DaaS, but desktop as a service includes several other cloud-delivered services priced on a subscription model.

Virtual desktop infrastructure (VDI) solutions such as DaaS offerings deliver desktops to devices from a consolidated data center. This is why the two terms are commonly confused. However, the infrastructure underpinning the VDI solution is typically on-premise and controlled by the IT department.

A cloud provider administers the storage, computation, and network infrastructure in the DaaS model. The organization offering desktops to its staff can oversee the desktop operating system, apps, anti-virus software, and any other desktop-related tasks or even partner with a managed desktop service provider.

DaaS eliminates the substantial upfront costs that come with establishing VDI on-premise. Typically, DaaS offerings come with a subscription-based plan that requires no upfront payment. Additionally, you can delegate all the administrative tasks necessary to support, repair, adapt, or update the VDI.


Described in one sentence, DaaS works like this: a service provider distributes virtual desktops on a per-user model and delivers them to end users through the internet. The provider then offers support services for these virtual desktops. It administers the backend for companies that find it prohibitively costly or resource-intensive to create their own virtual desktop infrastructure.

DaaS enables a company to function with a smaller IT department as the vendor handles implementation, connectivity challenges, and various end-user concerns. That is essentially how desktop as a service or DaaS works.

Let's look at some of the key features that support the functioning of DaaS.

1. Flexible licensing

This allows businesses to transfer their existing licenses and permits, if available, or to seamlessly incorporate licensing into the DaaS solution. The latter eliminates the inconvenience of configuring individual license credentials.

2. Low-latency connection

A complete user experience that replicates using an actual office desktop is one of the most crucial aspects of DaaS. Employees want seamless application access, high-quality audio, and a video interface without jitters or delays. This may depend on server configuration or the geographic location of data centers and mainframes.

3. Multi-user access

A company can avoid wasting resources on different workstations for every user through the DaaS model. DaaS enables individuals to perform tasks on the exact same workstation at different times while still storing their data and apps independently.

4. Local integration

A cloud-based workstation does not eliminate the need for users to have access to localized devices such as printers and credit card readers. As a result, a DaaS platform would facilitate the automatic and seamless integration of the desktop with these local devices.

5. Unified management consoles

While DaaS is administered completely by the vendor, the organization's IT team will require visibility and control. A central management console will facilitate large-scale DaaS deployments. By consolidating where apps and desktops are managed, the company can save the time and effort of its IT personnel.

6. Resource optimization

DaaS can execute a resource optimization program to help a company save money when machines are inactive. For instance, it can autonomously shut down a workstation after the user has been inactive for a predetermined time, particularly after regular business hours. This disconnection of inactive desktops also compels users to initiate new sessions as and when required. This reduces the likelihood of malicious conduct or security threats.
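As an illustration of that policy, the sketch below models the idle-timeout logic in Python. The Workstation class and its stop() method are stand-ins for whatever API an actual DaaS platform exposes, and the thresholds are assumed values rather than any vendor's defaults.

```python
# Sketch of the idle-timeout policy described above. Workstation.stop() stands
# in for whatever "stop session / stop VM" call a real DaaS platform provides.
from dataclasses import dataclass
from datetime import datetime, timedelta

IDLE_LIMIT = timedelta(minutes=30)   # assumed policy: 30 minutes of inactivity
BUSINESS_HOURS = range(8, 18)        # assumed 08:00-18:00 working day

@dataclass
class Workstation:
    name: str
    last_activity: datetime
    running: bool = True

    def stop(self) -> None:
        self.running = False
        print(f"{self.name}: session disconnected and workstation stopped")

def reap_idle(workstations: list[Workstation], now: datetime) -> None:
    after_hours = now.hour not in BUSINESS_HOURS
    for ws in workstations:
        idle_for = now - ws.last_activity
        # Shut down anything idle past the limit; be stricter after hours.
        if ws.running and (idle_for > IDLE_LIMIT or (after_hours and idle_for > IDLE_LIMIT / 2)):
            ws.stop()
```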

7. Various access options

DaaS platforms offer numerous login options to accommodate the preferences of diverse employees and consumers. Some users like to access their workstations via a web browser, while others appreciate downloadable or independent versions. Most DaaS solutions can accommodate all of these requirements.

An important part of understanding how DaaS works is also knowing the challenges that it can throw up. When implementing desktop-as-a-service, the following aspects should be kept in mind.

1. Risk of high costs in the long term

In terms of initial investment, DaaS is less expensive than VDI. However, as time passes, the subscription fees may become prohibitively high. In addition, DaaS license fees can outweigh the time and effort saved by not running virtual desktops yourself. DaaS pricing is still more consistent and predictable than VDI pricing but isn't fundamentally cheaper over the long run.

2. Challenges around customizability

Another thing to consider is that a one-size-fits-all strategy may not be optimal for all businesses. Some DaaS products lack features an organization may require. Compliance and security regulations differ from one organization to another, making it hard to find a DaaS product that perfectly meets your needs.

3. Limited control

Desktop as a service allows limited control over system upgrades and safety in general, resulting in issues that IT teams cannot resolve. For instance, if the public cloud hosting an organization's virtual desktops is disrupted, the company faces an efficiency blockade. If there's a connectivity challenge, the IT team would have far less visibility into the network's functioning and would have to wait for the service provider to fix the issue.


Desktop as a service, or DaaS, is fast becoming an important component in the IT stack of organizations, for several reasons.

The proliferation of remote and hybrid work settings has mandated a new method to access apps and data. IT teams can simply move data between platforms using DaaS. Users can access the information they need from various devices, regardless of location.

The vendor streamlines the complex systems underpinning DaaS. You do not need to build internal IT competencies or recruit skilled virtual desktop specialists. Instead, you can work together with your users to meet their expectations. IT teams are not required to oversee every endpoint, reducing their overall workload.

Subscription-based DaaS enhances the predictability of operational expenses and lowers IT costs. You are not required to invest in setting up and running a data center. CIOs and finance departments have recognized the importance of DaaS since it removes unnecessary hardware and software expenditures from their balance sheets.

DaaS can free employees from having to work exclusively on company-issued devices. They don't even have to restrict themselves to a single device. With DaaS, IT teams can readily facilitate bring-your-own-device (BYOD) policies, allowing employees to work on their own smartphones, tablets, or laptops. It makes it easier to switch between devices and still access the same apps and processes.

Data is located in the cloud provider's data center and not on the devices that employees use. It is simple to disconnect a lost or stolen machine/gadget from the cloud-based service, simplifying endpoint security.

Numerous resources would be required to replace all obsolete equipment and legacy devices. In such cases, organizations can use DaaS to update their operating system. Running the latest operating system from the cloud is more cost-effective than replacing a fleet of on-premises hardware.

Various cloud providers offer DaaS backup and recovery options. This prevents any disruption if a workstation crashes, no matter the size of your IT department. To enhance business continuity, you can back up desktop files and layouts per predefined SLAs.

In virtualized environments, network latency significantly impacts the general user experience. Through the network, users transmit keystrokes or mouse clicks to a remote computer. Desktop performance depends heavily on network connectivity strength.

Traditional VDI did not achieve widespread adoption because it was frequently sluggish due to inherent network problems. 5G connectivity is now tackling network latency issues as they happen. This will generate an effortless user experience for individuals using workstations driven by DaaS.

The hardware component is already configured in a DaaS implementation. It only requires a connection to a fresh device. IT departments can set it up without any configuration requirements. DaaS can thus save seasonal businesses (with cyclical fluctuations in demand or personnel) a massive amount of time and money.

Some DaaS vendors offer browser support. This is a safe and straightforward way for consumers to access DaaS. Instead of deploying a virtual desktop, users can opt for a browser-friendly virtualized desktop using a browser approved by the DaaS vendor.

The Web Access feature of Amazon WorkSpaces and the Azure Virtual Desktop web client are two instances of this possibility. This further reduces IT effort and supports niche use cases, such as testing within a sandbox environment.

These three use cases further illustrate the importance of DaaS.


Desktop as a service (DaaS) enables simplified administration, more versatility, and reduced overall cost of ownership compared to legacy models. Companies that want to offer options for remote work and device flexibility can set up a digital workspace quickly and effortlessly with DaaS. Users can sign in to their virtual desktops from wherever they are and from various machines, and their desktops will appear identical to the last time they checked in from a different location.

Since the data resides in a centralized, remote location, it is continually backed up. This means that internal IT teams do not need to manage backups themselves. They also need not be concerned about files residing on a machine at the workplace but not at home.

Given these overwhelming benefits, companies are increasingly turning to DaaS for their employees' IT needs. Despite its challenges, desktop as a service can add significant value to an organization's IT infrastructure.




Cloudflare helps companies reduce their IT infrastructure’s carbon footprint by up to 96% by moving to the cloud – ZAWYA

DUBAI, UAE: Cloudflare, Inc. (NYSE: NET), the security, performance, and reliability company helping to build a better Internet, today shared a new independent report published by Analysys Mason that shows switching enterprise network services from on premises devices to Cloudflare's cloud-based services can cut related carbon emissions by up to 78% for very large businesses and up to 96% for small businesses. The report is one of the first of its kind to calculate potential emissions savings achieved by replacing enterprise network and security hardware boxes with more efficient cloud services.

Global Internet usage accounts for 3.7% of global CO2 emissions, about equal to the CO2 emissions of all air traffic around the world. The Internet needs to reduce its overall energy consumption, especially as regulators continue to implement the Paris Climate Accord, including plans to transition to a zero emissions economy. The European Climate Law requires that Europe's economy and society become climate-neutral by 2050, with a target of reducing net GHG emissions by at least 55% by 2030, compared to 1990 levels. Regulators in the United States and the European Union, among others, have also announced plans to require companies to disclose climate-related information including carbon emissions resulting from their operations and supply chains, as well as climate-related risks and opportunities. Finally, among the Fortune Global 500, 63% of companies now set 2050 targets for emissions reductions. Companies large and small will increasingly be looking to reduce carbon throughout their supply chains, particularly their IT infrastructure.

"The best way to reduce your IT infrastructure's carbon footprint is easy: move to the cloud," said Matthew Prince, CEO and co-founder, Cloudflare. "At Cloudflare, we've built one of the world's most efficient networks, getting the most out of every watt of energy and every one of our servers. That's why, with Cloudflare, companies can help hit their sustainability goals without sacrificing security, speed, performance, or innovation."

The Analysys Mason study found that switching enterprise network services from on premises devices to Cloudflare services can cut related carbon emissions up to 96%, depending on the current network footprint. The greatest reduction comes from consolidating services, which improves carbon efficiency by increasing the utilization of servers that are providing multiple network functions. On premises devices are designed to host multiple workloads and consume power constantly, but are only used for part of the day and part of the week. Cloud infrastructure is shared by millions of customers, often all over the world. As a result, cloud providers are able to achieve economies of scale that result in less downtime, less waste, and lower emissions. Furthermore, the Analysys Mason study found that there are additional gains due to the high Power Usage Effectiveness of cloud data centres, and differences in the carbon intensity of generation in the local electricity grid.

"Happy Cog is a full-service digital agency that designs, builds, and markets experiences that engage our clients and their audiences. We've relied on Cloudflare for many of those websites and apps because it's secure, reliable, fast, and affordable but also aligns with many of our clients' sustainability roadmaps and goals," said Matt Weinberg, Co-Founder and President of Technology at Happy Cog. "Switching our clients from their previous on premises or other constant-usage infrastructure to Cloudflare's network and services has let them be greener, more efficient, and more cost effective. It's ideal when you can offer your clients a solution that covers all their needs and provides a delightful experience now, without having to compromise on their longer term priorities."

Report Methodology

Analysys Mason compared a typical hardware stack deployed in an enterprise data center or IT closet, and its associated energy consumption, to the energy consumption of comparable functions delivered by Cloudflare's global network. Traffic requirements were translated to energy requirements for both on-premise and cloud-based alternatives. The analysis includes assumptions for the power usage effectiveness (PUE) of cloud data centers vs. on-premises data centers or data rooms, and the carbon from electricity, based on the mix of fossil fuel versus renewable energy sources in the local grid.


About Cloudflare

Cloudflare, Inc. (www.cloudflare.com/ @cloudflare) is on a mission to help build a better Internet. Cloudflare's suite of products protect and accelerate any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare have all web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was awarded by Reuters Events for Global Responsible Business in 2020, named to Fast Company's Most Innovative Companies in 2021, and ranked among Newsweek's Top 100 Most Loved Workplaces in 2022.


15 VPS Security Tips to Prevent Attacks on Your Server – G2

A Linux virtual private server (VPS) stands as a trusted choice for companies across the world.

The flexibility and power of Linux VPS make it a prime pick. Yet there's a dark cloud hovering: cyber threats.

The facts cause alarm.

In March 2023, according to IT Governance, 41.9 million records, mainly driver's licenses, passport numbers, monthly financial statements, etc., were compromised by cyberattacks worldwide.

Additionally, the three biggest security incidents of May 2023 alone accounted for more than 84 million breached records, 86% of the month's total. The easiest target? An inadequately secured server.

An inadequately secured VPS waits like a ticking time bomb, ready to blow a hole in your reputation, finances, and customer trust. Thankfully, fortifying your Linux VPS isn't string theory, but you have to practice diligence, expand awareness, and employ proven security measures.

In this guide, we're going to talk about 15 VPS security tips. Simple, actionable, and indispensable, these strategies will convert your server from vulnerable to vault.

Keeping a VPS protected from potential threats and weaknesses involves a suite of protocols, tools, and best practices. Essentially virtualized servers that mimic dedicated servers within larger physical servers, VPSs are highly susceptible to cyber threats due to their connectivity to the internet.

VPS security shields these digital environments from unauthorized access, malware, Distributed Denial-of-Service (DDoS) attacks, or further security breaches.

Linux VPS, though reputable for its robust security framework, is not impervious to threats.

Like any other system, vulnerabilities emerge, and hackers constantly prowl for any weak points they can leverage.

At its source, VPS technology relies on bare-metal servers, which inherently bolster security for web hosting.

Bare-metal servers are physical servers dedicated exclusively to one tenant. This exclusivity ensures complete control over the hardware, eliminating multi-tenancy risks. With this control, there's minimal chance for one user's vulnerabilities to affect another's.

Next in line is the hypervisor.

This software marvel divides a bare-metal server into multiple VPS instances. By partitioning and sharing resources, it hosts several virtual environments on a single host machine. It remains isolated, often out of the general public's reach, curbing potential security breaches.


When we pit VPS against shared hosting, the former takes the prize.

With shared hosting, one vulnerability can expose all hosted sites. With a VPS, even if you're technically sharing a bare-metal server, the partitioned and virtualized environments offer layers of security buffers, making VPS a safer bet.

While technology has provided businesses with tools to scale and operate efficiently, it's also opened the gates to sophisticated cyber threats. Your server, the backbone of your online presence, demands unwavering protection.

A lapse in online safety isn't just a technical glitch; it's a breach of trust, a dent in reputation, and a potential financial pitfall. Which proactive measures should you take to turn your server into an impenetrable fortress against cyber threats?

Root logins grant users the highest level of server access. By logging in as root, anybody can make whatever changes they want, clearly a huge risk. Administrators should ideally use a non-root user account with the necessary privileges and then switch to a root user when essential.

By disabling direct root logins, they can shrink the attack surface.

Dropbox once experienced a data breach because an employee used a password from a site that had been hacked.

Logs record all activities that happen on your server. Regular log monitoring allows you to spot any unusual patterns or potential security breaches. Early detection means the difference between thwarting a hacking attempt and dealing with a full-blown crisis.

For example, if a shoplifter visits multiple times, the shop owner can detect patterns in their behavior. Similarly, consistent log analysis signals repeated unauthorized access attempts.
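A minimal version of that kind of log analysis might look like the following Python sketch, which counts failed SSH logins per source IP in a Linux auth log. The log path, message format, and alert threshold are assumptions that vary by distribution and logging setup.

```python
# Minimal log-analysis sketch: count failed SSH logins per source IP in an
# auth log and flag repeat offenders.
import re
from collections import Counter

AUTH_LOG = "/var/log/auth.log"   # Debian/Ubuntu-style location; varies by distro
THRESHOLD = 5                    # assumed alerting threshold

failed = Counter()
pattern = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

with open(AUTH_LOG, encoding="utf-8", errors="ignore") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            failed[match.group(1)] += 1

for ip, count in failed.most_common():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed SSH logins from {ip}")
```

In practice this logic would run on a schedule or feed a proper monitoring pipeline, but the principle is the same: surface the pattern before it becomes a breach.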

The Equifax breach in 2017 affected 143 million people. The culprit turned out to be an unpatched vulnerability in the Apache Struts web application software, an unnecessary module for most.

What does this mean?

Every pre-installed software package or module can potentially introduce vulnerabilities, and not all are necessary for your operations. Removing unused or obsolete packages reduces the number of possible entry points.

Secure shell (SSH) is commonly used to safely access servers. However, attackers often target the default port 22. By simply changing this to a non-standard port, you can dodge many automated attack attempts.

Moreover, using SSH keys (cryptographic keys used instead of passwords) fortifies security. SSH keys are more complex and harder to crack than even the strongest passwords.

Major companies encourage the use of SSH keys for authentication. GitHub, for one, emphasizes its security benefits over traditional passwords.
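The following Python sketch, written against a standard OpenSSH sshd_config layout, simply audits the settings discussed above (non-default port, no direct root login, key-only authentication) and reports what needs attention. Applying the changes and restarting the SSH service are left to your own tooling, and the recommended port is only an example.

```python
# Quick read-only audit of common sshd_config hardening settings.
from pathlib import Path

RECOMMENDED = {
    "port": "2222",                  # any non-default port of your choosing
    "permitrootlogin": "no",         # disable direct root logins
    "passwordauthentication": "no",  # force SSH keys instead of passwords
}

def audit(config_path: str = "/etc/ssh/sshd_config") -> None:
    settings = {}
    for line in Path(config_path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition(" ")
            settings[key.lower()] = value.strip()

    for key, wanted in RECOMMENDED.items():
        actual = settings.get(key, "<default>")
        status = "OK " if actual.lower() == wanted else "FIX"
        print(f"[{status}] {key}: {actual} (recommended: {wanted})")

if __name__ == "__main__":
    audit()
```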

iptables functions as an internal firewall, controlling the traffic that goes in and out of your server.

By filtering and setting rules on IP packets, you can decide which connections to allow and which to block. This gives you another shield against hackers.

Major web platforms, such as Amazon Web Services, frequently emphasize the importance of setting up correct iptables rules to secure resources.
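As one hedged illustration of such rules, the sketch below shells out to the iptables command to apply a basic default-deny policy that allows only loopback traffic, established connections, and a handful of service ports. The allowed ports are examples, and anything like this should be tested from console access so you cannot lock yourself out.

```python
# Sketch of a minimal default-deny ruleset applied through the iptables CLI.
import subprocess

ALLOWED_TCP_PORTS = [2222, 80, 443]   # example: SSH on a custom port, HTTP, HTTPS

def iptables(*args: str) -> None:
    subprocess.run(["iptables", *args], check=True)

def apply_baseline() -> None:
    iptables("-F")                                             # flush existing rules
    iptables("-A", "INPUT", "-i", "lo", "-j", "ACCEPT")        # allow loopback
    iptables("-A", "INPUT", "-m", "conntrack",
             "--ctstate", "ESTABLISHED,RELATED", "-j", "ACCEPT")
    for port in ALLOWED_TCP_PORTS:
        iptables("-A", "INPUT", "-p", "tcp", "--dport", str(port), "-j", "ACCEPT")
    iptables("-P", "INPUT", "DROP")                            # default-deny everything else

if __name__ == "__main__":
    apply_baseline()
```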

While Linux is often praised for its robust security, it's not immune to threats.

Installing antivirus on your VPS helps detect and neutralize malicious software to keep your data safe and uncompromised. Just as software has protected millions of computers worldwide by detecting threats in real time, an antivirus for your server continuously scans files and processes to keep malware at bay.

In 2021, the ransomware attack on the Colonial Pipeline resulted in a shutdown and disrupted fuel supplies all across the East Coast of the United States.

Taking regular backups of your data protects you and your server from such disasters. By having backups, you can restore everything to its previous state in the event of a data loss incident.
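A very small backup routine along those lines might look like the following Python sketch, which writes a timestamped archive and prunes old copies. The source path, destination, and retention window are assumptions to adapt, and the destination should ideally be a separate disk or remote location.

```python
# Minimal nightly backup sketch: archive a data directory with a timestamp and
# keep only the most recent copies.
import tarfile
from datetime import datetime
from pathlib import Path

DATA_DIR = Path("/var/www")    # what to protect
BACKUP_DIR = Path("/backups")  # ideally a separate disk or mount
KEEP_LAST = 7                  # retention: one week of nightly archives

def run_backup() -> Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = BACKUP_DIR / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(str(DATA_DIR), arcname=DATA_DIR.name)

    # Prune archives beyond the retention window.
    archives = sorted(BACKUP_DIR.glob("backup-*.tar.gz"))
    for old in archives[:-KEEP_LAST]:
        old.unlink()
    return archive

if __name__ == "__main__":
    print(f"Backup written to {run_backup()}")
```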

IPv6, the latest version of the internet protocol, can introduce new risks if it is enabled but not properly configured and secured. Disabling it when it is not needed prevents those potential vulnerabilities and attacks.

Disabling IPv6 reduces the attack surface and potential exposure to cyber threats.

Every open port on your VPS is a potential gateway for cyberattacks. By disabling ports you don't use, you're essentially shutting unnecessary open doors. It makes it harder for intruders to get in.

Disabling unused ports lowers the risk of human error.

GNU Privacy Guard (GnuPG) encryption helps encrypt and sign your data and communication. It provides a secure layer so your data remains confidential and tamper-proof.

In 2022, a ransomware variant called "LockFile" was discovered that used GnuPG encryption to encrypt files on infected systems. The ransomware was particularly sneaky, targeting specific organizations and slipping past standard security protocols.

Rootkits are malicious software platforms that can gain unauthorized access to a server and remain hidden. Installing a rootkit scanner neutralizes the hidden threats.

In 2023, the cybersecurity community identified a novel rootkit named "MosaicRegressor" that specifically targeted Linux servers. Alarmingly, it could slip past conventional security protocols with ease.

Your firewall is your server's bouncer. It checks all the data coming in and going out. With the right rules and guidelines, firewalls stop dodgy requests or block certain unwanted IP addresses.

For instance, businesses with a DDoS attack problem could often mitigate the effects using well-configured firewalls.

Make sure only the right people have access to your server. We often look out for dangers from the outside, but sometimes, the troublemaker might be calling from inside the house.

In November 2021, a glaring example surfaced when a former employee of the South Georgia Medical Center in Valdosta, Georgia, downloaded confidential data onto one of their own USB drives a day after quitting the job.

Regularly reviewing and updating user permissions prevents potentially disastrous situations like this.

To conduct disk partitioning, you have to split your server's hard drive into multiple isolated sections so that if one partition faces issues, the others remain functional.

File transfer protocol (FTP) was once the go-to method for transferring files, but it lacks encryption, meaning data sent via FTP is vulnerable to eavesdropping. Secure file transfer protocol (SFTP) was then developed to work similarly to FTP with the added bonus of data encryption.

Think about when you transmit customer details or confidential business data. Using SFTP is similar to sending a sealed, secure courier package, whereas using FTP is like sending a postcard anyone can read it if they intercept it.
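For illustration, here is a short Python sketch of an SFTP upload using the third-party paramiko library; the hostname, username, key path, and file paths are placeholders.

```python
# SFTP upload sketch using paramiko (pip install paramiko).
import paramiko

HOST = "backup.example.com"
USER = "deploy"
KEY_FILE = "/home/deploy/.ssh/id_ed25519"

def upload(local_path: str, remote_path: str) -> None:
    client = paramiko.SSHClient()
    client.load_system_host_keys()         # trust only hosts already in known_hosts
    client.connect(HOST, username=USER, key_filename=KEY_FILE)
    try:
        sftp = client.open_sftp()
        sftp.put(local_path, remote_path)  # encrypted in transit, unlike plain FTP
        sftp.close()
    finally:
        client.close()

if __name__ == "__main__":
    upload("reports/customers.csv", "/srv/uploads/customers.csv")
```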

Choosing a hosting service isn't just about speed and uptime; a secure hosting provider is the first line of defense against potential cyber threats. Seek out providers that prioritize end-to-end encryption, regularly update their systems, and offer consistent backups.

Reviews and testimonials can be valuable, but deepen your understanding by asking the following questions:

Cyber threats are often closer than you think. Even minute vulnerabilities can invite hackers to infiltrate your systems. Recognizing weak spots and acting promptly fortifies your VPS security.

Peruse these common pitfalls to learn how to circumvent them.

The hackers' favorite gateway is a frail password. According to a survey by the UK's National Cyber Security Centre, 23.2 million victims used "123456" as passwords that were later stolen.

A whopping 81% of company data breaches are due to stolen, weak passwords.

Fix: Enforce a password policy that requires alphanumeric characters, special symbols, and varying cases to reduce the reliance on easily guessable phrases. Password manager software can generate and store complex passwords.

Recommendations from the National Institute of Standards and Technology call for people to create passwords that are long, easy-to-remember phrases: a series of four or five words mashed together.
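In that spirit, the sketch below shows two small Python helpers: a policy check for conventional passwords and a passphrase generator built on the standard-library secrets module. The word list is a tiny illustrative sample rather than a real curated list.

```python
# Password policy check plus a simple random passphrase generator.
import re
import secrets

WORDS = ["orbit", "velvet", "cactus", "harbor", "pixel", "summit", "lantern", "mango"]

def meets_policy(password: str) -> bool:
    """Mixed case + digit + special character, minimum 12 characters."""
    return all([
        len(password) >= 12,
        re.search(r"[a-z]", password),
        re.search(r"[A-Z]", password),
        re.search(r"\d", password),
        re.search(r"[^A-Za-z0-9]", password),
    ])

def passphrase(words: int = 4) -> str:
    """Long, easy-to-remember phrase of random words, per the NIST-style advice."""
    return "-".join(secrets.choice(WORDS) for _ in range(words))

print(meets_policy("123456"))   # False
print(passphrase())             # e.g. "harbor-pixel-orbit-mango"
```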

Running outdated software is akin to leaving your doors unlocked. Cybercriminals constantly look for known vulnerabilities in old versions the same way house thieves look for overgrown lawns and full mailboxes.

Consider the WannaCry ransomware attack, which exploited older Windows versions and affected over 200,000 computers.

Fix: You need to regularly update and patch software. IT teams can adopt automated systems, like unattended upgrades for Linux, to keep software updates timely.

An open port is like an unlocked door for hackers. For instance, the Redis database vulnerability resulted from unprotected ports.

Fix: Use tools like Nmap to scan and identify open ports. Close unnecessary ports and employ firewalls like UFW or iptables to restrict access. The fewer doors you have open, the fewer ways to sneak in.
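As a rough stand-in for an Nmap scan, the following standard-library Python sketch sweeps a handful of common ports on a host and reports which ones accept connections; the host and port list are assumptions.

```python
# Quick local port sweep; nmap gives far richer results, but this shows the
# basic idea of finding what is listening before deciding what to close.
import socket

HOST = "127.0.0.1"
COMMON_PORTS = [21, 22, 25, 80, 443, 2222, 3306, 6379, 8080]

open_ports = []
for port in COMMON_PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        if sock.connect_ex((HOST, port)) == 0:   # 0 means the connection succeeded
            open_ports.append(port)

print(f"Open ports on {HOST}: {open_ports or 'none found'}")
```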

Overprivileged users spell disaster. Having analyzed quarterly reports for 500 companies, Accenture reported that 37% of cyberattacks in businesses originate with internal actors.

Fix: Set up the principle of least privilege (PoLP). Assign roles based on necessity and audit user permissions routinely. Ensuring that each user has only the permissions they need minimizes potential damage.
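A toy version of this kind of role-based, least-privilege check can be expressed in a few lines of Python; the roles and function names below are illustrative only.

```python
# Toy least-privilege check: each role maps to the functions it may call,
# and anything not listed is denied by default.
ROLE_FUNCTIONS = {
    "payroll_clerk": {"view_payroll"},
    "payroll_manager": {"view_payroll", "modify_payroll"},
    "auditor": {"view_payroll", "export_audit_log"},
}

def can_call(role: str, function: str) -> bool:
    return function in ROLE_FUNCTIONS.get(role, set())

def invoke(role: str, function: str) -> None:
    if not can_call(role, function):
        raise PermissionError(f"role '{role}' may not call '{function}'")
    print(f"{role} called {function}")

invoke("payroll_clerk", "view_payroll")          # allowed
try:
    invoke("payroll_clerk", "modify_payroll")    # denied by default
except PermissionError as err:
    print(err)
```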

Without a vigilant eye on server operations, irregularities go unnoticed and pave the way to potential threats.

Take a situation where an unexpected surge in traffic occurs. This might be a DDoS attack, but without proper supervision, someone could easily misconstrue it as a sudden influx of genuine users.

Fix: Invest in monitoring tools. Periodically review logs and set up alerts for unusual incidents, because you can't protect what you can't monitor.

Function-level control goes beyond general user permissions and dives into the specific tasks a user can perform.

Say an employee in a company's finance department has access to view and modify payroll data. Without clear boundaries, that employee could effect unintended changes, errors, or even malicious activities.

Fix: Implement function-based access control (FBAC) systems to make sure that users only access the functions vital to their role. Regular audits of these permissions further fine-tune and secure access.

By controlling functions, you're not just limiting access; you're molding a secure, role-appropriate environment for each user.

As cyber dangers grow trickier and more common, an unprotected server can lead to big problems. You might lose important data, and you might lose the faith people have in you.

Keeping a VPS safe is like tending to a garden; you've got to keep at it. By staying updated and following good safety tips, you're building a strong defense.

And remember, by guarding your server, you're showing your users you really care about their trust.

Dive deep into the basics of VPS hosting and learn more about its types, benefits, and best practices to follow to make VPS hosting work for you.


UCaaS, CCaaS, CPaaS – What's the Difference? – Telecom Reseller

With the increasing demand for quick and convenient ways to reach organizations, various innovative technologies have emerged in the past few years. UCaaS, CCaaS, and CPaaS are a few of the technologies that have gained a lot of recognition in recent years. All of them are cloud-based and share similarities; however, they are quite different from each other and serve different purposes. In this blog, we will get a detailed insight into all three of these cloud-based technologies.

In today's knowledge-driven business world, how a business shares and exchanges information defines its success. Companies, especially call/contact centers, sales and service teams, and many others, are reorganizing and updating their communication strategies to stay competitive. Various developments have been made in the way people share and exchange information; however, the major development throughout this shift has been the emergence of cloud technology solutions. You might have already heard about the term cloud, also known as cloud computing.

What is cloud computing?

Cloud computing is simply an array of services delivered over the internet or the cloud. Instead of relying on local hard drives and private data centers, cloud computing uses remote servers to store and access data.

Before the emergence of cloud computing, businesses primarily depended on databases, servers, hardware, software, and other supplements to take their businesses online. Companies used to invest heavily in purchasing and maintaining their own servers to meet their business requirements. To minimize the risk of outages and downtime and to accommodate peak traffic volume, businesses had to buy enough server space. In such cases, businesses ended up having large amounts of server space sitting unused for much of the time.

The emergence of cloud services has made it easier for businesses to store and maintain information without investing a huge amount on servers. With these services, companies can save a lot by eliminating the need for onsite servers, maintenance workforces, and other costly IT resources. Instead of buying, owning, and maintaining physical data centers and servers, users can access various technology services including databases and storage as per their specific requirements from a cloud provider. With all these things in mind, it would be correct to say that cloud computing has completely transformed the way businesses used to store and share information.

Who all can use cloud computing?

Businesses of every size, type, and industry can benefit from using cloud services for a variety of reasons.

The most popular cloud-based services: UCaaS, CCaas, and CPaaS

Since businesses are swiftly shifting from on-premises communication models to cloud-based communication models, they are using various types of services to enhance their business communication. Among the "as a service" models in cloud-based communication systems, the most popular are UCaaS, CCaaS, and CPaaS.

All these cloud-based communication systems share similar characteristics but vary in terms of their services and functionalities. Still, many people get confused about these cloud-based services.

For a better understanding of each service, let's discuss all three of these cloud-based technologies in detail.

What is UCaaS?

UCaaS is short for Unified Communications as a Service. It is an all-in-one cloud-based platform that has developed from the concept of unified communications (UC). With the help of UCaaS vendors, businesses can streamline and integrate their communication channels as well as their people easily and effectively. UCaaS vendors generally emphasize areas like voice, SMS, chat, and conferencing applications. This cloud-delivered unified communications model is widely used to support a range of communications functions.

Since the cloud hosts UCaaS services, companies don't need to install and maintain any additional hardware or infrastructure. UCaaS providers are responsible for owning and maintaining the platform and software. With the help of UCaaS services, agents can provide superior customer service. They can access various advanced company tools and data to assist customers without having to switch back and forth between applications. Even customers can connect with businesses via their preferred medium, which makes their support experience even better.

Advantages of the UCaaS Model

Some of the key advantages of choosing the UCaaS model include:

What is CCaaS?

CCaaS is short for Contact Center as a Service. CCaaS enables organizations to use a remotely hosted, cloud-based set-up for the smooth running of their contact center operations. CCaaS vendors build different products on APIs (Application Programming Interfaces) that work across several operating systems and Internet browsers. The service providers generally focus on offering companies enough capacity and tools to handle and manage customer interactions, queries, and service/support cases easily. Using CCaaS, contact center agents can easily access their company's contact center software from their mobile device or any web browser, from anywhere and at any time. The CCaaS model is widely used to support a range of communications functions.

Due to the increasing dependence on digital communication channels, especially after the pandemic situation, the CCaaS market offerings are expected to grow to $27.8 billion by 2023. Since the platform is hosted in the cloud, users need not invest in any costly hardware. It lets customers connect with a company via their preferred way of communication which leads to a better customer experience.

Advantages of CCaaS Model

Here are some of the key benefits of choosing a CCaaS model:

What is CPaaS?

CPaaS is short for Communications Platform as a Service. This platform is an extension of both UCaaS and CCaaS. It allows companies to add various communication features to their existing applications. To do this, they do not need to invest in completely new hardware or infrastructure or any additional technical assistance. In simple words, the CPaaS platform lets users build real-time communication features without the need to create their own tech stack. CPaaS vendors provide users with different APIs and code snippets to add to their cloud-based platforms and environments. The CPaaS model is widely used to support a range of communications functions.

CPaaS provides users with prebuilt features and communication channels, thereby eliminating the need to hire any expert team to add APIs and code snippets. The platform includes almost everything like pre-built applications, sample codes, an application programming interface (API), and more to assist companies in integrating desired solutions quickly and easily.
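To show the pattern rather than any particular vendor's API, the sketch below sends an SMS-style notification through a generic CPaaS REST endpoint using Python; the URL, payload fields, and response field are hypothetical placeholders.

```python
# Illustrative CPaaS usage: the application calls a provider's messaging API
# instead of running any telephony infrastructure itself. Endpoint and fields
# are generic placeholders, not a specific vendor's API.
import requests

CPAAS_API = "https://api.cpaas-provider.example/v1/messages"
API_KEY = "your-api-key"

def send_order_update(phone_number: str, message: str) -> str:
    response = requests.post(
        CPAAS_API,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"to": phone_number, "channel": "sms", "body": message},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["message_id"]   # hypothetical response field

if __name__ == "__main__":
    send_order_update("+27115550100", "Your order #1042 has shipped.")
```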

Advantages of CPaaS Model

Conclusion:

UCaaS, CCaaS, and CPaaS are all widely used, popular cloud-based technologies. All these technologies help in augmenting the communication experience. You can choose the one based on the specific needs and demands of your business. Choosing the right technology can help you add agility, reliability, security, and control to your overall communication system.


Even with repatriation cost savings, the value of cloud computing is still strong – InfoWorld

37signals, led by CTO David Heinemeier Hansson, implemented a cloud repatriation plan that has already saved the company $1 million. Previously, the company spent $3.2 million annually on cloud services. They viewed that as too much. Their repatriation project invested $600,000 in eight servers hosted by Deft. Hansson now projects that the plan can save $10 million over five years. That's money they can put back into the business directly, investing in innovations and digital transformation projects.

As a result, their cloud spending has decreased by 60%, going from around $180,000 to less than $80,000 per month. Hansson expects another significant drop in expenditures. Despite managing their hardware, the ops team size has stayed the same.

Of course, those who drive repatriation projects based on this anecdotal data may not find the same level of cost benefits. And even when money is saved by moving applications and data to cheaper, owned hardware platforms, the broader benefits of cloud computing are more challenging to measure.

Many enterprises may happily report cost reductions of 60% or more but miss the bigger picture: the agility and speed to innovation that cloud computing can provide over owned hardware systems. The danger is that enterprises will rush towards managed services providers and colocation services, or even rent their own data center space, and end up with long-term fixed costs and capital expenses that are not cost-justifiable when considering all hard and soft benefits.

This is not a push-back on repatriation, only that the value calculations are much more complex than many people understand. I fear enterprises may rush to on-premises systems to save a few bucks, much like they rushed to cloud platforms just a few years ago. The same mistakes can occur when companies don't understand the true value that's being delivered.

Much of this comes down to carefully defining what value means to the business. For some businesses, cost savings can be turned into value if they are in an industry that does not value innovation and speed, and the cheapest and best product wins the day. Take a company that just makes staples, has made staples for the last 100 years, and will continue to make staples to meet a steady demand.

For these types of more traditional companies, the cloud really does not have value and perhaps they should have never made the trek to the public cloud. Thus, repatriation is really right-sizing, working on platforms that are more cost-efficient for the type of computing they need and their type of business.

For others, it's not that easy. Most businesses succeed by their innovations, no matter if it's a product, service, or process that makes the customer experience better, such as automated supply chains so optimized that products are delivered to the customer faster and through a superior experience.

Even traditional companies such as banks can benefit from this type of innovation, which is much easier when leveraging public clouds as the primary platforms, even if it's cheaper to operate on owned hardware. The value is in the innovation and speed to market, not in any savings that may be possible by taking cheaper paths to computing that limit agility and speed of growth.

So, is Linthicum against repatriation or for repatriation? Neither. This has never been about one direction or another; it's about matching up the technology configuration and resources to the needs of the business. Of course, many people don't want to hear this; they want a simple answer to "Which one is better?"

This is why we're here in the first place. We seem to be missing some of the strategic planning to understand the business and match a technology configuration to maximize the business value. We seem to run headlong to whatever the cool kids are doing these days. That's never been the right approach, and we'll end up fixing things on the back end and gathering too much technical debt.


What Is Server Backup and What Are the Benefits? – Server Watch


A server backup is a proactive process of creating duplicate copies of critical data stored on a server or networked computer system. These duplicate copies, known as backups, are stored separately from the original data source, ensuring their availability even if the primary data becomes compromised, corrupted, or lost due to various factors such as hardware failures, human error, cyberattacks, or natural disasters.

Server backups can encompass various types of data, ranging from entire server images to specific files and folders. The choice of what to back up and how often depends on the specific needs and priorities of the organization or individual.

Server backup solutions come in various forms, including local backups and cloud backups, each with its advantages and considerations. Moreover, the frequency of backups can vary, with options such as daily, hourly, or even real-time continuous backups to ensure that data is as up-to-date as possible.

In this article, we will unravel the concept of server backup, understand what it entails, and realize the invaluable benefits it brings in safeguarding your data against such alarming scenarios.

Despite the integral role that backup solutions play in modern data protection, recent statistics paint a concerning picture: 14% of all data is not backed up at all, and a staggering 58% of recovery attempts fail.

These figures highlight a critical gap in data security, leaving businesses' most valuable assets exposed and at risk of being lost forever in the event of an outage or cyberattack.

Here are the primary reasons highlighting the paramount importance of server backups:

Server backups are instrumental in the preservation of vital data. They encompass a wide range of digital assets, including essential business documents, customer records, and the intricate settings and configurations that keep servers and applications running smoothly.

Without backups, the risk of losing this precious information to unforeseen events such as hardware failures or human error is a constant threat.

In the event of data loss or system failures, server backups act as a dependable means of data restoration. This is a critical component in minimizing downtime and sustaining business continuity. Swift and efficient data recovery not only saves time but also prevents substantial financial and operational setbacks.

Beyond data recovery, server backups fortify data security. They provide an additional layer of defense against cyberthreats, particularly ransomware and malware. By isolating backups from the primary network, they reduce susceptibility to malicious attacks.

In the face of a cyberattack, having secure backups ensures that data can be restored without yielding to ransom demands, thereby thwarting potential data breaches and financial losses.

Implementing server backups yields a multitude of advantages, making them an indispensable part of modern data management and cybersecurity strategies. Here are the key benefits of server backups:

Data backup strategies encompass various methods to cater to diverse needs and scenarios. These different types of data backups provide flexibility and options for preserving data efficiently. Some of the most common types include full backups, incremental backups, differential backups, and mirror backups.

A full backup, also known as a level 0 backup, involves copying all the data in a system or specific storage location to a backup destination. It creates an exact replica of the entire dataset, making it the most comprehensive and time-consuming backup method. Full backups serve as a foundation for other backup strategies.

Incremental backups focus on capturing changes made to data since the last backup. Initially, a full backup is performed, and subsequent backups only record new or modified files.

This approach conserves storage space and reduces backup time. However, the restoration process requires the full backup and all incremental backups since that point, which can be time-consuming.

Differential backups, like incremental backups, start with a full backup. However, they capture all changes made since the last full backup, regardless of how many differential backups have occurred in the interim. This simplifies the restoration process compared to incremental backups but requires more storage space over time.
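To make the distinction concrete, here is a minimal Python sketch of the incremental approach: after an initial full copy, only files modified since the last run are copied. A differential variant would instead compare against the time of the last full backup. The paths and state file are assumptions.

```python
# Incremental backup sketch: copy only files changed since the previous run.
import json
import shutil
import time
from pathlib import Path

SOURCE = Path("/srv/data")
DEST = Path("/backups/incremental")
STATE_FILE = DEST / "last_run.json"

def incremental_backup() -> None:
    DEST.mkdir(parents=True, exist_ok=True)
    last_run = 0.0
    if STATE_FILE.exists():
        last_run = json.loads(STATE_FILE.read_text())["timestamp"]

    run_started = time.time()
    copied = 0
    for path in SOURCE.rglob("*"):
        if path.is_file() and path.stat().st_mtime > last_run:
            target = DEST / path.relative_to(SOURCE)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)   # copy2 preserves modification times
            copied += 1

    STATE_FILE.write_text(json.dumps({"timestamp": run_started}))
    print(f"Copied {copied} changed files")

if __name__ == "__main__":
    incremental_backup()
```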

A mirror backup, also known as a 1:1 backup or a real-time backup, duplicates data in real time to a separate storage device or location. This method provides immediate redundancy and can be crucial for minimizing data loss in high-availability environments. However, it can be resource-intensive and costly due to the constant data synchronization.

Snapshot backups capture a point-in-time image of data, creating a read-only copy of the entire dataset. These snapshots are lightweight and don't consume as much storage space as full backups. They are particularly useful for creating consistent copies of data for development, testing, or data analysis purposes, but they're not so helpful for full data recovery in case of a major loss event.

Continuous data protection (CDP) is an advanced backup method that captures every change made to data in real time or at short intervals, allowing for granular recovery. It offers near-zero recovery point objectives (RPOs), ensuring minimal data loss during restoration. CDP solutions are often used in mission-critical applications where data integrity and availability are paramount.

When it comes to data backup strategies, two primary approaches stand out: local backup and cloud backup. Each method has its own set of advantages and considerations, making it essential to choose the one that aligns with your specific needs and priorities.

Local backups and cloud backups differ across several aspects, including cost, control, scalability, and accessibility.

Local backups, also known as on-premises backups, involve the practice of storing data on physical devices located at the same site as the source data. These physical devices can take various forms, such as external hard drives, network-attached storage (NAS) devices, or even tape drives.

Cloud backups, a core component of cloud computing, have transformed the landscape of data protection and management. They involve securely storing data on remote servers maintained by cloud service providers.

Cloud computing has revolutionized the way data is stored, accessed, and protected, and backup solutions in cloud computing offer several distinct advantages over traditional methods.

Offline backups, also known as cold backups or offline data archives, involve storing copies of your data in a location that is disconnected from the network and not constantly accessible. This practice serves as a critical safeguard against various data loss scenarios. Here's why it's so important:

Cyberattacks, such as ransomware, are becoming increasingly sophisticated and can easily target and encrypt data stored on connected devices or NAS. Offline backups, being physically disconnected from the network, are immune to such attacks.

In the event of a ransomware attack, having offline backups ensures that you have clean, uninfected copies of your data to restore from.

Human errors, software bugs, or system glitches can result in the accidental deletion or corruption of data. If your backups are constantly connected to the network or live on the same system, these errors can propagate to your backup copies.

Offline backups, on the other hand, provide a static and protected snapshot of your data, reducing the risk of such errors affecting your backup.

Physical threats like fires, floods, or hardware failures can lead to the destruction of your primary data and any online backups stored in the same location.

Offline backups, stored in a physically separate and secure location, act as a disaster recovery mechanism. They remain unaffected by local catastrophes, ensuring data availability even in the face of such events.

Some data needs to be retained for extended periods, often for compliance or historical reasons. Offline backups offer a cost-effective solution for archiving data for the long term. These backups can be stored on offline media like tape drives, which are well-suited for preserving data over extended periods.

Online backups often require a robust and continuous internet connection for data synchronization and accessibility. In situations where the internet is slow, unreliable, or unavailable, offline backups provide a reliable fallback for data retrieval and restoration.

In industries with strict data privacy and compliance regulations, offline backups can help maintain data sovereignty and security. By keeping sensitive data offline, organizations can better control access and protect it from unauthorized breaches.

Imagine a vast ocean stretching as far as the eye can see, teeming with information, insights, and opportunities. This metaphorical ocean of data represents the ever-expanding volume of digital information generated by businesses, governments, and individuals worldwide.

Just as sailors navigate the complexities of the open sea, IT professionals must navigate this sea of data to unlock its potential while safeguarding against its inherent risks. This section explores the strategies and considerations for successfully managing and making the most of this data-rich landscape.

The server backup market has evolved significantly over the years to address the ever-increasing data volumes, diverse IT infrastructures, and evolving cybersecurity threats. This section explores the dynamics, trends, and key players shaping the server backup market.

The server backup landscape is undergoing a profound transformation, with a particular spotlight on the cloud-based backup services sector. As businesses increasingly rely on digital operations and cloud services, the demand for robust server backup solutions has surged.

Projections indicate that the cloud-based backup services market is poised for substantial growth, starting from an estimated value of $4,439.6 million in 2023 and set to ascend to a remarkable $19,811.3 million by 2033. This surge reflects a compelling compound annual growth rate (CAGR) of 16.1% anticipated throughout the forecast period.

Understanding the forces propelling this domain forward not only informs strategic decision-making but also ensures the safeguarding of critical data assets. Here are the primary trends and drivers that are shaping the future of server backup solutions and practices:

Several companies dominate the server backup market, offering a wide range of solutions catering to diverse business needs. Prominent players include:

When it comes to disaster recovery planning, backups play a pivotal role in safeguarding critical data and ensuring business continuity. They are the safety net that allows organizations to rebound from catastrophic events.

Here are the key steps involved in integrating backups into an effective disaster recovery plan:

Server backups offer a lifeline in the face of data loss, system failures, cyberattacks, and unforeseen disasters. They provide a safety net, ensuring that critical data can be retrieved and operations can resume, even in the most challenging circumstances. In the evolving landscape of data management and protection, server backups remain the time-tested, reliable means by which organizations fortify their digital assets.

Securing your digital assets with server backups begins with meticulous planning, including data assessment, strategy design, and rigorous testing. It also involves implementing redundancy, encryption, and multi-location storage to mitigate risks effectively. It necessitates continuous monitoring, regular updates, and employee training to maintain the integrity of your backup systems. Finally, it also entails embracing cloud-based and offsite backups to extend your safety net to the cloud.
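
On the encryption point in particular, a backup is only as private as the copy that leaves your premises. The hedged sketch below encrypts an archive with a symmetric key before it is shipped offsite; the cryptography package and the file names are assumptions, and the key itself must be stored separately from the backups.

# Illustrative sketch: encrypt a backup archive before it leaves the machine,
# so copies stored offsite or in the cloud are unreadable without the key.
# Assumes `pip install cryptography`; file names are placeholders.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this in a secrets manager, not next to the backup
fernet = Fernet(key)

with open("backup.tar", "rb") as src:
    ciphertext = fernet.encrypt(src.read())

with open("backup.tar.enc", "wb") as dst:
    dst.write(ciphertext)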

For help securing your servers against data loss and theft, you can turn to one of these top server backup solutions.

See the original post here:
What Is Server Backup and What Are the Benefits? - Server Watch

Read More..

AI powered greenhouse outperforms Dutch growers – hortidaily.com

It comes as no surprise to Dutch greenhouse owner Frank van Kleef that an AI-powered greenhouse outperforms the benchmark Dutch grower - after all, there's a reason he left his own greenhouse company to join the British/Dutch company, Optimal, which is pioneering AI-powered greenhouse cultivation. In their commercial-scale greenhouse in Westdorpe, Optimal's system is already achieving higher yields and higher Brix levels while also saving energy and reducing CO2 emissions. "The computer is always consistent in its decisions," Frank explains.

Frank van Kleef & Dave Hunter

"There are not enough skilled growers available to service the greenhouses that are being built around the world," says Optimal's Founder and CEO, Dave Hunter. "This problem needs to be addressed if the greenhouse industry is to expand. All greenhouses worldwide need operational knowledge, consistent results, and an efficient way to run their facilities. If this can be done at scale, then greenhouses will thrive globally and become critical to humanity's food security in the face of climate change."

Leveraging Grower Expertise

To gain a deeper understanding of the industry and the challenges it faces, Dave joined forces with Frank van Kleef. Frank has been one of the owners of Royal Pride, which grew into one of the world's leading greenhouse companies, operating 70 ha of tomatoes and cucumbers. He now focuses full-time on developing the Optimal system and actively participates in the Westdorpe demonstration greenhouse. Frank's role is not that of a traditional grower but more of a supervisor who assesses processes and evaluates results in terms of quantity and quality. He does this meticulously, as the greenhouse's produce is sold through his social enterprise, Boeregoed.

Predictive Control System

Combining Frank's knowledge with AI, the Optimal team first developed a predictive control system. A digital twin of the greenhouse is continuously updated with real-time information and weather forecasts. Using advanced optimization software, the inputs to the greenhouse are re-planned each minute (e.g., heating pipe temperatures and vent positions). Optimal then applies these inputs to the greenhouse via an interface with the process computer. Dave explains, "We can predict the full impact of decisions before they are made and anticipate the effects of weather changes in a way that no human grower can do."

Optimal's predictive control system uses a digital twin to optimise the inputs to the greenhouse each minute
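
To make the re-plan-every-minute pattern concrete, here is a deliberately tiny receding-horizon sketch; it is not Optimal's system, and the one-state temperature model, coefficients, forecast, and cost weights are all illustrative placeholders. A heating plan is optimized over a short horizon against an outside-temperature forecast, only the first planned input is applied, and the calculation then repeats.

# Toy receding-horizon (predictive control) sketch, not Optimal's system.
# A one-state model of greenhouse air temperature is re-optimized each step
# against a placeholder weather forecast, and only the first planned heating
# input is applied before re-planning. Assumes `pip install numpy scipy`.
import numpy as np
from scipy.optimize import minimize

A, B = 0.1, 0.05        # illustrative heat-loss and heating-gain coefficients
HORIZON = 12            # number of steps planned ahead
SETPOINT = 21.0         # target air temperature, deg C
ENERGY_WEIGHT = 0.01    # trade-off between tracking error and heating effort

def simulate(t0, u, t_out):
    """Roll the simple model forward for a given heating plan u."""
    temps, t = [], t0
    for uk, tok in zip(u, t_out):
        t = t + A * (tok - t) + B * uk
        temps.append(t)
    return np.array(temps)

def plan(t0, t_out_forecast):
    """Choose the heating plan that best tracks the setpoint over the horizon."""
    def cost(u):
        temps = simulate(t0, u, t_out_forecast)
        return np.sum((temps - SETPOINT) ** 2) + ENERGY_WEIGHT * np.sum(u ** 2)
    res = minimize(cost, x0=np.zeros(HORIZON),
                   bounds=[(0.0, 60.0)] * HORIZON, method="L-BFGS-B")
    return res.x

if __name__ == "__main__":
    temp = 18.0
    forecast = np.full(HORIZON, 10.0)   # placeholder outside-temperature forecast
    for step in range(6):               # re-plan every step, apply only the first input
        u = plan(temp, forecast)
        temp = temp + A * (forecast[0] - temp) + B * u[0]
        print(f"step {step}: heating input {u[0]:.1f}, indoor temp {temp:.2f} C")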

Frank explains the biggest advantage lies not just in optimizing the greenhouse environment but in preventing errors. "When you present the same situation to a grower ten times, they'll act the same way seven times and differently three times. That's human nature. A computer doesn't do that. It always makes the best choice." Dave adds that the system never sleeps. "It never over- or under-irrigates, and you don't need to adjust the climate because it's already been well-managed in advance. You don't need to react to increased evaporation because the system has already accounted for that. This results in a better overall plant health, a more resilient crop, and thus a better quality and yield."

Software and Operational Knowledge

However, running a greenhouse entails more than just a control system, even if it's an exceptional one. Dave explains, "What we offer our customers is the operation of their greenhouses on their behalf, ensuring optimal efficiency and reliability season after season."

To achieve this, all tasks inside and outside the facility are documented in software-based Standard Operating Procedures. Greenhouse workers and operators are trained to follow these procedures, and a smartphone app guides them throughout their workday, providing instructions on what tasks need to be performed, how to perform them, and what to do when encountering irregularities.

Sam Baker, Optimal's Operational Excellence Lead, with PC showing an example of a Standard Operating Procedure

Labor schedules

The system also generates labor schedules, offering insights into labor requirements and worker performance. Daily and weekly feedback sessions are conducted to assess the team's performance. Dave states, "Labour is a critical factor in greenhouse operations, with significant implications for cultivation. Standardizing it allows us to achieve optimal and predictable results."

Dave Hunter, CEO and founder, using the smartphone app to collect data on the crop

Optimal Performance and Real-World Deployment

So, what are the results of this comprehensive system? In their demonstration greenhouse, the team has achieved impressive results, including a 13% higher yield, a 14% higher Brix level, and energy savings of 27%, along with a 20% reduction in CO2 emissions compared to a benchmark Dutch greenhouse.

The Optimal team has also conducted various experiments involving ten different tomato and cucumber varieties. "We conduct experiments here ourselves and use this data to continually improve our performance," Dave explains. "Over the last five months, we've been running a low-energy experiment to optimize performance in high-energy-cost environments, such as those experienced in Europe over the last two years."

A basket of Optimal's tasty cucumbers and specialist tomato varieties.

Real-World Deployment and Future Expansion

Currently, discussions are underway with leading greenhouse owners in the United States, the UK, and other locations. Dave notes that the system easily adapts to different climatic conditions, and there are no minimum technical requirements or specific greenhouse needs to get started. Dave explains, "We can work in any greenhouse: we install additional sensors and connect them to cloud servers and are up and running in days." The team is also exploring expansion into strawberries, peppers, and leafy greens.

When asked what he's most proud of, Dave emphasizes that Optimal's success lies in the synergy between humans and machines, a complex but rewarding endeavor. "Our engineers understood that AI and predictive control models would result in better greenhouse outcomes: they view a greenhouse as a system that can be controlled through simulations and optimization. However, it's vital to align people, the system, and the hardware to work together. That's where Frank and the other experienced operators in Optimal's team have added tremendous value."

For more information: Optimal, contact@optimal.ag, http://www.optimal.ag

Link:
AI powered greenhouse outperforms Dutch growers - hortidaily.com

Read More..

AppFactor drags legacy enterprise apps to the cloud through automation – TechCrunch

Image Credits: AppFactor

Technical debt is often the unsung villain of the enterprise, crippling companies seeking to modernize as they realize just how much legacy lives in their stack. And as with most kinds of debt, there is usually interest to pay, too.

This is something that fledgling U.K. startup AppFactor is setting out to solve, with a platform that helps enterprises automatically re-architect their legacy applications, readying them for deployment into a new cloud-native home.

AppFactor was formally incorporated in mid-2021, but CEO and founder Keith Neilson has only been working on it full time since January, recently closing a pre-seed round of funding he says weighed in at north of £1 million ($1.3 million).

Presenting onstage today as part of the Startup Battlefield at TechCrunch Disrupt, Neilson showcased AppFactor's technology and laid out his startup's mission in a space ripe for change. TechCrunch caught up with Neilson ahead of time to get the lowdown on the scale of the problem as he sees it, and what exactly AppFactor is doing to address it.

To outsiders, some technical debt might be obvious through exposure to bugs or laggy systems. Or, perhaps, the amount of time it takes the company to improve existing products and introduce new features.

Those on the inside, meanwhile, might have a better idea of their technical debt when they see that their IT budget spend is weighted disproportionately toward maintenance versus building shiny new things. Data from consultancy McKinsey suggests that technical debt could account for as much as 40% of companies' total IT budget, while a separate report from Stripe indicates that developers on average spend a third of their working week addressing existing technology issues rather than writing new code.

But it's not always easy to get a clear picture of the level of technical debt a company has, because it can span multiple areas and domains within an organization. This opaque underbelly may include things like overly complex, duplicate or downright bad code; a lack of automated testing; security vulnerabilities; and general poor design.

"The big challenge enterprises have is that they've built and architected enterprise-grade applications at a [specific] point in time, and business requirements and processes change the environments around these applications, and the applications and their dependencies evolve over time," Neilson said.

Thus, technical debt, as McKinsey notes, can perhaps best be viewed as a sort of tax a company pays on all internal development that's focused on fixing myriad legacy technology infrastructures. This includes new libraries and frameworks, or integration points and dependency changes as companies fine-tune their stack. Ultimately, it amounts to a whole mish-mash of complexity that snowballs over time to create an unwieldy mess.

A typical example of a legacy enterprise application might involve an older Microsoft SQL database; some middleware layer; and a .NET front end, requiring a mixture of physical and virtual infrastructure to function. The running processes, libraries, dependencies and general componentry permeating the application and infrastructure would require significant manual spadework just to figure out what is what before a team could attempt a lift-and-shift transformation to a more cloud-native form.

And that, essentially, is what AppFactor is setting out to offer. It scans the company's IT environment to identify all their apps and respective dependencies, divorces the virtual and physically hosted apps from their current environment, and rebuilds each component and app layer into separate containers ready for their new home, be that a modern cloud architecture such as Kubernetes, or a managed database service.

"All of this is generated and driven by the product [AppFactor], so you can rapidly move your existing application estates over to the latest cloud technology within days, not months and years," Neilson said.

AppFactor consists of three core components, including a scanner/analyzer that's deployed to servers to collect the data needed to unearth their applications and dependencies; an orchestrator, which essentially controls the scanner/analyzer's behavior, including the IP range and target systems; and the overarching AppFactor SaaS platform that handles all the data analysis, machine learning (ML) processes and services that generate visual mappings, containerization tasks and more.
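
AppFactor has not published its scanner internals, but to give a flavour of the raw signal such an agent can collect, the illustrative sketch below (not AppFactor's implementation) uses the psutil package to list running processes alongside the remote network connections each one holds open, which is the kind of data a dependency map is built from.

# Illustrative server-inventory sketch (not AppFactor's code): list running
# processes and the remote connections each holds, the kind of raw signal a
# dependency-mapping scanner collects. Assumes `pip install psutil`; may need
# elevated privileges to see every process.
import psutil
from collections import defaultdict

# Group system-wide socket connections by owning process id
conns_by_pid = defaultdict(list)
for conn in psutil.net_connections(kind="inet"):
    if conn.pid is not None and conn.raddr:
        conns_by_pid[conn.pid].append(f"{conn.raddr.ip}:{conn.raddr.port}")

for proc in psutil.process_iter(attrs=["pid", "name", "exe"]):
    info = proc.info
    remotes = conns_by_pid.get(info["pid"], [])
    if remotes:
        exe = info["exe"] or "unknown path"
        print(f'{info["name"]} (pid {info["pid"]}, {exe}) talks to: {", ".join(remotes)}')

A production scanner would also look at loaded libraries, configuration files, listening ports and storage mounts, but the principle of observing the running system rather than trusting documentation is the same.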

The company says it has been working with some commercial clients, including U.K.-based enterprise software company Civica. To date, only the discover-and-assess facet of its platform has been commercially available. However, the company is also gearing up to launch its app modernization module in November. This means customers will have the ability not only to find suitable candidates for modernization, with all the relevant reporting and analytics, but also to eventually enact the transformation itself.

Perhaps one of the more interesting features of the platform, from a slick bells-and-whistles perspective at least, is a tool that enables users to visualize app dependencies through a 3D visualization engine. Eventually, this could be used to visualize entire environments.

"Currently it's at more of an infrastructure and process level view, but clearly there is scope to go deeper down, which we plan on building out," Neilson said.

Curiously, AppFactor is also making this available for VR headsets, with the company demoing this functionality via an Oculus at its TC Disrupt booth.

"One of the toughest activities upfront that can help de-risk any [app] changes is being able to account, view and understand dependencies, be it across infrastructure, architecture or code," Neilson said. "This view is about being able to view and interact with the makeup and anatomy of our application estates in a granular and powerful way. Some of these systems are incredibly complex, with communications, libraries, files, services, processes and more happening in a magnitude of places, across multiple environments, so this is a really powerful way of being able to intuitively understand, validate and reaffirm knowledge, which empowers any future evolution of the application and its attributes."

Current app modernization tooling is substantively manual, and therefore resource-intensive. It might involve using a command-line tool like Docker, which requires significant continuous testing, and even then might not cover the full gamut of dependencies due to the manual nature of running the tool. And the likes of Google's Migrate for Anthos, which resulted from its Velostrata acquisition five years ago, and AWS's App2Container make it somewhat easier for enterprises to convert virtual machines (VMs) to containers. However, these are still very manual and command-line based, don't necessarily provide extensive visibility into dependencies and don't support physical infrastructure-based apps.

There are other, similar-ish services out there, too, focused on helping companies transition from monolithic software to microservices, such as venture-backed Vfunction.

The ultimate goal of each of these services is to help companies reduce their technical debt and get with the times, albeit adopting slightly different approaches en route.

"We believe there are four pillars of technical debt: infrastructure, architecture, code and dependencies," Neilson said. "We also believe that there are many applications that do not suit microservices, so our vision is to allow the attributes of an enterprise app to dictate the optimum architecture pattern."

To achieve this, AppFactor says it's developing machine learning classifications to help generate the patterns necessary to transform more complex, multi-host apps. Essentially, it's about creating fingerprinting techniques to identify what complex or bespoke apps are made of.

"We are using a trained data model to build this, and it employs a number of attributes and datapoints that can help identify application patterns," Neilson said.

Additionally, Neilson said that they're experimenting with a number of other AI use cases, including large language models (LLMs) to generate the YAML (a human-readable data serialization language for creating configuration files) for Kubernetes deployments.
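
Whatever model ends up writing it, the artifact itself is a Kubernetes Deployment manifest. As a purely illustrative sketch, and not AppFactor's generated output, the snippet below builds a minimal manifest in Python and serializes it with the PyYAML package; the image, names and port are placeholders.

# Purely illustrative: build a minimal Kubernetes Deployment manifest in
# Python and emit it as YAML. Image, names and ports are placeholders;
# this is not AppFactor's output. Assumes `pip install pyyaml`.
import yaml

manifest = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "legacy-dotnet-frontend"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "legacy-dotnet-frontend"}},
        "template": {
            "metadata": {"labels": {"app": "legacy-dotnet-frontend"}},
            "spec": {
                "containers": [{
                    "name": "frontend",
                    "image": "registry.example.com/legacy-frontend:1.0",
                    "ports": [{"containerPort": 8080}],
                }]
            },
        },
    },
}

print(yaml.safe_dump(manifest, sort_keys=False))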

"We have some [other] future use cases around code generation, but we're not there yet," Neilson added.

Read the original here:
AppFactor drags legacy enterprise apps to the cloud through automation - TechCrunch

Read More..

Google Mourns Veteran Engineer Luiz Andre Barroso Who Invented … – Slashdot

Brazilian engineer Luiz Andre Barroso, who ripped up the rulebook at Google, has died. His radical ideas for data centers laid the foundations for cloud computing. Wired: Luiz Andre Barroso had never designed a data center before Google asked him to do it in the early 2000s. By the time he finished his first, he had overturned many conventions of the computing industry, laying the foundations for Silicon Valley's development of cloud computing.

Barroso, a 22-year veteran of Google who unexpectedly died on September 16 at age 59, built his data centers with low-cost components instead of expensive specialized hardware. He reimagined how they worked together to develop the concept of "the data center as a computer," which now underpins the web, mobile apps, and other internet services.

Jen Fitzpatrick, senior vice president of Google's infrastructure organization, says Barroso left an indelible imprint at the company and that his contributions to the industry are countless. "We lost a beloved friend, colleague and respected leader," she writes in a statement on behalf of the company.

Read the original post:
Google Mourns Veteran Engineer Luiz Andre Barroso Who Invented ... - Slashdot

Read More..