Category Archives: Cloud Servers

Cloud Infrastructure Options: How to Choose – InformationWeek

When choosing a cloud infrastructure direction, it's important to weigh the advantages of three types: traditional, hyperconverged infrastructure (HCI), and distributed cloud architectures.

Traditional or three-tier infrastructure refers to the combination of disaggregated servers, storage arrays, and networking infrastructure.

Hyperconvergence provides a building block software-defined approach to compute, network, and storage on standard server hardware under unified management.

The third option, distributed cloud, refers to the distribution of public cloud services to different physical locations.

Pavel Despot, senior product manager at Akamai, explains that the main differences between hyperconverged, traditional, and distributed cloud infrastructures come down to location.

A traditional cloud infrastructure, which involves the delivery of computing services such as servers, databases, and networking over the internet, is bound to a chosen location or locations, he says. Hyperconverged cloud infrastructures keep hardware components in a single integrated cluster.

On the other hand, he notes a distributed cloud infrastructure doesn't take the approach that workloads are built for specific locations; instead, workloads and applications can be deployed to multiple geographical endpoints.

Distributed clouds solve common pain points of the traditional cloud, such as high costs, latency and limited global reach, he says.

While all three operate on the idea that pools of resources can be drawn upon as needed, the nature and breadth of those pools are different.

Hyperconverged solutions use commonly available hypervisors to allocate resources available for various compute, storage, and networking functions.

As a result, you're limited by how much hardware you have in that location, Despot says. So, management requires you to keep an eye on how much capacity you're using and plan ahead.

He notes it's important to remember that HCI solutions generate significant overhead costs, further eating into how much you have available for your workload.

Cory Peters, vice president of cloud services at SHI International, explains that hyperconverged, traditional, and distributed cloud infrastructures differ in terms of scalability and flexibility.

Hyperconverged infrastructure offers seamless scalability and flexibility through its integrated approach and software-defined resource allocation, he says. Traditional infrastructure presents limitations in scalability and flexibility due to its fragmented nature and manual configuration processes.

Distributed cloud infrastructure provides scalability and flexibility benefits, particularly in edge computing scenarios, by distributing resources closer to end-users and enabling dynamic resource allocation.

One industry example of this could be an autonomous vehicle company employing a distributed cloud infrastructure to support its fleet, Peters explains.

Edge computing capabilities let vehicles process sensor data on board and make instantaneous decisions.

This process ensures safety and responsiveness without relying on a centralized cloud infrastructure.

Understanding these differences is essential for organizations to make informed decisions about which infrastructure model aligns best with their scalability and flexibility requirements, Peters says. By selecting the right model, businesses can ensure they have the necessary agility and adaptability to meet evolving demands and drive innovation.

Swaminathan Chandrasekaran, principal and global cloud CoE lead at KPMG, cautions that distributed cloud infrastructure can raise costs if not properly managed.

You need to consider data transfer costs for network ingress and egress between clouds as well as properly utilizing commitment discounts for workload placement on provider contracts, he says.

The biggest difference from a cost perspective between traditional infrastructure in your own data center and moving to public cloud is shifting from a CapEx model of owning your own infrastructure assets to an OpEx model where you pay for what you use.

You can further optimize costs in an OpEx model with burst-capacity scenarios that have high resource demands in short or infrequent intervals, Chandrasekaran adds.
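
To make the burst-capacity point concrete, the back-of-the-envelope comparison below uses entirely hypothetical figures (server prices, hourly rates and burst hours are assumptions, not numbers from Chandrasekaran) to show why paying only for short, infrequent peaks can undercut owning hardware sized for the peak.

```python
# Illustrative arithmetic only -- all figures are hypothetical assumptions,
# not taken from the article.

HOURS_PER_MONTH = 730

# CapEx: hardware sized for the monthly peak, amortised over 36 months.
peak_servers = 20
server_cost = 8_000            # purchase price per server
capex_monthly = peak_servers * server_cost / 36

# OpEx: pay per hour actually used; bursts only run a few hours a month.
baseline_servers, burst_servers = 4, 16
burst_hours = 40               # short, infrequent burst intervals
hourly_rate = 0.50             # per server-hour
opex_monthly = (baseline_servers * HOURS_PER_MONTH + burst_servers * burst_hours) * hourly_rate

print(f"CapEx-style monthly cost: ${capex_monthly:,.0f}")
print(f"OpEx-style monthly cost:  ${opex_monthly:,.0f}")
```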

He says with traditional infrastructure, organizations must plan for, procure, deploy, and provision hardware for each new use case or increase in capacity demand from the business.

It can generally take weeks to months before an environment can even be delivered, let alone made fit-for-purpose for a business application, he explains. Applications and systems are also at greater risk of impact from hardware failure and could see longer mean time to recovery in such situations.

He points out HCI and distributed cloud infrastructures allow for on-demand provisioning, greatly reducing the time to market for new solutions to power the business.

By centralizing these virtual resources behind a single control plane, you also gain efficiencies in managing and maintaining these IT resources, he says. With built-in levels of resiliency and greater portability of virtual environments, mean time to recovery at times of failure is greatly reduced.

Peters says the choice of infrastructure type has a significant impact on the agility and speed of IT applications within an organization.

Hyperconverged infrastructure stands out in terms of agility and speed, thanks to its integrated architecture and software-defined resource allocation, he says.

Traditional infrastructure presents challenges in both agility and speed due to its fragmented nature and manual configuration processes.

Distributed cloud infrastructure excels in agility and speed, especially in edge computing scenarios, by bringing resources closer to end-users and reducing network latency.

He says understanding the impact of different infrastructure types on the agility and speed of IT applications helps organizations make informed decisions that align with their application requirements and business objectives.

By choosing the right infrastructure model, businesses can optimize the agility and speed of their IT applications, leading to improved productivity, customer satisfaction, and competitive advantage, Peters says.

See original here:
Cloud Infrastructure Options: How to Choose - Cloud Infrastructure ... - InformationWeek

LinkedIn Leads Standardized Cloud Gear Alliance – InformationWeek

LinkedIn is leading a project with muscular industry backers to produce componentized and standardized data center hardware that will be useful in small and medium-sized enterprise data centers as well as large cloud centers.

LinkedIn announced the Open19 Project in July last year. Yesterday it announced that HPE, GE Digital, Flex and Vapor IO have said they're backing the Open19 Project and forming a foundation to support it. Vapor IO is a firm with products to manage edge computing, hybrid cloud computing and decentralized data centers, collecting device data and sensor information flow. Flex supplies customizable manufacturing of electronics equipment.

Open19 data center gear includes a data center switch and is expected to include standardized storage options. Curt Belusar, senior director of hyper-scale engineering at HPE, said in a blog yesterday, "Open19 community members can 'mix and match' components in the way that best meets their data center needs to increase operational efficiency and reduce cost." The Open19 project gets its name from its reliance on a standard 19-inch wide rack for mounting servers and other devices.

Want to learn more about open source hardware? See Battle Intensifies To Become Cloud Hardware Leader.

Open19 is as much about having a standard networking environment in multiple cloud data centers as it is about gear. In a blog May 23, Darren Haas, senior VP, cloud and data for GE Digital, said it was difficult to implement Predix when cloud environments constantly varied in their hardware and networking: "The ability to use an open network stack with a mix of simple interchangeable solutions sharing the same standard will help allow us to deliver racks quickly, reduce deployment costs and have a wider inventory..."

Open19 has several smaller backers, such as Inspur and Cumulus Networks, which were already part of the open source hardware project. Cumulus joined in April. Inspur is a 19-inch rack and "brick" supplier with ties to the Chinese technology companies Baidu, Alibaba and Tencent.

Open19 is a younger project than Open Compute, which was founded by Facebook in 2011 and was the first to apply the principles of open source code sharing and development to hardware design. Open Compute's first task was to popularize a standardized server and server switch design originated by Facebook and used in its hyper-scale data centers.

With Open19, the emphasis is more on useful components that can be assembled in large or small data processing centers, including some at the edge of the network or aggregators on the Internet of Things. Both projects share the goal of coming up with industry standard designs that can be stamped out by a number of low name recognition producers to minimize costs and maximize their usefulness in the data center.

In a blog on July 19 last year, Yuval Bachar, a veteran of engineering organizations at Cisco and Facebook and then the principal engineer for global infrastructure architecture at LinkedIn, wrote: "This new project aims to establish a new open standard for servers based on a common form factor. The goals of Open19 are to provide lower cost per rack, lower cost per server, optimized power utilization, and (eventually) an open standard that everyone can contribute to and participate in."

LinkedIn, of course, was acquired by Microsoft, so Open19 designs and products may find a future in Azure data centers as well as in LinkedIn's. But Microsoft is also a backer of Open Compute. For now, the initiative lies with LinkedIn engineers such as Bachar to make something distinctive of the project and allow it to serve complementary objectives.

See the original post:
LinkedIn Leads Standardized Cloud Gear Alliance - LinkedIn Leads ... - InformationWeek

Supermicro Announces Future Support and Upcoming Early Access … – PR Newswire

Supermicro's Advanced GPU Systems for Generative AI Applications with Dual 5th Gen Intel Xeon Processors Will Take Advantage of the Increased Number of Cores, Performance, and Performance Per Watt in The Same Power Envelope

SAN JOSE, Calif., Sept. 19, 2023 /PRNewswire/ -- Intel Innovation 2023 -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing future support for the upcoming 5th Gen Intel Xeon processors. In addition, Supermicro will soon offer early shipping and free remote early access testing of the new systems via its JumpStart Program for qualified customers. To learn more, go to http://www.supermicro.com/x13 for details. The Supermicro 8x GPU optimized servers, the SuperBlade servers, and the Hyper Series will soon be ready for customers to test their workloads on the new CPU.

"Supermicro's range of Generative High-Performance AI systems, including recently launched GPUs, continues to lead the industry in AI offerings with its broad range of X13 family of servers designed for various workloads, from the edge to the cloud," said Charles Liang, president, and CEO, Supermicro. "Our support for the upcoming 5th Gen Intel Xeon processors, with more cores, an increased performance per watt, and the latest DDR5-5600MHz memory, will allow our customers to realize even greater application performance and power efficiency for AI, Cloud, 5G Edge, and Enterprise workloads. These new features will help customers accelerate their business and maximize their competitive advantage."

Watch the Supermicro TechTALK about how Supermicro is working with Intel to bring to market new X13 servers with the 5th Gen Intel Xeon processors.

Supermicro X13 systems will be able to take advantage of the new processors' built-in workload accelerators, enhanced security features, and increased performance within the same power envelope as the previous generation of 4th Gen Intel Xeon processors. Using PCIe 5.0, the latest peripheral devices using CXL 1.1, NVMe storage, and the latest GPU accelerators deliver reduced application execution times. Customers will soon be able to leverage the new 5th Gen Intel Xeon processors in the tried-and-tested Supermicro X13 servers, with no software redesign or architectural changes required, reducing the lead times associated with completely new platforms and CPU designs, resulting in a faster time to productivity.

"5th Gen Intel Xeon processors build on the success of the Xeon platform, which has been leading the way in data center compute for several generations," said Lisa Spelman, CVP and GM Xeon Products & Solutions at Intel. "Our strong partnership with Supermicro will help deliver the benefits of 5th Gen Intel Xeon processors to customers soon and on platforms already proven in the data center."

The Supermicro portfolio of X13 systems is performance optimized, energy efficient, incorporates improved manageability and security, supports open industry standards, and is rack-scale optimized.

Performance Optimized

Energy Efficient - Reduces Datacenter OPEX

Improved Security and Manageability

Support for Open Industry Standards

Supermicro will offer early access availability of X13 systems powered by 5th Gen Intel Xeon processors to qualified customers through its remote JumpStart and Early Ship programs. The Supermicro JumpStart program allows qualified customers to perform workload validation with the new Supermicro systems. Go to supermicro.com/x13 for details.

The Supermicro X13 Portfolio Includes the following:

SuperBlade: Supermicro's high-performance, density-optimized, and energy-efficient multi-node platform optimized for AI, Data Analytics, HPC, Cloud, and Enterprise workloads.

GPU Servers with PCIe GPUs: Systems supporting advanced accelerators to deliver dramatic performance gains and cost savings. These systems are designed for HPC, AI/ML, rendering, and VDI workloads.

Universal GPU Servers: Open, modular, standards-based servers that provide superior performance and serviceability with GPU options, including the latest PCIe, OAM, and NVIDIA SXM technologies.

Petascale Storage: Industry-leading storage density and performance with EDSFF E1.S and E3.S drives, allowing unprecedented capacity and performance in a single 1U or 2U chassis.

Hyper: Flagship performance rackmount servers built to take on the most demanding workloads, along with the storage and I/O flexibility that provides a custom fit for a wide range of application needs.

Hyper-E: Delivers the power and flexibility of the flagship Hyper family, optimized for deployment in edge environments. Edge-friendly features include a short-depth chassis and front I/O, making Hyper-E suitable for edge data centers and telco cabinets.

BigTwin: 2U 2-node or 2U 4-node platform providing superior density, performance, and serviceability with dual processors per node and a hot-swappable, tool-less design. These systems are ideal for cloud, storage, and media workloads.

GrandTwin: Purpose-built for single-processor performance and memory density, featuring front (cold aisle) hot-swappable nodes and front or rear I/O for easier serviceability.

FatTwin: Advanced, high-density multi-node 4U twin architecture with 8 or 4 single-processor nodes optimized for data center compute or storage density.

Edge Servers: High-density processing power in compact form factors optimized for telco cabinet and edge data center installation. Optional DC power configurations and enhanced operating temperatures up to 55°C (131°F).

CloudDC: All-in-one platform for cloud data centers, with flexible I/O and storage configurations and dual AIOM slots (PCIe 5.0; OCP 3.0 compliant) for maximum data throughput.

WIO: Offers a wide range of I/O options to deliver truly optimized systems for specific enterprise requirements.

Mainstream: Cost-effective dual-processor platforms for everyday enterprise workloads.

Enterprise Storage: Optimized for large-scale object storage workloads, utilizing 3.5" spinning media for high density and exceptional TCO. Front and front/rear loading configurations provide easy access to drives, while tool-less brackets simplify maintenance.

Workstations: Delivering data center performance in portable, under-desk form factors, Supermicro X13 workstations are ideal for AI, 3D design, and media & entertainment workloads in offices, research labs, and field offices.

For more information about Supermicro's X13 family of servers, please visit Supermicro.com/X13

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are transforming into a Total IT Solutions provider with server, AI, storage, IoT, and switch systems, software, and services while delivering advanced high-volume motherboard, power, and chassis products. The products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling or liquid cooling).

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.

Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.

All other brands, names, and trademarks are the property of their respective owners.

Photo - https://mma.prnewswire.com/media/2214847/Picture1.jpg
Logo - https://mma.prnewswire.com/media/1443241/Supermicro_Logo.jpg

SOURCE Super Micro Computer, Inc.

Read this article:
Supermicro Announces Future Support and Upcoming Early Access ... - PR Newswire

Anyscale launches Endpoints, a more cost-effective platform for fine … – SiliconANGLE News

Anyscale Inc., creator of the open-source distributed computing platform Ray, is launching a new service called Anyscale Endpoints that it says will help developers integrate fast, cost-effective and scalable large language models into their applications.

The service was announced today at the 2023 Ray Summit, the project's annual user conference, which targets generative artificial intelligence developers.

Anyscale's open-source Python framework Ray is software that's used to run distributed computing projects powered by multiple cloud servers. It features a universal serverless compute application programming interface and an expanded ecosystem of libraries. Using them together, developers can build scalable applications that run on multicloud platforms without needing to worry about the underlying infrastructure. That's because Ray eliminates the need for in-house distributed computing expertise.

As for the Anyscale cloud platform, it's a managed version of Ray that makes the software more accessible. It runs on Amazon Web Services and solves the difficulty of bringing artificial intelligence prototypes built on a laptop to the cloud, where they can be scaled across hundreds of machines.

With the launch of Anyscale Endpoints, developers now have a simple way to build distributed applications that leverage the most advanced generative AI capabilities using the application programming interfaces of popular LLMs such as OpenAI LP's GPT-4. Doing so was impossible before: the only option was for developers to create their own AI models, which required them to assemble their own machine learning pipelines, train AI models from scratch, and then securely deploy them at large scale.

Now, Anyscale said, developers can seamlessly add LLM superpowers to their distributed applications without needing to build a custom AI platform. What's more, Anyscale said, they can do so at much lower cost, with Endpoints said to cost less than half the price of comparable proprietary solutions.

Robert Nishihara, co-founder and chief executive at Anyscale, said the infrastructure complexity, compute resources and costs have limited the possibilities of AI for developers. With seamless access via a simple API to powerful GPUs at a market-leading price, Endpoints lets developers take advantage of open-source LLMs without the complexity of traditional ML infrastructure, he said.

Anyscale said LLMs provide strong value to applications thanks to their flexibility. They can be fine-tuned with an organizations own data to perform very specific tasks, acting as a customer service bot or a knowledge base for internal workers, and performing many other jobs.

The company said users will be able to run Endpoints on their existing cloud accounts within AWS or Google Cloud, improving security for activities such as fine-tuning. Customers can also use existing security controls and policies. Endpoints integrates with the most popular Python and machine learning libraries and frameworks, such as Hugging Face and Weights & Biases.

In addition, users who upgrade from the open-source Ray to the full Anyscale AI Application Platform will get better controls to fully customize LLMs, and the ability to deploy dozens of AI-infused applications on the same infrastructure.

Perhaps the most appealing feature is the price, with Endpoints available at a cost of $1 per million tokens for LLMs such as Llama 2, and even less for some other models. Anyscale said this is less than half the cost of other, proprietary AI systems, making LLMs much more accessible to developers.
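
As a rough illustration of that pricing claim, the sketch below applies the quoted $1 per million tokens to a hypothetical monthly volume; the competitor rate is an assumed stand-in for "more than double", not a published price.

```python
# Back-of-the-envelope cost comparison at the quoted $1 per million tokens.
# The competitor price and the monthly volume are hypothetical assumptions.

PRICE_PER_MILLION_TOKENS = 1.00        # Anyscale Endpoints, Llama 2 (per the article)
COMPETITOR_PER_MILLION = 2.50          # assumed proprietary-API price for comparison

monthly_tokens = 500_000_000           # hypothetical application volume

endpoints_cost = monthly_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS
competitor_cost = monthly_tokens / 1_000_000 * COMPETITOR_PER_MILLION

print(f"Endpoints:   ${endpoints_cost:,.0f}/month")
print(f"Proprietary: ${competitor_cost:,.0f}/month")
```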

Anyscale Endpoints is available today, and the company promised it will continue to evolve the service rapidly.

Back in March, Nishihara appeared on theCUBE, SiliconANGLE's mobile livestreaming studio, where he talked in depth about the Anyscale platform and its ability to simplify AI workloads and harness machine learning frameworks at scale.

Original post:
Anyscale launches Endpoints, a more cost-effective platform for fine ... - SiliconANGLE News

Inside Intermax's ambitious journey to be a sustainable cloud leader – CIO

A fleet of green data centers and a well-advanced plan to stop using fossil-fuel powered vehicles are among the key steps driving Intermax's mission to be the most sustainable cloud services provider in the Netherlands.

Ludo Baauw, founder, corporate social responsibility lead and CEO of Intermax Group, sees firsthand the direct impact that sustainability initiatives have on the environment and biodiversity. It's a perspective that shaped his goals for Intermax, and which can be seen in small things and big efforts.

Intermax, which provides managed cloud services, infrastructure-as-a-service, a broad array of application services, and robust security offerings, among them Security Information and Event Management services and a complete Security Operations Center solution, has big sustainability ambitions.

It aims to become carbon neutral in 2027 and carbon negative by 2030. Notably, all five of the company's data centers are already 100% powered by Dutch solar and wind sources. The company also just deployed its most advanced, most environmentally friendly data center.

Our newest data center reflects our decision to fully commit to sustainability, while ensuring a seamless transition without any disruption or inconvenience for our customers, says Baauw. That move required months of meticulous planning, but was made much easier with our VMware-driven clouds, which are entirely 2N, fully redundant systems mirrored across two regional DCs. It was a significant task for our technical team, and we are still evaluating the outcome, but we estimate that the new center is saving 30% on cooling power and delivering significant improvements in power usage effectiveness.

Intermax is also now initiating a proof-of-concept to repurpose the residual heat from the new data center's servers through immersion cooling, with the aim of reintroducing that heat into the local district's network.

The company began its sustainability journey more than a decade ago.

We founded the company in 1994 and in 2009 made the commitment to become the most secure cloud service provider in the country with the highest degree of certification, Baauw says. We achieved that goal within a few years and in 2013 we committed to phasing out all of our fossil-fuel powered vehicles, an effort we are on the brink of achieving as we prepare to retire the older ones in our fleet. Now we are raising the bar and aiming to become the most sustainable cloud provider.

It is a goal Baauw sees customers embracing as well. Intermax's customers, which include leading organizations across industries, are increasingly concerned with their own environmental impact.

We have a lot of customers in government and healthcare, among them many hospitals, he adds. They ask us specifically if we can help them reach their goal of becoming carbon neutral. We can of course do so while empowering them with IT services that draw on the inherent capabilities of the cloud and enable them to focus more on their mission while working smarter and more efficiently. To do that though, our cloud services must be as sustainable as possible.

Baauw believes actions in his own life have helped to drive his company's sustainability mission.

My colleagues and I drink a lot of coffee together at the office and the grounds used to go to the bin, says Baauw. Now we take that home for composting in our gardens. I use it in my meadows, where it ultimately helps to sustain my beehives, which provide the honey I often bring back to the office for our tea. That's about as circular as you can get, and while it is but a small effort, it is consistent with the mindset that has led us to take on some very big challenges in our drive to make Intermax a company that makes a difference for the environment.

Baauw also stresses that making sustainability gains in any organization is an effort that is inherently collaborative. Spreading the word is essential.

Achieving carbon neutrality is a goal that requires the collective support of vendors and customers, he says. At Intermax, we are deeply committed to becoming a CO2-negative organization as swiftly as possible. We are also very appreciative that VMware shares our passion and commitment to address the issue of climate change, and are honored to join the VMware Zero Carbon Committed initiative. Our industry can make a great difference when we all work together.

Not surprisingly, Baauw is optimistic. He believes great gains are right around the corner.

Cloud computing has the potential to significantly improve power usage efficiency, reduce e-waste and employ new technologies like immersion cooling on a vast scale, all while enabling many of the technological solutions that will help us collectively address climate change, adds Baauw. Sustainability and opportunity are intertwined and should be pursued simultaneously. Fine details matter; for example, we eliminated plastic and single-use giveaways to promote more sustainable corporate events, and we also provide each employee with complimentary public transport and a leased bike. Most importantly though, you have to act and remember that every little bit helps.

Learn more about Intermax and its partnership with VMware here.

Learn more about Intermax and its CSR program in Dutch at https://www.intermax.nl/over-intermax/mvo/

Originally posted here:
Inside Intermaxs ambitious journey to be a sustainable cloud leader - CIO

3 critical stops on the back-end developer roadmap – TechTarget

Those seeking a career in back-end development and enterprise architecture will find occupational roadmaps contain a somewhat predictable list of required skills. These skills typically revolve around a proficiency in one or more high-profile programming languages, an understanding of both relational and NoSQL database operations, the ability to work with major back-end development frameworks and experience with container orchestration.

While knowledge of relational databases and RESTful APIs is essential, back-end developers shouldn't overlook the importance of other important development concepts.

A good roadmap will include certain overlooked skills that are just as important as Node.js runtimes and RESTful API builds.

To help new back-end developers get a step ahead on their journey, let's review three essential topics: messaging, cloud-based services and the modern design patterns that make microservices and cloud-native deployments scalable and productive.

New developers often see topics, queues and messaging as advanced areas. As a result, there is a lack of familiarity with this important back-end concept, along with a reluctance to incorporate messaging into an enterprise architecture.

Back-end developers need a strong understanding of how to incorporate message-based, publish and subscribe systems into their networks. The benefits of these architectures include the following:

In a traditional, synchronous system, the client makes a request to the server and waits for a response. In I/O-based architectures, each request triggers the creation of a new process on the server. This limits the number of concurrent requests to the maximum number of processes the server can create.

With traditional architectures, the server handles requests in the order it receives them. This can result in situations where simple actions stall and fail because the server is bogged down with complex queries that arrived earlier. By introducing topics, queues and message handling into an enterprise architecture, back-end developers can enable asynchronous interactions.

With a message-based system, developers place requests in a topic or queue. Subscribers, which might be SOA-based components or lightweight microservices, will read messages off the queue and reliably handle incoming requests when resources are available. This makes architectures more resilient, as they can spread out peak workloads over an extended period.

Queues can also categorize messages they receive. A publish-subscribe system can call on a server with more power to handle complex requests, while other machines handle the rest.

In modern environments, back-end developers will create subscribers as lightweight microservices that can be easily containerized and managed through an orchestration tool such as Kubernetes. As such, message-based systems are easily integrated into modern, cloud-native environments.
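
The Python sketch below illustrates the decoupling described above, with the standard library's queue and threading modules standing in for a real broker; the worker count, timings and message fields are illustrative assumptions.

```python
import queue
import threading
import time

work_queue = queue.Queue()  # stands in for a broker topic/queue

def publisher(n_requests: int) -> None:
    """Clients publish requests and return immediately (asynchronous)."""
    for i in range(n_requests):
        work_queue.put({"request_id": i, "payload": f"order-{i}"})

def subscriber(name: str) -> None:
    """Subscribers drain the queue when resources are available."""
    while True:
        msg = work_queue.get()
        if msg is None:          # sentinel: no more work
            work_queue.task_done()
            break
        time.sleep(0.01)         # simulate handling a request
        print(f"{name} handled request {msg['request_id']}")
        work_queue.task_done()

workers = [threading.Thread(target=subscriber, args=(f"worker-{i}",)) for i in range(2)]
for w in workers:
    w.start()

publisher(20)                    # a burst of requests is absorbed by the queue
for _ in workers:
    work_queue.put(None)         # one sentinel per worker
for w in workers:
    w.join()
```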

Back-end developers should introduce messaging and queues into an enterprise system whenever the following applies:

A traditional back-end architecture involves an application server that will interact with a relational database or a NoSQL system. Back-end developers are typically well-versed and comfortable with these types of system designs.

The problem with the inclusion of topics and queues is that they require back-end developers or system architects to introduce a new component into the enterprise architecture. Systems that include delayed processing, publish-subscribe systems and asynchronous communication are not typically part of an initial system design. As a result, back-end developers who want to use these types of systems must introduce a new server-side technology into the mix.

A reluctance to change and an excessive aversion to risk can often be a barrier to inclusions of messaging systems in modern, enterprise architectures.

Back-end developers trained and experienced with on-premises data centers sometimes overlook the benefits cloud computing can deliver. To be fully qualified to work in a modern enterprise, back-end developers must know how to create and deploy lambda expressions and how to provision and deploy managed services in the cloud.

Serverless computing allows programmers to develop business logic as lambda functions and deploy code directly into the cloud without the need to provision underlying servers or manage services at runtime.

The cloud vendor hosts lambda expressions on reliable, fault-tolerant infrastructure that can scale up or down to handle invocations as they happen. Without any infrastructure to manage, lambda expressions and the serverless computing architecture that supports them can greatly simplify the deployment stack and make the continuous delivery of code to the cloud easier.

Not only does serverless computing reduce the runtime management overhead, it can also be a cheaper deployment model. The pay-per-invocation serverless computing model has the capacity to reduce an organization's cloud spending, which is always an important nonfunctional aspect of enterprise deployments.
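
A minimal sketch of what such a function can look like in Python, written in the AWS Lambda handler style; the API Gateway-shaped event and its field names are assumptions for illustration, and the business logic is deliberately trivial.

```python
import json

def handler(event, context):
    """Business logic only: no servers to provision or manage at runtime."""
    body = json.loads(event.get("body") or "{}")
    order_id = body.get("order_id", "unknown")

    # ... domain logic would run here ...

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"order_id": order_id, "status": "accepted"}),
    }
```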

Back-end developers must be aware of the array of managed services the cloud makes available.

In the past, organizations would think about cloud computing as a reliable location for data storage and remotely hosted VMs. Today, with managed services, a cloud vendor handles the complexities of installation, provisioning and runtime management.

For example, in the past, to deploy container-based microservices into the cloud, the client would need to provision multiple VMs, such as EC2 instances in AWS, and install software to support the master Kubernetes node, master node replicas, multiple worker nodes and networking between master and worker nodes.

Runtime management, software updates, logs, upgrades and audits would be the client's responsibility. With a managed Kubernetes service, such as AWS Fargate, these complexities are hidden from the client.

With a managed Kubernetes service, microservices can be deployed directly into the cloud -- without the need to configure the environment. Logging, auditing and change tracking are provided by the cloud vendor.

A complete roadmap for back-end developers must include an ability to build and deploy serverless applications, along with an understanding of the types of fully managed services cloud vendors make available to their clients.

Singletons, factories, bridges and flyweights are widely known by developers as design patterns. Unfortunately, these design patterns are so common that new patterns created from the continuous delivery of cloud-native software hosted in orchestrated containers don't always get recognized.

Every back-end developer must know the standard Gang of Four design patterns and its categories: creational, behavioral and structural. They must also be familiar with modern, cloud-native design patterns as well, such as the API gateway, the circuit-breaker and the log aggregator.

The API gateway is now commonplace in cloud-native deployments. It provides a single interface for clients that might need to access multiple microservices. Development, integration and testing are easier when API makers deliver their clients a single, uniform interface to use.

Additionally, an API gateway can translate the data exchange format used by microservices into a format that is consumable by devices that use a nonstandard format, such as IoT.
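
A simplified Python sketch of the aggregation side of the pattern is shown below; the service names and internal URLs are hypothetical, and a production gateway would also handle authentication, caching and protocol translation.

```python
import requests  # third-party HTTP client, assumed to be installed

# Hypothetical internal microservice endpoints fronted by the gateway.
SERVICES = {
    "profile": "http://user-service.internal/profiles/",
    "orders": "http://order-service.internal/orders/",
}

def get_customer_view(customer_id: str) -> dict:
    """Return one uniform response assembled from several back-end microservices."""
    view = {}
    for name, base_url in SERVICES.items():
        response = requests.get(f"{base_url}{customer_id}", timeout=2)
        response.raise_for_status()
        view[name] = response.json()
    return view
```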

A cloud-native request-response cycle might include multiple downstream calls before a roundtrip to a back-end resource is complete. However, if one of those microservices at the end of an invocation chain fails, then the failed microservices pipeline has wasted a great deal of processing power.

To stop a flood of microservices calls that will inevitably lead to failure, a common cloud-native design pattern is to include a circuit breaker in the invocation flow. A circuit breaker will recognize when calls to a microservice have either failed or taken an unreasonable length of time to be fulfilled.

When an error trips the circuit breaker, the client gets an immediate error response, and the circuit breaker will stop any downstream calls. This allows the microservice to continue to function while the failed call is worked out and saves the client time. It relieves the back-end system from consuming unnecessary resources.

When a predetermined amount of time transpires, the circuit breaker will send requests downstream again. If those requests are returned successfully, the circuit breaker resets and clients proceed as normal.
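
The Python sketch below captures the behaviour just described: count failures, fail fast while the circuit is open, and retry downstream after a cool-off period. Thresholds and the downstream call are illustrative assumptions rather than any particular library's implementation.

```python
import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after   # seconds to wait before retrying downstream
        self.failures = 0
        self.opened_at = None            # None means the circuit is closed

    def call(self, downstream, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Open circuit: fail fast, no downstream call is made.
                raise RuntimeError("circuit open: failing fast")
            # Cool-off elapsed: allow one trial call; a failure re-opens immediately.
            self.opened_at = None
            self.failures = self.max_failures - 1
        try:
            result = downstream(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0                # success resets the count
        return result
```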

Administrators can deploy stateless, cloud-native applications to any computer node participating in a distributed cluster. However, by default, container-based applications log all events to their node's local hard drive, not to a shared folder or central repository.

As a result, every cloud-native deployment needs a mechanism to push log files from each worker node to a central data store. The logs are then managed within a log aggregator.

The aggregator will not only store the logs for auditing and troubleshooting purposes, it will also standardize the logs in a format that will make it possible to trace user sessions and distributed transactions that touched multiple nodes in the cluster.
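
A minimal Python sketch of the node-side half of this pattern is shown below: every record is emitted as structured JSON with a correlation identifier so an aggregator can stitch sessions together across nodes. The field names are assumptions, and a real deployment would ship the stream with a log forwarder rather than print it.

```python
import json
import logging
import socket

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "node": socket.gethostname(),
            "level": record.levelname,
            "trace_id": getattr(record, "trace_id", None),  # ties distributed calls together
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()       # a shipper/forwarder would pick up this stream
handler.setFormatter(JsonFormatter())
log = logging.getLogger("orders")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("payment accepted", extra={"trace_id": "abc-123"})
```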

Along with knowledge of important microservices design patterns, a cloud-native developer must also be familiar with the tenets of the 12-factor app, which provides guidance on how to configure several things, including the following:

While there is no official standard on how to develop and deploy a cloud-native application, back-end developers who stick to the tenets of the 12-factor app should encounter fewer problems with development, deployment and runtime management of microservices-based systems.

Read the original here:
3 critical stops on the back-end developer roadmap - TechTarget

Loadshedding Cannot Win Against the Cloud – IT News Africa

As load shedding continues to challenge businesses and individuals alike, there emerges a beacon of hope and efficiency: cloud computing. This revolutionary solution is not just the future; it's the present, ensuring that load shedding is but a minor glitch in our digital lives.

Load shedding has posed significant challenges to businesses, particularly in South Africa where, according to a 2019 study by the South African Department of Energy, it cost the economy approximately R59 billion, hindering economic growth, says Graeme Millar, managing director of SevenC Computing.

Millar adds, Beyond the immediate loss of productivity due to power outages, businesses also suffer from disrupted communication channels, potential data losses, compromised security systems, and a general decrease in consumer confidence. This sporadic power supply not only stalls daily operations but erodes the competitive edge of South African businesses in the global market.

Millar continues to say that while load shedding may challenge operations, with the power of cloud computing, businesses can transform these hurdles into mere inconveniences, ensuring data remains accessible and protected in the grand digital scheme of things.

At its core, cloud computing is about storing and accessing data and applications over the internet, rather than relying on a local server or personal computer. This internet-based computing method offers a variety of services including storage, management, and processing of data.

What gives the cloud its edge, especially in combatting load shedding, is its decentralised nature. Instead of data being stored in a single location, cloud services use a network of remote servers hosted on the internet.

This means that even if one server in one location faces power issues, the others can take the load, ensuring continuous data access.

While the uninterrupted access provided by cloud services is a significant advantage during load shedding, the benefits of cloud computing don't end there:

Power interruptions don't just cut off access; they can cause data corruption or loss. Cloud storage offers a safeguard against such losses. With real-time data backup, the cloud ensures that data remains intact and uncompromised even during sudden power outages.

The cloud reduces the need for businesses to invest in expensive infrastructure. Instead of maintaining their own servers (which are susceptible to local power issues), companies can leverage the power of robust cloud servers.

Cloud solutions can easily be scaled up or down based on the needs of a business. This dynamic adjustability ensures businesses only pay for what they use and can adapt swiftly to changing demands.

While cloud computing offers a solution for the present, its potential for the future is limitless.

As technology continues to evolve, the capabilities of the cloud will expand, offering even more robust solutions against challenges like load shedding. It's not just about having a backup plan; it's about adopting a system that offers efficiency, protection, and scalability.

While load shedding might be a hurdle, it's not insurmountable. With cloud power on our side, businesses and individuals can look forward to a future where operations remain unhindered, where data is continuously accessible and protected, and where load shedding is merely a minor inconvenience in the grand digital scheme of things.

Go here to read the rest:
Loadshedding Cannot Win Against the Cloud - IT News Africa

Building a Cyber Resilient Business: The Recover Layer – MSSP Alert

The security landscape is changing rapidly and attacks are becoming more sophisticated and complex. At the same time, businesses worldwide are digitizing their workflows and relying on cloud platforms to carry out their operations.

While these tools are great for storing data and interacting with customers, they can also make businesses more vulnerable to cybercrime. In 2022, one in three businesses experienced data loss from these types of SaaS platforms.

Now more than ever businesses need to become cyber resilient by incorporating a layered approach to security that includes steps for prevention, protection, and recovery. When a breach or other disaster happens, every business should have systems in place to minimize loss and resume business as soon as possible.

Data security and cyberattack prevention often get the most focus, but a comprehensive approach to cyber resiliency must consider business continuity and recovery too. Any business downtime can put your customers, your reputation, your digital infrastructure, and your business model at risk.

There are a number of things that can cause disruption, business downtime, and data and device vulnerability. An attack or breach is only one threat you need to be prepared to recover from. Data loss can also occur from:

Humans make mistakes all the time. An employee could accidentally wipe out data that you need.

Hurricanes, earthquakes, floods, tornados, and fire can all pose a threat to on-site servers.

Hardware and network failures prevent access to your applications and data which disrupt your business operations.

When these events happen, you want the right tools in place to restore your data quickly.

In order to be cyber resilient, consider implementing the following strategies:

It's easy to believe that using a cloud-based application means your data is backed up and secure, but it's not. SaaS vendors explicitly state that data protection and backup is the responsibility of the customer, yet many businesses rely on the native recovery options.

Those options are limited and ineffective. Deleted files are only stored in an application's trash bin for a matter of weeks, while the average time to detect a breach is ten months. By the time most companies notice an attack has happened, it's too late to restore from the application alone.

Having a separate backup is not just necessary to ensure business continuity, but it's also required to remain regulation compliant. GDPR, HIPAA, Sarbanes-Oxley, New York's SHIELD Act, and California's CCPA all require that a business prove it can recover information after a loss.

OpenText Cybersecurity's Carbonite Cloud-to-Cloud Backup protects critical data stored on SaaS platforms by automating backups, encrypting the information, and keeping it secure so that you can restore data quickly.

Companies throughout the world are transitioning to hybrid operations where they use a mix of local servers and cloud system storage for storing data and doing business. As they expand their digital footprint, they're also expanding their risk of cybercrime and data loss between systems.

In the event of a breach or other disaster, you need a cohesive and continual backup that will allow a full restoration.

OpenText Cybersecurity's Carbonite Server Backup provides a secure and continuously updated backup for critical local and cloud servers. With this tool, you can manage your backups and recover specific data or entire systems when needed.

You can have every tool in place for preventing and protecting against cyber attacks and still be susceptible to data loss from natural disasters, hardware failures or power outages. Many of these events give us little to no warning, so being prepared to restore data at any time is necessary.

OpenText Cybersecurity's Carbonite Recover is a continuously updated, running backup of critical servers and systems that lets you fail over to an up-to-date copy with just a few clicks and within minutes.

All of these tools are part of OpenText Cybersecurity, providing you with all three aspects of recovery protection through a single provider in addition to tools that address the prevention and protection layers of a cyber resilient approach.

The team at My Father's World, a Christian homeschool curriculum provider located in tornado country, constantly worried about system backups and disaster recovery.

Initially they ran a parallel physical server system located eight miles from their primary servers, using software to back up between the two locations. That approach became too expensive and time-consuming as the business grew. That's when My Father's World began using cloud-hosted versions of Carbonite Server Backup and Carbonite Cloud Disaster Recovery by OpenText.

Carbonite Server Backup stores copies locally as well as in the Carbonite cloud and provides business continuity and data protection while meeting compliance regulations. Carbonite Cloud Disaster Recovery provides a fully managed service with a remote team of continuity and disaster recovery specialists that ensure the recovery of the business's critical systems in the cloud.

My Father's World signed up for the 48-hour recovery service, but in yearly tests, a full recovery has been achieved in just nine hours, while recovery of a specific file takes only minutes.

Guest blog courtesy of OpenText Cybersecurity. Regularly contributed guest blogs are part of MSSP Alert's sponsorship program.

Continued here:
Building a Cyber Resilient Business: The Recover Layer - MSSP Alert

Solid air: building secure clouds – Software applications – ERP Today

Observability before fragility, this is how to build secure clouds.

Building contemporary software applications is a complex process. Despite the rise of low-code/no-code platforms and a whole firmament of software application development and data analytics automation channels, we still exist in a world where systems are created with instabilities, fragilities and incompatibilities.

With ERP systems very typically supporting mission-critical use cases, which in some scenarios straddle and support life-critical software deployments, the need to assess where the pressure and pain points are in the modern IT stack has never been more pressing.

But where do we start? Is it a question of the surface-level interactions at the presentation layer and the way we manage what users are capable of doing inside their chosen application's user interface? Is it the middle-tier networking layer and all the application programming interface (API) connections that now form neural joins between data, services, operating systems and applications themselves? Could it be the lower substrate base of our technology infrastructures and the way these are now engineered to perform in essentially cloud-native environments? Or, inevitably, is it perhaps all of these tiers and all the internal connection points within them?

Spoiler alert! No prizes for guessing that it's obviously everywhere. Code fragility manifests itself in a seemingly infinite variety of forms and functional formulations, from syntax structures to higher-level system semantics. Given the Sisyphean challenge ahead, then, shall we at least start at the foundation level with infrastructure?

"We need to build enterprise technologies based upon infrastructures that work as dynamically orchestrated entities" - Sumedh Thakar, Qualys

Qualys CEO Sumedh Thakar says he has looked at the use case models being showcased across his firm's customer base and beyond. As an IT, security and compliance solutions company, Qualys has advocated a concerted move to Infrastructure-as-Code (or IaC) as a core capability for secured operations.

Thakar has described the need to now build enterprise technologies based upon infrastructures that work as dynamically orchestrated entities with fine-grained engineering controls.

As it sounds, IaC is a SaaS cloud method that delivers a descriptive model to define and subsequently provision a cloud infrastructure layer. Just as a physical infrastructure would include data storage capabilities, server capacities and properties, lower system-level relationships and network management control tools such as load balancers, an IaC layer does the same, but is defined by code, for the cloud.

Among the technologies on offer here is Qualys CloudView, an IaC-level management product designed to enable firms to assess what is now being called their level of cloud security posture management (CSPM). CEO Thakar points to his firm's ability to shift left (i.e. start earlier) an enterprise's approach to cloud security via a combination of integrated application services designed to insert security automation into the entire application lifecycle. Qualys says this ensures visibility into both pre-deployment application build-time and post-deployment live operational runtime environments to check for misconfiguration and more, all via a single unified dashboard.
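
For readers unfamiliar with the IaC idea itself, the short sketch below (not tied to Qualys CloudView) shows the descriptive, code-defined style using the AWS CDK for Python; the resource and its hardened settings are illustrative and assume the aws-cdk-lib and constructs packages are installed.

```python
from aws_cdk import App, Stack, aws_s3 as s3
from constructs import Construct

class SecureStorageStack(Stack):
    """A reusable, auditable template: hardening is the default, not an add-on."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "DataBucket",
            encryption=s3.BucketEncryption.S3_MANAGED,           # encrypted by default
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,  # no public exposure
            versioned=True,                                      # recover from bad writes
        )

app = App()
SecureStorageStack(app, "secure-storage")
app.synth()  # emits the provisioning template that the cloud then applies
```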

But why does so much misconfiguration happen and can we do more to stop it? Largely, it appears to be a natural byproduct of diversity in both cloud-native and terrestrial enterprise software platform environments that may eventually migrate to cloud.

Misconfigurations happen for a range of reasons, the most blatant being insecure-by-default settings, where security or hardening is an added control rather than a default state, says Martin Jartelius, CSO at Outpost24. The next challenge is that configurations are often evaluated in test environments where insecure configurations may occur, such as use of invalid certificates or ignoring signatures and validations at the point of testing. This means that once a transition to production is made, insecurities remain just as they were tested.

"Years ago, the silver bullet was called gold builds for workstations and servers; today it's Infrastructure-as-Code" - Martin Jartelius, Outpost24

Further, notes Jartelius, new functionality is added over time and, unless organizations are attentive, this may introduce new and, once again, insecure default options. He agrees that an IaC approach offers the benefits of reusing trusted and well-audited templates and thereby reduces the room for human error. It does not address the root causes of misconfigurations, but it does allow consistency.

Without consistency, maintenance and keeping up to date become even harder, and keeps getting harder over time, clarifies Jartelius.

Years ago, the silver bullet was called gold builds for workstations and servers; today it's Infrastructure-as-Code. It will not solve all issues, but it will provide a remedy to a degree and, most importantly, it will allow those proactive users who utilize its full power to reap its benefits.

For those who are not versed in what they are doing, however, it creates the opportunity to do a low-quality job faster, so at the end of the day, just as with any other tool, its usefulness depends entirely on whose hands it is put in.

With so many differences in syntax, format, structure, code dependencies and other delineating factors across every development environment, every application toolset and indeed every cloud platform, misconfiguration is an inevitability throughout the modern IT stack, all the way to the presentation layer. That said, we can dive into cloud security more specifically to understand what's happening here.

"Misconfigurations in the cloud can cause data breaches" - James Hastings, eSentire

Because of the complexities described thus far, the overarching issue many organizations see when working with live cloud environments is a lack of visibility. This manifests in multiple ways such as complications when more than one cloud account or platform is leveraged, issues where the chosen technology necessitates new tooling for security monitoring purposes, or a lack of understanding of what is deployed or how it's configured. This is the opinion of James Hastings in his position as senior product manager at eSentire, a cloud software and security specialist focused on managed detection and response.

Misconfigurations in the cloud, which occur due to improper settings being used when architecting and deploying to cloud platforms, can cause data breaches that have a business-wide impact. According to a recent Cyber Security Hub study on the future of cloud security, almost half of the respondents (44 percent) said their primary challenge with cloud security was a reduced ability to detect and prevent cloud misconfigurations.

This lack of visibility usually stems from improper tooling that either can't pull the needed data from a cloud account or workload, or where the tool isn't designed to scale in cloud environments, said Hastings.

These issues impact both ends of the cloud adoption model; users early in their cloud journey struggle with a lack of knowledge and experience, while cloud-native customers tend to run into issues establishing visibility and monitoring for services like serverless functions and other shared or ephemeral technologies. Outside of visibility, the eSentire team reports that cloud customers experience some common security pain points like alert fatigue and fear of (or the inability to detect) unknown threats.

Looking at the way cloud-centric IT departments are run today, can we ask whether IT security teams and developers really collaborate with each other effectively even in the so-called age of DevOps, when it should arguably be taken as a given? Or is there still a need to improve this aspect of operations?

It really depends on the teams in question, but in my experience, yes, some do and it's becoming easier than ever, says Hastings. Newer cloud security tools take a more holistic approach to security. These solutions usually feature multiple modules that are all intertwined to offer native multi-signal correlation out of the box and are increasingly targeting the shift of security into the development process.

Tooling such as code analysis focuses on hardening application code before it's deployed to server-based or serverless workloads; the hardening of this code reduces the attack surface of the eventual workload and also cuts down on the patching, investigation and response that might otherwise be necessary.

The previously noted CSPM-style checks found in IaC are helpful when it comes to evaluating cloud infrastructure for misconfigurations but, notes the eSentire engineering team, this process happens as a fundamental part of the automation template. So this enables organizations to create secure infrastructure from the get-go and spend less time on remediating platform misconfigurations.

The last tool that we see making a significant impact on this collaboration is the idea of integrating vulnerability assessment into a continuous integration and continuous deployment (CI/CD) pipeline, explains Hastings. Here, before any code or a container can be published, it must have a vulnerability assessment run against it. Organizations are able to set their own bar for security compliance and even go as far as blocking a build that doesn't meet their security standards.
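
A minimal Python sketch of such a gate, assuming a scanner that writes its findings to a JSON report; the report format, severity names and thresholds are illustrative, and each organization would set its own bar as Hastings describes.

```python
import json
import sys

# The pipeline runs a scanner, writes findings to JSON, then calls this step,
# which fails the build if the organization's own bar is exceeded.
MAX_ALLOWED = {"critical": 0, "high": 0, "medium": 5}

def gate(report_path: str) -> int:
    with open(report_path) as f:
        findings = json.load(f)        # e.g. [{"id": "...", "severity": "high"}, ...]
    counts = {}
    for finding in findings:
        severity = finding["severity"].lower()
        counts[severity] = counts.get(severity, 0) + 1
    for severity, limit in MAX_ALLOWED.items():
        if counts.get(severity, 0) > limit:
            print(f"build blocked: {counts[severity]} {severity} findings (limit {limit})")
            return 1                   # non-zero exit code fails the pipeline stage
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```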

All said and done then, doesn't where we stand now in cloud security (and system robustness as a whole) beg a wider question? Are modern IT approaches built to be secure by design, or are we missing out on embedding security into these processes from the start? It comes back to a comment and sentiment spoken many times when customers move to the cloud: cloud is more secure. While the statement is true in broad terms, it really needs qualification; perhaps we should instead say cloud can be more secure, but it's up to each and every organization to lock it down and make it so.

The problem with cloud security, and perhaps system security in general, is that it's all too often bolted on and implemented as an afterthought. Hastings speaks from a position of experience and reminds us that most cloud practitioners (and certainly all cloud-native practitioners) realize this inconvenient truth.

This has ultimately spawned the idea of shifting security left, i.e. starting it earlier, and/or pushing a more embedded approach to security into engineering and DevOps practices, says Hastings. Doing so embeds security throughout the organization's operational fabric and means that code is written and infrastructure is created to a secure and locked-down narrative. It reduces the number of times that teams need to circle back to change code, implement patches or make other changes that likely have change control and approval processes in place.

The combination of security and development streamlines both processes, reduces the organization's risk, and enables velocity.

Cloud computing's evolution has been nothing if not flaky from the start. We know that AWS CEO Adam Selipsky talks of the very early stages of cloud as having been a somewhat embryonic phase, when the virtualization planets were still aligning. It's for sure that we have spent the last decade and more shoring up security, consolidating cloud tool sprawl and looking for key avenues through which we can automate many of the management tasks that can lead to cloud fragility in the first place.

If we had the chance to do cloud all over again, we might use a different and more considered approach, but perhaps we wouldnt. This might just be a hefty symptomatic nuance of the way new technology platforms rapidly escalate and eventually germinate, oscillate, occasionally fluctuate and finally become part of our operational substrate.

Go here to read the rest:
Solid air: building secure clouds - Software applications - ERP Today

Apple admits that data should be kept off its cloud – Fudzilla

Much safer that way

Fruity cargo cult Apple has effectively admitted that if you want to keep your data safe, it is probably better that you don't put it on its cloud.

While it was banging on about data security at its Wonderlust event, Apple seemed to be telling all who listened that keeping your data safe sometimes means keeping it out of the cloud.

Apple said that the risk was always there that someone, somewhere, may get access to your personal information in the cloud.

To prove Apple's point, it said that all Siri Health data is processed on the Apple Watch S9 instead of the cloud, and the iPhone 15 Pro's A17 Pro chip's Neural Engine uses machine learning on the device without sending your personal data to the cloud.

The Tame Apple Press, along with those speaking at the event seem to be pushing edge security rather than cloud. However, what they appear to be ignoring is that Apple is admitting that its cloud security is not up to snuff.

If only companies which administered cloud servers took their security a little more seriously.

See the rest here:
Apple admits that data should be kept off its cloud - Fudzilla