
Robin.io Platform on QCT Servers Accelerates Cloud-native Transformation – Business Wire

SAN JOSE, Calif.--(BUSINESS WIRE)--Quanta Cloud Technology (QCT), a global data center solution provider, announced the IronCloud Robin Cloud Platform, the latest addition to its 5G solutions.

The partnership between QCT and Robin helps customers accelerate their cloud-native transformations. The solution is built on Robin.io's Multi-Data Center Automation Platform (MDCAP) and Cloud Native Platform (CNP), a comprehensive carrier-grade bare-metal-to-services orchestration and enhanced Kubernetes platform for 5G and Multi-access Edge Computing (MEC) applications. The solution harmonizes virtual machines and containers, enabling unprecedented resource sharing with easy-to-use, unified workflows and lifecycle automation that is customer-proven to reduce both CAPEX and OPEX. All of this is deployable on QCT servers using 3rd Gen Intel Xeon Scalable processors.

To accelerate cloud-native transformations, QCT and Robin have developed a centralized automation platform using hardware, network acceleration technologies and best practices for orchestration, automation and lifecycle management. The result is an optimized solution that utilizes a cloud-native infrastructure for Telco workloads that supports both virtual machines and containers, in the same resource sharing cluster, from regional data center to far edge. Operators and enterprises can reliably reach the high throughput and low latency required by cloud-native 5G applications (i.e., Core, RAN and CDN). This partnership reduces the challenges of deploying and managing networks, while providing an optimized, cost-efficient infrastructure.

The IronCloud Robin Cloud Platform runs on 3rd Gen Intel Xeon Scalable processor-based servers.

"Lifecycle management and automation are keys to reducing 5G infrastructure and operation costs," said Mike Yang, president of QCT. "By partnering with Robin.io, we have created an automated cloud-native platform, IronCloud, for our mutual customers to boost 5G application time to market."

"Our partners are creating optimized, performant and automated solutions that accelerate the path to cloud-native for 5G," stated Keate Despain, Intel Network Builders and Ecosystem Programs Director. "The IronCloud Robin Cloud Platform is a solution that will enable companies to deliver an end-to-end cloud-native platform and a 5G service delivery network at cost savings."

"Cloud-native technology has proven benefits for 5G economics," said Partha Seetala, CEO and founder of Robin.io. "Our close relationships with QCT and Intel have delivered a production-ready, open and extensible platform for deployment and life cycle management. It's an entire Telco Network Stack, with both CNFs and VNFs, that offers industry-leading TCO."

About QCT

QCT is a global data center solution provider that combines the efficiency of hyperscale hardware with infrastructure software from a diversity of industry leaders to solve next-generation data center design and operational challenges. QCT serves cloud service providers, telecoms and enterprises running public, hybrid and private clouds. The parent of QCT is Quanta Computer, Inc., a Fortune Global 500 corporation. http://www.qct.io

About Robin.io

Robin.io, the 5G and application automation platform company, delivers products that automate the deployment, scaling and life cycle management of data- and network-intensive applications and 5G service chains across edge, core and RAN. The Robin platform is used globally by companies including BNP Paribas, Palo Alto Networks, Rakuten Mobile, SAP, Sabre and USAA. Robin.io is headquartered in Silicon Valley, California. More at http://www.robin.io and Twitter: @robin4K8S.

Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.


Top Edge Computing Companies of 2021 – IT Business Edge

Edge computing is a distributed information technology (IT) architecture that not only addresses the shortcomings of traditional cloud computing, but also works hand-in-hand with it. The networking philosophy focuses on bringing computing as close to the source of client data as possible.

In essence, edge computing refers to bringing computation to the network's edge (a user's computer, an Internet of Things (IoT) device, an edge server, etc.) and running fewer processes in the cloud. By moving cloud-intensive processes to local places, edge computing minimizes long-distance communication between a client and a server.

Also read: How AI Will Be Pushed to the Very Edge

Cloud computing offers several advantages over on-premises computing. Cloud service providers offer centralized access to the cloud via any device over the internet. Cloud computing, however, can lead to network latency, given the sheer distance between users and the collection of data centers where cloud services are hosted.

This is where edge computing comes into the picture. By bringing computation closer to end users, edge computing minimizes the distance client data has to travel while keeping the centralized nature of cloud computing. For example, let's say a bank incorporates 50 high-definition (HD) IoT video cameras to secure the premises. The cameras stream a raw video signal to a cloud server. On the cloud server, the video output is put through a motion-detection application to segregate clips that feature movement. These clips are saved on the database server. Due to the high volume of video footage being transferred, significant bandwidth is consumed.

By introducing the motion sensor computation to the network edge, much of the video footage will never have to travel to the cloud server. This will lead to a significant reduction in bandwidth use. This can be implemented by introducing an internal computer to each video camera.
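To make the bank-camera example concrete, here is a minimal sketch of the kind of edge-side filter such a camera's internal computer could run. It uses simple OpenCV frame differencing as one possible approach; the threshold value and the upload_clip_to_cloud helper are hypothetical stand-ins, not part of any particular vendor's product.

```python
import cv2  # OpenCV, assumed available on the camera's internal computer

MOTION_PIXEL_THRESHOLD = 5000  # hypothetical tuning value: changed pixels that count as "movement"

def upload_clip_to_cloud(frame):
    """Hypothetical helper: a real deployment would push the clip to the bank's cloud server."""
    pass

def filter_motion_on_camera(camera_index=0):
    """Runs on the camera itself; only frames that contain movement ever leave the device."""
    cap = cv2.VideoCapture(camera_index)
    ok, first = cap.read()
    if not ok:
        return
    prev_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Count pixels that changed noticeably since the previous frame
        delta = cv2.absdiff(prev_gray, gray)
        _, mask = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > MOTION_PIXEL_THRESHOLD:
            upload_clip_to_cloud(frame)  # only "important" footage consumes bandwidth
        prev_gray = gray
    cap.release()
```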

The cloud server can now communicate with a larger number of video cameras, as only important footage is transferred to the server. Edge computing has several use cases, some of which include self-driving cars, medical monitoring devices, and video conferencing. In this guide, we will traverse all you should know about the best edge computing companies.

Also read: The Impact of 5G on Cloud Computing

Here are the best edge computing providers:

Mutable is a public edge cloud platform that operates like an Airbnb for servers. Mutable's public cloud deployment on the edge brings the internet closer to users by turning network operators' underutilized servers, located within 40 kilometers of users, into new revenue streams.

By bringing the processing closer to the makers of next-generation IoT, robotics, autonomous vehicle, augmented reality/virtual reality (AR/VR) and cloud gaming applications, Mutable enables developers to focus on continuous product iteration and development.

Key Differentiators

MobiledgeX is building a marketplace of edge computing services and resources that will connect developers with leading telecom operators like British Telecom, Telefonica, and Deutsche Telekom, in the quest to power the next generation of devices and applications.

The edge computing company develops edge cloud software to help telco operators run a telco edge cloud on their infrastructure. It also provides a device-native software development kit (SDK) and matching engines that developers use to bring their cloud-native applications to the network edge and take advantage of the telco edge cloud.

Key Differentiators

Affirmed Networks' technology is based on an open architecture and reduces operational costs, provides flexibility in the application of services and delivers high performance in a small footprint to help you meet customer requirements.

Affirmed Networks' mobile edge computing (MEC) solution, Affirmed Cloud Edge, offers enterprises and communication service providers (CSPs) the ability to keep data local and host applications on customers' premises to maximize efficiency and reduce latency.

Affirmed Cloud Edge can be utilized and deployed at the network edge or used as part of the cloud edge offerings from Microsoft Azure, Amazon Web Services (AWS) or Google Cloud.

Key Differentiators

EdgeConneX builds and operates proximate, powerful, purpose-built data centers for customers in any deployment or scale: edge, far edge, hyperscale and edge cable landing stations. At the time of writing, EdgeConneX operates over 40 data centers.

EdgeConneX designs and deploys purpose-built data centers to ensure the most efficient placement of customer network and IT infrastructure, in order to reduce latency and heighten performance. This includes rapidly built and specially tailored large-scale data center solutions for hyperscale customers.

Key Differentiators

Section helps accelerate your path to the network edge. The edge computing platform powers next-generation applications for software as a service (SaaS), platform as a service (PaaS), and application providers to deliver more secure and quicker digital experiences.

The edge computing provider allows you to deploy, scale, and protect your applications at the network edge, with the necessary flexibility, control, simplicity, and familiarity.

Key Differentiators

Mutable harnesses the power of underutilized servers at a physical distance of less than 40 kilometers and ensures latency rates of less than 20 milliseconds. The edge computing company transforms these servers into instant micro-data centers and promotes security, proximity, speed and sustainability.

Overall, Mutable is the best edge computing platform for enterprises. MobiledgeX simplifies deployment, management and analytics for app developers and telco operators and is a sublime platform. Affirmed Networks supports low latency, high-bandwidth applications and helps reduce OpEx with automated service provisioning and orchestration.

EdgeConneX is ideal for hyperscale customers. Section is a worthwhile option in its own right. Go through the offerings of each of the edge computing providers mentioned in this guide and select one that corresponds to your enterprise requirements and expectations.

Read next: AIOps Trends & Benefits for 2021


The Basics of End-to-End Cloud Media Production – TV Technology

A paradigm shift in media-production technologies is changing how the cloud is perceived, used, presented and applied to media production. The lines between ground-based and cloud-based media production are becoming blurred.

One of the first steps in getting to a cloud-based production environment is understanding how the requirements and components differ, and how physical hardware interfaces are being replaced by dashboards and virtualized desktops.

We start with cloud computing, an application-based solution also known as infrastructure in the cloud. Cloud computing is divided into a front-end part and a back-end part. The user doesn't need to understand these details thoroughly, but it is helpful to know that the end-to-end ecosystem is changing so that the acceptability of these differences can be evaluated and adopted.

Users needing access to the cloud will typically employ a browser and will utilize a (public) internet service provider (ISP) for that access. Sometimes, instead of an ISP, a direct connection portal may be available from the cloud service provider as an added-cost feature that provides faster, more secure connectivity.

The primary component of a cloud computing solution is its back-end, which has at its core the responsibility for securing, storing and/or processing data on its often proprietary central servers, compute stacks, databases and storage sets. Cloud computing is multifaceted, employing databases, servers, applications and other elements including orchestration, storage and monitoring.

For years this Cloudspotter's Journal has identified the advantages of cloud capabilities including scalability, virtualization, availability and the like (Fig. 1). It goes without saying that services available in the cloud continue to grow.

Yesteryear's cloud focused on storage. Today, cloud providers offer hundreds of specific services ranging from compute and storage to cloud consulting (through partners) and management. Each provider aims to enable users to deploy their compute and storage requirements in the cloud, offering various competitive platforms, all eager for users to experiment in any way conceivable.

MEDIA-SPECIFIC AND CLOUD FORWARD

In more recent times the capabilities typically exposed in cloud services have started to reach deeper and farther into media-specific offerings. Global connectivity coupled with the rapid exchange of content throughout the world has strengthened those capabilities, with the provisioning of services increasing at an almost exponential rate. Applications for media production in the cloud are no longer just a unique opportunity; they are becoming a way of operating.

Cloud-forward initiatives are definitely expanding beyond simple storage and compute functions or alternatives to back-office software consolidation. Once thought of mainly as backup storage, cloud services now endeavor to provide full-time playout of programming on a channelized basis that includes sports, gaming, OTT services and delivery, and even end-to-end production using core products from providers who had previously only utilized ground-based server architectures.

Major media organizations are steadily adopting and combining technologies that take the hardware out of the shop and place it in an entirely software-centric environment connected by on-ramps and off-ramps located almost anywhere. Dynamic scalability and high-performance storage/compute capabilities are enabling this fundamental change in how content is assimilated into the production ecosystem.

Today, virtual machines provisioned through infrastructure-as-code run software applications that were formerly hosted on dedicated pizza-box servers. As a result, organizations are already shifting away from in-house central equipment rooms, and past the outsourced data center, directly into public cloud environments. Media workflows are now being developed as cloud-native solution sets, ignoring how things used to be done and placing them into unconstrained, non-interdependent environments that are treated almost the way a new greenfield facility might have been engineered for single-purpose occupation as recently as five years ago.

Automation is a key factor in making cloud-native media production successful. Back-office-like servers are no longer mainstream. Individual sets of configurations coupling a single-function device into the next single operation are evaporating. Plug-in management that was custom-tailored to the product and then tweaked to meet operational needs is now orchestrated to rapidly adapt to multiple functional requirements without discrete, complex or time-consuming adaptations.

Once the automation process is confirmed and the capability requirements are established, the rest just happens. Through a dashboard abstraction of the functions, users no longer need to concern themselves with the nuances of manually moving files around various services that are typically steeped in numerous interfaces, each of which must be individually accessed and configured for each successive use or application. Flows are continuous, repeatable (if necessary) and able to be monitored.

Using configuration management tool sets, images of the application-specific interfaces (APIs) are landed on a resource pool of compute servers that operationally never see the light of tech-administrators. Systems are booted up, configured for the applications per the dashboard and the artist/editor starts their creative tasks.

COLLAPSE AND REDEPLOY

Once the production, show or activity is completed and confirmed, automation then collapses the environment, which stops the process and halts the billing charges. Ground-based users don't do anything other than confirm the "end" or "stop" command and walk away. If there is a requirement to change or adjust something, the exact configurations can be re-established and the workflows can continue as before.

New capabilities were brought into accelerated use as a consequence of Covid-19 and are being applied to next-gen-production ecosystems. Content supply chains can now be adapted to cloud, assuming the feature sets are available. Previously ground-based features, such as analysis, transformation and quality control now become exception-based background tasks in the new cloud model.

Virtual desktop infrastructures (VDI) essentially take all the previously required elements of media production and wrap them into a solution that is secure, features high-quality and fast-reacting actions and can be accessible anywhere there is a stable internet connection with sufficient bandwidth (Fig. 2).

VDI technologies, which utilize functionality known as zero clients, offer users a variety of advantages including mobility, accessibility, flexibility and improved security. The latter, security, is accomplished because the data now lives only on the servers and does not need to be transported to the active user's workspace. Cloud-enabled platforms can replicate data using other secure technologies only to specific locations, even employing transparent ledger technology known as blockchain.

Microservices and containerization are the keys to this future cloud infrastructure for production (Fig. 3). Calling up only what is needed, only when it is needed, is what production services in the cloud are moving to.

Entire catalogs of capabilities are growing out of these cloud-native services, which up until now could not be established except through discrete sets of hardware and software that were purpose-built to do one and usually only one specific function or operation.

Reliable, secure, scalable, protected and cost-effective media production, without the annoyances of managing a complex local infrastructure, is changing the face of media from one end to the other. Whether the production services are hosted in a public cloud, regional co-lo site or even in your own private data center, the concepts developed (and being perfected all the time) are real, available and are here today.

If you're not currently using these kinds of services, you probably will in short order. Stay tuned.

Karl Paulsen is chief technology officer at Diversified and a frequent contributor to TV Tech in storage, IP and cloud technologies. Contact him at kpaulsen#diversifiedus.


The 10 Coolest Cloud Security Tools Of 2021 (So Far) – CRN

Locking Down The Cloud

Vendors have made great advances thus far in 2021 around securing cloud applications, data and workloads, rolling out tools that can discover critical data located in the public cloud, centrally manage Amazon Web Services EC2 instances in real time, and visualize and prioritize cloud security risks to reduce the attacker's blast radius and facilitate faster investigations.

Securing customer and workforce identity has been a major area of investment, with companies provisioning just-in-time access to hybrid and cloud servers and making it possible for all users and devices to securely access cloud, mobile, SaaS and on-premises applications and APIs. Other vendors, meanwhile, have focused on automatically discovering and preventing risks for new SaaS applications.

Four of the coolest new cloud security tools come from companies based in California, four come from companies based in the Northeastern United States, and companies based in Colorado, Michigan and France each contributed one offering to the list. Read on to learn what new cloud security features and functionality partners are now able to enjoy.

For more of the biggest startups, products and news stories of 2021 so far, click here.


How Virtual, Cloud-Based Technologies Are Powering the Next Industrial Revolution – Total Retail

Companies in many industries, including technology, construction, and healthcare, are completely revamping the way in which their manufacturing arms are designing, building, producing and servicing the goods they need for projects and customers.

Just a short five years ago, these manufacturers began to embark on the second coming of their own industrial revolution. It wasn't enough that the internet and even mobile technology created a wealth of efficiencies in the production cycle.

Instead, a few years ago, these manufacturers began to see how virtual technologies could completely change the way they operated, interacted with design teams, and provided more timely responses to customer inquiries.

Gains in these areas as a result of virtual technologies have been quick to illustrate early and noticeable returns for the executives running these businesses.

In fact, nearly half of the executives polled in a recent survey (44 percent) said they're experiencing approximately 10 percent in operational savings by using immersive mixed reality technologies in the design, training, production or customer service areas of their business. A year ago, only a quarter of businesses (26 percent) were seeing similar results in savings.

In terms of overall production efficiency, 45 percent of enterprises are seeing at least a 10 percent increase in production efficiency today, up from only 11 percent a year ago.

However, these increases don't tell the whole story. When these virtualized technologies (such as augmented reality and virtual reality, AR/VR) were initially utilized by manufacturers, they were leveraged in an on-premise environment. However, today they're utilized in a cloud environment, bringing even more efficiencies and returns to the business.

The basic difference between cloud and on-premise data is where it lives. On-premise software and data are installed locally, on a manufacturer's computers and servers inside the actual facility, whereas cloud software and data are hosted on a server and accessible via a web browser over the internet.

On-premise infrastructures limit the speed and scalability needed for today's virtual designs, and they also limit the ability to conduct knowledge sharing between organizations, which can be critical when designing new products and understanding the best way to approach virtual buildouts.

Manufacturers are overcoming these limitations by leveraging cloud-based (or remote server-based) virtual platforms powered by distributed cloud architecture and 3D vision-based artificial intelligence. These cloud platforms provide the desired performance and scalability to drive innovation in the industry at speed and scale.

Imagine what it would be like to virtually design an airplane using the different eyeglass filters used by an ophthalmologist during a typical eye exam. Some filters allow you to read only the larger print because they restrict your ability to read; this would be designing virtually in an on-premise software environment. Other filters allow you to see fine print with pinpoint accuracy; this is what's possible in a cloud environment.

One of the key requirements for virtual applications is to precisely overlay an object with its model, or digital twin. This helps in providing work instructions for assembly and training, as well as catching any errors or defects in manufacturing.

Most on-device object tracking systems use 2D image and/or marker-based tracking. This severely limits overlay accuracy in a 3D environment because 2D tracking cannot estimate depth with high accuracy, and consequently cannot estimate scale and pose. This means that even though users can achieve what looks like a good match when looking from one angle and/or position, the overlay loses accuracy during alignment.

Deep learning-based 3D AI allows users to identify 3D objects of arbitrary shape and size in various orientations with high accuracy in the 3D space. This approach is scalable with any arbitrary shape and is amenable to use in enterprise use cases requiring rendering overlay of complex 3D models and digital twins with their real-world counterparts.

Cloud technology is pivotal to achieving this level of detail because the technology and hardware used in an on-premise environment easily overheat from the compute power needed. Virtual technology requires a precise and persistent fusion of the real and virtual worlds. This means rendering complex models and scenes in photorealistic detail, rendered at the correct physical location (with respect to both the real and virtual worlds) with the correct scale and accurate pose.

This is only achieved today by using discrete GPUs from one or more cloud-based servers and delivering the rendered frames wirelessly or remotely to head-mounted displays (HMDs) such as the Microsoft HoloLens and the Oculus Quest.

An increasing number of manufacturers are moving their virtual solutions away from on-premise data centers. Today, 48 percent of enterprises are leveraging cloud-hosted environments, and another 21 percent say they will leverage the cloud when they implement immersive reality solutions in the future.

Dijam Panigrahi is co-founder and COO of GridRaster, a leading provider of cloud-based AR/VR platforms that power compelling high-quality AR/VR experiences on mobile devices for enterprises.


How Much Is Hybrid Work Really Going to Cost? – PCMag

If one segment of the tech industry absolutely loves COVID-19, it's definitely cloud vendors. Whether it's public cloud infrastructure or all the managed services that run on it, the pandemic has companies accelerating cloud adoption just to keep the business working with no one in the office. In fact, a study by Synergy Research Group has business cloud spending growing by 35% in 2020 and exceeding spending on on-prem data centers and software, which dropped by 6% in the same year.

Now factor in that the pandemic has us all fired up about hybrid work, with IDC research predicting that up to 90% of enterprises will be permanently using that model by the end of 2022. That doesn't just mean employees working on their couch. It's that plus multi-cloud for mission-critical apps that now need to span hybrid workers. This is where some resources are at HQ, some are in the cloud, some managed services are on one cloud while others are in another cloud, some workers are at home, some are in the office. If you're an IT pro, that's giving you a headache. If you're a CFO, it's about to give you a migraine.

Just because SMBs are moving back to the office doesn't mean they're going to scale back cloud spend. Hybrid work necessitates new kinds of managed services to bridge that couch-to-office gap. You'll need that in the new normal because empowering remote workers using cloud apps is simply more cost-effective, but it's still a new cost. And it's probably not the only cloud cost you need to look at.

But that doesn't mean you shouldn't take a close look at what you've got clouding around out there. A recent survey by CloudCheckr found that 35% of companies reported unused virtual servers that were never disabled, 34% said employee adoption of cloud infrastructure and services was often low, and 32% said that apps they tried to migrate to the cloud were blocked because the realities of cloud architecture weren't taken into account prior to the move.

And the CloudCheckr survey doesn't even take into account managed services. Shadow IT, a pandemic, and a cloud computing universe that has a marketing answer for pretty much any kind of workload, without your IT guys ever needing to wander into a data center, make for a bad recipe for your wallet.

And don't kid yourself, it's happened. The pandemic didn't just hit us hard, it hit us fast. So more than a few SMB operators simply told teams to tap into whatever cloud services they needed to deal with remote work and left IT to play catch-up. So where does that leave you now that things are starting to return to sanity?

It leaves you doing math, but you need to do more than just the basics. Sure, do an audit first. What's in the cloud now? Why did someone put it there? Tally that up and see what you're spending now and what you were spending in 2019. Now you know how much more money is floating up into the cloud and why. But the next step takes more homework.
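As for the tally itself, it is simple arithmetic. A rough sketch, using made-up line items and figures, might look like this:

```python
# Hypothetical monthly cloud spend per service, in dollars, pulled from your audit
spend_2019 = {"file sharing": 400, "video conferencing": 150, "virtual servers": 900}
spend_now  = {"file sharing": 650, "video conferencing": 600,
              "virtual servers": 2100, "remote desktop mgmt": 800}

# Compare every service that shows up in either year
for service in sorted(set(spend_2019) | set(spend_now)):
    before, now = spend_2019.get(service, 0), spend_now.get(service, 0)
    print(f"{service:22s} ${before:>6,} -> ${now:>6,}  (+${now - before:,}/month)")

total_delta = sum(spend_now.values()) - sum(spend_2019.values())
print(f"Extra cloud spend vs. 2019: ${total_delta:,}/month")
```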

Is the cost justified or is there a cheaper way? That's not just math, that's some real deep investigation into not only what kinds of easy peasy cloud services are relevant, but also what's still possible in that musty, dusty data closet. Think about that for a bit, and you'll realize it's a lot of work, but it's also necessary work that you need to finish as soon as possible.

And there are other ouch factors to consider:

Your IT skillset. Clouds like Azure and AWS aren't lightweight environments. Yeah, you can spin up a Windows server in a few minutes, but spinning up dozens of them, connecting them, clustering them, connecting on- and off-prem clients to them, installing whatever workloads you need on them, and then making sure they can handle a sudden scaling need or disaster recovery problem...no, that's not easy and it's only part of a much longer list of requirements. You can't be an IT generalist to handle all that. You need full-on devops personnel, which means you need to factor in that cost.

Security. Every public cloud server that talks to clients or another server in a different cloud is a target both on its own and when that communication happens. Every client talking to any cloud service is a potentially vulnerable data exchange. All that data stored in the cloud is open to all kinds of attacks or simple carelessness from the service manager. Sure it's convenient, but even with a surge in cloud security, it's still a vulnerable environment and you need to secure yours as much as you can. That'll need a budget infusion, too.

The real cost of remote workers. We all know hybrid work is here to stay, but how much is that really going to cost? IT has been managing remote problems, like client, router, and bandwidth issues, on an ad hoc basis. Solve the problem any way you can till things return to normal. Ok, but we're almost there, and now the remote scenario looks permanent. That means you need to come up with a long-term plan, and that covers a lot of ground. Probably new management software, but what's it need to do exactly? And do you give everyone a new router so IT can standardize; how much will that cost? How do you need to update your help desk? Do you need to pay for home worker bandwidth just to keep a hybrid workload moving smoothly? That's a lot of questions, and it's only a few of the ones you'll need to ask. Others will include space management software for hot-desking, employee monitoring, and new "hybrid optimized" software, like say, Windows 11.

Cloud services are a great resource for a hybrid work model. But considering cost is a necessary step if you want to keep your IT budget in the black even in 2021. It's not just about the bill. It's matching the bill to how it affects your business' capabilities. Time to sit your IT team down, figure out the questions, and start coming up with the answers. 2022 isn't that far off. Don't agree? Hit me in the comments.

Robinhood Gets a $70 million ticket for harming millions of consumers. This is a lesson for those who think a little consumer misinformation to drive up revenue is just a white lie. Organizations like FINRA, which whacked Robinhood, are looking for you.

Zoom wants to add real-time translation to your video calls. Trying to hang on to that pandemic growth, Zoom acquired Kites, which has been developing real-time translation software. It'll be a while before it's actually in your Zoom client, but for multinational companies, it'll be a real boon.

Facebook hits $1 trillion. It's finally happened. Two antitrust cases get dismissed and Facebook shares surged so hard the company has passed the big T.

Yet More PCMag Business News

What Exactly Is The 'Hybrid Work Model'? Opinions Vary Widely

Microsoft Gets Fuzzy On When You Can Actually Upgrade to Windows 11

What Will Businesses Get From Windows 11?

GitHub Enlists AI to Help Your Devs Code



The Origin of Technosignatures – Scientific American

The search for extraterrestrial intelligence stands out in the quest to find life elsewhere because it assumes that certain kinds of life will manipulate and exploit its environment with intention. And that intention may go far beyond just supporting essential survival and function. By contrast, the general search for other living systems, or biosignatures, really is all about eating, reproducing and, not to put too fine a point on it, making waste.

The assumption of intention has a long history. Back in the late 1800s and early 1900s the American astronomer Percival Lowell convinced himself, and others, of non-natural features on the surface of Mars, and associated these with the efforts of an advanced but dying species to channel water from the polar regions. Around the same time, Nikola Tesla suggested the possibility of using wireless transmission to contact Mars, and even thought that he might have picked up repeating, structured signals from beyond the Earth. Nearly a century earlier, the great mathematician and physicist Carl Friedrich Gauss had also thought about active contact, and suggested carving up the Siberian tundra to make a geometric signal that could be seen by extraterrestrials.

Today the search for intention is represented by a still-coalescing field of cosmic technosignatures, which encompasses the search for structured electromagnetic signals as well as a wide variety of other evidence of intentional manipulation of matter and energy: from alien megastructures to industrial pollution, or nighttime lighting systems on distant worlds.

But there's a puzzle that really comes ahead of all of this. We tend to automatically assume that technology in all of the forms known to us is a marker of advanced life and its intentions, but we seldom ask the fundamental question of why technology happens in the first place.

I started thinking about this conundrum back in 2018, and it leads to a deeper way to quantify intelligent life, based on the external information that a species generates, utilizes, propagates and encodes in what we call technology: everything from cave paintings and books to flash drives and cloud servers and the structures sustaining them. To give this a label I called it the dataome. One consequence of this reframing of the nature of our world is that our quest for technosignatures is actually, in the end, about the detection of extraterrestrial dataomes.

A critical aspect of this reframing is that a dataome may be much more like a living system than any kind of isolated, inert, synthetic system. This rather provocative (well, okay, very provocative) idea is one of the conclusions I draw in a much more detailed investigation, my new book The Ascent of Information. Our informational world, our dataome, is best thought of as a symbiotic entity to us (and to life on Earth in general). It genuinely is another "ome," not unlike the microbiomes that exist in an intimate and inextricable relationship with all multicellular life.

As such, the arrival of a dataome on a world represents an origin event. Just as the origin of biological life is, we presume, represented by the successful encoding of self-propagating, evolving information in a substrate of organic molecules, a dataome is the successful encoding of self-propagating, evolving information into a different substrate, with a seemingly different spatial and temporal distribution, routing much of its function through a biological system like us. And like other major origin events, it involves the wholesale restructuring of the planetary environment, from the utilization of energy to fundamental chemical changes in atmospheres or oceans.

In other words, I'd claim that technosignatures are a consequence of dataomes, just as biosignatures are a consequence of genomes.

That distinction may seem subtle, but it's important. Many remotely observable biosignatures are a result of the inner chemistry of life; metabolic byproducts like oxygen or methane in planetary atmospheres, for example. Others are consequences of how life harvests energy, such as the colors of pigments associated with photosynthesis. All of these signatures are deeply rooted in the genomes of life, and ultimately that's how we understand their basis and likelihood, and how we disentangle these markers from challenging and incomplete astronomical measurements.

Analogous to biosignatures, technosignatures must be rooted in the dataomes that coexist with biological life (or perhaps that had once coexisted with biological life). To understand the basis and likelihood of technosignatures we therefore need to recognize and study the nature of dataomes.

For example, a dataome and its biological symbiotes may exist in uneasy Darwinian balance, where the interests of each side are not always aligned, but coexistence provides a statistical advantage to each. This could be a key factor for evaluating observations about environmental compositions and energy transformations on other worlds. We ourselves are experiencing an increase in the carbon content of our atmosphere that can be associated with the exponential growth of our dataome, yet that compositional change is not good for preserving the conditions that our biological selves have thrived in.

Projecting where our own dataome is taking us could provide clues to the scales and qualities of technosignatures elsewhere. If we only think about technosignatures as if they're an arbitrary collection of phenomena rather than a consequence of something Darwinian in nature, it could be easy to miss what's going on out there in the cosmos.


You can hijack Google Cloud VMs using DHCP floods, says this guy, once the stars are aligned and… – The Register

Google Compute Engine virtual machines can be hijacked and made to hand over root shell access via a cunning DHCP attack, according to security researcher Imre Rad.

Though the weakness remains unpatched, there are some mitigating factors that diminish the potential risk. Overall, it's a pretty neat hack if a tad impractical: it's an Ocean's Eleven of exploitation that you may find interesting from a network security point of view.

In a write-up on GitHub, Rad explains that attackers can take over GCE VMs because they rely on ISC DHCP software that uses a weak random number generator.

A successful attack involves overloading a victim's VM with DHCP traffic so that it ends up using a rogue attacker-controlled metadata server, which can be on the same network or on the other side of the internet. The DHCP flood would typically come from a neighboring attacker-controlled system hosted within Google Cloud.

When the technique is pulled off just right, the VM uses the rogue metadata server for its configuration instead of an official Google one, and ultimately the miscreant can log into the VM via SSH as the root user.

ISC's implementation of the DHCP client, according to Rad, relies on three things to generate a random identifier: the Unix time when the process is started; the PID of the dhclient process; and the sum of the last four bytes of the Ethernet addresses (MAC) of the machine's network interface cards. This random number, XID, is used by the client to track its communications with Google's DHCP servers.

So the idea is to hit the victim VM with a stream of DHCP packets, with a best guess for the XID, until the dhclient accepts them over Google's legit DHCP server packets, at which point you can configure the network stack on the victim VM to use the rogue metadata server by aliasing Google server hostnames.

Two of these XID ingredients, Rad says, are predictable. The last four bytes of the MAC address are the same as the internal IP address of the box. And the PID gets assigned by the Linux kernel in a linear way.

"To mount this attack, the attacker needs to craft multiple DHCP packets using a set of precalculated/suspected XIDs and flood the victim's dhclient directly," explains Rad.

"If the XID is correct, the victim machine applies the network configuration. This is a race condition, but since the flood is fast and exhaustive, the metadata server has no real chance to win."

Crafting the correct XID in a flood of DHCP packets is made easier by the insufficient randomization scheme. Doing so allows the attacker to reconfigure the target's network stack at will.
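As a rough illustration of why that matters, the sketch below enumerates the candidate XID space an attacker might precompute from the three ingredients described above. It is conceptual only: the seed-to-XID mapping is stubbed out because the real value depends on dhclient's internal PRNG, and the time window and PID range are assumptions chosen for illustration, not values from Rad's advisory.

```python
import ipaddress

def mac_derived_sum(internal_ip: str) -> int:
    """Per Rad's write-up, the last four bytes of the VM's MAC match its internal IP address."""
    return sum(ipaddress.IPv4Address(internal_ip).packed)

def seed_to_xid(seed: int) -> int:
    """Placeholder only: the real XID comes from dhclient's own PRNG seeded with this value."""
    return seed & 0xFFFFFFFF

def candidate_xids(internal_ip: str, start_time_guess: int,
                   time_window: int = 60, pid_range: range = range(290, 320)):
    """Enumerate the small space of plausible XIDs built from the three named ingredients."""
    mac_sum = mac_derived_sum(internal_ip)
    for t in range(start_time_guess, start_time_guess + time_window):  # assumed uncertainty window
        for pid in pid_range:  # PIDs are handed out linearly, so a narrow guess is realistic
            yield seed_to_xid(t + pid + mac_sum)

# Example: only a couple of thousand candidates under these assumptions,
# which is why an exhaustive DHCP flood can win the race against the real server.
print(len(set(candidate_xids("10.128.0.2", 1625000000))))
```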

According to Rad, Google relies on its metadata servers to handle the distribution of SSH keys. By impersonating a metadata server, SSH access can be granted to the attacker.

Rad's technique is based on an attack disclosed last year by security researcher Chris Moberly, but differs in that the DHCP flooding is done remotely and the XIDs are guessed.

In the three attack scenarios devised by Rad, two require the attacker to be on the same subnet as the target VM to send the flood of DHCP traffic. In one scenario, the victim VM needs to be rebooting, and in the other, it is refreshing its DHCP lease. The third allows for a remote attack over the internet but requires the firewall in front of the target VM to be fully open.

Rad concedes this third case is "probably not a common scenario" but notes that GCP Cloud console provides that option and speculates there are likely to be VMs with that configuration.

Suggested defense techniques include not referring to the metadata server using its virtual hostname (metadata.google.internal), not managing the virtual hostname via DHCP, securing metadata server communication using TLS, and blocking UDP on Ports 67/68 between VMs.

Google was said to be informed of this issue back in September 2020. After nine months of inaction, Rad published his advisory. The Chocolate Factory did not immediately respond to a request for comment. We imagine Google Cloud may have some defenses in place, such as detection of weird DHCP traffic, for one.

Speaking of Google, its security expert Felix Wilhelm found a guest-to-host escape bug in the Linux KVM hypervisor code for AMD Epyc processors that was present in kernel versions 5.10 to 5.11, when it was spotted and patched.


To Protect Consumer Data, Don’t Do Everything on the Cloud – Harvard Business Review

When collecting consumer data, there is almost always a risk to consumer privacy. Sensitive information could be leaked unintentionally or breached by bad actors. For example, the Equifax data breach of 2017 compromised the personal information of 143 million U.S. consumers. Smaller breaches, which you may or may not hear about, happen all the time. As companies collect more data and rely more heavily on its insights, the potential for data to be compromised will likely only grow.

With the appropriate data architecture and processes, however, these risks can be substantially mitigated by ensuring that private data is touched at as few points as possible. Specifically, companies should consider the potential of what is known as edge computing. Under this paradigm, computations are performed not in the cloud, but on devices that are on the edge of the network, close to where the data are generated. For example, the computations that make Apple's Face ID work happen right on your iPhone. As researchers who study privacy in the context of business, computer science, and statistics, we think this approach is sensible and should be used more because edge computing minimizes the transmission and retention of sensitive information to the cloud, lowering the risk that it could land in the wrong hands.

But how does this tech actually work, and how can companies who don't have Apple-sized resources deploy it?

Consider a hypothetical wine store that wants to capture the faces of consumers sampling a new wine to measure how they like it. The store's owners are picking between two competing video technologies: The first system captures hours of video, sends the data to third-party servers, saves the content to a database, processes the footage using facial analysis algorithms, and reports the insight that 80% of consumers looked happy upon tasting the new wine. The second system runs facial analysis algorithms on the camera itself, does not store or transmit any video footage, and reports the same 80% aggregated insight to the wine retailer.

The second system uses edge computing to restrict the number of points at which private data are touched by humans, servers, databases, or interfaces. Therefore, it reduces the chances of a data breach or future unauthorized use. It only gathers sufficient data to make a business decision: Should the wine retailer invest in advertising the new wine?
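A minimal sketch of what the second system's on-camera logic might look like follows; detect_happy_face is a hypothetical stand-in for an on-device facial-analysis model. The point is simply that frames are discarded after analysis and only a running tally ever leaves the device.

```python
def detect_happy_face(frame) -> bool:
    """Stand-in for an on-device facial-analysis model; a real camera would run a trained model here."""
    return False

class EdgeSentimentCounter:
    """Runs on the camera itself: each frame is analyzed and discarded, only the tally persists."""

    def __init__(self):
        self.happy = 0
        self.total = 0

    def process_frame(self, frame) -> None:
        if detect_happy_face(frame):
            self.happy += 1
        self.total += 1
        # `frame` goes out of scope here; no footage is stored or transmitted

    def report(self) -> float:
        """The only data that leaves the edge: an aggregate share, e.g. 0.80."""
        return self.happy / self.total if self.total else 0.0
```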

As companies work to protect their customers' privacy, they will face similar situations as the one above. And in many cases, there will be an edge computing solution. Here's what they need to know.

In 1980, the Organization for Economic Cooperation and Development, an international forum of 38 countries, established guidelines for the protection of privacy and trans-border flows of personal data for its member countries with the goal of harmonizing national privacy legislation. These guidelines, which were based on principles such as purpose limitation and data minimization, evolved into recent data-privacy legislation such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA), both introduced in 2018.

The rise of edge computing helps organizations meet the privacy guidelines above by implementing three critical design choices. The design choices begin with how to think about data collection and extend to the actual data processing. They are sufficiency, aggregation, and alteration.

A mindful data architecture should collect and retain only the must-have information. Data-collection approaches should be designed and implemented around the desired insights (in other words, its purpose should be limited), thus reducing the number of variables and people tracked, meaning the minimum amount of data is collected.

In some ways, this is an old idea: In 1922, the groundbreaking British statistician R.A. Fisher developed the statistical theory of a sufficient statistic, which provides all the information required on the desired insight. (E.g., 80% of consumers looked happy upon tasting the new wine.) Minimal sufficiency goes a step further by most efficiently capturing the sufficient information required for an insight. Translated loosely, the wine retailer may use an edge device to perform facial analysis on fewer consumers, a smaller sample, to reach the same 80% insight.

For many business decisions we don't need insights on the individual level. Summarizing the information at a group level retains most of the necessary insights while minimizing the risk of compromising private data. Such non-personal data is often not subject to data protection legislation, such as the GDPR or the CCPA.

When it is critical to obtain insights at a personal level, the data may be altered to hide the individual's identity while minimally impacting the accuracy of insights. For instance, Apple uses a technique called local differential privacy to add statistical noise to any information that is shared by a user's device, so Apple cannot reproduce the true data. In some situations, alteration of individual data is legally mandated, such as in clinical studies. Techniques may include pseudonymization and go as far as generating synthetic data.
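To make the alteration idea concrete, here is a generic randomized-response sketch of local differential privacy. It illustrates the general technique rather than Apple's actual implementation, and the epsilon value is just an example privacy budget.

```python
import math
import random

EPSILON = math.log(3)  # example privacy budget; higher epsilon means less noise and less privacy

def randomized_response(true_value: bool, epsilon: float = EPSILON) -> bool:
    """Each device perturbs its own answer before sharing, so no single report can be trusted."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return true_value if random.random() < p_truth else not true_value

def debiased_estimate(noisy_reports, epsilon: float = EPSILON) -> float:
    """The aggregator recovers the population rate from many noisy reports, never the true answers."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(noisy_reports) / len(noisy_reports)
    return (observed + p - 1) / (2 * p - 1)

# Example: 10,000 consumers, 80% of whom truly liked the wine
reports = [randomized_response(random.random() < 0.8) for _ in range(10_000)]
print(round(debiased_estimate(reports), 2))  # close to 0.8, yet no individual answer is exposed
```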

Knowing when to apply data-processing tools is as critical as using the right tools. Applying sufficiency, aggregation, and alteration during data collection maximizes protection while retaining the most useful information. This approach can also reduce costs for cyber insurance, compliance with data-protection regulations, and more scalable infrastructure.

Restricting private data collection and processing to the edge is not without its downsides. Companies will not have all their consumer data available to go back and re-run new types of analyses when business objectives change. However, this is the exact situation we advocate against to protect consumer privacy.

Information and privacy operate in a tradeoff that is, a unit increase in privacy requires some loss of information. By prioritizing data utility with purposeful insights, edge computing reduces the quantity of information from a data lake to the sufficient data necessary to make the same business decision. This emphasis on finding the most useful data over keeping heaps of raw information increases consumer privacy.

The design choices that support this approach sufficiency, aggregation, and alteration apply to structured data, such as names, emails or number of units sold, and unstructured data, such as images, videos, audio, and text. To illustrate, let us assume the retailer in our wine-tasting example receives consumer input via video, audio, and text.

If the goal of the wine retailer is to understand consumer reactions broken down by demographic groups, there is no need to identify individual consumers via facial recognition or to maintain a biometric database. One might wonder: aren't the pictures that contain people's faces private data? Indeed, they are. And this is where edge computing allows the video feed to be analyzed locally (namely, on the camera) without ever being stored permanently or transmitted anywhere. AI models are trained to extract in real time the required information, such as positive sentiment and demographics, and discard everything else. That is an example of sufficiency and aggregation employed during data collection.

In our wine-tasting setting, an audio analysis may distinguish between when speech occurs versus silence or background music. It may also reveal the age of the person speaking, their emotions, and energy levels. Are people more excited after tasting the new wine? AI models can understand the overall energy of the speaker without knowing what was said. They analyze inflections and intonations in the voice to reveal an individuals state of mind. Sufficiency is built into the classifications (i.e., the output) of the AI technology by default. Running these models on the edge and summarizing results by demographic group also achieves data aggregation.

Our wine retailer can use consumer textual feedback about the new wine not only to understand whether consumers are satisfied but, equally importantly, learn the words consumers use to describe the taste and feel of the new wine. This information is invaluable input into the development of advertising. In this analysis, the data do not need to be tied to specific consumers. Instead, textual comments are aggregated across consumers, and the relative frequencies of taste and feeling keywords for each wine type are sent to the wine retailer. Alternatively, if insights are desired on the personal level, textual feedback can be altered synthetically using Natural Language Generation (NLG) models.
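A sketch of that aggregation step is below; the taste-keyword vocabulary is hypothetical, and the point is that individual comments are reduced to relative keyword frequencies before anything is sent to the retailer.

```python
from collections import Counter

TASTE_KEYWORDS = {"fruity", "oaky", "crisp", "smooth", "dry", "sweet"}  # hypothetical vocabulary

def keyword_frequencies(comments):
    """Reduce free-text feedback to relative keyword frequencies; the raw comments never leave."""
    counts = Counter()
    for comment in comments:
        counts.update(word.strip(".,!") for word in comment.lower().split()
                      if word.strip(".,!") in TASTE_KEYWORDS)
    total = sum(counts.values()) or 1
    return {word: counts[word] / total for word in sorted(TASTE_KEYWORDS)}

# Only these proportions, not the comments themselves, are reported to the retailer
print(keyword_frequencies(["Very fruity and smooth!", "A bit dry, but smooth."]))
```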

In the examples above, the Sufficiency-Aggregation-Alteration design choices enhance privacy. These ideas are also relevant to applications and data types as far ranging as unlocking your phone, evaluating your health with smart devices, and creating better experiences. Paradoxically, the mindful use of edge computing and AI, which often scares people, is critical for maximizing privacy protection. Privacy advocates also promote the idea of consumers owning and controlling their personal data via a Customer Data Platform (CDP). A data architecture that links the CDP to an edge device (think of voice-activated home assistants) can further increase consumer trust by providing consumers complete control and transparency over their data.

This framework is only a partial solution to concerns about privacy, however, to be deployed alongside other beneficial practices such as data encryption, minimizing access privileges, and data retention. Encryption is employed when data are stored permanently and in transit. That is an essential first step to minimize unauthorized access because it converts the dataset into a black box. Without a key, the black box has no value. Likewise, limiting data access to a need-to-know basis, having clear policies for data retention, and providing opt-out mechanisms, reduces the risk of data leaks. Even though the above steps are standard practice, not everyone employs them, creating many more touchpoints where private data breaches can occur. Be a good manager and check with your IT team and third-party vendors.

***

Privacy is a social choice, and leadership teams should prioritize data utility. Many companies have been collecting as much data as possible and deciding later what is useful versus not. They are implicitly trading off all consumer privacy with the most information. We advocate a more disciplined approach wherein the uses of the data are specified upfront to guide both the collection and retention of data. Furthermore, technology has offered us all the tools we need to safeguard privacy without impacting business intelligence. By leveraging edge computing and AI technologies, companies may apply the design choices of sufficiency, aggregation, and alteration at the data collection stage. With a carefully designed architecture, we may obtain the desired insights and secure the privacy of consumers data at the same time. Contrary to conventional wisdom, we can have our (privacy) cake and eat it too.


HPE GreenLake: The HPC cloud that comes to you – The Register

By its very nature, high performance computing is an expensive proposition compared to other kinds of computing. Scale and speed cost money, and that is never going to change. But that doesn't mean that you have to pay for HPC all at once, or even own it at all.

And it doesn't necessarily mean that you need to pay for a bunch of cluster experts to manage a complex system, either. That expertise can be a significant part of the overall cost of an HPC system, and it is often a lot more difficult to find than getting an HPC system through the budgeting process.

Traditionally, organizations have signed leasing or financing agreements to cushion the blow of a big capital outlay required to build an HPC cluster, which includes servers (usually some with GPU acceleration these days), high speed and capacious storage, and fast networking to link it all together.

However, the same kind of pay-per-use, self-service, scalability, and simplified IT operations that comes with the cloud is, thankfully, available on-premises for HPC systems through the HPE GreenLake for HPC offering, which previewed in December 2020 and which will be in selected availability in June, with general availability coming shortly thereafter.

There is much more to HPE GreenLake than a superior cloud-like pricing scheme. But after getting a preview of HPE GreenLake for HPC, we boiled it all down to this: HPE GreenLake is like having a local cloud on your premises for running IT infrastructure that is owned by HPE, managed by HPE's substantial experts and a lot of automation it has developed, and used by you. In this case we are focusing on traditional HPC simulation and modeling, machine learning and other forms of AI, and data analytics that are commonly called high performance computing these days.

Ahead of the HPE Discover conference at the end of June 2021, Don Randall, worldwide marketing manager of the HPE GreenLake as-a-service offerings, gave us a preview of what the full HPE GreenLake for HPC service will look like and some hints about how it will be improved over time.

HPE has sold products under the HPE GreenLake as-a-service model for a dozen years now, and it has some substantial customers in the HPC arena using earlier versions of the service, including Italian energy company Ente Nazionale Idrocarburi (ENI) and German industrial manufacturer Siemens, which has HPE GreenLake for HPC systems in use in 20 different locations around the globe.

With the update this year, HPE is adding the GreenLake Central master console on top of its as-a-service offering, and is integrating telemetry from on-prem clusters, and usage and cost data from public clouds such as Amazon Web Services and Microsoft Azure that will allow GreenLake shops to see the totality of their on-premises GreenLake infrastructure alongside the public cloud capacity they use. This is all cloudy infrastructure, after all, and as Randall explains, HPE absolutely expects and wants for customers to use the public clouds opportunistically when it is appropriate.

The earlier versions of HPE GreenLake for HPC lacked the automated firmware and software patching capabilities that HPE is rolling out this year, and the fit and finish has improved considerably, too, according to Randall. And there are plans to add more features and functions to GreenLake for HPC in the coming months and years, some of which HPE is willing to hint about now.

HPE GreenLake has been evolving over the years, and adding features to expand support for HPC customers, for good reason. Despite a decade and a half of cloud computing, 83 percent of HPC implementations are outside of the public cloud, according to Hyperion Research. And they are staying on-premises for good, sound reasons. HPC is, by definition, not the general-purpose computing, networking, and storage that is typically deployed in an enterprise datacenter. Compute is often denser and hotter, networks are heftier, and storage is bigger and faster; the scale is generally larger than what is seen for other kinds of systems in the enterprise.

And moving to the public cloud presents its own issues, including latency between users and the closest public cloud regions, and the gravity of large datasets, which makes it very hard and expensive to move data off the public cloud once it has been placed there. And then there is the issue of application entanglement. In some cases, applications are so intertwined that they can't be moved piecemeal to the cloud, so you end up in an all-or-nothing situation. Moreover, for latency reasons, HPC applications want to be near HPC data, so you can't break it apart that way, either, with data in the cloud and apps on premises, or vice versa, without paying some latency and cost penalties.

HPE GreenLake for HPC is meant to solve all of these issues, and more.

"We have got a ton of things that are putting us way out in the lead," says Randall. "We have the expertise to design, integrate, and deliver HPC setups globally, and we are number one in HPC, and we have people who are really, really sharp. HPE has invented a lot of HPC technology or acquired it, and we have a services model that we have been refining and that is well ahead of what other IT companies and public clouds are doing." That services model is a key differentiator for HPE GreenLake for HPC, according to Randall. HPE puts more iron on the floor for an HPC system than the customer is using, so this excess capacity is ready to use when it is needed.

Self-service provisioning of compute, storage, and networks is done through the HPE GreenLake Central console, and the entire HPC stack (clusters, operating software and so on) is managed by HPE experts from one of a dozen centers around the world. Customers operate the clusters with self-service capabilities in HPE GreenLake Central to manage queues, jobs, and output. HPE GreenLake for HPC gets HPC centers out of the business of maintaining the hardware and software of those clusters, and while HPE is not offering application management services, it does have a widget with self-service capabilities that will snap into the GreenLake Central console, and it will entertain managing the HPC applications themselves under a separate contract if customers really want this.

The initial GreenLake for HPC stack was built on the Singularity HPC-specific Kubernetes platform, and over time it may evolve to use the HPE Ezmeral Kubernetes container platform. Initially, HPE GreenLake for HPC included HPEs Apollo servers and storage, plus standard storage and Aruba interconnects, but now includes Lustre parallel file systems, a homegrown HPE cluster manager, and the industry standard SLURM job scheduler as well as Ethernet networks with RDMA acceleration. The HPE Slingshot variant of Ethernet tuned for HPC and IBMs Spectrum Scale (formerly known as General Parallel File System, or GPFS) parallel storage will be added in the future. HPE Cray EX compute systems will also be available under the HPE GreenLake for HPC offering, as will other parallel file systems that are up and coming in the HPC arena.

HPE started out with a focus on clusters to run computer aided engineering applications (with a heavy emphasis on ANSYS), but is expanding its HPE GreenLake cluster designs so they are tuned for financial services, molecular dynamics, electronic design automation, and computational fluid dynamics workloads, and it has an eye on peddling GreenLake for HPC to customers doing seismic analysis, weather forecasting, and financial services risk management. The scale of the machines offered under GreenLake for HPC will be growing, too, and Randall says that HPE will absolutely sell exascale-class HPC systems to customers under the GreenLake model.

We have a feeling that the vast majority of exascale-class systems could end up being sold this way, given the benefits of the HPE GreenLake approach. Imagine if all of the firmware in the systems was updated automagically, and ditto for the entire HPC software stack? Imagine proactive maintenance and replacement of parts before they fail.

Imagine not trying to hire technical staff to design, build, and maintain a cluster, and getting the kind of cloud experience that people have come to expect without having to go all-in on one of the public clouds and make do with whatever compute, storage, and networking they have to offer which may or may not be what you need in your HPC system for your specific HPC workloads. Imagine keeping the experience of an on-premises cluster, but having variable capacity inherent in the system that you can turn on and off with the click of a mouse? This is what HPE GreenLake for HPC can do, and it is going to change the way that companies consume HPC.

Sponsored by HPE
