
Backup best practices for Microsoft Office 365 and cloud-based SaaS applications – TechRepublic

Microsoft Office 365 and other cloud-based SaaS applications are most effective when data backup best practices support them. Learn how to set up backup best practices here.

By David Friend, CEO and co-founder, Wasabi Technologies

Microsoft Office 365 and other cloud-based SaaS applications are a key working component of many business operations, enabling teams to share documents among multiple devices, easily house data in a private or public cloud, and work together in real time. But while Office 365 and similar SaaS apps store your data, Microsoft does not guarantee that it will restore that data if it is lost. With Office 365 a prime target for SaaS attacks this year, there is no built-in protection against data deletion, regardless of whether it is accidental, intentional or criminal.


To protect valuable company data, backing up data assets in SaaS applications has never been more critical. Below are some important backup and data protection strategies for cloud-based SaaS users. Applying these backup best practices to your operations can help minimize the damage resulting from cyberattacks and other forms of data loss.


When organizations are evaluating their backup options, some may be enticed to take the traditional approach of backing up their data on-premises. However, this on-prem storage approach falls short of today's data security needs, as large data volumes quickly outgrow on-prem capabilities, adding complexity and cost to scaling and maintaining capacity.

In addition, on-prem backups require ongoing synchronization with live data, which can squander an organization's time and resources, especially if it is producing enormous amounts of data. Finally, on-prem storage leaves companies more susceptible to cyberattacks and data loss due to its format.

Since on-prem storage effectively acts as a single copy of data, it can easily be targeted and damaged. When this happens, data backups are rarely in place to help teams with data recovery. This problem can ultimately be prevented by utilizing the cloud or other complementary backup solutions, like Veeam, which are able to restore data if human error or a breach occurs.

Compared to on-prem backups, cloud storage is a more flexible, risk-averse and cost-effective option for cloud-based SaaS users. In addition to generally being less expensive compared to on-premises solutions, it requires less time and resources to manage since cloud vendors handle maintenance and configuration needs for their customers.


Cloud backups can also mitigate the impacts of data breaches and ransomware attacks through the following cloud-based best practices:

With data backed up to the cloud, companies can use recovery testing to identify errors in their data recovery process before attacks occur. Since the recovery process can be complicated and time-consuming, testing it in advance helps surface and eliminate issues before a real attack happens. The cloud makes it easy for businesses to test their recovery processes because the data is readily accessible, providing ample preparation time.
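
As a minimal illustration of what an automated restore test can look like (a sketch written for this overview, not a feature of any particular backup product), the snippet below checks files restored to a scratch directory against SHA-256 hashes recorded at backup time. The manifest and directory names are placeholders, and the actual restore step would be performed by whatever backup tool or API the organization uses.

```python
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash one restored file so it can be compared with the backup-time value."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(manifest_file: Path, restore_dir: Path) -> bool:
    """Compare restored files against the checksums recorded at backup time."""
    manifest = json.loads(manifest_file.read_text())  # {"relative/path": "sha256", ...}
    ok = True
    for rel_path, expected in manifest.items():
        restored = restore_dir / rel_path
        if not restored.exists() or sha256(restored) != expected:
            print(f"FAILED: {rel_path}")
            ok = False
    return ok

# The backup tool's own restore call would populate "scratch-restore/" first.
print(verify_restore(Path("backup-manifest.json"), Path("scratch-restore")))
```

Running a check like this on a schedule turns recovery testing into a routine task rather than something attempted for the first time during an incident.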

Cloud providers can also offer immutable storage features, which prevent anyone, even a systems administrator, from adjusting, tampering with or deleting data during a set period of time. This storage and security feature is crucial to keeping files safe from corruption. Object-level immutability blunts ransomware attacks by stopping bad actors from encrypting or overwriting stored data, acting as an essential additional layer of protection for an organization's cloud backups.
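
As a rough sketch of what object-level immutability looks like in practice, the snippet below writes a backup object with a compliance-mode retention period through an S3-compatible API using boto3. The endpoint, bucket and object names are placeholders, and the bucket is assumed to have been created with object lock enabled; check your provider's documentation for the exact options it supports.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Placeholder endpoint and names; the bucket must have been created with
# object lock enabled (ObjectLockEnabledForBucket=True).
s3 = boto3.client("s3", endpoint_url="https://s3.example-provider.com")

retain_until = datetime.now(timezone.utc) + timedelta(days=90)

with open("mailbox-backup-2022-10-01.zip", "rb") as backup:
    s3.put_object(
        Bucket="office365-backups",
        Key="exchange/mailbox-backup-2022-10-01.zip",
        Body=backup,
        ObjectLockMode="COMPLIANCE",             # cannot be shortened, even by an admin
        ObjectLockRetainUntilDate=retain_until,  # immutable for 90 days
    )
```

Until the retention date passes, attempts to overwrite or delete the object are rejected, which is what keeps a ransomware-encrypted copy from replacing the good one.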

With a cloud backup or multicloud approach, organizations can diversify their backups and store their data in different environments. This is a more advantageous strategy compared to storing all data in one location, as it helps companies avoid the risk of losing everything during a single-system attack.

A 3-2-1 backup plan, which recommends companies keep three copies of data, two on different media formats and one offsite, is a smart cloud security approach in this situation. This strategy not only keeps hackers from reaching every storage location but also allows companies to continue functioning if an attack occurs, minimizing downtime.
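
To make the rule concrete, here is a small illustrative check, written for this overview rather than taken from any backup product, that verifies a set of copies satisfies 3-2-1: at least three copies, on at least two media types, with at least one offsite.

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    name: str
    media: str      # e.g. "disk", "tape", "object-storage"
    offsite: bool

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """Check a backup plan against the 3-2-1 rule."""
    enough_copies = len(copies) >= 3
    enough_media = len({c.media for c in copies}) >= 2
    has_offsite = any(c.offsite for c in copies)
    return enough_copies and enough_media and has_offsite

plan = [
    BackupCopy("production data", "disk", offsite=False),
    BackupCopy("local NAS backup", "disk", offsite=False),
    BackupCopy("cloud object storage", "object-storage", offsite=True),
]
print(satisfies_3_2_1(plan))  # True: 3 copies, 2 media types, 1 offsite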

Businesses can no longer ignore security threats for Office 365 and other cloud-based SaaS platforms. As these tools become more integrated into business operations at all levels, data stored in these platforms without appropriate backups is becoming extremely vulnerable in the face of growing cyber risk. Protecting data assets with the right tools, training, and cloud data storage and backup strategies can remedy these security issues before it's too late.

David Friend is the co-founder and CEO of Wasabi, a revolutionary cloud storage company. David's first company, ARP Instruments, developed synthesizers used by Stevie Wonder, David Bowie, and Led Zeppelin and even helped Steven Spielberg communicate with aliens in Close Encounters of the Third Kind. Throughout his career, David has founded or co-founded five other companies in the tech space. David graduated from Yale and attended the Princeton University Graduate School of Engineering, where he was a David Sarnoff Fellow.


Komprise tells users: Go do it yourself – Blocks and Files

Komprise has added self-service features for line-of-business (LOB) IT, analytics, and research teams to its unstructured data management software, lessening the burden on admins by giving users controlled read-only access to their individual data estates.

The Komprise Intelligent Data Management (IDM) product helps users manage and monitor unstructured data estates across their on-premises and public cloud environments and store their data more effectively. That means moving little-accessed data to cheaper storage, migrating data to the public clouds if needed, and understanding which users access what data and where that data is located.

Todd Dorsey, a DCIG data storage analyst, said: "User self-service is a growing trend to offload administrative tasks from central IT and give end users the ability to get the data and functionality they need faster. By putting more control in the hands of departmental teams and data owners, Komprise is helping increase value from unstructured data in the enterprise."

This is a tool for admin staff to give them the equivalent of night vision in a previously dark data environment and organize it so that access and storage costs are optimised. Komprise IDM is an example of a hierarchical storage management (HSM) or information lifecycle management (ILM) product.

At present, if central IT wanted to find out in detail what files and object data a user department needed on various storage tiers, plus tier-level retention and data movement policies, they'd have to ask the department, which takes time and occupies both IT admin and user department resources.

Now, with the new software, central IT can authorize departmental end users to access IDM to look at and into their own data and its usage. These users are given a Deep Analytics profile, but only with read access. They can then monitor usage metrics, data trends, tag and search data and identify datasets for analytics, tiering and deletion, which only IT admins could do up until now.

This capability builds upon Komprise's Smart Data Workflows, which enabled IT teams to automate the tagging and discovery of relevant file and object data across hybrid data storage silos and move the right data to cloud services.

Potential applications for this include:

This addition of user-level DIY facilities should help to enable finer-grained unstructured data management and faster response by both IT admins and user groups to changing situations.

The autumn release of Komprise's IDM software also features:

Komprise COO Krishna Subramanian claimed that the SMB protocol changes can accelerate data movements significantly: "It is orders of magnitude, around the 27 times mark that we benchmark; we will get that much. And it's often much more than that."

The new capabilities are spreading Komprise's use with its customers: "We are seeing the adoption of our product really kind of scale out. Every day we're getting new use cases from all these departments, like some of them wanted to use it to run genomics analysis in the cloud and things like that."

She said Komprises customer count was approaching 500. More of these customers are using AWS than Azure, with GCP usage trailing the older two public clouds.


Google Will Let Healthcare Organizations Use Its AI To Analyze And Store X-Rays – Forbes

New tools from Google's cloud unit will help healthcare organizations analyze and store medical images.

Google on Tuesday announced a new set of artificial intelligence tools aimed at letting healthcare organizations use the search giant's software and servers to read, store and label X-rays, MRIs and other medical imaging.

The tools, from Google's cloud unit, allow hospitals and medical companies to search through imaging metadata or develop software to quickly analyze images for diagnoses. Called the Medical Imaging Suite, the tools can also help healthcare professionals automatically annotate medical images and build machine learning models for research.

"With the advancements in medical imaging technology, there's been an increase in the size and complexity of these images," Alissa Hsu Lynch, Google Cloud's global lead for health tech strategy and solutions, said in an interview. "We know that AI can enable faster, more accurate diagnosis and therefore help improve productivity for healthcare workers."

Based on Google's other forays into healthcare, privacy advocates may raise concerns that the tech giant, which makes the majority of its $257 billion annual revenue from personalized ads based on user data, would use patient information to feed its vast advertising machine.

Lynch says Google doesn't have any access to patients' protected health information, and none of the data from the service would be used for the company's advertising efforts. Google claims the service is compliant with the Health Insurance Portability and Accountability Act, or HIPAA, a federal law that regulates the use of patient data.

The tech giant is working with a handful of medical organizations as early partners for the imaging software. One partner, a company called Hologic, is using the Google suite for cloud storage, as well as developing tech to help improve cervical cancer diagnostics. Another partner called Hackensack Meridian Health, a network of healthcare providers in New Jersey, is using the tools to scrub identifying information from millions of gigabytes of X-rays. The company will also use the software to help build an algorithm for predicting the metastasis of prostate cancer.

Google's software tools will help healthcare organizations to view and search through imaging data.

The new tools come as Google and its parent company Alphabet invest more heavily in health-related initiatives. In the early days of the pandemic, Alphabet's Verily unit, which focuses on life sciences and med tech, partnered with the Trump administration to provide online screening for Covid tests. Google also partnered with Apple to create a system for contact tracing on smartphones. Last year the company dissolved its Google Health unit, restructuring its health efforts so they weren't housed in one central division.

Google has stirred controversy in the past for its healthcare efforts. In 2019, Google drew blowback for an initiative called Project Nightingale, in which the company partnered with Ascension, the second-largest healthcare system in the country, to collect the personal health information of millions of people. The data included lab results, diagnoses and hospitalization records, including names and birthdays, according to the Wall Street Journal, though Google at the time said the project complied with federal law. Google had reportedly been using the data in part to design new software.

Two years earlier, the tech giant partnered with the National Institutes of Health to publicly post more than 100,000 images of human chest X-rays. The goal there was to showcase the company's cloud storage capabilities and make the data available to researchers. But two days before the images were to be posted, the NIH told Google its software had not properly removed data from the X-rays that could identify patients, according to The Washington Post, which would potentially violate federal law. In response, Google canceled its project with the NIH.

Asked about Google's past fumble with de-identifying information, Sameer Sethi, SVP and chief data and analytics officer at Hackensack Meridian Health, says the company has safeguards in place to prevent such mishaps.

"You never actually trust the tool," he told Forbes. He adds that Hackensack Meridian Health works with a third-party company to certify that the images are de-identified, even after using Google's tools. "We will not bring anything to use without expert determination."
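
For readers curious what scrubbing identifying information from an X-ray involves, the sketch below uses the open-source pydicom library to blank a handful of common patient-identifying DICOM tags. It is purely illustrative: it is not Google's tooling, the tag list is far from exhaustive, and, as Sethi notes above, real de-identification is independently verified by experts.

```python
import pydicom

# A few DICOM keywords that commonly carry protected health information.
# Illustrative only; full de-identification follows the DICOM confidentiality
# profiles and is verified separately.
PHI_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "ReferringPhysicianName", "InstitutionName",
]

def strip_phi(path_in: str, path_out: str) -> None:
    ds = pydicom.dcmread(path_in)
    for keyword in PHI_TAGS:
        if hasattr(ds, keyword):
            setattr(ds, keyword, "")   # blank the identifying value
    ds.remove_private_tags()           # drop vendor-specific private tags
    ds.save_as(path_out)

strip_phi("chest_xray.dcm", "chest_xray_deid.dcm")
```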


Edge storage: What it is and the technologies it uses – ComputerWeekly.com

Large, monolithic datacentres at the heart of enterprises could give way to hundreds or thousands of smaller data stores and devices, each with their own storage capacity.

The driver for this is organisations moving their processes to the business edge. Edge computing is no longer simply about putting some local storage into a remote or branch office (ROBO). Rather, it is being driven by the internet of things (IoT), smart devices and sensors, and technologies such as autonomous cars. All these technologies increasingly need their own local edge data storage.

Industry analysts Gartner confirm business data is moving from the datacentre to the cloud and the edge. The firm identifies four use cases for edge storage: distributed clouds and datacentres, data processing at the edge, content collaboration and access, and digital ingest and streaming.

This isn't an exhaustive list: applications such as autonomous vehicles that sit outside enterprise IT are driving edge computing too. Meanwhile, industrial processes, sensors and IoT are all drivers that push more computing to the edge.

The market for edge storage is being shaped by changes in storage technology and by applications for edge computing. Increasingly, edge devices need persistent storage that is robust and secure, but applications also demand performance that goes beyond the SD or micro-SD cards found in early generation IoT devices and single board computers.

A few years ago, edge computing was most closely associated with remote or branch office (Robo) deployments. For storage, Robo was about providing at least some level of backup or replication to secure data, especially if a device failed, and caching or staging data before sending it to the datacentre for further processing. This batch-based approach worked well enough in retail and other environments with fairly predictable data flows.

But adding storage by way of a networked PC, a small server or a NAS device only really works in office or back office environments, because they are static, environmentally stable and usually reasonably secure.

Today's business edge is much larger and covers much more hostile operating environments. These range from the factory floor, with edge devices attached to manufacturing equipment and power tools, to cameras and other sensors out in the environment, to telecoms kit and even vehicles.

Enrico Signoretti, an analyst at GigaOM, describes these environments as the industrial edge, remote edge or far edge. Storage needs to be reliable, easy to manage and, given the number of devices firms might deploy, cost-effective.

Edge applications require storage to be physically robust, secure both physically and virtually (often encrypted), and able to withstand temperature fluctuations and vibration. It needs to be persistent, but draw little power. In some cases, it also needs to be fast, especially where firms want to apply artificial intelligence (AI) to systems at the edge.

Alex McDonald, Europe, Middle East and Africa (EMEA) chair at the Storage Networking Industry Association (SNIA), says that edge storage includes "storage and memory product technologies that provide residences for edge-generated data", including SSDs, SSD arrays, embedded DRAM [dynamic random-access memory], flash and persistent memory.

In some cases, storage and compute systems need to be adapted to operate in a much wider range of environments than conventional IT. This requires physical robustness and security measures. Single-board computers, for example, often rely on removable memory cards. Although encryption protects the data, it will not prevent someone from physically removing the memory module.

"Ruggedised and enhanced specification devices will support environments that require additional safeguarding in embedded applications, from automotive to manufacturing," says McDonald.

Organisations working with edge computing are also looking at storage class memory (SCM), NVMe-over-fabrics, and hyper-converged infrastructure (HCI).

Hyper-converged infrastructure, with its on-board storage, is perhaps best suited to applications that may need to scale up in the future. IT teams can add HCI nodes relatively easily even in remote locations without adding significant management overheads.

But for the most part, edge computing's storage requirements are relatively small. The focus is not on multiple terabytes of storage, but on systems that can handle time-sensitive, perishable data that is then analysed locally and passed on to a central system, usually the cloud, or a combination of both.

This requires systems to be able to perform immediate actions on the data, such as performing analytics, before passing it on to a central store or process. This data triage needs to be nimble and, ideally, close to the compute resources. This, in turn, has prompted interest in NVMe-over-fibre channel and storage-class memory.

And, by putting some local storage into the device, systems designers can minimise one of edge computing's biggest challenges: its demands on bandwidth.

Organisations that want to add data storage to their edge systems do so, at least in part, to reduce demands on their networks and centralised datacentres, or to reduce latency in their processing.

Some firms now have so many edge devices that they risk overwhelming local networks. Although the idea of decentralised computing connected to the cloud is attractive, in practice network latency, the possibility of network disruption and even cloud storage costs have prompted device manufacturers to include at least support for local storage.

A growing number of vendors also make edge appliances that work alongside (or, more accurately, just behind) IoT devices to gather data from them. Some are data transfer devices, such as Google's Edge Appliance, while others take on some of the AI processing itself, offloading it from the network.

By doing this, systems architects can provide a more robust form of edge computing. More data is processed near to the sensor or device, decisions can be made more quickly via analytics or AI, and the amount of data sent to the corporate LAN or cloud service can be vastly reduced.
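
As a simple illustration of this kind of data triage (a sketch, not a reference design), the Python loop below buffers raw sensor samples locally and ships only a compact per-window summary upstream. The read_sensor and send_upstream functions are placeholders for whatever hardware interface and uplink a real deployment would use.

```python
import statistics
import time

def summarize(readings: list[float]) -> dict:
    """Collapse a window of raw sensor samples into a compact summary."""
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "min": min(readings),
    }

def edge_loop(read_sensor, send_upstream, window_seconds: int = 60) -> None:
    """Buffer raw samples locally; ship only the per-window summary upstream."""
    window: list[float] = []
    window_start = time.monotonic()
    while True:
        window.append(read_sensor())           # raw data stays on local storage
        if time.monotonic() - window_start >= window_seconds:
            send_upstream(summarize(window))   # a few hundred bytes instead of thousands of samples
            window.clear()
            window_start = time.monotonic()
        time.sleep(1.0)
```

Only the summaries cross the corporate LAN or cloud link, which is where most of the bandwidth saving comes from.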

Adding storage to the edge, directly or via appliances, also allows for replication or batch-based archiving and makes it easier to operate with intermittent or unreliable connections, especially for mobile applications. Jimmy Tam, CEO of Peer Software, says that some vendors are integrating hard disk drives in combination with SSDs to allow devices to store larger data volumes at a lower cost.

"In the case where the edge storage is mainly focused as a data ingestion platform that then replicates or transmits the data to the cloud, a larger proportion of storage may be HDD instead of SSD to allow for more data density," he says.

It seems unlikely that any single storage technology will dominate at the edge. As Gartner notes in a recent research report: "Although edge storage solutions possess common fundamental principles, it is not a single technology, because it needs to be tailored to the specific use cases."

Nonetheless, Gartner expects to see more data storage technology being edge ready, including datacentre technologies that work better with the demands of the edge.

IoT and other edge vendors will work to improve storage performance, especially by moving to server and workstation-class storage, such as Flash, NVMe and NVMe-over-fabrics, as well as storage-class memory, rather than USB-based technologies such as SD or micro-SD.

But the real focus looks set to be on how to manage ever larger numbers of storage-equipped devices. Developments such as 5G will only increase the applications for edge computing, so firms will look for storage that is not just rugged but self-healing and, at least in normal operations, can largely manage itself.


Elastic Announces the Beta of New Universal Profiling and Additional Synthetic Monitoring Capabilities to Enhance Cloud-Native Observability -…

MOUNTAIN VIEW, Calif.--(BUSINESS WIRE)--Elastic (NYSE: ESTC), the company behind Elasticsearch, today announced new features and enhancements across its Elastic Observability solution, enabling customers to gain deeper and more frictionless visibility at all levels of applications, services, and infrastructure.

Innovations across the Elastic Observability solution include:

Providing effortless, deep visibility for cloud-native production environments with zero instrumentation and low overhead, with always-on Universal Profiling

Elastic's new Universal Profiling capability, now in private beta, provides visibility into how application code and infrastructure are performing at all times in production, across a wide range of languages, in both containerized and non-containerized environments.

Modern cloud-native environments are increasingly complex, creating infrastructure and application blind spots for DevOps and SRE teams. Engineering teams typically use profiling to spot performance bottlenecks and troubleshoot issues faster. However, most profiling solutions have significant drawbacks limiting adoption in production environments:

Universal Profiling is lightweight and requires zero instrumentation. Enabled by eBPF-based technology, it overcomes the limitations of other profiling solutions by requiring no changes to the application code, making it easier to quickly identify performance bottlenecks, improve time to resolve problems, and reduce cloud costs.

The low overhead of Universal Profiling, less than 1% CPU overhead, makes it possible to deploy in production environments to deliver deep and broad visibility into infrastructure and cloud-native application performance at scale.

For a production application running across a few hundred servers, early results show code optimization savings of 10% to 20% of CPU resources, resulting in cost savings and a reduction of CO2 emissions per year.

Introducing new capabilities to cloud- and developer-first synthetic monitoring

Synthetic monitoring enables teams to proactively simulate user interactions in applications to quickly detect user-facing availability and performance issues and optimize the end-user experience.
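
To illustrate the basic idea in a vendor-neutral way (this is not Elastic's synthetics API, which is script-based), a minimal synthetic check simply exercises an endpoint the way a user would and records availability and latency. The URL below is a placeholder.

```python
import time
import requests

def synthetic_check(url: str, timeout: float = 10.0) -> dict:
    """Probe an endpoint as a user would and record availability and latency."""
    start = time.monotonic()
    try:
        response = requests.get(url, timeout=timeout)
        return {
            "url": url,
            "up": response.ok,
            "status": response.status_code,
            "latency_ms": round((time.monotonic() - start) * 1000, 1),
        }
    except requests.RequestException as exc:
        return {"url": url, "up": False, "error": str(exc)}

print(synthetic_check("https://example.com/login"))
```

Real synthetic monitoring tools run scripted, multi-step journeys on a schedule from many locations and alert when availability or latency degrades, but the measurement at the core is the same.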

Elastic is introducing the beta of the following innovative synthetic monitoring capabilities, designed to reduce manual and repetitive tasks for development and operations teams, available within the current Uptime application for Elastic Cloud customers:

Additionally, a new and intuitive user interface to simplify workflows and make it easier to identify and quickly troubleshoot problems in production is currently under development and planned for future availability.

For more information, read the Elastic blog about what's new in Elastic Observability. Additional information about how Elastic Universal Profiling provides visibility into how application code and infrastructure are performing at all times can be found here.

Supporting Quotes

About Elastic:

Elastic (NYSE: ESTC) is a leading platform for search-powered solutions. We help organizations, their employees, and their customers accelerate the results that matter. With solutions in Enterprise Search, Observability, and Security, we enhance customer and employee search experiences, keep mission-critical applications running smoothly, and protect against cyber threats. Delivered wherever data lives, in one cloud, across multiple clouds, or on-premise, Elastic enables 19,000+ customers and more than half of the Fortune 500, to achieve new levels of success at scale and on a single platform. Learn more at elastic.co.

The release and timing of any features or functionality described in this document remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.

Elastic and associated marks are trademarks or registered trademarks of Elastic N.V. and its subsidiaries. All other company and product names may be trademarks of their respective owners.


Spot by NetApp exec on acquisitions and catalog slimming – TechTarget

NetApp has continued expanding Spot, the all-encompassing portfolio for its data management products, throughout 2022.

The company's acquisitions this year include Fylamynt, a cloud automation startup, and Instaclustr, a database-as-a-service vendor. Terms for both deals, which were focused on rounding out Spot by NetApp, were not disclosed.

The acquisition spree appears to be winding down, however, with Michael Berry, CFO and executive vice president at NetApp, mentioning a potential pause on further acquisitions in the coming months during its fourth quarter and fiscal 2022 earnings call in June.

"From a capital allocation perspective, we expect to hit pause on CloudOps acquisitions for the first half of fiscal '23, as we focus on strengthening our field and customer success go-to-market motions, while integrating our CloudOps product portfolio," Berry said during the call.

In this Q&A, Kevin McGrath, vice president and general manager of Spot by NetApp, spoke about how recent acquisitions will add to the Spot portfolio for customers, how he sees the software suite evolving and what the company's next maneuvers in the cloud might be. McGrath was the former CTO at Spot before NetApp's acquisition of the startup in 2020.

Early analyst sentiment following the acquisition of Fylamynt indicated some optimism about what it can add to the platform. What's the planned implementation for the technology, and what's next?

Kevin McGrath: Fylamynt is going to become Spot Connect and is going to be that connective tissue between everything that we do.

One of the things we've done is we have all these acquisitions [with] all these APIs. Spot Connect is going to be a drag-and-drop system. We want to connect all the different parts of what NetApp is putting together. We're not only going to put [our products] in the same console, but we're going to give you a nice, neat way to connect them -- not only with each other, but with the services you use, like ServiceNow, Jira and Slack.


[NetApp] Cloud Insights has a lot of that visibility and optimization that we're going to start bringing into the platform. I think we will have a networking story sooner rather than later, as we start bringing all these [tools] together.

Last year, NetApp announced partnerships with some cloud hyperscalers, such as the debut of Amazon FSx for NetApp OnTap on AWS. However, Spot by NetApp is a challenger to some hyperscaler tools and capabilities. How do you see those partnerships evolving?

McGrath: What the major cloud providers want is more usage. Are we going to compete with some of the tools a cloud provider is going to offer? Absolutely.

I think, in some cases, we're going to step on each other's toes. But in other cases, we're going to prove [to] the cloud provider that if someone uses our tool set, they're going to be a happier customer, a stickier customer and a customer that will eventually scale more on their cloud.

One trend we've seen this year is vendors attempting to sell outside of typical IT silos. Is Spot by NetApp pursuing a similar goal of expanding NetApp's presence in the customer's data management stack beyond storage, albeit with individual offerings within a catalog rather than just one product?

McGrath: I think that's one of the big transitions that you're seeing in the market. Your top-down sell from the CIO into an IT team, that's kind of breaking off. We don't always sell into the main IT department. NetApp CEO George Kurian said he wants to force NetApp to get out of just selling to storage admins.

The people who are going to use data going forward are not necessarily storage engineers. Remember that, at the end of the day, [NetApp OnTap] is not a hardware thing. It's software that can run anywhere. It doesn't have to run on the hardware that we ship to data centers. I think there's a concerted effort to say, 'We are going to adapt to the cloud and not try to get the cloud to adapt to us.'

I don't know if there's one specific area, but these platform engineering teams, these DevOps teams -- they have an impossible job. They get requirements from application teams, finance teams, business teams. As that role expands at more and more companies, we're going to keep solving for their pain points.

We don't want to take away any of the entry points. We want to keep that consumption-based model of cloud. But for our larger customers, we absolutely want to produce a method for them to come in and consume data, and not have to worry about the point solutions underneath. A suite of services in their own way, a more consumable way, as a single SKU.

Editor's note: This interview has been edited for clarity, length and style.

Tim McCarthy is a journalist living on the North Shore of Massachusetts. He covers cloud and data storage news.


Developing Cohesive DevOps and Cloud Computing Strategies to Bolster Digital Transformation – EnterpriseTalk

Organizations that develop cohesive cloud computing and DevOps strategies will help to accelerate and streamline the application development, deployment, and delivery processes.

This approach enables organizations to give the developers more authority over their components, resulting in accelerating digital transformation. DevOps teams can streamline the end-to-end application development and deployment processes by reducing possible human errors.

According to a recent report published by Grand View Research, the global cloud computing market was valued at USD 368.97 billion in 2021. The research also forecasts that the market will grow at a 15.7% CAGR over the forecast period from 2022 to 2030.


Here are a few ways that cohesive DevOps and cloud computing strategies will help businesses to bolster digital transformation:

Migration of applications to the cloud will enable businesses to modernize their business network and make the most of cutting-edge cloud computing services. This approach is one of the most effective ways to streamline and execute regular maintenance tasks. Enterprises can ensure an efficient software delivery process by making programmers and operations function strategically rather than arbitrarily.

Cloud services enable businesses to scale because they allow them to respond quickly, with relevance and in a cost-effective manner, by improving storage and performance. Businesses today need to be economically viable, and the cloud offers them robust services while ensuring economic sustainability. It is one of the most effective ways to reduce operational costs substantially by automating core development and operation processes. Leveraging the cloud to accomplish DevOps goals will assist in saving financial resources, seamlessly ingrain an innovation-first approach, and keep the workforce engaged. It is crucial for modern enterprises to be able to consistently innovate to deliver robust applications and services, which gives them a competitive edge and enhances the entire DevOps process.

Organizations that want to accelerate their digital transformation processes need to have effective application development, deployment, and delivery processes. The cloud provides organizations with a platform that helps them deliver applications across various platforms without getting into the hassle of implementing intricate technologies in the transformation process. It is an efficient way to utilize resources at scale to meet demand, minimize purchasing costs and sustain IT systems more effectively.


A cohesive cloud and DevOps approach will enable businesses to make the most of the opportunities generated through digital transformation initiatives. Cloud security is a shared responsibility of the entire organization, and the lack of centralized security infrastructure and security policies can expose critical data to risk. CIOs should consider adopting DevSecOps to strengthen cloud security by implementing automated procedures to identify and mitigate vulnerabilities before they become significant threats. It is crucial for organizations to integrate security from the start of the application development phases through delivery to ensure application security at all stages.
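
As a small, hedged example of what such an automated check might look like (the tools and paths here are illustrative choices, not prescribed by any particular DevSecOps framework), a pre-deployment gate can shell out to open-source scanners such as bandit and pip-audit and block the release if either reports findings.

```python
import subprocess
import sys

def run_security_gate() -> int:
    """Run static and dependency scans; a non-zero exit blocks the deployment."""
    checks = [
        ["bandit", "-r", "src/"],  # static analysis of application code
        ["pip-audit"],             # known-vulnerability scan of installed dependencies
    ]
    for cmd in checks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Security gate failed on: {' '.join(cmd)}")
            return result.returncode
    print("Security gate passed")
    return 0

if __name__ == "__main__":
    sys.exit(run_security_gate())
```

Wiring a step like this into the CI/CD pipeline is what shifts security left, so vulnerabilities are caught during development rather than after delivery.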



The changing cloud landscape: From observability to optimization – TechCrunch

Chelsea Goddard, Contributor

Over the last few years, cloud computing has grown more expensive than ever. Initially drawn to the promise of cutting costs on infrastructure spend, companies far and wide flocked to behemoths like AWS and Google Cloud to host their services. Technical teams were told this would reduce engineering costs and increase developer productivity, and in some cases it did.

Fundamental shifts in AI/ML were made possible by the ability to batch jobs and run them in parallel in the cloud. This reduced the amount of time it took to train certain types of models and led to faster innovation cycles. Another example was the shift in how software is actually architected: from monolithic applications running on VMs to a microservices and container-based infrastructure paradigm.

Yet, while the adoption of the cloud fundamentally changed how we build, manage and run technology products, it also led to an unforeseen consequence: runaway cloud costs.

Total enterprise spend in billions. Numbers are approximated based on data from Synergy Research Group. Image Credits: Chelsea Goddard

While the promise of spending less spurred companies to migrate services to the cloud, many teams didn't know how to do this efficiently and, by extension, cost-effectively. This created the first up-front investment opportunity we have seen behind the recent surge in venture funding to cloud observability platforms like Chronosphere ($255 million), Observe ($70 million) and Cribl ($150 million).

The basic thesis here is simple: If we provide visibility into what services cost, we can help teams reduce their spend. We can liken this to the age-old adage that goes something like, "You cannot change what you cannot see." This has also been the primary driver for larger companies acquiring smaller observability players: to reduce the risk of churn by baiting customers with additional observability features, then increase their average contract value (ACV).


The Benefits of Cloud Computing – WinBuzzer

This article was contributed by Prem Ananth for Knowledgehut Solutions.

Cloud computing turns IT infrastructure into a utility. It allows access to programs as utilities via the Internet. Cloud computing enables the online creation, configuration, and customization of applications. Using cloud computing, one can plug into infrastructure through the internet and consume computing resources without maintaining and installing them on-premises.

You can advance your skills and improve your confidence with the best AWS and Azure Cloud Computing certifications available at KnowledgeHut. We can assist you if you are unsure how to pursue computing certification. Read on to understand what cloud computing is and its benefits for enhancing your computing skills.


The term cloud refers to a network or the internet; in other words, something that exists in a remote location. Cloud services can be delivered over public or private networks such as a WAN, LAN, or VPN. Email, online conferencing, and Customer Relationship Management (CRM) programs all run in the cloud.

Cloud computing is the process of remotely changing, setting up, and obtaining hardware and software resources. It offers data storage, networking, and applications online. Cloud computing provides platform independence since the software does not need to be installed manually on the PC. As a result, Cloud Computing enables our business applications to be mobile and collaborative.

In its most basic form, cloud computing refers to storing and accessing data and applications on remote servers housed on the internet rather than the computer's hard disk drive or local network. Cloud computing is another term for network computing.

Cloud Computing Architecture comprises the elements and sub-elements necessary for cloud services. These elements are:

Cloud computing represents a significant shift from the usual way organizations think about using IT assets effectively and efficiently. Here are seven common reasons why businesses use cloud computing services:

Cloud computing reduces the capital investment of purchasing hardware and software as well as establishing and operating on-site data centers, server racks, 24-hour electricity for power and cooling, and IT specialists to manage the infrastructure. Those costs quickly add up before the accounts department has time to figure out expenses. The reduction in hardware expenses is one advantage of cloud computing: instead of acquiring in-house equipment, hardware requirements are outsourced to a vendor. New hardware can be enormous, costly, and difficult to procure for firms that are expanding fast. Cloud computing solves these problems by making resources available quickly and easily. Furthermore, the expense of maintaining or replacing equipment is passed on to the suppliers. In addition to purchase prices, off-site hardware reduces power costs and saves workspace. Large data centers can consume valuable office space and generate a lot of heat. Moving to cloud apps or storage can help maximize space while dramatically reducing energy costs.

Cloud computing services are self-service and on-demand. Even massive amounts of computing resources may be delivered in minutes, generally with only a few mouse clicks, allowing enterprises a lot of flexibility and relieving the strain on capacity planning.

The capacity to scale elastically and globally is one of the advantages of cloud computing services. In cloud language, this means delivering the appropriate quantity of IT resources at the required place and time: for example, the required computing resources, storage, and network capacity are provided at the appropriate time and from the appropriate geographic location.

With cloud computing, it is simple to increase and decrease the number and capacity of servers as needed. This is accomplished by either boosting or lowering the cloud's resources. The flexibility to change programs in response to changes in business size and demands is a fantastic feature of cloud applications, especially when facing a surge in demand or growth in the organization.
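
To illustrate the scaling decision itself, here is a toy calculation in the spirit of common autoscalers (Kubernetes' horizontal pod autoscaler, for example, uses essentially this ratio); the numbers are made up for the example.

```python
import math

def desired_replicas(current_replicas: int, current_cpu_pct: float,
                     target_cpu_pct: float, min_r: int = 1, max_r: int = 20) -> int:
    """Scale the replica count so average utilization moves toward the target."""
    desired = math.ceil(current_replicas * current_cpu_pct / target_cpu_pct)
    return max(min_r, min(max_r, desired))

# A demand surge: 4 servers running hot at 90% CPU against a 60% target.
print(desired_replicas(4, 90, 60))   # 6 -> scale out
# Demand falls off: 6 servers idling at 20% CPU.
print(desired_replicas(6, 20, 60))   # 2 -> scale in
```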

On-site and local data centers often need a significant amount of racking and stacking hardware installation, software upgrades, and other time-consuming IT administration tasks. Cloud computing eliminates the need for many of these processes, allowing IT staff to focus their efforts on more critical business objectives.

The major cloud computing services are delivered through a global network of secure data centers that are continually upgraded to the most recent generation of efficient and fast computer equipment. This has various advantages over a specific enterprise data center, including decreased network congestion for applications and larger economies of scale.

Because data may be duplicated at numerous alternative sites on the cloud provider's infrastructure, cloud computing makes offsite replication, recovery procedures, and continuity planning easier and less expensive.

Many cloud providers include a comprehensive system of regulations, protocols, and safeguards that improve your entire security architecture, assisting in the protection of your data, applications, and network from possible attacks.

The organization is free of concerns with management or upgradation, as someone else handles the IT infrastructure while you run the business.

Cloud providers may make better use of their data center resources thanks to virtualization. Not surprisingly, many organizations have chosen the cloud deployment model for their on-premises network to maximize utilization and cost savings compared to conventional IT infrastructure while providing the same service and flexibility to their end-users.

Cloud computing is the transmission of computer services such as servers, memory spaces, databases, networks, applications, analytics, and expertise through the Internet (the cloud) to provide faster innovation, more adaptable capabilities, and economies of scale. You typically pay only for the cloud services you use, reducing your operational costs, enabling you to run your infrastructure more efficiently, and allowing you to grow as your company's needs evolve.

Experience the most in-demand Cloud Computing skills from industry professionals and develop your knowledge of cloud storage, data centers, networks, security, and analytics. KnowledgeHut, a renowned IT training company, provides cloud computing certification base programs that comprise training in cloud computing at a low cost, allowing developers and their teams to spend more time creating software that affects the world. Learn about Cloud Computing tools, technology, and trends. Take the Cloud Computing certification using our interactive learning platform to become certified in Cloud Computing and future-ready. KnowledgeHut is an internationally recognized Industry Leader in the transformation of digital skills. Our outstanding faculty includes renowned industry professionals with decades of technical expertise in fields such as Tech, Telecom, Investment Management, Healthcare, Automotive industries, Power, and Commerce, among others.

FAQ

Because big cloud providers can recruit professional developers and take over many operational tasks, clouds are usually safer than most private data centers. In addition, cloud infrastructure providers give tools and architectural alternatives for isolating operations, encrypting data, and detecting possible risks. Public clouds, on the other hand, operate on a collective responsibility paradigm, in which the user safeguards the applications and data stored in the cloud. This allocation of security responsibility changes depending on the cloud computing tier.

Most global enterprise adoption in the early days was ad hoc, driven by programmers and business lines looking to avoid traditional IT procurement procedures. To successfully transition to the cloud, firms must have a comprehensive strategy. Assemble relevant stakeholders and cloud-experienced people, a configuration known as a cloud center of excellence, to design a plan based on the company's ambitions.

While cloud computing reduces capital costs since IT teams aren't purchasing equipment, it also adds considerable operational costs to their budgets, frequently enough to negate most or all of their operating savings. Complex pricing and security approaches can potentially cause significant issues if IT teams are not able to adjust. To manage the cloud, IT teams must frequently learn new skills or recruit new personnel, and there are limitations to the control and flexibility over specific cloud resources.

A successful cloud migration plan can take over a year to implement, even though it is a continuous effort. Communication is essential for success. Offer stakeholders updates frequently, and ensure that IT is an essential component of the decision-making process. Enlist executives to promote cloud usage and ensure personnel are adequately prepared for the change.

Not every job can or should be moved to the cloud. Large, monolithic programs that operate continuously and have reasonably steady and predictable loads are frequently more cost-effective on-premises. Furthermore, due to administration and data security issues, highly regulated organizations choose to store apps and sensitive information in their own data centers. An enterprise can deploy an application to the cloud in its current state, but this is rarely the best solution. Use a cloud migration to rethink your IT infrastructure and discover areas for improvement. To take advantage of IaaS's service-based approach, an application can be divided into microservices.

Prem Ananth works as a content writer and SEO specialist for various clients in the IT industry.



The future of automotive computing: Cloud and edge – McKinsey

As the connected-car ecosystem evolves, it will affect multiple value chains, including those for automotive, telecommunications, software, and semiconductors. In this report, we explore some of the most important changes transforming the sector, especially the opportunities that may arise from the growth of 5G and edge computing. We also examine the value that semiconductor companies might capture in the years ahead if they are willing to take a new look at their products, capabilities, organizational and operational capabilities, and their go-to-market approaches.

Four well-known technology trends have emerged as key drivers of innovation in the automotive industry: autonomous driving, connectivity, electrification, and shared mobility, such as car-sharing services (Exhibit 1). Collectively, these are referred to as the ACES trends, and they will have a significant impact on computing and mobile-network requirements. Autonomous driving may have the greatest effect, since it necessitates higher onboard-computing power to analyze massive amounts of sensor data in real time. Other autonomous technologies, over-the-air (OTA) updates, and integration of third-party services will also require high-performance and intelligent connectivity within and outside of the car. Similarly, increasingly stringent vehicle safety requirements require faster, more reliable mobile networks with very low latencies.

Exhibit 1

With ACES functions, industry players now have three main choices for workload location: onboard the vehicle, cloud, and edge (Exhibit 2).

Exhibit 2

To ensure that use cases meet the thresholds for technical feasibility, companies must decide where and how to balance workloads across the available computing resources (Exhibit 3). This could allow use cases to meet increasingly strict safety requirements and deliver a better user experience. Multiple factors may need to be considered for balancing workloads across onboard, edge, and cloud computing, but four may be particularly important. The first is safety, since workloads essential for passenger safety require extremely fast reaction times. Other considerations include latency, computing complexity, and requirements for data transfer, which depend on the type, volume, and heterogeneity of data.

Exhibit 3

Connected-car use cases today typically rely on either onboard computing or the cloud to process their workloads. For example, navigation systems can tolerate relatively high latency and may function better in the cloud. OTA updates are typically delivered via a cloud data center and downloaded via Wi-Fi when it is least disruptive, and infotainment content originates in the cloud and is buffered onboard to give users a better experience. By contrast, accident prevention workloads such as autonomous emergency-braking systems (AEBS) require very low latency and high levels of computing capability, which, today, may mean that they are best processed onboard the vehicle.
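
To make the trade-off concrete, here is an illustrative placement heuristic in Python; the thresholds and example workloads are assumptions made for this sketch, not figures from the report.

```python
def place_workload(safety_critical: bool, latency_budget_ms: float,
                   data_volume_mb_per_s: float) -> str:
    """Rough placement heuristic for a connected-car workload (illustrative thresholds)."""
    if safety_critical or latency_budget_ms < 10:
        return "onboard"          # e.g. autonomous emergency braking
    if latency_budget_ms < 100 or data_volume_mb_per_s > 10:
        return "edge"             # e.g. local traffic coordination
    return "cloud"                # e.g. navigation, OTA staging, infotainment

print(place_workload(safety_critical=True, latency_budget_ms=5, data_volume_mb_per_s=50))    # onboard
print(place_workload(safety_critical=False, latency_budget_ms=50, data_volume_mb_per_s=20))  # edge
print(place_workload(safety_critical=False, latency_budget_ms=500, data_volume_mb_per_s=1))  # cloud
```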


Advances in computing and connectivity are expected to enable many new and advanced use cases (Exhibit 4). These developments could alter where workloads are located. Of particular significance, the rollout of 5G mobile networks could allow more edge processing. Given the importance of these interrelated technologies, we explored their characteristics in detail, focusing on automotive applications.

Exhibit 4

5G technology is expected to provide the bandwidth, low latency, reliability, and distributed capabilities that better address the needs of connected-car use cases. Its benefits to automotive applications fall into three main buckets:

These benefits could contribute to greater use of edge applications within the automotive sector. Workloads that are not safety-critical (infotainment and smart traffic management, for example) could start to shift to the edge from onboard or in the cloud. Eventually, 5G connectivity could reduce latency to the point that certain safety-critical functions could begin to be augmented by the edge infrastructure, rather than relying solely on onboard systems.

Most automotive applications today tend to rely exclusively on one workload location. In the future, they may use some combination of edge computing with onboard or cloud processing that delivers higher performance. For instance, smart traffic management systems may improve onboard decision making by augmenting the vehicle's sensor data with external data (for example, other vehicles' telemetry data, real-time traffic monitoring, maps, and camera images). Data could be stored in multiple locations and then fused by the traffic management software. The final safety-related decision will be made onboard the vehicle. Ultimately, large amounts of real-time and non-real-time data may need to be managed across vehicles, the edge infrastructure, and the cloud to enable advanced use cases. In consequence, data exchanges between the edge and the cloud must be seamless.

The evolving automotive value chain will open many new opportunities for those within the industry and external technology players. The total value created by connected-car use cases could reach more than $550 billion by 2030, up from about $64 billion in 2020 (Exhibit 5).

Exhibit 5

Increased connectivity opens up opportunities for players across the automotive value chain to improve their operations and customer services. Take predictive maintenance in cars as an example. Aftermarket maintenance and repair provision now predominantly involves following a fixed-interval maintenance schedule or reactive maintenance/repair. There is little visibility into the volume of vehicles that need to be serviced in a particular period, leading to inefficiencies in service scheduling, replacement parts ordering, and inventory, among other areas. Predictive maintenance using remote car diagnostics could improve the process by giving OEMs and dealers an opportunity to initiate and manage the maintenance process.

The pace of rollout of advanced connected-car use cases is highly contingent on the availability of 5G and edge computing. A variety of factors are converging to accelerate this. Demand is rising for these critical enablers, fueled by a proliferation of consumer and industry use cases. In the short term, value may be generated through enhancements to services already available with 4G, including navigation and routing, smart parking, centralized and adaptive traffic control, and monitoring of drivers, passengers, or packages.

We expect that greater 5G and edge availability may expand the list of viable use cases (technically and financially), boosting edge value exponentially. Looking to 2030, about 30 percent of our value estimate may be enabled by 5G and edge (up from 5 percent in 2020), largely consistent with our cross-sectoral report on advanced connectivity.

Value creation could be accelerated by traditional players moving into adjacencies and by new entrants from industries not traditionally in the automotive value chain, such as communication system providers (CSPs), hyperscalers, and software developers. Players such as Intel, Nvidia, and the Taiwan Semiconductor Manufacturing Company are adding automotive-softwarecapabilities, leading to greater synergies and vertical-integration benefits. In addition to accelerating value creation, new entrants may compete for a greater share of the total value.

Automotive-hardware value chains are expected to diverge based on the type of OEM. Traditional auto manufacturers, along with their value chains, are expected to see a continuation of well-established hardware development roles based on existing capabilities. Automobiles, components, devices, and chips for applications ranging from cars to the cloud may continue to be primarily manufactured by the companies that specialize in them. Nontraditional or up-and-coming automotive players could codevelop vehicle platforms with the established car OEMs and use OEM services or contract manufacturers such as Magna Steyr for the traditional portions of the value chain.

Established players may seek to increase their share by expanding their core businesses, moving up the technology stack, or by growing their value chain footprints. For instance, it is within the core business of semiconductor players to create advanced chipsets for automotive OEMs, but they could also capture additional value by providing onboard and edge software systems or by offering software-centric solutions to automotive OEMs. Similarly, to capture additional value, hyperscalers could create end-user services, such as infotainment apps for automotive OEMs or software platforms for contract manufacturers.

As players make strategic moves to improve their position in the market, we can expect two types of player ecosystems to form. In a closed ecosystem, membership is restricted and proprietary standards may be defined by a single player, as is the case with Volkswagen, or by a group of OEMs. Open ecosystems, which any company can join, generally espouse a democratized set of global standards and an evolution toward a common technology stack. In extreme examples, where common interfaces and a truly open standard exist, each player may stay in its lane and focus on its core competencies.

Hybrid ecosystems will also exist. Players following this model are expected to use a mix of open and closed elements on a system-by-system basis. For example, this might be applied to systems in which OEMs and suppliers of a value chain have particular expertise or core competency.

Exhibit 6 describes the advantages and disadvantages of each ecosystem model.

Exhibit 6

Companies in the emerging connected-car value chain develop offerings for five domains: roads and physical infrastructure, vehicles, network, edge, and cloud. For each domain, companies can provide software services, software platforms, or hardware (Exhibit 7).

Exhibit 7
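
As a rough mental model of that structure, not a reproduction of the exhibit's contents, the sketch below encodes the five domains and three offering layers as a simple matrix; the example footprint of a chipmaker expanding into software is entirely hypothetical.

    # Illustrative encoding of the connected-car value chain structure:
    # five domains, each of which can be served at three offering layers.
    DOMAINS = ["roads and physical infrastructure", "vehicle", "network", "edge", "cloud"]
    LAYERS = ["software services", "software platforms", "hardware"]

    # A company's position can be described as the set of (domain, layer) cells
    # it plays in; this footprint for a chipmaker moving into software is made up.
    chipmaker_footprint = {
        ("vehicle", "hardware"),
        ("edge", "hardware"),
        ("vehicle", "software platforms"),  # hypothetical expansion move
    }

    for domain in DOMAINS:
        covered = [layer for layer in LAYERS if (domain, layer) in chipmaker_footprint]
        print(f"{domain}: {covered if covered else 'no presence'}")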

As automotive connectivity advances, we expect a decoupling of hardware and software. This means that hardware and software can develop independently, each with its own timeline and life cycle. This trend may encourage OEMs and suppliers to define technology standards jointly and could hasten innovation cycles and time to market. Large multinational semiconductor companies have shown that development time can be reduced by up to 40 percent through decoupling and parallelization of hardware and software development. Furthermore, the target architecture that supports this decoupling features a strong middleware layer, providing another opportunity for value creation in the semiconductor sector. This middleware layer is likely to be composed of at least two interlinked domain operating systems that handle the decoupling for their respective domains. Decoupling hardware and software, which is a key aspect of innovation in automotive, tilts the ability to differentiate offerings heavily in favor of software.
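
To make the decoupling pattern concrete, the following is a minimal sketch assuming a hypothetical middleware layer: application software is written against a stable domain-OS interface, so a hardware change (here, a swapped camera supplier) does not ripple into the application code. All class and method names are illustrative and not taken from any actual automotive middleware.

    # Minimal illustration of hardware/software decoupling through a middleware layer.
    # All names (CameraDriver, DomainOS, LaneKeepingApp) are hypothetical.
    from abc import ABC, abstractmethod

    class CameraDriver(ABC):
        """Hardware abstraction: any camera supplier implements this interface."""
        @abstractmethod
        def capture_frame(self) -> bytes: ...

    class SupplierACamera(CameraDriver):
        def capture_frame(self) -> bytes:
            return b"raw-frame-from-supplier-a"  # stand-in for real sensor I/O

    class DomainOS:
        """Middleware 'domain operating system' that hides hardware details."""
        def __init__(self, camera: CameraDriver):
            self._camera = camera

        def get_frame(self) -> bytes:
            return self._camera.capture_frame()

    class LaneKeepingApp:
        """Application software written against the middleware, not the hardware."""
        def __init__(self, domain_os: DomainOS):
            self._domain_os = domain_os

        def step(self) -> str:
            frame = self._domain_os.get_frame()
            return f"processed {len(frame)} bytes"  # placeholder for perception logic

    # Swapping camera suppliers requires no change to LaneKeepingApp:
    app = LaneKeepingApp(DomainOS(SupplierACamera()))
    print(app.step())

The point of the sketch is simply that the hardware life cycle (new camera silicon) and the software life cycle (new lane-keeping logic) can now evolve on separate timelines.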

New opportunities. In the software layer, companies could obtain value in several different ways. With open ecosystems, participants will have broadly adopted interoperability standards with relatively common interfaces. In such cases, companies may remain within their traditional domains. For instance, semiconductor players may focus on producing chipsets for specific customers across the domains and stack layers, OEMs may concentrate on car systems, and CSPs may specialize in the connectivity layer and perhaps edge infrastructure. Similarly, hyperscalers may capture value in cloud and edge services.

In closed ecosystems, by contrast, companies may define proprietary standards and interfaces to ensure high levels of interoperability with the technologies of their members. For example, OEMs in a closed ecosystem may develop analytics, visualization capabilities, and edge or cloud applications exclusively for their own use, in addition to creating software services and platforms for vehicles. Sources of differentiation for vehicles could include infotainment features with plug-and-play capabilities, autonomous capabilities such as sensor fusion algorithms, and safety features.

While software is a key enabler for innovation, it introduces vulnerabilities that can have costly implications for OEMs, making cybersecurity a priority (see sidebar, "The importance of cybersecurity," for more information). Combined, 5G and edge infrastructure could offer greater flexibility in both preventing and responding to security events.

Hardware players could leverage their expertise to offer advanced software platforms and services. Nvidia, for instance, has entered the market for advanced driver-assistance systems (ADAS) and is complementing its system-on-a-chip AI design capabilities with a vast range of software offerings that cover the whole automated-driving stack, from OS and middleware to perception and trajectory planning.

Some companies are also moving into different stack layers. Take Huawei, which has traditionally been a network equipment provider, a producer of consumer-grade electrical and electronic (E&E) equipment, and a manufacturer of infrastructure for the edge and cloud. Currently, the company is targeting various vehicle stack layers, including base vehicle operating systems, E&E hardware, automotive-specific E&E, and software and EV platforms. In the future, Huawei may develop vehicles, monitoring sensors, human-machine interfaces, application layers, and software services and platforms for the edge and cloud domains.

Greater automotive connectivity will present semiconductor players and other companies along the automotive value chain with numerous opportunities. In all segments, they may benefit from becoming solution providers, rather than keeping a narrower focus on software, hardware, or other components. As they move ahead and attempt to capture value, companies may benefit from reexamining elements of their core strategy, including their capabilities and product portfolio.

The automotive semiconductor market is one of the most promising subsegments of the global semiconductor industry, along with the Internet of Things and data centers. Semiconductor companies that transform themselves from hardware players into solution providers may find it easier to differentiate their businesses from the competition's. For instance, they might win customers by developing application software optimized for their system architecture. Semiconductor companies could also find emerging opportunities in the orchestration layer, which may allow them to balance workloads between onboard, cloud, and edge computing.
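
As an illustration of what such an orchestration layer might decide, the snippet below routes each workload to onboard, edge, or cloud compute based on its latency budget and data volume; the thresholds and workload names are invented for the sketch and not drawn from any real system.

    # Illustrative workload-placement sketch for an orchestration layer.
    # Latency thresholds and workload definitions are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        max_latency_ms: float  # hard latency budget for results
        data_mb_per_s: float   # upstream data volume the workload produces

    def place(workload: Workload) -> str:
        """Safety-critical, low-latency work stays onboard; latency-tolerant,
        data-heavy analytics go to the cloud; the rest runs at the edge."""
        if workload.max_latency_ms < 10:
            return "onboard"
        if workload.max_latency_ms < 100 and workload.data_mb_per_s < 50:
            return "edge"
        return "cloud"

    for w in [
        Workload("emergency-braking", max_latency_ms=5, data_mb_per_s=200),
        Workload("hd-map-update", max_latency_ms=80, data_mb_per_s=10),
        Workload("fleet-analytics", max_latency_ms=5000, data_mb_per_s=500),
    ]:
        print(f"{w.name}: run {place(w)}")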

As semiconductor companies review their current product offerings, they may find that they can expand their software presence and produce more purpose-specific chips, such as microcontrollers for advanced driver-assistance, smart cockpit, and power-control systems, at scale by leveraging their experience in the automotive industry and in edge and cloud computing. Beyond software, semiconductor companies might find multiple opportunities, including those related to more advanced nodes with higher computing power and chipsets with higher efficiency.

To improve their capabilities related to purpose-specific chips, semiconductor players would benefit from a better understanding of the needs of OEMs and consumers, as well as the new requirements for specialized silicon. Semiconductor companies can capitalize on their edge and cloud capabilities by building strategic partnerships with hyperscalers and edge players that have a strong focus on automotive use cases.

Tier 1 suppliers could consider concentrating on capabilities that may allow them to become tier 0.5 system integrators with higher stack control points. In another big shift, they could leverage existing capabilities and assets to develop operating systems, ADAS, autonomous driving, and human-machine-interface software for new cars.

To produce the emerging offerings in the automotive-computing ecosystem, tier 1 players might consider recruiting full-stack employees who see the bigger picture and can design products better tuned to end-user expectations. They might also want to think about focusing on low-cost countries and high-volume growth markets with price-differentiated, customized, or lower-specification offerings that have already been tested in high-cost economies.

OEMs could take advantage of 5G and edge disruption by orienting business and partnership models toward as-a-service solutions. They could also leverage their existing assets and capabilities to build closed- or open-ecosystem applications, or focus on high-quality contract manufacturing. Key high-growth offerings for OEMs could include as-a-service models for mobility, shared mobility, and batteries. When seeking partnerships with other new and existing value chain players, OEMs need to keep two priorities in mind: filling talent and capability gaps (for instance, in chip development) and effectively managing diverse portfolios.

CSPs must keep network investments in lockstep with developments in the automotive value chain to ensure sufficient 5G/edge service availability. To this end, they may need to form partnerships with automotive OEMs or hyperscalers that are entering the space. For best results, CSPs should ensure that their core connectivity assets can meet vehicle-to-everything (V2X) use case requirements and create a road map to support highly autonomous driving. Connectivity alone represents a small part of the overall value to CSPs, however, so companies will benefit from expanding their product portfolios to include edge-based infrastructure-as-a-service and platform-as-a-service offerings. Evolving beyond the traditional connectivity core may necessitate organizational structures and operating models that support more agile ways of working.

Hyperscalers could gain ground by moving quickly to partner with various value chain players to test and verify priority use cases across domains. They could also form partnerships with industry players to drive automotive-specific standards in their core cloud and emerging edge segments. To determine their full range of potential opportunities, as well as the most attractive ones, hyperscalers should first analyze their existing assets and capabilities, such as their cloud infrastructure and services. They would also benefit from aligning their cloud and edge product portfolios or extending cloud-availability zones to cover leading locations for V2X use case rollouts and real-world testing. If hyperscalers want to increase the footprint of their cloud and edge offerings within the automotive value chain, they could consider a range of partnerships, such as those with OEMs to test and verify use cases.

The benefits of 5G and edge computing are real and fast approaching, but no single player can go it alone. Opportunities that are already at scale today are not clearly addressed in the technology road maps of many automotive companies, and not every player is capturing them.

Building partnerships and ecosystems to bring connected cars to market and capture value is crucial, and some semiconductor companies are already forging strong relationships with OEMs and others along the value chain. The ACES trends in the automotive industry are moving fast; semiconductor companies must move quickly to identify opportunities and refine their existing strategies. These efforts will not only help their bottom lines but also allow tier 1s and OEMs to shorten time to market for their products and services, which would accelerate the adoption of smart vehicles, and that benefits everyone.

Read more:
The future of automotive computing: Cloud and edge - McKinsey
