
Get ready for hybrid cloud with Azure Arc Jumpstart and ArcBox – TechRepublic

Microsoft's evaluation tools are a good way of getting to grips with its hybrid cloud tools.


It's been a while now since Microsoft unveiled the first Azure Arc preview. The service has grown considerably since then, adding features that help you run an Azure-like service on your own hardware and migrate workloads to and from the cloud.


Microsoft's legacy of data center workloads explains its commitment to hybrid clouds. While Amazon and Google have experience in running their own workloads, Microsoft's long history of enterprise IT gives it a unique position from which to support and migrate existing workloads from on-premises to public cloud, with a deep understanding of what those workloads are and what they're used for. That extends to understanding why an organization might want to keep its applications and data on-premises while still taking advantage of cloud services and architectures.

Microsoft has sensibly taken a step-by-step approach to releasing Azure Arc, rolling out new features and services one at a time. It's a logical way to deliver this type of service, as it needs to fit in with your own cloud migrations, even if it is to an on-premises cloud. The cloud management model is very different from how we manage our own data centers, and bringing it in-house needs thought and planning.

Arc's application-centric approach to managing virtual infrastructures and services allows you to run it alongside existing tools, moving servers to Arc-based management as they become available. You can start with one or two machines and a simple virtual infrastructure, slowly moving to a platform-as-a-service approach as you build new cloud native applications.

With so many services now available in Azure Arc, and more being added, there's a new question: "How do I work out whether it's right for me?"

One useful tool for getting to grips with Azure Arc is the Jumpstart documentation and tools. These are built around a GitHub repository, with more than 70 deployment scenarios focused on key elements of the Arc platform. Currently these support Arc-enabled servers, SQL Server, data services, Kubernetes, app services, and working with Arc and Azure Lighthouse as a combined set of operations management tools.

The Jumpstart scenarios are a useful way to speed up getting started, giving you the tools you need to bring servers into Arc management. They include guides for enabling both Windows and Linux servers, adding them to Azure Arc resource groups and enabling the appropriate management environments. The tools allow you to work with your own VMs on existing servers or VMs running on managed platforms like Azure Stack HCI.

It's important to remember that Azure Arc is a multi-platform, multi-cloud tool, so along with instructions for Windows and for Linux, there's documentation for using HashiCorp's Terraform infrastructure management tools with Azure Arc on Google Cloud Platform and Amazon Web Services. On-premises environments using VMware's vSphere can use Azure Arc via Terraform. The Jumpstart guides don't cover every scenario, but they provide a prescriptive way to get started, and by working with Terraform, they offer support for a popular open source DevOps platform, speeding up transitions and building on existing skills.


Guidelines and documentation are only part of the Jumpstart philosophy. It also offers a sandbox where you can try things out in a controlled environment. Jumpstart's ArcBox is a way of quickly standing up a set of Arc-managed servers in Azure, giving you the option of exploring its various features in one place, either as a proof of concept or as a training tool.

Having the two combined in one tool is extremely valuable, as you can ask questions about how you might run an Arc-enabled platform and then attempt to answer them. The ArcBox environment can go further: you can add new nodes and new services as you need them. It takes advantage of Azure capabilities like nested virtualization to simulate both host and managed-client VMs.


At the heart of the deployment is a Windows Server 2019 Datacenter VM, which hosts Windows, Ubuntu, and SQL Server VMs. These are automatically configured to be managed via Azure Arc, leaving the host to be managed directly. It's important to understand the separation between host infrastructure and application infrastructure, and ArcBox keeps its various host services outside its management envelope, leaving them for you to add to other management tools.

The Jumpstart tooling supports only a subset of the available Azure Arc services, with a basic Kubernetes cluster available as a host for managed containers. There's an interesting twist to this element of ArcBox: it's not a full-blown cluster, but a single node running Rancher's K3s minimal edge Kubernetes implementation. That shows the flexibility of the Arc Kubernetes tooling; if it can run with something like this, it can run with virtually any Kubernetes distribution, as long as it supports the Kubernetes management APIs. Again, you'll need to manage ArcBox's K3s cluster and its host Ubuntu VM outside of Azure Arc. A separate three-node K3s cluster is used to host Azure Arc-managed data services, with both Azure SQL Managed Instance and Azure PostgreSQL Hyperscale running on the cluster.

The entire ArcBox environment deploys from a single ARM template, which automates the process of setting up the sandbox. The aim is to make getting started as simple as possible, with everything put in place and a final step that automatically deploys the Arc-enabled services when you log into the host VM for the first time. Microsoft provides instructions for deploying ArcBox from the Azure Portal and from the Azure CLI. The whole process should take less than an hour, making it a quick and easy way of getting ready to experiment with Azure Arc.
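For a sense of what a single-template deployment involves, here is a minimal sketch of driving an ARM template deployment from Python, assuming the azure-identity and azure-mgmt-resource packages; the template URL and parameter names are placeholders rather than ArcBox's actual values.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, "<subscription-id>")

# Create (or reuse) a resource group to hold the sandbox.
client.resource_groups.create_or_update("ArcBox-rg", {"location": "eastus"})

# Kick off the template deployment; the URI and parameters below are
# placeholders, not the real ArcBox template or its parameter names.
poller = client.deployments.begin_create_or_update(
    "ArcBox-rg",
    "arcbox-deployment",
    {
        "properties": {
            "mode": "Incremental",
            "templateLink": {"uri": "https://example.com/azuredeploy.json"},
            "parameters": {"adminUsername": {"value": "arcdemo"}},
        }
    },
)
poller.result()  # block until the deployment finishes
```

The same deployment can be run from the Azure Portal or the Azure CLI, as noted above; the SDK route is simply one more way to script it.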

You will be charged for the Azure resources ArcBox uses, but a basic evaluation should be possible using Visual Studio credits or even a free trial account. Microsoft continues to develop ArcBox, and as new features get added, it's worth giving them a spin to see if they can be used in your own Azure Arc environments.




The AWS Shared Responsibility Model: Everything You Need to Know – Security Boulevard

Many modern organizations are migrating their infrastructure and systems to the cloud. AWS, like other cloud providers, has a Shared Responsibility Model that determines which cloud components AWS is responsible for securing and which are the customer's responsibility to secure. Let's take a look at what this model means, its many challenges and how organizations can better protect their cloud infrastructure and improve their cloud security posture.

The AWS Shared Responsibility Model is a framework by AWS that determines which cloud architecture components Amazon, as the CSP (Cloud Service Provider), is responsible for securing, and which are the customer's responsibility to secure. As a rule of thumb, AWS is responsible for security "of" the cloud, and the customer is responsible for security "in" the cloud.

Breaking that down, AWS is responsible for the host operating system, the virtualization layer and the physical security of the cloud servers. The customer is responsible for protecting the rest, which is not a trivial amount of security ownership, including network controls, configurations, IAM and customer data.
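As a concrete illustration of what "security in the cloud" means in practice, here is a small, hedged example of one customer-side configuration task: turning on S3 Block Public Access for a bucket with boto3 (the bucket name is a placeholder).

```python
import boto3

s3 = boto3.client("s3")

# Block all forms of public access on a bucket the customer owns;
# AWS secures the underlying storage service, but this setting is ours to manage.
s3.put_public_access_block(
    Bucket="example-data-bucket",  # placeholder name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```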

AWS Shared Responsibility Model [Source: AWS]

While structured and seemingly self-evident (i.e. everyone knows what is theirs to do security-wise), the shared responsibility model with AWS comes with considerable challenges.

First and foremost, many cloud customers do not understand the model or are not even aware it exists. Some cloud customers may have the false sense that the CSP takes care of cloud security as part of the services and tools it provides. Others are not sure about the division of responsibility, i.e. the scope of their responsibility vs. that of the CSP. And there are gray areas, such as how to handle networking security given that both parties share responsibility for it.

In a recent IDC survey commissioned by Ermetic, a senior security decision maker noted that clarity regarding system security responsibility with cloud vendors and support is a big concern.

Contributing to the lack of clarity is that the level of security responsibility differs according to the kind of cloud service acquired: IaaS and PaaS require more security ownership by the customer whereas SaaS requires the customer to do almost nothing.

In a recent podcast, AWS security consultant Scott Piper notes that "shared responsibility" is a misnomer and should more accurately be called "split responsibility." Piper points out that the model has no sharing or collaborative aspect, just the responsibilities of the provider and the responsibilities of the customer, with no watchful eye by the provider ensuring that security problems or gaps aren't created.

To complicate things further, each public cloud provider has its own shared responsibility model. An area that one CSP designates as the provider's responsibility, another CSP may designate as the customer's, or as shared by both. In the case of IaaS, broadly speaking, securing the physical aspects of infrastructure control falls to the CSP and securing the configuration and inner workings of the provisioned cloud resources falls to the customer.

Digital transformation accelerated by COVID-19 has caused many organizations to expand their move to the cloud. However, with this transformation still relatively new, these organizations' internal knowledge of managing and maintaining cloud infrastructure is still being developed. As a result, many such companies lack the expertise and bandwidth to identify, understand and solve the challenges of cloud security and shared responsibility.

For example, the IDC survey found that 50% of companies did not succeed in implementing the principle of least privilege, a pillar of cloud security. They are failing at this important strategic initiative due to the difficulty and time it takes, lack of personnel or expertise, and multi-cloud difficulties.
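To make the principle concrete, here is a minimal sketch of what a least-privilege grant looks like when created with boto3; the bucket, prefix and policy name are purely illustrative.

```python
import json
import boto3

iam = boto3.client("iam")

# Grant read access to a single prefix of a single bucket,
# rather than a broad "s3:*" on all resources.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-data-bucket/reports/*",
        }
    ],
}

iam.create_policy(
    PolicyName="reports-read-only",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)
```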

Another reason the AWS shared responsibility model is challenging is the lack of tools available for consumers to secure the areas of shared responsibility that they own. According to Latch, an enterprise SaaS innovator in the building sector, "It's a real challenge to find cloud-native security solutions that really work."

In addition, AWS does provide a growing array of security tools for granting and controlling permissions, such as IAM Policy Simulator and AWS Access Analyzer, to name a couple. While these native tools do not answer all modern cloud security needs or even fully cover the customer's shared responsibilities, customers may think they do. Also, native tools require much labor and expertise, so organizations may not be using them to the fullest. In fact, many organizations that implemented commercial and free CSP tools have been breached and had sensitive data compromised.
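To give a flavour of that native tooling, the policy simulator can also be driven programmatically; this hedged sketch checks whether a role would be allowed to delete objects from a bucket (both ARNs are placeholders).

```python
import boto3

iam = boto3.client("iam")

# Ask IAM to evaluate a role's effective permissions for one action.
result = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/app-role",  # placeholder
    ActionNames=["s3:DeleteObject"],
    ResourceArns=["arn:aws:s3:::example-data-bucket/*"],        # placeholder
)

for evaluation in result["EvaluationResults"]:
    print(evaluation["EvalActionName"], evaluation["EvalDecision"])
```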

The recent introduction of cloud security to organizations has also raised new questions about cloud security ownership within an organization. While security was traditionally the hands-down responsibility of IT or dedicated security teams, today, multiple divisions, departments and/or roles are involved.

The same survey found that the professional roles involved in cloud infrastructure security are diverse, and can include IT/Operations, DevOps, Cloud Security, Development/Engineering and IAM Security. According to the study, this diversity of security ownership within the organization results in fragmented decision making, siloed security practices and difficulties in implementing security across the board. These internal realities, together with the complications of a shared responsibility model, may inhibit or at the very least complicate implementation of proper cloud security practices.

Many organizations today are meeting their needs by deploying multiple cloud environments: AWS, GCP, Azure and others. Native security tools are cloud-specific by nature, are at different levels of maturity and do not cover multi-cloud use cases. In the above-mentioned podcast, Piper recognizes the challenges in using the same solutions for multiple clouds and the difficulty in gaining an aggregated, consistent view across all the clouds in use.

These challenges create an acute need for unified multi-cloud solutions that can see deeply into each public cloud and help address each cloud's shared responsibility requirements.

Due to these challenges, many organizations are not properly handling their share in the security of their cloud infrastructure. Failing to adequately secure what falls to them, including network controls, configurations, IAM and customer data, puts an organization at considerable risk of having identities and their entitlements exploited, and sensitive data compromised in a cloud data breach.

The risk is not theoretical. According to the IDC survey, 98%(!) of organizations suffered a cloud data breach in the last 18 months. Getting breached is almost a given in today's hyper-hacked cloud world. More importantly, 63% of organizations had sensitive data exposed in the cloud, and that number ballooned to 85% for companies with large cloud footprints. This means that these organizations were not correctly identifying and mitigating the risks of access to their sensitive data. These failings can have significant business implications and penalties in the long run.

Why was their sensitive data exposed? The same survey found that 71% were using their cloud provider's commercial security tools and 68% were using their provider's free tools. This heavy reliance on cloud provider tools brings us back to confusion around shared responsibility. Organizations are possibly unclear about the capabilities of these tools for ensuring cloud infrastructure protection, or are not putting in place the appropriate solutions to carry out their part of the shared responsibility, or both.

Cloud security tools in use or planned for use by organizations [IDC State of Cloud Security 2021]

Half the battle is understanding your organization's shared (or split) security responsibilities. The other half is effectively addressing them. Relying on native tools alone to cover your cloud infrastructure security responsibilities is not enough. Failing to implement more substantial solutions can lead to serious data compromise and business impact.

So where do you start? Find an identity-focused solution that tackles the leading risk to cloud infrastructure, permissions, to reduce your cloud attack surface at scale and address compliance. Gain a complete view into your cloud assets so you can assess and report on risk, with step-by-step remediation. Later, bring least-privilege automation to engineering, preventing risk from the get-go.

The post The AWS Shared Responsibility Model: Everything You Need to Know appeared first on Ermetic.

*** This is a Security Bloggers Network syndicated blog from Ermetic authored by Ermetic Team. Read the original post at: https://ermetic.com/whats-new/blog/aws/the-aws-shared-responsibility-model-everything-you-need-to-know/



Supercomputers are becoming another cloud service. Here’s what it means – ZDNet

These days supercomputers aren't necessarily esoteric, specialised hardware; they're made up of high-end servers that are densely interconnected and managed by software that deploys high performance computing (HPC) workloads across that hardware. Those servers can be in a data centre but they could also be in the cloud as well.

When it comes to large simulations, like the computational fluid dynamics used to simulate a wind tunnel, processing the millions of data points involved needs the power of a distributed system, and the software that schedules these workloads is designed for HPC systems. If you want to simulate 500 million data points, and you want to do that 7,000 or 8,000 times to look at a variety of different conditions, that's going to generate about half a petabyte of data; even if a cloud virtual machine (VM) could cope with that amount of data, the compute time would run to millions of hours, so you need to distribute it, and the tools to do that efficiently need something that looks like a supercomputer, even if it lives in a cloud data centre.
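A rough back-of-the-envelope check of those numbers, assuming on the order of 125 bytes stored per data point per run (an assumption, not a figure from the article):

```python
points_per_run = 500_000_000   # 500 million data points
runs = 8_000                   # number of simulated conditions
bytes_per_point = 125          # assumed storage per point per run

total_bytes = points_per_run * runs * bytes_per_point
print(total_bytes / 1e15, "PB")  # prints 0.5 PB
```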


When the latest Top 500 list came out this summer, Azure had four supercomputers in the top 30; for comparison, AWS had one entry on the list, in 41st place.


HPC users on Azure run computational fluid dynamics, weather forecasting, geoscience simulation, machine learning, financial risk analysis, modelling for silicon chip design (a popular enough workload that Azure has FX-series VMs with an architecture specifically for electronic design automation), medical research, genomics, biomedical simulations and physics simulations, as well as workloads like rendering.

They do some of that on traditional HPC hardware; Azure offers Cray XC and CS supercomputers, and the UK's Met Office is getting four Cray EX systems on Azure for its new weather-forecasting supercomputer. But you can also put together a supercomputer from H-series and N-series VMs (using hardware like Nvidia A100 Tensor Core GPUs and Xilinx FPGAs as well as the latest Epyc 7003-series CPUs) with HPC images.

One reason the Met Office picked a cloud supercomputer was the flexibility to choose whatever the best solution is in 2027. As Richard Lawrence, the Met Office IT Fellow for supercomputing, put it at the recent HPC Forum, they wanted "to spend less time buying supercomputers and more time utilizing them".

But how does Microsoft build Azure to support HPC well when the requirements can be somewhat different? "There are things that cloud generically needs that HPC doesn't, and vice versa," Andrew Jones from Microsoft's HPC team told us.

Everyone needs fast networks, everybody needs fast storage, fast processors and more memory bandwidth, but the focus on how all that is integrated together is clearly different, he says.

HPC applications need to perform at scale, which cloud is ideal for, but they need to be deployed differently in cloud infrastructure from typical cloud applications.


"If you're deploying a whole series of independent VMs, it makes sense to spread them out across the datacenter so that they are relatively independent and resilient from each other, whereas in the HPC world you want to pack all your VMs as close together as possible, so they have the tightest possible network connections between each other to get the best performance," he explains.
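On Azure, the usual mechanism for that kind of tight packing is a proximity placement group that the cluster's VMs are created into; a minimal sketch with the azure-mgmt-compute SDK (the resource group, name and region are placeholders) might look like this.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, "<subscription-id>")

# VMs created with a reference to this group are co-located in the
# data centre to minimise inter-node network latency.
ppg = compute.proximity_placement_groups.create_or_update(
    "hpc-rg",   # placeholder resource group
    "hpc-ppg",  # placeholder group name
    {"location": "eastus"},
)
print(ppg.id)  # pass this ID when creating each VM in the HPC cluster
```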

Some HPC infrastructure proves very useful elsewhere. "The idea of high-performance interconnects that really drive scalable application performance and latency is a supercomputing and HPC thing," Jones notes. "It turns out it also works really well for other things like AI and some aspects of gaming and things like that."

Although high-speed interconnects are enabling disaggregation in the hyperscale data centre, where you can split the memory and compute into different hardware and allocate as much as you need of each, that may not be useful for HPC, even though more flexibility in allocating memory would be helpful, because memory is expensive and not all the memory you allocate to a cluster will be used for every job.

"In the HPC world we are desperately trying to drag every bit of performance out of the interconnect we can and distributing stuff all over the data centre is probably not the right path to take for performance reasons. In HPC, we're normally stringing together large numbers of things that we mostly want to be as identical as possible to each other, in which case you don't get those benefits of disaggregation," he says.

What will cloud HPC look like in the future?

"HPC is a big enough player that we can influence the overall hardware architectures, so we can make sure that there are things like high memory bandwidth considerations, things like considerations for higher power processes and, therefore, cooling constraints and so on are built into those architectures," he points out.

The HPC world has tended to be fairly conservative, but that might be changing, Jones notes, which is good timing for cloud. "HPC has been relatively static in technology terms over the last however many years; all this diversity and processor choice has really only been common in the last couple of years," he says. GPUs have taken a decade to become common in HPC.


The people involved in HPC have often been in the field for a while. But new people are coming into HPC who have different backgrounds; they're not all from the traditional scientific computing background.

"I think that diversity of perspectives and viewpoints coming into both the user side, and the design side will change some of the assumptions we'd always made about what was a reasonable amount of effort to focus on to get performance out of something or the willingness to try new technologies or the risk reward payoff for trying new technologies," Jone predicts.

So just as HPC means some changes for cloud infrastructure, cloud may mean big changes for HPC.



Top 5 Benefits of Cloud Infrastructure Security – Security Boulevard

Embracing new technologies leads to qualitative growth but also carries a high risk of data breaches. While adopting cloud technology, it is important to treat the security of cloud infrastructure as one of your crucial responsibilities. Various organizations out there are still unsure of the security of their data in the cloud environment.


In 2019, Collection #1, a massive data breach, compromised a data set of over 770 million unique email addresses and 21 million unique passwords. The collection of data files was stored on the cloud storage service MEGA. Similarly, information on over 108 million betting records was leaked by an online casino group. The leaked data included customers' personal information along with deposits and withdrawals.

Then, the same year, a well-known food delivery firm was breached, compromising the data of 4.9 million users, including consumers and delivery employees.

Additionally, a post from Security Boulevard says that, according to a survey, almost 98% of companies witnessed at least one cloud data breach in the past 18 months, compared to 79% in 2020.

These infamous data breaches are proof that storage services like the cloud require consistent security management. Unfortunately, when we talk about cloud infrastructure security, many enterprises wrongly assume that their data is well guarded and far away from the radar of cyber criminals. The truth is, these cyber criminals are experts at scraping up exposed, vulnerable data, using unethical methods to look for unsecured databases.

For starters, the term cloud computing infrastructure security refers to the entire infrastructure, which involves a comprehensive set of policies, applications, and technologies. It also includes controls that are used to protect virtualized IP, services, applications, and data.

With companies migrating their extensive data and infrastructure to the cloud, the importance of cloud security testing becomes paramount. Cloud security offers multiple levels of control to provide continuity and protection in a network infrastructure. As a result, it is a highly essential element in creating a resilient environment that works for companies worldwide.

Enjoy the benefits of cloud infrastructure security by partnering with leading technology-based private cloud computing security service providers to keep the company's security running smoothly.

Nowadays, cloud computing servers are increasingly susceptible to data breaches. Cloud infrastructure security solutions help ensure that data like sensitive information and transactions is protected. They also help prevent third parties from tampering with data as it is transmitted.

Distributed denial-of-service (DDoS) attacks are infamously on the rise and are deployed to flood computer systems with requests. As a result, the website slows down to the point where it starts crashing once the number of requests exceeds what it can handle. Cloud computing security provides solutions that focus on stopping the bulk traffic that targets the company's cloud servers.

When it comes to best practice, cloud infrastructure security solutions offer consistent support and high availability for the company's assets. In addition, users enjoy the benefit of 24/7 live monitoring all year round. This live monitoring and constant support help secure data effortlessly.

Infrastructure security in the cloud offers advanced threat detection strategies such as endpoint scanning for threats at the device level. Endpoint scanning enhances the security of the devices that access your network.

In order to protect data, the entire infrastructure needs to operate under compliance regulations. A compliant, secured cloud computing infrastructure helps in maintaining and managing the safety features of cloud storage.

The points mentioned above make clear how beneficial and vital cloud infrastructure security is for an organization. Unfortunately, many high-profile data breach cases have been witnessed in recent years.

To patch the loopholes and strengthen IT infrastructure security, it is crucial to keep the security of cloud storage services a high priority. Engage with top-class cloud computing security tools to get better results and keep data secure.


The post Top 5 Benefits of Cloud Infrastructure Security appeared first on Kratikal Blogs.

*** This is a Security Bloggers Network syndicated blog from Kratikal Blogs authored by Pallavi Dutta. Read the original post at: https://www.kratikal.com/blog/top-5-benefits-of-cloud-infrastructure-security/



Will Windows 365 Change the Future of PCs? | Internet News – InternetNews.com

I've been covering PCs since the birth of the market back in the early 1980s. What I think is fascinating, given that IBM largely drove the early adoption of PCs into the enterprise, is that the company never grasped that the market didn't want the PC as such at the time. The idea of everyone having their own computer wasn't the driver; it was that everyone wanted more control over their computing and, at the time, if you needed an application, you had to put in a request to MIS (which became IT) and then wait months, if not years, to get something that had little to do with what you'd requested. Our users were not happy at all.

IBM responded to the threat Apple represented not by opening up their mainframes, much like the cloud is today, but by creating an alternative to the Apple PC: the IBM PC. This error was a repetitive mistake that has plagued many companies. Apple almost went under trying to copy the IBM PC in the 1990s. Microsoft tried to copy the iPod with the Zune when it should have created the iPhone. Research in Motion tried to copy the iPhone, as did Palm. Palm failed, and BlackBerry almost did. Filling an unmet need when bringing out a competing product works; trying to copy that product badly doesn't.

Both Oracle and Sun seemed to grasp this in the 1990s with their Thin Client effort, but they didn't have the I/O of a mainframe, they didn't embrace the then-dominant desktop platform Windows, and they tried to sell their solution to IT, and not the users who drove the PC effort in the first place.

We continued to have a Thin Client market. Still, it was crippled by the lack of high-performance wireless data connections (no 5G until recently), the inability of existing server architectures to handle the needed I/O and a massive number of sessions, and a distinct lack of user focus. Thin clients sold, but mostly in areas and markets where users didn't get a vote.

With the cloud, servers got the necessary upgrades to handle large numbers of users, provisioning advanced by leaps and bounds, and 4G LTE is finally being replaced with 5G, decades after the need was identified, supplying the right mix of hardware to create a mainframe-like product with a user focus. And thus, Windows 365 was born. The irony is that this would likely be a perfect product for IBM to resell, but IBM finds itself partnered with Apple, the company that took it down the wrong path and cost it its dominant market position.

A product's success is often predicated on the foundation set and how well it addresses an unmet need. The color TV didn't sell until Disney's Wonderful World of Color established a need around it, a decade after it was created. Apple PCs didn't sell at high volumes until the Mac addressed a missed need for more appliance-like products that ran productivity software. The cell phone needed to become affordable. The smartphone needed to embrace the two-way pager and then the iPod initially. The laptop PC needed to be light enough to be portable and wirelessly connected for success.

The Thin Client needed to fully embrace the PC while providing the cost, security, manageability, and reliability advantages of a terminal. Where this market fails is when vendors treat the list of requirements like multiple choice rather than all of the above. Windows 365, on paper, tied back into the Azure cloud, is the first offering of its type to approach (we won't know for sure until it is in use at scale) this all-of-the-above set of requirements.

While several elements are yet to be disclosed, like a cost that isn't insignificant, and we don't yet have deployments at enterprise scale to test loading, Windows 365 represents the closest thing to what we wanted back in the 1980s when the PC was created. Cloud technology allows it to scale; Microsoft owns the solution and can better assure its security, reliability, and availability. We have enough wireless bandwidth to cover mobile users, and it should be far easier to secure against the nation-state level of malware and security threats we are now facing. If you need to, you could power off a cloud service, physically halting a ransomware attack, and digitally isolate the infected nodes without involving the users, for instance. And you can far better assure compliance with patch updates if those patches are applied centrally and not locally.

We still have to sort out hardware because, for a cloud-hosted desktop, you don't need an entire OS or a potent processor or GPU, as those are provided in the cloud, but you do still need a good display and a set of human interface accessories. PCs could drift toward Thin Client architectures, optimizing for bandwidth rather than performance, or smartphones could gain the ports or wireless accessory capabilities (for multiple monitors, for instance) needed for this solution. That's coming, but we are finally back on the path we abandoned in the 1980s, with the potential blend of the best aspects of terminals and PCs into Windows 365.



Digging into the Atos-Nimbix Deal: Big US HPC and Global Cloud Aspirations. Look out HPE? – HPCwire

Behind Atos's deal, announced last week, to acquire HPC-cloud specialist Nimbix are ramped-up plans to penetrate the U.S. HPC market and a global expansion of its HPC cloud capabilities. Nimbix will become an Atos HPC cloud center of competency, act as a global hub, and also be able to position Atos hardware for sale. Financial details of the acquisition were not yet disclosed.

Atos, of course, is a leading systems maker in Europe. It has worked with Nimbix in recent years including occasionally on joint proposals to customers. Nimbix began in 2010 as a boutique HPC cloud company offering leading-edge HPC technology, often much earlier than the big cloud providers. It has been winding down its independent cloud business and migrating customers to public clouds while leveraging its HPC orchestration services and Nimbix JARVICE XE software. Andrew Grant, Atos vice president for HPC strategic projects, discussed deal drivers and plans for Nimbix with HPCwire.

Grant singled out three deal drivers:

The product jewel in the deal is the Nimbix JARVICE XE software platform, which has been positioned as the first container-native hybrid cloud platform. It supports x86, OpenPower and Arm. There has even been work with Riken to port it to Fugaku, currently the top performer on the Top500 list.

In late 2020, Nimbix Chief Technology Officer Leo Reiter told HPCwire that JARVICE XE can be used with any Arm platform, so long as it supports Linux and Kubernetes. Currently, JARVICE XE is deployed on Fujitsu FX700 systems (which like Fugaku are powered by the Fujitsu A64FX CPU), and JARVICE also supports Amazon's Graviton processors, either as a fully standalone resource or as a processing target from an existing JARVICE-based system.

In distinguishing the Atos and Nimbix cloud platforms, Grant said, "Atos is focused very much on the front end of the portal interface, to make the job submission portal easy and customer-friendly, and the security around that. The JARVICE focus is very much downstream and what happens at the delivery end. So the ability to move containers, and to do it in a multi-cloud way. So you could go to Amazon, Google, etc. and have the ability to dynamically create clusters. Any Kubernetes target is suitable for JARVICE."

"So, whereas we can easily submit a Slurm job, let's say from an on-prem system into GCP (Google Cloud Platform), what we couldn't easily do before was create a cluster on the fly, submit those containers, and have those containers elastically grow in the cloud environment. That is the unique IP that JARVICE has, which I think is a fantastic asset. I don't think anybody else has that capability," he said.
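JARVICE's own APIs aren't detailed here, but the generic mechanism it builds on, submitting containerised work to any Kubernetes target, looks roughly like this sketch using the official Kubernetes Python client (the image and command are placeholders).

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster
batch = client.BatchV1Api()

# A containerised solver packaged as a Kubernetes Job; any conformant cluster,
# from a single-node K3s box to a cloud-managed service, can run it.
job = client.V1Job(
    metadata=client.V1ObjectMeta(name="cfd-solver-001"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="solver",
                        image="registry.example.com/cfd-solver:latest",  # placeholder
                        command=["./run-case", "--case", "wing"],        # placeholder
                    )
                ],
            )
        )
    ),
)

batch.create_namespaced_job(namespace="default", body=job)
```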

Atos has broadly identified four cloud use cases and is working with the major hyperscalers on them.

"Use case one is where you've got a customer who has an on-prem facility and just wants to burst parts of their workload into the hyperscaler. That might be to access a new bit of hardware, or it might be just that they've run out of capacity on site. Use case two is the same but the customer doesn't have an on-prem facility. These are the easy ones that we could do today, that we could do without the JARVICE software, although JARVICE makes it a lot easier. Use case three would be where we have a customer that perhaps has a very specific requirement, and wants to host a platform with a regional extension of a cloud service provider. We do this today, for SAP HANA, for Oracle, where, for example, we have bare metal servers in Google's datacenters for any customer that's wanting to do Oracle as a service," he said.

The fourth use case, as described by Grant, is a multi-tenanted HPC system, such as a supercomputer hosted at a service provider, "where we can partition off parts of the machine in an elastic way and scale them up. And I'm distinguishing here between partitioning a supercomputer and providing access on a general-purpose cloud system where you've got virtualized resources."

Nimbix was known for bringing advanced HPC technology to market early (think FPGAs, GPUs, IBM's Power processor, Arm), and Grant said Atos is likewise inclined and will continue in that vein. One wonders, for example, if it will be an early supporter of RISC-V in the cloud.

Grant said, "We're doing some hardware evaluations on various things at the moment. It's a bit too early to talk about, but Atos is generally very early to market. We were the first to market with AMD Rome, for example, and similarly we will be very early with Sapphire Rapids and some of these other technologies that are coming out on the horizon."

It's early days, and Grant expects the integration of Nimbix and the roll-out of expanded cloud offerings to occur over the next year. For the moment, the Nimbix executive team has taken on new titles. Steve Hebert, founder and Nimbix CEO, is now VP, Global Head of Atos Nimbix HPC Cloud. Rob Sherrard, also a Nimbix cofounder, is now Sr. Director of Global Delivery for Nimbix HPC Cloud Services. Reiter, Nimbix CTO, is now CTO and Technical Director of the Atos Nimbix HPC Cloud. It certainly would not be unusual for there to be executive movement following an acquisition.

Atos has big plans.

"We've grown consistently 20 percent year on year. And that's mainly been in Europe, but we've also done a lot in India and South America, and so on. But as I said, we haven't done an awful lot in the U.S., and as you know, 46 percent of the global market is in the U.S. The key thing is Atos is a big company in the U.S. We were a $3 billion company in 2019 (I'm not sure what the 2020 numbers were) with over 10,000 employees and 45 different locations," said Grant.

"Critically, for me anyway, are things like eight out of the 10 largest [U.S.] manufacturers are customers, 50 percent of the top banks are our customers. Those organizations are using HPC, but they're not using HPC from us today, for the reasons I've mentioned. A key part of what we want to do is take advantage of that footprint, those customers, and have an expanded Nimbix team able to go and support those and upsell and cross-sell HPC capabilities into those customers. We absolutely intend to grow our general HPC footprint, hardware as well as software as well as cloud, in the U.S. through this acquisition."



Advances in technology have led SanJac to offer a degree in cloud computing. – Illinoisnewstoday.com

San Jacinto College continues to adapt to new technologies with the introduction of a new cloud computing bachelor's degree program to train students for a lucrative career.

The course will be available at the new Engineering Technology Center at the South Campus, 13735 Beamer Road, Houston, and aims to expand students' marketability in the ever-growing technology sector.

There are plans to expand the course to include the Central and North campuses of San Jacinto College.

Cloud computing enables the storage, management, and processing of data through remote servers on the Internet rather than on local or physical computers.

This is an infrastructure that has evolved over the last few years, which has gradually changed the way businesses use internal systems.

"What we're doing is helping students, practitioners, and technicians support these companies, because running your network through a cloud provider is a bit different, and we need to understand how it works," says Kevin Morris, Dean of Business and Technology, South Campus.

Where: San Jacinto College South Campus, 13735 Beamer Rd., Houston, Texas

Website: https://bit.ly/3i5bams

South Campus Phone: 281-929-4603


"Most people are familiar with the physical assets companies bring in to build internal networks, using physical tools such as hard drives, storage devices, and servers, which (information technology employees) can network together," says Morris.

According to Morris, technology became essential during the pandemic as companies were developing ways to stay connected to their employees and external resources.

"There are so many employees working from home that it can be difficult to get in when trying to access a VPN or network in your company," he says.

The course is available to part-time and full-time students, who can enroll directly from high school or from non-technical industries. Classes are offered in several ways, including face-to-face, online, or a hybrid of the two. The program also provides people already in the field of information technology with the opportunity to gain experience in this area.

According to Morris, cloud technology supporting tools such as working from home has been in development for several years, but the pandemic has made it more urgent.

"In the last few years we've seen that networks work very well when people are in physical locations, but what if people need to access them remotely? That is another challenge," he said.

According to Morris, cloud service providers allow enterprises to move away from their internal infrastructure and instead house systems in a cloud environment.

"Once the data is migrated to the cloud system, we have access to everything," Morris said. Not only does this eliminate the need to spend money on capital equipment, but it also gives employees access to materials, such as software, data and storage, from home, the office, or anywhere in the world.

Enterprises are increasingly dependent on cloud services such as Microsoft Azure, Amazon Web Services, Google, IBM, and Oracle, giving employees access to a variety of resources, data storage, email, and other applications.

The program is part of San Jacinto's goal of staying at the forefront of the changing work environment and employment market, part of which is to prepare students to adapt to changing technical situations, Morris said. To respond to these changes, the school meets regularly with local industry partners to seek feedback.

"Computer technology is changing very rapidly. We need to be agile in our programs and curriculums and ask what our industry partners need and where we are finding employment gaps. We're trying to fill the gap and help employers," Morris said. "We are trying to meet that challenge."

The current average annual salary for Microsoft Azure developers in the United States is $118,782, which is about $57.11 per hour. For Azure engineers, salaries run to $135,000 per year, and entry-level positions start at $87,750 per year.
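The hourly figure follows if you assume a standard 2,080-hour work year (52 weeks at 40 hours), which is an assumption about how the source derived it:

```python
annual_salary = 118_782
work_hours_per_year = 52 * 40  # assumed standard full-time year

print(round(annual_salary / work_hours_per_year, 2))  # 57.11
```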

"Our vision is to provide more opportunities and improve the standard of living of all members of our community," Morris said.

For more information on cloud computing programs, please visit https://bityl.co/83Vf or call 281-929-4603.

yorozco@hcnonline.com




Customer trends require technology to leverage – Smart Business Network

Even in a world dominated by digital technologies, some companies still hesitate to adopt tools that could take their business to the next level. That often prevents businesses from engaging with developing customer trends, most of which increasingly rely on digital technologies to leverage.

Smart Business spoke with Jim Altman, Middle Market Pennsylvania Regional Executive at Huntington Bank, about some burgeoning digital trends, as well as strategies to take advantage of them.

What digital transformation trends are taking hold?

Companies today are using personalization to differentiate customer experiences. The experiences customers want differ based on who they are and what they need. With greater personalization and customization, companies can segment their approach so that narrower customer segments get catered products, experiences and services.

Unlocking the collected customer data companies hold goes a long way to understanding the uniqueness that a business can provide its customers. Every company collects a great deal of customer data, but many don't use that data to drive experiences. It's important that companies learn to analyze the data that they capture, in part so they can drive more personalized experiences.

Another trend has companies creating ecosystems that serve as a platform for broader interaction, either between a company and its customers, or between customers and other companies. Companies are starting to figure out who they need to partner with (often because those partners work outside the company's first line of expertise) to provide value to their customers, in part by offering multiple services in one place.

Why might businesses hesitate to adopt newer technologies?

Often business leaders, when presented with a new technology, aren't eager to employ it. They may understand that the tool offers a more efficient way to perform a task, but they resist, instead relying on the way they've always done it because, in their mind, it's easier.

Additionally, as more applications are based in the cloud, some companies avoid using them because they are uncomfortable relying on cloud services. That's typically out of concern for security, as some companies believe the cloud is not as secure as software hosted on on-premise servers. While this might seem to them like a prudent risk mitigation strategy, it often means the company misses out on opportunities to save costs through increased efficiencies, or to better connect with customers.

Overcoming that requires educating leadership about the pros and cons of cloud-based applications, making communication key. It takes working with someone who is able to explain the applications, how they work, and how to establish procedures that mitigate as much risk as possible.

Once the benefits and risks are clearly communicated, businesses should start small. For instance, use one cloud application to begin with and work into more as the organization gets more comfortable. Companies tend to think they need to move all applications to the cloud at once. But starting small can help the initiative gain credibility at all levels of the business, which instills confidence and helps the organization move to the next stage. The process can begin by creating a transition plan that identifies specific applications to move to the cloud one at a time. Once most people are comfortable, it can really take off from there.

How can companies mitigate technology risks?

A lot of risk can be mitigated by working with external partners. For example, there are a lot of fintechs that have created niche businesses that can help companies in very specific areas. For companies facing a risk that's outside their comfort zone, partnering with one of these up-and-coming companies that have mastered a specific functionality where that risk is found can help mitigate security concerns while connecting with valuable expertise.

Look for ways to partner with entities that have expertise and have already assumed certain risks. There are so many businesses that specialize in certain areas that companies often don't need to develop products, software or procedures internally because specialized partners can do it quicker and with less risk.

Insights Banking & Finance is brought to you by Huntington Bank



Google Cloud and SAP Take Companies to The Cloud in New Partnership – Somag News

Google Cloud: The German company SAP SE, creator of business management software, announced on Thursday (29) that it is strengthening its strategic partnership with Google Cloud. Under the agreement, the American cloud platform becomes part of the German company's RISE with SAP program, which promises its customers a holistic transformation into an "Intelligent Company."

Extending the terms of an agreement signed in 2017, the two companies become cloud partners to accelerate the process of migrating customers and business processes to the new technology. The environment brings together computing services, including servers, storage, databases, and the use of Artificial Intelligence (AI) and Machine Learning (ML).

Investment in cloud services can mean a reduction in operational costs, while reducing physical infrastructure brings gains of scale. The objective of the partnership is for companies to be able to transform their business, migrate critical business systems to the cloud, and have access to the sophisticated tools of SAP and Google Cloud.

Responsible for product engineering at SAP, executive Thomas Saueressig explains that RISE with SAP is already a globally recognized program, adopted by customers who want to accelerate the journey to becoming intelligent companies. The expansion of the partnership with Google Cloud adds the American company's powerful infrastructure, plus AI and ML services, to the portfolio for those seeking cloud services.

In practice, this means that RISE with SAP program solutions, such as SAP Analytics Cloud and SAP Data Warehouse Cloud, currently housed on the German company's business technology platform (SAP BTP), will migrate to Google Cloud's reliable and scalable infrastructure, which also offers a high-speed network.

Some pilot projects, such as Energizer Holdings Inc. and MSC Industrial Supply, already access the RISE with SAP option within the Google Cloud.



Here’s What We Know About Google’s Tensor Mobile Chip So Far – iPhone in Canada

Tensor is the first system-on-chip (SoC) designed by Google, which will make its first appearance in the company's next-generation Pixel 6 and Pixel 6 Pro flagship smartphones set to release later this year.

Google CEO Sundar Pichai noted in a statement to Engadget that Tensor has been four years in the making and builds off of two decades of Google's computing experience. The company also says its new computational processing for video feature, debuting in the upcoming Pixel 6 phones, was only possible with its own mobile processor.

Tensor is an obvious nod to the company's open-source platform for machine learning, TensorFlow. The chip is powerful enough to run multiple AI-intensive tasks simultaneously without a phone overheating, or apply computational processing to videos as they're being captured.

The company isn't giving away all the details about the processor yet, nor is it sharing specific information about its latest flagships now. "But there's a lot of new stuff here, and we wanted to make sure people had context," Rick Osterloh said. "We think it's a really big change, so that's why we want to start early."

You can learn more about the development of Google's new Tensor mobile processor at the source page.

