
What’s next: A look into the bright future of hybrid cloud – Fast Company

For Francois Paillier, the decision to take a hybrid cloud approach was made for him early in the pandemic, when his server literally went up in smoke. As the CEO and cofounder of the genomics startup CircaGene, he had immediately joined the race to genetically sequence the COVID virus, only to have his primary host catch fire. Improvising, his team fell back on their on-premises computer while racing to spin up new virtual machines. "I had to pilot my business to go fully virtual, so it needed to be secure, scalable, and resilient," he said. "And now we have a hybrid system that is much better, with different servers for different purposes."

Since the launch of Amazon Web Services 20 years ago, there has been a steady shift away from companies running their own hardware in favor of fully virtualized everything-as-a-service. But the persistence of legacy systems coupled with emerging technologies such as AI has CTOs seeking a platform-agnostic hybrid model that strives to have the best of both worlds. The goal isn't to slash IT costs but to give their companies the ability to deploy and iterate faster (at speeds no one thought possible until a few years ago) and become true catalysts for innovation.

"The original promise of the cloud was agility," said Talia Gershon, director of cloud infrastructure research at IBM, during a wide-ranging conversation at The Future of Hybrid Cloud, a virtual event hosted in partnership by Fast Company and IBM. "How can we achieve business objectives faster? One way is to bring the cloud to new places while adding new capabilities." To explore what these applications and their business cases might look like, Gershon was joined by Paillier and Sam Carter, CEO of Moneo. Here are three key takeaways from their discussion.

In CircaGene's case, this meant learning how to sort tasks and stratify data by securing confidential genetic records both locally and in an encrypted cloud, while using another cloud for analytics and computation. "The question of 'where do we go?' is business-driven," Paillier said.

Moneo faced a similar challenge. As a cash-back rewards platform launched amidst the pandemic, its clients (consumer packaged goods brands, or CPGs) were desperate to harness its data for their suddenly all-important online advertising. "Brands wanted to know who was buying their products, where they were buying them, and how they wanted to interact," Carter said. "The pressure faced by CPGs and retailers was immense."

They, in turn, bombarded the startup with a constant stream of requests for real-time custom analytics: any scrap that would give them an edge as foot traffic to physical stores plunged. In addition to privately hosting its core operations, Moneo scrambled to create a replica of its data in IBM's public cloud to handle the mounting number of reports. "They needed to work with someone who was quick, predictable, safe, and scalable, and who could do all of those things immediately," Carter added.

Nimbly pivoting from a public- or private-only approach is one thing when you're a startup; it's another altogether when your organization is nearly bankrupt from decades of technical debt. "Imagine being the developers tasked with figuring out how to refactor millions of lines of code into microservices and bring them into a cloud-native model," Gershon said. Fortunately, AI can help analyze which pieces of the technology stack should be virtualized onto public clouds, which can be refactored efficiently, and which should be left alone. "That's where technologies such as Kubernetes and OpenShift are able to offer a consistent user experience while enabling you to run applications where they make sense," she added.

Those applications include such cutting-edge techniques as homomorphic encryption, which enables CircaGene to analyze private health records without needing to decrypt them. It's an approach perfectly suited to hybrid clouds, using a publicly accessible algorithm to run blind computation against data stored securely in private systems. But Paillier's original inspiration was personal: the plight of a friend with breast cancer whose reluctance to share her fears had delayed detection. "I decided to develop a module that examines your encrypted DNA and returns the results with a private encryption key and a recommendation," he said. "Above a certain level, you need to take action."
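The article doesn't detail CircaGene's module, but the core trick (computing on data that stays encrypted) is easy to demonstrate with the open-source python-paillier library, which implements the additively homomorphic Paillier cryptosystem; fully homomorphic schemes such as CKKS or BFV extend the idea to richer computation. A minimal sketch, with hypothetical marker scores and weights:

```python
# pip install phe -- python-paillier, an additively homomorphic scheme.
# Illustrative only; this is not CircaGene's implementation.
from phe import paillier

# The data owner generates a keypair and encrypts sensitive values.
public_key, private_key = paillier.generate_paillier_keypair()
risk_markers = [3, 5, 2]  # hypothetical per-marker risk scores
encrypted = [public_key.encrypt(x) for x in risk_markers]

# An untrusted server can add ciphertexts and scale them by public
# constants without ever seeing the plaintext.
weights = [2, 1, 4]  # hypothetical weights, known to the server
encrypted_score = sum(e * w for e, w in zip(encrypted, weights))

# Only the holder of the private key can read the result.
print(private_key.decrypt(encrypted_score))  # 2*3 + 1*5 + 4*2 = 19
```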

While less a matter of life or death, Moneo is also grappling with questions of data governance and consent. Countless consumers have found it easy (too easy) to share personal data, with little recourse for opting out once they have opted in. That's about to change with a wave of new global legislation and regulations aimed at protecting users' privacy, which has created a new headache for brands. "Who's agreed to share what, and how does that change over time?" Carter asked rhetorically. "We've become an attractive option for brands because we do something they currently can't, and that isn't really core to their business."

The timing of that statement is important. While Moneo, through its hybrid cloud reports, can now offer customers an innovative service, the same capability is inevitably tomorrow's open-source commodity. In fact, it already is: IBM Research has launched a beta test of Fybrik, a cloud-based service for orchestrating secure data governance across companies and platforms.

"This is a topic near and dear to our hearts in the research division," Gershon said. "How can new technologies help your teams move faster while automating compliance and minimizing risk? That's the vision guiding our technology roadmap for the future of cloud."

View original post here:
What's next: A look into the bright future of hybrid cloud - Fast Company


StorPool Named Finalist in DCS Awards for Cloud Project of the Year – StreetInsider.com


SOFIA, Bulgaria--(BUSINESS WIRE)--StorPool Storage, a leading global storage software provider, today announced it has been named a finalist for the Cloud Project of the Year in the 2022 DCS Awards, which honor product designers, manufacturers, suppliers and providers in the data center technology market.

StorPool is nominated for its collaboration with Krystal, one of the UK's largest independent web hosting companies. Krystal provides hosting, cloud VPS, and enterprise services to 30,000 clients and more than 200,000 websites. StorPool supports Krystal's ultra-fast NVMe-powered cloud platform Katapult with massive storage performance, a robust API, unique capacity management features to save hardware costs, and an extremely high level of data protection via triple replication to safeguard Krystal clients' data.

"StorPool's storage solution is a vital component of Katapult and gives us the ability to have maximum performance and reliability with no trade-off," said Alex Easter, CTO of Krystal.

"Krystal indeed created an award-winning infrastructure that serves as a model for other xSPs and cloud builders to achieve the performance, data security, space savings, density improvements, and elastic scalability of Katapult," said Alex Ivanov, product lead at StorPool Storage. "We appreciate being recognized for this award and for StorPool software's fast, highly available, easily integrated storage platform for cloud projects both large and small."

To vote for StorPool, or for more information, visit https://dcsawards.com/vote.

StorPool enables cloud infrastructures to run mission-critical workloads without the pain and challenges typically associated with legacy storage technologies. StorPool storage systems are ideal for storing and managing the data of demanding primary workloads: databases, web servers, virtual desktops, real-time analytics solutions, and other mission-critical software. Under the hood, the primary storage platform provides thin-provisioned volumes to the workloads and applications running in on-premises clouds. The native multi-site, multi-cluster, and BC/DR capabilities supercharge hybrid- and multi-cloud efforts at scale. For more information about how StorPool helps create simpler, smarter and more-efficient clouds, visit https://storpool.com/storage-for-msp/

About StorPool Storage

StorPool Storage is a primary storage platform designed for large-scale cloud infrastructure. It is the easiest way to convert sets of standard servers into primary or secondary storage systems. The StorPool team has experience working with various clients: Managed Service Providers, Hosting Service Providers, Cloud Service Providers, enterprises, and SaaS vendors. StorPool Storage comes as software plus a fully managed data storage service that transforms standard hardware into fast, highly available and scalable storage systems.

View source version on businesswire.com: https://www.businesswire.com/news/home/20220426005182/en/

Dan Miller, JPR Communications, 818-651-1013

Source: StorPool Storage

Originally posted here:
StorPool Named Finalist in DCS Awards for Cloud Project of the Year - StreetInsider.com


MIT Technology Review Insights and Infosys Cobalt Launch first-ever Global Cloud Ecosystem Index – PR Newswire

The cloud has become a foundational part of nearly every national economy's journey toward enhanced productivity. "Today, we see that cloud is computing. Cloud and cloud-led innovation are foundational for businesses and governments in driving enterprise and economic growth," says Elizabeth Bramson-Boudreau, CEO and publisher of MIT Technology Review.

Based on research conducted between November 2021 and February 2022, the interactive Index shows which countries are progressing fastest in global efforts to adopt and deploy cloud computing services. The Index reveals Singapore has the highest score (8.48/10) for overall cloud innovation. Next ranked were Finland (8.46/10) and Sweden (8.43/10).

The key findings of this report are as follows:

"Data gathered from the Global Cloud Ecosystem Index validates that now, more than ever, there is urgency to go to the cloud from both enterprises and policymakers, as cloud can create positive economic impact," saysRavi Kumar S., president of Infosys. He continues, "The future of work will depend heavily on effective cloud transformations to create a dynamic digital future that uplifts and equalizes us all, ensuring more opportunities for everyone, irrespective of location. Infosys Cobalt is poised to continue playing a key role in building a community through the cloud that nurtures knowledge, assets, and talent to drive innovation."

To view the research findings, visit the interactive page or click here to download the report.

To learn more about the cloud, visit The cloud hub: From cloud chaos to clarity.

For more information, please contact us at [emailprotected].

About MIT Technology Review

Founded at the Massachusetts Institute of Technology in 1899, MIT Technology Review is a world-renowned independent media company whose insight, analysis, interviews, and live events explore the newest technologies and their commercial, social, and political impacts. MIT Technology Review derives its authority from its relationship to the world's foremost technology institution and from its editors' deep technical knowledge, capacity to see technologies in their broadest context, and unequaled access to leading innovators and researchers. Insights, MIT Technology Review's custom publishing division, conducts research worldwide and publishes a wide variety of content, including articles, reports, and podcasts.

Contact: Laurel Ruma, [emailprotected]

SOURCE MIT Technology Review Insights

Go here to read the rest:
MIT Technology Review Insights and Infosys Cobalt Launch first-ever Global Cloud Ecosystem Index - PR Newswire


Google Cloud’s Media CDN lets companies build on the network that keeps YouTube running – The Verge

Companies like Netflix, Disney, and HBO do battle over media streaming of movies and TV shows, but all of their services combined pale in comparison to YouTube, which says it delivers over a billion hours of video streams every single day. Now, Google Cloud is announcing general availability of its Media CDN, a network for media companies to use for their own streaming experiences. Competitors like Microsoft Azure, Amazon CloudFront, Fastly, and Cloudflare are already in the market, but none of them can point to the service so many people use every day to help sell their products.

As we noted two years ago, the world is streaming more video than ever, and things have not slowed down since. While we think a lot about the algorithms that drive engagement on YouTube, the actual network that keeps the videos streaming is what makes the entire thing work as well as it has since Google bought the video platform for $1.65 billion in 2006.

The pitch is laid out plainly in a statement issued at the 2022 NAB Show Streaming Summit: "The same infrastructure that Google has built over the last decade to serve YouTube content to over 2 billion users is now being leveraged to deliver media at scale to Google Cloud customers with Media CDN."

YouTube has the occasional outage. But unless you're willing to build the next Netflix, operating on a network of servers that claims to reach over 200 countries and more than 1,300 cities around the globe could be a big help in keeping things running, and now it's available to more businesses. It also includes support for modern transport protocols like QUIC, to use less data and deliver content smoothly, as well as the APIs media companies use to serve advertisements, add real-time data feeds for live sports broadcasts, and support new platforms.

Originally posted here:
Google Cloud's Media CDN lets companies build on the network that keeps YouTube running - The Verge


Experts warn that Hive ransomware gang can detect unpatched servers – VentureBeat

The Hive threat group has been targeting organizations across the finance, energy and healthcare sectors as part of coordinated ransomware attacks since June 2021.

During the attacks, the group exploits ProxyShell vulnerabilities in Microsoft Exchange servers to remotely execute arbitrary commands and encrypt the data of companies with its unique ransomware strain.

The group is highly organized, with the Varonis research team recently discovering that a threat actor managed to enter an organization's environment and encrypt the target data with the ransomware strain in less than 72 hours.

"These attacks are particularly concerning, as unpatched Exchange servers are publicly discoverable via web crawlers. Anyone with an unpatched Exchange server is at risk," said Peter Firstbrook, a Gartner analyst.

Even organizations that have migrated to the cloud version of Exchange often still have some on-premises Exchange servers that could be exploited if unpatched. "There are circulating threats already, and unpatched servers can be detected with a web crawler, so it is highly likely that unpatched servers will be exploited," Firstbrook added.

Despite the significance of these vulnerabilities, many organizations have failed to patch their on-premises Exchange servers (these vulnerabilities do not affect Exchange Online or Office 365 servers).

Last year, Mandiant reported that around 30,000 Exchange servers remained unpatched, and recent attacks highlight that many organizations have been slow to update their systems.

This is problematic given that the vulnerabilities enable an attacker to remotely execute arbitrary commands and malicious code on Microsoft Exchange Server through port 443.

"Attackers continue to exploit the ProxyShell vulnerabilities that were initially disclosed more than eight months ago. They have proven to be a reliable resource for attackers since their disclosure, despite patches being available," said Claire Tills, a senior research engineer at Tenable.

"The latest attacks by an affiliate of the Hive ransomware group are enabled by the ubiquity of Microsoft Exchange and apparent delays in patching these months-old vulnerabilities. Organizations around the world in diverse sectors use Microsoft Exchange for critical business functions, making it an ideal target for threat actors."

According to Tills, organizations that fail to patch their Exchange servers enable attackers to reduce the amount of reconnaissance and the number of immediate steps they need to take to infiltrate target systems.

"Organizations that are slow to patch, such as less mature or short-staffed IT organizations, can fall into the trap of thinking that just because there are no obvious signs of intrusion, no one has used ProxyShell to gain a foothold in the environment, but this isn't always the case."

Firstbrook notes that while ransomware attacks will be obvious to organizations when they happen, "there are lots of other attack techniques that will [be] much stealthier, so the absence of ransomware does not mean the Exchange server is not already compromised."

It is for this reason that Brian Donohue, a principal information security specialist at Red Canary, recommends that organizations ensure they can detect the execution of Cobalt Strike or Mimikatz, even if they can't update Exchange.

"Having broad defense in depth against a wide array of threats means that even if you can't patch your Exchange servers or the adversary is using entirely novel tradecraft in certain parts of the attack, you might still catch the Mimikatz activity, or you might have an alert that looks for the heavily obfuscated PowerShell that's being used by Cobalt Strike, all of which happens before anything gets encrypted," Donohue said.
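Red Canary's production analytics aren't public, but the kind of signal Donohue describes can be approximated with a simple heuristic over process command lines. A toy sketch; the flag patterns and length threshold below are illustrative assumptions, not real detection logic:

```python
# Toy heuristic for spotting suspicious PowerShell launchers.
import base64
import re

SUSPICIOUS_FLAGS = re.compile(
    r"-(enc(odedcommand)?|nop|noni(nteractive)?|w(indowstyle)?\s+hidden)",
    re.IGNORECASE,
)

def looks_suspicious(command_line: str) -> bool:
    """Flag PowerShell invocations that use encoded commands, hidden
    windows, or long base64 blobs -- common obfuscation tells."""
    if "powershell" not in command_line.lower():
        return False
    if SUSPICIOUS_FLAGS.search(command_line):
        return True
    for token in command_line.split():
        if len(token) > 200:  # long opaque token: see if it decodes as base64
            try:
                base64.b64decode(token, validate=True)
                return True
            except ValueError:
                pass
    return False

print(looks_suspicious("powershell.exe -NoP -W Hidden -Enc SQBFAFgA"))  # True
```

A real deployment would feed this from endpoint telemetry and tune it against false positives; the point is that the launcher's tells appear before anything gets encrypted.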

In other words, enterprises that haven't patched the vulnerabilities can still protect themselves by using managed detection and response and other security solutions to detect malicious activity that comes before ransomware encryption, so they can respond before it's too late.


View post:
Experts warn that Hive ransomware gang can detect unpatched servers - VentureBeat


Hackers are using LemonDuck malware to target Docker cloud instances – The Indian Express

The boom in cryptocurrency prices has significantly increased the demand for crypto mining, which, essentially, is running programs on high-end devices to gain cryptocurrency in return. Some crypto miners even use cloud services to run these programs.

Cybercriminals are now compromising cloud servers and using crypto mining bots, in this case the LemonDuck malware. Researchers on the CrowdStrike Cloud Threat Research team detected LemonDuck targeting Docker, a popular container platform, to mine cryptocurrency on Linux. This campaign is currently active.

The LemonDuck malware is code that can cause unwanted, usually dangerous changes to your system. It steals credentials, removes security controls, spreads via emails, moves laterally, and ultimately drops more tools for human-operated activity.

"Due to the cryptocurrency boom in recent years, combined with cloud and container adoption in enterprises, cryptomining is proven to be a monetarily attractive option for attackers. Since cloud and container ecosystems heavily use Linux, it drew the attention of the operators of botnets like LemonDuck, which started targeting Docker for cryptomining on the Linux platform," the researchers said in the blog post.

According to the Google Threat Horizon report, 86 per cent of compromised Google Cloud instances were used to perform cryptocurrency mining.

The researchers call it a well-known cryptomining bot that infects Microsoft Exchange servers to mine cryptocurrency. It escalates privileges and moves laterally in compromised networks. The bot tries to monetize its efforts via various simultaneously active campaigns that mine cryptocurrencies such as Monero.

According to the researchers, LemonDuck targets exposed Docker APIs to get initial access. It then infects the system via an image file that has malicious code embedded inside it. CrowdStrike found multiple campaigns being operated by the hackers that were targeting Windows and Linux platforms simultaneously.
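Since the initial-access vector is simply a Docker Engine API reachable over the network, the corresponding defensive check is straightforward. A minimal sketch that probes the conventional unauthenticated daemon port (2375) on hosts you are authorized to audit; the function name and host list are placeholders:

```python
# pip install requests
import requests

def docker_api_exposed(host: str, port: int = 2375, timeout: float = 3.0) -> bool:
    """Return True if the Docker Engine API answers without authentication."""
    try:
        resp = requests.get(f"http://{host}:{port}/version", timeout=timeout)
        return resp.status_code == 200 and "ApiVersion" in resp.json()
    except (requests.RequestException, ValueError):
        return False

if __name__ == "__main__":
    # Only audit infrastructure you own or are authorized to test.
    for host in ["127.0.0.1"]:  # replace with your own inventory
        print(host, "EXPOSED" if docker_api_exposed(host) else "ok")
```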

The researchers highlight that the LemonDuck malware is so strong that it has the potential to evade Alibaba Cloud's monitoring service, which watches cloud instances for malicious activities.

"LemonDuck utilized some part of its vast C2 operation to target Linux and Docker in addition to its Windows campaigns. It utilized techniques to evade defenses not only by using disguised files and by killing monitoring daemon, but also by disabling Alibaba Cloud's monitoring service," the researchers added.

CrowdStrike researchers expect such kinds of campaigns to increase as cloud adoption continues to grow.

Originally posted here:
Hackers are using LemonDuck malware to target Docker cloud instances - The Indian Express


Mastodon Social Network: Everything You Need to Know Before Switching – TheTealMango

After billionaire Elon Musk's roughly $44 billion bid to buy Twitter, people are looking for alternatives, and the Mastodon Social Network sits at the top of the list. This decentralized social network, which runs on independent instances (communities), is now trending on Twitter, and it has even launched an Android app recently to welcome the flocks of incoming users.

But what exactly is the Mastodon Social Network? Who owns and runs it? Is it worth leaving Twitter and switching over? We'll discuss that here. You will also understand why Mastodon's name tops the list of Twitter alternatives despite it being the opposite of Twitter in many ways.

Also, the Mastodon Social Network isn't a newly launched fad feeding on the recent hype around Musk's deal with Twitter. It has been here for a long time, since late 2016, and it revolutionized the way social networking works. However, people rarely grasp its core idea.

Mastodon Social Network is a free and open-source platform founded in October 2016 by Eugen Rochko, a German developer. It provides self-hosted social networking services with Twitter-like microblogging features, offered by a large number of independently run Mastodon nodes called instances.

An individual who joins the Mastodon Social Network joins a specific instance that can interoperate with other instances as part of a federated social network, allowing users on different nodes to interact with each other.

You can think of instances (communities or nodes) as something like the various subreddits on Reddit, each dedicated to a specific topic. However, Reddit still owns its subreddits and can moderate them whenever needed. In Mastodon's case, all the instances are independent, dedicated to a specific topic or community (LGBTQ+, art, music, a region, etc.), and cannot be controlled by a single entity.

If you are a business or an organization, you can host your own social media platform on your own infrastructure based on Mastodon Social Network. This will help you operate your community in a decentralized manner, and it will not depend on any single company.

Mastodon Social Network works as part of the Fediverse, the ensemble of federated (interconnected) server platforms used for sharing content, which allows users to interact with each other without having to sign up for separate accounts on each server.

It is like email, where a Gmail user can communicate with Yahoo, Rediff Mail, or any other email provider's users. However, your account won't be owned and controlled by anyone else. You are the owner of your content, actions, and words. The instance you are a part of is still managed by the admins of that community.

Founder Eugen Rochko believes that small, closely related communities manage unwanted behavior more effectively than a large company's small safety team. Mastodon even allows users to block other users and report them to administrators.

You can either join Mastodon Social Network as an individual or as an organization/community. When joining as an individual, you only need to join an instance, which is an individual server run by an organization or community. It is just like signing up with an email address or joining a subreddit.

You will have to start the process by finding the right instance for your interest, or you can simply join the one for your region, interest, or the globally popular one. We'll help you choose the right instance in the next section of this post.

If you are an organization, you can install Mastodon on an Ubuntu cloud server. After that, you'll have to learn to expand your Mastodon Fediverse network. You can also integrate it into WordPress or any other web application that you manage. You'll be the owner of your social network.

Mastodon is not a single site but a collection of many. Thus, joining it can be a bit complicated for some users, who may struggle to figure out which instance is right for them. The core website, joinmastodon.org, lists several communities or instances, but it presents only the ones that are committed to active moderation against hate.

The largest of these instances is Mastodon.online, which is run by the developers of the Mastodon Social Network. You can start by joining it if you are confused about which instance to join.

There won't be any problem later on, as Mastodon allows you to smoothly move your account to a different instance, and you can interact with users on other instances without having to move at all.
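Federation is visible at the API level, too: every instance exposes the same documented REST endpoints, and the public ("federated") timeline mixes in posts that arrived from other instances. A small sketch against the standard /api/v1/timelines/public endpoint, which needs no account; the instance choice is arbitrary:

```python
# pip install requests
import requests

INSTANCE = "https://mastodon.online"  # any public instance works here

# Fetch a few recent posts from the instance's public timeline.
resp = requests.get(f"{INSTANCE}/api/v1/timelines/public", params={"limit": 5})
resp.raise_for_status()

for status in resp.json():
    # Remote authors show up as user@their-instance; local ones as just user.
    print(status["account"]["acct"], "->", status["url"])
```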

There is an unofficial directory, Instances.social, which holds a collection of around 2,500 instances or communities based on varying interests and themes. You can go through the list and find the perfect one for you. If you don't join any, Mastodon's home feed serves mostly Japanese cat videos and content in other languages.

Some controversial Mastodon instances include Sinblr, where previous Tumblr amateur porn creators reside, Dolphin.Town, where every post must contain only the letter e, Fosstodon, the community for open-source software enthusiasts, and Gab, where users post an extreme amount of hate content without much moderation.

Yes, Mastodon Social Network is now available on both Android and iOS devices. The iOS client launched in July 2021 to expand the network to more users. You can find it on the App Store for your iPhone and iPad.

Recently, in April 2022, Mastodon Social Network also launched an Android application to welcome the users coming over after Elon Musk's controversial bid to buy Twitter at $54.20 per share. You can find the Mastodon app for Android in the Play Store.

Mastodon Social Network presents an actual network of communities with varying interests and rules, capable of communicating with each other. It allows you to consume content without interruptions, and there isn't an algorithm that decides what you see.

You will see the full content from the people you follow and the instance you are a part of. It doesn't register your clicks, inputs, or any activity, unlike Facebook and Twitter. It is completely ad-free and doesn't want to sell you things. Also, your content is absolutely yours.

There is no centralized authority that decides what users do on the platform. You are free to express yourself. There are generalized instances, such as mastodon.social and mastodon.xyz, and there are instances dedicated to hardcore fans of something specific.

It offers a friendlier kind of social environment, eliminating hateful, ugly, or influential content. There is no one controlling you. This serves as the major advantage of the network. However, it also comes with certain cons, and the Gab community is the prime example.

If you ask me, you can definitely hop aboard the Mastodon ship and try it. However, you should only leave Twitter and switch completely once you get a full hold of it. Mastodon is not for everyone, in my opinion. It's like the early days of the Internet: there is decentralization, there is no stalking, and people freely express themselves.

However, for casual users who do not wish to dive deep into the technology, it could be hard to understand. Mastodon still has a long way to go before competing with traditional social networks like Twitter and Facebook. They know how to retain users and keep them hooked on their content, while Mastodon never even tries.

Mastodon Social Network is still far from competing with Twitter or any other traditional social media platform. It currently only has over 4.4 million users while Twitter has more than 217 million registered users. It is also not suitable for everyone, especially casual users.

We are not stating that it's not good. It's amazing, in fact. There are zero ads, no one gets special treatment, and everyone is treated equally. There is no hate speech or harmful content, and there is complete decentralization. The community members control the instances.

However, casual users may find it hard to understand and may not understand the core concept behind decentralization. If you are able to figure it out, then you can definitely switch over and explore it for your purposes.

What do you think about it? Do you find the Mastodon Social Network worth switching over to? Let us know in the comments section.

Read the original:
Mastodon Social Network: Everything You Need to Know Before Switching - TheTealMango


Datto Launches Two Continuity Solutions that Provide the Last Line of Defense for MSPs Against Cyberattacks – Business Wire

NORWALK, Conn.--(BUSINESS WIRE)--Datto Holding Corp. (Datto) (NYSE:MSP), the leading global provider of security and cloud-based software solutions purpose-built for Managed Service Providers (MSPs), today launches two continuity solutions: its next-generation SIRIS 5 product, featuring up to 4X the performance, and Cloud Continuity for PCs, improved for today's dynamic, hybrid workforce. Both all-in-one backup and recovery solutions empower MSPs with best-in-class continuity for their small and medium business (SMB) clients. In the event of a physical loss, ransomware, hardware failure, or other disasters, Datto provides multiple recovery and restore options, whether onsite or remote.

Last year in Q4, over 80% of ransomware attacks targeted SMBs, with an average business interruption of 20 days following successful attacks, which can be crippling for a smaller organization. To survive attacks, SMBs must have access to business continuity solutions that can quickly restore their data and operations to prevent significant downtime and business interruptions. Business Continuity and Disaster Recovery (BCDR) is an established backbone of any ransomware recovery strategy.

Datto's all-in-one complete BCDR solution, with immutable backups and the secure Datto Cloud, makes SIRIS 5 one of the best last lines of defense against cyberattacks, restoring business operations for SMBs within minutes. With SIRIS 5, partners can expect:

The SIRIS 5 appliance will run on purpose-built certified hardware powered by Dell. Dell's world-class server hardware provides the industry standard in reliability, serviceability, global reach, and supply chain resilience. Each SIRIS 5 device will undergo stepped-up comprehensive quality testing at a Datto facility and is backed by Datto's 5-year warranty. Coupled with Datto's renowned 24/7/365 support, Datto partners will have access to the most robust and reliable business continuity solution Datto has ever offered.

"An MSP's best defense against evolving ransomware threats is a high-performing and reliable BCDR solution, and SIRIS 5 is Datto's most powerful and flexible solution yet," said Bob Petrocelli, Chief Technology Officer at Datto. "With its cloud-first architecture and integrated security, the SIRIS platform was created for MSPs, delivering an essential all-in-one solution for backup and recovery. We're proud to release our flagship SIRIS 5 product, which will deliver next-level reliability and performance when it matters most."

"The number one concern for our clients is what a cyberattack would mean for their business. We need strong backup and recovery solutions in place if all else fails, to ensure they're up and running with minimal disruption," said Razwan Ahmad, CEO of N.O.C. Systems LLC, a Datto MSP partner located in Connecticut. "With SIRIS 5 we know we're covered. Datto's world-class technology and support enables us to protect our clients' data and livelihoods with the strongest solution."

SIRIS 5 will be available today at 9 a.m. ET across the globe. To learn more about SIRIS 5 and Cloud Continuity for PCs, join one of today's webinar events, offered at 9 a.m. ET and 7 p.m. ET: Driving MSP Growth with the Next Generation of Datto Continuity.

About Datto

As the leading global provider of security and cloud-based software solutions purpose-built for Managed Service Providers (MSPs), Datto believes there is no limit to what small and medium businesses (SMBs) can achieve with the right technology. Datto's proven Unified Continuity, Networking, Endpoint Management, and Business Management solutions drive cyber resilience, efficiency, and growth for MSPs. Delivered via an integrated platform, Datto's solutions help its global ecosystem of MSP partners serve over one million businesses around the world. From proactive dynamic detection and prevention to fast, flexible recovery from cyber incidents, Datto's solutions defend against costly downtime and data loss in servers, virtual machines, cloud applications, or anywhere data resides. Since its founding in 2007, Datto has won numerous awards for its product excellence, superior technical support, rapid growth, and for fostering an outstanding workplace. With headquarters in Norwalk, Connecticut, Datto has global offices in Australia, Canada, China, Denmark, Germany, Israel, the Netherlands, Singapore, and the United Kingdom. Learn more at http://www.datto.com.


Law enforcement pressure forces ransomware groups to refine tactics in Q4 2021, CoveWare, 2022, https://www.coveware.com/blog/2022/2/2/law-enforcement-pressure-forces-ransomware-groups-to-refine-tactics-in-q4-2021

Read more:
Datto Launches Two Continuity Solutions that Provide the Last Line of Defense for MSPs Against Cyberattacks - Business Wire


Beware The Hype Of Modern Tech – IT Jungle

April 25, 2022, by Alex Woodie

Many IBM i shops are under the gun to modernize their applications as part of a digital transformation initiative. If the app is more than 10 or 15 years old and doesn't use the latest technology and techniques, it's considered a legacy system that must be torn down and rebuilt according to current code. But there are substantial risks associated with these efforts, not the least of which is that the modern method is essentially incompatible with the IBM i architecture as it currently exists. IBM i shops should be careful when evaluating these new directions.

Amy Anderson, a modernization consultant working in IBM's Rochester, Minnesota, lab, says she was joking last year when she said "every executive says they want to do containerized microservices in the cloud." If Anderson is thinking about a future in comedy, she might want to rethink her plans, because what she says isn't a joke; it's the truth.

Many, if not most, tech executives these days are fully behind the drive to run their systems as containerized microservices in the cloud. They have been told by the analyst firms and the mainstream tech press and the cloud giants that the future of business IT is breaking up monolithic applications into lots of different pieces that communicate through microservices, probably REST. All these little apps will live in containers, likely managed by Kubernetes, enabling them to scale up and down seamlessly on the cloud, likely AWS or Microsoft Azure.

The containerized microservices in the cloud mantra has been repeated so often, many just accept it as the gospel truth. "Of course that is the future of business tech!" they say. "How else could we possibly run all these applications?" It's accepted as an article of faith that this is the right approach. Whether a company is running homegrown software or a packaged app, they're adamant that the old ways must be left behind in order to embrace the glorious future that is containerized microservices running in the cloud.

The reality is that the supposedly glorious future is, today, a pipe dream, at least when it comes to IBM i. Let's start with Kubernetes, the container orchestration system open sourced by Google in 2014, which is a critical component of running in the cloud-native way.

Google is the most advanced technology firm on the planet, having single-handedly developed many of the most important technologies of the past two decades. (IBM may own the most patents, but Google has the biggest impact.) Starting in the early 2000s, Google created a management system called Borg to corral all the giant clusters that power its various data services, and that eventually became Kubernetes.

While Kubernetes solves one problem (eliminating the complexity inherent in deploying and scaling all the different components that go into a given application), it introduces a lot more complexity for the user. Running a Kubernetes cluster is hard. If you've talked to anybody who has tried to do it themselves, you'll quickly find out that it's extremely difficult. It requires a whole new set of skills that most IT professionals do not have. The cloud giants, of course, have these folks in droves, but they're practically non-existent everywhere else.

ISVs are eager to adopt Kubernetes as the new de facto operating system for one very good reason: because it helps them run their applications on the cloud. They want to run in the cloud for several reasons, not the least of which is that the accountants tell them that recurring OpEx revenue is better than one-time CapEx revenue. (Of course, it's all the same dollars in the end, but we mustn't question the accountants about their precious codes and practices.)

For greenfield development, the cloud can make a lot of sense. Customers can get up and running very quickly on a cloud-based business application, and leave all the muss and fuss of managing hardware to the cloud provider. But there are downsides too, such as no ability to customize the application. For the vendors, the fact that customers cannot customize goes hand in hand with their inability to fall behind on releases. (Surely the vendor passes whatever benefit it receives through collective avoidance of technical debt back to you, dear customer.)

The Kubernetes route makes less sense for established products with an established installed base. It takes quite a bit of work to adapt an existing application to run inside a Docker container and have it managed in a Kubernetes pod. It can be done, but it's a heavy lift. And when it comes to critical transactional systems, it likely becomes more of a full-blown re-implementation than a simple upgrade. There are no free lunches in IT.

When it comes to IBM i, lots of existing customers who are running their ERP systems on-prem are not ready to move their production business applications to the cloud. Notice what happened when Infor stopped rolling out enhancements for the M3 applications for IBM i customers. Infor wanted these folks to adopt M3 running on X86 servers running in AWS cloud. Many of them balked at this forced re-implementation, and now Infor is rolling out a new offering called CM3 that recognizes that customers want to keep their data on prem in their Db2 for i server.

Other ERP vendors have taken a similar approach to the cloud. SAP wants its Business Suite customers to move to S/4 HANA, which is a containerized, microservice-based ERP running in the cloud. The German ERP giant has committed to supporting on-prem Business Suite customers until 2027, and through 2030 with an extended maintenance agreement. After that, the customers must be on S/4 HANA, which at this point doesnt run on IBM i.

Will the 1,500-plus customers who have benefited from running SAP on IBM i for the past 30 years be willing to give up their entire legacy and begin anew in the S/4 HANA cloud? It sounds like a risky proposition, especially given the fact that much of the functionality that currently exists in Business Suite has yet to be reconstructed in S/4 HANA. Is this an acceptable risk?

Kubernetes is just part of the problem, but it's a big one, because at this point IBM i doesn't support Kubernetes. It's not even clear what Kubernetes running on IBM i would look like, considering all the virtualization features that already exist in the IBM i and Power platform. (What would become of LPARs, subsystems, and iASPs? How would any of that work?) In any event, the executives in charge of IBM i have told IT Jungle there is no demand for Kubernetes among IBM i customers. But that could change.

Jack Henry & Associates officially unleashed its long-term roadmap earlier this year, but it had been working on the plan for years. The company has been a stalwart of the midrange platform for decades, reliably processing transactions for more than a thousand banks and credit unions running on its RPG-based core banking systems. It is also one of the biggest private cloud providers in the Power Systems arena, as it runs the Power machinery powering (pun intended) hundreds of customer applications.

The future roadmap for Jack Henry is (you guessed it) containerized microservices in the cloud. The company explains that it doesn't make sense to develop and maintain about 100 duplicate business functions across four separate products, and so it will slowly replace those redundant components that today make up its monolithic packages, like Silverlake, with smaller, bite-sized components that run in the cloud-native fashion on Kubernetes and connect and communicate via microservices.

It's not a bad plan, if you've been listening to the IT analysts and the press for the past five years. Jack Henry is doing exactly what they've been espousing as the modern method. But how does it mesh with its current legacy? The reality is that none of Jack Henry's future software will be able to run on IBM i. Db2 for i is not even one of the long-term options for a database; instead, it selected PostgreSQL, SQL Server, and MongoDB (depending on which cloud the customer is running in).

Jack Henry executives acknowledge that there's not much overlap between its roadmap and the IBM i roadmap at this point in time. But they say that they're moving slowly and won't have all of the 100 or so business functions fully converted into containerized microservices for 15 years, and then it will likely take another 15 years to get everybody moved over. So it's not a pressing issue at the moment.

Maybe Kubernetes will run on IBM i by then? Maybe there will be something new and different that eliminates the technological mismatch? Who knows?

The IBM i system is a known entity, with known strengths and weaknesses. Containerized microservices in the cloud is an unknown entity, and its strengths and weaknesses are still being determined. While containerized microservices running in the cloud may ultimately win out as the superior platform for business IT, that hasnt been decided yet.

For the past 30 years, the mainstream IT world has leapt from one shiny object to the next, convinced that it will be The Next Big Thing. (TPM, the founder of this publication and its co-editor with me, has a whole different life as a journalist and analyst chasing this, called The Next Platform, not surprisingly.) Over the same period, the IBM i platform has continued more or less on the same path, with the same core architecture, running the same types of applications in the same reliable, secure manner.

The more hype is lavished upon containerized microservices in the cloud, the more it looks like just the latest shiny object, which will inevitably be replaced by the next shiny object. Meanwhile, the IBM i server will just keep ticking.

Infor CM3 to Provide On-Prem Alternative to Cloudy M3

Inside Jack Henrys Long-Term Modernization Roadmap

SAP on IBM i to S/4 HANA Migration: No Need to Rush

Wanted: Exciting New Publicist For Boring Old Server

Read the original post:
Beware The Hype Of Modern Tech - IT Jungle


How to Get a Business’s Network Ready to Handle AI Applications – BizTech Magazine

Switch to Spine-and-Leaf Architecture

High-speed data center networking functions are the basis for everything else: intersystem links, storage and reliable connectivity to customers. That means not just high speed, but also low-latency and low-loss networks. To deliver the performance needed for AI, IT managers should be thinking about changes both in architecture and in hardware.

IT managers with traditional three-tier core/distribution/edge networks in their data centers should be planning to replace all that gear (even without AI in the picture) with spine-and-leaf architecture. Changing to spine-and-leaf ensures that every system in a computing pod is no more than two hops from every other system.

Selecting 40-gigabit-per-second or 100Gbps links between leaf switches and the network spine helps reduce the impact of oversubscription when servers are commonly connected at 10Gbps to the network leaf switches. To really be on the cutting edge of performance, IT managers can aim for a 100Gbps fabric end-to-end, although some find that 10Gbps server connections occupy a price-performance sweet spot.
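To make the oversubscription trade-off concrete, here is the back-of-the-envelope arithmetic for a single leaf switch; the port counts are illustrative assumptions, not vendor figures:

```python
# Oversubscription = downstream demand / upstream capacity for one leaf.
servers_per_leaf = 48
server_link_gbps = 10       # each server attached at 10GbE
uplinks_to_spine = 4
uplink_gbps = 100           # 4 x 100GbE toward the spine

downstream_gbps = servers_per_leaf * server_link_gbps   # 480 Gbps of demand
upstream_gbps = uplinks_to_spine * uplink_gbps          # 400 Gbps of capacity

print(f"Oversubscription: {downstream_gbps / upstream_gbps:.2f}:1")  # 1.20:1
```

Ratios near 1:1 keep worst-case congestion low; fatter uplinks or fewer servers per leaf push the ratio down, at a higher cost per port.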

When a network has to support high-speed NVMe over Fabric storage, IT managers have another option for notching up speeds to match the demands being made by ML models: remote direct memory access (RDMA) combined with lossless Ethernet.

NVMe over Fabric can run over standard Ethernet, utilizing Transmission Control Protocol to encapsulate traffic. But NVMe over Fabric storage delivers even lower latency when server network interface controllers, or NICs, are replaced with RDMA NICs, or RNICs. By offloading everything from the CPU and bypassing the OS kernel, network stack and disk drivers, performance is supercharged over traditional architectures. The lossless Ethernet side of the equation is provided by modern high-performance network switches that can compensate for oversubscription, prioritize RDMA traffic and manage congestion end to end within the data center.

With high-speed networking in place, and high-speed storage systems ready to roll, IT managers are poised for the last part of the AI equation: computing power.

RELATED: Find out how AI is poised to revolutionize the insurance industry.

Start researching AI and ML, and you may discover that your old servers are not powerful enough and you need to immediately invest in graphics processing units to handle the load. In truth, moving to GPUs will give the best results in many cases, but not all the time. And for IT managers who have extensive experience with traditional servers and large server farms already deployed, adding GPUs can be an expensive choice.

The key point here is parallelism: the requirement to run multiple streams at the same time, combined with memory use. GPUs are great at parallel operations, and mainstream ML tools are especially efficient and high-performing when they can run on these GPUs. But all this performance comes at a cost, and GPU upgrades don't do anything when your developers and operations teams don't dim the lights when they run the processor-intensive parts of their ML models.

That's the big difference between GPUs and storage and network upgrades, which deliver better performance for everything running in the data center, all the time.

IT managers should plan their investments carefully when it comes to GPUs, and make sure that workloads are heavy enough to justify investing in this new technology. It's also worthwhile to look at the major cloud computing providers, including Amazon, Google and Microsoft, as they already have the GPU hardware installed and ready to go, and are happy to rent it to you through their cloud computing services.

Go here to see the original:
How to Get a Business's Network Ready to Handle AI Applications - BizTech Magazine
