Category Archives: Cloud Hosting
AiFi, A2Z Cust2Mate, and Sitoo head up October’s biggest retail … – Retail Technology Innovation Hub
Ocado Retail
Thousands of cash rewards have been redeemed by UK consumers as part of the first nationwide digital deposit return scheme (DDRS) trial led by Polytag, Bower, and Ocado Retail.
Since 14 July, Ocado Retail customers have been able to claim monetary rewards for recycling any of its own-brand two- and four-pint milk bottles using Polytag's QR codes, redeemable via the recycling app Bower.
The trial saw more than 20,000 Ocado Retail milk bottles scanned by over 3,000 consumers using their smartphones before being deposited into home kerbside recycling.
The QR code integrated into the milk bottle's labelling is scanned via the Bower app, the bottle is disposed of in the person's registered at-home recycling bin, and the reward is withdrawn into their bank account.
Up to £4,000 worth of rewards were redeemed during the nine-week trial.
Superdrug
Superdrug has launched its own retail media network in the UK.
In a LinkedIn post, Paul Stafford, Head of Digital Marketing, Retail Media & Online Brand at Superdrug, said: "This week, I had the pleasure of announcing the arrival of Superdrug's own retail media network, Optimo, at our annual supplier conference."
He added: "This has been a huge piece of work covering A.S. Watson Group (the owner of Superdrug) BUs across Hong Kong, Milan, Paris, Amsterdam and London. From a personal point of view, it's been an incredible journey to work on a project that is at the coal face of major industry change."
"I can't wait now to start working with our suppliers to deliver on their briefs."
"Obviously this has been a massive team effort and couldn't be done without the support of Dan Jarvis (Chief Digital Officer, Europe at A.S. Watson Group), Andrea Zoboli (eLab Technology Director, Europe, at A.S. Watson Group), Marille Slenders (Head of Media & Performance Benelux at A.S. Watson Group), Matt Walburn (Customer and Marketing Director at Superdrug), Nicholas Queally (Marketing Manager - Promotions and Pricing at Superdrug), and many, many more."
Amazon (again)
Amazon customers in the UK, Italy and an additional US city will have the option to get their packages delivered by a Prime Air drone beginning in late 2024.
Specific cities in the US and abroad will be named in the coming months.
David Carbon, Vice President of Amazon's Prime Air, says: "As part of our continued efforts to innovate for customers, we are excited to announce the expansion of Prime Air delivery internationally, for the first time outside the US."
"We have been delivering packages by drone for almost a year in California and in Texas. We have built a safe, reliable delivery service and have partnered very closely with regulators and communities."
"We will continue with that collaboration into the future to ensure we are meeting the needs of our customers and the communities we serve."
Prime Air is also unveiling a new MK30 drone design, which is said to be quieter, smaller and lighter than previous models.
Dublin Airport
A new Dublin Town To Go store, powered by Zippin technology, has opened in T1 Arrivals at Dublin Airport.
Shoppers scan their payment card to enter; a combination of cameras and weight-sensing shelves then detects what they're buying and charges their card when they walk out.
The store, the first of its kind to open to the general public in Ireland, sells a range of goods to meet the needs of both departing and arriving passengers and also those waiting to welcome people in Arrivals, including coffee, pastries, sandwiches, salads, sweets, cereal, milk and toiletries.
Roche, Ibex Medical Analytics and AWS partner for better cancer … – Health Imaging
Swiss pharmaceutical company Roche is teaming up with artificial intelligence vendor Ibex Medical Analytics and Amazon Web Services (AWS), the cloud hosting arm of Amazon, to bring AI-driven diagnostic support tools to pathology laboratories. Roche, Ibex and AWS have collaborated to enable access to Ibex's AI-powered decision support tools through Roche's Navify digital pathology platform.
Digital pathology technology is designed to streamline workflows and enhance laboratory diagnostic efficiency. Combined with AI, it has the potential to boost productivity and reduce turnaround times in those labs. To this end, Ibex's AI tools will be integrated into the Navify digital pathology platform and used to aid clinicians in diagnosing breast and prostate cancer, according to a press release from the companies.
This partnership will support biopsy diagnosis, cancer grading, subtype identification, and the recognition of noncancerous features. Additionally, it supports the review and analysis of slide images.
"Our collaboration with Roche is driving new AI-powered workflows that help pathologists and laboratories realize the full potential of going digital," Joseph Mossel, CEO and co-founder of Ibex Medical Analytics, said in a statement. "The next phase in the evolution of digital pathology is open platforms that enable technology providers to join forces and offer integrated software solutions that improve lab efficiency, biopsy review and ultimately the quality of patient care."
Harmful effects of cloud concentration now a key concern for IT … – ITPro
Cloud concentration is now among the top risks identified by organizations and could harm long-term innovation and technology goals, new research has found.
Referring to the practice where an organization is heavily reliant on one particular hyperscale provider, cloud concentration was highlighted as one of the top five emerging risks among IT leaders for the second consecutive quarter, according to Gartner.
The study suggested that overdependence on one hyperscaler could cause severe disruption to operations if an incident - such as a security breach or downtime - were to occur at a provider.
Limited options with regard to technologies as a result of cloud concentration could also harm long-term innovation goals, the study found.
Ran Xu, director of research in Gartner's legal risk and compliance practice, warned that businesses no longer view cloud concentration as an emerging risk, instead viewing the prospect of disruption as a leading risk consideration.
"The risk associated with cloud concentration is fast losing its emerging status as it is becoming a widely recognized risk for most enterprises," she said.
"Many organizations are now in a position where they would face severe disruption in the event of a failure of a single provider."
Growing concerns over cloud concentration have arisen due to a sharpened focus among many businesses to consolidate IT operations and direct efforts toward a handful of strategic providers, according to Gartner.
This shift has been part of an effort to reduce complexity across IT estates, the consultancy said, as well as to cut costs and contend with evolving skill requirements.
The issue has been further compounded by the fact that a handful of hyperscaler vendors dominate global and regional markets, the study found.
In the UK, for example, concerns over the dominance of hyperscale providers such as Google, AWS, and Microsoft, have prompted a regulatory probe into whether their market shares are harming customers and impacting innovation.
"Where organizations have chosen to go the route of hosting their IT services in public clouds, there aren't many obvious ways to avoid concentration risk while keeping the benefits of cloud services," said Xu.
"Moreover, regulations at the country and subnational level diverge on concentration risk, anti-competition, data sovereignty and privacy rules pertaining to cloud services, further complicating the picture."
Gartner specifically highlighted three potential consequences of cloud concentration, including the prospect of a wide incident blast radius.
In this instance, the failure of a particular cloud provider could seriously impact business continuity and result in disastrous consequences such as loss of earnings and reputational damage.
"The more applications (and business processes) depend on a particular cloud provider, the greater the potential breadth of impact of a cloud service issue, which may heighten business continuity concerns," the consultancy said.
Concentrated dependency on a specific vendor could also harm future technology options and long-term innovation, the study warned. This culture of dependency means vendors are able to exert significant influence over an organization's future technology strategy.
Regulatory compliance challenges were also identified as a harmful effect of cloud concentration, according to Gartner. Because regulatory bodies take different views of the risks of concentrated IT infrastructure, organizations may struggle to satisfy every regulator's demands for addressing concentration risk.
Xu said the research underlines both the importance of having a well-considered continuity plan in the event of an IT provider failure, as well as consideration of alternative cloud strategies.
"Currently, if the benefits of public cloud use are considered strategically important to a business, there are not many obvious solutions to remove the risk altogether," Xu said.
Businesses are increasingly diversifying their IT estates amidst a shift to multi-cloud and hybrid cloud practices. Research from Google last year pointed to a growing hybrid cloud shift among organizations, with 48% of tech leaders operating under a mostly hybrid setup compared to just 40% in 2020.
A separate survey from Parallels this month also found that nearly two-thirds (64%) of respondents had implemented hybrid cloud practices over the last year.
Run Generative AI on-premises, with a cloud experience – CIO
IT leaders are grappling with a critical question as they seek to deploy generative AI workloads today: Is it better for my business to run GenAI applications in the public cloud or on-premises?
The question inspires spirited debate from both sides of the hosting aisle. Most IT leaders say, "It depends." True, but it also bears some unpacking.
As you prepare to run a new workload, your first inclination may be to build, test, and launch it in a public cloud. And why not? The approach has probably helped you reduce time to deployment and even accelerated innovation.
So naturally, as you consider rolling out a GenAI service, you may be tempted to build and launch it in your preferred public cloud. You believe it will offer greater agility and speed than doing it in your corporate data center, or anywhere else.
Normally nobody would blink, blame you, or tell you to think twice. Except this workload is a bit different.
As always, you'll base your workload placement decision on security, performance, latency, cost, and other variables, including the size and complexity of the large (or small) language model you plan to run, as well as the environments you plan to deploy it to.
Yet given the myriad known unknowns of deploying GenAI models, and the fact that the value you derive may be intrinsically linked to your corporate data, your ability to control this new technology might trump all other factors.
Using an off-the-shelf or open source model as you build, test, and tune your app on-premises, you can bring the AI to your data, affording greater processing efficiency while retaining control over that data.
Say you work in a regulated sector such as finance and you wish to create a GenAI service that surfaces product information. Strict data security and privacy mandates may govern if and how you work with AI services in the public cloud. Running a GenAI app on premises ensures that all data remains within the organization's environment, reducing the risk of data breaches while respecting regulatory requirements.
Plus, your ability to control access to the GenAI instance could help alleviate shadow AI concerns, which are growing among organizations. Protecting your IP while preventing that Wild West is good governance.
Some scenarios require real-time interactions with the AI model, such as chatbots that support sales or customers. Running the LLM on-premises can minimize latency since data doesn't have to travel to remote cloud servers and back. This can result in faster response times while enabling you to better monitor latency and throughput, as well as the accuracy of your model. Fifty-five percent of IT decision makers cited performance as a top reason for running GenAI workloads on-premises, according to a Dell survey of IT managers.1
Costs present another tricky variable. Operating a GenAI app in the public cloud can yield sticker shock as usage grows, or if the implementation isn't properly scoped. Maybe you're looking to stand up a paired programming environment in which humans write code while GenAI puts it through the QA wringer, or vice versa.
You get greater control over how many resources you consume on-premises, which will help you curb costs. That's no small consideration, as 35% of IT leaders Dell surveyed cited cost as a key reason for deploying their GenAI workload on-premises.2
Maybe your GenAI journey starts on premises but, once you've tested and trained your app, checking it for performance, bias, and other issues, you decide to also launch it in a public cloud. Eighty-two percent of IT decision-makers indicated they were most interested in taking an on-premises or hybrid approach to building their GenAI solution, according to a Dell Generative AI Pulse survey.3
Hybrid cloud models naturally provide more choices. In that vein, did you know there are other ways to enjoy a cloud experience in-house? You can build a bridge between your on-premises estate and public clouds to get the best of both operating environments.
Dell APEX Cloud Platforms enable you to enjoy the agility and flexibility of cloud services, with the security, performance, and control of an on-premises solution. These platforms, which include Microsoft Azure, VMware, and Red Hat OpenShift, provide a unified cloud experience, allowing you to procure more infrastructure as required while enabling optimal deployment of GenAI apps, such as digital assistants and other tools that surface business information.
That way you can spend more of your time and energy accelerating your GenAI journey to achieve business outcomes that will help you drive digital transformation.
Learn more about Dell APEX Cloud Platforms.
Unprecedented £225m investment to create UK's most powerful … – University of Bristol
Details of a £225m investment from the Government to create the UK's fastest supercomputer at the University of Bristol have been announced today [1 November 2023].
The funding injection, part of a £300m package to create a new national Artificial Intelligence Research Resource (AIRR) for the country announced at the government's AI Safety Summit at Bletchley Park, will make the UK a world leader in AI.
To be known as Isambard-AI, it will be 10 times more powerful than the UK's current fastest supercomputer and among the most powerful in the world when it opens at the National Composites Centre (NCC) in the summer of 2024.
Hewlett Packard Enterprise (HPE)* will build and deliver the new system with the next generation HPE Cray EX supercomputers and over 5,000 state-of-the-art NVIDIA GH200 superchips. The advanced technologies and design will allow Isambard-AI to reach up to 200 quadrillion calculations per second.
The new Bristol facility will be used by a wide range of organisations from across the UK to harness the power of AI, which is already the main driver of emerging technologies such as training large language models (LLMs), big data and robotics. The new supercomputing facility will also play a vital role in important areas such as accelerating automated drug discovery and climate research.
Isambard-AI will connect with a new supercomputer cluster at the University of Cambridge, called Dawn, which is being developed to offer additional capacity as part of the new national AIRR.
Professor Simon McIntosh-Smith, Director of the Isambard National Research Facility at the University of Bristol, said: "Isambard-AI represents a huge leap forward for AI computational power in the UK. Today Isambard-AI would rank within the top 10 fastest supercomputers in the world and, when in operation later in 2024, it will be one of the most powerful AI systems for open science anywhere.
"It's immensely exciting to be at the forefront of the AI revolution and to partner with industry leaders HPE and NVIDIA to rapidly build and deploy large-scale research computing infrastructure to create one of the most powerful supercomputers in the world. Isambard-AI will offer capacity never seen before in the UK for researchers and industry to harness the huge potential of AI in fields such as robotics, big data, climate research and drug discovery."
Justin Hotard, Executive Vice President and General Manager, HPC, AI & Labs at HPE, said: "Today's announcement of the UK's major investment in AI supercomputing underscores its commitment to taking a global leadership position in AI. The Isambard-AI system will harness world-leading supercomputing, including high-performance networking co-developed at HPE's Bristol labs, to provide the performance and scale required for compute-intensive AI projects. We are proud to partner with the UK Government and the University of Bristol to give UK researchers and industry access to Europe's largest AI system for open science."
Ian Buck, Vice President of Hyperscale and HPC at NVIDIA, said: "In building one of the world's fastest AI supercomputers, the UK is demonstrating the importance for nations to create their own infrastructure. Isambard-AI will provide researchers with the same state-of-the-art AI and HPC compute resources used by the world's leading AI pioneers, enabling the UK to introduce the next wave of AI and scientific breakthroughs."
The government's new Frontier AI Taskforce will have priority access to support its work to mitigate the risks posed by the most advanced forms of AI, including national security from the development of bioweapons and cyberattacks. The resource will also support the work of the AI Safety Institute, as it develops a programme of research looking at the safety of frontier AI models and supports government policy with this analysis.
Science, Innovation and Technology Secretary Michelle Donelan said: "Frontier AI models are becoming exponentially more powerful. At our AI Safety Summit in Bletchley Park, we have made it clear that Britain is grasping the opportunity to lead the world in adopting this technology safely so we can put it to work and lead healthier, easier and longer lives.
"This means giving Britain's leading researchers and scientific talent access to the tools they need to delve into how this complicated technology works. That is why we are investing in building UK's supercomputers, making sure we cement our place as a world-leader in AI safety."
Isambard-AI will be hosted in a self-cooled, self-contained data centre, using the HPE Performance Optimized Data Center (POD), and will be situated at the National Composites Centre (NCC), based at the Bristol and Bath Science Park. NCC is one of seven research centres across the UK that form the High Value Manufacturing Catapult, helping to turn great ideas into reality by providing access to world-class research and development facilities and expertise that would otherwise be out of reach for many businesses in the UK.
Richard Oldfield, Chief Executive Officer, National Composites Centre, said: "We are immensely proud to be hosting Isambard-AI at the National Composites Centre. This underpins our vision of bringing together world-class innovators, academic researchers and cutting-edge technology to solve some of the world's most complex engineering challenges.
"Building on our expertise and state-of-the-art capability in accelerating industrial transformation from fundamental research to industrial application, we're excited to be the home of the UK's national AI supercomputing facility."
Phase one of the system, available in March 2024, will utilise Isambard 3, a TOP500-class supercomputer service for AI and high-performance computing (HPC). This is due to be installed at the NCC at the start of 2024 and was developed in collaboration with the GW4 group of universities, an alliance made up of the Universities of Bath, Bristol, Cardiff and Exeter. It will offer early access to UK-based scientists, researchers and developers so their research can get underway as soon as possible.
GW4 Alliance Director, Dr Joanna Jenkinson MBE, said: "Isambard-AI builds on the success of the GW4 Isambard Supercomputer, which started as a GW4 collaborative research community, and was the world's first Arm-based computer to go into production use. Through GW4 Isambard, we have already seen progress made across research into vital areas such as Parkinson's disease, osteoporosis and supporting patients with heart failure.
"I am delighted the strength and innovation of GW4 Isambard, and the importance of supporting academic and industrial collaboration, has helped to enable this new national Artificial Intelligence Research Resource, which will supercharge the nation's digital growth and AI capabilities."
Isambard-AI will be built using the HPE Cray EX supercomputer, a next-generation platform architected to support unprecedented performance and scale, and consist of 5,448 NVIDIA GH200 Grace Hopper Superchips which combine NVIDIA's Arm-based Grace CPU with a Hopper-based GPU optimized for power efficiency and giant-scale AI, along with the latest HPE Slingshot 11 interconnect, and nearly 25 petabytes of storage using the Cray Clusterstor E1000 optimised for AI workflows.
When in production, Isambard-AI will achieve well over 200 PetaFLOP/s using the Top500's Linpack benchmark, while also achieving over 21 ExaFLOP/s of AI performance to accelerate AI training for large-scale AI, such as large language models.
This represents a massive 10 times improvement over the next fastest system in the UK when Isambard-AI is live, and will enable Isambard-AI to perform over 21 quintillion AI-optimised floating point operations per second.
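As a quick sanity check on the units, "21 ExaFLOP/s" and "over 21 quintillion AI-optimised floating point operations per second" describe the same figure. A minimal sketch (values taken from the article; the comparison against the Linpack number is our own arithmetic):

```python
EXA = 10 ** 18  # one quintillion

ai_exaflops = 21                    # quoted AI performance, ExaFLOP/s
ops_per_second = ai_exaflops * EXA  # 21 quintillion operations per second

linpack_petaflops = 200             # quoted Linpack (FP64) figure, PetaFLOP/s
# The AI figure uses lower-precision, AI-optimised arithmetic, which is why
# it sits roughly two orders of magnitude above the Linpack number.
ratio = ops_per_second / (linpack_petaflops * 10 ** 15)  # about 105x
```

The two benchmarks measure different precisions, so the 105x gap reflects arithmetic format, not a contradiction in the specs.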
Isambard-AI will also feature sophisticated direct liquid-cooling capabilities as part of the HPE Cray EX supercomputer design to improve energy efficiency and overall carbon footprint. HPE is also collaborating with the University of Bristol on a highly energy-efficient heat re-use model, extracting waste heat from the Isambard-AI system to heat local buildings, supporting the Net Zero carbon efficiency targets for 2030/2040 mandated by the UK Government.
Bristol is one of the top UK universities for AI research and scientific computing. It already plays host to cutting-edge computing technology, including the previously announced Isambard 3 supercomputer, while being home to the UKRI Centre for Doctoral Training in Interactive Artificial Intelligence and the newly announced UKRI AI Centre for Doctoral Training in Practice-Oriented Artificial Intelligence.
About Hewlett Packard Enterprise
Hewlett Packard Enterprise (NYSE: HPE) is the global edge-to-cloud company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. Built on decades of reimagining the future and innovating to advance the way people live and work, HPE delivers unique, open and intelligent technology solutions as a service. With offerings spanning Cloud Services, Compute, High Performance Computing & AI, Intelligent Edge, Software, and Storage, HPE provides a consistent experience across all clouds and edges, helping customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: http://www.hpe.com.
*Please note, the relationship between the University of Bristol and HPE remains subject to contract.
AFP says it has blocked 10 websites hosting child abuse content – iTWire
The Australian Federal Police says it has blocked 10 websites with domains registered in the Cocos Keeling Islands which it claims were distributing child abuse images and videos.
In a statement on Wednesday, the AFP said it had blocked access on 10 October and since then recorded tens of thousands of attempted visits to these sites.
It said the IP addresses and identities of these individuals would be referred to the law enforcement authorities in their respective countries.
Investigations that led to the block began in 2022, the AFP said, adding that the Internet Watch Foundation, an international not-for-profit agency combatting child abuse, had flagged the Cocos Keeling Islands TLD as one of the top 10 worldwide associated with distributing child abuse material.
"These videos and images show real children being exploited, physically harmed and traumatised for the perverse gratification of offenders. Each time these files are shared it perpetuates the harm caused to these children," he said.
The image posted on the blocked sites. Courtesy AFP
"Australian domain names should not, and will not, be a safe haven for this kind of horrible content."
"The AFP will use every tool in its arsenal, from website takedowns to seizing servers and arresting those who provide, access or distribute the material, to help keep children safe."
"We will be working to identify the people responsible for creating these sites as well as those who accessed them."
Australian eSafety commissioner Julie Inman Grant said child sexual exploitation material was present in all corners of the Internet and Australians should report it to eSafety if they encountered such material.
"With our partners in law enforcement, we continue to chip away at one of the most distressing and abhorrent violations of human rights: the sexual abuse of children," she said.
This challenge is of epic, global proportions, and I urge all Australians to help eradicate this content by reporting it to eSafety.gov.au. You can report anonymously and should only report the URL, rather than store or screenshot the illegal content."
Serverless vs. containers: Which one for microservices apps? – TechTarget
When you're designing a microservices app, the natural inclination is to deploy microservices using containers. Containers have become the de facto solution for hosting microservices, thanks to their scalability and ability to host independent services as part of a cohesive application architecture.
But sometimes it makes more sense to run at least some of your microservices as serverless functions. Let's look at some factors that can help you choose between containers and serverless when deploying microservices.
A serverless function is a deployment model that allows developers to trigger the execution of software on demand. Using a serverless platform, such as AWS Lambda or Azure Functions, teams can upload code they want to run without having to provision a host server. This can also be done with a framework such as Knative, which makes it possible to run serverless functions inside a Kubernetes cluster.
The serverless platform configures the conditions necessary to deploy code into production. From there, the serverless platform automatically executes the code whenever the preconfigured conditions are met. The code runs to completion after each invocation, which then stops and waits to be called again by the serverless platform. In most cases, serverless functions are priced based on their execution time, meaning you pay only for the time they spend running.
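To make the model concrete, here is a minimal AWS Lambda-style handler in Python. The function name and event payload field are illustrative assumptions, not taken from the article; the shape (an entry point the platform invokes per event, which runs to completion and returns) is the part being demonstrated:

```python
import json

def handler(event, context):
    """Entry point the serverless platform invokes on each trigger.

    The platform passes the triggering event (e.g. an HTTP request or a
    queue message); the function runs to completion, returns a response,
    and then stops until the next invocation.
    """
    name = event.get("name", "world")  # illustrative payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

There is no server to provision or keep warm here: the platform supplies `event` and `context` and bills only for the time the function spends executing.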
With a container, individual applications or microservices run in relative isolation from the host server. To run software inside containers, developers must build a container image based on the code they want to execute, configure a hosting environment (using a technology such as Kubernetes) and deploy containers into it with the container image. Once deployed, a container runs continuously until it is shut down by either an engineer who spins it down manually or an orchestration service that terminates the container for you.
The pricing models for containers vary widely depending on which type of hosting model you use. In general, running containers requires you to pay for infrastructure resources on an ongoing basis. So, even if your containers are not actively handling requests, you still pay for the resources required to host them because the containers run continuously rather than on demand.
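The pricing difference can be made concrete with some back-of-the-envelope arithmetic. All rates below are illustrative assumptions, not real provider prices; the point is the shape of the two cost curves, not the dollar amounts:

```python
# Illustrative, assumed rates -- check your provider's current pricing.
SERVERLESS_RATE_PER_GB_SECOND = 0.0000166667  # $ per GB-second of execution
CONTAINER_RATE_PER_HOUR = 0.04                # $ per hour for an always-on host

def serverless_monthly_cost(invocations, seconds_per_call, memory_gb):
    """Pay only for time actually spent executing."""
    gb_seconds = invocations * seconds_per_call * memory_gb
    return gb_seconds * SERVERLESS_RATE_PER_GB_SECOND

def container_monthly_cost(hours_per_month=730):
    """Pay for the host continuously, whether or not it handles requests."""
    return hours_per_month * CONTAINER_RATE_PER_HOUR

# A low-traffic microservice: 100,000 calls/month, 200 ms each, 0.5 GB memory.
fn_cost = serverless_monthly_cost(100_000, 0.2, 0.5)  # well under a dollar
ct_cost = container_monthly_cost()                    # tens of dollars
```

At low or bursty traffic the serverless option is far cheaper; at high, steady traffic the comparison can flip in favour of the always-on container, which is exactly the trade-off the sections below explore.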
Thus, the key differences between serverless functions and containers include the following:

- Execution model: serverless functions run on demand and stop after each invocation; containers run continuously until they are shut down.
- Provisioning: a serverless platform configures the hosting environment for you; containers require you to build images and configure an orchestrated host environment.
- Pricing: serverless functions are typically billed per execution time; containers incur ongoing infrastructure costs whether or not they are handling requests.
Now let's look at what those differences mean for hosting microservices.
If you choose to deploy one or more microservices using serverless, those microservices will be active within your application only when they are invoked by your serverless platform. Unlike containerized microservices, they won't be switched on, ready to receive requests from other microservices.
In addition, the deployment and update processes for functions-based microservices are different. Rather than pushing out a new container image inside an orchestrator to update your microservice, you'd have to deploy a new version of the serverless function.
This can be a more complicated and manual process than updating a containerized microservice; serverless hosting platforms often don't integrate as tightly with CI/CD pipelines, which means you may not be able to automate as many of the tasks required to move updated microservice code from your development environment into a production serverless platform.
It's also likely to be more difficult to integrate serverless microservices into a service mesh, if you use one to help manage your microservices. Service meshes typically offer limited support for proprietary serverless platforms such as Lambda (support for Knative-based functions tends to be stronger) because they are primarily designed to manage communications between containerized microservices. Configuring a service mesh to work with serverless functions can therefore be considerably more complicated.
Whether the differences between serverless and containerized microservices create an advantage or disadvantage depends on what you plan to do with your microservices.
A microservice could be a good candidate for deployment as a serverless function under certain conditions: for example, if it is invoked infrequently or in unpredictable bursts, if each invocation is short-lived, and if it doesn't need to hold state between requests.
Be aware that you can use serverless functions alongside containers within the same microservices app: some of your microservices can run in serverless functions while you deploy others with containers.
This approach requires more effort. You'll have to manage both serverless functions and containers inside the same app. Even so, it gives you access to the best of both worlds, allowing you to deploy some microservices using serverless functions while sticking with containers for those that aren't a good fit for the serverless model.
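One way to picture the mixed approach: the same business logic can be wrapped either as a serverless-style handler or as a long-running HTTP service, and both packagings can coexist in one application. The function names, event shape, and discount rule below are all illustrative assumptions:

```python
import json
import urllib.parse

def apply_discount(total: float, code: str) -> float:
    """Shared business logic, independent of how it is hosted."""
    return round(total * 0.9, 2) if code == "SAVE10" else total

# Packaging 1: a serverless-style handler, invoked once per event.
def discount_handler(event, context=None):
    result = apply_discount(event["total"], event.get("code", ""))
    return {"statusCode": 200, "body": json.dumps({"total": result})}

# Packaging 2: a WSGI app for a continuously running container.
def discount_app(environ, start_response):
    params = urllib.parse.parse_qs(environ.get("QUERY_STRING", ""))
    total = float(params.get("total", ["0"])[0])
    code = params.get("code", [""])[0]
    body = json.dumps({"total": apply_discount(total, code)}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]
```

Keeping the core logic separate from the hosting wrapper is what makes it practical to move an individual microservice between the two models later.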
Serverless vs. containers: Which one for microservices apps? - TechTarget
Cloud Migration vs. Cloud Transformation: Similarities and Differences – ITPro Today
Cloud migration and cloud transformation might sound like pretty similar processes, and they are: both involve moving workloads into the cloud and taking advantage of cloud services in ways that benefit your business.
But like many seemingly interchangeable technical terms, cloud migration and cloud transformation refer to distinct types of operations. Understanding the differences is important for deciding which approach to take when increasing your organization's use of the cloud.
Keep reading for a look at the similarities and differences between cloud migration and cloud transformation, along with tips on which type of strategy to pursue when.
Cloud migration is the process of moving workloads from an on-premises environment to the cloud. Typically, organizations undertake cloud migration when they decide that they can increase the reliability, scalability, or cost-efficiency of their workloads by hosting them in a cloud platform rather than using their own infrastructure.
Cloud migration often involves some level of workload reconfiguration; for example, you might migrate to a different operating system when you move on-prem virtual machines to the cloud so that you can use a type of OS that your cloud provider actively supports.
However, with cloud migration, workload reconfiguration is typically limited. The goal of cloud migration is to replicate on-prem workloads within the cloud, not to rebuild the workloads from the ground up.
Cloud transformation is the process of making your business cloud-centric by taking full advantage of modern cloud services.
Thus, when you perform a cloud transformation, you don't just move on-prem workloads into the cloud. You redesign and transform (hence the term "cloud transformation") those workloads to optimize them for the cloud. You might refactor a monolithic application to run as microservices so that it can operate more efficiently in distributed cloud environments, for example.
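A deliberately toy sketch of that kind of refactoring: the monolith's in-process functions become separately deployable services, each exposing its own entry point. The class and function names here are hypothetical:

```python
# Before: a monolith handles everything in one deployable unit.
class MonolithStore:
    def checkout(self, items):
        total = sum(price for _, price in items)  # pricing logic
        invoice = f"INVOICE: {total:.2f}"         # billing logic
        return invoice

# After: each concern becomes its own service that can be deployed,
# scaled, and updated independently in the cloud.
def pricing_service(items):
    return sum(price for _, price in items)

def billing_service(total):
    return f"INVOICE: {total:.2f}"

# The services now communicate through explicit interfaces (in a real
# system, over the network) instead of in-process calls.
order = [("book", 12.50), ("pen", 1.25)]
print(billing_service(pricing_service(order)))  # INVOICE: 13.75
```

The behavior is unchanged; what changes is that each piece can now be scaled and updated on its own, which is the operational payoff transformation aims for.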
At the same time, from a strategic perspective, cloud transformation means making the cloud the default approach to meeting your business's technological requirements. In other words, you don't just turn to the cloud when it's convenient while keeping some workloads on-prem; you move as completely into the cloud as possible.
Cloud migration and transformation are similar in that they both involve moving workloads into the cloud to take advantage of the scalability, flexibility, and efficiency of cloud environments. Both processes also have the result of making businesses more dependent on third-party cloud platforms to meet their infrastructure needs.
And they both require organizations to rethink their approaches to processes like security and observability in order to address the unique challenges of the cloud.
But in other ways, cloud migration and cloud transformation clearly differ: migration replicates existing workloads in the cloud with limited changes, while transformation redesigns those workloads and shifts the organization's entire strategy toward the cloud.
In short, cloud migration is a simpler process than cloud transformation. But migration may also reduce your ability to capitalize fully on the cloud because, unlike transformation, it doesn't entail a complete shift in thinking or extensive reconfiguration of workloads.
If you want to move workloads into the cloud as quickly as possible to reduce the burden of managing on-prem infrastructure and workloads, a cloud migration will allow you to do that. But if you're seeking a deeper level of change, one that allows your business to optimize its use of the cloud, a cloud transformation is the path to get there.
Upper Echelon adds IT managed services offering – delawarebusinessnow.com
North Wilmington-based Upper Echelon Technology Group (UET), a managed services company, has partnered with Illinois-based High Wire Networks and its Overwatch Managed Cybersecurity team to expand its portfolio of customer cybersecurity solutions.
In addition to cybersecurity services, Upper Echelon offers backup and disaster recovery solutions, cloud hosting, endpoint management, IT support, network security, Microsoft 365, and Virtual Chief Information Officer services.
All UET customers will benefit from the High Wire Networks Overwatch Managed Cybersecurity platform, which includes cybersecurity threat prevention, detection, and response capabilities, layered with automation for faster remediation and higher-fidelity alerts.
The announcement was made by Todd Thanhauser, Upper Echelon founder & CEO, and Mark Porter, CEO at High Wire Networks.
Citing the increase in cybercrimes, Thanhauser added: "Effective cybersecurity services require vigilance and multiple layers of defense. We can now offer customers peace of mind and, more importantly, allow them to focus on business, not technology."
Upper Echelon Technology Group is a cybersecurity-focused managed services provider with offices in Pennsylvania, Delaware, and Florida.
High Wire Networks has been delivering technology and managed cybersecurity services for more than 23 years.
NetApp CEO: Hybrid cloud will be the only way to capitalize on … – ITPro
An increased focus on generative AI and hybrid cloud strategies has come at a perfect time for organizations globally, according to NetApp CEO George Kurian.
Speaking to ITPro at the NetApp Insight conference in Las Vegas, Kurian said that hybrid cloud will enable businesses exploring generative AI tools to fine-tune their strategy before going to market and to provide more robust, safe, and consumer-ready services.
Businesses flirting with the prospect of generative AI-powered tools, services, or platforms will, in the first instance, naturally gravitate toward public cloud, he said.
However, long-term, a hybrid cloud approach will likely be the go-to model for hosting generative AI workloads due to the flexibility afforded by the practice.
"I think the hybrid cloud is a classic AI business use case," he told ITPro. "Customers will want to try proofs of concept on the public cloud, where all the modern tools are available, and once they understand what makes sense, they can scale it in their own data center."
This will depend entirely on the unique circumstances of individual businesses, Kurian stressed. Larger organizations operating in critical industries will likely focus heavily on on-prem data architectures to ensure safety and security.
Smaller businesses, however, could unlock benefits with a blended hybrid or multi-cloud approach.
"I'm sure there will be different models for different classes of customers," he said. "For those that are large and are regulated, they'll do everything on-prem. For those that are smaller and more agile, they'll probably do everything on cloud and get a blend.
"It's easier, for example, in the cloud to collaborate across organizations. So if you are a research institution that's collaborating with another on an AI toolchain, it might be easier to do it on the public cloud rather than your own data center environments."
Kurian's comments on the confluence of generative AI and hybrid cloud practices aren't an entirely unique perspective.
VMware, for example, recently highlighted a growing focus on supporting organizations to harness multi-cloud capabilities to drive generative AI innovation and development.
NetApp also confirmed that its expanded AFF C-Series capacity flash storage line will now be integrated with its ONTAP AI architecture in a bid to further support customers developing AI tools in hybrid cloud environments.
The move is part of a concerted effort to help drive AI development among customers and leverage the flexibility advantages of hybrid cloud.
Businesses globally are coming to realize the benefits of hybrid cloud and the ability to shift applications and workloads between the cloud and data center environments.
In 2022, 48% of technology leaders said they were mostly hybrid, compared with just 40% in 2020, according to a study by Google, which underlines a gradual yet steady shift in focus.