
Promoting customer choice: AWS takes another step to lower costs for customers changing IT providers | Amazon Web … – AWS Blog

Changing IT providers has always required time, effort, and money, but cloud computing has made that process easier than ever. Before cloud services, switching was often prohibitively difficult and expensive: over a multi-year process, companies would make up-front investments in new hardware and rewrite software to conform to their new provider's proprietary operating system.

At AWS, we design cloud services to give customers the freedom to choose technology that best suits their needs, and our commitment to interoperability is a key reason customers choose AWS in the first place. Our open APIs and Software Development Kits (SDKs), services such as Amazon ECS and Amazon EKS Anywhere, as well as our hybrid infrastructure services like AWS Outposts and AWS Snow, allow customers and third parties to build compatible software and solutions. We have been at the forefront of developing technical solutions that allow customers to run their applications on AWS and still connect to other cloud providers, or on-premises, for any application dependencies.

We also interconnect directly with many other networks, including those of other cloud providers, to help customers enjoy a reliable data transfer experience across different providers and networks. If a customer decides to move to another IT provider, we want to remove barriers which make it harder to do so, because our focus is on building long-term customer trust and removing these barriers makes AWS attractive to new and returning customers.

Our customers make hundreds of millions of data transfers each day, and any time a customer transfers data using our infrastructure, there are inherent costs to enabling this. AWS has built a best-in-class global network by investing in networking solutions such as custom semiconductors, equipment and software, and millions of miles of terrestrial and undersea cable. These investments improve transfer speeds, reduce lag, and increase security and reliability across the entire AWS global network.

Just as AWS has innovated to reduce the cost of our different services, we work to reduce our data transfer costs and pass these cost savings on to our customers. Since 2021, over 90% of customers have paid nothing in data transfer fees out of AWS because we provide them with 100 gigabytes per month for free, to use for any purpose. We're now taking this a step further for the small percentage of customers not captured by this free tier. Beginning today, customers globally are entitled to free data transfers out to the internet if they want to move to another IT provider. To get started, customers should contact AWS Customer Support.

While this change may help the small percentage of customers outside our free data transfer tier, the biggest barrier to changing cloud providers continues to be unfair software licensing. Some IT providers impose licensing restrictions on their software that make it financially unworkable for their customers to choose a cloud provider other than them. In some cases, it is impossible for customers to run the software on other popular cloud environments. This issue must be solved to promote customer choice, and like many others across the industry, we believe embracing the Principles for Fair Software Licensing is the best route to achieving this.

AWS offers industry-leading cloud services to millions of customers worldwide, and we will continue to innovate and help customers deploy in the cloud, grow their cloud footprint, and move data. Review these FAQs or contact AWS Customer Support for more information if you decide to move data off AWS.

The rest is here:
Promoting customer choice: AWS takes another step to lower costs for customers changing IT providers | Amazon Web ... - AWS Blog

Read More..

Healthcare Cloud Computing Market size is set to grow by USD 42.21 billion from 2022 to 2027, North America is … – PR Newswire

NEW YORK, March 6, 2024 /PRNewswire/ -- According to Technavio, the global healthcare cloud computing market size is projected to grow by USD 42.21 billion from 2022 to 2027. The market is estimated to grow at a CAGR of 20.55% during the forecast period; however, the growth momentum will decelerate. By geography, the global healthcare cloud computing market is segmented into North America, Europe, APAC, South America, and Middle East and Africa. The report provides actionable insights and estimates the contribution of all regions to the growth of the global healthcare cloud computing market. North America is estimated to account for 39% of the growth of the global healthcare cloud computing market during the forecast period. The US, Canada, and Mexico are the major adopters of healthcare cloud computing solutions in the region. Many healthcare institutions in the US are adopting cloud computing. For instance, LifePoint Health signed a multiyear strategic partnership with Google LLC to implement Google Cloud's healthcare data engine in its hospitals across the US. Such collaborations are expected to drive the growth of the market in the region during the forecast period.

For more insights on the historic period (2017 to 2021) and forecast market size (2023 to 2027)

Request a sample report

Report Coverage

Page number: 169
Base year: 2022
Historical year: 2017-2021
Forecast period: 2023-2027
Growth momentum & CAGR: Decelerate at a CAGR of 20.55%
Market growth 2023-2027: USD 42.21 billion
Market structure: Fragmented
YoY growth (%): 23.08
Regional analysis: North America, Europe, APAC, South America, and Middle East and Africa
Performing market contribution: North America at 39%
Key countries: US, Canada, UK, Germany, and France
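As a quick sanity check on those figures, the base-year market size implied by the report can be backed out from the stated growth and CAGR. The sketch below assumes the 20.55% CAGR compounds annually over the five years from 2022 to 2027; that is an assumption about the underlying model, not a detail taken from the report.

```python
# Back out the implied 2022 base-year market size from the reported
# figures: USD 42.21 billion of incremental growth at a 20.55% CAGR.
# Assumes annual compounding over 2022-2027; Technavio's model may differ.
growth_usd_bn = 42.21
cagr = 0.2055
years = 5

# end = base * (1 + cagr) ** years, so growth = base * ((1 + cagr) ** years - 1)
base = growth_usd_bn / ((1 + cagr) ** years - 1)
print(f"Implied 2022 base size: USD {base:.1f} bn")                  # ~27.3
print(f"Implied 2027 size:      USD {base + growth_usd_bn:.1f} bn")  # ~69.5
```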

Segment Overview: This report extensively covers market segmentation by product (SaaS, IaaS, and PaaS), component (hardware and services), and geography (APAC, North America, Europe, Middle East and Africa, and South America).

Insights on the market contribution of various segments, by country and region, with historic (2017 to 2021) and forecast (2023 to 2027) market size

Download a Sample Report

The increasing number of cloud vendors is driving healthcare cloud computing market growth.

Several vendors have entered the market to provide services such as EHR and EMR facilities. For instance, Alibaba has launched ET Medical Brain, an AI-assisted solution suite that acts as a virtual assistant in medical imaging, drug development, and hospital management. The market is dominated by a few players. However, other companies are providing cloud services by partnering with existing vendors. For instance, Rackspace offers cloud services in collaboration with Amazon.com. Thus, with a rise in the number of cloud service providers, the market is expected to grow rapidly during the forecast period.

Insights on market drivers, trends, and challenges, the historic period (2017 to 2021), and the forecast period (2023 to 2027)

Request a sample report!

Analyst Review

In recent years, the Healthcare Cloud Computing Market has witnessed exponential growth, reshaping the landscape of healthcare delivery. Cloud computing, characterized by its scalability, flexibility, and cost-effectiveness, has become a cornerstone of modern healthcare infrastructure. With cloud computing solutions, healthcare providers can store, manage, and analyze vast amounts of patient data securely and efficiently.

One of the primary drivers behind the surge in the Healthcare Cloud Computing Market is the increasing adoption of electronic health records (EHRs) and the need for interoperability among healthcare systems. Cloud-based EHR solutions offer healthcare organizations the ability to access patient information anytime, anywhere, fostering seamless collaboration among care teams. This enhances patient care while streamlining administrative processes.

Security is paramount in healthcare, and cloud computing offers advanced security features such as encryption, access controls, and regular data backups. These measures ensure data security and compliance with stringent healthcare regulations like HIPAA.

Moreover, cloud computing solutions enable healthcare providers to leverage advanced analytics tools for deriving meaningful insights from patient data. By harnessing the power of analytics, healthcare organizations can improve clinical outcomes, identify trends, and personalize patient care strategies.

The Healthcare Cloud Computing Market caters to a wide range of stakeholders, including hospitals, clinics, research institutions, and pharmaceutical companies. These entities rely on cloud computing to enhance operational efficiency, reduce costs, and drive innovation in healthcare delivery.

The COVID-19 pandemic further accelerated the adoption of cloud computing in healthcare. With the sudden surge in demand for telehealth services, healthcare providers turned to cloud-based platforms to deliver remote care effectively. This trend is expected to persist post-pandemic, driving continued growth in the Healthcare Cloud Computing Market.

Looking ahead, the Healthcare Cloud Computing Market is poised for further expansion with the emergence of technologies like artificial intelligence (AI) and the Internet of Medical Things (IoMT). These technologies, coupled with cloud computing, hold the potential to revolutionize healthcare delivery, offering predictive analytics, remote patient monitoring, and personalized treatment plans.

In conclusion, the Healthcare Cloud Computing Market represents a transformative force in the healthcare industry, offering unprecedented opportunities for innovation and efficiency. As healthcare organizations increasingly embrace cloud computing solutions, they are poised to deliver higher quality care, improve patient outcomes, and drive cost savings across the healthcare ecosystem.

Request a sample report!


About Us: Technavio is a leading global technology research and advisory company. Their research and analysis focus on emerging market trends and provide actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions. With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies, spanning 50 countries. Their client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.

Contact
Technavio Research
Jesse Maida, Media & Marketing Executive
US: +1 844 364 1100
UK: +44 203 893 3200
Email: [emailprotected]
Website: http://www.technavio.com/

SOURCE Technavio

Read the original:
Healthcare Cloud Computing Market size is set to grow by USD 42.21 billion from 2022 to 2027, North America is ... - PR Newswire

Read More..

Misconfigured cloud servers subjected to new Linux malware attack – SC Media

Misconfigured Confluence, Apache Hadoop, Redis, and Docker servers have been targeted by a new cryptojacking campaign distributing Linux malware, SecurityWeek reports.

Vulnerable internet-exposed cloud servers are being identified and exploited through four novel Golang payloads that eventually lead to cryptominer deployment, according to a Cado Security report. Intrusions targeting Confluence servers involved exploitation of the critical remote code execution vulnerability tracked as CVE-2022-26134. Attacks aimed at Docker instances, on the other hand, involved the creation of a container for an executable that would later enable command-and-control communication and payload retrieval. Such an attack illustrates the breadth of initial access methods used by Linux and cloud malware, noted Cado Security researchers. "It's clear that attackers are investing significant time into understanding the types of web-facing services deployed in cloud environments, keeping abreast of reported vulnerabilities in those services and using this knowledge to gain a foothold in target environments," researchers added.
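A practical takeaway from the Docker arm of the campaign is to verify that no Docker Engine API is reachable from the internet. The sketch below is a minimal defensive probe, assuming the conventional unencrypted API port (2375) and the standard /version endpoint; the host address is a hypothetical placeholder for a machine in your own inventory.

```python
# Probe whether a host exposes the unauthenticated Docker Engine API,
# the kind of misconfiguration the campaign above exploits. Port 2375
# is the conventional plaintext default; adjust for your environment.
import json
import socket
import urllib.request

def docker_api_exposed(host: str, port: int = 2375, timeout: float = 3.0) -> bool:
    # First check that the TCP port is reachable at all.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return False
    # Then hit the version endpoint; an unauthenticated JSON reply
    # means anyone on the internet can create containers here.
    try:
        with urllib.request.urlopen(
            f"http://{host}:{port}/version", timeout=timeout
        ) as resp:
            return "Version" in json.load(resp)
    except (OSError, ValueError):
        return False

if __name__ == "__main__":
    host = "203.0.113.10"  # hypothetical address from your own inventory
    if docker_api_exposed(host):
        print(f"{host}: Docker API reachable without auth -- lock it down")
    else:
        print(f"{host}: no exposed Docker API detected")
```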

Read more:
Misconfigured cloud servers subjected to new Linux malware attack - SC Media

Read More..

What is Nvidia Omniverse? How can it affect your business? – TechTarget

What is Nvidia Omniverse?

Nvidia Omniverse is a computing platform built to enhance digital design and development by integrating 3D design, spatial computing and physics-based workflows across Nvidia tools, third-party apps and artificial intelligence (AI) services. Created specifically for developing applications in the metaverse, the real-time platform is used for building digital twins of products, factories, warehouses and infrastructure. It can also streamline the creation of 3D-related media for entertainment and product demonstrations, as well as enterprise media content rendered on computers, phones and extended reality (XR) devices.

The platform, launched in 2022, is available as a cloud service or a private instance running on premises. Additionally, it supports plugins and integrations for deploying Omniverse content, applications and autonomous control systems across cars, robots, building controls, equipment and medical devices.

Nvidia Omniverse helps streamline workflows for designing, simulating and optimizing equipment, products and processes across different roles and expertise for virtual design. For example, Mercedes-Benz and BMW are using it to improve their product and factory designs. It is also helping companies optimize mobile network deployment, warehouse layouts, building construction and smart city deployments.

The platform can also serve as an integration tier for workflows that span tools from different vendors. This can reduce the integration challenges in crafting point-to-point integrations for specific workflows. For example, teams could use design tools from one vendor, simulation tools from another and rendering engines from a third to streamline virtual development efforts.

Nvidia Omniverse, through its Omniverse Replicator and Isaac Sim components, can also help generate synthetic data for testing various autonomous systems, AI algorithms and robot control systems. This function can streamline the development of more capable autonomous cars, warehouse materials handling equipment and robotic controls. The final control software can be sent to various target controllers, including Nvidia-specific embedded hardware or third-party controllers supporting standards such as Unified Robot Description Format or Robot Operating System.

Nvidia Omniverse also supports more consumer-facing development, such as generating avatars, asking questions about physical products and visualizing furniture layouts in a 3D representation of rooms.

The Omniverse platform supports various components across physical representations, core platform elements and extensible integration capabilities.

Omniverse supports various specifications and standards that simplify the exchange of 3D-related data across multiple tools. Nvidia is working with the Universal Scene Description (USD) community to extend the specification to support Material Definition Language (MDL) and PhysX capabilities.
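For readers unfamiliar with USD, the appeal is that the same file can be authored and consumed by any tool that speaks the format. The sketch below uses Pixar's open source Python bindings (installable as the usd-core package) to author a tiny scene; it illustrates the interchange format in general and is not an Omniverse-specific API.

```python
# Author a minimal USD scene with Pixar's open source Python bindings
# (pip install usd-core). Any USD-aware tool, Omniverse included, can
# open the resulting file -- which is the point of the format.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("factory_cell.usda")

# A transform prim as the scene root, with a sphere placed under it.
root = UsdGeom.Xform.Define(stage, "/FactoryCell")
sensor = UsdGeom.Sphere.Define(stage, "/FactoryCell/Sensor")
sensor.GetRadiusAttr().Set(0.25)
UsdGeom.XformCommonAPI(sensor.GetPrim()).SetTranslate(Gf.Vec3d(1.0, 0.0, 0.5))

stage.SetDefaultPrim(root.GetPrim())
stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())  # human-readable .usda text
```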

The core Nvidia Omniverse platform includes the following elements for storing, connecting, simulating, rendering and developing apps:

The essential value of the Nvidia Omniverse platform comes from its support of a rich collection of Nvidia, third-party, and open source tools and formats. These include plugins, extensions or services, such as the following:

Nvidia Omniverse helps streamline the development lifecycle of physical products and virtual experiences across various roles and expertise. The platform can help businesses manage the complexity of building new products, designing more efficient facilities and creating more engaging user experiences. Here are some specific ways the platform is used by businesses:

Nvidia Omniverse is currently the most comprehensive platform for integrating 3D and physics-based workflows across various cloud services, third-party applications and rendering engines. The platform's tools and supporting services ecosystem have been undergoing rapid innovation. In the short term, Nvidia said it will continue to improve the integration of 3D workflows with its AI hardware and tools.

Nvidia is also actively working with various industry groups and standards bodies to improve the capabilities of multiple standards, specifications and open source tools. For example, it is a member of the OpenXR community, developing standards to streamline XR and spatial computing experiences across different devices. It is also helping guide the GL Transmission Format (glTF) standard for exchanging 3D content in consumer-facing applications. Additionally, it is helping to extend the USD format beyond 3D scenes to support more complex engineering and simulation workflows. Nvidia will continue to help weave these capabilities into the Omniverse platform.

The platform also supports a rich marketplace that makes it easier for vendors, domain experts and systems integrators to monetize their expertise and services, and Nvidia will continue to enrich these offerings. They enhance the ability of enterprise users to mix and match design, development, test and monitoring capabilities across various tools. Nvidia currently has partnerships with leading product lifecycle management, geographic information system, CAD, computer-aided engineering, simulation and gaming engine vendors, and Nvidia Omniverse will continue to streamline workflows across these tools.

Read the original:
What is Nvidia Omniverse? How can it affect your business? - TechTarget

Read More..

Introducing the AWS Generative AI Competency Partners – AWS Blog

We are thrilled to announce the launch of the AWS Generative AI Competency, designed to feature AWS Partners that have shown technical proficiency and a track record of continued customer success implementing generative AI technology powered by AWS. The AWS Generative AI Competency includes APN Technology Partners and APN Services Partners. Software path partners have shown proficiency in Generative AI Applications, Foundation Models & App Development, or Infrastructure & Data. Services path partners have shown proficiency in end-to-end generative AI consulting.

AWS Generative AI Competency Partners are at the tip of the spear in developing and implementing the newest generative AI solutions for their customers, propelling their businesses forward with significant efficiency, creativity, and productivity improvements. These partners have proven themselves leaders in leveraging AWS generative AI technology such as Amazon Bedrock, Amazon SageMaker JumpStart, Amazon CodeWhisperer, AWS Trainium, AWS Inferentia, and accelerated computing instances on Amazon Elastic Compute Cloud (EC2).
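To make that technology list concrete, the sketch below shows what a minimal Amazon Bedrock call looks like from boto3. The model ID and request body follow Anthropic's Claude messages format on Bedrock; model availability varies by account and region, so treat the specifics as illustrative.

```python
# Minimal Amazon Bedrock invocation via boto3 (pip install boto3).
# The model ID and body shape follow Anthropic's Claude "messages"
# format on Bedrock; check the model catalog for your region.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Summarize what a digital twin is."},
    ],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```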

AWS Generative AI Competency Partners are represented globally with services and software solutions. These partners undergo rigorous technical and commercial vetting by AWS Partner Solution Architects, guaranteeing customers a consistently high-quality experience.

If you're an APN Technology Partner or APN Services Partner experienced in working with customers on AWS using generative AI, click here to learn more.

Continued here:
Introducing the AWS Generative AI Competency Partners - AWS Blog

Read More..

IT leaders dial back cloud-first strategies as hybrid IT becomes more of an investment priority – ComputerWeekly.com

Cloud-first IT strategies seem to be falling out of favour with technology decision-makers and business leaders, suggests data from the TechTarget/ESG 2024 Technology Spending Intentions study.

More than 1,400 technology decision-makers and business leaders participated in the study, which aims to shine a light on the IT investment priorities of enterprises around the world over the coming year.

The data shows that just over a third of the study's EMEA participants (36%) are taking a cloud-first approach to new application deployments, slightly down on a similar study carried out by TechTarget in 2023.

Back then, around half of the participants indicated that their organisation was favouring a cloud-first approach when choosing the best environment to deploy new applications and workloads.

This is in keeping with IT spending trends seen across the wider IT industry: public cloud firms such as Amazon Web Services (AWS) are tracking a shift in customer cloud spending, with enterprises focusing more on optimising their existing environments rather than ramping up their migration plans.

Jon Brown, senior analyst for cloud and IT operations at ESG, said this trend is one of the reasons why organisations are adopting FinOps to help eliminate waste within their IT estates and to bring their cloud expenditure under control.

"Adopting FinOps is one way to achieve this," he said. "With FinOps, individuals and teams can make informed, data-driven and cost-optimising decisions for IT resource consumption and demonstrate the business value of those expenditures."

FinOps is a collaborative methodology that is focused on helping enterprises maximise the value of their cloud investments through the formation of cross-functional teams, made up of people from the cloud, technology and finance teams within an organisation.
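In practice, that FinOps loop starts with attributing spend. The sketch below pulls one month of costs grouped by service from the AWS Cost Explorer API via boto3; the date range is a placeholder, and tag-based grouping for team-level show-back works the same way.

```python
# Pull one month of spend grouped by service from AWS Cost Explorer
# (pip install boto3) -- the raw material for FinOps show-back reports.
import boto3

ce = boto3.client("ce", region_name="us-east-1")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-02-01", "End": "2024-03-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 1.0:  # skip the pennies
        print(f"{service:40s} ${amount:,.2f}")
```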

As well as the concept of FinOps gaining ground within enterprises, there is anecdotal evidence mounting about enterprises taking less of a broad-brushed, cloud-first approach to IT deployments, and instead focusing on running new workloads in the environment best suited for them, which could be on-premise or in the cloud.

This is perhaps why hybrid cloud emerged as the second most significant technology investment that large enterprise participants in the TechTarget/ESG poll plan to make over the next 12 months. Incidentally, the most significant technology investment area was cloud security tooling.

When the EMEA respondents were asked elsewhere in the report about the most significant investments in compute infrastructure and management technologies their organisations plan to make over the next 12 months, hybrid cloud came out top with 32% of the vote.

Along similar lines, the joint second most popular choice among EMEA respondents was investment in off-premise cloud computing infrastructure, tied with multicloud software at 29% of the vote.

As recently reported by Computer Weekly, enterprise attitudes towards hybrid cloud have evolved compared with previous years, when adopting a hybrid infrastructure was seen as something of a temporary stopping-off point for companies that were seeking to migrate all of their infrastructure to the public cloud in the long term.

As part of the cloud cost optimisation trend already playing out in enterprises, some firms are now realising that some of the more predictable, less peaky workloads they had moved to the public cloud work better on-premise from a cost and performance point of view.

This means hybrid cloud is now becoming the desired endpoint for some of their applications, workloads and data rather than a temporary holding place.

With some enterprises looking to move applications and workloads back on-premise, the study also saw respondents asked about their plans to modernise their datacentre over the next 12 to 18 months.

The top response to that question, from 40% of EMEA respondents, was that they were planning to increase their use of IT applications and infrastructure monitoring and observability tools to get a better understanding of how their on-premise estate is behaving.

Meanwhile, 31% of respondents said they planned to deploy more datacentre infrastructure service management, orchestration and automation tools to make their infrastructure easier to manage.

According to Brown, this is indicative of the fact that enterprise infrastructure estates are becoming more complex.

"The task of IT operations is becoming simultaneously more critical and complex due to a whole list of factors: modernisation, distributed cloud computing, staffing, cost pressures, digital transformation, security, remote work and tool sprawl, among others," he said.

And this, he added, is why IT departments are having to deploy more tools to get more done in less time with fewer people.

More here:
IT leaders dial back cloud-first strategies as hybrid IT becomes more of an investment priority - ComputerWeekly.com

Read More..

Sailing the seas, Through the clouds: iSoftStone and Huawei Signed Memorandum of Understanding, Launching … – PR Newswire

BEIJING, March 7, 2024 /PRNewswire/ -- On March 4th, iSoftStone and Huawei signed a Memorandum of Understanding in Riyadh, the capital of Saudi Arabia, launching cooperation in the Middle East and Central Asia. In the future, both parties will adhere to the principle of mutually beneficial cooperation, jointly enhancing ecological development and partner collaboration in the Middle East and Central Asia. Harnessing Huawei's strengths in cloud computing technology, services, and global network coverage, combined with iSoftStone's expertise in digital consulting, design, technology, and empowerment, as well as its advantages in products, services, and solutions for large enterprises and various industries, the two parties will establish dedicated sales teams and collaboratively formulate market strategies to achieve mutual expansion objectives.

Huawei Saudi Arabia CEO Yang Yang and iSoftStone Chairman and CEO Liu Tianwen engaged in cooperative negotiations. Also present at the discussion were Qi Xiao, President of Huawei's Middle East and Central Asia Cloud Business Department; Zou Siyi, Head of Huawei Saudi Enterprise Business Department; Dong Libin, Head of Huawei Cloud Marketing Department; Luo Shihua, Head of Huawei Cloud Consulting and Solution Sales Department in the Middle East and Central Asia; Xu Liangdong, Head of Huawei Cloud Ecology and Partner Development Department in the Middle East and Central Asia. From iSoftStone, attendees included Director Huang Ying, Senior Vice President Tang Fanghu, Vice President Yin Lu, and Zhang Hai, Overseas ICT Sales General Manager.

During the discussion, Huawei emphasized the business development in the Middle East, particularly in Saudi Arabia. The growth in sectors such as telecommunications, government enterprises, and Huawei Cloud has been rapid and promising. Huawei extended a warm welcome to capable partners like iSoftStone to join forces and explore international markets together, seizing the opportunity to expand collectively. There is significant potential for development in areas such as urban governance, digital parks, and cloud infrastructure. With a collaborative effort between Huawei and iSoftStone, we aim to tailor services and solutions to meet the specific business scenarios in the Middle East market.

During the meeting, Chairman Liu Tianwen of iSoftStone expressed the firm's commitment to strategically investing in the Middle East market. He conveyed confidence in seamless collaboration with Huawei's representative office and envisioned a comprehensive synergy to deepen the advancement into international markets alongside Huawei. Chairman Liu introduced iSoftStone's capabilities in digital technology services and industry solutions, emphasizing key areas such as iSoftStone consulting, cloud intelligence services, implementation services for enterprise software, and digital technology research and development. He highlighted a series of mature solutions encompassing digital parks, data middle office, system integration, AI large models, smart vehicles, hardware servers, and more.

As a steadfast strategic partner of Huawei Cloud for many years, iSoftStone has been working hand in hand, leveraging domestic industry expertise and mature cases to replicate success in overseas markets. The focus is particularly on Southeast Asia and the Middle East, driving deep collaboration between Huawei Cloud and local enterprises. In 2023, iSoftStone established its international business headquarters in Singapore, laying the groundwork for the international business management and operational system. This initiative extends support to and covers the digital business expansion and delivery in countries such as Saudi Arabia, the United Arab Emirates, Turkey, Thailand, and Malaysia. iSoftStone views this signing as a pivotal opportunity to escalate its commitment, channeling increased investment towards fortifying its overseas technical service capabilities and solution expertise. iSoftStone aims to bolster its collaboration with Huawei Cloud across various facets, including brand value, technical support, channel expansion, and ecological cooperation, collectively propelling them towards a new frontier of international business cooperation.

In recent years, iSoftStone has strategically aligned itself with the evolving landscape of opportunities presented by digitization, intelligentization, sustainability, and internationalization, closely aligning with national and customer strategies, deepening its international footprint. Huawei Cloud is not only the domain where iSoftStone and Huawei engage in deep collaboration but also serves as the collaborative cornerstone for both parties to expand internationally. This is particularly evident in regions like the Middle East and Central Asia, which hold significant market potential. The joint overseas endeavors are poised to achieve mutual benefits, fostering a win-win scenario and collectively leveraging the digital prowess of China to drive global economic development.

Photo -https://mma.prnewswire.com/media/2357059/image_1.jpg

Photo - https://mma.prnewswire.com/media/2357060/image_2.jpg

Read more:
Sailing the seas, Through the clouds: iSoftStone and Huawei Signed Memorandum of Understanding, Launching ... - PR Newswire

Read More..

The Hidden Cost of Using Managed Databases – InfoQ.com


In 2024, cloud computing is everywhere, often unnoticed (e.g., iCloud and Google Docs). Cloud computing has become as ubiquitous as real clouds. Many advantages of cloud computing, such as elasticity, scalability, and ease of use, are well understood at this point. They reduce the time to market for new products and address the scaling challenges of existing ones without going through an arduous planning and procurement process.

Because of these advantages, we have seen a massive demand for managed services for databases, message queues, application runtime, etc. However, this article is about the less discussed side of cloud computing: the hidden cost of using managed services, specifically managed relational databases.

As a database practitioner at Cloudflare and building Omnigres, I have experience developing, managing, and operating databases in environments such as completely on-prem, public cloud, and hybrid. From a business perspective, each model has its pros and cons. Once a company adopts a public cloud, using any managed services is fairly trivial, and databases are just one click away.

The ease of use is the gateway for users to start using a service. For the most part, it just works, so why not continue using it, or even take it a step further? Why not create more of the same?

Managed databases from Cloud Providers offer a lot of value in terms of running them, backing them up, and monitoring them. They also take care of high availability. I presented at SCaLE20x the challenges of building an in-house managed database service: offloading that work to a provider reduces the operational costs and time to market and brings more flexibility. To offer these benefits, a provider charges the users.

First, calculating how much a managed database will cost isn't straightforward. The cost depends on multiple factors, such as:

Even though it's complex, it's quantifiable. Some third-party tools make it easier to calculate the pricing. Also, cost optimizations such as disabling multi-AZ and stopping instances for development environments are quite common. Companies such as Walmart have started moving toward a hybrid cloud. At the same time, smaller companies like Basecamp have migrated the majority of their services off the cloud, mainly for cost reasons.
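To see why the arithmetic is fiddly but tractable, consider a back-of-the-envelope estimator like the sketch below. Every rate in it is a made-up placeholder rather than any provider's actual price sheet; the point is how many independent meters feed one bill.

```python
# Rough monthly estimate for a managed Postgres instance. Every rate
# below is a made-up placeholder -- substitute your provider's price
# sheet. The point is how many independent meters feed one bill.
HOURS_PER_MONTH = 730

def monthly_cost(instance_rate_hr, storage_gb, storage_rate_gb,
                 iops, iops_rate, egress_gb, egress_rate_gb,
                 multi_az=True):
    compute = instance_rate_hr * HOURS_PER_MONTH * (2 if multi_az else 1)
    storage = storage_gb * storage_rate_gb   # provisioned GB-months
    io = iops * iops_rate                    # provisioned IOPS-months
    egress = egress_gb * egress_rate_gb      # data transferred out
    return compute + storage + io + egress

# Hypothetical mid-size production database:
# $0.50/hr instance, 500 GB at $0.115/GB, 3000 IOPS at $0.10/IOPS,
# 200 GB egress at $0.09/GB, with a multi-AZ standby.
total = monthly_cost(0.50, 500, 0.115, 3000, 0.10, 200, 0.09)
print(f"Estimated monthly bill: ${total:,.2f}")  # $1,105.50
```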

To understand whether the cost of a managed service is worth it, one must understand its usage pattern. The major benefit of the cloud is flexibility; if one doesn't need that, they are well off operating their databases on their own hardware. Let's go over other areas where the cost is more subjective and somewhat difficult to measure.

One of cloud computing's unique value propositions is scalability. If the website or product becomes an overnight hit, there is no need to procure infrastructure to support the workload. That's great, but there is a catch; it can be a surprise if not used carefully. Imagine a runaway or rogue workload against the database: since a lot of cloud providers charge based on IOPS, CPU time, and the like, these workloads can generate a huge bill for no benefit.

On a multi-cloud or hybrid cloud setup, services need to communicate over a network between different providers. Typically, there is no data transfer cost for bringing the data (ingress) into a managed database. However, getting data out (egress) comes with a cost. The egress fee is a significant cost factor for businesses that move data from their managed database service. In a sense, this incentivizes users to not migrate their data out of the provider.

Providers such as Cloudflare understood this challenge and created the Bandwidth Alliance, an alliance that provides a discount or waives data transfer costs between providers who are part of it. Recently, Google Cloud eliminated data transfer fees for migrating data to another cloud provider. The practice is so unfair that regulators from the EU and the UK are investigating it actively.

While the service provider takes care of Day 0 operations, there are still Day 1 and Day 2 challenges. It is unreasonable to expect a provider to solve all the operational challenges. Still, it's good to be aware of what those operations look like and the costs involved.

Data is the core of the business. I argue that any software business can be rebuilt if the data is intact. As a database engineer, losing data is by far my biggest nightmare. Being paranoid with backups is not a bad thing. Relying solely on the provider for backups is like putting all the eggs in one basket. If the provider offers an SLA/SLO, that's a nice add-on. However, there is also a risk of a provider completely losing a backup.

For the most part, it's the responsibility of the business to its end customers to protect their data. Most mature organizations have secondary backups outside their primary service provider. Making this happen carries costs in terms of actual dollars for storage and computing, data transfer, and engineering time.

The quality of backups is determined by their ability to be successfully restored. What are backups worth if they can't be restored? Unfortunately, many providers don't do anything on this front and leave this part to their users. It's understandably a complex problem, since the providers don't know every business's needs. So, users need to continuously test their restorations, through automation or manually, to validate the integrity of the backups and their restoration procedure.
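A minimal version of that continuous restore test is sketched below, assuming pg_dump and pg_restore are on the PATH, a throwaway scratch database exists, and a table named customers stands in for whatever sanity check fits your schema. In practice this runs on a schedule rather than by hand.

```python
# Take a logical backup and immediately prove it restores -- the cheap
# version of "a backup is only as good as its last restore". Assumes
# pg_dump/pg_restore on PATH and a throwaway scratch database.
import subprocess

SOURCE_DSN = "postgresql://app@db-primary/appdb"           # hypothetical
SCRATCH_DSN = "postgresql://app@db-scratch/restore_test"   # hypothetical
DUMP_FILE = "/backups/appdb.dump"

def backup_and_verify() -> None:
    # Custom-format dump, suitable for pg_restore.
    subprocess.run(
        ["pg_dump", "--format=custom", f"--file={DUMP_FILE}", SOURCE_DSN],
        check=True,
    )
    # Restore into the scratch database, dropping any previous objects.
    subprocess.run(
        ["pg_restore", "--clean", "--if-exists", "--no-owner",
         f"--dbname={SCRATCH_DSN}", DUMP_FILE],
        check=True,
    )
    # Sanity query: the restore must contain actual rows, not just DDL.
    # "customers" is a hypothetical table; pick one that matters to you.
    out = subprocess.run(
        ["psql", SCRATCH_DSN, "-tAc", "SELECT count(*) FROM customers"],
        check=True, capture_output=True, text=True,
    )
    assert int(out.stdout.strip()) > 0, "restored database looks empty"

if __name__ == "__main__":
    backup_and_verify()
    print("backup restored and verified")
```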

Unfortunately, as things evolve, some services can be discontinued. Last year, MariaDB on Azure was retired. Aurora Serverless V1 will no longer be supported after 2024. If the database is closed source, the only way out is to use whatever tool the provider offers to export it elsewhere. Data migration has to be architected in such a way as to reduce data loss and service downtime. If it's backed by an open-source database such as Postgres, or even an open protocol (e.g., the Postgres wire protocol), it's somewhat easier to migrate. Still, database/data migrations are always painful.

As managed services tend to focus on solving common problems, they can sometimes be limiting. Since the provider has to manage many services for thousands of customers, providing complete flexibility is cumbersome or impossible. It may not sound limiting or problematic initially, but as the business grows, it can start to hurt. For example, Postgres has a huge extension ecosystem.

Many managed services allow only the installation of a subset of the extensions. For example, open source extensions such as pg_ivm (incremental view maintenance) and zombodb (making the search easier within Postgres) are not supported in AWS and GCP, which can severely limit what features you can build or rely on.
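It is cheap to check this up front. The sketch below asks an instance which extensions it can actually install via the standard pg_available_extensions catalog view; the connection string is a placeholder, and the required-extensions set is illustrative.

```python
# Ask a Postgres instance which extensions it can actually install --
# managed providers typically allow only an approved subset.
# (pip install psycopg2-binary; the DSN below is a placeholder.)
import psycopg2

NEEDED = {"pg_ivm", "zombodb", "postgis", "pg_stat_statements"}

with psycopg2.connect("postgresql://app@db-host/appdb") as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT name FROM pg_available_extensions")
        available = {row[0] for row in cur.fetchall()}

missing = NEEDED - available
print("available:", sorted(NEEDED & available))
print("missing:  ", sorted(missing) or "none")
```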

As an engineer, nothing frustrates me more than being unable to solve an engineering problem. To an extent, databases can be seen as a black box. Most database users use them as a place to store and retrieve data; they don't necessarily bother about what's going on all the time. Still, when something malfunctions, the users are at the mercy of whatever tools the provider supplied to troubleshoot it.

Providers generally run databases on top of some virtualization (virtual machines, containers), sometimes operated by an orchestrator (e.g., K8s). They also don't necessarily provide complete access to the server where the database is running. The multiple layers of abstraction don't make the situation any easier.

While providers don't offer full access to prevent users from "shooting themselves in the foot," an advanced user will likely need elevated permissions to understand what's happening at different layers of the stack and fix the underlying problem. This is the primary factor influencing my choice to self-host software, aiming for maximum control. This could involve hosting in my local data center or utilizing foundational elements like virtual machines and object storage, allowing me to create and manage my own services.

Also, there are healthy discussions around self-hosting vs. managed services in forums like Hacker News. One of the comments from that discussion summarizes it eloquently:

"There are definitely some things to be considered here [self-hosting]. However, I find that most people drastically overestimate the amount of work associated with hosting things. Also, they tend to underestimate the amount of work required when using managed solutions. For example, youll certainly want to do secondary backups and test restores even for managed options."

Another side effect I have noticed is that teams tend to throw more money at the problem (increasing instance size), hoping it will solve some of their challenges, when they can't identify the root cause. According to Ottertune, a company specializing in tuning database workloads, even increasing instance types without expertly tuning configurations doesn't bring proportional performance gains.
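Before paying for a bigger instance, it is usually cheaper to ask the database where the time goes. The sketch below runs EXPLAIN (ANALYZE, BUFFERS) on a suspect query through psycopg2; the DSN and the query itself are placeholders.

```python
# Profile a suspect query before paying for a larger instance class.
# EXPLAIN (ANALYZE, BUFFERS) executes the query and reports where the
# time and I/O actually go. (DSN and query are placeholders.)
import psycopg2

SUSPECT_QUERY = "SELECT * FROM orders WHERE customer_id = 42"  # hypothetical

with psycopg2.connect("postgresql://app@db-host/appdb") as conn:
    with conn.cursor() as cur:
        cur.execute(f"EXPLAIN (ANALYZE, BUFFERS) {SUSPECT_QUERY}")
        for (line,) in cur.fetchall():
            print(line)
# A sequential scan over millions of rows here points at a missing
# index -- a fix that costs nothing compared to a bigger instance.
```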

The challenge is almost unsolvable, irrespective of skill level. For instance, Kyle Kingsbury is a distributed systems specialist and the author of the Jepsen test, which is used to verify the safety and consistency of distributed systems. While testing the correctness of the MySQL 8.0 version, he ran into a database replication issue and asked for support from the service provider.

A growing trend involves service providers depending on other managed providers to deliver solutions. Nevertheless, frustration arises when the foundational provider fails to meet expectations or behaves poorly. The point is there is not much one can do, even if they pay hefty prices and have a business SLA with their provider.

One thing you might notice throughout this article is the constant theme of tradeoffs. The purpose of this article is not to deter anyone from using cloud computing or managed services. It is mainly to bring awareness of the costs involved, the fine line between staying open and being locked in, the limited feature sets, the lack of visibility, and the need to handle Day 2 operations yourself.

These are some of the areas that weren't intuitive to me when I first started using managed database services. I hope this helps developers and operators make an informed decision.

View original post here:
The Hidden Cost of Using Managed Databases - InfoQ.com

Read More..

China offers AI computing ‘vouchers’ to its underpowered start-ups – TechRadar

China is putting plans into action to support AI startups, despite ongoing chip shortages both globally and locally as a result of geopolitical tensions and subsequent sanctions.

According to the Financial Times, the Chinese government is rolling out the initiative across 17 cities, whereby AI startups will be able to get cloud computing vouchers to grant them access to the tools they need without having to fork out large initial investments.

In a bid to reduce the financial burden of establishing their own data centers, China is set to provide vouchers worth between $140,000 and $280,000 to cover cloud computing.

The financial aid will allow AI businesses to access crucial AI infrastructure such as data centers, for the purpose of training and running LLMs.

The move is believed to be a response to recent US measures, including restrictions on products and services, which have caused a shortage of chips across the country.

A sudden reduction in chip volume has reportedly seen China resort to stockpiling, acquiring via the black market, and even repurposing other components.

A separate report by BNN has uncovered potential plans by the Chinese government to introduce a subsidy scheme for AI groups using domestic chips in order to reduce dependence on foreign companies.

It is believed that voucher applicants must meet certain criteria in order to be considered eligible, including a minimum revenue threshold or participation in government-sponsored projects.

The voucher scheme is clearly an important and valuable tool for Chinese AI startups; however, it's merely a temporary solution and doesn't solve the country's chip shortage, leaving many questioning whether China can sustain such impressive levels of growth in the AI sector.

Follow this link:
China offers AI computing 'vouchers' to its underpowered start-ups - TechRadar

Read More..

Tupan Launches the New Binance Smart Chain TCT – GlobeNewswire

NEW YORK, NY, March 06, 2024 (GLOBE NEWSWIRE) -- Recently, Tupan has announced the launch of the new Binance Smart Chain TCT.

Bitcoin recently surpassed the $46,000 mark on January 8th, due to its recent compliance with the SEC, becoming an ETF and consequently a semi-security. This has drawn the attention of the general public to the crypto market once again, and market experience tells us that if the mainstream media has covered it, it's already too late to make significant gains in the short to mid-term.

Compliance with the SEC was a contributing factor to this resistance break, as it changes the perception around the volatility stigma the market still attached to crypto-assets. With bitcoin in this position, the market's view of tokens may change, though it also creates the need for a level of crypto regulation in the future.

But this is not the first time a cryptocurrency has complied with the SEC. Tupan, a project directed at bringing a profitable bio-economy to blockchain while aiding the Amazon rainforest, has been in the market since 2019 and was one of the first UN-17-SDG and ESG tokens issued by an investment fund regulated by the SEC USA, reaching $7.12 at the time on the Waves blockchain. Now, after the last few bearish years, bitcoin has finally established itself as a security investment, and this sets a trend for tokenized securities.

Tupan has started a never-before-seen program to enable its community to farm an equity token called Tupan Augreen, which represents the shares of the ForestAu Green Investment Fund. ForestAu Green quotas are estimated to cost $160k and are now available for farming; users only need to own a Tupan NFT and stake it along with the community tokens (TCT) to earn tokenized shares of a fund that has a plethora of projects with a high chance of appreciation.

Upon further analysis of the Tupan Community Token (TCT), it can be noted that it's not only a utility token to be exchanged for digital equity, but also a token representation of the environmental impact of all the projects related to the fund, adding even more value to it. This sets up an entire sustainable ecosystem connecting the traditional market to the blockchain and the investment fund.

To acquire your TCTs before the exchange launch and discover this unique opportunity, join the exclusive list at http://www.tupan.io/launch

Social Links

X: https://twitter.com/tupan_io

Facebook: https://www.facebook.com/tupan.io

LinkedIn: https://www.linkedin.com/company/tupan-token/

Instagram: https://www.instagram.com/tupan.io

Discord: https://discord.com/invite/AKmdvqKkMz

Telegram: https://t.me/cryptoTupan

Media Contact

Brand: Tupan

Email: marketing@tupan.io

Website: https://tupan.io/

SOURCE: Tupan

Original post:

Tupan Launches the New Binance Smart Chain TCT - GlobeNewswire

Read More..