
Cloud Security Market To Reach USD 106.4 Billion By 2032 Says … – GlobeNewswire

Fort Collins, Colorado, Oct. 27, 2023 (GLOBE NEWSWIRE) -- According to DataHorizzon Research, the Cloud Security Market was valued at USD 27.4 Billion in 2022 and is expected to reach USD 106.4 Billion by 2032, growing at a CAGR of 14.6%.
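As a quick sanity check of the quoted growth rate (a minimal sketch based only on the two figures above, assuming a ten-year span from 2022 to 2032), the implied CAGR can be recomputed as follows:

# Recompute the CAGR implied by the press release's 2022 and 2032 figures.
start_value = 27.4   # USD billion, 2022
end_value = 106.4    # USD billion, 2032
years = 10

cagr = (end_value / start_value)**(1.0 / years) - 1
puts format("Implied CAGR: %.1f%%", cagr * 100)
# => roughly 14.5%, consistent with the quoted 14.6% once rounding is allowed for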

Cloud security refers to the policies and controls that define the security structure for cloud-based systems and data. It also covers the security measures to follow for cloud data, including regulatory compliance, protecting customers' privacy, and setting security levels for individual users and devices.

The main factors driving growth in the cloud security market are the rise of cloud computing among businesses and the adoption of BYOD (Bring Your Own Device) policies. With online breaches on the rise, cloud security is needed to protect and safeguard these systems, helping the market grow.

The cloud's scalability and smooth user experience are credited for the continually growing demand for hybrid and serverless computing and DevOps development, which helps the market grow. Post-COVID-19, enterprises have shifted to data centers and the cloud, and remote work settings have pushed the cloud security market forward. Organizations looking to use the benefits of both private and public clouds are increasingly adopting the hybrid cloud approach, which allows them to benefit from both security and cost savings. A hybrid cloud offers cost and flexibility advantages and greater control over essential data and applications.

Request Sample Report:

https://datahorizzonresearch.com/request-sample-pdf/cloud-security-market-2503

Segmentation Overview:

The global cloud security market has been segmented into security type, enterprise size, end-user, and region. The IAM category is expected to lead the market in terms of security type. The rapid growth in the enterprise size segment accounts for extensive growth in the infrastructure segment, as organizations need to maintain infrastructure for support and services; this drives further growth in the segment.

Buy This Research Report:

https://datahorizzonresearch.com/checkout-page/cloud-security-market-2503

Cloud Security Market Report Highlights:

Industry Trends and Insights:

Looking Exclusively For Region/Country Specific Report?

https://datahorizzonresearch.com/ask-for-customization/cloud-security-market-2503

OR

Ask For Discount

https://datahorizzonresearch.com/ask-for-discount/cloud-security-market-2503

Cloud Security Market Segmentation:

About DataHorizzon Research:

DataHorizzon is a market research and advisory company that assists organizations across the globe in formulating growth strategies for changing business dynamics. Its offerings include consulting services across enterprises and business insights to make actionable decisions. DHR's comprehensive research methodology for predicting long-term and sustainable trends in the market facilitates complex decisions for organizations.

Contact:

Mail: sales@datahorizzonresearch.com

Ph: +1-970-672-0390

Website: https://datahorizzonresearch.com/

Follow Us: LinkedIn

Recent Publications

Cybersecurity Market 2023 to 2032
Contact Center Software Market 2023 to 2032
Regtech Market 2023 to 2032
Smart Mirrors Market 2023 to 2032
Digital Dentistry Market 2023 to 2032

Go here to see the original:
Cloud Security Market To Reach USD 106.4 Billion By 2032 Says ... - GlobeNewswire


VMware warns of critical vulnerability affecting vCenter Server product – The Record from Recorded Future News

Cloud computing giant VMware warned this week of new vulnerabilities affecting a server management product present in VMware vSphere and Cloud Foundation (VCF) products.

The affected product, VMware vCenter Server, provides a centralized platform for controlling customers' vSphere environments.

On Tuesday, the company released an advisory and FAQ document outlining concerns around CVE-2023-34048, a vulnerability with a critical CVSS severity score of 9.8 out of 10.

Discovered by Grigory Dorodnov of Trend Micro Zero Day Initiative, the bug allows a hacker to compromise vulnerable servers.

VMware noted that while it typically does not mention end-of-life products in most advisories, due to the critical severity of this vulnerability and the lack of a workaround it has made a patch generally available for vCenter Server 6.7U3, 6.5U3, and VCF 3.x.

VMware noted that because the flaw affects the popular vCenter Server, the scope is large, and customers should consider this an emergency change that necessitates acting quickly.

The company is not currently aware of exploitation in the wild.

Viakoo Labs Vice President John Gallagher said the vulnerability is "as serious as it gets" because vCenter Server is a widely used centralized platform for managing multiple VMware instances, and is used by a wide range of organizations and engineering teams.

"Successful exploit of this CVE gives complete access to the environment, and enables remote code execution for further exploitation. A sign of how deeply serious this is can be seen in how VMware has published patches for older, end-of-support/end-of-life versions of the product," Gallagher said.

"Given the breadth of usage and how even older versions are still being used, it's likely that patching will take some time, leaving the window of vulnerability open for a while."

Irfan Asrar, director of threat research at Qualys, backed Gallagher's assessment, warning that the affected products are highly prevalent applications with large enterprise customers globally.

"Given the fact that it's a remote code exploit with a high severity score, organizations should take this very seriously, especially with the current geopolitical climate," Asrar added. "Other than the obvious use case as a vector for ransomware, this could also be used to send messages by threat actors on a hacktivist agenda."

Ransomware gangs have a history of targeting VMware vCenter servers, with several groups going after the products using Log4Shell attacks.



Jonathan Greig is a Breaking News Reporter at Recorded Future News. Jonathan has worked across the globe as a journalist since 2014. Before moving back to New York City, he worked for news outlets in South Africa, Jordan and Cambodia. He previously covered cybersecurity at ZDNet and TechRepublic.

Visit link:
VMware warns of critical vulnerability affecting vCenter Server product - The Record from Recorded Future News


Cloud Computing Market To Reach USD 1,954.7 Billion By 2032 Says DataHorizzon Research – Yahoo Finance

DataHorizzon Research

A Comprehensive Analysis of the Cloud Computing Market Report.

Fort Collins, Colorado, Oct. 27, 2023 (GLOBE NEWSWIRE) -- According to DataHorizzon Research, the Cloud Computing Market was valued at USD 523.4 Billion in 2022 and is expected to have a market size of USD 1,954.7 Billion by 2032 with a CAGR of 14.2%.

Cloud Computing is the delivery of various services via the Internet. These resources include data storage, servers, databases, networking, and software. Instead of storing files on a proprietary hard drive or local storage device, cloud-based storage allows them to be saved to a remote database. Cloud computing saves money by removing the need for businesses to invest considerably in upfront infrastructure and maintenance. Businesses can utilize cloud services to scale resources up or down based on their needs, paying only for what they use. The pay-as-you-go strategy saves capital expenditures and helps organizations to more effectively deploy additional resources.

Several industries are undergoing digital transformation, which is projected to increase their competitiveness and operational efficiency. Cloud computing enables digital transformation by offering the infrastructure, platforms, and software applications needed for creativity, agility, and collaboration.

The proliferation of IoT devices and apps generates enormous volumes of data that must be collected, processed, and analyzed in real time. Cloud computing provides the infrastructure required to support IoT installations by providing scalable storage, processing power, and networking services. This ability to handle IoT-generated data has therefore been a major reason for organizations to adopt cloud-based solutions.

Request Sample Report:

https://datahorizzonresearch.com/request-sample-pdf/cloud-computing-market-2478

Segmentation Overview:

The global cloud computing market has been segmented into type, service, end-use, and region. In terms of type, the private segment dominates the overall market. Private cloud computing services are provided to a limited number of users. Due to the increased acceptance of cloud-based computing among small and medium-sized businesses, the IT & telecommunications category leads the market based on end-use.


Buy This Research Report:

https://datahorizzonresearch.com/checkout-page/cloud-computing-market-2478

Cloud Computing Market Report Highlights:

Cloud computing allows an organization to scale its resources swiftly. It offers the adaptability to meet corporate demands such as software upgrades, improving processing power, increasing storage space, or introducing a new in-house technology. Because of their potential to scale, businesses can take advantage of market opportunities better.

The Software as a Service (SaaS) segment is expected to have the biggest market share. This expansion can be attributed to its ease of deployment, low maintenance expenses, and low cost of ownership.

Some prominent players in the cloud computing market report include Adobe Inc., Alibaba Group Holding Ltd., Amazon.com Inc., Google LLC, IBM Corp., Microsoft Corp., Oracle Corp., Salesforce.com Inc., SAP SE, and Workday, Inc.

Industry Trends and Insights:

Helio, a Zurich-based cloud computing startup, has raised 4.9 million in seed funding. QBIT Capital is leading the round, which includes Uebermorgen Ventures, seed+speed Ventures, Combination VC, and others.

Looking Exclusively For Region/Country Specific Report?

https://datahorizzonresearch.com/ask-for-customization/cloud-computing-market-2478

OR

Ask For Discount

https://datahorizzonresearch.com/ask-for-discount/cloud-computing-market-2478

Cloud Computing Market Segmentation:

By End-use: BFSI, IT, Government, Healthcare, Manufacturing, Retail, Others.

By Region: North America, Latin America, Europe, Asia Pacific, the Middle East and Africa.

About DataHorizzon Research:

DataHorizzon is a market research and advisory company that assists organizations across the globe in formulating growth strategies for changing business dynamics. Its offerings include consulting services across enterprises and business insights to make actionable decisions. DHR's comprehensive research methodology for predicting long-term and sustainable trends in the market facilitates complex decisions for organizations.

Contact:

Mail: sales@datahorizzonresearch.com

Ph: +1-970-672-0390

Website: https://datahorizzonresearch.com/

Follow Us: LinkedIn

Recent Publications

Cybersecurity Market 2023 to 2032
Contact Center Software Market 2023 to 2032
Regtech Market 2023 to 2032
Smart Mirrors Market 2023 to 2032
Digital Dentistry Market 2023 to 2032

Follow this link:
Cloud Computing Market To Reach USD 1,954.7 Billion By 2032 Says DataHorizzon Research - Yahoo Finance


Microsofts Radius and the future of cloud-native development – InfoWorld

The complexity of cloud-native applications appears bottomless. In addition to the familiar Kubernetes, cloud-native apps build on a growing ecosystem of services baked into the public cloud platforms. Developing and managing these applications requires a lot more than coding, going beyond devops into platform and infrastructure engineering. If you want a stable application, you need to have all the teams working together, with the aim of delivering a reproducible set of code and configurations that can be deployed as and when needed.

That requires having a way of bringing together all the various working parts of a modern cloud-native application, building on the various tools we're already using. After all, we don't want to reinvent the wheel. For one thing, those tools work; it's simply that they don't work in unison.

Weve made various strides along the way. Infrastructure-as-code (IaC) tools such as Terraform and Azure Resource Manager allow you to automate the management of infrastructure services and platforms, defining and then building the networks, servers, and services your code needs. These tools are increasingly mature, and able to work directly against cloud service management APIs, offering familiar syntax with both declarative and programmatic approaches to infrastructure definitions.

On the code side we have frameworks that simplify building applications, managing APIs, and helping us to define the microservices that make up a typical cloud-native application. Using a modern application framework, we can go from a few CLI commands to a basic application skeleton we can flesh out to deliver what our users need.

So how do we bring those two distinctly different ways of working together, and use them to build and manage our cloud-native applications? Microsoft recently unveiled a new platform engineering tool that's intended to do exactly that.

Developed by the Azure Incubations Team, Radius brings together existing distributed application frameworks and familiar infrastructure-as-code tools, as well as automated connections to cloud services. The idea is to provide one place to manage those different models, while allowing teams to continue to use their current tools. Radius doesn't throw away hard-earned skills; instead it automatically captures the information needed to manage application resources.

I had an email conversation with Azure CTO Mark Russinovich about Radius, how he envisions it developing, and what its role in cloud-native development could be. He told me,

"We want developers to be able to follow cost, operations, and security best practices, but we learned from customers that trying to teach developers the nuances of how Kubernetes works, or the configuration options for Redis, wasn't working. We needed a better way for developers to fall into the pit of success."

Russinovich noted another driver, namely the growth of new disciplines:

"We've watched the emergence of platform engineering as a discipline. We think Radius can help by providing a kind of self-service platform where developers can follow corporate best practices by using recipes, and recipes are just a wrapper around the Terraform modules that enterprises already have. If we've got this right, we think this helps IT and development teams to implement platform engineering best practices, while helping developers focus on what they love, which is coding."

Radius is perhaps best thought of as one of the first of a new generation of platform operations tools. We already have tools like Dapr to manage apps, and Bicep to manage infrastructure. What Radius does is bring applications and infrastructure together, working in the context of cloud-native application development. It's intended to be the place where you manage key platform information, like connection strings, roles, permissions... all the things we need to link our code to the underlying platform in the shape of Kubernetes and cloud services.

You'll need a Kubernetes cluster to run Radius, which runs as a Kubernetes application. However, most Radius operations are done through a command-line tool that installs under most shells, including support for both Windows Subsystem for Linux and PowerShell, as well as macOS. Once installed, you can check the installation by running rad version. You're now ready to start building your first Radius-managed application.

Use the rad init command to start Radius in the current context of your development cluster, add its namespace, and set up an environment to start work. At the same time, rad init sets up a default Radius application, creating a Bicep app that will load a demo container from the Azure Radius repository. To run the demo container, use the rad run command to launch the Bicep infrastructure application. This configures the Kubernetes server and starts the demo container, which contains a basic web server running a simple web application.

You're not locked into using the command line, as Radius also works with a set of Visual Studio Code extensions. The most obvious first step is adding the Radius Bicep extension with support for Azure and AWS resources. Note this isn't the same as the full Bicep extension and is not compatible with it. Microsoft intends to merge Radius support into the official Bicep extension, but this will take some time. You can use the official HashiCorp Terraform extension to create and edit recipes.

Under the hood is a Helm chart that manages the deployment to your Kubernetes servers, which Radius builds from your application definition. This approach allows you to deploy applications to Kubernetes using existing Helm processes, even though youre using Radius to manage application development. You can build applications and infrastructures using Radius, store the output in an OCI-compliant registry, and use existing deployment tools to deliver the code across your global infrastructure. Radius will generate the Helm YAML for you, based on its Bicep definitions.

That's all pretty much run-of-the-mill for a basic cloud-native application, where you can use your choice of tools to build containers and their contents. However, where things get interesting with Radius is when you start to add what Microsoft calls recipes to the Bicep code. Recipes define how you connect your containers to common platform services or external resources, like databases.

What's perhaps most useful about recipes is that they're designed to automatically add appropriate environment variables to a container, such as adding database connection strings, so your code can consume resources without additional configuration beyond what is in your Bicep. This allows platform teams to ensure that guardrails are in place, for example, to keep connections secure.
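As a rough sketch of what that looks like from the application's side (the variable name below is hypothetical, invented for illustration rather than taken from Radius's documentation), the container code simply reads whatever the platform injected:

# Hypothetical example: consuming a connection string that a platform tool
# has injected into the container's environment. The variable name is made
# up for illustration; it is not Radius's documented name.
redis_url = ENV.fetch("CONNECTION_REDIS_URL", "redis://localhost:6379")
puts "Connecting to #{redis_url}"
# The application stays free of hard-coded endpoints and secrets; the
# platform team decides what gets injected and how it is secured.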

You can author a recipe in either Bicep or Terraform, Terraform being the more obvious choice for cross-cloud development. If you're already using infrastructure-as-code techniques, you should find this approach familiar, treating a recipe as an infrastructure template with the same Bicep parameters or Terraform variables you use elsewhere.

Recipes define the parameters used to work with the target resource, managing the connections to the services your code uses. These connections are then defined in your application definition. In this way Radius recipes mark the separation of responsibilities between platform engineering and application development. If I want a Redis cache in my application, I add the appropriate recipe to my Radius application. That recipe is built and managed by the platform team, which determines how that functionality is deployed and what information I must provide to use it in my application.

Out of the box, Radius provides a local set of basic recipes for common services. These can be used as templates for building your own recipes, if you want to connect an application to Azure OpenAI, for example, or define an object store, or link to a payment service.

One interesting option is using Radius to build the scaffolding for a Dapr application. Here you define your application as a Dapr resource, using a Radius recipe to attach a state store using your preferred database. You'll find a number of sample Dapr containers in the Radius repository to help you get started.

All you need to do is add your connections to the state store recipe and add an extension for the Dapr sidecar. In practice, you'll build your own containers using Dapr, using your usual microservice development tools, before adding them to a local repository and then managing the resulting application in Radius.

Perhaps the biggest challenge Radius is designed to solve is the lack of visibility into the myriad resources and dependencies that make up sprawling cloud-native applications. Here Radius gives us a structure that ensures we have a map of our applications and a place where we can deliver architectural governance, with the aim of building and delivering stable, secure enterprise applications.

A big advantage of a tool like Radius is the ability to quickly visualize application architectures, mapping the connections between containers and services as a graph. For now, the Radius application graph is a text-only display, but there's scope for adding more user-friendly visualizations that could go a lot further to help us understand and debug large-scale applications. As Russinovich noted,

"We make it easy to query Radius and retrieve the full application graph. A third party could integrate our application graph with another source of data, like telemetry data or networking data. Seeing those graphs in relation to each other could be really powerful."

In addition to giving us an understanding of what is composed together to create our application, the application graph will play a role in helping teams go from development to production, Russinovich said.

"For example, we could look at how the application is defined by a developer versus how the application is deployed in production. [...] Having an application graph enables these teams to work together on how the application is defined as well as how it's deployed. Cost is one part, infrastructure is another, but we can also imagine other overlays like performance, monitoring, and trace analysis."

Cloud-native development needs to move from a world of hand-crafted code, as nice as that is, to one where we can start to apply trusted and reliable engineering principles as part of our everyday work. That's why the arrival of platforms like Radius is important. Not only is it coming from Microsoft, but it's also being developed and used by Comcast, BlackRock, and Portuguese bank Millennium BCP, shipping as an open-source project on GitHub.

At the end of our email conversation, Mark Russinovich indicated how the Radius platform might evolve, along with community involvement through the Cloud Native Computing Foundation (CNCF). He said,

"Radius has multiple extension points. We'd love to see partners like Confluent or MongoDB contributing Radius recipes that integrate Radius with their services. We also think that cloud providers like AWS or GCP could extend the way Radius works with their clouds, improving the multi-cloud support that's inherent to Radius. Finally, we envision extensions to support serverless computing like AWS Fargate and Azure Container Instances."

Read this article:
Microsofts Radius and the future of cloud-native development - InfoWorld


Why Capistrano Got Usurped by Docker and Then Kubernetes – The New Stack

While listening to the much appreciated intellectual property and digital rights advocate Cory Doctorow reading a little of his new book, I heard him mention the place in California called Capistrano. But of course, I remembered Capistrano, a remote server automation tool popular in the early 2010s; it was effectively a pre-containers and pre-Kubernetes tool.

I'm sometimes interested in what happened to commonly used technology that lost popularity over time. Of course, Capistrano isn't actually dead, even if I am using the past tense to describe it. Open source tools never truly die, they just become under-appreciated (and possibly placed in the attic). I remember using Capistrano as a remote server automation tool a little over a decade ago. Using SSH, it would follow a script to allow you to deploy your updates to target servers. An update might be a new executable file, maybe some code, maybe some configuration, maybe some database changes. Great, but why look back at a system that is no longer in regular use?

Firstly, to understand trends it helps to look at past examples. It also helps to note the point at which something decreased in popularity, while checking that we haven't lost anything along the way. Current tech is just a blip on the timeline, and it is much easier to predict what is going to happen if you glance backwards occasionally. If you find yourself having to work on a deployment at a new site, it is good to have a grab bag of tools other than just your one personal favourite. You might even have to use Capistrano in an old stack. So let us evaluate the antique, to see what it might be worth.

Capistrano understood the basic three environments that you would work on: typically production, staging and development. A development environment is probably a laptop; a staging environment is probably some type of cloud server that QA can get at. Using these definitions, Capistrano could act on specific machines.

The basic command within Capistrano was the task. These were executed at different stages of a deployment. But to filter these, you used roles to describe which bit of the system you were working with:

role :app, "my-app-server.com"role :web, "my-static-server.com"role :db, "my-db-server.com"

role :app, "my-app-server.com"

role :web, "my-static-server.com"

role :db, "my-db-server.com"

This represents the application server (the thing generating dynamic content), the web pages or web server, and the database as separate parts. You can of course create your own definitions.

Alternatively, you could focus more on environment separation, with roles operating underneath. For a description of production, we might set the following:

# config/deploy/production.rb
server "11.22.333.444", user: "ubuntu", roles: %w{app db web}

The default deploy task had a number of subtasks representing the stages of deployment:

Here is an example of a customized deploy task. This ruby-like code uses both the roles to filter the task, as well as the stage of deployment. In this case, we can update the style.css file just before we are done:

namespace :deploy do
  after :finishing, :upload do
    on roles(:web) do
      path = "web/assets"
      upload! "themes/assets/style.css", "#{path}"
    end
    on roles(:db) do
      # Migrate database
    end
  end
end

To fire this off on the command line, you could use something like the following after Capistrano was installed:

cap production deploy

There is a default deploy flow as well as a corresponding rollback flow. Here is a more detailed look at how that could go:

deploy
  deploy:starting
    [before]
      deploy:ensure_stage
      deploy:set_shared_assets
    deploy:check
  deploy:started
  deploy:updating
    git:create_release
    deploy:symlink:shared
  deploy:updated
    [before]
      deploy:bundle
    [after]
      deploy:migrate
      deploy:compile_assets
      deploy:normalize_assets
  deploy:publishing
    deploy:symlink:release
  deploy:published
  deploy:finishing
    deploy:cleanup
  deploy:finished
    deploy:log_revision

You can see the hooks started, updated, published and finished which correspond to the actions starting, publishing, etc. These are used to hook up custom tasks into the flow with before and after clauses like we saw above.

Note that after publishing, a current symlink pointing to the latest release is created or updated. If the deployment fails in any step, the current symlink still points to the old release.

The "run this, then run that" model wasn't always a good way of predicting what your system would be like after deployments. Tools like Chef were better at handling sprawling systems, because they started with a model and said "make this setup true". Chef worked in terms of convergence and idempotence. Missing bits were added, but after that, re-applying the same steps didn't change anything. Hence, multiple executions of the same action did not cause side effects on the state.
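As a rough illustration of that declarative style (a minimal sketch using standard Chef resources, not taken from the original article), the following recipe describes a desired end state rather than a sequence of commands:

# Declarative, idempotent style: describe the state you want and let the
# tool converge on it. Running this twice is harmless, because Chef only
# acts when the actual state differs from the declared state.
package "nginx" do
  action :install
end

service "nginx" do
  action [:enable, :start]
end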

The flexibility of Capistrano would allow less experienced developers to build Jenga towers of working but unstable deployments.

By contrast, a single Docker image allowed systematic control of OS, packages, libraries, and code. It also allowed a laptop and a cloud server to be treated similarly, just as places to mount containers.

And finally, Kubernetes handled clusters without having to worry about slowdowns and time-outs. Having a fully transparent infrastructure, with the ability to get lists of the services and exact configurations needed to run all aspects made life much easier for DevOps teams. Instead of changing already-running services, one could create new containers and terminate the old ones.

One other sticking point with Capistrano from a modern point of view is that it is built on Ruby. The Ruby language is unfairly tied to the popularity of Ruby on Rails, and that has fallen out of favor with the rise of Node.js and JavaScript. Overall, other languages and language trends have overtaken it in popularity: Python has become the favored scripting language, for example. The tasks shown above use a DSL that is effectively the Ruby Rake build tool.

Has anything been lost? Possibly. Having a set of customized tasks to make quick changes does encourage a hacking approach, but it also allowed for smaller, temporary, event-based changes: "make this change happen" as opposed to "I always want the server to look like this".
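A minimal sketch of what such an ad-hoc, event-based task might look like in Capistrano's DSL (the task name and paths here are invented for illustration):

desc "Clear the application cache on demand"
task :clear_cache do
  on roles(:app) do
    # A one-off, imperative change: run it when you need it, rather than as
    # part of a converged, always-true description of the server.
    execute :rm, "-rf", "/var/www/my_app/tmp/cache"
  end
end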

It might be better to say that tools like Capistrano appeared as a waypoint on a deployment journey for any team, before a wider view was needed. But even as a dusty relic, Capistrano remains a great modular tool for automating the deployment and maintenance of web applications.

As for Capistrano the place in California? Bad news, I'm afraid.

See the article here:
Why Capistrano Got Usurped by Docker and Then Kubernetes - The New Stack


Amazon Strong Results Point Toward Boost for Cloud Business – Yahoo Finance

(Bloomberg) -- Amazon.com Inc. Chief Executive Officer Andy Jassy gave investors much of what they wanted this earnings season: robust sales and profit growth along with a hint that the cloud division earnings machine is regaining momentum.


While third-quarter revenue at Amazon Web Services, the cloud computing unit, fell just short of projections, Jassy said the business is stabilizing. The company signed several new deals with customers that took effect this month, and demand for generative artificial intelligence is likely to boost the division well into the future, he said Thursday on a conference call after the results were released. The shares gained sharply after his comments and were up about 6% as the markets opened on Friday. The stock has gained almost 50% this year.

Amazon's CEO has been winning over Wall Street with deep cost cuts and a focus on boosting profit. Under his guidance, the Seattle-based company has become increasingly reliant on services that tend to make more money than the original business of hawking goods online, including advertising, logistics services for independent merchants and renting computing power to corporations.

Cloud unit sales increased 12% to $23.1 billion, a growth rate that was "just enough to keep the goblins away," analysts at Jefferies said in a note to clients. It was slightly higher than the previous period, marking the first quarter-to-quarter rise in AWS revenue growth in almost two years.

The business's operating income of $6.98 billion was about $1.3 billion more than analysts expected. AWS, which generally accounts for more profit than the rest of the company combined, reported the highest operating margin since the first quarter of 2022.


On the call, Jassy acknowledged that some companies are still seeking to cut their spending on rented computing power and software, a phenomenon that has hobbled growth at AWS, along with rivals Microsoft Corp. and Alphabet Inc. But many clients are now turning to running new projects on Amazon's servers, Jassy said.

The CEO laid out Amazon's aims to become a major player in generative AI, which is software that can be prompted to produce writing or images based on an enormous amount of data. Jassy said the technology represents tens of billions in potential revenue for AWS over the next several years. It's unclear to what degree such applications are boosting the unit's sales now, but the business is growing "very, very quickly," he said.

During the quarter, Amazon announced a partnership with AI startup Anthropic. Under the deal, Anthropic will use AWS technology and make its tools available to cloud customers. Amazon, widely seen as lagging behind Microsoft and Google in generative AI applications, is investing $1.25 billion and as much as $4 billion in Anthropic.

Even as investors parsed Jassy's words about the health of the company's most profitable business, the quarterly results provided a clear indication that Amazon's cost-cutting effort was paying off. Executives have scrutinized expenses in the last year, eliminating jobs, curbing hiring and shuttering marginal projects.

Amazon's spending on sales and marketing declined from a year earlier, a first since at least 2015. The growth in spending on technology and infrastructure, a category that includes the salaries of software engineers and costs for AWS servers, rose by just 8.8%, about a quarter of the rate of a year ago.

Third-quarter revenue gained 13% to $143.1 billion, Amazon said in a statement. Analysts, on average, estimated $141.6 billion, according to data compiled by Bloomberg. Operating income increased to $11.2 billion, compared with $2.5 billion in the period a year ago. Analysts, on average, estimated $7.71 billion.

The company's central online stores also produced a better-than-expected performance. The unit generated $57.3 billion, a 7% increase from the period a year earlier. The quarter included Prime Day, Amazon's mid-summer shopping bonanza. Operating profit in the company's catchall North America segment was the highest since early 2021. Advertising sales jumped 26% to $12 billion, also topping estimates.

Amazon projected sales of $160 billion to $167 billion in the quarter ending in December, compared with analysts average estimate of $166.6 billion, according to data compiled by Bloomberg. Operating income will be $7 billion to $11 billion in the period. Analysts, on average, projected $8.71 billion.

While company executives were cautious about holiday-quarter spending, Zak Stambor, an analyst at Insider Intelligence, was more optimistic. Amazon's successful Prime Big Deal Days event, which Insider Intelligence believes generated $5.9 billion in US retail e-commerce sales, an 8% gain year-over-year, gives it strong momentum as we head into the heart of the holiday season, Stambor said.

(Updated with shares)


© 2023 Bloomberg L.P.

Read the original:
Amazon Strong Results Point Toward Boost for Cloud Business - Yahoo Finance


Africa’s cloud market is small but growing fast, and everyone wants a … – TechCabal

Amazon recently announced it was launching its e-commerce service in South Africa, its second dedicated African market. But like its other big tech rivals, Microsoft and Oracle, the group's heart is in Africa's fast-growing cloud services market.

Sales of cloud services are slowing down in North America, the world's biggest cloud market. Amazon's cloud computing unit has been particularly affected and is losing market share to Google Cloud and Microsoft Azure, among others. Africa is one of the global regions where a significant portion of demand for cloud services is expected to come from. According to digital research consultancy Xalam Analytics, demand for cloud computing services in Africa is growing at between 25% and 30% annually. This compares favourably with Europe, where the compound annual growth rate (CAGR) is estimated at 11.27% between 2023 and 2028. In North America, the figure is 10.34%.

Elastic computing is the technology at the heart of cloud computing. It refers to virtual server programs that allow users to rent units of storage space, network connectivity, and computing power from clusters of data centres globally. Created by a small Amazon team in Cape Town, South Africa in 2003, the service that became AWS, Amazon's largest subsidiary, accounted for half of Amazon's operating profit in 2022 and helped offset heavy losses incurred by Amazon's e-commerce business, investments and movie streaming platform last year. AWS has been described as Amazon's profit engine. But this profit engine is slowing down in the most developed markets and showing signs of promise in smaller markets that are rapidly adopting digital technology.

African banks, insurance companies, airlines and airports are moving their data and IT systems to virtual servers and shuttering costly self-operated data centres. Even telecommunications giants in Africa like MTN Group are not left out. In March, the telco announced it had deployed the core service for its 5G network on Microsoft's cloud platform, Azure. South Africa's Old Mutual shut down its physical data centres to move its workloads almost entirely to the cloud. At TechCabal's Moonshot conference, Osahon Akpata, Ecobank's Head of Consumer Payments, noted that the pan-African bank was progressively moving assets to cloud platforms. African startups, which continue to raise billions in funding from investors, are naturally built on cloud platforms.

While Africa represents an opportunity for cloud service providers, the market is still small. The German research service Statista predicts that revenue from public cloud services in Africa will reach ~$8.3 billion by the end of 2023. By comparison, public cloud revenue in India last year reached $6.2 billion, market intelligence firm International Data Corp reported.

Unlike more mature markets where cloud services like AWS face growing competition, slowing investment into technology startups will not significantly affect the growth of cloud computing services in Africa. "Our priority customers are enterprise businesses with deep pockets and $100 million in annual revenue," a cloud engineer at AWS told TechCabal. Startups are a distant second in terms of revenue, and video streaming is beginning to make its mark in cloud service demand. Senior AWS staff who spoke with TechCabal say digital content, including video streaming from local media outlets like Arise News, is helping grow demand for cloud services in Nigeria. But the big cloud service providers have not cracked Africa's public purse yet.

Governments are hesitant to move data to public cloud platforms. On-premise data systems or contracts with smaller cloud service providers continue to dominate government cloud spending. Moreover, new data localisation rules threaten to constrain the private cloud market, where some of the biggest customers are financial institutions. Concerns about data localisation requirements were partly behind AWS's decision to open a Local Zone in Lagos earlier this year.

In its latest report, released during Mobile World Congress in Kigali in October 2023, the GSM Association (GSMA) says smartphones will account for 88% of total mobile connections in Africa (with the exception of North Africa) by 2030. In the same year, it expects 200 million new unique mobile subscribers to join the growing number of Africans who use mobile phones.

The variable but growing use of digital platforms for government services, private businesses, and personal life in Africa is forcing IT companies to find dynamic ways of serving this demand. Cybersecurity concerns and the ability to quickly ramp up services to respond to brief spikes in service demand increase this pressure.

For example, when a central bank demonetisation program forced Nigerians to use digital money transfer options earlier this year, the money transfer services offered by Nigerian banks were frequently down and failed transactions were a common complaint. Fintechs with cloud capability were better able to handle the spike in digital transactions and grabbed valuable market share as a result.

As digital financial services providers and other technology startups become an embedded part of African economies, cloud platforms are in a race to grab market share. Almost all of AWS's staff in Lagos (about 20 in total) are involved in sales and marketing. Flutterwave recently announced a five-year partnership with Microsoft that will see the $3 billion (at last valuation) fintech process payments for its global merchants on Azure. Oracle, on the other hand, is leveraging its existing relationships with clients who use other Oracle products to cross-sell its cloud service. Last year, South African retailer Mr Price began the process of moving its digital operations to Oracle's Retail Merchandising Cloud Services. The retailer shed its internal inventory management system for the costly change, which was completed in March 2023.

"We needed to modernise our retail infrastructure and leverage cloud technology to establish a sustainable and stable application foundation for our high volume processes," said Kim Sim, Chief Information Officer at Mr Price. "Our vision is to be the most valuable retailer in Africa and we know that Oracle's proven cloud platform can help us meet the needs of our growing community."

The transition to the cloud did not come without cost, as ERP implementation projects typically do. Earlier in the year, the retailer acknowledged that the switch negatively impacted its revenue for the full year ending April 2023.

See the original post:
Africa's cloud market is small but growing fast, and everyone wants a ... - TechCabal


Most data lives in the cloud. What if it lived under the sea? – The Conversation

Where is the text you're reading, right now? In one sense, it lives on the internet or in the cloud, just like your favourite social media platform or the TV show you might stream tonight.

But in a physical sense, it's stored and transmitted somewhere in a network of thousands of data centres across the globe. Each of these centres is whirring, buzzing and beeping around the clock, to store, process and communicate vast amounts of data and provide services to hungry consumers.

All this infrastructure is expensive to build and run, and has a considerable environmental impact. In search of cost savings, greater sustainability and better service, data centre providers are looking to get their feet wet.

Tech giant Microsoft and other companies want to relocate data centres into the world's oceans, submerging computers and networking equipment to take advantage of cheap real estate and cool waters. Is this a good thing? What about the environmental impact? Are we simply replacing one damaging practice with another?

Microsoft's Project Natick has been pursuing the idea of data centres beneath the waves since 2014. The initial premise was that since many humans live near the coast, so should data centres.

An initial experiment in 2015 saw a small-scale data centre deployed for three months in the Pacific Ocean.

A two-year follow-up experiment began in 2018. A total of 864 servers, in a 12 by 3 metre tubular structure, were sunk 35 metres deep off the Orkney Islands in Scotland.

Microsoft is not the only company experimenting with moving data underwater. Subsea Cloud is another American company doing so. China's Shenzhen HiCloud Data Center Technology Co Ltd has deployed centres in tropical waters off the coast of Hainan Island.

Underwater data centres promise several advantages over their land-locked cousins.

1) Energy efficiency

The primary benefit is a significant cut in electricity consumption. According to the International Energy Agency, data centres consume around 1-1.5% of global electricity use, of which some 40% goes to cooling.

Data centres in the ocean can dissipate heat into the surrounding water. Microsoft's centre uses a small amount of electricity for cooling, while Subsea Cloud's design has an entirely passive cooling system.

2) Reliability

The Microsoft experiment also found the underwater centre had a boost in reliability. When it was brought back to shore in 2020, the rate of server failures was less than 20% that of land-based data centres.

This was attributed to the stable temperature on the sea floor and the fact oxygen and humidity had been removed from the tube, which likely decreased corrosion of the components. The air inside the tube had also been replaced with nitrogen, making fires impossible.

Another reason for the increased reliability may have been the complete absence of humans, which prevents the possibility of human error impacting the equipment.

Read more: The environmental cost of data centres is substantial, and making them energy-efficient will only solve half the problem

3) Latency

More than one third of the world's population lives within 100 kilometres of a coast. Locating data centres close to where people live reduces the time taken for data to reach them, known as latency.

Offshore data centres can be close to coastal consumers, reducing latency, without having to pay the high real-estate prices often found in densely populated areas.
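To put rough numbers on the latency argument (a back-of-the-envelope sketch, assuming light travels at roughly 200,000 km/s in optical fibre; real-world latency also includes routing and processing time):

# Approximate propagation delay for a data centre 100 km from the user,
# assuming ~200,000 km/s for light in optical fibre.
distance_km = 100
fibre_speed_km_per_s = 200_000.0

round_trip_ms = 2 * distance_km / fibre_speed_km_per_s * 1000
puts format("Round-trip propagation delay: ~%.1f ms", round_trip_ms)
# => ~1.0 ms, before any routing or server processing time is added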

4) Increased security and data sovereignty

Moving data centres into the ocean makes them physically more difficult for hackers or saboteurs to access. It can also make it easier for companies to address data sovereignty concerns, in which certain countries require certain data to be stored within their borders rather than transmitted overseas.

5) Cost

Alongside savings due to reduced power bills, fewer hardware failures, and the low price of offshore real estate, the way underwater data centres are built may also cut costs.

The centres can be made in a modular, mass-produced fashion using standardised components, and shipped ready for deployment. There is also no need to consider the comfort or practicality for human operators to interact with the equipment.

At present there is no evidence that placing data centres in the world's oceans will have any significant negative impact. Microsoft's experiments showed some localised warming, but the water just metres downstream of a Natick vessel would get a few thousandths of a degree warmer at most.

The Microsoft findings also showed the submerged data centre provided habitat to marine life, much like a shipwreck:

[...] crabs and fish began to gather around the vessel within 24 hours. We were delighted to have created a home for those creatures.

If underwater data centres go ahead, robust planning will be needed to ensure their placement follows best practice, considering cultural heritage and environmental values. There are also opportunities to enhance the environmental benefits of underwater data centres by incorporating nature-positive features in the design to enhance marine biodiversity around these structures.

Several companies are actively exploring, or indeed constructing, underwater data centres. While the average end-user will have no real awareness of where their data are stored, organisations may soon have opportunities to select local, underwater cloud platforms and services.

Companies with a desire to shout about their environmental credentials may well seek out providers that offer greener data centres, a change that is likely to only accelerate the move to the ocean.

So far, it looks like this approach is practical and can be scaled up. Add in the environmental and economic savings and this may well be the future of data centres for a significant proportion of the planet.

Read more: We are ignoring the true cost of water-guzzling data centres

Read the original here:
Most data lives in the cloud. What if it lived under the sea? - The Conversation


Web Development Degree: Do You Need One? – Dice Insights

The software development world is divisible into multiple subcategories. Web development is one of the most popular, and it requires a particular set of skills to do effectively, including extensive knowledge of front-end and back-end development. Which skills do you need, and should you think about acquiring a degree in web development in order to land a job?

Web development includes building and maintaining code for the front end (i.e., the browser and anything else the end-user sees) as well as the back end (i.e., the servers that power the web experience). The browser includes an entire software development platform within which software can run; such software is written in JavaScript and requires full programming skills.

The back end is typically a server running its own software that communicates with the front-end infrastructure. With a bank website, for instance, the browser will let you perform a search of your transactions; the browser will send the search request to the back-end server. That server, in turn, will likely look up matching data in a database that's running on yet another machine, and send that information back to the browser.
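A minimal sketch of the back-end piece of that flow, written here with the Sinatra and SQLite3 Ruby gems purely for illustration (the route, table and column names are assumptions; any back-end stack follows the same request, query, respond shape):

require "json"
require "sinatra"
require "sqlite3"

# Minimal back-end sketch: the browser sends a search request here, the
# server queries the database, and matching rows go back to the browser.
DB = SQLite3::Database.new("bank.db")

get "/transactions/search" do
  term = "%#{params["q"]}%"
  rows = DB.execute(
    "SELECT date, description, amount FROM transactions WHERE description LIKE ?",
    [term]
  )
  content_type :json
  rows.map { |date, desc, amount| { date: date, description: desc, amount: amount } }.to_json
end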

The web developer creates the software that runs in the browser, the back end, and possibly even the database.

Although a web application might feel like a single program, in reality it features multiple programs all working together.

Developing in the front end requires skills such as:

Although a full stack approach to software development puts a lot of focus on the front end and back end, there's a third tier: the database. Most web developers handle this part, as well. As such, a web developer must be proficient in various database technologies including:

The above skills relate specifically to the front end, back end, and database. But there are other skills you'll need to learn, including:

Finally, vital to web development, and nearly every career on the planet, are soft skills. These are non-technical skills. There are many, but communication and teamwork are probably the most important.

The most common program for web development is computer science; from there you can get a bachelor's, master's, and even a Ph.D. Most likely you would want to start with a bachelor's and wait until you have experience before getting a master's. Master's degrees tend to be highly specific, and there are relatively limited cases where one will give you an edge in terms of securing a job.

A computer science degree will teach you the foundations of software development, regardless of specialization. You'll learn topics like the basics of programming, algorithms, data structures, database design, computer architecture, and higher-level aspects of software engineering. Additionally, most computer science programs also include web development, with both back-end and front-end components. You'll also learn several programming languages.

For the most part, a typical bachelor's degree in computer science is a well-rounded program that will get your foot in the door for any specialization, including web development.

The short answer is no. Given the demand for tech professionals (and the historically low tech unemployment rate), more organizations than ever are willing to overlook the lack of a degree if you can prove you have the technical chops to do the job. Just make sure your resume, application materials, and potential job interview answers all put your skills in the best possible light.

If you don't want to get a degree, there are other options for getting your foot in the door with web development. Many companies want to see that you know the skills, regardless of whether you have a degree. Here are some options:

Self-taught. Many people have been successful with this approach: Google, read blogs and articles, watch YouTube videos, and read official documentation for the topic you're learning. The catch is you'll need to make sure you cover all the above skills, or find a coach who can guide you. It's incredibly easy to become isolated and not even know you're skipping important topics. So be vigilant about discovering what topics are important.

Bootcamps: Bootcamps have become popular over the past decade. These are programs that typically run three to six months and teach you specifically the skills you need, without the extracurricular courses a university requires, such as world history. With the rising cost of higher education, often resulting in massive student loans, bootcamps have become a good option for people who want to minimize their debt. The catch is that bootcamps are intense. They move quickly and you need to keep up. But many people like this because they finish the program in a fraction of the time of a bachelor's program.

Self-directed university courses: Some universities let you take the regular degree courses without working towards a degree. Many have strict entry requirements; for example, some require that you live in the same state as the university, and even then it's often up to the whims of the professors whether to allow you into the course. Nevertheless, this is an option, and it can even go along with the self-taught approach: Coursera has a selection of university-powered courses you can take in a number of tech disciplines, including web development, often on your own time.

Regardless of your approach to learning the material, perhaps the most important pro tip is to practice, practice, practice. Work on your web development every day. Try to learn something new every day. As each day passes, you will improve as you master the skills needed to enter this exciting career.

Continued here:
Web Development Degree: Do You Need One? - Dice Insights


Google Cloud C3D Shows Great Performance With AMD EPYC Genoa – Phoronix

Back in August Google Cloud announced the C3D instances powered by AMD EPYC 9004 "Genoa" processors, while only last week C3D was promoted to general availability. Curious about the performance of C3D after being impressed by AMD EPYC Genoa bare-metal server performance at Phoronix, as well as what I've seen with Genoa in the cloud at Microsoft Azure and Amazon EC2 / AWS, here are some benchmarks of the new C3D up against other GCE instances.

In Google's announcement last week they noted the general purpose C3D VMs can offer up to 45% performance increases over prior generation VMs. Given what I've seen out of Genoa and the great Zen 4 benefits like AVX-512 and transitioning to DDR5 system memory, these claims aren't all that surprising. Some of the specifics Google shared in their announcement were around 54% better NGINX performance, MySQL and PostgreSQL database servers up to 62%, and in-memory databases like Redis up to 60%.

Google Cloud offers C3D VMs up to 360 vCPUs (the C3D vCPUs are made up of a mix of physical cores and SMT sibling threads) and up to 2.8TB of DDR5 memory. For the purposes of this initial Google Cloud C3D benchmarking on Phoronix, I was focusing on the 60 vCPU C3D VM instance. For this article the following instance types were compared:

c3d-standard-60: The new C3D AMD EPYC Genoa instance with 60 vCPUs, powered by AMD EPYC 9B14.

c2d-standard-56: The prior generation AMD EPYC Milan instance with EPYC 7B13 processor. This instance was at 56 vCPUs with not having a 60 vCPU option.

n2d-standard-64: The AMD EPYC Rome based instance with EPYC 7B12 processors. Here the closest sized instance was 64 vCPUs.

c2-standard-60: The Intel Xeon Cascade Lake competition at 60 vCPUs for reference.

Unfortunately, Google's new C3 machine types, which are powered by the latest-generation Sapphire Rapids processors, are sized at 22 / 44 / 88 vCPUs (among other smaller and larger sizes), so there was nothing in the 56~64 vCPU range as tested with the other machine types. That is why no C3 instance was tested for this article, due to the starkly different sizing. In any case this article is mostly focused on the AMD EPYC generational performance in Google Cloud.

All of the tested instances were on Ubuntu 22.04 LTS. In addition to raw performance the performance-per-dollar based on the current hourly pricing was also tabulated.
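Performance-per-dollar itself is straightforward arithmetic; the sketch below uses invented scores and hourly prices (not Google's actual list prices or this article's results) just to show how such a tabulation works:

# Hypothetical performance-per-dollar calculation. The scores and hourly
# prices are made up for illustration; they are not real benchmark results
# or Google Cloud list prices.
instances = {
  "c3d-standard-60" => { score: 1450.0, usd_per_hour: 3.50 },
  "c2d-standard-56" => { score: 1000.0, usd_per_hour: 2.90 },
}

instances.each do |name, data|
  puts format("%-18s %6.1f score per USD/hour", name, data[:score] / data[:usd_per_hour])
end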


See original here:
Google Cloud C3D Shows Great Performance With AMD EPYC Genoa - Phoronix
