Category Archives: Cloud Computing
Google Cloud says Microsoft is seeking cloud computing monopoly – Reuters By Investing.com – Investing.com
Google Cloud, a division of Alphabet Inc. (NASDAQ:GOOGL), stepped up its critique of Microsoft's (NASDAQ:MSFT) cloud computing tactics on Monday, expressing concerns over the Windows maker's potential monopolistic behaviors, Reuters reported.
According to Google Cloud, Microsoft's practices could impede the advancement of nascent technologies, including generative artificial intelligence, the report says.
The move comes as Microsoft, along with Amazon (NASDAQ:AMZN), faces increasing scrutiny from regulatory bodies in the United Kingdom, the European Union, and the United States for their dominant positions in the cloud computing sector.
Google positions itself as the third-largest provider in this highly competitive market, trailing behind the leading duo.
The heightened attention towards Microsoft's strategic partnership with OpenAI, the entity behind ChatGPT, further amplifies these concerns.
"We worry about Microsoft wanting to flex their decade-long practices where they had a lot of monopoly on the on-premise software before and now they are trying to push that into cloud now," Amit Zavery, Google Clouds Vice President told Reuters.
"So they are creating this whole walled garden, which is completely controlled and owned by Microsoft, and customers who want to do any of this stuff, you have to go to Microsoft only," he said.
"If Microsoft cloud doesn't remain open, we will have issues and long-term problems, even in next generation technologies like AI as well, because Microsoft is forcing customers to go to Azure in many ways," Zavery said, referring to Microsoft's cloud computing platform.
Moreover, Zavery called for intervention from antitrust authorities, saying they should offer guidance and perhaps introduce rules to curb Microsoft's approach to expanding its Azure cloud business and prevent the extension of its on-premise dominance into cloud monopolization.
He also took issue with Microsoft's contracts with specific cloud providers, arguing that they overlook wider concerns.
Last month, the trade association CISPE announced it was negotiating with Microsoft to address its EU antitrust complaint regarding the company's licensing practices in cloud computing.
Microsoft denied Zavery's claims.
"We have listened to and work constructively and directly with independent cloud providers to change our licensing terms, addressing their concerns and providing more opportunity for them. Worldwide, more than 100 cloud providers have already taken advantage of these changes," a companys spokesperson said.
See the original post:
Google Cloud says Microsoft is seeking cloud computing monopoly - Reuters By Investing.com - Investing.com
How Blockchain and AI Is Combining to Reimagine the Cloud Computing Industry – Analytics Insight
As we stand at the precipice of the artificial intelligence revolution, blockchain technology is set to rein in its worst aspects and help form a democratized AI landscape.
Cloud computing has become an invaluable tool for internet service providers and application developers in the past decade, offering simplified, ready-to-use distributed hardware and infrastructure that reduces costs and manpower for upstart businesses.
Worth an estimated $495 billion in 2022, the value of the cloud computing industry is expected to near $2.5 trillion by 2032. But the rapid growth of cloud computing isn't without its concerns.
Consider that 45% of data breaches are thought to emanate from cloud infrastructure alone, and it becomes clear that the rise of the cloud has also attracted a never-ending array of would-be hackers, who seek to scrape sensitive information (usually customers' personal details) to sell on the black market.
Facebook, LinkedIn, and Alibaba are just some of the household names to have experienced horrific data breaches in the past few years, with over 2.3 billion users' personal information stolen in those three breaches alone.
But now, developers have caught on to a way to apply the benefits of the blockchain to the cloud computing model, presenting an innovation that could also address another major data privacy concern looming on the horizon: artificial intelligence (AI).
Blockchain has earned a reputation as a secure, private, fault-tolerant solution for record-keeping, but its distributed nature, with nodes spread across thousands of computers around the world, also makes it an attractive prospect for decentralizing the cloud computing industry.
This moves cloud computing away from large, centralized data centers, creating less of a target for would-be hackers. Additionally, because blockchain networks are built using cutting-edge encryption techniques, a blockchain-based cloud brings much-enhanced security to the process of setting up on the cloud.
Combined with the fact that blockchain-based applications allow the end user to retain control over their own data at all times, it becomes clear that infusing blockchain into cloud computing can effectively eradicate all of the cloud's weak points in one fell swoop.
In the short time that artificial intelligence chatbots have been live, there have already been several major data breaches at companies like OpenAI, Microsoft, and many others. In addition, some AI chatbots are now being shown to be subject to human bias, dictated by the individuals or corporations who control what data they're trained on.
So what could the blockchain-cloud combination do to address the AI problem? The answer, according to NeuroChainAI founder Julius Serenas, lies in doing for AI what blockchain networks like Ethereum did for decentralized app creation.
"Just like Ethereum created distributed, community-driven infrastructure for blockchain developers, free from top-down interference, I foresee blockchain having a similarly democratizing effect on AI," said Serenas.
NeuroChainAI is a Decentralized AI as a Service (DAIAS) ecosystem which employs blockchain to host AI models, using the power of the distributed community to provide computational power and source and validate training data from the ground up. The company is also developing a DePIN (Decentralized Physical Infrastructure Network) that connects community GPUs to form a shared computation network crucial for the operation of AI models.
A system of gamification and rewards encourages the community of users to add to a high-quality data pool, which can then be leveraged by developers looking to build their own decentralized AI programs hosted on NeuroChainAI's blockchain network.
This grass-roots approach saves time, cuts costs, and ensures enhanced security thanks to the decentralized nature of the blockchain, but more importantly, according to Serenas, it all adds up to the creation of an open-source AI ecosystem where anyone can spin up their own bespoke AI programs backed by the power of a decentralized community.
"AI tools are poised to become part of our everyday lives in the very near future, and the all-encompassing role they're set to play is too important to be left to a small handful of giant tech companies. By applying the benefits of the blockchain, we can free AI from its corporate constraints and make it open-source: of the community, for the community, and by the community," added Serenas.
To recap: unlike the current plethora of corporately owned AIs, a blockchain-based AI ecosystem would eradicate data breaches (because users control their own data), ensure data suitability (because it is vetted by the community of users), reduce costs (because computing power can be gathered from a worldwide network of CPUs and GPUs), and foster a community-driven, collaborative approach to AI creation.
"Our AI-based future is already assured," says Serenas. "We just need to make sure it's a future that we as users are in control of," he added.
Excerpt from:
How Blockchain and AI Is Combining to Reimagine the Cloud Computing Industry - Analytics Insight
Amazon executive credits cloud Web Services for AI boom – Quartz
When it comes to generative AI, the companies that come to mind are Google, Meta, and OpenAI. But don't count Amazon out of the AI race.
One of Amazon's biggest focuses is cloud computing, where the company's Amazon Web Services is a market leader. The online tech and retail giant has also built in-house chips to power the cloud and has partnered with OpenAI rival Anthropic on infrastructure that enables large-scale AI modeling.
Deepak Singh, an AWS vice president, talked to Quartz about how generative AI is affecting Amazon. Here are some highlights from the interview.
Amazon has been using machine learning for a long time, Singh said, pointing to the robotics in its fulfillment centers and its cashierless Amazon Go stores. When it comes to generative AI, he said, Amazon has invested early in two areas. One is providing the cloud infrastructure for those building AI models: last year, for example, Amazon launched Bedrock, a service that helps AWS users build and scale AI models. The other is providing features that help AWS customers build their generative AI applications faster.
In other words, rather than being a leader in what are known as large language models, which power generative AI products like chatbots, Amazon is focused on providing the infrastructure to house these AI models.
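To make the Bedrock piece concrete, here is a minimal sketch of how an AWS customer might call a hosted foundation model through Bedrock's runtime API with Python and boto3. The region, model ID, and request payload are assumptions chosen for illustration (they follow the format used by Anthropic's Claude text-completion models on Bedrock), not details taken from the interview.

```python
import json
import boto3

# Bedrock exposes hosted foundation models behind a single runtime API.
# Assumed region and model ID; available models vary by account and region.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

payload = {
    "prompt": "\n\nHuman: In one sentence, what does a cloud region provide?\n\nAssistant:",
    "max_tokens_to_sample": 200,
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # hypothetical choice of hosted model
    body=json.dumps(payload),
    contentType="application/json",
    accept="application/json",
)

# The response body is a streaming object containing the model's JSON output.
result = json.loads(response["body"].read())
print(result["completion"])
```

Swapping the model ID and payload format is all that is needed to target a different hosted model, which is the kind of flexibility the service is pitched on.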
"Generative AI very directly was enabled because the cloud exists," Singh said.
"Both [Amazon CEO Andy Jassy] and [AWS CEO Adam Selipsky] like to say that we are early in a marathon," he added. "We are learning new lessons every day about what large language models are capable of right now."
"If you're a software developer, and especially if you're a new software developer, chances are you're using generative AI as part of your day-to-day experience," Singh said. Large language models, or LLMs, are good at coding in part because there's a lot of open-source code out there in the world that AI systems can learn from, he said.
Singh added that Amazon employees are starting to use the companys own internal AI chatbots to write marketing copy.
"When AWS started, I still had to continue to answer a lot of questions like, why is the bookstore building [cloud] infrastructure?" Singh said. "You know, I haven't heard that one in 15 years now, or 14 years or something like that. It's been a long time." At the time, companies were also more hesitant to migrate to the cloud, which helps businesses run their websites and applications.
But now, Singh said, "your biggest, most stodgy bank has jumped right into the deep end [of generative AI]."
"But again," he added, "I think because they've adopted the cloud over the last several years, it's been easier for them to do that."
Read more:
Amazon executive credits cloud Web Services for AI boom - Quartz
Some of the world’s biggest cloud computing firms want to make millions of servers last longer – doing so will save … – Yahoo! Voices
Some of the world's largest cloud computing firms, including Alphabet, Amazon, and Cloudflare, have found a way to save billions by extending the lifespan of their servers - a move expected to significantly reduce depreciation costs, increase net income, and contribute to their bottom lines.
Alphabet, Google's parent company, started this trend in 2021 by extending the lifespan of its servers and networking equipment. By 2023, the company decided that both types of hardware could last six years before needing to be replaced. This decision led to the company saving $3.9 billion in depreciation and increasing net income by $3.0 billion last year.
These savings will go towards Alphabet's investment in technical infrastructure, particularly servers and data centers, to support the exponential growth of AI-powered services.
Like Alphabet, Amazon also recently completed a "useful life study" for its servers, deciding to extend their working life from five to six years. This change is predicted to contribute $900 million to net income in Q1 of 2024 alone.
Cloudflare followed a similar path, extending the useful life of its server and network equipment from four to five years starting in 2024. This decision is expected to result in a modest impact of $20 million.
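The accounting mechanism behind these savings is straightforward: under straight-line depreciation, stretching the assumed useful life of the same hardware spreads its cost over more years, so the annual depreciation expense falls and reported net income rises. The sketch below illustrates the arithmetic with a deliberately made-up fleet cost; the companies' actual figures come from their own filings.

```python
def annual_straight_line_depreciation(asset_cost: float, useful_life_years: int) -> float:
    """Annual depreciation expense under the straight-line method."""
    return asset_cost / useful_life_years

# Hypothetical server fleet cost, purely for illustration.
fleet_cost = 30_000_000_000  # $30B of servers on the balance sheet

at_five_years = annual_straight_line_depreciation(fleet_cost, 5)
at_six_years = annual_straight_line_depreciation(fleet_cost, 6)

print(f"Annual depreciation over 5 years: ${at_five_years:,.0f}")
print(f"Annual depreciation over 6 years: ${at_six_years:,.0f}")
print(f"Reduction in annual expense:      ${at_five_years - at_six_years:,.0f}")
```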
Tech behemoths are facing increasing costs from investing in AI and technical infrastructure, so any savings that can be made elsewhere are vital. The move to extend the life of servers isn't just a cost-cutting exercise, however; it also reflects continuous advancements in hardware technology and improvements in data center design.
See the original post:
Some of the world's biggest cloud computing firms want to make millions of servers last longer – doing so will save ... - Yahoo! Voices
Docker Build Cloud: Using Cloud Compute and Cache to Decrease Image Build Times – InfoQ.com
Docker recently announced the general availability of their cloud-based container image builder, Docker Build Cloud. Docker Build Cloud offers a remote shared cache and native builders for AMD64 and ARM64 CPU architectures, with the goal of "improving collaboration" and decreasing image build times.
Docker is a virtualisation technology that provides a loosely isolated environment for an application at runtime. Each environment, called a container, has to be built prior to its execution on a host machine and is distributed as an immutable collection of files called an Image. Each file within the collection is referred to as a layer and is a compressed set of changes to the filesystem. The Docker Builder is responsible for converting a series of instructions in a Dockerfile to an image.
How Dockerfiles get converted to Images (Source: Docker User Guide)
Since Docker's launch in 2013, containers have become a ubiquitous mechanism for achieving the vision of "build once, run anywhere," a notion originally made popular by Sun Microsystems to highlight the cross-platform operability of the Java language. A key enabler of this success within the Linux ecosystem has been the use of QEMU, an open-source machine emulator and virtualizer. While this allowed users to build images on a host of one CPU architecture, for example ARM32v7, and run them on others, it also became a pain point due to the relatively slow emulation process. In late 2019, Docker introduced Remote Builders as a mechanism to resolve this challenge.
With Remote Builders, users could register a host machine accessible over mutual transport layer security (mTLS) or secure shell (SSH) as a node for building images. By registering nodes of different target architectures and referencing them during the build stage, users could build once, or multiple times at once, and run native images without emulation. The major downside of Remote Builders at the time of the functionality's release was the requirement for users to set up and maintain a fleet of builder nodes. Docker Build Cloud aims to remove this operational burden by offering builders as a service.
How a local Docker client uses a Remote Builder (Source: Docker Remote Builders blog post)
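For context, the kind of build that remote builders and Docker Build Cloud accelerate is a multi-platform `docker buildx build`. The sketch below wraps that standard CLI invocation in a small Python script, since the article itself contains no code; the builder name, image tag, and registry are hypothetical.

```python
import subprocess

# Run a multi-platform build on a previously registered buildx builder.
# "--platform" requests native AMD64 and ARM64 images; "--push" publishes
# the resulting multi-arch manifest to a registry. Names below are made up.
subprocess.run(
    [
        "docker", "buildx", "build",
        "--builder", "my-remote-builder",
        "--platform", "linux/amd64,linux/arm64",
        "--tag", "registry.example.com/demo/app:latest",
        "--push",
        ".",
    ],
    check=True,
)
```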
Docker Build Cloud nodes run as Amazon EC2 instances with dedicated EBS volumes. The EC2 specification begins with 8 vCPUs and 16GB of RAM for customers on Personal and Pro plans, and goes up to 16 vCPUs with 32GB of RAM for Team and Business customers. All Docker plans include a limited number of build minutes shared across an organization, with Docker Build Cloud Team sold as an add-on starting from $5 per user per month. The Docker Build Cloud add-on offers an additional 200 build minutes per user and a 200GB shared cache across the organization. A rough cost estimate of an equivalent remote builder configuration self-hosted on AWS gave a monthly running cost of $850, which suggests that Docker Build Cloud, at $5 per user, is priced favorably for teams with fewer than 170 engineers.
AWS Cost Estimate for an equivalent self-hosted configuration to Docker Build Cloud (Source: AWS Pricing Estimator)
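Reproducing the article's rough break-even arithmetic, the sketch below compares the quoted ~$850/month self-hosted estimate with the $5 per user per month add-on price; it deliberately ignores included build minutes and cache limits.

```python
# Figures taken from the article; everything else here is illustrative.
SELF_HOSTED_MONTHLY_COST = 850.0  # estimated AWS cost of an equivalent builder fleet
ADDON_PRICE_PER_USER = 5.0        # Docker Build Cloud Team add-on, per user per month

break_even_team_size = SELF_HOSTED_MONTHLY_COST / ADDON_PRICE_PER_USER
print(f"Break-even team size: {break_even_team_size:.0f} engineers")  # -> 170

# Below roughly 170 engineers, the managed add-on costs less per month than
# self-hosting an equivalent builder fleet; above that, self-hosting wins on price.
```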
Finally, Docker Build Cloud is currently only available in the AWS US East region and further information on getting started can be found in the user guide.
Read the rest here:
Docker Build Cloud: Using Cloud Compute and Cache to Decrease Image Build Times - InfoQ.com
Leonardo: kick off for the project of the first Space Cloud System for Defense – EDR Magazine
Rome, 19/02/2024 – Supercomputers, artificial intelligence, and cloud computing aboard a constellation of cyber-secure satellites orbiting the Earth: this is the objective of the Military Space Cloud Architecture (MILSCA) study project assigned to Leonardo by the Italian Ministry of Defense (through the contracting agency Teledife) as part of the National Military Research Plan (PNRM).
For the first time in Europe, and similarly to what happens with the terrestrial cloud, the project intends to define a space architecture capable of providing the government and national Armed Forces with high-performance computing and storage capacity directly in space.
The system, designed with integrated cyber security models, will guarantee greater speed and flexibility in processing and sharing information. The Space Cloud, which will be tested by creating a digital twin of the architecture, will be able to store over 100 terabytes of data generated on Earth and in space on board each constellation satellite. Each satellite can perform processing with a power exceeding 250 TFLOPS (250 thousand billion operations per second) at single precision, adopting advanced algorithms that use artificial intelligence, machine learning techniques, and extensive data analysis, and can communicate and exchange data autonomously with other satellites.
A cyber-secure supercomputer and archive system in space will guarantee users access to strategic data (such as communication, Earth observation, and navigation data) anywhere, even in the most remote places, and at any time. Furthermore, a Space Cloud system significantly reduces data processing times, since data is processed directly in orbit, providing real-time information and thus facilitating multi-domain and multi-nation operations. Transmission networks will also be left free for other connections, since only the information of interest is transferred to Earth. In addition, storing data in orbit will provide a useful back-up of Earth-based centers, which are more exposed to natural disasters.
The project sees Leonardo at the forefront, with the participation of the joint ventures Telespazio (67% Leonardo, 33% Thales) and Thales Alenia Space (67% Thales, 33% Leonardo). With a duration of 24 months, the study includes a first phase for defining the architecture and a second phase that will end with the development of a digital twin of the satellite, complete with the HPC and the multi-constellation satellite terminal demonstrator. The goal is to simulate the different application scenarios in a digital environment. These tests will be carried out on Leonardo's supercomputer, the davinci-1, among the most powerful aerospace and defense HPCs in the world in terms of computing power and performance. The study will be a precursor to a further experimental phase which, if confirmed, will involve the deployment in orbit of a demonstrative constellation of satellites.
Space Cloud is a hi-tech, multi-domain project that takes advantage of Leonardo's combined capabilities in data acquisition, management, and cyber protection, as well as in artificial intelligence and supercomputing with the davinci-1 HPC; the development of MILSCA is the first project in the space domain that fits within the growth guidelines of Leonardo's new Industrial Plan.
"In a multi-domain scenario, management, security, and rapid exchange of an ever-increasing amount of data, much of which is tactical, become strategic elements for the country's defense. We will be the first in Europe to develop a Space Cloud project, demonstrating the feasibility and benefits deriving from the use of an architecture of this type and enabling a new paradigm of cloud & edge computing," said Simone Ungaro, Leonardo's Chief Innovation Officer. "Leonardo's know-how will allow the development of a Space Cloud network to contribute to digitalization and technological innovation processes, responding to future challenges to guarantee the needs of government and the national Armed Forces."
The Space Cloud for Defense project also lays the groundwork for future uses supporting civil Earth observation programs and space exploration missions to the Moon and Mars, which could benefit from an in-orbit cloud computing architecture to download and process data more quickly.
See the original post here:
Leonardo: kick off for the project of the first Space Cloud System for Defense - EDR Magazine
Why companies continue to struggle with cloud visibility and code vulnerabilities – CloudTech News
A new report from the Cloud Security Alliance (CSA) has highlighted further difficulties organisations are facing in security remediation and in achieving visibility from code to cloud.
The report, produced in collaboration with security firm Dazz, polled just over 2,000 IT and security professionals to better understand current cloud environments and security tools. The results did not inspire confidence.
Less than a quarter (23%) of the organisations polled reported full visibility into their cloud environments. Around two-thirds (63%) consider duplicate alerts either a moderate or significant challenge, while a similar number (61%) use between three and six different detection tools.
At the code level, just under two in five (38%) of those polled said that between 21% and 40% of their code contains vulnerabilities. Some 4% said more than 80% of their code was vulnerable, while only just over a quarter (27%) of respondents were confident in the security of at least 80% of their code.
The report also found that more than half of the vulnerabilities addressed by organisations tended to recur within a month of being remediated. The causes of such recurrences are myriad; the report cited limited resources, insufficient expertise, and the inherent complexity of vulnerabilities as possible factors.
Manual overhead is considered another issue. The report noted general inefficiencies in organisational practices, with "initial phases of vulnerability management appear[ing] to consume a disproportionate amount of time". Three-quarters of the organisations analysed said their security teams spend at least 20% of their time performing manual tasks when addressing alerts. The report added that a lack of definition in roles could be a symptom, while automation of remediation processes remains underutilised.
In total, more than 70% of organisations polled said they had either limited or moderate visibility from code to cloud.
"As cybersecurity threats evolve, organisations must adapt by seeking better visibility into their code to cloud environment, identifying ways to accelerate remediation, strengthening organisational collaboration, and streamlining processes to counter risks effectively," the report concluded.
You can read the full report by visiting the CSA website (pdf).
Read the original post:
Why companies continue to struggle with cloud visibility and code vulnerabilities - CloudTech News
Report: NVIDIA Forms Custom Chip Unit for Cloud Computing and More – AnandTech
With its highly successful A100 and H100 processors for artificial intelligence (AI) and high-performance computing (HPC) applications, NVIDIA dominates AI datacenter deployments these days. But among large cloud service providers, as well as in emerging device categories like software-defined vehicles (SDVs), there is a global trend towards custom silicon. And, according to a report from Reuters, NVIDIA is putting together a new business unit to take on the custom chip market.
The new business unit will reportedly be led by vice president Dina McKinney, who has a wealth of experience from working at AMD, Marvell, and Qualcomm. The new division aims to address a wide range of sectors, including automotive, gaming consoles, data centers, telecom, and others that could benefit from tailored silicon solutions. Although NVIDIA has not officially acknowledged the creation of this division, McKinney's LinkedIn profile as VP of Silicon Engineering lists her involvement in developing silicon for 'cloud, 5G, gaming, and automotive,' hinting at the broad scope of the reported business unit.
Nine unofficial sources across the industry confirmed the existence of the division to Reuters, but NVIDIA has remained tight-lipped, only pointing to its 2022 announcement regarding the integration of its networking technologies into third-party solutions. According to Reuters, NVIDIA has initiated discussions with leading tech companies, including Amazon, Meta, Microsoft, Google, and OpenAI, to investigate the potential for developing custom chips. This hints that NVIDIA intends to extend its offerings beyond conventional off-the-shelf datacenter and gaming products, embracing the growing trend towards customized silicon solutions.
While using NVIDIA's A100 and H100 processors for AI and high-performance computing (HPC) instances, major cloud service providers (CSPs) like Amazon Web Services, Google, and Microsoft are also advancing their own custom processors to meet specific AI and general computing needs. This strategy enables them to cut costs as well as tailor the capabilities and power consumption of their hardware to their particular needs. As a result, while NVIDIA's AI and HPC GPUs remain indispensable for many applications, an increasing portion of workloads now runs on custom-designed silicon, which means lost business opportunities for NVIDIA. This shift towards bespoke silicon solutions is widespread, and the market is expanding quickly. Essentially, instead of fighting the custom silicon trend, NVIDIA wants to join it.
Meanwhile, analysts are painting an even bigger picture. Well-known GPU industry observer Jon Peddie Research believes that NVIDIA may be interested in addressing not only CSPs with datacenter offerings, but also the consumer market, due to its huge volumes.
"NVIDIA made their loyal fan base in the consumer market which enabled them to establish the brand and develop ever more powerful processors that could then be used as compute accelerators," said JPR's president Jon Peddie. "But the company has made its fortune in the deep-pocked datacenter market where mission-critical projects see the cost of silicon as trivial to the overall objective. The consumer side gives NVIDIA the economy of scale so they can apply enormous resources to developing chips and the software infrastructure around those chips. It is not just CUDA, but a vast library of software tools and libraries."
Back in the mid-2010s, NVIDIA tried to address smartphones and tablets with its Tegra SoCs, but without much success. However, the company managed to secure a spot supplying the application processor for the highly successful Nintendo Switch console, and it certainly would like to expand this business. The consumer business allows NVIDIA to design a chip and then sell it to one client for many years without changing its design, amortizing the high costs of development over many millions of chips.
"NVIDIA is of course interested in expanding its footprint in consoles right now they are supplying the biggest selling console supplier, and are calling on Microsoft and Sony every week to try and get back in," Peddie said. "NVIDIA was in the first Xbox, and in PlayStation 3. But AMD has a cost-performance advantage with their APUs, which NVIDIA hopes to match with Grace. And since Windows runs on Arm, NVIDIA has a shot at Microsoft. Sony's custom OS would not be much of a challenge for NVIDIA."
The rest is here:
Report: NVIDIA Forms Custom Chip Unit for Cloud Computing and More - AnandTech
Akamai Takes Cloud Computing to the Edge with Gecko Initiative – ITPro Today
Akamai Technologies unveiled an ambitious new strategy today dubbed Generalized Edge Compute, or Gecko, that aims to embed cloud computing capabilities into the company's massive global network edge.
Akamai announced Gecko at the same time it reported its fourth-quarter fiscal 2023 financial results. For the quarter, Akamai reported revenue of $995 million, up 7% year-over-year, with full-year revenue coming in at $3.8 billion for a 5% gain over 2022.
The Gecko initiative builds on Akamai's acquisition of Linode in 2022 for $900 million. Since bringing the smaller cloud provider onboard, Akamai has rolled out 25 core computing regions worldwide and outlined a strategy it calls the Connected Cloud.
With Gecko, the goal is to inject smaller Linode-like compute capacity directly into the Akamai edge network.
"Akamai's new initiative, code-named Gecko, which stands for Generalized Edge Compute, combines the computing power of our cloud platform with the proximity and efficiency of the edge to put workloads closer to users than any other cloud provider," Akamai CEO Tom Leighton said during his company's earnings call.
With Gecko, Akamai plans to leverage its network of more than 4,100 edge locations around the world to run compute workloads closer to end users and devices. According to Leighton, traditional cloud providers support virtual machines and containers in a relatively small number of core data centers.
Gecko, however, is designed to extend cloud capability to Akamai's edge points of presence (POPs). As such, he said Akamai is bringing full-stack computing power to hundreds of previously hard-to-reach locations.
Leighton said Akamai aims to embed compute with support for virtual machines into about 100 cities by the end of the year.
"We've deployed new Gecko-architected regions in four countries already, as well as in cities that lack a concentrated hyperscaler presence," he said.
The new Gecko-architected regions include Hong Kong Special Administrative Region (SAR); Kuala Lumpur, Malaysia; Querétaro, Mexico; and Johannesburg, South Africa. Additionally, Gecko is coming to a number of cities, including Bogotá, Colombia; Denver; Houston; Hamburg, Germany; and Marseille, France.
Early customer trials of Gecko are underway. Akamai expects media, gaming, artificial intelligence/machine learning (AI/ML), and internet of things (IoT) applications to be early adopters of general edge computing capabilities.
Leighton said that early feedback about Gecko from some enterprise customers has been positive.
"Their early feedback has been very encouraging, as they evaluate Gecko for tasks such as AI inferencing, deep learning for recommendation engines, data analytics, multiplayer gaming, accelerating banking transactions, personalization for e-commerce, and a variety of media workflow applications, such as transcoding," he said. "In short, I'm incredibly excited for the prospects of Gecko as we move full-stack compute to the edge."
View original post here:
Akamai Takes Cloud Computing to the Edge with Gecko Initiative - ITPro Today