Category Archives: Cloud Servers

Akamai Grows Connected Cloud Effort with New Sites and Services – ITPro Today

Akamai continued to advance its cloud ambitions this week, with the announcement of new services and cloud locations around the world.

For much of its history, Akamai was primarily known as a content delivery network (CDN) and security service provider. That changed in 2022, when Akamai acquired alternative cloud provider Linode in a $900 million deal. Earlier this year, Akamai outlined its Connected Cloud strategy, which ties together its CDN and edge network assets with its Linode public cloud footprint, an approach designed to create more value for end users.

With this week's news, Akamai is delivering on some of its Connected Cloud promises by expanding its public cloud sites and services. The new services include premium cloud instances, more object storage capacity, and a new global load balancer. The five new sites are located in Paris; Washington, D.C.; Chicago; Seattle; and Chennai, India.

Hillary Wilmoth, director of product marketing, cloud computing, at Akamai, told ITPro Today that the locations of new sites were selected based on customer feedback and to bring computing services closer to users as part of the Akamai Connected Cloud strategy.

"These sites start to marry the cloud computing of Linode with the CDN scale of Akamai," she said.

A primary element of any public cloud service is some form of virtual compute instance. As part of the new updates, Akamai introduced what the company refers to as "premium" instances.

Premium instances guarantee compute resources, a minimum processor model (currently an AMD EPYC 7713), and easy upgrades to newer hardware as it becomes available, Wilmoth explained.

"Premium CPU instances enable businesses to design their applications and infrastructure around a consistent bare minimum performance spec," she said.

Storage capacity is also getting a big boost. Akamai is now expanding its object storage bucket size to a maximum of 1 billion objects and 1 petabyte of data per storage bucket, representing a doubling of prior limits.

The other new piece of Akamai's public cloud service is the Akamai Global Load Balancer. The new load balancer expands on capabilities that Linode had been offering with its NodeBalancers service. Linode NodeBalancers direct traffic from the public internet to instances within the same data center, Wilmoth said. For example, a NodeBalancer can distribute traffic evenly between a cluster of web servers.

In contrast, she said the Akamai Global Load Balancer provides load balancing based on performance, weight, and content (HTTP/S headers, query strings, etc.) while being able to support multi-region or multicloud deployments, independent of Akamai.

The new sites and services are all part of the momentum that Akamai has been building since its acquisition of Linode.

Wilmoth said the vision is to build a cloud for the future that challenges the existing centralized design of current cloud architectures. Looking forward, she said to expect the Akamai Connected Cloud to include additional core sites as well as distributed sites designed to bring powerful compute resources to hard-to-reach locations. Akamai is also planning to expand capabilities and capacity for object storage, including multi-cluster support for horizontal scaling and automated bucket placement to optimize resource utilization.

"We'll continue to support open source, cloud-native, and partner integrations in pursuit of portable cloud workloads that align with best practices for multicloud," Wilmoth said.

Here is the original post:
Akamai Grows Connected Cloud Effort with New Sites and Services - ITPro Today

Ingine and QuoVadis announce partnership – Royal Gazette

Updated: Jul 12, 2023 07:23 AM

Bermudian-based cloud solutions: Gavin Dent, QuoVadis CEO and Fernando De Deus, Ingine CEO (Photograph supplied)

As Bermuda companies increasingly seek out the next generation of technologies, a new collaboration aims to meet that demand by offering enhanced cloud-based IT solutions.

Managed services provider Ingine is pairing its IT services with those of QuoVadis, the provider of managed data-centre, co-location, and cloud hosting services to the local and international business communities.

Ingine will now offer a complete suite of cloud solutions to new and existing customers as a managed service, helping them fully digitalise their workplace through tighter integration with existing applications, software, and IT management systems.

The company said the integration of these cloud solutions offers customers an end-to-end managed service, cloud migration expertise and access to Ingine's extensive capabilities in managing complex technology environments.

Fernando De Deus, chief executive officer of Ingine, commented: "Bermuda is fast becoming a hotspot for companies investing in the latest innovative technologies and our strategic partnership ensures they have the right infrastructure and security to be successful.

"We've seen many local businesses adopt cloud computing in an effort to become more agile, lower IT costs, and have the ability to scale, so it's important the solutions are strategically managed to avoid costly inefficiencies and security risks.

"By joining forces, Ingine will be able to leverage QuoVadis' expertise and advanced infrastructure platform to deliver Bermuda-based cloud solutions which encompass servers, storage, networking, security, and trusted IT specialists.

"This will help Bermuda-based companies to access the full benefits of a managed infrastructure service.

"What's more, we are working with Bermuda's first technology training centre, ConnecTech, to offer clients bespoke training to adopt new technologies effectively and drive operational excellence from within their teams."

Gavin Dent, QuoVadis chief executive, added: "There's no one-size-fits-all solution when investing in cloud-based infrastructure.

"By offering our partnered services to Bermuda clients we are helping more businesses to thrive and remain secure.

"We also believe that true transformation requires investment in a digital workplace, which is why the decision to work collaboratively with ConnecTech is going to ensure that what we do is sustainable for local businesses."

Read the original here:
Ingine and QuoVadis announce partnership - Royal Gazette

SITA's passenger processing solutions to power Airports Authority of India across 43 airports – Travel Daily

Indian businesswoman checking flight information at an airport

SITA secured a landmark deal with the Airports Authority of India to support one of the biggest growth markets globally, providing technology to 43 of India's biggest airports.

India's civil aviation sector is among the fastest-growing aviation markets globally and will be a major growth engine in making India a USD 5 trillion economy by 2024.

The deal will see improvements at over 2,700 passenger touchpoints, paving the way for the adoption of new-age solutions to meet the modern passenger's expectations. Initially deployed across 43 airports, the technologies are scalable to an additional 40 airports over the next seven years. Over 500 million passengers are expected to be processed during this period.

The rollout of new cloud technology will enable Indian airports to shift to common-use passenger experiences where multiple airlines can leverage the same infrastructure, such as check-in counters, self-service kiosks, and boarding gates.

The adoption of cloud solutions also brings new agility and flexibility to scale airport operations efficiently as passenger numbers grow. The cloud-first approach enforces better security and offers airlines a platform to host new progressive technologies and move away from native applications. Centralized cloud hosting of all servers reduces on-premises infrastructure costs and results in centralized control, enabling proactive monitoring and control of services.

Sumesh Patel, President, Asia Pacific, SITA, said: "We're excited to partner with AAI on this large-scale deployment of leading passenger processing solutions. The number of airports in India is expected to increase from 148 today to 220 by 2025. The new airports will bring closer together India's almost 50 cities with populations exceeding one million people, creating substantial economic value in the long term. By connecting these cities better, air travel and transport will help unlock the full potential of India's economic growth too. Ensuring efficient and fluid operations and a seamless passenger experience at these airports will be critical to delivering on India's air transport industry opportunity."

The solutions will give passengers more control over their journey, offering a low-touch, efficient check-in, bag drop, and collection process through assisted and self-service mechanisms. The airports will benefit from a reduced infrastructure footprint and increased operational efficiency. At the same time, airlines will see service charges drop and can take advantage of an agile technology platform to build bespoke passenger experiences on.

Under the agreement, SITA will deploy its state-of-the-art solutions, including SITA Flex, CUPPS, SITA CUSS, and SITA Bag Manager. These IATA-certified platforms offer airlines and ground handlers the benefits of common-use technologies, enabling scalable operations to meet their specific operational requirements.

The Baggage Reconciliation System ensures a high level of baggage accountability, preventing losses and security concerns, which is critical to industry recovery amidst a spiraling baggage mishandling rate.

This project represents a significant shift towards adopting cutting-edge technological platforms, enhancing operational efficiencies, and paving the way for the future adoption of biometric passenger processing. Additionally, real-time dashboard-based information will be made available to governing agencies, promoting transparency and informed decision-making.

AAI officers at the airport and headquarters now have access to online real-time dashboards, replacing the monthly service availability reports. The solution ensures they are better informed and know the availability of systems across all airports at every point in time, optimizing efficiency and promoting smooth operations.

The project commenced in May 2022 and involved a comprehensive revamp of existing services without disrupting ongoing operations.

Originally posted here:
SITA's passenger processing solutions to power airports authority of India across 43 airports - Travel Daily

Cloud Server Market ICT Industry Global Latest Trends and Insights 2023 to 2030 IBM, HP, Dell – openPR

The global Cloud Server Market size was valued at $538.8 billion in 2022 and is projected to reach $1,212.9 billion by the end of 2029, growing at a CAGR of 15.9%. The cloud server market is expanding due to increased automation and agility, the need to deliver an enhanced customer experience, increased cost savings and return on investment, the rise of remote work culture, and growth in demand for cloud-based business continuity tools and services. Increased spending on cloud-based services, business expansion by large vendors across geographies to acquire an untapped customer base, the proliferation of digital content, an upsurge in internet usage, and the need for disaster recovery and contingency plans are also expected to drive market growth.

A Free Research Sample of Cloud Server Market is Available- https://www.infinitybusinessinsights.com/request_sample.php?id=1499107?utm_source=OP_PPS

In this age of digitization, organizations are forced to shift to the cloud for cost-effective and flexible data storage on an on-demand basis. Thus, governments across countries are investing in cloud server delivery models. Cloud servers reduce the cost of acquiring, establishing, running, and maintaining technological services. Governments can considerably increase productivity by streamlining their technology operations with cloud servers, particularly in the time it takes to process citizen-facing transactions. Cloud servers also enable governments to adapt quickly to user needs and grow public services as needed. IoT devices produce data that needs to be collected and processed either locally or remotely on a server, and remote data hosting and analytics are a more practical and cost-effective solution in many IoT applications.

Profitable players of the Cloud Server market are:

IBM, HP, Dell, Oracle, Lenovo, Sugon, Inspur, Cisco, NTT, SoftLayer, Rackspace, Microsoft, Huawei

Have Any Questions Regarding Global Cloud Server Market Report, Ask Our Experts- https://www.infinitybusinessinsights.com/enquiry_before_buying.php?id=1499107?utm_source=OP_PPS

Product types of the Cloud Server industry are:

IaaS (Infrastructure-as-a-Service), PaaS (Platform-as-a-Service), SaaS (Software-as-a-Service)

Applications of this report are:

Education, Financial, Business, Entertainment, Others

Accomplishing the best results in the business becomes easier with this effective Cloud Server Market study report. It simplifies handling business operations and attracting more consumers, and it helps business players identify business risks before moving ahead. Important market aspects are covered in this Cloud Server Market report to help newly entering key players survive in the competitive marketplace. The report also captures all the latest updates regarding market expansion and helps business owners improve product launches so they can survive in the long run.

To Remain 'ahead' of Your Competitors, Request for a Free Sample- https://www.infinitybusinessinsights.com/request_sample.php?id=1499107?utm_source=OP_PPS

Essential regions of the Cloud Server market are:

- Cloud Server North America Market includes (Canada, Mexico, USA)
- Cloud Server Europe Market includes (Germany, France, Great Britain, Italy, Spain, Russia)
- Cloud Server Asia-Pacific Market includes (China, Japan, India, South Korea, Australia)
- Middle East and Africa (Saudi Arabia, United Arab Emirates, South Africa)
- Cloud Server South America Market includes (Brazil, Argentina)

Global Cloud Server Market Research FAQs

- What is the study period of this market?
- What is the growth rate of Cloud Server Market?
- What is Cloud Server Market size?
- Which region has highest growth rate in Cloud Server Market?
- Which region has largest share in Cloud Server Market?
- Who are the key players in Cloud Server Market?

The Cloud Server market research report contains the following TOC:
1 Report Overview
1.1 Study Scope
2 Global Growth Trends
2.1 Global Cloud Server Market Perspective (2017-2030)
2.2 Growth Trends by Regions
2.3 Global Industry Dynamic
3 Competition Landscape by Key Players
3.1 Global Top Players by Revenue
3.2 Global Cloud Server Market Share by Company Type (Tier 1, Tier 2 and Tier 3)
3.3 Players Covered: Ranking by Cloud Server Revenue
4 Global Breakdown Data by Provider
4.1 Historic Cloud Server Market Size by Provider (2017-2023)
4.2 Forecasted Cloud Server Market Size by Provider (2023-2030)
5 Breakdown Data by End User
5.1 Historic Cloud Server Market Size by End User (2017-2023)
5.2 Forecasted Cloud Server Market Size by End User (2023-2030)
6 North America
7 Europe
8 Asia-Pacific
9 Latin America
10 Middle East and Africa
11 Key Players Profiles
12 Analyst's Viewpoints/Conclusions
13 Appendix

Browse Full Reports with TOC Here- https://www.infinitybusinessinsights.com/reports/global-cloud-server-market-global-outlook-and-forecast-2023-2030-1499107?utm_source=OP_PPS

Contact Us

Sales Co-Ordinator
International: +1-518-300-3575
Email: inquiry@infinitybusinessinsights.com
Website: https://www.infinitybusinessinsights.com
Facebook: https://facebook.com/Infinity-Business-Insights-352172809160429
LinkedIn: https://www.linkedin.com/company/infinity-business-insights
Twitter: https://twitter.com/IBInsightsLLP

About Us:

Infinity Business Insights is a well-known market research company that specializes in syndicated research, personalized research, and consultancy. We are committed to delivering data that perfectly fits our clients' business needs, thanks to a team of highly qualified analysts and a depth of expertise across many industrial disciplines. We deliver unique resilience and integrated methods due to our interdisciplinary expertise and constant dedication to excellence. We strive constantly to find the most promising market prospects and provide useful information to help your business grow in the market. Our goal is to give customized solutions to multifaceted business challenges, allowing for a more streamlined decision-making process.

This release was published on openPR.

Visit link:
Cloud Server Market ICT Industry Global Latest Trends and Insights 2023 to 2030 IBM, HP, Dell - openPR

How Broadcom's acquisition of VMware will accelerate multi-cloud … – ComputerWeekly.com

First impressions can be misleading. Case in point: when Broadcom first announced its plan to acquire VMware, initial concerns from regulators focused on perceived anti-competitive effects resulting from the deal.

Others suggested that VMware's value proposition could be diluted. Fortunately, since then, Broadcom has been making its strategic case with regulators and customers. Most notably, Broadcom president and CEO Hock Tan, in a blog post, outlined how a combined Broadcom and VMware will create new competitive pressure on the public cloud by accelerating private and multi-cloud capabilities for enterprises, while pledging $2bn a year for research and development (R&D) and improved VMware deployments.

So, were regulators' initial concerns about the deal justified? This article outlines why ABI Research believes this acquisition is a positive development for the market overall and, in fact, essential in the current cloud market.

Virtualisation companies like VMware create vendor-agnostic software that can operate on any server. VMware's products started with virtual machines and hypervisors. Today, its product offerings also include Kubernetes clusters (Tanzu), container-as-a-service (CaaS) products, and software that can automate and orchestrate private, public, and hybrid clouds. So, why has VMware not become a standard in the cloud market for large businesses and small and medium enterprises (SMEs)?

VMware's virtual machines and hypervisors were typically deployed by enterprises and SMEs to maximise workload productivity in their own datacentres. As hyperscalers' public clouds became competitive with on-premises datacentres, enterprises and SMEs moved their workloads away from on-premises servers and into the public cloud.

VMware has responded by developing private and multi-cloud products, but sales and adoption of these products are more complicated, due in part to the difficulty of deploying, orchestrating, and automating across different cloud domains. Moreover, VMware's ability to create alternatives to the public cloud and democratise multi-cloud deployments is challenged for many reasons.

VMware has a smaller R&D team relative to the hyperscalers, meaning that future market expansions will require significant spending. VMware may not be able to afford this. To meet the demands of multi-cloud environments, VMware would greatly benefit from a strategic partnership with a robust entity that has access to ample financial resources and advanced technical capabilities.

If VMware is to grow its multi-cloud footprint and position itself as a disruptive alternative, it is crucial to ensure a seamless transition when customers move workloads. VMware will benefit from Broadcoms investment in improving the ease of moving workloads.

Broadcom could play the role of a strong strategic partner, helping VMware to meet its multi-cloud challenges and ambitions. The company has a track record of bringing disruptive technologies to market and helping enterprises better position those technologies to achieve their business outcomes.

VMware's market capitalisation is nearly $60bn as of mid-2023, but this is dwarfed by public cloud heavyweights like Microsoft, for example, which has a market worth of $2.5tn and the ability to invest significantly in gaining and securing cloud market share.

Broadcom's strategic vision for VMware, and its commitment to invest in it, are essential if VMware is to evolve its market position and create viable alternatives to the public cloud. But why are regulators concerned?

Both the European Commission and the UK's Competition and Markets Authority (CMA) launched formal inquiries about the acquisition, citing concerns that Broadcom can use VMware's position in the virtualisation market to gain unfair advantages in discrete hardware products that interact with VMware software, such as network interface cards (NICs), storage adapters, and fibre channel host bus adapters.

ABI Research believes there are three main reasons why concerns about interoperability are misplaced in relation to Broadcom's acquisition of VMware:

First, Broadcom's business model has long been based around hardware and software products that are designed to function with no bias to any one vendor. Broadcom's products are integrated into hundreds of vendor products in the market, many of which are also competing with one another. Broadcom provides the infrastructure and chipset solutions to enable these products without bias.

Second, VMware's ubiquity and interoperability are key to Broadcom's broader strategic goals for the combined group. In commercial terms, Broadcom would undermine, if not destroy, the value of its acquisition if it were to risk VMware software's ubiquity to favour the few niche Broadcom hardware products that are of concern to regulators, that is, NICs, storage adapters, and fibre channel host bus adapters. As margins for VMware's software are significantly higher than for the hardware products of concern, it simply makes no sense to degrade a more profitable product to attempt to sell less profitable products.

And third, it would not be possible for Broadcom, technically speaking, to target particular hardware competitors, even were such a strategy to become commercially advantageous. Any degradation in VMware software's interoperability would apply to multiple hardware vendors, across the board, with detrimental commercial consequences.

Ultimately, while different market and product lifecycles would make any effort to interfere with a product's interoperability difficult and costly, it is against Broadcom's interests to try and gain an advantage with relatively inexpensive products. Doing so would cause reputational damage and put much larger investments in VMware at risk. Tan has already reiterated that several VMware products are key for the future, including Tanzu and VMware's CaaS layer, which is agnostic to underlying infrastructure and can run in private, public, or multi-cloud environments.

Today's market is not for faint-hearted service providers and is wholly dominated by hyperscalers. Looking at current dynamics, competition between leading public cloud providers is intense and the barriers to entry for new entrants are prohibitive.

Enterprises that have been lured to migrate workloads to the hyperscalers are left with limited options. Many do not have the capabilities to host their own workloads on-premises and are locked into a specific hyperscaler platform and fee structure.

These very same enterprises also report that it's difficult to decouple their products from hyperscaler platforms without incurring costs, and in almost all cases, it's not possible to port between hyperscaler platforms, again, without incurring costs and refactoring efforts, for example, when migrating from AWS to Google Cloud to reduce costs. These issues of cost and lock-in for customers, as well as lock-out for potential competitors, are central to a broader set of concerns that have prompted inquiries and investigations by regulators in the US, the EU, and the UK.

Broadcom's vision in acquiring VMware is to create products that allow enterprises to freely move their application payloads between private, public, and hybrid cloud environments. Right now, this is not possible without cost and effort. Broadcom aims to address the technological and cost gaps. Application portability is a challenge for enterprises using the public cloud, but VMware alone may not have the resources to develop a world-class solution. With Broadcom's experience and R&D investments, VMware will have a much stronger opportunity to compete and succeed in this market.

Hyperscalers, concentrated principally in the US and China, are not slowing down, and hundreds, if not thousands, of enterprises are making their application hosting choices right now. With a stronger VMware value proposition enabled by Broadcom investments, businesses could overcome the barrier of hyperscaler lock-in, gaining more flexibility to host their applications across multiple cloud environments according to their business needs.

Based on Broadcom's stated intentions, regulators should welcome its acquisition of VMware. The transaction will, in all likelihood, lead to a stronger VMware and a healthier multi-cloud ecosystem.

Dimitris Mavrakis, senior research director, manages ABI Research's telco network coverage, including telco cloud platforms, digital transformation, and mobile network infrastructure. Research topics include artificial intelligence (AI) and machine learning (ML) technologies, telco software and applications, network operating systems, software-defined networking (SDN), network functions virtualisation (NFV), long-term evolution (LTE) diversity, and 5G. ABI Research is a global technology intelligence firm delivering actionable research and strategic guidance to technology leaders, innovators, and decision makers around the world.

Read more here:
How Broadcom's acquisition of VMware will accelerate multi-cloud ... - ComputerWeekly.com

Green cloud will lead race to net zero – Technology Decisions

The Decade of Action is in full swing, and with the Australian Government making a big push towards net zero with its investment in a National Net Zero Authority, businesses are feeling the heat to act fast.

Fortunately, Australia is one of the most cloud-mature markets in the world. According to a recent global study by TCS, 81% of the top Australian enterprises agree that they have already achieved most, if not all, of their cloud-based goals for critical apps and workloads, in comparison to just 65% of global enterprises. This means businesses are in a prime position to lead the way in energy transition by leveraging the green cloud: a sustainable approach to the design, manufacture, use and disposal of IT resources. In fact, the majority of Australian businesses (70%) are already leveraging cloud technologies to achieve their sustainability goals.

Embracing this sustainable approach to cloud computing has several key benefits for businesses.

As with all technology, it comes with challenges. One is the cost of entry. For many businesses, the initial investment in projects for application modernisation and building sustainable cloud solutions may seem daunting. However, with the right roadmap and business case in place, organisations can assess the true benefits of these initiatives and make informed decisions.

Another challenge is the reluctance to move away from old application architectures and legacy models of IT operations and management. According to the latest global cloud study by TCS, the accumulated complexity and rigidity of business processes and operations is the single biggest obstacle for cloud adoption among more than half (53%) of Australian businesses. But with a clear understanding of the benefits of green cloud services and a commitment to sustainable practices, businesses can overcome this resistance and make strides towards net zero.

While the cost of entry for sustainability solutions may have been prohibitive, there is a surprising positive correlation between sustainability initiatives and profitability. According to news reports, 86% of Australian industry leaders now see a positive connection between taking environmental action and profitability. Around 56% of Australian businesses now believe addressing environmental issues will be material to business results within the next five years.

With sustainability front and centre for most business leaders, we are seeing the fusion of technology and business strategy to support ESG targets. We are being offered a unique opportunity to reboot into a new era, one where sustainability and prosperity are part of the same set of aspirations and the same success equation.

Image credit: iStock.com/imaginima

Continue reading here:
Green cloud will lead race to net zero - Technology Decisions

Cyber security concerns and best practices in EV charging … – EVreporter

Pulse Energy is a start-up based out of Bangalore that offers an energy-as-a-service API for EV charging. They predominantly cater to fleet operators and help their vehicles get access to multiple EV charging networks. In this article, Akhil Jp from Pulse Energy shares the cybersecurity concerns they have noticed in this space over the last two years and recommends the best practices that Charge Point Operators (CPOs) should follow to create a secure charging ecosystem.

In our estimate, 70% of DC chargers in India are insecure. The main security weaknesses we have observed in the Indian EV charging ecosystem are described below.

Today, it is possible for someone to snoop on the traffic between the charger and the server.

Here is a typical, simplified view of an EV charging network, where the green dots represent the EVs, light orange represents user information such as payments and user credentials, the dark orange boxes represent the charger management system (CMS), and the blue dots represent the chargers.

Image source: Pulse Energy

In the majority of cases, the communication link between the charger and the CMS today is insecure. In a basic charging setup, every charger has a LAN cable that runs all the way to the modem or the communication module. In an insecure system, one could place an interceptor on that link and start capturing traffic. The interceptor can easily be built by taking a Raspberry Pi and placing it between the charger and the modem; a simple Nginx reverse proxy server with websockets enabled can do the trick. It is not even expensive to build one; it can be done for INR 2,000 to 3,000. Most of the cabinets in public charging areas are not locked, so someone can open them and place these hardware interceptors. If you are a CPO, make sure that you talk to your charger OEMs about enabling TLS or secure websockets so that such threats can be avoided.

Image source: Pulse Energy

Many charger manufacturers do not support secure communication, although there are some who do and some who are working towards enabling it. Our attempts to promote secure communication are sometimes met with resistance from these manufacturers, as their hardware does not accommodate it.

These vulnerabilities can be exploited in a number of ways.

Every CPO is trying to enable easy charging access through their mobile app or website. I am sharing a few basic best practices that can be implemented with low effort.

Image source: Pulse Energy

Certificate pinning: If you have an EV charging app, make sure that you do certificate pinning. This is the process of ensuring that your app speaks only to your server, as it will trust only the certificate that your server provides. You can pin the root certificate if you want to avoid updating your app every time your domain certificate gets rotated. Certificate pinning helps secure the system against man-in-the-middle attacks.
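As a rough sketch of what pinning relies on (the hostname below is a placeholder, not a real CPO endpoint), the SHA-256 digest of the server's public key, which the app embeds and compares on every connection, can be computed with standard openssl tooling:

# Extract the leaf certificate's public key, convert to DER, hash and base64-encode
openssl s_client -connect api.example-cpo.com:443 -servername api.example-cpo.com </dev/null 2>/dev/null \
| openssl x509 -pubkey -noout \
| openssl pkey -pubin -outform der \
| openssl dgst -sha256 -binary \
| openssl enc -base64

The app then rejects any TLS connection whose certificate chain does not contain a key matching this digest.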

Enable secure websockets (TLS): Ask your charger OEM to start supporting secure websockets. Getting CMS vendors to enable TLS is easy, but it is of little use if your charger hardware does not support it. This can prevent MITM (man-in-the-middle) attacks between the charger and the cloud server.
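One low-effort way to check what a given endpoint actually speaks is to attempt the WebSocket upgrade by hand over TLS; in this sketch the host, path, charge point ID, and OCPP subprotocol are illustrative placeholders:

# Hand-rolled WebSocket handshake over TLS (wss); the key is the RFC 6455 sample nonce
curl -i -N --http1.1 \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  -H "Sec-WebSocket-Protocol: ocpp1.6" \
  https://cms.example-cpo.com/ocpp/CP001

A "101 Switching Protocols" response over https (that is, wss) rather than plain http (ws) indicates the charger-to-CMS link is at least encrypted in transit.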

Obfuscation: Enable code obfuscation within your EV charging app. Reverse engineering mobile apps is easy these days, and poor security can lead to leakage of hardcoded secrets and payment gateway keys. It is possible for someone to reconstruct entire API requests and figure out which keys are used for those APIs.

No hardcoded keys: There are applications and websites out there with hardcoded keys that can be used to start and stop charging sessions. One needs to actively avoid doing that.

Over the last couple of years, the Indian EV charging industry has been growing rapidly, and everyone has been trying to keep up. However, we have now reached an inflection point where we need to focus on strengthening our systems. This applies to us too; Pulse Energy is not perfect either. We have a long way to go, and each of us has to make trade-offs. However, it is crucial for every developer working in this field to be well-informed about security measures and to prioritize making their chargers and cloud interfaces more secure.

This article was first published in EVreporter July 2023 magazine.

Read the original post:
Cyber security concerns and best practices in EV charging ... - EVreporter

How to create and manage Amazon EBS snapshots via AWS CLI – TechTarget

While Amazon EC2 provides the ability to launch a variety of virtual servers, there is also the need to deploy data storage for each virtual server.

Amazon Elastic Block Store (EBS) volumes are virtual storage devices in the cloud that get attached to EC2 instances and provide internal storage for essential areas of any application, such as OS files, source code, libraries and configuration files. Given the critical nature of this data, enterprises need a reliable way to create and manage backups of EBS volumes.

Learn how EBS snapshots can help with disaster recovery, as well as how to create and manage them through the AWS Command Line Interface.

EBS snapshots are a point-in-time backup of an EBS volume. With EBS snapshots, users can control how data in an EBS volume is backed up, stored, managed and recovered. Developers can trigger the creation of an EBS snapshot using the AWS CLI or SDK, or through the EC2 console. Core features include incremental backups, copies across AWS Regions and accounts, and the option to move infrequently accessed snapshots into cold storage.

EBS snapshots are a good fit for creating regular backups of EBS volumes. For a solid disaster recovery strategy, users should create snapshots routinely and copy them to other AWS Regions and AWS accounts. EBS snapshots can also be transitioned to cold storage, an option worth considering to significantly reduce storage costs.

EBS snapshots cost $0.05 per GB, per month, for the standard tier. The cost to create and keep an Amazon Machine Image (AMI) depends solely on the EBS snapshots associated with that AMI. A 100 GB EBS snapshot or AMI would cost $5 per month. Subsequent snapshots of a particular EBS volume are charged based only on the blocks that have changed since the previous snapshot.

To reduce costs, users can transition snapshots to the Archive tier, which costs $0.0125 per GB, per month. In the Archive tier, the same 100 GB snapshot would then cost $1.25 per month. Data retrieval from the Archive tier is billed at $0.03 per GB. Be aware that there's a minimum 90-day retention period.

When users create an EBS snapshot, they can use it to configure an EC2 AMI. An AMI supplies a storage image, which a new EC2 instance requires at launch.

An active EC2 instance can also create an AMI. The EC2 service first creates one or more EBS snapshots, which are then automatically used for the new AMI. The number of snapshots produced depends on how many EBS volumes are attached to the source EC2 instance. The EC2 service creates one EBS snapshot per attached EBS volume.
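As a quick sketch (the instance ID and image name are placeholders), a single CLI call creates an AMI from a running instance, and the underlying EBS snapshots are generated automatically:

# Create an AMI from a running instance; EC2 snapshots each attached EBS volume
aws ec2 create-image \
    --instance-id i-0abcd1234example \
    --name "app-server-backup-ami"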

EC2 instances can be launched only from an existing AMI and not directly from an EBS snapshot. If an EBS snapshot is used as the source data to launch new EC2 instances, it must first be used to create an AMI. Then, the AMI will be the input when launching new EC2 instances.

Since AMIs are a way to manage the starting point of new EC2 instances, there is no need to create AMIs as frequently as EBS snapshots. Also, AWS recommends maintaining cross-region and cross-account copies of AMIs for disaster recovery purposes, as well as enabling encryption when creating EBS volumes and snapshots.

It's convenient to create and manage EBS snapshots through the AWS CLI. The create-snapshot command provides a way to quickly start snapshot creation. A basic invocation, using a placeholder volume ID and description, looks like the following example.
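# Create a snapshot of a single EBS volume
aws ec2 create-snapshot \
    --volume-id vol-0abcd1234example \
    --description "Daily backup of data volume"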

This command returns, among other data, the SnapshotId value of the recently created snapshot, as well as its creation state, the value of which will be Pending right after calling this command.

The describe-snapshots command in the following example offers a way to list and filter snapshots in your account.
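# List the snapshots owned by your own account
aws ec2 describe-snapshots --owner-ids self

(Without --owner-ids self, the listing can also include public snapshots that are accessible to the account.)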

This command also offers a useful filters parameter to narrow the list of returned EBS snapshots.
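For instance, the following sketch (the volume ID is again a placeholder) returns only the completed snapshots of one volume:

# Filter snapshots by source volume and state
aws ec2 describe-snapshots \
    --owner-ids self \
    --filters Name=volume-id,Values=vol-0abcd1234example Name=status,Values=completed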

As with any AWS CLI command, it's important to define region and profile parameters, or set them through the AWS_DEFAULT_REGION and AWS_PROFILE environment variables, respectively. The examples provided show basic scenarios. Keep in mind that the CLI offers additional parameters that deliver more options.
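For example, with placeholder values:

# Applies to every subsequent AWS CLI call in this shell session
export AWS_DEFAULT_REGION=us-east-1
export AWS_PROFILE=backup-admin

Setting these is equivalent to passing --region and --profile explicitly on each command.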

See more here:
How to create and manage Amazon EBS snapshots via AWS CLI - TechTarget

What’s Holding up WebAssembly’s Adoption? – The New Stack

The promise of WebAssembly is this: Putting applications in WebAssembly (Wasm) modules can improve their runtime performance and reduce latency, while improving compatibility across the board.

WebAssembly requires only a CPU instruction set. This means that a single deployment of an application in a WebAssembly module theoretically should be able to run and be updated on a multitude of disparate devices, whether servers, edge devices, multiclouds, serverless environments, etc.

In this way, WebAssembly is already being widely deployed to improve application performance when running in the browser or on the backend. However, the full realization of WebAssembly's potential has yet to be achieved.

While the WebAssembly core specification has become the standard, server-side Wasm remains a work in progress. The server-side Wasm layer helps ensure endpoint compatibility among the different devices and servers on which Wasm applications are deployed. Without a standardization mechanism for server-side WebAssembly, exports and imports must be built separately for each language, each runtime will understand exports and imports differently, and so on.

As of today, the component model defines Wasm components, but other varieties are being worked on; WASI is an approach that configures Wasm for specific hardware. wasi-libc is the POSIX-like kernel group, or "world"; wasi-cloud-core is a proposal for a serverless world. As such, the day when developers can create applications in the language of their choice for distribution across any environment simultaneously, whether it's Kubernetes clusters, servers, or edge devices, has yet to come.

Indeed, telling the WebAssembly story beyond the browser has taken a considerable amount of fundamental work, Matt Butcher, co-founder and CEO of Fermyon Technologies, told The New Stack. "Some of this is just pure engineering: We've had to build the tooling. Some of it, though, has been good old-fashioned product management," Butcher said. That means identifying the things that frustrate the user, and then solving them. "We are on the very cusp of seeing these two threads converge, as the practical output of product management intersects with the engineering work behind the component model."

Wasm's value proposition can be summed up by supersonic performance, reduced cost of operations and platform neutrality, but the component model remains the sticking point, Butcher said. "Performance was the easy one, and I think we can already check it off the list. At Fermyon, we're seeing total cost of ownership plummet before our eyes as thousands of users sign up for our cloud," Butcher said. "But platform neutrality at the level we care about requires the component model. On that front, tomorrow can't come soon enough."

WebAssembly is designed to run applications written in a number of languages by hosting them in a module. It now accommodates Python, JavaScript, C++, Rust and others. Different applications written in different programming languages should be able to function within a single module, although, again, this capability largely remains under development.

"Making programming languages truly interchangeable at the system level might be the final frontier on the way toward achieving the code-once, deploy-anywhere paradigm. But for this to work out, we need a common standard to integrate different languages with their specific feature sets and design paradigms," said Torsten Volk, an analyst for Enterprise Management Associates (EMA).

"This is a classic collective action problem where individual for-profit organizations have to collaborate for all of them to collectively achieve the ultimate goal of language interoperability. Additionally, they need to agree on pragmatic compromises when it comes to standardizing and fleshing out feature sets across languages."

Meanwhile, engineers from numerous companies and universities are working on the component model, WASI proposals and language toolchains for the binary instruction format, with the goal of putting the specifications into the World Wide Web Consortium (W3C), Ralph Squillace, a principal program manager for Microsoft, Azure Core Upstream, said.

The engineers are actively contributing to the common pool of knowledge by contributing to or maintaining open source projects, taking part in efforts such as the Bytecode Alliance, or sharing their knowledge and experiences at conferences, such as during KubeCon + CloudNativeCon Europe's co-located event Cloud Native Wasm Day.

As always when it comes to standards, all major parties involved need to be able to tell their stakeholders why it makes sense to spend valuable developer hours on this endeavor. "This becomes especially tricky when different parties follow different incentive structures, e.g. cloud service providers are interested in customers spending as much money as possible on their services without getting sufficiently frustrated to move to another cloud," Volk said. "This means that some level of lock-in is desired, while enterprise software vendors need to focus on a high degree of customizability and portability to open up their products to the largest possible audience. All this combined shows the high level of difficulty involved in bringing interoperability for Wasm over the finish line. I hope that we will because the payoff should definitely be worth it."

A number of vendors offering PaaS products for distributing applications with Wasm continue to proliferate in anticipation of Wasm's expected heyday. Entrants include Fermyon and Cosmonic. The newer player Dylibso is developing tailored solutions for observability; these solutions include Modsurfer, used to analyze the complexity and potential risks associated with running specific code in your environment.

Meanwhile, most large software companies are actively contributing to Wasm without necessarily creating a formal department to support Wasm-related open source projects, development, integrations with infrastructure and network topologies, or application development for Wasm. Tech leaders are almost invariably working with Wasm in production or in sandbox projects.

To facilitate the incorporation of WebAssembly (Wasm) and bridge any existing gaps, VMware's Wasm Labs launched the Wasm Language Runtimes project. The primary goal is to provide ready-to-run language runtimes, libraries and components for developers interested in embracing WebAssembly, according to Daniel Lopez Ridruejo, a senior director at VMware and CEO of Bitnami.

These language runtimes can be utilized in conjunction with various other initiatives, including mod_wasm (for running conventional web applications like WordPress) and Wasm Workers Server (for executing edge/serverless apps). Ridruejo also mentioned the compatibility of the Language Runtime project with open-source endeavors such as Fermyon's Spin.

Others, such as Chronosphere and Microsoft, have already begun to use WebAssembly mostly to support their operations, while continuing to actively contribute to the development of Wasm for the community. In Microsoft's case, its work with WebAssembly dates back years. Microsoft Flight Simulator, for example, has used WebAssembly for mod protection for some years now, where it was shown to improve both security and portability for add-ons distributed as WebAssembly modules. Excel Online uses WebAssembly for calculating Lambda functions.

Most of Microsoft's work now consists of investing in the upcoming component model, Microsoft's Squillace said. For example, Microsoft is expanding the Azure Kubernetes Service WASI NodePool preview and giving its services additional hypervisor protection per request on top of the Wasm sandbox with the Hyperlight project. "This serves very small bare-metal micro-vms very fast for use with wasm functions," Squillace said.

Outside of the Edge browser, Microsoft is investing mainly in server-based Wasm, the system interface (WASI) and the Wasm component ecosystem surrounding the Bytecode Alliance Foundation, as well as in infrastructure and language tooling to enable productive use, Squillace said. "That means open source investments like the CNCF's containerd runwasi shim for Kubernetes integration, but also TinyGo-compatible Wasm component tooling, VSCode extensions and serverless proposals like wasi-cloud-core," Squillace said. "It also means Azure investments in security like Hyperlight and Azure services like AKS WASI NodePool Preview and AKS Edge Essentials, among others."

"WebAssembly's trajectory reflects similar cycles that happened with technologies such as Java, containers, etc.," Ridruejo said. "Each one of them has seen an ecosystem grow around it, with new ways of doing monitoring, security, etc. It is too early to know yet what that looks like. The question is whether that change will be incremental, and existing vendors like, say, Datadog for monitoring will add Wasm support as a new feature, or it will be disruptive, and new companies will take Datadog's place (again, just an example) and become the Datadog of Wasm."

The million-dollar question is what needs to happen before tool providers and large enterprises can begin using WebAssembly to make money. To that, Squillace said:

Customers already tell us they need a comprehensible (if not great) developer experience and a deployment and management experience that is solid. They also need networking support (coming in Preview 2); no networking means no service hosts in IoT without runtime support, for example. And finally, they need coherent interactive debugging. That last one is going to be hard across all languages and runtimes.

Read more from the original source:
What's Holding up WebAssembly's Adoption? - The New Stack

8 benefits of data center virtualization – TechTarget

In a digital-first world, virtualization is a driving factor -- and for the data center, it's key to scaling and maintaining efficiency. Data center virtualization is another option organizations have to simplify operations.

Whether you're an admin or a stakeholder, consider data center virtualization for its benefits in cost, scalability and other areas.

Virtualization is the process of creating a "virtual" instance of a system or application. In other words, virtualization uses software to simulate hardware and abstracts resources from a physical piece of equipment to create a virtual version of it. This enables the use of multiple virtual systems with fewer physical resources.

Traditional data centers typically require large footprints to house dozens and sometimes hundreds of physical servers, storage devices and networking equipment. Data center virtualization is the process of transforming these physical resources into digital ones -- for example, by creating virtual servers from physical servers. Fully virtualized data centers usually have a completely interconnected system of virtualized hardware and other digital components.

With virtualization, data centers can drastically cut back on their physical hardware, but that's not the only benefit afforded by this digital technology.

From increased scalability to improved resource provisioning, virtual data centers can provide a variety of benefits for data center admins and end users.

1. Reduced hardware costs

A virtualized data center requires significantly less physical hardware, so organizations can save on data center costs in the form of smaller footprints and less equipment management. It also allows you to do more with your physical resources -- you can get more out of each server, and with greater server utilization, you'll get a nice boost to efficiency.

2. Less need for -- and more efficient -- cooling

It's not easy maintaining ambient and equipment temperatures in a data center crowded with hot, power-hungry servers. Along with reduced costs due to less physical hardware, virtualized data centers have significantly reduced energy and cooling needs and can process the same workloads as traditional data centers at a fraction of the cost.

3. Increased scalability

Setting up a physical server can be expensive and time-consuming and often requires intensive, hands-on work to get it up and running. In comparison, a virtual server can be set up simply, quickly and inexpensively.

Traditional data centers are also forced to work with limited space. If you run out of space for more servers, you're looking at a multiyear construction project to clear space, which can be harmful to the environment, not to mention the increased energy burden. With data center virtualization, you can do significantly more with significantly less, allowing you to scale up without as many constraints.

4. Increased flexibility

In addition to easier setup, virtual resources are easy to spin down as needed. If your data center experiences a rapid spike in processing demands, you can easily flex to meet those needs with a virtualized data center and vice versa: If those demands decrease drastically after the spike, you can easily downsize any unnecessary servers or resources.

5. Better for compliance and security

Moving to a virtualized data center can help you better meet regulatory requirements in a few ways. For example, organizations can encapsulate data traffic within a virtual ecosystem, separated from your physical hardware. By keeping data isolated in this way, you can better protect it and prevent a bad actor from gaining access to other data by moving laterally across your network.

Another advantage is more flexible policy management. In a virtual environment, it's easier to confine data workloads with distinct security policies. With differing local and global compliance demands, you can configure a variety of policies and duplicate them as needed. This can help simplify data governance while maintaining a high level of compliance.

6. Enhanced disaster recovery and backup

In a traditional data center, recovery plans are limited, most often coming down to duplicating data on backup machines housed in a recovery site. Should your main site go down due to a disaster or a hardware failure, you can fail over to the data at the recovery site, though it might be outdated, and it might take a while to get a new machine up and running.

With virtualization, you can automate much of the backup process, and organizations can save snapshots of virtual resources more frequently. And because you'll have less hardware, you can vastly consolidate your recovery site. If hardware fails, you can move all your virtual machines to another host or server immediately and often automatically, resulting in far less downtime.

7. Improved resource provisioning

Virtual data center architecture allows you to greatly simplify resource provisioning. Physical servers can be more resilient if they are decoupled from applications and your admins can focus on optimizing resources at a more granular level. With a single source of truth and superior orchestration capabilities at their fingertips, admins can adjust and fine-tune resource utilization on the fly, maximizing performance.

8. Greater data mobility

Hand in hand with resource provisioning is data mobility. Virtualized data centers often enjoy more efficient data workloads, as admins can spend less time managing technical infrastructure and more time helping data get where it needs to go, faster. With a leaner network, data can move across and through it more easily, reducing traffic obstacles and even bandwidth bottlenecks.

Moving away from a physical data center setup to virtualized infrastructure is easier said than done. The digital transformation process takes time, careful planning, a sizable upfront investment and cloud expertise -- not to mention an even more fervent dedication to clearly defining configuration policies. There's also a risk of virtual machine sprawl, a level of IT complexity that can prove difficult to curb once it gets out of control.

Compared to traditional data centers, the virtualized data center clearly offers a host of benefits. And despite the considerations that come with transitioning, we're entering an era of high data center demand on a global scale. The flexibility provided by virtualization can be key to keep up with the rapid pace of the industry and growing customer needs.

Read this article:
8 benefits of data center virtualization - TechTarget