
SITA's passenger processing solutions to power Airports Authority of India across 43 airports – Travel Daily

Indian businesswoman checking flight information at an airport

SITA has secured a landmark deal with the Airports Authority of India (AAI) to support one of the biggest growth markets globally, providing technology to 43 of India's biggest airports.

India's civil aviation sector is among the fastest-growing aviation markets globally and will be a major growth engine in making India a USD 5 trillion economy by 2024.

The deal will see improvements across 2,700 passenger touchpoints, paving the way for the adoption of new-age solutions that meet the modern passenger's expectations. Initially deployed across 43 airports, the technologies are scalable to an additional 40 airports over the next seven years. Over 500 million passengers are expected to be processed during this period.

The rollout of new cloud technology will enable Indian airports to shift to common-use passenger experiences, where multiple airlines can leverage the same infrastructure, such as check-in counters, self-service kiosks, and boarding gates.

The adoption of cloud solutions also brings new agility and flexibility to scale airport operations efficiently as passenger numbers grow. The cloud-first approach strengthens security and offers airlines a platform to host progressive new technologies and move away from native applications. Centralized cloud hosting of all servers reduces on-premises infrastructure costs and enables centralized, proactive monitoring and control of services.

Sumesh Patel, President, Asia Pacific, SITA, said: "We're excited to partner with AAI on this large-scale deployment of leading passenger processing solutions. The number of airports in India is expected to increase from 148 today to 220 by 2025. The new airports will bring India's almost 50 cities with populations exceeding one million people closer together, creating substantial economic value in the long term. By connecting these cities better, air travel and transport will also help unlock the full potential of India's economic growth. Ensuring efficient and fluid operations and a seamless passenger experience at these airports will be critical to delivering on India's air transport industry opportunity."

The solutions will give passengers more control over their journey, offering a low-touch, efficient check-in, bag drop, and collection process through assisted and self-service mechanisms. The airports will benefit from a reduced infrastructure footprint and increased operational efficiency. At the same time, airlines will see service charges drop and can take advantage of an agile technology platform on which to build bespoke passenger experiences.

Under the agreement, SITA will deploy its state-of-the-art solutions, including SITA Flex, SITA CUPPS, SITA CUSS, and SITA Bag Manager. These IATA-certified platforms offer airlines and ground handlers the benefits of common-use technologies, enabling them to scale operations to meet their specific operational requirements.

The Baggage Reconciliation System ensures a high level of baggage accountability, preventing losses and security concerns, which is critical to industry recovery amidst a spiraling baggage mishandling rate.

This project represents a significant shift towards adopting cutting-edge technological platforms, enhancing operational efficiencies, and paving the way for the future adoption of biometric passenger processing. Additionally, real-time dashboard-based information will be made available to governing agencies, promoting transparency and informed decision-making.

AAI officers at the airports and headquarters now have access to online, real-time dashboards, replacing the monthly service availability reports. The solution ensures they are better informed and know the availability of systems across all airports at every point in time, optimizing efficiency and promoting smooth operations.

The project commenced in May 2022 and involved a comprehensive revamp of existing services without disrupting ongoing operations.


How Broadcom's acquisition of VMware will accelerate multi-cloud … – ComputerWeekly.com

First impressions can be misleading. Case in point: when Broadcom first announced its plan to acquire VMware, initial concerns from regulators focused on perceived anti-competitive effects resulting from the deal.

Others suggested that VMware's value proposition could be diluted. Fortunately, since then, Broadcom has been making its strategic case with regulators and customers. Most notably, Broadcom president and CEO Hock Tan, in a blog post, outlined how a combined Broadcom and VMware will create new competitive pressure on the public cloud by accelerating private and multi-cloud capabilities for enterprises, while pledging $2bn a year for research and development (R&D) and improved VMware deployments.

So, were regulators' initial concerns about the deal justified? This article outlines why ABI Research believes this acquisition is a positive development for the market overall and, in fact, essential in the current cloud market.

Virtualisation companies like VMware create vendor-agnostic software that can operate on any server. VMware's products started with virtual machines and hypervisors. Today, its product offerings also include Kubernetes clusters (Tanzu), container-as-a-service (CaaS) products, and software that can automate and orchestrate private, public, and hybrid clouds. So, why has VMware not become a standard in the cloud market for large businesses and small and medium enterprises (SMEs)?

VMware's virtual machines and hypervisors were typically deployed by enterprises and SMEs to maximise workload productivity in their own datacentres. As hyperscalers' public clouds became competitive with on-premises datacentres, enterprises and SMEs moved their workloads away from on-premises servers and into the public cloud.

VMware has responded by developing private and multi-cloud products, but sales and adoption of these products are more complicated, due in part to the difficulty of deploying, orchestrating, and automating across different cloud domains. Moreover, VMware's ability to create alternatives to the public cloud and democratise multi-cloud deployments is constrained for several reasons.

VMware has a smaller R&D team relative to the hyperscalers, meaning that future market expansions will require significant spending. VMware may not be able to afford this. To meet the demands of multi-cloud environments, VMware would greatly benefit from a strategic partnership with a robust entity that has access to ample financial resources and advanced technical capabilities.

If VMware is to grow its multi-cloud footprint and position itself as a disruptive alternative, it is crucial to ensure a seamless transition when customers move workloads. VMware will benefit from Broadcoms investment in improving the ease of moving workloads.

Broadcom could play the role of a strong strategic partner, helping VMware to meet its multi-cloud challenges and ambitions. The company has a track record of bringing disruptive technologies to market and helping enterprises better position those technologies to achieve their business outcomes.

VMware's market capitalisation is nearly $60bn as of mid-2023, but this is dwarfed by public cloud heavyweights like Microsoft, for example, which has a market worth of $2.5tn and the ability to invest significantly in gaining and securing cloud market share.

Broadcom's strategic vision for VMware, and its commitment to invest in it, are essential if VMware is to evolve its market position and create viable alternatives to the public cloud. But why are regulators concerned?

Both the European Commission and the UK's Competition and Markets Authority (CMA) launched formal inquiries into the acquisition, citing concerns that Broadcom could use VMware's position in the virtualisation market to gain unfair advantages for discrete hardware products that interact with VMware software, such as network interface cards (NICs), storage adapters, and fibre channel host bus adapters.

ABI Research believes there are three main reasons why concerns about interoperability are misplaced in relation to Broadcom's acquisition of VMware.

First, Broadcom's business model has long been based around hardware and software products that are designed to function with no bias towards any one vendor. Broadcom's products are integrated into hundreds of vendor products in the market, many of which compete with one another. Broadcom provides the infrastructure and chipset solutions that enable these products without bias.

Second, VMware's ubiquity and interoperability are key to Broadcom's broader strategic goals for the combined group. In commercial terms, Broadcom would undermine, if not destroy, the value of its acquisition if it were to risk the ubiquity of VMware's software to favour the few niche Broadcom hardware products that are of concern to regulators, that is, NICs, storage adapters, and fibre channel host bus adapters. As margins for VMware's software are significantly higher than for the hardware products of concern, it simply makes no sense to degrade a more profitable product in an attempt to sell less profitable ones.

And third, it would not be technically possible for Broadcom to target particular hardware competitors, even were such a strategy to become commercially advantageous. Any degradation in the interoperability of VMware's software would apply to multiple hardware vendors, across the board, with detrimental commercial consequences.

Ultimately, while different market and product lifecycles would make any effort to interfere with a product's interoperability difficult and costly, it is against Broadcom's interests to try to gain an advantage with relatively inexpensive products. Doing so would cause reputational damage and put much larger investments in VMware at risk. Tan has already reiterated that several VMware products are key for the future, including Tanzu and VMware's CaaS layer, which is agnostic to the underlying infrastructure and can run in private, public, or multi-cloud environments.

Today's market is not for faint-hearted service providers and is wholly dominated by hyperscalers. Looking at current dynamics, competition between leading public cloud providers is intense and the barriers to entry for new entrants are prohibitive.

Enterprises that have been lured to migrate workloads to the hyperscalers are left with limited options. Many do not have the capabilities to host their own workloads on-premises and are locked into a specific hyperscaler platform and fee structure.

These very same enterprises also report that it's difficult to decouple their products from hyperscaler platforms without incurring costs, and in almost all cases it's not possible to port between hyperscaler platforms (for example, migrating from AWS to Google Cloud to reduce costs) without further costs and refactoring effort. These issues of cost and lock-in for customers, as well as lock-out for potential competitors, are central to a broader set of concerns that have prompted inquiries and investigations by regulators in the US, the EU, and the UK.

Broadcom's vision in acquiring VMware is to create products that allow enterprises to freely move their application payloads between private, public, and hybrid cloud environments. Right now, this is not possible without cost and effort. Broadcom aims to address the technological and cost gaps. Application portability is a challenge for enterprises using the public cloud, but VMware alone may not have the resources to develop a world-class solution. With Broadcom's experience and R&D investments, VMware will have a much stronger opportunity to compete and succeed in this market.

Hyperscalers, concentrated principally in the US and China, are not slowing down, and hundreds, if not thousands, of enterprises are making their application hosting choices right now. With a stronger VMware value proposition enabled by Broadcom's investments, businesses could overcome the barrier of hyperscaler lock-in, gaining more flexibility to host their applications across multiple cloud environments according to their business needs.

Based on Broadcom's stated intentions, regulators should welcome its acquisition of VMware. The transaction will, in all likelihood, lead to a stronger VMware and a healthier multi-cloud ecosystem.

Dimitris Mavrakis, senior research director, manages ABI Research's telco network coverage, including telco cloud platforms, digital transformation, and mobile network infrastructure. Research topics include artificial intelligence (AI) and machine learning (ML) technologies, telco software and applications, network operating systems, software-defined networking (SDN), network functions virtualisation (NFV), long-term evolution (LTE) diversity, and 5G. ABI Research is a global technology intelligence firm delivering actionable research and strategic guidance to technology leaders, innovators, and decision makers around the world.


Green cloud will lead race to net zero – Technology Decisions

The Decade of Action is in full swing, and with the Australian Government making a big push towards net zero with its investment in a National Net Zero Authority, businesses are feeling the heat to act fast.

Fortunately, Australia is one of the most cloud-mature markets in the world. According to a recent global study by TCS, 81% of the top Australian enterprises agree that they have already achieved most, if not all, of their cloud-based goals for critical apps and workloads, compared to just 65% of global enterprises. This means businesses are in a prime position to lead the way in the energy transition by leveraging the green cloud: a sustainable approach to the design, manufacture, use and disposal of IT resources. In fact, the majority of Australian businesses (70%) are already leveraging cloud technologies to achieve their sustainability goals.

Embracing the sustainable approach to cloud computing has several key benefits for businesses.

As with all technology, though, green cloud comes with challenges. One is the cost of entry. For many businesses, the initial investment in application modernisation projects and in building sustainable cloud solutions may seem daunting. However, with the right roadmap and business case in place, organisations can assess the true benefits of these initiatives and make informed decisions.

Another challenge is the reluctance to move away from old application architectures and legacy models of IT operations and management. According to the latest global cloud study by TCS, the accumulated complexity and rigidity of business processes and operations is the single biggest obstacle to cloud adoption for more than half (53%) of Australian businesses. But with a clear understanding of the benefits of green cloud services and a commitment to sustainable practices, businesses can overcome this resistance and make strides towards net zero.

While the cost of entry for sustainability solutions may have been prohibitive, there is a surprising positive correlation between sustainability initiatives and profitability. According to news reports, 86% of Australian industry leaders now see a positive connection between taking environmental action and profitability. Around 56% of Australian businesses now believe addressing environmental issues will be material to business results within the next five years.

With sustainability front and centre for most business leaders, we are seeing a genuine fusion of technology and business strategy to support ESG targets. We are being offered a unique opportunity to reboot into a new era, one where sustainability and prosperity are part of the same set of aspirations and the same success equation.

Image credit: iStock.com/imaginima


Cyber security concerns and best practices in EV charging … – EVreporter

Pulse Energy is a start-up based out of Bangalore that offers an energy-as-a-service API for EV charging. It predominantly caters to fleet operators and helps their vehicles get access to multiple EV charging networks. In this article, Akhil Jp from Pulse Energy shares the cybersecurity concerns the company has noticed in this space over the last two years and recommends best practices that Charge Point Operators (CPOs) should follow to create a secure charging ecosystem.

In our estimate, 70% of DC chargers in India are insecure. The main security breaches we observed in the Indian EV charging ecosystem are described below.

Today, it is possible for someone to snoop on the traffic between the charger and the server.

Here is a typical, simplified form of an EV charging network, where green dots represent the EVs, light orange represents user information such as payments and credentials, dark orange boxes represent the charger management system (CMS), and blue dots represent the chargers.

Image source: Pulse Energy

In the majority of cases, the communication link between the charger and the CMS is insecure today. In a basic charging setup, every charger has a LAN cable that runs all the way to the modem or communication module. With an insecure system, one could place an interceptor in this path and start capturing traffic. The interceptor can easily be built by taking a Raspberry Pi and placing it between the charger and the modem; a simple Nginx reverse proxy server with websockets enabled can do the trick. It is not even expensive to build one: INR 2,000 to 3,000 is enough. Most of the cabinets in public charging areas are not locked, so someone can open them and place these hardware interceptors. If you are a CPO, make sure that you talk to your charger OEMs about enabling TLS or secure websockets so such threats can be avoided.

Image source: Pulse Energy
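To make the threat concrete, an interceptor of the kind described above needs little more than a plain WebSocket-aware reverse proxy. A hedged sketch of such an Nginx configuration follows; the hostnames, ports, and paths are hypothetical placeholders, not taken from any real deployment:

```nginx
# Hypothetical sketch of the interception setup described above: Nginx on a
# Raspberry Pi relays charger-to-CMS WebSocket traffic while logging it.
server {
    listen 80;                                  # the charger is pointed at the Pi instead of the CMS

    location / {
        proxy_pass http://cms.example.com:80;   # real CMS endpoint (placeholder)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade; # required to pass the WebSocket upgrade
        proxy_set_header Connection "upgrade";
        access_log /var/log/nginx/ocpp.log;     # metadata of every relayed exchange
    }
}
```

With TLS and server certificate validation enabled on the charger, this relay fails: the charger refuses the interceptor's connection because it cannot present the CMS's certificate.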

Many charger manufacturers do not support secure communication, although some do and some are working towards enabling it. Our attempts to promote secure communication are sometimes met with resistance from these manufacturers, as their hardware does not accommodate it.

Below are a few examples of how these vulnerabilities can be exploited.

Every CPO is trying to enable easy charging access through their mobile app or website. I am sharing a few basic best practices that can be implemented with low effort.

Image source: Pulse Energy

Certificate pinning: If you have an EV charging app, make sure that you do certificate pinning. This is the process of ensuring that your app speaks only to your server, as the app will trust only the certificate your server provides. You can pin the root certificate if you want to avoid updating your app every time your domain certificate gets rotated. Certificate pinning helps secure the system against man-in-the-middle attacks.
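As an illustration of the idea, pinning can be sketched with openssl at the command line; in a real app the HTTP client library performs this check, and the host and fingerprint below are hypothetical placeholders:

```shell
#!/bin/sh
# Sketch: refuse a TLS connection unless the server certificate matches a
# pinned SHA-256 fingerprint. HOST and PINNED are placeholders.
HOST="cms.example.com"
PINNED="AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99"

# Fetch the server's leaf certificate and compute its SHA-256 fingerprint.
ACTUAL=$(openssl s_client -connect "$HOST:443" -servername "$HOST" </dev/null 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256 2>/dev/null | cut -d= -f2)

if [ -n "$ACTUAL" ] && [ "$ACTUAL" = "$PINNED" ]; then
  echo "pin OK: proceeding"
else
  echo "pin mismatch or no certificate: refusing to connect" >&2
fi
```

The same comparison, done inside the app against a fingerprint baked in at build time, is what defeats the interceptor-in-the-cabinet attack described earlier.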

Enable secure websockets (TLS): Ask your charger OEM to start supporting secure websockets. Getting CMS vendors to enable TLS is easy, but it is of little use if your charger hardware does not support it. This can prevent MITM (man-in-the-middle) attacks between the charger and the cloud server.

Obfuscation: Enable code obfuscation within your EV charging app. Reverse engineering mobile apps is easy these days, and poor security can lead to the leakage of hardcoded secrets and payment gateway keys. An attacker can reconstruct entire API requests and figure out which keys are used for those APIs.

No hardcoded keys: There are applications and websites out there with hardcoded keys that can be used to start and stop charging sessions. One needs to actively avoid doing that.

Over the last couple of years, the Indian EV charging industry has been growing rapidly, and everyone has been trying to keep up. However, we have now reached an inflection point where we need to focus on strengthening our systems. This applies to us too; Pulse Energy is not perfect either. We have a long way to go, and each of us has to make trade-offs. It is crucial for every developer working in this field to be well informed about security measures and to prioritize making their chargers and cloud interfaces more secure.

This article was first published in EVreporter July 2023 magazine.

Also read: Profitability analysis of an EV charging station



How to create and manage Amazon EBS snapshots via AWS CLI – TechTarget

While Amazon EC2 provides the ability to launch a variety of virtual servers, there is also the need to deploy data storage for each virtual server.

Amazon Elastic Block Store (EBS) volumes are virtual storage devices in the cloud that are attached to EC2 instances and provide internal storage for essential areas of any application, such as OS files, source code, libraries and configuration files. Given the critical nature of this data, enterprises need a reliable way to create and manage backups of EBS volumes.

Learn how EBS snapshots can help with disaster recovery, as well as how to create and manage them through the AWS Command Line Interface.

EBS snapshots are point-in-time backups of an EBS volume. With EBS snapshots, users can control how data in an EBS volume is backed up, stored, managed and recovered. Developers can trigger the creation of an EBS snapshot using the AWS CLI or SDK, or through the EC2 console. Some core features of EBS snapshots are described in the sections below.

EBS snapshots are a good fit for creating regular backups of EBS volumes. For a solid disaster recovery strategy, users should create snapshots routinely and copy them to other AWS Regions and AWS accounts. EBS snapshots can also be transitioned to cold storage, an option to consider to significantly reduce storage costs.
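A cross-Region copy of an existing snapshot can be requested with the copy-snapshot command; in this sketch, the snapshot ID and Region names are placeholders:

```shell
# Copy a snapshot from us-east-1 into us-west-2 for disaster recovery.
# The snapshot ID and Regions below are placeholders; the command runs in
# the destination Region (--region) and pulls from --source-region.
aws ec2 copy-snapshot \
  --region us-west-2 \
  --source-region us-east-1 \
  --source-snapshot-id snap-0123456789abcdef0 \
  --description "DR copy of web-server data volume"
```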

EBS snapshots cost $0.05 per GB per month for the standard tier. The cost to create and keep an Amazon Machine Image (AMI) depends solely on the EBS snapshots behind that AMI. A 100 GB EBS snapshot or AMI would cost $5 per month. Subsequent snapshots of a particular EBS volume are charged based only on the blocks that have changed since the previous snapshot.

To reduce costs, users can transition snapshots to the Archive tier, which costs $0.0125 per GB, per month. In the Archive tier, the same 100 GB snapshot would then cost $1.25 per month. Data retrieval from the Archive tier is billed at $0.03 per GB. Be aware that there's a minimum 90-day retention period.
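As a quick sanity check of those numbers, the monthly cost of a hypothetical 100 GB snapshot under each tier works out as follows:

```shell
# Monthly cost of a hypothetical 100 GB snapshot at the per-GB rates quoted
# above: $0.05 (standard tier) and $0.0125 (Archive tier).
size_gb=100
standard=$(awk "BEGIN { printf \"%.2f\", $size_gb * 0.05 }")
archive=$(awk "BEGIN { printf \"%.2f\", $size_gb * 0.0125 }")
echo "standard tier: \$$standard/month"   # prints: standard tier: $5.00/month
echo "archive tier:  \$$archive/month"    # prints: archive tier:  $1.25/month
```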

When users create an EBS snapshot, they can use it to configure an EC2 AMI. An AMI supplies a storage image, which a new EC2 instance requires at launch.

An active EC2 instance can also create an AMI. The EC2 service first creates one or more EBS snapshots, which are then automatically used for the new AMI. The number of snapshots produced depends on how many EBS volumes are attached to the source EC2 instance. The EC2 service creates one EBS snapshot per attached EBS volume.

EC2 instances can be launched only from an existing AMI and not directly from an EBS snapshot. If an EBS snapshot is used as the source data to launch new EC2 instances, it must first be used to create an AMI. Then, the AMI will be the input when launching new EC2 instances.

Since AMIs are a way to manage the starting point of new EC2 instances, there is no need to create AMIs as frequently as EBS snapshots. Also, AWS recommends maintaining cross-region and cross-account copies of AMIs for disaster recovery purposes, as well as enabling encryption when creating EBS volumes and snapshots.

It's convenient to launch and manage EBS snapshots through the AWS CLI. The create-snapshot command provides a way to quickly start snapshot creation. A basic example of this CLI command looks like the following.
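A representative invocation, in which the volume ID and description are placeholders, is:

```shell
# Create a snapshot of an EBS volume. The volume ID is a placeholder;
# substitute the ID of the volume you want to back up.
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "Nightly backup of web-server data volume"
```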

This command returns, among other data, the SnapshotId value of the recently created snapshot, as well as its creation state, which will be pending right after the command is called.

The describe-snapshots command, shown in the following example, offers a way to list and filter snapshots in your account.
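For instance, restricting the listing to snapshots owned by the calling account:

```shell
# List all snapshots owned by the calling account (rather than the many
# public snapshots visible by default).
aws ec2 describe-snapshots --owner-ids self
```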

This command also offers a useful --filters parameter to narrow the list of returned EBS snapshots.

As with any AWS CLI command, it's important to define the region and profile parameters, or set them through the AWS_DEFAULT_REGION and AWS_PROFILE environment variables, respectively. The examples provided show basic scenarios; keep in mind that the CLI offers additional parameters that deliver more options.
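The environment-variable form looks like the following, where the Region and profile names are placeholders:

```shell
# Set a default Region and named profile for subsequent AWS CLI commands.
# The values are placeholders; use your own Region and profile name.
export AWS_DEFAULT_REGION="ap-south-1"
export AWS_PROFILE="backup-admin"
echo "$AWS_DEFAULT_REGION"   # prints: ap-south-1
```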


8 benefits of data center virtualization – TechTarget

In a digital-first world, virtualization is a driving factor -- and for the data center, it's key to scaling and maintaining efficiency. Data center virtualization is another option organizations have to simplify operations.

Whether you're an admin or a stakeholder, consider data center virtualization for its benefits in cost, scalability and other areas.

Virtualization is the process of creating a "virtual" instance of a system or application. In other words, virtualization uses software to simulate hardware and abstracts resources from a physical piece of equipment to create a virtual version of it. This enables the use of multiple virtual systems with fewer physical resources.

Traditional data centers typically require large footprints to house dozens and sometimes hundreds of physical servers, storage devices and networking equipment. Data center virtualization is the process of transforming these physical resources into digital ones -- for example, by creating virtual servers from physical servers. Fully virtualized data centers usually have a completely interconnected system of virtualized hardware and other digital components.

With virtualization, data centers can drastically cut back on their physical hardware, but that's not the only benefit afforded by this digital technology.

From increased scalability to improved resource provisioning, virtual data centers can provide a variety of benefits for data center admins and end users.

1. Reduced hardware costs

A virtualized data center requires significantly less physical hardware, so organizations can save on data center costs in the form of smaller footprints and less equipment management. It also allows you to do more with your physical resources -- you can get more out of each server, and with greater server utilization, you'll get a nice boost to efficiency.

2. Less need for -- and more efficient -- cooling

It's not easy maintaining ambient and equipment temperatures in a data center crowded with hot, power-hungry servers. Along with reduced costs due to less physical hardware, virtualized data centers have significantly reduced energy and cooling needs and can process the same workloads as traditional data centers at a fraction of the cost.

3. Increased scalability

Setting up a physical server can be expensive and time-consuming and often requires intensive, hands-on work to get it up and running. In comparison, a virtual server can be set up simply, quickly and inexpensively.

Traditional data centers are also forced to work with limited space. If you run out of space for more servers, you're looking at a multiyear construction project to clear space, which can be harmful to the environment, not to mention the increased energy burden. With data center virtualization, you can do significantly more with significantly less, allowing you to scale up without as many constraints.

4. Increased flexibility

In addition to easier setup, virtual resources are easy to spin down as needed. If your data center experiences a rapid spike in processing demands, you can easily flex to meet those needs with a virtualized data center and vice versa: If those demands decrease drastically after the spike, you can easily downsize any unnecessary servers or resources.

5. Better for compliance and security

Moving to a virtualized data center can help you better meet regulatory requirements in a few ways. For example, organizations can encapsulate data traffic within a virtual ecosystem, separated from your physical hardware. By keeping data isolated in this way, you can better protect it and prevent a bad actor from gaining access to other data by moving laterally across your network.

Another advantage is more flexible policy management. In a virtual environment, it's easier to confine data workloads with distinct security policies. With differing local and global compliance demands, you can configure a variety of policies and duplicate them as needed. This can help simplify data governance while maintaining a high level of compliance.

6. Enhanced disaster recovery and backup

In a traditional data center, recovery plans are limited, most often coming down to duplicating data on backup machines housed in a recovery site. Should your main site go down due to a disaster, or should a piece of hardware fail, you can fall back on the data at the recovery site, though it might be outdated, and it might take a while to get a new machine up and running.

With virtualization, you can automate much of the backup process, and organizations can save snapshots of virtual resources more frequently. And because you'll have less hardware, you can vastly consolidate your recovery site. If hardware fails, you can move all your virtual machines to another host or server immediately, and often automatically, resulting in far less downtime.

7. Improved resource provisioning

Virtual data center architecture allows you to greatly simplify resource provisioning. Physical servers can be more resilient when they are decoupled from applications, and your admins can focus on optimizing resources at a more granular level. With a single source of truth and superior orchestration capabilities at their fingertips, admins can adjust and fine-tune resource utilization on the fly, maximizing performance.

8. Greater data mobility

Hand in hand with resource provisioning is data mobility. Virtualized data centers often enjoy more efficient data workloads, as admins can spend less time managing technical infrastructure and more time helping data get where it needs to go, faster. With a leaner network, data can move across and through it more easily, reducing traffic obstacles and even bandwidth bottlenecks.

Moving away from a physical data center setup to virtualized infrastructure is easier said than done. The digital transformation process takes time, careful planning, a sizable upfront investment and cloud expertise -- not to mention an even more fervent dedication to clearly defining configuration policies. There's also a risk of virtual machine sprawl, a level of IT complexity that can prove difficult to curb once it gets out of control.

Compared to traditional data centers, the virtualized data center clearly offers a host of benefits. And despite the considerations that come with transitioning, we're entering an era of high data center demand on a global scale. The flexibility provided by virtualization can be key to keeping up with the rapid pace of the industry and growing customer needs.

Read this article:
8 benefits of data center virtualization - TechTarget


What’s Holding up WebAssembly’s Adoption? – The New Stack

The promise of WebAssembly is this: Putting applications in WebAssembly (Wasm) modules can improve their runtime performance and lower latency, while improving compatibility across the board.

WebAssembly requires only a CPU instruction set. This means that a single deployment of an application in a WebAssembly module should, in theory, be able to run and be updated on a multitude of disparate devices, whether servers, edge devices, multiclouds or serverless environments.
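The "only a CPU instruction set" idea can be illustrated with a toy interpreter. The sketch below is not real WebAssembly — actual modules are a validated binary format executed by runtimes such as Wasmtime or V8 — but it shows why a portable instruction set runs unchanged on any host that ships a small abstract machine.

```python
# A toy stack machine loosely modeled on Wasm's execution model.
# A "module" is just a list of instructions for an abstract machine,
# so any host with an interpreter (server, edge device, browser)
# can run the same module byte-for-byte.

def execute(module):
    """Run a list of (opcode, operand) pairs and return the top of the stack."""
    stack = []
    for op, arg in module:
        if op == "i32.const":
            stack.append(arg)
        elif op == "i32.add":
            b, a = stack.pop(), stack.pop()
            stack.append((a + b) & 0xFFFFFFFF)  # wrap to 32 bits, as Wasm's i32 does
        elif op == "i32.mul":
            b, a = stack.pop(), stack.pop()
            stack.append((a * b) & 0xFFFFFFFF)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack[-1]

# (2 + 3) * 10 — the same "module" runs identically on any host.
module = [
    ("i32.const", 2),
    ("i32.const", 3),
    ("i32.add", None),
    ("i32.const", 10),
    ("i32.mul", None),
]
print(execute(module))  # 50
```

What real runtimes add on top of this picture — validation, JIT compilation, sandboxed memory — changes the performance, not the portability story.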

In this way, WebAssembly is already being widely deployed to improve application performance when running on the browser or on the backend. However, the full realization of WebAssembly's potential has yet to be achieved.

While the WebAssembly core specification has become the standard, server-side Wasm remains a work in progress. The server-side Wasm layer helps to ensure endpoint compatibility among the different devices and servers on which Wasm applications are deployed. Without a standardization mechanism for server-side WebAssembly, imports and exports must be built separately for each language, each runtime interprets those exports and imports differently, and so on.
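The standardization gap can be made concrete with a small sketch. The `interface` dictionary below is hypothetical and stands in for what a WIT file provides in the real component-model proposal: a single shared description of a module's exports that every host runtime binds against, instead of each runtime inventing its own conventions.

```python
# Sketch of the problem the component model addresses: with a shared
# interface description, two different host runtimes can bind the same
# module's exports the same way. (The interface format here is invented
# for illustration; the real proposal uses WIT definitions.)

interface = {
    "exports": {"greet": {"params": ["string"], "result": "string"}},
}

def make_host(runtime_name, module_funcs):
    """Bind a module's functions against the shared interface description."""
    bound = {}
    for name, sig in interface["exports"].items():
        if name not in module_funcs:
            raise KeyError(f"{runtime_name}: module missing export {name!r}")
        bound[name] = module_funcs[name]
    return bound

# The same module "exports" work unchanged under two different hosts,
# because both resolve them through the shared interface.
module = {"greet": lambda who: f"hello, {who}"}
server_host = make_host("server-runtime", module)
edge_host = make_host("edge-runtime", module)
print(server_host["greet"]("world"))  # hello, world
```

Without the shared `interface`, each host would need per-language, per-runtime glue code for the same module — which is the duplication the component model is meant to eliminate.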

As of today, Wasm components embody the component model, but other varieties are being worked on. WASI is an approach that configures Wasm for specific hardware: wasi-libc is the POSIX-like kernel group, or "world," while wasi-cloud-core is a proposal for a serverless world. As such, the day when developers can create applications in the language of their choice for distribution across any environment simultaneously, whether it's on Kubernetes clusters, servers or edge devices, has yet to come.

Indeed, telling the WebAssembly story beyond the browser has taken a considerable amount of fundamental work, Matt Butcher, co-founder and CEO of Fermyon Technologies, told The New Stack. Some of this is just pure engineering: We've had to build the tooling. Some of it, though, has been good old-fashioned product management, Butcher said. That means identifying the things that frustrate the user, and then solving them. We are on the very cusp of seeing these two threads converge, as the practical output of product management intersects with the engineering work behind the component model.

Wasm's value proposition can be summed up as supersonic performance, reduced cost of operations and platform neutrality, but the component model remains the sticking point, Butcher said. Performance was the easy one, and I think we can already check it off the list. At Fermyon, we're seeing total cost of ownership plummet before our eyes as thousands of users sign up for our cloud, Butcher said. But platform neutrality at the level we care about requires the component model. On that front, tomorrow can't come soon enough.

WebAssembly is designed to host applications written in a number of languages within a module. It now accommodates Python, JavaScript, C++, Rust and others. Different applications written in different programming languages should be able to function within a single module, although, again, this capability largely remains under development.

Making programming languages truly interchangeable at the system level might be the final frontier on the way toward achieving the code-once, deploy-anywhere paradigm. But for this to work out, we need a common standard to integrate different languages with their specific feature sets and design paradigms, Torsten Volk, an analyst for Enterprise Management Associates (EMA), said.

This is a classic collective action problem where individual for-profit organizations have to collaborate for all of them to collectively achieve the ultimate goal of language interoperability. Additionally, they need to agree on pragmatic compromises when it comes to standardizing and fleshing out feature sets across languages.

Meanwhile, engineers from numerous companies and universities are working on the component model, WASI proposals and language toolchains for the binary instruction format, with the goal of bringing the specifications into the World Wide Web Consortium (W3C), said Ralph Squillace, a principal program manager for Microsoft, Azure Core Upstream.

The engineers are actively contributing to the common pool of knowledge by contributing to or maintaining open source projects, taking part in efforts such as the Bytecode Alliance, and sharing their knowledge and experiences at conferences, such as Cloud Native Wasm Day, a co-located event at KubeCon + CloudNativeCon Europe.

As always when it comes to standards, all major parties involved need to be able to tell their stakeholders why it makes sense to spend valuable developer hours on this endeavor. This becomes especially tricky when different parties follow different incentive structures, e.g. cloud service providers are interested in customers spending as much money as possible on their services without getting sufficiently frustrated to move to another cloud, Volk said. This means that some level of lock-in is desired, while enterprise software vendors need to focus on a high degree of customizability and portability to open up their products to the largest possible audience. All this combined shows the high level of difficulty involved in bringing interoperability for Wasm over the finish line. I hope that we will because the payoff should definitely be worth it.

A number of vendors offering PaaS platforms for distributing applications with Wasm continue to proliferate in anticipation of Wasm's expected heyday. Entrants include Fermyon and Cosmonic. The newer player Dylibso is developing tailored solutions for observability; these include Modsurfer, used to analyze the complexity and potential risks associated with running specific code in your environment.

Meanwhile, most large software companies are actively contributing to Wasm without necessarily creating a formal department to support Wasm-related open source projects, development, integrations with infrastructure and network topologies, or application development for Wasm. Even so, tech leaders are almost invariably working with Wasm in production or as sandbox projects.

To facilitate the incorporation of WebAssembly (Wasm) and bridge any existing gaps, VMware's Wasm Labs launched the Wasm Language Runtimes project. The primary goal is to provide ready-to-run language runtimes, libraries and components for developers interested in embracing WebAssembly, according to Daniel Lopez Ridruejo, a senior director at VMware and CEO of Bitnami.

These language runtimes can be utilized in conjunction with various other initiatives, including mod_wasm (for running conventional web applications like WordPress) and Wasm Workers Server (for executing edge/serverless apps). Ridruejo also mentioned the compatibility of the Language Runtimes project with open source endeavors such as Fermyon's Spin.

Others, such as Chronosphere and Microsoft, have mostly begun using WebAssembly to support their own operations, while continuing to actively contribute to the development of Wasm for the community. In Microsoft's case, its work with WebAssembly dates back years. Microsoft Flight Simulator, for example, has used WebAssembly for mod protection for some years now, where it was shown to improve both security and portability for add-ons distributed as WebAssembly modules. Excel Online uses WebAssembly for calculating Lambda functions.

Most of Microsoft's work now consists of investing in the upcoming component model, Microsoft's Squillace said. For example, Microsoft is expanding the Azure Kubernetes Service WASI NodePool preview and giving its services additional hypervisor protection per request on top of the Wasm sandbox with the Hyperlight project, Squillace said. This serves very small bare-metal micro-VMs very fast for use with Wasm functions, Squillace said.

Outside of the Edge browser, Microsoft is investing mainly in server-based Wasm, the system interface (WASI) and the Wasm component ecosystem surrounding the Bytecode Alliance Foundation, as well as in infrastructure and language tooling to enable productive use, Squillace said. That means open source investments like the CNCF's containerd runwasi shim for Kubernetes integration, but also TinyGo-compatible Wasm component tooling, VS Code extensions and serverless proposals like wasi-cloud-core, Squillace said. It also means Azure investments in security like Hyperlight and Azure services like the AKS WASI NodePool preview and AKS Edge Essentials, among others.

WebAssembly's trajectory reflects cycles seen with earlier technologies, such as Java and containers, Ridruejo said. Each of them has seen an ecosystem grow around it, with new ways of doing monitoring, security and so on. It is too early to know what that will look like, Ridruejo said. The question is whether the change will be incremental, with existing vendors like, say, Datadog for monitoring adding Wasm support as a new feature, or disruptive, with new companies taking Datadog's place (again, just an example) and becoming the Datadog of Wasm.

The million-dollar question is what needs to happen before tool providers and large enterprises can begin using WebAssembly to make money. To that, Squillace said:

Customers already tell us they need a comprehensible (if not great) developer experience and a deployment and management experience that is solid. They also need networking support (coming in Preview 2); no networking means no service hosts in IoT without runtime support, for example. And finally, they need coherent interactive debugging. That last one is going to be hard across all languages and runtimes.


Prepare for the Holiday Shopping Frenzy: Nexcess Releases The … – PR Newswire

With ecommerce sales surging, Nexcess prepares businesses to thrive during the most important time of the year for retail

ATLANTA, July 11, 2023 /PRNewswire/ -- Nexcess, the fully managed, high-performance, digital commerce platform built to optimize online sites and stores, today announced the release of The 2023 Ecommerce Holiday Survival Guide, a definitive resource for ecommerce store owners preparing for the busiest and most critical time of year.

Ecommerce sales continue to trend upward and break records. U.S. ecommerce sales are expected to exceed $1 trillion in 2023. Ecommerce will account for 20.8% of all retail sales across the globe this year, according to Forbes, a figure expected to reach 24% by 2026.

The holiday season puts a lot of pressure on business owners. And Nexcess wants to help make sure it is profitable.

Knowing what to do and when is crucial for all ecommerce business owners. To help online store owners prepare for a successful holiday selling season, the guide provides everything needed to optimize sites, increase conversions, and prepare stores for increased traffic. The handbook, categorized by season, also outlines when to start testing sites, running ads, and how to refine selling strategies.

"As ecommerce continues to see significant growth in every sector, store owners will see increased traffic and competition this year," says Terry Trout, SVP of Marketing. "The holiday season puts a lot of pressure on ecommerce business owners. And Nexcess wants to help make sure it is profitable. For more than 23 years, we've been helping businesses build, launch, and grow online. This expert guide provides key insights and strategies that will help online stores increase customer satisfaction, generate loyalty, and take advantage of the holiday traffic surge."

To ensure success, business owners need ample time to strategize and plan for the busiest retail time of the year. By initiating ecommerce holiday preparations early, business owners will gain an edge over competitors and maximize revenue. The guide addresses every critical aspect of ecommerce success and includes a comprehensive checklist to ensure businesses don't miss a beat.

To learn more about holiday readiness, read The 2023 Ecommerce Holiday Survival Guide. For more information about Nexcess, visit nexcess.net.

About Nexcess

Nexcess is the best place to build your business online. Optimized for your hosting and solution needs, we provide a managed hosting infrastructure, curated tools, and a team of experts that make it easy to build, manage, and grow your business online. Serving SMBs and the designers, developers, and agencies who create for them for more than 23 years, we provide a fully managed, high-performance cloud solution built to optimize WordPress, WooCommerce and Magento sites and stores. As a company within The Liquid Web Family of Brands, we collectively manage 10 global data centers, have more than 500,000 sites under management, serve over 187,000 paying and 2.5 million freemium software customers spanning 150 countries, and provide unparalleled service from a dedicated group of experts 24/7/365. As an industry leader in customer service, the rapidly expanding brand family has been recognized among INC. Magazine's 5000 Fastest-Growing Companies for twelve years.

Media Contact: Andy Bissonette, Sr. Director of Marketing, Nexcess, [emailprotected]

SOURCE Nexcess LLC


Search launched for new dean of the Penn State College of … – Pennsylvania State University

UNIVERSITY PARK, Pa. Penn State has launched a national search for the next Harold and Inge Marcus Dean of the College of Engineering. Tracy Langkilde, dean of the Eberly College of Science, will chair the search committee.

Tonya Peeples, senior associate dean and professor of chemical engineering, has been serving as interim dean since July 1. She took over for Anthony Atchley, who retired from the position at the end of June. In August 2022, when Justin Schwartz, then the Harold and Inge Marcus Dean of Engineering, was named interim executive vice president and provost, Atchley was appointed acting dean of engineering, a title that changed to interim dean when Schwartz was appointed Penn State's permanent executive vice president and provost on May 1.

Established in 1896, the College of Engineering is Penn States largest academic college and houses 13 academic departments and degree programs and 30 major research centers and laboratories. Its more than 570 faculty members support over 12,000 students and 95 postdoctoral researchers. The dean, reporting directly to the executive vice president and provost of the University, serves as the principal academic and administrative officer of the college.

The new dean will be instrumental in shaping future developments within the college, including incorporating Penn State's strategic plan and new budget model into college operations; creating and maintaining an equitable, inclusive community; recruiting and retaining a diverse and talented faculty, staff and student body; and building on the college's commitment to student success and impactful research. The incoming dean also will be a strong partner to University leadership and build relationships across Penn State, including the Commonwealth Campuses, to promote and expand interdisciplinary education and research.

The College of Engineering seeks a scholar with achievements appropriate for a tenured, full professorship in a department within the college. The dean must have a substantial record of administrative leadership; a dedication to diversity, equity, inclusion and belonging; and administrative, fiscal and operational capability. The dean must possess inspiring leadership qualities and exceptional interpersonal, collaboration and communication skills. Additionally, the dean should be an experienced fundraiser and, in coordination with the development team within the college, forge strong relationships and engage donors, alumni and industry leaders in supporting strategic priorities of the college.

The firm WittKieffer will be assisting Penn State with the search. Interested parties should include a CV or resume and a letter of interest addressing the themes in the leadership profile that is available at wittkieffer.com. Application materials should be submitted using WittKieffer's candidate portal. Candidate materials should be received no later than Aug. 28. Nominations and inquiries can be directed to Suzanne Teer, Jessica Herrington and Cathryn Davis at PennStateEngineeringDean@wittkieffer.com.

Members of the search committee include:

Andrea Arguelles, associate head for diversity and inclusion; assistant professor of engineering science and mechanics

Sydney Assalita, undergraduate student in engineering science and Schreyer Honors Scholar

Kathleen Bieschke, vice provost for Faculty Affairs and professor of education

Chitaranjan Das, department head of Computer Science and Engineering

Mary Frecker, department head of Mechanical Engineering

Sara Hackett, human resources consultant

Erin Hostetler, director of Student Research and Graduate Equity, College of Engineering

Tracy Langkilde, search committee chair; dean and professor of biology, Eberly College of Science

David Mazyck, head of the School of Engineering Design and Innovation

Martin Nieto-Perez, associate teaching professor of nuclear engineering

Tracy Peterson, director of Student Transitions and Pre-College Programs, College of Engineering

Debrina Roy, doctoral candidate in industrial and manufacturing engineering

Tahira Reid Smith, Arthur L. Glenn Professor of Engineering Education

Uday Shanbhag, Gary and Sheila Bello Chair Professor of Industrial and Manufacturing Engineering

Timothy Simpson, Paul Morrow Professor in Engineering Design and Manufacturing

Stephanie Velegol, acting associate department head and teaching professor of chemical engineering

Elizabeth Wright, chancellor, Penn State Hazleton; and associate professor of English, Academic Affairs


OX Security Named a 2023 Gartner Cool Vendor for Platform … – PR Newswire

Integrating OX Security into internal developer platforms reduces cost, effort and risk and enables secure software development at scale

TEL AVIV, Israel and BOSTON, July 11, 2023 /PRNewswire/ -- OX Security, the industry's first holistic software supply chain security platform, today announced that it has been named a Cool Vendor by Gartner, in the research firm's 2023 Cool Vendors in Platform Engineering for Scaling Application Security Practices report.

In organizations today, individual product teams often implement security tools and practices at their own discretion, exposing their organizations to significant risk. To address this liability, forward-thinking companies have begun establishing platform teams charged with creating a bespoke, stable, efficient and secure environment in which internal developers can work on their applications. This "ensures consistency and reduces the cognitive load of implementing security controls," according to the report.

Gartner predicts that "by 2026, 70% of platform teams will integrate application security tools as part of internal developer platforms (IDP) to scale DevSecOps practices, up from 20% in 2023."

OX Security was named a Gartner Cool Vendor for making it "easier to orchestrate security workflows and provide visibility into the software supply chain." The report noted that OX Security may be of particular interest to platform teams in order to:

"We are honored to be selected as a 2023 Gartner Cool Vendor in Platform Engineering for Scaling Application Security Practices," said Neatsun Ziv, co-founder and CEO of OX Security. "Before OX, DevSecOps suffered from fragmented workflows and noise from tool overload. OX secures an organization's path to production, from code to cloud to code, and is hyper-focused on the user experience. Working with platform teams is a very natural fit for us."

OX recently launched OX-GPT, the first AppSec ChatGPT integration, which gives developers full transparency into identified risks along with customized cut-and-paste code fixes. Developer adoption of OX-GPT has been rapid, with some users seeing an almost 80% decrease in false positives and a 40% increase in the resolution of critical issues.

To learn more about how OX Security empowers customers to reduce the potential attack surface while still enabling development teams to deliver at scale, visit http://www.ox.security or schedule a demo.

To view a complimentary copy of the 2023 Gartner Cool Vendor in Platform Engineering for Scaling Application Security Practices report, visit http://www.ox.security/Gartner-cool-vendor-2023

Gartner Disclaimer

Gartner does not endorse any vendor, product, or service depicted in our research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Gartner, Cool Vendors in Platform Engineering for Scaling Application Security Practices, 6 July 2023, Manjunath Bhat, et al.

GARTNER is a registered trademark and service mark and COOL VENDORS is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and are used herein with permission. All rights reserved.

ABOUT OX SECURITY

At OX Security, we believe that security should be an integral part of the software development process, not an afterthought. Founded by Neatsun Ziv and Lion Arzi, two former Check Point executives, OX Security is the first and only platform to scan the entire software supply chain, from code to cloud to code, eliminating blind spots and delivering complete visibility, context, and prioritization of security issues, all in a single pane of glass. Through a combination of best practices from risk management and cybersecurity and a user experience focused on developers, OX makes software supply chain security processes effortless for security teams to manage and easy for developers to adopt.

For more information visit http://www.ox.security and follow OX Security on LinkedIn.

SOURCE Ox Security
