
Flexibility is at the Heart of Ansys Gateway powered by AWS – HPCwire

The cloud enables companies to capitalize on computing resources without installing and maintaining them on-premises. It helps businesses optimize CapEx, improve agility, and simplify scaling. Simulation software enables companies to explore and predict how products will work or won't work in the real world. It can speed time-to-market, lower manufacturing costs, improve quality, and decrease risk. It also thrives on computing power.

Ansys, a global provider of simulation software, recently announced a strategic collaboration with Amazon Web Services, Inc. (AWS), which led to a new product offering available via AWS Marketplace: Ansys Gateway powered by AWS. After customers subscribe and complete a series of onboarding steps, they can deploy and scale Ansys applications on a virtual desktop infrastructure (VDI) or high-performance computing (HPC) cluster via the Ansys Gateway powered by AWS web portal.

Instead of being limited by static, resource-constrained, on-premises high-performance computing (HPC) resources, or not being able to access HPC resources at all, customers can leverage their own AWS cloud subscription via Ansys Gateway powered by AWS. By logging onto the Ansys Gateway powered by AWS web portal, they can access the extensive compute and storage resources from AWS to initiate a simulation run of any size, from anywhere, and at any time. By leveraging an easy-to-use, web-based platform that enables quick and reliable deployment of Ansys and other third-party applications in a cloud environment, Ansys customers can accelerate innovation.

"The product cycle is becoming shorter day by day," said Neehar Kulkarni, Manager, Cloud Product Management at Ansys. "Nowadays, simulation plays a key role in removing the need to perform expensive and time-consuming prototyping and testing. By enabling engineers to analyze product designs early in the development cycle, simulation can arrive at the optimal design faster. However, our customers are asking themselves how they can remove the hardware barrier that prevents them from achieving their product development goals so they can move even faster."

Simulation requires an adequate computing infrastructure to efficiently run through hundreds, thousands, or even millions of potential product concepts and improvements. The cloud plays a big role in removing any hardware barriers, but there are workflow challenges it helps to solve as well.

"As products have become increasingly complex, customers are relying on a stack of different design, analysis, and optimization software tools," said Kulkarni. "Installing, tuning, and managing these software toolchains is a big challenge for IT teams and engineering managers. Ansys Gateway powered by AWS facilitates streamlined deployment of these simulation tools and workflows in the cloud so that companies can focus on their core competencies."

Product design and development can require multiphysics or multiple physics computer-aided engineering (CAE) software toolchains, in addition to computer-aided design (CAD) software. Ansys Gateway powered by AWS enables seamless deployment of several Ansys applications that are optimized for AWS cloud infrastructure and makes it easier for customers to use third-party CAD/CAE tools. This helps reduce the burden on a company's IT team to deploy, install, and tune complex engineering simulation tools in the cloud.

Kulkarni said the idea of openness and flexibility is ingrained in the Ansys Cloud product collection.

"The first thing that we observed is that there is no one-size-fits-all solution," he said. "Indeed, all customers are different, whether they are a startup or a big enterprise account. The level of need will vary when it comes to running simulation workloads. One thing we've learned from our customers is that they all want flexibility on when, where, and how they want to run their simulation workloads."

He said some companies have been born in the cloud and therefore, they have not invested in building on-premises compute and storage capacity. As these companies run complex workflows that generate more and more simulation data, cloud computing enables them to scale storage up and down as needed. It can be difficult to provision on-demand storage like that on-premises. Other companies might have invested in on-premises cluster capacity and use the cloud to burst their simulation workloads when they have peak demand to complete urgent, time-sensitive projects.

"Cloud is essentially an enabler for our customers to get maximum value from our best-in-class simulation products," said Kulkarni. "Based on our open ecosystem strategy, we believe in providing flexibility to our customers on how they want to adopt the cloud. For some, this flexibility means enabling customers to deploy and use their own cloud infrastructure while making it easier for them to use their existing Ansys licenses as effectively as possible. This is exactly what we are offering to our mutual Ansys and AWS customers via Ansys Gateway powered by AWS."

Learn more about Ansys Gateway powered by AWS.


Amazon to lay off 9,000 more workers in addition to earlier cuts – CNBC

The latest round will primarily impact Amazon's cloud computing, human resources, advertising and Twitch livestreaming businesses, Jassy said in the memo.

Amazon is undergoing the largest layoffs in company history after it went on a hiring spree during the Covid-19 pandemic. The company's global workforce swelled to more than 1.6 million by the end of 2021, up from 798,000 in the fourth quarter of 2019.

Jassy is also undertaking a broad review of the company's expenses as it reckons with an economic downturn and slowing growth in its core retail business. Amazon froze hiring in its corporate workforce, axed some experimental projects and slowed warehouse expansion.

While the company aims to operate leaner this year, Jassy said he remains optimistic about the company's "largest businesses," retail and Amazon Web Services, as well as other, new divisions it continues to invest in.

Shares of Amazon were down more than 2% in afternoon trading Monday.

As we've just concluded the second phase of our operating plan ("OP2") this past week, I'm writing to share that we intend to eliminate about 9,000 more positions in the next few weeks, mostly in AWS, PXT, Advertising, and Twitch. This was a difficult decision, but one that we think is best for the company long term.

Let me share some additional context.

As part of our annual planning process, leaders across the company work with their teams to decide what investments they want to make for the future, prioritizing what matters most to customers and the long-term health of our businesses. For several years leading up to this one, most of our businesses added a significant amount of headcount. This made sense given what was happening in our businesses and the economy as a whole. However, given the uncertain economy in which we reside, and the uncertainty that exists in the near future, we have chosen to be more streamlined in our costs and headcount. The overriding tenet of our annual planning this year was to be leaner while doing so in a way that enables us to still invest robustly in the key long-term customer experiences that we believe can meaningfully improve customers' lives and Amazon as a whole.

As our internal businesses evaluated what customers most care about, they made re-prioritization decisions that sometimes led to role reductions, sometimes led to moving people from one initiative to another, and sometimes led to new openings where we don't have the right skills match from our existing team members. This initially led us to eliminate 18,000 positions (which we shared in January); and, as we completed the second phase of our planning this month, it led us to these additional 9,000 role reductions (though you will see limited hiring in some of our businesses in strategic areas where we've prioritized allocating more resources).

Some may ask why we didn't announce these role reductions with the ones we announced a couple months ago. The short answer is that not all of the teams were done with their analyses in the late fall; and rather than rush through these assessments without the appropriate diligence, we chose to share these decisions as we've made them so people had the information as soon as possible. The same is true for this note as the impacted teams are not yet finished making final decisions on precisely which roles will be impacted. Once those decisions have been made (our goal is to have this complete by mid to late April), we will communicate with the impacted employees (or where applicable in Europe, with employee representative bodies). We will, of course, support those we have to let go, and will provide packages that include a separation payment, transitional health insurance benefits, and external job placement support.

If I go back to our tenet, being leaner while doing so in a way that enables us to still invest robustly in the key long-term customer experiences that we believe can meaningfully improve customers' lives and Amazon as a whole, I believe the result of this year's planning cycle is a plan that accomplishes this objective. I remain very optimistic about the future and the myriad of opportunities we have, both in our largest businesses, Stores and AWS, and our newer customer experiences and businesses in which we're investing.

To those ultimately impacted by these reductions, I want to thank you for the work you have done on behalf of customers and the company. It's never easy to say goodbye to our teammates, and you will be missed. To those who will continue with us, I look forward to partnering with you as we make life easier for customers every day and relentlessly inventing to do so.

Andy


Is Gaia-X on course to challenge the big tech platforms? – Raconteur

If anyone were in any doubt about the impact that the misuse of data can have on businesses and nation states, they'd need to look no further than the recent investigations surrounding Team Jorge in Israel, the disinformation unit that allegedly worked to disrupt elections in countries worldwide.

Five years on, the Cambridge Analytica scandal is a reminder of how data is increasingly woven into the fabric of modern society and the dangers when it is weaponised.

While arguably it was Edward Snowden's 2013 whistle-blowing of National Security Agency activities that triggered global discussions on data sovereignty, the Cambridge Analytica events accelerated it.

Just a year later, aware of the growing importance of cloud computing as the backbone of modern technology, governments in Germany and France came up with a cunning plan.

Today, that plan has evolved into what is called Gaia-X, an association of governments, technology firms, academics, public bodies and not-for-profits that is working to define a common way to solve Europe's digital sovereignty conundrum. The need, according to Francesco Bonfiglio, CEO of Gaia-X, is being driven by the fact that "big tech platforms are controlling everything."

"It's time for a change," he says, adding that this change has to be in the direction of "a distributed, decentralised, federated, transparent, interoperable cloud that is orthogonal to the model of the top hyperscalers."

Interestingly, all the cloud hyperscalers rank among the association's 373 members. Bonfiglio says that despite stories suggesting this was "to destroy the initiative" or "to condition our decisions," these businesses (which are all US-based except for China's Alibaba Cloud) need the European market and understand that currently, the region is missing a common definition of trust for digital services.

Figures from Synergy Research Group in September last year put the big three cloud providers firmly in the driving seat in Europe, with a 72% market share. It's difficult not to feel a power struggle brewing, which is why Gaia-X needs critical mass. The member list is impressive but far from comprehensive.

While Bonfiglio talks about "a monopoly of a handful of private commercial operators," the reality is that the hyperscalers are dominating for a reason. He's clear, though, that this cannot be an "us or them" scenario. Gaia-X is, he says, "a sort of bridge" to help any cloud provider solve data trust issues within European sovereign boundaries.

"Gaia-X or not, Europe cannot do without the hyperscalers," says Dario Maisto, senior analyst at Forrester. "These companies invest some $40bn in new services every year. Furthermore, the hyperscalers can build partnerships with local providers to ensure that their offering stays compliant with sovereignty requirements, which does help in overcoming some of the European organisations' concerns."


Maisto adds that some SaaS vendors deploy their solutions on the hyperscalers' infrastructure and take a federated approach to data. This data, he says, is "anonymised before leaving the sovereign environment to be processed by the external AI or machine learning solutions sitting in non-sovereign environments."

While this ticks a few data-sovereignty boxes, Bonfiglio's point that Europe still needs a trusted, federated system, whereby data is shared regardless of the cloud provider, still stands tall. In many respects, he is advocating a future-proof framework for data that is cloud agnostic and acts as a gateway to European organisations and markets.

This is why the hyperscalers are involved. Chris Drake, senior research director at IDC, points out that data sovereignty is increasingly a key factor in the selection of cloud service providers.

"This partly reflects the growing importance of regulation, including GDPR, which emphasises the importance of personal data protection and provides specific rules about data storage and transfer," says Drake.

For Bonfiglio, the key to solving the sovereignty issue lies within the Gaia-X digital clearing house (GXDCH), described as "the one-stop place to be verified against the Gaia-X rules and obtain compliance in an automated way." The GXDCH is built on a framework of "fundamental bricks to build the data economy," says Bonfiglio. This consists of federation, data exchange and compliance. Where much of this concerns the practicalities of data management and exchange, it is underpinned by the need for trust. Everything is measured against a set of compliance rules, such as GDPR.

This month, we will get to see what this all means at the Market-X event in Vienna, Austria. Bonfiglio admits that 2023 is a big year for Gaia-X, and Vienna represents the first showcase of what it is all about. There are already trials, or what Gaia-X calls "Lighthouse" projects, underway. Everything from automotive to manufacturing, tourism, transportation, agriculture and smart cities is being explored, using Gaia-X's principles and components.

Bonfiglio says they have already learnt some lessons here, a key one being that the federative approach "is now proven to be a necessary economical element to create resilient value chains, that can resist unplanned dramatic events, and compete in a market where no single operator can survive alone." The idea is that no single business can operate without sharing data with the others in the chain.


What, then, would a world without Gaia-X look like? Bonfiglio has no hesitation in saying that countries would be scared of sharing data due to a distrust of monopolist platforms or giving data to platforms without insurance of trust. This, he says, would run the risk of losing value and competition, where the major technology players would act as de facto regulators.

While the current generation of cloud services is hypercentralised and hyperscalable, Bonfiglio believes Gaia-X is needed for these providers to stay relevant. "The new generation of cloud and digital services must be distributed, federated and interoperable by definition," he says.

"This is the edge revolution, the data gravity concept is driving it, and Gaia-X is the only concrete initiative addressing a need not satisfied yet by any of the large cloud operators. We are doing something the market needs," he says.

He may have a point. Last year, French vendor OVHcloud sued Microsoft for unfair commercial practices and objected that the personal data of French healthcare patients should not be stored on Azure but in the data centre of a native French cloud provider, to grant EU-standard privacy rights. This one is still in the hands of the EU's competition department. Meanwhile, the future of our data, businesses and societies may be in the hands of the data goddess, Gaia-X.


Software as a Medical Device (SaMD) Market Report 2028: A Comprehensive Assessment of Growth Drivers, Challenges, Size, Share, Trends, Opportunities,…

XCELLENT INSIGHTS LLP

Pune, March 20, 2023 (GLOBE NEWSWIRE) -- Xcellent Insights announces the release of the recent report "Software as a Medical Device (SaMD) Market Report 2028: A Comprehensive Assessment of Growth Drivers, Challenges, Size, Share, Trends, Opportunities, and Competitive Landscape". The global Software as a Medical Device (SaMD) market size was USD 1,049.15 million in 2021 and is expected to register a robust revenue CAGR of 38.78% during the forecast period. Key factors such as rapid advancements in medical technology, high penetration of the internet, AI, cloud computing, and smart gadgets in the healthcare sector, and the rising prevalence of various chronic and lifestyle-associated diseases are expected to drive global market growth during the forecast period.

Software as a medical device (SaMD) is a class of medical software used for one or more medical purposes, without the software being part of any hardware device. These range from software or applications intended to diagnose, treat, cure, and prevent disease. In recent years, demand for SaMD has increased drastically with the rising prevalence of chronic illnesses, rising patient admissions, and the need for advanced solutions to handle large volumes of data in healthcare centers. SaMD products serve different purposes, mostly including diagnosis, disease prevention, treatment, and modernizing care. They also support sharing of health records, handling workflows, and storing large volumes of medical data.

Get Sample Pages of Report (Including Full TOC, List of Tables & Figures, and Chart) @ https://www.xcellentinsights.com/enquiry/sample/193409

Factors such as growing awareness about importance of SaMD, rising geriatric population, increasing product launches, growing demand for cost-effective healthcare services, improving healthcare infrastructure, and rising overall healthcare spending are expected to boost global market growth between 2023 and 2028.


However, the high cost of software as a medical device, lack of cybersecurity and rising data breaches, low awareness about the benefits of SaMD, and inadequate infrastructure and funds to deploy advanced software in many small to medium healthcare centers are some key factors that can hamper overall market growth to a certain extent during the forecast period.

Key Companies Operating in the Market:

MindMaze

Medtronic

Viz.ai

Siemens Healthcare

iSchemaView

Arterys

Adherium

Digital Diagnostics

Allscripts

Software as a Medical Device (SaMD) Industry Recent Developments:

The global software as a medical device (SaMD) market has been segmented based on type, application, and region:

Market Segment by Type:

Browse complete Report Summary with TOC here: https://www.xcellentinsights.com/reports/software-as-a-medical-device-samd-market-193409

Screening and Diagnosis Segment to Register Rapid Revenue CAGR:

The screening and diagnosis segment is expected to register a rapid revenue CAGR over the forecast period owing to rapid advancements in SaMD, the rising need to identify and diagnose disease conditions, and the high use of advanced SaMD capable of analyzing patients' medical history and diagnostics data.

Market Segment by Application:

Cloud-based Segment to Account for Largest Revenue Share:

The cloud-based segment is expected to account for the largest revenue share over the forecast period. This can be attributed to a large patient base, the rising need to deploy advanced software and solutions in healthcare centers to handle huge volumes of patient and diagnostics data, and the high adoption of AI, cloud computing, and cloud-based software in medium to large-scale hospitals and clinics, owing to benefits such as hassle-free scalability, flexibility, and low IT costs.

Segment by Region:

North America (USA, Canada, Mexico)

Europe (Germany, France, UK, Italy, Russia, Rest of Europe)

Asia Pacific (China, Japan, South Korea, India, Australia, Taiwan, Indonesia, Thailand, Malaysia, Philippines, Vietnam, Rest of Asia Pacific)

South America (Brazil, Argentina, Colombia, Rest of South America)

The Middle East & Africa (Turkey, Saudi Arabia, U.A.E, South Africa, Rest of MEA)

Asia Pacific to Register Robust Revenue Growth:

Asia Pacific market is expected to register robust revenue growth during the forecast period attributable to improving healthcare infrastructure, rising adoption of software as a medical device across healthcare centers, rising geriatric population, and increasing number of healthcare technology-based startups in the region.

Highlights of the Report:

Estimates of current market trends and development trends in the software as a medical device (SaMD) market from 2023 to 2028

Market dynamics along with growth opportunities of the market in the years to come

Market segmentation analysis including qualitative and quantitative research incorporating the impact of economic and policy aspects

Regional and country level analysis integrating the demand and supply forces that are influencing the growth of the Software as a Medical Device (SaMD) market

Competitive landscape involving market share of the major players, new strategies and projects adopted by players in the last five years

Comprehensive company profiles covering product offerings, key financial information, SWOT analysis and strategies employed by major market players

Brief about impact of COVID-19 on the global Software as a Medical Device (SaMD) market

Key Questions Addressed in the Report:

What revenue CAGR is the global market expected to register during the forecast period?

Which key players are leading in the global software as a medical device (SaMD) market?

What is the expected market size of the global software as a medical device (SaMD) market between 2023 and 2028?

What factors are expected to open new growth avenues and opportunities for existing and emerging market players?

What are some of the key challenges that the global market is expected to face during the forecast period?

Which region is expected to account for largest revenue share over the forecast period?

You can buy this Premium Research Report @ https://www.xcellentinsights.com/checkout/193409

About Us: Xcellent Insights is a one-stop solution for market research and consulting. Our portfolio of services includes syndicated and bespoke research reports driven by market intelligence studies that allow you to add value to your trading decisions. You can count on us for end-to-end market research, market intelligence, and research services. With a diverse portfolio across multiple industries, Xcellent Insights excels in providing in-depth analysis and covering the latest market and industry trends.


Why is the Cloud Enabler of Better Technologies? – TimesTech

Cloud communication is now essential for all businesses, not just SMEs. Start-ups, in particular, rely more on cloud-based technology to increase productivity, reduce expenses, and promote better teamwork. Thanks to quickly evolving cloud-based as-a-service platforms, consumers will have access to machine learning capabilities like AI-powered virtual assistants and chatbots like ChatGPT. This will help businesses meet modern demands for faster, round-the-clock communication and automated customer service. This year, we predict that firms will use this technology to gain a competitive edge.

The key advantage of cloud-based technology is that anyone with internet access can use it from any location and on any device. In addition, enabling teleworking and hybrid work saves organisations the expense and hassle of buying and maintaining their own physical servers. SaaS, PaaS, and IaaS all call for more advanced and robust cloud technologies. Hence, as the demand for these services increases, so does the requirement for a broad and stable cloud computing foundation.

Adaptive AI systems can learn in development and runtime contexts. Further, they can swiftly retrain themselves in response to new information and shifting conditions. This enables AI applications to adapt continuously and quickly, regardless of how initially programmed. They are perfect for situations with plenty of unknown factors or when change happens quickly in general.

Cloud computing provides the resources enterprises need to develop and use machine learning models. These are necessary for AI applications such as market research, dynamic pricing, recommendation engines, and information extraction from large documents and data.

Cloud computing also provides the infrastructure and resources for processing and analysing the significant data volumes that are the foundation of many AI applications, like predictive maintenance, customer behaviour analysis, and fraud detection.

The internet of things (IoT) is a powerful technology of this decade. It is constantly changing due to developments in cloud computing and real-time data analytics. Data sharing and M2M communication are two processes that take place simultaneously. The cloud makes it simple to manage all of this.

5G is one of the most important cloud technologies. The fifth generation (5G) of network connectivity makes lower latency and better data transit between locations and devices possible. As 5G coverage grows globally, 5G and edge computing are anticipated to coincide, delivering new capacity and speed to networks. Real-time streaming between facilities is a good fit for 5G networks. This substantially benefits companies that rely on time-sensitive, massive data, including meteorological and space bureaus: events will be stored and disseminated as they happen, ensuring high-quality output.

Security is the most prominent feature of cloud technology. By preventing any assault or hacking attempt by unauthorised users, these services guarantee comprehensive data security and safeguard these platforms from cyberattacks.

When businesses move to the cloud, security concerns are typically the first thing that springs to mind. However, even if the risks and challenges are significant, you can prevent security issues by selecting a reliable and secure cloud service provider. Because of this dependability, sectors including government administration, finance, defence, healthcare, and industry, among others, currently use this type of service.

Adapting to changing technologies is never simple. To maximise your investment, it is crucial to understand the dangers involved. Moreover, service providers have already taken significant steps; cloud computing can give businesses the tools to assess the environmental impact of their operations and track their progress towards sustainability. In 2023, we'll see more companies looking for more advanced tools to assess their sustainable initiatives and analyse data related to their environmental impact and carbon footprint. A greater acceptance of cloud solutions, such as the emergence of multi-cloud, and advancements in technologies, such as the expanding usage of AI and ML in enterprises, will enable organisations to remain resilient and unearth new opportunities and thrive in the year to come.

Vidhu Nautiya is the Co-Founder and Chief Revenue Officer at CloudConnect Communications Pvt. Ltd.


What Is Fog Computing? definition, Applications, Everything to Know – EC-Council

Fog computing is an important trend to understand for anyone working in or planning to work in technology. It has many potential applications, from industrial and manufacturing settings to hospitals and other healthcare facilities. But what is fog computing, and how does it differ from cloud computing? Let's take a look.

Fog computing is a form of distributed computing that brings computation and data storage closer to the network edge, where many IoT devices are located. By doing this, fog computing reduces the reliance on the cloud for these resource-intensive tasks, improving performance and reducing latency (TechTarget, 2022).

Mist computing takes fog computing even further by bringing computation and data storage still closer to the edge, often using devices such as mist computing servers, which are low-power servers that can be deployed in large numbers.

There are several reasons why fog computing is used:

Fog computing is a term for technology that extends cloud computing and services to the edge of an enterprise's network. It allows data, applications, and other resources to be moved closer to, or even on top of, end users.

The four main types of fog computing are mentioned below.

There are many potential applications for fog computing, including:

Fog computing can be used to support a wide range of applications that require data to be processed at the edge of the network. In many cases, moving compute and storage resources closer to the data source improves performance and reduces costs. For example, connected cars generate a significant volume of data that needs to be analyzed in real-time to enable features such as autonomous driving.

Fog computing is often used in cases where real-time response is needed, such as with industrial control systems, video surveillance, or autonomous vehicles. It can also be used to offload computationally intensive tasks from centralized servers or to provide backup and redundancy in case of network failure.

Some of the key components of cloud fog computing include the following:

The internet of things (IoT) is a system of interconnected devices, sensors, and software components that share data and information. The power of the IoT comes from its ability to collect and analyze massive volumes of data from various sources. This data can be used to improve efficiency, optimize operations and make better decisions.

Fog computing in IoT is a decentralized computing model that brings computation and data storage closer to the edge of the network. In other words, fog computing moves processing power and data storage away from centralized server farms and into local networks where IoT devices are located.

There are several advantages to using a fog computing architecture:

There are also several disadvantages to using a fog computing architecture:

Edge computing, a distributed computing model, processes data and applications at the edge of the network, close to the data source. By contrast, in the traditional centralized model of cloud computing, data and applications are stored in a central location and accessed over the network.

The main difference between fog and edge computing is that fog computing extends cloud services and connectivity to devices at the edge of the network. In contrast, edge computing brings computation and data storage closer to devices at the edge of the network.

Heavy.AI is a powerful artificial intelligence platform that enables businesses and developers to easily build and deploy AI-powered applications. Heavy.AI is built on top of the popular TensorFlow open-source library, making it easy to get started with deep learning and neural networks. With Heavy.AI, you can quickly train and deploy your custom models or use one of the many pre-trained models available in the Heavy.AI marketplace.

Heavy.AI also offers a fog computing solution that can be used to manage and process data from IoT devices at the edge of the network. This solution can improve the performance of IoT applications by reducing latency and ensuring data is processed locally.

iFogSim is also an open-source fog computing simulator that can evaluate the performance of different fog computing architectures. iFogSim includes a library of modules that can simulate various aspects of fog computing, such as network topologies, device types, and application characteristics.

Aspiring ethical hackers can get certified through EC-Councils C|EH course. The comprehensive C|EH course covers a wide range of topics related to ethical hacking, including network scanning, enumeration, social engineering, denial-of-service attacks, web application attacks, SQL injection, buffer overflows, and much more.

TechTarget. (2022, September 22). What is fog computing? https://www.techtarget.com/iotagenda/definition/fog-computing-fogging

HiTechWhizz. (2022, September 22). 5 Advantages and Disadvantages of Fog Computing | Drawbacks & Benefits of Fog Computing. https://www.hitechwhizz.com/2020/04/5-advantages-and-disadvantages-drawbacks-benefits-of-fog-computing.html

Ryan Clancy is a writer and blogger. With 5+ years of mechanical engineering experience, he's passionate about all things engineering and tech. He also loves bringing engineering (especially mechanical) down to a level that everyone can understand. Ryan lives in New York City and writes about everything engineering and tech.


Perhaps The Most Disruptive Technology In History Is Coming And It's Expected To Change Everything. Businesses And Marketers Need To Get Quantum Ready.... – Forbes


Access Control Market worth 14.9 billion by 2028 – Exclusive Report by MarketsandMarkets – Yahoo Finance

CHICAGO, March 20, 2023 /PRNewswire/ -- The global Access Control Market size is projected to grow from USD 9.9 billion in 2023 to USD 14.9 billion by 2028; it is expected to grow at a CAGR of 8.4% from 2023 to 2028, according to a new report by MarketsandMarkets. Factors such as the increasing adoption of IoT-based security systems with cloud computing platforms and the rising number of smart infrastructure and smart city projects are contributing to the growth of the access control market. Furthermore, rapid urbanization in emerging countries and the adoption of ACaaS as a cost-effective and flexible solution are expected to provide the market with greater growth opportunities.
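As a back-of-the-envelope check (ours, not the report's), the stated figures are consistent under the standard CAGR formula:

$$\mathrm{CAGR} = \left(\frac{V_{2028}}{V_{2023}}\right)^{1/5} - 1 = \left(\frac{14.9}{9.9}\right)^{1/5} - 1 \approx 8.5\%$$

which matches the reported 8.4% once rounding of the two endpoint values is taken into account.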


Download PDF Brochure:https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=164562182

Browse in-depth TOC on "Access Control Market"

177 Tables | 58 Figures | 255 Pages

Card-based readers are expected to account for the largest share of the hardware segment in 2022.

Smart cards are more reliable than magnetic stripes and proximity cards. The increasing use of smart cards for monitoring and recording employee activities in organizations is one of the major factors driving the growth of the market for card-based readers.

Hosted ACaaS held the largest share in Access Control as a Service Market

Owing to the high rate of adoption of access control as a service, hosted ACaaS is expected to hold the largest market share during the forecast period. The benefits offered by hosted and managed services such as reduced labor costs, minimized capital expenditure, and the freedom provided to companies to stay focused on core business areas contribute to the growth of the ACaaS market.

Commercial Vertical to hold largest market share of access control market

The growth of the commercial vertical can be attributed to the increasing incidence of robbery and theft in commercial buildings. To protect commercial assets, property owners are buying security systems embedded with access control for remote monitoring. Breaches pertaining to information security are also a major concern nowadays.


Request Sample Pages: https://www.marketsandmarkets.com/requestsampleNew.asp?id=164562182

Access control market in Asia Pacific to exhibit highest growth rate during forecast period

The increasing adoption of access control solutions by small and medium-sized enterprises is also expected to drive market growth. The growing risks of terrorist threats and crime rates and a low police-to-population ratio in Asia Pacific countries are also expected to create a high demand for access control systems.

The key players operating in the access control market are ASSA ABLOY (Sweden), Johnson Controls (Ireland), dormakaba Holding (Switzerland), Allegion plc (Ireland), Honeywell International (US), Identiv, Inc. (US), Nedap N.V. (Netherlands), Suprema HQ Inc. (South Korea), Bosch Security Systems Inc. (Germany), and Thales (France).

Get 10% Free Customization on this Report: https://www.marketsandmarkets.com/requestCustomizationNew.asp?id=164562182

Browse Adjacent Market: Semiconductor and Electronics Market Research Reports & Consulting

Related Reports:

Biometric System Market by Authentication Type (Single Factor, Fingerprint, Iris, Face, Voice; Multi-factor), Type (Contact-based, Contactless, Hybrid), Offering Type, Mobility, Vertical & Region (2022-2027)

Smart Card Market by Interface (Contact, Contactless, Dual), Type (Memory, MPU Microprocessor), Functionality (Transaction, Communication, Security and Access Control), Offering, Vertical, and Region (2026)

Fingerprint Sensor Market by Technology (Capacitive, Optical, Thermal, and Ultrasonic), Type (Area, Touch, and Swipe), Sensor Technology, Product, End-Use Application, and Region (2021-2026)

RFID Market by Offering (Tags, Readers, Software & Services), Tag Type (Passive, Active), Wafer Size, Frequency, Form Factor (Card, Implant, Key Fob, Label, Paper Ticket, Band), Material, Application & Region - Global Forecast to 2030

Mobile Biometrics Market by Component (Fingerprint Readers, Scanners, Cameras, Software), Authentication Mode (Single factor (Fingerprint, Voice, Face, Iris, Vein, & Retina Scan) and Multifactor), Industry, and Geography - Global Forecast to 2022

About MarketsandMarkets

MarketsandMarkets is a blue ocean alternative in growth consulting and program management, leveraging a man-machine offering to drive supernormal growth for progressive organizations in the B2B space. We have the widest lens on emerging technologies, making us proficient in co-creating supernormal growth for clients.

The B2B economy is witnessing the emergence of $25 trillion of new revenue streams that are substituting existing revenue streams in this decade alone. We work with clients on growth programs, helping them monetize this $25 trillion opportunity through our service lines - TAM Expansion, Go-to-Market (GTM) Strategy to Execution, Market Share Gain, Account Enablement, and Thought Leadership Marketing.

Built on the 'GIVE Growth' principle, we work with several Forbes Global 2000 B2B companies - helping them stay relevant in a disruptive ecosystem. Our insights and strategies are molded by our industry experts, cutting-edge AI-powered Market Intelligence Cloud, and years of research. The KnowledgeStore (our Market Intelligence Cloud) integrates our research, facilitates an analysis of interconnections through a set of applications, helping clients look at the entire ecosystem and understand the revenue shifts happening in their industry.

To find out more, visit http://www.MarketsandMarkets.com or follow us on Twitter, LinkedIn and Facebook.

Contact: Mr. Aashish Mehra
MarketsandMarkets INC.
630 Dundee Road, Suite 430
Northbrook, IL 60062
USA: +1-888-600-6441
Email: sales@marketsandmarkets.com
Visit Our Web Site: https://www.marketsandmarkets.com/
Research Insight: https://www.marketsandmarkets.com/PressReleases/access-control.asp
Content Source: https://www.marketsandmarkets.com/ResearchInsight/access-control-market.asp


View original content:https://www.prnewswire.com/news-releases/access-control-market-worth-14-9-billion-by-2028---exclusive-report-by-marketsandmarkets-301775905.html

SOURCE MarketsandMarkets


Linux Foundation Training & Certification & Cloud Native Computing … – PR Newswire

Free Enrollment Now Open

SAN FRANCISCO, March 16, 2023 /PRNewswire/ -- The Linux Foundation Training & Certification and the Cloud Native Computing Foundation (CNCF) today announced a partnership with CoRise to support its newest learning path designed to significantly increase the number of qualified and certified DevOps professionals worldwide by 50,000 over the next two years.

The curriculum consists of three instructor-led courses, the first of which is free, that prepare professionals to sit for the Certified Kubernetes Administrator (CKA) exam. The courses are:


The demand for Kubernetes proficient DevOps professionals has increased significantly in recent years due to the explosive growth of digital transformation and cloud native projects. DevOps emphasizes collaboration and communication between development and operations teams, enabling them to deliver software quickly and reliably. Kubernetes provides a platform for automating many of the tasks that are critical to DevOps, such as continuous deployment, scaling, and monitoring.

"Increasing the number of technical professionals certified in Kubernetes is important for any organization to improve efficiency, reduce downtime, deliver higher quality software, and ultimately provide a better end user experience," said Chris Aniszczyk, CTO of the Cloud Native Computing Foundation. "We are pleased to partner with CoRise to deliver a free, live training experience on their learning platform."

"Our cutting-edge, AI-powered, edtech platform, combined with the Linux Foundation's industry leading CKA certification, will help tens of thousands of professionals acquire the DevOps and Kubernetes skills needed to propel their careers forward through an immersive and personalized, cohort-based experience," said Julia Stiglitz, CEO and Co-Founder, CoRise.

Each course will blend today's best-in-class online learning technologies, allowing each cohort to:

CoRise instructors will bring extensive, real-world experience to their classes and include such industry leaders as:

Enrollment is now open for DevOps Crash Course, the first course in the series, which is free, at: https://corise.com/course/devops-crash-course. The cohort begins on June 12th and runs for two weeks. Enrollment will open shortly for the follow-on courses, which will cost $400 individually or $1,000 as part of a CoRise annual subscription that provides access to more than 40 courses.

CoRise recommends that prospective students have some experience with command lines and containers. While some programming experience is recommended, it is not required. Students looking to gain experience prior to enrolling should review the Linux Foundation's free training catalog. IT professionals can find more information about the Linux Foundation's 14 professional certifications, including the Certified Kubernetes Administrator (CKA), here.

About the Linux Foundation: Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world's leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation's projects are critical to the world's infrastructure, including Linux, Kubernetes, Node.js, and more. The Linux Foundation's methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of the Linux Foundation, please see its trademark usage page: http://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

About The Cloud Native Computing Foundation: Cloud native computing empowers organizations to build and run scalable applications with an open source software stack in public, private, and hybrid clouds. The Cloud Native Computing Foundation (CNCF) hosts critical components of the global technology infrastructure, including Kubernetes, Prometheus, and Envoy. CNCF brings together the industry's top developers, end users, and vendors, and runs the largest open source developer conferences in the world. Supported by more than 800 members, including the world's largest cloud computing and software companies, as well as over 200 innovative startups, CNCF is part of the nonprofit Linux Foundation. For more information, please visit http://www.cncf.io.

About CoRise: CoRise is on a mission to upskill the world's workforce, achieving quality at scale by leveraging community and AI. While the need for upskilling and reskilling has accelerated in recent years, solutions have not kept pace. Completion of asynchronous courses remains woefully low (4-6%), and live courses are challenging and costly to scale across large organizations. CoRise offers a needed alternative: engaging and interactive cohort-based courses that deliver real learning results at scale. CoRise completion rates are 78%, with 95% of learners saying that they learned skills that will help them on the job.

Founded by early Coursera employees Julia Stiglitz, Sourabh Bajaj, and Jacob Samuelson, and backed by Greylock Partners, GSV Ventures and Cowboy Ventures, CoRise is working with industry-leading professionals, talent-centric enterprises, and ambitious learners.

Contacts:

For the Linux Foundation Training & Certification: Scott Punk, [emailprotected]

For CNCF: Jessie Adams-Shore, [emailprotected]

For CoRise: Barbara Kaplan-Marans, [emailprotected]

SOURCE The Linux Foundation Training & Certification


How eBPF unlocks cloud native innovation – InfoWorld

Barbara Liskov, the brilliant Turing Award winner whose career inspired so much modern thinking around distributed computing, was fond of calling out the power of abstraction and its role in finding the right interface for a system as well as finding an effective design for a system implementation.

Liskov has been proven right many times over, and we are now at a juncture where new abstractions, and eBPF specifically, are driving the evolution of cloud native system design in powerful new ways. These new abstractions are unlocking the next wave of cloud native innovation and will set the course for the evolution of cloud native computing.

Before we dive into eBPF, let's first examine what cloud native is and why it needs to evolve.

Cloud native embraces a container model where a single kernel becomes the common denominator for managing many networking objects. We see related trends, like networks becoming namespace-based, where full-blown VMs are being replaced by containers or lightweight VMs. Cloud native shifts the scale and scope from a few VMs to many containers with higher per-node container density for efficient resource use and shorter container lifetimes. These dynamic IP pools for containers also have high IP churn.

The challenges don't end there.

Once you have stood up and bootstrapped your cluster, there are Day 2 challenges like observability, security, multicluster and cloud management, and compliance. You don't just move to a cloud native environment with a flick of a switch. It's a progressive journey.

Once you have a cloud native environment set up, you will face integration requirements with external workloads (e.g., through more predictable IP addresses via service abstractions or egress gateways, like BGP for pod networking, CIDRs, services, and gateways). You will also have to deal with the successive migration toward IPv6-only clusters for better IPAM flexibility, and NAT46/64 for interaction with legacy workloads, and be able to connect multiple clusters on/off-prem in a scalable manner, with topology-aware routing and traffic encryption, and so much more.

These problems are only going to grow larger, with Gartner estimating that by 2025 over 95% of new digital workloads will be deployed on cloud native platforms, up from 30% in 2021.

The Linux kernel, as usual, is the foundation to solving these challenges, with applications using sockets as data sources and sinks and the network as a communication bus. Linux and Kubernetes have come together as the cloud OS.

But cloud native needs newer abstractions than are currently available in the Linux kernel, because many of these building blocks, like cgroups (CPU, memory handling), namespaces (net, mount, pid), SELinux, seccomp, netfilter, netlink, AppArmor, auditd, and perf, were designed more than 10 years ago.

These tools don't always talk to each other, and some are inflexible, allowing only for global policies and not per-container policies. They have no awareness of pods or any higher-level service abstractions, and many rely on iptables for networking.

As a platform team, if you want to provide developer tools for a cloud native environment, you can still be stuck in this box where cloud native environments cannot be expressed efficiently.

eBPF is a revolutionary technology that allows us to dynamically program the kernel in a safe, performant, and scalable way. It is used to safely and efficiently extend the cloud native capabilities of the kernel without requiring changes to kernel source code or loading kernel modules.

eBPF programs are verified before they are loaded, so they cannot crash the kernel; they attach to hooks throughout the stack, from XDP at the driver level to per-socket and per-cgroup hooks; and they can be updated atomically on the fly, with no kernel patches, modules, or reboots required.

These capabilities allow us to safely abstract the Linux kernel and make it ready for the cloud native world.
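To make this concrete, here is a minimal sketch of an eBPF program attached at the XDP hook (not taken from the article; the file and function names are illustrative). It compiles with clang's BPF target against libbpf's helper headers, and the verifier checks it before it can ever attach to a device:

```c
// xdp_pass.c -- a minimal XDP program (illustrative sketch).
// Build, assuming libbpf headers are installed:
//   clang -O2 -g -target bpf -c xdp_pass.c -o xdp_pass.o
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_pass(struct xdp_md *ctx)
{
    // The verifier proves this program terminates and cannot crash
    // the kernel before it is allowed to attach to a network device.
    return XDP_PASS; // hand every packet on to the normal stack
}

char LICENSE[] SEC("license") = "GPL";
```

Attaching it with `ip link set dev eth0 xdp obj xdp_pass.o sec xdp` takes effect immediately, with no kernel patch, module, or reboot, which is exactly the decoupling from kernel releases described below.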

Next let's dive into 10 ways the eBPF abstraction is helping evolve the cloud native stack, from speeding up innovation to improving performance.

Adding a new feature or functionality to the Linux kernel is a long process. In the typical patch lifecycle, you need to develop a patch, get it merged upstream, then wait until major distributions get released. Users typically stick to LTS kernels (for example, Ubuntu is typically on a two year cadence). So innovation with the traditional model requires kernel modules or building your own kernels, leaving most of the community out. And the feedback loop from developers to users is minimal to nonexistent. eBPF managed to break this long cycle by decoupling from kernel releases. For example, changes in Cilium can be upgraded on the fly with the kernel running and work on a large range of kernel releases. This allows us to add new cloud native functionality years before it would otherwise be possible.

New features can increase functionality, but also bring new risks and edge cases. Development and testing costs much more for kernel code versus eBPF code for the same functionality. The eBPF verifier ensures that the code wont crash the kernel. Portability for eBPF modules across kernel versions is achieved with CO-RE, kconfigs, and BPF type info. The eBPF flavor of the C language is also a safer choice for kernel programming. All of these make it safer to add new functionality to the kernel than patching directly or using a kernel module.

Traditional feedback loops required patching the in-house kernel, gradually rolling out the kernel to the fleet to deploy the change, starting to experiment, collecting data, and bringing the feedback into the development cycle. It was a very long and fragile cycle where nodes needed to restart and drain their traffic, making it impossible to move quickly especially in dynamic cloud native environments. eBPF decouples this feedback loop from the kernel and allows atomic program updates on the fly, dramatically shortening this feedback loop.

Instead of requiring rewrites of large parts of the user space stack, eBPF is able to piggyback on parts of the kernel and use them as-is while making integration dramatically easier. eBPF adds building blocks to the kernel that are too complex for other kernel subsystems, especially for new cloud native use cases. With eBPF, Cilium was able to easily add a NAT46/64 gateway to connect IPv6-only Kubernetes clusters to IPv4-based infrastructure.

Recently, eBPF was used to fix a kernel bug in the veth (virtual Ethernet) driver that was affecting queue selection. (See the eBPF Summit talk, All Your Queues Are Belong to Us.) This on-the-fly fix enabled by eBPF avoided complex rollouts of new kernels, an especially time-consuming process for cloud providers. Cloud native workloads can bring new edge cases to the kernel, but on-the-fly fixes with eBPF make packet processing more resilient and reduce the attack surface from bad actors.

Traditional virtualized networking functions, such as load balancers and firewalls, are solved at a packet level. Every packet needs to be inspected, modified, or dropped, which is computationally expensive for the kernel. eBPF reframed the original problem by moving as close to the event source as possible, toward per-socket hooks, per-cgroup hooks, and XDP (eXpress Data Path), for example. This resulted in significant resource cost savings and allowed the migration from dedicated boxes to generic worker nodes. Seznam.cz was able to reduce their load balancer CPU consumption by 72x using eBPF.

By using eBPF for forwarding, we allow many parts of the networking stack to be bypassed, greatly improving networking efficiency and performance. For example, with eBPF, Cilium was able to implement a bandwidth manager that reduced p99 latency by 4.2x. It also helped enable BIG TCP and a new veth driver replacement that lets containers achieve host networking speeds.

eBPF reduces the kernel's feature creep that slows down data processing by keeping the fast path to a minimum. Complex, custom cloud native use cases don't need to become part of the kernel. They simply become more building blocks in eBPF that can be leveraged in different edge cases. For example, by decoupling helpers and maps from entry points in eBPF, Cilium was able to create a faster and more customizable kube-proxy replacement in eBPF that can continue to scale when iptables falls short.

Given the churn in cloud native workloads, it can be difficult to find and debug issues. eBPF collectors make it possible to build low-overhead, fleet-wide tracing and observability platforms. Instead of having to modify application code or add sidecars, eBPF allows zero instrumentation observability. Troubleshooting production issues on-the-fly also can be done safely via bpftrace while allowing significantly richer visibility, programmability, and ease-of-use than old-style perf.
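As an illustration of that zero-instrumentation approach (again a sketch under the same assumptions, not code from the article), an eBPF program can count packets in a per-CPU map that a user-space collector reads, without touching the application at all:

```c
// pkt_count.c -- sketch of a per-CPU packet counter (illustrative).
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} pkt_count SEC(".maps");

SEC("xdp")
int count_packets(struct xdp_md *ctx)
{
    __u32 key = 0;
    __u64 *val = bpf_map_lookup_elem(&pkt_count, &key);
    if (val)
        (*val)++; // each CPU has its own slot, so no lock is needed
    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```

A collector can dump the map on whatever interval it likes (for example, with `bpftool map dump name pkt_count`); the hot path stays in the kernel and costs a single array lookup per packet.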

In cloud native environments, eBPF allows you to abstract away from high pod IP churn towards more long-lasting identities. IPs are meaningless given that everything is centered around pod labels and that the pod lifetime is generally very short with ephemeral workloads. By understanding the context of the process in the kernel, eBPF helps abstract from the IP to provide more concrete identity abstractions. With a secure identity abstraction for workloads, Cilium was able to build features like egress gateways for short-lived pods and mTLS.

Cloud native is shifting the requirements for platforms that need to support higher levels of performance and scalability along with constant change. Many of the Linux kernel building blocks that support these demanding workloads are decades old. Luckily, eBPF allows us to dynamically change the kernel to create abstractions that are ready for the cloud native world. eBPF is unlocking cloud native innovation, creating new kernel building blocks, and dramatically improving the performance of application platforms.

Bill Mulligan is a Cilium maintainer and heavily involved in the eBPF ecosystem. He works at Isovalent.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.
