
Comprehensive report of Internet Security Audit Market Projected to Gain Significant Value by 2026 – Northwest Diamond Notes

Growth analysis report on the Internet Security Audit market, segmented by application (Government, Education, Enterprise, Financial, Medical, Aerospace, Defense and Intelligence, Telecommunication, Other), by type (System Level Audit, Application Level Audit, User Level Audit), and by region (North America: U.S., Canada; Europe: Germany, France, U.K., Italy, Russia, Nordic, Rest of Europe; Asia-Pacific: China, Japan, South Korea, Southeast Asia), covering the regional outlook, opportunities, market demand, latest trends, growth and revenue by manufacturer, company profiles, and forecasts to 2026. The report analyzes the current market size and the industry's growth over the coming years.

The Internet Security Audit market report is a detailed summary of the industry, encompassing myriad details on its vital ongoing and future trends. Also included are details about the market's size, share, and present remuneration. The study projects that the market will procure substantial returns by the end of the forecast timeframe while recording a modest annual growth rate over that period. The report also identifies the specific driving parameters expected to propel that growth, and outlines the challenges, growth opportunities, and risks prevailing in the market.

The report begins with a basic overview of the market. It highlights the opportunities and industry trends shaping the global market, covers players across the various regions, and analyzes each market dimension. It also offers crucial insight into the factors driving and affecting the market's earnings, and includes a competitive-landscape section covering activities such as ventures, mergers, and acquisitions.

Request Sample Copy of this Report @ https://www.nwdiamondnotes.com/request-sample/44784

Our analysts prepared the report with reference to inventories and data provided by the key players.

The report offers a SWOT examination and return-on-investment analysis, and covers other aspects such as the principal regions, economic conditions, profit, production, demand, capacity, supply, and the market's growth rate and forecast.

The study was prepared with the major objective of outlining market sizes, including segments and sub-segments, over a fixed time period known as the forecast period. It combines qualitative and quantitative methods with descriptive analysis of the various geographies and market segmentations. It also examines elements such as market growth drivers and challenges, which analyze the market from different angles. To assess the market's future growth prospects, market opportunities, the competitive landscape, product offerings, market investments, and other market metrics were studied in detail.

Market segment by type, the product can be split into: System Level Audit, Application Level Audit, and User Level Audit.

Market segment by application, split into: Government, Education, Enterprise, Financial, Medical, Aerospace, Defense and Intelligence, Telecommunication, and Other.

Market segment by regions/countries, this report covers:

United States

Europe

China

Japan

Southeast Asia

India

Central & South America

Quantifiable data:

Market Data Breakdown by Key Geography, Type & Application / End-User

By type (historical and forecast)

Application-specific sales and growth rates (historical and forecast)

Revenue and growth rate by market (historical and forecast)

Market size and growth rate by application and type (historical and forecast)

Research objectives and reasons to procure this report:

To study and analyze global consumption (value and volume) by key regions/countries, product type, and application, using historical data from 2020 and forecasting to 2026.

To understand the structure of Internet Security Audit Market by identifying its various sub-segments.

To receive comprehensive information about the key factors influencing the market growth (opportunities, drivers, industry-specific challenges and risks).

To analyze competitive developments in the market such as expansions, agreements, new product launches, mergers, and acquisitions.

To strategically outline the key players in the market and extensively analyze their growth strategies.

Finally, the global Internet Security Audit market report delivers an overall research conclusion and assesses the feasibility of investing in new projects. The report serves as a source of guidance for organizations and individuals interested in the market's earnings.

Request Customization on This Report @ https://www.nwdiamondnotes.com/request-for-customization/44784


Honeywell's Anthem System Connects The Cockpit To The Cloud For Returns That Come With Risk – Forbes

Honeywell's cloud-connected Anthem cockpit system offers "always on" connectivity and a smartphone-like user interface.

Honeywell Aerospace is touting the benefits of its new Anthem flight deck system, an always-on cloud connectivity platform that it claims will improve flight efficiency, operations, safety and comfort. But whether connecting the cockpit of a bizjet or Urban Air Mobility vehicle to the internet 24/7 provides sufficient benefit to outweigh its risk is a daunting question.

The cabins of modern business jets already connect to the internet via the cloud on a routine basis. But with their relatively newfound ability to interact with the cloud/internet for extended windows, bringing busy VIPs live-streaming or videoconference calls, has come the recognition that such convenience comes with vulnerability.

In fact, the International Civil Aviation Organization designated 2020/2021 as the Year of Security Culture, calling for a cybersecurity action plan for all sectors of aviation (including business and air transport) in response to the many cyber threats.

These have arisen in a post-pandemic environment in which highly placed or high net-worth individuals are spending more time aboard corporate/private business aircraft to bypass the risks and individual liberty-inhibiting hassles of commercial air travel.

Combining the connectivity-enhanced properties of aircraft cabins with the more highly prized information of the individuals and enterprises which travel in them sets the motivational table for data breaches and other cyber malfeasance.

With Anthem, it could be argued that Honeywell is inadvertently setting another place at the table for unauthorized access to flight deck information, despite its best intentions.

Making Pilots' Lives Easier All The Time

Honeywell says its Anthem cloud-connected flight deck system will make life easier for pilots, like this pair striding from a Bombardier Challenger 350.

Vipul Gupta, vice president and general manager of avionics for Honeywell, is keen to stress that Anthem is the first comprehensive cloud-connected cockpit system on the market.

"There are lots of [aircraft] systems which can connect to the cloud," he says. "What we're trying to drive differently with Anthem is being always connected, not just when you're on the ramp. It's always connected and architected in a way to provide that capability in the future."

The near- to mid-term future is key for Honeywell, eager to regain market share from Garmin International, which has come to dominate the general aviation and, increasingly, business aviation avionics markets over the past couple of decades. Those old enough to remember when a Bendix/King avionics panel was the gold standard (Bendix/King is now a Honeywell brand) will understand the primacy to which the company would like to return.

Anthem's always-connected quality and user friendliness theoretically pave a way for that return. Even when an aircraft so equipped sits on a patch of tarmac, powered down, cold and dark, Honeywell's Integrated Network Server Unit (INSU) is running on battery power, keeping the cloud connection active.

The INSU connects the Anthem flight deck to the internet via WiFi or 4G LTE cellular connections on the ground. In the air, it connects through high-speed Ka/Ku-band satellite links.
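The link-switching behavior described above can be sketched in a few lines. This is purely an illustrative Python sketch, not Honeywell's implementation; the link names and preference order are assumptions:

```python
# Illustrative sketch of phase-of-flight link selection: prefer WiFi,
# then LTE on the ground; prefer Ka-band, then Ku-band satellite in the air.

def select_link(on_ground: bool, available: set) -> str:
    """Pick the best available uplink for the current flight phase."""
    preference = ["wifi", "lte"] if on_ground else ["ka_sat", "ku_sat"]
    for link in preference:
        if link in available:
            return link
    return "none"  # no coverage: a real system would queue data locally
```

For example, `select_link(True, {"wifi", "lte"})` prefers the WiFi link, while an airborne aircraft with only Ku-band coverage falls back to the satellite link.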

"In so doing, Anthem doesn't just bring internet into the forward display stack," Gupta says. It provides unprecedented ease of access to information, including third-party applications, to the flight crew at any point in a mission.

"When we say we have an always-on, cloud-connected avionics or flight deck suite, it ultimately has a purpose of reducing pilot workload. It will make lives easier for pilots and everyone associated with that flight: maintenance technicians, operations directors," Gupta affirms.

Anthem's touch-and-swipe interface plays its part in easing information access. Though he admits he loves buttons and knobs, Gupta explains that Honeywell told Anthem developers they could use only the company's flat panel displays when designing the control interfaces for the system. The resulting smartphone-like UI can speed pilot tasks and minimize interruption in flight, Gupta says.

Gupta relates an example wherein a bizjet flight from Phoenix to London has just reached cruise altitude. The pilot is making some flight plan changes via the instrument panel or a tablet in response to weather variations when air traffic control interrupts that task to warn of traffic with instructions to contact another Traffic Center on a different frequency.

"The amount of time which the pilot devotes, from an interruption perspective, is quite high," Gupta maintains.

With Anthem, a pilot could simply type a new frequency into the smart scratch pad window, and the system will prompt for selection into the correct field (COM1/COM2) while remaining on the flight planning page.

"You don't have to go back to the radio tuning page; you don't have to get out of the flight planning page," Gupta says. "You just put information into the smart scratch pad and the system will prompt you. Once you put it in, the system automatically takes you back to the page you were in."

If this feature saves time and work in flight, as Honeywell maintains, the savings are marginal. When it's pointed out that such a cross-ocean flight would have a pilot and co-pilot, the latter of whom typically copies radio traffic and adjusts comms, Gupta acknowledges the small advantage such a feature would yield.

For single-pilot operations it might be different. But autonomy is likely the main point. The company's press release notes that Anthem "supports growing levels of aircraft autonomy, leading to complete autonomous capabilities in the future as regulations allow."

Neutral pilot reviews should eventually tell us whether Anthem truly reduces workload or whether its value-add is mostly marketing. Whether obviating the need for pilots entirely makes things easier is a conundrum those reviewers may want to take up as well.

In the shorter term, the benefits of its cockpit connectivity may best be seen in terms of remote flight planning, according to Honeywell.

Any Time, Every Time

Remote flight plan loading is a headline Anthem capability. Vipul Gupta asserts that it's a precedent-setting feature: "The ability to [remotely plan/load] any time, whenever you want, is not there today."

Indeed, the example Gupta gives would be precedent-setting.

"There is very deep integration with electronic flight bag applications, the capability to complete a flight plan and then upload that flight plan from the hotel room straight into the airplane."

Honeywell asserts that this remote flight planning and uploading can dramatically reduce pilots' preflight preparation time, by up to 45 minutes per flight.

However, when one considers that the vast majority of business aviation flight plans are known, canned routes, the "up to 45 minutes" claim looks spurious. Gupta concedes the point, as well as the fact that spur-of-the-moment flight plans are routinely crafted, sent, and approved in 15 minutes or so.

Nevertheless, loading a flight plan while riding the WiFi from the Hilton or Embassy Suites certainly would be a step up from transferring critical data like maintenance status and flight plans via wired connections or drives at the airplane. One noted cybersecurity expert we ran it past on background said it would also be a tremendous cyber risk.

That risk appears more pervasive than ever. Earlier this month, the heavily defended Reserve Bank of Australia characterized the possibility of a potentially destabilizing attack on Australia's financial system as inevitable. The layered cybersecurity of Volkswagen AG was breached, along with that of three other multinational firms, in the same month this summer. Forbes recently reported that the cybercriminal group SnapMC is breaching corporate systems and issuing extortion threats in 30 minutes or less.

Honeywell seems undaunted by the possibility that Anthem could be a conduit to data theft, monitoring or aircraft disruption.

"You can always say, no connectivity on the airplane. That's an easy answer for anyone making a [digital] flight deck today," Gupta asserts. "We've chosen to have full connectivity with the flight deck with the full realization that cybersecurity is the number one concern for us."

As such, the company has created an internal organization which supports cybersecurity 24/7, Gupta says. Anthem has been designed with zero-trust architecture baked in, Honeywell adds, aligning with NIST's 800-207 cyber standard. It also carries the spirit of this standard within the Anthem gateway for internal communication.
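Zero trust in this context means the gateway trusts no message by default. As a rough illustration (a Python sketch; the message types and field names are assumptions, not Honeywell's actual traffic), a default-deny allow-list filter between the communications domain and the avionics domain might look like:

```python
# Hypothetical default-deny filter at a flight-deck gateway: only
# explicitly allow-listed message types, with exactly the expected
# fields, are admitted; everything else is dropped.

ALLOWED_INBOUND = {
    "FLIGHT_PLAN_UPDATE": {"route", "waypoints", "signature"},
    "WEATHER_TILE": {"region", "payload", "signature"},
}

def filter_inbound(message: dict) -> bool:
    """Admit a message only if its type is allow-listed and it carries
    exactly the expected fields."""
    expected = ALLOWED_INBOUND.get(message.get("type"))
    if expected is None:
        return False  # unknown message types never cross the partition
    return set(message) - {"type"} == expected
```

Under this rule, an unexpected message type (say, a flap command arriving from the cabin network) or an allow-listed type carrying extra fields is rejected rather than forwarded.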

The gateway provides hardware partitioning between avionics and communications. It's a logical, vital safeguard, but one clouded by Gupta's revelation that the third-party applications which Anthem can host aren't limited to weather, radar, maintenance, or catering apps.

Honeywell is also working with airframers and their partners to provide OEM Autonomy as a capability. Anthem can host aircraft system management apps (flap controllers, battery management systems, fuel computers) as software on its processing modules.

Whether such critical hosted applications could be accessed is up for argument, as is the efficacy of cybersecurity in general. Another expert reminded us of a line from the movie Anchorman. Referring to the alluring cologne he uses, Ron Burgundy's broadcast cohort, Brian Fantana, says, "60% of the time, it works every time."

A couple of aviation security insiders were willing to go on record about Anthem. Both contend that such systems are the way of the future and that Honeywells timing is appropriate.

Chris Bartlett, president of CCX Technologies, which makes cybersecurity-focused cabin routers, components, and security plans, cautioned, "This new product deserves an immense amount of thought, research, and development around cyber security to ensure it functions in a highly secure way and does not become a vulnerability."

Britton Wanik is VP of marketing with the air-to-ground network provider SmartSky Networks, for which Honeywell is a value-added reseller. He observes that there are many landmines for the kind of remote flight planning examples Honeywell posits. "It's a concept that goes back at least 20 years," he adds, and it does raise concerns.

But those are surmountable problems that can be solved with existing security tools.
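One such existing tool is message authentication: signing a flight plan with a shared-secret HMAC so the receiving system can reject anything tampered with in transit. A minimal Python sketch, with the field names and key handling assumed purely for illustration:

```python
# Hedged sketch of HMAC-based integrity protection for a remotely
# uploaded flight plan. Real deployments would layer this under TLS
# and a proper key-distribution scheme.
import hashlib
import hmac
import json

def sign_plan(plan: dict, key: bytes) -> str:
    """Produce a SHA-256 HMAC tag over a canonical encoding of the plan."""
    body = json.dumps(plan, sort_keys=True).encode()
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify_plan(plan: dict, tag: str, key: bytes) -> bool:
    """Accept the plan only if its tag matches; compare_digest avoids
    leaking information through comparison timing."""
    return hmac.compare_digest(sign_plan(plan, key), tag)
```

A plan signed in the hotel room verifies at the aircraft only if both the payload and the shared key are intact; altering a single waypoint invalidates the tag.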

Wanik sees a bigger challenge for Anthem security in the airborne environment, where the latency and unreliability of connectivity yield an unstable, often insecure connection to the aircraft. SmartSky's low-latency, high-bandwidth networks provide a solution, he says.

The human element poses just as much of a challenge. Vipul Gupta's affirmation that everyone who touches a flight is able to get the information that matters to them, when they need it, via Anthem is also a reminder that individuals sometimes have malevolent intentions.

Anthem guards against these with internal processes and no single point of failure with respect to safety-of-flight and other information, Gupta says. His contention that human-enabled exploits are "probably less than .01%" of the threat might be weighed against a recent report from Verizon which concluded that 85% of cybersecurity breaches involve the human element.

Theres also a question as to whether interested parties could build an electronic profile (as done in cyber circles) of Anthem-configured aircraft for the purpose of monitoring their movements and electronic activity.

Thanks to the FAA-required Automatic Dependent Surveillance-Broadcast (ADS-B) Out ATC feature, the public can freely see when a general aviation, business or air transport aircraft is airborne, read its altitude, N-number and departure/destination information.

But ADS-B allows GA and business aircraft to opt out, rendering their tail numbers, origin/destination, and flight information unreadable. The option was not lost on Honeywell, and it is where Anthem's "always on" mantra takes a pause.

"Connectivity can always be stopped if [the customer] chooses to do so," Gupta acknowledges. "I fully expect that in a business aviation environment, protecting the tail number and information on flying from where to where will probably be crucial. In an air transport environment it will be a different story."

Scheduled UAM

The seven-seater Lilium approaches New York City in this artist's rendering. If the scenario becomes real, Honeywell's Anthem system may dominate the flight deck.

Air transport is another point of focus for Honeywell which sees Anthem in numerous cockpits of the UAM variety. One of the keys to Anthem seen both by outside observers and within Honeywell is its scalability. Its size, weight and power requirements can be scaled to fit a large bizjet, a GA piston-single, or a small 4-passenger UAM eVTOL aircraft.

Contrary to analysts who see the UAM market emerging as a high-cost, business oriented on-demand transport mode akin to chartered helicopter service, Honeywell sees the segment in scheduled-service airline terms.

"Some of the early [UAM] segments which we see coming out are more like air transport operations rather than business aviation," Gupta maintains. Four to five regularly scheduled UAM flights per day between paired destinations exemplify an operational model that Honeywell believes will be a firm market for Anthem.

The company has already stood up a dedicated UAM organization, and its work with would-be UAM provider and customer Lilium obviously informs its outlook. Gupta says Honeywell expects Lilium's eVTOL transport to be certified by late 2023 and operational in 2024.

If that comes to pass, Anthem will ride along and the challenges it will have to surmount in the dense RF environment of proposed UAM operations will require balancing the benefits of flight deck connectivity with the risks in an even more thorough-going way.


Billions of Google Chrome Users At Risk of New High-Level Hacks | Here’s What You Need To Do – Tech Times

Google Chrome users are currently at risk of new high-level hacks, as the search engine giant confirmed. Because of this, the tech developer issued a warning to a total of 2.65 billion consumers across the globe, saying that they discovered new malicious campaigns in the browser.

This is the third time that Google has issued a high-alert-level warning to its Chrome users. The company also published a new blog post specifying the high-level and medium-level vulnerabilities.



The giant tech firm confirmed a total of five severe flaws in its popular Chrome browser.

According to Google's official blog post, the company's security team discovered a total of five high-level browser flaws, eight medium-level vulnerabilities, as well as two low-level issues.


On the other hand, Forbes reported that Chrome was also affected by UAF (use-after-free) exploits more than ten times back in September. Aside from this, Google also suffered from a zero-day UAF exploit during that period.

Google is just one of the companies currently targeted by cybercriminals. Recently, it was reported that Twitch hackers targeted the popular streaming platform for hours.

On the other hand, an SMS routing company was also hacked. Experts said that the malicious campaign against Syniverse lasted for five years.

Since Chrome users are facing severe browser flaws, Google decided to release a critical update. The new Chrome version 95.0.4638.54 is expected to prevent and fix the mentioned vulnerabilities in the company's popular browser service.

To check for the update, open Chrome's Settings. From there, go to the Help section and choose the "About Google Chrome" option. More details will be provided once you are there.

For more news updates about Google Chrome and other popular browsing services, always keep your tabs open here at TechTimes.


This article is owned by TechTimes

Written by: Griffin Davis

2021 TECHTIMES.com All rights reserved. Do not reproduce without permission.


iPhone 13 Pro Running iOS 15 Hacked in Just 1 Second and We're Not Even Kidding! – Beebom

Apple is a company that has always touted privacy as one of the key selling points of its devices. If you have ever watched an Apple launch event, you might know how many times the Cupertino giant describes its newest devices, be they iPhones, iPads, or Macs, as the most secure ever. However, at a recent hackathon competition, some Chinese white-hat hackers broke into Apple's latest iPhone 13 Pro running iOS 15.0.2 in a mere second! It was an achievement, and for that, they bagged a $300,000 cash prize.

During the recent hacking championship in China known as the Tianfu Cup, not one but two hacking teams were able to break into the iPhone 13 Pro in a matter of seconds. As per the competition's official website, participating teams had to break into the iPhone 13 Pro to gain control of the phone while it ran the latest iOS 15.0.2 version.

There were three tiers of rewards for hacking the iPhone 13 Pro. For remote code execution (RCE), the prize was $120,000, for RCE plus a sandbox escape, the reward was $180,000, and for the remote jailbreak of the device, the prize money was $300,000.

Of the two winning teams, Team Pangu, a popular name in the iPhone jailbreak community, was able to remotely jailbreak the iPhone 13 Pro in a record time of 1 second. It is no joke, and pretty surprising, that the hacking group was able to get into the iPhone 13 Pro's system, which Apple calls its most secure, so quickly and effortlessly. Evidently, though, the team had been preparing for the competition for a generous amount of time.

Another team, from China's Kunlun Lab, was able to exploit a vulnerability in Safari for iOS 15 to get into the iPhone 13 Pro. The CEO of Kunlun Lab, who is also the former CTO of the internet security company Qihoo 360, broke into the device live in merely 15 seconds.

Both the teams won a big cash reward for their achievements. They are expected to contact Apple to inform them about the vulnerabilities, so the company could deploy a fix with a future update.


Norton Consumer Cyber Safety Pulse Report Finds Tech Support Scams are the No. 1 Phishing Threat – KY3

New Threat Insights Identified Across Gaming, Banking, Gift Cards and Religious Institutions

Published: Oct. 19, 2021 at 8:00 AM CDT

TEMPE, Ariz., Oct. 19, 2021 /PRNewswire/ -- NortonLifeLock's global research team, Norton Labs, today published its third quarterly Consumer Cyber Safety Pulse Report, detailing the top consumer cybersecurity insights and takeaways from July to September 2021. The latest findings show that tech support scams, which often arrive as a pop-up alert convincingly disguised using the names and branding of major tech companies, have become the top phishing threat to consumers. Tech support scams are expected to proliferate in the upcoming holiday season, along with shopping- and charity-related phishing attacks.1

Norton blocked more than 12.3 million tech support URLs, which topped the list of phishing threats for 13 consecutive weeks between July and September. The effectiveness of this type of scam has escalated during the pandemic due to consumers' increased reliance on their devices to manage hybrid work schedules and family activities.

"Tech support scams are effective because they prey on consumers' fear, uncertainty and doubt to trick recipients into believing they face a dire cybersecurity threat," says Darren Shou, head of technology, NortonLifeLock. "Awareness is the best defense against these targeted attacks. Never call a number listed on a tech support pop-up, and instead reach out to the company directly through their official website to validate the situation and next steps."

Norton successfully blocked nearly 860 million Cyber Safety threats over the past quarter, including 41 million file-based malware threats, 309,666 mobile-malware files, nearly 15 million phishing attempts, and 52,213 ransomware detections.

Additional findings from the Consumer Cyber Safety Pulse Report include:

For more information and Cyber Safety guidance, visit the Norton Internet Security Center.

About NortonLifeLock Inc.
NortonLifeLock Inc. (NASDAQ: NLOK) is a global leader in consumer Cyber Safety, protecting and empowering people to live their digital lives safely. We are the consumer's trusted ally in an increasingly complex and connected world. Learn more about how we're transforming Cyber Safety at http://www.NortonLifeLock.com.

###

____________________

1 No one can prevent all cybercrime or identity theft.


SOURCE NortonLifeLock Inc.

The above press release was provided courtesy of PRNewswire. The views, opinions and statements in the press release are not endorsed by Gray Media Group nor do they necessarily state or reflect those of Gray Media Group, Inc.


GITEX 2021: Investing in cyber protection with Acronis – ITP.net

Exhibiting the company's flagship product Acronis Cyber Protect Cloud at GITEX 2021, Mareva Koulamallah, Head of Marketing and Communication MEA, spoke to ITP.net about future trends and the pressing need for online security.

The increased use of the internet, especially during the pandemic, has exposed organisations and individuals to a series of cyber threats. While some organisations have a cybersecurity strategy in place to protect their valuable data, others are still not quite there yet and pretty much lagging behind. GITEX gives us the perfect platform to amplify the conversation about the need for cyber protection. We shall be using this opportunity to educate IT teams on matters around cyber protection while at the same time highlighting our flagship product Acronis Cyber Protect Cloud and we will have a surprise with one of our sports partners too.

Some of the key discussions we are looking forward to exploring at this years GITEX event include conversations with Managed Service Providers (MSP) and Service Providers (SP). We want to understand their needs and how we can help them grow their business while protecting their assets and their customers.

Etisalat, in Zabeel Hall 1: they are always at the forefront of innovation and amazing hosts for the latest in technology, as they always bring all sorts of incredible and futuristic prototypes to their booths. This year, we have partnered with Etisalat to bring, for the very first time in the Middle East, a breathtaking innovation that will bring joy to the sporting world and especially racing fans. Acronis and Etisalat will be co-hosting Airspeeder's prototype, the first electric manned flying car, in a mission to show how technology and innovation can be used to develop interesting sporting activities for the future.

We were already present for the last few years and are always happy to be able to support the region and GITEX. We definitely see a lot of value in having in-person meetings back. Now, we can still continue with online meetings and events from time to time, as it allows us to manage costs, but any time we will have an opportunity where it is relevant for us to attend, we will. In-person meetings and events allow us to actively engage with our core audience. This way we are able to get instant feedback about our products and services. These insights, in turn, help us make our products and services better, as well as improve our customer experience.

We launched our solution around the beginning of the pandemic, in order to respond to an accrued critical need around the right cyber protection solution and an easily deployable tool that can be integrated to existing systems and used remotely with multiple teams. Due to the dynamic nature of consumer needs and preferences, we are constantly making upgrades to the tool. These upgrades are largely driven by the feedback we receive from our partners or customers.

We have been attending GITEX for several years and this year is no different. Despite the prevailing circumstances, economies across countries are bouncing back and the UAE is leading the way on the global scale around this trend. We know the organisers of GITEX quite well and the quality of support they provide in order for us to get the most out of the event, which made our participation this year a no-brainer.

The pandemic has definitely boosted and accelerated innovation and research across segments and industries; from technology to pharmaceuticals. For instance, we have even added new features to our own products such as the remote desk control option. Indeed, smaller organizations were seeking alternatives that could allow them to continue to operate within a heavily digitalized world.

Competition is good as long as it is healthy. Having people with various innovations can only push the next person or company to want to improve, not just for themselves but for our society. Prototypes or solutions that are created for a particular project or market could end up being used by a larger audience a few years down the line. It happened to planes, cars, computers, phones, cameras, and more. So, let's continue to encourage innovation across all sectors.

It might not be directly innovation-related, but it still has a great impact on innovation: I would say increased investment in diversity, of all sorts and in all aspects. We need people who have different outlooks on life and various creative minds to continue to innovate. Otherwise, I would say to watch out for flying cars!

Read more from the original source:
GITEX 2021: Investing in cyber protection with Acronis - ITP.net

Read More..

Pushing to the edge with hybrid cloud – iTWire

Over the last few years, technology has evolved through an acceleration of innovation across industries, bringing forth new combinations of technologies, new use cases and new business models. Technologies such as the Internet of Things (IoT), cloud computing, machine learning and big data have combined to solve business challenges that plagued industries for decades.

What is a hybrid cloud?

According to IBM, a hybrid cloud is an infrastructure that connects at least one public cloud and at least one private cloud, but the definition can vary. A hybrid cloud provides orchestration, management, and application portability between public and private clouds to create a single, flexible, optimal cloud infrastructure for running a company's computing workloads.

Despite the growth in public cloud computing, enterprises often need to use a combination of public and private (on-prem) clouds. Often overlooked in the hype around public cloud computing, private clouds offer greater flexibility, security and compliance.

A private cloud environment is generally accessible only through private and secure network links, rather than the public internet. Industries such as healthcare and finance have specific regulations about storing and processing data and thus favour using private clouds. A company can run a private cloud on-premises in its data centre, local server room or access it as a securely hosted offering by a cloud service provider (CSP).

Crucially, hybrid cloud computing enables companies to accelerate their digital transformation efforts, particularly if they work with legacy hardware and infrastructure. They can extend their existing infrastructure by adding one or more public cloud deployments, modernizing applications and processes in stages rather than undertaking a complete digital transformation upheaval.

What is computing on the edge?

IoT technology is ubiquitous, with connected devices collecting more and more information through sensors, cameras, accelerometers, LiDAR and depth sensors. All this information requires collection, storage, processing and analysis to create data-driven insights. Some of this data comes from mission-critical applications where a split-second delay can have significant consequences, for example in factories, smart traffic consoles, insulin pumps, and smoke and noxious gas monitoring.

As a consequence, edge computing use cases have grown. Edge computing places processing (and some storage) capabilities close to the data source, enabling fast data analysis in real time. It's particularly useful in poorly connected environments such as oil refineries, mines and wells. Companies are moving more of their compute and financial investments toward edge computing. Grand View Research predicts that companies will spend $43.4 billion on edge computing by 2027, a compound annual growth rate of 37.4%.

Despite the predictions of some analysts, this does not mean the death of cloud computing. Cloud computing and edge computing have a beneficial functional relationship. And this relationship extends the hybrid cloud concept.

According to Gartner, "Edge computing augments and expands the possibilities of today's primarily centralized, hyper-scale cloud model and supports the systemic evolution and deployment of the IoT and entirely new application types, enabling next-generation digital business applications."

Combining hybrid cloud and edge computing

A hybrid environment with workloads at the edge and in various cloud locations offers advantages to companies seeking greater efficiency and cost savings. Running business- and time-critical workloads at the edge ensures low latency and self-sufficiency. This means transactions can occur even in rugged environments where internet connections are poor.

Take the example of industrial IoT and a factory that uses sensors to monitor machines for temperature, sound, pressure and vibration. The factory can use a locally hosted compute device from a nearby cloud provider, or even something like a Raspberry Pi, to process, filter and aggregate data from the machines in near real-time. If this edge compute instance detects an urgent anomaly, it can generate an alert for investigation. During regular operation, it can send the filtered and aggregated data to a public cloud instance to perform further analysis, machine learning processing, decision making and storage with a service that provides better efficiency and value for such tasks.
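A minimal sketch of this edge pattern, using hypothetical field names and an invented vibration threshold (the real pipeline would depend entirely on the sensors and cloud services in use):

```python
from statistics import mean

VIBRATION_ALERT_THRESHOLD = 8.0  # hypothetical anomaly limit for illustration

def process_window(readings):
    """Filter and aggregate one window of sensor readings at the edge.

    Returns (aggregate, alerts): a compact summary destined for the public
    cloud, plus any urgent anomalies flagged for immediate local action.
    """
    alerts = [r for r in readings if r["vibration"] > VIBRATION_ALERT_THRESHOLD]
    aggregate = {
        "avg_temp": mean(r["temp"] for r in readings),
        "max_vibration": max(r["vibration"] for r in readings),
        "sample_count": len(readings),
    }
    return aggregate, alerts

readings = [
    {"temp": 61.2, "vibration": 2.1},
    {"temp": 63.8, "vibration": 9.4},  # anomalous vibration
    {"temp": 62.0, "vibration": 1.8},
]
summary, alerts = process_window(readings)
print(summary["sample_count"], len(alerts))  # 3 readings in, 1 alert out
```

The point of the design is data reduction: only the small `summary` dictionary travels to the public cloud, while raw readings stay (and alerts fire) at the edge.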

Connected cars, which are effectively data centres on wheels with hundreds of in-car sensors creating a deluge of data, are another example. Autonomous driving systems, such as those tested by Equinix customer Continental, must aggregate, analyse and distribute that data, as well as data from other sources such as traffic and weather information, in real time with all the necessary security and privacy controls in place. And as the degree of autonomy advances (from level one for some driver assistance to level five for fully autonomous), the amount of data to aggregate and analyse will continue to soar. Current test drives for L2 autonomy are generating up to 20 terabytes (TB) of data a day, while more advanced sensor sets for higher levels of autonomy (L4 and above) may generate up to 100 TB/day.

A car needs some of this data in real time to make split-second decisions, like whether to change lanes or whether the road is clear of pedestrians. The processing of this data could happen on the onboard computer or on any available local edge compute instances the vehicle happens to be near at the time. When the car returns to a WiFi connection, it can then upload any other, less important data to a public cloud instance, receive software and machine learning model updates, let a driver review their data, or let the manufacturer download it for analytical purposes. The communication between edge computing and the rest of the hybrid cloud needn't be in one direction. Once compute services have processed, analyzed and reached decisions on the data they have, they can then push relevant updates to edge compute instances.
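The split between split-second edge decisions and deferred cloud uploads can be sketched as a simple urgency router. The event types and field names below are purely illustrative, not any vendor's schema:

```python
import json

# Hypothetical urgency routing for connected-car data: safety-critical event
# types are handled on the local/edge compute path immediately, while
# everything else is queued for upload once the car is back on WiFi.
REALTIME_EVENTS = {"pedestrian_detected", "lane_departure"}

def route_event(event, deferred_queue):
    if event["type"] in REALTIME_EVENTS:
        return "edge"  # split-second decision, processed locally
    deferred_queue.append(json.dumps(event))  # serialized for later upload
    return "deferred"

queue = []
print(route_event({"type": "pedestrian_detected"}, queue))          # edge
print(route_event({"type": "telemetry_batch", "size_mb": 512}, queue))  # deferred
```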

Are you looking to introduce a hybrid cloud solution?

Like many other aspects of modern infrastructure, containers and orchestrating them with Kubernetes can help standardize edge and cloud deployments. Kubernetes' standard runtime layer enables you to develop, run and operate workloads consistently across computing environments and move workloads between edge and cloud.

Equinix Metal provides the foundational building blocks that give businesses the ability to create and consume interconnected infrastructure with the choice and control of physical hardware and the low overhead and developer experience of the cloud. Digital leaders use Equinix Metal to create a digital advantage by activating infrastructure globally, connecting it to thousands of technology ecosystem partners, and leveraging DevOps tools to deploy, maintain and scale their applications. This means that on-demand bare metal servers with dedicated GPUs optimized for edge-type workloads such as machine learning are within your reach.

Metal integrates with a range of common hybrid cloud tooling such as Anthos, VMware Tanzu and Red Hat OpenShift, allowing public cloud vendors and users alike to leverage existing infrastructure and tooling.

Equinix Fabric supplements Equinix Metal by offering software-defined interconnection to connect Equinix Metal and your other infrastructure together, including all leading cloud providers. Equinix Fabric helps companies who want to take advantage of hybrid multi-cloud but need to reinforce privacy and security for data as it travels between edge and public cloud locations. On top of providing these security guardrails, Equinix Fabric is affordable and performant, not adding any other overheads to applications.

To learn more about how to enable the hybrid cloud for your organisation today, download the Equinix Whitepaper on enabling the hybrid cloud.

By Equinix

References

View original post here:
Pushing to the edge with hybrid cloud - iTWire

Read More..

7 Open Source Cloud-Native Tools For Observability and Analysis – Container Journal

In 2021, observability is close to gaining buzzword status. This is perhaps because, for years, monitoring wasn't standardized in software development. Tracing was given little forethought, and applications produced logs in varying formats and styles. Without unifying layers to analyze a growing number of services, this led to a chaotic mess of jumbled application analysis.

Now, with cloud-native technology, engineers are keen not to repeat these mistakes of the past. Also, with increased user expectations and the demands of digital innovation, there is now more focus on maintaining overall stability, performance and availability. This has given rise to the growth of observability and analysis tools. These open source projects are making logs more actionable, tracing events with detailed metadata, and exposing valuable metrics from Kubernetes environments. Such insights can inform business metrics, help pinpoint bugs and spur quick recovery measures. For these reasons, deep observability across the cloud-native application stack is a must.

So, below we'll explore six well-established CNCF projects related to observability, telemetry and analysis. Many of these projects help collect and manage observability data such as metrics, logs and traces.

The popular monitoring system and time series database

GitHub | Website

Prometheus is the most popular graduated CNCF project related to observability and likely needs no introduction, as many engineers are already familiar with it. Large companies such as Amadeus, Soundcloud, Ericsson and others already use Prometheus to power their monitoring and alerting systems.

Prometheus has built-in service discovery and functions by collecting data via a pull model over HTTP. It then stores metrics organized as time-series key-value pairs. These metrics can be customized to the application at hand and set to trigger alerts; for example, an e-commerce site may need to identify slow load times to stay competitive. Prometheus has great querying abilities; the PromQL query language can be used to search data and generate visualizations.
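To make the "time-series key-value pairs" idea concrete, here is a toy model of Prometheus's data model in plain Python: each unique combination of metric name and label set identifies one series of (timestamp, value) samples. This is only a sketch of the organization, not Prometheus's actual storage engine, and the metric and label names are invented:

```python
from collections import defaultdict

# Each key is (metric_name, sorted label pairs); each value is the
# chronological list of (timestamp, value) samples for that series.
series = defaultdict(list)

def record(name, labels, value, ts):
    key = (name, tuple(sorted(labels.items())))
    series[key].append((ts, value))

record("page_load_seconds", {"page": "/checkout"}, 1.9, 1000)
record("page_load_seconds", {"page": "/checkout"}, 2.4, 1015)
record("page_load_seconds", {"page": "/home"}, 0.3, 1000)

# Rough analogue of a PromQL selector picking out one series by label:
checkout = series[("page_load_seconds", (("page", "/checkout"),))]
print(len(checkout))  # 2 samples for the /checkout page
```

A PromQL query like `page_load_seconds{page="/checkout"}` performs essentially this label-based lookup, then layers functions and aggregations on top.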

A Prometheus environment comprises the main Prometheus server, client libraries, a push gateway, special-purpose exporters, an alert manager and various support tools. To get started, developers can review the getting started guide here.

Open source, end-to-end distributed tracing

GitHub | Website

With the move toward distributed systems, the process of debugging, networking and supporting observability for many components has become exponentially more challenging. Jaeger is one project that aims to solve this dilemma; it's designed to monitor and troubleshoot transactions in complex distributed systems. According to the documentation, its features are as follows:

Jaeger works by implementing various APIs for retrieving data. This data follows the OpenTracing standard, which organizes traces into spans; each span captures granular details such as the operation name, a start timestamp, a finish timestamp and other metadata. Jaeger backend modules can export Prometheus metrics, and logs are structured using zap, a logging library.
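A rough sketch of the span structure the OpenTracing model describes: an operation name, start and finish timestamps, and free-form metadata (tags). The class and field names here are illustrative, not Jaeger's wire format or client API:

```python
import time
import uuid

class Span:
    """Toy span: operation name, start/finish timestamps, and tags."""

    def __init__(self, operation_name, trace_id=None):
        self.trace_id = trace_id or uuid.uuid4().hex  # groups spans into a trace
        self.span_id = uuid.uuid4().hex
        self.operation_name = operation_name
        self.start_time = time.time()
        self.finish_time = None
        self.tags = {}  # free-form metadata

    def set_tag(self, key, value):
        self.tags[key] = value

    def finish(self):
        self.finish_time = time.time()

    def duration(self):
        return self.finish_time - self.start_time

span = Span("fetch-user-profile")
span.set_tag("http.status_code", 200)
span.finish()
```

In a real system, each service emits spans like this sharing a trace ID, and Jaeger's backend stitches them into an end-to-end picture of one request.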

A unified logging layer

GitHub | Website

Fluentd is a logging layer designed to be decoupled from backend systems. The philosophy is that a Unified Logging Layer can eliminate the chaos of incompatible logging formats and disparate logging routines.

Fluentd can track events from many sources, such as web apps, mobile apps, NGINX logs and others. Fluentd centralizes these logs and can also port them to external systems and database solutions, like Elasticsearch, MongoDB, Hadoop and others. To enable this, Fluentd sports over 500 plugins. Using Fluentd could be helpful if you need to send out alerts in response to certain logs or enable asynchronous, scalable logging for user events.
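The routing idea, tag-matched events flowing from many sources to many sinks, can be modelled in a few lines. This is a conceptual sketch only; real Fluentd routing is configured declaratively with `<match>` directives, and the patterns and sink names below are invented:

```python
import json

# Toy model of tag-based log routing: every event is normalized into one
# unified JSON structure, and match patterns decide which backend(s)
# receive it. "nginx.*" and "app.*" stand in for configured outputs.
outputs = {"nginx.*": [], "app.*": []}

def match(pattern, tag):
    # Simplified glob matching: "nginx.*" matches any tag under "nginx."
    return tag.startswith(pattern.rstrip("*"))

def emit(tag, record):
    event = json.dumps({"tag": tag, "record": record})
    for pattern, sink in outputs.items():
        if match(pattern, tag):
            sink.append(event)

emit("nginx.access", {"status": 200, "path": "/"})
emit("app.mobile", {"event": "login"})
print(len(outputs["nginx.*"]), len(outputs["app.*"]))  # 1 1
```

Because every event carries the same JSON envelope regardless of source, downstream systems such as Elasticsearch or MongoDB never see the original incompatible formats.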

To get started with Fluentd for logging, one can download it here for any operating system or find it on Docker. Once installed, Fluentd offers a graphical UI to configure and manage it.

Highly available Prometheus setup with long-term storage capabilities

GitHub | Website

For those who want to get more out of Prometheus, Thanos is an option. It's framed as a highly available metric system with unlimited storage capacity that can be placed on top of existing Prometheus deployments. Using Thanos to obtain a global view of metrics could be helpful for organizations that use multiple Prometheus servers and clusters. Thanos also enables extensions to your own storage of choice, making data retention theoretically limitless. As Thanos is designed to work with larger amounts of data, it incorporates downsampling to speed up queries.

Horizontally scalable, highly available, multi-tenant, long-term Prometheus.

GitHub

Cortex is another CNCF project designed to work with multiple Prometheus setups. Using Cortex, teams can collect metrics from various Prometheus servers and perform globally aggregated queries on all the data. Availability is a plus with Cortex, as it can replicate itself and run on multiple machines. Like Thanos, Cortex provides long-term storage capabilities, with integrations for S3, GCS, Swift and Microsoft Azure.

According to the documentation, Cortex is primarily used as a remote write destination for Prometheus, with a Prometheus-compatible query API. To begin working with Cortex, check out the getting started guide here.

An observability framework for cloud-native software.

GitHub | Website

OpenTelemetry is a project built to collect telemetry data, such as metrics, logs and traces, from various sources to integrate with many types of analysis tools. The package supports integrations with popular frameworks such as Spring, ASP.NET Core, Express and Quarkus, making it easy to add observability mechanics to a project. Of note is that OpenTracing and OpenCensus recently merged to form OpenTelemetry, making this one powerhouse of an open source telemetry solution.

In today's digital age, metrics are the lifeblood of a business. Having a holistic assortment of application performance data and end-user actions information is vital for analysis. But that's not the only end goal: quality filtering and navigation for such data are just as crucial for turning stale metadata into actionable insights.

Above, we've covered some of the most adopted CNCF projects related to observability, monitoring and analysis. But these aren't the only options available; there is a lot more exciting development occurring within CNCF-hosted projects and the surrounding ecosystem.

At the time of writing, CNCF hosts the following projects in sandbox status. As you can see, these emerging projects involve more active monitoring, such as via chaos engineering and Kubernetes health checks, as well as deeper Kubernetes-first observability.

Related

Read the original:
7 Open Source Cloud-Native Tools For Observability and Analysis - Container Journal

Read More..

Pros and cons of cloud infrastructure types and strategies – Information Age

Abby Kearns, CTO of Puppet, delves into the pros and cons of cloud infrastructure types and strategies in the market today

Establishing what kind of environment and strategy is right for your business is key to cloud success.

You don't have to do much Googling to find articles, podcasts and tweets featuring me talking about both multi-cloud and hybrid cloud. More companies than ever are choosing a hybrid cloud approach that leverages a multi-cloud strategy, so it's prudent to revisit the pros and cons of hybrid and multi-cloud, as well as public and private cloud infrastructure.

I am defining public and private clouds as the environments in which an organisation chooses to host its infrastructure. Hybrid and multi-cloud are the strategies organisations employ for these environments.

I will caveat everything I'm about to say with the fact that you should choose the right environment for the right workloads. What problem are you solving and why? There is a case to be made for private cloud infrastructure, and there is a case to be made for public cloud: it entirely depends on what type of applications you are running and what the requirements are around those applications.

Today, one of the key private cloud advantages is data: be it addressing data sovereignty requirements, needing close access to a large data lake in your private cloud (for low-latency application requirements), or meeting specific regulatory requirements on who has access to that data and where that data sits. Data is often at the heart of private cloud strategies.

Another benefit of private cloud to an organisation is the customisation it gives the business, granting greater flexibility and the means to design a bespoke environment for specific business needs and users.

So, what are some of the drawbacks of private cloud?

They can be high maintenance. A dedicated team is required to manage the environment full-time, ensuring it stays up to date (including addressing any CVEs) and reliable, minimising failures and downtime. A private cloud can be costly, as it requires a data centre as well as the physical infrastructure, in addition to the customised private cloud software needed to manage the environment in a way that mimics a public cloud experience (ease of access, self-service, etc.). You also run into limitations on scale that you would not have in a public cloud; while this is addressable, it requires forward-looking planning on the scale your environment would need to run at in a variety of scenarios.

What about public clouds?

Public cloud can be less expensive because the data centre, hardware and software are owned and operated by a third-party provider. Because a public cloud provider is responsible for dozens, hundreds or thousands of customers, the network of servers is vast and largely diminishes the risk of failure, so count high reliability as yet another perk of public cloud. This combination of a massive network of servers and a 24/7 service team provides an additional benefit: scalability.

Are there any downsides to public cloud?

Remember how private clouds can be customised for an organisation's specific business needs? Public cloud is often a one-size-fits-all solution, meaning a company no longer has as much control or flexibility with the public cloud. Public clouds can also be costly, especially if you have a growing footprint of workloads with intensive data requirements. Additionally, egress fees can be quite high if you are looking to pull your data out of the cloud.
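Egress charges are easy to underestimate because they scale with volume. The arithmetic below is purely illustrative; rates vary widely by provider, region and volume tier, and $0.09/GB is only a placeholder figure, not any vendor's published price:

```python
# Purely illustrative egress arithmetic with a placeholder per-GB rate.
egress_rate_per_gb = 0.09   # hypothetical rate, USD per GB
data_to_move_tb = 50        # data being pulled out of the cloud

cost = data_to_move_tb * 1024 * egress_rate_per_gb
print(f"${cost:,.2f}")  # moving 50 TB out at this rate costs about $4,608
```

The point is not the specific number but the shape of the cost: it is linear in data volume, so a growing data footprint makes switching providers progressively more expensive, one mechanism behind vendor lock-in.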


A multi-cloud strategy simply means that an organisation has chosen to use multiple public cloud providers to host their environments. A hybrid cloud approach means that a company is using a combination of on-premises infrastructure, private cloud and public cloud and possibly more than one of the latter, meaning that company would be implementing a multi-cloud strategy with a hybrid approach. At times, these terms are used interchangeably.

Companies choose a multi-cloud strategy for a multitude of reasons, not least of which is avoiding vendor lock-in. Spreading workloads across multiple cloud providers increases reliability, as a company is able to fail over to a secondary provider if another provider experiences an outage.

Optionality is a huge benefit to companies who want to be able to pick and choose which services will most seamlessly integrate into their environments, as each major public cloud provider provides some unique services for different types of workloads. Furthermore, when a company uses multiple public cloud providers, it retains flexibility and can transfer workloads from one provider to another. Finally, global organisations can leverage multi-cloud to address complex compliance requirements, which vary from country to country.

These are all very strong cases for multi-cloud, but what are the downsides?

Cost forecasting and containment can be challenging when using multiple providers charging different rates for different services. Also, spreading workloads across multiple cloud providers does increase reliability, but it can also increase risk and make it more difficult to know where data is and who has access to it. There are both benefits and downsides to multi-cloud, but I am a proponent of a multi-cloud strategy whenever possible.

Why do organisations choose a hybrid approach?

Many companies, especially large enterprises that have existed for decades, host their environments in the data centre, and a lack of resources, funding, staff, executive buy-in, or a host of other reasons may prevent them from shifting their legacy architecture to the cloud. However, certain teams within the company may be spinning up cloud-native environments for new projects, and other teams may be working on implementing a lift-and-shift to the cloud from the data centre.

For most organisations large and small, a multi-cloud strategy with a hybrid cloud approach is the way of the future. As applications grow across organisations, their infrastructure needs change as well. For example, you might be running a large CRM system in a private cloud, but you may choose to run newer, cloud-native applications in a public cloud where you can leverage the cloud infrastructure to the fullest extent.

At the end of the day, organisations should choose the right infrastructure for the right workload and business needs, whether that's a hybrid and/or multi-cloud strategy using either or both public and private clouds.

View post:
Pros and cons of cloud infrastructure types and strategies - Information Age

Read More..

AWS admits cloud ain’t always the answer, intros on-prem vid-analysing box – The Register

Amazon Web Services, the outfit famous for pioneering pay-as-you-go cloud computing, has produced a bit of on-prem hardware that it will sell for a one-off fee.

The device is called the "AWS Panorama Appliance" and the cloud colossus describes it as a "computer vision (CV) appliance designed to be deployed on your network to analyze images provided by your on-premises cameras".

"AWS customers agree the cloud is the most convenient place to train computer vision models thanks to its virtually infinite access to storage and compute resources," states the AWS promo for the new box. But the post also admits that, for some, the cloud ain't the right place to do the job.

"There are a number of reasons for that: sometimes the facilities where the images are captured do not have enough bandwidth to send video feeds to the cloud, some use cases require very low latency," AWS's post states. Some users, it adds, "just want to keep their images on premises and not send them for analysis outside of their network".

Hence the introduction of the Panorama appliance, which is designed to ingest video from existing cameras and run machine learning models to do the classification, detection, and tracking of whatever your cameras capture.

Sometimes the facilities do not have enough bandwidth to send video feeds to the cloud

AWS imagines those ML models could well have been created in its cloud with SageMaker, and will charge you for cloud storage of the models if that's the case. The devices can otherwise run without touching the AWS cloud, although there is a charge of $8.33 per month per camera stream.

The appliance itself costs $4,000 up front.
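Given the two figures quoted above ($4,000 up front plus $8.33 per month per camera stream), a rough total cost of ownership is simple to work out. The scenario below (10 streams over three years) is an invented example, not an AWS quote:

```python
# Rough total cost of ownership from the two pricing figures quoted above:
# a one-off $4,000 for the appliance, plus $8.33/month per camera stream.
APPLIANCE_PRICE = 4000.00
PER_STREAM_MONTHLY = 8.33

def panorama_cost(streams, months):
    return APPLIANCE_PRICE + PER_STREAM_MONTHLY * streams * months

# Hypothetical deployment: 10 camera streams for 3 years.
print(f"${panorama_cost(10, 36):,.2f}")  # $6,998.80
```

Note that this covers only the quoted appliance and per-stream fees; cloud storage of SageMaker-trained models, mentioned above, would be billed on top.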

Charging for hardware is not AWS's usual modus operandi. Its Outposts on-prem clouds are priced on a consumption model. The Snow range of on-prem storage and compute appliances are also rented rather than sold.

The Panorama appliance's specs page states that it contains Nvidia's Jetson Xavier AGX AI edge box, with 32GB RAM. The spec doesn't mention local storage, but lists a pair of gigabit ethernet ports, the same number of HDMI 2.0 slots, and two USB ports.

AWS announced the appliance at its re:Invent gabfest in December 2020, when The Register opined that the cloudy concern may be taking a rare step into on-prem hardware, but by doing so would be eating the lunches of server-makers and video hardware specialists alike. Panorama turns out not to have quite the power to drive cloud services consumption as other Amazonian efforts, since the ML models it requires could come from SageMaker or other sources. That fact, and the very pre-cloud pricing scheme, mean the device could be something of a watershed for AWS.

See the article here:
AWS admits cloud ain't always the answer, intros on-prem vid-analysing box - The Register

Read More..