
The New Frontiers of Cybersecurity: Exponential Increase in Complexity – Security Boulevard

Author: Itzik Kotler, CTO & Co-Founder, SafeBreach

The New Frontiers of Cybersecurity is a three-part thought-leadership series investigating the big-picture problems within the cybersecurity industry. In the first post, we explored the reasons malicious actors have been able to enhance their ability to execute and profit from attacks. In the second post, we discussed how the massive increase in endpoints and systems online has dramatically increased the attack surface. A different, but equally critical, dimension that we'll discuss for our third and final installment is that alongside this increase in attack surface comes a significant increase in complexity that is plaguing security teams.

The simple combinatorial mathematics of the sheer increase in endpoints not only means a greater number of systems to manage but also much more complex network architectures and webs of connections underlying IT and technology infrastructure and systems. The rise of cloud computing added a further layer of complexity for individuals trying to keep their applications and data secure. For example, an organization like Twitter that is composed of thousands of microservices will have a vastly more complex endpoint infrastructure than an enterprise that is guarding a handful of servers or even a few cloud instances.

Rather than a linear increase in complexity with each new node, we see an exponential increase with every node added. Then there is the element of time. It is hard enough to guard and proactively protect an IT infrastructure that is growing quickly but steadily and constantly. It is another issue entirely to protect an IT infrastructure with a growing number of endpoints or systems attached to IP addresses that exist only for short periods and then morph into something else.
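To make the combinatorial point concrete (a back-of-the-envelope illustration, assuming every endpoint can in principle reach every other): with n endpoints there are n(n-1)/2 potential communication paths, so 10 endpoints yield 45 paths while 1,000 endpoints yield nearly 500,000; and the number of possible groupings of endpoints an attacker might chain together grows even faster, on the order of 2^n.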

This combinatorial complexity is the new reality of Kubernetes and containers, serverless computing, and IPv6 (the newer IP numbering structure that enables billions more endpoints and systems to have their own unique IP addresses). In Kubernetes and containers, new endpoints with IP addresses may spin up and shut down every hour or even on a minute-by-minute basis. Unlike the billions of connected devices, which are constrained by limited compute resources and other restrictions, containers and serverless functions are general purpose and can be more easily adapted for almost any type of payload or attack.

So we are now in a world where anyone can provision hundreds or even thousands of general-purpose servers or lightweight computers with the push of a button. This means a lot more complexity to protect, but also that attackers can generate significantly more complex attacks. Remember, the nature of cloud computing is that it is open to everyone. This includes the Kubernetes engines offered by cloud providers, as well as more abstracted systems to scale up and manage large fleets of containers like Amazon's Fargate platform.

We already see signs of this new complexity. A scan by security researchers in mid-2022 pulled in over 900,000 exposed Kubernetes management endpoints. To be clear, these endpoints were not necessarily vulnerable or unprotected. But in security, exposing endpoints provides attackers with information they can use to create more targeted attacks. Likewise, public compute clouds have unpatched security flaws that can allow rogue users to break out of a container and potentially access the management plane of the public cloud. This can then allow them to attack other tenants in the cloud, violating the core proposition of secure multi-tenancy.
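To illustrate the kind of exposure those scans enumerate, here is a minimal sketch a defender might use to audit their own cluster: it simply checks whether an API server answers unauthenticated requests on its /version endpoint. The host name is a placeholder, not a real cluster.

```python
# Minimal exposure check for your own Kubernetes API server (audit use only).
# Assumes the hypothetical address api.example.internal; replace with a host you own.
import requests
import urllib3

urllib3.disable_warnings()  # cluster endpoints often use self-signed certificates

API_SERVER = "https://api.example.internal:6443"  # placeholder

try:
    resp = requests.get(f"{API_SERVER}/version", timeout=5, verify=False)
    if resp.status_code == 200:
        # An anonymous answer leaks version details attackers can use for targeting.
        print("API server answers anonymously:", resp.json().get("gitVersion"))
    else:
        print(f"Endpoint reachable but gated (HTTP {resp.status_code})")
except requests.RequestException as exc:
    print("Endpoint not reachable:", exc)
```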

In the legacy world of tightly controlled network perimeters and less secure internal networks, there was little need to harden endpoints not designed to be exposed to the world. In the datacenter era, a firewall on the edge of the data center guarded against unauthorized probes and kept everything private. Even a misconfigured internal server was not accessible to the world; a firewall engineer had to explicitly change firewall rules to open that server to access from the Internet. Today, the opposite is true, with open-to-the-Internet being the default state and the burden falling on developers, DevOps teams, and security teams to set up firewalls, API gateways, and other protections to guard against probes and attacks.

Kubernetes can (and often does) expose endpoints as default behavior, providing a handy map to attackers. We are already seeing attackers exploit the complexity of containers and Kubernetes as a new attack vector, driven in part by the elimination or limitation of favorite older vectors such as macros.

The big push behind the cloud and Kubernetes is to allow developer teams, DevOps, and IT to be more agile, flexible, and resilient. However, this is a paradigm shift with many implications that may be hard for IT and security teams to address. In the cloud, the default is public. In the legacy datacenter world, the default was private, and IT or security would need to grant access; in the public cloud, IT or security must instead work to restrict it. The default premise of cloud, going back to Jeff Bezos's policies at AWS, is to make services, APIs, storage, computing, and networking accessible to anyone with a credit card. In the cloud, therefore, the default for a service is exposed to the world. In the traditional datacenter and legacy networking world, a service must be configured to be exposed.

This paradigm shift injects a new layer of complexity into security and can lead to configuration mistakes, even for cloud-native companies. A developer may build a test application and load code onto it that communicates with other services out in the cloud or even opens an API to the public Internet. The developer may not realize that the cloud server the test application runs on is in the same namespace and security groups as other key production assets. That test server might also be left open by mistake for days, becoming a pivot or jump point for a malicious actor. Another point to consider is that in the past, storage was physically attached to networks and segregated from public access. To access data contained in that storage, you had to go through the server that was attached to it. Cloud computing broke that paradigm and allowed the easy storage of data in object stores and other online storage buckets. In the cloud, developers and even security teams often store data in public cloud storage buckets without properly configuring the buckets to secure access to them.

While physical data centers are somewhat obscured and blocked from public access or even scans, cloud service providers operate using well-known blocks of public IP addresses. This is true even down to individual services. For example, the IP blocks used by Amazon's S3 storage service are well documented and publicly shared on the Internet. Because malicious actors know the IP addresses, running continuous probes of those blocks in search of vulnerabilities is far less resource-intensive and expensive. Attackers also know the default configurations of Kubernetes clusters and connecting APIs. They know the default security configurations of most server images deployed as part of the default public compute cloud catalogs, as well as what ports are protected and opened by default in commonly deployed public cloud Web Application Firewalls. The upshot of all this? We face the opposing trends of operating and security infrastructure being made more complicated by the shift to the cloud, while at the same time identifying attack targets is becoming simpler.
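As a quick illustration of how openly documented those address blocks are, the short sketch below pulls Amazon's published IP ranges and lists the prefixes tagged for S3. The URL and JSON structure are AWS's public ip-ranges feed; the rest is illustrative.

```python
# List the published IPv4 prefixes for Amazon S3 from AWS's public ip-ranges feed.
import requests

ranges = requests.get("https://ip-ranges.amazonaws.com/ip-ranges.json", timeout=10).json()
s3_prefixes = [p["ip_prefix"] for p in ranges["prefixes"] if p["service"] == "S3"]
print(f"{len(s3_prefixes)} published IPv4 prefixes for S3; first few: {s3_prefixes[:3]}")
```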

The days of firewalling the data center to guard infrastructure are long gone. Many organizations still maintain a global firewall in front of their infrastructure, but these firewalls are necessarily porous due to the growing number of APIs and services that must connect to the outside world. In the cloud, the initial approach was to create security groups: critical processes, services, and instances were placed inside these security groups, and access controls were applied on a per-group basis, associated with identity providers and authentication systems. Security groups are still necessary but insufficient to handle the cloud infrastructure's complexity.

The answer is defense-in-depth. Security teams put in place more defense technologies, protecting data assets and applications in multiple ways. APIs are guarded by API gateways. Kubernetes clusters are guarded by specialized Web Application Firewalls and Ingress Controllers. SecDevOps teams mandate smaller, more lightweight firewalls in front of every public service or API. Application security teams require that SAST and SCA scans be run on any code iterations. Cloud providers add technology to ensure that cloud services, such as storage buckets, are properly secured. Endpoint detection and response is mandatory for all devices interacting with enterprise and cloud assets. Security is also placed in the content delivery network (CDN), extending web firewalls and denial-of-service (DoS) protection away from core app servers to intercept attacks further upstream. These layered systems require proper configuration and management, a never-ending task.

Complexity increases the probability of mistakes. This complexity also provides malicious actors with potential opportunities to hide and attack; high degrees of complexity are precisely what hackers use and abuse to get their way. An enterprise may have multiple directories that maintain user permissions, and an admin may forget to update one of them. There may be five valid authentication methods, and the weakest of them will invariably be the one malicious actors choose to exploit. While 90% of development use cases and user requirements are satisfied with the standard catalog of infrastructure and devices, the remaining 10% of non-standard use cases will be the last to be updated and will likely present the best opportunities for exploits. Complexity creeps up on CISOs one exception at a time, one additional service or software or SaaS tool at a time.

So what can security teams do? According to recommendations from leading security agencies like the Cybersecurity and Infrastructure Security Agency (CISA), organizations must begin to invest in automated, continuous security validation to keep up. Rather than relying on an annual penetration test by a third party, organizations must continually evaluate their security control stack. This means performing adversary simulations to test that the defensive controls are working correctly to detect, log, and stop attacks. This continuous testing also helps organizations identify those temporary resources that may have been brought up and not protected correctly. Security teams should also make sure they do not limit themselves to external attack surface validation; any network can become an entry, exit, or pivot point for malicious actors to use.
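The sketch below shows one narrow slice of what "continually evaluate your controls" can mean in practice, assuming a hypothetical reachability policy: it checks that ports which should be blocked really are blocked and flags drift. Commercial breach-and-attack-simulation platforms run far richer adversary behaviors; this is only an illustration of the idea.

```python
# Continuous control check: verify that ports expected to be blocked stay blocked.
# Hosts and expectations below are hypothetical placeholders.
import socket

POLICY = [
    ("app.example.internal", 443, True),    # public HTTPS endpoint should answer
    ("app.example.internal", 2379, False),  # etcd should never be exposed
    ("db.example.internal", 5432, False),   # database reachable only internally
]

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port, expected in POLICY:
    actual = is_reachable(host, port)
    status = "OK" if actual == expected else "DRIFT"
    print(f"[{status}] {host}:{port} reachable={actual} expected={expected}")
```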

Connect with a SafeBreach cybersecurity expert or request a demo of our advanced platform today to see what continuous security validation, powered by breach and attack simulation (BAS), can do for you.

Go here to see the original:
The New Frontiers of Cybersecurity: Exponential Increase in Complexity - Security Boulevard


IBM Furthers Flexibility, Sustainability and Security within the Data … – IBM Newsroom

ARMONK, N.Y., April 4, 2023 /PRNewswire/ -- IBM (NYSE: IBM) today unveiled new single frame and rack mount configurations of IBM z16 and IBM LinuxONE 4, expanding their capabilities to a broader range of data center environments. Based on IBM's Telum processor, the new options are designed with sustainability in mind for highly efficient data centers, helping clients adapt to a digitized economy and ongoing global uncertainty.

Introduced in April 2022, the IBM z16 multi frame has helped transform industries with real-time AI inferencing at scale and quantum-safe cryptography. IBM LinuxONE Emperor 4, launched in September 2022, features capabilities that can reduce both energy consumption and data center floor space while delivering the scale, performance and security that clients need. The new single frame and rack mount configurations expand client infrastructure choices and help bring these benefits to data center environments where space, sustainability and standardization are paramount.

Rack Mount in Use

"IBM remains at the forefront of innovation to help clients weather storms generated by an ever-changing market," said Ross Mauri, General Manager, IBM zSystems and LinuxONE. "We're protecting clients' investments in existing infrastructure while helping them to innovate with AI and quantum-safe technologies. These new options let companies of all sizes seamlessly co-locate IBM z16 and LinuxONE Rockhopper 4 with distributed infrastructure, bringing exciting capabilities to those environments."

Designed for today's changing IT environment to enable new use cases

Organizations in every industry are balancing an increasing number of challenges to deliver integrated digital services. According to a recent IBM Transformation Index report, among those surveyed, security, managing complex environments and regulatory compliance were cited as challenges to integrating workloads in a hybrid cloud. These challenges can be compounded by more stringent environmental regulations and continuously rising costs.

"We have seen immense value from utilizing the IBM z16 platform in a hybrid cloud environment," said Bo Gebbie, president, Evolving Solutions. "Leveraging these very secure systems for high volume transactional workloads, combined with cloud-native technologies, has enabled greater levels of agility and cost optimization for both our clients' businesses and our own."

The new IBM z16 and LinuxONE 4 offerings are built for the modern data center to help optimize flexibility and sustainability, with capabilities for partition-level power monitoring and additional environmental metrics. For example, consolidating Linux workloads on an IBM LinuxONE Rockhopper 4 instead of running them on compared x86 servers with similar conditions and location can reduce energy consumption by 75 percent and space by 67 percent.1 These new configurations are engineered to deliver the same hallmark IBM security and transaction processing at scale.

Designed and tested to the same internal qualifications as the IBM z16 high availability portfolio2, the new rack-optimized footprint is designed for use with client-owned, standard 19-inch racks and power distribution units. This new footprint opens opportunities to include systems in distributed environments with other servers, storage, SAN and switches in one rack, designed to optimize both co-location and latency for complex computing, such as training AI models.

Installing these configurations in the data center can help create a new class of use cases, including:

Securing data on the industry's most available systems3

For critical industries, like healthcare, financial services, government and insurance, a secure, available IT environment is key to delivering high quality service to customers. IBM z16 and LinuxONE 4 are engineered to provide the highest levels of reliability in the industry, with 99.99999% availability to support mission-critical workloads as part of a hybrid cloud strategy. These high availability levels help companies maintain consumer access to bank accounts, medical records and personal data. Emerging threats require protection, and the new configurations offer security capabilities that include confidential computing, centralized key management and quantum-safe cryptography to help thwart bad actors planning to "harvest now, decrypt later."

"IBM z16 and LinuxONE systems are known for security, resiliency and transaction processing at scale," said Matt Eastwood, SVP, WW Research, IDC. "Clients can now access the same security and resiliency standards in new environments with the single frame and rack mount configurations, giving them flexibility in the data center. Importantly, this also opens up more business opportunity for partners who will be able to reach an expanded audience by integrating IBM zSystems and LinuxONE capabilities to their existing footprints."

With the IBM Ecosystem of zSystems ISV partners, IBM is working to address compliance and cybersecurity. For clients that run data serving, core banking and digital assets workloads, an optimal compliance and security posture is key to protecting sensitive personal data and existing technology investments.

"High processing speed and artificial intelligence are key to moving organizations forward," said Adi Hazan, director ofAnalycat. "IBM zSystems and LinuxONE added the security and power that we needed to address new clients, use cases and business benefits. The native speed of our AI on this platform was amazing and we are excited to introduce the IBM LinuxONE offerings to our clients with large workloads to consolidate and achieve corporate sustainability goals."

IBM Business Partners can learn more about the skills required to install, deploy, service and resell single frame and rack mount configurations in this blog.

Complementary Technology Lifecycle Support Services

With the new IBM LinuxONE Rockhopper 4 servers, IBM will offer IBM LinuxONE Expert Care. IBM Expert Care integrates and prepackages hardware and software support services into a tiered support model, helping organizations to choose the right fit of services. This support for LinuxONE Rockhopper 4 will offer enhanced value to clients with predictable maintenance costs and reduced deployment and operating risk.

The new IBM z16 and LinuxONE 4 single frame and rack mount options, supported by LinuxONE Expert Care, will be generally available globally[4] from IBM and certified business partners beginning on May 17, 2023. To learn more:

About IBM

IBM is a leading global hybrid cloud and AI, and business services provider, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. Nearly 3,800 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to affect their digital transformations quickly, efficiently, and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM's legendary commitment to trust, transparency, responsibility, inclusivity, and service. For more information, visit www.ibm.com

Media Contact: Ashley Peterson, ashley.peterson@ibm.com

1 DISCLAIMER: Compared IBM Machine Type 3932 Max 68 model consisting of a CPC drawer and an I/O drawer to support network and external storage with 68 IFLs and 7 TB of memory in 1 frame versus compared 36 x86 servers (2 Skylake Xeon Gold Chips, 40 Cores) with a total of 1440 cores. IBM Machine Type 3932 Max 68 model power consumption was measured on systems and confirmed using the IBM Power estimator for the IBM Machine Type 3932 Max 68 model configuration. x86 power values were based on Feb. 2023 IDC QPI power values and reduced to 55% based on measurements of x86 servers by IBM and observed values in the field. The x86 server compared to uses approximately .6083 KWhr, 55% of IDC QPI system watts value. Savings assumes the Worldwide Data Center Power Utilization Effectiveness (PUE) factor of 1.55 to calculate the additional power needed for cooling. PUE is based on Uptime Institute 2022 Global Data Center Survey (https://uptimeinstitute.com/resources/research-and-reports/uptime-institute-global-data-center-survey-results-2022). x86 system space calculations require 3 racks. Results may vary based on client-specific usage and location.
2 DISCLAIMER: All the IBM z16 Rack Mount components are tested via same process requirements as the IBM z16 traditional Single Frame components. Comprehensive testing includes a wide range of voltage, frequency, temperature testing.
3 Source: Information Technology Intelligence Consulting Corp. (ITIC). 2022. Global Server Hardware, Server OS Reliability Survey. https://www.ibm.com/downloads/cas/BGARGJRZ
4 Check local availability for rack mount here.

SOURCE IBM

Excerpt from:
IBM Furthers Flexibility, Sustainability and Security within the Data ... - IBM Newsroom


Ofcom concerned about Microsoft and Amazon domination of cloud market – Yahoo Finance UK

Ofcom could call in the competition regulator after finding concerns in the cloud services market, a backbone of the online world which is dominated by two companies.

The telecoms regulator proposed on Wednesday that the Competition and Markets Authority open its own probe into the sector amid concerns customers find barriers in their way when trying to switch suppliers.

The cloud space is dominated by two players, Amazon and Microsoft, which together hold an approximate 60%-70% market share.

Ofcom said it was particularly concerned about the two companies' practices because of their dominant position.

Millions of people and businesses have come to rely on cloud computing in recent years.

The cloud loosely refers to a series of massive servers around the world which users can tap into to store photographs or emails, or run software from.

Ofcom said there was still competition in the sector, with innovative products and discounts offered to new customers.

However, it was concerned for customers trying to move from one cloud supplier to another.

The massive suppliers charge significantly higher fees than smaller providers to move data out of the cloud and to another company's servers, Ofcom said.

Users might also struggle to use more than one company's services at the same time because the leading firms prevent some of their services working effectively alongside those from other suppliers.

"There is a risk that the features we have identified could lead the market to concentrate further towards the market leaders," Ofcom said.

It said the Competition and Markets Authority would be best-placed to investigate this further.

Fergal Farragher, the Ofcom director who led its study into the sector, said: "We've done a deep dive into the digital backbone of our economy and uncovered some concerning practices, including by some of the biggest tech firms in the world."


"High barriers to switching are already harming competition in what is a fast-growing market."

"We think more in-depth scrutiny is needed, to make sure it's working well for people and businesses who rely on these services."

Ofcom said it would take feedback on its findings until mid-May and would make its final decision in October.

Microsoft said: "We look forward to continuing our engagement with Ofcom on their cloud services market study."

"We remain committed to ensuring the UK cloud industry stays highly competitive, and to supporting the transformative potential of cloud technologies to help accelerate growth across the UK economy."

Amazon Web Services said: "These are interim findings and AWS will continue to work with Ofcom ahead of the publication of its final report."

"The UK has a thriving and diverse IT industry with customers able to choose between a wide variety of IT providers."

"At AWS, we design our cloud services to give customers the freedom to build the solution that is right for them, with the technology of their choice."

"This has driven increased competition across a range of sectors in the UK economy by broadening access to innovative, highly secure, and scalable IT services."

View post:
Ofcom concerned about Microsoft and Amazon domination of cloud market - Yahoo Finance UK


Cloud WAF Pricing: All You Need to Know – Security Boulevard

Choosing the right Cloud WAF pricing model is like finding the perfect pair of shoes: it's all about comfort, fit, and style for your organization's needs.

In this guide, we'll help you navigate the world of Cloud WAF pricing, exploring different options and factors so that you can find the perfect fit for your web application security requirements.

For those still evaluating Cloud vs. on-prem WAF, here's a detailed article on why cloud WAFs are better than on-premise WAFs.

WAFs provided by public clouds such as AWS and Azure are typically priced on a pay-as-you-go model.

On the other hand, specialized WAF providers such as Indusface, Akamai, and Cloudflare offer a subscription model.

There are many pay-as-you-go features offered even by subscription providers. The value addition that specialized WAFs provide is the availability of core rules that provide by-default protection against OWASP Top 10 vulnerabilities.

In public Cloud WAFs, you'll typically need to either:

That said, several pay-as-you-go features are provided even by specialized WAF providers.

In the next section, we will cover all the factors that affect WAF pricing.

The licensing unit is the first parameter that affects pricing. Even within this, there are two models:

a. Domain: One license for the domain, and this includes subdomains too. This model is typically used when similar applications are on different sub-domains, for example, qa.acme.com vs. acme.com.

While you can use this model for sub-domains that host different applications, the possibility of false positives is higher, as the same rule set is applied to multiple applications.

b. Application: Since every application differs, this model helps get fine-grained protection and custom rules. Usually, the license depends on a per-website model or a Fully Qualified Domain Name (FQDN).

For example, you'll typically be charged one license for http://www.acme.com and one more for abc.acme.com.

Cloud WAFs act as filters before traffic hits your origin server. All the traffic passed over to your origin servers is billed as the bandwidth cost.

Here also, there are three models:

a. Requests: The pricing plan might have a set cost for a specific number of requests each month, plus extra charges for any extra requests over the set limit. Another option is that the pricing depends only on the total number of requests, so customers pay for what they use.

b. Peak Mbps: Some WAF companies use a peak Mbps (megabits per second) pricing plan. They charge customers based on the highest bandwidth (usually the 95th percentile) used in a set time, like a month. This model looks at the most traffic the WAF handles, not the total requests or data moved. It's important for organizations with changing traffic or different bandwidth needs.

c. Bandwidth: Some WAFs use a pricing plan based on the bandwidth over the wire. This includes both the request and response data. They charge customers for data moving through the system. This pricing model is easy to understand and works well for many organizations.
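To make the three bases concrete, here is a small worked sketch with made-up numbers (prices and traffic samples are illustrative assumptions, not any vendor's rates); note how the 95th-percentile method ignores a short traffic burst:

```python
# Worked comparison of request-based, 95th-percentile, and bandwidth-based billing.
# All figures below are illustrative assumptions.
import math

# a. Requests: flat included quota plus overage.
requests_made, included, per_million_over = 12_000_000, 10_000_000, 0.60
request_cost = max(0, requests_made - included) / 1_000_000 * per_million_over

# b. Peak Mbps (95th percentile): 5-minute samples in Mbps; the top 5% are discarded.
samples_mbps = [40, 42, 45, 43, 300, 41, 44, 47, 46, 42,
                43, 45, 44, 43, 41, 42, 46, 48, 44, 43]  # one burst to 300 Mbps
ranked = sorted(samples_mbps)
billable_mbps = ranked[math.ceil(0.95 * len(ranked)) - 1]  # burst is ignored

# c. Bandwidth: total data over the wire (requests plus responses).
total_gb, per_gb = 1_800, 0.08
bandwidth_cost = total_gb * per_gb

print(f"request overage cost : ${request_cost:.2f}")
print(f"95th percentile rate : {billable_mbps} Mbps billed")
print(f"bandwidth cost       : ${bandwidth_cost:.2f}")
```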

As discussed earlier, depending on the WAF provider, you may get charged for the following features:

a. DDoS & Bot Mitigation: This is probably the single most expensive feature addition. Depending on the application, the subscription to this feature alone typically costs a couple of thousand dollars per month. In addition, some vendors even bill you for the bandwidth consumed during a DDoS attack. In the case of Indusface AppTrana, DDoS is bundled as part of the monthly subscription plans.

b. API Security: Most popular WAFs now include an API security solution. This category is now called WAAP. However, this is generally priced as an add-on, as API security needs special configuration, especially to create a positive security model. The AppTrana WAAP, by default, protects all APIs that are part of the same FQDN. See more details here.

c. Analytics: Getting analytics on the kinds of attacks blocked is also a big add-on, especially if you get just one WAF license and use it to protect multiple applications such as payroll.acme.com and crm.acme.com along with acme.com. As these are all different applications, storing attack logs and running analytics on those logs would be extremely expensive.

Hence, most WAF providers don't provide analytics access on a single license. At Indusface, we often suggest taking additional licenses for critical applications requiring attack logs and analysis.

d. DAST scanners: In most organizations, DAST and WAF are separate products that are not integrated. This is a lost opportunity, as vulnerabilities found by a DAST scan could quickly be patched on the WAF. This process is called virtual patching, and it buys developers time before they fix these vulnerabilities in code.

At Indusface, we bundle DAST scanner Indusface WAS as part of the AppTrana WAAP. You save costs on subscriptions and integrate DAST and virtual patching into CI/CD pipelines so that security is handled even in an agile development cycle.

e. CDN: Since WAAP providers have some pricing component dependent on data transfer, enabling a CDN will lead to significant cost savings. In most WAFs, this is an add-on.

f. Support: 24x7 phone, email, and chat support is yet another feature that most WAF vendors add only in enterprise contracts. At Indusface, you will get enterprise support at SMB pricing; see the WAAP pricing page here.

Managed services play a big part in application security, especially as threats evolve. For example, 200+ application-level critical/high zero-day vulnerabilities are discovered monthly. Compute power is so cheap that a one-hour DDoS attack can be bought for $5, and this will get cheaper.

To combat all of this, any WAAP/WAF solution needs to evolve. While most Cloud WAFs keep the software updated, a key part of the defense is the rule set, and unless security teams have highly skilled security engineers, they won't be able to touch any of the rule sets.

The other problem is that even if rules are sent as patches, the onus is on the application team to monitor for false positives and ensure 99.99% availability while preventing downtime. Often, application teams do not apply these patches; worse, most WAFs are perpetually in log mode, as in they don't block any attacks!

Then there's the problem of DDoS, which is a big ransom threat, and sophisticated actions such as rate limits, tarpitting, CAPTCHA, and blocks need careful monitoring as there is a high possibility of false positives.

So managed services are essentially an extended SOC/IT team to help with the following:

While every vendor can promise managed services, evaluating the SLAs with which they operate is critical. We highly recommend checking the support response times and SLAs, uptime guarantee, and latency with the vendor.

At Indusface, we are proud to ensure a 24-hour SLA on virtual patches for critical vulnerabilities. You can find more details on the SLA here.

Here's a step-by-step framework to help you choose a WAF based on pricing:

1. Identify your organization's requirements:

2. Research WAF providers

3. Analyse pricing models:

4. Evaluate included features and additional services

5. Assess data center locations and regions

6. Compare technical support and SLAs

7. Calculate the total cost of ownership (TCO) (see the worked sketch after this list)

8. Rank various WAF providers

9. Run product trials

By following this framework, you can systematically evaluate and compare different WAFs based on pricing, features, support, and other factors, ultimately selecting the most suitable and cost-effective solution for your organization.
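To make step 7 above concrete, here is a hypothetical 3-year TCO comparison; every figure is an assumption for illustration, not real vendor pricing:

```python
# Hypothetical 3-year total-cost-of-ownership comparison for two WAF pricing models.
# All numbers are illustrative assumptions.
YEARS = 3

vendors = {
    "Vendor A (subscription)": {
        "monthly_subscription": 399,
        "monthly_bandwidth": 120,       # data-transfer charges
        "monthly_addons": 150,          # bot mitigation, analytics, etc.
        "annual_managed_services": 0,   # bundled into the subscription
    },
    "Vendor B (pay-as-you-go)": {
        "monthly_subscription": 0,
        "monthly_bandwidth": 310,
        "monthly_addons": 260,
        "annual_managed_services": 6000,
    },
}

for name, c in vendors.items():
    monthly = c["monthly_subscription"] + c["monthly_bandwidth"] + c["monthly_addons"]
    tco = YEARS * (12 * monthly + c["annual_managed_services"])
    print(f"{name}: ${tco:,} over {YEARS} years")
```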

In conclusion, selecting the right Cloud WAF is crucial for safeguarding your web applications and maintaining a strong security posture. A thorough understanding of Cloud WAF pricing, features, and service level agreements will enable your organization to make informed decisions, ensuring you invest in a solution that fits your budget and provides robust protection against ever-evolving cyber threats.

Stay tuned for more relevant and interesting security updates. Follow Indusface on Facebook, Twitter, and LinkedIn.

The post Cloud WAF Pricing: All You Need to Know appeared first on Indusface.

*** This is a Security Bloggers Network syndicated blog from Indusface authored by Indusface. Read the original post at: https://www.indusface.com/blog/cloud-waf-pricing-all-you-need-to-know/

See the rest here:
Cloud WAF Pricing: All You Need to Know - Security Boulevard


iExec RLC: Unlocking New Possibilities in the Cloud Computing … – The Southern Maryland Chronicle

In a world where cloud computing is becoming increasingly popular, iExec RLC offers a unique solution for businesses looking to make the most out of their resources. By providing access to distributed applications and services, iExec RLC unlocks new possibilities in the cloud computing space.

The platform allows users to securely access and deploy any application or service from anywhere in the world without having to worry about data security or reliability. It also eliminates the need for complex infrastructure setup and maintenance, as all applications and services are hosted on an Ethereum-based blockchain network. As such, businesses can benefit from reduced costs associated with the iExec RLC price and hosting fees while also having the advantage of increased flexibility when scaling their solutions according to their needs.


Furthermore, iExec RLC gives users complete control over their data privacy settings, allowing them to decide who can access what information they store on the platform. All of these features make iExec RLC an attractive option for businesses looking for a reliable and secure way to unlock new possibilities in the cloud computing space.

iExec RLC (RLC stands for Run on Lots of Computers) is a decentralized cloud computing platform that enables users to rent out their computing resources in exchange for cryptocurrency. It was created by the French startup iExec, which has been developing blockchain-based solutions since 2016. The platform allows users to access distributed applications and services without owning or managing any hardware. Instead, they can rent the necessary computing power from other users on the network. This makes it easier and more cost-effective for developers to create and deploy distributed applications and for businesses to access powerful computing resources without investing in expensive hardware, allowing them to tap into new digital markets like the Metaverse, accessing a new market of digital consumers. Additionally, iExec provides a marketplace where developers can list their applications and services, allowing them to monetize their work while giving users easy access to high-quality products.

To buy and sell iExec RLC tokens, you will need to use a cryptocurrency exchange. First, you will need to create an account on the exchange platform of your choice. Once your account is created, you can deposit funds using various payment methods such as bank transfer or credit card. After your funds have been deposited, you can then search for the iExec RLC token and place an order to buy or sell it at the current market price. Once your order has been filled, you will be able to withdraw your tokens from the exchange into a secure wallet that supports them.

iExec RLC uses distributed ledger technology (DLT) to ensure the integrity of its network by providing an immutable record of all transactions on the platform. This makes it an ideal solution for companies looking for a secure way to store sensitive information such as customer data or financial records. iExec also offers a range of advanced analytic capabilities which allow businesses to gain valuable insights into their operations and make better decisions based on real-time data analysis. All user data is encrypted using industry-standard encryption algorithms, and all communication between servers and the customer's device is done over a secure HTTPS connection. Two-factor authentication has also been implemented for added security, so you can be sure that only you have access to your account. Additionally, the company regularly monitors its systems for any suspicious activity or potential threats. By combining these various security measures, iExec RLC ensures that its users' data remains safe and secure at all times.


View post:
iExec RLC: Unlocking New Possibilities in the Cloud Computing ... - The Southern Maryland Chronicle


What are the sustainability benefits of using a document … – Journalism.co.uk

Press Release

Document management specialists Filestream discusses the sustainability benefits of combining document management and Cloud storage. Filestream works with partner Sire Cloud to provide businesses with a seamless solution and also aid productivity

The benefits of combining document management and cloud storage fall into two main areas, sustainability and productivity. They work together, one effortlessly leading to the other.

In today's world, ambitious, growing SMEs, and corporates, large or small, are keen to ensure their ESG (Environmental, Social and Governance) credentials are meeting current standards. Linking their document management and cloud storage is a huge step to attaining this.

We have worked with our partners at SIRE Cloud to produce a solution using the combined advantages of File Stream document management and the UK based SIRE Cloud platform.

How does this help any business meet sustainability goals?

Increasingly, businesses are taking sustainability seriously. Many make the leap for their own ethical reasons. However, many often realise they have little choice, as their customers are insisting more and more that suppliers show evidence they are actively working to be more sustainable. Failure to do so can be very serious, and even long-standing, successful, productive, and profitable business relationships can come to an end.

Here are some examples of how a Cloud-based approach to document management and storage can help sustainability goals and improve business practices:

Why use our Cloud storage?

All backups, antivirus/malware software, firewalls, and Microsoft 365 are maintained to the highest standards. This removes a considerable burden of responsibility as well as freeing up valuable time.

Additionally, a program like File Stream, which has a zero-carbon footprint (similar to an online banking application), enables access to the Cloud where the documents are stored from any device and from anywhere, via the internet.

The SIRE Cloud servers (and therefore the documents) remain in the UK. They are protected in different locations (data centres) that are also in the UK. This gives confidence to businesses that their important information is stored as locally as possible.

What are the sustainability advantages of the SIRE Cloud servers?

Once the data is stored it will remain on storage devices which are three times more power-efficient than a PC hard disk.

SIRE select data centres that use 100 per cent renewable energy to power the data centres and have been doing so since 2011. Working with Sire on sustainable technologies and policies has ensured a PUE of 1.14. This is lower than the global average of 1.57. (To understand what a PUE is, see the fact-file below.)
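As a quick worked illustration (PUE, or Power Usage Effectiveness, is total facility energy divided by the energy used by the IT equipment itself): a PUE of 1.14 means roughly 0.14 W of cooling and other overhead for every 1 W delivered to the servers, whereas the global average of 1.57 implies about 0.57 W of overhead per watt of IT load.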

Cold-aisle containment:

There are many different ways for data centres to deliver cooling to the servers on their data floors. At the Sire data centre, one way this is managed is by using cold-aisle containment, which forces cool air over the servers rather than letting it escape: the server racks make up the walls of the cold aisles, with doors and a roof sealing the corridor.

Chilled air is delivered through the floor into the aisle. Since it has nowhere else to go, the chilled air is forced through the walls of the corridor, and over the servers.

Adiabatic cooling towers:

Adiabatic cooling towers are one of the ways to generate chilled water. They use the natural process of evaporation to cool water down, so the only power used is to pump the water through the towers. These cooling towers can keep up with cooling on the data floor, even on the hottest days of the year.

Efficient UPSs:

They have invested in state-of-the-art UPSs with line-interactive / Smart Active efficiency of up to 98.5 per cent. This means only 1.5 per cent of energy is lost in the transfers, significantly less than in typical data centres. (See the fact-file below for more information on UPSs.)

LED lights on motion detectors:

Reducing energy consumption goes beyond just the data floor. Throughout the data centres there are energy-efficient LED bulbs. These are also fitted with motion detector switches, so that they turn off automatically when no one is using a room.

Want to know more?

Get in touch with us to find out how this partnership can help your business or organisation become more efficient, productive, and sustainable. We look forward to hearing from you.

Fact-file:

Read more from the original source:
What are the sustainability benefits of using a document ... - Journalism.co.uk


The Silent Platform Revolution: How eBPF Is Fundamentally … – InfoQ.com

Key Takeaways

Kubernetes and cloud native have been around for nearly a decade. In that time, we've seen a Cambrian explosion of projects and innovation around infrastructure software. Through trial and late nights, we have also learned what works and what doesn't when running these systems at scale in production. With these fundamental projects and crucial experience, platform teams are now pushing innovation up the stack, but can the stack keep up with them?

With the change of application design to API-driven microservices and the rise of Kubernetes-based platform engineering, networking and security teams have struggled to keep up, because Kubernetes breaks traditional networking and security models. With the transition to cloud, we saw a similar technology sea change at least once before. The rules of data center infrastructure and developer workflow were completely rewritten as Linux boxes in the cloud began running the world's most popular services. We are in a similar spot today, with a lot of churn around cloud native infrastructure pieces and not everyone knowing where it is headed; just look at the CNCF landscape. We have services communicating with each other over distributed networks atop a Linux kernel where many of its features and subsystems were never designed for cloud native in the first place.

The next decade of infrastructure software will be defined by platform engineers who can take these infrastructure building blocks and use them to create the right abstractions for higher-level platforms. Like a construction engineer uses water, electricity, and construction materials to build buildings that people can use, platform engineers take hardware and infrastructure software to build platforms that developers can safely and reliably deploy software on to make high-impact changes frequently and predictably with minimal toil at scale. For the next act in the cloud native era, platform engineering teams must be able to provision, connect, observe, and secure scalable, dynamic, available, and high-performance environments so developers can focus on coding business logic. Many of the Linux kernel building blocks supporting these workloads are decades old. They need a new abstraction to keep up with the demands of the cloud native world. Luckily, it is already here and has been production-proven at the largest scale for years.

eBPF is creating the cloud native abstractions and new building blocks required for the cloud native world by allowing us to dynamically program the kernel in a safe, performant, and scalable way. It is used to safely and efficiently extend the cloud native and other capabilities of the kernel without requiring changes to kernel source code or loading kernel modules, unlocking innovation by moving the kernel itself from a monolith to a more modular architecture enriched with cloud native context. These capabilities enable us to safely abstract the Linux kernel, iterate and innovate at this layer in a tight feedback loop, and become ready for the cloud native world. With these new superpowers for the Linux kernel, platform teams are ready for Day 2 of cloud native, and they might already be leveraging projects using eBPF without even knowing it. There is a silent eBPF revolution reshaping platforms and the cloud native world in its image, and this is its story.

eBPF is a decades-old technology beginning its life as the BSD Packet Filter (BPF) in 1992. At the time, Van Jacobson wanted to troubleshoot network issues, but existing network filters were too slow. His lab designed and created libpcap, tcpdump, and BPF as a backend to provide the required functionality. BPF was designed to be fast, efficient, and easily verifiable so that it could be run inside the kernel, but its functionality was limited to read-only filtering based on simple packet header fields such as IP addresses and port numbers. Over time, as networking technology evolved, the limitations of this classic BPF (cBPF) became more apparent. In particular, it was stateless, which made it too limiting for complex packet operations and difficult to extend for developers.

Despite these constraints, the high-level concept behind cBPF, a minimal, verifiable instruction set that makes it feasible for the kernel to prove the safety of user-provided programs and then run them inside the kernel, provided an inspiration and platform for future innovation. In 2014, a new technology was merged into the Linux kernel that significantly extended the BPF (hence, eBPF) instruction set to create a more flexible and powerful version. Initially, replacing the cBPF engine in the kernel was not the goal, since eBPF is a generic concept that can be applied in many places outside of networking. However, at that time, it was a feasible path to merge this new technology into the mainline kernel. Here is an interesting quote from Linus Torvalds:

So I can work with crazy people, that's not the problem. They just need to sell their crazy stuff to me using non-crazy arguments and in small and well-defined pieces. When I ask for killer features, I want them to lull me into a safe and cozy world where the stuff they are pushing is actually useful to mainline people first. In other words, every new crazy feature should be hidden in a nice solid Trojan Horse gift: something that looks obviously good at first sight.

This, in short, describes the organic nature of the Linux kernel development model and matches perfectly how eBPF got merged into the kernel. To deliver incremental improvements, the natural fit was first to replace the cBPF infrastructure in the kernel, which improved its performance, and then, step by step, expose and improve the new eBPF technology on top of this foundation. From there, the early days of eBPF evolved in two parallel directions: networking and tracing. Every new feature around eBPF merged into the kernel solved a concrete production need around these use cases; this requirement still holds true today. Projects like bcc, bpftrace, and Cilium helped shape the core building blocks of eBPF infrastructure long before its ecosystem took off and became mainstream. Today, eBPF is a generic technology that can run sandboxed programs in a privileged context such as the kernel and has little in common with BSD, packets, or filters anymore; eBPF is simply a pseudo-acronym referring to a technological revolution in the operating system kernel that safely extends and tailors it to the user's needs.

With the ability to run complex yet safe programs, eBPF became a much more powerful platform for enriching the Linux kernel with cloud native context from higher up the stack to execute better policy decisions, process data more efficiently, move operations closer to their source, and iterate and innovate more quickly. In short, instead of patching, rebuilding, and rolling out a new kernel change, the feedback loop with infrastructure engineers has been reduced to the extent that an eBPF program can be updated on the fly without having to restart services and without interrupting data processing. eBPFs versatility also led to its adoption in other areas outside of networking, such as security, observability, and tracing, where it can be used to detect and analyze system events in real time.

Moving from cBPF to eBPF has drastically changed what is possible, and what we will build next. By moving beyond just a packet filter to a general-purpose sandboxed runtime, eBPF opened many new use cases around networking, observability, security, tracing, and profiling. eBPF is now a general-purpose compute engine within the Linux kernel that allows you to hook into, observe, and act upon anything happening in the kernel, like a plug-in for your web browser. A few key design features have enabled eBPF to accelerate innovation and create more performant and customizable systems for the cloud native world.

First, eBPF hooks anywhere in the kernel to modify functionality and customize its behavior without changing the kernel's source. By not modifying the source code, eBPF reduces the time from a user needing a new feature to implementing it from years to days. Because of the broad adoption of the Linux kernel across billions of devices, making changes upstream is not taken lightly. For example, suppose you want a new way to observe your application and need to be able to pull that metric from the kernel. In that case, you have to first convince the entire kernel community that it is a good idea, and a good idea for everyone running Linux; then it can be implemented and will finally make it to users a few years later. With eBPF, you can go from coding to observation without even having to reboot your machine, and you can tailor the kernel to your specific workload needs without affecting others. "eBPF has been very useful, and the real power of it is how it allows people to do specialized code that isn't enabled until asked for," said Linus Torvalds.

Second, because the verifier checks that programs are safe to execute, eBPF developers can continue to innovate without worrying about the kernel crashing or other instabilities. This allows them and their end users to be confident that they are shipping stable code that can be leveraged in production. For platform teams and SREs, this is also crucial for using eBPF to safely troubleshoot issues they encounter in production.

When applications are ready to go to production, eBPF programs can be added at runtime without workload disruption or node reboot. This is a huge benefit when working at a large scale because it massively decreases the toil required to keep the platform up to date and reduces the risk of workload disruption from a rollout gone wrong. eBPF programs are JIT compiled for near-native execution speed, and by shifting the context from user space to kernel space, they allow users to bypass or skip parts of the kernel that aren't needed or used, thus enhancing performance. However, unlike complete kernel bypasses in user space, eBPF can still leverage all the kernel infrastructure and building blocks it wants without reinventing the wheel. eBPF can pick and choose the best pieces of the kernel and mix them with custom business logic to solve a specific problem. Finally, being able to modify kernel behavior at run time and bypass parts of the stack creates an extremely short feedback loop for developers and has at last allowed experimentation in areas like network congestion control and process scheduling in the kernel.
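As a small taste of that workflow, the sketch below uses the BCC toolkit's Python bindings to load a tiny eBPF program into a running kernel and attach it to the execve syscall, with no reboot or kernel rebuild. It is a minimal illustration, not the approach of any specific project named in this article; it assumes root privileges and the bcc package installed.

```python
# Count execve() calls per PID by attaching an eBPF kprobe at runtime (BCC toolkit).
import time
from bcc import BPF

# Restricted-C eBPF program, compiled and verified when loaded.
program = r"""
BPF_HASH(counts, u32, u64);                 // map: PID -> number of execs

int count_exec(void *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    counts.increment(pid);
    return 0;
}
"""

b = BPF(text=program)
# Resolve the kernel's execve entry symbol for this kernel version and attach.
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="count_exec")

print("Counting execve() calls per PID for 10 seconds...")
time.sleep(10)
for pid, count in b["counts"].items():
    print(f"pid {pid.value}: {count.value} execs")
```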

Growing out of the classic packet filter and taking a major leap beyond the traditional use case unlocked many new possibilities in the kernel, from optimizing resource usage to adding customized business logic. eBPF allows us to speed up kernel innovation, create new abstractions, and dramatically increase performance. eBPF not only reduces the time, risk, and overhead it takes to add new features to production workloads, but in some cases, it makes them possible in the first place.

So many benefits beg the question of whether eBPF can deliver in the real world, and the answer has been a resounding yes. Meta and Google have some of the world's largest data center footprints; Netflix accounts for about 15% of the Internet's traffic. Each of these companies has been using eBPF under the hood for years in production, and the results speak for themselves.

Meta was the first company to put eBPF into production at scale with its load balancer project Katran. Since 2017, every packet going into a Meta data center has been processed with eBPF; that's a lot of cat pictures. Meta has also used eBPF for many more advanced use cases, most recently improving scheduler efficiency, which increased throughput by 15%, a massive boost and resource saving at their scale. Google also processes most of its data center traffic through eBPF, using it for runtime security and observability, and defaults its Google Cloud customers to an eBPF-based dataplane for networking. In the Android operating system, which powers over 70% of mobile devices and has more than 2.5 billion active users spanning over 190 countries, almost every networking packet hits eBPF. Finally, Netflix relies extensively on eBPF for performance monitoring and analysis of its fleet, and Netflix engineers pioneered eBPF tooling, such as bpftrace, to make major leaps in visibility for troubleshooting production servers and built eBPF-based collectors for on-CPU and off-CPU flame graphs.

eBPF clearly works and provides extensive benefits for Internet-scale companies and has been for the better part of a decade, but those benefits also need to be translated to the rest of us.

At the beginning of the cloud native era, GIFEE (Google Infrastructure for Everyone Else) was a popular phrase, but largely fell out of favor because not everyone is Google or needs Google infrastructure. Instead, people want simple solutions that solve their problems, which begs the question of why eBPF is different. Cloud native environments are meant to run scalable applications in modern, dynamic environments. Scalable and dynamic are key to understanding why eBPF is the evolution of the kernel that the cloud native revolution needs.

The Linux kernel, as usual, is the foundation for building cloud native platforms. Applications are now just using sockets as data sources and sinks, and the network as a communication bus. But cloud native needs newer abstractions than those currently available in the Linux kernel because many of these building blocks, like cgroups (CPU, memory handling), namespaces (net, mount, pid), SELinux, seccomp, netfilter, netlink, AppArmor, auditd, and perf, were decades old before cloud even had a name. They don't always talk to each other, and some are inflexible, allowing only for global policies and not per-container or per-service ones. Instead of leveraging new cloud native primitives, they lack awareness of Pods or any higher-level service abstractions and rely on iptables for networking.

As a platform team, if you want to provide developer tools for a cloud native environment, you can still be stuck in this box where cloud native environments can't be expressed efficiently. Without the right tools, platform teams can find themselves in a future they are not ready to handle. eBPF now allows tools to rebuild the abstractions in the Linux kernel from the ground up. These new abstractions are unlocking the next wave of cloud native innovation and will set the course for the cloud native revolution.

For example, in traditional networking, packets are processed by the kernel, and several layers of the network stack inspect each packet before it reaches its destination. This can result in high overhead and slow processing times, especially in large-scale cloud environments with many network packets to be processed. eBPF instead allows inserting custom code into the kernel that can be executed for each packet as it passes through the network stack. This allows for more efficient and targeted processing of network traffic, reducing the overhead and improving performance. Benchmarks from Cilium showed that switching from iptables to eBPF increased throughput 6x, and moving from IPVS-based load balancing to eBPF-based load balancing allowed Seznam.cz to double throughput while also reducing CPU usage by 72x. Instead of providing marginal improvements on an old abstraction, eBPF enables order-of-magnitude enhancements.

eBPF doesn't just stop at networking like its predecessor; it also extends to observability, security, and many other areas, because it is a general-purpose computing environment that can hook anywhere in the kernel. "I think the future of cloud native security will be based on eBPF technology because it's a new and powerful way to get visibility into the kernel, which was very difficult before," said Chris Aniszczyk, CTO of the Cloud Native Computing Foundation. "At the intersection of application and infrastructure monitoring, and security monitoring, this can provide a holistic approach for teams to detect, mitigate, and resolve issues faster."
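
The same mechanism, pointed at a different hook, is what the observability and security use cases build on. The sketch below is again illustrative, not code from any vendor quoted here, and the section and function names are assumptions: it attaches to the execve tracepoint and records which process is launching a new program, the kind of signal runtime-security tools start from.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("tracepoint/syscalls/sys_enter_execve")
int trace_execve(void *ctx)
{
    char comm[16];

    /* Name of the task that is calling execve(). */
    bpf_get_current_comm(&comm, sizeof(comm));

    /* bpf_printk writes to the kernel trace pipe; a production tool
     * would stream events to user space via a ring buffer instead. */
    bpf_printk("execve called by %s", comm);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";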

eBPF provides ways to connect, observe, and secure applications at cloud native speed and scale. "As applications shift toward being a collection of API-driven services driven by cloud native paradigms, the security, reliability, observability, and performance of all applications become fundamentally dependent on a new connectivity layer driven by eBPF," said Dan Wendlandt, CEO and co-founder of Isovalent. "It's going to be a critical layer in the new cloud native infrastructure stack."

The eBPF revolution is changing cloud native; the best part is that it is already here.

While the benefits of eBPF are clear, it is so low level that platform teams without the luxury of Linux kernel development experience need a friendlier interface. This is the magic of eBPF: it is already inside many of the tools running today's cloud native platforms, and you may already be leveraging it without knowing. If you spin up a Kubernetes cluster on any major cloud provider, you are leveraging eBPF through Cilium. If you use Pixie for observability or Parca for continuous profiling, that is eBPF too.

eBPF is a powerful force that is transforming the software industry. Marc Andreessen's famous line that software is eating the world has been semi-jokingly recoined by Cloudflare as "eBPF is eating the world." However, success for eBPF is not when all developers know about it, but when developers start demanding faster networking, effortless monitoring and observability, and easier-to-use security solutions. Less than 1% of developers may ever program something in eBPF, but the other 99% will benefit from it. eBPF will have completely taken over when there is a variety of projects and products providing a massive developer-experience improvement over upstreaming code to the Linux kernel or writing Linux kernel modules. We are already well on our way to that reality.

eBPF has revolutionized the way infrastructure platforms are and will be built and has enabled many new cloud native use cases that were previously difficult or impossible to implement. With eBPF, platform engineers can safely and efficiently extend the capabilities of the Linux kernel, allowing them to innovate quickly. This allows for creating new abstractions and building blocks tailored to the demands of the cloud native world, making it easier for developers to deploy software at scale.

eBPF has been in production for over half a decade at the largest scale and has proven to be a safe, performant, and scalable way to dynamically program the kernel. The silent eBPF revolution has taken hold and is already used in projects and products around the cloud native ecosystem and beyond. With eBPF, platform teams are now ready for the next act in the cloud native era, where they can provision, connect, observe, and secure scalable, dynamic, available, and high-performance environments so developers can focus on just coding business logic.

Read more:
The Silent Platform Revolution: How eBPF Is Fundamentally ... - InfoQ.com


What are quantum computers and how 'quantum' are they? - Lexology

Huge waves of interest are being generated by the development of powerful quantum computers by a select group of the world's leading companies. So much so that a (quite beautiful) quantum computer recently made it onto the cover of Time Magazine. Here, our expert Chris Lester explores the history of this fascinating field and asks: what makes quantum computers 'quantum'?

The age of quantum and computers

The exact starting point of the quantum age is difficult to pinpoint, but it's fair to say that many of the theoretical underpinnings of quantum mechanics were first identified in the early 20th century. In the first decade of that century, Max Planck and Albert Einstein both found that they could more accurately explain physical phenomena concerning light and matter (blackbody radiation and the photoelectric effect) by assuming that light is quantised in discrete packets of energy.

Later developments by many others demonstrated the surprising result that light and physical matter can exhibit properties of both particles and waves. This wave-particle duality led to the discovery of many unexpected effects, including quantum tunnelling, where (for example) an electron can leap to the other side of a barrier that, according to pre-quantum theories, the particle really doesn't have enough energy to overcome. Rather than being of purely academic or intellectual interest, quantum tunnelling has found real-world application in tunnel diodes, a type of semiconductor device that exhibits negative differential resistance.

The 20th century also saw the dawn of the age of digital computers, from early systems that filled entire rooms to the smartphones of today, carried in the pockets of billions of people. This rapid development has been driven by the well-known trend of fitting ever more transistors of ever smaller sizes onto a single chip. As famously noted by Richard Feynman, as the electronic components used in computers reach ever-smaller microscopic scales, the unusual effects predicted by quantum mechanics are likely to become increasingly important. It is therefore tempting to ask (as Feynman did) whether the strange effects of the quantum world could be exploited to make more powerful computers.

A quantum (ish) computer?

A quantum computer is often described as a device that exploits the quantum mechanical properties of matter to perform computations. So, does this mean that all modern computers, which rely on subatomic particles having a distinctly quantum character, are to some extent quantum computers?

Take for example the computer circuit disclosed in UK patent application GB952610A, first published in 1964, which uses tunnel diodes to perform calculations. The circuit receives two signals consisting of binary bits (0s and 1s) and the negative differential resistance (a key characteristic of the tunnel diodes) is used to add the two signals together. The tunnel diodes exploit the quantum mechanical effect known as quantum tunnelling and the circuit uses these quantum-mechanical tunnel diodes to perform calculations. So does this qualify the circuit in GB952610A as a kind of quantum computer? Or is there something missing?

Quantum all the way down

While it's true that all modern digital computers rely on subatomic particles to work, and some even use components that exploit quantum mechanical effects, many would say that in a truly quantum computer, everything from the encoding of data to the logic of the calculations must be quantum. In other words, a quantum computer must be quantum all the way down.

So even though GB952610A discloses a computer that relies on a quantum mechanical effect (quantum tunnelling) to perform calculations, it still adds together binary bits (0s and 1s). In contrast, for the kinds of computers that are generally described as quantum computers, even the bits themselves are quantum bits, or qubits.

A qubit is a quantum system that has (for example) two levels or states, usually written as 0 and 1. The qubit can be in either state or, unlike the bits in a digital computer, in a combination or mixture of both states. Such mixing of states is known as superposition, and it is another fundamental idea from quantum mechanics. Being able to manipulate and perform calculations with qubits, as opposed to bits, opens the door to a whole world of new and exciting possibilities.
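
For readers who want to see the notation (an illustrative aside rather than part of the original article), a general single-qubit state is written as a weighted combination of the two basis states,

|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,

where the amplitudes \alpha and \beta give the probabilities |\alpha|^2 and |\beta|^2 of measuring 0 or 1. A classical bit is the special case in which one of the two amplitudes is exactly zero.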

New and specialised algorithms that use qubits could one day enable quantum computers to perform certain calculations much faster than their digital counterparts. For example, Grover's algorithm could allow faster searches to be performed using qubits. In fact, the first experimental demonstration of a quantum computer in 1998 used a 2-qubit quantum computer to implement this very algorithm.
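
To put a rough figure on that speedup (again an illustrative aside), searching an unstructured list of N items classically requires on the order of N lookups in the worst case, whereas Grover's algorithm needs only on the order of the square root of that number of quantum queries:

O(N) \;\longrightarrow\; O(\sqrt{N}).

For N = 4, the search space addressable by two qubits, a single Grover iteration finds the marked item with certainty, which helps explain why the algorithm could be demonstrated on such a small machine.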

While this early quantum computer was able to solve only the most basic of problems (and could maintain coherence for just a few nanoseconds), it was the forerunner of the cutting-edge quantum computers, with many dozens of qubits, in use today. And while it may seem that there is a long way to go before quantum computers overtake digital computers in terms of power and computing speed, there exists a real appetite to realise their benefits sooner rather than later.

Rise of the quantum computing era

Although quantum effects have been known and used in computers for quite some time, it seems that the age of quantum computing proper is just getting started. Each year, increasing numbers of patent applications relating to quantum computation are being filed in jurisdictions across the globe, perhaps reflecting the huge sums being invested.

Particularly interesting recent developments include the emergence of hybrid computers (those which combine both digital and quantum processors) and the expanding list of commercially available systems being used by some of the world's leading companies. From minimising passenger transit times in airports to transforming financial services and detecting fraud, quantum computers are already being used to help solve real-world problems across many industries.

See more here:
What are quantum computers and how 'quantum' are they? - Lexology


Data Backup And Recovery Global Market Report 2023 - GlobeNewswire

New York, April 06, 2023 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Data Backup And Recovery Global Market Report 2023" (https://www.reportlinker.com/p06443941/?utm_source=GNW), which covers companies including Cohesity, Broadcom Inc., Carbonite Inc., Actifio Technologies, and Redstor Limited.

The global data backup and recovery market grew from $12.18 billion in 2022 to $14.15 billion in 2023, a compound annual growth rate (CAGR) of 16.2%. The Russia-Ukraine war disrupted the chances of a global economic recovery from the COVID-19 pandemic, at least in the short term. The war has led to economic sanctions on multiple countries, a surge in commodity prices, and supply chain disruptions, causing inflation across goods and services and affecting many markets across the globe. The data backup and recovery market is expected to grow to $23.64 billion in 2027 at a CAGR of 13.7%.
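
As a quick sanity check on those figures (arithmetic added here, not taken from the report itself), the compound annual growth rate over n years is

\mathrm{CAGR} = \left(\frac{V_{\text{end}}}{V_{\text{start}}}\right)^{1/n} - 1,

which gives (14.15 / 12.18) - 1 ≈ 16.2% for the single year from 2022 to 2023, and (23.64 / 14.15)^{1/4} - 1 ≈ 13.7% for the four years from 2023 to 2027, consistent with the rates quoted.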

The data backup and recovery market includes revenues earned by entities providing disk/tape backup, hybrid cloud backup, direct-to-cloud backup, recovery from a local device, recovery from the cloud, and recovery directly in the cloud. The market value includes the value of related goods sold by the service provider or included within the service offering.

Only goods and services traded between entities or sold to end consumers are included.

Data backup and recovery refers to the area of on-premises and cloud-based technology solutions that allow enterprises to secure and maintain their data for legal and business requirements. It covers making a backup copy of data, keeping that copy somewhere safe in case the original becomes lost or damaged, and then restoring the data to its original location or to a secure alternative so it can be used once more in operations.

North America was the largest region in the data backup and recovery market in 2022. Asia-Pacific is expected to be the fastest-growing region in the forecast period.

The regions covered in the data backup and recovery market report are Asia-Pacific, Western Europe, Eastern Europe, North America, South America, Middle East and Africa.

The main types of data backup and recovery are service backup, media storage backup, and email backup. Service backup connects systems to a private, public, or hybrid cloud managed by an outside provider, in place of performing backups with a centralized, on-premises IT department.

Service backup is a method of backing up data that entails paying an online data backup provider for backup and recovery services. The components of the market are software and services, deployed either in the cloud or on-premises.

The industry verticals that use backup and recovery include IT and telecommunications, retail, banking, financial services and insurance, government and public sector, healthcare, media and entertainment, manufacturing, education, and others.

An increase in the adoption of cloud data backup is expected to propel the growth of the data backup and recovery market. Cloud backup stores a copy of a physical or virtual file, database, or other data in a secondary, off-site location in case of equipment failure or other emergencies.

Cloud-based backup keeps data in the cloud, where it is accessible anywhere and at any time, helping to keep it safe and easily recoverable.

For instance, in November 2020, Gartner, a US-based management consulting company, predicted that following the COVID-19 crisis there would be an increase in IT investment in the cloud, with cloud expected to account for 14.2% of all worldwide enterprise IT spending in 2024, up from 9.1% in 2020. An increase in the adoption of cloud data backup is therefore driving the growth of the data backup and recovery market.

Technological advancement is a key trend gaining popularity in the data backup and recovery market. Major data backup and recovery companies are investing in new technologies and in research and development to offer efficient alternatives such as multi-cloud data backup and recovery.

Multi-cloud data backup and recovery systems back up data across cloud services from different providers, frequently copying backups from one service to another and storing them there for disaster recovery.

Ideally, these solutions should allow recovery from many sources. For instance, in June 2022, Backblaze, Inc., a US-based cloud storage and data backup company, partnered with Veritas Technologies LLC to offer multi-cloud data backup and recovery. Customers who use Backup Exec to orchestrate their data backup and recovery procedures can use the combined solution's simple, inexpensive, S3-compatible object storage. Backup Exec from Veritas enables companies to safeguard almost any data on any storage medium, including tape, servers, and the cloud. Veritas Technologies LLC is a US-based data management company.

In September 2021, HPE, a US-based information technology company, acquired Zerto for $374 million. Through this acquisition, HPE further transforms its storage business into a cloud-native, software-defined data services company and positions the HPE GreenLake edge-to-cloud platform in the fast-growing data protection sector with a tested solution.

Zerto is a US-based company specializing in software for on-premises and cloud data migration, backup, and disaster recovery.

The countries covered in the data backup and recovery market report are Australia, Brazil, China, France, Germany, India, Indonesia, Japan, Russia, South Korea, UK and USA.

The market value is defined as the revenues that enterprises gain from the sale of goods and/or services within the specified market and geography through sales, grants, or donations in terms of the currency (in USD, unless otherwise specified).

The revenues for a specified geography are consumption values that are revenues generated by organizations in the specified geography within the market, irrespective of where they are produced. It does not include revenues from resales along the supply chain, either further along the supply chain or as part of other products.

The data backup and recovery market research report is one of a series of new reports that provides data backup and recovery market statistics, including global market size, regional shares, competitors and their market shares, detailed market segments, market trends and opportunities, and any further data you may need to thrive in the data backup and recovery industry. The report delivers a complete perspective, with an in-depth analysis of the current and future scenario of the industry. Read the full report: https://www.reportlinker.com/p06443941/?utm_source=GNW

About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


Read the rest here:
Data Backup And Recovery Global Market Report 2023 - GlobeNewswire


Western Digital Network Breach: Hackers Gained Access to Company Servers - GBHackers

Western Digital (WD), the renowned storage manufacturer behind the SanDisk brand, has announced a data breach on its network, resulting in unauthorized access by attackers to data on multiple systems.

WD is a company based in the United States that specializes in manufacturing computer drives and data storage devices, providing data center systems, and offering customers cloud storage services.

The incident is ongoing; the company has promptly deployed incident responders and is collaborating with digital forensics experts to investigate the attack.

"Western Digital identified a network security incident involving Western Digital's systems. In connection with the ongoing incident, an unauthorized third party gained access to a number of the Company's systems," WD said in a press release.

The Company is implementing proactive measures to secure its business operations, including taking systems and services offline, and will continue taking additional steps as appropriate.

Additionally, the company has stated that it is actively working to restore the affected systems. It suspects that the unauthorized party obtained data from its systems and is striving to understand the nature and extent of that data.

As a result of the incident, several users reported that My Cloud, the company's cloud storage service, experienced over 12 hours of downtime.

"Our team is working urgently to resolve the issue and restore access as soon as possible. We apologize for any inconvenience this may cause and appreciate your patience," the company added.

According to its incident report, "We are experiencing a service interruption preventing customers from accessing the My Cloud, My Cloud Home, My Cloud Home Duo, My Cloud OS 5, SanDisk ibi, SanDisk Ixpand Wireless Charger service."

Following the attack, the storage manufacturer has taken further security measures to protect its systems and operations, which may affect some of Western Digital's services.

The following products are impacted by this security incident:

My Cloud
My Cloud Home
My Cloud Home Duo
My Cloud OS 5
SanDisk ibi
SanDisk Ixpand Wireless Charger

We attempted to contact Western Digital for further information on the incident but did not receive a response. We will provide updates to the article as soon as they become available.


Read more from the original source:
Western Digital Network Breach Hackers Gained Access to Company Servers - GBHackers
