Cybercriminals to focus on remote and cloud-based systems in UAE next year – Gulf Business

Trend Micro predicts that the UAE's home networks, remote working software, and cloud systems will be at the center of a new wave of cyberattacks in 2021.

In a report titled "Turning the Tide," the cybersecurity firm forecasts that cybercriminals in 2021 will particularly look to home networks as a critical launch pad for compromising corporate IT and IoT networks.

"As the UAE begins to enter a post-pandemic world, the trend for remote and hybrid working is likely going to continue for many organisations," said Majd Sinan, country manager, UAE, Trend Micro. "In 2021, we predict that cybercriminals will launch more aggressive attacks to target corporate data and networks in the UAE."

Showing the growing risk of cyberattacks, Trend Micro systems detected a combined 13,100,616 email, URL, and malware cyber-threats during the first half of 2020, according to its Midyear Security Report. Ransomware attacks in the UAE accounted for 4.27 per cent of the world's ransomware attacks.

"In 2021, the UAE's security teams will need to double down on user training, extended detection and response, and adaptive access controls," said Majd Sinan. "This past year, many UAE organisations were focused on surviving: now it's time for the UAE's organisations to thrive, with comprehensive cloud security as their foundation."

The report warns that end-users who regularly access sensitive data (e.g. HR professionals accessing employee data, sales managers working with sensitive customer information, or senior executives managing confidential company numbers) will be at the greatest risk. Attacks will likely exploit known vulnerabilities in online collaboration and productivity software soon after they are disclosed, rather than zero-days.

Access-as-a-service business models of cybercrime will grow, targeting the home networks of high-value employees, corporate IT and IoT networks. IT security teams will need to overhaul work-from-home policies and protections to tackle the complexity of hybrid environments where work and personal data commingle on a single machine. Zero-trust approaches will increasingly be favored to empower and secure distributed workforces.

As third-party integrations reign, Trend Micro also warned that exposed APIs will become a new preferred attack vector for cybercriminals, providing access to sensitive customer data, source code and back-end services.

Cloud systems are another area in which threats will continue to persist in 2021, from unwitting users, misconfigurations, and attackers attempting to take over cloud servers to deploy malicious container images.

See the original post:
Cybercriminals to focus on remote and cloud-based systems in UAE next year - Gulf Business

Read More..

The Diminishing Role of Operating Systems | IT Pro – ITPro Today

The role of operating systems is changing significantly. Due in part to trends like the cloud, it feels like the days when operating systems formed the foundation for application development, deployment and management are over.

So, is it time to declare the operating system dead? Read on for some thoughts on the past, present and future role of operating systems.

When I say that the operating system may be a dying breed, I don't mean that operating systems will disappear completely. You're still going to need an OS to power your server for the foreseeable future, regardless of what that server does or whether it runs locally or in the cloud.

What's changing, however, is the significance of the role of operating systems relative to other components of a modern software and hardware stack.

In the past, the OS was the foundation on which all else was built, and the central hub through which it was managed. Applications had to be compiled and packaged specifically for whichever OS they ran on. Deployment took place using tooling that was built into the OS. Logging and monitoring happened at the OS level, too.

By extension, OS-specific skills were critical for anyone who wanted to work as a developer or IT engineer. If you wanted to develop for or manage Linux environments, you had to know the ins and outs of kernel flags, init run levels, ext3 (or ext4, when that finally came along), and so on. For Windows, you had to be a master of the System Registry, Task Manager and the like.

Fast forward to the present, and much of this has changed due to several trends:

Perhaps the most obvious is the cloud. Today, knowing the architecture and tooling of a particular cloud, like AWS or Azure, is arguably more important than being an expert in a specific operating system.

To be sure, you need an OS to provision the virtual servers that you run in the cloud. But in an age when you can deploy an OS to a cloud server in seconds using prebuilt images, there is much less you need to know about the OS itself to use it in the cloud.
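To see how little OS work that involves, consider a minimal sketch in Python with the boto3 AWS SDK; the AMI ID, region and instance type here are illustrative placeholders, not recommendations:

```python
# Minimal sketch: provisioning a cloud server from a prebuilt OS image.
# Assumes boto3 is installed and AWS credentials are already configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# One API call launches a server with the OS preinstalled; no installer,
# no disk partitioning, no post-install configuration to get started.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder ID for a prebuilt image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```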

Likewise, many of the processes that used to happen at the OS level now take place at the cloud level. Instead of looking at logs within the operating system file tree, you manage them through log aggregators that run as part of your cloud tool set. Instead of having to partition disks and set up file systems, you build storage buckets in the cloud. In place of managing file permissions, groups and users within the operating system, you write IAM policies to govern your cloud resources.
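As a concrete illustration, here is a minimal sketch (again Python with boto3; the bucket name, IAM user and policy are hypothetical) of storage and access control handled entirely at the cloud level rather than the OS level:

```python
# Minimal sketch: cloud-level storage and access control.
# Assumes boto3 and configured AWS credentials; the bucket name and IAM
# user below are hypothetical examples.
import json
import boto3

# A storage bucket replaces partitioning disks and creating file systems.
# (Regions outside us-east-1 also need a CreateBucketConfiguration.)
s3 = boto3.client("s3")
s3.create_bucket(Bucket="example-reports-bucket")

# An IAM policy replaces OS-level users, groups and file permissions.
iam = boto3.client("iam")
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-reports-bucket/*",
    }],
}
iam.put_user_policy(
    UserName="report-reader",  # hypothetical IAM user
    PolicyName="read-reports",
    PolicyDocument=json.dumps(policy),
)
```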

In short, it often feels like the cloud is the new OS.

Kubernetes, and orchestration tools in general, are perhaps taking on the roles of operating systems.

If you deploy workloads using an orchestration platform like Kubernetes, knowing how to configure and manage the Kubernetes environment is much more important than understanding how to manage the operating systems that power the nodes within your cluster. From the perspective of Kubernetes, processes like storage management, networking and logging are abstracted from the underlying operating systems.

Alongside Kubernetes, containers are doing their part to make the OS less relevant. Whether or not you orchestrate containerized applications with a platform like Kubernetes, containers allow you to take an application and deploy it on any operating system in a given family without having to worry about the specific configuration of the OS.

In other words, a container that is packaged for Linux will run on Ubuntu just as easily as it will on Red Hat Enterprise Linux or any other distribution. And the tools you use to deploy and manage the container will typically be the same, regardless of which specific OS you use.
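A minimal sketch with the official Kubernetes Python client makes the abstraction visible: the cluster will happily report a different OS image per node, while workloads are scheduled the same way everywhere. This assumes a reachable cluster and a working kubeconfig:

```python
# Minimal sketch: the node OS is a reported detail, not something workloads
# are coupled to. Assumes the `kubernetes` package and a valid ~/.kube/config.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    info = node.status.node_info
    # os_image might read "Ubuntu 20.04 LTS" on one node and
    # "Red Hat Enterprise Linux 8" on another; the containers don't care.
    print(node.metadata.name, info.os_image, info.container_runtime_version)
```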

If the role of operating systems becomes totally irrelevant at some point, it will likely be thanks to unikernels, a technology that fully removes the OS from the software stack.

In a unikernel-based environment, there is no operating system in the conventional sense. Unikernels are self-hosting machine images that can run applications with just snippets of the libraries that are present in a traditional OS.

For now, unikernels remain mostly an academic idea. But projects like Vorteil (which technically doesn't develop unikernels but rather Micro-VMs, which are very similar) are now working to commercialize them and move them into production. It may not be long before it's possible to deploy a real-world application without any kind of operating system at all.

In a similar vein, serverless functions are removing the operating system entirely, at least from the user's perspective.

Serverless functions, which can run in the cloud or on private infrastructure, don't operate without an operating system, of course. They require traditional OS environments to host them. But from the user's point of view, there is no OS to worry about because serverless functions are simply deployed without any kind of OS-level configuration or management.

Indeed, the primary selling point of serverless, at least as it is presented by vendors like AWS, is that there is zero administration. The operating system may still lurk in the background, but it may as well be absent as far as developers and IT engineers are concerned.
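For illustration, a complete AWS Lambda-style Python function can be this small; everything below the handler (OS, patching, log rotation) is the platform's concern, not the developer's. The event shape here is a hypothetical example:

```python
# Minimal sketch of a serverless function handler (AWS Lambda-style Python).
# There is no OS to manage here: no packages to patch, no init scripts,
# no log files to rotate; the platform supplies the OS underneath.
import json

def handler(event, context):
    name = event.get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```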

In short, then, the operating system as we have known and loved it for decades is simply much less significant than it once was. It's not going away entirely, but it has been superseded in relevance by the other layers that make up modern software stacks.

If you're developing or deploying an application today, you may want to think less about which operating system it will run on and more about which cloud or orchestration tool will host it. That's what really matters in modern IT.

Read the original here:
The Diminishing Role of Operating Systems | IT Pro - ITPro Today

Read More..

Top 10 Hyperconverged Infrastructure (HCI) Solutions – Datamation

A hyperconverged infrastructure (HCI) solution is the primary tool for connecting, managing and operating the interconnected enterprise systems that make up such an infrastructure. The technology helps organizations virtualize storage, servers, and networks. While converged infrastructure uses hardware to achieve this objective, HCI takes a software-centric approach.

To be sure, hyperconvergence has its pros and cons. Yet the advantages are clear: HCI boosts flexibility by making it easier to scale according to usage demands and adjust resources faster and more dynamically. By virtualizing components, it's possible to build more efficient databases, storage systems, server frameworks and more. HCI solutions increasingly extend from the data center to the edge. Many also incorporate artificial intelligence and machine learning to continually improve, adapt and adjust to fast-changing business conditions. Some also contain self-healing functions.

By virtualizing an IT environment, an enterprise can also simplify systems management and trim costs. This can lead to a lower total cost of ownership (TCO). Typically, HCI environments use a hypervisor, usually running on a server that uses direct-attached storage (DAS), to create a data center pool of systems and resources. Most support heterogeneous hardware and software systems. The end result is a more flexible, agile and scalable computing framework that makes it simpler to build and manage private, public and hybrid clouds.

A number of factors are important when evaluating HCI solutions. These include:

Edge-core cloud integration. Organizations have vastly different needs when it comes to connecting existing infrastructure, clouds and edge services. For instance, an organization may require only the storage layer in the cloud. Or it may want to duplicate or convert configurations when changing cloud providers. Ideally, an HCI solution allows an enterprise to change, upgrade and adjust as infrastructure needs change.

Analytics. It's crucial to understand operations within an HCI environment. A solution should provide visibility through a centralized dashboard but also offer ways to drill down into data and obtain reports on what is taking place. This also helps with understanding trends and doing capacity planning.

Storage management. An HCI solution should provide support for setting up and configuring a diverse array of storage frameworks, managing them and adapting them as circumstances and conditions change. It should make it simple to add nodes to a cluster and should support block, file and object storage. Some systems also offer NVMe-oF (NVMe over Fabrics) support, which allows an enterprise to rearchitect storage layers using flash memory.

Hypervisor ease of use. Most solutions support multiple hypervisors. This increases flexibility and configuration options, and it's often essential in large organizations that rely on multiple cloud providers. But it's important to understand whether you're actually going to use this feature and what you plan to do with it. In many cases, ease of use and manageability are more important than the ability to use multiple hypervisors.

Data protection integration. It's important to plug in systems and services to protect data, and to apply policy changes across the organization. It's necessary to understand whether this protection is scalable and adaptable as conditions change. Ideally, the HCI environment can replace disparate backup and data recovery systems. This greatly improves manageability and reduces costs.

Container support. A growing number of vendors support containers, or plan to do so soon. Not every organization requires this feature, but it's important to consider whether your organization may move in this direction.

Serverless support. Vendors are introducing serverless solutions that support code-triggered events. This has traditionally occurred in the cloud, but it's increasingly an on-premises function that can operate within an HCI framework.

Here are ten leading HCI solutions:

The Cisco HyperFlex HX data platform manages business and IT requirements across a network. The solution accommodates enterprise applications, big data, deep learning and other components that extend from the data center to remote offices and out to retail sites and IoT devices. The platform is designed to work on any system or any cloud.

DataCore SDS delivers a highly flexible approach to HCI. It offers a suite of storage solutions that accommodate mixed protocols, hardware vendors and more within converged and hyperconverged SAN environments. The software-defined storage framework, SANsymphony, features block-based storage virtualization. It is designed for high availability. The vendor focuses heavily on healthcare, education, government and cloud service providers.

VxRail is a fully integrated, preconfigured, and pre-tested VMware hyperconverged infrastructure appliance that delivers virtualization, compute and storage within a single appliance. The HCI platform takes an end-to-end automated lifecycle management approach.

Hewlett Packard Enterprise aims to take hyperconverged architectures beyond the realm of the software-defined and into the world of the AI-driven with SimpliVity. The HCI platform delivers a self-managing, self-optimizing, and self-healing infrastructure that uses machine learning to continually improve. HPE offers solutions specifically designed for data center consolidation, multi-GPU image processing, high-capacity mixed workloads and edge environments.

NetApp HCI consolidates mixed workloads while delivering predictable performance and granular control at the virtual machine level. The solution scales compute and storage resources independently. It is available in different compute and storage configurations, thus making it flexible and scalable across data center, cloud and web infrastructures.

Nutanix offers a fully software-defined hyperconverged infrastructure that provides a single cloud platform for tying together hybrid and multi-cloud environments. Its Xtreme Computing platform natively supports compute, storage, virtualization and networking, including IoT, with the ability to run any app at scale. It also supports analytics and machine learning.

StarWind offers an HCI appliance focused on both operational simplicity and performance. It bills its all-flash system as turnkey with ultra-high resiliency. The solution, designed for SMB, ROBO and enterprise environments, aims to trim virtualization costs through a highly streamlined and flexible approach. It connects commodity servers, disks and flash; a hypervisor of choice; and associated software within a single manageable layer.

StarWind Virtual SAN is essentially a software version of the vendor's HyperConverged Appliance. It eliminates the need for physically shared storage by mirroring internal hard disks and flash between hypervisor servers. The approach is designed to cut costs for SMB, ROBO, cloud and hosting providers. Like the vendor's appliance, StarWind Virtual SAN is a turnkey solution.

The vCenter Server delivers centralized visibility as well as robust management functionality at scale. The HCI solution is designed to manage complex IT environments that require a high level of extensibility and scalability. It includes native backup and restore functions. vCenter supports plug-ins for major vendors and solutions, including Dell EMC, IBM and Huawei Technologies.

vSAN is an enterprise-class storage virtualization solution that manages storage on a single software-based platform. When combined with VMware's vSphere, it allows an organization to manage compute and storage within a single platform. The solution connects to a broad ecosystem of cloud providers, including AWS, Azure, Google Cloud, IBM Cloud, Oracle Cloud and Alibaba Cloud.

Pros and cons by vendor:

Cisco HyperFlex HX-Series
Pros: Supports numerous configurations and use cases; highly scalable; supports GPU-based deep learning.
Cons: Requires Cisco networking equipment; pricing model can be confusing; some users find manageability difficult.

DataCore Software-Defined Storage
Pros: Supports mixed SAN, flash and disk environments; excels at load balancing and policy management; strong failover capabilities.
Cons: User interface can be daunting; licensing can become complex; customer support is inconsistent.

Dell/EMC VxRail
Pros: Delivers a true single point of management and support; handles multi-cloud clusters well; integrates well with storage devices; low TCO.
Cons: Limited support for mixing older flash clusters and hyper-clusters; some management challenges; sometimes pricey.

HPE SimpliVity
Pros: Strong storage management, backup and data replication capabilities; users like the interface; strong partner relationships; highly scalable.
Cons: Managing clusters can present challenges; pricey; users cite problems with technical and customer support.

NetApp HCI
Pros: Excellent manageability with granular controls; strong API framework; support for numerous workloads from different vendors; highly scalable.
Cons: Installation and initial cabling can be difficult; documentation sometimes lacking; users say some security features and controls are missing.

Nutanix AOS
Pros: Feature-rich platform; single user interface with strong management tools; users report excellent tech support.
Cons: Pricey; users report some complexity with using encryption and micro-segmentation; can be difficult to integrate with legacy systems.

StarWind HyperConverged Appliance
Pros: Highly scalable; supports numerous configurations and technologies.

Read the original post:
Top 10 Hyperconverged Infrastructure (HCI) Solutions - Datamation

Read More..

Building a Better U.S. Approach to TikTok and Beyond – Lawfare

One of the defining technology decisions of the Trump administration was its August 2020 ban on TikTok, an executive order to which legal challenges are still playing out in the courts. The incoming Biden-Harris administration, however, has indicated its intention to pivot away from Trump's approach on several key technology policies, from the expected appointment of a national cyber director to the reinvigoration of U.S. diplomacy to build tech coalitions abroad. President Biden will need to make policy decisions about software made by companies incorporated in foreign countries, and about the extent to which that might pose national security risks. There may be a future TikTok policy, in other words, that isn't at all about, or at least isn't just about, TikTok.

In April 2020, Republican Rep. Jim Banks introduced legislation in the House of Representatives that sought to require developers of foreign software to provide warnings before consumers downloaded the products in question. It's highly likely that similar proposals will enter Congress in the next few years. On the executive branch side, the Biden administration has many decisions ahead on mobile app supply chain security, including whether to keep in place Trump's executive order on TikTok. These questions are also linked to foreign policy: President Biden will need to decide how to handle India's bans of Chinese software applications, as India will be a key bilateral tech relationship for the United States. And the U.S. government will also have to make choices in the near future about cloud-based artificial intelligence (AI) applications served from other countries, that is, where an organization's AI tools run on third-party cloud servers.

In this context, what might a better U.S. policy on the security risks of foreign-made software look like? The Trump administration's TikTok executive order was more of a tactical move against a single tech firm than a fully developed policy. The new administration will now have the opportunity to set out a more fully realized, comprehensive vision for how to tackle this issue.

This analysis offers three important considerations for the U.S. executive branch, drawing on lessons from the Trump administration's TikTok ban. First, any policy needs to explicitly define the problem and what it sets out to achieve; simply asserting national security issues is not enough. Second, any policy needs to clearly articulate the alleged risks at play, because foreign software could be entangled with many economic and security issues depending on the specific case. And third, any policy needs to clearly articulate the degree to which a threat actor's supposed cost-benefit calculus makes those different risks likely. This is far from a comprehensive list. But failure to address these three considerations in policy design and implementation will only undermine the policy's ultimate effectiveness.

Defining the Problem

First, any policy on foreign software security needs to be explicitly clear about scope, that is, what problem the government is trying to solve. Failure to properly scope policies on this front risks confusing the public, worrying industry and obscuring the alleged risks the government is trying to communicate. This undermines the government's objectives on all three fronts, which is why scoping foreign software policies clearly and explicitly, in executive orders, policy memos and communication with the public, is critical.

Trump's approach to TikTok and WeChat provides a lesson in what not to do. Arguably, the TikTok executive order was not even a policy: It was more a tactical-level move against a single tech firm than a broader specification of the problem set and development of solutions. Trump had discussed banning TikTok in July 2020 as retaliation for the Chinese government's handling of the coronavirus, so, putting aside that this undermined the alleged national security motives behind the executive order, the order issued on TikTok wasn't completely out of the blue. That said, the order on WeChat that accompanied the so-called TikTok ban was surprising, and its signing only created public confusion. Until then, much of the congressional conversation on Chinese mobile apps had focused on TikTok, and the Trump administration had given no warning that WeChat would be the subject of its actions too. What's more, even after the executive orders were signed in August, most of the Trump administration's messaging focused just on TikTok, ignoring WeChat. The administration also wrote the WeChat executive order with troublingly, and perhaps sloppily, broad language that scoped the ban as impacting Tencent Holdings, which owns WeChat and many other software applications, and thus concerned gaming and other software industries, though the administration subsequently stated the ban was aimed only at WeChat.

Additionally, the Trump administration's decisions on U.S.-China tech often blurred together trade and national security issues. The Trump administration repeatedly suggested that TikTok's business presence in mainland China inherently made the app a cybersecurity threat, without elaborating on why the executive orders focused solely on TikTok and WeChat rather than other software applications from China too. Perhaps the bans were a possible warning shot at Beijing about potential collection of U.S. citizen data, but it's worth asking if that warning shot even worked, given the legal invalidations of the TikTok ban and the blowback even within the United States. Again, the overarching policy behind these tactical decisions was undeveloped. It was unclear if TikTok and WeChat were one-off decisions or the beginning of a series of similar actions.

Going forward, any executive branch policy on foreign software needs to explicitly specify the scope of the cybersecurity concerns at issue. In other words, the executive needs to clearly identify the problem the U.S. government is trying to solve. This will be especially important as the incoming Biden administration contends with cybersecurity risks emanating not just from China but also from Russia, Iran and many other countries. If the White House is concerned about targeted foreign espionage through software systems, for example, those concerns might very well apply to cybersecurity software developed by a firm incorporated in Russia, which would counsel a U.S. approach not just limited to addressing popular consumer apps made by Chinese firms. If the U.S. is concerned about censorship conducted by foreign-owned platforms, then actions by governments like Tehran would certainly come into the picture. If the problem is a foreign government potentially collecting massive amounts of U.S. citizen data through software, then part of the policy conversation needs to focus on data brokers, too: the large, unregulated companies in the United States that themselves buy up and sell reams of information on U.S. persons to anyone who's buying.

Software is constantly moving and often communicating with computer systems across national borders. Any focus on a particular company or country should come with a clear explanation, even if it seems relatively intuitive, as to why that company or country poses a particularly different or elevated risk compared to other sources of technology.

Clearly Delineate Between Different Alleged Security Risks

The Trump administration's TikTok ban also failed to clearly articulate and distinguish between its alleged national security concerns. Depending on one's perspective, concerns might be raised about TikTok collecting data on U.S. government employees, TikTok collecting data on U.S. persons not employed by the government, TikTok censoring information in China at Beijing's behest, TikTok censoring information beyond China at Beijing's behest, or disinformation on the TikTok platform. Interpreting the Trump administration's exact concerns was difficult, because White House officials were not clear and explicit about which risks most concerned them. Instead, risks were blurred together, with allegations of Beijing-compelled censorship thrown around alongside claims that Beijing was using the platform to conduct espionage against U.S. persons.

If there was evidence that these practices were already occurring, the administration did not present it. If the administration's argument was merely that such actions could occur, the administration still did not lay out its exact logic. There is a real risk that the Chinese government is ordering, coercing or otherwise compelling technology companies incorporated within its borders to engage in malicious cyber behavior on its behalf worldwide, whether for the purpose of censorship or cyber operations. Beijing quite visibly already exerts that kind of pressure on technology firms in China to repress the internet domestically. Yet to convince the public, industry, allies, partners, and even those within other parts of government and the national security apparatus that a particular piece or source of foreign software is a national security risk, the executive branch cannot overlook the importance of clear messaging. That starts with clearly articulating, and not conflating, the different risks at play.

The spectrum of potential national security risks posed by foreign software is large and depends on what the software does. A mobile app platform with videos and comments, for instance, might collect intimate data on U.S. users while also making decisions about content moderation, so in that case it's possible the U.S. government could have concerns about mass data collection, censorship and information manipulation all at once. Or, to take another example, cybersecurity software that runs on enterprise systems and scans internal company databases and files might pose an array of risks related to corporate espionage and nation-state espionage, but this could have nothing to do with concerns about disinformation and content manipulation.

Software is a general term, and the types and degrees of cybersecurity risk posed by different pieces of software can vary greatly. Just as smartphones are not the same as computing hardware in self-driving cars, a weather app is not the same as a virtualization platform used in an industrial plant. Software could be integrated with an array of hardware components but not directly connect back to all those makers: Think of how Apple, not the manufacturers of subcomponents for Apple devices, issues updates for its products. Software could also directly connect back to its maker in potentially untrusted ways, as with Huawei issuing software updates to 5G equipment. It could constantly collect information, as with the TikTok app itself, and it could learn from the information it collects, as with TikTok's use of machine learning and the way many smartphone voice-control systems collect data on user speech. This varied risk landscape means policymakers must be clear, explicit and specific about the different alleged security risks posed by foreign software.

Give Cost-Benefit Context on Security Risks

Finally, the U.S. government should make clear to the public the costs and benefits that a foreign actor might weigh in using that software to spy. Just because a foreign government might hypothetically collect data via something like a mobile app (whether by directly tapping into specific devices or by turning to the app's corporate owner for data hand-overs) doesn't mean that the app is necessarily an optimal vector for espionage. It might not yield useful data beyond what the government already has, or it might be too costly relative to using other active data collection vectors. Part of the U.S. government's public messaging on cyber risk management should therefore address why that particular vector of data collection would be more attractive than some other vector, or what supplementary data it would provide. In other words, what is the supposed value-add for the foreign government? This could also include consideration of controls offered by the software's country of origin (for example, transparency rules, mandatory reporting for publicly traded companies, or laws that require cooperation with law enforcement or intelligence services), much like the list of trust criteria under development as part of Lawfare's Trusted Hardware and Software Working Group.

In the case of the Trump administration's TikTok executive order, for example, there was much discussion by Trump officials about how Beijing could potentially use the app for espionage. But administration officials spoke little about why the Chinese intelligence services would elect to use that vector over others, or what about TikTok made its data a hypothetical value-add from an intelligence perspective.

If the risk concern is about targeted espionage against specific high-value targets, then the cost-benefit conversation needs to be about what data that foreign software provides, and how easily it provides that benefit, relative to other methods of intelligence collection. If the risk concern is about bulk data collection on all the software's users, then the cost-benefit conversation needs to be about why that data is different from information that is openly available, was stolen via previous data breaches, or is purchasable from a U.S. data broker. That should include discussing what value that data adds to what has already been collected: Is the risk that the foreign government will develop microtargeted profiles on individuals, supplement existing data, or enable better data analytics on preexisting information?

The point again is not that TikTok's data couldn't add value, even if it overlapped with what Chinese intelligence services have already collected. Rather, the Trump administration did not clearly articulate Beijing's supposed cost-benefit calculus.

Whatever the specific security concern, managing the risks of foreign espionage and data collection through software applications is in part a matter of assessing the potential payoff for the adversary: not just the severity of the potential event, or the actor's capabilities, but why that actor might pursue this option at all. Policy messaging about these questions speaks to the government's broader risk calculus and whether the U.S. government is targeting the most urgent areas of concern. For instance, if the only concern about a piece of foreign software is that it collects data on U.S. persons, but it then turns out that data was already publicly available online or heavily overlaps with a foreign intelligence service's previous data theft, would limiting that foreign software's spread really mitigate the problems at hand? The answer might be yes, but these points need to be articulated to the public.

Conclusion

A key part of designing federal policies on software supply chain security is recognizing the globally interconnected and interdependent nature of software development today. Developers working in one country to make software for a firm incorporated in a second may sell their products in a third country and collect data sent to servers in a fourth. Software applications run in one geographic area may talk to many servers located throughout the world, whether for a Zoom call or Gmail, and the relatively open flow of data across borders has enabled the growth of many different industries, from mobile app gaming to a growing number of open-source machine-learning tools online.

If the U.S. government wants to draw attention to security risks of particular pieces or kinds of foreign software, or software coming from particular foreign sources, then it needs to be specific about why that software is being targeted. Those considerations go beyond the factors identified here. The WeChat executive order, for instance, wasn't just unclear in specifying the national security concerns ostensibly motivating the Trump administration; it also failed to discuss what a ban on WeChat in the United States would mean for the app's many users. Hopefully, greater attention paid to these crucial details will help better inform software security policies in the future.

More here:
Building a Better U.S. Approach to TikTok and Beyond - Lawfare

Read More..

Legacy IT: The hidden problem of digital transformation – SC Magazine

Companies may want to undertake digital transformation, but often start with legacy servers built in the early 2000s. Today's columnist, Hemanta Swain, formerly of TiVo, offers some insights on how to secure legacy IT systems. (Jemimus, Creative Commons Attribution 2.0 Generic, CC BY 2.0)

Legacy IT has become the dirty little secret of digital transformation. These systems, which include servers, OSes, and applications, are relied on by almost every organization for business-critical activities, and many CISOs struggle to protect them from attackers.

During my time as CISO for a public company, I got a first-hand look at the depth of the legacy challenge. We had more than 1,000 servers in use that were built in 2003 but no longer supported by vendors, and more than 200 legacy servers were designated for business-critical activity that drove significant annual revenue. It's a non-starter to take these servers offline, and protecting them comes at a significant cost.

The cost and complexity of protecting legacy systems

The complexity of legacy systems lies in the IT team's inability to update and maintain them. Many of these systems and apps have been in use for many years and may contain millions of lines of code. Changing or altering the code could impact one of the revenue-generating applications that keeps the business running.

On top of this, legacy systems are nearly impossible to patch. This makes them incredibly vulnerable and a target for attack. So how can organizations protect the systems that serve as the core of their business?

Legacy security can't protect legacy systems

Companies have to absorb the cost of protecting legacy systems within current cybersecurity spending. As such, organizations try to retrofit existing solutions like firewalls and endpoint protection.

Digital transformation has made this approach obsolete. Modern infrastructure, data centers, and the move to hybrid clouds give attackers more pathways to target these vulnerable systems.

Legacy systems that were once used by a handful of on-premise applications can now get used by hundreds of applications both on-prem and in the cloud. Containers may even interact with the mainframe. These are connections that firewalls were simply not built to secure. Many firewalls are also legacy devices and don't integrate with modern applications and environments. Using them to secure legacy systems against outside intrusion simply increases the total cost of ownership without actually securing the systems against modern threats.

CISOs require a convergence of security approaches that protect legacy assets, while also minimizing threats across modern assets. The approach we evaluated and trusted was based on the core principles of Zero Trust.

Improve legacy systems with Zero Trust

Establishing Zero Trust around legacy systems and applications requires four critical components: visibility of legacy assets, micro-segmentation, identity management and continuous monitoring.

Companies find it challenging to obtain the proper view of existing legacy assets, but it's vital to ensuring the security of the organization. It's not enough to secure most assets: it takes only one overlooked server for attackers to breach the organization.

After an acquisition, the first step we took was to create a full view of the entire ecosystem and map everything from legacy systems to cloud environments, containers, and applications. By understanding which workloads present the most risk, we could deduce the prime starting points for enforcing Zero Trust.

It's a recipe for inconsistent policy and blind spots to start on the path to Zero Trust with anything less than a holistic view of the entire network. Taking a holistic view empowers security teams to identify the critical areas for the second step: implementing micro-segmentation.

While firewalls have been the traditional choice for segmenting assets from networks, they're not built to protect legacy and unpatched assets at such a granular level. Older techniques such as firewalls and VLANs are costly to own and maintain, and they frequently place similar legacy systems in a single silo. For an attacker, it's like shooting fish in a barrel: a single intrusion can lead to multiple critical systems being exploited.

In addition, security and operations teams need to constantly update rules and policies between the firewalls and the applications and assets they're supposed to protect. This leads to overly permissive policies that may improve workflow but significantly undermine the security posture the organization is trying to build.

We used the micro-segmentation technology Guardicore Centra, which let us build tight, granular security policies to prevent lateral movement. In addition, security teams can deploy micro-segmentation across the entire infrastructure and workloads of all types, including data centers, cloud and modern applications. This eliminates high-risk gaps in security across the infrastructure.

It's very important to enhance the organization's identity and access management platform. Proper user identity management plays a critical role in the Zero Trust principle. Users need access to systems and applications, and security teams must grant that access based on each individual user's role, automating verification before granting access to minimize the operational burden and enable scale.

Micro-segmentation technology offers deep visualization capabilities that make policy management easier and provide capabilities to manage segmentation based on application usage. Applying micro-segmentation across production infrastructure helps to minimize the risks with proper visualization of modern and legacy workloads. This enables the enforcement of server-level policy, which allows only specific workflows between legacy systems and between modern environments and applications to and from the legacy systems.
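To make the idea concrete, here is a conceptual sketch in Python of the default-deny, allow-list model behind server-level micro-segmentation policy. This illustrates the principle only, not Guardicore Centra's actual API; the workload names and ports are hypothetical:

```python
# Conceptual sketch of server-level micro-segmentation: flows are denied by
# default, and only explicitly listed (source, destination, port) tuples are
# permitted, which blocks lateral movement toward legacy servers.
ALLOWED_FLOWS = {
    ("billing-app", "legacy-db-2003", 1433),  # app tier -> legacy SQL Server
    ("backup-svc", "legacy-db-2003", 22),     # nightly backup over SSH
}

def is_permitted(src: str, dst: str, port: int) -> bool:
    """Default-deny: a flow is allowed only if explicitly listed."""
    return (src, dst, port) in ALLOWED_FLOWS

# A permitted application flow passes; a compromised host's attempt does not.
assert is_permitted("billing-app", "legacy-db-2003", 1433)
assert not is_permitted("compromised-host", "legacy-db-2003", 1433)
```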

Legacy systems and applications continue to present a tough challenge for organizations. They're business-critical, but incredibly hard to maintain and properly secure. As organizations embark on digital transformation and introduce hybrid cloud, new applications and data centers, the problem becomes exacerbated.

Securing the business starts with securing the critical assets that make the business run. Visibility of the infrastructure, combined with micro-segmentation and continuous monitoring, controls the risk of legacy systems by building tight segmentation policies that attackers can't exploit. And don't neglect enforcing basic security hygiene across the enterprise.

Hemanta Swain, senior independent consultant, former chief information security officer, TiVo

See the rest here:
Legacy IT: The hidden problem of digital transformation - SC Magazine

Read More..

TGen Leverages phoenixNAP’s Hardware-as-a-Service Powered by Intel to Empower COVID-19 Research – PR Web

Empowering COVID-19 Research with Cutting-Edge Tech

PHOENIX (PRWEB) December 23, 2020

phoenixNAP, a global IT services provider offering security-focused cloud infrastructure, dedicated servers, colocation, and specialized Infrastructure-as-a-Service technology solutions, announced a case study detailing its collaboration with Intel on building an IT platform for a COVID-19 project by Translational Genomics Research Institute (TGen), an affiliate of City of Hope.

In an effort to help the global fight against COVID-19, TGen proposed the creation of a centralized platform for knowledge and information sharing between researchers from all over the world. The platform is intended to automatically pull data related to COVID-19 sequenced genomes from multiple sources and provide an aggregated dataset to enable comparative research. This would help identify previously uncharacterized elements in the SARS-CoV-2 genome and observe important correlation between them for the purpose of improving diagnostics, vaccine constructs, and treatments for COVID-19.

Considering the volume and complexity of biomedical data, the platform needed powerful hardware to ensure seamless processing, reliable storage, and global availability. phoenixNAP and Intel collaborated to provide a customized solution to support these needs. phoenixNAP's hardware-as-a-service (HaaS), powered by dual Intel Xeon Gold 6258R CPUs and Intel NVMe drives (P4610) with Intel VROC, Intel NICs, and Intel Optane persistent memory, met the needs of the project. The ultrafast network experience is enabled through a customized implementation of Intel Tofino Programmable Ethernet Switch Products, which Intel has offered since the acquisition of Barefoot Networks in June 2019.

"We needed a robust computational environment for large data volumes and sophisticated analytical tools. We have maintained compute infrastructure with phoenixNAP for years, but we needed to expand and customize it to support this project. We got a more streamlined, powerful infrastructure that will give us enough power and memory, while at the same time providing us with a great degree of flexibility as our research expands. Intel Optane PMem emerged as a logical solution to support large data sets," said Glen Otero, VP Scientific Computing, TGen.

"Healthcare is becoming more intelligent, distributed, and personalized. Intel technologies are helping to enable a new era of smart, connected, value-based patient care, remote medicine and monitoring, individually tailored treatment plans, and more-efficient clinical operations. Intel-enabled technologies help optimize workflow to lower research and development costs, improve operational efficiency, speed time to market, and improve patient health," said Rachel Mushahwar, VP and GM, Intel US Sales, Enterprise, Government and Cloud Server Providers.

"TGen is doing an amazing job every day and this project is one of the examples of how they are actively working to make life-changing results. We discussed their project and knew that Intel will be open to collaborating with us on building a proper platform for it. We are excited for having the opportunity to work with both Intel and TGen on something this relevant to the entire world," said Ian McClarty, President of phoenixNAP.

TGen has so far identified several new features in the SARS-CoV-2 genome and continues to focus on making new contributions to the cause. Its project addresses a critical need of the global biomedical community and promises to enhance further research on COVID-19. It also demonstrates the potential of using innovative technology to make a difference in the lives of millions of people.

Download full case study here: https://phoenixnap.com/company/customer-experience/tgen

About phoenixNAP

phoenixNAP is a global IT services provider with a focus on cyber security and compliance-readiness, whose progressive Infrastructure-as-a-Service solutions are delivered from strategic edge locations worldwide. Its cloud, dedicated server, hardware leasing and colocation options are built to meet the always-evolving requirements of IT businesses. Providing comprehensive disaster recovery solutions, a DDoS-protected global network, and hybrid IT deployments with software- and hardware-based security, phoenixNAP fully supports its clients' business continuity planning. Offering scalable and resilient opex solutions with expert staff to assist, phoenixNAP supports growth and innovation in businesses of any size, enabling their digital transformation.

phoenixNAP is a Premier Service Provider in the VMware Cloud Provider Program and a Platinum Veeam Cloud & Service Provider partner. phoenixNAP is also a PCI DSS Validated Service Provider and its flagship facility is SOC Type 1 and SOC Type 2 audited.

Go here to see the original:
TGen Leverages phoenixNAP's Hardware-as-a-Service Powered by Intel to Empower COVID-19 Research - PR Web

Read More..

Global Cloud Server Market Share, Competition Analysis, COVID-19 Impact Analysis & Projected Recovery, and Market Sizing & Forecast to 2026 -…

A recent market research report added to Reportspedia is an in-depth analysis of Global Cloud Server Market 2020-2026.

This report examines all the key factors influencing the growth of the global Cloud Server market, including the demand-supply scenario, pricing structure, profit margins, production and value chain. Regional assessment of the global Cloud Server market unlocks a plethora of untapped opportunities in regional and domestic marketplaces.

Top Key Players Profiled in this report are:

NEC, Google Inc., VMware, Amazon, IBM Corporation, Liquid Web, Dell Inc., Cisco Corp., Hitachi, Microsoft Corporation, Fujitsu, Oracle Corp., Hewlett-Packard, Rackspace

Get the PDF Sample Copy of this report @https://www.reportspedia.com/report/business-services/global-cloud-server-market-report-2020-by-key-players,-types,-applications,-countries,-market-size,-forecast-to-2026-(based-on-2020-covid-19-worldwide-spread)/69584#request_sample

The latest report on the Cloud Server market contains a detailed analysis of this marketplace and entails information about various industry segmentations. According to the report, the market is presumed to amass substantial revenue by the end of the forecast duration while expanding at a decent growth rate.

In addition, the research report provides a comprehensive analysis of the key segments of the Cloud Server market. An outline of each market segment such as type, application, and region are also provided in the report.

Major Product Types covered are:

Hybrid Cloud, Private Cloud, Public Cloud, Others

Major Applications covered are:

Banking, Financial Services and Insurance (BFSI); Education; Government; Healthcare and Life Sciences; Manufacturing; Media and Entertainment; Retail; Telecommunication and IT; Transportation and Logistics; Travel and Hospitality; Others

Get a 40% Discount (upcoming Christmas & New Year offer) on this Premium Report @ https://www.reportspedia.com/discount_inquiry/discount/69584

On the basis of region, the market is evaluated across:

Cloud Server Market COVID-19 Impact Analysis:

The COVID-19 outbreak was sudden, and its danger was not apparent when it first struck the Chinese city of Wuhan. Although everything in that city was shut down, the coronavirus infection spread across China like wildfire. Within a few months, it spread to neighboring countries and then to every corner of the globe. The World Health Organization has declared it a pandemic, and it has so far caused massive losses in several countries.

Do You Have Any Query Or Specific Requirement? Ask Our Industry Expert @

https://www.reportspedia.com/report/business-services/global-cloud-server-market-report-2020-by-key-players,-types,-applications,-countries,-market-size,-forecast-to-2026-(based-on-2020-covid-19-worldwide-spread)/69584#inquiry_before_buying

The study objectives of this report are:

Table of Contents

Global Cloud Server Market Research Report 2020-2026

Chapter 1 Cloud Server Market Overview

Chapter 2 Economic Impact on Industrial Sector

Chapter 3 Global Cloud Server Market Competition by Manufacturers

Chapter 4 Global Production, Revenue (Value) by Region

Chapter 5 Global Supply (Production), Consumption, Export and Import by Region

Chapter 6 Global Production, Revenue (Value) and Price Trend by Type

Chapter 7 Global Cloud Server Market Performance Analysis

Chapter 8 Production Cost Analysis

Chapter 9 Industrial Chain, Sourcing Strategy and Downstream Buyers

Chapter 10 Marketing Strategy Analysis, Distributors/Traders

Chapter 11 Cloud Server Market Effect Factors Analysis

Chapter 12 Global Cloud Server Market Forecast

To Analyze Details Of Table Of Content (TOC) of Cloud Server Market Report, Visit Here: https://www.reportspedia.com/report/business-services/global-cloud-server-market-report-2020-by-key-players,-types,-applications,-countries,-market-size,-forecast-to-2026-(based-on-2020-covid-19-worldwide-spread)/69584#table_of_contents

Visit link:
Global Cloud Server Market Share, Competition Analysis, COVID-19 Impact Analysis & Projected Recovery, and Market Sizing & Forecast to 2026 -...

Read More..

Private Cloud Server Market Report, History And Forecast 2020-2025, Breakdown Data By Manufacturers, Key Regions, Types And Application – The Monitor

This high-end research report, highlighting market developments across current and historical timeframes, details market size and dimensions while taking into account value- and volume-based estimations.

The primary aim of this research report is to optimally identify major growth-favoring elements as well as growth retardants, such as barriers and risks, that significantly dampen the otherwise optimistic growth spurt.

Access the PDF sample of the Private Cloud Server market report @ https://www.orbisresearch.com/contacts/request-sample/4068569?utm_source=Atish

Other requisite details portrayed in the report include sections on top-notch vendor assessment, with detailed emphasis on industry forerunners. Sections on trend assessment and its role in a favorable decision-making process have also been discussed at length.

Key Players Mentioned in the Report: This report focuses on the global top players covered: Amazon, Microsoft, Google, Dropbox, Seagate, Egnyte, Buffalo Technology, SpiderOak, MEGA, D-Link, ElephantDrive, Mozy Inc., POLKAST, Dell, Just Cloud, SugarSync

Make an enquiry of Private Cloud Server market report @: https://www.orbisresearch.com/contacts/enquiry-before-buying/4068569?utm_source=Atish

Segmentation Overview

The global Private Cloud Server market has been examined in ample detail to disclose vital market-specific developments across segment categories. Segment classification of the market structure has been prepared by our seasoned in-house research experts to allow readers to comprehend the versatility of the market in terms of product and service variation. Additional details on regional expanse and geography-based vendor investments are also discussed extensively, based on which the global Private Cloud Server market is splintered into type, application and end-user.

Browse the complete Private Cloud Server market report @ https://www.orbisresearch.com/reports/index/global-private-cloud-server-market-report-history-and-forecast-2014-2025-breakdown-data-by-companies-key-regions-types-and-application?utm_source=Atish

Segment by Type, the product can be split into: User Host, Provider Host

By Application, the market can be split into: Individual, Small Business, Large Organizations

Geographical Expanse Analysis: Global Private Cloud Server Market

This research report also highlights details on region-wise demarcation, encapsulating details on massive growth opportunities and favorable growth-conducive elements that harness sales optimization and revenue expansion. The report is positioned to encourage appropriate vendor initiatives aligned with dynamic transitions and customer preferences.

About Us: Orbis Research (orbisresearch.com) is a single-point aid for all your market research requirements. We have a vast database of reports from the leading publishers and authors across the globe. We specialize in delivering customized reports as per the requirements of our clients. We have complete information about our publishers and hence are sure about the accuracy of the industries and verticals of their specialization. This helps our clients to map their needs, and we produce the perfect required market research study for our clients.

Contact Us: Hector Costello, Senior Manager, Client Engagements, 4144N Central Expressway, Suite 600, Dallas, Texas 75204, U.S.A. Phone No.: USA: +1 (972)-362-8199 | IND: +91 895 659 5155

The rest is here:
Private Cloud Server Market Report, History And Forecast 2020-2025, Breakdown Data By Manufacturers, Key Regions, Types And Application - The Monitor

Read More..

Bare Metal Cloud Market Poised to Expand at a Robust Pace Over 2025 – Farming Sector

Global Bare Metal Cloud Market: Snapshot

As a public cloud service that offers customers the facility to rent hardware resources from a remotely situated service provider, bare metal cloud comes with the primary benefit of flexibility for businesses to meet their specific and diverse requirements. With bare metal cloud services, small and medium enterprises can also troubleshoot their applications without interfering with other nearby virtual machines (VMs). Since bare metal clouds are made of dedicated servers, complications from neighbors are avoided, and they work very well for high-traffic workloads that are intolerant of latency as well as applications pertaining to big data.

Get Free Sample of Research Report @https://www.tmrresearch.com/sample/sample?flag=B&rep_id=2777

Some of the key factors augmenting the demand in the global bare metal cloud market are: the critical need for reliable load balancing in latency-sensitive and data-intensive operations, decommissioning of workloads after termination of service level agreements (SLAs), the absence of noisy neighbors and hypervisor taxes, and the advent of fabric virtualization. On the other hand, restraints such as stringent cloud regulations, an expensive model, hindrances pertaining to restoring, and lightweight hypervisors are challenging the bare metal cloud market from attaining greater potential. That being said, growing usage of big data and DevOps applications, micro-services and batch processing applications, and growing interest in the Open Compute Project (OCP) are anticipated to open new opportunities in this market in the near future. Some of the industry verticals that are generating the demand for bare metal cloud are manufacturing, retail, healthcare, IT and telecommunication, government, and BFSI.

Some of the key audiences of this research report are providers of bare metal cloud, application developers, managed service providers, third-party system integrators, bare metal hardware vendors, regulatory agencies, and government. The report provides an analytical and figurative assessment of the market's potential during the forecast period of 2017 to 2025.

Global Bare Metal Cloud Market: Overview

Bare metal cloud is an alternative to virtual cloud services and works with the help of a dedicated server. The dedicated server is needed in order to balance and scale this model, and the hardware is allocated without any additional storage; even so, a bare metal cloud server can support huge workloads. The main aim of bare metal cloud is to minimize the overhead incurred by implementing virtualization technology. Despite eliminating the virtualization layer, bare metal cloud services still offer efficiency, scalability, and flexibility. Another benefit of bare metal cloud servers is that they do not require a hypervisor host and can be deployed with a cloud-like service model. Bare metal cloud combines features of traditional hosting with infrastructure as a service (IaaS) in order to support high-performance workloads. For all these reasons, this market is expected to witness high growth in the years to come.
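To make this cloud-like service model concrete, below is a minimal sketch of provisioning a bare metal server through a provider's self-service API, the same way one would request a virtual machine. The endpoint, token, and payload fields are hypothetical, invented purely for illustration (no particular vendor's API is implied); only Python and the widely used requests library are assumed.

    # Hypothetical sketch: requesting dedicated hardware through a
    # self-service API, just as one would request a VM.
    import requests

    API = "https://api.example-cloud.com/v1"          # hypothetical provider
    HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

    # The request names a physical server configuration rather than a
    # VM flavor; the OS image is installed directly onto the machine,
    # with no hypervisor layer in between.
    payload = {
        "type": "bare_metal",
        "cpu": "dual-xeon-16c",
        "memory_gb": 256,
        "image": "ubuntu-20.04",
        "region": "us-east",
    }

    resp = requests.post(f"{API}/servers", json=payload, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    print("Provisioned dedicated server:", resp.json().get("id"))

The design point worth noting is that tenancy is per machine: because no other customer's VM shares the hardware, the noisy-neighbor and hypervisor-tax issues described above simply do not arise.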

Global Bare Metal Cloud Market: Key Trends

There is high demand for bare metal cloud from the telecom and IT sectors on account of big data, which requires effective storage. The advertising sector will also make extensive use of bare metal cloud, a trend anticipated to continue throughout the forecast period. Today, enterprises are switching from conventional hosting services to bare metal cloud on account of the escalating demand for secure storage as well as advancements in the cloud industry. Bare metal cloud solutions offer numerous benefits, such as data security, effective and faster service delivery, efficient data storage, improved performance, streamlined data center operations, and standardized hardware platforms.

Global Bare Metal Cloud Market: Market Potential

The global bare metal cloud market has displayed promising potential, as it offers advantages such as easy maintenance of records, enhanced security, and the ability to monitor activities in residential and commercial areas. Bare metal cloud has also found application in national security: because it can support large-scale monitoring workloads, it helps countries counter terrorism and external threats. This is anticipated to create potential growth opportunities within the global bare metal cloud market.

Check Exclusive Discount on this report @ https://www.tmrresearch.com/sample/sample?flag=D&rep_id=2777

Global Bare Metal Cloud Market: Regional Outlook

On the basis of geography, the global bare metal cloud market is segmented into Asia Pacific, North America, Latin America, Europe, and the Middle East and Africa. Of these, North America has been leading the market on account of its focus on research and development in cloud technology. The European bare metal cloud market is also estimated to expand at a fast pace, with key contributions from Germany, Spain, and the UK. However, it is Asia Pacific that is anticipated to expand at the fastest pace during the forecast period, on account of the increasing number of new market players. The DigiCloud initiative undertaken by the government of Singapore to offer IaaS and SaaS alongside bare metal servers is also an important factor driving the growth of the Asia Pacific bare metal cloud market.

Global Bare Metal Cloud Market: Competitive Landscape

Key players in the market are concentrating on achieving organic growth and are implementing various strategies to maintain their positions. The report profiles leading players operating in the market: Rackspace Hosting, Inc. (U.S.), CenturyLink, Inc. (U.S.), IBM Corporation (U.S.), Media Temple (U.S.), and Internap Corporation (U.S.).

About TMR Research

TMR Research is a premier provider of customized market research and consulting services to business entities keen on succeeding in today's supercharged economic climate. Armed with an experienced, dedicated, and dynamic team of analysts, we are redefining the way our clients conduct business by providing them with authoritative and trusted research studies in tune with the latest methodologies and market trends.

Read more here:
Bare Metal Cloud Market Poised to Expand at a Robust Pace Over 2025 - Farming Sector

Read More..

Are We Heading Towards EU Legislation Banning End-to-End Encryption? – Lexology

Introduction

The possibility of imposing EU obligations on messaging services that use end-to-end encryption, requiring them to cooperate with law enforcement agencies, has dominated justice and home affairs discussions for some time now, as a way to better prepare for planned terrorist attacks throughout Europe. The Council of Home Affairs Ministers endorsed a Council Resolution on Encryption on 14 December 2020, paving the way for a regulatory framework to that effect.

Since 2015, a series of campaigns run alternately by Europol, the Federal Bureau of Investigation, and the services of the "Five Eyes" alliance built towards the development of this Council Resolution. In October, the Interior Ministers of the alliance once again called on internet companies to equip their networks with backdoors so that law enforcement agencies and competent authorities could access end-to-end encrypted apps to police online criminality.

Overview

The Council Resolution (Resolution) on Encryption has been in the works for some months. The German Presidency has concluded the work on the Resolution, which could in the long term lead to a ban on end-to-end encryption for messenger services, allowing investigating authorities direct access to end-to-end encrypted communications from such providers.

Even though this proposal was originally initiated by the UK, it was picked up by Germany in 2019, and France has pushed it throughout the year with renewed impetus following the terrorist attacks in France and Austria.

What Is at Stake?

Necessary safeguards will, however, need to be established to ensure that EU citizens' privacy is respected and that cybersecurity systems are not compromised. The Resolution underlines that "Protecting the privacy and security of communications through encryption and at the same time upholding the possibility for competent authorities in the area of security and criminal justice to lawfully access relevant data for legitimate, clearly defined purposes in fighting serious and/or organized crimes and terrorism, including in the digital world, and upholding the rule of law, are extremely important."

Most importantly, it is noted that "Any actions taken have to balance these interests carefully against the principles of necessity, proportionality and subsidiarity." With the adoption of the Resolution, there is a clear message on the need to develop an EU regulatory framework and to further assess how such a framework would be established.

Importantly, the regulatory framework should encompass technical and operational solutions, to be developed with service providers and relevant stakeholders, that enable authorities to access encrypted data.
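For technical context, the sketch below illustrates why such access is difficult to retrofit onto an end-to-end encrypted service. It assumes the open-source PyNaCl library (pip install pynacl) and illustrates the general principle, not any particular messenger's protocol: the provider only ever relays ciphertext, and the keys needed to decrypt it exist solely on the endpoints.

    # Minimal end-to-end encryption sketch using PyNaCl.
    from nacl.public import PrivateKey, Box

    # Each endpoint generates its own key pair; private keys never
    # leave the users' devices.
    alice_sk = PrivateKey.generate()
    bob_sk = PrivateKey.generate()

    # Alice encrypts directly to Bob's public key.
    ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon")

    # The messaging provider only ever handles `ciphertext`. Without
    # alice_sk or bob_sk there is no key material on the server side,
    # so "enabling access" means changing the protocol itself.
    plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
    assert plaintext == b"meet at noon"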

At this point, the legislative route by which these measures would be given effect is uncertain. However, considering that the Council has not directly asked the European Commission to prepare a legislative proposal, a future legislative measure would most likely be introduced within the national security remit, in the form of a Council Decision, which would require unanimity to be adopted. France, Germany, and Austria appear set to lead the efforts to create such a regulatory framework following the adoption of the Resolution.

Similarly, the adoption of the Resolution could also have an impact vis-à-vis the implementation of the European Electronic Communications Code (EECC, Directive EU/2018/1972), due by 21 December 2020. Member states could leverage the adoption of the Resolution to adopt their own measures at the national level using the provision of EECC Article 3(c) that the "Directive is without prejudice to ... actions taken by member states for public order and public security purposes and for defence."

Conclusions

Whereas the adoption of the Resolution aims to put in place a framework that would strengthen investigative powers against terrorism, services using end-to-end encryption could face significant risk. The Resolution calls on the tech industry to devise mechanisms under which encrypted data can be accessed by competent authorities while complying with "the principles of legality, necessity, proportionality and subsidiarity". Notwithstanding the principle of the Resolution, creating backdoors in communication services, analogous to the lawful intercept capability required of telecommunications operators, could weaken IT security and could invite exploitation by cybercriminals and foreign intelligence services.
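To illustrate that risk, the sketch below shows the simplest form of such a mechanism, key escrow, again assuming PyNaCl. The design is hypothetical and is not taken from the Resolution; the point is that every message key is additionally wrapped for a single escrow key, so compromising that one key exposes all traffic without touching either endpoint.

    # Hypothetical key-escrow sketch using PyNaCl (pip install pynacl).
    import nacl.utils
    from nacl.public import PrivateKey, SealedBox
    from nacl.secret import SecretBox

    recipient = PrivateKey.generate()
    escrow_authority = PrivateKey.generate()   # key held for lawful access

    # Encrypt the message with a fresh symmetric key.
    message_key = nacl.utils.random(SecretBox.KEY_SIZE)
    ciphertext = SecretBox(message_key).encrypt(b"meet at noon")

    # The message key is wrapped twice: once for the recipient, and
    # once for the escrow authority. The second wrap is the backdoor.
    for_recipient = SealedBox(recipient.public_key).encrypt(message_key)
    for_escrow = SealedBox(escrow_authority.public_key).encrypt(message_key)

    # Whoever obtains the escrow private key (a court, but equally a
    # criminal or foreign service that steals it) can read everything.
    recovered = SealedBox(escrow_authority).decrypt(for_escrow)
    assert SecretBox(recovered).decrypt(ciphertext) == b"meet at noon"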

The broad range of consequences for the tech sector stemming from the Resolution should be closely monitored and assessed. We stand ready to advise clients on the most effective strategic business decisions and the legal considerations in this context.

Link:
Are We Heading Towards EU Legislation Banning End-to-End Encryption? - Lexology

Read More..