Intelligence nominee warns generative AI poses threat to 2024 … – POLITICO

Past efforts: Both Cyber Command and the NSA have played key roles in monitoring for and disrupting threats to U.S. elections in recent years. This includes Cyber Command reportedly carrying out an operation on the day of the 2018 U.S. midterm elections to block internet access for the key Russian troll farm involved in spreading disinformation about the vote. Russian hackers were also linked to efforts in 2016 to target voting infrastructure and spread disinformation designed to sway the outcome of the presidential election.

The advent of AI technologies, such as the surging use of OpenAI's ChatGPT, poses new challenges. Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, the main agency that protects U.S. election infrastructure, warned in a speech in May that AI poses "epoch-defining" risks, including increasing disinformation online.

AI in the spotlight: The confirmation hearing Thursday was heavy on AI-related questions from senators on both sides of the aisle eager to tackle the problem. When asked about his concerns with adversarial nations using AI, Haugh pointed to China and how its use of AI to monitor and surveil its citizens could be a worrying portent of trends worldwide.

"It's an area from a threat perspective we should continue to inform and understand what that means to any nation they would be considering partnering with, and the implications of that technology on that society," Haugh said of Chinese developments.

Haugh also noted that the Department of Defense is working on an AI roadmap to help define how to use AI technologies, something critical as China plows ahead with its efforts.

"The other area that I think the nation expects from us is to understand how our adversaries use this technology, and be able to inform what that looks like in terms of threat both to our national security and to our industry," Haugh said.

More here:
Intelligence nominee warns generative AI poses threat to 2024 ... - POLITICO

UKHSA Advisory Board: Audit and Risk Committee meeting minutes – GOV.UK

Date: Wednesday 19 July 2023

Sponsor: Cindy Rampersaud

The Advisory Board is asked to note the minutes of the 28 March 2023 meeting of the UK Health Security Agency (UKHSA) Audit and Risk Committee (ARC). The minutes were agreed on 6 June 2023.

Present at the meeting were:

In attendance were:

23/025 The Chair welcomed all attendees to the meeting and introduced Cindy Rampersaud who had been appointed as the substantive Chair of ARC and would be taking up her role from April 2023.

23/026 The minutes from the last meeting on 19 January 2023 (enclosure ARC/23/006) were agreed.

23/027 The action list (enclosure ARC/23/007) was noted.

23/028 The Director General, Finance, Commercial and Corporate Services provided an update on the Finance and Control Improvement Programme (enclosure ARC/23/008), which had been set up to address the concerns raised in UKHSA's 2021 to 2022 accounts. The programme was making progress and bringing positive momentum, though the path to a clean audit opinion was likely to take until 2024 to 2025.

23/029 Discussion queried whether there was sufficient resourcing for each stage of the programme's action plan. One particular risk was the loss of continuity from losing contractors as a result of DHSC's controls on the use of contingent labour. Senior officials were supporting the case to ministers on the need for specific contingent labour resource in this area.

23/030 The Audit and Risk Committee endorsed the action plan within the Finance and Control Improvement Programme, subject to sufficient resourcing of staff.

23/031 The Audit and Risk Committee agreed that UKHSA should accept the NAO's proposed audit approach for 2022 to 2023. The primary focus would be assurance over closing balances.

23/032 Colleagues from the National Audit Office provided a verbal update on scoping for the financial year 2022 to 2023 audit. A planning meeting had been held, and the resulting plan would be shared with management in the coming weeks.

23/033 NAO colleagues were progressing a targeted review of UKHSA's new finance system. The work was primarily designed to inform the NAO's audit approach, but the findings were being shared with UKHSA. Areas of focus included access control, change control and opening balances. It was noted that UKHSA had commissioned a fuller audit of the system from the Government Internal Audit Agency, which would be scheduled early in the new financial year.

23/034 The Audit and Risk Committee thanked colleagues for their work and anticipated the fuller written report at the next meeting.

23/035 The Director of Corporate Services presented the latest version of the Strategic Risk Register (enclosure ARC/23/009). The Audit and Risk Committee noted the proposed additional risks and de-escalation of risks as agreed by the Executive Committee. A deeper discussion would be scheduled on specific risks, including operational risk driven by constraints on contingent labour. A proposed schedule would be developed by the secretariat, in consultation with the ARC Chair.

([Name redacted])

23/036 Comments were noted on the balance of risks sitting with inherited issues versus the future state of the organisation. There was consensus to maintain the existing risk balance until clarity was given on the financial stability of the organisation. It was noted that capital spending should be monitored with respect to infrastructure at Porton Down and approval of the Harlow business case.

23/037 ARC noted the present legal risks, with advised actions grounded in expert evidence. Discussion followed on risks around Porton Biopharma Limited, with an update expected by the next Committee meeting.

(Donald Shepherd)

23/038 [Title redacted] presented an update on developments in the Cyber Security team and measures to baseline UKHSA's cyber risk (enclosure ARC/23/010).

23/039 to 23/041 ARC noted the risk audit against Centre for Internet Security (CIS) 18 Critical Security Controls and the current risk posture of UKHSA. Information withheld in accordance with the Freedom of Information Act 2000.

23/042 The Strategic Risk Register would be updated to reflect the cyber risk profile, and a deeper discussion would be added to the Committee forward look.

23/043 ARC noted the health, safety and environment (HSE) arrangements set out in the paper (enclosure ARC/23/011) and planned HSE inspections. The Committee was encouraged by the positive culture of reporting incidents within high-risk laboratory settings. Further work would focus on extending the reporting culture to office-based environments. Additionally, an analysis of mental health risks would enable targeting of wellbeing resources within the organisation.

23/044 Discussion followed on health and safety risks associated with overseas supply chains and activity. Staff in global settings followed advice of the Foreign, Commonwealth and Development Office. Health and safety concerns with commercial partners were mitigated when establishing contracts, including through the right of audit.

23/045 ARC noted the summary report and the progress made in minimising the number of outstanding actions, which had reduced significantly (enclosure ARC/23/012). The team would continue working with colleagues to agree action plans and provide support where progress was not being made or was delayed.

23/046 The Head of Internal Audit provided an update on audits from 2022 to 2023 (enclosure ARC/23/013). It was noted that the Government Internal Audit Agency (GIAA) were working to confirm actions submitted as complete by responsible owners.

23/047 ARC agreed the audit plan for 2023 to 2024 (enclosure ARC/23/014). There was a challenge of resourcing, but GIAA remained confident in completing the plan. The Committee welcomed the future focus for the upcoming audit as UKHSA moved away from the establishment phase of the organisation.

23/048 [Title redacted] noted that surveys had been sent to meeting attendees with a substantive report expected at the June meeting. This would be reported to the Advisory Board and inform the governance statement for the annual report.

([Name redacted])

23/049 ARC noted the forward look (enclosure ARC/23/015) that would be updated following points raised during the meeting, and in consultation with the incoming ARC Chair.

23/050 It was noted that a Serious Untoward Incident had been declared, with an investigation report expected in coming months.

23/051 There being no further business, the meeting closed at 12:23pm.

[Name redacted], [Title redacted], March 2023

View original post here:
UKHSA Advisory Board: Audit and Risk Committee meeting minutes - GOV.UK

Enhancing workplace security: A comprehensive approach to Mac … – BetaNews

Workplace modernization has emerged as an important trend impacting organizations of all sizes, in all industries, and across all geographies. The move by so many businesses to embrace modern end-user technologies is anticipated to help improve recruitment, enhance employee productivity, and have a measurable impact on talent retention.

One of the main forces behind workplace modernization is a belief that employees will be happier and ultimately more productive if they're able to choose the devices they use for work. Coupled with both technical and organizational support for anywhere-work styles, employees are finding they have a much stronger voice in the selection of IT tooling and the accompanying workflows.

For many industries, workplace innovation started with the adoption of mobile technologies. Apple has emerged as the leading mobility solution used at work, with significant gains over its competition in both smartphones and tablets. Additionally, the Mac is growing in popularity with employer-sponsored choice programs.

Unfortunately, in an effort to move quickly, many organizations put these modern devices into production use without first ensuring they have the appropriate protections in place to keep organizational assets safe. For many, this was due to a lack of awareness of the threat landscape that put their users and devices at risk.

Endpoint security can be a complex topic, but as it relates to devices running modern software like macOS and iOS, organizations should start by practicing good security hygiene and ensure that all end-user devices align with strong and well-understood baseline settings.

In an era where technology and digital communication are paramount, complying with security standards is essential for preserving organizational integrity and managing it at scale. Businesses must define their own data security requirements, while also ensuring the organization can meet any regulatory or legal obligations. These requirements form an integral aspect of any organization's compliance management strategy.

So, how can organizations effectively align with these important security frameworks?

Several widely recognized compliance frameworks are available to assist organizations in following best practices and achieving essential security standards. Failure to establish and maintain secure operating standards could potentially lead to data breaches, leakage, and monetary penalties in the form of fines or settlements.

Beyond this, there's also the risk of losing customers, accounts, or even job opportunities. Establishing and maintaining security standards involves a significant effort, but doing so helps ensure organizational readiness to fend off a detrimental attack that could ultimately lead to a company's tarnished reputation.

The Center for Internet Security (CIS) framework provides guidelines intended to support organizations in fortifying their networks and systems. Its focus lies predominantly in offering actionable, pragmatic steps organizations can employ to alleviate the impact of common cyber threats.

Similarly, the National Institute of Standards and Technology (NIST) provides a comprehensive roadmap for managing cybersecurity risks. This guidance is based on five core functions of identification, protection, detection, response, and recovery. As a federal entity that sets the standard for US government agencies, NIST often highlights the importance of risk assessment and management, with a view toward continuous monitoring and improvement.

The International Organisation for Standardisation (ISO) also provides an important standard, ISO 27001, specifically for Information Security Management Systems (ISMS). This standard covers an extensive array of security controls, including but not limited to physical security, access control, and incident management.

Additionally, certain regulated industries must also adhere to additional specific security benchmarks. For instance, healthcare institutes must comply with Health Insurance Portability and Accountability Act (HIPAA) requirements. Similarly, educational institutions must implement the Family Educational Rights and Privacy Act (FERPA) to protect the privacy of student education records.

However, these standards are guidelines written for generic systems and not for any particular device or platform. They are best practices that are recommended and not mandatory. Additionally, for the standards to be actionable, they need to be translated to a platform and environment, and ultimately put into practice. A business needs to spend time reviewing the guidance and determining what works best for them. It is imperative to understand that the guidelines are a starting point, not the destination.

The macOS Security Compliance Project (mSCP) is an initiative dedicated to ensuring that Apple's desktop operating system is secure and compliant with all the different security standards and regulations.

This collaborative, open-source endeavor is a macOS administrator's quick reference guide to aligning well-understood standards like the CIS Benchmarks, specifically for their macOS fleet. It's the joint project of federal operational IT Security staff from esteemed institutions like the National Aeronautics and Space Administration (NASA), the Defense Information Systems Agency (DISA), NIST, and the Los Alamos National Laboratory (LANL).

Organizations can reduce the likelihood of cyber incidents and fulfill their security obligations by implementing the right controls, configuring settings, and monitoring systems. Doing so will continue to help companies protect themselves in a growing cyberspace.

Nonetheless, the evolution of the modern workplace toward an increasingly connected, mobile workforce underscores the significance of data and device security.

Additionally, with the growing prevalence of Apple technology within organizations, it is important to achieve complete compliance alongside quicker onboarding, application-specific policy enforcement, and a simplified, streamlined user experience that is consistent for all users, including employees, contractors, and third parties.

The first step to effective cybersecurity in an organization involves choosing the standard or standards to align with. These could be industry-specific standards like HIPAA for healthcare or generalized standards like ISO 27001. This choice will form the cornerstone of your cybersecurity strategy, informing all the decisions that follow.

Once a standard has been selected, the business can start the implementation process. For organizations utilizing macOS, a tool like the mSCP (macOS Security Compliance Project) can prove invaluable. It's also crucial to not overlook mobile devices during this process. Ensure that similar compliance standards are applied across the board, thereby safeguarding all of the organization's modern devices.

To scale this process, consider embracing tooling such as Mobile Device Management (MDM). This will facilitate the configuration of device fleets beyond a single device. The goal is to automate the setup process, eliminating the need for administrators to physically interact with every new device, and reduce the number of errors that commonly accompany manual efforts. This approach not only speeds up deployment but also ensures that IT and security do not become bottlenecks to productivity.

Maintaining these standards over time is as crucial as their initial implementation. Thus, the next step involves monitoring and auditing. Regular audits of the devices will help ensure the maintained adherence to the chosen standards. A combination of MDM and endpoint security tools can assist in establishing regular audits and automated remediation steps, to account for when devices fall out of compliance.
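
As a rough, hedged illustration of the kind of automated audit check described above, the sketch below spot-checks a few well-known macOS hardening settings from Python. The specific settings and expected values are assumptions chosen for illustration rather than a verbatim CIS or mSCP baseline, and a real deployment would run such checks through an MDM or endpoint agent rather than ad hoc.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: a minimal compliance spot-check for a few
well-known macOS hardening settings. The checks and expected values are
assumptions for illustration; a real program would derive them from the
baseline the organization selected (for example, CIS or mSCP output)."""

import subprocess

def run(cmd):
    """Run a shell command and return its stdout (empty string on failure)."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return ""

checks = {
    # Application firewall globalstate: 1 = on, 2 = block all incoming
    "Application firewall enabled": lambda: run(
        ["defaults", "read", "/Library/Preferences/com.apple.alf", "globalstate"]) in ("1", "2"),
    # Gatekeeper assessments
    "Gatekeeper enabled": lambda: "assessments enabled" in run(["spctl", "--status"]),
    # System Integrity Protection
    "SIP enabled": lambda: "enabled" in run(["csrutil", "status"]).lower(),
}

if __name__ == "__main__":
    for name, check in checks.items():
        status = "PASS" if check() else "FAIL"
        print(f"[{status}] {name}")
```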

Adding endpoint protection capabilities to identify and stop active threats is also highly recommended. These tools go beyond mere device configuration to actively protect devices, providing a further layer of defense.

To prevent incoming risk, focus on building multiple layers of defense. These should be designed to protect devices no matter where they are used, all while considering the end-user experience. The chosen tools should not only integrate well with each other but also align with the end user experience the workers initially chose.

Lastly, adopting a holistic mindset is key. Don't just focus on device security alone. Remember that these devices are used by employees and are connected to sensitive business applications. A zero-trust strategy can be beneficial here, limiting access to business data to only authorized users on enrolled, threat-free devices. By doing this, organizations are not just modernizing the workplace but also their entire security solution stack. In this way, security becomes an integral part of an organization, rather than an afterthought.

Embracing workplace modernization means recognizing security as pivotal. From choosing applicable standards, implementing robust tools like MDM and Endpoint security, to adopting a zero-trust strategy, organizations can navigate this digitizing world. This integration of security and user-centricity enhances operational efficiency and trust, defining the successful organizations of the future.

Image credit: Wavebreakmedia / depositphotos.com

Michael Covington is VP of Strategy, Jamf, the standard in managing and securing Apple at work.

The rest is here:
Enhancing workplace security: A comprehensive approach to Mac ... - BetaNews

New peer-to-peer worm infects Redis instances through Lua vulnerability – CSO Online

Researchers have discovered a new worm that infects servers running the Redis in-memory storage system by exploiting a known vulnerability in its Lua subcomponent. Dubbed P2PInfect, the worm is written in Rust and uses a custom peer-to-peer (P2P) communications protocol and network.

"Unit 42 believes this P2PInfect campaign is the first stage of a potentially more capable attack that leverages this robust P2P command and control (C2) network," researchers from Palo Alto Networks' Unit 42 research team said in a new report. "There are instances of the word 'miner' within the malicious toolkit of P2PInfect. However, researchers did not find any definitive evidence that cryptomining operations ever occurred."

Lua is a cross-platform programming language and scripting engine that's commonly embedded as a sandboxed library in applications to enable scripting support. This is also the case for Redis, which allows its users to upload and execute Lua scripts on the server for extended functionality.

While Redis instances have been infected by malicious actors and botnets before, this was mainly achieved by exploiting vulnerabilities or misconfigurations in Redis itself. The P2PInfect worm, by contrast, also exploits a critical Lua sandbox escape vulnerability tracked as CVE-2022-0543 that specifically affects the Redis packages on Debian Linux.

According to the Unit 42 researchers, more than 307,000 Redis instances are currently accessible from the internet, but only a small subset of around 900 are vulnerable to this flaw. However, the worm will attempt to probe and infect all public instances.

"Exploiting CVE-2022-0543 makes P2PInfect effective in cloud container environments," the researchers said. "Containers have a reduced set of functionalities. For example, they do not have cron services. Many of the most active worms exploiting Redis use a technique to achieve remote code execution (RCE) using cron services. This technique does not work in containers. P2PInfect incorporates the exploit for CVE-2022-0543 with the intention of covering as many vulnerable scenarios as possible, including cloud container environments."

Once the main P2PInfect dropper is deployed, it connects to the P2P network and downloads information about the custom communication protocol, which works over TLS 1.3, as well as a list of active nodes in the network. It will also update the network with its own information and will choose a random communications port.

The fact that the worm uses a peer-to-peer command-and-control protocol and random port numbers for each node makes it resilient against takedown attempts, as there's no central failure point. Its communications are also harder to block through firewalls because there's not one specific port that can be blocked to stop its traffic.

The worm is written in Rust, a modern programming language that is cross-platform and is known for its memory and type safety. This has made it a popular programming choice for major companies. The P2PInfect dropper was seen infecting Redis instances on both Linux and Windows and it deploys additional payloads written in Rust. Some of these are named linux, miner, winminer, and windows.

On Windows systems, the Palo Alto researchers also saw another component called Monitor being deployed that enables persistence and makes sure the worm is running. After deploying its additional components, the worm immediately starts scanning for vulnerable Redis instances, but also scans random ranges of IP addresses for port 22, which is normally associated with SSH. It's not clear why this port is scanned, because the researchers saw no evidence that the bot is trying to exploit or connect to other systems over SSH, at least not yet.

"We recommend that organizations monitor all Redis applications, both on-premises and within cloud environments, to ensure they do not contain random filenames within the /tmp directory," the researchers said. Additionally, DevOps personnel should continually monitor their Redis instances to ensure they maintain legitimate operations and maintain network access. All Redis instances should also be updated to their latest versions or anything newer than redis/5:6.0.16-1+deb11u2, redis/5:5.0.14-1+deb10u2, redis/5:6.0.16-2 and redis/5:7.0~rc2-2.
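
As a rough, hedged illustration of that guidance, the sketch below uses the open-source redis-py client to check whether an instance accepts unauthenticated connections and to read the version string it reports. The host and port are placeholder assumptions, and the Debian package versions cited by Unit 42 do not map one-to-one onto the server's redis_version field, so the output is a prompt for manual verification rather than a definitive verdict.

```python
"""Illustrative sketch only: check a Redis instance's reported version and
whether it accepts unauthenticated connections. Host and port are assumed
placeholders; compare the printed version against your distribution's
patched packages rather than treating this as a pass/fail test."""

import redis  # pip install redis

HOST, PORT = "127.0.0.1", 6379  # assumed target; replace with your instance

def main():
    client = redis.Redis(host=HOST, port=PORT, socket_timeout=3)
    try:
        info = client.info()  # raises AuthenticationError if auth is enforced
    except redis.exceptions.AuthenticationError:
        print("Instance requires authentication (good).")
        return
    except redis.exceptions.ConnectionError as exc:
        print(f"Could not connect: {exc}")
        return
    print("WARNING: unauthenticated access accepted.")
    print("redis_version:", info.get("redis_version"))
    print("os:", info.get("os"))

if __name__ == "__main__":
    main()
```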

P2PInfect is the latest addition in a string of self-propagating botnets that target cloud and container technologies. Researchers from Aqua Security recently documented another worm dubbed Silentbob that targets Kubernetes clusters, Docker APIs, Weave Scope instances, JupyterLab and Jupyter Notebook deployments, Redis servers, and Hadoop clusters.

Read the rest here:
New peer-to-peer worm infects Redis instances through Lua vulnerability - CSO Online

Senate bill crafted with DEA targets end-to-end encryption, requires online companies to report drug activity – The Record from Recorded Future News

A bill requiring social media companies, encrypted communications providers and other online services to report drug activity on their platforms to the U.S. Drug Enforcement Administration (DEA) advanced to the Senate floor Thursday, alarming privacy advocates who say the legislation turns the companies into de facto drug enforcement agents and exposes many of them to liability for providing end-to-end encryption.

The bipartisan Cooper Davis Act, named for a Kansas teenager who died after unknowingly taking a fentanyl-laced pill he bought on Snapchat, requires social media companies and other web communication providers to give the DEA users' names and other information when the companies have actual knowledge that illicit drugs are being distributed on their platforms.

Many privacy advocates caution that, if passed in its current form, the bill could be a death blow to end-to-end encryption services because it includes particularly controversial language holding companies accountable for conduct they don't report if they "deliberately blind" themselves to the violations.

Officials from the DEA have spent several months honing the bill with key senators, Judiciary Committee Chairman Dick Durbin (D-IL) said Thursday.

Providers of encrypted services would face a difficult choice should the bill pass, said Greg Nojeim, Senior Counsel & Director of Security and Surveillance Project at the Center for Democracy and Technology.

"They could maintain end-to-end encryption and risk liability that they had willfully blinded themselves to illegal content on their service and face the music later," Nojeim said. "Or they could opt to remove end-to-end encryption and subject all of their users who used to be protected by one of the best cybersecurity tools available to new threats and new privacy violations."

The bill's "deliberately blind" provision also worries Cody Venzke, the senior policy counsel for surveillance, privacy, and technology at the American Civil Liberties Union, who said it would target encryption.

"The entire purpose of privacy-protecting technology like end-to-end encryption is to protect us from platforms' surveillance," Venzke added.

Meredith Whittaker, the president of the foundation behind the popular encrypted Signal app, attacked the bill's "willfully blind" language in a tweet sent Friday, saying, "Failing to put cameras in everyone's bedrooms? Not tracking all residents with location? Using E2E? All 'willful blindness' by this logic."

Law enforcement has long complained about how end-to-end encryption creates what the Department of Justice has called a "lawless space" that criminals, terrorists, and other bad actors can exploit for their nefarious ends.

Two Mexican drug cartels trafficking most of the fentanyl and methamphetamine into America use social media applications to coordinate logistics and reach out to victims, the DEA said in a May press release. The agency named Facebook, Instagram, TikTok, and Snapchat along with encrypted platforms WhatsApp, Telegram, Signal, Wire, and Wickr as examples.

More than 1,100 cases in a recent DEA operation targeting Mexican drug cartels involved social media applications and encrypted communications platforms through which fentanyl and meth were trafficked, the agency said.

"These social media platforms understand there is no legal application for the sale of many of these substances and yet they continue with impunity," Durbin said at a Thursday Senate markup hearing for the bill, noting a similar reporting mechanism already in place which requires the companies to report child sexual abuse material.

Privacy advocates counter that determining what constitutes child sexual abuse imagery on platforms is much easier than patrolling speech, particularly in various languages and with street slang, to sniff out drug sales.

Senator Alex Padilla (D-CA) told the Judiciary Committee that, unlike online sexual imagery of children, language is harder to police on a mass scale since "context is pretty important."

"Do we really want to effectively deputize untrained tech companies led by people like Elon Musk to serve as law enforcement?" Padilla said. "This bill will empower them to disclose people's private data to federal law enforcement without a warrant or oversight based only on, quote, a reasonable belief that someone is committing an offense."

Padilla also criticized the bill for potentially criminalizing companies that offer encrypted services, citing how beneficial encryption has been for people in marginalized communities and women seeking reproductive care in the post-Dobbs world.

A Thursday press release from sponsor Sen. Jeanne Shaheen (D-NH) highlighted statistics from the DEA supporting the need for the legislation.

Within a five-month period, Shaheen said, the DEA conducted 390 drug-poisoning investigations and found that 129 had direct ties to social media.

"Unfortunately, federal agencies have not had access to the necessary data to intervene, which has allowed the crisis to worsen," the press release said.

It added that the law will establish "a comprehensive and standardized reporting regime" that would enable the DEA to better identify and dismantle international criminal networks and save American lives.

But Nojeim said there is a bigger question in play and it is one that society must confront sooner rather than later as all manner of social interactions, and problems, play out online.

"We live our lives online nowadays," he said. "One question that we have to answer as a society is whether we want these communication service providers, with whom we can't communicate without, to be close to agents of the government."

They already are, according to Carl Szabo, vice president and general counsel of the web communications provider membership association NetChoice. Szabo said social media sites voluntarily work with law enforcement to stop the trafficking of drugs on their sites.

He said that if the bill is enacted, all reporting by social media sites would be subjected to Fourth Amendment processes, and "it will actually become harder for law enforcement to identify these threats."

Suzanne Smalley is a reporter covering privacy, disinformation and cybersecurity policy for The Record. She was previously a cybersecurity reporter at CyberScoop and Reuters. Earlier in her career Suzanne covered the Boston Police Department for the Boston Globe and two presidential campaign cycles for Newsweek. She lives in Washington with her husband and three children.

See the original post here:
Senate bill crafted with DEA targets end-to-end encryption, requires online companies to report drug activity - The Record from Recorded Future News

This is why personal encryption is vital to the future of business – Computerworld

Data encryption is threatened by government forces who haven't yet recognized that without personal security, you cannot have enterprise security. Because attackers will exploit any available weakness to undermine protection, and if your people or your customers aren't secure, neither is your business.

Attackers will always go where the money is. They will spend lots of it to mount attacks. They will delve deeper, and if they're spending money, they also have the necessary resources to investigate absolutely anyone they can identify as a potential target.

Such targets could be someone who works in a company, government, or enterprise, but the attack surface could be something as simple as a link they're tricked into clicking based on insight into their personal information (insights that would not exist if that data was protected and secured).

It could also be a link a person connected to them, including less tech-savvy relatives, is tricked into clicking. Attackers are smart enough and have the resources to develop multi-stage attack patterns to get what they want; they just need access to personal information to guide their hand.

That's why it is vital to ensure personal data is properly protected.

But the security of personal data is precisely what shoddy laws such as the UK Online Safety Bill threaten, because when such a law demands a weakening of messaging encryption it also means that any government anywhere, including those we do not trust, can demand the same. It also means that the keys to these personal data kingdoms will eventually slip into the hacker mainstream; even those high-value NSO Group exploits were sold on the dark web for a while.

The weaker a system becomes, the more attacks emerge to exploit those weaknesses; this is the fundamental problem of enforcing data security weakness by design.

What that abuse of the human right to privacy means is that it becomes that much easier to exfiltrate personal information concerning a target of interest (even if you need to bribe a couple of corrupt government officials to do so).

We already recognize that humans are the weakest link in any security infrastructure. But what isn't sufficiently recognized is that any action that puts those humans more at risk makes anyone they work for more vulnerable.

A well-resourced attacker will simply identify who works at the company they're aiming for and then find ways to compromise some of those individuals using seemingly unrelated tricks. That compromised data will then feed into more sophisticated attacks against the actual target.

So, what makes it easy to create those customized attacks in the first place? Information about those people, what they enjoy, who they know, where they go, and how they flow. That's precisely the kind of data any weakening in end-to-end encryption for individuals makes easier to get.

Because if you weaken personal data protection in one place, you might as well weaken it in every place. And once you do that, you're presenting hackers and attackers with a totally tempting table of attack surface treats to chow down on. This is not clever, nor is it sensible.

Because, sure, the data encryption laws that seem to be in circulation right now make the separation between business and personal data, but they completely ignore that businesses are made up of people and people drive business.

When you remove levels of privacy from people who run or work for a business, then you also make the business less secure. It means legislation meant to protect against online harms makes such harms far more likely.

Surely by now most people understand that the Internet comprises a series of inter-connected nodes, and that all these nodes are connected. That connection means anything which reduces the security of any one of them compromises the security of all the others.

Again and again in discussions about encryption, we find ourselves returning to the age-old response on such matters, which is and remains, that online (and possibly across our burning world), we are only as safe as the least secure person we're connected to.

With that in mind, we need more data encryption, not less.

This is history repeating, of course. Because if you think back a little bit to the famed slogan from nineteenth-century author Alexandre Dumas, thanks to his book, "The Three Musketeers," the inconvenient truth on a digitally connected planet is that it's "All for one, and one for all."

No one is safe until everyone is safe.

Please follow me on Mastodon, or join me in the AppleHolics bar & grill and Apple Discussions groups on MeWe.

See the original post here:
This is why personal encryption is vital to the future of business - Computerworld

EU urged to prepare for quantum cyberattacks with coordinated action plan – CSO Online

The European Union (EU) must prepare for quantum cyberattacks and adopt a new coordinated action plan to ensure a harmonized transition to post-quantum encryption to tackle quantum cybersecurity threats of the future. That's according to a new discussion paper written by Andrea G. Rodríguez, lead digital policy analyst at the European Policy Centre.

Advances in quantum computing put Europe's cybersecurity at risk by rendering current encryption systems obsolete and creating new cybersecurity challenges, Rodríguez wrote. This is often coined "Q-Day" - the point at which quantum computers will break existing cryptographic algorithms - and experts believe this will occur in the next five to ten years, potentially leaving all digital information vulnerable to malicious actors under current encryption protocols. For Europe to be serious about its cybersecurity ambitions, it must develop a quantum cybersecurity agenda, Rodríguez stated, "sharing information and best practices and reaching a common approach to the quantum transition" across member states.

Quantum computing will disrupt online security by compromising cryptography or by facilitating cyberattacks such as those on digital identities, Rodríguez wrote. "Cyberattacks on encryption using quantum computers would allow adversaries to decode encrypted information, interfere with communications, and access networks and information systems without permission, thereby opening the door to stealing and sharing previously confidential information," she warned.

"Given that the prospects of a cryptographically significant quantum computer - one able to break encryption - are not a question of if but rather when, cybercriminals and geopolitical adversaries are rushing to obtain sensitive encrypted information that cannot be read today to be de-coded once quantum computers are available." These types of cyberattacks, known as "harvest attacks" or "download now-decrypt later," are already a risk to European security.

The impact of quantum computing on Europe's cybersecurity and data protection has been mainly left out of the conversation despite sporadic mentions in some policy documents such as the 2020 EU Cybersecurity Strategy or the 2022 Union Secure Connectivity Programme, Rodríguez said.

The US arguably leads the transition to post-quantum cybersecurity, in which post-quantum cryptography will be the protagonist, according to Rodríguez. The National Institute of Standards and Technology (NIST) has initiated a standardization process of post-quantum cryptography algorithms, while the Quantum Cybersecurity Preparedness Act, established in 2022, sets up a roadmap to migrate government information to post-quantum cryptography, Rodríguez wrote.
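
As a rough illustration of what a post-quantum migration building block looks like in code, the sketch below performs a key-encapsulation round trip with the open-source liboqs-python bindings. The algorithm name, the installation route, and the availability of this API on a given system are illustrative assumptions, not a description of NIST's or the EU's mandated tooling.

```python
"""Illustrative sketch only: a post-quantum key encapsulation (KEM) round trip
using the open-source liboqs-python bindings (pip install liboqs-python).
The algorithm name is an assumption and depends on the liboqs build;
production migrations would follow the parameter sets NIST finalizes."""

import oqs

ALGORITHM = "Kyber768"  # assumed; some builds expose the ML-KEM names instead

# Receiver generates a key pair and publishes the public key.
with oqs.KeyEncapsulation(ALGORITHM) as receiver:
    public_key = receiver.generate_keypair()

    # Sender encapsulates a shared secret against the receiver's public key.
    with oqs.KeyEncapsulation(ALGORITHM) as sender:
        ciphertext, secret_sender = sender.encap_secret(public_key)

    # Receiver decapsulates the ciphertext to recover the same secret.
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver
print("Shared secret established:", secret_sender.hex()[:32], "...")
```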

"In 2023, the new US National Cybersecurity Strategy established protection against quantum cyberattacks as a strategic objective. This priority encompasses the use of post-quantum cryptography and the need to replace vulnerable hardware, software, and applications that could be compromised."

Meanwhile, the EU's efforts to secure information from quantum cyberattacks lack a clear strategy about how to deal with short-term threats, she added. The narrow focus at the EU level on how to mitigate short-term quantum cybersecurity challenges, especially harvest attacks and quantum attacks on encryption, leaves member states as the frontline actors in the quantum transition, Rodríguez said. "As of 2023, only a few EU countries have made public plans to counter emerging quantum cybersecurity threats, and fewer have put in place strategies to mitigate them, as in the case of Germany."

As quantum computers develop, European action will be needed to prevent cybersecurity loopholes that can be used as attack vectors and ensure that all member states are equally resilient to quantum cyberattacks. "A Coordinated Action Plan on the quantum transition is urgently needed that outlines clear goals and timeframes and monitors the implementation of national migration plans to post-quantum encryption," Rodríguez claimed.

Such a plan would bridge the gap between the far-looking objective of establishing a fully operational European Quantum Communication Infrastructure (EuroQCI) network and the current needs of the European cybersecurity landscape to respond to short-term quantum cybersecurity threats. Europe can also leverage the expertise of national cybersecurity agencies, experts, and the private sector by establishing a new expert group within ENISA where seconded national experts in post-quantum encryption can exchange good practices and encourage the establishment of migration plans, Rodríguez wrote.

Rodríguez's paper set out six recommendations for an EU quantum cybersecurity agenda.

Continue reading here:
EU urged to prepare for quantum cyberattacks with coordinated action plan - CSO Online

Leeds-based photonic chip company Optalysys raises 21 million to unlock its Fully Homomorphic Encryption process – Tech.eu

Photonic chip maker Optalysys has raised a 21 million Series A funding round, which will see it advance its Enable photonic computing technology to unlock a new form of secure processing known as Fully Homomorphic Encryption (FHE).

Backing the company is the Agnelli family through Lingotto, which is owned by Exor, the Agnelli family holding company. The round was led by Lingotto, imec.xpand, and Northern Gritstone.

FHE, a form of quantum-secure cryptography, doesn't require the data to be decrypted before it can be processed, allowing confidential or sensitive data to be sent along untrusted networks, or to be worked on by multiple parties without ever exposing the data itself.

Given that encrypted data takes significantly longer to process, Optalysys uses an advanced photonic semiconductor which accelerates the FHE process, allowing encrypted data to be processed at similar speeds to its unencrypted form. This brings hope of deploying FHE at the scale demanded by the largest secure data applications.
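
To make the underlying idea concrete, the toy sketch below uses the open-source python-paillier library. Paillier is only partially homomorphic (it supports addition of ciphertexts and multiplication by plaintext constants), not the fully homomorphic encryption Optalysys is accelerating, and the figures are invented for illustration, but the core property is the same: the party doing the arithmetic never decrypts the data.

```python
"""Illustrative toy only: computing on encrypted data with the open-source
python-paillier library (pip install phe). Paillier is *partially*
homomorphic, not FHE, but it demonstrates the property described above:
the aggregator never sees the underlying values."""

from phe import paillier

# Data owner generates a keypair and encrypts sensitive values.
public_key, private_key = paillier.generate_paillier_keypair()
salaries = [52_000, 61_500, 58_250]            # assumed example data
encrypted = [public_key.encrypt(s) for s in salaries]

# An untrusted server can aggregate the ciphertexts without decrypting them.
encrypted_total = sum(encrypted[1:], encrypted[0])
encrypted_scaled = encrypted_total * 2          # ciphertext times a plaintext constant

# Only the data owner, holding the private key, can read the results.
print("total:", private_key.decrypt(encrypted_total))            # 171750
print("doubled total:", private_key.decrypt(encrypted_scaled))   # 343500
```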

"Optalysys presents a groundbreaking semiconductor technology to reduce energy consumption, boost processing power, and enhance data security. The capability to unlock the power of FHE with their photonic computing technology will enable new markets with advances in encrypted AI. We look forward to working with Nick, Rob, and the team to bring a new level of trust and security to how we use our data," says Ashish Kaushik, Partner at Lingotto.

The funds will 'allow the company to launch its technology on a cloud-based service model, in partnership with system integrators and service providers. Initial photonic systems developed by Optalysys will also be made available to end-users via an Accelerator program - ahead of the first high-speed Enable chips being produced within 24 months'. It will also build out its teams in Europe and the US.

"Fully Homomorphic Encryption has the power to unlock the full value of data - but despite its advantages, it is currently unviable for anything beyond basic processes this is where Optalysys comes in. Our Enable technology allows us to turbo boost the workflows and address the underlying bottlenecks that hold FHE back," says Dr. Nick New, co-founder and CEO of Optalysys.

Here is the original post:
Leeds-based photonic chip company Optalysys raises 21 million to unlock its Fully Homomorphic Encryption process - Tech.eu

Content Moderation, Encryption, and the Law – Tech Policy Press

Audio of this conversation is available via your favorite podcast service.

One of the most urgent debates in tech policy at the moment concerns encrypted communications. At issue in proposed legislation, such as the UK's Online Safety Bill or the EARN IT Act put forward in the US Senate, is whether such laws break the privacy promise of end-to-end encryption by requiring content moderation mechanisms like client-side scanning. But to what extent are such moderation techniques legal under existing laws that limit the monitoring and interception of communications?

Today's guest is James Grimmelmann, a legal scholar with a computer science background who, along with Charles Duan, recently conducted a review of various moderation technologies to determine how they might hold up under US federal communications privacy regimes including the Wiretap Act, the Stored Communications Act, and the Communications Assistance for Law Enforcement Act (CALEA). The conversation touches on how technologies like server-side and client-side scanning work, the extent to which the law may fail to accommodate or even contemplate such technologies, and where the encryption debate is headed as these technologies advance.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

James, I'm happy to have you back on the podcast, this time to talk about a paper that I believe is still in the works, Content Moderation on End-to-End Encrypted Systems: A Legal Analysis, with your co-author, Charles Duan.

I would love to just get you to, in your own words, say why it is you chose at this moment to set out to write this piece of work.

James Grimmelmann:

So this comes out of work that some of my colleagues at Cornell Tech have been doing. Tom Ristenpart, who's a computer scientist, and his group have been working on, let's call it online safety.

With the technologies people use now. So one branch of their work, which has been very influential, deals with securing people's devices in cases that involve intimate partner abuse. Those are cases where the threats are literally coming from inside the house and the abusers may have access to people's devices in ways that traditional security models didn't include.

Another major strand that Tom and his team have been working on has to do with abuse prevention in end-to-end encrypted systems. So encrypted messaging is where the message is scrambled in a way so that nobody besides the sender and recipient can read it. Well, if you're sending that message through a server, through email or through a messaging system like Facebook Messenger or WhatsApp or Signal.

Then the question arises, is the message encrypted on its way from you to the Facebook servers and then from Facebook's servers to its recipient, or is it encrypted in a way that not even Facebook can read it? If it's encrypted in a way that only you and the person you're sending it to can read it, and Facebook sees it as just an equally random string of gibberish, that's called end-to-end encryption.

And this has been promoted as an important privacy-preserving technology, especially against government agencies and law enforcement that might try to surveil communications or have the big platforms do it for them. A challenge, however, with end-to-end encrypted messaging is that it can be a vector for abuse.

If the platform can't scan its contents, it can't look for spam or scams or harassment. Somebody who sends you harassing messages through Facebook Messenger, you'll receive it. But Facebook's detectors won't know it. And if you try to report it to Facebook, then Facebook doesn't have direct evidence of its own that this was actually received through its platform.

It's open to potential false reports of abusive messaging. And so in that context, Tom and other computer scientists have been trying to find techniques to mitigate abuse. How can you report abusive messages to a platform? Or if you're a member of a group that uses encrypted communications for all members of the group, and some platforms do now have encrypted group chats, how can you and the other participants say, so-and-so is being a jerk in our community, we don't want further messages from them? And so there's this broad heading of computer science work on abuse mitigation in end-to-end encrypted communications.

Long background on a bunch of computer science stuff; I am here as the law-talking guy. So my postdoc Charles and I (he, like me, has a background in computer science as well as law) have been working with the computer scientists on the legal angles to this.

And in particular, Charles and I have been asking, do these abuse prevention mechanisms comply with communications privacy law? There are laws that prohibit wiretapping or unauthorized disclosure of stored electronic communications. Do these techniques for preventing abusive communications comply with the various legal rules that aim to preserve privacy?

Because in many ways, it would be a really perverse result if people using a technology designed to preserve their privacy can't also use a technology that makes that messaging safe, because they would be held to have violated each other's privacy. There's something very backwards about that result, but our communications privacy laws are so old that it takes a full legal analysis to be certain that this is safe to do.

So our draft, which is very long, goes through a lot of those legal details.

Justin Hendrix:

So I want to get into some of the questions that you pose, including some of the normative questions that you kind of address towards the end of the paper, which pertain to news-of-the-moment questions around the Online Safety Bill in the UK, for instance, and the fight over encryption that's happening there, et cetera.

I do want to give the paper its due and go through what you've tried to do methodically on some level. But I do want to start perhaps with that last point you just made, which is this idea that these technologies, encrypted messaging apps, are a different generation of communications technology that the law didn't anticipate.

Is that broadly true in your view?

James Grimmelmann:

That's probably true. Our communications privacy laws were written, literally, with previous generations of technology in mind. It's called wiretapping because this applies to wire communications, which is a telegraph or a telephone that has a physical wire running ultimately from one person to the other.

And we still use that terminology. And there are still a lot of assumptions from older technologies baked into how the laws are written and the concepts that they use.

Justin Hendrix:

So let's talk about the spate of laws that you looked at here. You looked at the Wiretap Act; the Stored Communications Act; Pen Registers and Trap and Trace Devices; the Computer Fraud and Abuse Act; and the Communications Assistance for Law Enforcement Act, or CALEA, as some folks will know it. Are there other laws that perhaps you'll have to look at in the final analysis?

James Grimmelmann:

So we've been taking this paper around to conferences, and we got excellent feedback that we also need to address mandatory reporting laws around child sexual abuse material. Because those, too, impose certain obligations on telecommunications providers or possibly participants when they become aware of certain kinds of material, and so moderation techniques that could make them aware of those materials definitely trigger the obligations of those laws. I think it's ultimately the five you mentioned plus the CSAM laws.

Justin Hendrix:

So let's talk about the moderation approaches. And maybe it would be helpful for us to just go through them one by one. And in your words, if you can offer a description of what these technologies are.

James Grimmelmann:

Okay, let's start with message franking, which is really a technique designed to address the kind of scenario I mentioned to you before. You're using an end-to-end encrypted messaging system, and somebody sends you something abusive.

Pictures of their genitals, repeated messages saying, I hope you die, something that you really don't want to receive. And the technical challenge that it's trying to solve is how do you make this reportable to the platform so the platform can help you, without undermining the privacy guarantees of end-to-end encrypted messaging in the first place.

And the solution, which is incredibly ingenious technically, is to allow for a kind of verified reporting in which the recipient of a message can send a report to the platform that is provably based upon actual messages. The recipient can't forge the message and say, oh, this person sent me this abusive content, when they didn't actually send it.

So the sender is locked in. They are committed to anything that they send. And if the recipient decides it's abusive, they can report it. At the same time, the platform should learn nothing unless the recipient actually chooses to make a report, unless and until that person says, I didn't want to receive this.

This violates platform policies. The platform should be able to tell nothing about the message at all. And it turns out that by basically putting a couple of well-designed electronic signatures on each message, you can design a system that does this. It's called message franking. The idea being, like you frank a message with a stamp, and the rubber stamp, you know, carries all the information the platform and recipient will later need in case of an abuse report.

And I'm lumping forward tracing together with message franking because it's basically an extension of it. In forward tracing, if a message is reported as abusive, the platform can trace it back not to the person who sent that specific message, but to everybody before them in a chain if it was forwarded, and that might be relevant.

If a message gets forwarded to somebody and they say, this is actually, like, illegal material that I did not want to be involved with, the platform can then run it back to the original sender who introduced it to the network, which could be useful in rooting out somebody who is using it for abusive purposes.

So basically, it's a clever application of cryptographic techniques that have been invented in this millennium, after all of the communications privacy laws we discussed were drafted.
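
As a rough illustration of the committing-MAC idea Grimmelmann describes, the toy sketch below shows a sender committing to a message with a fresh franking key, the platform countersigning the commitment without seeing the message, and the recipient later revealing the message and key to make a verifiable report. This is not any platform's actual protocol (real schemes bind these tags into the end-to-end encrypted payload), and the key handling and metadata are simplified assumptions.

```python
"""Illustrative toy only: the committing-MAC idea behind message franking.
Not any platform's real protocol; it only shows how a recipient can later
prove to the platform what was sent, while the platform learns nothing
about the message unless a report is filed."""

import hmac
import hashlib
import secrets

def mac(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

# --- Sender: commit to the message with a fresh franking key. ---
message = b"abusive message goes here"
franking_key = secrets.token_bytes(32)
commitment = mac(franking_key, message)          # visible to the platform
# (message, franking_key) travel inside the end-to-end encrypted payload.

# --- Platform: never sees the message, but countersigns the commitment. ---
platform_key = secrets.token_bytes(32)           # platform's long-term secret
context = b"sender=alice|recipient=bob"          # assumed routing metadata
platform_tag = mac(platform_key, commitment + context)

# --- Recipient: verifies the commitment on receipt. ---
assert hmac.compare_digest(commitment, mac(franking_key, message))

# --- Report: recipient reveals (message, franking_key); platform checks both tags. ---
report_ok = (
    hmac.compare_digest(commitment, mac(franking_key, message)) and
    hmac.compare_digest(platform_tag, mac(platform_key, commitment + context))
)
print("report verified:", report_ok)
```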

Justin Hendrix:

And which of the encrypted messaging apps that folks are familiar with at the moment are using this technique?

James Grimmelmann:

So it's basically at the research stage. Facebook is the one that is leading the way in terms of developing this technology. Facebook, or rather their research arm, was one of the original creators of one of the original message franking proposals. So they're the one that has invested the most in making this workable.

Justin Hendrix:

And of course, Facebook intends to make its Messenger encrypted by the end of the year, it's promised. So perhaps it's interested in doing so alongside the introduction of technologies like this. Let's talk about whether this comports with the various laws and frameworks that you've assessed. How does it stand up when you look back at the statute?

James Grimmelmann:

So this is an answer I'll probably give you repeatedly, which is, we think it's okay, but we're less certain about that than we would like to be.

So let's take the Wiretap Act. The Wiretap Act, as you might expect, prohibits intercepting electronic communications in a way that lets you learn their contents. And the classic case here is like the literal wiretap plugging into a phone cable. Or also connecting to a network box and just grabbing a copy of somebody's incoming email in flight as it arrives.

And it might seem like, well, there's no interception here because only when there's an actual abuse report made to the platform does the platform learn the contents of a message, but it's not quite that clean, because the definition of contents in the Wiretap Act is quite broad. The statute defines it as any information concerning the substance, purport or meaning of a communication.

And there's a non-frivolous argument that this little franking tag, the little stamp that the platform gets, which is applied to each message, actually does contain some information about the substance of the message. It does allow the platform to verify the message's authenticity, and there are courts that have expressed at least doubt about whether this kind of metadata that verifies a message's contents is in fact itself also contents. And if you go down that road, you wind up then asking a whole bunch of other statutory questions under the Wiretap Act. Does the participation of the platform in applying the franking tag to a message as it gets sent through from sender to recipient, is that an interception under the statute? Again, textually a hard question. And then perhaps most interestingly, and this one really opens up a thorny set of issues:

Should we think about the participants in this communication as having consented to this process? Should the sender of the message be able to say, wait a minute, I didn't consent to all of this cryptographic mumbo jumbo that you did when I sent a message. I did not consent to the steps necessary to verify me as the sender. I thought I was using a completely encrypted end-to-end messaging system. I did not agree to any of this.

And from one perspective, this is a bad argument for a person sending abusive messages to make. But from another, they do have a point that this does not completely comport with the way that end to end encrypted messaging is used in the broad public discourse.

If you think of it as meaning no one besides you and the recipient can ever learn anything about your message, then this is a small inroads on the privacy guarantees of E2EE.

Justin Hendrix:

So we're going to come back to that last comment I think more than once as we go through this, and perhaps we'll address it in the summary conversation as well, because I think you might be able to say that about each of these things.

But next you go to server-side automated content scanning. A lot of folks like to toss out this phrase, homomorphic encryption. I liked the somewhat artful description you have of this technique where the server learns nothing. I'll read it.

Imagine a blindfolded chef wearing thick mittens who follows instructions to take things out of a box, chop them up, put them in the oven for an hour at 350 degrees, and then put it back in the box. This chef can roast vegetables for you, but doesn't learn whether you were roasting potatoes or parsnips. It's a pretty good description, I suppose, of how this is supposed to work, technically.

Let's talk first, perhaps, about whether this technology works at all.

James Grimmelmann:

So, homomorphic encryption is another one of these really interesting modern developments in cryptography.

The idea is that you can perform a computation on some data without learning anything about the data. And this seems like a kind of pointless thing to do if it's just you working with your own data. But if you have some untrusted party who has a lot of processing capacity and you want them to do some work for you, it's actually quite valuable.

Like, if the chef can run an efficient enough kitchen, we might all hand off our vegetables to them to do this for us. And in particular, homomorphic encryption could be used to scan content against certain kinds of registries, like CSAM, Child Sexual Abuse Material, registries, or to do certain kinds of spam detection, without letting the party doing the scanning learn anything about the content it is scanning.

And you might think, well, what's the point then? Well, you can modify the message being transmitted to flag it for the recipient, so that before you open that picture of somebody's genitals, you might get a warning saying the attached image appears to be of somebody's genitals, do you wish to proceed? And that would actually be a meaningful anti-abuse factor, that the server does this matching against a complicated model for you.

You don't have to have the whole huge database of these pictures on your device, and you might not be in a position to do it yourself easily. The platform can do this to help warn people about the messages that they're receiving.
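
To make the blindfolded-chef intuition tangible, here is a deliberately toy Python sketch. It relies on the fact that textbook, unpadded RSA is multiplicatively homomorphic; this is an insecure classroom example chosen only to show computation on data the server cannot read, not the scheme any real scanning proposal would use.

# Textbook RSA: Enc(a) * Enc(b) mod n decrypts to a * b mod n.
p, q = 61, 53                 # tiny primes, demo only
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (requires Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
product_ciphertext = (encrypt(a) * encrypt(b)) % n   # done by the "blindfolded" server
assert decrypt(product_ciphertext) == (a * b) % n    # 42, computed without the server seeing 7 or 6

Fully homomorphic schemes extend this idea from a single operation to arbitrary computations, which is what makes them so much slower, as discussed below.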

Justin Hendrix:

Is this a legal technology, at least according to the laws that you reviewed?

James Grimmelmann:

Again, we think it's legal, but we're not as certain as we would like to be. Take the Wiretap Act analysis. The platform can do things that manipulate the message. Once again, we're in that world of asking, is it receiving contents? Here, the argument against liability depends, I think, on some of the exceptions to Wiretap Act liability that the Act includes.

So, for example, the Wiretap Act has this exception for the ordinary course of business, under which platforms can inspect messages as part of their ordinary operations. And platforms routinely do spam detection and antivirus scanning on our message attachments already, so this seems to fit within the class of things that they already do.

The analysis under the other statutes is also pretty good. One of the nice things about this kind of encryption is that platforms dont retain any information once they do the processing. They send it out, it leaves their system. That means that they are not retaining the kinds of stored communications that could trigger the Stored Communications Act.

Thank you. We like it. We would like this to be legal. We think it is. We don't have 100% certainty.

Justin Hendrix:

And is it the case, based on your review, that this technology is still fragile, still unlikely to work at scale?

James Grimmelmann:

It's not scalable currently. Ordinary computation is fast. Applying and removing encryption is reasonably fast.

Homomorphic encryption is kind of slow. The work you have to do in order to compile your computation down into the kind of thing you can do blindfolded with mittens on makes it a lot less efficient. It's not surprising. Anything you do wearing thick, heavy gloves is going to be a lot less effective because you can't feel what you're doing.

And so it's not a scale-worthy technology yet, but it's plausible enough that it might become one that it's worth thinking in advance about its legality.

Justin Hendrix:

So next we'll talk about what is, you know, perhaps the most discussed potential form of content moderation for encrypted messaging apps these days: client-side automated content scanning.

Of course, Apple proposed one such system. Apparently the UK Home Office is funding the development of prototypes in this space, perhaps in anticipation of the potential passage of the Online Safety Bill there. How does client-side scanning work? Do you have another cooking metaphor that could explain this one to us?

James Grimmelmann:

Not quite as elegantly. With client-side scanning, the client that you are using to send messages, so the Facebook Messenger app or the Signal app or Apple's messaging app, would perform some kind of computation, some check of your content on the device, before it is sent or when it is received. The scanning can then flag, either for the user or for some external authority, whether the content matches against some database of concerning communications.
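
As a rough illustration of that flow, here is a hedged Python sketch of a client checking an attachment against a local blocklist before sending. The exact-hash matching and the placeholder digest are assumptions made for clarity; real proposals rely on perceptual hashing (so edited images still match) and private set intersection, which this ignores.

import hashlib

KNOWN_BAD_DIGESTS = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder entry
}

def matches_blocklist(attachment: bytes) -> bool:
    return hashlib.sha256(attachment).hexdigest() in KNOWN_BAD_DIGESTS

def send_with_client_side_scan(attachment: bytes) -> None:
    if matches_blocklist(attachment):
        # The key design choice: warn only the local user, or also report the
        # match to the platform or a government authority. The privacy
        # implications of those two options are very different.
        print("Warning: attachment matches a known-abuse database; not sending.")
        return
    print("No match; handing the message to the end-to-end encrypted transport.")

send_with_client_side_scan(b"an ordinary photo")

Where the result of that check goes, to the user alone or to someone else, is exactly the dividing line discussed later in the conversation.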

Justin Hendrix:

And is it legal?

James Grimmelmann:

This gets really complicated, in part because of the diversity of these systems. There are a lot of different architectures. Some of them involve trying to scan against databases without revealing to the client what's in the database, because if the database is of prohibited content, you can't just give everybody a complete copy of the things they're not supposed to have.

And also because they involve communications. That is, if I'm trying to query what I've got on my device against some database of things, it may involve sending a digest of what I've got out to the network and back. And does that process constitute an interception? This brings us back to the same kinds of questions we asked when we were doing message franking.

Have I, as the user of this app, consented to have my data scanned in this way, and possibly to have some flag about its status sent to the third party who's providing this app? Again, this is a hard question. I don't think you can answer it fully on the technical side. You can't just say, well, because this app works this way and you ran the app, you consented to it.

That same argument would say you consented to spying on your phone. But you also can't just say, well, I didn't want this, so there's no consent. At some point, people have to know how the software they've chosen to run, as it's been explained to them, works, or we have serious computer law violations every time anybody is surprised by an app feature. So it's going to be very fact-dependent in a slightly uncomfortable way.

Justin Hendrix:

You've mentioned there's some variability in terms of how these client-side scanning schemes work. Are there versions of client-side scanning that you are more comfortable with than others?

Are there those that you've seen that you would regard as, you know, potentially spyware or very concerning from a privacy standpoint, and ones that perhaps, I guess, are a little more responsible?

James Grimmelmann:

I mean, the obvious dividing line here is a client-side app that reports the results out to a third party versus one that merely reports them to the parties to the communication.

That is, I might very well, as a recipient, want to have had the sender's device do a client-side scan and have a cryptographic certification that the message didn't include stuff in this database of abusive content. I could see that, and if that's not revealed to anybody outside the communication, it seems reasonably privacy-friendly.

If it's scanning against the government-provided database of terrorist-supporting content, or the kinds of safety concerns that the UK Home Office would like to be monitoring for, that's a bigger intrusion on privacy. Now, it may be that the particular things on this list are particularly concerning, but you get into the fact that this is scanning your messaging for reporting out to the government, and you get into serious questions about the transparency of the process by which things are added to that database.

And so you really can't assess the privacy implications without having a larger conversation about the institutional setting.

Here is the original post:
Content Moderation, Encryption, and the Law - Tech Policy Press

Read More..

The importance of encryption for the defence industry in today’s … – defenceWeb

In today's increasingly digital world, the defence industry is adopting cutting-edge technologies to enhance its capabilities. These technologies, such as the Internet of Things (IoT), cloud computing, artificial intelligence (AI), and virtual reality (VR), offer tremendous opportunities for improved operations and services.

However, their integration brings forth new challenges related to security, privacy, and the reliability of underlying systems. As a result, robust cybersecurity solutions, including encryption, are vital to protect sensitive data.

In the past two decades, a staggering number of records (numbering in the billions) have been stolen or compromised, with barely a week going by without news of a major data breach. This month, for example, the Pentagon announced plans to tighten protection for classified information following the explosive leaks of hundreds of intelligence documents that were accessed through security gaps at a Massachusetts Air National Guard base by Guardsman Jack Teixeira. The leak is considered the most serious US national security breach since more than 700 000 documents, videos and diplomatic cables appeared on the WikiLeaks website in 2010.

Breaches on the rise

Only a few weeks ago, MOVEit, a popular file transfer tool, was compromised, exposing the sensitive data of many companies that use the software. Affected companies include payroll provider Zellis, British Airways, the BBC, and the province of Nova Scotia. In May, it was alleged that vehicle manufacturer Suzuki had to stop operations at one of its plants in India after a cyberattack, incurring a production loss of more than 20 000 vehicles during this time.

The defence industry and military have been targeted as well. Last year Kon Briefing recorded 34 major cyberattacks on the military and defence industry, which, among other incidents, saw 1.7 million Polish Army logistics data sets published; data about 120 000 Russian soldiers fighting in Ukraine leaked; over 15 000 emails from a Russian military construction company leaked; 400 000 emails of the Chilean Ministry of Defence leaked; a database of the Russian military intelligence service leaked; and secret NATO documents from Portugal offered for sale on the Darknet.

A leading cause of data loss or compromise has been data stored on mobile or removable devices, along with internal breaches resulting from unauthorised employee access to private data. The theft of devices has also emerged as a major factor in data breaches, and the loss of confidential information is not limited to theft of the device alone, as malware attacks increasingly go after proprietary business information and customer data.

A list of dire consequences

Furthermore, the consequences of a data breach go well beyond the direct financial costs, including the loss of confidence and irreparable damage to an organisation's reputation. Add to this the fact that data security and privacy have become legally mandated in many major markets as the regulatory environment grows more stringent, with regulations such as PoPIA and GDPR working to safeguard sensitive information.

So what can be done to mitigate the damage of stolen devices, or malware that exfiltrates company or military information? The answer is encryption, which has emerged as a critical defence mechanism. By making use of encryption, organisations render their most confidential data useless to nefarious actors or viewers who are not authorised, guaranteeing its protection and ensuring the confidence of their stakeholders.

What is data encryption?

Data encryption refers to the process of converting data from its original form into an unreadable format called ciphertext, meaning it becomes useless to unauthorised parties. To turn the data back into its original state, a specific encryption key or cipher is needed.

Although data varies greatly in nature, encryption can be applied to practically every type of data. Encryption can be employed when data is at rest, which means it is stored in a fixed location such as a disk. It can also be employed when data is in motion, being transmitted over a network. Data encryption is also compatible with a host of operating systems, file systems, block data, bare-metal servers, virtual machines, and virtual disks.
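
As a small illustration of encrypting data at rest, the sketch below uses Python's widely available cryptography package; the record contents and the comments about where the key should live are assumptions for the example, not a recommendation of any particular product.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # store this separately from the data,
f = Fernet(key)                    # ideally in a key management system

plaintext = b"logistics manifest: 40 vehicles, depot 7"   # illustrative content only
ciphertext = f.encrypt(plaintext)  # what actually sits on disk

# Anyone without the key sees only ciphertext; the key holder can recover the data.
assert f.decrypt(ciphertext) == plaintext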

Certain data, such as the information stored in the /proc directory on a Linux server, may not necessarily need to be encrypted, and in these cases, alternative security measures such as file-level access control should be implemented to safeguard the data.

The effectiveness of different encryption algorithms varies depending on the types of data being encrypted. Additionally, the performance of these algorithms can be influenced by the underlying infrastructure on which they are implemented.

Some algorithms may demonstrate superior performance in environments with abundant memory but limited CPU power, while others may excel in CPU-intensive environments. It is therefore recommended to experiment with different encryption algorithms to identify the ones that align best with the businesss specific requirements.
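
One way such an experiment might look is sketched below, comparing two common ciphers with illustrative parameters. Actual results depend heavily on the hardware and workload, and a real evaluation would repeat the measurements and consider authenticated modes and hardware acceleration.

import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

data = os.urandom(32 * 1024 * 1024)      # 32 MiB of random test data
key = os.urandom(32)
nonce = os.urandom(16)

def encryption_seconds(cipher) -> float:
    enc = cipher.encryptor()
    start = time.perf_counter()
    enc.update(data)
    enc.finalize()
    return time.perf_counter() - start

aes_ctr = Cipher(algorithms.AES(key), modes.CTR(nonce))
chacha20 = Cipher(algorithms.ChaCha20(key, nonce), mode=None)

print(f"AES-256-CTR: {encryption_seconds(aes_ctr):.3f} s")
print(f"ChaCha20:    {encryption_seconds(chacha20):.3f} s")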

Best practices

There are also some best practices that militaries and defence businesses should follow when embarking on an encryption journey.

Firstly, safeguarding the encryption keys is crucial. Mistakes can happen, and if the encryption key is compromised, unauthorised access to company data becomes a real danger. Avoid storing the key in an unencrypted file on your computer. Instead, adopt measures such as separating the keys from the data, implementing user access restrictions and responsibilities, and regularly rotating encryption keys based on a predetermined schedule.
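
A simple way to picture scheduled rotation, continuing with the same illustrative library as above: decrypt each stored record with the retiring key and re-encrypt it under its replacement, then retire the old key material.

from cryptography.fernet import Fernet

old_key = Fernet.generate_key()          # key being retired
new_key = Fernet.generate_key()          # replacement issued on schedule
old_f, new_f = Fernet(old_key), Fernet(new_key)

stored_records = [old_f.encrypt(b"record-1"), old_f.encrypt(b"record-2")]

# Rotate: re-encrypt every stored record under the new key.
rotated_records = [new_f.encrypt(old_f.decrypt(token)) for token in stored_records]

assert [new_f.decrypt(t) for t in rotated_records] == [b"record-1", b"record-2"]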

Next, encrypt all sensitive data, irrespective of its storage location or perceived risk. Breaches are seen as an inevitability now, so by encrypting sensitive data, the business significantly increases the barriers to unauthorised actors attempting to breach the systems.

Finally, effective data encryption involves making data unreadable to unauthorised parties while maintaining efficiency and utilising resources optimally. If the encryption process is overly time-consuming or consumes excessive CPU time and memory, consider switching to a different algorithm or experimenting with encryption tool settings to strike a balance between security and performance.

By embracing encryption as an essential security measure, the defence sector can fortify its data protection capabilities, maintain confidentiality, and instil confidence among stakeholders. Encryption serves as a cornerstone in safeguarding sensitive information, preserving national security, and supporting the defence sectors digital transformation endeavours.

Written by Caryn Vos, Senior Manager: Crypto at Altron Systems Integration

Vos has specialised in information security for over 20 years, during which time she has dealt with all facets of this industry. This has given her a deep and broad understanding of information security as a whole. While she has focused on the financial services sector for many years, she has also worked with most industries during the course of her career. She has built an extensive network throughout the channel and end-user customer base and has extensive experience in dealing with end users as well as through partners.

For more information, contact her via LinkedIn https://www.linkedin.com/in/caryn-vos-4763047/

Continue reading here:
The importance of encryption for the defence industry in today's ... - defenceWeb

Read More..