
Apple’s flavor of RCS won’t support Google’s end-to-end encryption extension – AppleInsider

Apple wants no part of Google's addition of end-to-end encryption to RCS; the iPhone maker will instead work with the standards body to mandate a universal version.

Apple surprised everyone on Thursday with a brief announcement that RCS would be supported by its products in 2024. However, there's more to the story as it isn't quite the same RCS Google Messages users have come to know.

According to a report from TechRadar, Apple won't adopt proprietary extensions like the one made by Google that adds end-to-end encryption to RCS. Instead, Apple intends to work with the GSMA to add encryption to the RCS Universal Profile.

The Universal Profile for RCS is a widely adopted standard used across multiple messaging apps and carriers. Google added end-to-end encryption to RCS for users that communicate exclusively through the Google Messages app.

Apple likely didn't want to elevate Google's proprietary version of RCS and exclude other Universal Profile users. Besides, if Apple is adopting RCS to prevent antitrust litigation, it isn't going to choose yet another silo that could implicate the company.

The obvious path forward is the one Apple is taking. By working with the GSMA and getting the Universal Profile to support end-to-end encryption, Apple ensures the widest reach of a safe and secure messaging platform rather than limiting it to Google Messages users.

Group chats with iMessage and RCS users will benefit most from Apple's move. High-quality images and video can be shared, but proprietary iMessage features like dropping stickers onto a chat bubble or reactions likely won't be interoperable.

Apple said that RCS support would arrive later in 2024. This likely means it will arrive as a part of iOS 18 and the other fall releases.

There is no known timeline for how long it might be before the Universal Profile gets end-to-end encryption.

Go here to read the rest:
Apple's flavor of RCS won't support Google's end-to-end encryption extension - AppleInsider

Read More..

Proton Mail plans to tap blockchain tech for email encryption key … – SiliconANGLE News

Proton AG, a Swiss security services provider best known for its encrypted email product Proton Mail, is planning to roll out a new service that taps blockchain technology to help verify that users are contacting the people they believe they're reaching out to.

The new service, called Key Transparency, now in beta testing, will allow users to verify email addresses and the encryption keys used to secure messages sent to them, protecting against attackers. Although end-to-end encryption already protects against snooping, ensuring that the email address and encryption key of the other party are valid could be another matter.

Encryption between parties relies on public key cryptography, which breaks keys into two parts: a private key and a public key. When a user sends a message to another user, the sender uses the recipient's public key to encrypt the message, and the receiver uses the matching private key to decrypt it.
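For readers who want to see the shape of that exchange, here is a minimal sketch using the Python cryptography package with RSA-OAEP; the key size, padding and message are illustrative choices, not a description of Proton's actual implementation (Proton Mail uses OpenPGP under the hood).

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient generates a key pair: the public half is published, the private half stays local.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# The sender encrypts with the recipient's public key...
ciphertext = public_key.encrypt(b"meet at noon", oaep)

# ...and only the matching private key can decrypt it.
assert private_key.decrypt(ciphertext, oaep) == b"meet at noon"
```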

According to Proton founder and Chief Executive Andy Yen, a problem can arise when retrieving the public key and identity of the other party from public key repositories, he told Fortune in an interview, and that's what Key Transparency is designed to prevent.

"Maybe it's the NSA that has created a fake public key linked to you, and I'm somehow tricked into encrypting data with that public key," he said. This is a situation known as a man-in-the-middle attack, where a perpetrator sneaks in and pretends to be someone else, reads the message, then encrypts it again and sends it on without the sender or receiver knowing.

Blockchain technology uses multiple cryptographically protected ledgers that mirror one another to make it nearly impossible to tamper with them after an entry has been added. Every transaction submitted to a blockchain is also verified and agreed upon before it's added to the distributed ledger network and then integrated into a block, which is then chained on top of previous blocks. The combination of cryptography and exact copies of distributed ledgers gives it enhanced security over basic databases.

At the time of creation, a cryptographic hash of the encryption key will be added to the Proton blockchain along with the email address, allowing the address and key to be verified and matched together. This will allow the platform to quickly verify that the person who owns the address also created the key linked to that address.
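As a toy illustration of the idea (not Proton's actual data structures), the sketch below hashes an (email address, public key) pair and appends that fingerprint to a hash-linked log, so a later substitution of the key would produce a different entry than the one committed to the chain.

```python
import hashlib

def leaf_hash(email: str, public_key_pem: bytes) -> str:
    """Fingerprint that binds an address to a specific key."""
    return hashlib.sha256(email.encode() + b"|" + public_key_pem).hexdigest()

def append_block(prev_block_hash: str, leaf: str) -> str:
    """Chain the new entry onto the previous block so history can't be silently rewritten."""
    return hashlib.sha256((prev_block_hash + leaf).encode()).hexdigest()

chain_head = "0" * 64  # genesis value for the demo
entry = leaf_hash("alice@example.com", b"-----BEGIN PUBLIC KEY-----...")
chain_head = append_block(chain_head, entry)

# A verifier who recomputes the leaf from the key it was actually handed can check it
# against the committed entry; a swapped key yields a different hash and fails the check.
assert leaf_hash("alice@example.com", b"-----BEGIN PUBLIC KEY-----...") == entry
```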

Yen added that although blockchain technology is the core technology behind Key Transparency, there will be no cryptocurrency involved for users to concern themselves with. The technology itself will essentially be invisible to users but will enhance their security experience.

The Key Transparency beta version currently runs on Proton's own private blockchain with its own set of internal decentralized validators. The technology may eventually move onto a public blockchain such as Ethereum after the current version has been piloted.

Users on Proton Mail can enable Key Transparency now by joining the beta through their Encryption and keys settings and switching it on. Proton will periodically audit a user's contacts' keys and provide messages and warnings. These could include warnings about changes a user made to keys but did not properly apply, keys used in the past that might not be authentic, and keys that were disabled in the past but re-enabled. An audit doesn't verify that contacts' keys are safe; instead, it warns when there are potential issues.

Key auditing also exists in the composer, which is where emails are prepared and sent. If the web app successfully verifies a public key, a blue lock icon will be displayed next to the email address, meaning the email will be end-to-end encrypted. If an issue is detected, a red icon is displayed and the ability to send messages is disabled to protect security.


Here is the original post:
Proton Mail plans to tap blockchain tech for email encryption key ... - SiliconANGLE News

Read More..

European Telecom Body to Open-Source Radio Encryption System – BankInfoSecurity.com


The European telecom standards body behind a widely used radio encryption system will soon open-source its encryption protocols.


The European Telecommunications Standards Institute on Tuesday announced it will soon publish the encryption algorithms used in Terrestrial Trunked Radio, or TETRA, a European standard for radio communication adopted by device makers such as Motorola, Hytera and Simoco.

The announcement from the agency comes after researchers from Dutch security firm Midnight Blue uncovered critical vulnerabilities in the algorithms by hacking a Motorola radio that used the TETRA protocol (see: Critical Vulnerabilities Found in Radio Encryption System).

The proprietary algorithm has been kept a secret since 1990 and distributed under a nondisclosure agreement in defiance of a widely accepted cryptographic principle holding that obscurity is detrimental to security.

Following a review meeting on the request that was attended by algorithm users - predominantly manufacturers and governments - ETSI on Tuesday announced it would publicize its Air Interface algorithms along with its cryptographic protocols. The agency will also publicize its original and latest authentication and key management protocols, TAA1 and TAA2.

"Public domain algorithms are now widely used to protect government and critical infrastructure networks," ETSI said. "Effective scrutiny of public domain algorithms allows for any flaws to be uncovered and mitigated before widespread deployment occurs."

Midnight Blue uncovered five vulnerabilities, collectively dubbed TETRA:BURST, including a critical flaw in the cryptographic TEA1 algorithm. The critical flaw amounts to a backdoor, reducing the effective 80-bit encryption key to a much smaller size that can be brute-forced.
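To see why that reduction matters, a back-of-the-envelope comparison helps; the roughly 32-bit figure is the effective key size Midnight Blue reported for weakened TEA1 keys, and the guess rate below is purely an assumed number chosen to give a sense of scale.

```python
# Rough brute-force cost comparison: the full 80-bit key space versus the reduced
# effective size reported for weakened TEA1 keys (roughly 32 bits).
GUESSES_PER_SECOND = 1e9          # assumed rate, for illustration only
SECONDS_PER_YEAR = 3600 * 24 * 365

for bits in (80, 32):
    seconds = 2.0 ** bits / GUESSES_PER_SECOND
    print(f"{bits}-bit key: ~{seconds:.2e} s (~{seconds / SECONDS_PER_YEAR:.2e} years) to try every key")

# 80 bits: tens of millions of years at this rate; 32 bits: a few seconds.
```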

Midnight Blue researchers said the flaw is concerning since private security services patrolling critical infrastructure such as airports and harbors may use radios encrypted with TEA1.

"Opening up the algorithms will definitely allow the research community to assess TETRA more effectively and to understand better the degree of security offered by TETRA," said Wouter Bokslag, co-founder of Midnight Blue.

Bokslag said that, in addition to the algorithms, the agency should also release the design documents needed to fully understand the cipher and protocol design.

Read the original:
European Telecom Body to Open-Source Radio Encryption System - BankInfoSecurity.com

Read More..

Introducing Cloaked AI: The IronCore Labs encryption SDK for … – PR Newswire

Privacy and security concerns are some of the top reasons AI projects fail to launch, which is why companies are turning to Cloaked AI to protect sensitive AI data

BOULDER, Colo., Nov. 17, 2023 /PRNewswire/ -- IronCore Labs, the leading provider of data protection solutions for cloud applications, today announced the launch of Cloaked AI, an SDK that protects vector embeddings with data-in-use encryption. Cloaked AI is the first solution of its kind and is a major breakthrough for companies building AI into their applications using vector databases that hold confidential information.

Large language models are shifting the paradigm for how AI products are built and where private data is stored. While private AI data used to be in the model, now it's captured as embeddings and stored in vector databases. These databases store everything from internal proprietary documents to chat histories and medical diagnoses.

"We're thrilled to launch Cloaked AI and provide a solution that allows companies to protect sensitive AI data while also maintaining full functionality," said IronCore Labs CEO, Patrick Walsh. "Cloaked AI keeps this new class of data secure and usable with strong encryption, unbypassable access controls, and optional bring your own keys (BYOK) and sovereign data functionality. And companies can still use whichever vector database best supports their use case."

Key features of Cloaked AI:

Cloaked AI is an encryption-in-use solution that protects vector embeddings without compromising usability or hampering AI use cases like anomaly detection, biometric identification, semantic search, and so on. Cloaked AI works with all known vector databases, including those from Pinecone, Weaviate, Qdrant, Elastic, and AWS OpenSearch.
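IronCore has not published Cloaked AI's internals in this release, so the following is only a conceptual sketch of why encrypted embeddings can stay searchable: a secret, key-derived orthogonal transform preserves distances between vectors, so nearest-neighbor ranking still works on the transformed data. The function and seed names are invented for the demo, and a production scheme would layer considerably more protection on top of an idea like this.

```python
import numpy as np

def keyed_rotation(dim: int, seed: int) -> np.ndarray:
    """Derive a fixed orthogonal matrix from a secret seed (illustrative stand-in for a key)."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
    return q

SECRET_SEED = 1234                      # hypothetical key material for the demo
rot = keyed_rotation(4, SECRET_SEED)

embeddings = np.array([[0.1, 0.9, 0.2, 0.4],
                       [0.1, 0.8, 0.3, 0.4],
                       [0.9, 0.1, 0.7, 0.0]])
query = np.array([0.1, 0.85, 0.25, 0.4])

protected_db = embeddings @ rot.T       # what would be stored in the vector database
protected_q = query @ rot.T             # queries are transformed with the same key

# Orthogonal transforms preserve distances, so nearest-neighbor ordering is unchanged
# even though the stored vectors no longer reveal the original embeddings directly.
plain_order = np.argsort(np.linalg.norm(embeddings - query, axis=1))
protected_order = np.argsort(np.linalg.norm(protected_db - protected_q, axis=1))
assert (plain_order == protected_order).all()
```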

To discover more about Cloaked AI and how application-layer encryption protects cloud data even when a network is breached or credentials are stolen, visit the Cloaked AI solutions page.

About IronCore Labs

IronCore Labs empowers cloud software companies with advanced encryption that keeps data useful while meeting data privacy regulations, addressing international transfer limitations, and fulfilling hold-your-own key (HYOK/BYOK) requests. Data owners decide who can access their data, monitor its use, and revoke that access at any time.

Related Links:

Riah Lawry, Marketing Director, IronCore Labs, [email protected]

This release was issued through WebWire. For more information, visit http://www.webwire.com.

SOURCE IronCore Labs

The rest is here:
Introducing Cloaked AI: The IronCore Labs encryption SDK for ... - PR Newswire

Read More..

Encrypted email service denies accused RCMP leaker’s claim it has … – CP24

Published Nov. 13, 2023 7:18 p.m. ET

Jim Bronskill

The Canadian Press

A company that provides encrypted email service is disputing a former RCMP official's claim that it secretly worked on behalf of an intelligence agency.

Cameron Jay Ortis testified in Ontario Superior Court that a foreign ally told him of a plan to encourage targets to begin using Tutanota, an online encryption service that he called a "storefront" operation created by intelligence agents to snoop on adversaries.

Ortis said he began enticing investigative targets through promises of secret information, with the actual aim of getting them to communicate with him via Tutanota.

In a statement on its website Monday, Tuta, as the company is now known, called Ortis's claim completely false and denied ties to any secret service.

"Tutanota has never and Tuta will never operate a 'storefront' for any intelligence or law enforcement agency," the German company said. "This would completely contradict our mission as a privacy protection organization."

Tutao GmbH, the company behind Tuta, was founded in 2011 by Arne Möhle and Matthias Pfau, who knew each other from studying together at university in Hannover, Germany, the statement added.

"To this day, the company is wholly owned by Matthias and Arne, and is not liable to anyone else."

Ortis, 51, has pleaded not guilty to violating the Security of Information Act by revealing secrets to three individuals in 2015 and trying to do so in a fourth instance, as well as breach of trust and a computer-related offence.

The Crown argues Ortis lacked authority to disclose classified material and that he was not doing so as part of some kind of undercover operation.

Ortis has testified that he did not tell his superiors about the plan to nudge targets to use Tutanota because he had been advised by the foreign counterpart to keep it quiet and, furthermore, targets had developed sources inside Canadian law enforcement agencies.

Reporters and the general public were excluded from the courtroom for Ortis's testimony earlier this month, but transcripts have now been released.

Ortis was the director of the RCMP's Operations Research group, which compiled and developed classified information on terror cells, transnational criminal networks, cybercriminals and commercial espionage.

He was arrested in September 2019.

The path to that arrest began the previous year when the RCMP analyzed the contents of a laptop computer owned by Vincent Ramos, chief executive officer of Phantom Secure Communications, who had been arrested in the United States.

An RCMP effort known as Project Saturation revealed that members of criminal organizations were known to use Phantom Secure's encrypted communication devices.

Ramos would later plead guilty to using his Phantom Secure devices to help facilitate the distribution of cocaine and other illicit drugs to countries including Canada.

A retired RCMP investigator has told the jury he found an email to Ramos from an unknown sender with portions of several documents, including mention of material from the federal anti-money laundering agency, the Financial Transactions and Reports Analysis Centre of Canada, known as Fintrac.

The sender would later offer to provide Ramos with the full documents in exchange for $20,000.

Ortis acknowledges he was behind the messages to Ramos and others, saying it was all part of the clandestine operation involving Tutanota.

During cross-examination, Ortis generally played down the sensitivity of materials dangled to targets.

At one point, Crown prosecutor John MacFarlane asked Ortis about a page from a Fintrac disclosure summary sent to Ramos.

MacFarlane suggested disclosure of the information confirmed to Ramos that Fintrac was investigating his company, revealing the number of suspicious transactions the agency had flagged.

Ortis replied that he believed Ramos was already aware of the authorities' interest, if not this particular set of data.

"When a company has issues with their accounts, and the banks notice suspicious transactions, the company is provided information regarding those suspicious transactions by the bank itself."

Excerpt from:
Encrypted email service denies accused RCMP leaker's claim it has ... - CP24

Read More..

Dozens of VPNs & Shadowsocks Named in Leaked Russian … – TorrentFreak


A document originating from Russia's Ministry of Transport shines more light on the government's plans to crack down on encryption tools that help people to evade monitoring and censorship. The leaked document lists dozens of VPN service targets and, for the first time, the open source encryption protocol Shadowsocks, best known for its ability to evade firewalls, China's Great Firewall in particular.

As Russia tightens its grip on encrypted communications and tools with the ability to bypass government censorship, it was recently confirmed that 167 VPN services are actively blocked after failing to comply with state requirements.

With that total expected to grow in the months ahead, a leaked document originating from Russia's Ministry of Transport reveals details of what telecoms watchdog Roscomnadzor has planned for the near-term.

The document, dated November 10, 2023, was sent by the Ministry of Transport to organizations in the transport sector. After an unofficial appearance on the ZaTelecom Telegram channel, local news outlet Kommersant sought comment from both the Ministry and Roscomnadzor. Neither responded.

The first page of the letter (original on the left, Yandex OCR translation on the right) seeks input from organizations currently using any of the VPN services or protocols listed on the second page.

The text strongly implies that the services and protocols listed are viewed as potential threats to the stability, security and integrity of Russian internet/information systems and telecommunications in general.

A more pragmatic reading might conclude that the services and protocols present zero technical threat, but do limit the government's ability to control the narrative. That narrative includes claims that encrypted communications represent a threat to the stability of the internet, which of course they do not.

The letter's second page is a 49-item list containing the names of well-known and lesser-known VPN services. In the order they appear, some of the most notable inclusions are Private Internet Access (PIA), Ivacy Private VPN, PrivadoVPN, and PureVPN.

When a VPN appears on a list like this, it usually indicates a refusal to cooperate with Russian authorities, such as by granting permission to inspect user data, communications or whatever else is on the government's mind at any given time.

In that sense, an appearance might not be as damaging to a VPN's image as some might expect; quite the opposite, in fact. That being said, item 49 on the list above shows that Russia intends to crack down on Shadowsocks, a protocol that in itself cannot be forced or coerced into compliance.

Shadowsocks is an open source encryption protocol created over a decade ago by a Chinese developer known as clowwindy and is perhaps best known for its anti-Great Firewall capabilities.

On a basic level, Shadowsocks clients offer a way to connect to SOCKS5 proxies securely using an encrypted tunnel. As standard it isn't a VPN and, more importantly, doesn't look like one to those hoping to shut VPNs down. The people behind these projects are more easily identified, however.
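On the mechanics mentioned above: in practice a Shadowsocks client runs locally and exposes a SOCKS5 listener that ordinary applications point at, while the encrypted tunnel to the remote server stays invisible to them. A hedged sketch, assuming a client is already listening on the common default of 127.0.0.1:1080:

```python
# Requires: pip install requests[socks]
# Assumes a Shadowsocks (or any SOCKS5) client is already listening on 127.0.0.1:1080.
import requests

proxies = {
    "http": "socks5h://127.0.0.1:1080",   # socks5h = resolve DNS through the proxy as well
    "https": "socks5h://127.0.0.1:1080",
}

resp = requests.get("https://example.com", proxies=proxies, timeout=10)
print(resp.status_code)
```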

Developers like clowwindy can find themselves under extreme pressure to behave in a particular way. The original Shadowsocks repo on GitHub reveals that even the most robust protocols can be "removed according to regulations."

Fortunately, the Shadowsocks genie is never going back in the bottle; perhaps Russia forgot to ask China about that one, or simply believes it can do better. The theory is that Russia plans to draw up a whitelist of organizations that use the services above in a government-approved way, so they don't find themselves inadvertently blocked. That may suggest the government has something aggressive in mind or perhaps faces limitations when it comes to pinpoint blocking.

See the original post here:
Dozens of VPNs & Shadowsocks Named in Leaked Russian ... - TorrentFreak

Read More..

The evolution of ransomware: Lessons for the future – Security Intelligence

Ransomware has been part of the cyber crime ecosystem since the late 1980s and remains a major threat in the cyber landscape today. Evolving ransomware attacks are becoming increasingly more sophisticated as threat actors leverage vulnerabilities, social engineering and insider threats. While the future of ransomware is full of unknown threats, we can look to the past and recent trends to predict the future.

While the first ransomware incident was observed in 1989, ransomware attacks rapidly escalated in 2005.

Ransomware continues to be a relevant threat in today's environment. Since 2021, ransomware groups have become more sophisticated by updating their tactics, forming new groups, finding new targets and taking advantage of outside factors. These changes help ransomware groups stay ahead of security measures and secure larger ransom payments.

Tactics, techniques and procedures (TTP) changes

Groups still employ double extortion tactics; however, certain groups now use triple extortion and quadruple extortion tactics to convince victims to pay ransom. Triple extortion increases the potential of a victim paying the ransom, especially for critical infrastructure organizations. In triple extortion, threat actors threaten distributed denial of service (DDoS) attacks in addition to encrypting systems and exposing data if a ransom is not paid, potentially resulting in long periods of downtime for public-private interdependent sectors like the government, healthcare or utilities. In comparison to triple extortion, quadruple extortion pressures victims by contacting customers and business partners, informing them that their sensitive information has been stolen. This adds another layer of pressure for the victims to pay.

To gain initial access to victims, ransomware groups employ various methods, including contacting individuals who work within the target organization (insider threat), posting advertisements requesting initial access to a specific target and working with initial access brokers who sell existing access to various targets.

According to CrowdStrike, 2022 saw an increase in initial access broker offerings, with more than 2,500 posts offering initial access; the top sectors in these offerings included academia and technology companies. This was a 112% increase compared to 2021, making it clear that ransomware groups have an interest in purchasing initial access instead of obtaining it on their own.

Even more new ransomware groups

Many factors could have led to the increase in new ransomware groups, including affiliates working for more than one group and code leaks. Since 2021, many observed ransomware source code and builder leaks have enabled groups with little-to-no experience to create or modify their ransomware. Code leaks, including Babuk, Conti, Lockbit3.0 and Chaos, have allowed new groups to produce more frequent attacks, thereby changing the threat landscape. However, researchers have observed that groups that use these leaked builders tend to ask for a lower ransom payment. This may indicate that these groups are trying to avoid attention while testing their new variants.

Targeting Linux and ESXi machines

Ransomware groups continue to target operating systems and platforms such as Linux or ESXi machines. These are prime targets because they often host file servers, databases and web servers. Groups will also create Linux encryptors built specifically to encrypt ESXi virtual machines. Linux continues to be the most popular operating system for embedded, constrained and Internet of Things devices used by critical infrastructure sectors like manufacturing and energy. In addition, attacks on Linux systems increased by 75% in 2022 and will likely continue to increase in the latter half of 2023.

Influence of global and geopolitical factors

Recent global and geopolitical factors have also influenced the increase in ransomware attacks. Global factors, including COVID-19, made the healthcare sector an appealing target, whether to obtain information on vaccines or an opportunistic attack where overwhelmed hospitals were more likely to pay a ransom. Geopolitical tensions and sanctions also continue to influence ransomware attacks. APT groups linked to governments in Russia, North Korea and China have utilized ransomware for financial gain and disruption.

Ransomware attacks will continue to evolve and become more sophisticated, advanced and targeted. Threat actors are mastering a new technique where attackers exploit vulnerabilities in the supply chain to launch large extortion campaigns. For instance, this year, Cl0p ransomware infiltrated MOVEit, a secure managed file transfer application, which continues to impact hundreds of companies. For a bigger payout, threat actors will likely continue to find initial access to companies that many organizations rely on.

There will likely be an increase in cloud-aware ransomware due to companies continuing to move their critical data into cloud storage. Ransomware groups could exploit cloud services, applications and infrastructure to gain initial access. It is an attractive opportunity for threat actors due to the larger amounts of critical information available to target and hold for ransom.

Groups will continue to use intermittent encryption, a process in which only parts of files are encrypted. This approach can evade products like endpoint security and extended detection and response, making the attack harder for a security system to detect. By encrypting only portions of the data, intermittent encryption also enables a faster decryption process, which might entice a victim to pay the ransom.
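A minimal, purely illustrative sketch of the partial-encryption idea on an in-memory buffer (this is not any group's actual code): only alternating 16-byte chunks are run through AES-CTR, so far fewer bytes are touched than in a full pass, which is what makes the pattern both fast and harder for byte-level heuristics to flag.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()

data = bytes(range(256)) * 4          # stand-in for file contents
CHUNK = 16
out = bytearray()
for i in range(0, len(data), CHUNK):
    chunk = data[i:i + CHUNK]
    # Encrypt even-numbered chunks, leave odd-numbered chunks untouched.
    out += encryptor.update(chunk) if (i // CHUNK) % 2 == 0 else chunk

print(f"{len(data)} bytes total, roughly {len(data) // 2} bytes actually encrypted")
```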

Encryptionless ransom attacks will also continue; these are known as extortion or data theft attacks. These attacks have been used for many years and continue to trend up and down, depending on the need and threat actor sophistication. In these attacks, groups steal data and threaten to expose it instead of encrypting it.

With the development of artificial intelligence (AI) and AI models like ChatGPT, ransomware groups will likely follow the trend and utilize AI tools like chatbots, AI-developed malware, automated processes and machine learning algorithms. AI will likely help groups develop more advanced and sophisticated techniques to evade current ransomware prevention and guidance. We can expect all types of ransomware threat actors to leverage AI to help them complete successful attacks.

Looking back on the evolution of ransomware makes one thing clear: the future of cybersecurity is likely to be as unpredictable as its past. Still, the history of ransomware attacks offers much to learn. By maintaining a solid and adaptable cybersecurity strategy, organizations have a better chance of navigating the challenges to come.


Continued here:
The evolution of ransomware: Lessons for the future - Security Intelligence

Read More..

AI disinformation campaigns pose major threat to 2024 elections – Help Net Security

AI, post-quantum cryptography, zero trust, cryptography research, and election security will shape cybersecurity strategies in the present and for 2024, according to NTT.

As the world emerged from the pandemic and continued to adapt to the rapid implementation of digital transformation, businesses witnessed the rise of sophisticated ransomware attacks, state-sponsored cyber espionage and the constant need to secure the ever-expanding IoT.

AI promises to impact both cybercriminal behavior and cybersecurity strategies in 2024. Malicious actors will use AI to continue to accelerate malware and exploit development and for passive reconnaissance work to identify targets, software and weaknesses. AI will also reduce the cost of attacks through automated workflows, enabling more sophisticated phishing and disinformation campaigns.

However, AI will also impact cybersecurity strategies and technologies by enhancing detection and analysis capabilities, improving the response to disinformation, phishing, malware and anomalous behavior. It will also pave the way for automated, efficient security operations, addressing workforce challenges.

"Cyber criminals and state actors are already taking advantage of generative AI to create phishing campaigns, write malicious code or identify vulnerable systems to exploit," said Mihoko Matsubara, Chief Cybersecurity Strategist, NTT.

"However, AI capabilities are not only being used for nefarious purposes. Cybersecurity professionals have also found generative AI helpful to automate some tasks, data analysis and vulnerability research. For example, NTT Security's research noticed that generative AI maximized the efficiency and accuracy of identifying phishing sites quickly," Matsubara continued.

The continued advancement of AI will also force conversations in the cybersecurity industry around better, more secure posture across all business functions. In addition, the recent release of the White House Executive Order on AI is expected to drive AI-related initiatives in both public and private sectors, further emphasizing the significance of proper AI security hygiene.

2024 will bring with it presidential campaigns in Taiwan and the United States. As a result, malicious actors will increasingly use generative AI to spread disinformation. This continues a concerning trend seen in recent elections, with bots and bot farms contributing to divisiveness and the dissemination of intentionally misleading or entirely false content, including quotes and memes. In addition, implementing essential cybersecurity measures for systems and ensuring physical security of voting machines, for example, remains critical.

"While the security of voting machines has improved, it remains a concern among voters," said David Beabout, CISO, NTT Security. The ability to validate and log results manually to address questionable issues will become increasingly important in the United States. This shift toward resiliency and result validation is expected to gain more prominence in 2024.

The security landscape is becoming increasingly cloud-native, emphasizing the need for enhanced authentication methods to counter emerging threats, such as bypassing MFA through techniques like JSON Web Token (JWT) injection attacks. As a result, zero trust will evolve from a hot trend into a framework that will be implemented across many parts of organizations to enhance security defenses.
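As a generic illustration of the defensive side of that problem (a PyJWT sketch under assumed names, not a description of any specific product or incident): a verifier that pins both the key and the accepted algorithm list rejects injected or tampered tokens rather than trusting whatever the token's own header claims.

```python
# pip install pyjwt
import base64
import json

import jwt

SECRET = "server-side-secret"   # illustrative; real services typically use asymmetric keys or a JWKS

# The service issues a signed token after a successful MFA login.
token = jwt.encode({"sub": "alice", "mfa": True}, SECRET, algorithm="HS256")

# Verifier: pin the key AND the accepted algorithms; never derive them from the token header.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
assert claims["sub"] == "alice" and claims["mfa"] is True

# An attacker who injects their own claims cannot produce a valid signature without the key.
header, _, signature = token.split(".")
fake_payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "mallory", "mfa": True}).encode()
).rstrip(b"=").decode()
tampered = f"{header}.{fake_payload}.{signature}"

try:
    jwt.decode(tampered, SECRET, algorithms=["HS256"])
except jwt.InvalidTokenError as exc:
    print("rejected injected token:", exc)
```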

"Zero trust is no longer a buzzword, but a core concept that organizations will implement to improve their cybersecurity measures," said Taro Hashimoto, CSIS Visiting Fellow & Senior Manager of Cybersecurity, NTT.

"The concept of zero trust is all about risk-based management and continuous process. This includes the implementation of a variety of underlying technologies, including Identity and Access Management (IAM), Endpoint Detection & Response (EDR), Cloud Access Security Broker (CASB), Data Loss Prevention (DLP), Security Information & Event Management (SIEM), etc., that seamlessly integrate within an organization's cybersecurity strategy," Hashimoto added.

While 2024 is unlikely to be the year where we see widespread adoption of quantum technology by hackers and threat actors due to its nascent stage and substantial costs in comparison to existing effective methods, there is an urgency to prepare for its arrival. Measures are already underway with the White House issuing a memo instructing federal agencies to initiate their preparations and NIST publishing draft versions of several potential post-quantum cryptography (PQC) algorithms. Given the extensive time required to migrate systems, in 2024 we will see a continued focus on preparing systems and applications for the adoption of quantum computing.

"While the timing of threats posed by scalable quantum computers is still speculative, the need to prepare for this threat is real," said Kazuhiro Gomi, President & CEO of NTT Research. With NIST's expected release of more PQC standards in 2024, industries, governments, and others are expected to begin ramping up their migration planning efforts. This is based on the concern that malicious actors are currently collecting ongoing communication data and could compromise its security once scalable quantum computers become available. In this regard, it's important to note that cryptography researchers are working on fortifying the security of advanced cryptographic methods, such as attribute-based encryption (ABE), for PQC readiness.

The challenge ahead lies in managing the security of encryption for those without access to quantum capabilities, as well as defending against those who possess such capabilities once they become more prevalent.

In 2024, we expect to see cryptography and encryption research continue to explore new ways to safeguard data, both at rest and in the cloud. The evolution of advanced encryption systems, like attribute-based encryption (ABE), presents an intriguing prospect for real-world adoption. However, privacy concerns remain due to the absence of assured privacy in interactions with AI models. As these interactions may involve even more sensitive information than conventional search queries, it's conceivable that researchers will delve into the prospect of enabling private engagements with such models.

"One potential area of interest across the cryptography research community is to expand private search queries to encompass private interactions with AI systems," said Dr. Brent Waters, Director of the Cryptography & Information Security (CIS) Lab, NTT Research. "The rapid rise and utility of large language models like ChatGPT has transformed various industries. However, privacy concerns could be holding back the potential of these technologies. I imagine that the research community will examine the possibility of having private interactions with these types of AI technologies."

With the advancement of technologies such as artificial intelligence and quantum computing, 2024 will be the year that organizations implement and innovate through technology. Not only will businesses implement a zero trust strategy as a baseline cybersecurity practice, but they will also begin to capitalize on advanced cybersecurity technologies made possible through fundamental research and R&D, such as ABE, to safeguard their business and data and preserve privacy.

More:
AI disinformation campaigns pose major threat to 2024 elections - Help Net Security

Read More..

SIPO warns election hopefuls on crowdfunding campaigns – Connacht Tribune Group

Bradley Bytes a sort of political column with Dara Bradley

Local election candidates hoping to win seats on Galway City and County Councils next year are turning to GoFundMe pages to finance their campaigns.

And while crowd funding online is a relatively easy way to generate cash to buy posters and fund newspaper and social media advertisements, election hopefuls need to be careful they don't fall foul of SIPO rules.

The Standards in Public Office Commission has confirmed to Bradley Bytes that there are specific obligations on candidates in the European Parliament elections next year, and in the Local Government Elections.

SIPO will publish guidelines specifically relating to the 2024 Local and European Elections in advance of the campaigns and each candidate will receive a copy of these guidelines.

But using past guidelines as an indication of future guidelines, candidates will need to keep a record of all donations they receive.

Candidates should know the name, description, citizenship, and postal address of the donor, the date on which the donation was received, whether the donation was requested (and if so, the name and address of the person who requested it) and whether a receipt was issued in respect of the donation, according to SIPO guidelines in last year's Dáil by-election in Dublin.

SIPO advised that candidates must ensure that any donations accepted are not prohibited.

There is a list outlining what constitutes a prohibited donation, but, for example, acceptance of an anonymous donation exceeding a value of €100 is prohibited, according to the legislation.

Cash donations of more than €200 are prohibited and so too are donations from people outside of Ireland, which is of particular interest for crowd funding campaigns.

The maximum value of donations a candidate can receive from one person is €1,000 in one calendar year.

The Commission advises that, if a candidate is using a crowd funding service, they should make it clear to donors that the acceptance of prohibited donations is not permitted.

The candidate may seek to work with the service to put in place measures to support this, according to SIPO.

Of course, regulations are only as good as the organisation that enforces them.

And SIPO has been shown to be a largely toothless organisation; if Galway candidates want to circumvent rules on donations, online or otherwise, they'll find a way.

Follow this link:
SIPO warns election hopefuls on crowdfunding campaigns - Connacht Tribune Group

Read More..

These are OpenAIs board members who fired Sam Altman – Hindustan Times

ChatGPT-maker OpenAI said on Friday it has removed its co-founder and CEO Sam Altman after a review found he was "not consistently candid" in his communications with the board of directors. "The board no longer has confidence in his ability to continue leading OpenAI," the artificial intelligence company said in a statement.

OpenAI said its board consists of the company's chief scientist, Ilya Sutskever, and three non-employees: Quora CEO Adam D'Angelo, tech entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology.

Ilya Sutskever is a Russian-born Israeli-Canadian computer scientist specialising in machine learning. Sutskever co-founded OpenAI and holds a prominent role within the organisation.

Sutskever is credited as a co-inventor, alongside Alex Krizhevsky and Geoffrey Hinton, of the AlexNet neural network. He is also among the co-authors of the AlphaGo paper, Live Mint reported.

Sutskever holds a BSc in mathematics and computer science from the University of Toronto, where he studied under the mentorship of Geoffrey Hinton. His professional trajectory includes a brief postdoctoral stint with Andrew Ng at Stanford University, followed by a return to the University of Toronto to join DNNResearch, a venture stemming from Hinton's research group.

Google later acquired DNNResearch, appointing Sutskever as a research scientist at Google Brain, where he contributed to significant developments, including the creation of the sequence-to-sequence learning algorithm and work on TensorFlow. Transitioning from Google in late 2015, Sutskever took on the role of co-founder and chief scientist at OpenAI.

This year, he announced that he would co-lead OpenAI's new "Superalignment" project, which tries to solve the alignment of superintelligences in four years.

D'Angelo was born on August 21, 1984. An American internet entrepreneur, D'Angelo is known for co-founding and helming Quora. Previously, he held key positions at Facebook, serving as its chief technology officer and later as vice president of engineering until 2008. In June 2009, D'Angelo embarked on the Quora venture, personally injecting $20 million during their Series B financing phase.

D'Angelo graduated with a BS in computer science from the California Institute of Technology in 2002. His involvement has extended to advisory and investment roles, including advising and investing in Instagram before its acquisition by Facebook in 2012. In 2018, he joined the board of directors of OpenAI.

Tasha McCauley is an independent director at OpenAI and is recognised for her work as a technology entrepreneur in Los Angeles. She is also known in the public eye as the spouse of American actor Joseph Gordon-Levitt.

McCauley serves as the CEO of GeoSim Systems, where her recent endeavours focus on the creation of highly detailed and interactive virtual models of real cities. She has also co-founded Fellow Robots, held roles teaching robotics and served as the director of the Autodesk Innovation Lab at Singularity University.

Helen Toner is director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET). She also serves in an uncompensated capacity on the non-profit board of directors for OpenAI. She previously worked as a senior research analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a research affiliate of Oxford University's Center for the Governance of AI.


See the original post:
These are OpenAIs board members who fired Sam Altman - Hindustan Times

Read More..