
What was (A)I made for? – by The Ink – The.Ink

The real A.I. threat? Not some future Matrix turning us all into rechargeable batteries, but today's A.I. industry demanding all of our data, labor, and energy right now.

The vast tech companies behind generative A.I. (the latest iteration of the tech, responsible for all the hyperrealistic puppy videos and uncanny automated articles) have been busy exploiting workers, building monopolies, finding ways to write off their massive environmental impacts, and disempowering consumers while sucking up every scrap of data they produce.

But generative A.I.'s hunger for data far outstrips that of earlier digital tools, so firms are doing this on a vaster scale than we've seen in any previous technology effort. (OpenAI's Sam Altman is trying to talk world leaders into committing $7 trillion to his project, a sum exceeding GDP growth for the entire world in 2023.) And that's largely in pursuit of a goal (A.G.I., or artificial general intelligence) that is, so far as anyone can tell, more ideological than useful.

Karen Hao, who's covered the A.I. industry for MIT Technology Review, The Wall Street Journal, and most recently The Atlantic, is one of the few writers who has focused specifically on the human, environmental, and political costs of emerging A.I. technology. Below, she tells us about the very physical supply chain behind digital technologies, the mix of magical thinking and profit maximization that drives A.I.'s most influential advocates, how A.I. advances might jeopardize climate goals, and about who stands to gain and lose the most from widespread adoption of generative A.I.

A lot has been promised about what A.I. will supposedly do for us, but you've been writing mostly about what A.I. might cost us. What are the important hidden costs people are missing in this A.I. transition that we're going through?

I like to think about the fact that A.I. has a supply chain like any other technology; there are inputs that go into the creation of this technology, data being one, and then computational power or computer chips being another. And both of those have a lot of human costs associated with them.

First of all, when it comes to data, the data comes from people. And that means that if the companies are going to continue expanding their A.I. models and trying to, in their words, deliver more value to customers, that fuels a surveillance capitalism business model where they're continuing to extract data from us. But the cleaning and annotation of that data requires a lot of labor, a lot of low-income labor. Because when you collect data from the real world, it's very messy, and it needs to be curated and neatly packaged in order for a machine learning model to get the most out of it. And a lot of this work (this is an entire industry now, the data annotation industry) is exported to developing countries, to Global South countries, just like many other industries before it.

Have we just been trained to miss this by our experience with the outsourcing of manufacturing, or by what's happened to us as consumers of online commerce? And is this really just an evolution of what we've been seeing with big tech already?

There's always been outsourcing of manufacturing. And in the same way, we now see a lot of outsourced work happening in the A.I. supply chain. But the difference is that these are digital products. And I don't think people have fully wrapped their heads around the fact that there is a very physical and human supply chain to digital products.

A lot of that is because of the way that the tech industry talks about these technologies. They talk about it like, "It comes from the cloud, and it works like magic." And they don't really talk about the fact that the magic is actually just people, teaching these machines, very meticulously and under great stress and sometimes trauma, to do the right things. And the A.I. industry is built on surveillance capitalism, as internet platforms in general have been built on this ad-targeting business that's in turn been built on the extraction of our data.

But the A.I. industry is different in the sense that it has an even stronger imperative to extract that data from us, because the amount of data that goes into building something like ChatGPT completely dwarfs the amount of data that was going into building lucrative ad businesses. We've seen these stories showing that OpenAI and other companies are running out of data. And that means that they face an existential business crisis: if there is no more data, they have to generate it from us, in order to continue advancing their technology.


Connecting these issues seems like the way people really need to be framing this stuff, but it's a frame that most people are still missing. These are all serious anti-democratic threats.

Read more here:

What was (A)I made for? - by The Ink - The.Ink

Read More..

What is AGI? How is it linked with Chat GPT 5? – Analytics Insight

Artificial General Intelligence (AGI) is an advanced form of artificial intelligence that aims to mimic human intelligence across a wide range of cognitive tasks. Unlike narrow AI systems, which are designed for specific tasks like image recognition or natural language processing, AGI seeks to exhibit general intelligence comparable to that of humans, allowing it to learn, reason, and adapt to new situations autonomously.

AGI represents the pinnacle of AI research, where machines possess the ability to understand, learn, and apply knowledge across various domains, matching the breadth of human intelligence. Achieving AGI requires breakthroughs in several key areas, including machine learning, natural language understanding, reasoning, and problem-solving.

ChatGPT-5, an advanced language model developed by OpenAI, represents a significant step towards the realization of AGI. While ChatGPT-5 is not an AGI system itself, it exhibits characteristics that align with the goals of AGI research. Here's how ChatGPT-5 is linked to the concept of AGI:

Natural Language Understanding: ChatGPT-5 demonstrates a remarkable ability to understand and generate human-like text based on context. It can engage in conversations, answer questions, and generate coherent responses across a wide range of topics, showcasing a level of language understanding that approaches human fluency.

Adaptability and Learning: AGI systems are expected to exhibit adaptive learning capabilities, allowing them to acquire new knowledge and skills over time. ChatGPT-5 leverages advanced machine learning techniques, such as transformer architectures and large-scale training data, to continuously improve its performance and adapt to different contexts and tasks.

Generalization: AGI systems should be capable of generalizing knowledge across diverse domains, applying insights gained from one task to solve new and unfamiliar problems. While ChatGPT-5 is primarily a language model, its ability to generate text spans a wide range of topics and domains, indicating a degree of generalization in its understanding and reasoning abilities.

Human-like Interaction: AGI systems are envisioned to interact with humans in a natural and intuitive manner, much like conversing with another person. ChatGPT-5 simulates human-like conversation, engaging users in dialogue and providing responses that are contextually relevant and coherent, fostering a sense of interaction and engagement.

Continuous Improvement: AGI research emphasizes the importance of continuous improvement and self-learning mechanisms. ChatGPT-5 incorporates feedback loops and iterative training processes, allowing it to learn from user interactions and refine its language generation capabilities over time, akin to the learning process observed in human intelligence.

Artificial General Intelligence (AGI) represents the pursuit of creating intelligent systems that exhibit human-like cognitive abilities across a broad range of tasks. While AGI remains a long-term goal of AI research, models like ChatGPT-5 offer glimpses into the potential of achieving AGI-like capabilities. By leveraging advanced machine learning techniques and large-scale training data, ChatGPT-5 demonstrates impressive natural language understanding, adaptability, generalization, and human-like interaction, highlighting its role in advancing the quest for AGI. As AI technology continues to evolve, the intersection between models like ChatGPT-5 and the principles of AGI research paves the way for future breakthroughs in artificial intelligence and human-machine interaction.


Link:

What is AGI? How is it linked with Chat GPT 5? - Analytics Insight

Read More..

The evolution of artificial intelligence (AI) spending by the U.S. government | Brookings – Brookings Institution

In April 2023, a Stanford study found rapid acceleration in U.S. federal government AI spending in 2022. In parallel, the House Appropriations Committee was reported in June 2023 to be focusing on advancing legislation to incorporate artificial intelligence (AI) in an increasing number of programs, and third-party reports tracking the progress of this legislation corroborate those findings. In November 2023, both the Department of Defense (DoD) and the Department of State (DoS) released AI strategies, illustrating that policy is starting to catch up to, and potentially shape, expenditures. Recognizing the criticality of this domain for government, The Brookings Institution's Artificial Intelligence and Emerging Technology Initiative (AIET) has been established to advance good governance of transformative new technologies and to promote effective solutions to the most pressing challenges posed by AI and emerging technologies.

In this second in a series of articles on AI spending in the U.S. federal government, we continue to follow the trail of money to understand the federal market for AI work. In our last article, we analyzed five years of federal contracts. Key findings included that over 95% of AI-labeled expenditures were in NAICS 54 (professional, scientific, and technical services); that within this category over half of the contracts and nearly 90% of contract value sit within the Department of Defense; and that the vast majority of vendors had a single contract, reflecting a very fragmented vendor community operating in very narrow niches.

All of the data for this series has been taken directly from federal contracts and was consolidated and provided to us by Leadership Connect. Leadership Connect has an extensive repository of federal contracts and their data forms the basis for this series of papers.

In this analysis, we examined all new federal contracts since our original report that had the term "artificial intelligence" (or "AI") in the contract description. As such, our dataset included 489 new contracts to compare with 472 existing contracts. Existing values are based on our previous study, tracking the five years up to August 2022; new values are based on the year following, to August 2023.

Out of the 15 NAICS code categories we identified in the first paper, only 13 NAICS codes were still in use from previous contracts and only five were used in new contracts, demonstrating a refinement and focusing of the categorization of AI work. In the current analysis, we differentiate between funding obligated and potential value of award, as the former is indicative of current investment and the latter is representative of future appetite. During the period of the study, the value of funding obligated increased over 150%, from $261 million to $675 million, while the potential value of award increased almost 1200%, from $355 million to $4.561 billion. For funding obligated, NAICS 54 (Professional, Scientific and Technical Services) was the most common code used, followed by NAICS 51 (Information and Cultural Industries); NAICS 54 increased from $219 million for existing contracts to $366 million for new contracts, while NAICS 51 grew from $5 million for existing contracts to $17 million for new contracts. For potential value of award, NAICS 54 increased from $311 million for existing contracts to $1.932 billion for new contracts, while NAICS 51 grew from $5 million for existing contracts to $2.195 billion for new contracts, eclipsing all other NAICS codes.
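
To make the headline percentages above easy to verify, here is a minimal Python sketch (our own arithmetic, not Brookings' methodology) that recomputes the two growth rates from the dollar values quoted in the text:

```python
def pct_growth(old: float, new: float) -> float:
    """Simple percentage change from old to new."""
    return (new - old) / old * 100

# Dollar figures (in millions of USD) quoted in the paragraph above.
obligated_old, obligated_new = 261, 675        # funding obligated
potential_old, potential_new = 355, 4_561      # potential value of award

print(f"funding obligated growth: {pct_growth(obligated_old, obligated_new):.0f}%")  # ~159%, i.e. over 150%
print(f"potential value growth:   {pct_growth(potential_old, potential_new):.0f}%")  # ~1185%, i.e. almost 1200%
```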

The number of federal agencies with contracts rose from 17 to 23 in the last year, with notable additions including the Department of the Treasury, the Nuclear Regulatory Commission, and the National Science Foundation. With astounding growth from 254 contracts to 657 in the last year, the Department of Defense continues to dominate in AI contracts, with NASA and Health and Human Services being a distant second and third with 115 and 49 contracts respectively. From a potential value perspective, defense rose from $269 million (76% of all federal funding) to $4.323 billion (95%). In comparison, NASA and HHS increased their AI contract values by between 25% and 30% each, but still fell to 1% each of the overall federal government AI contract potential value (from 11% and 6% respectively) due to the 1500% increase in DoD AI contract values. In essence, DoD grew its AI investment to such a degree that all other agencies became a rounding error.

For existing contracts, there were four vendors with over $10 million in contract value, of which one was over $50 million. For new contracts, there were 205 vendors with over $10 million in contract value, of which six were over $50 million and a seventh was over $100 million. The driver for the change in potential value of contracts appears to be the proliferation of $15 million and $30 million maximum-potential-value contracts, of which 226 and 25 were awarded respectively in the last year, but none of which have funds obligated to them yet. We posit that these are contract vehicles established at the maximum signing authority value for future funding allocation and expenditure. It is notable that only one of the firms in the top 10 by potential contract value in the previous study was in the top 10 of new contract awards (MORSE Corp), that the top firm in previous years did not receive any new contracts (AI Signal Research), and that the new top firm did not receive any contracts in previous study years (Palantir USG).

In our previous analysis, we reported 62 firms with multiple awards, while over the past year there were 72 firms receiving multiple awards. However, the maximum number of awards has changed significantly: the highest number of existing contracts held by a single vendor was 69 (AI Solutions), while for new contracts the maximum was four. In fact, there were 10 vendors with four or more existing contracts but only three vendors with four or more new ones (Booz Allen Hamilton, Leidos, and EpiSys Science). This reflects a continued fragmented vendor community that is operating in very narrow niches with a single agency.

Growth in private sector R&D has been above 10% per year for a decade, while the federal government has shown more modest growth over the last five years after a period of stagnation. However, the 1200% one-year increase in the potential value of AI awards, over $4.2 billion, is indicative of a new imperative in government AI R&D leading to deployment.

In our previous analysis, we noted that the vendor side of the market was highly fragmented, with many small players whose main source of revenue was likely a single contract with a nearby federal client. The market remains fragmented with smaller vendors, but larger players such as Accenture, Booz Allen Hamilton, General Atomics, and Lockheed Martin are moving quickly into the market, following, or perhaps driving, the significant increase in the value of contracts. In our previous analysis, we identified that these larger firms would be establishing beachheads for entry into AI, and we expect this trend to continue with other large defense players such as RAND, Northrop Grumman, and Raytheon, among others, as vendors integrate AI into their offerings.

From the client side, we had previously discussed the large number of relatively small contracts demonstrating an experimental phase of purchasing AI. The explosion of large, maximum-potential-value contracts appears to be a shift from experimentation to implementation, which would be bolstered by the shift from almost exclusively NAICS 54 to a balance between NAICS 54 and 51. While research and experimentation are still ongoing, there are definite signs of vendors bringing concrete technologies and systems to the federal market. The thousand flowers are starting to bloom, and agencies, particularly DoD, are tending to them carefully.

We had identified that the focus on federal AI spending was DoD and over the last year, this focus has proportionally become almost total. Defense AI applications have long been touted as a potential long term growth area and it appears that 2022/23 has been a turning point in the realization of those aspirations. While other agencies are continuing to invest in AI, either adding to existing investment or just starting, DoD is massively investing in AI as a new technology across a range of applications. In January 2024, Michael C. Horowitz (deputy assistant secretary of defense for force development and emerging capabilities) confirmed a wide swath of investments in research, development, test and evaluation, and new initiatives to speed up experimentation with AI within the department.

We have noted in other analyses that there are different national approaches to AI development: the U.S. and its allies have been focusing on the traditional guardrails of technology management (e.g., data governance, data management, education, public service reform), and so are spreading their expenditures between governance and capacity development, while potential adversaries are almost exclusively focused on building up their R&D capacity and are largely ignoring the guardrails. While we had identified risks with a broad-based approach leading to a winnowing of projects for a focused ramp-up of investment, we instead see a more muscular approach in which a wide range of projects are receiving considerable funding. The vast increase in overall spending, particularly in defense applications, appears to indicate that the U.S. is substantially ramping up its investment in this area to address the threat of potential competitors. At the same time, public statements by federal agency leaders often strike a balance between the potential benefits and the risks of AI, outlining potential legislative and policy avenues as agencies seek means of controlling the potential negative impacts of AI. The recent advancement of U.S. Congress legislation and agency strategies, coupled with the significant investment increase identified in the current study, demonstrates that well-resourced countries such as the U.S. can have both security and capacity when it comes to AI.

The current framework for solving this coordination issue is the National Artificial Intelligence Initiative Office (NAIIO), which was established by the National Artificial Intelligence Initiative Act of 2020. Under this Act, the NAIIO is directed to sustain consistent support for AI R&D, support AI education ... support interdisciplinary AI research ... plan and coordinate Federal interagency AI activities ... and support opportunities for international cooperation with strategic AI ... for trustworthy AI systems. While the intent of this Act and its formal structure are admirable, the current federal spending does not seem to reflect these lofty goals. Rather, we are seeing a federal market that appears to be much more chaotic than desirable, especially given the lead that China already has on the U.S. in AI activities. This fragmented federal market may resolve itself as the recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence directs agency engagement on the issue of monitoring and regulation of AI.

In conclusion, the analysis of the U.S. federal government's AI spending over the past year reveals a remarkable surge in investment, particularly within the DoD. The shift from experimental contracts to large, maximum-potential-value contracts indicates a transition from testing to implementation, with a significant increase in both funding obligated and potential value of awards. The federal government's focus on AI, as evidenced by the substantial investments and legislative initiatives, reflects a strategic response to global competition and security challenges. While the market remains fragmented with smaller vendors, the concentration of investments in defense applications signals a turning point in the realization of AI's potential across various government agencies. The current trajectory, led by the DoD, aligns with the broader national approach that combines governance and capacity development to ensure both security and innovation in AI technologies.

As we noted in our first article in this series, if one wants to know what the real strategy is, one must follow the money. In the case of the U.S. federal government, the strategy is clearly focused on defense applications of AI. The spillover of this focus is that defense and security priorities, needs, and values are likely to become the dominant ones in government applications. This is a double-edged sword: while it may lead to more secure national systems or more effective defenses against hostile uses of AI against the U.S. and its allies, it may also involve trade-offs in individual privacy or decision-making transparency. However, the appropriate deployment of AI by government has the potential to increase both security and freedom, as noted in other contexts such as surveillance.

The AI industry is in a rapid growth phase, as demonstrated by the potential revenues from the sector growing exponentially. As virtually all new markets go through the same industry growth cycle, the increasing value of the AI market will likely continue to draw in new firms in the short term, including previously absent large players whose attention and capacity the scale of actual and potential market value has now attracted. While an industry consolidation phase of start-up and smaller-player acquisitions will likely happen in the future, if the scale of AI market increase continues at a similar rate this winnowing process is likely still several years away. That being said, the government may start to look more towards its established partner firms, particularly in the defense and security sector, who have the track record and industrial capacity to meet the high-value contracting vehicles being put in place.

Despite the commendable intentions outlined in the National Artificial Intelligence Initiative Act of 2020, the current state of federal spending on AI raises concerns about coordination and coherence. NAIIO is tasked with coordinating interagency AI activities and promoting international cooperation, but the observed chaotic nature of the federal market calls into question the effectiveness of the existing framework. The fragmented market may see resolution as the recent executive order on AI guides agencies towards a more cohesive and coordinated approach to AI. As the U.S. strives to maintain its technological leadership and address security challenges posed by potential adversaries, the coordination of AI initiatives will be crucial. The findings emphasize the need for continued policy development, strategic planning, and collaborative efforts to ensure the responsible and effective integration of AI technologies across the U.S. federal government.

More:

The evolution of artificial intelligence (AI) spending by the U.S. government | Brookings - Brookings Institution

Read More..

Fetch.ai, Ocean Protocol and SingularityNET to Partner on Decentralized AI – PYMNTS.com

Three entities in the field of artificial intelligence (AI) plan to combine to create the Artificial Superintelligence Alliance.

Fetch.ai, Ocean Protocol and SingularityNET aim to create a decentralized alternative to existing AI projects controlled by Big Tech, the companies said in a Wednesday (March 27) press release.

The proposed alliance is subject to approval from the three entities' respective communities, per the release.

As part of this alliance, the tokens that fuel the members' networks ($FET, $OCEAN and $AGIX) will be merged into a single $ASI token that will function across the combined decentralized network created by this partnership, according to the release.

The combined value of the three tokens is $7.6 billion as of Tuesday (March 26), per the release.

"The creation of the largest open-sourced, decentralized network through a multi-billion token merger is a major step that accelerates the race to artificial general intelligence (AGI)," the release said.

The Artificial Superintelligence Alliance also brings together SingularityNET's decentralized AI network, Fetch.ai's Web3 platform and Ocean Protocol's decentralized data exchange platform, according to the release.

The deal provides "an unparalleled opportunity for these three influential leaders to create a powerful, compelling alternative to Big Tech's control over AI development, use and monetization," the release said.

Leveraging blockchain technology, it will turn AI systems into open networks for coordinating machine intelligence, rather than hiding their inner workings from the public, according to the release.

The alliance will also facilitate the commercialization of the technology and enable greater access to AI platforms and large databases, advancing the path to AGI on the blockchain, the release said.

In another recent development in this space, Stability AI announced Friday (March 22) that its founder and CEO Emad Mostaque has resigned as CEO and stepped down from the company's board to pursue decentralized AI.

"We should have more transparent & distributed governance in AI as it becomes more and more important," Mostaque said when announcing his move. "It's a hard problem, but I think we can fix it ... The concentration of power in AI is bad for us all. I decided to step down to fix this at Stability & elsewhere."

Link:

Fetch.ai, Ocean Protocol and SingularityNET to Partner on Decentralized AI - PYMNTS.com

Read More..

Future quantum computers will be no match for ‘space encryption’ that uses light to beam data around with the 1st … – Livescience.com

By converting data into light particles and beaming them around the world using satellites, we could prevent encrypted messages from being intercepted by a superpowerful quantum computer, scientists claim.

Currently, messaging technology relies on mathematical, or cryptographic, methods of protection, including end-to-end encryption. This technology is used in WhatsApp as well as by corporations, the government and the military to protect sensitive data from being intercepted.

Encryption works by scrambling data or text into what appears to be nonsense, using an algorithm and a key that only the sender and recipient can use to unlock the data. These algorithms can, in theory, be cracked. But they are designed to be so complex that even the fastest supercomputers would take millions of years to translate the data into something readable.
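
To make the scramble-with-a-key idea concrete (a generic example, not the specific schemes used by WhatsApp or government systems), here is a short Python sketch using the widely available cryptography package's Fernet recipe; the message text is made up:

```python
# Minimal symmetric-encryption sketch: scramble data with a secret key,
# then recover it only with that same key. Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # the secret shared by sender and recipient
cipher = Fernet(key)

token = cipher.encrypt(b"meet at noon")  # ciphertext: looks like nonsense without the key
print(token)

print(cipher.decrypt(token))             # b'meet at noon' -- only recoverable with the key
```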

Quantum computers change the equation. Although the field is young, scientists predict that such machines will be powerful enough to easily break encryption algorithms someday. This is because they can process exponentially greater calculations in parallel (depending on how many qubits they use), whereas classical computers can process calculations only in sequence.

Fearing that quantum computers will render encryption obsolete someday, scientists are proposing new technologies to protect sensitive communications. One field, known as "quantum cryptography," involves building systems that can protect data from encryption-beating quantum computers.

Unlike classical cryptography, which relies on algorithms to scramble data and keep it safe, quantum cryptography would be secure thanks to the weird quirks of quantum mechanics, according to IBM.
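
One of those quirks is that measurement disturbs a quantum state, which is what makes eavesdropping detectable. The toy BB84-style simulation below (an illustrative model only, not any production protocol such as Quick3) shows the effect: when an eavesdropper measures and re-sends the photons, roughly a quarter of the bits that the sender and receiver later compare no longer match.

```python
# Toy model of why interception of a quantum key exchange is detectable.
import random

def measure(bit, photon_basis, measure_basis):
    """Measuring in the wrong basis destroys the information: the result is random."""
    return bit if photon_basis == measure_basis else random.randint(0, 1)

def run(n=10_000, eavesdropper=False):
    errors = kept = 0
    for _ in range(n):
        bit, a_basis = random.randint(0, 1), random.randint(0, 1)   # sender encodes a bit
        photon_bit, photon_basis = bit, a_basis
        if eavesdropper:                                            # intruder measures and re-sends
            e_basis = random.randint(0, 1)
            photon_bit = measure(photon_bit, photon_basis, e_basis)
            photon_basis = e_basis
        b_basis = random.randint(0, 1)                              # receiver measures
        result = measure(photon_bit, photon_basis, b_basis)
        if a_basis == b_basis:                                      # positions kept for the key
            kept += 1
            errors += (result != bit)
    return errors / kept

print(f"error rate without interception: {run(eavesdropper=False):.2%}")  # ~0%
print(f"error rate with interception:    {run(eavesdropper=True):.2%}")   # ~25%
```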

For example, in a paper published Jan. 21 in the journal Advanced Quantum Technologies, scientists describe a mission called "Quick3," which uses photons (particles of light) to transmit data through a massive satellite network.



"Security will be based on the information being encoded into individual light particles and then transmitted," Tobias Vogl, professor of quantum communication systems engineering at TUM and co-author of the paper, said in a statement. "The laws of physics do not permit this information to be extracted or copied."

That's because the very act of measuring a quantum system changes its state.

"When the information is intercepted, the light particles change their characteristics," he added. "Because we can measure these state changes, any attempt to intercept the transmitted data will be recognized immediately, regardless of future advances in technology."

The challenge with traditional Earth-based quantum cryptography, however, lies in transmitting data over long distances, with a maximum range of just a few hundred miles, the TUM scientists said in the statement. This is because light tends to scatter as it travels, and there's no easy way to copy or amplify these light signals through fiber optic cables.

Scientists have also experimented with storing encryption keys in entangled particles, meaning the data is intrinsically shared between two particles no matter how far apart they are. A project in 2020, for example, demonstrated "quantum key distribution" (QKD) between two ground stations 700 miles (1,120 km) apart.

When it comes to transmitting photons, however, at altitudes higher than 6 miles (10 kilometers), the atmosphere is so thin that light is not scattered or absorbed, so signals can be extended over longer distances.

The Quick3 mission would encompass the entire system for transmitting data in this way, including the components needed to build the satellites. The team has already tested each component on Earth. The next step will be to test the system in space, with a satellite launch scheduled for 2025.

They will probably need hundreds, or perhaps even thousands, of satellites for a fully working quantum communications system, the team said.

See more here:
Future quantum computers will be no match for 'space encryption' that uses light to beam data around with the 1st ... - Livescience.com

Read More..

Backdoor found in widely used Linux utility breaks encrypted SSH connections – Ars Technica


Researchers have found a malicious backdoor in a compression tool that made its way into widely used Linux distributions, including those from Red Hat and Debian.

The compression utility, known as xz Utils, introduced the malicious code in versions 5.6.0 and 5.6.1, according to Andres Freund, the developer who discovered it. There are no known reports of those versions being incorporated into any production releases for major Linux distributions, but both Red Hat and Debian reported that recently published beta releases used at least one of the backdoored versions, specifically in Fedora Rawhide and in Debian's testing, unstable, and experimental distributions. A stable release of Arch Linux is also affected. That distribution, however, isn't used in production systems.

"Because the backdoor was discovered before the malicious versions of xz Utils were added to production versions of Linux, it's not really affecting anyone in the real world," Will Dormann, a senior vulnerability analyst at security firm Analygence, said in an online interview. "BUT that's only because it was discovered early due to bad actor sloppiness. Had it not been discovered, it would have been catastrophic to the world."

Several people, including two Ars readers, reported that the multiple apps included in the HomeBrew package manager for macOS rely on the backdoored 5.6.1 version of xz Utils. HomeBrew has now rolled back the utility to version 5.4.6. Maintainers have more details available here.

The first signs of the backdoor were introduced in a February 23 update that added obfuscated code, officials from Red Hat said in an email. An update the following day included a malicious install script that injected itself into functions used by sshd, the binary file that makes SSH work. The malicious code has resided only in the archived releases, known as tarballs, which are released upstream. So-called GIT code available in repositories isn't affected, although it does contain second-stage artifacts allowing the injection during the build process. In the event the obfuscated code introduced on February 23 is present, the artifacts in the GIT version allow the backdoor to operate.

The malicious changes were submitted by JiaT75, one of the two main xz Utils developers with years of contributions to the project.

"Given the activity over several weeks, the committer is either directly involved or there was some quite severe compromise of their system," Freund wrote. "Unfortunately the latter looks like the less likely explanation, given they communicated on various lists about the fixes provided in recent updates." Those updates and fixes can be found here, here, here, and here.

On Thursday, someone using the developer's name took to a developer site for Ubuntu to ask that the backdoored version 5.6.1 be incorporated into production versions because it fixed bugs that caused a tool known as Valgrind to malfunction.

"This could break build scripts and test pipelines that expect specific output from Valgrind in order to pass," the person warned, from an account that was created the same day.

One of the maintainers for Fedora said Friday that the same developer approached them in recent weeks to ask that Fedora 40, a beta release, incorporate one of the backdoored utility versions.

"We even worked with him to fix the valgrind issue (which it turns out now was caused by the backdoor he had added)," the Fedora maintainer said. "He has been part of the xz project for two years, adding all sorts of binary test files, and with this level of sophistication, we would be suspicious of even older versions of xz until proven otherwise."

Maintainers for xz Utils didn't immediately respond to emails asking questions.

The malicious versions, researchers said, intentionally interfere with authentication performed by SSH, a commonly used protocol for connecting remotely to systems. SSH provides robust encryption to ensure that only authorized parties connect to a remote system. The backdoor is designed to allow a malicious actor to break the authentication and, from there, gain unauthorized access to the entire system. The backdoor works by injecting code during a key phase of the login process.

"I have not yet analyzed precisely what is being checked for in the injected code, to allow unauthorized access," Freund wrote. "Since this is running in a pre-authentication context, it seems likely to allow some form of access or other form of remote code execution."

In some cases, the backdoor has been unable to work as intended. The build environment on Fedora 40, for example, contains incompatibilities that prevent the injection from correctly occurring. Fedora 40 has now reverted to the 5.4.x versions of xz Utils.

Xz Utils is available for most if not all Linux distributions, but not all of them include it by default. Anyone using Linux should check with their distributor immediately to determine if their system is affected. Freund provided a script for detecting if an SSH system is vulnerable.
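
Freund's script is not reproduced in this article; purely as an illustration, a minimal Python check of the installed xz version against the two known backdoored releases might look like this (it assumes the xz binary is on the PATH):

```python
# Illustrative check only (not Freund's detection script): flag the two backdoored xz releases.
import re
import subprocess

out = subprocess.run(["xz", "--version"], capture_output=True, text=True).stdout
match = re.search(r"xz \(XZ Utils\) (\d+\.\d+\.\d+)", out)
version = match.group(1) if match else "unknown"

if version in ("5.6.0", "5.6.1"):
    print(f"xz {version}: backdoored release, downgrade or update via your distribution")
else:
    print(f"xz {version}: not one of the known backdoored versions")
```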

Here is the original post:
Backdoor found in widely used Linux utility breaks encrypted SSH connections - Ars Technica

Read More..

Quantum Encryption: The New Frontier in Cybersecurity – yTech

Amidst the backdrop of heightened cyber threats and the rise of quantum computing, Toshiba and network specialist Ciena have made a breakthrough in data protection with their introduction of a quantum key distribution (QKD) system at the recent OFC Conference. This advancement in secure communication technology has industry experts looking closely at quantum encryption's potential to withstand the sophisticated hacking attempts of the future.

Summary: Toshiba and Ciena's QKD system is a state-of-the-art approach to cybersecurity, using the laws of quantum mechanics to generate cryptographic keys that are almost invulnerable to attacks. The system's introduction aligns with evolving security needs as companies like Verizon and SpaceX experiment with quantum encryption for both terrestrial and extraterrestrial communication. The market for quantum encryption is expected to grow exponentially, yet integration and global standardization present notable challenges. Investments are being made to conquer these hurdles and harness the full possibilities of this pioneering technology.

Quantum encryption showcases the peculiar nature of quantum mechanics to produce cryptographic keys that are virtually impossible to intercept or decode. This technology is not just rooted on the ground; it's expanding its reach to protect digital information exchanged through satellites and other non-terrestrial means.

Despite its promising prospects, the adoption of quantum cryptography entails overcoming significant integration issues with existing network systems and establishing consistent international protocols. Still, with the potential for incredible market expansion and its capacity to transform security models across numerous industries, quantum encryption remains a focal point for investors.

Individuals and organizations keen on the progression of cybersecurity have ample resources through industry innovators such as Toshiba and Ciena. Their ongoing research and dialogue offer a window into the advancements shaping the cybersecurity domain. With continuous technological development, the introduction of quantum encryption could set a new standard in the protection against emergent and future cyber anomalies. The collaborative work across industries will be crucial in determining the speed and success with which quantum cryptography becomes a mainstream security asset.

The Emergence of Quantum Encryption in the Cybersecurity Industry

The cybersecurity industry stands at the cusp of a revolution with the advent of quantum key distribution (QKD) systems spearheaded by major players like Toshiba and network expert Ciena. This leap in security technology is particularly significant in light of the increasing cyber threats and the anticipated impact of quantum computing on encryption. QKD utilizes the principles of quantum mechanics to create cryptographic keys that are exceedingly difficult for would-be attackers to hack, marking a paradigm shift in how information is secured.

Market Forecasts and Implications for Quantum Cryptography

As the threat landscape evolves, so does the urgency for advanced security measures. Companies such as Verizon and SpaceX are experimenting with quantum encryption to safeguard both earthly and space-based communications. The promise held by quantum encryption technology has profound implications, driving the market towards significant growth. Analysts project that the quantum encryption market will witness explosive expansion in the coming years, with demand permeating from government, financial services, healthcare, and other sectors seeking robust defense mechanisms against cyber espionage and data breaches.

Challenges of Integration and Standardization

Despite the optimistic outlook, integrating quantum cryptography with existing network infrastructures is fraught with complexities. The challenge is not only technological but also involves the harmonization of international standards, a herculean task that requires global cooperation. Investors and technologists are actively seeking solutions to streamline this process, ensuring that the transition to quantum-secure networks does not compromise functionality or interoperability.

The Pioneers in Quantum Cryptography

At the forefront of these developments, Toshiba and Ciena continue to drive innovation in the field, providing critical insights into how quantum encryption can be deployed effectively. Their groundbreaking work, including their presence at prominent events like the OFC Conference, provides a glimpse into the future of cybersecurity and the role quantum technologies will play in it.

Industry stakeholders can explore further advancements and acquire knowledge from leaders in the cyber and quantum realms through reputable sources and innovators. For those interested, reliable information can be accessed through the official websites of industry leaders such as Toshiba and Ciena.

Securing the Future

Quantum encryption is rapidly progressing from a theoretical concept to a pivotal industry resource with the capability to redefine security standards. The intersection of academia, industry, and policy will be instrumental in driving the adoption of quantum cryptography, offering substantial protection for the digital infrastructure of tomorrow. The journey to ubiquitous quantum encryption is contingent upon the collaborative efforts of experts globally, determined to leverage this nascent technology for a more secure future in the face of ever-advancing cyber threats.


Visit link:
Quantum Encryption: The New Frontier in Cybersecurity - yTech

Read More..

GoFetch: Apple chips vulnerable to encryption key stealing attack – SC Media

Apple M-series chips are vulnerable to a side-channel attack called GoFetch, which exploits data memory-dependent prefetchers (DMPs) to extract secret encryption keys.

DMPs are a feature of some modern processors that use memory access patterns to predict which data might be useful, and preload that data into cache memory for fast access.

A group of researchers discovered that the DMP process in Apple M-series chips (M1, M2 and M3) could be probed using attacker-selected inputs, and its prefetching behavior analyzed to ultimately predict encryption keys generated by the intended target. The researchers published their findings in a paper shared on their website Thursday.

"This bug can extract encryption keys, which is a problem for servers (using TLS) or for those organizations where users are encrypting information. Largely, it will probably be highly secure environments that need to worry the most over this, but any organization running Apple CPUs and using encryption should be concerned," John Bambenek, president of Bambenek Consulting, told SC Media in an email.

The researchers' GoFetch exploit involves feeding guesses into the targeted cryptographic application and observing changes in memory access on the system that indicate prefetching patterns. By refining their inputs based on the observed changes, and correlating signals from the DMP to bits of cryptographic data, an attacker could ultimately infer the targeted encryption keys.

This attack essentially circumvents the safeguards of constant-time cryptography, which prevents side-channel extraction of encryption keys by eliminating any relationship between secret data contents and their execution timing.
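
For readers unfamiliar with the term, the difference between naive and constant-time secret handling can be shown in a few lines of Python (a toy illustration, not the GoFetch researchers' code):

```python
# The naive comparison leaks, through its running time, how many leading bytes of the
# secret a guess got right; the constant-time version does the same work regardless.
import hmac

def naive_compare(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:          # early exit: timing depends on where the first mismatch is
            return False
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    return hmac.compare_digest(secret, guess)   # same amount of work for any contents

# GoFetch's significance is that even code written in this constant-time style can
# still leak via the DMP, because the prefetcher reacts to the values sitting in
# memory rather than to execution timing.
```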

The GoFetch researchers demonstrated that their proof-of-concept exploit works against Go RSA-2048 encryption, OpenSSL Diffie-Hellman key exchange (DHKE), and even the post-quantum encryption protocols CRYSTALS-Kyber and CRYSTALS-Dilithium. The attack takes a minimum of about 49 minutes (against Go RSA keys) and up to 15 hours (against Dilithium keys) to complete on average.

The attack was primarily tested on Apple's M1 processor, but the group's investigations of the M2 and M3 CPUs indicated similar DMP activation patterns, suggesting they are likely vulnerable to the same exploit, the researchers said.

The Intel 13th generation Raptor Lake processor also uses a DMP in its microarchitecture, but the researchers found it was not as susceptible to attack due to its more restrictive activation criteria.

As a microarchitectural hardware feature of Apple chips, the DMPs susceptible to GoFetch cannot be directly patched. However, some mitigations are available to prevent or lower the likelihood of attack.

The attack requires the attacker's GoFetch process (which probes and monitors the DMP) to run locally on the same machine as the targeted process, so avoiding the installation of suspicious programs is one line of defense.

Apple cited the ability to enable data-independent timing (DIT) as a mitigation for GoFetch in an email to SC Media. Enabling DIT, which is available on M3 processors, disables the vulnerable DMP feature, Ars Technica reported.

The researchers also noted that the DMP does not activate for processes running on Apple's Icestorm efficiency cores. Restricting cryptographic processes to these smaller cores will prevent GoFetch attacks but will also likely result in a performance reduction.

Cryptographic software providers can also use techniques like input blinding to mask the contents being fetched, but this also presents challenges in terms of performance penalties. Overall, users are recommended to keep any cryptographic software up to date as providers make changes to counter side-channel attack risks.
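
As a rough illustration of what blinding means, here is classic RSA blinding with toy numbers (a sketch of the general technique, not the specific countermeasure any particular vendor has shipped): the value actually processed with the private key is randomized, so what sits in memory no longer correlates directly with the attacker-chosen input.

```python
# RSA blinding sketch with textbook toy parameters (p=61, q=53); real keys are far larger.
import math
import secrets

n, e, d = 3233, 17, 2753          # toy public modulus, public exponent, private exponent
ciphertext = pow(123, e, n)       # the attacker-visible / attacker-chosen input

while True:                       # pick a random blinding factor coprime to n
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:
        break

blinded = (ciphertext * pow(r, e, n)) % n   # what the private-key operation actually touches
m_blinded = pow(blinded, d, n)              # secret-key step runs on randomized data
message = (m_blinded * pow(r, -1, n)) % n   # unblind to recover the true result (Python 3.8+)

assert message == pow(ciphertext, d, n) == 123
```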

"The researchers have said they will be releasing the proof-of-concept soon, which will significantly lower the difficulty to exploit this bug," Bambenek commented. "There isn't much for [users] to do except to wait for encryption software writers to release updates and to see whether those vendors will create a configurable option so CISOs can choose speed or higher security."

The GoFetch vulnerability was disclosed to Apple in December 2023, and the researchers' paper states Apple was investigating the PoC. An Apple spokesperson expressed gratitude toward the researchers in a comment to SC Media without disclosing further details about an investigation.

The vulnerability was also reported to the Go Crypto, OpenSSL and CRYSTALS teams. Go Crypto said the attack was considered low severity, OpenSSL said local side-channel attacks fall outside of its threat model, and CRYSTALS acknowledged that hardware fixes would be needed to resolve the issue in the long term.

SC Media reached out to the GoFetch team to ask about industry reactions to their research and did not receive a reply.

Link:
GoFetch: Apple chips vulnerable to encryption key stealing attack - SC Media

Read More..

Quantum Encryption: The Vanguard of Digital Safety – yTech

Summary: During the OFC Conference, Toshiba and Ciena presented a groundbreaking secure communications platform employing quantum key distribution, poised to become a fundamental countermeasure against advanced cyber threats, including strategies that leverage the future capabilities of quantum computers.

Amid the mounting concerns over cyber security, a revolutionary technology was unveiled at the recent OFC Conference, signaling a transformative era in cybersecurity with quantum encryption. Toshiba, collaborating with network specialist Ciena, showcased their quantum key distribution (QKD) platform, capable of protecting data transmissions at rapid speeds, a necessity in the metropolitan networks domain.

This technology exemplifies innovation, drawing on the properties of quantum mechanics to enforce powerful security through undecipherable cryptographic keys. The demonstration at the conference illustrated the use of Toshiba's QKD apparatus in conjunction with Ciena's Waveserver 5, culminating in a reinforced, secure transmission network that exemplifies the capability of a Trusted Node system.

Quantum encryption's significance transcends terrestrial limitations. With Verizon experimenting with a quantum-safe virtual network and SpaceX extending quantum key distribution to safeguard satellite communications, the potential applications are as wide as the spectrum of modern communication itself. The adoption of such technology by these sector behemoths indicates a market ready to embrace quantum encryption to counteract potential future cyber-attacks, including those by quantum computers.

Quantum encryption is not without its challenges; from integrating this nascent technology into existing infrastructures to developing standards for universal application. Nonetheless, the market prospects look promising, with increasing investment and research pushing forward this cryptographic frontier.

For further insight into the evolutions of quantum cryptography and other technological advancements, resources such as Toshiba and Ciena provide in-depth knowledge for industry and academic professionals alike. They offer a glimpse into the current technological landscape and the essentials for potential future market dynamics in cybersecurity.

Quantum Encryption Technology: Industry and Market Outlook

The introduction of quantum encryption technology, featuring quantum key distribution (QKD), at the OFC Conference serves as a landmark in the cybersecurity industry. As Toshiba and Ciena navigate the forefront of this space, the implication of their success could redefine how sensitive information is protected across various communication platforms.

The cybersecurity industry is currently faced with the daunting prospect of quantum computer attacks which could render traditional encryption methods obsolete. Herein lies the significance of QKD; it uses the principles of quantum mechanics to create keys which are virtually impossible to intercept without detection. Given the universal importance of data security, this technology has vast implications across numerous sectors, including government, military, financial services, and healthcare.

Market Forecasts for Quantum Cryptography

As quantum technology becomes more tangible, market forecasts reflect an optimistic growth trajectory. Quantum cryptography is expected to experience exponential growth due to the increasing need for secure communications. A report by MarketsandMarkets suggests that the global quantum cryptography market size is expected to grow from an estimated value of USD 89 million in 2020 to USD 214 million by 2025, at a Compound Annual Growth Rate (CAGR) of 19.1% during the forecast period.
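
That projection is easy to sanity-check: compounding USD 89 million for five years at roughly 19% per year lands near USD 214 million, as this short Python calculation (ours, not MarketsandMarkets') shows:

```python
# Quick arithmetic check of the growth rate cited above.
start, end, years = 89.0, 214.0, 5          # USD millions, 2020 -> 2025
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")          # prints ~19.2%, in line with the cited 19.1%
```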

This growth is fueled by the rising incidents of cyber threats, government investment in secure communications, and multinational corporations recognizing the urgent need for next-generation security solutions. With companies like Verizon and SpaceX investing in QKD, it indicates a pronounced confidence in its market potential and viability.

Challenges and Advancements in the Quantum Encryption Sector

Despite the market's upward trend, quantum encryption technology is not without hurdles. Key issues include the complexity of integrating this leading-edge technology into existing communication infrastructures and the need for developing universally accepted standards. Additionally, the current reach of QKD is limited in distance, and quantum technologies often require extreme operating conditions, such as very low temperatures, to function effectively.

However, the industry continues to invest heavily in research and development, addressing limitations and enhancing usability. Innovations in QKD systems, such as the Trusted Node system demonstrated by Toshiba and Ciena, hint at a future of more robust and practical quantum-resistant networks that could withstand the capabilities of quantum computers.

For those seeking a deeper understanding of the expanding domain of quantum cryptography and its associated technologies, reputable sites like Toshiba and Ciena can offer a wealth of knowledge. These resources stand as pillars for professionals interested in the ongoing narrative of cybersecurity technology and the market possibilities that it presents. With continuous advancement and the collaboration of tech giants, quantum encryption is becoming an increasingly integral part of the conversation on securing the future of communication.


See more here:
Quantum Encryption: The Vanguard of Digital Safety - yTech

Read More..

Surge in Encrypted Attacks on Government Underscores the Need for Improved Defenses – FedTech Magazine

As agencies look to fortify their security measures, many are following guidance from the National Cybersecurity Strategy and CISA for leveraging zero trust to advance the nation's cybersecurity progress.

By reducing the reliance on legacy technology and implementing zero-trust architecture, federal agencies can limit the impact of threat actors and strengthen their security postures.

The adoption of zero-trust architecture emerges as a crucial step to counter encrypted threats. Many conventional devices such as VPNs and firewalls can be vulnerable in the face of sophisticated attacks, and agencies must prioritize replacing such devices with more secure alternatives.

By embracing zero trust, agencies can significantly limit the shortcomings of legacy perimeter-based security approaches by enforcing strict least-privileged access controls and continuous verification. This will help prevent breaches, reduce the blast radius of successful attacks and hold up a strong security posture to protect against evolving threats.
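
The contrast with perimeter-based security can be sketched in a few lines of toy Python (a conceptual illustration with hypothetical users and resources, not any vendor's product): the legacy model trusts anything inside the network, while the zero-trust model evaluates every request against identity, authorization, and device posture.

```python
# Conceptual contrast: perimeter trust versus per-request, least-privilege checks.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_compliant: bool
    resource: str
    on_corporate_network: bool

# Hypothetical least-privilege policy: which users may touch which resources.
ALLOWED = {("analyst", "payroll-api"), ("admin", "payroll-api"), ("analyst", "wiki")}

def perimeter_allows(req: Request) -> bool:
    return req.on_corporate_network            # inside the perimeter == trusted

def zero_trust_allows(req: Request) -> bool:
    return (                                   # verified on every request
        (req.user, req.resource) in ALLOWED
        and req.device_compliant
    )

req = Request(user="contractor", device_compliant=False,
              resource="payroll-api", on_corporate_network=True)
print(perimeter_allows(req))   # True:  legacy model lets the request through
print(zero_trust_allows(req))  # False: not authorized for the resource, device unhealthy
```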

However, not all zero-trust solutions are the same. It's critical that agencies thoroughly test and verify the effectiveness of solutions through proofs of concept and pilots. With the establishment of formalized zero-trust offices, dedicated zero-trust leads, and working groups, agencies are on the right track.

There is a wealth of information and expertise that can be leveraged to drive zero-trust adoption. This represents a significant step toward the end goal of widespread implementation of zero trust across the government.

When examining the surge in cyberthreats, the role of encryption and obfuscation techniques takes center stage. By implementing zero-trust architecture and microsegmentation as effective strategies to limit the impact of threat actors, agencies can enhance their overall security posture.


As agencies begin the process of selecting and implementing zero-trust solutions, here are a few best practices.

Agencies should look to reduce the number of entry points into an environment by placing internet-facing apps and services behind a cloud proxy that brokers connections, thereby eliminating vulnerable backdoors. Agencies should also evaluate their attack surface to quantify risk and adjust security appropriately.

As federal guidelines urge, establishing a governmentwide implementation of zero trust is imperative for maintaining a robust cyber posture. As cybercriminals continuously evolve their tactics, including encrypted threats and beyond, zero trust remains the best tactic for enhanced security.

Read more from the original source:
Surge in Encrypted Attacks on Government Underscores the Need for Improved Defenses - FedTech Magazine

Read More..