
Unveiling New Physics With AI-Powered Particle Tracking – SciTechDaily

By The Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences March 24, 2024

AI is emerging as a key tool in nuclear physics, offering solutions for the data-intensive and complex task of particle track reconstruction. Credit: SciTechDaily.com

Particles colliding in accelerators produce numerous cascades of secondary particles. The electronics processing the signals avalanching in from the detectors then have a fraction of a second in which to assess whether an event is of sufficient interest to save it for later analysis. In the near future, this demanding task may be carried out using algorithms based on AI.

Electronics has never had an easy life in nuclear physics. There is so much data coming in from the LHC, the most powerful accelerator in the world, that recording it all has never been an option. The systems that process the wave of signals coming from the detectors therefore specialize in forgetting: they reconstruct the tracks of secondary particles in a fraction of a second and assess whether the collision just observed can be ignored or whether it is worth saving for further analysis. However, the current methods of reconstructing particle tracks will soon no longer suffice.

Research presented in the journal Computer Science by scientists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow, Poland, suggests that tools built using artificial intelligence could be an effective alternative to current methods for the rapid reconstruction of particle tracks. Their debut could occur in the next two to three years, probably in the MUonE experiment which supports the search for new physics.

The principle of reconstructing the tracks of secondary particles based on hits recorded during collisions inside the MUonE detector. Subsequent targets are marked in gold, and silicon detector layers are marked in blue. Credit: IFJ PAN

In modern high-energy physics experiments, particles diverging from the collision point pass through successive layers of the detector, depositing a little energy in each. In practice, this means that if the detector consists of ten layers and the secondary particle passes through all of them, its path has to be reconstructed on the basis of ten points. The task is only seemingly simple.

"There is usually a magnetic field inside the detectors. Charged particles move in it along curved lines, and this is also how the detector elements activated by them, which in our jargon we call hits, will be located with respect to each other," explains Prof. Marcin Kucharczyk (IFJ PAN), and immediately adds: "In reality, the so-called occupancy of the detector, i.e. the number of hits per detector element, may be very high, which causes many problems when trying to reconstruct the tracks of particles correctly. In particular, the reconstruction of tracks that are close to each other is quite a problem."

Experiments designed to find new physics will collide particles at higher energies than before, meaning that more secondary particles will be created in each collision. The luminosity of the beams will also have to be higher, which in turn will increase the number of collisions per unit time. Under such conditions, classical methods of reconstructing particle tracks can no longer cope. Artificial intelligence, which excels where certain universal patterns need to be recognized quickly, can come to the rescue.

"The artificial intelligence we have designed is a deep-type neural network. It consists of an input layer made up of 20 neurons, four hidden layers of 1,000 neurons each, and an output layer with eight neurons. All the neurons of each layer are connected to all the neurons of the neighboring layer. Altogether, the network has two million configuration parameters, the values of which are set during the learning process," describes Dr. Miłosz Zdybał (IFJ PAN).

The deep neural network prepared in this way was trained using 40,000 simulated particle collisions, supplemented with artificially generated noise. During training, only hit information was fed into the network. As the hits were derived from computer simulations, the original trajectories of the particles responsible for them were known exactly and could be compared with the reconstructions provided by the artificial intelligence. On this basis, the artificial intelligence learned to correctly reconstruct the particle tracks.
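For a concrete picture of the architecture quoted above (a 20-neuron input, four fully connected hidden layers of 1,000 neurons each, and an 8-neuron output), the sketch below simply counts the weights and biases such a multilayer perceptron would carry. It is an illustration, not the authors' code, and the exact parameter count depends on implementation details the article does not give:

```python
def mlp_param_count(layer_sizes):
    """Count weights + biases of a fully connected network
    with the given layer widths."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

# Layer widths as described: 20 in, four hidden layers of 1,000, 8 out.
sizes = [20, 1000, 1000, 1000, 1000, 8]
print(mlp_param_count(sizes))  # on the order of a few million parameters
```

Each dense layer from `n_in` to `n_out` units contributes `n_in * n_out` weights plus `n_out` biases, which is where the millions of configuration parameters set during training come from.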

"In our paper, we show that the deep neural network trained on a properly prepared database is able to reconstruct secondary particle tracks as accurately as classical algorithms. This is a result of great importance for the development of detection techniques. Whilst training a deep neural network is a lengthy and computationally demanding process, a trained network reacts instantly. Since it does this with satisfactory precision as well, we can think optimistically about using it in the case of real collisions," stresses Prof. Kucharczyk.

The closest experiment in which the artificial intelligence from IFJ PAN would have a chance to prove itself is MUonE (MUon ON Electron elastic scattering). This examines an interesting discrepancy between the measured values of a certain physical quantity relating to muons (particles about 200 times more massive than the electron) and the predictions of the Standard Model (the model used to describe the world of elementary particles). Measurements carried out at the American accelerator centre Fermilab show that the so-called anomalous magnetic moment of muons differs from the predictions of the Standard Model with a certainty of up to 4.2 standard deviations (referred to as sigma). Meanwhile, it is accepted in physics that a significance above 5 sigma, corresponding to a certainty of 99.99995%, is required to announce a discovery.
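The sigma-to-confidence conversion mentioned here is just the Gaussian tail probability. A minimal sketch, using the standard two-sided convention (my assumption, not stated in the article):

```python
import math

def two_sided_confidence(sigma):
    # Probability that a Gaussian fluctuation stays within +/- sigma
    # of the mean; the complement is the chance of a fluke at least
    # that large in either direction.
    return math.erf(sigma / math.sqrt(2))

print(f"{two_sided_confidence(4.2):.5%}")  # roughly 99.997%
print(f"{two_sided_confidence(5.0):.5%}")  # roughly 99.99994%
```

At 5 sigma the residual fluke probability is below one in a million, which is why that threshold is reserved for discovery claims.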

The significance of the discrepancy indicating new physics could be increased considerably if the precision of the Standard Model's predictions could be improved. However, in order to better determine the anomalous magnetic moment of the muon with its help, it would be necessary to know a more precise value of the parameter known as the hadronic correction. Unfortunately, a mathematical calculation of this parameter is not possible. At this point, the role of the MUonE experiment becomes clear. In it, scientists intend to study the scattering of muons on electrons of atoms with low atomic number, such as carbon or beryllium. The results will allow a more precise determination of certain physical parameters that directly depend on the hadronic correction. If everything goes according to the physicists' plans, the hadronic correction determined in this way will raise the confidence in the discrepancy between the theoretical and measured values of the muon's anomalous magnetic moment to as much as 7 sigma, and the existence of hitherto unknown physics may become a reality.

The MUonE experiment is to start at Europe's CERN laboratory as early as next year, but the target phase has been planned for 2027, which is probably when the Cracow physicists will have the opportunity to see if the artificial intelligence they have created will do its job in reconstructing particle tracks. Confirmation of its effectiveness in the conditions of a real experiment could mark the beginning of a new era in particle detection techniques.

Reference: "Machine Learning based Event Reconstruction for the MUonE Experiment" by Miłosz Zdybał, Marcin Kucharczyk and Marcin Wolter, 10 March 2024, Computer Science. DOI: 10.7494/csci.2024.25.1.5690

The work of the team of physicists from the IFJ PAN was funded by a grant from the Polish National Science Centre.


What it means for nations to have "AI sovereignty" – Marketplace

Imagine that you could walk into one of the world's great libraries and leave with whatever you wanted, any book, map, photo or historical document, forever. No questions asked.

There is an argument that something like that is happening to the digital data of nations. In a lot of places, anyone can come along and scrape the internet for the valuable data that's the backbone of artificial intelligence. But what if raw data generated in a particular country could be used to benefit not outside interests, but that country and its people?

Some nations have started building their own AI infrastructure to that end, aiming to secure AI sovereignty. And according to venture capitalist Vinod Khosla, the potential implications, and opportunities, are huge.

The following is an edited transcript of Khosla's conversation with Marketplace's Lily Jamali.

Vinod Khosla: These language models are trained in English, but there are 13 Indian scripts, and within those there are probably a couple of hundred languages or language variants. So the cultural context for these languages is different. We do think it deserves an effort to have cultural context and nuances, like in India: You don't speak Hindi and you don't speak English, you mix the two, what's sometimes called Hinglish. So those kinds of things have to be taken into account. Then you go to the other level. Will India rely on a technology that could be banned, like a U.S. model?

Lily Jamali: So you were just talking about the cultural context. There is a huge political overlay

Khosla: Political and national security. So imagine India is buying oil [from] Iran, which it does. If there's an embargo on Iranian trade, is it possible that they can't get oil, or that they can't get AI models? So every country will need some level of national security independence in AI. And I think that's a healthy thing. Maybe it'll make the world more diversified and a little bit safer.

Jamali: More safe. Why? Why do you say that?

Khosla: Because everybody can't be held hostage to just an American model. The Chinese are doing this for sure. But if there's a conflict between India and China, can India 100% predict what the U.S. will do? They may care more about Taiwan than the relationship between India and China, for example.

Jamali: And can you explain why you think it is important for each country to have its own model?

Khosla: I'm not saying in India they'll only use the Indian model. They will use all sorts of models from all over the world, including open-source models. Now China, I have a philosophical view [that we are] competitors and enemies, and I take a somewhat hawkish view on China. The best way to protect ourselves is to be well-armed, to be safe against China and avoid conflict if it's mutually assured destruction, so to speak. In countries like India or Japan, they'll use all sorts of models from everywhere in the world, including their own local models, depending upon the application or the context.

Jamali: As some of our listeners may know, you were very early to the AI trend, and we'd love to know what you think might come next. So what do you think?

Khosla: Here's what I would say. AI has surprised us in the last two years. But it's taken us 10 years to get to that ChatGPT moment, if you will. What has happened since is there's a lot of resources poured in. And that will accelerate development. But also, it diversified the kinds of things we worked on pretty dramatically. And so I think we'll see a lot of progress. Some things are predictable, like systems will get much better at reasoning and logic, some things that they get critiqued for. But then there'll be surprises that we can't predict.

Jamali: Although we may try.

Khosla: Other kinds of capabilities will show up in these systems. Reasoning is an obvious one. The embodied world, which is generally meant to represent what happens in the real world, which is mostly robotics, will see a lot of progress in the next five years. So think of logic and reasoning, rapid progress. Think of robotics, artificial intelligence, rapid progress. Think of diversity in the kinds of algorithms being used. They'll be really interesting and probably not the ones people are generally expecting.

Jamali: Diversity in the kinds of algorithms. What kind of diversity are we talking about?

Khosla: If you take the human brain, sometimes we do pattern matching, and there's all kinds of emergent behavior that emerges from that. And [large language models] are going to keep going. And they may do everything. And we may reach AGI, or artificial general intelligence, just with LLMs. But it's possible there are other approaches, what's sometimes called neurosymbolic computing. Reasoning is symbolic computing; planning is being able to make long-term plans, things like that. We do a lot of probabilistic thinking: this might happen or that might happen, what's the likelihood of this happening? That's generally called probabilistic thinking. These will start to emerge. So those are just some examples. And of course, I'll be surprised.

Another person talking a lot about this is Jensen Huang, CEO of Nvidia, which designs industry-leading graphics processing units. This week, the company announced a collaboration with software and cloud company Oracle to deliver sovereign AI solutions to customers around the world.

Huang envisions AI factories that can run cloud services within a country's borders. The pitch: Countries and organizations need to protect their most valuable data, and Oracle CEO Safra Catz said in a statement that strengthening one's digital sovereignty is key to making that happen.


Experts explain how AI is supercharging innovation – Newswise

Newswise: Rapid advances in artificial intelligence have stirred controversy and debate, but they have also raised enticing prospects for supercharged technological innovation. Researchers at Virginia Tech who are exploring these frontiers offer previews of the potential positive developments that could derive from AI.

Advancing autonomous systems to assist in their diagnostics, safety, and human training

Ella Atkins, Fred D. Durham professor and head of the Kevin T. Crofton Aerospace and Ocean Engineering Department at Virginia Tech, investigates aerospace information systems for advanced air mobility, uncrewed aircraft systems, and space robotics applications. Her research explores how emerging AI and machine learning techniques can assist in a range of tasks from safe autonomy and self-diagnostics to tutoring human pupils.

"AI and machine learning can make an autonomous vehicle safer through self-monitoring diagnostics and prognostics and data-informed decision making. Maintenance and repair operations for aircraft have been revolutionized with this technology. For example, this technology can assist modern vehicles in avoiding or recovering from problems such as unexpectedly low battery energy reserves," Atkins said.

"Large language models, or LLMs, powered by deep neural network machine learning, enable a person to interact with a computer more naturally, more conversationally. This can help a person learn even difficult concepts, because the first step is to get past anxiety with that concept, and an LLM conversation interacts more like a teaching assistant than an encyclopedia or textbook," she said.

Deploying human-robot interactions

"I am interested in assistive technology, such as wheelchair-mounted robot arms," says Dylan Losey, an assistant professor of mechanical engineering with a specialization in robotics who directs the Collaborative Robotics Laboratory (Collab). "My lab is focused on the fundamentals of human-robot interaction. This includes how robots learn from humans, how robots collaborate with humans, and how humans understand the robots they are working with."

"My main interest is developing robots that can learn from humans and communicating what the robot has learned back to the human operator. I see this mutual understanding between humans and robots as key to avoiding the pitfalls of AI. I want AI that helps people do what they want, but is also clear and transparent to the human," Losey said.

Enabling self-sufficient communication systems

Walid Saad, a professor of electrical and computer engineering and the Next-G wireless lead at the Virginia Tech Innovation Campus, enthuses about the next steps in the evolution of AI and how it could intersect with forthcoming 6G wireless systems. "Current AI systems exhibit prowess in learning but struggle with reasoning," he said. "The central challenge for the upcoming years lies in equipping AI systems with common-sense abilities, enabling these systems to think critically, reason logically, and plan proactively. This marks an initial stride toward the development of what's known as artificial general intelligence (AGI), aiming to approach intelligence levels seen in animals, if not eventually reaching human-level intelligence."

"AI can help automate and augment existing functionalities within wireless systems like 6G," Saad said. "Nevertheless, as we peer into a more distant horizon, the notion of AI-native wireless systems presents limitations. AGI could potentially herald a revolutionary paradigm in wireless technology by enabling systems capable of human-like cognition, that is, reasoning, planning, and the application of 'common sense' where contextually relevant."

"While we realized for a while that 6G needs AI and potentially AGI, it is also worth noting that creating AGI needs an understanding of the physical world that 6G systems can potentially provide; hence we foresee these two technologies truly flowing together in the future," Saad said.

Revolutionizing the construction industry

"AI can help designers and engineers to optimize their designs for energy consumption, user comfort, evacuation and disaster plans, conformity with codes and regulations, environmental impact, and even more, at a level that was not possible before," says Ali Shojaei of Virginia Tech's Myers-Lawson School of Construction, who is working to revolutionize the construction industry through digital innovations.

"AI-driven automation and robotics can significantly speed up the construction process and also reduce human error," he said. "AI can optimize the supply chain in construction. From predicting the need for materials to tracking their delivery, AI can ensure that projects are not delayed due to material shortages or logistical errors."

"In construction, AI-driven automation and robotics can significantly increase efficiency and precision, assisting in tasks like bricklaying, painting, or even complex tasks like installing electrical systems. And post-construction, AI can aid in the maintenance and lifecycle management of buildings, predicting when maintenance is needed and preventing costly repairs," Shojaei said.

Schedule an interview To schedule interviews with these experts, contact Mike Allen in the media relations office at [emailprotected] or 540-400-1700.


SingularityNET to Hold Live Stream on YouTube on March 26th – TradingView

Coindar

SingularityNET will host a live stream on YouTube on March 26th at 5 pm UTC. The event will focus on the advancements in Hyperon's cognitive components and scalability improvements at all computational levels.

Refer to the official tweet by AGIX:

Join us this Tuesday, March 26th, at 5 PM UTC to explore the technical advancements of the OpenCog Hyperon #AGI framework, including advances in Hyperon's cognitive components & scalability improvements at all computational levels.

Set a reminder now: https://t.co/eXKFJh6VNm pic.twitter.com/1PuqabEMuT

AGIX Info

SingularityNET (AGI) is a blockchain-based marketplace and framework for artificial intelligence (AI) services. Founded by Dr. Ben Goertzel, it allows organizations and individuals to create, share, and monetize AI services at scale. The platform's native token, AGI (Artificial General Intelligence), is used for transactions within the network. By using SingularityNET, developers can access a broad set of AI algorithms, services, and agents, and can also contribute their own models to earn AGI tokens. The goal of SingularityNET is to create a decentralized, open market for AI services, thus reducing reliance on tech giants for such services.


OpenAI’s Sam Altman Weighs In on AI Risks and His Current Stance on AGI Fears – CCN.com


As one of the most important figures in AI, Sam Altman is better placed than most to understand where the technology is going and has a unique perspective on its risks.

In a recent episode of Lex Fridman's podcast, the OpenAI CEO discussed artificial general intelligence (AGI), how he anticipates AI evolving in the coming years, and the risks it poses to humanity.

When asked by Fridman when he thought humanity would build AGI, Altman observed that the question misses the complexity of the debate:

"I used to love to speculate on that question. I have realized since that I think it's very poorly formed, and that people use extremely different definitions for what AGI is. So I think it makes more sense to talk about when we'll build systems that can do capability X or Y or Z, rather than when we fuzzily cross this one-mile marker."

Nonetheless, he predicted that by the end of the decade we will have quite capable systems that will look remarkable compared to today's technology.

While Altman prefers to talk in terms of fuzzy milestones rather than radical epistemic shifts, he doesn't underestimate the potential for AI to transform the global economy.

He observed that it is a huge deal when a new technology can significantly increase the rate of scientific discovery. But he suggested that OpenAI's most advanced AI models have yet to instigate the kind of profound economic transformation he expects to see in the coming years.

As such, he concluded that the road to AGI will be a giant power struggle.

Asked by Fridman whether he trusts himself with the kind of power AGI could create, Altman hesitated.

Referring to the drama of his firing and subsequent reinstatement as the CEO of OpenAI last year, Altman acknowledged that it is now harder to argue that the board could easily fire him if it needed to.

Nonetheless, he said he still believes it is important that no single person has total control over the company or the technology it creates.

"I continue to not want super-voting control over OpenAI. I never have," he stated. "Even after all this craziness, I still don't want it. I continue to think that no company should be making these decisions and that we really need governments to put rules of the road in place."


Some Apple CPUs have an "unfixable" security flaw and they’re leaking secret encryption keys – TechRadar

Researchers have discovered a new side-channel vulnerability in Apple's M-series of processors that they claim could be used to extract secret keys from Mac devices when they're performing cryptographic operations.

Academic researchers from the University of Illinois Urbana-Champaign, University of Texas at Austin, Georgia Institute of Technology, University of California, University of Washington, and Carnegie Mellon University explained in a research paper that the vulnerability, dubbed GoFetch, was found in the chips' data memory-dependent prefetcher (DMP), an optimization mechanism that predicts the memory addresses of data that active code is likely to access in the near future.

Since the data is loaded in advance, the chip makes performance gains. However, as the prefetchers make predictions based on previous access patterns, they also create changes in state that the attackers can observe, and then use to leak sensitive information.

The vulnerability is not unlike the one abused in Spectre/Meltdown attacks as those, too, observed the data the chips loaded in advance, in order to improve the performance of the silicon.

The researchers also noted that this vulnerability is essentially unpatchable, since it derives from the design of the M chips themselves. Instead of a patch, the only thing developers can do is build defenses into third-party cryptographic software. The caveat with this approach is that it could severely hinder the processors' performance for cryptographic operations.

Apple has so far declined to discuss the researchers' findings, and stressed that any performance hits would only be visible during cryptographic operations.

While the vulnerability itself might not affect the regular Joe, a future patch hurting the devices performance just might.

Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!

Those interested in reading about GoFetch in depth should check out the research paper.

Via Ars Technica


High-security learning-based optical encryption assisted by disordered metasurface – Nature.com

Working principle

The whole process can be divided into two stages: optical encryption and learning-based decryption, as shown in Fig. 1. In the optical encryption stage (Fig. 1a), the sender (Alice) projects a light beam of one of two different polarizations (P(i) or P(j), i ≠ j) onto a plaintext, which is first encrypted by a QR code phase pattern (security key) and then travels through the DM as the secondary infilling of the plaintext, generating a speckle pattern (ciphertext). The DM scatters light differently for different input polarizations due to the spin-multiplexed random phase design. The relationship among the speckle, plaintext, security key, and DM can be expressed as:

$$U(x, y, z)=\iint U_{\mathrm{P}}(x_0, y_0)\, U_{\mathrm{S}}(x_0, y_0)\, U_{\mathrm{DM}}(x_0, y_0)\, h(x-x_0, y-y_0, z)\, \mathrm{d}x_0\, \mathrm{d}y_0,$$

(1)

where $U_{\mathrm{P}}(x_0, y_0)$, $U_{\mathrm{S}}(x_0, y_0)$, and $U_{\mathrm{DM}}(x_0, y_0)$ correspond to the functions of the plaintext, security key, and DM, respectively, and $h(x, y, z)$ is an impulse response. From Eq. (1), it is clear that the security key and the DM apply encryption to the plaintext in sequence to achieve a double-secure function. In addition, as $U_{\mathrm{DM}}(x_0, y_0)$ varies with the polarization of the incident beam according to the design, multi-channel encryption can be implemented by changing the polarization of the incident beam.
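As an illustration of Eq. (1), the sketch below simulates the encryption numerically on a tiny grid: the three phase masks are multiplied point by point and the product field is convolved with a Fresnel impulse response, after which the camera records only the intensity. All parameter values (wavelength, distance, pixel pitch, grid size) are placeholders, not the paper's settings:

```python
import cmath
import math

def speckle(plaintext, key, dm, wavelength=488e-9, z=0.01, pitch=1e-6):
    """Discrete version of Eq. (1): U_P * U_S * U_DM convolved with a
    Fresnel impulse response h; returns the intensity |U|^2 the camera sees."""
    n = len(plaintext)
    k = 2 * math.pi / wavelength
    out = []
    for x in range(n):
        row = []
        for y in range(n):
            acc = 0j
            for x0 in range(n):
                for y0 in range(n):
                    u = plaintext[x0][y0] * key[x0][y0] * dm[x0][y0]
                    r2 = ((x - x0) ** 2 + (y - y0) ** 2) * pitch ** 2
                    acc += u * cmath.exp(1j * k * r2 / (2 * z))  # Fresnel kernel
            row.append(abs(acc) ** 2)  # intensity only: the phase is lost
        out.append(row)
    return out
```

Feeding the same plaintext and key through two different DM phase masks yields two decorrelated speckle patterns, which is what lets the DM act as a physical key.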

a Optical encryption. The sender (Alice) illuminates light beams with two different polarizations of P(i) and P(j) onto the phase profiles of the superposition of plaintexts (human face images) and security keys (QR codes), which propagates through DM, generating ciphertexts (speckles). b Learning-based decryption. Two deep neural networks (DNN) of the same structure, e.g., P(i)-DMNet and P(j)-DMNet, are trained with data obtained with incident beams of P(i) and P(j), respectively. After recording the ciphertext and being authorized by Alice to acquire the security key and the polarization of the incident beam, the receiver (Bob) can feed the ciphertext and the security key into the corresponding neural network to decrypt the plaintext. The mark above the straight line with arrows at both ends indicates that the information cannot be commutative. DM disordered metasurface.

In the learning-based decryption stage, several deep neural networks (DNNs) sharing the same structure, termed P(i)-DMNet and P(j)-DMNet (Fig. 1b), are trained with data from incident beams of P(i) and P(j), in which the ciphertext and the security key serve as the inputs to decode the plaintext. The receiver (Bob) needs authorization from Alice to acquire the security key and the polarization of the incident beam. Assuming that Bob can receive the ciphertext at the output terminal in real time by himself, he can directly get access to the plaintext by feeding the ciphertext and QR code into the polarization-matched network. Hackers who even gain access to the ciphertext cannot decrypt the plaintext without authentication from Alice (i.e., they lack the security key and the polarization of the incident beam).

The DM consists of elliptical titanium dioxide (TiO2) meta-pillars, as shown in Fig. 2a. The meta-pillars are 600 nm tall (h) and rest on a square lattice with a periodic constant (P) of 350 nm, and the design wavelength is 488 nm. The lengths of the two axes (u and v) of the meta-pillars vary in the range of 70–320 nm, such that a controllable propagation phase $\phi_{\mathrm{propagation}}$ is introduced for both LCP and RCP light beams. The simulated phase delays ($\varphi_{xx}$ and $\varphi_{yy}$) of the meta-pillar for two orthogonal linear polarizations (x and y) versus the axis lengths, computed with the commercial software Lumerical FDTD, are shown in Fig. 2b. The propagation phase of the structure can be calculated from $\varphi_{xx}$ and $\varphi_{yy}$, i.e., $\phi_{\mathrm{propagation}}=\arg\left((\mathrm{e}^{\mathrm{i}\varphi_{xx}}-\mathrm{e}^{\mathrm{i}\varphi_{yy}})/2\right)$ (more details are discussed in Supplementary Note 1). The birefringent meta-pillar is rotated by an angle $\delta$, which performs circular polarization (CP) conversion $|L\rangle \to \mathrm{e}^{\mathrm{i}2\delta}|R\rangle$ and $|R\rangle \to \mathrm{e}^{-\mathrm{i}2\delta}|L\rangle$, i.e., the LCP and RCP beams are converted to the opposite spin with a geometric (Pancharatnam–Berry, PB) phase $\phi_{\mathrm{geometric}}$ of $2\delta$ and $-2\delta$, respectively. The combination of the propagation phase and geometric phase enables the decoupling of RCP and LCP light at the designed wavelength for multiplexed wavefront modulation applications [30]. Given the desired phases $\phi_{\mathrm{RCP}}$ and $\phi_{\mathrm{LCP}}$ of the two orthogonal CP states, the required propagation phase and geometric phase at each meta-pillar can be calculated as [31]

$$\phi_{\mathrm{propagation}}=\frac{\phi_{\mathrm{RCP}}+\phi_{\mathrm{LCP}}}{2}$$

(2)

$$\phi_{\mathrm{geometric}}=\frac{\phi_{\mathrm{LCP}}-\phi_{\mathrm{RCP}}}{4}$$

(3)
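A quick numerical check of Eqs. (2) and (3), with arbitrary target phases of my choosing rather than design values from the paper: splitting the two desired CP phases this way, a pillar rotated by $\delta = \phi_{\mathrm{geometric}}$ adds $+2\delta$ to the converted LCP beam and $-2\delta$ to the converted RCP beam on top of the shared propagation phase, which reproduces both targets.

```python
import math

def pillar_phases(phi_rcp, phi_lcp):
    # Eq. (2): polarization-independent propagation phase.
    phi_prop = (phi_rcp + phi_lcp) / 2
    # Eq. (3): rotation angle delta setting the geometric (PB) phase.
    phi_geom = (phi_lcp - phi_rcp) / 4
    return phi_prop, phi_geom

prop, delta = pillar_phases(phi_rcp=1.0, phi_lcp=2.0)
assert math.isclose(prop + 2 * delta, 2.0)  # LCP channel picks up +2*delta
assert math.isclose(prop - 2 * delta, 1.0)  # RCP channel picks up -2*delta
```

This is why the two phase profiles can be set independently for the two spins with a single layer of pillars.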

Therefore, the phase profiles of the DM for RCP and LCP incident beams are randomly distributed for the generation of speckle images.

a A TiO2 unit meta-pillar of the DM with designed parameters, arranged in a square lattice on a fused silica substrate. b The simulated phase delays of the meta-pillar for two orthogonal linear polarizations (along the x and y directions) versus the lengths of the two axes of the DM. c Seven different polarization states between LCP and RCP, defined by tuning the fast axis of the QWP in the setup (Fig. 3a), and the recorded speckles corresponding to the 7 polarization states. d Speckle PCC versus polarization of the incident beam, with the speckle associated with incident LCP as the reference. e Top (left) and perspective (right) views of SEM images of the fabricated DM. The scale bar in (e) is 1 mm. DM disordered metasurface, PCC Pearson's correlation coefficient, RCP right-handed circular polarization, LCP left-handed circular polarization.

Specific parameters of the meta-pillar structures selected in the experiment can be found in Supplementary Note 2. As any polarization can be decomposed into two orthogonal polarization states (RCP and LCP in this study) with different weights [32], the speckles generated by the DM vary with the polarization of the incident beam. A combination of a half-wave plate (HWP) and a quarter-wave plate (QWP) placed after the spatial light modulator (SLM), as shown in Fig. 3a, is used to alter the polarization of the incident beam. Two orthogonal optical channels are defined by the two circular polarization states, i.e., P(1): LCP and P(7): RCP. In addition to these two orthogonal channels, 5 intermediate polarization channels, P(2) to P(6), located between P(1) and P(7), are created by rotating the QWP in 15° steps, as shown in the second row of Fig. 2c. The figures in the third row of Fig. 2c show the recorded speckles corresponding to these 7 incident polarizations. The variation of the Pearson correlation coefficient (PCC) of the speckles, taking the speckle of incident LCP as the reference, is illustrated in Fig. 2d. It can be seen that the speckle is highly sensitive to the rotation angle: the PCC gradually decreases from 1 to 0.08. Such a decrease of the PCC can significantly impair the recovery of the input information, and it confirms the independence of each polarization state. It should be noted that only part of the diffused light field needs to be collected, owing to the complex mapping between the input and output light fields for information decryption [33], which further enhances the spatial security and the information capacity. Scanning electron microscope (SEM) images of the top and perspective views of the DM are shown in Fig. 2e (see Methods for more details).
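The PCC used to compare speckle patterns here is the standard Pearson correlation computed over flattened images; a minimal sketch (not the authors' analysis code):

```python
import math

def pcc(a, b):
    """Pearson correlation coefficient between two equally sized,
    flattened speckle images."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    std_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    std_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (std_a * std_b)

print(pcc([1, 2, 3, 4], [1, 2, 3, 4]))  # close to 1.0: identical speckles
print(pcc([1, 2, 3, 4], [4, 3, 2, 1]))  # close to -1.0: anti-correlated
```

A value near 1 means two speckles belong to essentially the same channel; the drop toward 0.08 between LCP and RCP in Fig. 2d is what isolates the polarization channels from one another.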

a The schematic diagram of the optical setup. b Examples of plaintext for encryption. c The corresponding ciphertexts, i.e., the speckles. d Example QR codes. e The information decrypted by inputting (c, d) into the DMNet. The DMNet herein is trained on the RCP data. Inset numbers below each image in (d) are formatted as PCC (SSIM) between b the ground truth and e the decrypted images. SLM spatial light modulator, DM disordered metasurface, HWP half-wave plate, L1, L2 lens, PCC Pearson's correlation coefficient, RCP right-handed circular polarization, QR quick response, QWP quarter-wave plate, SSIM structural similarity.

The schematic diagram of the optical setup for data collection is illustrated in Fig. 3a. A collimated continuous-wave coherent laser beam with a wavelength of 488 nm (OBIS, Coherent, USA) is expanded to illuminate the aperture of a reflective SLM (HOLOEYE PLUTO VIS056, Germany); a transmissive SLM is drawn in Fig. 3a for easier visual observation. Phase patterns are pre-loaded on the SLM to modulate the laser beam, whose polarization is controlled by a pair of a HWP and a QWP; the beam is then slightly focused onto the DM by a lens (L1) to generate optical speckles, which are captured by a CMOS camera (FL3-U3-32S2M-CS, Point Grey, Canada). Another lens (L2), placed in front of the camera, is used to adjust the grain size of the recorded speckles. Since the decryption is not a trivial inverse of the scattering process, unlike other works [16, 20, 21] (a more detailed discussion is given in Discussion), a DNN named DMNet is specifically designed to match the physical process, with details provided in Supplementary Note 3.

Once the DMNets in this experiment are trained (more details can be found in Methods), the encryption process is ready. Notably, the DMNet trained and tested with the data generated via an RCP incident beam, i.e., the P(7) polarization in Fig. 2c, serves as the example in this part; it is referred to as the RCP-DMNet or P(7)-DMNet. As shown in Fig. 3, by feeding both the ciphertext (i.e., the speckles in Fig. 3c) and the security key (i.e., the QR code in Fig. 3d) into the well-trained DMNet, decrypted images can be retrieved with high quality, as shown in Fig. 3e. Many fine features on the retrieved human faces can be identically mapped to the ground truth images (plaintext, Fig. 3b) [34]. The evaluation metrics likewise indicate excellent performance, with an averaged PCC = 0.941 and structural similarity index measure (SSIM) = 0.833. An example with PCC and SSIM as high as 0.97 and 0.93, respectively, is listed in the second column of Fig. 3. The network is therefore shown to accomplish accurate information reconstruction from the speckles. Nevertheless, such success depends on two further factors that strictly ensure the decryption: the second input (i.e., the QR code used in this study) and the matched polarization between the speckles and the network. Other datasets such as fMNIST and Quickdraw have also been tried (a quantitative analysis of information complexity for different datasets is given in Supplementary Note 4), and the results are presented in Supplementary Note 5.

As discussed in our previous work [21], a speckle-based cryptosystem benefits from the complexity of the physical secret key, demonstrating high-level security. Nevertheless, if the ciphertext (i.e., the speckles) is accidentally obtained by an attacker, the system is still expected to protect itself. As designed in this study, an additional authorized security key (i.e., the QR code) from the sender is needed for decryption at the receiver terminal. Several ciphertexts are generated when different QR codes (100 in this study) are paired with each single plaintext. The performance of the decryption is therefore made sensitive to any change from the correct Input 2 in Fig. 3, given that Input 1 (the ciphertext) is correct. As before, the RCP data serves as the example, and five samples are randomly chosen for demonstration, as shown in Fig. 4. If a uniform matrix is fed as Input 2 (Fig. 4aII), the DMNet merely outputs faces without recognizable features, whose PCC and SSIM (0.080 and 0.109, respectively) are both far below the performance with the correct QR code (0.941 and 0.833, respectively; Fig. 4aI). Furthermore, excellent protection against brute-force attacks on Input 2 is also achieved (Fig. 4aIII). When one million randomly generated binary-amplitude matrices are used to attack Input 2, the guessed plaintext is similar to that in Fig. 4aII. Notably, the metric used to quantify the performance of the brute-force attack in Fig. 4bIII is not the average but the maximum, since a brute-force attack succeeds if any single trial guesses correctly, regardless of the number of realizations. The low PCC and SSIM (0.005 and 0.121, respectively) validate the safety of the designed network against brute-force attacks on Input 2. Cases with mismatched pairs for the two inputs, for example, an accurate Input 1 paired with a correct QR code corresponding to another plaintext, can be found in Supplementary Note 6. The DMNet output (denoted as the Mismatched output) also fails to reveal the human faces, instead producing patterns similar to those shown in Rows II and III of Fig. 4a.
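The worst-case scoring rule used above, judging a brute-force attack by its best trial rather than its average, can be illustrated with a toy scorer. The key size and the scoring function below are invented stand-ins, not the actual QR code or DMNet:

```python
import numpy as np

rng = np.random.default_rng(1)
correct_key = rng.integers(0, 2, size=(29, 29))  # stand-in for the QR-code key

def decryption_score(key):
    """Toy stand-in for the decryption PCC: fraction of key bits guessed
    correctly. The real system would run the DMNet on (speckle, key)."""
    return float((key == correct_key).mean())

# A brute-force attack is judged by its *best* trial, not its average:
scores = [decryption_score(rng.integers(0, 2, size=correct_key.shape))
          for _ in range(10_000)]
print(max(scores))                # best random guess stays near 0.5
print(sum(scores) / len(scores))  # average is also ~0.5
```

Even the maximum over many random trials barely exceeds chance here, which is the property the paper's one-million-trial attack on Input 2 is probing.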

a, b Attack analysis regarding Input 2. Decryption with the correct ciphertext (i.e., Input 1: speckles) while varying Input 2 among a correct QR code (Row I), a uniform pattern (Row II), and a random binary pattern (Row III), for a a qualitative demonstration and b the statistics, quantifying the PCC and SSIM between the plaintext and the decrypted images for Rows I–III. The metrics for both Correct and Uniform are averaged over 2000 samples, and the metrics for the Random group are taken over 1,000,000 randomly generated binary-amplitude attacks. c Cross-validation for the decryption by inputting speckles with seven different polarization states (i.e., P(i)-speckles, i = 1, 2, ..., 7) into DMNets with seven different states (i.e., P(i)-DMNet, i = 1, 2, ..., 7). d Averaged decryption PCC corresponding to the cross-validation arrangement in (c), each averaged over 2000 samples. QR quick response, PCC Pearson's correlation coefficient, SSIM structural similarity.

In Fig. 2c, d, we demonstrated the sensitivity of the speckles to the incident polarization. Here, the data independence across these 7 polarization channels is further verified. Seven DMNets are individually trained using the seven polarized datasets, and each DMNet trained with the P(i) data is denoted as P(i)-DMNet (i = 1, 2, ..., 7). With the correct QR code (not shown in Fig. 4c for simplicity), the plaintexts can only be correctly deciphered when the polarization state of the speckle matches that of the corresponding DMNet, as shown on the diagonal in Fig. 4c: P(i) speckles input into the P(i)-DMNet result in decryption PCCs of ~0.94. Once the polarization channels of the input data and the network are mismatched, e.g., P(1)-speckles (LCP) input into the P(7)-DMNet (RCP) or P(7)-speckles (RCP) input into the P(1)-DMNet (LCP), the decrypted plaintext exhibits unrecognizable faces, with decryption PCCs of 0.0158 and 0.0268, respectively. From the statistical analysis in Fig. 4d, it can be observed that the decryption PCCs for matched polarization states (~0.94 on the diagonal) are orders of magnitude higher than those with mismatched polarizations (<0.06 off the diagonal). Notably, realizations of multi-channel decryption do not necessarily rely on the orthogonality of the polarization: the additional polarization states between the orthogonal ones can also support independence among the polarization channels. By jointly adjusting a half-wave plate and a quarter-wave plate, more polarization states can be created; in principle, an arbitrary polarization state could serve as an encryption channel, with the polarization regulated as discussed in the Working principle section. Therefore, the feasibility of achieving multi-channel encryption, which requires both the independence of the polarization channels and the realization of multiple polarization channels based on the DM, is assured.
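The matched/mismatched behavior of the 7 × 7 cross-validation can be mimicked with a toy linear model in which each polarization channel scrambles the input through its own random transmission matrix, and each "network" inverts exactly one of them. The matrices below are invented stand-ins for the DM's polarization-dependent scattering and the trained DMNets, not the real system:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_ch = 64, 7
# One random transmission matrix per polarization channel (toy stand-ins).
T = [rng.normal(size=(n, n)) for _ in range(n_ch)]
# "P(j)-DMNet": in this toy model, simply the inverse of channel j.
T_inv = [np.linalg.inv(t) for t in T]

def pcc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x = rng.normal(size=n)                        # toy plaintext
grid = np.empty((n_ch, n_ch))
for i in range(n_ch):
    speckle = T[i] @ x                        # ciphertext from channel i
    for j in range(n_ch):
        grid[i, j] = pcc(T_inv[j] @ speckle, x)   # decrypted by "network" j

print(np.round(np.diag(grid), 3))  # matched channels: all 1.0
```

Only the diagonal (matched channel and network) recovers the plaintext; off-diagonal combinations produce near-zero correlations, qualitatively reproducing the pattern in Fig. 4c, d.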

Stability of the decryption performance is critical in real applications but has seldom been discussed in earlier works owing to the nature of the CSMs used in experiments. In this study, the system collected data intermittently for 135 h (Periods 1–14 in Fig. 5a), and its status is characterized by the background PCC (blue dots), defined as the PCC between the instant background speckle pattern and the initial one at Time = 0. All background speckle patterns are generated with the same uniform phase pattern displayed on the SLM, as described in Methods. The initial status of the cryptosystem is thereby defined in Period 1 in Fig. 5a, whose data is fed into the RCP-DMNet for training, with an average decryption PCC (red bars) of around 0.94, as demonstrated in the previous sections. In other words, the test data in Periods 2–14 are new data for the network, collected under a temporally varying medium status and never learned or probed by the network. Without additional training, the decryption PCC in the following periods (Periods 2–14) changes along with the background PCC, with which it is positively correlated. More importantly, the varying status can recover back to the initial status, e.g., over Periods 2–6, 7–8, and 12–14, where the corresponding averaged decryption PCC recovers from 0.82 to 0.93, from 0.73 to 0.90, and from 0.68 to 0.90, respectively. The decrypted images can be seen in Fig. 5b. It should be noted that during these 135 h, the experiment was performed on the seventh floor and the environmental perturbations were general and diverse, including switching the laser/SLM/camera, other experiments on the same optical table, traffic around the building, loud noise from the adjacent machine room, etc. As seen, in our cryptosystem, the DM provides excellent stability against such everyday perturbations, and the deviation from the initial status is reversible.
Such a phenomenon can hardly be seen in CSM-based implementations (ground glass diffuser, DG-10-220, Thorlabs) over such a long duration, as shown in Fig. 5c: under everyday perturbations, the background PCC of the CSM-based system (with the same setup as the DM-based implementation) decreases markedly (down to around 0.2) without recovering to the initial status. As seen in Fig. 5d, starting from Period 2, the decryption performance also deteriorates over time: the fine facial features gradually erode, resulting in significant deviations from the ground truth images. This highlights an additional advantage of the DM over a CSM: for media like ground glass diffusers, the deviation from the initial state is highly unpredictable and often irreversible, whereas our proposed DM-based system exhibits reversibility (Fig. 5a). This remarkable feature can be attributed to the single-layered nature of the DM, which ensures a wider range of the memory effect [24]. This characteristic physically enables a more relaxed optical conjugation of the DM with the input wavefront compared with typical multi-layered diffusers. Therefore, our system can practically recover to the initial status, as quantified by the background PCC of the recorded speckle (i.e., 0.98), when the perturbations become similar to those at the initial status or when simply tuning the system is feasible. Furthermore, since no additional training of the network is needed over time, encrypting new plaintext with the proposed cryptosystem remains practically feasible even after a long period of time has elapsed since the network was trained.

a, b Stability analysis of the DM-based decryption performance. a Background PCC (blue dots) and decryption PCC (red columns) based on the data collected in 14 periods. b Decryption performance for three representative examples with respect to the 14 periods in (a). Digits below each reconstructed image are the decryption PCCs between the decrypted image and the ground truth image. c, d Stability analysis of the CSM-based decryption performance; (c, d) are the counterparts of (a, b), respectively, under the same experimental conditions with a ground glass diffuser replacing the DM as the scattering medium. GT ground truth, DM disordered metasurface, CSM conventional scattering medium, PCC Pearson's correlation coefficient.

High-security learning-based optical encryption assisted by disordered metasurface - Nature.com


The DOJ Puts Apple’s iMessage Encryption in the Antitrust Crosshairs – WIRED

The argument is one that some Apple critics have made for years, as spelled out in an essay in January by Cory Doctorow, the science fiction writer, tech critic, and coauthor of Chokepoint Capitalism. "The instant an Android user is added to a chat or group chat, the entire conversation flips to SMS, an insecure, trivially hacked privacy nightmare that debuted 38 years ago, the year Wayne's World had its first cinematic run," Doctorow writes. "Apple's answer to this is grimly hilarious. The company's position is that if you want to have real security in your communications, you should buy your friends iPhones."

In a statement to WIRED, Apple says it designs its products to "work seamlessly together, protect people's privacy and security, and create a magical experience for our users," and it adds that the DOJ lawsuit "threatens who we are and the principles that set Apple products apart in the marketplace." The company also says it hasn't released an Android version of iMessage because it couldn't ensure that third parties would implement it in ways that met the company's standards.

"If successful, [the lawsuit] would hinder our ability to create the kind of technology people expect from Apple, where hardware, software, and services intersect," the statement continues. "It would also set a dangerous precedent, empowering government to take a heavy hand in designing people's technology. We believe this lawsuit is wrong on the facts and the law, and we will vigorously defend against it."

Apple has, in fact, not only declined to build iMessage clients for Android or other non-Apple devices, but actively fought against those who have. Last year, a service called Beeper launched with the promise of bringing iMessage to Android users. Apple responded by tweaking its iMessage service to break Beeper's functionality, and the startup called it quits in December.

Apple argued in that case that Beeper had harmed users' security; in fact, Beeper did compromise iMessage's end-to-end encryption by decrypting and then re-encrypting messages on a Beeper server, though it had vowed to change that in future updates. Beeper cofounder Eric Migicovsky argued that Apple's heavy-handed move to reduce Apple-to-Android texts to traditional text messaging was hardly a more secure alternative.

"It's kind of crazy that we're now in 2024 and there still isn't an easy, encrypted, high-quality way for something as simple as a text between an iPhone and an Android," Migicovsky told WIRED in January. "I think Apple reacted in a really awkward, weird way, arguing that Beeper Mini threatened the security and privacy of iMessage users, when in reality, the truth is the exact opposite."

Even as Apple has faced accusations of hoarding iMessage's security properties to the detriment of smartphone owners worldwide, it has only continued to improve those features: in February it upgraded iMessage to use new cryptographic algorithms designed to be immune to quantum codebreaking, and last October it added Contact Key Verification, a feature designed to prevent man-in-the-middle attacks that spoof intended contacts to intercept messages. Perhaps more importantly, it has said it will adopt the RCS standard to allow for improvements in messaging with Android users, although the company did not say whether those improvements would include end-to-end encryption.

The DOJ Puts Apple's iMessage Encryption in the Antitrust Crosshairs - WIRED


Vulnerability found in Apple’s Silicon M-series chips and it can’t be patched – Mashable

A new security vulnerability has been discovered in Apple's Mac and MacBook computers, and the worst part is that it's unpatchable.

Academic researchers discovered the vulnerability, first reported by Ars Technica, which allows hackers to gain access to secret encryption keys on Apple computers with Apple's new Silicon M-series chips. This includes the M1, M2, and M3 Apple MacBook and Mac computer models.

Basically, this vulnerability can be found in any new Apple computer released from late 2020 to today.

The issue lies with prefetchers, components meant to predictively retrieve data before a request in order to increase processing speed, and the opening they leave for malicious attacks from bad actors.

The researchers have dubbed the attack "GoFetch," which they describe as "a microarchitectural side-channel attack that can extract secret keys from constant-time cryptographic implementations via data memory-dependent prefetchers (DMPs)."

A side-channel attack is a type of cyber attack that uses extra information that's left vulnerable due to the design of a computer protocol or algorithm.

The researchers explained the issue in an email to Ars Technica:

Prefetchers usually look at addresses of accessed data (ignoring values of accessed data) and try to guess future addresses that might be useful. The DMP is different in this sense as in addition to addresses it also uses the data values in order to make predictions (predict addresses to go to and prefetch). In particular, if a data value "looks like" a pointer, it will be treated as an "address" (where in fact it's actually not!) and the data from this "address" will be brought to the cache. The arrival of this address into the cache is visible, leaking over cache side channels.

Our attack exploits this fact. We cannot leak encryption keys directly, but what we can do is manipulate intermediate data inside the encryption algorithm to look like a pointer via a chosen input attack. The DMP then sees that the data value "looks like" an address, and brings the data from this "address" into the cache, which leaks the "address." We don't care about the data value being prefetched, but the fact that the intermediate data looked like an address is visible via a cache channel and is sufficient to reveal the secret key over time.

Basically, the researchers discovered that the DMPs in Apple's M1, M2, and M3 silicon can give hackers access to sensitive information, such as secret encryption keys. The DMPs can be weaponized to get around the security found in cryptography apps, and they can do so quickly, too: the researchers were able to extract a 2048-bit RSA key in under one hour.
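The mechanism the researchers describe, a prefetcher dereferencing anything that "looks like" a pointer, can be caricatured in a few lines. This is a purely illustrative toy model with an invented address range and cache-line size, not Apple's actual microarchitecture:

```python
ADDR_LO, ADDR_HI = 0x1000, 0x2000   # invented "valid pointer" range
CACHE_LINE = 64                      # bytes per cache line (a typical size)

def dmp_touched_lines(memory):
    """Cache lines a toy DMP pulls in while scanning memory values."""
    touched = set()
    for value in memory:
        if ADDR_LO <= value < ADDR_HI:        # value "looks like" a pointer...
            touched.add(value // CACHE_LINE)  # ...so its target is fetched
    return touched

# Chosen-input attack: craft the victim's intermediate data so that it
# looks like a pointer only when a secret bit is 1.
def victim_intermediate(secret_bit):
    return [0x1A40] if secret_bit else [0x0007]

# The attacker probes the cache and infers the bit from what got fetched:
for bit in (0, 1):
    leaked = len(dmp_touched_lines(victim_intermediate(bit))) > 0
    print(bit, leaked)   # the observable cache activity tracks the secret bit
```

The point of the toy is only that the *fetch itself* is the leak: the attacker never reads the secret, just observes which cache lines became hot.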

Usually, when a security flaw is discovered, a company can patch the issue with a software fix. However, the researchers say this one is unpatchable because it lies in the "microarchitectural" design of the chip. Furthermore, security measures that would help mitigate the issue would seriously degrade the M-series chips' performance.

Researchers say that they first brought their findings to Apple's attention on December 5, 2023. They waited 107 days before disclosing their research to the public.

Vulnerability found in Apple's Silicon M-series chips and it can't be patched - Mashable


A vulnerability in Apple M-series chips could expose encryption keys and harm performance and the flaw is … – ITPro

Researchers have uncovered a vulnerability etched into the design of Apple M-series chips that could allow attackers to extract secret encryption keys while cryptographic operations are performed.

Six academic researchers at institutions across the US authored a paper outlining a vulnerability they dubbed GoFetch, which leaks cryptographic data from the CPU cache that hackers can use to piece together a cryptographic key.

"GoFetch is a microarchitectural side-channel attack that can extract secret keys from constant-time cryptographic implementations via data memory-dependent prefetchers (DMPs)," stated a blog post published by the authors.

GoFetch relies on exploiting a relatively new microarchitectural design feature, found only on Apple M-series chips and Intel's Raptor Lake microarchitecture, intended to reduce memory-access latency, a common CPU bottleneck.

DMPs proactively load data into the CPU cache before it is directly required, helping to reduce latency between the main memory and CPU.

This technology is vulnerable to cache side-channel attacks, which "observe the side effects of the victim program's secret-dependent accesses to the processor cache," according to the paper.

During the prefetching process, the DMP must make a series of predictions on what data will be required, based on previous access patterns, and attackers can exploit this side channel to steal information.


A popular workaround that neutralizes this threat is constant-time programming, which standardizes the execution time of operations regardless of the size of the input by ensuring the code makes no secret-dependent memory accesses.
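The constant-time idea is easy to illustrate: a naive comparison exits at the first mismatch, so its running time depends on the secret, while a constant-time comparison does the same work regardless. Below is a minimal sketch using the Python standard library's `hmac.compare_digest`; GoFetch's point, covered next, is that a DMP can undermine even code written this way:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: runtime depends on where the first mismatch
    occurs, leaking information about the secret through timing."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:          # bails out early -> secret-dependent timing
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    """Examines every byte regardless of mismatches (stdlib helper)."""
    return hmac.compare_digest(a, b)

secret = b"correct horse battery staple"
print(naive_equal(secret, b"correct horse battery stapl!"))   # False
print(constant_time_equal(secret, bytes(secret)))             # True
```

Both functions return the same answers; the difference is only in how long they take as a function of the inputs, which is exactly what a timing side channel measures.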

The new paper from Chen et al. demonstrates how DMPs often compromise the security of constant-time programming by mixing up memory content with pointer values that are used to direct the DMP to load other data.

"We show that even if a victim correctly separates data from addresses by following the constant-time paradigm, the DMP will generate secret-dependent memory access on the victim's behalf, resulting in variable-time code susceptible to our key-extraction attacks," Chen et al. explained.

Applications using the GoFetch attack can manipulate data so that it looks like a pointer value; the DMP then treats it as an address and brings the data from that location into the cache, where it is visible and leaked over cache side channels.

The vulnerability can be exploited when the cryptographic operation being targeted is running on the same CPU cluster as the malicious application.

The authors stated they will soon release proof-of-concept code demonstrating GoFetch's attack path.

This vulnerability cannot be patched directly as it stems from the microarchitectural design of the silicon itself, the paper stated.

Notably, Intel's Raptor Lake CPU architecture doesn't share this vulnerability with its M-series counterparts, despite sharing the same kind of prefetcher as Apple's chips.

This shows that the vulnerability can be addressed by altering the silicon, but such a fix will only be available in future Apple M-series architectures, for which the CPU will need to be redesigned.

As a result, current M-series chips exposed to the vulnerability cannot be patched in silicon, and businesses using these devices can only try to mitigate, through third-party software, the potential damage a successful exploit could cause.

But integrating extra layers of protection into third-party cryptographic software will take a significant toll on encryption and decryption performance, leaving developers with a difficult choice between efficiency and security.

At the time of writing, Apple has not published any release dates for an official fix.

A vulnerability in Apple M-series chips could expose encryption keys and harm performance and the flaw is ... - ITPro
