Fast magnetic imaging with diamond-based quantum sensor technology – Phys.org

by Fraunhofer Institute for Applied Solid State Physics IAF

Microscopic imaging of magnetic fields, enabled by quantum sensing, allows the measurement of the unique magnetic fingerprint of objects. This opens the door for fundamentally new applications in various fields such as materials testing or biomedicine.

Fraunhofer IAF has developed an innovative method using fast camera images in the form of an improved wide-field magnetometer. The system offers a unique balance of sensitivity, resolution and speed. It will be presented at LASER World of QUANTUM 2023, held June 27-30 in Munich, as part of the Quantum Sensing Hub Freiburg, in which the institutes Fraunhofer IAF, IPM and IWM pool their expertise in the field of quantum magnetometry.

Researchers at the Fraunhofer Institute for Applied Solid State Physics IAF have managed to harness the great potential of quantum sensor technology based on nitrogen-vacancy (NV) centers in a unique measurement setup. Their wide-field magnetometer enables the magnetic stray field of a sample to be measured rapidly over a large area. Its measurements are absolutely quantifiable, with a resolution down to the nanometer range.

This measurement method opens up new avenues in metrology and is suitable for various industries such as (nano-)electronics, material sciences or biomedicine due to its wide range of applications, from inorganic to organic samples.

The innovative measurement system was developed in the course of the Fraunhofer lead project QMag, through which concentrated magnetometry expertise and infrastructure has grown in Freiburg im Breisgau at the three institutes Fraunhofer IAF, IPM and IWM, which together form the Quantum Sensing Hub Freiburg.

Image caption: Schematic of the measuring principle of the wide-field magnetometer. Due to the direct contact of the sample with the diamond sensor, magnetic nanoparticles can be measured precisely and quickly. Credit: Fraunhofer IAF

Wide-field magnetometry is based on NV centers in thin diamond films and is a young approach in quantum sensing. The measurement setup developed at Fraunhofer IAF uses an arbitrary waveform generator (AWG), which generates microwave radiation and triggers both a laser and the recording time window of a camera with nanosecond precision. Support for different measurement protocols gives the system high flexibility and measurement precision.

"The wide-field magnetometer benefits not only from our improved setup, but also from the growth process developed at Fraunhofer IAF for diamond plates, which we use as sensors," explains Dr. Jan Jeske, deputy business unit manager quantum devices at Fraunhofer IAF. The substrates grown at the institute are based on (100)-oriented, pure, undoped diamond of type "IIa" with a thickness of 500 m and an area of 4 x 4 mm. This substrate is overgrown with a thin layer in which the NV centers for the sensor application are generated close to the sample.

In materials science, experimental methods are used to characterize polycrystalline materials in order to obtain a microscopic understanding of the macroscopic material behavior. This makes it possible to better understand materials and optimize their properties. However, current methods usually rely on long measurement times and large experimental facilities. Often, vacuum conditions or high-energy particles are also necessary, which can have a detrimental effect on the sample material.

Wide-field magnetometry based on NV centers is an alternative, non-invasive method that operates at room temperature. This opens up new possibilities for insights into the microscopic magnetic field distribution, which has great potential for material analyses. The system is not limited to inorganic material samples, but can also be applied to organic samples due to its comparatively low demands on the measurement environment. These measurement properties, coupled with the high measurement speed of the method developed at Fraunhofer IAF, allow even complex measurements such as fluctuations, alternating fields and alternating current (AC) measurements, paving the way for new material analysis methods.

Provided by Fraunhofer Institute for Applied Solid State Physics IAF

How Does Multiverse Theory Relate to Time Travel? – DISCOVER Magazine

While many sci-fi writers have wondered if time travel into the past can be allowed as long as you slip into an alternate universe, you should be warned: Nature doesn't like a cheater.

Time travel into the past is a real pain in the neck. For one, as far as we can tell it's forbidden in our universe. You can travel into the future as slowly or as quickly as you like, but you can't stop time or go in reverse.

Technically speaking, physicists are not exactly sure why time travel is forbidden. We don't have a single law of physics that clearly rejects the possibility.

But every time we try to concoct a time travel concept, like wormholes or twisting paths around infinitely long cylinders, we find some reason that it's not allowed. Wormholes, it turns out, require negative mass to stabilize themselves, and infinitely long cylinders are rather hard to come by.

Read More: Why Do Humans Perceive Time The Way We Do?

Even within any glimmer of a possibility that time travel might be possible, traveling into our own pasts opens up all sorts of noxious paradoxes, as plenty of films and comic book storylines have demonstrated.

Beyond the fact that the past is over and done with, we must reckon with the reality that what happened in the past created the present that we experience.

To put the paradox succinctly: If you change the past, you change the present, but the present already exists.

One of the most common examples of this head-scratching scenario is known as the grandfather paradox.

Suppose you travel back in time and kill your own grandfather before he met your grandmother (no need to dissect your motivations behind this hypothetical act here). That means one of your parents was never born, which means you never exist. But if you never existed, how did you go back in time to kill your grandfather in the first place?

Read More: What Is the Grandfather Paradox of Time Travel?

To address this dilemma, we could invoke the idea of parallel or alternate universes, a concept under the general umbrella of the multiverse.

Perhaps when you go back and kill your grandfather, you create a new universe where you don't exist. You can then return to your original universe where everything is hunky-dory (or, at least, you're still alive). Essentially, you only monkeyed around with some other, disconnected cosmos.

There are, in fact, two places in physics where the multiverse concept pops up. Unfortunately for any wannabe time travelers, neither of them allows for this type of parallel-futures hopping.

The first place where alternate universe theories have gained traction is in quantum mechanics.

Subatomic processes are inherently random. When we run an experiment, we never know what result we're going to get. One explanation for this is that all experimental results happen, just in separate universes.

If someone shoves an electron through a strong magnetic field, for example, and it has a fifty-fifty chance of deflecting either up or down, then every time they run an electron through the device, they get two universes: one where the electron went up, and one where the electron went down.

As the theory goes, this kind of splitting isn't just limited to arcane physics experiments. It happens with every single quantum interaction throughout the cosmos, which is a lot of interactions, thus generating countless parallel worlds.

In this view, there is a universe out there that already exists where your grandfather is dead and you were never born. But you could never see it.

Read More: Advanced Quantum Material Curves the Fabric of Space

Even if the theory above is actually happening, the rules of quantum physics are absolutely clear: There is no travel or communication between these universes.

Granted, "universe" is probably the wrong word here. That's because the splitting really amounts to separate partitions of a single quantum wave function. And that separation is really permanent.

If there was any kind of connection allowed, then the many branching possibilities of quantum processes simply wouldn't happen.

The second place where the multiverse pops up in physics is in studies related to the extremely early universe.

From this lens, our cosmos could be just one of many, like a bubble in an infinite foam of universes. These universes too could contain all the many interesting and varied possibilities that make life so fun.

In this (extremely hypothetical) view, there are many universes out there that contain alternate realities from the one you know. There's one where your grandfather died. There's another where no stars ever formed. And there's a universe where all your matter is replaced with antimatter, and so on.

Because all these bubble universes exist in the same expanding foam that is reality, those alternate universes really are out there, an unimaginable but still-finite distance away.

But even in this foam-like collection of bubble universes, time still does its thing. That is, it moves forward, always.

Even if you could concoct some scheme to travel among the alternate universes, those universes still share the same spacetime fabric, marching forward along with us. Thus they all appear to forbid time travel, as the theory goes.

They also, in theory, share the reality that nature doesn't like a cheater. So, if you're theoretically going to visit a universe where you never existed, maybe nix the time travel method.

Read More: Scientists Attempt to Map the Multiverse

Research – Stony Brook University

Quantum Information Science and Technology (QIST) is a key area of growth for Stony Brook University, and a central piece of investment from President Maurie McInnis' Innovation and Excellence (PIE) Fund.

The field of QIST offers scientific opportunities that may revolutionize information technology and how we store, manipulate and transmit information.

Earlier this year, faculty were asked to submit proposals for protocenters to form a QIST Consortium. A protocenter represents the planning stage of a QIST research program and, based on its success in key domains, including the generation of cutting-edge research and external funding, each has the potential to grow into a university-recognized center within the QIST Consortium.

Protocenter proposals requested between $100,000 and $300,000 per year for up to four years. At the end of that period, each protocenter's success will be evaluated to determine additional funding; protocenters demonstrating considerable success and potential for long-term sustainability will then transition into university-recognized centers within the QIST Consortium.

The review panel selected two proposals to fund for the next four years.

Leon Shterengas, professor in the Department of Electrical and Computer Engineering in the College of Engineering and Applied Sciences, serves as Principal Investigator for the Devices for Quantum Sensing and Communication Protocenter. Along with co-PIs Jennifer Cano (assistant professor, Department of Physics and Astronomy in the College of Arts and Sciences), Eden Figueroa (associate professor, Department of Physics and Astronomy), Dmitri Donetski (associate professor, Department of Electrical and Computer Engineering) and Sergey Suchalkin (associate professor, Department of Electrical and Computer Engineering), this research center will focus on creating new and advanced devices that use the principles of quantum physics.

The initial goal of the research center will be the development of photonic devices for quantum communication and sensing. The primary targets will be the design and fabrication of single-photon and entangled-photon emitters for the quantum internet and distributed sensing, as well as single-photon detectors for quantum ghost imaging in the mid-infrared region of the spectrum.

The research center will also work on the development of a scalable material platform for quantum computation and signal processing. Advanced epitaxial and nanofabrication capabilities, including metamorphic epitaxy of ultra-short-period superlattices and monolithic integration with photonic crystals, will be applied.

"This research center offers a unique opportunity to leverage state-of-the-art semiconductor optoelectronic technological capabilities developed at the Belenky Lab at the Department of Electrical and Computer Engineering to achieve critical advances in the QIST field," Shterengas said.

Himanshu Gupta, professor in the Department of Computer Science, serves as Principal Investigator for the Center for Quantum Computing and Networks Protocenter. The following faculty members serve as co-PIs: Aruna Balasubramanian (associate professor, Department of Computer Science), Xianfeng (David) Gu (professor, Department of Computer Science), Jon Longtin (Interim Dean of the College of Engineering and Applied Sciences and professor, Department of Mechanical Engineering), Omkant Pandey (associate professor, Department of Computer Science), Supartha Podder (assistant professor, Department of Computer Science; NSF Quantum Faculty Fellow), C. R. Ramakrishnan (professor, Department of Computer Science) and Nengkun Yu (associate professor, Department of Computer Science; SUNY Empire Innovation Scholar).

The research conducted in this protocenter will mainly focus on solving complex problems related to the design and development of large, scalable, and reliable quantum computing systems. In particular, the protocenter will have four specific research areas: (i) Quantum Networks and Distributed Quantum Computing, (ii) Quantum Algorithms and Advantages, (iii) Quantum Cryptography and (iv) Quantum Verification.

The specific research goals of the center align well with the eight frontiers identified in the 2020 Quantum Frontiers report by the White House's National Quantum Coordination Office. This broad research agenda positions the protocenter to attract collaborators and compete for different funding opportunities. The overall goals of the protocenter are to perform fundamental research in the scientific and technological foundations of quantum computing platforms and to build a vibrant, diverse community of researchers and practitioners.

Protocenter PI Himanshu Gupta said, "We are thankful to the university for giving us this opportunity to build a center on Quantum Computing and Networks. We hope to leverage the strengths of our team to build a self-sustaining, visible center. Quantum Computing has the potential to significantly alter the computing landscape, and the focus of our center would be to conduct transformative research that can lead to new frontiers and usable technology in quantum information science."

Beth Squire

Read story "QIST Protocenters Receive a Piece of the PIE (Fund)" on SBU News

Researchers manufacture first-ever droplet-etched quantum dots that glow in C-band optical light – Phys.org

by Universität Paderborn

Paderborn researchers from the Department of Physics and the Institute for Photonic Quantum Systems (PhoQS) have succeeded in manufacturing quantum dots (nanoscopic structures where the material's quantum properties come into play) that glow in the optical C-band at wavelengths of 1530 to 1565 nanometers.

This is particularly special as it is the first time that quantum dots like these have been manufactured by the local droplet etching and subsequent filling of nanoholes in an indium aluminum arsenide / indium gallium arsenide system lattice-matched to indium phosphide substrates.

In the future, these quantum dots could for example be used as a source of entangled photons, which might be relevant for innovative encryption systems involving quantum technologies. Luminescence in the optical C-band is particularly relevant here: signal loss in fiber optic networks is minimal at these wavelengths, enabling potential future use with the current network. The researchers have now published their findings in the journal AIP Advances.

The team, consisting of Dennis Deutsch, Christopher Buchholz, Dr. Viktoryia Zolatanosha, Prof. Dr. Klaus Jöns and Prof. Dr. Dirk Reuter, etched nanoholes in an indium aluminum arsenide surface and filled them with indium gallium arsenide.

"One critical element of manufacturing quantum dots, if they are to be used for generating entangled photons, is lattice matching. If this is not performed, it causes tension in the quantum dot, which can dispel the quantum mechanical entanglement of the photons generated," Denis Deutsch explains.

Manufacturing quantum dots by filling droplet-etched holes is not new, but unlike in previous processes, the researchers used lattice matching to indium phosphide rather than gallium arsenide. The change of material enabled them to achieve emission in the C-band. As well as lattice matching materials, the symmetry of quantum dots is also a key factor in their suitability as an entangled photon source. The publication therefore also statistically evaluated and examined the symmetry of numerous holes manufactured using different parameters.

This is a long way from being technically implementable, but the method is already demonstrating its potential for manufacturing quantum dots. This is because in the future, quantum computing is likely to be far superior to traditional computers when it comes to encryption.

The phenomenon of entanglement is a promising approach to securely exchanging encrypted data, as any attempts to eavesdrop are exposed thanks to the laws of physics. Since entangled photons are exchanged via fiber optic cables, it is essential that transmission should be as low-loss as possible. "Manufacturing photons in the particularly low-loss optical C-band is therefore a major step forward in encryption using entangled photons," Deutsch concludes.

More information: D. Deutsch et al, Telecom C-band photon emission from (In,Ga)As quantum dots generated by filling nanoholes in In0.52Al0.48As layers, AIP Advances (2023). DOI: 10.1063/5.0147281

Journal information: AIP Advances

Provided by Universität Paderborn

Stephen Hawking and I created his final theory of the cosmos … – Daily Maverick

The late physicist Stephen Hawking first asked me to work with him to develop a new quantum theory of the Big Bang in 1998. What started out as a doctoral project evolved over some 20 years into an intense collaboration that ended only with his passing on March 14, 2018.

The enigma at the centre of our research throughout this period was how the Big Bang could have created conditions so perfectly hospitable to life. Our answer is being published in a new book, On the Origin of Time: Stephen Hawking's Final Theory.

Questions about the ultimate origin of the cosmos, or universe, take physics out of its comfort zone. Yet this was exactly where Hawking liked to venture. The prospect, or hope, of cracking the riddle of cosmic design drove much of Hawking's research in cosmology. "To boldly go where Star Trek fears to tread" was his motto and also his screen saver.

Our shared scientific quest meant that we inevitably grew close. Being around him, one could not fail to be influenced by his determination and optimism that we could tackle mystifying questions. He made me feel as if we were writing our own creation story, which, in a sense, we did.

In the old days, it was thought that the apparent design of the cosmos meant there had to be a designer: a God. Today, scientists instead point to the laws of physics. These laws have a number of striking life-engendering properties. Take the amount of matter and energy in the universe, the delicate ratios of the forces, or the number of spatial dimensions. Physicists have discovered that if you tweak these properties ever so slightly, it renders the universe lifeless. It almost feels as if the universe is a fix, even a big one.

But where do the laws of physics come from? From Albert Einstein to Hawking in his earlier work, most 20th-century physicists regarded the mathematical relationships that underlie the physical laws as eternal truths. In this view, the apparent design of the cosmos is a matter of mathematical necessity. The universe is the way it is because nature had no choice.

Around the turn of the 21st century, a different explanation emerged. Perhaps we live in a multiverse, an enormous space that spawns a patchwork of universes, each with its own kind of Big Bang and physics. It would make sense, statistically, for a few of these universes to be life-friendly. However, such multiverse musings soon got caught in a spiral of paradoxes and yielded no verifiable predictions.

Can we do better? Yes, Hawking and I found out, but only by relinquishing the idea, inherent in multiverse cosmology, that our physical theories can take a God's-eye view, as if standing outside the entire cosmos.

It is an obvious and seemingly tautological point: cosmological theory must account for the fact that we exist within the universe. "We are not angels who view the universe from the outside," Hawking told me. "Our theories are never decoupled from us." We set out to rethink cosmology from an observer's perspective. This required adopting the strange rules of quantum mechanics, which governs the microworld of particles and atoms.

According to quantum mechanics, particles can be in several possible locations at the same time, a property called superposition. It is only when a particle is observed that it (randomly) picks a definite position. Quantum mechanics also involves random jumps and fluctuations, such as particles popping out of empty space and disappearing again.

In a quantum universe, therefore, a tangible past and future emerge out of a haze of possibilities by means of a continual process of observing. Such quantum observations don't need to be carried out by humans. The environment, or even a single particle, can observe. Countless such quantum acts of observation constantly transform what might be into what does happen, thereby drawing the universe more firmly into existence. And once something has been observed, all other possibilities become irrelevant.

We discovered that when looking back at the earliest stages of the universe through a quantum lens, there's a deeper level of evolution in which even the laws of physics change and evolve, in sync with the universe that is taking shape. What's more, this meta-evolution has a Darwinian flavor.

Variation enters because random quantum jumps cause frequent excursions from what's most probable. Selection enters because some of these excursions can be amplified and frozen, thanks to quantum observation. The interplay between these two competing forces, variation and selection, in the primeval universe produced a branching tree of physical laws.

The upshot is a profound revision of the fundamentals of cosmology. Cosmologists usually start by assuming laws and initial conditions that existed at the moment of the Big Bang, then consider how today's universe evolved from them. But we suggest that these laws are themselves the result of evolution.

Dimensions, forces, and particle species transmute and diversify in the furnace of the hot Big Bang, somewhat analogous to how biological species emerge billions of years later and acquire their effective form over time.

Moreover, the randomness involved means that the outcome of this evolution, the specific set of physical laws that makes our universe what it is, can only be understood in retrospect. In some sense, the early universe was a superposition of an enormous number of possible worlds. But we are looking at the universe today at a time when humans, galaxies and planets exist. That means we see the history that led to our evolution.

We observe parameters with lucky values. But we are wrong to assume they were somehow designed or always like that.

The crux of our hypothesis is that, reasoning backward in time, evolution towards more simplicity and less structure continues all the way. Ultimately, even time, and with it the physical laws, fades away.

This view is especially borne out of the holographic form of our theory. The holographic principle in physics predicts that just as a hologram appears to have three dimensions when it is in fact encoded in only two dimensions, the evolution of the entire universe is similarly encoded on an abstract, timeless surface.

Hawking and I view time and causality as emergent qualities, having no prior existence but arising from the interactions between countless quantum particles. It's a bit like how temperature emerges from many atoms moving collectively, even though no single atom has temperature. One ventures back in time by zooming out and taking a fuzzier look at the hologram. Eventually, however, one loses all information encoded in the hologram. This would be the origin of time: the Big Bang.

For almost a century, we have studied the origin of the universe against the stable background of immutable laws of nature. But our theory reads the universe's history from within and as one that includes, in its earliest stages, the genealogy of the physical laws. It isn't the laws as such but their capacity to transmute that has the final word.

Future cosmological observations may find evidence of this. For instance, precision observations of gravitational waves, ripples in the fabric of spacetime, may reveal signatures of some of the early branches of the universe. If spotted, Hawking's cosmological finale may well prove to be his greatest scientific legacy. DM

This story was first published in The Conversation.

Thomas Hertog is a Professor of Physics at KU Leuven.

15 Action Films with Plots So Convoluted, You’ll Need an Actual Map – Startefacts

These are the movies that made our brains hurt. Repeatedly.

Welcome, dear reader, to a bewildering cinematic landscape filled with narrative twists, plot turns and mind-bending realities that refuse to adhere to the law of simplicity.

Here, in the sometimes confusing but always entertaining world of action films, some plots demand more than just your popcorn and soda: they require a compass, a map, and perhaps a background in quantum physics.

15. Inception (2010)

Christopher Nolan's tour de force through dreamscapes is a riveting exploration of the human subconscious. Armed with an ensemble cast and a plot so twisted that it makes a pretzel look straight, 'Inception' requires a notepad, a pause button, and a personal lecture from Nolan himself to understand. Dreams within dreams within dreams, and no, pinching yourself won't help here.

14. Primer (2004)

Shane Carruth's low-budget indie darling doesn't just walk the line of complex narrative; it pole-vaults over it. 'Primer' takes a hard dive into time travel and wraps its plot around a Möbius strip of paradoxes. With a script that would baffle Einstein, it's the kind of movie that makes Sudoku seem like child's play.

13. Ghost in the Shell (1995)

This seminal anime film takes us deep into a cyberpunk universe where cybernetic humans coexist with artificial intelligence. What starts as a simple investigation spirals into a metaphysical discourse about consciousness, identity, and reality. Halfway through, you may find yourself wishing for an actual shell to crawl into.

12. The Matrix Revolutions (2003)

If the first 'Matrix' was a refreshing wake-up call, the third instalment felt more like being trapped in a recursive nightmare. 'Revolutions' amps up the series' philosophy quotient, culminating in a finale that could make Schrödinger's cat scratch its head in confusion. Red pill or blue pill? How about a headache pill?

11. Predestination (2014)

'Predestination' isn't just a movie; it's a mind-bending odyssey through time and fate that challenges you at every twist. The Spierig Brothers crafted a narrative so interwoven, so cyclical, it defies straightforward comprehension. Ethan Hawke gives a stellar performance as a Temporal Agent, but it's the screenplay that's the real star, a star around which we orbit in dizzying circles, trying to piece together a plot as elusive as time itself.

10. Donnie Darko (2001)

Richard Kelly's cult classic introduces us to a disturbed teenager, a monstrous rabbit, and a series of bizarre events culminating in an impending apocalypse. However, beneath this strange premise is a narrative that merges quantum mechanics, time travel, and philosophical dilemmas. 'Donnie Darko' doesn't merely require a map; it needs an astrophysicist to decipher its multi-layered plot. Keep an eye on that countdown clock; it's ticking toward a cerebral workout!

9. Looper (2012)

Rian Johnson's 'Looper' offers an intriguing take on time travel. It's not merely the complex mechanics of past meeting future that demands our attention; it's the moral and existential questions that come along. While Bruce Willis and Joseph Gordon-Levitt deliver impressive performances, the plot's intricacies are as twisted as the looping timelines it portrays. A note to the viewer: a detailed flowchart might be as necessary as a bucket of popcorn.

8. Cloud Atlas (2012)

The Wachowskis and Tom Tykwer presented us with an ambitious narrative spanning five centuries in 'Cloud Atlas'. With six intertwined stories exploring themes of reincarnation, destiny, and interconnectedness, it's less of a film and more of a cinematic jigsaw puzzle. Each story is a piece of the larger narrative, and assembling them in order can be as daunting as scaling a mountain peak.

7. Interstellar (2014)

Another entry from the mind of Christopher Nolan, 'Interstellar' takes us on an awe-inspiring journey through wormholes, across galaxies, and into the heart of a black hole. Between the science of relativity and the concept of five-dimensional space-time, the plot becomes as infinite as the universe itself. An actual map? You might need an actual spaceship.

6. Mr. Nobody (2009)

Jaco Van Dormael's magnum opus 'Mr. Nobody' is a tapestry of tangled timelines, quantum physics, and 'what if' scenarios. We follow Nemo Nobody, played by Jared Leto, as he navigates different versions of his life based on various choices. The film is an exhilarating exercise in mental gymnastics, making the labyrinth of Minos seem like a backyard maze.

5. Memento (2000)

Yet again, we find ourselves ensnared in Nolan's web. 'Memento' subverts the traditional storytelling structure, unfolding its plot in a unique non-linear format that mirrors the protagonist's anterograde amnesia. It's an ingenious narrative trick, but it turns the plot into a jigsaw puzzle, one you're asked to solve while blindfolded.

4. Twelve Monkeys (1995)

Terry Gilliam's dystopian time-travel thriller 'Twelve Monkeys' is an intellectual maze that intertwines the past, present, and future in a seemingly chaotic fashion. It combines elements of mental illness, environmental devastation, and fatalism in a plot as intricate as the gears of a clock. Tick-tock goes the plot, leaving us to untangle its threads.

3. The Adjustment Bureau (2011)

What happens when a romantic thriller meets a science fiction mystery? You get 'The Adjustment Bureau'. Matt Damon stars as a politician whose life is manipulated by a group of mysterious agents controlling fate itself. As the film delves into philosophical questions about destiny and free will, the plot takes more turns than a labyrinth. Here's a tip: don't adjust your screen, adjust your expectations of understanding everything at the first watch.

2. Timecrimes (2007)

This Spanish sci-fi thriller by Nacho Vigalondo presents a multi-layered exploration of time travel that makes a Rubik's cube look like a walk in the park. As the protagonist stumbles into a series of eerie and confusing events, the plot spirals into a dizzying temporal paradox. A word of advice: keep track of the bandaged man, and remember, every action has a consequence, especially when time travel's involved.

1. Edge of Tomorrow (2014)

Closing our list is this science fiction action film that takes 'rinse and repeat' to a whole new level. Tom Cruise plays a soldier who, after being killed in a battle against alien invaders, finds himself in a time loop that forces him to relive the same combat over and over again. The complexity isn't just in the plot, but in the intricate weave of the timeline. Think Groundhog Day meets Starship Troopers with a dash of temporal chaos.

How Can RGB Improve Bitcoin? – Bitfinex blog – Bitfinex

RGB is a smart contract layer and off-chain protocol built on the Bitcoin blockchain, which allows for the minting and issuing of Bitcoin-based digital assets. With RGB, users will be able to issue stablecoins, tokens, Non Fungible Tokens (NFTs), and create client-validated confidential smart contracts on Bitcoin.

With the success of Ethereum's Web3 smart contract ecosystem, and the recent hype surrounding Ordinals and Inscriptions, BRC-20 tokens, Stamps, and other methods of creating tokenised assets on Bitcoin, it's clear that crypto users really want smart contracts and tokens on Bitcoin.

One of the key philosophical differences between Bitcoin and Web3 blockchains like Ethereum is the desire in Bitcoin to push unnecessary data off-chain, to keep the base layer blockchain simple, nimble, and scalable. The less data that needs to be stored on-chain, the easier it is for a user to run their own node, helping with the network's overall decentralisation and censorship-resistance.

Bitcoiners have long disagreed with Ethereum's design choices of having the computational logic of smart contracts, tokens, and more elaborate use cases like Decentralised Finance (DeFi) or Decentralised Exchanges (DEXs) on-chain on the base layer blockchain.

Adding this complexity to layer one makes it harder for normal users to run their own node, which adds a vector for centralisation, and makes the network much harder to scale as it consumes more bandwidth and computational resources.

It also makes the chain easier to attack or censor, as there are fewer nodes, and they are more often run by companies or individual blockchain projects that rely on data-centre services like Infura, rather than by average users on their own personal computers. Public organisations like a blockchain startup or foundation have a much larger regulatory attack surface than a random anon running a node at home.

With the advent of the Lightning Network, a cheap and instant settlement layer with vast improvements to scalability, Bitcoiners now have a viable path forward to implement smart contracts, tokens, and other Web3-style features. Better yet, they can be implemented in a way that could address the perceived flaws of adding smart contracts and complexity to the base layer blockchain.

Lightning's scalability benefits allow smart contracts and associated data to be stored off-chain, and its high transaction throughput and instant settlement make for more performant smart contracts, done the right way from a design perspective.

With the more recent attempts at tokenisation and smart contracts on Bitcoin, like Ordinals, which have been in the news a lot recently, the approach has been accused of being sloppily hacked and slapped together.

Inscriptions are created by including an arbitrary blob of data in the SegWit witness data using Taproot spend scripts, which were never intended to be used in this way. This creates more on-chain data, making node operation more resource-intensive and Bitcoin harder to scale.

As a result, Ordinals, Inscriptions, and BRC-20 tokens suffer from technical limitations that Ethereum's ERC-20, ERC-721, and ERC-1155 token standards do not.

With RGB, however, smart contracts and tokenised assets will have several notable advantages over Ethereum's current tokenisation schemes.

Simply put, RGB is a layer-two network for asset tokenisation and smart contracts on Bitcoin and the Lightning Network using the client-side validation model.

RGB stands for Red, Green, Blue, paying homage to Bitcoin's "coloured coins", which laid the foundation for the project's research. RGB is based on research by Peter Todd, which was later adapted and repurposed by Giacomo Zucco to create the design for RGB.

After several years of continuous improvements and development, RGB is now represented by a diverse ecosystem of entities and individuals who build on top of the protocol and contribute to it.

The way RGB works is quite innovative. RGB employs client-side validation, which means that all the data associated with transactions and smart contracts is kept off-chain and verified by the user. Client-side validation has two principal advantages.

First, it enables Lightning Network compatibility without any additional changes to the Lightning Network protocol. Second, it lays a strong base for more scalability, privacy, and programmability, since it keeps all the validation logic outside the blockchain.

This means that on-chain fees are minimised, blockchain observers cannot violate the privacy of RGB users, and there is no need for a global consensus to update the protocol and its validation logic.

This client-side validation model employs single-use seals defined over Bitcoin transaction outputs (UTXOs), which provide the ability to fully leverage Bitcoin's double-spending protection and censorship resistance without any trust compromise.

An asset is always allocated to a Bitcoin output (which plays the role of the single-use seal in the protocol), and when the asset is transferred to a new owner, that output needs to be spent, meaning that a Bitcoin transaction is created with that UTXO as an input, and the off-chain RGB transfer data defines which UTXO is the new owner of the asset.

To issue a new asset, a user needs to create an issuance contract, which defines all the parameters of the token, such as total supply, metadata, media attachments and transaction validation rules.

The user defines these parameters for each smart contract in a Schema, which can be seen as a template for creating new issuance contracts. Different Schemas can be used for different use cases: for example, issuing fungible tokenised assets uses one Schema, issuing NFTs uses another, while a digital identity may use yet another.

Each RGB smart contract operates independently, as its own shard or segment. Contracts never interact directly, meaning that, unlike with other token protocols, each user needs to validate only the contracts that are relevant to them, keeping the hardware requirements for full validation low.

Thanks to its compatibility with the Lightning Network, it is possible to send and receive RGB assets with the same speed and cost as Bitcoin Lightning Network payments, while retaining all the security of the Bitcoin blockchain. This represents a paradigm shift compared with other asset protocols, which usually achieve fast and cheap transactions only by sacrificing network decentralisation, security and censorship resistance.

Moreover, the Lightning Network can also be leveraged to enable DEX functionality, letting users swap assets against Bitcoin directly on Lightning. Compared with liquidity-pool-based DEXes, this model offers lower latency, lower fees, more privacy, no risk of front-running by miners or flashbots, and certainty on the execution price, all elements that contribute to a more efficient market with better prices for all participants.

RGB, while extremely exciting, is still in its final stages of development and testing. Many developer tools and wallets have already been created, but at the time of writing they are still used only on testnet. RGB can be compared to the early days of the Lightning Network, when using it was considered reckless: the protocol may still have bugs, and users could potentially lose funds if they are not very careful.

That being said, for the brave and adventurous, there's already a great educational website for users who want to start understanding and playing around with the protocol, and for developers interested in building with RGB.

As it stands currently, there are several RGB wallets that users can start using to issue, send, and receive tokenised assets via RGB. While many Bitcoin wallets have not implemented support for RGB, it can be expected that they will as the protocol matures.

The current wallet offerings include Iris, Bitmask, MyCitadel, and Shiro, and new teams that want to build on top of RGB can leverage the rgb-lib library for faster development without having to dive too deep into the protocol's technicalities. Interested users can also elect to run their own RGB-compatible Lightning Network node, which can be used to create Lightning channels denominated in RGB assets and to send payments through them.

RGB is a very promising protocol that can solve many of the problems currently faced by the digital assets ecosystem. This is why Bitfinex is actively supporting RGB with an entire team dedicated full-time to advancing its development. Bitfinex's RGB team contributes not only to the protocol itself, but also by releasing several developer tools and launching the Iris Wallet and the Lightning Node. Tackling ecosystem-wide problems with new approaches is the best way to enable progress and advance the whole industry.

An Introduction to Binance Smart Contracts for Token Holders – Net Newsledger

Token holders know that they can benefit greatly from Binance Smart Contracts, but many may be worried about being taken in by a scam or fraud. This article shares the key factors that token holders need to consider in order to avoid getting entangled in a scam and suffering losses.

Binance Smart Contracts are a powerful tool for token holders looking to securely and reliably manage their crypto assets. Binance Smart Contracts provide users with the ability to create decentralized and trustless agreements, allowing them to protect themselves from potential scams and other malicious activities commonly associated with the cryptocurrency space. With Binance Smart Contracts, users can write contracts that automatically execute when certain conditions are met, and all transactions are securely stored on the blockchain. This allows users to streamline the process of sending money, managing their tokens, and exchanging goods or services amongst other participants in the network.
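To make the "execute when conditions are met" idea concrete, here is a minimal Solidity sketch (BNB Smart Chain contracts are written in the same language as Ethereum's). The contract name, variables and time-based release condition are invented for illustration; this shows the general pattern, not any particular Binance product:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch: funds locked here can be withdrawn by the
// designated recipient only once the agreed release time has passed.
// The condition is enforced by code on-chain, not by a middleman.
contract TimedRelease {
    address public immutable recipient;
    uint256 public immutable releaseTime; // Unix timestamp

    constructor(address _recipient, uint256 _releaseTime) payable {
        recipient = _recipient;
        releaseTime = _releaseTime;
    }

    function withdraw() external {
        require(block.timestamp >= releaseTime, "too early");
        require(msg.sender == recipient, "not the recipient");
        payable(recipient).transfer(address(this).balance);
    }
}
```

Once deployed with some funds attached, no party can release the money early or redirect it: the two require checks are evaluated by every node that processes the transaction.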

Binance Smart Contracts also offer a range of additional features such as multi-signature wallets, secure storage protocols, and decentralized applications, making the platform well suited for token holders to manage their assets and stay safe from theft or fraud. With these features, it is fast becoming one of the most popular ways to manage tokens and other crypto assets.

Binance Smart Contracts for Token Holders

Binance Smart Contracts offer token holders a number of benefits. With these contracts, token holders can control their own crypto assets without having to rely on third-party services or platforms. This gives them more control over their funds and reduces the risk of fraud or theft. Additionally, Binance Smart Contracts provide a way for users to transfer tokens securely and easily, without the need for a middleman. This makes it easier for users to send tokens between themselves in a trustless manner. The contracts also provide token holders with additional security features such as on-chain dispute resolution and scam prevention mechanisms.

These additional security measures can help reduce the risk of theft while providing users with more control over their funds. Overall, Binance Smart Contracts provide token holders with a secure and reliable way to manage their crypto assets. They also let users build decentralized applications on top of the blockchain without having to maintain trust in third-party services or platforms: participants interact directly with each other, without relying on intermediaries. This makes transacting on the blockchain easier and more secure while increasing the flexibility of users' crypto assets.

RING Financial is a good example of a project that utilized Binance Smart Contracts and faced some serious challenges along the way. Some have thrown accusations at RING Financial, claiming that the project was a scam. Let's study this example and see what it says about Binance blockchain scams and possible solutions to such scams in the crypto space. But first, let's see what the RING Financial project was designed to be.

The DeFi project was a protocol aimed at aggregating all the best staking protocols and giving access to all decentralized protocols. As many enthusiasts already know, the crypto space can be a challenge to navigate, and the RING Financial Token was designed to ease the process for all investors. RING Financial also aimed to reduce costs and fees: having been built on the Binance Smart Chain, it was able to offer lower price points to users. The project aimed at changing the DeFi space for its noders and was essentially on the way to achieving this goal. However, RING Financial suffered a hack on December 5, 2021.

Many crypto projects have suffered hacker attacks in the past, and RING Financial fell victim to the same issue. The project utilized the famously secure Binance smart contracts, but the development phase had a flaw that laid the project open to attack. The fundamental flaw was that the onlyOwner protection was not assigned to the Reward part of the project. Why didn't the RING Financial developers include it? They assumed that the code would automatically inherit the protections assigned to its parents, which is the standard behavior in many coding languages. As a result, anyone was able to modify this part of the code, exposing the project to attack. The project was subsequently hacked, leading to losses for users and, in the end, causing a decline in trust in RING Financial.
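For illustration only (this is not RING Financial's actual code, which is not reproduced here), the Solidity sketch below shows the class of flaw described. In Solidity, an access-control modifier such as onlyOwner protects only the functions on which it is explicitly written; it is never applied automatically just because related functions in the contract are protected:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Simplified, hypothetical sketch of the flaw class described above.
contract RewardVault {
    address public owner;
    uint256 public rewardRate;

    constructor() {
        owner = msg.sender;
    }

    modifier onlyOwner() {
        require(msg.sender == owner, "not owner");
        _;
    }

    // VULNERABLE: no onlyOwner modifier here, so anyone can call this
    // and rewrite the reward logic. Protection is never inherited implicitly.
    function setRewardRate(uint256 newRate) external {
        rewardRate = newRate;
    }

    // SAFE: the modifier must be stated explicitly on every privileged function.
    function setOwner(address newOwner) external onlyOwner {
        owner = newOwner;
    }
}
```

An audit or a test that simply attempts to call setRewardRate from a non-owner account would catch the unprotected function immediately, which is why access control should be verified function by function.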

So, what can we draw from the RING Financial case study? We can see that the crypto space is still developing and that the potential for a scam is a danger to most crypto projects. Despite the losses suffered, RING Financial was likely not a scam or a fraud, given the nature of the contract's flaw. Many accused RING Financial of being a scam, but the facts of the case simply don't bear out the claims. What can future crypto projects do to avoid being accused of being a scam or a fraud? They must avoid security breaches at all costs. Let's check out some important crypto-safety rules they can follow.

To avoid getting entangled in fraud or a scam, ensure proper coding and the absence of development flaws.

This is the most important basic rule for projects to keep in mind to avoid being accused of a scam or fraud. You should also strive to avoid the flaws seen in other projects such as RING Financial. By following these guidelines and digging a bit deeper to come up with your own, you can build a safe and secure project, ensuring its success.

What Is Ethereum Virtual Machine (EVM) and How Does It Work? – Techopedia

What Is Ethereum Virtual Machine (EVM)?

Ethereum Virtual Machine (EVM) is the software that sets the rules for computing the state of the Ethereum network from block to block. The EVM is a core part of Ethereum, as it executes smart contracts, processes transactions, and updates account balances.

In simple words, EVM is a virtual machine or a cloud computer that powers the Ethereum protocol. Just like a computer, it executes code and has memory to save information. However, EVM is not just a single computer but a pool of thousands of cloud machines around the world.

EVM's Key Functions

Managing the state of the Ethereum blockchain

Executing smart contracts

Calculating gas fees

Developers on Ethereum write their smart contract code in a programming language called Solidity. The code is then compiled to bytecode so that the EVM can read the instructions.

In the process of translating the code from Solidity to bytecode, the instructions are broken down into opcodes, or operation codes. Every line of code is converted to opcodes so that the EVM knows exactly how to execute a transaction.
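As a rough sketch of that pipeline, the deliberately tiny contract below can be compiled (for example with `solc --opcodes Adder.sol`) to inspect the operation codes the compiler emits; the EVM only ever sees stack-machine instructions such as PUSH1, MSTORE and ADD, never the Solidity source itself:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// The EVM never executes this source text; it executes the bytecode
// the compiler emits for it, one opcode at a time.
contract Adder {
    function add(uint256 a, uint256 b) external pure returns (uint256) {
        return a + b; // compiles down to, roughly, an ADD opcode
    }
}
```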

As we know, every transaction on Ethereum requires a gas fee to be executed. Therefore, the relationship between opcodes and gas fees is also important in understanding how the EVM works.

In theory, when you are paying gas fees, you are actually paying for the opcodes to be executed by the EVM. The more opcodes there are, the higher your gas fees will be.

Simple transactions like sending ETH from one account to another will require less gas compared to complex processes like creating a smart contract, as the EVM is required to do more work.
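For a rough sense of the numbers (the gas price here is an assumed figure for illustration): a plain ETH transfer costs a fixed 21,000 gas, so at a gas price of 20 gwei the fee comes to 21,000 × 20 = 420,000 gwei, or 0.00042 ETH. A contract deployment that executes many more opcodes can easily consume hundreds of thousands of gas, scaling the fee up proportionally.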

The EVM is designed as a Turing-complete virtual machine. Turing completeness refers to a machine that can solve any problem if it is given the necessary resources of time, energy, and complete instructions.

Ethereum's Turing completeness is the network's ability to understand and implement future agreements of a smart contract.

The EVM executes code deterministically. A particular smart contract will always produce the same output for the same input. It does not matter where the smart contract is executed or who is executing it, the output will always be consistent for a particular input.

The EVM is designed to work in isolation from the rest of the computer system. This ensures that the smart contracts are executed within a safe environment.

EVM code: The bytecode that the EVM can execute.

State: Ethereum's state is a large data structure that holds information on accounts and balances. The state changes from block to block as the EVM processes new inputs to produce deterministic outputs.

Transactions: Transactions are cryptographically signed instructions from users that the EVM executes. There are two types of instructions: those that result in message calls and those that result in contract creation.

Space: The EVM utilizes three space components: the stack, memory, and storage (see the sketch after this list).

Gas: The amount of computational effort required to execute operations on a blockchain network. Every EVM computation requires gas fees; otherwise, the transaction will not be processed.
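A brief Solidity sketch of how those spaces surface in practice (the stack is managed implicitly by the EVM; developers mostly choose between storage and memory):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Spaces {
    // Storage: persists between transactions as part of Ethereum's state,
    // and is by far the most expensive space to write to.
    uint256[] private persisted;

    function process(uint256 x) external returns (uint256) {
        // Memory: a scratch area that lives only for the duration of this call.
        uint256[] memory scratch = new uint256[](3);
        scratch[0] = x;

        // Simple local values like `sum` are typically held on the EVM's stack.
        uint256 sum = scratch[0] + 1;

        persisted.push(sum); // written back to storage
        return sum;
    }
}
```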

Ethereum has grown to become the most valuable public smart contract blockchain network in the world.

On June 21, 2023, Ethereum's ecosystem of decentralized applications (dApps) boasted a total value locked (TVL) of $24.63 billion. The second most-valuable chain, Tron, held a TVL of $5 billion, according to DefiLlama. This shows a clear market preference for Ethereum, which has established itself as the go-to blockchain for decentralized finance (DeFi).

Numerous blockchain networks are designed to be EVM-compatible, which allows them to execute Ethereum smart contracts. EVM compatibility enables developers to easily deploy their smart contracts across Ethereum and multiple EVM-compatible chains.

EVM-compatible networks can tap into the vast pool of Ethereum users, which can be critical for growth and mass adoption.

The EVM software is the lifeblood of Ethereum as it maintains the state of the blockchain and executes smart contracts.

The meteoric growth of Ethereum has made EVM an industry standard, so much so that rival blockchain networks are designing their systems to be compatible with it.

3 reasons why Ethereums market cap dominance is on the rise – Cointelegraph

Ethereum has been the dominant smart contract and decentralized application (DApp) network since its inception. An analysis of Ether's (ETH) price and market capitalization shows indisputable evidence that the blockchain has been gaining market share.

As shown above, Ether's dominance in market capitalization terms has grown over the past couple of years, from an 18% average in July 2021 to the current 20%. Excluding Bitcoin (BTC) from the analysis, Ether's market share presently stands at 40.6%, while its next competitor, BNB (BNB), holds a 7.2% share.
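Those two figures are mutually consistent: with Bitcoin at roughly 50% dominance (a level the article notes it reached on June 19), Ether's roughly 20% share of the total market works out to about 20 / (100 - 50) = 40% of the non-Bitcoin market, in line with the 40.6% quoted.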

The disparity between Ethereum and others is also evident when analyzing the total value locked (TVL) in each network's smart contracts. Ethereum is the absolute leader with $24.6 billion in TVL, followed by Tron's $5.4 billion and BNB Chain's $3.3 billion.

The above chart depicts Ethereum's TVL market share declining from 70.5% in June 2021 to 49.5% in May 2022 as Terra and Avalanche gained a combined 20% market share. However, following the Terra ecosystem collapse in May 2022, which culminated in developers halting network activity, Ethereum quickly regained a 58% market share.

Despite the emergence of DApps on the BNB and Tron blockchains, Ethereum's leadership has remained uncontested over the past 12 months. The data also shows how little the raw number of unique active wallets (UAW) interacting with smart contracts per chain reveals on its own.

For instance, according to DappRadar, WAX has 363,600 active users, alongside BNB Chain's 517,300 30-day UAW. These figures are far higher than the Ethereum network's 66,300 unique active addresses, but they reflect much lower transaction fees, opening room for manipulation.

The Ethereum ecosystem has the highest number of active developers, surpassing 1,870, which is more than the next three competitors, Polkadot (752), Cosmos (511) and Solana (383), combined.

Currently, the Ethereum network has over 700,000 validators, with 99% of the balances locked in staking participating in the process. The 32 ETH threshold per validator undoubtedly inflates this number, but Lido, the largest known staking pool, controls 32% of the staking, with Coinbase coming in second at 9.6%.

Consequently, it is safe to say that Ethereum is far less centralized in terms of development and validation compared with Tron, BNB Chain and Solana.

Other reasons Ether's dominance has been on the rise, even as Bitcoin reached a 50% market share on June 19, include derivatives activity and its dominance of the NFT market.

Ether's futures contracts are essential for institutional trading practices like hedging and trading with leverage. Ether's cash-settled futures were added to the Chicago Mercantile Exchange in February 2021. To date, no other cryptocurrency, apart from Bitcoin, has reached the world's largest derivatives exchange.

In futures markets, longs and shorts are balanced at all times, but a larger number of active contracts, known as open interest, allows the participation of institutional investors who require a minimum market size. Ether futures' aggregated open interest stands at $5.4 billion, while competitor BNB holds $380 million and Solana a mere $178 million.

Nonfungible tokens are a perfect example of how cheaper, faster transactions do not always translate to increased adoption. Nothing is stopping NFT projects from shifting between blockchains, whether for new listings or existing collections. In fact, y00ts and DeGods moved to Polygon earlier in 2023.

Despite facing gas fees that often break above $10, Ethereum remains the absolute leader in the number of buyers and total sales. According to CryptoSlam, the leading network reached $380 million in sales over the past 30 days, while Solana, Polygon and BNB Chain totaled a combined $93 million.

Ultimately, the data favors Ethereum over its competing smart contract-focused blockchains. The positive trend in Ether's dominance might fade over time if the promised network upgrade to allow parallel processing (sharding) does not come to fruition, but for now, Ether's 20% market capitalization share remains unchallenged.

This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author's alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.

This article does not contain investment advice or recommendations. Every investment and trading move involves risk, and readers should conduct their own research when making a decision.
