
Engineering and architecture institutions preparing graduates for international careers – Study International

Engineers and architects are at the forefront of creating eco-friendly solutions, from designing energy-efficient buildings to developing renewable energy systems. For instance, the Bullitt Center in Seattle, often hailed as the greenest commercial building in the world, showcases the innovative work of architects and engineers in creating a self-sustaining structure that significantly reduces its environmental impact.

These professionals are also instrumental in large-scale projects like the Belt and Road Initiative, which spans continents and aims to enhance global trade and economic development. Such projects require a deep understanding of diverse engineering practices and architectural styles.

Whether one chooses to pursue engineering or architecture, each path offers unique opportunities to make a significant impact. Engineers might focus on creating advanced technologies that solve critical problems, such as developing clean energy solutions or improving transportation infrastructure. Architects, on the other hand, might concentrate on designing spaces that enhance community well-being and promote sustainability.

Each discipline, while distinct, contributes in essential ways to building a better world. Here are three universities that excel in both areas:

The Department of Engineering and Architecture is a scientific and educational division of the Università di Parma. Source: Università di Parma

Founded in 962 AD, the Università di Parma is one of Europe's oldest universities. Today, it is home to over 32,000 students and 960 faculty members. Its picturesque location in Parma, a city renowned for its cultural richness, is an instant draw that keeps these numbers growing. Beyond its artistic and culinary delights, Parma thrives on music and drama. Apart from boasting an association with famed opera composer Giuseppe Verdi, it prides itself on having the highest number of quality-protected food products in Italy.

It's also an apt location for students drawn to future-focused topics. Università di Parma's Science and Technology Campus serves as a hub for innovation, accommodating five departments, one of which is dedicated to Engineering and Architecture. Programmes span architecture and city sustainability, civil engineering, computer engineering, electrical and electric vehicle engineering, and more.

The MS in Communication Engineering stands out for its competitive pricing (around 2,000 Euros annually). The best part? It is just as comprehensive as it is affordable. Delivered entirely in English by internationally renowned professors, it covers digital, wireless, and optical communications, networking, information theory, antennas, photonic devices, the Internet of Things, network security, and various other Information and Communication Technology topics.

This combination ensures students gain expertise in designing and managing complex telecom systems, such as cellular networks (4G/LTE and emerging 5G) and the fibre-optic infrastructure critical to the Internet's backbone. At the same time, they draw on insights from the university's cutting-edge research.

Such exposure explains why, upon completing their studies, graduates are highly sought after by research centres across Europe, including Nokia Bell Labs France and the European Space Agency. Dr. Matteo Lonardi, a 2016 graduate, is a prime example. He is currently working as a research scientist and product manager specialising in advanced analytics at Nokia Bell Labs. Learn more about following in his footsteps.

Chalmers University of Technology's Department of Architecture and Civil Engineering aims to address global challenges for the built environment in innovative and responsible ways. Source: Chalmers University of Technology/Facebook

Chalmers University of Technology, with roots dating back to 1829, houses a forward-thinking Department of Architecture and Civil Engineering. It develops future architects and engineers by combining education and research across engineering, social sciences, architecture, and humanities.

To produce sustainable solutions for a thriving society, the department offers specialised programmes through comprehensive curricula at the Bachelor's, Master's, and postgraduate levels. For architects, the focus is on responsible resource use and creating high-quality living spaces, from urban planning to intricate building details. Programmes integrate artistic methods, technical research, and socio-cultural considerations to cultivate a design-thinking mindset in students.

Civil Engineering programmes explore the vast spectrum of engineering disciplines crucial for sustainable community development. Recognising the building sector's profound impact on society, the department equips students with the knowledge to navigate the entire construction process, from planning and development to operation, while prioritising human needs, environmental impact, energy efficiency, and economic viability.

Master's programmes delve even deeper. Specialisations include Architecture and Urban Design, Planning Beyond Sustainability, Design and Construction Management, Infrastructure and Environmental Engineering, and more. Students benefit from the expertise of instructors who are actively engaged in both research and industry, ensuring cutting-edge knowledge.

Extensive experimental activities form the backbone of the departments approach, with cutting-edge labs like acoustics, building materials, geomechanics, and structures labs providing a hands-on environment that fuels both research and teaching.

The UCD School of Civil Engineering is home to a community of staff and students engaged in researching, teaching, and learning the various aspects of the built environment. Source: University College Dublin/Facebook

University College Dublin, Ireland's global university with over 160 years of experience, is a leader in pursuing a sustainable and equitable future. This commitment is especially evident in the School of Civil Engineering, where the United Nations Sustainable Development Goals (SDGs) are embedded into the curriculum.

The school fosters a vibrant community dedicated to research, teaching, and learning across the entire spectrum of the designed environment. From buildings and urban spaces to rural environments, transportation systems, water management, and historical preservation, their expertise is as diverse as it is impactful.

The school offers a comprehensive range of programmes, including Civil Engineering, Civil, Structural & Environmental Engineering, Water, Waste & Environmental Engineering, and Structural Engineering. The best part? Regardless of the programme chosen, UCD graduates are empowered to pursue professional engineering careers globally thanks to international recognition of their degrees through agreements with Engineers Ireland. This recognition allows graduates to practise in numerous countries within the EU and those adhering to the Washington Accord.

What's more, the school is a hub of impactful research, with nearly 60 PhD and research Master's students actively engaged in diverse fields. In recent years, significant investments have been made to modernise research capabilities across various sub-disciplines and establish world-class facilities. These include laboratories for structural testing, material analysis, hydraulics, and water treatment, alongside advanced computing resources and an engineering workshop. This infrastructure allows them to translate theoretical knowledge into practical solutions that shape a more sustainable future.

*Some of the institutions featured in this article are commercial partners of Study International

Read the original here:

Engineering and architecture institutions preparing graduates for international careers - Study International

Read More..

World-first: EVs give power back to grid during outage in Australia – Interesting Engineering

The development of electric vehicles was primarily driven by the need to reduce dependence on fossil fuels and lower the impact of climate change. However, a major benefit of EVs was unexplored until a fleet of electric cars supplied power to a grid during a blackout in Canberra.

The power supply to tens of thousands of homes was interrupted during a major storm in Canberra. Power was then supplied from the vehicles' batteries back into the Australian electricity grid.

"It's the first time in the world this type of vehicle-to-grid response to an emergency has been demonstrated," said lead author of the study, Senior Research Fellow Dr Bjorn Sturmberg from the Australian National University.

"It shows electric vehicles can provide the backup we need in an emergency like this."

Sturmberg maintained that the team has a fleet of 51 EVs across Canberra that monitor the grid whenever they're plugged in and can quickly inject short bursts of power to rebalance the system if the national grid rapidly loses power. "They're essentially big batteries on wheels."

During the blackout, 16 EVs were plugged in at properties across Canberra.

The researcher claimed that immediately after the blackout, these vehicles started discharging power into the grid, as they've been programmed to do.

"In total, they provided 107 kilowatts of support to the national grid," added Sturmberg.

"To put that in perspective, 105,000 vehicles responding in this way would fully cover the backup required for the whole of the ACT and NSW." For context, there were just under 100,000 EVs sold in Australia last year.
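
For a rough sense of scale, here is our own back-of-envelope arithmetic from the figures quoted above (these are not numbers reported by the study):

$$\frac{107\ \text{kW}}{16\ \text{EVs}} \approx 6.7\ \text{kW per plugged-in vehicle}, \qquad 105{,}000 \times 6.7\ \text{kW} \approx 700\ \text{MW},$$

which suggests the emergency backup implied for the ACT and NSW is on the order of 700 megawatts.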

The event, which took place in February, was "the first real-world test of our vehicles and chargers," according to the researcher.

The team believes there's a lot to be done to balance the growing number of EVs being charged with grid security. EV owners charging their vehicles at the same time in the evening, when they return home, could also put an additional burden on the grid.

Sturmberg highlighted that in the case of the February emergency, once the vehicles had provided power for ten minutes, some resumed charging by default. "There would be little cost or inconvenience in delaying charging for an hour or two in this kind of situation."

Electric Vehicle Council energy and infrastructure head Ross De Rango said vehicle-to-grid technology was a huge opportunity for Australia, which could put downward pressure on power bills and help enable coal- and gas-fired power stations to be closed sooner, according to ABC.

"We don't see a future where anyone is able to draw energy out of a consumer's car without their consent. This level of consumer protection is actually baked in at a very basic level because it's the driver that decides if the car is plugged in or not," he told ABC.


Prabhat Ranjan Mishra Prabhat, an alumnus of the Indian Institute of Mass Communication, is a tech and defense journalist. While he enjoys writing on modern weapons and emerging tech, he has also reported on global politics and business. He has been previously associated with well-known media houses, including the International Business Times (Singapore Edition) and ANI.

Read the original:

World-first: EVs give power back to grid during outage in Australia - Interesting Engineering

Read More..

Scientists discover new hormone in breastfeeding women that helps heal bones faster – Interesting Engineering

Researchers from the University of California at San Francisco (UCSF) and UC Davis discovered a hormone that keeps breastfeeding women's bones strong, and it could help heal fractures, too.

The new study sought to solve the mystery of how women's bones remain unaffected even though they lose calcium to produce milk. Though estrogen levels are low during breastfeeding, osteoporosis and bone fractures are much rarer, as per a press release, suggesting that something other than estrogen is at play.

And they found it: a new hormone named CCN3.

Previously, senior author Dr. Holly Ingraham and collaborators, studying female mice, blocked an estrogen receptor in neurons in a small area of the brain, and the mice's bone mass increased. They suspected that the strong bones were linked to a hormone in the blood, but they couldn't find it.

After an exhaustive search, they finally identified CCN3, a hormone that behaved differently than others that neurons secrete.

"The notion that a hormone can be secreted directly from the brain is a new concept in the field of endocrinology. Our findings leave us wondering if other hormones are secreted from the so-called windows of the brain in response to changing physiological demands, such as lactation."

As per the press release, they were able to locate CCN3 in the same brain region in lactating female mice, but not the receptor, as of yet.

In the absence of this hormone, lactating female mice lost bone mass, and the babies lost weight as well. This confirmed how vital this hormone is, so they named it the Maternal Brain Hormone (MBH).

When the researchers increased the levels of CCN3 in female and male mice, bone mass and strength improved dramatically within weeks. Remarkably, CCN3 doubled the amount of bone mass in very old female mice and in those lacking estrogen.

Further testing proved just how strong the bones were.

Dr. Thomas Ambrosi, a project collaborator, went on to say that highly mineralized bones aren't always better, as they can become weaker and break more easily. "However, when we tested these bones, they turned out to be much stronger than usual."

When he examined the stem cells in the bones that are responsible for regeneration, he found that, when exposed to CCN3, they supported the production of new bone cells. Thus, the team concluded that CCN3 could possibly assist in bone healing.

They created a hydrogel patch and attached it to the site of a bone fracture so that it could slowly release CCN3 over two weeks. Normally, bone fractures in older mice don't heal easily or well, but the CCN3 patch actually helped to regenerate the bone. The healing of the fracture was even described as youthful.

"They essentially repair at the rate of young two-month-old male mice," researchers told IE.

"We've never been able to achieve this kind of mineralization and healing outcome with any other strategy," Ambrosi said. "We're really excited to follow it up and potentially apply CCN3 in the context of other problems, such as regrowing cartilage."

Now, researchers intend to continue studying the molecular mechanisms of CCN3 in breastfeeding women, and its potential to treat bone conditions. No side effects have been found yet, researchers told IE. However, once they identify the receptor for CCN3, they can survey which tissues and cell types might be affected by this hormone.

Osteoporosis, classified as a weakening of bone structure that makes bones susceptible to fracturing, impacts more than 200 million people worldwide; women after menopause are at particularly high risk.

A decrease in estrogen levels was thought to be the cause, which is true. However, researchers found a loophole: while breastfeeding, women don't lose bone mass despite low estrogen. Now they have found the hormone behind this effect, which they may be able to apply to help women later in life.

But this study also stands to support breast cancer survivors, who have to take hormone blockers, as well as female athletes and older men, who statistically have a lower survival rate after a hip fracture than women.

"It would be incredibly exciting if CCN3 could increase bone mass in all these scenarios," Ingraham said.

Lastly, interestingly enough, one of the remarkable things about these findings, Dr. Ingraham said in the press release, is that female mice aren't used in biomedical research, which is why the hormone had never been discovered.

"It underscores just how important it is to look at both male and female animals across the lifespan to get a full understanding of biology."

The study was published in Nature.


Maria Mocerino Originally from LA, Maria Mocerino has been published in Business Insider, The Irish Examiner, The Rogue Mag, Chacruna Institute for Psychedelic Plant Medicines, and now Interesting Engineering.

Read the rest here:

Scientists discover new hormone in breastfeeding women that helps heal bones faster - Interesting Engineering

Read More..

Air Force engineer charged with cover up in Marine KC-130 crash that killed 16 – Task & Purpose

A catastrophic propeller failure that ripped apart a Marine Corps KC-130T during a 2017 flight, killing 15 Marines and one sailor on board, was more than just a mechanical failure, federal prosecutors say. The deadly crash, officials say, traces back to faulty inspections six years earlier, which an Air Force civilian engineer approved and then covered up.

Federal prosecutors this week charged James Michael Fisher, a former Air Force civilian engineer at the Warner Robins Air Logistics Complex in Georgia, with obstructing justice and making false statements to crash investigators. Fisher, prosecutors say, lied about and then tried to cover up his role in signing off on inspections that should have detected cracking in a propeller blade before it disintegrated mid-flight, dooming the flight, known as Yanky 72, and all 16 service members on board.

The federal investigation found that Fisher, who served as the C-130 lead propulsion system engineer at the complex from 2011 to 2022, had signed off on a waiver of a time-consuming inspection method and continued to recommend that technicians use a less reliable way to inspect propeller blades, according to an indictment against Fisher in the U.S. District Court for the Northern District of Mississippi.

Fisher, 67, is also accused of trying to thwart efforts by federal agents to learn about his decisions regarding propeller blade inspections.

The federal charges are the latest chapter in a series of escalating investigations around the crash. An initial military-led investigation found that as the plane's propeller broke apart 20,000 feet over Mississippi, spinning pieces of the blades cut the fuselage in half, dooming all onboard. Military investigators, whose review focused on the physical cause of the crash, blamed technicians at the Warner Robins Air Logistics Complex, finding that technicians and supervisors there had been negligent in a series of inspections on the plane's propellers six years earlier. But while the military's findings established the cause of the crash, the question of criminal misconduct (whether the shoddy inspections and efforts to cover them up amounted to a crime) fell to the Department of Justice to decide.

Task & Purpose obtained the indictment through the Public Access to Court Electronic Records system, or PACER.

Fisher "attempted to obstruct the criminal investigation by intentionally withholding documents showing that he played a crucial role in removing the critical inspection procedure and providing false statements to federal agents in order to cover up his role in removing the critical inspection procedure," the indictment says.

He also admitted to federal agents that the inspection that was not performed would have found the cracking in the faulty propeller blade that caused the KC-130T to crash, but he claimed others had approved using a different type of inspection for C-130 propeller blades, according to the indictment.

Fisher has been charged with making false statements and obstruction of justice, according to the Justice Department. He faces a maximum sentence of 20 years in prison, if convicted.

When reached by Task & Purpose on Thursday, Fisher's attorney declined to comment for this story.

Fifteen Marines and one sailor were killed when the KC-130T with the call sign Yanky 72 crashed on July 10, 2017 in Mississippi. Seven of the Marines killed were with the 2nd Raider Battalion.


The crash was caused when a corroded propeller blade broke apart in flight. The blade, identified as the "PB24 Corroded Propeller Blade," had arrived at the Warner Robins Air Logistics Complex in August 2011 for an inspection and overhaul. It was later determined that the blade had corrosion and about three inches of cracking in an area at the base of the blade known as the taper bore, which was neither detected nor fixed at the complex, the indictment says.

The inspection and overhaul process lasted until Sept. 12, 2011, and the propeller blade went back into the Navy's C-130 fleet, according to the indictment. It is unclear what tests were performed on the blade because all work documents were destroyed per Air Force policy.

The charges against Fisher stem from how his technicians looked for corrosion and cracking in taper bores.

One method for inspecting propeller blades, known as penetrant inspection, involved immersing or spraying the blades with a fluorescent dye and then using a black light to see where the dye had seeped into cracks in the taper bore.

The other method is called an eddy current inspection: Maintenance technicians move an electromagnetic probe over the surface of the taper bore. The probe sends a signal to a monitor if it detects any cracks or corrosion.

Of the two methods, using the fluorescent dye took the most time. Prior to Aug. 22, 2011, maintenance technicians were required to perform penetrant inspections on all Air Force and Navy C-130 propeller blade taper bores. They were also required to conduct eddy current inspections as a backup test if the penetrant inspections found cracks or corrosion.

But the technicians knew that there were problems with the eddy current inspections, according to the indictment.

"Robins only had one set of eddy current testing probes for the Navy and Air Force, even though the Tech Manuals had different probe requirements," the indictment says. Before August 22, 2011, maintenance technicians had repeatedly reported to their supervisors and other engineers at Robins that the eddy current probes being used were unreliable.

Despite these shortcomings, a maintenance technician supervisor sent Fisher an email on Aug. 19, 2011, asking permission to stop conducting the penetrant inspections because they were "very time consuming," the indictment says.

Fisher quickly wrote back, "I have no problem with removing the requirement for dye penetrant," and added his rationale for approving the request.

On Aug. 22, 2011, another maintenance technician supervisor submitted a request known as a Blanket Form 202, requesting permission to stop conducting the penetrant inspections. Such forms were required for any changes in how the Navy and Air Force technical manuals called for inspecting propeller blades. The Blanket Form 202 was approved that day.

The request contained the notation "ATTN: MIKE FISHER" and contained the same language from Fisher's email, stating the rationale for removing the requirement was "PER MIKE FISHER C-130 PROPULSION SYSTEM ENGINEER," the indictment says.

Neither Fisher nor the engineers with the System Program Office, which oversees how C-130s are inspected and overhauled, consulted with the specialists known as Level 3 engineers, who were experts on the penetrant and eddy current inspections, according to the indictment.

The Level 3 engineers "had the training and the expertise to determine whether removing the penetrant inspections was advisable," the indictment says.

As a result, maintenance technicians did not conduct any penetrant inspections at the complex between Aug. 22, 2011, and Dec. 13, 2013. Fisher was the engineer assigned to three additional Blanket Form 202s during that time that recommended using solely eddy current inspections.

In late 2011, a Level 3 supervising engineer ordered an evaluation to look into the effectiveness of eddy current inspections based on concerns from maintenance technicians, the indictment says. The report came out in February 2012 and found that the probes used for the eddy current inspections were not reliable. Fisher did not immediately respond to the report, and the supervising engineer tried to talk to him about it between April and September 2012.

Finally, in September 2012, Fisher responded to the supervising engineer that "since we are already using penetrant I would be happy with just eliminating the use of probes," even though penetrant inspections were not being performed after August 22, 2011, the indictment says.

In December 2013, the System Program Office engineers approved going back to penetrant inspections and stopping the use of eddy current inspections. The unreliable eddy current probes were taken out of service. Technicians at the complex did not perform any eddy current inspections on taper bores until after the Yanky 72 crash.

When military investigators looked into the crash, they were not provided with any of the Blanket Form 202s, according to the indictment. They were also led to believe that they could not speak with any of the technicians who performed the inspection on the faulty propeller blade that caused the crash, nor were they told about the technicians concerns about eddy current inspections or that an engineer had found in 2012 that the eddy current probes were unreliable.

They believed that the technicians who had inspected the blade had followed the Navy technical manual, which calls for penetrant inspections.

In sum, the JAGMAN (Judge Advocate General Manual) investigation report primarily blamed maintenance technicians for the crash, stating they were "grossly negligent" and "primarily responsible for the mishap," the indictment says. Fisher and the System Program Office avoided scrutiny.

But when federal agents launched an investigation in 2020 into whose gross negligence was responsible for the crash, maintenance technicians told investigators that their supervisors in 2011 cared more about production than safety, and they had disregarded technicians' concerns about inadequate inspections, the indictment says. They also told federal agents that their supervisors used Blanket Form 202s to work around any problems they had identified, such as insufficient equipment.

Several technicians provided investigators with the Blanket 202 Forms about stopping penetrant inspections and other documents that showed technicians had raised concerns about the eddy current probes in 2011.

The technicians believed that their supervisors' focus on production and productivity, and the ease with which Blanket Form 202s could be obtained, caused the corrosion of the PB24 Corroded Propeller Blade to go undetected.

During the investigation, federal agents determined that Fisher could not be trusted, according to the indictment. He initially did not tell investigators about the Blanket Form 202 that ended penetrant inspections. He later falsely told federal agents that there were no Form 202s approved in 2011 and 2012 for Navy aircraft and only seven in 2013.

When federal agents met with Fisher in July 2021 to ask about the Form 202s they had found, he gave them the form from August 2011 about penetrant inspections.

Fisher admitted to the agents that the August 22 Form 202 was a new revelation that changed the root cause conclusion as to why maintenance technicians missed the cracking of the propeller blade that caused the Yanky 72 crash, the indictment says. Fisher, whose name was on the Form 202, denied knowing about its existence and denied approving it. Specifically, Fisher stated that he would not have approved removal of penetrant inspections because deviating from penetrant inspections would result in corrosion going undetected.

He stated that a penetrant inspection would have detected the corrosion and pitting in the taper bore that led to the intergranular crack that caused the PB24 Corroded Propeller Blade to fail, according to the indictment. Fisher stated that he could not understand why his System Program Office colleagues would approve such a Form 202. Fisher further claimed that he never would have approved the August 22 Form 202 because in 2011 there were problems with the reliability of eddy current probes.

Federal agents later found Fisher's email saying he had no problem with ending the penetrant inspections, the indictment says. Fisher also falsely told investigators that the waiver for penetrant inspections had expired in February 2012. Ultimately, the federal agents determined that Fisher was the primary decisionmaker in resuming penetrant inspections in 2013, more than a year after being told that eddy current inspection probes were unreliable.

When talking to investigators in December 2021, Fisher again denied that he knew about any of the Blanket Form 202s before that July. He denied approving the August 2011 Blanket 202 Form, he said he could not understand why his colleagues approved it, and he said the Level 3 technicians were not helpful at the time because they didn't respond to his request for assistance.

Federal agents finally confronted Fisher with his August 19, 2011 email, the indictment says. Fisher denied remembering the email and stated that, regardless, his colleagues should not have approved the Blanket Form 202 without doing their own research. The next day, on or about December 3, 2021, Fisher sent federal agents a follow-up email. In that email, Fisher again placed blame on the System Program Office engineers who approved the August 22 Form 202.

UPDATE: 07/11/2024; this story was updated after James Michael Fisher declined to comment.

Read this article:

Air Force engineer charged with cover up in Marine KC-130 crash that killed 16 - Task & Purpose

Read More..

Festival celebrates the engineers helping to solve some of the world's greatest challenges | UCL News – UCL – University College London

An action-packed programme of free interactive events for the whole family will showcase how UCL engineers are creating the future, in fields such as artificial intelligence, space exploration, robotics and medicine.

Launching on UN World Youth Skills Day, the first UCL Festival of Engineering will run from 15-20 July 2024 at sites across several London boroughs, from the main UCL campus in Bloomsbury to the UCL East and Here East campuses at Queen Elizabeth Olympic Park in Stratford.

The Festival celebrates 150 years of pioneering engineering education at UCL that spans traditional disciplines, saw the introduction of the first engineering teaching laboratory in the UK, and has reimagined how engineering is taught globally.

The programme, built around the four themes of climate, healthcare, data and inequality, has been designed to be highly interactive, with opportunities to do rather than just see. An augmented reality app, which festivalgoers can access on a smartphone or tablet, will help to bring the environment to life.

The main family days will be on 19 and 20 July in Bloomsbury, with an industry showcase on 18 July. Activities for young people, schools and community groups will take place throughout the week, including at Here East on 15 July and UCL East on 16 July. Events will also take place at UCL PEARL, a unique facility to explore how people interact with their environment, in Dagenham on 17 July.

The Festival will feature over 80 demonstrations and workshops and 22 spotlight events, and 20 labs will be open to school groups.

Some of the highlights of the Festival include:

Alongside the interactive events will be a series of talks aimed at the general public, from big questions like "Can Engineers Save the World?" to quickfire presentations by current UCL students on their areas of research.

Professor Clare Elwell, co-organiser of the Festival from UCL Medical Physics & Biomedical Engineering, said: "Engineering at UCL is all about solving real-world problems. We are led by the challenges that need to be met, whether they be in medicine, sustainability or computing. It goes way beyond what people may see as the traditional engineering disciplines; really, it covers all of life."

"We are delivering the Festival to engage a range of audiences with how engineers are creating future worlds, both physical and digital. We want people to see that engineering is fundamentally collaborative. It's about working with end users to create new solutions for the most pressing issues facing humanity."

The Festival is about the engineers of the future in more ways than one. There will be sessions on the recently launched Foundation Year in Engineering and on apprenticeships. Both of these initiatives are designed to provide multiple entry points into engineering, particularly for communities underserved by further and higher education.

Professor Elpida Makrygianni MBE, Head of Education Engagement at UCL Engineering, said: "We're delighted to welcome young people, teachers and families to the Festival to experience a fully interactive programme of events built around creating a happier, greener and fairer society."

"We invite young people to discover modern engineering and navigate through the wealth of fascinating, diverse and wide-ranging career pathways. We hope that the festival gives them a better understanding of what it is that engineers do and their significance to society and our planet in solving global challenges. We want to inspire young people from a diverse range of backgrounds to want to make a difference through engineering."

Across the week, the Festival will engage a wide range of groups who influence and are influenced by engineering. There will be a launch event on Monday 15 July for policymakers and industry celebrating UCL Engineerings role in innovation and impact.


See the article here:

Festival celebrates the engineers helping to solve some of the world's greatest challenges | UCL News - UCL - University College London

Read More..

‘Time Traveling’ Quantum Sensor Breakthrough Allows Scientists to Gather Data from the Past – The Debrief

Time travel, widely recognized as a staple of science fiction stories and films, is at least theoretically possible under certain conditions. These include situations like extremely high-speed travel through space, as well as a traveler's proximity to particularly strong sources of gravity.

However, new research suggests scientists could be moving closer to extending the manipulation of time beyond theory and into practical use, thanks to new innovations in quantum physics.

Einstein's theory of relativity helped to show the intimate connection between time and space, revealing that as a traveler's speed through space increases, their experience of time slows down. This has been verified in experiments involving observed variances between separate clocks, which reveal what physicists call time dilation.

Technically, as we walk down the street on any given day of the week, our feet are moving through time at a slightly different rate than our head, given the closer proximity of our lower body to Earth's gravitational field. However, such variances are so subtle that they are indiscernible, and quirks of space and time like these have little practical significance.

However, recent research by a team at Washington University in St. Louis, along with collaborators from NIST and the University of Cambridge, is revealing how a new kind of quantum sensor designed to leverage quantum entanglement could lead to a form of real-life time-traveling detectors. The breakthrough discovery, detailed in a new study published on June 27, 2024, presents a bold possibility: scientists could soon be able to collect data from the past.

In their paper, the team describes experiments involving a two-qubit superconducting quantum processor. Their measurements demonstrated a quantum advantage that outperformed every strategy that did not involve the phenomena of quantum entanglement. The results of their study could potentially enable data from the past to be collected by leveraging the unique properties of what Einstein called "spooky action at a distance."

While impossible in our everyday world, the realm of quantum physics offers possibilities that defy conventional rules. Central to this advancement is a property of entangled quantum sensors referred to as "hindsight."

Kater Murch, the Charles M. Hohenberg Professor of Physics and Director of the Center for Quantum Leaps at Washington University, likens the team's investigations into these concepts to sending a telescope back in time and allowing it to capture imagery of a shooting star.

In their research, the team devised a process where two quantum particles were entangled in a quantum singlet state, comprising a pair of qubits whose spins are always oriented in opposite directions, no matter their frame of reference. One of the qubits, which the researchers designate as the probe, is then introduced to a magnetic field, which induces rotation.

Meanwhile, the qubit that has not been exposed to a magnetic field is measured. This reveals a key aspect of the team's innovation: the entanglement shared between the two qubits allows the quantum state of the ancillary qubit to influence the probe qubit under the influence of the magnetic field. The remarkable result is that the probe qubit is retroactively influenced, effectively facilitating the ability to send information back in time.

This means that scientists are technically able to employ this phenomenon of "hindsight" to determine the optimal direction for the spin of the probe qubit after the fact, almost as if they are watching from the future but controlling the qubit's behavior in the past. This allows them to increase the accuracy of measurements.

Under most circumstances, measuring a qubit's spin rotation as a means of gauging the size of a magnetic field would have about a one in three chance of failure, since the alignment of the field with the spin's direction effectively nullifies results. By contrast, the hindsight property allowed the team the unique ability to set the best direction for the spin retroactively.

Under these conditions, the entangled particles effectively function as a single entity that simultaneously exists in both forward and backward positions in time, thereby allowing innovative potentials in the creation of advanced quantum sensors that could produce temporally manipulated measurements.

The implications of such technology are significant and could help give rise to entirely new sensor technologies, from the detection of rare astronomical phenomena to greatly improving the way researchers study and manipulate the behavior of magnetic fields.

Ultimately, the team's new time travel technology likely marks a significant step toward bringing this well-recognized science fiction concept into reality, allowing innovative new possibilities and insights into nature that extend beyond our current mastery of time.

Published under the innocuous title "Agnostic Phase Estimation," the groundbreaking new study by Murch and co-authors Xingrui Song, Flavio Salvati, Chandrashekhar Gaikwad, Nicole Yunger Halpern, and David R.M. Arvidsson-Shukur appeared in Physical Review Letters.

Micah Hanks is the Editor-in-Chief and Co-Founder of The Debrief. He can be reached by email at micah@thedebrief.org. Follow his work at micahhanks.com and on X: @MicahHanks.

Excerpt from:

'Time Traveling' Quantum Sensor Breakthrough Allows Scientists to Gather Data from the Past - The Debrief

Read More..

Higgs boson God particle still remains a quantum mystery after 12 years – Earth.com

The discovery of the Higgs boson has been a captivating journey for physicists worldwide since the particle was first detected in the Large Hadron Collider (LHC) about twelve years ago.

This monumental finding, confirming the existence of the elusive particle theorized almost half a century prior, has unlocked new avenues of exploration and understanding in particle physics.

Despite dedicated research, the properties of this enigmatic particle remain somewhat shrouded in mystery.

Today, the scientific community embraces a new breakthrough that brings us a step closer to understanding the origin of the Higgs boson.

This exciting breakthrough comes from an international group of theoretical physicists, including members from the Institute of Nuclear Physics of the Polish Academy of Sciences.

These scientists have pooled their expertise and resources in a concerted effort to unravel the complexities of the Higgs boson.

For many years, the Higgs boson has remained the crowning glory of discoveries made with the Large Hadron Collider.

Yet, understanding its properties has proven to be a colossal challenge, mainly due to the scientific hurdles encountered during experimental and computational studies.

Established in the 1970s, the Standard Model is a theoretical framework designed to explain the elementary particles of matter accurately.

From quarks to electrons, this model has been instrumental in understanding how various electromagnetic and nuclear forces interact.

The Higgs boson, discovered thanks to the LHC, is the coveted jewel of the Standard Model. It holds a pivotal role in the mechanism that bestows masses to other elementary particles.

Without the Higgs field, particles would not have mass, and the universe as we know it would be drastically different.

Dr. Rene Poncelet from the IFJ PAN, who took part in this important research, provides clarity on the significance of the work.

"We have focused on the theoretical determination of the Higgs boson cross section in gluon-gluon collisions. These collisions are responsible for the production of about 90% of the Higgs bosons, traces of whose presence have been registered in the detectors of the LHC accelerator," Poncelet explained.

This work delves deeper into the quantum realm, where interactions are governed by the rules of quantum mechanics, offering deeper insights into the fundamental workings of our universe.

One of the co-authors of this research, Prof. Michal Czakon from the RWTH, explains why their work is a scientific achievement.

"The essence of our work was the desire to take into account, when determining the active cross section for the production of Higgs bosons, certain corrections that are usually neglected because ignoring them significantly simplifies the calculations," Czakon claims.

"It's the first time we have succeeded in overcoming the mathematical difficulties and determining these corrections."

This finding is a triumph over mathematical challenges and a testament to the rigorous and meticulous nature of scientific inquiry.

This work has contributed to a more profound understanding of the Higgs boson and opened avenues for further research.

The team's findings indicate that the mechanisms responsible for the formation of Higgs bosons, at least for now, show no signs of diverging from the established physics.

However, questions still abound:

Why do elementary particles carry the masses they do?

Why do they form families?

What exactly is dark matter?

What causes the dominance of matter over antimatter in the Universe?

These inquiries take us beyond the scope of the Standard Model, hinting at the existence of new physics. The pursuit to answer these questions is not just about theoretical curiosity; it has the potential to revolutionize our understanding of the universe and even lead to new technologies.

In the coming years, as more particle collisions are observed with the fourth research cycle of the LHC, reducing measurement uncertainties and bringing us closer to understanding the Higgs boson may be possible.

Each new cycle of experiments at the LHC is like turning a page in a giant book of the universe, revealing new insights and deepening our comprehension of the cosmos.

For now, the Standard Model remains secure, standing strong in the face of mysteries yet to be unraveled in the world of quantum mechanics. Let's brace ourselves; the quest to solve these mysteries promises to be nothing short of fascinating.

This journey reflects the enduring human spirit to explore the unknown, a spirit that has driven scientific and technological progress throughout history.

The full study was published in the journal Physical Review Letters.


Excerpt from:

Higgs boson God particle still remains a quantum mystery after 12 years - Earth.com

Read More..

Realization of higher-order topological lattices on a quantum computer – Nature.com

Mapping higher-dimensional lattices to 1D quantum chains

While small quasi-1D and 2D systems have been simulated on digital quantum computers27,28, the explicit simulation of higher-dimensional lattices remains elusive. Directly simulating a d-dimensional lattice of width L along each dimension requires $\sim L^d$ qubits. For large dimensionality d or lattice size L, this quickly becomes infeasible on NISQ devices, which are significantly limited by the number of usable qubits, qubit connectivity, gate errors, and decoherence times.

To overcome these hardware limitations, we devise an approach to exploit the exponentially large many-body Hilbert space of an interacting qubit chain. The key inspiration is that most local lattice models only access a small portion of the full Hilbert space (particularly non-interacting models and models with symmetries), and an $L^d$-site lattice can be consistently represented with far fewer than $L^d$ qubits. To do so, we introduce an exact mapping that reduces d-dimensional lattices to 1D chains hosting d-particle interactions, which is naturally simulable on a quantum computer that accesses and operates on the many-body Hilbert space of a register of qubits.

At a general level, we consider a generic d-dimensional n-band model $\mathcal{H}=\sum_{\mathbf{k}}\mathbf{c}_{\mathbf{k}}^{\dagger}\mathcal{H}(\mathbf{k})\mathbf{c}_{\mathbf{k}}$ on an arbitrary lattice. In real space,

$$\mathcal{H}=\sum_{\mathbf{r}\mathbf{r}'}\sum_{\gamma\gamma'} h_{\mathbf{r}\mathbf{r}'}^{\gamma\gamma'}\, c_{\mathbf{r}\gamma}^{\dagger} c_{\mathbf{r}'\gamma'},$$

(1)

where we have associated the band degrees of freedom to a sublattice structure $\gamma$, and $h_{\mathbf{r}\mathbf{r}'}^{\gamma\gamma'}=0$ for $|\mathbf{r}-\mathbf{r}'|$ outside the coupling range of the model, i.e., adjacent sites for a nearest-neighbor (NN) model, next-adjacent for next-NN, etc. The operator $c_{\mathbf{r}\gamma}$ annihilates particle excitations on sublattice $\gamma$ of site $\mathbf{r}$.

To take advantage of the degrees of freedom in the many-body Hilbert space, our mapping is defined such that the hopping of a single particle on the original d-dimensional lattice from $(\mathbf{r}', \gamma')$ to $(\mathbf{r}, \gamma)$ becomes the simultaneous hopping of d particles, each of a distinct species, from locations $(r_1', \ldots, r_d')$ to $(r_1, \ldots, r_d)$ and sublattice $\gamma'$ to $\gamma$ on a 1D interacting chain. Explicitly, this map is given by

$$c_{\mathbf{r}\gamma}^{\dagger} \mapsto \prod_{\alpha=1}^{d}\left[\omega_{r_{\alpha}\gamma}^{\alpha}\right]^{\dagger}, \qquad c_{\mathbf{r}\gamma} \mapsto \prod_{\alpha=1}^{d}\omega_{r_{\alpha}\gamma}^{\alpha},$$

(2)

where $r_{\alpha}$ is the $\alpha$-th component of $\mathbf{r}$, and $\{\omega_{\ell\gamma}^{\alpha}\}_{\alpha=1}^{d}$ represents d excitation species hosted on sublattice $\gamma$ of site $\ell$ on the interacting chain, yielding

$$\mathcal{H} \mapsto \mathcal{H}_{\mathrm{1D}}=\sum_{\mathbf{r}\mathbf{r}'}\sum_{\gamma\gamma'} h_{\mathbf{r}\mathbf{r}'}^{\gamma\gamma'}\prod_{\alpha=1}^{d}\left[\omega_{r_{\alpha}\gamma}^{\alpha}\right]^{\dagger}\omega_{r_{\alpha}'\gamma'}^{\alpha}.$$

(3)

In the single-particle context, exchange statistics is unimportant, and $\{\omega^{\alpha}\}$ can be taken to be commuting. This mapping framework accommodates any lattice dimension and geometry, and any number of bands or sublattice degrees of freedom. As the mapping is performed at the second-quantized level, any one-body Hamiltonian expressed in second-quantized form can be treated, which encompasses a wide variety of single-body topological phenomena of interest. We refer readers to Supplementary Note 1 for a more expansive technical discussion. With slight modifications, this mapping can also be extended to admit interaction terms in the original d-dimensional lattice Hamiltonian, although we do not explore them further in this work.

For concreteness, we specialize our Hamiltonian to HOT systems henceforth and shall detail how our mapping enables them to be encoded on quantum processors. The simplest square lattice with HOT corner modes21 may be constructed from the paradigmatic 1D Su-Schrieffer-Heeger (SSH) model29. To allow for sufficient degrees of freedom for topological localization, we minimally require a 2D mesh of two different types of SSH chains in each direction, arranged in an alternating fashion

$$\mathcal{H}_{\mathrm{lattice}}^{\mathrm{2D}}=\sum_{(x,y)\in[1,L]^{2}}\left[u_{xy}^{x}\, c_{(x+1)y}^{\dagger}+u_{yx}^{y}\, c_{x(y+1)}^{\dagger}\right]c_{xy}+\mathrm{h.c.},$$

(4)

where $c_{xy}$ is the annihilation operator acting on site $(x, y)$ of the lattice and $u_{r_1 r_2}^{\alpha}$ takes values of either $v_{r_1 r_2}^{\alpha}$ for intra-cell hopping (odd $r_2$) or $w_{r_1 r_2}^{\alpha}$ for inter-cell hopping (even $r_2$), $\alpha\in\{x, y\}$. Conceptually, we recognize that the 2D lattice momentum space can be equivalently interpreted as the joint configuration momentum space of two particles, specifically, the (1+1)-body sector of a corresponding 1D interacting chain. We map $c_{xy}\mapsto\mu_{x}\nu_{y}$, where $\mu_{\ell}$ and $\nu_{\ell}$ annihilate hardcore bosons of two different species at site $\ell$ on the chain. In the notation of Eq. (2), we identify $\omega_{\ell}^{1}=\omega_{\ell}^{x}=\mu_{\ell}$ and $\omega_{\ell}^{2}=\omega_{\ell}^{y}=\nu_{\ell}$, and the sublattice structure has been absorbed into the (parity of) spatial coordinates. This yields an effective 1D, two-boson chain described by

$$\mathcal{H}_{\mathrm{chain}}^{\mathrm{2D}}=\sum_{x=1}^{L}\sum_{y=1}^{L}\left[u_{xy}^{x}\,\mu_{x+1}^{\dagger}\mu_{x}\, n_{y}^{\nu}+u_{yx}^{y}\,\nu_{y+1}^{\dagger}\nu_{y}\, n_{x}^{\mu}\right]+\mathrm{h.c.},$$

(5)

where $n_{\ell}^{\omega}$ is the number operator for species $\omega$ at site $\ell$ of the chain. As written, each term in $\mathcal{H}_{\mathrm{chain}}^{\mathrm{2D}}$ represents an effective SSH model for one particular species $\mu$ or $\nu$, with the other species not participating in hopping but merely present (hence its number operator). These two-body interactions arising in $\mathcal{H}_{\mathrm{chain}}^{\mathrm{2D}}$ appear convoluted, but can be readily accommodated on a quantum computer, taking advantage of the quantum nature of the platform. To realize $\mathcal{H}_{\mathrm{chain}}^{\mathrm{2D}}$ on a quantum computer, we utilize 2 qubits to represent each site of the chain, associating the unoccupied, $\mu$-occupied, $\nu$-occupied and both $\mu,\nu$-occupied boson states to qubit states $\vert 00\rangle$, $\vert 01\rangle$, $\vert 10\rangle$, and $\vert 11\rangle$ respectively. Thus $2L$ qubits are needed for the simulation, a significant reduction from $L^2$ qubits without the mapping, especially for large lattice sizes. We present simulation results on IBM quantum computers for lattice size $L \sim \mathcal{O}(10)$ in the Two-dimensional HOT square lattice section.
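
To make this correspondence concrete, the following minimal numpy sketch (our own illustration, not the authors' code) builds the single-particle lattice Hamiltonian of Eq. (4) and the two-species hardcore-boson chain of Eq. (5) encoded on 2L qubits, then checks that the chain's one-$\mu$-plus-one-$\nu$ sector reproduces the lattice spectrum exactly. The lattice size and the staggering pattern of the hopping amplitudes are simplified placeholders rather than the values used in the paper; the sector equivalence does not depend on them.

```python
import numpy as np
from functools import reduce

L = 4                                              # lattice width (placeholder)
def u_x(x, y): return 0.5 if x % 2 else 1.0        # hopping along x (placeholder values)
def u_y(x, y): return 0.5 if y % 2 else 1.0        # hopping along y (placeholder values)

# ---- Eq. (4): one particle on the L x L lattice --------------------------
idx = lambda x, y: (x - 1) * L + (y - 1)           # flatten (x, y) to a matrix index
H_lat = np.zeros((L * L, L * L))
for x in range(1, L + 1):
    for y in range(1, L + 1):
        if x < L:
            H_lat[idx(x + 1, y), idx(x, y)] = H_lat[idx(x, y), idx(x + 1, y)] = u_x(x, y)
        if y < L:
            H_lat[idx(x, y + 1), idx(x, y)] = H_lat[idx(x, y), idx(x, y + 1)] = u_y(x, y)

# ---- Eq. (5): two-species hardcore-boson chain encoded on 2L qubits ------
sm, I2 = np.array([[0.0, 1.0], [0.0, 0.0]]), np.eye(2)   # qubit lowering operator |0><1|
def embed(op, k, n):
    """Embed a single-qubit operator at qubit k of an n-qubit register."""
    mats = [I2] * n
    mats[k] = op
    return reduce(np.kron, mats)

nq = 2 * L                                         # two qubits (mu, nu) per chain site
mu = [embed(sm, 2 * l, nq) for l in range(L)]      # mu boson at chain site l+1
nu = [embed(sm, 2 * l + 1, nq) for l in range(L)]  # nu boson at chain site l+1
num = lambda a: a.conj().T @ a                     # number operator a^dagger a

H_chain = np.zeros((2 ** nq, 2 ** nq))
for x in range(1, L + 1):
    for y in range(1, L + 1):
        if x < L:   # mu hops x -> x+1 while the nu boson sits at site y
            H_chain += u_x(x, y) * (mu[x].conj().T @ mu[x - 1] @ num(nu[y - 1]))
        if y < L:   # nu hops y -> y+1 while the mu boson sits at site x
            H_chain += u_y(x, y) * (nu[y].conj().T @ nu[y - 1] @ num(mu[x - 1]))
H_chain = H_chain + H_chain.conj().T               # add the hermitian-conjugate terms

# ---- restrict the chain to the sector with exactly one mu and one nu -----
N_mu = sum(num(m) for m in mu)
N_nu = sum(num(n) for n in nu)
sector = np.where((np.diag(N_mu) == 1) & (np.diag(N_nu) == 1))[0]
H_sec = H_chain[np.ix_(sector, sector)]

# identical spectra confirm that the chain's (1+1)-boson sector is the lattice
assert np.allclose(np.linalg.eigvalsh(H_sec), np.linalg.eigvalsh(H_lat))
print("spectra match for", L * L, "lattice sites encoded on", nq, "qubits")
```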

Our methodology naturally generalizes to higher dimensions. Specifically, a d-dimensional HOT lattice maps onto a d-species interacting 1D chain, and d qubits are employed to represent each site of the chain, providing sufficient many-body degrees of freedom to encode the $2^d$ occupancy basis states of each site. We write

$$\mathcal{H}_{\mathrm{lattice}}^{d\mathrm{D}}=\sum_{\mathbf{r}\in[1,L]^{d}}\sum_{\alpha=1}^{d}u_{\mathbf{r}}^{\alpha}\, c_{\mathbf{r}+\hat{\mathbf{e}}_{\alpha}}^{\dagger}c_{\mathbf{r}}+\mathrm{h.c.},$$

(6)

where $\alpha$ enumerates the directions along which hoppings occur and $\hat{\mathbf{e}}_{\alpha}$ is the unit vector along $\alpha$. As before, the hopping coefficients alternate between inter- and intra-cell values that can be different in each direction. Compactly, $u_{\mathbf{r}}^{\alpha}=[1-\pi(r_{\alpha})]\, v_{\boldsymbol{\pi}(\mathbf{r}_{\alpha})}^{\alpha}+\pi(r_{\alpha})\, w_{\boldsymbol{\pi}(\mathbf{r}_{\alpha})}^{\alpha}$ for parity function $\pi$, intra- and inter-cell hopping coefficients $v_{\boldsymbol{\pi}(\mathbf{r}_{\alpha})}^{\alpha}$ and $w_{\boldsymbol{\pi}(\mathbf{r}_{\alpha})}^{\alpha}$, and $\mathbf{r}_{\alpha}$ are spatial coordinates in non-$\alpha$ directions (see Supplementary Table 1 for details of the hopping parameter values used in this work). Using $d$ hardcore boson species $\{\omega^{\alpha}\}$ to represent the $d$ dimensions, we map onto an interacting chain via $c_{\mathbf{r}}\mapsto\prod_{\alpha=1}^{d}\omega_{r_{\alpha}}^{\alpha}$, giving

$$\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}=\sum_{\mathbf{r}\in[1,L]^{d}}\sum_{\alpha=1}^{d}u_{\mathbf{r}}^{\alpha}\left[\left(\omega_{r_{\alpha}+1}^{\alpha}\right)^{\dagger}\omega_{r_{\alpha}}^{\alpha}\prod_{\beta=1,\,\beta\neq\alpha}^{d}n_{r_{\beta}}^{\beta}\right]+\mathrm{h.c.},$$

(7)

where $\omega_{\ell}^{\alpha}$ annihilates a hardcore boson of species $\alpha$ at site $\ell$ of the chain and $n_{\ell}^{\alpha}$ is the number operator of species $\alpha$. In the $d=2$ square lattice above, we had $\mathbf{r}=(x, y)$ and $\{\omega^{\alpha}\}=\{\mu, \nu\}$. The highest dimensional HOT lattice we shall examine is the $d=4$ tesseract, for which $\mathbf{r}=(x, y, z, w)$ and $\{\omega^{\alpha}\}$ comprises four distinct boson species. In total, a d-dimensional HOT lattice Hamiltonian has $d\cdot 2^{d}$ distinct hopping coefficients, since there are $d$ different lattice directions and $2^{d-1}$ distinct edges along each direction, each comprising two distinct hopping amplitudes for inter- and intra-cell hopping. Appropriately tuning these coefficients allows the manifestation of robust HOT modes along the boundaries (corners, edges, etc.) of the lattices; schematics of the various lattice configurations investigated in our experiments are shown in later sections.

Accordingly, the equivalent interacting 1D chain requires $dL$ qubits to realize, an overwhelming reduction from the $L^d$ otherwise needed in a direct simulation of $\mathcal{H}_{\mathrm{lattice}}^{d\mathrm{D}}$ without the mapping. We remark that such a significant compression is possible because HOT is inherently a single-particle phenomenon. See Methods for further details and optimizations of our mapping scheme on the HOT lattices considered, and Supplementary Note 1 for an extended general discussion, including examples of other lattices and models.

With our mapping, a d-dimensional HOT lattice $\mathcal{H}_{\mathrm{lattice}}^{d\mathrm{D}}$ with $L^d$ sites is mapped onto an interacting 1D chain $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ with $dL$ qubits, which can be feasibly realized on existing NISQ devices for $L \sim \mathcal{O}(10)$ and $d \le 4$. While the resultant interactions in $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ are inevitably complicated, below we describe how $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ can be viably simulated on quantum hardware.
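
As a concrete illustration of this compression (our own arithmetic, using the $dL$-versus-$L^d$ counts stated above with $L = 10$):

$$d=2:\ 20\ \text{vs}\ 100\ \text{qubits},\qquad d=3:\ 30\ \text{vs}\ 1000,\qquad d=4:\ 40\ \text{vs}\ 10{,}000.$$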

A high-level overview of our general framework for simulating HOT time-evolution is illustrated in Fig. 2. To evolve an initial state $\vert\psi_0\rangle$, it is necessary to implement the unitary propagator $U(t)=\exp(-i\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}} t)$ as a quantum circuit, such that the circuit yields $\vert\psi(t)\rangle=U(t)\vert\psi_0\rangle$ and desired observables can be measured upon termination. A standard method to implement $U(t)$ is Trotterization, which decomposes $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ in the spin-1/2 basis and splits time-evolution into small steps (see Methods for details). However, while straightforward, such an approach yields deep circuits unsuitable for present-generation NISQ hardware. To compress the circuits, we utilize a tensor network-aided recompilation technique30,31,32,33. We exploit the number-conserving symmetries of $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ in each boson species, arising from $\mathcal{H}_{\mathrm{lattice}}^{d\mathrm{D}}$ and the nature of our mapping (see Methods), to enhance circuit construction performance and quality at large circuit breadths (up to 32 qubits). Moreover, to improve data quality amidst hardware noise, we employ a suite of error mitigation techniques, in particular, readout error mitigation (RO) that approximately corrects bit-flip errors during measurement34, a post-selection (PS) technique that discards results in unphysical Fock-space sectors30,35, and averaging across machines and qubit chains (see Methods).
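
As a toy illustration of the Trotterization step mentioned above (our own numpy sketch under simplified assumptions, not the tensor-network recompilation pipeline used in the study), one can split a small Hamiltonian into two non-commuting groups of terms and watch the first-order Trotter error shrink as the number of steps grows:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def rand_herm(n):
    """A random Hermitian matrix standing in for one group of chain terms."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

A, B = rand_herm(8), rand_herm(8)        # two non-commuting groups of terms
H, t = A + B, 1.0
exact = expm(-1j * H * t)                # the target propagator U(t)

for steps in (1, 4, 16, 64):
    dt = t / steps
    step = expm(-1j * A * dt) @ expm(-1j * B * dt)    # one first-order Trotter step
    trotter = np.linalg.matrix_power(step, steps)
    print(steps, np.linalg.norm(trotter - exact, 2))  # error decreases roughly as 1/steps
```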

a, b Mapping of a higher-dimensional lattice to a 1D interacting chain to facilitate quantum simulation on near-term devices. Concretely, a two-dimensional single-particle lattice can be represented by a two-species interacting chain; a three-dimensional lattice can be represented by a three-species chain with three-body interactions. c Overview of quantum simulation methodology: higher-dimensional lattices are first mapped onto interacting chains, then onto qubits; various techniques, such as d Trotterization and e ansatz-based recompilation, enable the construction of quantum circuits for dynamical time-evolution, or IQPE for probing the spectrum. The quantum circuits are executed on the quantum processor, and results are post-processed with RO and PS error mitigations to reduce the effects of hardware noise. See Methods for elaborations on the mapping procedure, and quantum circuit construction and optimization.

After acting on $\vert\psi_0\rangle$ with the quantum circuit that effects U(t), terminal computational-basis measurements are performed on the simulation qubits. We retrieve the site-resolved occupancy densities $\rho(\mathbf{r}) = \langle c_{\mathbf{r}}^{\dagger} c_{\mathbf{r}} \rangle = \langle \prod_{\alpha=1}^{d} n_{r_{\alpha}}^{\alpha} \rangle$ on the d-dimensional lattice, and the extent of evolution of $\vert\psi(t)\rangle$ away from $\vert\psi_0\rangle$, whose occupancy densities are $\rho_0(\mathbf{r})$, is assessed via the occupancy fidelity

$$0 \le \mathcal{F}_{\rho} = \frac{\left[\sum_{\mathbf{r}} \rho(\mathbf{r})\,\rho_{0}(\mathbf{r})\right]^{2}}{\left[\sum_{\mathbf{r}} \rho(\mathbf{r})^{2}\right]\left[\sum_{\mathbf{r}} \rho_{0}(\mathbf{r})^{2}\right]} \le 1.$$

(8)

Compared to the state fidelity $\mathcal{F} = \vert\langle\psi_0\vert\psi\rangle\vert^{2}$, the occupancy fidelity $\mathcal{F}_{\rho}$ is considerably more resource-efficient to measure on quantum hardware.
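
A direct NumPy transcription of Eq. (8) (an illustrative sketch; the function name is ours), taking the measured and reference density profiles as flattened arrays over lattice sites:

    import numpy as np

    def occupancy_fidelity(rho, rho0):
        # Occupancy fidelity of Eq. (8): normalized squared overlap of the
        # site-resolved density profiles rho(r) and rho0(r).
        overlap = np.sum(rho * rho0) ** 2
        norm = np.sum(rho ** 2) * np.sum(rho0 ** 2)
        return overlap / norm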

In addition to time evolution, we can also directly probe the energy spectrum of our simulated Hamiltonian $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ through iterative quantum phase estimation (IQPE)36; see Methods. Specifically, to characterize the topology of HOT systems, we use IQPE to probe the existence of midgap HOT modes at exponentially suppressed (effectively zero for L ≫ 1) energies. In contrast to quantum phase estimation37,38, IQPE circuits are shallower and require fewer qubits, and are thus preferable for implementation on NISQ hardware. As our interest is in HOT modes, we initiate IQPE with maximally localized boundary states that are easily constructed a priori, which exhibit good overlap (>80% state fidelity) with HOT eigenstates, and examine whether IQPE converges consistently towards zero energy. These states are listed in Supplementary Table 2.
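
The following is a minimal classical emulation of the IQPE bit-by-bit readout logic (our own sketch, not the circuits run on hardware), assuming the register holds an exact eigenstate with eigenphase φ ∈ [0, 1); each iteration measures one binary digit of φ, from least to most significant, using a feedback phase built from the bits already found:

    import numpy as np

    def iqpe_bits(phi, m):
        # Emulates IQPE for an exact eigenstate with eigenvalue exp(2*pi*1j*phi);
        # returns the m most significant binary digits of phi.
        bits = [0] * (m + 1)                      # bits[k] holds phi_k, k = 1..m
        for k in range(m, 0, -1):                 # least significant bit first
            # feedback phase cancelling the contribution of already-measured bits
            omega = -2 * np.pi * sum(bits[j] * 2.0 ** (k - 1 - j) for j in range(k + 1, m + 1))
            theta = 2 * np.pi * 2 ** (k - 1) * phi + omega
            # ancilla ends in (|0> + e^{i*theta}|1>)/sqrt(2); X-basis outcome
            bits[k] = 0 if np.cos(theta / 2) ** 2 > 0.5 else 1
        return bits[1:]

    print(iqpe_bits(0.8125, 4))   # 0.8125 = 0.1101 in binary -> [1, 1, 0, 1]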

As the lowest-dimensional incarnation of HOT lattices, the d = 2 staggered square lattice harbors only one type of HOT mode: zero-dimensional corner modes (Fig. 1a). Previously, such HOT corner modes on 2D lattices have been realized in various metamaterials39,40 and photonic waveguides41, but not in a purely quantum setting to date. Our equivalent 1D hardcore-boson chain can be interpreted as possessing interaction-induced topology that manifests in the joint configuration space of the d bosons hosted on the many-body chain. Here, the topological localization is mediated not by physical SSH-like couplings or band polarization but by the combined exclusion effects of all its interaction terms. We emphasize that our physically realized 1D chain contains highly non-trivial interaction terms involving multiple sites: the illustrative example in Fig. 3f for an L = 6 chain already contains a multitude of interactions, even though it is much smaller than the L = 10 and L = 16 systems we simulated on quantum hardware. As evident, the d × 2^d = 8 unique types of interactions, corresponding to the 8 different couplings on the lattice, are mostly non-local; but this does not prohibit their implementation on quantum circuits. Indeed, the versatility of digital quantum simulators in realizing effectively arbitrary interactions allows the implementation of complex interacting Hamiltonian terms, and is critical in enabling our quantum device simulations.

a Ordered eigenenergies on a 10×10 lattice for the topologically trivial C0 and nontrivial C2 and C4 configurations. They correspond to 0, 2, and 4 midgap zero modes (red diamonds), as measured via IQPE on a 20-qubit quantum chain plus an additional ancillary qubit; the shaded red band indicates the IQPE energy resolution. The corner state profiles (right insets) and other eigenenergies (black and gray dots) are numerically obtained via ED. Time-evolution of four initial states on a 16×16 lattice mapped onto a 32-qubit chain: b, c localized at corners to highlight topological distinction, d localized along an edge, and e delocalized in the vicinity of a corner. Left plots show occupancy fidelity for the various lattice configurations, obtained from ED and quantum hardware (labeled HW), with insets showing the site-resolved occupancy density ρ(x, y) of the initial states (darker shading represents higher density). The right grid shows occupancy density measured on hardware at two later times. States with good overlap with robust corners exhibit minimal evolution. Error bars represent standard deviation across repetitions on different qubit chains and devices. In general, heavy overlap between an initial state and a HOT eigenstate confers topological robustness, resulting in significantly slowed decay. f Schematic of the interacting chain Hamiltonian, mapped from the parent 2D lattice, illustrated for a smaller 6×6 square lattice. The physical sites of the interacting boson chain are colored black, with their many-body interactions represented by colored vertices. Intra- and inter-cell hoppings, mapped onto interactions, are respectively denoted $v_{\pi}^{\alpha}$ and $w_{\pi}^{\alpha}$ for axes α ∈ {x, y} and parities $\pi \in \mathbb{Z}_2^{1}$.

In our experiments, we consider three different scenarios: C0, having no topological corner modes; C2, having two corner modes at corners (x, y) = (1, 1) and (L, 1); and C4, having corner modes on all four corners. These scenarios can be obtained by appropriately tuning the eight coupling parameters in the Hamiltonian (Eq. (4)); see Supplementary Table 1 for parameter values42.

We first show that the correct degeneracy of midgap HOT modes can be measured for each of the configurations C0, C2, and C4 on IBM transmon-based quantum computers, as presented in Fig. 3a. For a start, we used a 20-qubit chain, which logically encodes a 10×10 HOT lattice, with an additional ancillary qubit for IQPE readout. The number of topological corner modes in each case is accurately obtained through the degeneracy of midgap states of exponentially suppressed energy (red), as measured through IQPE executed on quantum hardware; see Methods for details. That these midgap modes are indeed corner-localized is verified via numerical (classical) diagonalization, as in the insets of Fig. 3a.

Next, we demonstrate highly accurate dynamical state evolution on larger 32-qubit chains on quantum hardware. We time-evolve various initial states on 16×16 HOT lattices in the C0, C2, and C4 configurations and measure their site-resolved occupancy densities ρ(x, y), up to a final time t = 0.8 when fidelity trends become unambiguous. The resultant occupancy fidelity plots (Fig. 3b–e) conform to the expectation that states localized on topological corners survive the longest, and are also in excellent agreement with reference data from ED. For instance, a localized state at the corner (x0, y0) = (1, 1) is robust on the C2 and C4 lattice configurations (Fig. 3b), whereas one localized on the (x0, y0) = (1, L) corner is robust only on the C4 configuration (Fig. 3c). These fidelity decay trends are corroborated by the measured site-resolved occupancy density ρ(x, y): low occupancy fidelity is always accompanied by a ρ(x, y) diffused away from the initial state, whereas strongly localized states have high occupancy fidelity. In general, heavy overlap between an initial state and a HOT eigenstate confers topological robustness, resulting in significantly slowed decay; this is apparent from the occupancy fidelities, which remain near unity over time. In comparison, states that do not enjoy topological protection, such as the (1, L)-localized state on the C2 configuration and all initial states on the C0 configuration, rapidly delocalize and decay quickly.

Our experimental runs remain accurate even for initial states that are situated away from the lattice corners, such that they cannot enjoy full topological protection. In Fig. 3d, the initial state at (x0, y0) = (2, 1), which neighbors the corner (1, 1), loses its fidelity much sooner than the corner initial state of Fig. 3b, even for the C2 and C4 topological corner configurations. That said, its fidelity evolution still agrees well with ED reference data. In a similar vein, an initial state that is somewhat delocalized at a corner (Fig. 3e) is still conferred a degree of stability when the corner is topological.

Next, we extend our investigation to the staggered cubic lattice in 3D, which hosts third-order HOT corner modes (Fig. 1a). These elusive corner modes have to date only been realized in classical platforms43 or in synthetic electronic lattices44. Compared to the 2D case, the implementation of the 3D HOT lattice (Eq. (6)) as a 1D interacting chain (Eq. (7)) on quantum hardware is more sophisticated. The larger dimensionality of the staggered cubic lattice, in comparison to the square lattice, is reflected in a larger density of multi-site interaction terms on the interacting chain. This is illustrated in Fig. 4b for the minimal 4×4×4 lattice, where the combination of the various d = 3-body interactions gives rise to emergent corner robustness (which appears as up to 3-body boundary clustering on the 1D chain).

a The header row displays energy spectra for the topologically trivial C0 and inequivalent nontrivial C4a, C4b, and C8 configurations. The configurations host 0, 4, and 8 midgap zero modes (red diamonds), as measured via IQPE on an 18-qubit chain plus an ancillary qubit; the shaded red band indicates the IQPE energy resolution. Schematics illustrating the locations of topologically robust corners are shown on the right. Subsequent rows depict the time-evolution of five initial states on a 6×6×6 lattice mapped onto an 18-qubit chain: localized at a corner, on an edge, on a face, and in the bulk of the cube, and delocalized in the vicinity of a corner. The leftmost column plots occupancy fidelity for the various lattice configurations, obtained from ED and quantum hardware (labeled HW), with insets showing the site-resolved occupancy density ρ(x, y, z) of the initial state (darker shading represents higher density). The central grid shows occupancy density measured on hardware at a later time (t = 0.6), for the corresponding initial state (row) and lattice configuration (column). Error bars represent standard deviation across repetitions on different qubit chains and devices. Again, initial states localized close to topological corners exhibit higher occupancy fidelity. b Hamiltonian schematic of the interacting chain realizing a minimal 4×4×4 cubic lattice. Sites on the chain are colored black; colored vertices connecting to multiple sites on the chain denote interaction terms. Intra- and inter-cell hoppings, mapped onto interactions, are respectively denoted $v_{\pi}^{\alpha}$ and $w_{\pi}^{\alpha}$ for axes α ∈ {x, y, z} and parities $\pi \in \mathbb{Z}_2^{2}$.

On quantum hardware, we implemented 18-qubit chains representing 6×6×6 cubic lattices in four configurations, specifically, the trivial lattice (C0), two geometrically inequivalent configurations hosting four topological corners (C4a, C4b), and a configuration with all 2^3 = 8 topological corners (C8). Similar to the 2D HOT lattice, we first present the degeneracy of zero-energy topological modes (header row of Fig. 4a) with low-energy spectral data (red diamonds) accurately obtained via IQPE.

From the first row of Fig. 4a, it is apparent that initial states localized on topological corners enjoy significant robustness. Namely, the measured site-resolved occupancy densities ρ(x, y, z) (four right columns) indicate that the localization of the (x0, y0, z0) = (1, 1, 1) corner initial states on the C4a, C4b, and C8 configurations is maintained, and the measured occupancy fidelities remain near unity. In comparison, an initial corner-localized state on the C0 configuration, which hosts no topological corner modes, delocalizes quickly. Moving away from the corners, an edge-localized state adjacent to a topological corner is conferred slight, but nonetheless present, stability (second row of Fig. 4a), as observed from the slower decay of the (x0, y0, z0) = (2, 1, 1) state on the C4a, C4b, and C8 configurations in comparison to the topologically trivial C0 lattice. This conferred robustness is diminished for states localized further from topological corners, for instance, surface-localized states (third row), and is virtually unnoticeable for states localized in the bulk (fourth row), which decay rapidly for all topological configurations. Initial states that are slightly delocalized near a corner enjoy some protection when the corner is topological, but are unstable when the corner is trivial (fifth row of Fig. 4a). We again highlight the quantitative agreement of our quantum hardware simulation results with theoretical ED predictions.

We now turn to our key results: the NISQ quantum hardware simulation of four-dimensional staggered tesseract HOT lattices. A true 4D lattice is difficult to simulate on most experimental platforms, and with a few exceptions45, most works to date have relied on using synthetic dimensions18,46. In comparison, utilizing our exact mapping (Eqs. (6) and (7)), which exploits the exponentially large many-body Hilbert space accessible by a quantum computer, a tesseract lattice can be directly simulated on a physical 1D spin (qubit) chain, with the number of spatial dimensions limited only by the number of qubits. The tesseract unit cell can be visualized as two interlinked three-dimensional cubes (spanned by the x, y, z axes) living in adjacent w-slices (Fig. 5). The full tesseract lattice of side length L is then represented as successive cubes with different w coordinates, stacked successively from inside out, with the inner and outer wireframe cubes being the w = 1 and w = L slices. Being more sophisticated, the 4D HOT lattice features various types of HOT corner, edge, and surface modes (Fig. 1a); we presently focus on the fourth-order (hexadecapolar) HOT corner modes, as well as the third-order (octopolar) HOT edge modes.

An L = 6 tesseract lattice is illustrated as six cube slices indexed by w and highlighted on a color map. The header row displays energy spectra computed numerically for the topologically trivial C0 and nontrivial C4, C8, and C16 configurations. The configurations host 0, 4, 8, and 16 midgap zero modes (black circles). Schematics on the right illustrate the locations of the topologically robust corners. Subsequent rows depict the time-evolution of three initial states on a 6×6×6×6 lattice mapped onto a 24-qubit chain, localized on a a corner, b an edge, and c a face. The leftmost column plots occupancy fidelity for the various lattice configurations, obtained from ED and quantum hardware (labeled HW), with insets showing the site-resolved occupancy density ρ(x, y, z, w) of the initial state. The central grid shows occupancy density measured on hardware at the final simulation time (t = 0.6), for the corresponding initial state (row) and lattice configuration (column). The color of individual sites (spheres) denotes their w-coordinate and color saturation denotes occupancy of the site; unoccupied sites are translucent. Error bars represent standard deviation across repetitions on different qubit chains and devices. Initial states with less overlap with topological corners exhibit slightly lower stability than their lower-dimensional counterparts, as these states diffuse into the more spacious 4D configuration space. d Hamiltonian schematic of the interacting chain realizing a minimal 4×4×4×4 tesseract lattice. Sites on the chain are colored black; colored vertices connecting to multiple sites on the chain denote interaction terms. Intra- and inter-cell hoppings, mapped onto interactions, are respectively denoted $v_{\pi}^{\alpha}$ and $w_{\pi}^{\alpha}$ for axes α ∈ {x, y, z, w} and parities $\pi \in \mathbb{Z}_2^{3}$. To limit visual clutter, only the $v_{\pi}^{\alpha}$ intra-cell couplings are shown; a corresponding set of $w_{\pi}^{\alpha}$ inter-cell couplings is present in the Hamiltonian but has been omitted from the diagram.

To start, we realized a dL = 4×6 = 24-qubit chain on the quantum processor, which encodes a 6×6×6×6 HOT tesseract. The 4-body (8-operator) interactions now come in d·2^d = 64 types; half of them are illustrated in Fig. 5d, which depicts only the minimal L = 4 case. As discussed in the Mapping higher-dimensional lattices to 1D quantum chains section, these interactions are each a product of d − 1 density terms and a hopping process, the latter acting on the particle species that encodes the coupling direction on the HOT tesseract. In generic models with non-axially aligned hopping, these interactions could be a product of up to d hopping processes. As we shortly illustrate, despite the complexity of the interactions, the signal-to-noise ratio in our hardware simulations (Fig. 5a) remains reasonably good.
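
To illustrate the term structure just described, the sketch below (our own schematic illustration under assumed conventions, not the authors' implementation) builds, for one lattice hopping along direction α, the corresponding chain term: a single hop of species α plus d − 1 density factors, one for each remaining species:

    def chain_term_for_lattice_edge(r, alpha, amplitude):
        # r: 1-indexed lattice coordinates (r_1, ..., r_d); alpha: 0-based index
        # of the hopping direction. The lattice hopping r -> r + e_alpha maps to
        # a hop of boson species alpha between chain sites r[alpha] and r[alpha]+1,
        # weighted by the density of every other species beta at chain site r[beta].
        hop = (alpha, r[alpha], r[alpha] + 1)                  # (species, from, to)
        densities = [(beta, r[beta]) for beta in range(len(r)) if beta != alpha]
        return {"amplitude": amplitude, "hop": hop, "densities": densities}

    # Example: a y-direction hopping on the 2D lattice from (3, 2) to (3, 3) becomes a
    # species-y hop (chain site 2 -> 3) weighted by the species-x density at chain site 3.
    print(chain_term_for_lattice_edge((3, 2), alpha=1, amplitude=1.0))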

In Fig. 5, we consider the configurations C0, C4, C8, and C16, which correspond respectively to the topologically trivial scenario and lattice configurations hosting four, eight, and all sixteen HOT corner modes, as schematically sketched in the header row. Similar to the 2D and 3D HOT lattices, the site-resolved occupancy densities ρ(x, y, z, w) and occupancy fidelities measured on quantum hardware reveal strong robustness for initial states localized at topological corners, as illustrated by the strongly localized final states in the C4, C8, and C16 cases (Fig. 5a). However, their stability is now slightly lower, partly due to the more spacious 4D configuration space into which the state can diffuse, as seen from the colored clouds of partly occupied sites after time evolution. Evidently, the stability diminishes as we proceed to the edge- and surface-localized initial states (Fig. 5b, c).

Next, we investigate a lattice configuration that supports HOT edge modes (commonly referred to as topological hinge states in the literature22). So far we have seen topological robustness only from topological corner sites (Fig. 5); but with appropriate parameter tuning (see Supplementary Table 1), topological modes can be made to lie along entire edges. This is illustrated in the header row of Fig. 6, where topological modes lie along the y-edges. As our HOT lattices are constructed from a mesh of alternating SSH chains, we expect the topological edges to have wavefunction support (nonzero occupancy) only on alternate sites, consistent with the cumulative occupancy densities of the midgap zero-energy modes. This is corroborated by the site-resolved occupancy densities and occupancy fidelities measured on quantum hardware, which demonstrate that initial states localized on sites with topological wavefunction support are significantly more robust (Fig. 6a, b); i.e., (x0, y0, z0, w0) = (1, 3, 1, L) overlaps with the topological mode on the (1, y, 1, L), y ∈ {1, 3, 5} sites and is hence robust, but (1, 2, 1, L) is not. The stability of the initial state is reduced as we move farther from the corner, as can be seen, for instance, by comparing the occupancy fidelities and the size of the final occupancy cloud for (1, 1, 1, L) and (1, 3, 1, L) in Fig. 6a, b, which is expected from the decaying y-profile of the topological edge mode. Finally, our measurements verify that surface-localized states do not enjoy topological protection (Fig. 6c), as they are localized far away from the topological edges. It is noteworthy that such measurements into the interior of the 4D lattice can be made without additional difficulty on our 1D qubit chain, but doing so can present significant challenges on other platforms, even electrical (topolectrical) circuits.

Our mapping facilitates the realization of any desired HOT modes, beyond the aforementioned corner-mode examples. The header row on the left displays the energy spectrum for a configuration of the tesseract harboring topologically non-trivial edges (midgap mode energies in black). The accompanying schematic highlights alternating sites with topological edge wavefunction support. Subsequent columns present the site-resolved occupancy density ρ(x, y, z, w) for a 6×6×6×6 lattice mapped onto a 24-qubit chain, measured on quantum hardware at t = 0 (first row) and the final simulation time t = 0.6 (second row), for three different experiments. a A corner-localized state along a topological edge is robust, compared to one along a non-topological edge. b On a topologically non-trivial edge, a state localized on a site with topological wavefunction support is robust, compared to one localized on a site without support. c A surface-localized state far away from the topological edges diffuses into a large occupancy cloud. The bottom leftmost panel summarizes occupancy fidelities for the various initial states, obtained from ED and hardware (labeled HW). Error bars represent standard deviation across repetitions on different qubit chains and devices.

Our approach of mapping a d-dimensional HOT lattice onto an interacting 1D chain enabled a drastic reduction in the number of qubits required for simulation, and served a pivotal role in enabling the hardware realizations presented in this work. Here, we further illustrate that employing this mapping for simulation on quantum computers can provide a resource advantage over ED on classical computers, particularly at large lattice dimensionality d or linear size L. For this discussion, we largely leave aside tensor network methods, as their advantage over ED is unclear in the generic setting of lattice dimensionality d>1, with arbitrary initial states and evolution time (which may generate large entanglement).

To be concrete, we consider simulation tasks of the following broad type: given an initial state $\vert\psi_0\rangle$, we wish to perform time-evolution to $\vert\psi(t)\rangle$ and extract the expectation value of an observable O that is local, that is, O depends on $\mathcal{O}(l^{d})$ sites of the lattice for a fixed neighborhood of radius l independent of L. State preparation or initialization resources for $\vert\psi_0\rangle$ are excluded from our considerations, as there can be significant variations in cost depending on how the state is specified, for both classical and quantum methods. Measurement costs for computing O, however, are considered. To ensure a meaningful comparison, we assume first-order Pauli-basis Trotterization for the construction of quantum circuits, such that circuit preparation is algorithmically straightforward given a lattice Hamiltonian. As a baseline, classical ED of a d-dimensional, length-L system with a single particle generally requires $\mathcal{O}(L^{3d})$ run-time and $\mathcal{O}(L^{2d})$ dense classical storage to complete a task of this type47.

A direct implementation of a generic Hamiltonian using our mapping gives $\mathcal{O}(dL^{d} \cdot 2^{d})$ Pauli strings per Trotter step (see Methods), where the hoppings along each edge of the lattice, extensive in number, are allowed to be independently tuned. However, physically relevant lattices typically host only a systematic subset of hopping processes, described by a sub-extensive number of parameters. In particular, in the HOT lattices we considered, the hopping amplitude $u_{\mathbf{r}}^{\alpha}$ along each axis α depends only on α and the parities of the coordinates r. Noting the sub-extensive number of distinct hoppings, the lattice Hamiltonian can be written in a more favorable factorized form, yielding $\mathcal{O}(dL \cdot 2^{2d})$ Pauli strings per Trotter step (see Methods). Decomposing into a hardware gate set, the total number of gates in a time-evolution circuit scales as $\mathcal{O}(d^{2}L^{2} \cdot 2^{2d}/\epsilon)$ in the worst case for simulation precision ε, assuming all-to-all connectivity between qubits. Imposing linear nearest-neighbor (NN) connectivity on the qubit chain does not alter this bound. Crucially, there is no scaling of the form ~L^d, exponential in d, unlike classical ED.
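
As a rough numerical illustration of these counts (our own back-of-the-envelope sketch; the (d, L) values are chosen to match the lattice sizes discussed above, and the per-step estimates simply evaluate the quoted scalings without constant factors):

    # Illustrative resource comparison, single-particle sector, dense ED baseline
    for d, L in [(2, 16), (3, 6), (4, 6)]:
        qubits = d * L                           # qubits under the mapping
        ed_entries = L ** (2 * d)                # dense Hamiltonian entries, O(L^{2d})
        pauli_strings = d * L * 2 ** (2 * d)     # factorized form, O(dL * 2^{2d}) per Trotter step
        print(f"d={d}, L={L}: {qubits} qubits, ~{ed_entries:.1e} ED matrix entries, "
              f"~{pauli_strings} Pauli strings per Trotter step")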

For large L and d, the circuit preparation and execution time can be lower than the $\mathcal{O}(L^{3d})$ run-time of classical ED. We illustrate this in Fig. 7, which shows a qualitative comparison of run-time scaling between the quantum simulation approach and ED. We have assumed the execution time on hardware to scale as the number of gates in the circuit, $\mathcal{O}(d^{2}L^{2} \cdot 2^{2d}/\epsilon)$, which neglects speed-ups afforded by parallelization of single- or two-qubit gates acting on disjoint qubits48. The difference in asymptotic complexities implies a crossover at large L or d beyond which quantum simulation exhibits a growing advantage. The exact crossover boundary is sensitive to platform-specific details such as gate times and control capabilities; given the large spread in gate timescales (three orders of magnitude) across present-day platforms49,50, and the uncertain overheads from quantum error correction or mitigation, we avoid giving definite numerical promises on break-even L and d values. Classical memory usage is similarly bounded during circuit construction, straightforwardly reducible to $\mathcal{O}(dL)$ by constructing and executing gates in a streaming fashion51, and worst-case $\mathcal{O}(2^{ld})$ during readout to compute O, reducible to a constant supposing basis changes that map components of O onto the computational basis of a fixed number of measured qubits can be implemented on the quantum circuits52.

Comparison of the asymptotic computational time required for the dynamical simulation of d-dimensional, size-L lattice Hamiltonians of similar complexity to our HOT lattices. a With fixed lattice dimension d and increasing lattice size L, the time taken with our approach on a quantum computer (labeled QC) scales as L^2, rather than the higher power L^{3d} of classical ED. b For fixed L and varying d, our approach scales promisingly, like 4^d instead of (L^3)^d for ED. We assume conventional Trotterization for circuit construction; at large L and d, our mapping and quantum simulation approach can provide a resource advantage over classical numerical methods (e.g., ED).

The favorable resource scaling (run-time and memory), in combination with the modest dL qubits required, suggests promising scalability of our mapped quantum simulation approach, especially in realizing larger and higher-dimensional HOT lattices. We reiterate, however, that Trotterized circuits without additional optimization remain largely too deep for present-generation NISQ hardware to execute feasibly. The use of qudit hardware architectures in place of qubits can allow shallower circuits53; in particular, using a qudit of local Hilbert space dimension 2^d instead of a group of d qubits avoids, to a degree, the decomposition of long-range multi-site gates, assuming the ability to efficiently and accurately perform single- and two-qudit operations54. Nonetheless, for the quantum simulation of sophisticated topological lattices as described here to be achieved to its full potential, fault-tolerant quantum computation, or at the least quantum devices with vastly improved error characteristics and decoherence times, will likely be needed.


With spin centers, quantum computing takes a step forward – UC Riverside

Quantum computing, which uses the laws of quantum mechanics, can solve pressing problems in a broad range of fields, from medicine to machine learning, that are too complex for classical computers. Quantum simulators are devices made of interacting quantum units that can be programmed to simulate complex models of the physical world. Scientists can then obtain information about these models and, by extension, about the real world by varying the interactions in a controlled way and measuring the resulting behavior of the quantum simulators.

In a paper published in Physical Review B and selected by the journal as an editors' suggestion, a UC Riverside-led research team has proposed a chain of quantum magnetic objects, called spin centers, that, in the presence of an external magnetic field, can quantum simulate a variety of magnetic phases of matter as well as the transitions between these phases.

"We are designing new devices that house the spin centers and can be used to simulate and learn about interesting physical phenomena that cannot be fully studied with classical computers," said Shan-Wen Tsai, a professor of physics and astronomy, who led the research team. "Spin centers in solid state materials are localized quantum objects with great untapped potential for the design of new quantum simulators."

According to Troy Losey, Tsai's graduate student and first author of the paper, advances with these devices could make it possible to study more efficient ways of storing and transferring information, while also developing methods needed to create room-temperature quantum computers.

"We have many ideas for how to make improvements to spin-center-based quantum simulators compared to this initial proposed device," he said. "Employing these new ideas and considering more complex arrangements of spin centers could help create quantum simulators that are easy to build and operate, while still being able to simulate novel and meaningful physics."

Below, Tsai and Losey answer a couple of questions about the research:

Tsai: It is a device that exploits the unusual behaviors of quantum mechanics to simulate interesting physics that is too difficult for a regular computer to calculate. Unlike quantum computers that operate with qubits and universal gate operations, quantum simulators are individually designed to simulate/solve specific problems. By trading off the universal programmability of quantum computers in favor of exploiting the richness of different quantum interactions and geometrical arrangements, quantum simulators may be easier to implement and provide new applications for quantum devices, which is relevant because quantum computers aren't yet universally useful.

A spin center is a roughly atom-sized quantum magnetic object that can be placed in a crystal. It can store quantum information, communicate with other spin centers, and be controlled with lasers.

Losey: We can build the proposed quantum simulator to simulate exotic magnetic phases of matter and the phase transitions between them. These phase transitions are of great interest because at these transitions the behaviors of very different systems become identical, which implies that there are underlying physical phenomena connecting these different systems.

The techniques used to build this device can also be used for spin-center-based quantum computers, which are a leading candidate for the development of room-temperature quantum computers, whereas most quantum computers require extremely cold temperatures to function. Furthermore, our device assumes that the spin centers are placed in a straight line, but it is possible to place the spin centers in up to 3-dimensional arrangements. This could allow for the study of spin-based information devices that are more efficient than the methods currently used by computers.

As quantum simulators are easier to build and operate than quantum computers, we can currently use quantum simulators to solve certain problems that regular computers don't have the ability to address, while we wait for quantum computers to become more refined. However, this doesn't mean that quantum simulators can be built without challenge, as we are only now getting close to being good enough at manipulating spin centers, growing pure crystals, and working at low temperatures to build the quantum simulator that we propose.


Guest Post Controlling the Qubits: Overcoming DC Bias and Size Challenges in Quantum – The Quantum Insider

Guest Post by Gobinath Tamil Vanan (bio below)

Quantum computing, with its promise of efficient calculations in challenging applications, is rapidly advancing in research and development. The pivotal technology for quantum computing lies in the control and evaluation of qubits.

Quantum computing is gaining attention for its ability to solve complex problems that prove difficult for regular computers. In this journey, instruments like the DC bias source play a crucial role, especially for flux-tunable superconducting and silicon spin qubits. The DC bias source adjusts the flux that sets the resonance frequency of a superconducting qubit, and applies the DC bias voltage to each gate terminal of silicon spin qubits. In addition, as the number of qubits in a quantum computer grows, so does the number of DC bias sources needed to control them, and with it the physical size of the machine.

Figure 1. Single qubit control and evaluation system for flux-tunable superconducting and silicon spin qubits. The instruments and lines indicated in red represent the DC voltage bias source and wiring. For the flux-tunable superconducting qubit, the DC voltage bias source helps tune the resonance frequency using the magnetic flux generated in the coil. For the silicon spin qubit, the DC voltage bias source works by tuning the electric potential of gate terminals.

An engineer can initialize, control, and read the qubit states by using control and evaluation systems, as depicted in Figure 1. Such a system enables the characterization of qubit properties like coherence time and fidelity, and the execution of benchmark tests, thereby advancing the research and development of quantum computers.

Challenges in DC biasing of qubits

There are two significant challenges when using DC power supplies: 1) Voltage fluctuations due to DC power supply noise and environmental interference through long cables induce qubit decoherence and 2) DC power supplies that may number several hundred require substantial storage space and can introduce significant qubit decoherence.

Qubits are highly susceptible to noise, and even minor fluctuations in the DC bias voltage can quickly induce unintended changes in the quantum state. These changes can lead to the loss of information stored in the qubit, a phenomenon known as decoherence. This results in a decline in the precision of qubit control and evaluation. Moreover, quantum computers have now reached a stage where they can exceed 100 qubits. Supplying an independent DC bias to each qubit then requires securing substantial space to house several hundred general-purpose power supplies.

Voltage fluctuations induce qubit decoherence

In the quantum world, qubits exist in a superposition of states, representing both 0 and 1 simultaneously. This unique property makes them exceptionally powerful for certain computations. However, it also makes them incredibly sensitive to external influences. The challenge arises when DC power supply noise and environmental interference introduce voltage fluctuations that disturb the delicate balance of the qubit's superposition.

Even the slightest variation in voltage can cause the qubit's quantum state to waver, leading to decoherence and making qubits less reliable for computations. This is a significant challenge in quantum computing because maintaining the integrity of qubit states is crucial for accurate and reliable quantum information processing.

Figure 2. Fluctuation in DC voltage bias propagated to qubits

The DC power supply's output voltage noise is the primary contributor to fluctuations in the DC bias voltage. Furthermore, environmental interference, such as electromagnetic interference and physical vibrations of the cables, can contribute to voltage instability. The qubit's sensitivity to noise necessitates continual monitoring of potential noise sources. Figure 2 illustrates how this effect becomes more pronounced when the cables are extended, for instance because the DC power supply rack sits at a distance from the entrance of the cryostat or because the power supply is in the lower sections of the rack.

The occurrence of voltage fluctuations disrupting qubit coherence is rooted in the fundamental nature of quantum systems. The challenge is not just about preventing external disturbances but also about developing tools and technologies that can shield qubits from these disturbances, ensuring stable and coherent quantum states for reliable computational processes.

Larger quantum computers can introduce more qubit decoherence

One important direction in the practical application of quantum computers is the increase in the number of qubits to run more complex quantum algorithms. For example, the current noisy intermediate-scale quantum (NISQ) machines under development require the implementation of tens to hundreds of qubits. This translates into a need for a large number of DC power supplies, which must be physically housed and can collectively introduce considerable noise into the system.

This proliferation of DC bias sources introduces additional sources of noise into the system. The noise from DC bias sources can stem from three factors as follows:

1. Power Supply Imperfections: Not all power supplies are made for precision, and even small fluctuations or imperfections in the DC bias source can translate into noise in the qubit's operation.
2. Crosstalk: In a setup with numerous DC bias sources in close proximity, crosstalk can occur. This means that adjustments made to one qubit's bias source can affect neighboring qubits, leading to unwanted noise.
3. Electromagnetic Interference (EMI): The operation of multiple DC bias sources in a confined space can generate electromagnetic fields that interfere with each other. This interference can manifest as noise that disrupts the qubits' quantum states.

As the number of DC bias sources increases to accommodate a larger number of qubits, the overall size of the system increases, and the cumulative effect of these noise sources becomes more pronounced. Each additional DC bias source adds another layer of potential noise, making it challenging to maintain the precision and coherence of the qubits' states.

Figure 3. Configuration of 100-channel conventional precision power sources with a large footprint

Taking the example of a quantum computer that uses 100 qubits, providing a DC bias voltage to each qubit presents an additional challenge. Figure 3 illustrates that each qubit requires at least one DC power supply. The test rack must therefore accommodate a minimum of 100 power supply channels to bias all qubits. Even with power supplies of a typical 2U half-rack size housed in a maximum-sized rack, a single rack can only hold 40 channels.

Consequently, securing ample space measuring 180 cm in width, 90.5 cm in depth, and 182 cm in height for three racks becomes necessary in a laboratory filled with various other instruments, such as arbitrary waveform generators (AWGs). This necessity creates a logistical challenge regarding physical space within quantum computing laboratories. The spatial challenge not only impacts the physical layout of the lab but also raises practical concerns about efficient management, accessibility, and equipment maintenance. To address this challenge, there is a growing emphasis on developing compact and efficient power supply solutions that can cater to the individual requirements of each qubit while minimizing the overall footprint. Streamlining the power supply infrastructure is crucial for the scalability of quantum computing projects, enabling researchers to expand their quantum systems without space constraints.

Enabling an Effective and Efficient Quantum Computing Development

Solving spatial constraints helps scale quantum computing efforts, enabling researchers to explore larger quantum systems. Managing voltage fluctuations and spatial limitations in DC bias sources for quantum computing is crucial for progress.

Figure 4. Voltage source noise density with the combination of a source meter and a low-noise filter adapter

To achieve this, it is essential to use low-noise power supplies or source meters / source measure units (SMUs) that provide a clean bias voltage, positioned as close as possible to the cryostat. This approach significantly reduces unnecessary environmental interference picked up along exposed cable lengths.

You can attach optional accessories, such as a low-noise filter adapter (LNF), to precision source meters to further improve the stability of the bias voltage. In some cases, the noise level can be reduced to around 25 µV rms (10 Hz to 20 MHz, 6 V range), as illustrated in Figure 4.

Figure 5 shows, from a rack setup perspective, that using source meters with a compact form factor and high channel density allows for placement directly at the entrance of the cryostat, even at elevated positions. This approach significantly helps minimize DC voltage bias fluctuations, enabling ideal quantum control and precise qubit characterization through long coherence times.

Figure 5. A 100-channel configuration with high-density source meters set close to the cryostat, providing clean DC bias voltage

Tips to minimize the fluctuations in the bias voltage

Given the significant impact of the surrounding environment and experimental setup on DC bias fluctuations, achieving a clean bias voltage requires proper setup and usage. When constructing your DC bias line, you can minimize voltage noise and effectively utilize a high-precision source measure unit and a low-noise filter by paying attention to certain aspects.

Using different grounds for each instrument can create a circuit known as a ground loop, and ground loops can be a source of noise. Avoiding ground loops with techniques such as single-point grounding is therefore necessary; Figure 6 illustrates wiring that creates a ground loop and wiring that avoids one, as part of the steps to stabilize the DC bias voltage.

Figure 6. Examples of wiring that creates a ground loop (left) and avoids a ground loop (right)

You can either short the LF terminal to the frame ground or leave it floating. This choice could impact the noise level of the DC bias voltage. If your system design does not have specific requirements for the LF terminal's potential, you can experiment with both configurations and choose the one that yields better results.

According to Faraday's law, electromagnetic induction can contribute to the noise if the HF and LF cables become spatially separated. To prevent this, keep the HF and LF cables as close together as possible or use a twisted-pair configuration.

To address these challenges, use a combination of source meter options that provides high channel density, low noise, and precision voltage output, delivering a stable and clean bias voltage to more than 100 qubits. Ensure the source meters are as close as possible to the cryostat to minimize electromagnetic interference from long cables. Always take note of the potential configuration of the LF terminal, avoid ground loops, and implement twisted-pair configurations to further reduce the impact of Faraday's law.

Gobinath Tamil Vanan, Keysight Technologies

Gobinath graduated from the Swinburne University of Technology with a degree in Electrical and Electronics Engineering and has more than 9 years of experience in the semiconductor, aerospace & defense, and automotive industries, as well as the field of automated testing. At Keysight, he works closely with field engineers, product managers, and R&D engineers to ensure that all relevant customer needs in the industry are surfaced well and early, to enable customer success and solve the grand challenges of test and measurement.
