
A Year After the Capitol Insurrection, Faith in Democracy Wanes – News @ Northeastern

A year after a mob of people stormed the U.S. Capitol in an attempt to disrupt the formal certification of President Joe Biden's electoral victory, Americans' faith in the country's electoral process and its democracy writ large has only fallen.

Large swaths of the electorate still think that the 2020 election was rigged, and over the last year, a modest number of people have indeed lost confidence in the fairness of that election, according to a new U.S. study.

"Every single one of our predictions from January 2021 came true," says David Lazer, university distinguished professor of political science and computer sciences at Northeastern, and one of the principal investigators on the study. "What we're seeing is that while there isn't a ton of explicit support for storming the Capitol, there's a notable ambivalence among Republicans, driven in part by a very strong belief that the election was stolen."

David Lazer, distinguished professor of political science and computer and information science. Photo by Adam Glanzman/Northeastern University

Lazer is part of a team of researchers from Northeastern, Harvard, Northwestern, and Rutgers universities that makes up the Covid States Project. The team conducted a survey of public sentiment immediately after the Jan. 6 insurrection in 2021, and issued a follow-up study a year later.

The researchers surveyed 15,269 people in the U.S. across all 50 states plus Washington, D.C., between Dec. 22, 2021, and Jan. 5, 2022. What they found reveals stark differences between Democrats and Republicans, as well as a drop in Americans' faith in their political institutions overall.

Over the last year, Republicans and independents became more ambivalent about the Capitol riot: opposition to the event dropped by 11 percentage points among members of the GOP and by 8 percentage points among independents, but held steady among Democrats.

This warming attitude, Lazer says, may be explained by a parallel belief among voters that the 2020 presidential election was stolen from Republican candidate Donald Trump, who was the incumbent at the time. The latest data show that a modest number of people across all political affiliations (3 percent to 4 percent of those surveyed) expressed less confidence in the fairness of the election this time around than they did last year.

But the belief is especially strong among Republicans and independents. Sixty-two percent of Republicans believe that if the votes had been fairly counted, Donald Trump would have won the 2020 election, while only 18 percent disagree with that sentiment. Independent voters lag only slightly behind: 54 percent believe Trump would have won the election if it were a fair contest, while 21 percent disagree.

Compare the figures to those of Democrats: only 5 percent of Democrats surveyed believe Trump would have won, and 88 percent don't.

"If you sincerely believe that the election was stolen, it becomes the predicate for lots of other things," Lazer says. "At minimum, it becomes the predicate for taking equal and opposing action; at the extreme end of the continuum, it could become the predicate for the kind of violence we saw last year."

Looking back, Lazer says it's clear that the particular circumstances of the 2020 election created a perfect storm that crystallized distrust of the democratic process among people who already may have had misgivings.

COVID-19 precautions meant a huge number of people voted by mail instead of in person, and those who mailed in their ballots bucked historical trends. In a typical election, more Republicans vote by mail than Democrats; in 2020, Democrats did so in far greater numbers. As a result, Trump appeared to be in the lead on election night, before all the mail-in ballots had been counted.

Then there was Trump himself, whose "stop the steal" campaign culminated in a speech on Jan. 6, 2021, during which he posited again and again that Biden's victory was the result of a rigged election stolen by emboldened radical-left Democrats.

"There was a sense in which you couldn't have written a political thriller like this, because it just wouldn't have been plausible," Lazer says. "There was already distrust in elections for some people, and this was like throwing gasoline on a fire. Everything just seemed to fit that narrative so neatly."


Do the Legal Rules Governing the Confidentiality of Cyber Incident Response Undermine Cybersecurity? – Lawfare

When businesses suspect that they may have experienced a cyber incident, their first call is typically not to a cybersecurity firm, a public relations outfit or even their cyber insurer. Instead, it is increasingly to a lawyer. These lawyers, many of whom market themselves as "breach coaches," then coordinate all subsequent elements of the response to their client's potential cyber incident, including the efforts of the client's internal personnel and those of third-party cybersecurity and public relations firms that the lawyer hires directly. More than 4,000 cyber incidents in 2018 were handled in this manner. Similarly, the cybersecurity firm CrowdStrike reports that 50 percent of its investigations were directed by an attorney in 2020. This approach is so widely accepted that in-house attorneys explicitly recommend it in their professional publications, and many cyber insurers provide policyholders with 800 numbers to call in the event of a cyber incident that go directly to an independent law firm rather than the insurer.

Lawyers' pole position in coordinating cyber incident response is driven predominantly by their capacity to shield any information produced during that process from discovery in a subsequent lawsuit. Under long-standing case law, communications between consultants and the attorneys who hire them to help provide legal advice to a client are shielded by the attorney-client privilege. Additionally, any documents and mental processes of third-party consultants that are produced in reasonable anticipation of litigation, whether or not they are communicated to the attorney, are similarly shielded from discovery under the work product immunity doctrine.

Putting lawyers, rather than technical security firms, in charge of data breach investigations can influence the incident response process in many ways, and it's not entirely clear to what extent law firms' emphasis on protecting attorney-client privilege and work product immunity alters the course of those investigations. We are an interdisciplinary group of researchers, in law, political science and computer science, who are investigating this question. We are particularly interested in the prospect that these confidentiality doctrines could significantly undermine the efficiency and effectiveness of cybersecurity controls and processes. To look at this question, we are interviewing and surveying a broad range of participants in the cybersecurity ecosystem, including breach coach lawyers, cyber-insurance personnel and digital forensic investigators.

Some of the potential distorting effects of attorney-client privilege and work product doctrine are well known, if only because they have played out so visibly in high-profile data breaches. For instance, several salient cases suggest that firms wishing to preserve the confidentiality of their post-breach efforts should consider launching dual investigations, with one focused on understanding the root causes of an incident and potential security solutions, and the other intended solely to facilitate the efforts of the company's lawyers. Doing so can limit the risk that post-breach assessments of legal and regulatory risks become discoverable because they are combined with nonlegal materials, such as recommendations for improving future cybersecurity protocols. This was the strategy that Target employed when hackers stole 41 million payment card numbers from the retailer in 2013. In holding that the results of the second investigation were shielded from discovery in a subsequent class-action lawsuit, the court emphasized that this investigation was conducted solely for legal purposes. Not only does this approach have the obvious potential to inflate the costs of cyber incident response, but it may well undermine the effectiveness of such responses by creating confusion about the distinct responsibilities of the two investigative teams.

By contrast, when firms victimized by cyberattacks have tasked cybersecurity firms with both supporting their lawyers and helping them to shore up their technical defenses, courts have been much less willing to treat any resulting communications as privileged. This was the result when health insurer Premera hired security firm Mandiant to conduct a security audit, which detected a year-long breach that affected 11 million customers' personal information. After the breach was discovered, Premera amended Mandiant's statement of work and instructed it to report directly to its external counsel. In holding that Mandiant's ultimate report was not protected by privilege, the court emphasized that Mandiant had been engaged prior to the discovery of the breach and that its report was not solely intended to provide legal advice. Documents, the court reasoned, that are prepared for a purpose other than or in addition to obtaining legal advice, and that are intended to be seen by persons other than the attorney, are not privileged.

Unlike Premera, Target was willing to go to extreme, and expensive, lengths to protect attorney-client privilege in the aftermath of its 2013 breach, perhaps because it knew that the incident was likely to lead to litigation. But for many breached firms, paying for a dual-track investigation is costly and inefficient. Nor is it entirely clear that it's a necessary step for preserving attorney-client privilege. A 2021 ruling held that a forensics report on a 2018 data breach of the Marriott hotel chain was privileged, even though the report was prepared by IBM, which had also provided pre-breach security services to Marriott. Though IBM had been working with Marriott since 2011, following the breach the company entered into a new statement of work with Marriott and BakerHostetler, the law firm the hotel chain retained to manage the breach investigation.

Our preliminary investigations suggest that attorney-client privilege and work product doctrine create potential distortions that may go much deeper than triggering the occasional inefficient dual-track cyber-incident investigation. For instance, in the course of our initial conversations with participants in the cybersecurity ecosystem, we have learned that lawyers coordinating cyber-incident investigations routinely refuse to make forensic reports produced by cybersecurity firms available to cyber insurers. Such disclosure, these attorneys worry, could constitute a waiver of attorney-client privilege. Irrespective of the accuracy of this concern, which has not yet been tested in court, the practice may deprive insurers of potentially useful information that they could use to improve their underwriting processes or to advise other policyholders. Some attorneys, moreover, go even further, instructing their clients and cybersecurity firms not to disclose forensic reports to the client's internal information technology (IT) personnel, lest a court interpret the report to have been produced for business, rather than legal, purposes.

Some of our preliminary discussions suggest even more fundamental ways in which lawyers' efforts to preserve confidentiality may undermine cybersecurity. For instance, some industry participants tell us that attorneys increasingly instruct forensic investigation teams not to record their findings in a written report at all, because such a report could make its way into the hands of plaintiffs' lawyers. Instead, forensic experts are instructed to explain the results of their investigations either via stripped-down PowerPoint presentations or through entirely oral briefings. This, of course, raises the prospect that any information that could allow clients to improve their cybersecurity in the future will not be fully understood by them or accurately communicated to others within the firm.

Similarly, the rules governing confidentiality appear to create a perverse incentive for firms to hire a different security firm to run the post-breach investigation from the one that already provided pre-breach monitoring services. This slows the response, since the new firm must be engaged, contracted and provided with network access, all while an adversary has already infiltrated the target's networks. Further, the new firm may be unfamiliar with the network environment, often needing to navigate new software and IT portals to access monitoring tools and the corresponding logs.

Perhaps most perniciously of all, current rules may even disincentivize firms from taking proactive steps to conduct cybersecurity audits or other forms of monitoring. Since privilege and work product immunity attach only to documents produced when a firm reasonably anticipates litigation or communicates with attorneys to secure legal advice, these protections may not apply to materials produced to help detect a future breach. So companies may be less inclined to engage in those efforts directly or to hire cybersecurity firms to do so on their behalf. And even when they do, they may be reluctant to use the same firms for post-breach investigations that they hired for pre-breach monitoring, even if the firms coordinating pre-breach monitoring are more familiar with their computer systems and could conduct a faster forensic investigation.

Beyond distorting what information is documented and shared, current confidentiality rules create operational and business complexities. Because the rules place lawyers at the center of incident response, law firms charge large hourly fees, control communications, and even choose which forensics firms are hired. This disrupts established relationships and work patterns between internal IT teams and external cybersecurity vendors. In some cases this disruption may produce a variety of benefits that have nothing to do with confidentiality. For instance, some lawyers claim they are particularly adept at efficiently managing multiple work streams spanning technical investigation, ransomware negotiation, regulatory notifications, public relations and insurance. Others dispute these alleged benefits; some security professionals claim that centralizing communications through lawyers creates bottlenecks and delays, and some even accuse lawyers of awarding work on grounds other than merit.

We are still working to understand the prevalence of these different practices for preserving attorney-client privilege, and their impact on the investigation process and findings. But policymakers, insurers and security researchers are all struggling to assemble reliable datasets about cyber threats and the effectiveness of different countermeasures. The Cyberspace Solarium Commission report issued in 2020 even recommended that Congress establish a new Bureau of Cyber Statistics specifically to collect statistical data on cybersecurity. So it's worth considering how concerns about attorney-client privilege and work product doctrine may be contributing to those challenges by influencing the processes for investigating breaches, sharing and aggregating information about those breaches, and learning from past cybersecurity incidents.

It's not clear how big a problem confidentiality considerations are for cybersecurity investigations and data collection, so it's hard to know what the right solution is, or, indeed, whether any solution is needed at all. Jeff Kosseff has proposed the creation of a stand-alone privilege for cybersecurity work so that firms will be less reluctant to hire security professionals to assess and audit their computer systems. But it's also possible that creating new privileges around cybersecurity could make it harder for people to sue firms in the aftermath of breaches, thereby limiting those firms' accountability. On the other hand, it remains an open question how effective such lawsuits have been at incentivizing better cybersecurity practices.

The influence of attorney-client privilege and work product immunity on cybersecurity raises many more similarly open questions. It seems possible that these doctrines have had the unintended consequence of undermining cybersecurity, information sharing about data breaches, and insurers' ability to collect empirical data about cybersecurity incidents and the most effective countermeasures to prevent and mitigate them. Given how central lawyers have become to breach response, and how high a priority maintaining confidentiality is for many of them, these questions are worthy of more study and attention as technical experts, policymakers and insurers all grapple with the best ways to learn from cybersecurity incidents. We would welcome any readers with experience on these issues to contact us directly so that we can learn more about how the laws governing attorney-client privilege and work product can promote, or undermine, effective cybersecurity.


Mr. Jeroen Tas Joins Zylorion as a Strategic Advisor and Observer to the Board of Directors – PRNewswire

CALGARY, AB, Jan. 6, 2022 /PRNewswire/ - PsiloTec Health Solutions Inc., operating as Zylorion, ("Zylorion" or the "Company"), a mental health care and psychedelic therapy focused innovator, is pleased to announce Mr. Jeroen Tas has joined the Company as a strategic advisor and will also act as an observer to the Board of Directors.

Mr. Tas is an innovation leader and entrepreneur with deep expertise in large-scale digital transformation and a history of leveraging information technology to transform and grow businesses. Mr. Tas is the former Chief Innovation and Strategy Officer with Koninklijke Philips N.V. ("Philips") (NYSE: PHG; AMS: PHIA), a global health technology company, where he was instrumental in the transition of Philips to a customer-centric, digital health tech solutions company. Before joining Philips, Mr. Tas co-founded and served as President, COO and Vice-Chairman of the Board for Mphasis, a technology solutions company focused on services for the financial industry, which was ultimately acquired by HP (EDS). Mr. Tas was also the former head of Transaction Technology Inc., Citibank's tech lab, responsible for the innovation and development of Citibank's customer-facing systems, where he oversaw the first launch of internet banking, payment networks and internet-based self-service devices.

Mr. Tas is the 2004 winner of the E&Y Entrepreneur of the Year award in the Information Technology category for the New York region. Mr. Tas was also the recipient of the 2013 Dutch Chief Information Officer of the year award, the NASSCOM 2014 Global Chief Information Officer award, the World Innovation Congress 2014 Chief Information Officer Leadership award, the CIONet 2014 European Chief Information Officer award, the IT Executive 2014 award and the Accenture 2015 Innovator of the Year award. Mr. Tas is a native of the Netherlands and holds a Master's in Computer Science and Business Administration from the Vrije University, Amsterdam.

"I am delighted to be a part of the Zylorion organization and to be working with the world-renowned team ofexperts that Dr. Silverstone and the Board of Directors have been able to attract to the Company. In thisnascent industry, we as a team look forward to making an impact by developing new and innovativetreatmentsolutions,"commentedMr. JeroenTas.

In his role as a Strategic Advisor and Board Observer, Mr. Tas will provide strategic guidance and advise on the development and delivery of the Company's clinical and technology-enabled therapy programs. Mr. Tas will also act as a non-voting observer to the Board of Directors. On August 13, 2021, the Company announced that it had entered into a non-binding letter of intent with Michichi Capital Corp. ("Michichi") in respect of a transaction which would, if completed, result in a reverse takeover of Michichi by Zylorion (the "RTO"). If the RTO closes, it is the current intention of the Company's Board of Directors to nominate Mr. Tas for election to the Michichi Board of Directors at Michichi's first annual general meeting following closing of the RTO. While Michichi and the Company continue to advance the RTO, the parties have not yet entered into a binding agreement with respect to it.

"We are thrilled to have Mr. Tas join our organization and our mission. He brings a wealth of experience both as a successful entrepreneur and as a global technology leader. Mr. Tas has a proven track record of leading transformation change and creating sustainable shareholder value," noted Dr. Peter Silverstone, Chief Executive Officer & Director.

About Zylorion

Zylorion is a biopharmaceutical company engaged in the development and delivery of integrated mental health therapies to address psychological and neurological mental health conditions. Zylorion is focused on the research, development and commercialization of psychedelic-based compounds coupled with therapeutic treatment programs targeting a continuum of mental health conditions, such as MDD (major depressive disorder), TRD (treatment-resistant depression), PTSD (post-traumatic stress disorder), general depression, anxiety disorders, and a number of addictive tendencies. Zylorion aims to leverage leading technologies to support the scalability and accessibility of its integrated therapy programs in its mission to enable those experiencing mental health challenges to thrive.

Cautionary Note Regarding Forward-Looking Statements

This news release contains statements that constitute forward-looking information ("forward-looking information") within the meaning of the applicable Canadian securities legislation. All statements, other than statements of historical fact, are forward-looking information and are based on expectations, estimates and projections as at the date of this news release. Any statement that discusses predictions, expectations, beliefs, plans, projections, objectives, assumptions, future events or performance (often but not always using phrases such as "expects", or "does not expect", "is expected", "anticipates" or "does not anticipate", "plans", "budget", "scheduled", "forecasts", "estimates", "believes" or "intends" or variations of such words and phrases or stating that certain actions, events or results "may", "could", "would", "might" or "will" be taken, occur or be achieved) are not statements of historical fact and may be forward-looking information.

SOURCE Zylorion


Introduction to quantum mechanics – Wikipedia

Non-technical introduction to quantum physics

Quantum mechanics is the study of very small things. It explains the behavior of matter and its interactions with energy on the scale of atomic and subatomic particles. By contrast, classical physics explains matter and energy only on a scale familiar to human experience, including the behavior of astronomical bodies such as the Moon. Classical physics is still used in much of modern science and technology. However, towards the end of the 19th century, scientists discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain.[1] The desire to resolve inconsistencies between observed phenomena and classical theory led to two major revolutions in physics that created a shift in the original scientific paradigm: the theory of relativity and the development of quantum mechanics.[2] This article describes how physicists discovered the limitations of classical physics and developed the main concepts of the quantum theory that replaced it in the early decades of the 20th century. It describes these concepts in roughly the order in which they were first discovered. For a more complete history of the subject, see History of quantum mechanics.

Light behaves in some aspects like particles and in other aspects like waves. Matter, the "stuff" of the universe consisting of particles such as electrons and atoms, exhibits wavelike behavior too. Some light sources, such as neon lights, give off only certain specific frequencies of light, a small set of distinct pure colors determined by neon's atomic structure. Quantum mechanics shows that light, along with all other forms of electromagnetic radiation, comes in discrete units, called photons, and predicts its spectral energies (corresponding to pure colors) and the intensities of its light beams. A single photon is a quantum, or smallest observable particle, of the electromagnetic field. A partial photon is never experimentally observed. More broadly, quantum mechanics shows that many properties of objects, such as position, speed, and angular momentum, that appeared continuous in the zoomed-out view of classical mechanics, turn out to be (at the very tiny, zoomed-in scale of quantum mechanics) quantized. Such properties of elementary particles are required to take on one of a set of small, discrete allowable values, and since the gap between these values is also small, the discontinuities are only apparent at very tiny (atomic) scales.

Many aspects of quantum mechanics are counterintuitive[3] and can seem paradoxical because they describe behavior quite different from that seen at larger scales. In the words of quantum physicist Richard Feynman, quantum mechanics deals with "nature as She is – absurd".[4]

For example, the uncertainty principle of quantum mechanics means that the more closely one pins down one measurement (such as the position of a particle), the less accurate another complementary measurement pertaining to the same particle (such as its speed) must become.

Another example is entanglement, in which a measurement of any two-valued state of a particle (such as light polarized up or down) made on either of two "entangled" particles that are very far apart causes a subsequent measurement on the other particle to always be the other of the two values (such as polarized in the opposite direction).

A final example is superfluidity, in which a container of liquid helium, cooled down to near absolute zero in temperature, spontaneously flows (slowly) up and over the opening of its container, against the force of gravity.

Thermal radiation is electromagnetic radiation emitted from the surface of an object due to the object's internal energy. If an object is heated sufficiently, it starts to emit light at the red end of the spectrum, as it becomes red hot.

Heating it further causes the color to change from red to yellow, white, and blue, as it emits light at increasingly shorter wavelengths (higher frequencies). A perfect emitter is also a perfect absorber: when it is cold, such an object looks perfectly black, because it absorbs all the light that falls on it and emits none. Consequently, an ideal thermal emitter is known as a black body, and the radiation it emits is called black-body radiation.

By the late 19th century, thermal radiation had been fairly well characterized experimentally.[note 1] However, classical physics led to the Rayleigh–Jeans law, which agrees with experimental results well at low frequencies but strongly disagrees at high frequencies. Physicists searched for a single theory that explained all the experimental results.

The first model that was able to explain the full spectrum of thermal radiation was put forward by Max Planck in 1900.[5] He proposed a mathematical model in which the thermal radiation was in equilibrium with a set of harmonic oscillators. To reproduce the experimental results, he had to assume that each oscillator emitted an integer number of units of energy at its single characteristic frequency, rather than being able to emit any arbitrary amount of energy. In other words, the energy emitted by an oscillator was quantized.[note 2] The quantum of energy for each oscillator, according to Planck, was proportional to the frequency of the oscillator; the constant of proportionality is now known as the Planck constant. The Planck constant, usually written as h, has the value of 6.63×10⁻³⁴ J·s. So, the energy E of an oscillator of frequency f is given by E = nhf, where n is a positive integer (n = 1, 2, 3, ...).
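
To make the scale of these quantities concrete, the energy quantum hf can be evaluated numerically. The short Python sketch below uses the value of h quoted above and an assumed visible-light frequency; the frequency is an illustrative choice, not a value from the article.

```python
# Illustrative sketch: allowed oscillator energies E = n*h*f for a few values of n.
h = 6.63e-34   # Planck constant, J*s (value quoted in the text)
f = 5.0e14     # assumed frequency in Hz, roughly that of green light

for n in (1, 2, 3):
    print(f"n = {n}: E = {n * h * f:.3e} J")   # one quantum is about 3.3e-19 J
```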

To change the color of such a radiating body, it is necessary to change its temperature. Planck's law explains why: increasing the temperature of a body allows it to emit more energy overall, and means that a larger proportion of the energy is towards the violet end of the spectrum.

Planck's law was the first quantum theory in physics, and Planck won the Nobel Prize in 1918 "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta".[7] At the time, however, Planck's view was that quantization was purely a heuristic mathematical construct, rather than (as is now believed) a fundamental change in our understanding of the world.[8]

In 1905, Albert Einstein took an extra step. He suggested that quantization was not just a mathematical construct, but that the energy in a beam of light actually occurs in individual packets, which are now called photons.[9] The energy of a single photon of light of frequency f is given by the frequency multiplied by Planck's constant h (an extremely tiny positive number): E = hf.

For centuries, scientists had debated between two possible theories of light: was it a wave or did it instead comprise a stream of tiny particles? By the 19th century, the debate was generally considered to have been settled in favor of the wave theory, as it was able to explain observed effects such as refraction, diffraction, interference, and polarization.[10] James Clerk Maxwell had shown that electricity, magnetism, and light are all manifestations of the same phenomenon: the electromagnetic field. Maxwell's equations, which are the complete set of laws of classical electromagnetism, describe light as waves: a combination of oscillating electric and magnetic fields. Because of the preponderance of evidence in favor of the wave theory, Einstein's ideas were met initially with great skepticism. Eventually, however, the photon model became favored. One of the most significant pieces of evidence in its favor was its ability to explain several puzzling properties of the photoelectric effect, described in the following section. Nonetheless, the wave analogy remained indispensable for helping to understand other characteristics of light: diffraction, refraction, and interference.

In 1887, Heinrich Hertz observed that when light with sufficient frequency hits a metallic surface, the surface emits electrons.[11] In 1902, Philipp Lenard discovered that the maximum possible energy of an ejected electron is related to the frequency of the light, not to its intensity: if the frequency is too low, no electrons are ejected regardless of the intensity. Strong beams of light toward the red end of the spectrum might produce no electrical potential at all, while weak beams of light toward the violet end of the spectrum would produce higher and higher voltages. The lowest frequency of light that can cause electrons to be emitted, called the threshold frequency, is different for different metals. This observation is at odds with classical electromagnetism, which predicts that the electron's energy should be proportional to the intensity of the incident radiation.[12]:24 So when physicists first discovered devices exhibiting the photoelectric effect, they initially expected that a higher intensity of light would produce a higher voltage from the photoelectric device.

Einstein explained the effect by postulating that a beam of light is a stream of particles ("photons") and that, if the beam is of frequency f, then each photon has an energy equal to hf.[11] An electron is likely to be struck only by a single photon, which imparts at most an energy hf to the electron.[11] Therefore, the intensity of the beam has no effect[note 3] and only its frequency determines the maximum energy that can be imparted to the electron.[11]

To explain the threshold effect, Einstein argued that it takes a certain amount of energy, called the work function and denoted by φ, to remove an electron from the metal.[11] This amount of energy is different for each metal. If the energy of the photon is less than the work function, then it does not carry sufficient energy to remove the electron from the metal. The threshold frequency, f0, is the frequency of a photon whose energy is equal to the work function: hf0 = φ.

If f is greater than f0, the energy hf is enough to remove an electron. The ejected electron has a kinetic energy, EK, which is, at most, equal to the photon's energy minus the energy needed to dislodge the electron from the metal: EK = hf − φ.
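
Einstein's two relations, hf0 = φ and EK = hf − φ, lend themselves to a quick numerical check. The sketch below assumes a work function of about 2.3 eV, an illustrative value of the right order for an alkali metal; it is not taken from the article.

```python
# Sketch: threshold frequency and maximum kinetic energy from the photoelectric relations.
h = 6.63e-34            # Planck constant, J*s
eV = 1.602e-19          # joules per electron-volt

phi = 2.3 * eV          # assumed work function (~2.3 eV, of the order of sodium's)
f0 = phi / h            # threshold frequency: photons below this eject no electrons
print(f"threshold frequency f0 ≈ {f0:.2e} Hz")

f = 1.0e15              # assumed ultraviolet frequency, above the threshold
E_K = h * f - phi       # maximum kinetic energy of an ejected electron
print(f"max kinetic energy ≈ {E_K / eV:.2f} eV")   # about 1.8 eV for these numbers
```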

Einstein's description of light as being composed of particles extended Planck's notion of quantized energy, which is that a single photon of a given frequency, f, delivers an invariant amount of energy, hf. In other words, individual photons can deliver more or less energy, but only depending on their frequencies. In nature, single photons are rarely encountered. The Sun and emission sources available in the 19th century emit vast numbers of photons every second, and so the importance of the energy carried by each photon was not obvious. Einstein's idea that the energy contained in individual units of light depends on their frequency made it possible to explain experimental results that had seemed counterintuitive. However, although the photon is a particle, it was still being described as having the wave-like property of frequency. Effectively, the account of light as a particle is insufficient, and its wave-like nature is still required.[13][note 4]

The relationship between the frequency of electromagnetic radiation and the energy of each photon is why ultraviolet light can cause sunburn, but visible or infrared light cannot. A photon of ultraviolet light delivers a high amount of energy, enough to contribute to cellular damage such as occurs in a sunburn. A photon of infrared light delivers less energy, only enough to warm one's skin. So, an infrared lamp can warm a large surface, perhaps large enough to keep people comfortable in a cold room, but it cannot give anyone a sunburn.[15]
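
The contrast can be made concrete by comparing photon energies at representative wavelengths; the wavelengths in the sketch below are assumed round numbers rather than values from the article.

```python
# Sketch: photon energy E = h*c/lambda at representative UV and infrared wavelengths.
h = 6.63e-34       # Planck constant, J*s
c = 3.0e8          # speed of light, m/s
eV = 1.602e-19     # joules per electron-volt

for label, wavelength in [("ultraviolet", 300e-9), ("infrared", 1000e-9)]:
    E = h * c / wavelength
    print(f"{label}: {E / eV:.2f} eV per photon")
# A UV photon (~4 eV) carries several times the energy of an IR photon (~1.2 eV),
# enough to damage molecules rather than merely warm them.
```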

All photons of the same frequency have identical energy, and all photons of different frequencies have proportionally (order 1, Ephoton = hf ) different energies.[16] However, although the energy imparted by photons is invariant at any given frequency, the initial energy state of the electrons in a photoelectric device before absorption of light is not necessarily uniform. Anomalous results may occur in the case of individual electrons. For instance, an electron that was already excited above the equilibrium level of the photoelectric device might be ejected when it absorbed uncharacteristically low-frequency illumination. Statistically, however, the characteristic behavior of a photoelectric device reflects the behavior of the vast majority of its electrons, which are at their equilibrium level. This point helps clarify the distinction between the study of small individual particles in quantum dynamics and the study of massive individual particles in classical physics.[citation needed]

By the dawn of the 20th century, the evidence required a model of the atom with a diffuse cloud of negatively charged electrons surrounding a small, dense, positively charged nucleus. These properties suggested a model in which electrons circle the nucleus like planets orbiting a sun.[note 5] However, it was also known that the atom in this model would be unstable: according to classical theory, orbiting electrons are undergoing centripetal acceleration, and should therefore give off electromagnetic radiation, the loss of energy also causing them to spiral toward the nucleus, colliding with it in a fraction of a second.

A second, related puzzle was the emission spectrum of atoms. When a gas is heated, it gives off light only at discrete frequencies. For example, the visible light given off by hydrogen consists of four different colors. The intensity of the light at different frequencies is also different. By contrast, white light consists of a continuous emission across the whole range of visible frequencies. By the end of the nineteenth century, a simple rule known as Balmer's formula showed how the frequencies of the different lines related to each other, though without explaining why this was, or making any prediction about the intensities. The formula also predicted some additional spectral lines in ultraviolet and infrared light that had not been observed at the time. These lines were later observed experimentally, raising confidence in the value of the formula.

The mathematical formula describing hydrogen's emission spectrum

In 1885 the Swiss mathematician Johann Balmer discovered that each wavelength λ (lambda) in the visible spectrum of hydrogen is related to some integer n by the equation λ = B·n²/(n² − 4),

where B is a constant that Balmer determined to be equal to 364.56 nm.

In 1888 Johannes Rydberg generalized and greatly increased the explanatory utility of Balmer's formula. He predicted that λ is related to two integers n and m according to what is now known as the Rydberg formula:[17] 1/λ = R(1/m² − 1/n²),

where R is the Rydberg constant, equal to 0.0110 nm⁻¹, and n must be greater than m.

Rydberg's formula accounts for the four visible wavelengths of hydrogen by setting m = 2 and n = 3, 4, 5, 6. It also predicts additional wavelengths in the emission spectrum: for m = 1 and n > 1, the emission spectrum should contain certain ultraviolet wavelengths, and for m = 3 and n > 3, it should also contain certain infrared wavelengths. Experimental observation of these wavelengths came two decades later: in 1908 Friedrich Paschen found some of the predicted infrared wavelengths, and in 1914 Theodore Lyman found some of the predicted ultraviolet wavelengths.[17]
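
As a rough check, the four visible hydrogen wavelengths follow directly from the Rydberg formula and the value of R quoted above. The short sketch below is illustrative and not part of the original article.

```python
# Sketch: visible (Balmer) wavelengths of hydrogen from the Rydberg formula
# 1/lambda = R * (1/m^2 - 1/n^2), with m = 2 and n = 3, 4, 5, 6.
R = 0.0110  # Rydberg constant in nm^-1, as quoted in the text

m = 2
for n in (3, 4, 5, 6):
    inv_lambda = R * (1 / m**2 - 1 / n**2)
    print(f"n = {n}: lambda ≈ {1 / inv_lambda:.0f} nm")
# Roughly 655, 485, 433 and 409 nm: the red, blue-green and violet lines of hydrogen.
```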

Both Balmer and Rydberg's formulas involve integers: in modern terms, they imply that some property of the atom is quantized. Understanding exactly what this property was, and why it was quantized, was a major part of the development of quantum mechanics, as shown in the rest of this article.

In 1913 Niels Bohr proposed a new model of the atom that included quantized electron orbits: electrons still orbit the nucleus much as planets orbit around the sun, but they are permitted to inhabit only certain orbits, not to orbit at any arbitrary distance.[18] When an atom emitted (or absorbed) energy, the electron did not move in a continuous trajectory from one orbit around the nucleus to another, as might be expected classically. Instead, the electron would jump instantaneously from one orbit to another, giving off the emitted light in the form of a photon.[19] The possible energies of photons given off by each element were determined by the differences in energy between the orbits, and so the emission spectrum for each element would contain a number of lines.[20]

Starting from only one simple assumption about the rule that the orbits must obey, the Bohr model was able to relate the observed spectral lines in the emission spectrum of hydrogen to previously known constants. In Bohr's model, the electron was not allowed to emit energy continuously and crash into the nucleus: once it was in the closest permitted orbit, it was stable forever. Bohr's model didn't explain why the orbits should be quantized in that way, nor was it able to make accurate predictions for atoms with more than one electron, or to explain why some spectral lines are brighter than others.

Some fundamental assumptions of the Bohr model were soon proven wrong, but the key result, that the discrete lines in emission spectra are due to some property of the electrons in atoms being quantized, is correct. The way that electrons actually behave is strikingly different from Bohr's atom, and from what we see in the world of our everyday experience; this modern quantum mechanical model of the atom is discussed below.

A more detailed explanation of the Bohr model

Bohr theorized that the angular momentum, L, of an electron is quantized: L = n·h/(2π),

where n is an integer and h is the Planck constant. Starting from this assumption, Coulomb's law and the equations of circular motion show that an electron with n units of angular momentum orbits a proton at a distance r given by r = n²h²/(4π²·ke·m·e²),

where ke is the Coulomb constant, m is the mass of an electron, and e is the charge on an electron. For simplicity this is written as r = n²·a0,

where a0, called the Bohr radius, is equal to 0.0529 nm. The Bohr radius is the radius of the smallest allowed orbit.

The energy of the electron[note 6] can also be calculated, and is given by E = −(ke·e²)/(2a0) · 1/n².

Thus Bohr's assumption that angular momentum is quantized means that an electron can inhabit only certain orbits around the nucleus and that it can have only certain energies. A consequence of these constraints is that the electron does not crash into the nucleus: it cannot continuously emit energy, and it cannot come closer to the nucleus than a0 (the Bohr radius).

An electron loses energy by jumping instantaneously from its original orbit to a lower orbit; the extra energy is emitted in the form of a photon. Conversely, an electron that absorbs a photon gains energy, hence it jumps to an orbit that is farther from the nucleus.

Each photon from glowing atomic hydrogen is due to an electron moving from a higher orbit, with radius rn, to a lower orbit, rm. The energy E of this photon is the difference in the energies En and Em of the electron: E = En − Em.

Since Planck's equation shows that the photon's energy is related to its wavelength by E = hc/λ, the wavelengths of light that can be emitted are given by 1/λ = (ke·e²)/(2a0·h·c) · (1/m² − 1/n²).

This equation has the same form as the Rydberg formula, and predicts that the constant R should be given by R = (ke·e²)/(2a0·h·c).

Therefore, the Bohr model of the atom can predict the emission spectrum of hydrogen in terms of fundamental constants.[note 7] However, it was not able to make accurate predictions for multi-electron atoms, or to explain why some spectral lines are brighter than others.
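
The statement that the Bohr model predicts the spectrum in terms of fundamental constants can be verified numerically. The sketch below plugs in standard SI values for h, c, the electron mass and charge, and the Coulomb constant; these numerical values are supplied here for illustration and are not quoted in the article.

```python
# Sketch: Bohr radius, ground-state energy and Rydberg constant from fundamental constants.
import math

h  = 6.626e-34      # Planck constant, J*s
c  = 2.998e8        # speed of light, m/s
me = 9.109e-31      # electron mass, kg
e  = 1.602e-19      # elementary charge, C
ke = 8.988e9        # Coulomb constant, N*m^2/C^2

a0 = h**2 / (4 * math.pi**2 * ke * me * e**2)   # Bohr radius
E1 = -ke * e**2 / (2 * a0)                      # ground-state energy (n = 1)
R  = ke * e**2 / (2 * a0 * h * c)               # Rydberg constant

print(f"a0 ≈ {a0 * 1e9:.4f} nm")                # ~0.0529 nm, as quoted in the text
print(f"E1 ≈ {E1 / e:.1f} eV")                  # ~ -13.6 eV
print(f"R  ≈ {R * 1e-9:.4f} nm^-1")             # ~0.0110 nm^-1, matching Rydberg's value
```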

Just as light has both wave-like and particle-like properties, matter also has wave-like properties.[21]

Matter behaving as a wave was first demonstrated experimentally for electrons: a beam of electrons can exhibit diffraction, just like a beam of light or a water wave.[note 8] Similar wave-like phenomena were later shown for atoms and even molecules.

The wavelength, λ, associated with any object is related to its momentum, p, through the Planck constant, h:[22][23] λ = h/p.

The relationship, called the de Broglie hypothesis, holds for all types of matter: all matter exhibits properties of both particles and waves.
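
A quick numerical illustration of λ = h/p shows why wave behavior is observable for electrons but not for everyday objects; the masses and speeds below are assumed illustrative values, not figures from the article.

```python
# Sketch: de Broglie wavelength lambda = h / p for an electron and a baseball.
h = 6.63e-34                     # Planck constant, J*s

# Electron at an assumed speed of ~2.2e6 m/s (of the order of its speed in hydrogen)
m_e, v_e = 9.11e-31, 2.2e6
print(f"electron: {h / (m_e * v_e):.2e} m")   # ~3.3e-10 m, comparable to atomic sizes

# Baseball: assumed 0.145 kg thrown at 40 m/s
m_b, v_b = 0.145, 40.0
print(f"baseball: {h / (m_b * v_b):.2e} m")   # ~1.1e-34 m, far too small to observe
```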

The concept of wave–particle duality says that neither the classical concept of "particle" nor of "wave" can fully describe the behavior of quantum-scale objects, either photons or matter. Wave–particle duality is an example of the principle of complementarity in quantum physics.[24][25][26][27][28] An elegant example of wave–particle duality, the double-slit experiment, is discussed in the section below.

In the double-slit experiment, as originally performed by Thomas Young in 1803,[29] and then Augustin Fresnel a decade later,[29] a beam of light is directed through two narrow, closely spaced slits, producing an interference pattern of light and dark bands on a screen. If one of the slits is covered up, one might naïvely expect that the intensity of the fringes due to interference would be halved everywhere. In fact, a much simpler pattern is seen, a diffraction pattern diametrically opposite the open slit. The same behavior can be demonstrated in water waves, and so the double-slit experiment was seen as a demonstration of the wave nature of light.

Variations of the double-slit experiment have been performed using electrons, atoms, and even large molecules,[30][31] and the same type of interference pattern is seen. Thus it has been demonstrated that all matter possesses both particle and wave characteristics.

Even if the source intensity is turned down, so that only one particle (e.g. photon or electron) is passing through the apparatus at a time, the same interference pattern develops over time. The quantum particle acts as a wave when passing through the double slits, but as a particle when it is detected. This is a typical feature of quantum complementarity: a quantum particle acts as a wave in an experiment to measure its wave-like properties, and like a particle in an experiment to measure its particle-like properties. The point on the detector screen where any individual particle shows up is the result of a random process. However, the distribution pattern of many individual particles mimics the diffraction pattern produced by waves.

De Broglie expanded the Bohr model of the atom by showing that an electron in orbit around a nucleus could be thought of as having wave-like properties. In particular, an electron is observed only in situations that permit a standing wave around a nucleus. An example of a standing wave is a violin string, which is fixed at both ends and can be made to vibrate. The waves created by a stringed instrument appear to oscillate in place, moving from crest to trough in an up-and-down motion. The wavelength of a standing wave is related to the length of the vibrating object and the boundary conditions. For example, because the violin string is fixed at both ends, it can carry standing waves of wavelengths 2l/n, where l is the length and n is a positive integer. De Broglie suggested that the allowed electron orbits were those for which the circumference of the orbit would be an integer number of wavelengths. The electron's wavelength, therefore, determines that only Bohr orbits of certain distances from the nucleus are possible. In turn, at any distance from the nucleus smaller than a certain value, it would be impossible to establish an orbit. The minimum possible distance from the nucleus is called the Bohr radius.[32]
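
The link between this standing-wave picture and Bohr's quantization rule can be written out in one line; the derivation below is a sketch of the standard argument rather than text from the article.

```latex
% Sketch: a whole number n of de Broglie wavelengths must fit around an orbit of radius r.
\[
  n\lambda = 2\pi r \quad\text{and}\quad \lambda = \frac{h}{p}
  \;\;\Longrightarrow\;\;
  L = p\,r = n\,\frac{h}{2\pi},
\]
% which is exactly the angular-momentum quantization rule that Bohr had postulated.
```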

De Broglie's treatment of quantum events served as a starting point for Schrödinger when he set out to construct a wave equation to describe quantum-theoretical events.

In 1922, Otto Stern and Walther Gerlach shot silver atoms through an inhomogeneous magnetic field. In classical mechanics, a magnet thrown through a magnetic field may be deflected a small or large distance upwards or downwards, depending on its orientation (whether its northern pole points up, down, or somewhere in between). The atoms that Stern and Gerlach shot through the magnetic field acted similarly. However, while the magnets could be deflected variable distances, the atoms would always be deflected a constant distance either up or down. This implied that the property of the atom that corresponds to the magnet's orientation must be quantized, taking one of two values (either up or down), as opposed to being chosen freely from any angle.

Ralph Kronig originated the theory that particles such as atoms or electrons behave as if they rotate, or "spin", about an axis. Spin would account for the missing magnetic moment,[clarification needed] and allow two electrons in the same orbital to occupy distinct quantum states if they "spun" in opposite directions, thus satisfying the exclusion principle. The quantum number represented the sense (positive or negative) of spin.

The choice of the orientation of the magnetic field used in the Stern–Gerlach experiment is arbitrary. If the field is vertical, the atoms are deflected either up or down; if the magnet is rotated a quarter turn, the atoms are deflected either left or right. Using a vertical field shows that the spin along the vertical axis is quantized, and using a horizontal field shows that the spin along the horizontal axis is quantized.

If, instead of hitting a detector screen, one of the beams of atoms coming out of the Stern–Gerlach apparatus is passed into another (inhomogeneous) magnetic field oriented in the same direction, all of the atoms are deflected the same way in this second field. However, if the second field is oriented at 90° to the first, then half of the atoms are deflected one way and half the other, so that the atom's spins about the horizontal and vertical axes are independent of each other. However, if one of these beams (e.g. the atoms that were deflected up then left) is passed into a third magnetic field, oriented the same way as the first, half of the atoms go one way and half the other, even though they all went in the same direction originally. The action of measuring the atoms' spin with respect to a horizontal field has changed their spin with respect to a vertical field.

The Stern–Gerlach experiment demonstrates several important features of quantum mechanics:

In 1925, Werner Heisenberg attempted to solve one of the problems that the Bohr model left unanswered, explaining the intensities of the different lines in the hydrogen emission spectrum. Through a series of mathematical analogies, he wrote out the quantum-mechanical analog for the classical computation of intensities.[33] Shortly afterward, Heisenberg's colleague Max Born realized that Heisenberg's method of calculating the probabilities for transitions between the different energy levels could best be expressed by using the mathematical concept of matrices.[note 9]

In the same year, building on de Broglie's hypothesis, Erwin Schrödinger developed the equation that describes the behavior of a quantum-mechanical wave.[34] The mathematical model, called the Schrödinger equation after its creator, is central to quantum mechanics, defines the permitted stationary states of a quantum system, and describes how the quantum state of a physical system changes in time.[35] The wave itself is described by a mathematical function known as a "wave function". Schrödinger said that the wave function provides the "means for predicting the probability of measurement results".[36]

Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom's electron as a classical wave, moving in a well of the electrical potential created by the proton. This calculation accurately reproduced the energy levels of the Bohr model.

In May 1926, Schrödinger proved that Heisenberg's matrix mechanics and his own wave mechanics made the same predictions about the properties and behavior of the electron; mathematically, the two theories had an underlying common form. Yet the two men disagreed on the interpretation of their mutual theory. For instance, Heisenberg accepted the theoretical prediction of jumps of electrons between orbitals in an atom,[37] but Schrödinger hoped that a theory based on continuous wave-like properties could avoid what he called (as paraphrased by Wilhelm Wien) "this nonsense about quantum jumps".[38] In the end, Heisenberg's approach won out, and quantum jumps were confirmed.[39]

Bohr, Heisenberg, and others tried to explain what these experimental results and mathematical models really mean. Their description, known as the Copenhagen interpretation of quantum mechanics, aimed to describe the nature of reality that was being probed by the measurements and described by the mathematical formulations of quantum mechanics.

The main principles of the Copenhagen interpretation are:

Various consequences of these principles are discussed in more detail in the following subsections.

Suppose it is desired to measure the position and speed of an objectfor example, a car going through a radar speed trap. It can be assumed that the car has a definite position and speed at a particular moment in time. How accurately these values can be measured depends on the quality of the measuring equipment. If the precision of the measuring equipment is improved, it provides a result closer to the true value. It might be assumed that the speed of the car and its position could be operationally defined and measured simultaneously, as precisely as might be desired.

In 1927, Heisenberg proved that this last assumption is not correct.[41] Quantum mechanics shows that certain pairs of physical properties, for example position and speed, cannot be simultaneously measured, nor defined in operational terms, to arbitrary precision: the more precisely one property is measured, or defined in operational terms, the less precisely the other can be. This statement is known as the uncertainty principle. The uncertainty principle is not only a statement about the accuracy of our measuring equipment but, more deeply, is about the conceptual nature of the measured quantities: the assumption that the car had simultaneously defined position and speed does not work in quantum mechanics. On a scale of cars and people, these uncertainties are negligible, but when dealing with atoms and electrons they become critical.[42]

Heisenberg gave, as an illustration, the measurement of the position and momentum of an electron using a photon of light. In measuring the electron's position, the higher the frequency of the photon, the more accurate is the measurement of the position of the impact of the photon with the electron, but the greater is the disturbance of the electron. This is because from the impact with the photon, the electron absorbs a random amount of energy, rendering the measurement obtained of its momentum increasingly uncertain (momentum is velocity multiplied by mass), for one is necessarily measuring its post-impact disturbed momentum from the collision products and not its original momentum (momentum which should be simultaneously measured with position). With a photon of lower frequency, the disturbance (and hence uncertainty) in the momentum is less, but so is the accuracy of the measurement of the position of the impact.[43]

At the heart of the uncertainty principle is a fact that for any mathematical analysis in the position and velocity domains, achieving a sharper (more precise) curve in the position domain can only be done at the expense of a more gradual (less precise) curve in the speed domain, and vice versa. More sharpness in the position domain requires contributions from more frequencies in the speed domain to create the narrower curve, and vice versa. It is a fundamental tradeoff inherent in any such related or complementary measurements, but is only really noticeable at the smallest (Planck) scale, near the size of elementary particles.

The uncertainty principle shows mathematically that the product of the uncertainty in the position and momentum of a particle (momentum is velocity multiplied by mass) could never be less than a certain value, and that this value is related to Planck's constant.
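
In its modern form the bound is usually written Δx·Δp ≥ h/4π. The sketch below plugs rough, assumed numbers for a car and for an electron into that bound; the masses and velocity uncertainties are illustrative choices, not values from the article.

```python
# Sketch: minimum position uncertainty Delta_x >= h / (4*pi * Delta_p)
# for a car and for an electron, using rough assumed momentum uncertainties.
import math

h = 6.63e-34   # Planck constant, J*s

def min_position_uncertainty(mass, velocity_uncertainty):
    """Smallest Delta_x allowed when momentum is known to within mass * velocity_uncertainty."""
    return h / (4 * math.pi * mass * velocity_uncertainty)

print(f"car (1000 kg, dv = 0.1 m/s):        {min_position_uncertainty(1000, 0.1):.1e} m")
print(f"electron (9.1e-31 kg, dv = 1e5 m/s): {min_position_uncertainty(9.1e-31, 1e5):.1e} m")
# ~5e-37 m for the car (utterly negligible) versus ~6e-10 m for the electron,
# about the size of an atom, which is why the principle matters at atomic scales.
```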

Wave function collapse means that a measurement has forced or converted a quantum (probabilistic or potential) state into a definite measured value. This phenomenon is only seen in quantum mechanics rather than classical mechanics.

For example, before a photon actually "shows up" on a detection screen it can be described only with a set of probabilities for where it might show up. When it does appear, for instance in the CCD of an electronic camera, the time and space where it interacted with the device are known within very tight limits. However, the photon has disappeared in the process of being captured (measured), and its quantum wave function has disappeared with it. In its place, some macroscopic physical change in the detection screen has appeared, e.g., an exposed spot in a sheet of photographic film, or a change in electric potential in some cell of a CCD.

Because of the uncertainty principle, statements about both the position and momentum of particles can assign only a probability that the position or momentum has some numerical value. Therefore, it is necessary to formulate clearly the difference between the state of something indeterminate, such as an electron in a probability cloud, and the state of something having a definite value. When an object can definitely be "pinned-down" in some respect, it is said to possess an eigenstate.

In the Stern-Gerlach experiment discussed above, the spin of the atom about the vertical axis has two eigenstates: up and down. Before measuring it, we can only say that any individual atom has an equal probability of being found to have spin up or spin down. The measurement process causes the wavefunction to collapse into one of the two states.

The eigenstates of spin about the vertical axis are not simultaneously eigenstates of spin about the horizontal axis, so this atom has an equal probability of being found to have either value of spin about the horizontal axis. As described in the section above, measuring the spin about the horizontal axis can allow an atom that was measured as spin up to later be found spin down: the horizontal measurement collapses its wave function into one of the eigenstates of that measurement, which means it is no longer in an eigenstate of spin about the vertical axis, so it can take either value.
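The 50/50 statistics in this example follow directly from the Born rule: the probability of an outcome is the squared magnitude of the overlap between the state and the corresponding eigenstate. Below is a minimal sketch of my own, using NumPy, in which the spin-up-along-the-vertical-axis state is expanded in the eigenstates of the horizontal-axis spin operator.

    import numpy as np

    # Spin-1/2 operators in units of hbar.
    Sz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])   # spin about the vertical axis
    Sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])    # spin about the horizontal axis

    up_z = np.array([1.0, 0.0])                      # eigenstate "spin up" along the vertical axis

    # Expand up_z in the eigenstates of Sx and apply the Born rule.
    eigenvalues, eigenvectors = np.linalg.eigh(Sx)
    for value, vector in zip(eigenvalues, eigenvectors.T):
        probability = abs(np.dot(vector, up_z)) ** 2
        print(f"spin {value:+.1f} along the horizontal axis: probability {probability:.2f}")
    # Both outcomes are printed with probability 0.50.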

In 1924, Wolfgang Pauli proposed a new quantum degree of freedom (or quantum number), with two possible values, to resolve inconsistencies between observed molecular spectra and the predictions of quantum mechanics. In particular, the spectrum of atomic hydrogen had a doublet, or pair of lines differing by a small amount, where only one line was expected. Pauli formulated his exclusion principle, stating, "There cannot exist an atom in such a quantum state that two electrons within [it] have the same set of quantum numbers."[44]

A year later, Uhlenbeck and Goudsmit identified Pauli's new degree of freedom with the property called spin whose effects were observed in the Stern-Gerlach experiment.

Bohr's model of the atom was essentially a planetary one, with the electrons orbiting around the nuclear "sun". However, the uncertainty principle states that an electron cannot simultaneously have an exact location and velocity in the way that a planet does. Instead of classical orbits, electrons are said to inhabit atomic orbitals. An orbital is the "cloud" of possible locations in which an electron might be found, a distribution of probabilities rather than a precise location.[44] Each orbital is three dimensional, rather than the two-dimensional orbit, and is often depicted as a three-dimensional region within which there is a 95 percent probability of finding the electron.[45]

Schrödinger was able to calculate the energy levels of hydrogen by treating a hydrogen atom's electron as a wave, represented by the "wave function" ψ, in an electric potential well, V, created by the proton. The solutions to Schrödinger's equation are probability distributions for electron positions. Orbitals have a range of different shapes in three dimensions. The energies of the different orbitals can be calculated, and they accurately match the energy levels of the Bohr model.
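As a quick check that the calculated orbital energies match the Bohr levels, the closed-form result for hydrogen, E_n = -m_e * e^4 / (8 * eps0^2 * h^2 * n^2), can be evaluated numerically. The sketch below is my own, using textbook values for the constants; it reproduces the familiar -13.6 eV / n^2 ladder.

    # Hydrogen energy levels from the Schrodinger/Bohr formula.
    m_e  = 9.1093837e-31       # electron mass, kg
    e    = 1.602176634e-19     # elementary charge, C
    eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
    h    = 6.62607015e-34      # Planck constant, J*s

    def hydrogen_level_eV(n):
        """Energy of the n-th level in electronvolts."""
        E_joules = -m_e * e**4 / (8 * eps0**2 * h**2 * n**2)
        return E_joules / e    # convert joules to eV

    for n in (1, 2, 3):
        print(n, round(hydrogen_level_eV(n), 2))   # approximately -13.6, -3.4, -1.5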

Within Schrödinger's picture, each electron has four properties: an "orbital" designation indicating its energy level, the shape of the orbital, the orientation of the orbital, and the direction of its spin.

The collective name for these properties is the quantum state of the electron. The quantum state can be described by giving a number to each of these properties; these are known as the electron's quantum numbers. The quantum state of the electron is described by its wave function. The Pauli exclusion principle demands that no two electrons within an atom may have the same values of all four numbers.

The first property describing the orbital is the principal quantum number, n, which is the same as in Bohr's model. n denotes the energy level of each orbital. The possible values for n are the positive integers: n = 1, 2, 3, and so on.

The next quantum number, the azimuthal quantum number, denoted l, describes the shape of the orbital. The shape is a consequence of the angular momentum of the orbital. The angular momentum represents the resistance of a spinning object to speeding up or slowing down under the influence of external force. The azimuthal quantum number represents the orbital angular momentum of an electron around its nucleus. The possible values for l are the integers from 0 to n - 1 (where n is the principal quantum number of the electron): l = 0, 1, ..., n - 1.

The shape of each orbital is usually referred to by a letter, rather than by its azimuthal quantum number. The first shape (l=0) is denoted by the letter s (a mnemonic being "sphere"). The next shape is denoted by the letter p and has the form of a dumbbell. The other orbitals have more complicated shapes (see atomic orbital), and are denoted by the letters d, f, g, etc.

The third quantum number, the magnetic quantum number, describes the magnetic moment of the electron, and is denoted by ml (or simply m). The possible values for ml are the integers from -l to l (where l is the azimuthal quantum number of the electron): ml = -l, ..., -1, 0, 1, ..., l.

The magnetic quantum number measures the component of the angular momentum in a particular direction. The choice of direction is arbitrary; conventionally the z-direction is chosen.

The fourth quantum number, the spin quantum number (pertaining to the "orientation" of the electron's spin), is denoted ms, with values +1/2 or -1/2.
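The counting rules for these four numbers can be made concrete with a few lines of code. The sketch below is my own illustration (not from the source text): it enumerates every allowed combination of (n, l, ml, ms) for a given principal quantum number and recovers the familiar shell capacities of 2, 8 and 18 electrons.

    # Enumerate the allowed quantum-number combinations for a given n.
    def allowed_states(n):
        states = []
        for l in range(n):                    # l = 0, 1, ..., n - 1
            for m_l in range(-l, l + 1):      # m_l = -l, ..., +l
                for m_s in (+0.5, -0.5):      # two possible spin values
                    states.append((n, l, m_l, m_s))
        return states

    for n in (1, 2, 3):
        print(n, len(allowed_states(n)))      # 2, 8, 18 -- that is, 2 * n**2 states per shell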

The chemist Linus Pauling wrote, by way of example:

"In the case of a helium atom with two electrons in the 1s orbital, the Pauli Exclusion Principle requires that the two electrons differ in the value of one quantum number. Their values of n, l, and ml are the same. Accordingly they must differ in the value of ms, which can have the value of +1/2 for one electron and -1/2 for the other."[44]

It is the underlying structure and symmetry of atomic orbitals, and the way that electrons fill them, that leads to the organization of the periodic table. The way the atomic orbitals on different atoms combine to form molecular orbitals determines the structure and strength of chemical bonds between atoms.

In 1928, Paul Dirac extended the Pauli equation, which described spinning electrons, to account for special relativity. The result was a theory that dealt properly with events, such as the speed at which an electron orbits the nucleus, occurring at a substantial fraction of the speed of light. By using the simplest electromagnetic interaction, Dirac was able to predict the value of the magnetic moment associated with the electron's spin and found the experimentally observed value, which was too large to be that of a spinning charged sphere governed by classical physics. He was able to solve for the spectral lines of the hydrogen atom and to reproduce from physical first principles Sommerfeld's successful formula for the fine structure of the hydrogen spectrum.

Dirac's equations sometimes yielded a negative value for energy, for which he proposed a novel solution: he posited the existence of an antielectron and a dynamical vacuum. This led to the many-particle quantum field theory.

The Pauli exclusion principle says that two electrons in one system cannot be in the same state. Nature leaves open the possibility, however, that two electrons can have both states "superimposed" over each of them. Recall that the wave functions that emerge simultaneously from the double slits arrive at the detection screen in a state of superposition. Nothing is certain until the superimposed waveforms "collapse". At that instant, an electron shows up somewhere in accordance with the probability that is the square of the absolute value of the sum of the complex-valued amplitudes of the two superimposed waveforms. The situation there is already very abstract. A concrete way of thinking about entangled photons, photons in which two contrary states are superimposed on each of them in the same event, is as follows:

Imagine that we have two color-coded states of photons: one state labeled blue and another state labeled red. Let the superposition of the red and the blue state appear (in imagination) as a purple state. We consider a case in which two photons are produced as the result of one single atomic event. Perhaps they are produced by the excitation of a crystal that characteristically absorbs a photon of a certain frequency and emits two photons of half the original frequency. In this case, the photons are interconnected via their shared origin in a single atomic event. This setup results in superimposed states of the photons. So the two photons come out purple. If the experimenter now performs some experiment that determines whether one of the photons is either blue or red, then that experiment changes the photon involved from one having a superposition of blue and red characteristics to a photon that has only one of those characteristics. The problem that Einstein had with such an imagined situation was that if one of these photons had been kept bouncing between mirrors in a laboratory on earth, and the other one had traveled halfway to the nearest star when its twin was made to reveal itself as either blue or red, that meant that the distant photon now had to lose its purple status too. So whenever it might be investigated after its twin had been measured, it would necessarily show up in the opposite state to whatever its twin had revealed.
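This "purple" pair can be written down explicitly as a two-photon superposition, and its joint statistics computed from squared amplitudes, just as for the double slit. The toy sketch below is my own illustration, with "red" and "blue" as basis vectors: same-colour outcomes get probability zero, and each opposite-colour outcome gets probability one half, which is the perfect anticorrelation described above.

    import numpy as np

    red = np.array([1.0, 0.0])
    blue = np.array([0.0, 1.0])

    # Entangled pair: (|red, blue> + |blue, red>) / sqrt(2)
    pair = (np.kron(red, blue) + np.kron(blue, red)) / np.sqrt(2)

    labels = ["red", "blue"]
    basis = [red, blue]
    for i, first in enumerate(basis):
        for j, second in enumerate(basis):
            amplitude = np.dot(np.kron(first, second), pair)
            print(labels[i], labels[j], round(abs(amplitude) ** 2, 2))
    # red/red and blue/blue print 0.0; red/blue and blue/red each print 0.5.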

In trying to show that quantum mechanics was not a complete theory, Einstein started with the theory's prediction that two or more particles that have interacted in the past can appear strongly correlated when their various properties are later measured. He sought to explain this seeming interaction classically, through their common past, and preferably not by some "spooky action at a distance". The argument is worked out in a famous paper by Einstein, Podolsky, and Rosen (1935; abbreviated EPR) setting out what is now called the EPR paradox. Assuming what is now usually called local realism, EPR attempted to show from quantum theory that a particle has both position and momentum simultaneously, while according to the Copenhagen interpretation, only one of those two properties actually exists, and only at the moment that it is being measured. EPR concluded that quantum theory is incomplete in that it refuses to consider physical properties that objectively exist in nature. (Einstein, Podolsky, & Rosen 1935 is currently Einstein's most cited publication in physics journals.) In the same year, Erwin Schrödinger used the word "entanglement" and declared: "I would not call that one but rather the characteristic trait of quantum mechanics."[46] Ever since physicist John Stewart Bell showed theoretically, and subsequent experiments confirmed, that the local "hidden variables" picture favored by Einstein, Podolsky, and Rosen cannot reproduce the predictions of quantum mechanics, most physicists have accepted entanglement as a real phenomenon.[47] There is, however, some minority dispute.[48] The Bell inequalities remain the most powerful challenge to Einstein's claims.

The idea of quantum field theory began in the late 1920s with British physicist Paul Dirac, when he attempted to quantize the energy of the electromagnetic field, just as the energy of an electron in the hydrogen atom is quantized in quantum mechanics. Quantization is a procedure for constructing a quantum theory starting from a classical theory.

Merriam-Webster defines a field in physics as "a region or space in which a given effect (such as magnetism) exists".[49] Other effects that manifest themselves as fields are gravitation and static electricity.[50] In 2008, physicist Richard Hammond wrote:

Read more here:

Introduction to quantum mechanics - Wikipedia

Read More..

What existed before the Big Bang? – BBC News

My understanding is that nothing comes from nothing. For something to exist, there must be material or a component available, and for them to be available, there must be something else available. Where did the material come from that created the Big Bang, and what happened in the first instance to create that material? Peter, 80, Australia.

"The last star will slowly cool and fade away. With its passing, the Universe will become once more a void, without light or life or meaning." So warned the physicist Brian Cox in the recent BBC series Universe.

The fading of that last star will only be the beginning of an infinitely long, dark epoch. All matter will eventually be consumed by monstrous black holes, which in their turn will evaporate away into the dimmest glimmers of light. Space will expand ever outwards until even that dim light becomes too spread out to interact. Activity will cease.

Or will it? Strangely enough, some cosmologists believe a previous, cold dark empty universe like the one which lies in our far future could have been the source of our very own Big Bang.

The first matter

But before we get to that, let's take a look at how "material" physical matter first came about. If we are aiming to explain the origins of stable matter made of atoms or molecules, there was certainly none of that around at the Big Bang, nor for hundreds of thousands of years afterwards. We do, in fact, have a pretty detailed understanding of how the first atoms formed out of simpler particles, once conditions cooled down enough for complex matter to be stable, and how these atoms were later fused into heavier elements inside stars. But that understanding doesn't address the question of whether something came from nothing.

So let's think further back. The first long-lived matter particles of any kind were protons and neutrons, which together make up the atomic nucleus. These came into existence around one ten-thousandth of a second after the Big Bang. Before that point, there was really no material in any familiar sense of the word. But physics lets us keep on tracing the timeline backwards to physical processes which predate any stable matter.

This takes us to the so-called "grand unified epoch". By now, we are well into the realm of speculative physics, as we can't produce enough energy in our experiments to probe the sort of processes that were going on at the time. But a plausible hypothesis is that the physical world was made up of a soup of short-lived elementary particles, including quarks, the building blocks of protons and neutrons. There was both matter and "antimatter" in roughly equal quantities. Each type of matter particle, such as the quark, has an antimatter "mirror image" companion, which is near identical to itself but carries the opposite electric charge. However, matter and antimatter annihilate in a flash of energy when they meet, meaning these particles were constantly created and destroyed.

See the rest here:

What existed before the Big Bang? - BBC News

Read More..

What is the Planck time? – Space.com

The Planck time is an incredibly small interval of time that emerges naturally from a few basic quantities in theoretical physics. When it was discovered by Max Planck at the end of the 19th century, it seemed to be no more than a scientific curiosity. But today it plays a tantalizing role in our understanding of the Big Bang and the search for a theory of quantum gravity.

Here's a summary of everything we know about the Planck time: where it came from, what it is, and what it might reveal about the way the universe works.


The Planck time was first described in a scientific paper written by Planck in 1899, in a section called "Natural Measurement Units" (the paper, in German, can be found at the Biodiversity Heritage Library). In everyday use, measurement units are no big deal. We use whatever is convenient: ounces or tons for mass, miles or inches for distance, minutes or days for time. Scientists tend to use SI units of kilograms, meters and seconds, because they simplify complex calculations, but only up to a point. The math can still get tortuously complicated.

In Newton's equation for the force of gravity, for example, the gravitational constant G has brain-twisting units of cubic meters per kilogram per second squared, according to Swinburne University. In these units, G, which is one of the most fundamental numbers in the universe, has the arbitrary-looking value of 0.0000000000667. Planck wanted to find a more natural set of units in which G, and similar fundamental constants, are exactly equal to 1.


Who was Max Planck?

Max Planck may not be a household name, but he gave the world a household phrase: quantum theory. According to the European Space Agency, which named its Planck spacecraft after him, the breakthrough came in 1900 when he discovered that energy can only be transmitted in small packets of prescribed size, which he termed quanta. This was decades before the likes of Werner Heisenberg and Erwin Schrödinger discovered all the quantum weirdness we're familiar with today, but none of that would have been possible if Planck hadn't paved the way first. As such, he's rightly described as the father of quantum physics.

The second parameter Planck chose was the speed of light c, in meters per second. This was known to be an important constant even in 1899, despite the fact that Einstein's theory of relativity, with which it's closely associated, still lay several years in the future. The third parameter was a brand-new constant Planck himself had just discovered, now known simply as Planck's constant. Usually represented by the letter h, it's the ratio of a photon's energy to its frequency, with units of kilogram square meters per second.

Taking these three constants as his starting point, Planck was able to find a new set of measurement units in which they're all precisely equal to one. These basic units are referred to as the Planck mass, Planck length and Planck time. Our particular interest here is in the last of these, but there's a close relationship between the last two: the Planck length is equal to the Planck time multiplied by the speed of light.

The U.S. National Institute of Standards and Technology gives the value of the Planck time as 5.391247 x 10^-44 seconds. In other sources, including Planck's original paper, you may find a slightly bigger value of around 1.35 x 10^-43 seconds. As explained on Eric Weisstein's World of Physics site, this is due to the use of two different versions of Planck's constant. The larger value uses Planck's original quantity, h, while the smaller, more common value uses a parameter called h-bar, which is h divided by 2 pi.
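Both quoted figures come from the same formula, the square root of (Planck's constant times G, divided by c to the fifth power), with either h or h-bar plugged in. A quick sketch of my own, using standard values for the constants:

    import math

    G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2
    c = 2.99792458e8          # speed of light, m/s
    h = 6.62607015e-34        # Planck's constant, J*s
    hbar = h / (2 * math.pi)  # reduced Planck constant

    t_planck_hbar = math.sqrt(hbar * G / c**5)   # ~5.39e-44 s, the NIST figure
    t_planck_h = math.sqrt(h * G / c**5)         # ~1.35e-43 s, Planck's 1899 convention
    print(t_planck_hbar, t_planck_h)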

Whichever value is used, the result is a time interval that is unimaginably tiny in the context of everyday experience. A nanosecond, often used colloquially to mean a very short time, is 0.000000001 seconds, with 8 zeros between the decimal point and the first significant figure. The Planck time has no fewer than 43 zeroes. It's the time it takes light to travel one Planck length, which is around a hundredth of a millionth of a trillionth of the diameter of a proton, according to Symmetry magazine.

Because the Planck time is so impractically small, it was largely ignored by scientists prior to the 1950s, according to K. A. Tomilin of the Moscow Institute for the History of Science and Technology. At best it was considered an interesting curiosity with no real physical significance. Then, when physicists started looking for a theory of everything that would encompass both gravity and quantum mechanics, they realized that the Planck time might have enormous significance after all.

The key lies in the fact that the Planck time, along with the other Planck units, incorporates both the gravitational constant G and Planck's constant h, which is central to quantum theory. Inadvertently, back in 1899, Planck had come up with a formula that straddled both halves of modern physics, long before anyone had started looking for such a connection.

Universal units

Planck's original motivation in devising his measurement system was to define a set of units that weren't Earth-centric, in the way our units usually are. That's even true of the so-called astronomical unit, which is the average distance from the Earth to the Sun, according to the University of Surrey, or the light year, which is the distance light travels in the time it takes the Earth to orbit once around the Sun. In contrast, Planck's units, as impractical as they are for everyday use, have no such anthropocentric connections. As Planck himself put it, according to Don Lincoln of Fermilab, his units "necessarily retain their meaning for all times and for all civilizations, even extraterrestrial and non-human ones".

For any given mass, Einstein's theory of gravity, general relativity, gives a characteristic length scale called the Schwarzschild radius. But quantum theory has its own length scale for that mass, which is termed the Compton wavelength, according to Georgia State University. So is there any mass for which the Schwarzschild radius is exactly equal to the Compton wavelength? It turns out there is, and it's the Planck mass, for which those two parameters, one from quantum theory and one from general relativity, both equal the Planck length.
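That coincidence is easy to verify numerically, at least up to the factors of order one that different conventions introduce. The sketch below is my own back-of-the-envelope check: setting the Schwarzschild radius 2GM/c^2 equal to the Compton wavelength h/(Mc) and solving for M gives a mass of the same order as the Planck mass, and the corresponding length comes out within a small factor of the Planck length.

    import math

    G = 6.67430e-11
    c = 2.99792458e8
    h = 6.62607015e-34
    hbar = h / (2 * math.pi)

    # Mass at which the Schwarzschild radius equals the Compton wavelength:
    # 2*G*M/c**2 = h/(M*c)  =>  M = sqrt(h*c / (2*G))
    M_crossover = math.sqrt(h * c / (2 * G))
    planck_mass = math.sqrt(hbar * c / G)          # conventional definition, ~2.2e-8 kg
    planck_length = math.sqrt(hbar * G / c**3)     # ~1.6e-35 m

    print(M_crossover, planck_mass)                   # same order of magnitude
    print(2 * G * planck_mass / c**2, planck_length)  # lengths agree within a factor of a few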

Is this just a coincidence, or does it mean that gravitational and quantum effects really do start to overlap at the Planck scale?

Some scientists, such as Diego Meschini of Jyvaskyla University in Finland, remain skeptical, but the general consensus is that Planck units really do play a key role in connecting these two areas of physics. One possibility is that spacetime itself is quantized at the level of a Planck length and Planck time. If this is true, then the fabric of spacetime, when looked at on that scale, would appear chunky rather than smoothly continuous.

In the universe we see today, there are four fundamental forces: gravity, electromagnetism and the strong and weak nuclear forces. But as we look backward in time through the first moments after the Big Bang, the universe becomes so hot and dense that these forces gradually merge into each other. It all happened very quickly: within a tiny fraction of the first second, the four forces already looked just as they do today. Earlier still, there was no distinction between the electromagnetic and weak forces, and prior to 10^-36 seconds these were joined by the strong force as well.

At this point, gravity was still a separate force and, based on current theories, we can't look back any further in time than this. But it's widely believed that, given a better understanding of quantum gravity, we'd find that prior to the Planck time gravity was also merged into the other forces. It was only at the Planck time, around 5 x 10^-44 seconds after the Big Bang, that gravity became the separate force we see today.

Originally posted here:

What is the Planck time? - Space.com

Read More..

Black Holes Cannot Lead To Other Places In The Universe, Claims New Study – IFLScience

Could black holes be tunnels to other locations in space-time? Could they be wormholes connecting different regions of the universe? A new study gives a resounding no to these questions.

Black holes are complicated beasts. They have consistently broken our physics and studying them has opened our eyes to the limitations of our knowledge. One crucial problem is the Information Paradox. Matter can't escape black holes so, simplistically, once something gets in, its information is lost forever.

That's a big no-no in physics. The Information Paradox was one of the areas that the late Stephen Hawking focused on. His, and others', work led to the understanding that black holes evaporate and that information is somehow preserved. Understanding exactly how that takes place could provide crucial insight in the quest to unify quantum mechanics with a theory of gravity.

One of the ways that herculean task is being attempted is with string theory. String theory posits that the fundamental components of the universe are vibrating strings. So far there's no evidence that this is the ultimate theory of nature, but its ability to find solutions to major open questions in physics has been appealing to many.

When it comes to the Information Paradox, there have been multiple proposals for how to solve it within string theory, including the idea that black holes are wormholes, a hypothetical construct very popular in sci-fi. Wormholes are a proposed connection between two different points in space-time, but there's no evidence that they exist.

A different proposal instead sees black holes in string theory as "fuzzballs": messy constructions that radiate energy (and thus information). Black holes, in this view, are not mostly empty, with their whole mass contained in a singularity at the center. They are complex stringy structures.

"What we found from string theory is that all the mass of a black hole is not getting sucked into the center," Professor Samir Mathur from Ohio State University explained in a statement. "The black hole tries to squeeze things to a point, but then the particles get stretched into these strings, and the strings start to stretch and expand and it becomes this fuzzball that expands to fill up the entirety of the black hole."

Professor Mathur, who, amongst others, put forward the idea of black holes as fuzzballs 18 years ago, put both the fuzzball hypothesis and the wormhole paradigm to the test. Publishing their paper in the Turkish Journal of Physics, Mathur and colleagues pretty much concluded that the wormhole approach doesn't work.

"In each of the versions that have been proposed for the wormhole approach, we found that the physics was not consistent," Mathur said. "The wormhole paradigm tries to argue that, in some way, you could still think of the black hole as being effectively empty with all the mass in the center. And the theorems we prove show that such a picture of the hole is not a possibility."

The study is certainly intriguing, but there is still a huge debate over whether string theory is the correct way to explain reality. So black holes might be even weirder than wormholes and fuzzballs. Or not.

See the article here:

Black Holes Cannot Lead To Other Places In The Universe, Claims New Study - IFLScience

Read More..

U-M forms collaboration to advance quantum science and technology – University of Michigan News

The University of Michigan has formed a collaboration with Michigan State University and Purdue University to study quantum science and technology, drawing together expertise and resources to advance the field.

The three universities are partnering to form the Midwest Quantum Collaboratory, or MQC, "to find grand new challenges we can work on jointly, based on the increased breadth and diversity of scientists in the collaboration," said Mack Kira, professor of electrical engineering and computer science at Michigan Engineering and inaugural director of the collaboration.

U-M researchers call quantum effects the DNA of so many phenomena people encounter in their everyday lives, ranging from electronics to chemical reactions to the study of light waves, and everything they collectively produce.

"We scientists are now in a position to start combining these quantum building blocks into quantum applications that have never existed," said Kira, also a professor of physics at U-M's College of Literature, Science, and the Arts. "It is absolutely clear that any such breakthrough will happen only through a broad, diverse and interdisciplinary research effort. MQC has been formed also to build the scientific diversity and critical mass needed to address the next steps in quantum science and technology."

Collaborators at U-M include Steven Cundiff, professor of physics and of electrical engineering and computer science. Cundiff's research group uses ultrafast optics to study semiconductors, semiconductor nanostructures and atomic vapors.

"The main goal of the MQC is to create synergy between the research programs at these three universities, to foster interactions and collaborations between researchers in quantum science," he said.

Each university will bring unique expertise in quantum science to the collaboration. Researchers at U-M will lead research on the quantum effects of complex quantum systems, such as photonics, or the study of light, in different semiconductors. This kind of study could inform how to make semiconductor-based computing, lighting, radar or communications millions of times faster and billions of times more energy efficient, Kira said.

"Similar breakthrough potential resides in developing algorithms, chemical reactions, solar power, magnetism, conductivity or atomic metrology to run on emergent quantum phenomena," he said.

The MQC will be a virtual institute, with in-person activities such as seminars and workshops split equally between the three universities, according to Cundiff. In the first year, MQC will launch a seminar series, virtual mini-workshops focused on specific research topics, and will hold a larger in-person workshop. The collaboration hopes fostering connections between scientists will lead to new capabilities, positioning the MQC to be competitive for large center-level funding opportunities.

"We know collaboration is key to driving innovation, especially for quantum," said David Stewart, managing director of the Purdue Quantum Science and Engineering Institute. "The MQC will not only provide students with scientific training, but also develop their interpersonal skills so they will be ready to contribute to a currently shorthanded quantum workforce."

The MQC will also promote development of the quantum workforce by starting a seminar series and/or journal club exclusively for students and postdocs, and by encouraging research interaction across the three universities.

"MQC also provides companies with an interest in quantum computing with great opportunities for collaboration with faculty and students across broad spectrums of quantum computing, with the collaborative expertise spanning the three institutions," said Angela Wilson, director of the MSU Center for Quantum Computing, Science and Engineering.

"Additionally, bringing together three of our nation's largest universities and three of the largest quantum computing efforts provides potential employers with a great source of interns and potential employees encompassing a broad range of quantum computing."

Read more here:

U-M forms collaboration to advance quantum science and technology - University of Michigan News

Read More..

Meet Valery Vermeulen, the scientist and producer turning black holes into music – MusicRadar

Scientific pursuits have often acted as the inspiration for electronic music, from Kraftwerk's The Man-Machine through to Björk's Biophilia and the techno-futurist aesthetic of acts like Autechre and Aphex Twin.

Scientist, researcher, musician and producer Valery Vermeulen is taking this one step further with his multi-album project Mikromedas, which transforms scientific data gathered from deep space and astrophysical models into cosmic ambient compositions.

The first album from this project, Mikromedas AdS/CFT 001, runs data generated by simulation models of astrophysical black holes and extreme gravitational fields through custom-made Max/MSP instruments, resulting in a unique kind of aleatoric music that's not just inspired by scientific discovery, but literally built from it.

Could you tell us a little about your background in both science and music?

I started playing piano when I was six or seven years old. The science part came when I was like, 15 or 16, I think in my teenage years, I got to the library, and I stumbled upon a book, which had a part on quantum physics. I was very curious. And I think this is how the two got started.

During my career path I always had the impression that I had to choose one or the other: music or mathematics, music or physics, theoretical physics. So in the beginning, I did a PhD in the mathematical part of superstring theory with the idea of doing research in theoretical physics. And I was really interested in the problem of quantum gravity - that's finding a theory that unifies quantum physics and general relativity theory.

But at the same time, I was always making music, I started busking on the street, then I started playing in bands. Then, after my PhD, I switched, because I wanted to pursue more music. So I started at IPEM, that is the Institute for Psychoacoustics and Electronic Music in Belgium.

What kind of work were you doing there?

At IPEM I did research on music, artificial intelligence, and biofeedback. Out of that came the first project which combined the two and that is called EMO-Synth. With that project, with a small team, we try to build a system that can automatically generate personalised soundtracks that adapt themselves to emotional reaction.

"So the idea of the system is to have an AI assistant that can automatically generate a personalised soundtrack for a movie, specialised and made for you using genetic programming. That's a technique from AI.

Could you tell us about the Mikromedas project?

After EmoSynth, I wanted to do some more artistic stuff. That is how I stumbled upon Mikromedas, the project with which I've recorded the album. There's two series for the moment, and every series has a different topic. The first series started in 2014, as a commissioned work for The Dutch Electronic Arts Festival in Rotterdam.

"They wanted me to do something with space and sound. The question was: could I represent a possible hypothetical voyage from earth to an exoplanet near the centre of the Milky Way? Is it possible to evoke this using only sound, no visuals, that was the question. And this is how I stumbled upon data sonification for the first time.


Basically, that's the scientific domain in which scientists are figuring out ways to use sound to convey data. Normally, you would look at data - as a data scientist, you look at your screen, you present the data on your screen, and you try to figure out structures in the data. But you can also do that using sound. It's called multimodal representations. So if you both use your ears and your eyes, you can have a better understanding of data.

With Mikromedas I got into that field, a very interesting scientific domain. Of course, artists have also started using it for creative purposes. It was a one-time concert that I made the whole show for, but it turned out that I played more and more concerts with that. And this is how the Mikromedas project got started.

After the first series, I wanted to dive even deeper into my fascination for mathematics and theoretical physics. I still had the idea of quantum gravity, this fascinating problem, in the back of my head. And black holes are a very hot topic - they are one of the classical examples where we can combine general relativity and quantum physics.

The next step was, I needed to find ways to get data. I could program some stuff myself, but I also lacked a lot of very deep scientific knowledge and expertise. A venue here in Belgium put me in contact with Thomas Hertog, a physicist who worked with Stephen Hawking, and we did work on sonifications of gravitational waves, and I made a whole concert with that.

"From there, we made the whole album. Its a bit of a circle, I think - at first the music and physics were apart from each other, these longtime fascinations that were split apart, and now theyve come together again.

What kinds of data are you collecting to transform into sound and music?

If we're talking from a musical perspective, I think the most fascinating data, and the closest to music, are gravitational wave data. Gravitational waves are waves that occur whenever you have two black holes that are too close to each other: they will swirl around each other, and they will merge into a bigger black hole. This is a super cataclysmic event. And because of this event, it will emit gravitational waves. If you encounter a gravitational wave, you become larger, smaller, thicker, or thinner. So it's sort of an oscillation that you would undergo.

What I discovered via the work with Thomas is that there's some simulations of gravitational waves that are emitted by certain scenarios, because you have different types of black holes, you can have different masses, etc. To calculate and to programme it, you need something which we call spherical harmonics. And those are three dimensional generalisations of sine wave functions.


And if you're into sound synthesis, I mean, if you're studying sounds, this is what we all learn about - the square wave is just a sum of all the overtones of a fundamental frequency, the sawtooth wave has all the overtones linearly decaying. And it's the same principle that holds with those generalisations of sine waves, those spherical harmonics. Using those, you can calculate gravitational waves in three dimensions, which is really super beautiful to watch. And this is what I did for one of the datasets.

They say everything is waves. And it is, in a way - I mean, I don't like this New Age expression so much, saying everything is connected - but in some sort of way, vibrations are, of course, essential to music, but also to physics.
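The overtone sums mentioned a moment ago can be reproduced with a few lines of additive synthesis. The sketch below is my own illustration of that analogy, not code from the Mikromedas project: it builds a sawtooth-like tone by summing sine-wave harmonics of a fundamental, with amplitudes falling off as 1/n.

    import numpy as np

    sample_rate = 44100
    t = np.arange(sample_rate) / sample_rate   # one second of time
    f0 = 110.0                                 # fundamental frequency, Hz

    saw = np.zeros_like(t)
    for n in range(1, 40):                     # sum the first 39 harmonics
        saw += np.sin(2 * np.pi * n * f0 * t) / n

    saw /= np.max(np.abs(saw))                 # normalise to -1..1 before playback or export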

How are you transforming that data into sounds we can hear?

First I made 3D models. So these are STL files, 3D object files. And then, together with Jaromir Mulders, he's a visual artist that I collaborate with, he could make a sort of a movie player. And so you can watch them in 3D, evolving. But then I thought, how on earth am I going to use this for music?

The solution was to make two-dimensional intersections with two-dimensional planes. And then you have two-dimensional evolving structures. And those you can transform into one-dimensional evolutions and one-dimensional number streams. Then you can start working with this data - that's how I did it. Once you have those, it's a sort of a CV signal.

I'm working in Ableton Live, using Max/MSP and Max for Live, and can easily connect those number streams to any parameter in Ableton Live. Using the API in Max for Live, you can quite easily connect it to all the knobs you want. Another thing I was using was quite a lot of wavetable synthesis. Different wavetables: Serum, Pigments, and the Wavetable synth from Ableton.
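The mapping step described here, turning a one-dimensional number stream into something a synth parameter can follow, amounts to rescaling the data into a fixed control range. The sketch below is a generic illustration of that idea, not Vermeulen's actual Max for Live patch, and the sample values are invented.

    # Rescale an arbitrary stream of simulation numbers into the 0.0-1.0 range
    # that a synth parameter (a filter cutoff, a wavetable position) typically expects.
    def to_control_signal(stream, lo=None, hi=None):
        lo = min(stream) if lo is None else lo
        hi = max(stream) if hi is None else hi
        span = (hi - lo) or 1.0
        return [(x - lo) / span for x in stream]

    # Hypothetical slice of a gravitational-wave-derived number stream:
    data = [-0.42, -0.10, 0.03, 0.35, 0.61, 0.20, -0.15]
    print(to_control_signal(data))   # values between 0.0 and 1.0, ready to drive a parameter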

How much of what we're hearing on the album is determined by the data alone, and how much comes from your own aesthetic decision-making?

I wear two hats. So one hat is the hat of the scientist, the physicist, and the other hat is the hat of a music producer, because I also studied music composition here at the Conservatory in Ghent. And I'm also teaching at the music production department there. It's all about creativity. That's the common denominator, you know, because I always think it's difficult to say this is the science part, this is the musical part.

In the more numerical part, what I would do is collect the data sets. You have all the different datasets, then you have to devise different strategies to sonify it, to turn all those numbers into sound clips, sound samples, you could say. These are sort of my field recordings, I always compare it to field recordings, but they are field recordings that come from abstract structures that give out data. I collect a whole bank of all these kinds of sounds.

Next you design your own instruments, in something like Reaktor or Max/MSP, that are fed by the data streams. Once I have those two, I'm using those two elements to make dramatic compositions, abstract compositions. One theme of the album was to try to evoke the impression of falling into a black hole, something that is normally not possible, because you break all laws of physics, because we don't know what the physics looks like inside of a black hole, the region inside the event horizon.


Then I wear a hat as a music producer, because I want to make this into a composition. I was working before for a short time as a producer for dance music. So I want to have a kind of an evolution in the track. So how am I going to do that? I'm working with the sounds, I'm editing the sounds a lot with tools in Ableton, in Reaktor, and I also have some analogue synths here.

"So I have a Juno-106, a Korg MS-20. Sometimes I would just take my Juno, I put it into unison, you know, use the low pass filter, and then get a gritty, beautiful low analogue sound to it, mix it underneath to give an impression of this abstract theme.

After that, once the arrangement is done, then there's the mixing process. I did quite a lot of mixing, I think over a year, because I wanted the sound quality to be really very good. And I also started using new plugins, new software. And the whole idea was to make it sound rather analogue. I hope I managed to do the job with a record that doesn't sound too digital.

Which plugins were you using to mix the material?

Slate, of course. SoundToys, Ohmforce, I love Ohmforce plugins. Waves, we use a lot of Waves plugins. I also use the native plugins of Ableton. I started to appreciate them because before that I didn't know how to use them properly. I also have some hardware here. So I have a Soundcraft mixing table that I love a lot.

The record was released on an international label, Ash International. It's a subsidiary of Touch. And Mike Harding, he let me know, the record is going to be mastered by Simon Scott. He's the drummer of Slowdive, the band Slowdive. So I was a little bit nervous to send a record to Simon, but he liked it a lot. So it's like, okay, I managed to do a mix that's okay. I was really happy about it.

Aside from the scientific inspiration, what were the musical influences behind the project?

Because the music is quite ambient, quite slow, Alva Noto is a big inspiration. Loscil, I was listening to a lot at the time. Biosphere, Tim Hecker. Also, at the same time, to get my head away a little bit, I tend to listen to other kinds of music when I'm doing this stuff. I was at the same time studying a lot of jazz, I'm studying jazz piano. I was listening to a lot of Miles Davis, Coltrane, Bill Evans, McCoy Tyner. I'm a big Bill Evans fan because of his crazy beautiful arrangements. Grimes is a big influence, and Lil Peep, actually - his voice is like, whoa.

Do you have any plans to play the material live? How would you approach translating the project to a live performance?

There are plans to play live. We're gonna play it as an audiovisual show. The visuals are produced by Jeremy Miller, this amazing, talented visual artist from the Netherlands. Live, of course, I'm using Ableton Live. I have a lot of tracks, and basically splitting them out into a couple of different frequency ranges. So high, high-mid, low-mid, low and sub frequency ranges.

Then I try to get them in different clips, loops, that make sense. And then I can remix the tracks in a live situation, I also add some effects. And I also add some new drones underneath. There's no keys or musical elements going into it. It's a very different setup than I was used to when I was still doing more melodic and rhythmic music.

What's in store for the next series of the project?

There's two routes, I think. Mikromedas is experimental, and I want it to remain experimental, because it's just play. I've discovered something new, I think - it's finding a way to make a connection between the real hardcore mathematical theoretical physics, the formulas, and the sound synthesis and the electronic music composition. But with one stream that I'm looking at, I already have a new album ready. And that's to combine it with some more musical elements, just because I'm very curious.

I think the Mikromedas project gave me a new way to approach making electronic music. Sometimes people ask me, why on earth make it so difficult? I mean, just make a techno track and release it. But no, I mean, everyone is different! And this is who I am. But going back a little bit towards the musical side, that's something that's really fascinating me.

The other stream that I want to follow is to connect it even more with abstract mathematics. So my PhD was in the classification of infinite dimensional geometrical structures, which are important for superstring theory. The problem was always how can you visualise something that is infinitely dimensional. So you have to take an intersection with a finite dimensional structure to make sense out of it. But now I'm thinking that maybe I can try to make a connection with that and with sound, that's even more abstract than black holes. Making a connection with geometry, 3D, and sound using sonification.

Mikromedas AdS/CFT 001 is out now on Ash International.

You can find out more about Valery's work by visiting his website or Instagram page.

Read the original here:

Meet Valery Vermeulen, the scientist and producer turning black holes into music - MusicRadar

Read More..

Emotion Isn’t the Enemy of Reason – The Atlantic

Paul Dirac was one of the greatest physicists of the 20th century. A pioneer in quantum theory, which shaped our modern world, Dirac was a genius when it came to analytical thinking. But when his colleagues asked him for advice, his secret to success had nothing to do with the traditional scientific method: "Be guided," Dirac told them, "by your emotions."

Why would the cold logic of theoretical physics benefit from emotion? Physics theories are expressed in mathematics, which is governed by a set of rules. But physicists don't just study existing theories; they invent new ones. In order to make discoveries, they have to pursue roads that feel exciting and avoid those that they fear will lead nowhere. They have to be brave enough to question assumptions and confident enough to present their conclusions to their skeptical colleagues. Dirac recognized that the best physicists are comfortable letting emotion guide their decisions.

Dirac's advice, like his physics, ran against the common assumption of psychology in his day: that rational thought primarily drives our behavior, and that when emotions play a role, they are likely to deflect us from our best judgment.

Today researchers have gained a deeper understanding of emotion and how it can positively influence logical choices. Consider a study led by Mark Fenton-O'Creevy, a professor at the Open University Business School, in England. Fenton-O'Creevy and his colleagues conducted interviews with 118 professional traders and 10 senior managers at four investment banks. The experimenters found that even among the most-experienced traders, the lower-performing ones seemed less likely to effectively engage with the emotions guiding their choices: whether to buy or sell a set of securities, for example, with millions of dollars at stake. The most-successful traders, however, were particularly likely to acknowledge their emotions, and followed their intuitive feelings about stock options when they had limited information to draw on. They also understood that when emotions become too intense, toning them down can be necessary. The issue for the successful traders was not how to avoid emotion but how to harness it.

One way emotions aid decision making is by steering attention to both threats and opportunities. Consider the role that disgust plays in encoding your experience of foods that could sicken you. If you're about to slurp down an oyster and you notice worms crawling all over it, you don't stop and consciously analyze the details of that situation; you just gag and throw it down. The traders, similarly, have to know what to prioritize and when to act, and they have to do it quickly. "People think if you have a Ph.D. you will be very good, because you have an understanding of options theory, but this is not always the case," said one of the investment-bank managers the researchers interviewed. "You have to also have good gut instincts," and those gut instincts are largely rooted in emotion.


In the past decade, scientists have begun to understand precisely how emotions and rationality act together. The key insight is that before your rational mind processes any information, the information must be selected and evaluated. That's where emotion plays a dominant role. Each emotion, whether fear, disgust, or anger, causes certain sensory data, memories, knowledge, and beliefs to be emphasized, and others downplayed, in your thought processes.

Imagine you're walking up a dark street in a relaxed state, looking forward to dinner and a concert later that evening. You may be aware of being hungry and may not register small movements in the shadows ahead, or the sound of footsteps behind you. Most of the time, ignoring those things is fine; the footsteps behind you are probably other pedestrians traveling to their own evening plans, and the movements in the shadows could just be leaves blowing in the wind. But if something triggers your fear, those sights and sounds will dominate your thinking; your sense of hunger will vanish, and the concert will suddenly seem unimportant. That's because when you are in a fear mode of processing, you focus on sensory input; your planning shifts to the present, and your goals and priorities change. You might adjust your route to one that takes longer but is better lit, sacrificing time for safety.

In one illustrative study, researchers induced fear in their subjects by sharing a grisly account of a fatal stabbing. They then asked these participants to estimate the probability of various calamities, including other violent acts and natural disasters. Compared with subjects whose fear had not been activated, these subjects had an inflated sense of the probability of those misfortunes: not only related incidents, such as murder, but also unrelated ones, such as tornadoes and floods. The gruesome stories affected the subjects' mental calculus on a fundamental level, making them generally warier of environmental threats. In the world outside the laboratory, that wariness pushes you to avoid dangerous situations.

The ways in which our emotions influence our judgment aren't always clear to us. In a study on disgust, for instance, scientists showed volunteers either a neutral film clip or a scene from Trainspotting in which a character reaches into the bowl of a filthy toilet. One of the characteristics of disgust is a tendency toward disposal, whether of food or other items. After playing the clips, the researchers gave the subjects the opportunity to trade away one box of unidentified office supplies for another, and found that 51 percent of those who had seen the Trainspotting clip exchanged their box, compared with 32 percent of participants who'd watched the neutral clip. But when quizzed about their decision afterward, the disgusted participants tended to justify their actions with rational reasons.

Welcoming emotion into the decision-making process can help us be more clear-eyed about where our choices come from. Dirac knew that emotion helped him look beyond the beliefs of his contemporaries. Again and again, his controversial ideas proved correct. He invented a mathematical function that seemed to violate the basic rules of the subject, but that was eventually embraced and developed by later mathematicians. He predicted a new type of matter, called antimatter, another idea that was revolutionary at the time but is widely embraced today. And his appreciation for the role of emotion was prescient in itself. Dirac died in 1984, a couple of decades before the revolution in emotion theory began, but he'd no doubt have been happy to see that he'd been right again.

This article was adapted from Leonard Mlodinow's forthcoming book, Emotional: How Feelings Shape Our Thinking.

See the original post here:

Emotion Isn't the Enemy of Reason - The Atlantic

Read More..