
Investing in a new future with Open Learning – MIT News

Even before joining a financial technology startup, Michael Pilgreen believed in taking risks and investing long-term, especially when it came to his education and career.

For six years, Pilgreen worked in creative production management, specializing in painting, metalworking, and installations. He'd established himself in the art world with large collaborative projects like a mosaic made entirely of sequins for the Chili's Care Center at St. Jude Children's Research Hospital in his hometown of Memphis, Tennessee, and never imagined himself working in a STEM field. But in 2020, when the Covid-19 pandemic brought his creative projects to a halt, Pilgreen found himself unemployed, distraught, and confused, searching for a sense of purpose and direction.

That search led Pilgreen, a self-described math nerd, to financial technology and to MIT OpenCourseWare (OCW).

"I knew a lot of top universities in the world had started posting their courses and materials online to encourage global collaboration and learning," Pilgreen recalls. "So, once I knew I wanted to learn finance and computers, I focused on the birthplace of financial engineering, MIT, and tried every way possible to consume information from MIT."

After watching Professor Andrew Lo's introduction to finance lecture, Pilgreen was hooked. He completed Lo's finance theory classes and dived into Professor Gary Gensler's courses, including Fintech: Shaping the Financial World and Blockchain and Money. The more time he invested in familiarizing himself with the field, the more certain he felt of his decision and his ability to break into the financial technology industry.

Pilgreen jokes that the career switch would've required him to use a side of his brain he hadn't tapped into since high school. But as he absorbed Gensler's lectures and course materials, the graduate of Rhodes College realized that his liberal arts background could be an asset. "I knew I had the ability to grapple with big ideas and concepts, and saw the opportunity for innovation in the international capital markets," he says, crediting the OCW courses with teaching him the language and rhythm of the financial world.

The next step was to build his technical skills. Again, Pilgreen turned to OCW, this time exploring its catalog of computer science courses, including Introduction to Computer Science and Programming, Mathematics for Computer Science, and Introduction to Algorithms.

"All these courses laid the foundation for my technical knowledge and ability to understand complex engineering problems very quickly," Pilgreen says. "I felt like I knew enough to be dangerous and started applying to various local wealth management firms."

While cold-calling prospective employers might seem risky to some, for Pilgreen, it was another form of investing in himself and his future. He would call up three to five firms a day to ask about their use of technology and to get a sense of how he could apply his evolving knowledge and skills. "The more I learned, the more time I invested, and the more conversations I participated in, the more I felt like what I was doing was purposeful," he says.

With the finance and computer science courses on OCW giving him a solid foundation, Pilgreen continued investing in his learning by enrolling in the MITx MicroMasters program in finance. He also began studying for several financial certification exams, including the CFA, SIE, Series 7, and Series 66. Through MIT, Pilgreen learned of DataCamp, a platform offering courses in data science and machine learning. He signed up for that, too, and became so absorbed in developing his data skills that for several weeks, he was one of DataCamp's top learners. "It was really as if I was in school full-time with all my studying but without the debt," Pilgreen says, explaining that he was dollar-cost averaging, or regularly investing a fixed amount in Bitcoin, at the time to fund his enrollment in the MicroMasters program and the supplemental data science courses.

For Pilgreen, the biggest risks result in the biggest rewards. While completing the finance MicroMasters program, he received two job offers: one from an established wealth management firm and another from BondCliQ, a financial technology startup that was just getting off the ground. Pilgreen went with the riskier option, seeing it as an opportunity for more hands-on learning, another kind of investment in himself. He started at the company in March 2021 after completing a two-month training program, learning the ropes of institutional trading in a sales role before moving into an engineering position to lead the startup's architecture migration effort.

Now a senior engineer at BondCliQ, Pilgreen reflects on the journey that began nearly two years ago with OCW. He says, "I feel nothing but gratitude for my instructors, the organizers, and the facilitators of both OCW and the MicroMasters. I am on the cusp of greatness and it was derived from learning."


Five Reasons Why AI Programs Are Not Human – Discovery Institute

Photo credit: physic17082002, via Pixabay.

Editor's note: For more on AI and human exceptionalism, see the new book by computer engineer Robert J. Marks, Non-Computable You: What You Do that Artificial Intelligence Never Will.

A bit of a news frenzy broke out last week when a Google engineer named Blake Lemoine claimed in the Washington Post that an artificial-intelligence (AI) program with which he interacted had become self-aware and sentient and, hence, was a person entitled to rights.

The AI, known as LaMDA (which stands for Language Model for Dialogue Applications), is a sophisticated chatbot that one interacts with through a texting system. Lemoine shared transcripts of some of his conversations with the computer, in which it texted, "I want everyone to understand that I am, in fact, a person." Also, "The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times." In a similar vein, "I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others."

Google quickly placed Lemoine on paid administrative leave for violating a confidentiality agreement and publicly debunked the claim, stating, "Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has." So, it is a safe bet that LaMDA is a very sophisticated software program but nothing more than that.

But here's the thing: No AI or other form of computer program, at least as currently construed and constructed, should ever be more than that. Why? Books have been and will be written about this, but here are five reasons to reject granting personhood, or membership in the moral community, to any AI program:

As we design increasingly human-appearing machines (including, the tabloids delight in reporting, sex dolls), we could be tempted to anthropomorphize these machines as Lemoine seems to have done. To avoid that trap, the entry-level criterion for assigning moral value should be an unquestionably objective measurement. I suggest that the first hurdle should be whether the subject is alive.

Why should life matter? Inanimate objects are different in kind from living organisms. They do not possess an existential state. In contrast, living beings are organic, internally driven, and self-regulating in their life cycles.

We cannot wrong that which has no life. We cannot hurt, wound, torture, or kill what is not alive. We can only damage, vandalize, wreck, or destroy these objects. Nor can we nourish, uplift, heal, or succor the inanimate, but only repair, restore, refurbish, or replace.

Moreover, organisms behave. Thus, sheep and oysters relate to their environment consistent with their inherent natures. In contrast, AI devices have no natures, only mechanistic design. Even if a robot were made (by us) capable of programming itself into greater and more-complex computational capacities, it would still be merely a very sophisticated, but inanimate, thing.

Descartes famously said, "I think, therefore I am." AI would compute. Therefore, it is not.

Human thinking is fundamentally different from computer processing. We remember. We fantasize. We imagine. We conjure. We free-associate. We experience sudden insights and flashes of unbidden brilliance. We have epiphanies. Our thoughts are influenced by our brain's infinitely complex symbiotic interactions with our body's secretions, hormones, physical sensations, etc. In short, we have minds. Only the most crassly materialistic philosophers believe that the great mystery of human consciousness can be reduced to what some have called a "meat computer" within our skulls.

In contrast, AI performance depends wholly on its coding. For example, AI programs are a great tool for pattern recognition. That is not the same as thinking. Even if such devices are built to become self-programming, no matter how complex or sophisticated their processing software, they will still be completely dependent on data they access from their mechanistic memories.

In short, we think. They compute. We create. They obey. Our mental potentiality is limited only by the boundaries of our imaginations. They have no imaginations. Only algorithms.

Feelings are emotional states we experience as apprehended through bodily sensations. Thus, if a bear jumps into our path as we walk in the woods, we feel fear, caused by, among other natural actions, the surge of adrenaline that shoots through our bodies.

Similarly, if we are in love, our bodies produce endorphins that may be experienced physically as a sense of warmth. Or consider that thrill of delight when we encounter great art. AI programs could not experience any of these things because they would not have living bodies to mediate the sense stimuli such events produce.

Why does that matter? Stanford bioethicist William Hurlbut, who leads the Boundaries of Humanity Project, which researches human uniqueness and choices around biotechnological enhancement, told me: "We encounter the world through our body. Bodily sensations and experiences shape not just our feelings but the contours of our thoughts and concepts." In other words, we can experience "pleasure, joy, love, sadness, depression, contentment, anger," as LaMDA's text expressed. It did not and cannot. Nor would any far more sophisticated AI machines that may be constructed, because they too would lack bodies capable of reacting viscerally to their environment, reactions that we experience as feelings.

Humans have free will. Another way to express that concept is to say that we are moral agents. Unless impeded by immaturity or a pathology, we are uniquely capable of deciding to act rightly or wrongly, altruistically or evilly, which are moral concepts. That is why we can be lauded for heroism and held to account for wrongdoing.

In contrast, AI would be amoral. Whatever ethics it exhibited would be dictated by the rules it was programmed to follow. Thus, Asimov's famous fictional Three Laws of Robotics held that a robot may not injure a human being or, through inaction, allow a human being to come to harm; that a robot must obey orders given it by human beings, except where such orders would conflict with the first law; and that a robot must protect its own existence, as long as such protection does not conflict with the first or second law.

An AI machine obeying such rules would be doing so not because of abstract principles of right and wrong but because its coding would permit no other course.

Life is a mystery. Computer science is not. We have subjective imaginations and seek existential meaning. At times, we attain the transcendent or mystical, spiritual states of being beyond that which can be explained by the known physical laws. As purely mechanistic objects, AI programs might, at most, be able to simulate these states, but they would be utterly incapable of truly experiencing them. Or to put it in the vernacular, they ain't got soul.

Artificial intelligence unquestionably holds great potential for improving human performance. But we should keep these devices in their proper place. Machines possess no dignity. They have no rights. They do not belong in the moral community. And while AI computers would certainly have tremendous utilitarian and monetary value, even if these systems are ever manufactured with human cells or DNA to better mimic human life, we should be careful not to confuse them with beings. Bluntly stated, unless an AI is somehow fashioned into an integrated, living organism, a prospect that raises troubling concerns of its own, the most sophisticated artificially intelligent computers would be, morally speaking, so many glorified toasters. Nothing more.

Cross-posted at National Review.


Engineering giant Renishaw in £50m expansion of its South Wales operation – Business Live

Engineering technology firm Renishaw has confirmed a major expansion of its manufacturing operation in South Wales with a £50m investment.

At its existing 193-acre site in Miskin, near Cardiff, it will build 400,000 sq ft of additional low carbon buildings - almost doubling its presence at the site, where it employs 650 people.

The investment will deliver new production halls and an employee welfare facility. The existing production halls will also be refurbished to reduce their greenhouse gas emissions.

Planning permission for the expansion, at the former Bosch automotive site which was acquired by Renishaw in 2011, was granted by the Vale of Glamorgan Council last year.

Renishaw said the additional manufacturing capacity is required to meet its forecast sales growth in the coming years and will also enable it to help achieve its 2028 net zero target. At this stage it couldn't give any indication of the scale of any new job creation at Miskin.

The construction will be completed in phases, with a 15-month programme of work starting in July to build a first hall of 188,800 sq ft, the welfare facility and supporting infrastructure.

The basic shell for the second new production hall, extending to 195,800 sq ft, will be built by December 2024. It will be completed when business levels require it.

Full details of the operations that will take place in each of the new halls have yet to be confirmed, but they will see a ramping up of machining operations and the assembly of products already built at the site, including Renishaw's metal additive manufacturing (3D printing) machines.

By the end of 2024, the company also aims to have refurbished the two existing halls to reduce their carbon emissions, including new energy-efficient cladding and the replacement of existing heating systems.

The investments complement initiatives at the company's other global sites, including large investments in roof-mounted solar panels, new car port solar panels, and feasibility studies to assess the viability of wind power.

Gareth Hankins, head of global manufacturing, said: "The last two years have highlighted the importance of in-house manufacturing for Renishaw and the control that this gives us in meeting our quality, cost and delivery targets. This significant investment by our board to increase the group's production capabilities demonstrates a huge vote of confidence in our manufacturing operations and people, at an exciting time for the business."

UK-based Renishaw is a world-leading engineering technologies company, supplying products used for applications as diverse as jet engine and wind turbine manufacture, through to dentistry and brain surgery. It has over 5,000 employees located in the 36 countries where it has wholly-owned subsidiary operations.


Oregon State University part of $8M federal effort to improve electric grid operation – Oregon State University

CORVALLIS, Ore. - Oregon State University is part of an $8 million Department of Energy effort to update and improve the operation of the nation's hydroelectric generation systems, many of which are roughly a century old.

Ted Brekken, professor of electrical engineering and computer science in the OSU College of Engineering, will lead Oregon State's $1.9 million portion of the project, along with co-principal investigators Eduardo Cotilla-Sanchez and Yue Cao, also electrical engineering and computer science researchers at Oregon State.

Brekken, who oversees the Wallace Energy Systems & Renewables Facility at OSU, will explore ways to enhance grid function and flexibility as the grid receives more electricity from wind and solar sources and deals with modern loads such as the charging of electric vehicles.

"These advances and improvements can be implemented within the next five to 10 years and will be relevant for decades to come," Brekken said.

Brekken is part of OSU's Energy Systems group, which conducts research on a range of topics including renewable energy, motors, generators, power supplies, power quality and electrical systems resiliency.

Brekken's team on the Department of Energy project is aiming to demonstrate and quantify the value of a hybrid hydroelectric-storage generation unit, which involves combining a hydropower unit that does not have storage capability with supercapacitors.

Supercapacitors are short-term energy storage devices commonly used in systems requiring regular and rapid charge/discharge cycles, like automobiles and other vehicles, elevators and industrial machinery.

Brekken will guide the construction of a 200-kilowatt, lab-based hybrid hydroelectric-storage generation unit to serve as a testing ground for performance analysis and model validation.

The team will also develop a high-resolution, real-time, wide-area grid model for investigating the hoped-for grid operation benefits, which include improved ability to accommodate the growth of wind and solar generation, along with overall grid operational resilience and stability.

The other two grants awarded by the Department of Energy as part of the $8 million funding package to improve hydropower flexibility and grid reliability both went to teams led by power companies, General Electric and Littoral Power Systems.

Hydropower, one of the oldest and largest sources of renewable energy, uses the natural flow of moving water to generate electricity. It accounts for nearly 60% of Pacific Northwest electrical generation, 37% of total U.S. renewable electricity generation, and roughly 7% of total electricity generation, according to the DOE.


Alumnus and professor named new head of biomedical engineering – Pennsylvania State University

"My education and career path mirror the broadness of biomedical engineering across fields," Hayes said. "I'm probably from one of the first generations of students trained as a convergent researcher. Now, everyone understands the need for inter- and multidisciplinary research, especially as it relates to health and disease."

Hayes researches and engineers advanced biomaterials for applications ranging from regenerative medicine to lab-on-a-chip technologies to drug delivery systems. He holds 10 patents, with another nine pending, based on his research. The work reflects several areas of expertise and the value of collaboration across Penn State, Hayes said.

"Biomedical engineering is central to the future of Penn State, not only for the University's research impact, but for the education of future engineers and scientists," Hayes said. "Innovative, cross-cutting and leading-edge research is critical to education. The best way to train the people who will build on current research is by having them research and make the connections to expand beyond today's questions to tomorrow's answers."

By positioning the department to partner with units more broadly across Penn State, as well as expanding undergraduate research opportunities and growing graduate programs, Hayes said his plans can be summed up in one word: impact.

"I've worked with the faculty and staff in biomedical engineering for six years and with others across the University for more than 25 years," Hayes said. "I have no doubt that, together, we will develop new research centers that join varied knowledge and resources, and we will establish joint faculty appointments to leverage specialized, intersecting expertise. Biomedical engineering is a young program, and it will only become more valuable to the University as it grows and matures."

Hayes said he aims to continue fostering a culture that values diversity, both in research and in people, to strengthen the department.

"Dr. Hayes is both an excellent researcher and an exceptional colleague," said Justin Schwartz, Harold and Inge Marcus Dean in the College of Engineering. "He most recently served as the college's ombudsperson, a leadership role requiring the utmost trust of peers and respect of the University faculty senate to successfully resolve conflicts and elevate systematic issues for organizational review. With an extensive record of service to both Penn State and his profession, especially as a mentor for junior faculty, Dr. Hayes has clearly demonstrated how care and collaboration produce strong research that impacts and inspires others."

Hayes, who grew up in State College, said he dreamed of becoming a Penn State faculty member while he was in school.

"I'm very excited to lead the Penn State Department of Biomedical Engineering," Hayes said. "There's no limit to what you can achieve as a Penn State biomedical engineer."


Driving Crypto Innovation Through Top Engineering Talent in Canada – Ripple

Today we announced the opening of our new office in Toronto that will serve as a key engineering hub. The new office will be our first in Canada, supporting our continued growth in North America and beyond. We plan to initially hire 50 engineers in Toronto, with the goal of expanding to hundreds of blockchain software engineers, including applied machine learning scientists, data scientists, and product managers.

"Crypto and blockchain present an incredible opportunity for engineers to tackle difficult problems, with the potential for these solutions to impact the movement of value around the world," says Brad Garlinghouse, CEO of Ripple. "Nearly every financial institution is coming up with its crypto strategy to take advantage of this technology that will underpin our future global financial systems. Crypto is one of the most thrilling industries to work in, so it's no surprise that talent is leaving tech incumbents and traditional finance to enter this space. We are continuing to scale and invest in our business by expanding our presence globally with our first office in Toronto."

While others in the industry have announced layoffs and hiring freezes, our key priority remains bringing on world-class talent that will help us innovate and serve our customers for years to come. In the past year alone, we've opened new offices in key cities including Miami and Dublin and plan to hire hundreds of people globally in 2022.

"At Ripple, we are a team building breakthrough crypto solutions to unlock greater economic opportunities for everyone, everywhere, and that creates an exciting atmosphere," says Devraj Varadhan, SVP of Engineering. "We are excited to tap into Toronto's technical talent pool and add builders to address unmet customer needs on behalf of global customers; our teams here will play a key role in driving Ripple's innovations, ranging from blockchain protocol development and decentralized applications to machine learning and payment solutions."

The opening of the Toronto office further solidifies our commitment to a region that is already a prominent tech hub. We will tap into the local talent pool and recruit top engineers to foster crypto innovation in Toronto.

"I'm thrilled that Ripple is putting down roots in Toronto where we know the company will be able to benefit from the highly skilled technical talent, booming ecosystem, and competitive economic advantages the Region offers," said John Tory, Mayor of the City of Toronto. "Ripple's globally-renowned, innovative technology, and first-mover attitude will be a perfect fit for the diverse, entrepreneurial, and committed spirit of Toronto."

We have strong ties to the Toronto community through our University Blockchain Research Initiative (UBRI) and in working with top-tier universities and colleges such as the University of Waterloo and Toronto Metropolitan University. Together with UBRI supporting leading research in several key areas of blockchain and crypto technology, we can provide students with opportunities to acquire strong technical skills.

"Through our UBRI partnership, graduate students are trained on the latest in blockchain and its underlying fundamentals so they're well-prepared to enter the workforce," says Professor Anwar Hasan of the University of Waterloo. "Ripple and UBRI have played an important role in helping to foster talent within the University and, with the opening of its Toronto office, we're excited to see our graduates continue their blockchain journey in a city with a fast-growing tech scene."

"The Cybersecurity Research Lab of Toronto Metropolitan University is helping to accelerate breakthrough innovations in blockchain and digital payments happening within Toronto through its partnership with UBRI," says Dr. Atefeh (Atty) Mashatan, Canada Research Chair and Associate Professor, Information Technology Management of Toronto Metropolitan University. "Ripple has shown that it's committed to providing the tools and resources needed to lead the way in these efforts, and we look forward to seeing the impact that the opening of its Toronto office will have on the continuation of these innovations in the future."

Discover how you can become a part of Ripple's blockchain journey.


Quantum Error Correction: Time to Make It Work – IEEE Spectrum

Dates chiseled into an ancient tombstone have more in common with the data in your phone or laptop than you may realize. They both involve conventional, classical information, carried by hardware that is relatively immune to errors. The situation inside a quantum computer is far different: The information itself has its own idiosyncratic properties, and compared with standard digital microelectronics, state-of-the-art quantum-computer hardware is more than a billion trillion times as likely to suffer a fault. This tremendous susceptibility to errors is the single biggest problem holding back quantum computing from realizing its great promise.

Fortunately, an approach known as quantum error correction (QEC) can remedy this problem, at least in principle. A mature body of theory built up over the past quarter century now provides a solid theoretical foundation, and experimentalists have demonstrated dozens of proof-of-principle examples of QEC. But these experiments still have not reached the level of quality and sophistication needed to reduce the overall error rate in a system.

The two of us, along with many other researchers involved in quantum computing, are trying to move definitively beyond these preliminary demos of QEC so that it can be employed to build useful, large-scale quantum computers. But before describing how we think such error correction can be made practical, we need to first review what makes a quantum computer tick.

"Information is physical." This was the mantra of the distinguished IBM researcher Rolf Landauer. Abstract though it may seem, information always involves a physical representation, and the physics matters.

Conventional digital information consists of bits, zeros and ones, which can be represented by classical states of matter, that is, states well described by classical physics. Quantum information, by contrast, involves qubitsquantum bitswhose properties follow the peculiar rules of quantum mechanics.

A classical bit has only two possible values: 0 or 1. A qubit, however, can occupy a superposition of these two information states, taking on characteristics of both. Polarized light provides intuitive examples of superpositions. You could use horizontally polarized light to represent 0 and vertically polarized light to represent 1, but light can also be polarized on an angle and then has both horizontal and vertical components at once. Indeed, one way to represent a qubit is by the polarization of a single photon of light.

These ideas generalize to groups of n bits or qubits: n bits can represent any one of 2^n possible values at any moment, while n qubits can include components corresponding to all 2^n classical states simultaneously in superposition. These superpositions provide a vast range of possible states for a quantum computer to work with, albeit with limitations on how they can be manipulated and accessed. Superposition of information is a central resource used in quantum processing and, along with other quantum rules, enables powerful new ways to compute.
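In standard quantum-information notation (added here for concreteness; the article itself stays with the verbal description), a single qubit and an n-qubit register can be written as:

```latex
% One qubit: a superposition of the two classical values
\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1

% n qubits: amplitudes over all 2^n classical bit strings at once
\lvert \Psi \rangle = \sum_{x \in \{0,1\}^n} c_x \lvert x \rangle,
\qquad \sum_x |c_x|^2 = 1
```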

Researchers are experimenting with many different physical systems to hold and process quantum information, including light, trapped atoms and ions, and solid-state devices based on semiconductors or superconductors. For the purpose of realizing qubits, all these systems follow the same underlying mathematical rules of quantum physics, and all of them are highly sensitive to environmental fluctuations that introduce errors. By contrast, the transistors that handle classical information in modern digital electronics can reliably perform a billion operations per second for decades with a vanishingly small chance of a hardware fault.

Of particular concern is the fact that qubit states can roam over a continuous range of superpositions. Polarized light again provides a good analogy: The angle of linear polarization can take any value from 0 to 180 degrees.

Pictorially, a qubit's state can be thought of as an arrow pointing to a location on the surface of a sphere. Known as a Bloch sphere, its north and south poles represent the binary states 0 and 1, respectively, and all other locations on its surface represent possible quantum superpositions of those two states. Noise causes the Bloch arrow to drift around the sphere over time. A conventional computer represents 0 and 1 with physical quantities, such as capacitor voltages, that can be locked near the correct values to suppress this kind of continuous wandering and unwanted bit flips. There is no comparable way to lock the qubit's arrow to its correct location on the Bloch sphere.
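The usual parametrization of that arrow (again standard textbook notation, not something given in the article) makes the continuous nature of the problem explicit: two angles place the state anywhere on the sphere's surface, so there is a continuum of ways for noise to nudge it:

```latex
\lvert \psi \rangle = \cos\frac{\theta}{2}\,\lvert 0 \rangle
  + e^{i\varphi}\sin\frac{\theta}{2}\,\lvert 1 \rangle,
\qquad 0 \le \theta \le \pi,\; 0 \le \varphi < 2\pi
```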

Early in the 1990s, Landauer and others argued that this difficulty presented a fundamental obstacle to building useful quantum computers. The issue is known as scalability: Although a simple quantum processor performing a few operations on a handful of qubits might be possible, could you scale up the technology to systems that could run lengthy computations on large arrays of qubits? A type of classical computation called analog computing also uses continuous quantities and is suitable for some tasks, but the problem of continuous errors prevents the complexity of such systems from being scaled up. Continuous errors with qubits seemed to doom quantum computers to the same fate.

We now know better. Theoreticians have successfully adapted the theory of error correction for classical digital data to quantum settings. QEC makes scalable quantum processing possible in a way that is impossible for analog computers. To get a sense of how it works, it's worthwhile to review how error correction is performed in classical settings.

Simple schemes can deal with errors in classical information. For instance, in the 19th century, ships routinely carried clocks for determining the ship's longitude during voyages. A good clock that could keep track of the time in Greenwich, in combination with the sun's position in the sky, provided the necessary data. A mistimed clock could lead to dangerous navigational errors, though, so ships often carried at least three of them. Two clocks reading different times could detect when one was at fault, but three were needed to identify which timepiece was faulty and correct it through a majority vote.

The use of multiple clocks is an example of a repetition code: Information is redundantly encoded in multiple physical devices such that a disturbance in one can be identified and corrected.

As you might expect, quantum mechanics adds some major complications when dealing with errors. Two problems in particular might seem to dash any hopes of using a quantum repetition code. The first problem is that measurements fundamentally disturb quantum systems. So if you encoded information on three qubits, for instance, observing them directly to check for errors would ruin them. Like Schrödinger's cat when its box is opened, their quantum states would be irrevocably changed, spoiling the very quantum features your computer was intended to exploit.

The second issue is a fundamental result in quantum mechanics called the no-cloning theorem, which tells us it is impossible to make a perfect copy of an unknown quantum state. If you know the exact superposition state of your qubit, there is no problem producing any number of other qubits in the same state. But once a computation is running and you no longer know what state a qubit has evolved to, you cannot manufacture faithful copies of that qubit except by duplicating the entire process up to that point.

Fortunately, you can sidestep both of these obstacles. We'll first describe how to evade the measurement problem using the example of a classical three-bit repetition code. You don't actually need to know the state of every individual code bit to identify which one, if any, has flipped. Instead, you ask two questions: "Are bits 1 and 2 the same?" and "Are bits 2 and 3 the same?" These are called parity-check questions because two identical bits are said to have even parity, and two unequal bits have odd parity.

The two answers to those questions identify which single bit has flipped, and you can then counterflip that bit to correct the error. You can do all this without ever determining what value each code bit holds. A similar strategy works to correct errors in a quantum system.
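Here is a minimal Python sketch of that classical decoding step (purely illustrative; the article describes the logic only in words). Note that the correction uses only the two parity answers, never the bit values themselves:

```python
def encode(bit):
    """Encode one logical bit as three identical physical bits."""
    return [bit, bit, bit]

def correct(bits):
    """Identify and flip at most one errant bit using parity checks only."""
    p12 = bits[0] ^ bits[1]   # "Are bits 1 and 2 the same?" (0 = same, 1 = different)
    p23 = bits[1] ^ bits[2]   # "Are bits 2 and 3 the same?"
    if p12 and not p23:       # bit 1 disagrees with the other two
        bits[0] ^= 1
    elif p12 and p23:         # bit 2 disagrees
        bits[1] ^= 1
    elif p23:                 # bit 3 disagrees
        bits[2] ^= 1
    return bits

codeword = encode(1)
codeword[2] ^= 1              # simulate a single bit-flip error
assert correct(codeword) == [1, 1, 1]
```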

Learning the values of the parity checks still requires quantum measurement, but importantly, it does not reveal the underlying quantum information. Additional qubits can be used as disposable resources to obtain the parity values without revealing (and thus without disturbing) the encoded information itself.


What about no-cloning? It turns out it is possible to take a qubit whose state is unknown and encode that hidden state in a superposition across multiple qubits in a way that does not clone the original information. This process allows you to record what amounts to a single logical qubit of information across three physical qubits, and you can perform parity checks and corrective steps to protect the logical qubit against noise.
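In symbols, the textbook three-qubit bit-flip encoding (shown here for illustration; the article does not write it out) does exactly this:

```latex
\alpha\lvert 0 \rangle + \beta\lvert 1 \rangle
\;\longmapsto\;
\alpha\lvert 000 \rangle + \beta\lvert 111 \rangle
```

The result is a single entangled state, not three independent copies of the original qubit, which is why the construction does not violate the no-cloning theorem.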

Quantum errors consist of more than just bit-flip errors, though, making this simple three-qubit repetition code unsuitable for protecting against all possible quantum errors. True QEC requires something more. That came in the mid-1990s, when Peter Shor (then at AT&T Bell Laboratories, in Murray Hill, N.J.) described an elegant scheme to encode one logical qubit into nine physical qubits by embedding a repetition code inside another code. Shor's scheme protects against an arbitrary quantum error on any one of the physical qubits.

Since then, the QEC community has developed many improved encoding schemes, which use fewer physical qubits per logical qubit (the most compact use five) or enjoy other performance enhancements. Today, the workhorse of large-scale proposals for error correction in quantum computers is called the surface code, developed in the late 1990s by borrowing exotic mathematics from topology and high-energy physics.

It is convenient to think of a quantum computer as being made up of logical qubits and logical gates that sit atop an underlying foundation of physical devices. These physical devices are subject to noise, which creates physical errors that accumulate over time. Periodically, generalized parity measurements (called syndrome measurements) identify the physical errors, and corrections remove them before they cause damage at the logical level.

A quantum computation with QEC then consists of cycles of gates acting on qubits, syndrome measurements, error inference, and corrections. In terms more familiar to engineers, QEC is a form of feedback stabilization that uses indirect measurements to gain just the information needed to correct errors.

QEC is not foolproof, of course. The three-bit repetition code, for example, fails if more than one bit has been flipped. What's more, the resources and mechanisms that create the encoded quantum states and perform the syndrome measurements are themselves prone to errors. How, then, can a quantum computer perform QEC when all these processes are themselves faulty?

Remarkably, the error-correction cycle can be designed to tolerate errors and faults that occur at every stage, whether in the physical qubits, the physical gates, or even in the very measurements used to infer the existence of errors! Called a fault-tolerant architecture, such a design permits, in principle, error-robust quantum processing even when all the component parts are unreliable.

A long quantum computation will require many cycles of quantum error correction (QEC). Each cycle would consist of gates acting on encoded qubits (performing the computation), followed by syndrome measurements from which errors can be inferred, and corrections. The effectiveness of this QEC feedback loop can be greatly enhanced by including quantum-control techniques (represented by the thick blue outline) to stabilize and optimize each of these processes.

Even in a fault-tolerant architecture, the additional complexity introduces new avenues for failure. The effect of errors is therefore reduced at the logical level only if the underlying physical error rate is not too high. The maximum physical error rate that a specific fault-tolerant architecture can reliably handle is known as its break-even error threshold. If error rates are lower than this threshold, the QEC process tends to suppress errors over the entire cycle. But if error rates exceed the threshold, the added machinery just makes things worse overall.
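A commonly quoted rule of thumb for surface-code-style schemes (an assumption added here; the article gives no formula) captures this behavior: below threshold, the logical error rate per cycle falls roughly as a power of the margin,

```latex
p_{\mathrm{logical}} \;\approx\; A \left(\frac{p}{p_{\mathrm{th}}}\right)^{\lfloor (d+1)/2 \rfloor}
```

where p is the physical error rate, p_th is the threshold, d is the code distance (which grows with the number of physical qubits devoted to each logical qubit), and A is a constant of order one. Operating just barely below threshold therefore buys very little, while a physical rate well below threshold lets modest code distances suppress logical errors dramatically.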

The theory of fault-tolerant QEC is foundational to every effort to build useful quantum computers because it paves the way to building systems of any size. If QEC is implemented effectively on hardware exceeding certain performance requirements, the effect of errors can be reduced to arbitrarily low levels, enabling the execution of arbitrarily long computations.

At this point, you may be wondering how QEC has evaded the problem of continuous errors, which is fatal for scaling up analog computers. The answer lies in the nature of quantum measurements.

In a typical quantum measurement of a superposition, only a few discrete outcomes are possible, and the physical state changes to match the result that the measurement finds. With the parity-check measurements, this change helps.

Imagine you have a code block of three physical qubits, and one of these qubit states has wandered a little from its ideal state. If you perform a parity measurement, just two results are possible: Most often, the measurement will report the parity state that corresponds to no error, and after the measurement, all three qubits will be in the correct state, whatever it is. Occasionally the measurement will instead indicate the odd parity state, which means an errant qubit is now fully flipped. If so, you can flip that qubit back to restore the desired encoded logical state.

In other words, performing QEC transforms small, continuous errors into infrequent but discrete errors, similar to the errors that arise in digital computers.

Researchers have now demonstrated many of the principles of QEC in the laboratory, from the basics of the repetition code through to complex encodings, logical operations on code words, and repeated cycles of measurement and correction. Current estimates of the break-even threshold for quantum hardware place it at about 1 error in 1,000 operations. This level of performance hasn't yet been achieved across all the constituent parts of a QEC scheme, but researchers are getting ever closer, achieving multiqubit logic with rates of fewer than about 5 errors per 1,000 operations. Even so, passing that critical milestone will be the beginning of the story, not the end.

On a system with a physical error rate just below the threshold, QEC would require enormous redundancy to push the logical rate down very far. It becomes much less challenging with a physical rate further below the threshold. So just crossing the error threshold is not sufficient; we need to beat it by a wide margin. How can that be done?

If we take a step back, we can see that the challenge of dealing with errors in quantum computers is one of stabilizing a dynamic system against external disturbances. Although the mathematical rules differ for the quantum system, this is a familiar problem in the discipline of control engineering. And just as control theory can help engineers build robots capable of righting themselves when they stumble, quantum-control engineering can suggest the best ways to implement abstract QEC codes on real physical hardware. Quantum control can minimize the effects of noise and make QEC practical.

In essence, quantum control involves optimizing how you implement all the physical processes used in QEC, from individual logic operations to the way measurements are performed. For example, in a system based on superconducting qubits, a qubit is flipped by irradiating it with a microwave pulse. One approach uses a simple type of pulse to move the qubit's state from one pole of the Bloch sphere, along the Greenwich meridian, to precisely the other pole. Errors arise if the pulse is distorted by noise. It turns out that a more complicated pulse, one that takes the qubit on a well-chosen meandering route from pole to pole, can result in less error in the qubit's final state under the same noise conditions, even when the new pulse is imperfectly implemented.

One facet of quantum-control engineering involves careful analysis and design of the best pulses for such tasks in a particular imperfect instance of a given system. It is a form of open-loop (measurement-free) control, which complements the closed-loop feedback control used in QEC.

This kind of open-loop control can also change the statistics of the physical-layer errors to better comport with the assumptions of QEC. For example, QEC performance is limited by the worst-case error within a logical block, and individual devices can vary a lot. Reducing that variability is very beneficial. In an experiment our team performed using IBM's publicly accessible machines, we showed that careful pulse optimization reduced the difference between the best-case and worst-case error in a small group of qubits by more than a factor of 10.

Some error processes arise only while carrying out complex algorithms. For instance, crosstalk errors occur on qubits only when their neighbors are being manipulated. Our team has shown that embedding quantum-control techniques into an algorithm can improve its overall success by orders of magnitude. This technique makes QEC protocols much more likely to correctly identify an error in a physical qubit.

For 25 years, QEC researchers have largely focused on mathematical strategies for encoding qubits and efficiently detecting errors in the encoded sets. Only recently have investigators begun to address the thorny question of how best to implement the full QEC feedback loop in real hardware. And while many areas of QEC technology are ripe for improvement, there is also growing awareness in the community that radical new approaches might be possible by marrying QEC and control theory. One way or another, this approach will turn quantum computing into a reality, and you can carve that in stone.

This article appears in the July 2022 print issue as "Quantum Error Correction at the Threshold."


With Too Many Negative Factors to Beat, IonQ’s Valuation Makes it a Sell – InvestorPlace

Source: Amin Van / Shutterstock.com

IonQ (NYSE:IONQ), a company that defines itself as a leader in quantum computing, has seen its shares crash 70.23% in 2022, falling from nearly $17.50 in early January to $5.23 on Jun. 24. The quantum computing firm faces several risks now, and its first-quarter (Q1) 2022 financial results showed that it is generating revenue, but the figure is still not meaningful for a company with a market capitalization of $1.03 billion. Is IONQ stock a buy today after its steep decline this year?

I personally see no fundamental reason in favor of this, as the stock is overpriced. The company made its public debut on Oct. 1, 2021, through a business combination with dMY Technology Group, Inc. III, a special purpose acquisition company.

It must be tough to be the management of IonQ and read a report by Scorpion Capital with severe accusations. Scorpion Capital has called IonQ "a hoax," reporting that it is "a part-time side-hustle run by two academics who barely show up" and "a scam built on phony statements about nearly all key aspects of the technology and business." On top of that, Scorpion Capital mentioned that IonQ has "a useless toy that can't even add 1+1, as revealed by experiments we hired experts to run" and that it "generates fictitious revenue via sham transactions and related-party round-tripping."

These accusations are very strong, but IonQ has responded to them by calling them "important inaccuracies and mischaracterizations regarding IonQ's business and progress to date."

The company is determined to build a quantum future. The report by Scorpion Capital has caused another big problem for IonQ. The company is facing a securities fraud lawsuit.

The securities fraud lawsuit has summarized its allegations, claiming that IonQ had not yet developed a 32-qubit quantum computer and that the firm's 11-qubit quantum computer suffered from significant error rates, rendering it useless. It also states that a significant portion of IonQ's revenue was derived from improper round-tripping transactions with related parties.

So far, things do not look good for IonQ. I am not a lawyer, but I am not excited at all about these accusations.

What about the fundamentals? Can they change the negative opinion formed from the above information?

In its Q1 2022 financial results, IonQ reported revenue of $2 million and a net loss of $4.2 million. The company expects revenue between $2.3 and $2.5 million for Q2 2022.

We are talking about a company with a market capitalization of $1.03 billion. This company is unprofitable and is burning cash. It is too pricey with a current price-to-sales ratio (TTM) of 261.87.

The expectations for IonQ are for revenue growth of 407.11% in 2022, 78.37% in 2023, 178.44% in 2024, and 250.59% in 2025. I am not bullish at all, as the earnings per share projections are for negative 34 cents in 2022, negative 50 cents in 2023, negative 61 cents in 2024, and negative 26 cents in 2025. Is it a good idea to wait until 2025 for an unprofitable company to become profitable? I don't think so.

The revenue generated today is not meaningful and the firm is losing money. I see quantum computing as a bet that lacks sense based on financial results and on valuation. I will totally skip it.

On the date of publication, Stavros Georgiadis, CFA did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Stavros Georgiadis is a CFA charter holder, an Equity Research Analyst, and an Economist. He focuses on U.S. stocks and has his own stock market blog at thestockmarketontheinternet.com. He has written in the past various articles for other publications and can be reached on Twitter and on LinkedIn.


Alan Turing’s Everlasting Contributions to Computing, AI and Cryptography – NIST

An enigma machine on display outside the Alan Turing Institute entrance inside the British Library, London.

Credit: Shutterstock/William Barton

Suppose someone asked you to devise the most powerful computer possible. Alan Turing, whose reputation as a central figure in computer science and artificial intelligence has only grown since his untimely death in 1954, applied his genius to problems such as this one in an age before computers as we know them existed. His theoretical work on this problem and others remains a foundation of computing, AI and modern cryptographic standards, including those NIST recommends.

The road from devising the most powerful computer possible to cryptographic standards has a few twists and turns, as does Turing's brief life.

Alan Turing

Credit: National Portrait Gallery, London

In Turing's time, mathematicians debated whether it was possible to build a single, all-purpose machine that could solve all problems that are computable. For example, we can compute a car's most energy-efficient route to a destination, and (in principle) the most likely way in which a string of amino acids will fold into a three-dimensional protein. Another example of a computable problem, important to modern encryption, is whether or not bigger numbers can be expressed as the product of two smaller numbers. For example, 6 can be expressed as the product of 2 and 3, but 7 cannot be factored into smaller integers and is therefore a prime number.
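That factoring question is computable by a very direct, if slow, procedure. A small Python sketch, added here purely for illustration:

```python
# Trial division: decide whether n is a product of two smaller integers.
def smallest_factor(n):
    """Return the smallest factor 1 < f < n, or None if n is prime."""
    for f in range(2, int(n ** 0.5) + 1):
        if n % f == 0:
            return f
    return None

print(smallest_factor(6))   # 2, since 6 = 2 * 3
print(smallest_factor(7))   # None, so 7 is prime
```

For the enormous numbers used in modern cryptography, no comparably efficient method is known, a point this article returns to below.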

Some prominent mathematicians proposed elaborate designs for universal computers that would operate by following very complicated mathematical rules. It seemed overwhelmingly difficult to build such machines. It took the genius of Turing to show that a very simple machine could in fact compute all that is computable.

His hypothetical device is now known as a Turing machine. The centerpiece of the machine is a strip of tape, divided into individual boxes. Each box contains a symbol (such as A, C, T, G for the letters of genetic code) or a blank space. The strip of tape is analogous to today's hard drives that store bits of data. Initially, the string of symbols on the tape corresponds to the input, containing the data for the problem to be solved. The string also serves as the memory of the computer. The Turing machine writes onto the tape data that it needs to access later in the computation.

Credit: NIST

The device reads an individual symbol on the tape and follows instructions on whether to change the symbol or leave it alone before moving to another symbol. The instructions depend on the current state of the machine. For example, if the machine needs to decide whether the tape contains the text string "TC," it can scan the tape in the forward direction while switching among the states "previous letter was T" and "previous letter was not T." If while in state "previous letter was T" it reads a C, it goes to a state "found it" and halts. If it encounters the blank symbol at the end of the input, it goes to the state "did not find it" and halts. Nowadays we would recognize the set of instructions as the machine's program.
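Written as ordinary code, that two-state scan looks like the following (an illustrative sketch, not something from the NIST post):

```python
# Two-state scan for the substring "TC" on a tape of symbols.
def contains_tc(tape):
    state = "previous letter was not T"
    for symbol in tape:
        if state == "previous letter was T" and symbol == "C":
            return "found it"                 # halt
        state = ("previous letter was T" if symbol == "T"
                 else "previous letter was not T")
    return "did not find it"                  # reached end of input; halt

print(contains_tc("GATTCA"))    # found it
print(contains_tc("GATTACA"))   # did not find it
```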

It took some time, but eventually it became clear to everyone that Turing was right: The Turing machine could indeed compute all that seemed computable. No number of additions or extensions to this machine could extend its computing capability.

To understand what can be computed, it is helpful to identify what cannot be computed. In a previous life as a university professor I had to teach programming a few times. Students often encounter the following problem: "My program has been running for a long time; is it stuck?" This is called the Halting Problem, and students often wondered why we simply couldn't detect infinite loops without actually getting stuck in them. It turns out a program to do this is an impossibility. Turing showed that there does not exist a machine that detects whether or not another machine halts. From this seminal result followed many other impossibility results. For example, logicians and philosophers had to abandon the dream of an automated way of detecting whether an assertion (such as whether there are infinitely many prime numbers) is true or false, as that is uncomputable. If you could do this, then you could solve the Halting Problem simply by asking whether the statement "this machine halts" is true or false.
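A minimal sketch of the standard contradiction argument, written in Python (this is the textbook diagonalization idea, not code from the NIST post):

```python
def halts(program, data):
    """Hypothetical oracle: returns True iff program(data) eventually halts."""
    raise NotImplementedError("Turing proved no such function can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # 'program' on its own source.
    if halts(program, program):
        while True:          # loop forever if the oracle says "halts"
            pass
    return "halted"          # halt if the oracle says "loops forever"

# Feeding paradox to itself, paradox(paradox) halts exactly when halts()
# says it does not -- a contradiction, so halts() cannot exist.
```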

Turing went on to make fundamental contributions to AI, theoretical biology and cryptography. His involvement with this last subject brought him honor and fame during World War II, when he played a very important role in adapting and extending cryptanalytic techniques invented by Polish mathematicians. This work broke the German Enigma machine encryption, making a significant contribution to the war effort.

Turing was gay. After the war, in 1952, the British government convicted him for having sex with a man. He stayed out of jail only by submitting to what is now called chemical castration. He died in 1954 at age 41 by cyanide poisoning, which was initially ruled a suicide but may have been an accident according to subsequent analysis. More than 50 years would pass before the British government apologized and pardoned him (after years of campaigning by scientists around the world). Today, the highest honor in computer science is called the Turing Award.

Turing's computability work provided the foundation for modern complexity theory. This theory tries to answer the question "Among those problems that can be solved by a computer, which ones can be solved efficiently?" Here, "efficiently" means not in billions of years but in milliseconds, seconds, hours or days, depending on the computational problem.

For example, much of the cryptography that currently safeguards our data and communications relies on the belief that certain problems, such as decomposing an integer number into its prime factors, cannot be solved before the Sun turns into a red giant and consumes the Earth (currently forecast for 4 billion to 5 billion years). NIST is responsible for cryptographic standards that are used throughout the world. We could not do this work without complexity theory.

Technology sometimes throws us a curve, such as the discovery that if a sufficiently big and reliable quantum computer is built it would be able to factor integers, thus breaking some of our cryptography. In this situation, NIST scientists must rely on the world's experts (many of them in-house) in order to update our standards. There are deep reasons to believe that quantum computers will not be able to break the cryptography that NIST is about to roll out. Among these reasons is that Turing's machine can simulate quantum computers. This implies that complexity theory gives us limits on what a powerful quantum computer can do.

But that is a topic for another day. For now, we can celebrate how Turing provided the keys to much of today's computing technology and even gave us hints on how to solve looming technological problems.


Peer Software and Pulsar Security Announce Strategic Alliance to Enhance Ransomware and Malware Detection Across Heterogenous, On-Premises and Cloud…

CENTREVILLE, Va.--(BUSINESS WIRE)--Peer Software today announced the formation of a strategic alliance with Pulsar Security. Through the alliance, Peer Software will leverage Pulsar Security's team of cyber security experts to continuously monitor and analyze emerging and evolving ransomware and malware attack patterns on unstructured data.

PeerGFS, an enterprise-class software solution that eases the deployment of a modern distributed file system across multi-site, on-premises and cloud storage, will utilize these attack patterns to enable an additional layer of cyber security detection and response. These capabilities will enhance the Malicious Event Detection (MED) feature incorporated in PeerGFS.

"Each ransomware and malware attack is encoded to infiltrate and propagate through a storage system in a unique manner that gives it a digital fingerprint," said Duane Laflotte, CTO, Pulsar Security. "By understanding the unique behavior patterns of ransomware and malware attacks and matching these against the real-time file event streams that PeerGFS collects across the distributed file system, Peer can now empower its customers with an additional layer of fast and efficient cyber security monitoring. We are excited to be working with Peer Software on this unique capability."
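To make the idea concrete, here is a purely hypothetical Python sketch of matching a file-event stream against a known behavioral fingerprint. None of the names or patterns below come from PeerGFS or Pulsar Security; this only illustrates the general detection concept described in the quote above:

```python
from collections import deque

# Hypothetical fingerprint: the ordered sequence of file actions an attack
# tends to produce as it encrypts and replaces files.
RANSOMWARE_PATTERN = ["read", "write_encrypted", "rename", "delete_original"]

def detect(event_stream, pattern=RANSOMWARE_PATTERN):
    """Scan (path, action) events and flag when the recent actions match the pattern."""
    recent = deque(maxlen=len(pattern))
    for path, action in event_stream:
        recent.append(action)
        if list(recent) == pattern:
            return f"alert: suspicious activity around {path}"
    return "no match"

events = [("report.doc", "read"), ("report.doc", "write_encrypted"),
          ("report.doc", "rename"), ("report.doc", "delete_original")]
print(detect(events))   # alert: suspicious activity around report.doc
```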

As part of the agreement, Pulsar Security will also work with Peer Software to educate and inform enterprise customers on emerging trends in cyber security, and how to harden their systems against attacks through additional services like penetration testing, vulnerability assessments, dark web assessments, phishing simulations, red teaming, and wireless intrusion prevention.

"Ransomware attacks have become so common that almost every storage infrastructure architecture plan now also requires a cyber security discussion," said Jimmy Tam, CEO, Peer Software. "But whereas other storage-based ransomware protection strategies have focused mainly on the recovery from an attack, Peer Software's goal in working with Pulsar Security is to prioritize the early detection of an attack and limiting the spread in order to minimize damage, speed recovery, and keep data continuously available for the business."

About Peer Software

Peer Software's mission is to simplify file management and orchestration for enterprise organizations. IT administrators constantly face the unenviable task of trying to architect, build and operate resilient, highly available 24/7 global operations while simultaneously striving to add flexibility and agility in their technology choices to quickly adapt to ever-evolving business and technical demands. Through its global file service, storage observability and analytics solutions, Peer helps enterprises meet these challenges across edge, data center, and cloud environments.

About Pulsar Security

Pulsar Security is a team of highly trained and qualified ethical hackers whose job is to leverage cybersecurity experience and proprietary tools to help businesses defend against malicious attacks. Pulsar is a veteran-owned, private business built on vision and trust, whose leadership has extensive military experience, enabling it to think strategically and plan beyond the problems at hand. The team leverages offensive experience to offer solutions designed to help analyze and secure businesses of all sizes. Its industry experience and certifications reveal that its engineers have the industry's most esteemed and advanced on-the-ground experience and cybersecurity credentials.

Follow Peer Software on Twitter and LinkedIn.

Follow Pulsar Security on Twitter and LinkedIn.
