
SiMa.ai Adopts Arm Technology to Deliver a Purpose-built Heterogeneous Machine Learning Compute Platform for the Embedded Edge – Design and Reuse

Licensing agreement enables machine learning intelligence with best-in-class performance and power for robotics, surveillance, autonomous, and automotive applications

SAN JOSE, Calif.-- November 18, 2020 -- SiMa.ai, the machine learning company enabling high performance compute at the lowest power, today announced the adoption of low-power Arm compute technology to build its purpose-built Machine Learning SoC (MLSoC) platform. The licensing of this technology brings machine learning intelligence with best-in-class performance and power to a broad set of embedded edge applications including robotics, surveillance, autonomous, and automotive.

SiMa.ai is adopting Arm Cortex-A and Cortex-M processors optimized for power, throughput efficiency, and safety-critical tasks. In addition, SiMa.ai is leveraging a combination of widely used open-source machine learning frameworks from Arm's vast ecosystem to allow software to seamlessly enable machine learning for legacy applications at the embedded edge.

"Arm is the industry leader in energy-efficient processor design and advanced computing," said Krishna Rangasayee, founder and CEO of SiMa.ai. "The integration of SiMa.ai's high performance and low power machine learning accelerator with Arm technology accelerates our progress in bringing our MLSoC to the market, creating new solutions underpinned by industry-leading IP, the broad Arm ecosystem, and world-class support from its field and development teams."

"From autonomous systems to smart cities, the applications enabled by ML at the edge are delivering increased functionality, leading to more complex device requirements," said Dipti Vachani, senior vice president and general manager, Automotive and IoT Line of Business at Arm. "SiMa.ai is innovating on top of Arm's foundational IP to create a unique low power ML SoC that will provide intelligence to the next generation of embedded edge use cases."

SiMa.ai is strategically leveraging Arm technology to deliver its unique Machine Learning SoC. This includes:

About SiMa.ai

SiMa.ai is a machine learning company enabling high performance compute at the lowest power. Initially focused on solutions for computer vision applications at the embedded edge, the company is led by a team of technology experts committed to delivering the industry's highest frames per second per watt solution to its customers. To learn more, visit http://www.sima.ai.

See the rest here:
SiMa.ai Adopts Arm Technology to Deliver a Purpose-built Heterogeneous Machine Learning Compute Platform for the Embedded Edge - Design and Reuse


Quantum computer race intensifies as alternative technology gains steam – Nature.com

  1. Quantum computer race intensifies as alternative technology gains steam  Nature.com
  2. Quantum Computing Market is Expected to Reach $2.2 Billion by 2026  GlobeNewswire
  3. Quantum Computing Market 2020 Size, Demand, Share, Opportunities And Forecasts To 2026 | Major Giants ID Quantique, Toshiba Research Europe Ltd, Google,Inc., Microsoft Corporation  re:Jerusalem
  4. Quantum Computing in Aerospace and Defense Market Statistics Shows Revolutionary growth in Coming decade | Want to Know Biggest Opportunity for Growth?  TechnoWeekly

Read the original here:
Quantum computer race intensifies as alternative technology gains steam - Nature.com


Quantum computing now is a bit like SQL was in the late 80s: Wild and wooly and full of promise – ZDNet

Quantum computing is bright and shiny, with demonstrations by Google suggesting a kind of transcendent ability to scale beyond the heights of known problems.

But there's a real bummer in store for anyone with their head in the clouds: All that glitters is not gold, and there's a lot of hard work to be done on the way to someday computing NP-hard problems.

"ETL, if you get that wrong in this flow-based programming, if you get the data frame wrong, it's garbage in, garbage out," according to Christopher Savoie, who is the CEO and a co-founder of a three-year-old startup Zapata Computing of Boston, Mass.

"There's this naive idea you're going to show up with this beautiful quantum computer, and just drop it in your data center, and everything is going to be solved it's not going to work that way," said Savoie, in a video interview with ZDNet. "You really have to solve these basic problems."

"There's this naive idea you're going to show up with this beautiful quantum computer, and just drop it in your data center, and everything is going to be solved it's not going to work that way," said Savoie, in a video interview with ZDNet. "You really have to solve these basic problems."

Zapata sells a programming tool for quantum computing, called Orquestra. It can let developers invent algorithms to be run on real quantum hardware, such as Honeywell's trapped-ion computer.

But most of the work of quantum today is not writing pretty algorithms, it's just making sure data is not junk.

"Ninety-five percent of the problem is data cleaning," Savoie told ZDNet in a video interview. "There wasn't any great toolset out there, so that's why we created Orquestra to do this."

The company on Thursday announced it has received a Series B round of investment totaling $38 million from large investors that include Honeywell's venture capital outfit and returning Series A investors Comcast Ventures, Pitango, and Prelude Ventures, among others. The company has now raised $64.4 million.

Also: Honeywell introduces quantum computing as a service with subscription offering

Zapata was spun out of Harvard University in 2017 by scholars including Alán Aspuru-Guzik, who has done fundamental work on quantum. But a lot of what is coming up is the mundane matter of data prep and other gotchas that can be a nightmare in a bold new world of only partially-understood hardware.

Things such as extract, transform, load (ETL) become maddening when prepping a quantum workload.

"We had a customer who thought they had a compute problem because they had a job that was taking a long time; it turned out, when we dug in, just parallelizing the workflow, the ETL, gave them a compute advantage," recalled Savoie.

Such pitfalls are things, said Savoie, that companies don't know are an issue until they get ready to spend valuable time on a quantum computer and code doesn't run as expected.

"That's why we think it's critical for companies to start now," he said, even though today's noisy intermediate-scale quantum, or NISQ, machines have only a handful of qubits.

"You have to solve all these basic problems we really haven't even solved yet in classical computing," said Savoie.

The present moment in the young field of quantum sounds a bit like the early days of microcomputer-based relational databases. And, in fact, Savoie likes to make an analogy to the era of the 1980s and 1990s, when Oracle database was taking over workloads from IBM's DB2.

Also: What the Google vs. IBM debate over quantum supremacy means

"Oracle is a really good analogy, he said. "Recall when SQL wasn't even a thing, and databases had to be turned on a per-on-premises, as-a-solution basis; how do I use a database versus storage, and there weren't a lot of tools for those things, and every installment was an engagement, really," recalled Savoie.

"There are a lot of close analogies to that" with today's quantum, said Savoie. "It's enterprise, it's tough problems, it's a lot of big data, it's a lot of big compute problems, and we are the software company sitting in the middle of all that with a lot of tools that aren't there yet."

Mind you, Savoie is a big believer in quantum's potential, despite pointing out all the challenges. He has seen how technologies can get stymied, but also how they ultimately triumph. He helped found startup Dejima, one of the companies that became a component of Apple's Siri voice assistant, in 1998. Dejima didn't produce an AI wave, it sold out to database giant Sybase.

"We invented this natural language understanding engine, but we didn't have the great SpeechWorks engine, we didn't have 3G, never mind 4G cell phones or OLED displays," he recalled. "It took ten years from 1998 till it was a product, till it was Siri, so I've seen this movie before I've been in that movie."

But the technology of NLP did survive and is now thriving. Similarly, the basic science of quantum, as with the basic science of NLP, is real, is validated. "Somebody is going to be the iPhone" of quantum, he said, although along the way there may be a couple of Apple Newtons, too, he quipped.

Even an Apple Newton of quantum will be a breakthrough. "It will be solving real problems," he said.

Also: All that glitters is not quantum AI

In the meantime, handling the complexity that's cropping up now, with things like ETL, suggests there's a role for a young company that can be for quantum what Oracle was for structured query language.

"You build that out, and you have best practices, and you can become a great company, and that's what we aspire to," he said.

Zapata has fifty-eight employees, has had contract revenue since its first year of operations, and that revenue has doubled each year, said Savoie.

Originally posted here:
Quantum computing now is a bit like SQL was in the late 80s: Wild and wooly and full of promise - ZDNet


Construction begins for Duke University’s new quantum computing center – WRAL Tech Wire

DURHAM -- Construction is currently underway on a 10,000-square-foot expansion of Duke's existing quantum computing center in the Chesterfield Building, a former cigarette factory in downtown Durham.

The new space will house what is envisioned to be a world-beating team of quantum computing scientists. The DQC, Duke Quantum Center, is expected to be online in March 2021 and is one of five new quantum research centers to be supported by a recently announced $115 million grant from the U.S. Department of Energy.

The Error-corrected Universal Reconfigurable Ion-trap Quantum Archetype, or EURIQA, is the first generation of an evolving line of quantum computers that will be available to users in Duke's Scalable Quantum Computing Laboratory, or SQLab. The machine was built with funding from IARPA, the U.S. government's Intelligence Advanced Research Projects Activity. The SQLab intends to offer programmable, reconfigurable quantum computing capability to engineers, physicists, chemists, mathematicians or anyone who comes forward with a complex optimization problem they'd like to try on a 20-qubit system.

Unlike the quantum systems that are now accessible in the cloud, the renamed Duke Quantum Archetype, DQA, will be customized for each research problem and users will have open access to its guts, a more academic approach to solving quantum riddles.


See the original post here:
Construction begins for Duke University's new quantum computing center - WRAL Tech Wire


Is Now the Time to Start Protecting Government Data from Quantum Hacking? – Nextgov

My previous column about the possibility of pairing artificial intelligence with quantum computing to supercharge both technologies generated a storm of feedback via Twitter and email. Quantum computing is a science that is still somewhat misunderstood, even by scientists working on it, but might one day be extremely powerful. And artificial intelligence has some scary undertones with quite a few trust issues. So I understand the reluctance that people have when considering this marriage of technologies.

Unfortunately, we don't really get a say in this. The avalanche has already started, so it's too late for all of us pebbles to vote against it. All we can do now is deal with the practical ramifications of these recent developments. The most critical right now is protecting government encryption from the possibility of quantum hacking.

Two years ago I warned that government data would soon be vulnerable to quantum hacking, whereby a quantum machine could easily shred the current AES encryption used to protect our most sensitive information. Government agencies like NIST have been working for years on developing quantum-resistant encryption schemes. But adding AI to a quantum computer might be the tipping point needed to give quantum the edge, while most of the quantum-resistant encryption protections are still being slowly developed. At least, that is what I thought.

One of the people who contacted me after my last article was Andrew Cheung, the CEO of 01 Communique Laboratory and IronCAP. They have a product available right now which can add quantum-resistant encryption to any email. Called IronCAP X, it's available for free for individual users, so anyone can start protecting their email from the threat of quantum hacking right away. In addition to downloading the program to test, I spent about an hour interviewing Cheung about how quantum-resistant encryption works, and how agencies can keep their data protection one step ahead of some of the very same quantum computers they are helping to develop.

For Cheung, the road to quantum-resistant encryption began over 10 years ago, long before anyone was seriously engineering a quantum computer. "It almost felt like we were developing a bulletproof vest before anyone had created a gun," Cheung said.

But the science of quantum-resistant encryption has actually been around for over 40 years, Cheung said. It was just never specifically called that. "People would ask how we could develop encryption that would survive hacking by a really fast computer," he said. "At first, nobody said the word quantum, but that is what we were ultimately working against."

According to Cheung, the key to creating quantum-resistant encryption is to get away from the core strength of computers in general, which is mathematics. He explained that RSA encryption used by the government today is fundamentally based on prime number factorization, where if you multiply two prime numbers together, the result is a number that can only be broken down into those primes. Breaking encryption involves trying to find those primes by trial and error.

So if you have a number like 21, then almost anyone can use factorization to quickly break it down and find its prime numbers, which are three and seven. If you have a number like 221, then it takes a little bit longer for a human to come up with 13 and 17 as its primes, though a computer can still do that almost instantaneously. But if you have something like a 500 digit number, then it would take a supercomputer more than a century to find its primes and break the related encryption. The fear is that quantum computers, because of the strange way they operate, could one day do that a lot more quickly.
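To make the scaling point concrete, here is a minimal trial-division sketch in Python (my own illustration, not from the article); it factors the 21 and 221 examples from the text instantly, but the same brute-force search becomes hopeless at the hundreds of digits used in real encryption keys.

```python
# Minimal trial-division factoring sketch (illustrative only; real RSA moduli
# are hundreds of digits long, far beyond this brute-force search).

def smallest_prime_factor(n: int) -> int:
    """Return the smallest prime factor of n by trial and error."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

for n in (21, 221, 2021):
    p = smallest_prime_factor(n)
    print(f"{n} = {p} x {n // p}")
# 21 = 3 x 7 and 221 = 13 x 17 are found instantly, but the search space
# grows so quickly with the number of digits that a 500-digit number is
# out of reach for classical trial and error.
```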

To make it more difficult for quantum machines, or any other kind of fast computer, Cheung and his company developed an encryption method based on binary Goppa code. The code was named for the renowned Russian mathematician who invented it, Valerii Denisovich Goppa, and was originally intended to be used as an error-correcting code to improve the reliability of information being transmitted over noisy channels. The IronCAP program intentionally introduces errors into the information it's protecting, and then authorized users can employ a special algorithm to decrypt it, but only if they have the private key so that the numerous errors can be removed and corrected.

What makes encryption based on binary Goppa code so powerful against quantum hacking is that you can't use math to guess at where or how the errors have been induced into the protected information. Unlike encryption based on prime number factorization, there isn't a discernible pattern, and there's no way to brute force guess at how to remove the errors. According to Cheung, a quantum machine, or any other fast system like a traditional supercomputer, can't be programmed to break the encryption because there is no system for it to use to begin its guesswork.
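The article does not describe IronCAP's internals, but the classic public-key construction built on error-correcting codes is McEliece, which uses binary Goppa codes in exactly the way Cheung describes: the sender adds deliberate errors, and only the holder of the private key can strip them out. The toy Python sketch below is my own hedged illustration of that idea, with a tiny [7,4] Hamming code (corrects a single bit error) standing in for a real Goppa code; the matrices, permutation, and sizes are illustrative assumptions, not IronCAP's actual parameters.

```python
import numpy as np

# Toy McEliece-style demo: a [7,4] Hamming code stands in for the binary
# Goppa codes used in practice, which correct many errors and lead to the
# large public keys mentioned later in the article.

G = np.array([[1, 0, 0, 0, 1, 1, 0],   # generator matrix (systematic form)
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

# Private key: a scrambling matrix S (chosen here to be its own inverse mod 2)
# and a column permutation perm. Public key: the disguised generator G_pub.
S = np.array([[1, 1, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]])
perm = np.array([2, 0, 3, 6, 1, 5, 4])
SG = (S @ G) % 2
G_pub = SG[:, perm]

def encrypt(m, rng):
    """Encode m with the public generator and add one deliberate error."""
    e = np.zeros(7, dtype=int)
    e[rng.integers(7)] = 1
    return (m @ G_pub + e) % 2

def decrypt(c):
    """Undo the permutation, correct the error, then undo the scrambling."""
    y = np.empty(7, dtype=int)
    y[perm] = c                                    # undo column permutation
    s = (H @ y) % 2                                # syndrome locates the error
    if s.any():
        j = np.where((H.T == s).all(axis=1))[0][0]
        y[j] ^= 1                                  # flip the erroneous bit
    return (y[:4] @ S) % 2                         # y[:4] equals m @ S; S is self-inverse

rng = np.random.default_rng(0)
m = np.array([1, 0, 1, 1])
c = encrypt(m, rng)
print("message", m, "-> ciphertext", c, "-> recovered", decrypt(c))
```

Without the private S and perm, an attacker sees only a scrambled code plus random-looking errors, which is the "no pattern to guess at" property described above.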

A negative aspect to binary Goppa code encryption, and also one of the reasons why Cheung says the protection method is not more popular today, is the size of the encryption key. Whether you are encrypting a single character or a terabyte of information, the key size is going to be about 250 kilobytes, which is huge compared with the typical 4 kilobyte key size for AES encryption. Even ten years ago, that might have posed a problem for many computers and communication methods, though it seems tiny compared with file sizes today. Still, it's one of the main reasons why AES won out as the standard encryption format, Cheung says.

I downloaded the free IronCAP X application and easily integrated it into Microsoft Outlook. Using the application was extremely easy, and the encryption process itself when employing it to protect an email is almost instantaneous, even utilizing the limited power of an average desktop. And while I don't have access to a quantum computer to test its resilience against quantum hacking, I did try to extract the information using traditional methods. I can confirm that the data is just unreadable gibberish with no discernible pattern to unauthorized users.

Cheung says that binary Goppa code encryption that can resist quantum hacking can be deployed right now on the same servers and infrastructure that agencies are already using. It would just be a matter of switching things over to the new method. With quantum computers evolving and improving so rapidly these days, Cheung believes that there is little time to waste.

"Yes, making the switch in encryption methods will be a little bit of a chore," he said. "But with new developments in quantum computing coming every day, the question is whether you want to maybe deploy quantum-resistant encryption two years too early, or risk installing it two years too late."

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys

Read this article:
Is Now the Time to Start Protecting Government Data from Quantum Hacking? - Nextgov


CCNY & partners in quantum algorithm breakthrough | The City College of New York – The City College of New York News

Researchers led by City College of New York physicist Pouyan Ghaemi report the development of a quantum algorithm with the potential to study a class of many-electron quantum systems using quantum computers. Their paper, entitled "Creating and Manipulating a Laughlin-Type ν=1/3 Fractional Quantum Hall State on a Quantum Computer with Linear Depth Circuits," appears in the December issue of PRX Quantum, a journal of the American Physical Society.

"Quantum physics is the fundamental theory of nature which leads to formation of molecules and the resulting matter around us," said Ghaemi, assistant professor in CCNY's Division of Science. "It is already known that when we have a macroscopic number of quantum particles, such as electrons in the metal, which interact with each other, novel phenomena such as superconductivity emerge."

However, until now, according to Ghaemi, tools to study systems with large numbers of interacting quantum particles and their novel properties have been extremely limited.

"Our research has developed a quantum algorithm which can be used to study a class of many-electron quantum systems using quantum computers. Our algorithm opens a new venue to use the new quantum devices to study problems which are quite challenging to study using classical computers. Our results are new and motivate many follow-up studies," added Ghaemi.

On possible applications for this advancement, Ghaemi, who is also affiliated with the Graduate Center, CUNY, noted: "Quantum computers have witnessed extensive developments during the last few years. Development of new quantum algorithms, regardless of their direct application, will contribute to realize applications of quantum computers."

"I believe the direct application of our results is to provide tools to improve quantum computing devices. Their direct real-life application would emerge when quantum computers can be used for daily life applications."

His collaborators included scientists from Western Washington University; the University of California, Santa Barbara; Google AI Quantum; and the University of Michigan, Ann Arbor.

About the City College of New York

Since 1847, The City College of New York has provided a high-quality and affordable education to generations of New Yorkers in a wide variety of disciplines. CCNY embraces its position at the forefront of social change. It is ranked #1 by the Harvard-based Opportunity Insights out of 369 selective public colleges in the United States on the overall mobility index. This measure reflects both access and outcomes, representing the likelihood that a student at CCNY can move up two or more income quintiles. In addition, the Center for World University Rankings places CCNY in the top 1.8% of universities worldwide in terms of academic excellence. Labor analytics firm Emsi puts CCNY's annual economic impact on the regional economy (5 boroughs and 5 adjacent counties) at $1.9 billion and quantifies the dollar-for-dollar return on investment to students, taxpayers and society. At City College, more than 16,000 students pursue undergraduate and graduate degrees in eight schools and divisions, driven by significant funded research, creativity and scholarship. CCNY is as diverse, dynamic and visionary as New York City itself. View CCNY Media Kit.

Read this article:
CCNY & partners in quantum algorithm breakthrough | The City College of New York - The City College of New York News


Quantum Computing in Aerospace and Defense Market Forecast to 2028: How it is Going to Impact on Global Industry to Grow in Near Future – Eurowire

Quantum Computing in Aerospace and Defense Market 2020: Latest Analysis:

The most recent Quantum Computing in Aerospace and Defense Market Research study includes some significant activities of the current market size for the worldwide Quantum Computing in Aerospace and Defense market. It presents a point by point analysis dependent on the exhaustive research of the market elements like market size, development situation, potential opportunities, and operation landscape and trend analysis. This report centers around the Quantum Computing in Aerospace and Defense-business status, presents volume and worth, key market, product type, consumers, regions, and key players.

Sample Copy of This Report @ https://www.quincemarketinsights.com/request-sample-29723?utm_source=Eurowire/komal

The prominent players covered in this report: D-Wave Systems Inc, Qxbranch LLC, IBM Corporation, Cambridge Quantum Computing Ltd, 1qb Information Technologies Inc., QC Ware Corp., Magiq Technologies Inc., Station Q-Microsoft Corporation, and Rigetti Computing

The market is segmented into By Component (Hardware, Software, Services), By Application (QKD, Quantum Cryptanalysis, Quantum Sensing, Naval).

Geographical segments are North America, Europe, Asia Pacific, Middle East & Africa, and South America.

It offers a wide-ranging analysis of the impact of these advancements on the market's future growth. The research report studies the market in a detailed manner by explaining the key facets of the market that are foreseeable to have a countable stimulus on its developing extrapolations over the forecast period.

Get ToC for the overview of the premium report @ https://www.quincemarketinsights.com/request-toc-29723?utm_source=Eurowire/komal

This is anticipated to drive the Global Quantum Computing in Aerospace and Defense Market over the forecast period. This research report covers the market landscape and its progress prospects in the near future. After studying key companies, the report focuses on the new entrants contributing to the growth of the market. Most companies in the Global Quantum Computing in Aerospace and Defense Market are currently adopting new technological trends in the market.

Finally, the researchers throw light on different ways to discover the strengths, weaknesses, opportunities, and threats affecting the growth of the Global Quantum Computing in Aerospace and Defense Market. The feasibility of the new report is also measured in this research report.

Reasons for buying this report:

Make an Enquiry for purchasing this Report @ https://www.quincemarketinsights.com/enquiry-before-buying/enquiry-before-buying-29723?utm_source=Eurowire/komal

About Us:

QMI has the most comprehensive collection of market research products and services available on the web. We deliver reports from virtually all major publications and refresh our list regularly to provide you with immediate online access to the world's most extensive and up-to-date archive of professional insights into global markets, companies, goods, and patterns.

Contact Us:

Quince Market Insights

Ajay D. (Knowledge Partner)

Office No- A109

Pune, Maharashtra 411028

Phone: APAC +91 706 672 4848 / US +1 208 405 2835 / UK +44 1444 39 0986

Email: [emailprotected]

Web: https://www.quincemarketinsights.com

Follow this link:
Quantum Computing in Aerospace and Defense Market Forecast to 2028: How it is Going to Impact on Global Industry to Grow in Near Future - Eurowire


What’s Next In AI, Chips And Masks – SemiEngineering

Aki Fujimura, chief executive of D2S, sat down with Semiconductor Engineering to talk about AI and Moore's Law, lithography, and photomask technologies. What follows are excerpts of that conversation.

SE: In the eBeam Initiative's recent Luminary Survey, the participants had some interesting observations about the outlook for the photomask market. What were those observations?

Fujimura: In the last couple of years, mask revenues have been going up. Prior to that, mask revenues were fairly steady at around $3 billion per year. Recently, they have gone up beyond the $4 billion level, and they're projected to keep going up. Luminaries believe a component of this increase is because of the shift in the industry toward EUV. One question in the survey asked participants, "What business impact will COVID have on the photomask market?" Some people think it may be negative, but the majority of the people believe that it's not going to have much of an effect or it might have a positive effect. At a recent eBeam Initiative panel, the panelists commented that the reason for a positive outlook might be because of the demand picture in the semiconductor industry. The shelter-in-place and work-from-home environments are creating more need and opportunities for the electronics and semiconductor industries.

SE: How will extreme ultraviolet (EUV) lithography impact mask revenues?

Fujimura: In general, two thirds of the participants in the survey believe that it will have a positive impact. When you go to EUV, you have fewer masks. This is because EUV brings the industry back to single patterning. 193nm immersion with multiple patterning requires more masks at advanced nodes. With EUV, you have fewer masks, but the mask for each EUV layer is more expensive.

SE: For decades, the IC industry has followed the Moore's Law axiom that transistor density in chips doubles every 18 to 24 months. At this cadence, chipmakers can pack more and smaller transistors on a die, but Moore's Law appears to be slowing down. What comes next?

Fujimura: The definition of Moore's Law is changing. It's no longer looking at the trends in CPU clock speeds. That's not changing much. It's scaling more by bit width than by clock speed. A lot of that has to do with thermal properties and other things. We have some theories on where we can make that better over time. On the other hand, look at things like massively parallel computing using GPUs, having more CPU cores, how quickly you can access memory, or how much memory you can access; if you include those things, Moore's Law is very much alive. For example, D2S supplies computing systems for the semiconductor manufacturing industry, so we are also a consumer of technology. We do heavy supercomputing, so it's important for us to understand what's happening on the computing capability side. What we see is that our ability to compute is continuing to improve at about the same rate as before. But as programmers we have to adapt how we take advantage of it. It's not like you can take the same code and it automatically scales like it did 20 years ago. You have to understand how that scaling is different at any given point in time. You have to figure out how you can take advantage of the strength of the new generation of technology and then shift your code. So it's definitely harder.

SE: What's happening with the logic roadmap?

Fujimura: We're at 5nm in terms of what people are starting to do now. They are starting to plan 3nm and 2nm. And in terms of getting to the 2nm node, people are pretty comfortable. The question is what happens beyond that. It wasn't too long ago that people were saying: "There's no way we're going to have 2nm." That's been the general pattern in the semiconductor industry. The industry is constantly re-inventing itself. It is extending things longer than people ever thought possible. For example, look how long 193nm optical lithography lasted at advanced nodes. At one time, people were waiting for EUV. There was once a lot of doom and gloom about EUV. But despite being late, companies developed new processes and patterning schemes to extend 193nm. It takes coordination by a lot of people to make this happen.

SE: How long can we extend the current technology?

Fujimura: There's no question that there is a physical limit, but we are still good for the next 10 years.

SE: There's a lot of activity around AI and machine learning. Where do you see deep learning fitting in?

Fujimura: Deep learning is a subset of machine learning. It's the subset that's made machine learning revolutionary. The general idea of deep learning is to mimic how the brain works with a network of neurons or nodes. The programmer first determines what kind of a network to use. The programmer then trains the network by presenting it with a whole bunch of data. Often, the network is trained by labeled data. Using defect classification as an example, a human or some other program labels each picture as being a defect or not, and may also label what kind of defect it is, or even how it should be repaired. The deep learning engine iteratively optimizes the weights in the network. It automatically finds a set of weights that lead the network to best mimic the labels. Then, the network is tried on data that it wasn't trained on to test to see if the network learned as intended.
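As a concrete picture of the train-then-test loop described above, here is a minimal NumPy sketch (my own toy example, not D2S code): a tiny two-layer network is fitted to synthetically labeled "defect / no defect" points by iteratively adjusting its weights, then checked on points it was not trained on.

```python
import numpy as np

# Toy "defect classification" example: points inside a circle are labeled
# defects (1), points outside are clean (0). A small 2-layer network learns
# the boundary from labeled data, then is evaluated on held-out data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = (np.sum(X**2, axis=1) < 0.4).astype(float)[:, None]
X_train, y_train, X_test, y_test = X[:300], y[:300], X[300:], y[300:]

W1 = rng.normal(0, 0.5, size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 0.5, size=(8, 1)); b2 = np.zeros(1)   # output layer

def forward(X):
    h = np.tanh(X @ W1 + b1)                # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))    # predicted defect probability
    return h, p

lr = 1.0
for step in range(3000):                    # iterative weight optimization
    h, p = forward(X_train)
    dz2 = (p - y_train) / len(X_train)      # cross-entropy gradient
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * (1 - h**2)
    dW1, db1 = X_train.T @ dz1, dz1.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, p_test = forward(X_test)                 # try it on data never seen in training
accuracy = np.mean((p_test > 0.5) == (y_test > 0.5))
print(f"held-out accuracy: {accuracy:.2%}")
```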

SE: What can't deep learning do?

Fujimura: Deep learning does not reason. Deep learning does pattern matching. Amazingly, it turns out that many of the world's problems are solvable purely with pattern matching. What you can do with deep learning is a set of things that you just can't do with conventional programming. I was an AI student in the early 1980s. Many of the best computer scientists in the world back then (and ever since) already were trying hard to create a chess program that could beat the chess masters. It wasn't possible until deep learning came along. Applied to semiconductor manufacturing, or any field, there are classes of problems that had not been practically possible without deep learning.

SE: Years ago, there wasn't enough compute power to make machine learning feasible. What changed?

Fujimura: The first publication describing convolutional neural networks was in 1975. The researcher, Dr. Kunihiko Fukushima, called it neocognitron back then, but the paper basically describes deep learning. But computational capability simply wasn't sufficient. Deep learning was enabled with what I call useful waste in massive computations by cost-effective GPUs.

SE: What problems can deep learning solve?

Fujimura: Deep learning can be used for any data. For example, people use it for text-to-speech, speech-to-text, or automatic translation. Where deep learning is most evolved today is when we are talking about two-dimensional data and image processing. A GPU happens to be a good platform for deep learning because of its single instruction multiple data (SIMD) processing nature. The SIMD architecture is also good at image processing, so it makes sense that it's applied in that way. So for any problem in which a human expert can look at a picture without any other background knowledge and tell something with high probability, deep learning is likely to be able to do well.

SE: What about machine learning in semiconductor manufacturing?

Fujimura: We have already started to see products incorporating deep learning both in software and equipment. Tedious and error-prone processes that human operators need to perform, particularly those involving visual inspection, are great candidates for deep learning. There are many opportunities in inspection and metrology. There are also many opportunities in software to produce more accurate results faster to help with the turnaround time issues in leading-edge mask shops. There are many opportunities in correlating big data in mask shops and machine log files with machine learning for predictive maintenance.

SE: What are the challenges?

Fujimura: Deep learning is only as good as the data that is being given, so caution is required in deploying deep learning. For example, if deep learning is used to screen resumes by learning from labels provided by prior hiring practices, deep learning learns the biases that are already built into the past practices, even if unintended. If operators tend to make a type of a mistake in categorizing an image, deep learning that learned from the data labeled by that operator's past behavior would learn to make the same mistake. If deep learning is used to identify suspected criminal behavior in the street images captured by cameras on the street based on past history of arrests, deep learning will try the best it can to mimic the past behavior. If deep learning is used to identify what a social media user tends to want to see in order to maximize advertising revenues, deep learning will learn to be extremely good at showing the user exactly what the user tends to watch, even if it is highly biased, fake or inappropriate. If misused, deep learning can accentuate and accelerate human addiction and biases. Deep learning is a powerful weapon that relies on the humans wielding it to use it carefully.

SE: Is machine learning more accurate than a human in performing pattern recognition tasks?

Fujimura: In many cases, it's found that a deep learning-based program can inference better with a higher percentage of accuracy than a human, particularly when you look at it over time. A human might be able to look at a picture and recognize it with a 99% accuracy. But if the same human has to look at a much larger data set, and do it eight hours a day for 200 days a year, the performance of the human is going to degrade. That's not true for a computer-based algorithm, including deep learning. The learning algorithms process vast amounts of data. They go through small sections at a time and go through every single one without skipping anything. When you take that into account, deep learning programs can be useful for these error-prone processes that are visually oriented or can be cast into being visually oriented.

SE: The industry is working on other technologies to replicate the functions of the brain. Neuromorphic computing is one example. How realistic is this?

Fujimura: The brain is amazing. It will take a long time to create a neural network of the actual brain. There are very interesting computing models in the future. Neuromorphic is not a different computing model. It's a different architecture of how you do it. It's unclear if neuromorphic computing will necessarily create new kinds of capabilities. It does make some of them more efficient and effective.

SE: What about quantum computing?

Fujimura: The big change is quantum computing. That takes a lot of technology, money and talent. It's not an easy technology to develop. But you can bet that leading technology countries are working on it, and there is no question in my mind that it's important. Take security, for example. 256-bit encryption is nothing in basic quantum computing. Security mechanisms would have to be significantly revamped in the world of quantum computing. Quantum computing used in a wrong way can be destructive. Staying ahead of that is a matter of national security. But quantum computing also can be very powerful in solving problems that were considered intractable. Many iterative optimization problems, including deep learning training, will see major discontinuities with quantum computing.

SE: Let's move back to the photomask industry. Years ago, the mask was simple. Over time, masks have become more complex, right?

Fujimura: At 130nm or around there, you started to see decorations on the mask. If you wanted to draw a circle on the wafer using Manhattan or rectilinear shapes, you actually drew a square on the mask. Eventually, it would become a circle on the wafer. However, starting at around 130nm, that square on the mask had to be written with decorations in all four corners. Then, SRAFs (sub-resolution assist features) started to appear on the mask around 90nm. There might have been some at 130nm, but mostly at 90nm. By 22nm, you couldn't find a critical layer mask that didn't have SRAFs on them. SRAFs are features on the mask that are designed explicitly not to print on the wafer. Through an angle, SRAFs project light into the main features that you do want to print on a wafer, enough so that it helps to augment the amount of energy that's being applied to the resist. Again, this makes the printing of the main features more resilient to manufacturing process variation.

SE: Then multiple patterning appeared around 16nm/14nm, right?

Fujimura: The feature sizes became smaller and more complex. When we reached the limit of resolution for 193i, there was no choice but to go to multiple patterning, where multiple masks printed one wafer layer. You divide the features that you want on a given wafer layer and you put them on different masks. This provided more space for SRAFs for each of the masks. EUV for some layers is projected to go to multiple patterning, too. It costs more to do multiple patterning, but it is a familiar and proven technique for extending lithography to smaller nodes.

SE: To pattern a photomask, mask makers use e-beam mask writer systems based on variable shaped beam (VSB) technology. Now, using thousands of tiny beams, multi-beam mask writers are in the market. How do you see this playing out?

Fujimura: Most semiconductor devices are being patterned using VSB writers for the critical layers. That's working fine. The write times are increasing. If you look at the eBeam Initiative's recent survey, the average write times are still around 8 hours. Going forward, we are moving toward more complex processes with EUV masks. Today, EUV masks are fairly simple. Rectangular writing is enough. But you need multi-beam mask writers because of the resist sensitivity. The resists are slow in order to be more accurate. We need to apply a lot of energy to make it work, and that is better with multi-beam mask writers.

SE: What's next for EUV masks?

Fujimura: EUV masks will require SRAFs, too. They don't today at 7nm. SRAFs are necessary for smaller features. And, for 193i as well as for EUV, curvilinear masks are being considered now for improvements in wafer quality, particularly in resilience to manufacturing variation. But for EUV in particular, because of the reflective optics, curvilinear SRAFs are needed even more. Because multi-beam mask writing enables curvilinear mask shapes without a write time penalty, the enhanced wafer quality in the same mask write time is attractive.

SE: What are the big mask challenges going forward?

Fujimura: There are still many. EUV pellicles, affordable defect-free EUV mask blanks, high-NA EUV, and actinic or e-beam-based mask inspection both in the mask shop and in the wafer shop for requalification are all important areas for advancement. Now, the need to adopt curvilinear mask shapes has been widely acknowledged. Data processing, including compact and lossless data representation that is fast to write and read, is an important challenge. Optical proximity correction (OPC) and inverse lithography technology (ILT), which are needed to produce these curvilinear mask shapes to maximize wafer performance, need to run fast enough to be practical.

SE: What are the challenges in developing curvilinear shapes on masks?

Fujimura: There are two issues. First, without multi-beam mask writers, producing masks with curvilinear shapes can be too expensive or may practically take too long to write. Second, controlling the mask variation is challenging. Once again, the reason you want curvilinear shapes on the mask is because wafer quality improves substantially. That is even more important for EUV than in 193nm immersion lithography. EUV masks are reflective. So, there is also a 6-degree incidence angle on EUV masks. And that creates more desire to have curvilinear shapes or SRAFs. They don't print on the wafer. They are printed on the mask in order to help decrease process variation on the wafer.

SE: What about ILT?

Fujimura: ILT is an advanced form of OPC that computes the desired mask shapes in order to maximize the quality of wafer lithography. Studies have shown that ILT, in particular unconstrained curvilinear ILT, can produce the best results in terms of resilience to manufacturing variation. D2S and Micron recently presented a paper on the benefits of full-chip, curvilinear, stitchless ILT with mask-wafer co-optimization for memory applications. This approach enabled more than a 2X improvement in process windows.

SE: Will AI play a big role in mask making?

Fujimura: Yes. In particular, with deep learning, the gap between a promising prototype and a production-level inference engine is very wide. While there was quite a bit of initial excitement over deep learning, the world still has not seen very much in production adoption of deep learning. A large amount of this comes from the need for data. In semiconductor manufacturing, data security is extremely important. So while a given manufacturer would have plenty of data of its own kind, a vendor of any given tool, whether software or equipment, has a difficult time getting enough customer data. Even for a manufacturer, creating new data, say a SEM picture of a defect, can be difficult and time-consuming. Yet deep learning programming is programming with data, instead of writing new code. If a deep learning programmer wants to improve the success rate of an inference engine from 92% to 95%, that programmer needs to analyze the engine to see what types of data it needs to be additionally trained on to make that improvement, then acquire many instances of that type of data, and then iterate. The only way this can be done efficiently and effectively is to have digital twins, a simulated environment that generates data instead of relying only on physical real sample data. Getting to 80% success rate can be done with thousands of collected real data. But getting to 95% success rate requires digital twins. It is the lack of this understanding that is preventing production deployment of deep learning in many potential areas. It is clear to me that many of the tedious and error-prone processes can benefit from deep learning. And it is also clear to me that acceleration of many computing tasks using deep learning will benefit the deployment of new software capabilities in the mask shop.
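The digital-twin point can be pictured with a short sketch. The Python below is my own illustrative assumption of what such a data factory might look like (toy 16x16 "SEM patches" with synthetic bridge/break defects); a real digital twin would be a calibrated physical simulation, but the workflow is the same: error analysis identifies a weak class, and the simulator manufactures as many labeled examples of that class as training requires.

```python
import numpy as np

# Hedged sketch of the "digital twin as data factory" idea. All names and
# parameters are illustrative, not any vendor's actual simulation.

rng = np.random.default_rng(1)

def simulate_sem_patch(defect_type: str, n: int) -> np.ndarray:
    """Generate n toy 16x16 'SEM patches' for a given defect type."""
    patches = rng.normal(0.5, 0.05, size=(n, 16, 16))
    for p in patches:
        r, c = rng.integers(4, 12, size=2)
        if defect_type == "bridge":
            p[r-1:r+2, c-1:c+2] += 0.4        # small bright blob
        elif defect_type == "break":
            p[r, 2:14] -= 0.4                 # dark horizontal streak
    return patches.clip(0.0, 1.0)

# Suppose error analysis shows the engine misses "break" defects: ask the
# twin for more of exactly that class and fold them into the training set.
real_images = rng.normal(0.5, 0.05, size=(200, 16, 16))   # stand-in for scarce real data
extra_breaks = simulate_sem_patch("break", 2000)
training_set = np.concatenate([real_images, extra_breaks])
print("training set size:", training_set.shape[0])
```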

Related Stories

EUV's Uncertain Future At 3nm And Below

Challenges Linger For EUV

Mask/Lithography Issues For Mature Nodes

The Evolution Of Digital Twins

Next-Gen Mask Writer Race Begins

More here:
What's Next In AI, Chips And Masks - SemiEngineering


Physicists discover the ‘Kings and Queens of Quantumness’ – Livescience.com

Is that light particle more like a ball careening through space, or more of a smeary mess that is everywhere at once?

The answer depends on whether the absurd laws of subatomic particles or the deterministic equations that govern larger objects hold more sway. Now, for the first time, physicists have found a way to mathematically define the degree of quantumness that anything, be it particle, atom, molecule or even a planet, exhibits. The result suggests a way to quantify quantumness and identify "the most quantum states" of a system, which the team calls the "Kings and Queens of Quantumness."

In addition to furthering our understanding of the universe, the work could find applications in quantum technologies such as gravitational wave detectors and ultra-precise measurement devices.

Related: From Big Bang to present: snapshots of our universe through time

At the subatomic heart of reality, the bizarre world of quantum mechanics reigns. Under these mind-bending rules, tiny subatomic particles such as electrons can be paired in strange superpositions of states, meaning that an electron can exist in multiple states at once, and their positions around an atom and even their momentums aren't fixed until they're observed. These teensy particles even have the ability to tunnel through seemingly insurmountable barriers.

Classical objects, on the other hand, follow the normal everyday rules of our experience. Billiard balls strike off one another; cannonballs fly along parabolic arcs; and planets spin around their orbits according to well-known physical equations.

Researchers have long pondered this odd state of affairs, where some entities in the cosmos can be defined classically, while others are subject to probabilistic quantum laws, meaning you can measure only probable outcomes.

But "according to quantum mechanics, everything is quantum mechanical," Aaron Goldberg, a physicist at the University of Toronto in Canada and lead author of the new paper, told Live Science. "Just because you don't see these strange things every day doesn't mean they aren't there."

What Goldberg means is that classical objects like billiard balls are secretly quantum systems, so there exists some infinitesimally small probability that they will, say, tunnel through the side of a pool table. This suggests that there is a continuum, with "classicalness" on one end and "quantumness" on the other.

A little while back, one of Goldberg's co-authors, Luis Sanchez-Soto of the Complutense University of Madrid in Spain, was giving a lecture when a participant asked him what would be the most quantum state a system could be in. "That triggered everything," Sanchez-Soto told Live Science.

Previous attempts at quantifying quantumness always looked at specific quantum systems, like those containing particles of light, and so the outcomes couldn't necessarily be applied to other systems that included different particles like atoms. Goldberg, Sanchez-Soto and their team searched instead for a generalized way of defining extremes in quantum states.

"We can apply this to any quantum system (atoms, molecules, light or even combinations of those things) by using the same guiding principles," Goldberg said. The team found that these quantum extremes could come in at least two different types, naming some Kings and others Queens for their superlative nature.

They reported their findings Nov. 17 in the journal AVS Quantum Science.

So what exactly does it mean for something to be "the most quantum?" Here is where the work gets tricky, since it is highly mathematical and difficult to easily visualize.

But Pieter Kok, a physicist at the University of Sheffield in England, who was not involved in writing the new paper, suggested a way to get some grasp on it. One of the most basic physical systems is a simple harmonic oscillator, that is, a ball on the end of a spring moving back and forth, Kok told Live Science.

A quantum particle would be on the classical extreme if it behaved like this ball and spring system, found at specific points in time based on the initial kick it received. But if the particle were to be quantum mechanically smeared out so that it had no well-defined position and was found throughout the pathway of the spring and ball, it would be in one of these quantum extreme states.
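For readers who want the textbook version of Kok's ball-and-spring picture, the standard quantum-optics notation below may help; this is background on what "classical-like" usually means for an oscillator, an assumption on my part, and not the paper's actual Kings-and-Queens construction.

```latex
% Coherent states follow the classical ball-and-spring motion most closely:
\[
  |\alpha\rangle = e^{-|\alpha|^{2}/2}\sum_{n=0}^{\infty}
      \frac{\alpha^{n}}{\sqrt{n!}}\,|n\rangle ,
  \qquad
  \langle \hat{x}(t)\rangle \propto \cos(\omega t + \varphi)
  \quad \text{(classical-like oscillation)}
\]
% whereas a superposition of two such states is "smeared out" over the path:
\[
  |\psi_{\text{cat}}\rangle \propto |\alpha\rangle + |{-\alpha}\rangle
  \quad \text{(highly non-classical)}
\]
```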

Despite their peculiarity, Kok considers the results quite useful and hopes they will find widespread application. Knowing that there is a fundamental limit where a system is acting the most quantum it can is like knowing that the speed of light exists, he said.

"It puts constraints on things that are complicated to analyze," he added.

Goldberg said that the most readily apparent applications should come from quantum metrology, where engineers attempt to measure physical constants and other properties with extreme precision. Gravitational wave detectors, for example, need to be able to measure the distance between two mirrors to better than 1/10,000th the size of an atomic nucleus. Using the team's principles, physicists might be able to improve on this impressive feat.

But the findings could also help researchers in fields such as fiber optical communications, information processing and quantum computing. "There are probably many applications that we haven't even thought about," Goldberg said, excitedly.

Originally published on Live Science.

Go here to read the rest:
Physicists discover the 'Kings and Queens of Quantumness' - Livescience.com


Financial services companies are starting to use the cloud for big data and AI processing – TechRepublic

The financial sector has historically been nervous about allowing its data to go off premises, making it harder to scale. Now it's allowing for some data in the cloud to speed AI and data management.

Image: iStock/alexmillos

Financial services companies remain focused on maintaining a majority of their mission-critical systems on premises, where they have direct control. They also want direct control so they can quickly recover systems if a failure occurs. Financial institutions also have a reputation for being transaction-driven rather than customer-centric.

SEE: Cloud data storage policy (TechRepublic Premium)

As marketplace competition increases, most now recognize the need to store and mine customer information and the need to incorporate unstructured big data culled from the Internet, demographics, and other forms of data that don't necessarily arrive in structured data record formats.

Financial institutions mesh unstructured data together with traditional structured data so they can perform analytics and artificial intelligence (AI) on a composite of customer information. But in the process of doing so, they must find ways to scale and store an increasing amount of data that can grow exponentially overnight.

The quandary they face is that a broad scale-out of data storage and processing isn't workable with most on-premises systems, which might take a year or longer to budget for and acquire. This is where using more scalable cloud services can deliver value.

"Every organization that has data at scale can benefit from the economies of scale in the cloud," said Paul Scott Murphy, VP of product management, big data/cloud at WANdisco, a provider of distributed computing services. "The cloud has been a natural home for huge swathes of data that financial institutions use every day."

SEE: Top cloud providers in 2020: AWS, Microsoft Azure, and Google Cloud, hybrid, SaaS players (TechRepublic)

Murphy said that many banking customers he works with start with cloud data hosting of their customer-related data.

"There is a natural affinity for those datasets to be used along with data from CRM, ad tech, and service desk applications that are cloud-native," he said. We've seen that once these companies establish the necessary controls, security, and governance for the data they hold in the cloud. After that, they expand to much broader types of big data, such as transactional information for real-time risk analysis, data aggregation and analytics to support loan origination decisioning, and customer and market intelligence to offer personalization."

Key factors that are moving more big data to the cloud include digital transformation, in which cloud hosting is playing a larger role; agility and cost optimization, which enable companies to move data more rapidly to the cloud than in their own data centers, and to do this in a pay-per-use mode that can trim operational and capital expense overhead; and an expanding array of AI and machine learning (ML) services and expertise that cloud providers can offer.

Together, these forces enable financial companies to bring big data and AI applications to market sooner than if they had to do it on their own.

SEE: Artificial intelligence can take banks to the next level (TechRepublic)

Critical backups and recoveries of big data are also a concern.

"One way we have addressed data backup in the cloud is through a live data strategy," Murphy said. "With this strategy, data changes are replicated immediately as they occur. This approach enables data to remain consistent across multiple environments and enables near zero recovery point objective (RPO) and recovery time objective (RTO) targets."

Murphy recommends that companies consider a live data backup strategy because it lowers the risks and costs of legacy data migration approaches and enables continuous data migrations, which are needed in a hybrid cloud computing environment that uses both on-premises and on-cloud data and applications.
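The near-zero RPO/RTO idea can be pictured as a change stream that is applied to the replica the moment each write commits. The toy Python below is my own sketch under simplifying assumptions (a single writer and an in-memory stand-in for cloud storage); it is not WANdisco's actual replication engine, but it shows the shape of a live data flow.

```python
import queue, threading

# Toy "live data" replication: every change is applied to a cloud replica as
# soon as it is committed locally, so the replica lags by at most the
# in-flight changes (near-zero RPO, illustrative only).

changes: "queue.Queue[tuple]" = queue.Queue()
primary, replica = {}, {}

def commit(key, value):
    """Write locally and immediately enqueue the change for replication."""
    primary[key] = value
    changes.put((key, value))

def replicator():
    """Apply changes to the (simulated) cloud replica as they occur."""
    while True:
        item = changes.get()
        if item is None:              # shutdown signal
            break
        key, value = item
        replica[key] = value          # in practice: a write to cloud storage
        changes.task_done()

t = threading.Thread(target=replicator, daemon=True)
t.start()

commit("customer:42", {"risk_score": 0.17})
commit("txn:9001", {"amount": 250.0})
changes.join()                        # wait until the replica has caught up
print("replica is consistent:", replica == primary)
changes.put(None)
```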

"I also suggest that companies take a data-first approach for their Hadoop big data migration to the cloud," he said. "Get the data there quickly so your data scientists can begin to experiment with the new system immediately for a faster time to value."

Finally, don't consider decommissioning all or parts of your on-premises systems too hastily. If your long-term strategy is to move more data and applications to the cloud, do it gradually and test thoroughly.


See original here:
Financial services companies are starting to use the cloud for big data and AI processing - TechRepublic
