
Inside Microsoft's quantum computing world | InfoWorld

Quantum computers are the future, says Microsoft CEO Satya Nadella. And he has put Microsoft's money where his mouth is, making quantum computing one of the three pillars of Microsoft's strategy going forward. Along with AI and mixed/augmented reality, it's an area where Nadella believes that Microsoft can make a significant impact, and where it can differentiate itself from its competition.

But building a quantum computer is hard. Microsoft's current progress is the result of more than 20 years of research investment, working with universities around the world, mixing pure physics with computer science, and turning experimental ideas into products. There's a lot of ambition here, with the eventual aim of building scalable quantum computers that anyone can use.

Microsoft's approach to quantum computing differs from the technologies used by companies like D-Wave, taking a new approach to creating the qubits, the quantum bits at the heart of the process. Working with university researchers, Microsoft has been exploring the use of a new type of particle, the Majorana fermion. Initially proposed in the late 1930s, Majorana particles have only recently been detected in semiconductor nanowires at very low temperatures.

Compared to other qubit approaches, the Majorana particles used by Microsoft's quantum computers are more stable and have lower error rates, spreading out the electron state across a topological knot that's less likely to evaporate when its state is read. This topological approach to quantum computing is something that Nadella calls a transistor moment for quantum computers. It might not be the quantum processor, but it's the first step on that road.

Working with a quantum computer is very different from the machines we use today. A bit's 1s and 0s are replaced by a qubit's statistical blur of fractionalized electrons that needs interpretation. With qubits held at temperatures near absolute zero, another specialized low-temperature (cryogenic) computer is used to program the qubits and read results, working with quantum algorithms to solve complex problems and promising nearly instantaneous answers to problems that could take thousands, or even millions, of years on a modern supercomputer.

You can think of the relationship between the cryogenic controller and programs running on the ultralow-temperature quantum computer as something akin to how deep-sea divers work on underwater oil rigs. The quantum computer is the well head, isolated from the rest of the world by temperature. That makes the cryogenic control computer the equivalent of a diver's pressurized diving bell, giving the programs a stepping stone between the normal temperatures of the outside world and the extreme cold of the quantum refrigerator, much like how a diving bell prepares divers for working at extreme depths.

Microsoft's quantum computers are unlikely to run in your own datacenters. They require specialized refrigerators to chill the qubits, which are built from carefully grown nanowires. Microsoft's consortium of universities can manufacture each part separately, bringing them together to deliver the current generation of test systems.

Microsoft intends to embed its quantum hardware in Azure, running a quantum simulator to help test quantum code before it's deployed to actual quantum computers. Microsoft is also working on a new language to help developers write quantum code in Visual Studio.

Microsoft Research has already delivered a first cut at a quantum programming environment in Liqui|> (usually referred to as Liquid), a set of tools that can simulate a 30-qubit environment on a PC with 32GB of memory. Microsoft says you'll be able to deploy larger quantum simulators, handling more than 40 qubits in 16TB of memory, on Azure, though solving problems of that size will take a long time without the acceleration of a real quantum computer.

Still, with Liquid, you can experiment with key quantum computing concepts using F#, seeing how you'll build algorithms to handle complex mathematical concepts, as well as understanding how to work with low-level error-correction algorithms.

Microsoft's new quantum computing language will build on lessons learned with Liquid, but it won't be based on F#. The language's name hasn't been revealed yet, but amusingly some early screenshots of quantum code being edited in Visual Studio appeared to use the same file extension as the classic QuickBasic.

I recently spoke with Krysta Svore, the lead of Microsoft Research's quantum computing group in Redmond, which works on building the software side of Microsoft's planned scalable quantum computer. It's a fascinating side of the project, taking the low-level quantum algorithms needed to work with experimental hardware and finding ways of generating them from familiar high-level languages. If Svore's team is successful, you won't need to know about the quantum computer you're programming; instead, you'll write code, publish it to Azure, and run it.

The goal is that you'll be able to concentrate on your code, not think about the underlying quantum circuitry. For example, instead of building the connections needed to construct a quantum Fourier transform, you'll call a QFT library, writing additional code to prepare, load, and read data. As Svore notes, many quantum algorithms are hybrids, mixing preprocessing and postprocessing with quantum actions, often running them as part of loops in a classical supercomputer.
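
Here is a rough classical illustration of that "call a library routine" idea in Python with NumPy (my sketch, not Microsoft's tooling): the quantum Fourier transform is applied to a small simulated register as a single call, rather than being wired up gate by gate.

import numpy as np

def qft(state_vector):
    # As a matrix, the quantum Fourier transform is the unitary DFT:
    # QFT|j> = (1/sqrt(N)) * sum_k exp(2*pi*i*j*k/N) |k>,
    # which is NumPy's inverse FFT rescaled by sqrt(N).
    N = len(state_vector)
    return np.fft.ifft(state_vector) * np.sqrt(N)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[5] = 1.0                          # prepare the basis state |101>
out = qft(state)
print(np.round(np.abs(out) ** 2, 3))    # uniform 1/8 probabilities, as expected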

There's also a role for AI techniques, using machine learning to identify elements of code and understand where and how they work best.

Developers who experiment with Liquid will be able to bring their applications to the new platform, with migration tools to help with the transition. Using the Azure-based quantum simulator should help, because it supports many more qubits than a PC does. It'll also let you explore working with execution-based parallelism, where you run multiple passes over the same data, rather than the more familiar GPGPU data-parallelism model.

You can get a feel for what this means for computing when you consider an 80-qubit operation. Svore notes that a single operation in a quantum computer takes 100ns, no matter how many qubits you have. The same operation in a classical computer would require more particles than in the visible universe, taking longer than the lifetime of the universe. Solving that type of problem in 100ns is a huge leap forward, one that opens new directions for scientific computing.
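
A back-of-the-envelope calculation (mine, not Svore's) shows why even storing the state of an 80-qubit register is out of reach for a classical machine:

n_qubits = 80
amplitudes = 2 ** n_qubits            # ~1.2e24 complex amplitudes to track
bytes_needed = amplitudes * 16        # 16 bytes per double-precision complex number
print(f"{amplitudes:.2e} amplitudes, ~{bytes_needed / 1e24:.0f} yottabytes of memory")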

Microsoft's quantum computing work is a big bet on the future of computing. Today, it's a long way from everyday use, still in the domain of pure research, even if that research is coming up with promising results.

Where Microsoft's quantum-computing work really will make a difference is if it can deliver a programming environment that lets us take hard problems and turn them into quantum algorithms quickly and repeatedly, without having to go beyond the familiar world of IDEs and parallel programming constructs. Getting that right will really change the world, in ways we can't yet imagine.

Read the original post:
Inside Microsoft's quantum computing world | InfoWorld

Read More..

Cryptocurrency stocks holding gains despite bitcoin pullback …

NEW YORK (Reuters) – Stocks that surged in recent weeks because of the cryptocurrency mania have managed to hold onto most of their gains despite the recent retreat in the price of bitcoin and scepticism from market participants.

A Reuters analysis of 17 stocks of companies that have made blockchain or cryptocurrency announcements showed an average gain of 224 percent through Thursday's close from when they released those statements.

For example, shares of Long Island Iced Tea Corp jumped nearly 300 percent on Thursday after the beverage maker said it would rename itself Long Blockchain Corp to reflect a new focus on blockchain technology.

The moves are reminiscent of the tech boom, when the market value of companies such as Zapata and Books-A-Million rose sharply after they announced an internet business or an updated website. After the dot-com bubble burst, many of the companies went out of business or became much less valuable.

"There's been a continued surge of crypto headlines," said Michael Antonelli, managing director at Robert W. Baird in Milwaukee. "It's gotten more worrisome as more companies have changed their names. It's the kind of stuff you saw back in the dot-com era."

Many of the crypto stocks came under pressure on Friday, as the price of bitcoin tumbled below $12,000 to put it on track for its worst week since 2013. Riot Blockchain dropped 15.3 percent to $23.36, and Overstock.com, which announced in August that it would accept major alt-coins as payment, was down 6.5 percent at $63.05.

Even with the declines on Friday, bitcoin itself is still more than double its price at the start of November, while the stocks are still well above their prices before the companies made cryptocurrency announcements.

While the stocks are susceptible to price moves in bitcoin itself, analysts caution that investors should make sure the company has a credible business model.

"It is a buyer-beware time," said JJ Kinahan, chief market strategist at TD Ameritrade in Chicago.

"Long term it may hurt these companies because if bitcoin does settle down to being a product that trades like most products and doesn't have crazy moves every day, it is going to make people look at these companies and ask what is really going on here."

Reporting by Chuck Mikolajczak; Editing by Lisa Von Ahn

Excerpt from:
Cryptocurrency stocks holding gains despite bitcoin pullback …

Read More..

Encryption and Export Administration Regulations (EAR)

On August 15, 2017, the Wassenaar Arrangement 2016 Plenary Agreements Implementation was published in the Federal Register.

Here is a summary of the changes made to Category 5, Part 2.

The U.S. Commerce Control List (CCL) is broken into 10 categories, 0 through 9 (see Supplement No. 1 to part 774 of the EAR). Encryption items fall under Category 5, Part 2 for Information Security. Cat. 5, Part 2 covers:

1) Cryptographic Information Security (e.g., items that use cryptography);

2) Non-cryptographic Information Security (5A003); and

3) Defeating, Weakening, or Bypassing Information Security (5A004).

You can find a Quick Reference Guide to Cat. 5, Part 2 here.

The controls in Cat. 5, Part 2 include multilateral and unilateral controls. The multilateral controls in Cat. 5, Part 2 of the EAR (e.g., 5A002, 5A003, 5A004, 5B002, 5D002, 5E002) come from the Wassenaar Arrangement List of Dual Use Goods and Technologies. Changes to the multilateral controls are agreed upon by the participating members of the Wassenaar Arrangement. Unilateral controls in Cat. 5, Part 2 (e.g., 5A992.c, 5D992.c, 5E992.b) of the EAR are decided on by the United States.

The main license exception that is used for items in Cat. 5, Part 2 is License Exception ENC (Section 740.17). License exception ENC provides a broad set of authorizations for encryption products (items that implement cryptography) that vary depending on the item, the end-user, the end-use, and the destination. There is no “unexportable” level of encryption under license exception ENC. Most encryption products can be exported to most destinations under license exception ENC, once the exporter has complied with applicable reporting and classification requirements. Some items going to some destinations require licenses.

This guidance does not apply to items subject to the exclusive jurisdiction of another agency. For example, ITAR USML Categories XI(b),(d), and XIII(b), (l) control software, technical data, and other items specially designed for military or intelligence applications.

The following two flowcharts lay out the analysis to follow for determining if and how the EAR and Cat. 5, Part 2 apply to a product incorporating cryptography:

Flowchart 1: Items Designed to Use Cryptography, Including Items NOT Controlled under Category 5, Part 2 of the EAR
Flowchart 2: Classified in Category 5, Part 2 of the EAR

Similarly, the following written outline provides the analysis to follow for determining if and how the EAR and Cat. 5, Part 2 apply to a product incorporating cryptography. Although Category 5, Part 2 controls more than just cryptography, most items that are in Category 5, Part 2 fall under 5A002.a, 5A002.b, 5A004, or 5A992, or their software and technology equivalents.

“Encryption Outline”

1. Encryption items that are NOT subject to the EAR (publicly available)

2. Items subject to Cat. 5, Part 2:

a. 5A002.a (and equivalent software under 5D002 c.1) applies to items that:

i. Use cryptography for data confidentiality; and

ii. Have in excess of 56 bits of symmetric key length, or equivalent; and

iii. Have cryptography described in (i) and (ii) above that is usable without cryptographic activation or has already been activated; and

iv. Are described under 5A002.a.1 through a.4; and

v. Are not described by Decontrol notes.

b. 5A992.c (and equivalent software controlled under 5D992.c) is also known as mass market. These items meet all of the criteria described under 5A002.a above and Note 3 to Category 5, Part 2. See the MASS MARKET section for more information.

c. 5A002.b (and equivalent software controlled under 5D002.b) applies to items designed or modified to enable, by means of cryptographic activation, an item to achieve or exceed the controlled performance levels for functionality specified by 5A002.a that would not otherwise be enabled (e.g., a license key to enable cryptography).

d. 5A004 (and equivalent software controlled under 5D002.c.3) applies to items designed or modified to perform cryptanalytic functions including by means of reverse engineering.

e. The following are less commonly used entries:

3. License Exception ENC and mass market

If you’ve gone through the steps above and your product is controlled in Cat. 5, Part 2 under an ECCN other than 5A003 (and equivalent or related software and technology), then it is eligible for at least some part of license exception ENC. The next step is to determine which part of License Exception ENC the product falls under. Knowing which part of ENC the product falls under will tell you what you need to do to make the item eligible for ENC, and where the product can be exported without a license.

Types of authorization available for license exception ENC:

a. Mass Market
b. 740.17(a)
c. 740.17(b)(2)
d. 740.17(b)(3)/Mass market
e. 740.17(b)(1)/Mass market

4. Once you determine what authorization applies to your product, then you may have to file a classification request, annual self-classification report, and/or semi-annual sales report. The links below provide instructions on how to submit reports and Encryption Reviews:

a. How to file an Annual Self-Classification Report
b. How to file a Semi-annual Report
c. How to submit an ENC or Mass Market classification review

5. After you have submitted the appropriate classification and/or report, there may be some instances in which a license is still required. Information on when a license is required, types of licenses available, and how to submit are below:

a. When a License is Required
b. Types of licenses available
c. How to file a license application

6. FAQs

7. Contact us

Read the original here:
Encryption and Export Administration Regulations (EAR)

Read More..

CA Internet Security Suite Plus – Download

CA Internet Security Suite is a complete application for protecting your system from internet threats. It also carries out file backups, system scans and regulates kids’ use of the web.

CA Internet Security Suite covers four areas of security: Files, Viruses, Internet and Children. Within these modules you can scan for viruses and malware, secure documents, back up important files and protect your PC from identity theft. For children, there's a parental blocking system that should keep kids away from unsuitable websites.

The Internet module includes real-time protection for files and incoming emails. You can start or schedule a system scan whenever you want, although scans are very slow. In general, CA Internet Security Suite is a heavy resource user, and even though alerts are configurable, they still interrupt and slow down your PC. Scans are deep, but don't discriminate between good and bad internet cookies, which is annoying.

CA Internet Security Suite is comprehensive, and does do what it claims. It's very easy to use, with an interface that anyone will understand. However, it's too resource hungry and intrusive to recommend wholeheartedly.

Read the original post:
CA Internet Security Suite Plus – Download

Read More..

Collaborative Security: An approach to tackling Internet …

NOTE: A set of PowerPoint slides explaining Collaborative Security is available for use in presentations.

People are what ultimately hold the Internet together. The Internet's development has been based on voluntary cooperation and collaboration. Cooperation and collaboration remain the essential factors for the Internet's prosperity and potential.

Collaborative Security is an approach that is characterized by five key elements:

1. Preserving opportunities and building confidence
2. Collective responsibility
3. Security solutions fully integrated with rights and the open Internet
4. Security solutions grounded in experience, developed by consensus and evolutionary in outlook
5. Targeting the point of maximum impact: think globally, act locally

Achieving security objectives, while preserving these fundamental properties, rights and values, is the real challenge of cybersecurity strategy. The design and implementation of security solutions should be undertaken with consideration as to the potential effect they might have on these fundamentals.

Everyone has a collective responsibility for the security of the Internet: multistakeholder cross-border collaboration is an essential component.

Commercial competition, politics and personal motivation play a role in how well collaboration happens. But, as collaborative efforts have demonstrated, differences can be overcome to cooperate against a threat. Such voluntary, as-needed collaboration for the benefit of everyone is remarkable for its scalability and its ability to adapt to changing conditions and evolving threats, yielding unprecedented efficacy.

Informed by these reflections, we introduce the term Collaborative Security to describe our approach for tackling Internet security issues.

Collaborative Security is an approach that is characterized by five key elements. These are described below.

1. Preserving opportunities and building confidence

The Internet enables opportunities for human, social and economic development on a global scale. Those opportunities will only be realized if Internet participants have confidence [2] that they can use the Internet for secure, reliable, private communication all across the world.

A security paradigm for the Internet should be premised on fostering confidence and protecting opportunities for economic and social prosperity, as opposed to a model that is based simply on preventing perceived harm. Moreover, security solutions should advance that objective in design, and in practice. Otherwise, security solutions may go too far, thereby jeopardizing the very infrastructure that ties together the global economy, and provides the engine for its growth.

2. Collective Responsibility

The Internet is a global interconnected network of networks. It is, in effect, a global common resource and a highly interdependent system. Participation on the Internet means global interdependency.

In an interconnected interdependent system, no one participant can achieve absolute security. And, no security solution exists in isolation. There will always be threats, so it is useful to consider security in terms of residual risks that are considered acceptable in a specific context.

Internet security depends not only on how well participants manage the security risks they face, but also, importantly, on how they manage the security risks that they may pose to others (whether through their action or inaction): the outward risks.

These factors mean that Internet participants have:

Furthermore, if Internet participants act independently and solely in their own self-interest, not only will the security of the Internet be impacted: the overall pool of social and economic potential that the Internet offers the global community will also diminish. Therefore, Internet participants need to see this as a long-term investment for the benefit of everyone.

Note: The scope of collective responsibility extends to the system as a whole: it is not the same as asking everyone to be responsible for their part of the ecosystem. Therefore, collective responsibility requires a common understanding of the problem, shared solutions, common benefits, and open communication channels [3].

Multistakeholder cross-border collaboration is an important component of collective responsibility. Its success depends on trustful relationships between nations, between citizens and their government, between operators, service providers, and across all stakeholders.

3. Security solutions should be fully integrated with rights and the open Internet

Security solutions should be fully integrated with the important objectives of preserving the fundamental properties of the Internet (open standards, voluntary collaboration, reusable building blocks, integrity, permission-free innovation and global reach (also known as the Internet Invariants [4]) and fundamental human rights, values and expectations (e.g. privacy, freedom of expression).

Any security solution is likely to have an effect on the Internet's operation and development, as well as users' rights and expectations. Such effects may be positive or negative. From our perspective, it is important to find solutions that support the Internet Invariants and fundamental rights and values.

4. Security solutions need to be grounded in experience, developed by consensus and evolutionary in outlook

Security solutions need to be flexible enough to evolve over time. We know that technology is going to change and threats will adapt to take advantage of new platforms and protocols. Therefore, solutions need to be responsive to new challenges.

Like a human body that may suffer from viruses, but gets stronger and more resilient as a result, new technologies, solutions and cooperative efforts that build on lessons-learned make the Internet more resilient to threats.

Experience shows us that, in a quickly evolving system such as the Internet, an open consensus-based participatory approach is the most robust, flexible and agile.

Partial solutions and staged deployment are important and should be taken seriously. A collection of incremental solutions may be more effective in practice than a grand design. Even if an approach does not solve the problem completely, it might help to contain it, or to change the economic equation significantly enough, so as to make the vulnerability much less attractive to malicious actors.

The focus needs to be put on defining the agreed problem and finding the solution. We also need to make space for the new, the innovative and the odd. We need to be prepared to test disruptive or non-traditional ideas.

In the end, a process that draws upon the interests and expertise of a broad set of stakeholders is likely to be the surest path to success.

5. Targeting the point of maximum impact: think globally, act locally

Security is not achieved by a single treaty or piece of legislation; it is not solved by a single technical fix, nor can it come about because one company, government or actor decides security is important.

Creating security and trust in the Internet requires different players (within their different responsibilities and roles) to take action, closest to where the issues are occurring.

Typically, for greater effectiveness and efficiency, solutions should be defined and implemented by the smallest, lowest or least centralized competent community [5] at the point in the system where they can have the most impact.

Such communities are frequently spontaneously formed in a bottom-up, self-organizing fashion around specific issues (e.g. spam, or routing security) or a locality (e.g. protection of critical national infrastructure or security of an Internet exchange).

As much as possible, solutions should be based on interoperable building blocks, e.g., industry-accepted standards, best practices and approaches.

We believe that this Collaborative Security approach for addressing Internet security issues is critical for ensuring the future of the open Internet as a driver for social and economic innovation. As a network of networks without centralized control, the security of the Internet cannot be maintained by any single entity or organization. It is important that these issues be addressed by all stakeholders in a spirit of collaboration and shared responsibility in ways that do not undermine the global architecture of the Internet or curtail human rights. The Internet is for everyone: let's work together to realize its full potential.

[1] See Internet Invariants: What Really Matters http://www.internetsociety.org/internet-invariants-what-really-matters

[2] In this context, an Internet participant's confidence is formed, among other things, by taking into account the degree of perceived security risk associated with using the Internet and whether that degree of risk is acceptable while protecting opportunities for economic and social prosperity. A better understanding of actual risks and how to reduce them to an acceptable level are two main factors that build confidence.

[3] Please refer to Understanding Security and Resilience of the Internet http://www.internetsociety.org/sites/default/files/bp-securityandresilience-20130711.pdf

[4] See Internet Invariants: What Really Matters http://www.internetsociety.org/internet-invariants-what-really-matters

[5] In politics, such an approach is called the subsidiarity principle: solutions should be defined and implemented by the smallest, lowest or least centralized competent authority. We feel that the word community better matches the sense of bottom-up development. http://en.wikipedia.org/wiki/Subsidiarity

Read more about specific examples: Introducing Collaborative Security, our approach to Internet security issues, by Internet Society CITO Olaf Kolkman

Continued here:
Collaborative Security: An approach to tackling Internet …

Read More..

Is Quantum Computing an Existential Threat to Blockchain …

Amid steep gains in value and wild headlines, it's easy to forget cryptocurrencies and blockchain aren't yet mainstream. Even so, fans of the technology believe blockchain has too much potential not to have a major sustained impact in the future.

But as is usually the case when pondering what's ahead, nothing is certain.

When considering existential threats to blockchain and cryptocurrencies, people generally focus on increased regulation. And this makes sense. In the medium term, greater regulation may stand in the way of cryptocurrencies' wider mainstream adoption. However, there might be a bigger threat further out on the horizon.

Much of blockchain's allure arises from its security benefits. The tech allows a ledger of transactions to be distributed between a large network of computers. No single user can break into and change the ledger. This makes it both public and secure.

But combined with another emerging (and much hyped) technology, quantum computing, blockchain's seemingly immutable ledgers would be under threat.

Like blockchain, quantum computing has been making progress and headlines too.

The number of quantum computing companies and researchers continues to grow. And while there is a lot of focus on hardware, many are looking into the software as well.

Cryptography is a commonly debated topic because quantum computing poses a threat to traditional forms of computer security, most notably public key cryptography, which undergirds most online communications and most current blockchain technology.

But first, how does computer security work today?

Public key cryptography uses a pair of keys to encrypt information: a public key which can be shared widely and a private key known only to the key's owner. Anyone can encrypt a message using the intended receiver's public key, but only the receiver can decrypt the message using her private key. The more difficult it is to determine a private key from its corresponding public key, the more secure the system.

The best public key cryptography systems link public and private keys using the factors of a number that is the product of two incredibly large prime numbers. To determine the private key from the public key alone, one would have to figure out the factors of this product of primes. Even if a classical computer tested a trillion keys a second, it would take up to 785 million times longer than the roughly 14 billion years the universe has existed so far due to the size of the prime numbers in question.
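
To make the key relationship concrete, here is a toy RSA example in Python with deliberately tiny primes (real keys use primes hundreds of digits long; the numbers below are purely illustrative):

p, q = 61, 53
n = p * q                          # 3233, the public modulus
phi = (p - 1) * (q - 1)            # 3120
e = 17                             # public exponent, chosen coprime to phi
d = pow(e, -1, phi)                # 2753, the private exponent (Python 3.8+)
message = 65
ciphertext = pow(message, e, n)            # encrypt with the public key (n, e)
print(ciphertext, pow(ciphertext, d, n))   # 2790 65 -- decrypting with d recovers the message
# Recovering d from (n, e) alone means factoring n back into p and q; with
# 2048-bit moduli that factoring step is what makes RSA infeasible to break classically.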

If processing power were to greatly increase, however, then it might become possible for an entity exercising such computing power to generate a private key from the corresponding public key. If actors could generate private keys from corresponding public keys, then even the strongest forms of traditional public key cryptography would be vulnerable.

This is where quantum computing comes in. Quantum computing relies on quantum physics and has more potential power than any traditional form of computing.

Quantum computing takes advantage of quantum bits or qubits that can exist in any superposition of values between 0 and 1 and can therefore process much more information than just 0 or 1, which is the limit of classical computing systems.

The capacity to compute using qubits renders quantum computers many orders of magnitude faster than classical computers. Google showed a D-Wave quantum annealing computer could be 100 million times faster than classical computers at certain specialized tasks. And Google and IBM are working on their own quantum computers.

Further, although there are but a handful of quantum computing algorithms, one of the most famous ones, Shor's algorithm, allows for the quick factoring of large numbers into their prime factors. Therefore, a working quantum computer could, in theory, break today's public key cryptography.

Quantum computers capable of speedy number factoring are not here yet. However, if quantum computing continues to progress, it will get there eventually. And when it does, this advance will pose an existential threat to public key cryptography, and the blockchain technology that relies on it, including Bitcoin, will be vulnerable to hacking.

So, is blockchain security therefore impossible in a post-quantum world? Will the advent of quantum computing render blockchain technology obsolete?

Maybe, but not if we can develop a solution first.

The NSA announced in 2015 that it was moving to implement quantum-resistant cryptographic systems. Cryptographers are working on quantum-resistant cryptography, and there are already blockchain projects implementing quantum-resistant cryptography. The Quantum Resistant Ledger team, for example, is working on building such a blockchain right now.

What makes quantum-resistant, or post-quantum, cryptography quantum resistant? Its public and private keys are related in ways that are much more mathematically complex than traditional prime factorization, so that deriving the private key from the public key remains out of reach even for a quantum computer.

The Quantum Resistant Ledger team is working to implement hash-based cryptography, a form of post-quantum cryptography. In hash-based cryptography, public keys are derived from private keys using complex hash-based cryptographic structures, rather than prime number factorization. The connection between the public and private key pair is therefore of a different nature than in traditional public key cryptography and would be much less vulnerable to a quantum computer running Shor's algorithm.
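
As one concrete illustration of the hash-based family (the article does not specify a scheme, so treat this as a simplified example rather than what the Quantum Resistant Ledger actually ships), here is a Lamport one-time signature in plain Python; production schemes such as XMSS combine many such one-time keys in a Merkle tree:

import hashlib, secrets

H = lambda data: hashlib.sha256(data).digest()

# Key generation: 256 pairs of random secrets; the public key is their hashes.
sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
pk = [(H(a), H(b)) for a, b in sk]

def bits(message):
    digest = H(message)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(message):
    # Reveal one secret from each pair, selected by the message-hash bits.
    return [sk[i][bit] for i, bit in enumerate(bits(message))]

def verify(message, signature):
    return all(H(s) == pk[i][bit] for i, (s, bit) in enumerate(zip(signature, bits(message))))

sig = sign(b"hello")
print(verify(b"hello", sig), verify(b"tampered", sig))   # True False
# Forging a signature requires inverting SHA-256, a problem Shor's algorithm
# does not help with; each keypair must only ever sign a single message.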

These post-quantum cryptographic schemes do not need to run on quantum computers. The Quantum Resistant Ledger is a blockchain project already working to implement post-quantum cryptography. It remains to be seen how successful the effort and others like it will prove when full-scale quantum computing becomes a practical reality.

To be clear, quantum computing threatens all computer security systems that rely on public key cryptography, not just blockchain. All security systems, including blockchain systems, need to consider post-quantum cryptography to maintain data security for their systems. But the easiest and most efficient route may be to replace traditional systems with blockchain systems that implement quantum-resistant cryptography.

Disclosure: The author owns assorted digital assets. The author is also a principal at Crypto Lotus LLC, a cryptocurrency hedge fund based out of the San Francisco Bay Area, and an advisor at Green Sands Equity, both of which have positions in various digital assets. All opinions in this post are the author's alone and not those of Singularity University, Crypto Lotus, or Green Sands Equity. This post is not an endorsement by Singularity University, Crypto Lotus, or Green Sands Equity of any asset, and you should be aware of the risk of loss before trading or holding any digital asset.

Image Credit: Morrowind /Shutterstock.com

Continue reading here:
Is Quantum Computing an Existential Threat to Blockchain …

Read More..

What is Quantum Computing? | SAP News Center

Whether it's astrophysical calculations, weather forecasting, or exploration to locate oil and gas resources, powerful supercomputers now stand ready to help compute the most complex problems.

Yet there are some challenges that even the fastest computing machines in the world have been unable to solve, namely the simulation of molecular structures, which has left many professionals in the medical and chemical industry scratching their heads. The development of effective drugs against illnesses, as well as better quality fertilizer to help fight world hunger, is largely dependent on the ability to perform the relevant calculations.

Another example is optimization. Suppose a rucksack can hold up to 20 kilograms, and we have several objects, each with a specific weight and value. We must select a set of objects that does not exceed the maximum weight of the rucksack but maximizes the total value. Inventory management frequently encounters these sorts of challenges, yet mathematical evidence shows that these problems cannot be solved satisfactorily using conventional computers.
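
A brute-force sketch of this knapsack problem in Python (the item weights and values below are illustrative, not from the article) shows why it scales so badly:

from itertools import combinations

items = [("tent", 11, 9), ("stove", 4, 6), ("food", 8, 10), ("camera", 3, 5), ("books", 7, 4)]
capacity = 20   # kilograms

# Enumerate every subset of items that fits, and keep the most valuable one.
best = max(
    (s for r in range(len(items) + 1) for s in combinations(items, r)
     if sum(w for _, w, _ in s) <= capacity),
    key=lambda s: sum(v for _, _, v in s),
)
print([name for name, _, _ in best], "value:", sum(v for _, _, v in best))
# Checking all subsets takes 2**n steps: trivial for 5 items, hopeless for 500.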

This all comes down to how computers are built. The smallest possible storage unit (a bit) can have a value of either 0 or 1. Bits are physically represented by two voltage levels that correspond to the states 0 and 1. For certain tasks, this binary representation of information pushes conventional computers to the brink of their capabilities.

Qubits: Superposition and Entanglement

In 1981, Nobel Prize-winning physicist Richard Feynman claimed that a so-called quantum computer could be used to perform computations. This theoretical concept went on to generate a wealth of interest and has since become a broad field of research and development.

A quantum computer works with quantum bits, or qubits. In contrast to a traditional computer system, the states of qubits can overlap. In other words, they do not merely represent 0 or 1, but can achieve a mixed state where they are both 0 and 1 at the same time. This is known as a superposition. When measured, however, qubits behave like classical bits and yield the value 0 or 1.

If several qubits are brought together, they no longer have individually defined states but exist as a single combined system. In quantum mechanics, this is known as entanglement, and it means that the measurement of one qubit depends on the other. For instance, if two entangled qubits are measured and the first yields 1, the state of the second qubit is already known.
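
A small NumPy sketch (my illustration; the article contains no code) of an entangled two-qubit Bell state shows this correlation:

import numpy as np

# Two-qubit state vector in the basis |00>, |01>, |10>, |11>.
# The Bell state (|00> + |11>)/sqrt(2) is entangled.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(bell) ** 2                 # Born rule: measurement probabilities

rng = np.random.default_rng(0)
for outcome in rng.choice(4, size=5, p=probs):
    first, second = divmod(outcome, 2)
    print(f"qubit 1 -> {first}, qubit 2 -> {second}")   # the two results always agree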

Overcoming Quantum Decoherence

Together, superposition and entanglement form the decisive difference from which quantum computers are said to benefit: a register of n qubits can represent all 2^n sequences of conventional bits at once, so a single calculation on the register is, in effect, a calculation on all of those bit sequences simultaneously. For certain problems, this quantum parallelism ensures a decisive speed advantage compared to regular computers.

Decoherence nevertheless remains a challenge for researchers. As soon as a closed quantum system starts interacting with its environment, the states of the system and the environment change irreversibly, and errors can occur if this happens during the calculation process.

To ensure that the operations are conducted without mistakes or errors, the quantum computer's qubits should preferably be decoupled from their environment, which in turn extends the time before decoherence sets in. This can lead to a conflict of objectives, since it is also necessary that the state of an individual qubit can be changed from the outside.

The number of qubits also plays an important technical role: the higher the number, the greater the expected speed advantage. At the same time, this increases the number of obstacles to avoiding decoherence for each individual qubit.

Five Criteria for Quantum Computers

Based on these ideas, in 1996 physicist David DiVincenzo formulated five criteria that he deemed sufficient for a quantum computer:

1. A scalable physical system with well-characterized qubits
2. The ability to initialize the qubits to a simple, known starting state
3. Coherence times much longer than the time needed for a gate operation
4. A universal set of quantum gates
5. The ability to measure individual qubits reliably

So far, no one has succeeded in developing a system that fulfills all these requirements. This is partly due to the lack of clarity surrounding the most appropriate candidates able to physically implement qubits. The energy levels of an atom and the angular momentum of electrons are currently under discussion, although many other possibilities are also under research.

Applications for Quantum Computing

Further progress continues to be made in the development of quantum computers. To date, none of the prototypes have shown any definitive advantage compared with traditional supercomputers. This predominantly comes down to the number of qubits used. The widespread view suggests that 50 or more qubits should show a benefit, a number that has been officially announced but not yet achieved.

Experts expect that the first standard quantum computer will appear at some point in the next 10 years. Yet those who are expecting to have a device under their desks at home may be disappointed; for the foreseeable future, this technology will most likely only be used to perform tasks on a large scale.

Quantum Cryptography: Already in Use

Beyond the development of quantum computers, other technologies benefiting from quantum mechanical effects have sparked interest. An example of this is quantum cryptography, which has been under development since the 1970s, and is now ready for implementation.

Data is the fuel of the 21st century. The world stands to benefit hugely from the spread of interconnected devices that generate and analyze data. At the same time, security risks such as data theft and data abuse continue to rise. Experts have estimated that cybercrime cost the economy $454 billion in 2016.

Compared to the solutions already available, quantum cryptographic processes can provide an additional level of safety and security. Discoveries in quantum physics reveal that such encryptions are not only difficult to hack, but downright impossible if they have been implemented correctly.

The aforementioned qualities of quantum systems form the basis for this level of security. Individual light particles transfer a code that is used in message encryption. The particles cannot be intercepted and measured without disruption. If someone were to try and intercept, they would not be able to access the code without being detected.
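
The article does not name a protocol, but the best-known scheme of this kind is BB84; the toy Python simulation below (my sketch, with classical random numbers standing in for photon measurements) shows how comparing bases yields a shared key:

import secrets

N = 16  # photons sent in this toy run
alice_bits  = [secrets.randbelow(2) for _ in range(N)]
alice_bases = [secrets.randbelow(2) for _ in range(N)]   # 0 = rectilinear, 1 = diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(N)]
# If Bob's basis matches Alice's, he reads her bit; otherwise the result is random.
bob_bits = [a if ab == bb else secrets.randbelow(2)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
# Publicly compare bases (never bits) and keep only the matching positions.
key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
print("shared key:", key)
# An eavesdropper measuring photons in transit disturbs them, so a sample of
# key bits compared afterwards would reveal the intrusion as errors.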

Progress in quantum computing development is the main motivation to continue developing quantum cryptography. Current encryption processes, such as RSA, rely on the assumption that no process exists that is fast enough for the prime factorization of large numbers. Yet in 1994, Peter Shor demonstrated that this kind of algorithm can run on a quantum computer. The first team to produce an adequately sized standard quantum computer could therefore break all such security systems.

Yet this development is still a long way away from the projected 1,000 qubits that would be needed to hack RSA. In areas where secure communication and data transfers are extremely important, quantum cryptography can already offer solutions to safeguard against current and future attacks.

Read the rest here:
What is Quantum Computing? | SAP News Center

Read More..

Bitcoin Fees Have Become Infeasible – Bitcoin News

In 2013, one bitcoin cost $20. In 2017, it costs $20 to send one bitcoin. With record highs, thriving adoption, and media attention, this should be a celebratory time for bitcoin believers. And yet it's hard to shake the feeling that something isn't quite right. How did we reach a point where the world's bank killer and Western Union crippler has become incapable of taking on the institutions it once sneered at? Bitcoin is hot as hell right now. But it's also a mess.

Also read: Bitpay Plans to Use Bitcoin Cash for Payment Invoices and Debit Loads

By any reckoning, 2017 has been a phenomenal year for bitcoin. Even the currency's most ardent supporters would have struggled, 12 months ago, to predict the current state of affairs. But neither could they have envisaged, in their worst nightmares, it costing upwards of $20 to transfer a fraction of a coin. To chalk this year up as an unfettered success story calls for moving the goalposts and performing mental gymnastics. Bitcoin has made great leaps alright. It's just unfortunate that not all of them have been forwards.

It can be debated whether Satoshi's white paper envisioned bitcoin as a P2P settlement layer for micro-transactions. What can't be debated is that bitcoin is effectively now unsendable and undependable for anything under a couple of hundred dollars. From the clearnet to the darknet, the conversation is the same: fees have become untenable. Despite this, bitcoin's most ardent defenders remain in denial.

On some corners of the internet, questioning the gospel of Satoshi and the infallibility of bitcoin is heresy. "I can't send a friend five dollars without a $15 transaction fee and this is the currency of the future?" raged one Redditor, to which the first three responses on r/bitcoin ran:

There's a modicum of truth to these rejoinders, but in the here and now, "muh Segwit" or "just wait for LN" isn't much help.

Everyone has their price, a dollar figure at which they'd be willing to sell bitcoin, and also a figure they're willing to pay to send it. Paying $20 to transfer $10 million of bitcoin seems reasonable. Paying the same amount to send $100 worth seems ridiculous. Bitcoin has been unsuitable for micro-transactions for some time, but it's now reaching a stage where it's unsuitable for mid-sized transactions.
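
The asymmetry comes from fees being charged per byte of transaction data rather than per amount sent; the figures in this back-of-the-envelope Python sketch (transaction size, fee rate, and price are my assumptions, roughly in line with late 2017) make the point:

tx_size_bytes = 250                 # a typical one-input, two-output transaction
fee_rate_sat_per_byte = 400         # assumed late-2017 fee rate
btc_price_usd = 15_000              # assumed bitcoin price

fee_usd = tx_size_bytes * fee_rate_sat_per_byte / 1e8 * btc_price_usd
print(f"fee: ${fee_usd:.2f} regardless of the amount sent")
for amount in (100, 10_000, 10_000_000):
    print(f"  {fee_usd / amount:.4%} of a ${amount:,} transfer")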

Is bitcoin a store of wealth because that's its best use case, or has it simply morphed into one because no one can afford to move it?

Many of bitcoin's new investors are of humble means, setting aside $50 a week or whatever they can spare to put into digital currency. "Always store your coins in a wallet you hold the private key for," they were urged. Now they're discovering that their only option is to store their bitcoin on an exchange, at least until their holdings reach a level where it's practical to withdraw to a hardware wallet.

If cryptocurrencies were to be likened to energy sources, bitcoin would be coal: expensive to move and impractical to transport in small quantities. It's impossible to order a handful of coal every time you want to light a fire: it's a sackful or nothing. Ethereum (gas) and bitcoin cash (hydro) are the opposite: cheap and on tap.

Coal does have one thing in its favor though: longevity. In cryptocurrency terms, bitcoin is a veritable fossil. It's been there from the start and, thanks to its market dominance, brand recognition, and capital locked in, will be extremely hard to destroy. Scaling solutions will probably arrive, and transaction fees will eventually drop, though quite when is anyone's guess. The question is whether those solutions will arrive in time. Until then, bitcoin will continue to serve as the coal fueling the furnace on the runaway Cryptocurrency Express: an indispensable hot mess.

What do you think is the solution to high fees? And what measures have you been taking to mitigate rising fees? Let us know in the comments section below.

Images courtesy of Shutterstock.

Express yourself freely at Bitcoin.com's user forums. We don't censor on political grounds. Check forum.Bitcoin.com.

See the article here:
Bitcoin Fees Have Become Infeasible – Bitcoin News

Read More..

Key (cryptography) – Wikipedia

In cryptography, a key is a piece of information (a parameter) that determines the functional output of a cryptographic algorithm. For encryption algorithms, a key specifies the transformation of plaintext into ciphertext, and vice versa for decryption algorithms. Keys also specify transformations in other cryptographic algorithms, such as digital signature schemes and message authentication codes.

In designing security systems, it is wise to assume that the details of the cryptographic algorithm are already available to the attacker. This is known as Kerckhoffs' principle ("only secrecy of the key provides security"), or, reformulated as Shannon's maxim, "the enemy knows the system". The history of cryptography provides evidence that it can be difficult to keep the details of a widely used algorithm secret (see security through obscurity). A key is often easier to protect (it's typically a small piece of information) than an encryption algorithm, and easier to change if compromised. Thus, the security of an encryption system in most cases relies on some key being kept secret.

Trying to keep keys secret is one of the most difficult problems in practical cryptography; see key management. An attacker who obtains the key (by, for example, theft, extortion, dumpster diving, assault, torture, or social engineering) can recover the original message from the encrypted data, and issue signatures.

Keys are generated to be used with a given suite of algorithms, called a cryptosystem. Encryption algorithms which use the same key for both encryption and decryption are known as symmetric key algorithms. A newer class of "public key" cryptographic algorithms was invented in the 1970s. These asymmetric key algorithms use a pair of keys, or keypair: a public key and a private one. Public keys are used for encryption or signature verification; private ones decrypt and sign. The design is such that finding out the private key is extremely difficult, even if the corresponding public key is known. As that design involves lengthy computations, a keypair is often used to exchange an on-the-fly symmetric key, which will only be used for the current session. RSA and DSA are two popular public-key cryptosystems; DSA keys can only be used for signing and verifying, not for encryption.
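
As a small illustration of the symmetric case, here is a sketch using the third-party Python cryptography package (a choice of mine; the article itself is library-agnostic):

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # one shared secret key
cipher = Fernet(key)
token = cipher.encrypt(b"attack at dawn")
print(cipher.decrypt(token))         # b'attack at dawn'
# Anyone holding `key` can both encrypt and decrypt. In an asymmetric scheme,
# these roles are split across the public and private halves of a keypair.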

Part of the security brought about by cryptography concerns confidence about who signed a given document, or who replies at the other side of a connection. Assuming that keys are not compromised, that question consists of determining the owner of the relevant public key. To be able to tell a key’s owner, public keys are often enriched with attributes such as names, addresses, and similar identifiers. The packed collection of a public key and its attributes can be digitally signed by one or more supporters. In the PKI model, the resulting object is called a certificate and is signed by a certificate authority (CA). In the PGP model, it is still called a “key”, and is signed by various people who personally verified that the attributes match the subject.[1]

In both PKI and PGP models, compromised keys can be revoked. Revocation has the side effect of disrupting the relationship between a key’s attributes and the subject, which may still be valid. In order to have a possibility to recover from such disruption, signers often use different keys for everyday tasks: Signing with an intermediate certificate (for PKI) or a subkey (for PGP) facilitates keeping the principal private key in an offline safe.

Deleting a key on purpose to make the data inaccessible is called crypto-shredding.

For the one-time pad system the key must be at least as long as the message. In encryption systems that use a cipher algorithm, messages can be much longer than the key. The key must, however, be long enough so that an attacker cannot try all possible combinations.

A key length of 80 bits is generally considered the minimum for strong security with symmetric encryption algorithms. 128-bit keys are commonly used and considered very strong. See the key size article for a more complete discussion.

The keys used in public key cryptography have some mathematical structure. For example, public keys used in the RSA system are the product of two prime numbers. Thus public key systems require longer key lengths than symmetric systems for an equivalent level of security. 3072 bits is the suggested key length for systems based on factoring and integer discrete logarithms which aim to have security equivalent to a 128-bit symmetric cipher. Elliptic curve cryptography may allow smaller-size keys for equivalent security, but these algorithms have only been known for a relatively short time and current estimates of the difficulty of searching for their keys may not survive. As of 2004, a message encrypted using a 109-bit key elliptic curve algorithm had been broken by brute force.[2] The current rule of thumb is to use an ECC key twice as long as the symmetric key security level desired. Except for the random one-time pad, the security of these systems has not (as of 2008) been proven mathematically, so a theoretical breakthrough could make everything one has encrypted an open book. This is another reason to err on the side of choosing longer keys.

To prevent a key from being guessed, keys need to be generated truly randomly and contain sufficient entropy. The problem of how to safely generate truly random keys is difficult, and has been addressed in many ways by various cryptographic systems. There is an RFC on generating randomness (RFC 4086, Randomness Requirements for Security). Some operating systems include tools for "collecting" entropy from the timing of unpredictable operations such as disk drive head movements. For the production of small amounts of keying material, ordinary dice provide a good source of high-quality randomness.
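
In Python, for example, the standard-library secrets module draws key material from the operating system's cryptographically secure random source (a minimal sketch; the dice arithmetic is my own illustration):

import math, secrets

key = secrets.token_bytes(32)        # 256 bits from the OS CSPRNG
print(key.hex())
# Dice as an entropy source: each roll of a fair six-sided die yields
# log2(6) ~ 2.58 bits, so about 100 rolls are enough for a 256-bit key.
print(math.ceil(256 / math.log2(6)), "rolls needed")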

For most computer security purposes and for most users, “key” is not synonymous with “password” (or “passphrase”), although a password can in fact be used as a key. The primary practical difference between keys and passwords is that the latter are intended to be generated, read, remembered, and reproduced by a human user (although nowadays the user may delegate those tasks to password management software). A key, by contrast, is intended for use by the software that is implementing the cryptographic algorithm, and so human readability etc. is not required. In fact, most users will, in most cases, be unaware of even the existence of the keys being used on their behalf by the security components of their everyday software applications.

If a password is used as an encryption key, then in a well-designed crypto system it would not be used as such on its own. This is because passwords tend to be human-readable and, hence, may not be particularly strong. To compensate, a good crypto system will use the password-acting-as-key not to perform the primary encryption task itself, but rather to act as an input to a key derivation function (KDF). That KDF uses the password as a starting point from which it will then generate the actual secure encryption key itself. Various methods such as adding a salt and key stretching may be used in the generation.
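
A minimal sketch of such a KDF using Python's standard library (the password, salt size, and iteration count are illustrative choices, not recommendations from the article):

import hashlib, secrets

password = b"correct horse battery staple"
salt = secrets.token_bytes(16)        # random salt, stored alongside the ciphertext

# Key stretching: many PBKDF2-HMAC-SHA256 iterations slow down password guessing.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
print(key.hex())                      # 256-bit key derived from the password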

Read more:
Key (cryptography) – Wikipedia

Read More..

Quantum Computing Explained | What is Quantum Computing?

In this series, Life’s Little Mysteries explains complex subjects in exactly 200 words.

Ordinary computers manipulate "bits" of information, which, like light switches, can be in one of two states (represented by 1 or 0). Quantum computers manipulate "qubits": units of information stored in subatomic particles, which, by the bizarre laws of quantum mechanics, may be in states |1> or |0>, or any "superposition" (linear combination) of the two. As long as the qubit is left unmeasured, it embodies both states at once; measuring it "collapses" it from the superposition to one of its terms. Now, suppose a quantum computer has two qubits. If they were bits, they could be in only one of four possible states (00, 01, 10, 11). A pair of qubits also has four states (|00>, |01>, |10>, |11>), but it can also exist in any combination of all four. As you increase the number of qubits in the system, you exponentially increase the amount of information they can collectively store. Thus, one can theoretically work with myriad information simultaneously by performing mathematical operations on a system of unmeasured qubits (instead of probing one bit at a time), potentially reducing computing times for complex problems from years to seconds. The difficult task is to efficiently retrieve information stored in qubits, and physicists aren't there yet.
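
A tiny NumPy simulation (my illustration, not part of the original 200-word explainer) of that "collapse" for a single qubit in an equal superposition:

import numpy as np

state = np.array([1, 1], dtype=complex) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
probs = np.abs(state) ** 2                              # Born rule: [0.5, 0.5]

rng = np.random.default_rng(1)
counts = {0: 0, 1: 0}
for _ in range(1000):
    counts[rng.choice(2, p=probs)] += 1                 # each measurement collapses to 0 or 1
print(counts)                                           # roughly 500 each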

Follow Natalie Wolchover on Twitter @nattyover. Follow Life's Little Mysteries on Twitter @llmysteries, then join us on Facebook.

Go here to read the rest:
Quantum Computing Explained | What is Quantum Computing?

Read More..