
Hitachi rack servers get VMware Cloud treatment – The Register

There's some new wine in Hitachi's Unified Compute Platform bottles of converged and hyperconverged servers: VMware Cloud Foundation software has been added to its converged rack-scale (RS) product, and Xeon SP processors to its hyperconverged systems.

There is no Xeon SP upgrade news for its CB2500 and CB500 blade chassis systems, with their CB520H (E5-2660 CPU) and CB520X (dual E7-8800 CPUs) blades.

The existing rack-scale system comes with two node types: a 2U single node (1 or 2 Xeon E5-2600 v3 CPUs) and a 2U four-node (dual Xeon E5-2600 v3 CPUs). Hitachi's new turnkey Unified Compute Platform RS (UCP RS) is described as a fully integrated, software-defined data centre (SDDC) rack-scale platform, based on VMware Cloud Foundation with hybrid public/private cloud use in mind.

Gartner describes VMware Cloud Foundation* as "an application-independent, common virtual data center infrastructure that can run atop an existing private data center infrastructure, a public cloud service or a combination of the two."

Customers can deploy the VMware SDDC stack or build their own using the Hitachi vSAN-ready node and VMware software.

Hitachi UCP RS (rack features diagram)

The V210 is a hybrid disk/flash node and the V210F is an all-flash node. Both can use one or two E5-2699 v4 22-core, E5-2680 v4 14-core or E5-2650 v4 12-core CPUs. The V210 can also use one or two E5-2620 v4 8-core processors, and the V210F one or two E5-2650 v3 10-core processors.

Hitachi claims its updated RS uniquely automates provisioning, managing and monitoring for SDDCs.

A fully populated rack can be brought into operation in under five hours.

Find out more about UCP RS here. We expect Xeon SP processor upgrades to ripple through the UCP RS systems in the next few months.

Hitachi UCP RS products are generally available to customers and partners in all regions.

Hitachi's UCP HC line, like the existing HCs, embraces both hybrid disk/flash and all-flash systems.

Hitachi UCP HC product

Think of the HC products as being based on or similar to the rack-scale nodes. In the hybrid disk/flash line there are V210 2U 1-node and V240 2U 4-node systems, with the equivalent all-flash versions being the V210-F and V240-F. All of them use Xeon E5-2600 series processors.

The latest HCs feature Xeon SP processors, and support NVMe flash drives. There are five models:

The V120F supports up to 12 2.5-inch drive slots.

Hitachi UCP HC V120F

The all-flash vSAN storage cache can use NVMe, SAS or SATA SSDs, with Intel DC P3700 or P3600 NVMe SSDs as options.

For all-flash vSAN, capacity storage options include Samsung PM863a SATA SSDs (3.8TB, 1.92TB and 960GB) or Intel's S4500 SATA SSDs.

VMware integration includes vCenter Server, vRealize Orchestrator, Operations, Log Insight and Automation. Hitachi says these systems feature less than 32 seconds of downtime [a year], multi-site business continuity, local or remote replication, non-disruptive upgrades and maintenance, and per-VM availability policies that can be changed on the fly.

Fault domains can be created to increase availability. To protect against one rack failure you need two replicas and a witness across three failure domains. A stretched active-active vSAN cluster can be split across two data centre sites with automated failover and zero data loss.

There are both deduplication and compression for space efficiency, plus RAID-5 and RAID-6 inline erasure coding.

The UCP HC systems are on VMware's vSAN hardware compatibility list. Hitachi will say these HC systems are the best choice for VMware shops, with a level of VMware integration high enough to have won it a VMware partner of the year award.

*Market Trends: Software-Defined Infrastructure Who Can Benefit? (Gartner, June 2017)


Follow this link:
Hitachi rack servers get VMware Cloud treatment The Register - The Register


CenturyLink enhances VMware-based DCC platform, touts software-defined data center approach – FierceTelecom

CenturyLink is giving businesses the option to migrate to a hybrid cloud environment that balances public cloud agility with the security and dedicated infrastructure of a private offering with its DCC (Dedicated Cloud Compute) Foundation.

Based on VMware Cloud Foundation and high-performance HPE ProLiant servers, CenturyLink's DCC Foundation is a fully private service that offers customers an updated architecture, moving to a converged, software-defined data center (SDDC) model. The aim is to help businesses overcome the challenges of lengthy provisioning, configuration errors and costly processes by automating labor-intensive tasks and operationalizing private cloud on demand.

RELATED: CenturyLink's Hussain: Network virtualization is a cultural transformation

Available now to customers and partners in North America, Europe and Asia Pacific, this enhanced dedicated cloud service is built on one of the largest integrated solutions networks and supported by thousands of experienced CenturyLink support staff with advanced certifications.

The new enhanced service is focused on helping businesses and other partners get more out of a managed cloud service. Built on VMware Cloud Foundation, DCC Foundation is a cloud infrastructure platform that accelerates IT's time-to-market by providing a factory-integrated cloud infrastructure stack.

The platform includes a complete set of software-defined services for computing, storage, networking and security. DCC Foundation combines VMware Cloud Foundation with HPE ProLiant servers and automation and management capabilities to deliver an enterprise-grade, globally available service with consistent customer experience across private and public clouds.

CenturyLink said that integration of DCC Foundation with CenturyLink Cloud Application Manager further enhances support of multitiered hybrid-cloud configurations.

"The software-defined data center approach enables the flexible delivery of enterprise applications connected to our global network across 32 hosting locations on four continents, said David Shacochis, VP of Hybrid IT product management for CenturyLink, in a release.

As a multielement platform, CenturyLink's DCC Foundation delivers on four main elements to assist businesses in their migration to hybrid cloud environments:

Reduced security risks: Microsegmentation allows for security policies to be applied across the data center, with granular firewalling by workload. IT teams can define security policies and controls for each workload based on dynamic security groups, enabling immediate responses to threats inside the data center and enforcement down to the individual virtual machine.

Global deployment options: Customizable configurations can be deployed across multiple data centers in North America, Europe and Asia Pacific, with scale options from four nodes to multiple 32-node configurations. Customers can use Managed Services Anywhere from CenturyLink to provide application life-cycle management through CenturyLink Cloud Application Manager.

Agile enterprise applications with hybrid cloud: Businesses can replicate entire application environments to remote data centers for disaster recovery, move them around their corporate data centers, or deploy them in a hybrid cloud environment without disrupting the applications. DCC Foundation leverages vCloud Availability to facilitate self-service migration of VMware workloads from customers' present environments to their new CenturyLink environment.

Predictable performance: DCC Foundation is built on best-of-breed Hewlett Packard Enterprise (HPE) ProLiant servers, helping to ensure predictable, high-performance operations for hyperconverged infrastructures on which to deliver critical business applications.

"With this service, businesses can rapidly deploy new workloads and innovations in an easily scalable, highly secure environment," Shacochis said.

CenturyLink's expanded managed private cloud service builds on the strategic collaboration the service provider previously began with VMware as a way to assist enterprise customers in their transition to a hybrid cloud architecture. Earlier this year, CenturyLink worked with VMware to deepen the level of SDDC technologies available to enterprise customers. This collaboration will help preserve and enhance customer investments in on-premises data centers and extend strategic workloads and applications to the cloud.

Read the original post:
CenturyLink enhances VMware-based DCC platform, touts software-defined data center approach - FierceTelecom


Bitcoin vs. The NSA's Quantum Computer – Bitcoin Not Bombs

Yesterday we learned from new Snowden leaks that the NSA is working to build a quantum computer. The Washington Post broke the story with the rather sensationalist headline, "NSA seeks to build quantum computer that could crack most types of encryption."

Naturally, this raised much concern among the new Bitcoiners on Reddit and Facebook. The reality, however, is there wasn't much disclosed that people didn't already know or expect. We've known that the NSA has openly sponsored quantum computing projects in the past. The fact that it has an in-house project called Penetrating Hard Targets is new, but not really unexpected. We learned this project has a $79.7 million budget, but quite frankly that isn't that much. And as The Post notes, the documents don't reveal how far along they are in their research, and "It seems improbable that the NSA could be that far ahead of the open world without anybody knowing it."

Nevertheless, this seems like a good time to discuss the implications of quantum computing with respect to the future of Bitcoin.

Let's start with a little primer for those who are unfamiliar with quantum computing. Today's computers encode information into bits: binary digits, either 0 or 1. These bits are usually stored on your computer's hard disk by changing the polarity of magnetization on a tiny section of a magnetic disk, or stored in RAM or flash memory represented by two different levels of charge in a capacitor. Strings of bits can be combined to produce data that is readable by humans. For example, 01000001 represents the letter A in the extended ASCII table. Any calculations that need to be performed with the bits are done one at a time.
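If you want to check that encoding yourself, a one-line Python sketch (my own illustration, not from the original post) reproduces it:

```python
# Print the 8-bit binary encoding of the ASCII letter 'A' (code 65).
print(format(ord("A"), "08b"))  # -> 01000001
```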

Quantum computers, on the other hand, use the various states of quantum particles to represent quantum bits (qubits). For example, a photon spinning vertically could represent a 1, while a photon spinning horizontally could represent a 0. But photons can also exist in a rather weird state called superposition. That is, while they can spin vertically, horizontally, and diagonally, they can also spin in all those directions at the same time. Don't ask me how that's possible; it's the bizarro world of quantum mechanics.

What this means for practical purposes is that while a traditional computer can perform only one calculation at a time, a quantum computer could theoretically perform millions of calculations all at once, improving computing performance by leaps and bounds.

Now when journalists write things like, "In room-size metal boxes secure against electromagnetic leaks, the National Security Agency is racing to build a computer that could break nearly every kind of encryption used to protect banking, medical, business and government records around the world," it naturally makes people think it's the end of cryptography as we know it. But that isn't the case.

Let's consider the type of attack most people think of when they hear of quantum computers: a brute force attack. This is where you just keep checking different keys until you eventually find the right one. Given enough time, you could brute force any encryption key. The problem is it would take billions or trillions of years for a modern computer to brute force a long encryption key. But surely quantum computers could do this, right? This is from Bruce Schneier's 1996 book, Applied Cryptography:

One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than kT, where T is the absolute temperature of the system and k is the Boltzmann constant. (Stick with me; the physics lesson is almost over.)

Given that k = 1.38×10⁻¹⁶ erg/Kelvin, and that the ambient temperature of the universe is 3.2 Kelvin, an ideal computer running at 3.2K would consume 4.4×10⁻¹⁶ ergs every time it set or cleared a bit. To run a computer any colder than the cosmic background radiation would require extra energy to run a heat pump.

Now, the annual energy output of our sun is about 1.21×10⁴¹ ergs. This is enough to power about 2.7×10⁵⁶ single bit changes on our ideal computer; enough state changes to put a 187-bit counter through all its values. If we built a Dyson sphere around the sun and captured all its energy for 32 years, without any loss, we could power a computer to count up to 2¹⁹². Of course, it wouldn't have the energy left over to perform any useful calculations with this counter.

But that's just one star, and a measly one at that. A typical supernova releases something like 10⁵¹ ergs. (About a hundred times as much energy would be released in the form of neutrinos, but let them go for now.) If all of this energy could be channeled into a single orgy of computation, a 219-bit counter could be cycled through all of its states.

These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be unfeasible until computers are built from something other than matter and occupy something other than space.

To recap, if you could harness all the energy from a supernova and channel it into an ideal computer, you still couldn't brute force a typical encryption key. Needless to say, if you are going to break commercial encryption algorithms you're going to have to attack the underlying math.
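As a rough sanity check on the quoted figures (my own back-of-envelope sketch, not part of Schneier's text), the arithmetic works out about as advertised:

```python
import math

# Thermodynamic limit quoted above: flipping one bit costs at least k*T of energy.
k = 1.38e-16          # Boltzmann constant, erg/Kelvin
T = 3.2               # background temperature of the universe, Kelvin
per_bit = k * T       # ~4.4e-16 erg per bit change

sun_per_year = 1.21e41                   # annual solar output, erg
flips_per_year = sun_per_year / per_bit  # ~2.7e56 bit changes per year
print(math.log2(flips_per_year * 32))    # ~192 bits for 32 years of Dyson-sphere output

supernova = 1e51                         # erg
print(math.log2(supernova / per_bit))    # ~220 bits, the same ballpark as the 219 quoted
```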

Today, most public-key encryption algorithms rely on either the difficulty of integer factorization (RSA) or the difficulty of discrete logarithm problems (DSA/El Gamal, and Elliptic Curve Cryptography). In 1994, mathematician Peter Shor demonstrated an efficient quantum algorithm for factoring and calculating discrete logarithms that would break public-key encryption when used with a quantum computer. This wouldn't break all types of cryptography, however. Traditional symmetric-key cryptography and cryptographic hash functions would still be well out of range of quantum search algorithms.

Impact on Bitcoin

Bitcoin uses several cryptographic algorithms: the Elliptic Curve Digital Signature Algorithm (ECDSA) for signing transactions, and the hash functions SHA-256 and RIPEMD160. If the NSA succeeds in developing a cryptologically useful quantum computer, ECDSA would fall while SHA-256 and RIPEMD160 would remain secure.

The good news is that ECDSA should be relatively easy to swap out if/when it becomes compromised. It would be much worse if SHA-256 were to go down. If you're not in tune with the mechanics of Bitcoin, SHA-256 is used in Bitcoin mining. At the moment, billions of dollars have been spent on custom computer chips that do nothing but perform SHA-256 calculations. If SHA-256 were to go down, those custom chips would turn into expensive paperweights. If that happened suddenly (as opposed to allowing for a smooth transition to another hash function), it would be pretty catastrophic. The security in Bitcoin relies on the fact that it would be too difficult and expensive for an attacker to command 51% of the processing power in the network. A sudden switch to another hash function would significantly compromise security and likely cause the price to tank. But as I mentioned, Bitcoiners can rest easy because SHA-256 isn't threatened by quantum computers (although that doesn't mean someone won't find a feasible attack in the future).

Back to ECDSA. This algorithm generates a public/private key pair. In Bitcoin, you keep the private key secret and use it to sign your transactions, proving to the network that you own the bitcoins associated with a particular Bitcoin address. The network verifies your signature by using the corresponding public key. A functioning quantum computer would allow the NSA to derive anyone's private key from their public key. So does this mean that the NSA would be able to steal everyone's bitcoins? Not exactly.

Here's the thing: in Bitcoin your public key isn't (initially) made public. While you share your Bitcoin address with others so that they can send you bitcoins, your Bitcoin address is only a hash of your public key, not the public key itself. What does that mean in English? A hash function is a one-way cryptographic function that takes an input and turns it into a cryptographic output. By one-way I mean that you can't derive the input from the output. It's kind of like encrypting something then losing the key. To demonstrate, let's calculate the RIPEMD160 hash of "Hello World".
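Here is a minimal Python sketch of that calculation (my own illustration; it assumes your hashlib build exposes RIPEMD160, which depends on the underlying OpenSSL):

```python
import hashlib

# One-way hash demo: trivial to compute, infeasible to invert.
digest = hashlib.new("ripemd160", b"Hello World").hexdigest()
print(digest)  # a 160-bit (40 hex character) fingerprint of "Hello World"
```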

A Bitcoin address is calculated by running your public key through several hash functions as follows:
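The original post illustrated these steps with a table; the sketch below is my own reconstruction of the standard P2PKH recipe (SHA-256, then RIPEMD160, a version byte, a double-SHA-256 checksum, and Base58 encoding), with a made-up public key purely for illustration:

```python
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    # Each leading zero byte is conventionally encoded as '1'.
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def address_from_pubkey(pubkey: bytes) -> str:
    h160 = hashlib.new("ripemd160", hashlib.sha256(pubkey).digest()).digest()
    payload = b"\x00" + h160                  # 0x00 version byte for a mainnet address
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return base58(payload + checksum)

# Dummy compressed public key, just to show the pipeline end to end.
print(address_from_pubkey(b"\x02" + b"\x11" * 32))
```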

All of that is a complicated way of saying that while an attacker with a quantum computer could derive the private key from the public key, he couldn't derive the public key from the Bitcoin address, since the public key was run through multiple quantum-resistant one-way hash functions.

However, you do have to broadcast your public key to the network to make a transaction, otherwise there is no way to verify your signature. What this implies is that in the face of an NSA quantum computer, all Bitcoin addresses would have to be considered one-time-use addresses. Whenever you make a transaction, you would have to send any excess bitcoin to a newly generated address as change. If you didn't remove the entire balance from your address, the NSA could steal the remainder. While this is inconvenient, it would buy the developers enough time to swap out ECDSA for a quantum-resistant digital signature scheme.

Post-Quantum Digital Signatures

This section is going to be a little technical, but hopefully not too difficult for beginners to follow. There are several different types of post-quantum public-key encryption systems: lattice-based, code-based, multivariate-quadratic, and hash-based. As I already mentioned, cryptographic hash functions are presumed to be quantum-resistant. Given that, it should be possible to build a replacement digital signature scheme for ECDSA using only hash functions. Let's take a look at these hash-based systems, since they are easy to understand and the hash functions they're based on are already widely used.

Lamport One-Time Signature Scheme (LOTSS)

To begin, we're going to want to use a hash function with at least a 160-bit output to provide adequate security. RIPEMD160 or SHA-1 should work. To generate the public/private key pair, we'll start by generating 160 pairs of random numbers (320 numbers total). This set of random numbers will serve as the private key.

To generate the public key, we'll take the RIPEMD160 hash of each of the 320 random numbers. (Note: I'm going to have to cut the numbers in half to fit them in this table.)

Now, to sign a message with a Lamport signature, we'll first create a message digest by hashing the message with RIPEMD160 (in Bitcoin we would hash the transaction), then converting the output to binary. We'll once again use "Hello World" as an example.

Next, we'll match up each binary digit with each pair in our private key. If the bit is 0, we will add the first number in the pair to our signature; if it is 1, we'll add the second.

Finally, to verify the signature is valid, you'll first create a message digest using the same process as above. Then hash each of the 160 numbers in the signature with RIPEMD160. Finally, check to make sure these hashes match the hashes in the public key that correspond with the message digest.
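Putting the key generation, signing, and verification steps together, here is a compact sketch of the scheme as described (my own illustration, using SHA-1 from hashlib as the 160-bit hash, since the post says either RIPEMD160 or SHA-1 will do):

```python
import hashlib
import os

BITS = 160

def H(data: bytes) -> bytes:
    # Any 160-bit hash works here; SHA-1 is used for portability.
    return hashlib.sha1(data).digest()

def keygen():
    # Private key: 160 pairs of random 160-bit numbers. Public key: their hashes.
    sk = [(os.urandom(20), os.urandom(20)) for _ in range(BITS)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def digest_bits(msg: bytes):
    d = int.from_bytes(H(msg), "big")
    return [(d >> (BITS - 1 - i)) & 1 for i in range(BITS)]

def sign(msg: bytes, sk):
    # Reveal one number from each pair, chosen by the corresponding digest bit.
    return [pair[bit] for pair, bit in zip(sk, digest_bits(msg))]

def verify(msg: bytes, sig, pk):
    return all(H(s) == pair[bit] for s, pair, bit in zip(sig, pk, digest_bits(msg)))

sk, pk = keygen()
sig = sign(b"Hello World", sk)
print(verify(b"Hello World", sig, pk))  # True
print(verify(b"Hello Word", sig, pk))   # False (with overwhelming probability)
```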

So there you have it: a quantum-resistant digital signature scheme using only hash functions. Only the person in possession of the 320 random numbers in the private key could have generated a signature that hashes to the public key when compared to the digest. However, while this scheme does in fact work, it isn't without problems. First, as the name suggests, LOTSS signatures can only be used once. The reason for this is that you are essentially releasing half of your private key with each signature. If you were to sign multiple messages, your private key would be completely compromised. If this were used in Bitcoin, you still could only use each Bitcoin address once.

Equally problematic, the key sizes and signatures are ridiculously large. The private and public keys are 6,400 bytes, compared to 32 and 64 bytes for the ECDSA private and public keys. And the signature is 3,200 bytes, compared to 71-73 bytes. Bitcoin already has issues with scalability; increasing the key and signature sizes by that much would make the problems much worse.

The Lamport private key can be dramatically reduced in size by generating the random numbers from a single random seed. To do this you would just take RIPEMD160(seed + n), where n starts at 1 and gets incremented to 320. Unfortunately, the size of the private key isn't so much the problem as the size of the public key and signature. There is another one-time signature scheme called Winternitz signatures that has the potential to reduce key size, but at the cost of extra hash operations. Fortunately, we aren't done yet.
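A quick sketch of that seed trick (again my own illustration, with SHA-1 standing in for RIPEMD160 and a toy seed):

```python
import hashlib

def private_key_from_seed(seed: bytes):
    # Derive the 320 private-key numbers as H(seed || n) for n = 1..320,
    # then group them back into the 160 pairs the scheme expects.
    nums = [hashlib.sha1(seed + n.to_bytes(2, "big")).digest() for n in range(1, 321)]
    return list(zip(nums[0::2], nums[1::2]))

sk = private_key_from_seed(b"only this short seed needs to be stored")
print(len(sk))  # 160 pairs, regenerated on demand from the seed
```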

Merkle Signature Scheme (MSS)

The Merkle Signature Scheme combines a one-time signature scheme (either Lamport or Winternitz) with a Merkle tree (also called a hash tree). This allows us to use one public key to sign many messages without worrying about compromising security. Let's see how this works.

We'll start by generating a number of Lamport key pairs. The number we'll generate will be equal to the number of signatures we want to get out of a single public key; let's just say eight as an example. Next we'll calculate a Merkle tree using each of the eight Lamport public keys. To do this, the public keys are paired together, hashed, then the hashes are concatenated together and hashed again. This process is repeated until something looking like an NCAA Tournament bracket is formed.

The hash at the very top of the tree (the Merkle root) is the Merkle public key. This massively reduces the public key size from 6,400 bytes in the Lamport signature to only 20 bytes, the length of a single RIPEMD160 hash.

To calculate a signature, you select one of your Lamport key pairs and sign the message digest just like before. This time, the signature will be the Lamport signature plus each of the leaves in the Merkle tree leading from the public key to the root.

In the above diagram the signature would be:

To verify the Merkle signature, one would just verify the Lamport signature, then check to make sure the leaves hash to the Merkle public key. If so, the signature is valid.
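Here is a minimal sketch of the Merkle half of the scheme (my own illustration; the leaves below are stand-ins for hashed Lamport public keys). It builds the root, collects the sibling hashes a signer would ship alongside the Lamport signature, and then verifies that path:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()  # any 160-bit hash works

def merkle_root_and_path(leaves, index):
    # Returns the Merkle root plus the sibling hashes needed to climb
    # from leaves[index] up to the root.
    level, path = leaves[:], []
    while len(level) > 1:
        path.append(level[index ^ 1])  # sibling at this level
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], path

def verify_path(leaf, index, path, root):
    node = leaf
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

leaves = [H(f"lamport pubkey {i}".encode()) for i in range(8)]
root, path = merkle_root_and_path(leaves, 5)
print(verify_path(leaves[5], 5, path, root))  # True
```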

There are several advantages of the MSS over LOTSS. First, the public and private keys are reduced to 20 bytes from 6,400 bytes. Also, you can create multiple signatures per public key. But there is still a major drawback. The more messages you want to sign with your public key, the larger the Merkle tree needs to be. The larger the tree, the larger the signature. Eventually the signature starts to become impractically large, especially for use in Bitcoin. This leads us to the final post-quantum signature schemes we'll discuss.

CMSS And GMSS

MSS has been known for over 30 years and has remained essentially unscathed despite extensive cryptanalysis. However, most of the improvements to it have come in the last five years or so. In my brief survey of the literature, it seems a couple of signature schemes by Buchmann, Dahmen, Klintsevich, et al., are the most promising of the lot. These are the improved Merkle Signature Scheme (CMSS) and the Generalized Merkle Signature Scheme (GMSS) (links to the academic papers can be found here and here). Two of the cryptographers behind these signature schemes are authors of a textbook on post-quantum cryptography.

Both CMSS and GMSS offer substantially improved signature capacity with reasonable signature lengths and verification times. GMSS in particular offers virtually unlimited signature capacity at 2⁸⁰ signatures, but with slower performance in other areas compared to CMSS. They accomplish this by breaking the system up into separate Merkle trees of 2ⁿ leaves. A signature from the root tree is used to sign the public key of the tree below it, which signs the tree below it, and so on.

So it seems to me that either of these signature schemes would be a serious candidate to replace Bitcoin's ECDSA in a post-quantum world. But why not just go ahead and implement it now rather than wait until the NSA springs a surprise on us? Let's do a little comparison and take a look at the time (t) and memory (m) requirements for each. CMSS variants have signature capacities of 2²⁰, 2³⁰, and 2⁴⁰, while GMSS has signature capacities of 2⁴⁰ and 2⁸⁰. I would assume that 2⁴⁰, if not 2³⁰, would be plenty for Bitcoin, as I can't imagine someone would make more than a billion or a trillion transactions from a single address. Also, GMSS can be optimized for faster verification times, but at the expense of a 25% larger signature.

So from the table we can see that CMSS and GMSS actually perform better than ECDSA in public key size and signing time. However, in the critical variable that will affect scalability, signature size, they don't perform nearly as well. Verification time for CMSS is actually better than ECDSA, which would improve scalability, and the optimized variant of GMSS is relatively close, but signature size for both would definitely be an issue. Consider some very rough estimates: the average transaction size is currently about 500 bytes; either CMSS or GMSS would push it up over 4,000 bytes. That means you could be looking at an increase in the size of the block chain of upwards of 700%. The block chain is currently at 12.7 gigabytes. Had Bitcoin employed either of these signature schemes from the beginning, it would be over 100 gigabytes right now. Signature and key size isn't a problem that is unique to hash-based signature schemes either; most of the others are in the same ballpark.
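Those back-of-envelope figures are easy to reproduce (my own quick check, using the post's rough numbers):

```python
# Rough scaling check for the signature-size argument above.
avg_tx_now, avg_tx_hash_based = 500, 4000   # bytes per transaction, per the post
growth = avg_tx_hash_based / avg_tx_now     # ~8x, i.e. an increase of ~700%
chain_gb_now = 12.7
print(f"{(growth - 1) * 100:.0f}% larger, roughly {chain_gb_now * growth:.0f} GB instead of {chain_gb_now} GB")
```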

Also, note the insane keygen time for GMSS. If you left your computer running for 24 straight hours, you would have only generated 3 Bitcoin addresses, and that's using the optimized variant with larger signatures! I suspect, however, that an ASIC hardware wallet would significantly improve that performance. Keygen for CMSS isn't that bad.

So in other words, Bitcoin can't adopt one of these signature schemes at the moment if we want to scale beyond present capacity. However, by the time quantum computers become viable, Moore's law will likely have brought the cost of storage and processing power down to the point where CMSS, GMSS, or one of the other types of post-quantum signature schemes could easily be merged into Bitcoin. Until then, let's not lose any sleep over Penetrating Hard Targets.

Original content by Chris, copyleft, tips welcome


Excerpt from:
Bitcoin vs. The NSAs Quantum Computer Bitcoin Not Bombs


qBitcoin: A Way of Making Bitcoin Quantum-Computer Proof? – IEEE Spectrum

A new quantum cryptography-based Bitcoin standard has been proposed that could harden the popular cryptocurrency against the advent of full-fledged quantum computers. Bitcoin as it now exists involves traditional public key cryptography and thus could conceivably be hacked by a future quantum computer strong enough to break it. However, quantum cryptography, which is based not on difficult math problems but the fundamental laws of physics, is expected to be strong enough to withstand even quantum computer-powered attacks.

The proposal, dubbed qBitcoin, posits transmission of quantum cryptographic keys between a remitter and a receiver of the eponymously named cryptocurrency, qBitcoin. The system would use provably secure protocols such as the BB84 quantum key distribution scheme.

To exchange qBitcoin, then, requires that there be a transmission network in place that can send and receive bits of quantum information, qubits. And that is no mean feat, considering it typically involves preserving the polarization states of individual photons across thousands of kilometers. To date, there are five known quantum key distribution networks in the United States, Switzerland, Austria, and Japan. China is working on its own massive 2,000-km link as well. And a number of satellite-to-satellite and satellite-to-ground quantum key distribution networks are also being developed and prototyped.

Which is to say that qBitcoin, or something like it, could not be scaled up today. But if the quantum computer singularity is approaching, in which a powerful enough machine could threaten existing cryptography standards, quantum cryptography would be an essential ingredient of the post-Y2Q age. So existing quantum key distribution networks might at least serve as outposts in a burgeoning global quantum network, like Western Union stations in the early days of the telegraph.

Some things about qBitcoin might appear the same to any Bitcoin user today. "Bitcoin is a peer to peer system, and qBitcoin is also peer to peer," says Kazuki Ikeda, qBitcoin's creator and a PhD student in physics at Osaka University in Japan. He says that compared to Bitcoin, qBitcoin would offer comparable or perhaps enhanced levels of privacy, anonymity, and security. (That said, his paper that makes this claim is still under peer review.)

However, the lucrative profession of Bitcoin mining, under Ikeda's protocol, would be very different than what it is today. Transactions would still need to be verified and secured. But instead of today's system of cryptographic puzzles, qBitcoin's security would rely on a 2001 proposal for creating a quantum digital signature. Such a signature would rely on the laws of quantum physics to secure the qBitcoin ledger from tampering or hacking.

Ikeda's proposal is certainly not the first to suggest a quantum-cryptographic improvement on classical-cryptography-based digital currencies. Other proposals in 2010, 2016, and even earlier this year have also offered up variations on the theme. All work to mitigate the danger large-scale quantum computers would represent to Bitcoin.

Of course, not every solution to the quantum singularity is as promising as every other. A person going by the handle amluto criticized Ikeda's qBitcoin proposal on a prominent message board last week. (amluto claimed to be an author of one of the previous quantum currency proposals, from 2010; presumably the 2010 proposal's co-author Andrew Lutomirski, although IEEE Spectrum was unable to confirm this supposition at press time.)

This is nonsense. It's like saying that you can transmit a file by mailing a USB stick, which absolutely guarantees that you, the sender, no longer have the original file. That's wrong: all that mailing a USB stick guarantees is that you don't have the USB stick any more, not that you didn't keep a copy of the contents. Similarly, quantum teleportation eats the input state but says nothing about any other copies of the input state that may exist.

Ikeda says he disagrees with the analogy. The point, he says, is that there are no other copies of the "input state," as it's called above; in other words, of the quantum keys that secure qBitcoin. So, Ikeda says, qBitcoin is safe just like Bitcoin is safe today.

But one day, thanks to quantum computers, Bitcoin will no longer be safe. Someone will need to save it. And, no matter who devises the winning protocol, the thing that threatens Bitcoin may in fact also be the thing that comes to its rescue: the cagey qubit.

Read more:
qBitcoin: A Way of Making Bitcoin Quantum-Computer Proof? - IEEE Spectrum


Hype and cash are muddying public understanding of quantum … – Phys.Org

An ion trap used for quantum computing research in the Quantum Control Laboratory at the University of Sydney. Credit: Michael Biercuk, Author provided

Special piping and wiring supports quantum research in the Sydney Nanoscience Hub. Credit: AINST, Author provided

It's no surprise that quantum computing has become a media obsession. A functional and useful quantum computer would represent one of the century's most profound technical achievements.

For researchers like me, the excitement is welcome, but some claims appearing in popular outlets can be baffling.

A recent infusion of cash and attention from the tech giants has woken the interest of analysts, who are now eager to proclaim a breakthrough moment in the development of this extraordinary technology.

Quantum computing is described as "just around the corner", simply awaiting the engineering prowess and entrepreneurial spirit of the tech sector to realise its full potential.

What's the truth? Are we really just a few years away from having quantum computers that can break all online security systems? Now that the technology giants are engaged, do we sit back and wait for them to deliver? Is it now all "just engineering"?

Why do we care so much about quantum computing?

Quantum computers are machines that use the rules of quantum physics (in other words, the physics of very small things) to encode and process information in new ways.

They exploit the unusual physics we find on these tiny scales, physics that defies our daily experience, in order to solve problems that are exceptionally challenging for "classical" computers. Don't just think of quantum computers as faster versions of today's computers think of them as computers that function in a totally new way. The two are as different as an abacus and a PC.

They can (in principle) solve hard, high-impact questions in fields such as codebreaking, search, chemistry and physics.

Chief among these is "factoring": finding the two prime numbers, divisible only by one and themselves, which when multiplied together reach a target number. For instance, the prime factors of 15 are 3 and 5.

As simple as it looks, when the number to be factored becomes large, say 1,000 digits long, the problem is effectively impossible for a classical computer. The fact that this problem is so hard for any conventional computer is how we secure most internet communications, such as through public-key encryption.
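A toy illustration of that asymmetry (my own sketch; real attacks use far better algorithms than trial division, but every known classical method still blows up as the numbers grow):

```python
def factor(n: int):
    # Naive trial division: fine for 15, hopeless for 1,000-digit numbers,
    # because the work grows with roughly the square root of n.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

print(factor(15))                # (3, 5)
print(factor(104729 * 1299709))  # two modest primes: still quick; RSA-sized numbers are another story
```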

Some quantum computers are known to perform factoring exponentially faster than any classical supercomputer. But competing with a supercomputer will still require a pretty sizeable quantum computer.

Money changes everything

Quantum computing began as a unique discipline in the late 1990s when the US government, aware of the newly discovered potential of these machines for codebreaking, began investing in university research.

The field drew together teams from all over the world, including Australia, where we now have two Centres of Excellence in quantum technology (the author is part of the Centre of Excellence for Engineered Quantum Systems).

But the academic focus is now shifting, in part, to industry.

IBM has long had a basic research program in the field. It was recently joined by Google, who invested in a University of California team, and Microsoft, which has partnered with academics globally, including the University of Sydney.

Seemingly smelling blood in the water, Silicon Valley venture capitalists also recently began investing in new startups working to build quantum computers.

The media has mistakenly seen the entry of commercial players as the genesis of recent technological acceleration, rather than a response to these advances.

So now we find a variety of competing claims about the state of the art in the field, where the field is going, and who will get to the end goal, a large-scale quantum computer, first.

The state of the art in the strangest of technologies

Conventional computer microprocessors can have more than one billion fundamental logic elements, known as transistors. In quantum systems, the fundamental quantum logic units are known as qubits, and for now, they mostly number in the range of a dozen.

Such devices are exceptionally exciting to researchers and represent huge progress, but they are little more than toys from a practical perspective. They are not near what's required for factoring or any other application they're too small and suffer too many errors, despite what the frantic headlines may promise.

For instance, it's not even easy to answer the question of which system has the best qubits right now.

Consider the two dominant technologies. Teams using trapped ions have qubits that are resistant to errors, but relatively slow. Teams using superconducting qubits (including IBM and Google) have relatively error-prone qubits that are much faster, and may be easier to replicate in the near term.

Which is better? There's no straightforward answer. A quantum computer with many qubits that suffer from lots of errors is not necessarily more useful than a very small machine with very stable qubits.

Because quantum computers can also take different forms (general purpose versus tailored to one application), we can't even reach agreement on which system currently has the greatest set of capabilities.

Similarly, there's now seemingly endless competition over simplified metrics such as the number of qubits. Five, 16, soon 49! The question of whether a quantum computer is useful is defined by much more than this.

Where to from here?

There's been a media focus lately on achieving "quantum supremacy". This is the point where a quantum computer outperforms its best classical counterpart, and reaching this would absolutely mark an important conceptual advance in quantum computing.

But don't confuse "quantum supremacy" with "utility".

Some quantum computer researchers are seeking to devise slightly arcane problems that might allow quantum supremacy to be reached with, say, 50-100 qubits, numbers reachable within the next several years.

Achieving quantum supremacy does not mean either that those machines will be useful, or that the path to large-scale machines will become clear.

Moreover, we still need to figure out how to deal with errors. Classical computers rarely suffer hardware faults; the "blue screen of death" generally comes from software bugs, rather than hardware failures. The likelihood of hardware failure is usually less than something like one in a billion-quadrillion, or 10⁻²⁴ in scientific notation.

The best quantum computer hardware, on the other hand, typically achieves only about one in 10,000, or 10⁻⁴. That's 20 orders of magnitude worse.

Is it all just engineering?

We're seeing a slow creep up in the number of qubits in the most advanced systems, and clever scientists are thinking about problems that might be usefully addressed with small quantum computers containing just a few hundred qubits.

But we still face many fundamental questions about how to build, operate or even validate the performance of the large-scale systems we sometimes hear are just around the corner.

As an example, if we built a fully "error-corrected" quantum computer at the scale of the millions of qubits required for useful factoring, as far as we can tell, it would represent a totally new state of matter. That's pretty fundamental.

At this stage, there's no clear path to the millions of error-corrected qubits we believe are required to build a useful factoring machine. Current global efforts (in which this author is a participant) are seeking to build just one error-corrected qubit to be delivered about five years from now.

At the end of the day, none of the teams mentioned above are likely to build a useful quantum computer in 2017 or 2018. But that shouldn't cause concern when there are so many exciting questions to answer along the way.

Explore further: Developing quantum algorithms for optimization problems

This article was originally published on The Conversation. Read the original article.

See more here:
Hype and cash are muddying public understanding of quantum ... - Phys.Org


High-Dimensional Quantum Encryption Takes Place in Real-World … – Photonics.com

OTTAWA, Ontario, Aug. 25, 2017 – A quantum-secured message containing more than one bit of information per photon has been sent through the air above the city of Ottawa, Ontario, Canada. According to scientists, this is the first time high-dimensional quantum encryption has been demonstrated with free-space optical communication in real-world conditions.

A research team from the University of Ottawa demonstrated 4D quantum encryption (so-called because each photon is encoded with two bits of information, providing the four possibilities of 00, 01, 10 or 11) over a free-space optical network spanning two buildings 0.3 kilometers apart.

Researchers have demonstrated sending messages in a secure manner using high dimensional quantum cryptography in realistic city conditions. Courtesy of SQO team, University of Ottawa.

One of the primary problems faced during any free-space experiment is dealing with air turbulence, which can distort the optical signal. For the tests, the researchers brought their laboratory optical setups to two different rooftops and covered them with wooden boxes to provide some protection from the elements. After much trial and error, they successfully sent messages secured with 4D quantum encryption over their intracity link. The messages exhibited an error rate of 11 percent, below the 19 percent threshold needed to maintain a secure connection.

The researchers compared 4D encryption with 2D, finding that, after error correction, they could transmit 1.6 times more information per photon with 4D quantum encryption, even with turbulence.

In addition to sending more information per photon, high-dimensional quantum encryption can tolerate more signal-obscuring noise before the security of the transmission is threatened. Noise can arise from turbulent air, failed electronics, detectors that don't work properly or from attempts to intercept the data.

"This higher noise threshold means that when 2D quantum encryption fails, you can try to implement 4D because it, in principle, is more secure and more noise resistant," said researcher Ebrahim Karimi.

As a next step, the researchers plan to implement their scheme into a network that includes three links that are about 5.6 kilometers apart, using adaptive optics to compensate for the turbulence. Eventually, the team hopes to link this network to one that already exists in the city.

"Our long-term goal is to implement a quantum communication network with multiple links but using more than four dimensions while trying to get around the turbulence," said researcher Alicia Sit.

The demonstration showed that it could one day be practical to use high-capacity, free-space quantum communication to create a highly secure link between ground-based networks and satellites.

"Our work is the first to send messages in a secure manner using high-dimensional quantum encryption in realistic city conditions, including turbulence," said Karimi. "The secure, free-space communication scheme we demonstrated could potentially link Earth with satellites, securely connect places where it is too expensive to install fiber, or be used for encrypted communication with a moving object, such as an airplane."

The research was published in Optica, a journal of The Optical Society (doi: 10.1364/OPTICA.4.001006).

Excerpt from:
High-Dimensional Quantum Encryption Takes Place in Real-World ... - Photonics.com


For the First Time Ever, Quantum Communication is Demonstrated in Real-World City Conditions – Futurism

In Brief: Researchers have sent the first high-dimensional, quantum-encrypted message through the air above a city. This real-world test means high-capacity, free-space quantum communication will one day be practical and secure, enabling a global quantum network.

In a massive step forward, researchers have sent the first quantum-secured message containing more than one bit of information through the air above a city. This proof-of-concept success means that high-capacity, free-space quantum communication will one day be both a practical and secure process between satellites and Earth, and a worldwide quantum encryption network will also be feasible.

In their demonstration, researchers used 4D quantum encryption to transmit data over a free-space optical network between two buildings. The buildings on the University of Ottawa campus stand 0.3 kilometers apart. The high-dimensional encryption scheme is described as 4D because it sends more information, as every photon encodes two bits of information. This, in turn, means that each photon carries four possibilities with it: 00, 01, 10, or 11.

High-dimensional quantum encryption is also more secure because it can tolerate more signal-obscuring noise (such as noise from failed electronics, turbulent air, malfunctioning detectors, and even interception attempts) without rendering the transmission unsecured. "This higher noise threshold means that when 2D quantum encryption fails, you can try to implement 4D because it, in principle, is more secure and more noise resistant," Ebrahim Karimi said in a news release.

Current algorithms are unlikely to be secure in the future as computers become more powerful. Therefore, researchers are working to master stronger encryption techniques such as light-harnessing quantum key distribution, which uses the quantum states of light particles to encode and send the decryption keys for encoded data.

Quantum communication like this has remained a theoretical concept until recently, because global implementation will demand transmission between Earth and satellites. Scientists have been using horizontal tests through the air over distances because the distortion that signals encounter can mimic what they might go through as they pass through the atmosphere. This successful demonstration proved that successful encryption is possible, despite distortion.

These researchers ported their optical setups from the lab to two different rooftops for the testing and protected them from the elements with wooden boxes. After some trial and error, the team successfully used this intracity link to send secure messages using 4D quantum encryption. The error rate for the messages was 11 percent, well below the 19 percent secure connection threshold. The team also compared 4D and 2D encryption, and they found that they were able to transmit 1.6 times more data per photon after error correction using 4D quantum encryption, in spite of turbulence.

Next, this research team plans to test the technology in a three-link network that spans longer distances, with each link about 5.6 kilometers apart. They will also use adaptive optics technology to compensate for the turbulence. The long-term goal is to link the network to the existing city network, "creating a quantum communication network with multiple links but using more than four dimensions while trying to get around the turbulence," graduate student and team member Alicia Sit said in the press release.

See original here:
For the First Time Ever, Quantum Communication is Demonstrated in Real-World City Conditions - Futurism


Hedvig storage upgrade adds flash tier, encryption options – TechTarget

Hedvig Inc. today launched the third version of its software-defined storage product featuring support for flash tiering, built-in encryption technology and new plug-ins for third-party backup and container technologies.


Hedvig storage software runs on commodity hardware. Hedvig doesn't sell the hardware, but it supports moving data between fast flash-based SSDs and a tier of slower, less expensive HDDs. Hedvig's new FlashFabric enables two storage tiers in all-SSD server clusters that can span on-premises and public cloud environments.

Hedvig Distributed Storage Platform 3.0 detects performance differences in SSDs, according to Rob Whiteley, Hedvig vice president of marketing. He said those SSDs can be traditional SAS or SATA, newer latency-lowering NVMe-based PCI Express, or emerging 3D XPoint technology from Intel and Micron.


"There are configurations where the customer will have some amount of higher performance, higher cost NVMe flash, plus some amount of more traditional enterprise-grade flash," Whiteley said. "And what they actually want is the ability to automatically tier in and out of different flavors of flash."

Howard Marks, founder and chief scientist at storage test lab DeepStorage LLC, said Hedvig's 3.0 release is not major from a technological standpoint, but the company is early with its support for "all-flash hybrids" with more than one type of SSD.

"The majority of the all-flash systems we see today have one pool of flash. But between NVMe and upcoming post-flash memories like 3D XPoint, we are going to have at least two tiers of solid state," Marks said. "That means folks like Hedvig, who have the logic for dealing with multiple tiers built into their system, have an advantage."

Whiteley said the Hedvig storage software tracks data reads and writes at a granular level to ensure the hottest data lands on the highest-performing storage media. To enable the SSD tiering, Hedvig engineers created write-through read caches that could take advantage of different flash tiers, he said.

"Our system has always been very flash-friendly from a write perspective," Whitely said. "There were just some additional things we wanted to do from a read perspective."

New Hedvig storage security features include software-based encryption for data in use, in flight and at rest; advanced audit logging designed to enhance the product's monitoring and analytics engines; and improved multi-tenant role-based access control tying into Lightweight Directory Access Protocol and Microsoft Active Directory.

Hedvig's 256-bit Encrypt360 technology secures data through proxy software deployed on host compute servers to minimize the performance hit. The software supports the Advanced Encryption Standard New Instructions from Intel to accelerate host encryption.

Hedvig software deduplicates data before encryption. As with deduplication and replication, Hedvig enables customers to turn encryption on and off on a per-volume, or virtual disk (vDisk), basis, Whiteley said.

In the past, Hedvig advised customers to use self-encrypting drives or third-party products for in-flight encryption, Whiteley said.

"What we've found in the software-defined storage world is self-encrypting drives are often a generation or two behind in hardware technology, and they're a lot more expensive," he said. "Plus, how you then do the key management becomes a very difficult proposition for a lot of large enterprises."

Hedvig does not supply a key management system. The company tested and validated Amazon Web Services' Key Management Service option, and depending on the API, could plug into other third-party key management systems, according to Whiteley.

When setting up a cluster, the Hedvig storage proxy reaches out to the key management system for a unique encryption key for each vDisk. The vDisk keys are cached at the proxy and stored in Hedvig's metadata engine, according to Eric Carter, the company's senior director of product management.

The third feature set in Hedvig's new 3.0 storage software is CloudScale Plugins for Veritas, VMware and Red Hat products, to add to the company's existing support for Docker and OpenStack.

The new Veritas OpenStorage Technology plug-in will enable NetBackup customers to connect to Hedvig for deduplicated backup storage. Whiteley said the Veritas NetBackup plug-in is "probably the most-requested customer feature besides encryption."

Hedvig already had a VMware vSphere Web Client plug-in, but it is now certified with new backup and security capabilities. In addition, Hedvig Storage Proxy containers are now Red Hat-certified and published in the Red Hat Container Catalog. The containers support Red Hat Enterprise Linux and Red Hat's OpenShift container application development platform.

Pricing remains unchanged for the Hedvig Distributed Storage Platform, which becomes generally available Friday. Hedvig partners with Cisco, Dell EMC, Hewlett Packard Enterprise (HPE), Lenovo, Quanta and Super Micro Computer on hardware.

Hedvig and HPE in June launched a validated bundled option combining Hedvig's software-defined storage with HPE Apollo 4200 servers. Whiteley said the bundled product, for which HPE provides first-line support, has already grown to about half the opportunities in the company's sales pipeline.

"Just having the HPE sales force boots on the ground is going to be a big driver for both their growth and their market acceptance," Marks said. "If an HPE sales guy sells Hedvig, it counts against their storage quota. Sales guys sell what you incent them to sell."

Read the rest here:
Hedvig storage upgrade adds flash tier, encryption options - TechTarget


Hedvig Bakes Encryption into Software-Defined Storage Platform – IT Business Edge (blog)

Data, in theory, should always be secure and universally available. In practice, data ends up being accessible to only a handful of applications via storage systems incapable of encrypting data.

To make data both inherently more secure and accessible, Hedvig has updated its Distributed Storage Platform with Encrypt360 software to enable IT organizations to encrypt data at the server before storing it. Rob Whiteley, vice president of marketing for Hedvig, says this approach means that all the data passing through its software-defined storage (SDS) platform running on that server can be encrypted on a per-volume basis.

Whiteley says that approach is not only more efficient, it also eliminates the need to depend on magnetic storage drives to encrypt all the data at rest residing on the drive.

"The data gets encrypted at the server, so it's not only encrypted on the drive, but also as it moves between the storage system and the server," says Whiteley.

Designed to be deployed across multiple platforms, Hedvig Distributed Storage Platform version 3.0 includes enhanced plug-ins for VMware to provide additional security, backup and data protection capabilities. In addition, Hedvig has developed a plug-in to support OpenStorage Technology (OST) developed by Veritas Technologies. Hedvig has also extended its existing support for Docker containers by having its implementation of a Hedvig Storage Proxy container certified by Red Hat. The Hedvig proxy container has also been published on the Red Hat Container Catalog. Hedvig already supports OpenStack environments, as well.

Whiteley says that as software deployed on a server, the Hedvig approach to SDS only adds about 10 percent overhead compared to running software on each local storage array. But because storage is now managed at the server level, Whiteley says IT organizations gain flexibility, better security and lower total cost of storage ownership. Because the Hedvig Distributed Storage Platform is based on a multi-tenant architecture, IT organizations have the option of deploying it on-premises or in the cloud, adds Whiteley.

With this update to the Hedvig Distributed Storage Platform, Whiteley says the company has also updated the Hedvig FlashFabric software the company developed to provide additional auto-tiering and read cache capabilities. Hedvig FlashFabric provides a mechanism to network together all-Flash arrays in a way that Whiteley says can be easily extended to support NVMe, 3D Xpoint and other flash technologies as they become available.

The battle between proponents of various approaches to SDS is already fierce. The first issue IT organizations need to contend with is where they want SDS to run. Historically, storage has been managed by controller software running on dedicated hardware. As SDS running on the server becomes a more viable option, the question IT organizations will need to consider is what level of performance tradeoff is acceptable to reduce overall storage and security management overhead.

Read this article:
Hedvig Bakes Encryption into Software-Defined Storage Platform - IT Business Edge (blog)


How to use EFS encryption to encrypt individual files and folders on Windows 10 – Windows Central

How do I encrypt files in Windows 10?

Encrypting File System (EFS) is an encryption service found in Windows 10 Pro, Enterprise, and Education. A cousin to BitLocker, which can encrypt entire drives at once, EFS lets you encrypt individual files and folders.

Encryption is tied to the PC user, so if a different user is logged in than the user who encrypted the files, those files will remain inaccessible.

EFS encryption isn't as secure as other encryption methods, like BitLocker, because the key that unlocks the encryption is saved locally. There's also a chance that data can leak into temporary files since the entire drive is not encrypted.

Still, EFS is a quick and easy way to protect individual files and folders on a PC that's shared amongst several users. Encrypting with EFS doesn't take long; let's take a look at how it's done.

EFS is only available on Pro, Enterprise, and Education versions of Windows 10. If you're using Windows 10 Home, you're out of luck. You also need to be using a password with your user account, preferably strong and difficult to crack.

Once you've encrypted a file or folder, Windows will automatically remind you that you should create a backup key in case you run into a problem where you can no longer log into your user account that's tied to the encrypted files. This requires some sort of removable media. In our case, we use a USB thumb drive.

Have a file or folder in mind for encryption? Here's how to enable EFS.

Click Properties.

Click the checkbox next to Encrypt contents to secure data.

Click Apply. A window will pop up asking you whether or not you want to only encrypt the selected folder, or the folder, subfolders, and files.

Click OK.

Files that you've encrypted with EFS will have a small padlock icon in the top-right corner of the thumbnail or icon.
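If you would rather script this than click through Explorer, the same EFS operation is exposed by the Win32 EncryptFile API. Here's a hedged Python sketch using ctypes (Windows only; the path at the end is just an example, and the script must run as the user who should own the encrypted file):

```python
import ctypes
from ctypes import wintypes

advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)
advapi32.EncryptFileW.argtypes = [wintypes.LPCWSTR]
advapi32.EncryptFileW.restype = wintypes.BOOL

def efs_encrypt(path: str) -> None:
    # Marks the file (or folder) for EFS encryption, the same operation as
    # ticking "Encrypt contents to secure data" in the Properties dialog.
    if not advapi32.EncryptFileW(path):
        raise ctypes.WinError(ctypes.get_last_error())

efs_encrypt(r"C:\Users\you\Documents\secret.txt")  # example path; adjust for your machine
```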

After enabling EFS, a small icon will appear in the system tray in the bottom-right corner of your screen. This is your reminder to back up your EFS encryption key.

Click Back up now (recommended).

Click Next.

Type a password in the first Password field.

Click Next.

Click the USB drive.

Type a filename.

Click Next.

Click OK.

That's it. If you ever lose access to your user account, the backup key can be used to access the encrypted files on the PC.

Read more:
How to use EFS encryption to encrypt individual files and folders on Windows 10 - Windows Central
