
Cryptocurrency Market cap Sets a New All-time High of Over $150bn – newsBTC

Big things are happening in the cryptocurrency world as of late. With the Bitcoin price slowly picking up again, things are looking positive. As a result, the total cryptocurrency market cap has increased to $150bn for the first time. This new all-time high should not be ignored by any means. It is a big milestone for all cryptocurrencies and digital assets in circulation right now. The growth throughout 2017 has been nothing short of spectacular.

It is good to see the total cryptocurrency market cap grow by leaps and bounds. That said, there have been a few dips over the past few weeks, which caused some uneasiness in the cryptocurrency world as a whole. Right now, things are slowly returning to normal. This change is mainly driven by the Bitcoin price recovering, as well as some alternative currencies posting major value gains. One notable absentee is Ethereum, which still struggles to hold the $300 mark for the time being.

That being said, things look pretty healthy in the world of cryptocurrency right now. With the market cap appreciating, interesting things are on the horizon. It is the first time the $150bn mark has been surpassed, which is another major milestone for Bitcoin and all other cryptocurrencies in existence. After all, very few people assumed such a goal was even achievable to begin with. Things have an interesting way of working themselves out, to say the least.

All of this seemingly indicates how the markets continue to evolve. Although there is still some bearish sentiment to take into account, things aren't looking half bad. There will be some market resistance for a while to come, though. Every new all-time high is often followed by a correction of some sort, and that correction seems to be taking place as we speak. Once this downtrend is over, however, things can get pretty exciting. It is not unlikely we will see another all-time high for both Bitcoin and Ethereum in the near future. If that is the case, a $200bn market cap isn't out of the question either.

With both Bitcoin and Ethereum dominating the charts this year, 2017 has been quite impressive. An even bigger uptrend is expected in the months to come, and some experts predict a Bitcoin price of $10,000 by next year. That is still a lofty target, but after this year nothing seems impossible. Only time will tell what the future may bring for cryptocurrencies in general. With no major breakouts recorded despite this market cap increase, it will be exciting to see what comes next. Interesting things are happening, that much is certain.

Header image courtesy of Shutterstock

View original post here:
Cryptocurrency Market cap Sets a New All-time High of Over $150bn - newsBTC

Biz sends apps to public cloud, waves ‘bye to on-premises server … – The Register

Research has found businesses need to hire more server staff but they're in limited supply.

In the latest Voice of the Enterprise: Servers and Converged Infrastructure study, 451 Research finds that 64.7 per cent of its respondents want to recruit server-focused staff, up from 62.1 per cent a year ago, due to business growth. But some 42.4 per cent need more server staff because of IT organisational changes, and that's up from 24.2 per cent a year ago.

The recruiting difficulties cover both servers and converged infrastructure, and the picture drawn by the 451ers starts with businesses taking advantage of the public cloud to send applications there, reducing the need for on-premises server specialists. This tends to reduce the overall pool of server staff.

The 451ers say 69.7 per cent of respondents said current candidates lack skills and experience. There are also regional shortages of candidates, and high salaries point to a shrinking pot of available talent.

Another trend is the shift towards converged and hyper-converged infrastructure, along with more automation, orchestration and software-defined technologies generally, which reduces the need for server specialists while increasing demand for server generalists.

However, the cost of using the public cloud rises as more of it gets used, which creates a countervailing tendency to run more applications on-site while still using the public cloud; for that, you need servers and server staff to deploy, optimise and run them.

Companies are finding it hard to recruit the server staff they need, both specialist and generalist. Christian Perry, Research Manager and lead analyst of 451 Research's study, said: "The good news is that there remains a need for specialists across both standalone servers and converged and hyperconverged infrastructures. This is especially true within LOBs or remote divisions or departments."

But, Perry said: "When determining the optimal mix of on- and off-premises compute resources, there is no doubt this is hampered by the availability of specialist skills and regional availability. Whether organisations will realise their expected server staff expansion remains to be seen due to hiring difficulties."

The 451ers suggest that server, converged and hyper-converged system suppliers need to work with their customers to help them understand long-term staffing needs better. For example, choosing a hyper-converged approach could reduce the need for server specialists.

Here is the original post:
Biz sends apps to public cloud, waves 'bye to on-premises server ... - The Register

Druva Raises Another $80 Million – Channel Partners

PRESS RELEASE: Druva, the global leader in cloud data protection and management, today announced $80 million of growth equity funding, bringing the total raised to approximately $200 million. The latest funding investment was led by Riverwood Capital, with strong participation from Sequoia Capital India, Nexus Venture Partners, Tenaya Capital, and most other existing venture investors. Druva will leverage this late-stage investment to dramatically accelerate research and development, expand go-to-market efforts worldwide, and lead the industry in redefining how enterprises protect, manage, and use their data.

Druva's success is fueled partially by the rapid expansion of the data protection industry, with market size expected to reach $28 billion in 2022 for both cloud-based and on-premises servers,[1] in addition to the rapid adoption of cloud data protection and management by Global 5000 organizations. In May, the company announced its leadership in the cloud server data protection market, realizing more than 300 percent year-over-year growth in infrastructure data protection revenue. Additionally, Druva Cloud deployments now span more than 4,000 enterprise customers, including 10 percent of the world's Fortune 500 companies.

"Cloud Data Protection and Management solutions are massively disrupting the secondary storage industry," said Jeff Parks, co-founder and general partner at Riverwood Capital. "Druva delivers an as-a-service protection and management solution for all enterprise data encompassing infrastructure, endpoints, and cloud applications. We are impressed by Druva's ability to help organizations redefine their data protection and management strategy in a cloud-first world, leveraging the performance, scale, ease of use and TCO benefits of the public cloud and SaaS. The effectiveness of Druva's technology has been lauded by a large list of customers. With high customer satisfaction, strong brand loyalty, proven technology innovation and a seasoned leadership team, Druva is best positioned to drive the as-a-Service transformation of enterprise data protection and management."

"We see today's digital transformation as a data transformation, and protecting data in today's cloud-connected environment requires a fresh approach," said Jaspreet Singh, co-founder and chief executive officer at Druva. "Druva's as-a-Service solution eliminates costly and complex infrastructure so customers can quickly and seamlessly protect, govern, and gain intelligence from their data when and where it's needed."

Druva Drives High Value With Druva Cloud Platform

Earlier this month, Druva announced the Druva Cloud Platform Tech Preview, which delivers a single-pane-of-glass view for protection and management of endpoints, servers, and cloud applications. The platform converges the award-winning Druva Phoenix and Druva inSync cloud solutions, and offers a unified view into services and data. Simplified global search and centralized visibility and control reduce costs and complexity for managing enterprise data.

"With the proliferation of ransomware and the need for governance and compliance around initiatives like GDPR, Druva's innovations are well positioned to serve this growing market. Druva Cloud Platform is the next generation of cloud data protection and management, completely delivered as a service," said Milind Borate, co-founder and chief technology officer at Druva. "Druva's unique, patented technology including time-indexed metadata, global scale-out deduplication, instant access, auto-tiering, and advanced search and analytics are critical capabilities that enable...

Read more:
Druva Raises Another $80 Million - Channel Partners

CrashPlan alternatives: How to move to another home backup solution – Macworld

If you've read any articles about Mac-based local and cloud backup software and services by me or any other long-time tech writers, you'll know that, first, we largely recommended Code42's CrashPlan for Home and, second, we have also long had concerns about it. Those concerns turned out to be reasonable, given that Code42 has announced the end of its Home product. Now it's time to pursue a CrashPlan alternative, and this article will help get you started.

First off, why did we like CrashPlan for Home so much? It was comprehensive, letting you back up nearly anything to anything: from a computer to external drives; from one computer to another you controlled for networked or remote backup; from one computer to a peer, a computer run by a friend or colleague, with full encryption so that person didn't need to worry about protecting your files; and to CrashPlan's central cloud servers. It also had two strong options for user-controlled encryption.

CrashPlan's funky old client will soon be no more.

But that was balanced against how ugly, awkward, and slow the Java-based client software was. Yes, Java! Code42 had promised a native Mac client starting years ago, which it delivered, but only to business users. Over the last few years, it got rid of multi-year, highly discounted subscriptions, as well as a method of seeding a backup by sending in a hard drive and the complementary method of restoring by having the company send a backup to you on a drive.

On August 22, Code42 announced it will discontinue its home offering, focusing instead on business and enterprise customers. While I long expected it, Code42's reassurances over the years feel a bit like ashes to those who stuck with the software.

They're not shutting down their Home servers tomorrow, or even soon, but if you're a user, you could face a decision point in as little as 60 days. I have suggestions for how you can shift your backup strategy and enhance it.

Code42 will stop operating its CrashPlan for Home cloud services on October 22, 2018. As of August 22, it no longer offers renewals or new subscriptions. All customers received a two-month extension on their expiration date to make sure nobody was canceled immediately. (There are no refunds, which seems unfair to recent subscribers. Without offering legal advice, you can check with your state's consumer-protection agency about whether this violates regulations in your state.)

But here's the problem. If you're using CrashPlan in any reasonable way, you're not just cloning your current set of files, you're archiving older versions. The value of continuous cloud-based backup is having access to many previous versions of the same file, including deleted files. You can configure CrashPlan and many other cloud services to control the depth of archives, when they're culled, and how long and whether to retain deleted files.

Because Code42 will be shutting down its Home servers, you'll lose your archives unless you've maintained a separate local, networked, or peer-to-peer backup over the same period of time with the same settings, or unless you migrate to another one of its services.

Code42 is offering a highly discounted migration option to its Small Business service that retains all your files (up to 5TB per computer) and gives you access to the native CrashPlan client that was once promised for Home users. Code42 will charge you nothing for the remainder of your Home subscription, then 75 percent off the rack rate for 12 months, and then the full price. (If your Home subscription expires after the October 22, 2018, cutoff date, Code42 will migrate your files automatically to keep the paid-for service in operation.)

This flavor is $10 per month per computer, twice the Home service's individual rate (if paid annually). But if you were using CrashPlan's family offering, you paid as little as $12.50 a month on an annual basis for up to 10 computers. The Small Business software doesn't support peer-to-peer backups, but I suspect that feature was most important years ago, before cloud storage was abundant and inexpensive. CrashPlan is also offering a discount on one-time rival Carbonite's backup offerings, which I don't recommend for Mac users, for reasons described in the next section.

Given there's no penalty as long as your subscription is active, the path of least resistance would be migration to the Small Business offering if you have more than a few months left. This lets you evaluate other options and get the benefit of the better software without having to make a quick decision.

I'd also suggest migrating within Code42's systems if it's critical to you not to lose any of these past archives. CrashPlan offers no tools to download or extract entire archives.

A separate strategy to retain archives and abandon Code42 would be to use CrashPlan's restore feature to find a snapshot or snapshots of particular folders, retrieve those, and keep them stored locally with carefully chosen names so you can walk backward in time to find those files.

While your account remains active, you can also use CrashPlan's Web app to retrieve files. You're limited to 500MB in a given restore set at a time.

If your archives aren't important to you in the long run, or you're using Dropbox or other sync services to handle archives of files you create and modify, then you're not tied down. Let's look at how to cut the cord.

Because CrashPlan comprises local computer, networked, peer-to-peer, and cloud-based backup software and services, it's possible you will need multiple methods to replace it. For most people, I recommend having a clone of your system, an offsite clone or archive, and a cloud-based archive. (The clone allows a quick recovery from a failed or corrupted drive; the offsite clone can offer a similar benefit for a stolen computer or one destroyed in a disaster. If you encrypt your backup drive, you don't have to worry as much about it being stolen from an offsite location, either.)

Some people use very few applications and rely on cloud-based photo, email, contacts, and calendars, in which case the most critical part is having two backups beyond those synced documents and other files. Syncing services aren't perfect, though it's been a long time since I last heard of any major service losing customer data.

Backblaze offers streamlined, speedy cloud backups.

Switch your cloud backup. The cloud part of CrashPlan is easiest to replace. I recommend Backblaze hands down. It's affordable relative to CrashPlan for Small Business at $5 a month, $50 a year, or $95 for two years. It has a native and exceedingly fast backup client, recently upgraded to be even faster. It's been reliable in my nearly two years of usage, and it's highly recommended by a number of long-time Mac pundits, writers, and tech heads who I know and trust. With a gigabit Internet connection, my backups can pass hundreds of megabits a second upstream.

Backblaze won't archive system files; that's a job for a clone, not for archiving software, anyway. What makes it stand out over Carbonite, which I don't recommend, is its encryption implementation. Let's be fair: CrashPlan does it best, if you use either of the two strong options it offers. Using CrashPlan's crummy Home client or its newer native clients, all encryption and decryption can happen entirely in the client, using a key only you possess and know.

Backblaze has the right setup for encryption, allowing you to choose a private key only you know and can access. Data is encrypted in its client and sent to its servers. Carbonite lacks this option in its Mac client. Backblaze falls down only in restoring files: it restores only via a Web app, which requires its servers to temporarily possess your key. That opens up risk if its server software were compromised or it faced secret government orders, which are unfortunately a real thing in the U.S. and other countries. I'd like the company to evolve past this and offer native on-computer decryption, which nearly entirely removes the risk of third-party access.
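
To make the distinction concrete, here is a minimal sketch of what "private key" client-side encryption means in principle: the file is encrypted with a key only you hold before anything is handed to a backup provider, and restores are decrypted locally too. This is not Backblaze's or CrashPlan's actual implementation; it assumes Python with the third-party cryptography package installed, and the file paths are hypothetical.

```python
# Minimal sketch of "private key" client-side encryption, assuming Python and
# the third-party cryptography package (pip install cryptography). This is
# NOT Backblaze's or CrashPlan's actual implementation; the paths are
# hypothetical and stand in for files you would hand to a backup provider.
from pathlib import Path
from cryptography.fernet import Fernet

def make_key(key_path: Path) -> bytes:
    """Generate a key once and keep it only on your machine."""
    key = Fernet.generate_key()
    key_path.write_bytes(key)
    return key

def encrypt_for_upload(src: Path, dst: Path, key: bytes) -> None:
    """Encrypt src into dst; only dst would ever leave the computer."""
    dst.write_bytes(Fernet(key).encrypt(src.read_bytes()))

def decrypt_restore(src: Path, dst: Path, key: bytes) -> None:
    """Decrypt locally, so the provider never needs to see the key."""
    dst.write_bytes(Fernet(key).decrypt(src.read_bytes()))

if __name__ == "__main__":
    Path("notes.txt").write_text("example data")          # hypothetical file
    key = make_key(Path("backup.key"))
    encrypt_for_upload(Path("notes.txt"), Path("notes.txt.enc"), key)
    decrypt_restore(Path("notes.txt.enc"), Path("notes_restored.txt"), key)
```

Because decryption happens on the client in this sketch, the web-restore weakness described above, where the server must briefly hold your key, never comes into play.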

(You can read more details about the encryption implementations of CrashPlan, Backblaze, Carbonite, and other cloud-based backup services in a feature I wrote last year.)

Time Machine lets you pick any drive as a backup destination.

Switch your local and networked backup. If you were using CrashPlan for local or networked backup, the easiest swap is to Time Machine. Time Machine's primary problem is being a black box: when something goes wrong with an archive, you can't repair it. This is especially true with Time Capsule, which has an internal drive on which you can't run Disk Utility's First Aid. Since I recommend rotating your clones offsite, Time Capsule also requires owning two Time Capsules to accomplish that, or using an attached external drive, which is very slow. I do recommend Time Machine for local and networked backup via a drive attached to one of your Macs as a combination of clone and archive. Just own two drives of similar capacity, keep one offsite securely, and rotate them occasionally.

Time Machine has deep archives accessed via an outdated graphical interface.

You should also enable encryption on any drive you use with Time Machine. Then if someone were to obtain your Time Machine drive when your computer was powered down or grab one of your offsite drives, your data remains effectively impregnable. (See these instructions for turning encryption on with an external drive.)

I've also experimented with using the Arq archiving software as a Time Machine and cloud service alternative. Arq archives files in a human-readable format, not a proprietary one. It can archive them remotely to a variety of consumer-level and enterprise-class cloud accounts and usage-based storage systems. I reviewed Arq a few months ago. It's not terribly complicated and lets you set your own encryption for each archive destination. Depending on your needs, Econ Technologies' ChronoSync might be the better option, even though it's deeply complicated and better suited for sync or for very fiddly archiving plans; it has archive features and works with local and networked drives, and various cloud services, too.

Switch your cloning. If you were using CrashPlan to clone your system (Code42 didn't recommend that, but you could do it anyway), switch instead to Time Machine, which creates an effective clone as part of its basic operations; or pick SuperDuper or Carbon Copy Cloner, software dedicated to creating scheduled clones on local drives or to disk images.

Switch your peer-to-peer backup. If you've been using CrashPlan to swap files with someone you know elsewhere also running the software, there's no direct replacement, and it may be time to start rotating backups offsite to a safe-deposit box or other secure location. More advanced users could look into SFTP (Secure FTP), which uses a secure connection to access files and will work over the Internet if your computer has a publicly routable IP address. It can be enabled as easily as checking the Remote Login box in the Sharing system preference pane, and it allows logins via macOS accounts. Pair this with Arq or ChronoSync, or script it yourself as in the sketch below.
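
As a rough illustration of that DIY approach, here is a minimal sketch of pushing an already-encrypted archive to a friend's Mac over SFTP. It assumes Python with the third-party paramiko package, Remote Login enabled on the peer's Mac, and SSH key authentication already set up; the host name, account, and paths are hypothetical.

```python
# Minimal sketch of a DIY peer-to-peer backup push over SFTP, assuming Python
# with the third-party paramiko package (pip install paramiko), Remote Login
# (SSH/SFTP) enabled on the peer's Mac, and key-based auth already configured.
# The host, account, and paths below are hypothetical.
import paramiko

REMOTE_HOST = "friends-mac.example.net"   # hypothetical; needs a publicly routable address
REMOTE_USER = "backupbuddy"               # hypothetical macOS account on the peer
LOCAL_ARCHIVE = "/Users/me/Backups/offsite-archive.zip"            # encrypt this before sending
REMOTE_PATH = "/Users/backupbuddy/Incoming/offsite-archive.zip"

def push_archive() -> None:
    """Copy one already-encrypted archive to the peer over SFTP."""
    client = paramiko.SSHClient()
    client.load_system_host_keys()            # trust hosts already in ~/.ssh/known_hosts
    client.connect(REMOTE_HOST, username=REMOTE_USER)
    try:
        sftp = client.open_sftp()
        sftp.put(LOCAL_ARCHIVE, REMOTE_PATH)  # upload, overwriting any older copy
        sftp.close()
    finally:
        client.close()

if __name__ == "__main__":
    push_archive()
```

Encrypting the archive before it leaves your machine preserves the property CrashPlan's peer-to-peer mode offered: the friend hosting your backup never has to be trusted with its contents.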

If you want to continue to be able to restore files from your CrashPlan archives using the Mac client for as long as your subscription is active, you have to leave the software installed. You also cannot delete a backup set or change the contents of the set; if you do, CrashPlan deletes the removed files or the entire backup set from your archives.

Instead, use Settings > Backup to change the backup frequency from Always to the narrowest window available, such as 6:00 am to 6:01 am on Mondays.

However, if you're ready to remove the application entirely and never retrieve archives or use the Web site for restoring (limited to 500MB of restoration at a time), follow Code42's instructions on using its uninstall app. It also directs you to additional folders to delete that may contain temporary or cached data.

Code42's decision reminds us how much other people's business plans can affect our need for the persistence of data. Because Code42 uses a proprietary format and you're effectively just renting space on its servers, you can't retrieve your raw archives and move them. It's not like shifting from one email program to another.

As part of any change you make, if you need deep archives that you own for a long time or forever, I'd urge you to look into software that lets you retain those archives in a format you can read without requiring third-party software.

Read more:
CrashPlan alternatives: How to move to another home backup solution - Macworld

VMware shares to surge more than 20% because the Amazon cloud threat is overblown: Analyst – Yahoo Finance

Wall Street rarely talks about its mistakes, but Deutsche Bank admitted it overestimated the Amazon (NASDAQ: AMZN) Web Services threat to VMware's (NYSE: VMW) business.

The firm raised its rating for VMware shares on Monday to buy from hold, saying the company's server virtualization software can continue to thrive in a cloud-computing world.

"We've spent much of the last two years worried about VMware's on-premise core server business given its maturity and the threat from AWS/Cloud adoption [Amazon Web Services]," analyst Karl Keirstead wrote in a note to clients entitled "Overcoming our AWS fears."

"This upgrade should be seen in the context of growing evidence that large enterprises are embracing a hybrid model, materially lowering the out-year risk profile of VMware shares."

The hybrid model is defined by companies using both local servers on-site and cloud-computing servers off-site. Keirstead said he realized the staying power of VMware's on-site server market was more "durable" than he originally forecast.

"We believe that large enterprises are migrating IT workloads to the public cloud model at a slower-than-expected pace and are electing to ramp spending to modernize their on-premise IT infrastructures," he wrote. "Our recent checks agree that VMware technology is proving to be more durable than they would have thought 12-18 months ago."

As a result, Keirstead increased his VMware price target to $120, which is 24 percent higher than Monday's close. His previous price target was $110.

VMware shares are outperforming the market this year. Shares have risen 23.2 percent year to date through Monday compared with the S&P 500's 8.5 percent gain.

The analyst said he is also cautiously optimistic about the VMware and Amazon AWS strategic partnership announced in October, which enables access to AWS computing power for the company's customers.

"We are positive on the deal for both parties. It is hard to imagine how this could end up being a net negative for either party," he wrote. "We conclude that the stock can still work even if the initial lift from VMware Cloud on AWS is modest."

VMware will report second-quarter earnings on Thursday after the market close. Its stock traded up 1.8 percent shortly after Tuesday's market open.

CNBC's Michael Bloom contributed to this story.

Go here to see the original:
VMware shares to surge more than 20% because the Amazon cloud threat is overblown: Analyst - Yahoo Finance

AMD Lines Up New China Datacenter Partners – EnterpriseTech

(Virgiliu Obada/Shutterstock)

As the server market stalls, processor makers continue to search for greener pastures. Among the most promising is the booming Chinese datacenter market, where processor maker Advanced Micro Devices announced expanded partnerships this week with a trio of hyperscale providers along with Lenovo, which has vowed to cultivate the Asian server market since acquiring IBM's server business in 2014.

AMD (NASDAQ: AMD) announced deployments of its EPYC server processor during an event in Beijing, including new customer JD.com. The Chinese e-commerce giant, along with Internet search giant Baidu (NASDAQ: BIDU), said they would deploy the AMD server processor during the second half of this year, while media and web services provider Tencent (HKG: 0700) will roll out the AMD platforms in its datacenters by year's end.

Meanwhile, AMD said partner Lenovo (HKG: 0992) would introduce EPYC-based ThinkSystem servers early next year.

Lenovo and other AMD server partners that include Chinese HPC vendor Sugon also cited the new processor's balance of high-speed I/O (the AMD processor includes 128 lanes of PCI Express 3), memory bandwidth and cores. New customer JD.com (NASDAQ: JD) said it would deploy a 32-core version of the EPYC in its datacenters with eight memory channels as it collaborates with the chipmaker on cloud, big data and artificial intelligence deployments.

Lenovo said it expects to deploy single- and dual-socket servers based on the AMD processor as part of its expansion throughout Asia.

Tencent Cloud will roll out AMD-based cloud servers with up to 64 processor cores by the end of the year as it expands its cloud services.

The EPYC processor scales from eight to 32 "Zen" processor cores, with each core supporting two threads. Each processor comes with eight channels of memory. The two-socket version of the server processor can support up to 32 DDR4 modules on 16 memory channels, which works out to about 4 TB of total memory capacity.
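
That capacity figure is easy to sanity-check. Assuming 128GB modules in every slot (the module size is an assumption for illustration, not something the article states), the arithmetic works out as follows:

```python
# Back-of-the-envelope check of the ~4 TB two-socket figure quoted above.
# The 128 GB module size is an assumption for illustration, not from the article.
sockets = 2
channels_per_socket = 8
dimms_per_channel = 2        # 16 channels and 32 DIMM slots in total
dimm_gb = 128

dimms = sockets * channels_per_socket * dimms_per_channel
total_gb = dimms * dimm_gb
print(f"{dimms} DIMMs x {dimm_gb} GB = {total_gb} GB = {total_gb // 1024} TB")
# -> 32 DIMMs x 128 GB = 4096 GB = 4 TB
```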

AMD and server makers such as Lenovo are targeting the booming Chinese IT market as server sales plateau elsewhere. The partners also stressed they are targeting emerging enterprise workloads such as e-commerce and big data. In so doing, the chip maker is gaining a foothold in "one of the fastest growing technology markets in the world," noted Forrest Norrod, AMD's general manager for enterprise, embedded and semicustom products.

AMD's server processor design is built around its high-end x86 core "with server features" dubbed "Zen." The new processor microarchitecture includes high-bandwidth and low-latency features that target emerging "mega" datacenters. AMD claimed at a recent chip industry conference that Zen delivers a 52 percent performance increase as measured in instructions per cycle compared to earlier AMD processors.

The new 14-nanometer core is based on an emerging chip processing technology called FinFET. According to reports, a 7-nanometer version of Zen is expected by 2020.

About the author: George Leopold

George Leopold has written about science and technology for more than 25 years, focusing on electronics and aerospace technology. He previously served as Executive Editor for Electronic Engineering Times.

Originally posted here:
AMD Lines Up New China Datacenter Partners - EnterpriseTech

IEEE Approves Standards Project for Quantum Computing … – insideHPC

William Hurley is chair of IEEE Quantum Computing Working Group

Today IEEE announced the approval of the IEEE P7130 (Standard for Quantum Computing Definitions) project. The new standards project aims to make Quantum Computing more accessible to a larger group of contributors, including developers of software and hardware, materials scientists, mathematicians, physicists, engineers, climate scientists, biologists and geneticists.

"While Quantum Computing is poised for significant growth and advancement, the emergent industry is currently fragmented and lacks a common communications framework," said Whurley (William Hurley), chair, IEEE Quantum Computing Working Group. "IEEE P7130 marks an important milestone in the development of Quantum Computing by building consensus on a nomenclature that will bring the benefits of standardization, reduce confusion, and foster a more broadly accepted understanding for all stakeholders involved in advancing technology and solutions in the space."

The purpose of this project is to provide a general nomenclature for Quantum Computing that may be used to standardize communication with related hardware and software projects. This standard addresses quantum computing specific terminology and establishes definitions necessary to facilitate communication.

"Confusions exist on what quantum computing or a quantum computer means," added Professor Hidetoshi Nishimori of the Tokyo Institute of Technology, an IEEE P7130 working group participant. "This partly originates in the existence of a few different models of quantum computing. It is urgently necessary to define each key word."

See the original post:
IEEE Approves Standards Project for Quantum Computing ... - insideHPC

Introducing Australia’s first quantum computing hardware company – Computerworld Australia

Australia's first quantum computing hardware company launched today, with the goal of producing a 10 qubit integrated circuit prototype by 2022.

Silicon Quantum Computing (SQC) Pty Ltd will develop and commercialise a prototype circuit, which will serve as a "forerunner to a silicon-based quantum computer" the company said.

The company has been formed by existing investors in the Centre for Quantum Computation and Communication Technology (CQC2T); namely UNSW (which has invested $25 million into the centre), Commonwealth Bank of Australia ($14m), Telstra ($10m) and the Federal Government ($25m over five years as part of the National Innovation and Science Agenda).

The NSW Government today said it had pledged $8.7m towards the venture, the money coming from its Quantum Computing Fund, which was announced in July.

SQC's board is made up of Michelle Simmons, UNSW Professor of Physics and director of the CQC2T; Hugh Bradlow, Telstra's chief scientist; David Whiteing, Commonwealth Bank of Australia's chief information officer; and Glenys Beauchamp, secretary of the Department of Industry, Innovation and Science.

Corporate lawyer and company director Stephen Menzies will act as interim chair.

"We have a board which is very corporately focused on developing and funding the engineering work to develop a ten qubit device. We will fund hardware. From that we will develop a patent pool which we hope will be without peer in the world," Menzies said.

"In the first five years we're very focused, the business plan is focused, on the patents associated with an engineered 10 qubit device. But beyond that we see that we have a stage on which we can develop across Australia, and Australian institutions, a broad quantum industry."

The company is seeking a further three shareholders to bring the total investment up to $100m.

"The company will need additional moneys, and the business plan contemplates it will have additional shareholders who will join, all of whom we hope will bring strategic focus to the business and company, and also will bring their own enthusiasm and passion for quantum technologies," Menzies added.

SQC, which will operate within the CQC2T at UNSW in Sydney, has already started recruiting for forty roles, including 25 postdoctoral researchers, 12 PhD students, and a number of lab technicians.

Huge potential

Telstra's Hugh Bradlow reiterated the telco's aim, revealed in June, to offer quantum computing to customers as a service.

"Everyone knows that Telstra aims to be a globally leading technology company and if we're going to do that we have to be at the forefront of 21st Century computing. [When realised] our customers are going to have access to a computer of unprecedented power and they're not going to have the faintest idea of how to use it. So it's Telstra's aim to be in a position that, when that happens, we are skilled and knowledgeable about how to deliver those services to our customers. We look forward to taking [SQC's] products and putting them into our cloud services offerings in the future," he said.

Dilan Rajasingham, head of emerging technologies at Commonwealth Bank of Australia, spoke of the huge potential of the technology.

"Quantum computing is a revolutionary technology. It will transform the world as we know it. We've invested more than $14m in quantum computing because we believe in its future promise, we believe in its future capability, we believe in its potential as a differentiator. Not just for those of us involved, but also for Australia in general," he said.

"We believe that quantum computing could be the foundation of a new high-tech ecosystem that can come from Australia, our home, our biggest market and a key part of our identity. More than that, though, we are creating something new... Even though the machine is still a few years away, the time for investment is now."

Senator Arthur Sinodinos, Minister for Industry, Innovation and Science, said the company would help give Australia a competitive advantage over the rest of the world.

"As a country we punch above our weight when it comes to knowledge creation but we really need to be doing more when it comes to commercialising our great ideas here in Australia. Its very important we do that. Thats not to say we commercialise every idea in this country but too many ideas do go offshore, Sinodinos said.

"Whatever sector of innovation we want to be really good in, we want to be world beaters. We want to create a competitive advantage, command a premium. And you do that by doing something new, something others find it hard to replicate or it takes them time to replicate, and by the time theyve replicated it youve moved on to something else. This is what this is all about, creating a world competitive advantage that we can build on with great upstream and downstream effects over time.

Global race

The SQC is now part of a global race to build a quantum computer, building on the silicon-based approach of the CQC2T.

That race is hotting up. In July Microsoft cemented its long-standing quantum computing research relationship with the University of Sydney, with the signing of a multi-year investment deal understood to be in the multiple millions.

While nobody has yet built a proven quantum computer, a number of firms have already announced plans to make the technology commercially available.

Researchers at Google's Quantum AI Laboratory said in a March Nature editorial that the company would commercialise quantum technologies within five years. In the same month, IBM announced its commercial 'Q' quantum computing program would deliver paid quantum computing services via the cloud to users before the end of the year.

Microsoft, however, told Computerworld in July that it was still trying to figure out a business model for the technology.

See more here:
Introducing Australia's first quantum computing hardware company - Computerworld Australia

How quantum mechanics can change computing – The Conversation US

In early July, Google announced that it will expand its commercially available cloud computing services to include quantum computing. A similar service has been available from IBM since May. These aren't services most regular people will have a lot of reason to use yet. But making quantum computers more accessible will help government, academic and corporate research groups around the world continue their study of the capabilities of quantum computing.

Understanding how these systems work requires exploring a different area of physics than most people are familiar with. From everyday experience we are familiar with what physicists call classical mechanics, which governs most of the world we can see with our own eyes, such as what happens when a car hits a building, what path a ball takes when it's thrown and why it's hard to drag a cooler across a sandy beach.

Quantum mechanics, however, describes the subatomic realm: the behavior of protons, electrons and photons. The laws of quantum mechanics are very different from those of classical mechanics and can lead to some unexpected and counterintuitive results, such as the idea that an object can have negative mass.

Physicists around the world in government, academic and corporate research groups continue to explore real-world deployments of technologies based on quantum mechanics. And computer scientists, including me, are looking to understand how these technologies can be used to advance computing and cryptography.

In our regular lives, we are used to things existing in a well-defined state: A light bulb is either on or off, for example. But in the quantum world, objects can exist in what is called a superposition of states: A hypothetical atomic-level light bulb could simultaneously be both on and off. This strange feature has important ramifications for computing.

The smallest unit of information in classical mechanics, and therefore in classical computers, is the bit, which can hold a value of either 0 or 1, but never both at the same time. As a result, each bit can hold just one piece of information. Such bits, which can be represented as electrical impulses, changes in magnetic fields, or even a physical on-off switch, form the basis for all calculation, storage and communication in today's computers and information networks.

Qubits, or quantum bits, are the quantum equivalent of classical bits. One fundamental difference is that, due to superposition, qubits can simultaneously hold values of both 0 and 1. Physical realizations of qubits must inherently be at an atomic scale: for example, in the spin of an electron or the polarization of a photon.

Another difference is that classical bits can be operated on independently of each other: Flipping a bit in one location has no effect on bits in other locations. Qubits, however, can be set up using a quantum-mechanical property called entanglement so that they are dependent on each other even when they are far apart. This means that operations performed on one qubit by a quantum computer can affect multiple other qubits simultaneously. This property, which is akin to, but not the same as, parallel processing, can make quantum computation much faster than in classical systems.
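
A small classical simulation can make these two ideas concrete. The sketch below, which assumes Python with NumPy and is ordinary linear algebra rather than a real quantum computer, prepares two qubits in an entangled Bell state and samples measurements; the two qubits always agree, even though each individual outcome is random.

```python
# Tiny classical simulation of superposition and entanglement, assuming Python
# with NumPy. This is ordinary linear algebra on state vectors, not a real
# quantum computer; it only illustrates the ideas described above.
import numpy as np

ket0 = np.array([1.0, 0.0])                      # |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate: equal superposition
CNOT = np.array([[1, 0, 0, 0],                   # flips qubit 2 when qubit 1 is 1
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Put qubit 1 into superposition, then entangle it with qubit 2:
# the result is the Bell state (|00> + |11>) / sqrt(2).
state = CNOT @ np.kron(H @ ket0, ket0)
print("amplitudes over |00>, |01>, |10>, |11>:", np.round(state, 3))

# Measurement collapses the superposition. Sampling many measurements shows
# the two qubits always agree, the signature of entanglement.
probs = np.abs(state) ** 2
samples = np.random.choice(["00", "01", "10", "11"], size=10, p=probs)
print("sampled measurements:", list(samples))    # only '00' and '11' ever appear
```

A real quantum computer has to maintain such fragile states in physical hardware, which is part of why scaling beyond a handful of qubits is so difficult, as noted below.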

Large-scale quantum computers, meaning quantum computers with hundreds of qubits, do not yet exist, and they are challenging to build because they require operations and measurements to be done on an atomic scale. IBM's quantum computer, for example, currently has 16 qubits, and Google is promising a 49-qubit quantum computer, which would be an astounding advance, by the end of the year. (In contrast, laptops currently have multiple gigabytes of RAM, with a gigabyte being eight billion classical bits.)

Notwithstanding the difficulty of building working quantum computers, theorists continue to explore their potential. In 1994, Peter Shor showed that quantum computers could quickly solve the complicated math problems that underlie all commonly used public-key cryptography systems, like the ones that provide secure connections for web browsers. A large-scale quantum computer would completely compromise the security of the internet as we know it. Cryptographers are actively exploring new public-key approaches that would be quantum-resistant, at least as far as they currently know.

Interestingly, the laws of quantum mechanics can also be used to design cryptosystems that are, in some senses, more secure than their classical analogs. For example, quantum key distribution allows two parties to share a secret no eavesdropper can recover using either classical or quantum computers. Those systems and others based on quantum computers may become useful in the future, either widely or in more niche applications. But a key challenge is getting them working in the real world, and over large distances.

Continue reading here:
How quantum mechanics can change computing - The Conversation US

Q2 2017 Akamai State Of The Internet / Security Report Analyzes Re-Emergence Of PBot Malware; Domain Generation … – GuruFocus.com

CAMBRIDGE, Mass., Aug. 22, 2017 /PRNewswire/ -- Newly released data shows that distributed denial of service (DDoS) and web application attacks are on the rise once again, according to the Second Quarter, 2017 State of the Internet / Security Report released by Akamai Technologies, Inc. (NASDAQ: AKAM). Contributing to this rise was the PBot DDoS malware which re-emerged as the foundation for the strongest DDoS attacks seen by Akamai this quarter.

In the case of PBot, malicious actors used decades-old PHP code to generate the largest DDoS attack observed by Akamai in the second quarter. Attackers were able to create a mini-DDoS botnet capable of launching a 75 gigabits per second (Gbps) DDoS attack. Interestingly, the PBot botnet comprised a relatively small 400 nodes, yet was still able to generate a significant level of attack traffic.

Another entry on the "everything old is new again" list is represented by the Akamai Enterprise Threat Research Team's analysis of the use of Domain Generation Algorithms (DGA) in malware Command and Control (C2) infrastructure. Although first introduced with the Conficker worm in 2008, DGA has remained a frequently used communication technique for today's malware. The team found that infected networks generated approximately 15 times the DNS lookup rate of a clean network. This can be explained as the outcome of access to randomly generated domains by the malware on the infected networks. Since most of the generated domains were not registered, trying to access all of them created a lot of noise. Analyzing the difference between behavioral characteristics of infected versus clean networks is one important way of identifying malware activity.
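
For readers unfamiliar with the technique, the sketch below shows the general shape of a date-seeded DGA in Python. It is purely illustrative, not the algorithm used by Conficker, PBot, or any particular family, but it shows why an infected host hammers DNS with lookups for mostly unregistered names while a clean host does not.

```python
# Illustrative sketch of a date-seeded domain generation algorithm (DGA),
# assuming Python 3. This is NOT the algorithm used by Conficker, PBot, or any
# particular malware family; it only shows why an infected host issues bursts
# of DNS lookups for mostly unregistered names.
import hashlib
from datetime import date

def generate_domains(day: date, count: int = 20, tld: str = ".info") -> list:
    """Derive `count` pseudo-random candidate C2 domains from the date."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(seed).hexdigest()
        domains.append(digest[:12] + tld)        # 12 hex characters keeps names plausible
    return domains

if __name__ == "__main__":
    # The bot would try to resolve each candidate until one that the operator
    # pre-registered answers; every other attempt shows up as DNS noise, which
    # is the behavioral tell the Akamai team describes.
    for domain in generate_domains(date.today())[:5]:
        print(domain)
```

Defenders can exploit the same asymmetry: a spike in failed lookups for random-looking names, roughly the 15-times rate difference noted above, is a strong signal that something on the network is running a DGA.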

When the Mirai botnet was discovered last September, Akamai was one of its first targets. The company's platform continued to receive and successfully defended against attacks from the Mirai botnet thereafter. Akamai researchers have used the company's unique visibility into Mirai to study different aspects of the botnet, most specifically in the second quarter, its C2 infrastructure. Akamai research offers a strong indication that Mirai, like many other botnets, is now contributing to the commoditization of DDoS. While many of the botnet's C2 nodes were observed conducting "dedicated attacks" against select IPs, even more were noted as participating in what would be considered "pay-for-play" attacks. In these situations, Mirai C2 nodes were observed attacking IPs for a short duration, going inactive and then re-emerging to attack different targets.

"Attackers are constantly probing for weaknesses in the defenses of enterprises, and the more common, the more effective a vulnerability is, the more energy and resources hackers will devote to it," said Martin McKeay, Akamai senior security advocate. "Events like the Mirai botnet, the exploitation used by WannaCry and Petya, the continued rise of SQLi attacks and the re-emergence of PBot all illustrate how attackers will not only migrate to new tools but also return to old tools that have previously proven highly effective."

By the Numbers:

Other key findings from the report include:

A complimentary copy of the Q2 2017 State of the Internet / Security Report is available for download at http://akamai.me/2i9vrdz. Download individual charts and graphs, including associated data, at http://akamai.me/2w6mI1v.

Methodology

The Akamai Second Quarter, 2017 State of the Internet / Security Report combines attack data from across Akamai's global infrastructure and represents the research of a diverse set of teams throughout the company. The report provides analysis of the current cloud security and threat landscape, as well as insight into attack trends using data gathered from the Akamai Intelligent Platform. The contributors to the State of the Internet / Security Report include security professionals from across Akamai, including the Security Intelligence Response Team (SIRT), the Threat Research Unit, Information Security, and the Custom Analytics group.

About Akamai

As the world's largest and most trusted cloud delivery platform, Akamai makes it easier for its customers to provide the best and most secure digital experiences on any device, anytime, anywhere. Akamai's massively distributed platform is unparalleled in scale with over 200,000 servers across 130 countries, giving customers superior performance and threat protection. Akamai's portfolio of web and mobile performance, cloud security, enterprise access, and video delivery solutions are supported by exceptional customer service and 24/7 monitoring. To learn why the top financial institutions, e-commerce leaders, media & entertainment providers, and government organizations trust Akamai please visit http://www.akamai.com, blogs.akamai.com, or @Akamai on Twitter.

View original content with multimedia: http://www.prnewswire.com/news-releases/q2-2017-akamai-state-of-the-internet--security-report-analyzes-re-emergence-of-pbot-malware-domain-generation-algorithms-relationship-between-mirai-command--control-and-attack-targets-300507459.html

SOURCE Akamai Technologies, Inc.

Read the original:
Q2 2017 Akamai State Of The Internet / Security Report Analyzes Re-Emergence Of PBot Malware; Domain Generation ... - GuruFocus.com
