
Central Banks Can’t Ignore the Cryptocurrency Boom – Bloomberg

When the cryptocurrency Exio Coin starts a round of fundraising on Sept. 7, its founders say the unit will come with a unique distinction: the first to be endorsed by a sovereign nation.

The identity of the government backer won't be revealed until October, and Bloomberg News has no way of verifying the claim of support. According to co-founder Sunny Johnson, though, the supporter is one of "the world's richest countries" on a per capita basis.

The claim of official approval highlights how the boom in cryptocurrencies and their underlying technology is becoming too big for central banks, long the guardian of official money, to ignore. From speculative betting to trading solar power, digital money is proliferating.

Until recently, officials at major central banks were happy to watch as pioneers in the field progressed by trial and error, safe in the knowledge that it was dwarfed by roughly $5 trillion circulating daily in conventional currency markets. But now, as officials turn an eye toward the increasingly pervasive technology, the risk is that they're reacting too late to both the pitfalls and the opportunities presented by digital coinage.

“Central banks cannot afford to treat cyber currencies as toys to play with in a sand box,” said Andrew Sheng, chief adviser to the China Banking Regulatory Commission and Distinguished Fellow of the Asia Global Institute, University of Hong Kong. “It is time to realize that they are the real barbarians at the gate.”

Bitcoin — the largest and best-known digital currency — and its peers pose a threat to the established money system by effectively circumventing it. Money as we know it depends on the authority of the state for credibility, with central banks typically managing its price and/or quantity. Cryptocurrencies skirt all that and instead rely on their supposedly unhackable technology to guarantee value.

If they don't get a handle on bitcoin and its ilk, and more people adopt them, central banks could see an erosion of their control over the money supply. The solution may lie in the old adage: if you can't beat them, join them.

The People's Bank of China has done trial runs of its prototype cryptocurrency, taking it a step closer to being the first major central bank to issue digital money. The Bank of Japan and the European Central Bank have launched a joint research project which studies the possible use of distributed ledger — the technology that underpins cryptocurrencies — for market infrastructure.


The Dutch central bank has created its own cryptocurrency — for internal circulation only — to better understand how it works. And Ben Bernanke, the former chairman of the Federal Reserve who has said digital currencies show “long term promise,” will be the keynote speaker at a blockchain and banking conference in October hosted by Ripple, the startup behind the fourth largest digital currency.

Russia, too, has shown interest in ethereum, the second-largest digital currency, with the central bank deploying a blockchain pilot program.

In the U.S., both banks and regulators are studying distributed ledger technology and Fed officials have made a couple of formal speeches on the topic in the past 12 months, but have voiced reservations about digital currencies themselves.

Fed Governor Jerome Powell said in March there were significant policy issues concerning them that needed further study, including vulnerability to cyber-attack, privacy and counterfeiting. He also cautioned that a central bank digital currency could stifle innovations to improve the existing payments system.

At the same time, central bankers are obviously wary of the risks posed by alternative currencies — including financial instability and fraud. One example: The Tokyo-based Mt. Gox exchange collapsed spectacularly in 2014 after disclosing that it lost hundreds of millions of dollars worth of bitcoin.

But for all their theoretical tinkering, official-money guardians have largely stood by as digital currencies have taken off. The explosion in initial coin offerings, or ICOs, is evidence. Investors have poured hundreds of millions of dollars into the digital currency market this year alone.

The dollar value of the 20 biggest cryptocurrencies is around $150 billion, according to data from Coinmarketcap.com. Bitcoin itself has soared more than 380 percent this year and hit a record — but it's also prone to wild swings, like a 50 percent slump at the end of 2013.

“At a global level, there is an urgent need for regulatory clarity given the growth of the market,” said Daniel Heller, Visiting Fellow at the Peterson Institute for International Economics and previously head of financial stability at the Swiss National Bank.

Rather than trying to regulate the world of virtual currencies, central banks are mainly warning of risks and attempting to garner some advantage from distributed-ledger technology for their own purposes, like upgrading payments systems.

Carl-Ludwig Thiele, a board member of Germany's Bundesbank, has described bitcoin as a niche phenomenon but blockchain as far more interesting, if it can be adapted for central-bank use. In July, Austria's Ewald Nowotny said that he's open to new technologies but doesn't believe that will lead to a new currency, and that dealing in bitcoin is effectively gambling.

There could also be a monetary policy aspect to consider. ECB Governing Council member Jan Smets said in December that a central-bank digital currency could give policy makers more leeway when interest rates are negative. Policy makers have long been concerned that if they cut rates too low, people will simply hoard cash. The ECB's deposit rate is currently minus 0.4 percent.

Other central banks see the uses of distributed ledger technology, but worry about the abuses virtual money can be put to outside the official system — like criminal money laundering and the sale of illegal goods. That's not to mention the risk that virtual currencies could pose to the rest of the financial system if the bubble were to pop.

Bank of England Governor Mark Carney — who has said blockchain shows great promise — also warned regulators this year to keep on top of developments in financial technology if they want to avoid a 2008-style crisis.

While Mt. Gox cast a shadow over bitcoin in Japan, it now has many supporters in the world's third-biggest economy. Parliament passed a law in April this year making it a legal method of payment. Japan's largest banks have invested in bitcoin exchanges, and small-cap stocks linked to the cryptocurrency or its underlying technology have rallied this year as it begins to win favor with some retailers.

With the nation's Financial Services Agency responsible for bitcoin's regulation, the BOJ remains focused on studying its distributed ledger technology.

“Central banks are not yet ready for regulating digital currencies,” said Xiao Geng, a professor of finance and public policy at the University of Hong Kong. “But they have to in the future since unregulated digital currencies are prone to crime and Ponzi-type speculation.”

To be sure, the attraction of virtual currencies for many remains speculation, rather than a means for households or companies to buy and sell goods.

“It is a fad that will die down and it will be used by less than 1 percent of consumers and accepted by even fewer merchants,” said Sumit Agarwal of Georgetown University, who was previously a senior financial economist at the Federal Reserve Bank of Chicago. “Even if we can make the digital currency safe it has many hurdles.”

The founders of Exio Coin argue they have developed a middle way with principles of governance that will set the trend for the blockchain industry. While some regulation is inevitable, cryptocurrencies are intended to be a global form of currency and not subject to the rules and regulations of one jurisdiction, said Johnson.

With all the misgivings about cryptocurrencies, having a sovereign endorser — rather than an issuer — may be a pragmatic way of offering the benefits of digital money with less of the worry.

“With no one central bank maintaining control Exio Coin will retain its decentralized characteristics,” Johnson said. “The sovereign endorser shares our vision for the future.”

With assistance by Brett Miller, Lucy Meakin, Carolynn Look, and Justina Lee

Read the original post:
Central Banks Can’t Ignore the Cryptocurrency Boom – Bloomberg


Whoppercoin is a cryptocurrency you can eat or trade – The Verge

Have you ever wanted to trade something fun, rather than boring stuff like oil, iron ore, or shares? Maybe something like burgers? Now you can, kind of. Burger King in Russia has just announced a new loyalty program using virtual coins called Whoppercoins, which is hosted on the Waves blockchain platform. Waves allows users to swap and trade blockchain tokens which have an inherent value on a peer-to-peer exchange.

A supply of 1 billion Whoppercoins have been issued so far, and customers will receive one Whoppercoin for every ruble spent ($1 is 59 RUB). They can redeem one Whopper burger with 1,700 Whoppercoins, which are stored in a digital wallet. While the Whopper cryptocurrency is a bit of a gimmick, customers can still trade and transfer the coins, just like any other cryptocurrency.
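For scale: at one Whoppercoin per ruble and 1,700 coins per Whopper, a free burger works out to roughly 1,700 rubles of prior spending, or about $29 at the exchange rate quoted above.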

Whoppercoin even has its own dedicated asset page, which describes it as a token for buying burgers in Russian Burger King and for the stock exchange. Burger King Russia says it will release an app for the program in the Apple Store and Google Play in September.

“Now Whopper is not only a burger that people in 90 different countries love, it's an investment tool as well,” Ivan Shestov, head of external communications at Burger King Russia, said in a statement. While Shestov's suggestion that eating Whoppers now is a strategy for financial prosperity tomorrow may be a bit of a stretch, free burgers are something I'll always put my hand up for.

Visit link:
Whoppercoin is a cryptocurrency you can eat or trade – The Verge


Learn about cryptocurrency mining with this interactive blockchain demo – TNW

In order to understand cryptocurrency you'll need to grasp the fundamentals of blockchain. Luckily, someone made an interactive tutorial that not only explains the idea, but walks users through mining valid hashes with their CPUs.

Blockchain Demo overcomes part of the learning curve by showing you exactly what mining looks like. It's a creative way to adapt an otherwise useless form of mining for educational purposes. You'll need hardware like a miner (a computer built for the job) if you want to make any actual money.

We're past the days of adapting an old computer for the job; entry-level units have six GPUs. The bare-bones miner costs about $300, but considering many manufacturers have a minimum order amount, sometimes in the range of 100 units, this isn't a weekend hobby.


Blockchain Demo's creator, Sean Han, explained his project on Product Hunt:

Blockchain Demo is my attempt at demystifying the technology behind cryptocurrencies. It has a living blockchain, a peer-to-peer network, and a user tour.

Han's tool guides you through the process of mining a block using the same type of ledger technology powering Bitcoin.
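To make the hashing step concrete, here is a toy proof-of-work sketch in Java. It is not code from Han's demo or from Bitcoin itself; it simply increments a nonce until the SHA-256 hash of some block data starts with a chosen number of zeros, which is the essence of the mining exercise the tutorial walks you through.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    // Toy proof-of-work: find a nonce whose hash meets a difficulty target.
    public class ToyMiner {
        static String sha256Hex(String input) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            String blockData = "block #1: Alice pays Bob 5";
            String targetPrefix = "0000";   // difficulty: required number of leading zeros
            long nonce = 0;
            String hash;
            do {
                hash = sha256Hex(blockData + nonce++);
            } while (!hash.startsWith(targetPrefix));
            System.out.println("nonce=" + (nonce - 1) + " hash=" + hash);
        }
    }

Each extra required zero makes a valid hash roughly sixteen times rarer, which is why real mining quickly outgrows a CPU and moves to the dedicated hardware described above.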

For a deeper dive into the concept, you can also check out the inspiration for Blockchain Demo in this video by Anders.


See the original post:
Learn about cryptocurrency mining with this interactive blockchain demo – TNW


$160 Billion: Cryptocurrency Market Sets New All-Time High … – CoinDesk

Investment in cryptocurrencies continues to increase.

Spurred by increases in investment, the total value of the more than 800 publicly traded cryptocurrencies and crypto assets pushed past $160 billion for the first time ever, according to data provider CoinMarketCap.

With the move, the figure is now up 1,500 percent from the $10 billion observed at the start of the year.

Notably, the new high was set even as bitcoin, the market's largest asset, continued its recent pattern of sideways trading, hovering in the $4,400 range, or about 1% below its all-time high of $4,522.13 on the CoinDesk Bitcoin Price Index (BPI).

Similarly, ether and bitcoin were mostly flat on the day's trading.

At press time, it seemed most of the growth in the top 10 cryptocurrencies was consolidated to two assets, with Ripple’s XRP token and monero’s XMR token rising 9% and 7.9%, respectively, over the last 24 hours.

With XMR's increase, the asset regained a spot in the top 10 cryptocurrencies by market capitalization after some time out of the spotlight.

Elsewhere, investment in litecoin appeared to cool after it set all-time highs earlier in the day’s trading. After rising past $60 for the first time since the protocol was introduced in 2011, the token was down nearly 2%.



See the article here:
$160 Billion: Cryptocurrency Market Sets New All-Time High … – CoinDesk


Quantum computing event explores the implications for business – Cambridge Network

A free, one-day ‘Executive Track’ on the issue – part of an international workshop on quantum-safe cryptography – takes place on Wednesday 13 September at the Westminster Conference Centre, London. It focuses on the implications for businesses and highlights developments underway to address them.

Government cyber-security agencies (UK, US, Canada) and experts from universities and industry (including Amazon, BT, Cisco and Microsoft) will present and discuss the issues and potential solutions to this fundamental technological development that threatens catastrophic damage to Government, industry and commerce alike.

Find out more and book your place at this free event here

The Executive Track on 13 September is designed for business leaders and will outline the state of the quantum threat and its mitigation for a C-level audience including CEOs, CTOs and CISOs.

Attendees will learn how quantum computers are poised to disrupt the current security landscape, how government and industry organisations are approaching this threat, and the emerging solutions to help organisations protect their cyber systems and assets, now and into the future of quantum computing.

___________________________________________________

See the article here:
Quantum computing event explores the implications for business – Cambridge Network


Cost Reduction Strategies on Java Cloud Hosting Services – InfoQ.com


Cloud resources can be expensive, especially when you are forced to pay for resources that you don't need; on the other hand, resource shortages cause downtimes. What's a developer to do? In this article we will discuss techniques for finding the happy medium that lets you pay for just the resources you actually consume, without being limited as your application's capacity requirements scale.

The first step to any solution of course is admitting that you have a problem. Below are some details on the issue that many cloud users face.

Almost every cloud vendor offers the ability to choose from a range of different VM sizes. Choosing the right VM size can be a daunting task; too small and you can trigger performance issues or even downtimes during load spikes. Over-allocate? Then during normal load or idle periods all unused resources are wasted. Does this scenario look familiar from your own cloud hosted applications?

And when the project starts growing horizontally, the resource inefficiency issue replicates in each instance, and so, the problem grows proportionally.

In addition, if you need to add just a few more resources to the same VM, the only way out with most of current cloud vendors is to double your VM size. See the sample of AWS offering below.


Exacerbating the problem, you incur downtime when you move: stopping the current VM, performing all the steps of application redeployment or migration, and then dealing with the inevitable associated challenges.

This shows that VMs are not especially flexible or efficient in terms of resource usage, and that adjusting limits to match variable loads is awkward. Such a lack of elasticity leads directly to overpaying.

If scaling out does not help us use resources efficiently, then we need to look inside our VMs for a deeper understanding of how vertical scaling can be implemented.

Vertical scaling optimizes memory and CPU usage of any instance, according to its current load. If configured properly, this works perfectly for both monoliths, as well as microservices.

Setting up vertical scaling inside a VM by adding or removing resources on the fly without downtimes is a difficult task. VM technologies provide memory ballooning, but it's not fully automated, requiring tooling for monitoring the memory pressure in the host and guest OS, and then activating up or down scaling as appropriate. But this doesn't work well in practice, as the memory sharing should be automatic in order to be useful.

Container technology unlocks a new level of flexibility thanks to its out-of-the-box automatic resource sharing among containers on the same host, with the help of cgroups. Resources that are not consumed within the limit boundaries are automatically shared with other containers running on the same hardware node.

And unlike VMs, the resource limits of containers can easily be scaled without rebooting the running instances.

As a result, the resizing of the same container on the fly is easier, cheaper and faster than moving to larger VMs.
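As a rough illustration of why no reboot is needed, the Java sketch below (an illustration, not part of the article, and assuming the conventional cgroup v1 or v2 file locations inside a Linux container) reads the memory limit the container is currently running under. Because the limit is just a value in a file controlled by the host, an orchestrator can raise or lower it while the process keeps running.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    // Print the memory limit imposed on this container by cgroups.
    public class CgroupLimit {
        public static void main(String[] args) throws Exception {
            Path v1 = Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes"); // cgroup v1
            Path v2 = Paths.get("/sys/fs/cgroup/memory.max");                   // cgroup v2
            Path limitFile = Files.exists(v1) ? v1 : v2;
            if (!Files.exists(limitFile)) {
                System.out.println("No cgroup memory limit file found (not running in a container?)");
                return;
            }
            // For cgroup v2 the value can be the literal string "max", meaning no limit.
            String limit = Files.readString(limitFile).trim();
            System.out.println("Current container memory limit: " + limit);
        }
    }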

There are two types of containers: application and system containers. An application container (such as Docker or rkt) typically runs as little as a single process, whereas a system container (LXD, OpenVZ) behaves like a full OS and can run full-featured init systems like systemd, SysVinit, and openrc, which allow processes to spawn other processes such as openssh, crond, or syslogd together inside a single container. Both types support vertical scaling with resource sharing for higher efficiency.

Ideally, on new projects you want to design around application containers from the ground up, as it is relatively easy to create the required images using publicly available Docker templates. But there is a common misconception that containers are good only for greenfield applications (microservices and cloud-native). Experience and real use cases prove that it is possible to migrate existing workloads from VMs to containers without rewriting or redesigning applications.

For monolithic and legacy applications it is preferable to use system containers, so that you can reuse the architecture, configuration, and so on that were implemented in the original VM design: use standard network configurations like multicast, run multiple processes inside a container, avoid issues with incorrect memory limit determination, write to the local file system and keep it safe across container restarts, troubleshoot issues and analyze logs in the already established way, use a variety of SSH-based configuration tools, and rely freely on other important "old school" practices.

To migrate from VMs, the monolithic application topology should be decomposed into small logical pieces distributed among a set of interconnected containers. A simple representation of the decomposition process is shown in the picture below.

Each application component should be placed inside an isolated container. This approach can simplify the application topology in general, as some specific parts of the project may become unnecessary within a new architecture.

For example, Java EE WebLogic Server consists mainly of three kinds of instances required for running in a VM: administration server, node manager and managed server. After decomposition, we can get rid of the node manager role, which was designed as a VM agent to add and remove managed server instances; now they will be added automatically by the container and attached directly to the administration server using the container orchestration platform and a set of WLST (WebLogic Server Scripting Tool) scripts.

To proceed with migration, you need to prepare the required container images. For system containers, that process might be a bit more complex than for application containers, so either build it yourself or use an orchestrator like Jelastic with pre-configured system container templates.

And finally, deploy the project itself and configure the needed interconnections.

Now each container can be scaled up and down on the fly with no downtime. Containers are much thinner than virtual machines, so this operation takes much less time than scaling a VM. And the horizontal scaling process becomes very granular and smooth, as a container can easily be provisioned from scratch or cloned.

For scaling Java vertically, it is not sufficient to just use containers; you also need to configure the JVM properly. Specifically, the garbage collector you select should provide memory shrinking at runtime.

Such a GC packs all the live objects together, removes garbage objects, and uncommits and releases unused memory back to the operating system, in contrast to a non-shrinking GC or non-optimal JVM start options, where Java applications hold on to all committed RAM and cannot be scaled vertically according to the application load. Unfortunately, JDK 8's default Parallel garbage collector (-XX:+UseParallelGC) does not shrink the heap and so does not solve the issue of inefficient RAM usage by the JVM. Fortunately, this is easily remedied by switching to Garbage-First (-XX:+UseG1GC).

Let's look at the example below. Even if your application has low RAM utilization (blue in the graph), the unused resources cannot be shared with other processes or other containers, as the memory is fully allocated to the JVM (orange).


However, the good news for the Java ecosystem is that as of JDK 9, the modern shrinking G1 garbage collector is enabled by default. One of its main advantages is the ability to compact free memory space without lengthy GC pause times and to uncommit unused heap.

Use the following parameter to enable G1 if you are on a JDK release lower than 9: -XX:+UseG1GC

The two heap-bound parameters used in the example below configure the vertical scaling of memory resources: -Xmx sets the maximum heap the JVM may grow to, and -Xms the initial heap it starts with.

Also, the application should periodically invoke a full GC, for example via System.gc(), during a low-load or idle stage. This process can be implemented inside the application logic or automated with the help of the external Jelastic GC Agent.
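A minimal sketch of the in-application approach might look like the following. It is only an illustration: systemIsIdle() is a hypothetical placeholder for whatever load signal the application already has, and the interval would be tuned to the workload.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Background task that requests a full GC at fixed intervals while the system is idle,
    // so a shrinking collector gets a chance to uncommit free heap back to the OS.
    public class IdleGcScheduler {
        public static void start(long periodSeconds) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "idle-gc");
                t.setDaemon(true);
                return t;
            });
            scheduler.scheduleAtFixedRate(() -> {
                if (systemIsIdle()) {   // hypothetical, application-specific load check
                    System.gc();
                }
            }, periodSeconds, periodSeconds, TimeUnit.SECONDS);
        }

        private static boolean systemIsIdle() {
            // Placeholder: replace with a real measure such as request rate or CPU usage.
            return true;
        }
    }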

In the graph below, we show the result of activating the following JVM start options with delta time growth of about 300 seconds:

-XX:+UseG1GC -Xmx2g -Xms32m


This graph illustrates the significant improvement in resource utilization compared to the previous sample. The reserved RAM (orange) increases slowly corresponding to the real usage growth (blue). And all unused resources within the Max Heap limits are available to be consumed by other containers or processes running in the same host, and not wasted by standing idle.

This proves that a combination of container technology and G1 provides the highest efficiency in terms of resource usage for Java applications in the cloud.
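For readers who want to observe the shrinking behaviour on their own machine, here is a small illustrative program (not taken from the article) that prints the committed heap before and after dropping a large allocation and requesting a full GC. Run it with a shrinking collector, for example java -XX:+UseG1GC -Xms32m -Xmx2g HeapShrinkDemo; whether memory is actually uncommitted depends on the JDK version and GC flags, so treat the numbers as illustrative.

    // Observe whether the JVM returns committed heap to the OS after a full GC.
    public class HeapShrinkDemo {
        static void report(String label) {
            System.out.printf("%s: %d MB committed%n", label, Runtime.getRuntime().totalMemory() >> 20);
        }

        public static void main(String[] args) throws Exception {
            report("before allocation");

            byte[][] blocks = new byte[256][];
            for (int i = 0; i < blocks.length; i++) blocks[i] = new byte[1 << 20]; // ~256 MB
            report("at peak");

            blocks = null;   // drop references so the heap becomes mostly garbage
            System.gc();     // request a full GC; with G1 this may uncommit free regions
            Thread.sleep(1000);
            report("after full GC");
        }
    }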

The last (but not least) important step is to choose a cloud provider with a “pay per use” pricing model in order to be charged only based on consumption.

Cloud computing is very often compared to electricity usage, in that it provides resources on demand and offers a "pay as you go" model. But there is a major difference – your electric bill doesn't double when you use a little more power!

Most cloud vendors provide a "pay as you go" billing model, which means that it is possible to start with a smaller machine and then add more servers as the project grows. But as we described above, you cannot simply choose the size that precisely fits your current needs and will scale with you, without some extra manual steps and possible downtimes. So you keep paying for the limits – for a small machine at first, then for one double the size, and ultimately for several underutilized VMs once you scale horizontally.

In contrast to that, a “pay as you use” billing approach considers the load on the application instances at a present time, and provides or reclaims any required resources on the fly, which is made possible thanks to container technology. As a result, you are charged based on actual consumption and are not required to make complex reconfigurations to scale up.


But what if you are already locked into a vendor with running VMs, paying for the limits, and not ready to change that? There is still a possible workaround to increase efficiency and save money: take a large VM, install a container engine inside it, and then migrate the workloads from all of the small VMs. In this way, your application will be running inside containers within the VM – a kind of "layer-cake" – but it helps to consolidate and compact used resources, as well as to release and share unused ones.

Realizing the benefits of vertical scaling helps you quickly eliminate a range of performance issues, avoid the unnecessary complexity of rashly implemented horizontal scaling, and decrease cloud spend regardless of application type – monolith or microservice.

Ruslan Synytsky is CEO and co-founder of Jelastic, delivering multi-cloud Platform-as-a-Service for developers. He designed the core technology of the platform that runs millions of containers in a wide range of data centers worldwide. Synytsky worked on building highly-available clustered solutions, as well as enhancements of automatic vertical scaling and horizontal scaling methods for legacy and microservice applications in the cloud. Rich in technical and business experience, Synytsky is actively involved in various conferences for developers, hosting providers, integrators and enterprises.

See the original post:
Cost Reduction Strategies on Java Cloud Hosting Services – InfoQ.com


Cloud Native The Perfect Storm for Managed SD-WAN Services – Network World

We are excited to announce today that Silver Peak has joined MEF. With 130+ members, MEF's new SD-WAN initiatives are intended to address implementation challenges and help service providers to accelerate managed SD-WAN service deployments. Some of this work involves defining SD-WAN use cases, and a key use case revolves around connecting distributed enterprises and users to cloud-hosted SaaS applications and IaaS.

Enterprise CIOs continue to accelerate the pace of corporate digital transformation initiatives, often including plans to migrate enterprise applications to the cloud. Cloud-first is often the preferable choice for hosting new applications, enabling enterprises to securely connect users to applications from anywhere and across any type of WAN service.

The migration from data center-hosted to cloud-hosted applications is a perfect storm for building an SD-WAN. The best-in-class SD-WAN solutions and services take an application-aware approach that focuses on performance and availability.

Internet-Destined Traffic on the Rise

Two years ago, as an analyst, one of the questions I always asked of my enterprise clients was: what percentage of your application traffic is destined for the Internet? At the time, that percentage was less than 50%. Today, based on many recent conversations with Tier 1 service providers and distributed enterprises, I estimate that the percentage of internet-bound traffic has increased to 85%.

However, using broadband Internet services for the enterprise WAN, particularly for accessing cloud-native applications, poses additional challenges for enterprises that are concerned with the security, performance and visibility of their applications and network.

In fact, according to a recent Frost & Sullivan SD-WAN blog, 43% of enterprises chose improved cloud connectivity to deliver a better SaaS experience as the second most compelling reason to deploy an SD-WAN solution.

Ensuring High-Quality Cloud Connectivity

So how do enterprise IT managers ensure the equivalent customer experience when it comes to performance, security and visibility for cloud-hosted SaaS applications in contrast to data-center hosted applications and irrespective of the underlying network connectivity?

A best-in-class SD-WAN cloud connect use case can address the performance, security and visibility challenges for both on-net and off-net sites and across any network service including broadband.

Today, managed service providers offer either private MPLS or Ethernet cloud connect services for enterprises to connect on-net branch sites to a limited set of SaaS and IaaS providers. There are four key challenges that may limit the opportunity to fully address the managed cloud connect market:

1. With existing service provider cloud connect offerings, traffic from enterprise users at off-net sites must be backhauled to the nearest on-net provider PoP. This can introduce latency and adversely affect SaaS application performance.

2. The complexity of identifying and securing all of the enterprise's SaaS application traffic requires additional resources and security policy flexibility to integrate a secure web gateway, enterprise branch firewall, or network-based security service.

3. Identifying, managing and prioritizing trusted applications vs. personal web applications (YouTube, Facebook or Netflix) on the first packet is difficult once an application flow has already been directed to a specific path.

4. There are incremental expense, time and capital resource requirements to establish direct cloud connect peering relationships with every SaaS provider and for all SaaS data center sites.

Silver Peak SD-WAN Addresses Cloud Challenges

The Silver Peak Unity EdgeConnectSP SD-WAN solution addresses a full spectrum of key requirements for developing and deploying a compelling SD-WAN cloud connect service that can address all four key challenges:

1. Dynamic and secure steering of cloud-destined application traffic to any SaaS provider

2. Policy-based automated local internet breakout for trusted cloud applications with First-packet iQ that identifies and classifies applications on the first packet of each connection

3. High-performance SaaS optimization which calculates the round-trip latency and automatically selects the optimal cloud connect path for 50+ SaaS applications

4. Simple security service chaining to secure web gateways and industry-leading next generation firewalls to support granular security policies for SaaS and web-based applications.

5. Minimize the requirement to backhaul all off-net cloud-destined applications to the nearest service provider MPLS PoP

By taking advantage of these advanced EdgeConnectSP features and capabilities, service providers can extend their existing cloud connect services beyond the MPLS private cloud connect use cases and offer an advanced SD-WAN Cloud Connect service. This creates an enormous opportunity for service providers to offer tiered, managed cloud connect services that enable SLAs for public, private on-net and off-net deployments of SaaS applications.

As a new MEF member, Silver Peak looks forward to contributing to and enhancing the service provider market opportunity for new on-demand, tiered managed SD-WAN services.

Read the original here:
Cloud Native The Perfect Storm for Managed SD-WAN Services – Network World


Whatchu doin’ Upthere? Western Digital moves on cloud storage space – The Register

Western Digital Corporation has bought Upthere, a consumer data storage startup with its own public cloud.

Upthere was founded in 2011 by director Bertrand Sarlet, VP for business development Alex Kushnir, and Roger Bodamer. CEO Chris Bourdon joined Upthere as VP products in August 2012 after being Apple’s senior product line manager in the Mac OS X area. He was promoted to CEO in December 2015 when Bodamer left.

Sarlet worked at Apple from 1997 to 2011, and was previously at Steve Jobs' NeXT.

The company, based in Redwood City, CA, stores users' data – photos, videos, documents and music – in its own data centre and says this about its core technology:

We believe that the time has come for the world to live off of the cloud on a day-to-day basis, not merely treat it as a secondary backup or sync location. This means, however, that writing to and reading from the cloud needs to be robust and fast enough to replace local storage. Rather than juggling multiple copies of a file between devices, our direct write technology keeps everything in the cloud, freeing the device to do what it does best: creating and consuming content. In order to overcome the technical challenges of this new model of computing, we knew we needed to own, optimize, and deeply integrate each component in our system – this is the primary reason we built our full technology stack.

Consumers run a local Upthere app on their iOS, Android, macOS and Windows devices.

On exiting beta, the app was described as "a smarter way to keep, find, and share all of your files. Instead of storing your files on your devices which takes up lots of space, we safely and privately store your files directly in the cloud. Upthere breaks through the capacity limits of your devices providing one unified place for all your files that you can access from any device."

Upthere had a single, large funding round of $77m in July last year, probably sparked by the successful beta. There were six investors: Elevation Partners, Floodgate, GV, Kleiner Perkins Caufield & Byers, NTT DOCOMO Ventures, and Western Digital Capital.

The acquisition price has not been revealed. We think it is well beyond $100m and gives WDC the ability to develop and operate its own cloud storage data centres around the world, using its own storage media drives to do so. This will put it into competition with all other consumer data storage and sharing businesses, such as Amazon, Box, and Dropbox.

The Upthere business will be folded into Western Digital's Client Solutions business unit, run by SVP and GM Jim Welsh. Chris Bourdon is joining WD's Client Solutions business as a strategic leader and the Upthere team is joining WD as well. Barbara Nelson, who recently joined WD from IronKey, where she ran the Imation-owned business, will run the Cloud Services business inside WD's Cloud Solutions unit.

Upthere pricing is $1.99/month and includes the Home app and 100GB of storage. There is a free three-month trial. Download the app here.


Continue reading here:
Whatchu doin’ Upthere? Western Digital moves on cloud storage space – The Register


Nearly 300 years worth of porn was recorded to test Amazon’s … – Neowin

In the present, just as in 1724, the human race found itself in an adventurous spirit, a spirit that asked big questions, a spirit that demanded to know what the boundaries were and to push the envelope. Beaston02 wanted to know if the unlimited cloud storage offered by Amazon was really unlimited, so he recorded live webcam porn to see if he could break the boundaries of “unlimited” cloud storage. And if you started watching his recorded porn back in 1724 you would only just have finished it.

Amazon suspended its unlimited storage option in June – many speculate that it was due to beaston02's giant porn stash – but he himself claims that decision has nothing to do with him. The Redditor claims that he has more of a problem with hoarding data than he does with porn, as the whole thing started out as part of a bigger project to learn new code and to test himself to see how much data he could capture.

So he set off, and after five to six months he had captured nearly 2 petabytes (1.8) of pornography by setting up a number of programs to record free livestreamed webcam shows. Broken down, that equates to the following: 23.4 years' worth of HD porn, 102 years' worth at 720p, or a whopping three centuries' worth (293 years) at 480p.

Beaston02 has stopped his recording adventures but has released how he did it, and the torch has been passed on to another group of diligent porn archivists – if we are going to let that become a title – who have collectively embarked on the Petabyte Porn Project, which has a similar goal of recording and storing porn on Amazon and Google Drive. The Petabyte Porn Project allegedly stockpiles 12 terabytes a day.

Earlier this month, plans for the largest data storage centre, to be built in the Arctic Circle in Norway, were reported on. Back then, some naysayers questioned what could possibly demand such a huge, secure facility; it seems that question has been answered.

Go here to see the original:
Nearly 300 years worth of porn was recorded to test Amazon’s … – Neowin


pCloud First Cloud Storage Provider to Offer Lifetime Plan | 08/29/17 … – Markets Insider

BAAR, Switzerland, Aug. 29, 2017 /PRNewswire/ —

What is pCloud?

pCloud is a personal cloud space where files and folders can be stored. It has a user-friendly interface that clearly shows where everything is located and what it does. The software is available for almost any device and platform – iOS and Android devices, Mac, Windows, and Linux. By installing pCloud on the computer (through its desktop application pCloud Drive), the app creates a secure virtual drive which expands local storage space. Every change made in a pCloud account can be seen immediately on all other devices – computer, phone or tablet. All devices are instantly synchronized and have direct file access to any update. And if that's not enough, pCloud offers a new, industry-first LIFETIME PLAN so everyone will have unlimited, secure storage space forever.

How is pCloud unique from other cloud storage services?

The main difference is that pCloud does not take space on the computer. pCloud Drive acts as a virtual hard disk drive, which allows users to access and work with content in the cloud, without using any local space.

What is significant about the lifetime plan?

The introduction of the Lifetime plan is something that no other company in the cloud storage market has done before. It gives users the chance to invest in a secure storage solution and eliminate the risk of losing their files to external drives, which have an average lifespan of around 5 years.

External hard drives are an imperfect solution, as they cost hundreds of dollars and have a 20% chance of breaking down in the first year, not to mention the risk of being stolen. The cost of recovering information on an external hard drive is extremely high, and can often exceed $1,000.

Over a long period, the cost of other cloud storage services is exorbitant and prohibitive.

How much does pCloud lifetime storage cost?

With pCloud, there are no monthly or yearly payments. For one payment users get storage for a lifetime.

About pCloud

pCloud was launched just over 3 years ago and has grown into a community of more than 7 million users from around the world. Today, the service is among the top five players in the global cloud storage market. In 2015 the company received a Series A investment round of $3 million for the expansion of the service on the international scene. pCloud has over 1.4 billion uploaded files and over eight petabytes of maintained information.

Media Contact: Tunio Zafer, CEO, 172930@email4pr.com, +41 43 508 59 48

View original content with multimedia:http://www.prnewswire.com/news-releases/pcloud-first-cloud-storage-provider-to-offer-lifetime-plan-300510271.html

SOURCE pCloud

Read more here:
pCloud First Cloud Storage Provider to Offer Lifetime Plan | 08/29/17 … – Markets Insider
