
Holders of Over 11 Million Bitcoin are Proving That Hodl Is Not Just a Meme – CCN.com

Many bitcoin traders laugh at the idea of HODL (holding on for dear life). They think that it is not a sound trading or investing strategy. If you buy and hold on the way down, it's very likely that you'll use the same approach on the way up. The rigidity of this method makes it difficult for many investors to lock in gains.

Nevertheless, the strategy is so popular that it has become a meme.

HODL's acceptance appears to border on obsession. A new report reveals that millions of BTC have not moved in a year.

The S&P 500 is up nearly 27% year-to-date. If the index closes the year with gains around that number, many investors would consider 2019 a good year.

Bitcoin holders are also having a great year. The top cryptocurrency is up by over 95% year-to-date. The numbers align. According to The Block, 61% of bitcoin holders are sitting in profit.

Even with gains that are over 250% higher than the S&P 500's, there's a sense that bitcoin investors are far from satisfied. Many expect mind-numbing and jaw-dropping performance from the dominant cryptocurrency.

I say this because BitInfoCharts shows that 11.58 million BTC have not moved in over a year.

In other words, 64% of the more than 18.04 million bitcoin in circulation are not moving. This means that only about 6.46 million BTC are being used for speculation or payment settlement. At bitcoin's current price of $7,260, only $48.99 billion worth of BTC has been changing hands over the last year.
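For readers who want to check the arithmetic, here is a minimal Python sketch using the article's own figures; note that the article's $48.99 billion figure implies a slightly higher snapshot price than $7,260, so this illustration lands a little lower.

```python
# Reproducing the article's arithmetic; the input figures are the article's own.
circulating = 18_040_000      # BTC in circulation (slightly more, per the text)
dormant = 11_580_000          # BTC untouched for over a year (BitInfoCharts)
price = 7_260                 # USD per BTC at the time of writing

dormant_share = dormant / circulating    # ~0.64, i.e. about 64%
active = circulating - dormant           # ~6.46 million BTC
active_value = active * price            # roughly $47 billion at $7,260

print(f"{dormant_share:.0%} dormant; {active:,} BTC active; "
      f"~${active_value / 1e9:.1f}B changing hands")
```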

This has tremendous bullish implications for the number one cryptocurrency.

Haters like Peter Schiff always claim that bitcoin has no intrinsic value. They say that unlike gold, which can be used in electronics, bitcoin doesn't have any utility. More importantly, it is not backed by anything that can prove its value.

Well, bitcoin's value comes from its scarcity. There will only ever be 21 million BTC in existence. On top of that, around 4 million BTC are lost. The scarcity is real.

We spoke to Mati Greenspan, founder of Quantum Economics, and asked whether 11.58 million BTC being HODLed has long-term bullish implications. He told CCN,

Yes

The analyst then referred us to one of his recent tweets.

Trader Max echoes Mati Greenspan's sentiments. The trader told CCN,

There are too many variables at play but scarcity is a good price driver.

The good news for HODLers is that bitcoin is about to get more scarce in the coming months.

If analysts believe that scarcity drives the price of bitcoin, then it would be fair to assume that the top cryptocurrency's value will soar in the coming months. In about six months, bitcoin block rewards will be cut in half, from 12.5 BTC down to 6.25 BTC. This means that only around 900 BTC will be issued to miners on a daily basis.
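A quick back-of-the-envelope check of that 900 BTC figure, sketched in Python; the 10-minute block interval is the protocol's target, not a guarantee, so the daily numbers are approximate.

```python
# Daily issuance before and after the halving, assuming the 10-minute target.
blocks_per_day = 24 * 60 / 10        # ~144 blocks per day at the target interval
reward_before = 12.5                 # BTC per block before the halving
reward_after = reward_before / 2     # 6.25 BTC per block after the halving

print(f"~{blocks_per_day * reward_before:.0f} BTC/day before, "
      f"~{blocks_per_day * reward_after:.0f} BTC/day after")   # ~1,800 -> ~900
```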

The dramatic drop in supply due to the halving would alleviate selling pressure at the very least. The reduction in selling is likely to drive prices higher. That might be more than enough to trigger a massive buying frenzy.

Therefore, HODL and the halving are two of bitcoin's strongest narratives.

Disclaimer: The above should not be considered trading advice from CCN. The writer owns bitcoin and other cryptocurrencies. He holds investment positions in the coins but does not engage in short-term or day-trading.

This article was edited by Sam Bourgi.

More here:
Holders of Over 11 Million Bitcoin are Proving That Hodl Is Not Just a Meme - CCN.com


Over $5 Billion Worth of Bitcoin Moved in Minutes; What Happened? – newsBTC

If you've been on Crypto Twitter at all over the past few hours, you've likely noticed a lot of buzz about Bitcoin, specifically large BTC transactions. CoinDesk's Wong Joon Ian noted that either @whale_alert (a bot tracking large and suspicious cryptocurrency transactions) is broken or several billion dollars' worth of BTC just moved around in a few minutes.

Indeed, Whale Alert registered a series of over 10 transactions of over 50,000 BTC (worth over $400 million as of the time of writing this) in the span of some twenty minutes.

Many analysts quickly reacted to the transactions, making claims that they signify that the Bitcoin price bottom is in, or that a strong BTC drop is about to take place. Unfortunately for traders, the transactions likely mean none of that.

So why are the funds moving? What does it mean for Bitcoin? And who is the entity playing around with hundreds of millions worth of the leading cryptocurrency?

Well firstly, to clarify, the funds were not being moved by multiple entities. As The Block's head of research, Larry Cermak, pointed out, it's the same address constantly moving the same stash: "The 55,337 BTC (~$410.6M) is now parked in this address," he noted, before drawing attention to a new address in which thousands of coins had been deposited.

As to why the funds are moving, Whale Alert itself noted that the transactions can likely be classified as "peeling" transactions, which is normal behavior for wallets that many exchanges use.
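To make the "peeling" pattern concrete, here is a small, hypothetical Python sketch of what a peel chain looks like: a large balance repeatedly spends itself onward to a fresh change address while a small amount is peeled off to another destination. The data layout and threshold are illustrative assumptions, not Whale Alert's actual detection logic.

```python
# Hypothetical peel-chain check: each hop forwards most of the previous change
# output and peels a small slice off. Not Whale Alert's real heuristics.
def looks_like_peel_chain(txs, peel_fraction=0.05):
    """txs: ordered list of dicts with 'change_out' and 'peel_out' BTC amounts."""
    for prev, curr in zip(txs, txs[1:]):
        # The next hop cannot spend more than the previous change output...
        if curr["change_out"] + curr["peel_out"] > prev["change_out"]:
            return False
        # ...and only a small fraction should be peeled off at each step.
        if curr["peel_out"] > peel_fraction * prev["change_out"]:
            return False
    return True

toy_chain = [
    {"change_out": 55_337.0, "peel_out": 0.0},
    {"change_out": 54_900.0, "peel_out": 400.0},
    {"change_out": 54_450.0, "peel_out": 440.0},
]
print(looks_like_peel_chain(toy_chain))  # True for this toy example
```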

Right now, it isn't clear what the peeled funds are being used for, or which exchange is involved in the transactions (some suggest it's Bitfinex).

While this uptick in transaction volume seems to just be an exchange doing, well, exchange things, analysts think that the uptick in Bitcoin's on-chain metrics hints at an impending price bull run.

Per previous reports from NewsBTC, the creator of Look Into Bitcoin, one Philip Swift, remarked in a ten-part Twitter thread that he thinks the next macro BTC bull market is near. One of his main reasons was that Bitcoin's Network Momentum indicator, which tracks the movement of coins to determine the usage of the network, has begun to trend higher, bouncing off bear market levels. This is something often seen six to 10 weeks prior to the beginning of a bull market, Swift remarked.

That's not all. He added that the cryptocurrency is holding above its 350-day simple moving average; this is important, as the price moving and holding above this moving average has always indicated the start of Bitcoin bull markets. And also, the Golden Ratio Multiplier, an equation that the analyst created to analyze the BTC price, implies that the cryptocurrency could see an explosive move to $12,000 to $13,000 by January or February.
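For readers curious how such a check is computed, here is a minimal Python sketch of the 350-day simple moving average test described above, plus an illustrative set of multiplier bands; the band multiples are assumptions for illustration, not Swift's exact formulation.

```python
# Minimal sketch of the 350-day moving-average check described above.
# `closes` is assumed to be a list of daily closing prices, oldest first.
def sma(values, window):
    """Simple moving average over the last `window` values, or None if too short."""
    if len(values) < window:
        return None
    return sum(values[-window:]) / window

def holding_above_350dma(closes):
    ma = sma(closes, 350)
    return ma is not None and closes[-1] > ma

# Illustrative bands in the spirit of the Golden Ratio Multiplier: multiples of
# the 350-day MA (the multiples here are assumptions, not Swift's exact ones).
def multiplier_bands(closes, multiples=(1.6, 2.0, 3.0)):
    ma = sma(closes, 350)
    return {m: round(ma * m, 2) for m in multiples} if ma else {}
```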

More:
Over $5 Billion Worth of Bitcoin Moved in Minutes; What Happened? - newsBTC


Bitcoin pares early losses, turns flat on the day near $7,500 – FXStreet

Bitcoin (BTC) fell to a daily low of $7,380 during the early trading hours of the Asian session on Sunday but didn't have a difficult time retracing its losses. As of writing, the BTC/USD pair was virtually unchanged on the day at $7,500. The lack of any significant developments that would attract the cryptocurrency market's attention causes major cryptocurrencies to remain stuck in their recent ranges. Even after the Istanbul network update, Ethereum continues to move sideways around $150.

The Relative Strength Index (RSI) on the daily chart stays directionless near the 50 mark, revealing the pair's indecisiveness in the short term. Additionally, a symmetrical triangle seems to be forming on the same chart, further supporting the view that the pair will remain neutral. Meanwhile, the pair seems to be holding above the 20-day moving average (MA) for the second straight day, but that by itself is not enough to suggest that buyers are looking to take control of the action.

The initial support for BTC could be seen at $7,380 (December 7 low) ahead of $7,080 (December low) and $6,500 (Nov. 25 low). Resistances, on the other hand, align at $8,000 (Fibonacci 38.2% retracement of the October 25 - November 25 drop), $8,500 (Fibonacci 50% retracement of the October 25 - November 25 drop) and $8,720 (100-day MA).
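For context, this is the conventional way such retracement levels are derived: measure the size of the drop and step back up from the swing low by the Fibonacci ratios. The sketch below uses hypothetical swing prices chosen only to land near the quoted levels, not FXStreet's exact figures.

```python
# Conventional Fibonacci retracement arithmetic for a falling swing: levels are
# measured back up from the swing low toward the swing high.
def fib_retracements(swing_high, swing_low, ratios=(0.382, 0.5, 0.618)):
    span = swing_high - swing_low
    return {r: swing_low + span * r for r in ratios}

# Assumed swing points for illustration only.
levels = fib_retracements(swing_high=10_350, swing_low=6_500)
for ratio, price in sorted(levels.items()):
    print(f"{ratio:.1%} retracement: ${price:,.0f}")   # ~$7,971 and ~$8,425
```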

Go here to read the rest:
Bitcoin pares early losses, turns flat on the day near $7,500 - FXStreet


Bitcoin On-Chain Momentum Is Crossing Bullish: Willy Woo – Bitcoinist

The past month has been a pretty challenging time for bitcoin HODLers, and cryptocurrency in general. There has been precious little to celebrate, save for the fact that prices could easily have gone lower still.

So isn't it about time we had some positive news? Luckily, who should pop up at just the perfect juncture but the Master of On-Chain Analysis, Willy Woo. And according to him, BTC's on-chain momentum is crossing bullish.

Woo made the claim in a tweet posted earlier today, along with a chart featuring several unlabelled wiggly lines. Unfortunately, he wasn't even able to tell us what the indicator is. Apparently it is proprietary to Adaptive Capital, Woo's analytics-led hedge fund project with Murad Mahmudov and David Puell.

All Woo did say was that it tracks investor momentum, presumably by considering a combination of bitcoin on-chain transaction volume, and perhaps UTXOs (unspent transaction outputs) to chart BTC HODLing. There appear to be at least two sets of indicators, one of which looks like a set of different timescale moving averages.

After a November we had to cover our faces with our hands for, sitting and watching through slits in our fingers, perhaps that's fair enough.

You might also be heartened by Woo's assertion that the bitcoin bottom is most likely in, and that anything lower will be just a wick in the macro view.

From here on in we are front running in preparation for the halving, he says.

When questioned on his degree of confidence in this particular bitcoin prediction, Woo pointed out that:

I only tweet when I'm at high confidence, else it erodes my rep.

During the summer, Woo introduced us to the Bitcoin Difficulty Ribbon, giving a simple metric to compare several moving averages of mining difficulty on different timescales. This is also currently starting to compress, indicating that now is a good time to buy bitcoin.
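As a rough illustration of the idea, here is a minimal Python sketch of a difficulty ribbon: several moving averages of mining difficulty over different lookbacks, with "compression" read as the spread between them shrinking. The window choices and the compression measure are assumptions for illustration, not Woo's exact construction.

```python
# Sketch of a difficulty ribbon: multiple moving averages of mining difficulty.
def moving_average(series, window):
    return sum(series[-window:]) / window if len(series) >= window else None

def ribbon_compression(difficulty, windows=(9, 14, 25, 40, 60, 90, 128, 200)):
    """Spread of the ribbon relative to its mean; smaller means more compressed."""
    mas = [moving_average(difficulty, w) for w in windows]
    mas = [m for m in mas if m is not None]
    if len(mas) < 2:
        return None
    mean = sum(mas) / len(mas)
    return (max(mas) - min(mas)) / mean
```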

Do Willy Woo's comments on on-chain bitcoin volume make sense? Let us know in the comments!

Images via Shutterstock, Willy Woo

Read the rest here:
Bitcoin On-Chain Momentum Is Crossing Bullish: Willy Woo - Bitcoinist


Amazon’s cloud business bombards the market with dozens of new features as it looks to preserve its lead – CNBC

Amazon's cloud business now has over 175 different services for customers to use. That's up from more than 100 services two years ago and 140 last year.

Don't worry, no one will ask you to name them all. But the fact is, Amazon Web Services, a 13-year-old division of the e-commerce company, is coming out with new technologies for its customers really fast, making the competition look like slackers.

It's important for Amazon Web Services to show off new ideas, as it's Amazon's main source of operating income. Amazon is ahead of all other companies in the growing cloud infrastructure market, where software developers can pay for however much computing and storage they use, rather than rely on their companies' existing facilities. It helps that Amazon was earlier to market than other big competitors like Microsoft and Google, but it's maintained that position by continuously adding new features.

In 2018 Amazon controlled about 47.8% of the market, according to technology industry research firm Gartner. That's down from 49.4% in 2017. Amazon would like to see its share widen, not narrow.

At the annual AWS Reinvent conference in Las Vegas on Tuesday, Amazon announced new chips to run customers' applications in its data centers, plus new services and feature enhancements for developers to check out. Although AWS boss Andy Jassy snuck in a few potshots at competitors Google, IBM, Microsoft and Oracle, he spent more of his stage time touting existing and new capabilities before an audience of 65,000. It was about tools; in other words, it was about adding to Amazon's technological lead. To underline the point, he called to the stage Goldman Sachs CEO David Solomon and Cerner CEO Brent Shafer, who talked up their companies' use of AWS.

"There were AWS 28 launches announced today, 23 of which were made during Andy's keynote," an AWS spokesperson told CNBC in an email on Tuesday.

Highlights included:

Graviton2. AWS is launching more powerful processors it developed in house based on the Arm architecture to power computing resources, representing an alternative to existing cloud servers containing Intel and AMD chips. The chips promise to provide lower cost for the same level of performance in tasks like handling user requests in applications, analyzing user data or monitoring performance.

Wavelength. The new Wavelength service, thanks to collaborations with Verizon and other service providers, will enable faster cloud computing and storage services to keep applications moving quickly as 5G arrives.

Fraud Detector. A new service for fraud detection will help companies suss out fake sign-ups and transactions from stolen credit cards. It draws on knowledge Amazon has built up over the years about selling products online.

Contact Lens. New analytics technology for its Connect contact center service can recognize people's emotions on phone calls coming in from customers, so representatives can provide better support.

Kendra. Another service, Kendra, will be able to search for information stored in various enterprise content repositories, including Box and Microsoft's SharePoint.

Managed Apache Cassandra Service. Amazon revealed a new service for using the open-source database Cassandra that will compete with products from a start-up called DataStax. The company has done this before with companies like Elastic and MongoDB.

CodeGuru is a new offering that programmers can tap when they want a computer to review their source code so that it runs efficiently. The service will work with code storage service GitHub, which is owned by cloud rival Microsoft.

SageMaker. People who come up with artificial-intelligence models can now use a web application from AWS called SageMaker Studio IDE that's designed just for that work. In addition, a new tool called SageMaker Autopilot can help customers train AI models; all one has to do is feed it some data.

There's more, but the point is that AWS just rolled out a whole bunch of bells and whistles for companies big and small. If keeping track of all the new stuff feels overwhelming, that's kind of the point. Amazon wants people to feel like it's coming out with so much that no one can keep up.

The only thing Amazon did not announce: price cuts.

Follow @CNBCtech on Twitter for the latest tech industry news.

Link:
Amazon's cloud business bombards the market with dozens of new features as it looks to preserve its lead - CNBC


What is Infrastructure-as-a-Service? Everything you need to know about IaaS – TechRadar

Not every company has a vast IT operation. This might involve a data center with business servers, network switches and equipment, storage -- and the related IT service management staff needed to run it. Yet, with the emergence of cloud computing for storage and web-based software, the concept of outsourcing the computing power itself to the cloud became viable.

Known as Infrastructure-as-a-Service (or IaaS), the idea is to take most of the complexity of IT involving servers, storage and networking and move it out to the cloud, where it is managed by a third party. In essence, IaaS gives you access to a data center in the cloud, although there are some important things to know about how this actually works.

Before diving into the key components of Infrastructure-as-a-Service, it's important to understand how the concept even developed. Cloud computing became more viable once Internet speeds increased, host providers started addressing security concerns, and businesses started relying on web-based apps (known as Software-as-a-Service or SaaS). A next evolutionary step, called Platform-as-a-Service (or PaaS), involves the hardware and operating systems needed to run corporate apps or customer-facing apps; companies can focus on the applications and not the hardware (patches, security, updates, and maintenance).

Infrastructure-as-a-Service expands on both of these models. Typically, this means the entire IT operation is cloud-based, including the software, servers, networks, and storage. Let's cover each of those, and also explain what is not part of Infrastructure-as-a-Service.

Knowing the key components of Infrastructure-as-a-Service is important, especially since there are still aspects that are managed by your company and not the cloud provider. As mentioned, IaaS typically involves three key components: the servers, network, and storage.

As with most web-based apps, Infrastructure-as-a-Service almost always involves hosted software. This can be the business apps used to run your company, the email clients, the office productivity apps, and just about anything you can think of to run your business. However, it might not include the in-house software you develop and host.

For servers, the cloud provider is tasked with all of the maintenance, updates, endpoint security, and management related to keeping the cloud running at optimal levels. You can trust that the infrastructure you run on the remote cloud servers is maintained properly. For companies with on-premise data centers, you know that it often requires a full staff of operators to install servers, keep them updated, and fix any problems.

Storage is another key component and the classic (original) definition of cloud computing. Most companies first realized the benefits of the cloud when they started using web-based apps and started relying on cloud storage, which means more elastic file storage that can expand and contract to meet your demands and company growth strategies. To end-users in your company, cloud storage appears to be infinite and always expanding.

Infrastructure-as-a-Service also involves network monitoring and management, and can also expand and change as needed for your company. This can involve all of the network security features you might need, the network management and throttling, and maintenance.

It's important to know that Infrastructure-as-a-Service does not alleviate all possible IT work from the equation. What is often left to the company to manage involves any custom, in-house software development and also the business computers, printers, and mobile devices such as smartphones that attach to the cloud and benefit from Infrastructure-as-a-Service. Often, there is a middleware component as well, especially if you also use an internal data center and need to make connections to the Infrastructure-as-a-Service provider or between custom apps.

As you can imagine, the key benefit here is reduced complexity. The cloud hosting provider absorbs most of the complexity of managing and updating servers, maintaining network topologies, and making sure the storage is always available and archived. As a company moves from SaaS only, to PaaS used with custom apps, up to IaaS as a more complete solution, the benefits also increase in terms of dealing with less and less complexity.

Another benefit has to do with security. Many companies are dealing with security issues on a continual basis -- security on servers, networks, within storage archives, and even with end-users. With Infrastructure-as-a-Service, the security issues move from the data center out to the end-user, and IT staff will typically shift to a support role for end-users where they can assist with problems but also educate employees about proper security protocols.

Another shift is that the IT employees become partners with the host provider, and their role tends to be more about on-premise support. This often frees staff to focus on strategy, partnering with the provider to orchestrate cloud services, and developing long-term plans for IT operations, without the typical micro-management duties involved with servers, networks, and storage.

In the end, Infrastructure-as-a-Service is a way to outsource complexity and refocus on internal needs, employee support, and in-house development and infrastructure duties.

Read the original post:
What is Infrastructure-as-a-Service? Everything you need to know about IaaS - TechRadar


Report: Growing HCI Space Boosted by Cloud, AI – Virtualization Review

News

The hyperconverged infrastructure (HCI) space is growing fast as it adjusts to new technologies and factors such as artificial intelligence and hybrid/multicloud implementations, according to a new report from research firm Gartner Inc.

The company's new "Magic Quadrant for Hyperconverged Infrastructure" report finds Nutanix and VMware at the top of a pack of "Leaders" that also includes Dell EMC, Cisco and HPE. Gartner describes HCI as "a category of scale-out software-integrated infrastructure that applies a modular approach to compute, network and storage on standard hardware, leveraging distributed, horizontal building blocks under unified management." It basically substitutes old-world, proprietary, hardware-based, purpose-built systems with software-centric, integrated systems running on commercial off-the-shelf servers, with the focus on virtualized networking, compute and storage.

The report says better HCI scalability and management functionality will result in 70 percent of enterprises running some form of HCI (that is, appliance, software, cloud-tethered) by 2023, up from less than 30 percent this year.

While the movement is growing, it's also evolving, Gartner said, with some implementations leveraging AI to automatically improve performance and prevent failures, and others increasingly supporting different kinds of cloud implementations.

The cloud, the report indicates, can almost be thought of as a double-edged sword.

"For most HCI vendors, the public cloud is an extension of the strategy, but also could be a strategic threat if IT leaders buy public cloud services in lieu of spending on their own infrastructure," the report says. Furthermore, different kinds of cloud applications are being considered. "At the same time, HCI vendors have expanded their strategy to embrace hybrid/multicloud deployments, as either backup targets or disaster recovery options, or as an alternative for on-premises infrastructure for unpredictable or cyclical resource requirements."

Overall, Gartner said of the market, "Hyperconverged infrastructure solutions are making substantial inroads into a broader set of use cases and deployment options, but limitations exist. Infrastructure and operations (I&O) leaders should view HCI solutions as tools in the toolbox, rather than as panaceas for all IT infrastructure problems."

As noted, many of those tools come from vendors in the "Leaders" quadrant: Nutanix, VMware, Dell EMC, Cisco and HPE. Microsoft was the only vendor in the "Visionaries" section, while Huawei and Pivot3 were named "Challengers" and the final camp of "Niche Players" consisted of Scale Computing, Huayun Data Group, Red Hat, Sangfor Technologies, StorMagic, DataCore and StarWind.

To be eligible for the study, HCI vendors' functional criteria included:

Gartner also revisited the cloud factor in providing context for the report.

"One of the attractions of integrated systems and HCI is the potential to create a cloudlike provisioning model while maintaining physical control of IT assets and data on-premises in the data center, remote site or branch office," the report said. "Over the next few years, cloud deployment models will become increasingly important to meet both short-term scale-up/scale-down requirements and backup and disaster-recovery requirements. An important question for users is whether HCI is a stepping stone to the cloud or a 'foreseeable future' resting place for applications; and ultimately, whether it is a good alternative to the public cloud from performance, manageability at scale and cost perspectives."

A copy of the report licensed for distribution is available from VMware here.

About the Author

David Ramel is an editor and writer for Converge360.

Read the original here:
Report: Growing HCI Space Boosted by Cloud, AI - Virtualization Review


How AWS Plans to Speed Up the Cloud – Toolbox

Amazon Web Services is aiming to make the most of 5G telecoms by embedding cloud resources at the network edge, closer to users' devices. It will also help the global leader in cloud computing platforms to distance itself from competitors.

AWS Wavelength, a new service announced at the Amazon subsidiary's re:Invent conference in Las Vegas, features tools for storage, analytics, compute and databases that speed performance by lowering latencies.

Such latencies, or network delays, occur when data and instructions traverse the connection points that separate devices like smartphones and sensors from data centers.

It's among a host of new products, services and features trotted out at the conference. They include Outposts, which lets enterprise users run AWS on premises, and Local Zones, which deliver select AWS services to specific geographic areas.

With them, customers can exploit fifth generation's wider spectrum of frequencies for applications that range from streaming video to self-guided machines and smart cities. They also can improve power consumption and reduce bandwidth.

Installing servers in the data centers of network operators cuts transfer times by taking links out of those chains. As a result, AWS claims latencies can be lowered from hundreds of milliseconds to the single digits.

That's vital for emerging technologies like self-driving vehicles, which must parse and model sensor data to make decisions in real time. According to AWS, Wavelength can improve speeds by 20x over existing 4G networks.

The Amazon subsidiary is partnering with network operators in major markets worldwide. They include Verizon, which is testing its mobile edge computing service in Chicago and intends to expand its 5G Ultra Wideband network to 30 cities by year's end.

British mobile operator Vodafone, South Korea's SK Telecom and Japan's KDDI also have signed on and will begin offering Wavelength next year.

AWS says Wavelength will service 69 zones in 22 regions, enabling worldwide coverage as the new standard gets rolled out.

Mapbox and the Finnish augmented-reality specialist Varjo Technologies are putting Wavelength through trials. Varjo is using the service's improved speeds to blend virtual reality with real-time image rendering to improve resolution for immersive computing applications. The apps can run without the need for dedicated local servers.

Mapbox's 1.7-million-member developer community can benefit from the artificial intelligence that guides users around obstacles like traffic jams and road construction as they drive to their destinations. Company execs say Wavelength permits automatic updating based on data from millions of sensors, allowing customers to refresh maps in pages and applications faster with 5G.

The Outposts service lets companies import cloud-ready rack servers for low-latency apps that AWS installs, maintains and updates. Local Zone lets users tap AWS infrastructure for faster processing of media and entertainment, advertising technologies, electronic design automation and machine learning.

With nearly half the global market for cloud services, AWS isn't content to rest on its considerable lead over second-place Microsoft Azure and stragglers Google, IBM and Oracle.

The measure of success will be whether lower latencies translate to faster transitions among corporate users seeking to offload more of their storage and computing infrastructure to outsource providers.

View original post here:
How AWS Plans to Speed Up the Cloud - Toolbox


20 VPS providers to shut down on Monday, giving customers two days to save their data – ZDNet

At least 20 web hosting providers have hastily notified customers today, Saturday, December 7, that they plan to shut down on Monday, giving their clients two days to download data from their accounts before servers are shut down and wiped clean.

The list of providers that notified customers about their impending shutdown includes:

ArkaHosting, Bigfoot Servers, DCNHost, HostBRZ, HostedSimply, Hosting73, KudoHosting, LQHosting, MegaZoneHosting, n3Servers, ServerStrong, SnowVPS, SparkVPS, StrongHosting, SuperbVPS, SupremeVPS, TCNHosting, UMaxHosting, WelcomeHosting, and X4Servers.

All the services listed above offer cheap low-end virtual private servers (VPSes). The providers appear to be using servers hosted in ColoCrossing data centers, a source told ZDNet.

Furthermore, all the websites feature a similar page structure, share large chunks of text, use the same CAPTCHA technology, and have notified customers using the same email template.

All clues point to the fact that all 20 websites are part of an affiliate scheme or a multi-brand business run by the same entity.

The initial reaction on bulletin boards dedicated to discussing web hosting topics was that someone might be sabotaging the company behind all these VPS providers by sending spoofed emails and hoping that customers jump ship.

This proved to be false. In the hours after they received the notifications, several users confirmed the email's legitimacy by analyzing email headers, confirmed the shutdown with the support staff at their respective VPS provider, and found a copy of the same message in their web hosting dashboards.

Since then, customers have shifted from surprise to anger. Some said inquiries about refunds remained unanswered.

Those who didn't lose too much money quickly realized they were set to work the weekend, as they had to download all their data and find a new provider, in order to avoid a prolonged downtime on Monday, when the 20 providers are set to shut off servers.

Online, the phrase "exit scam" is now being mentioned in several places [1, 2]. Some theories claim the company behind all these VPS providers is running away with the money it made in Black Friday and Cyber Monday deals.

Paranoia is high, and for good reasons. As several users have pointed out, the VPS providers don't list physical addresses, don't list proper business registration information, and have no references to their ownership. Effectively, they look like ghost companies.

Requests for comment sent by ZDNet to some of the VPS providers remained unanswered before this article's publication.

A user impacted by the shutdown told ZDNet that the number of VPS providers shutting down might also be higher than 20, as not all customers might have shared the email notification online with others.

Another source pointed out that a search for a server IP address used by one of the soon-to-close VPS providers shows it has also been used by other companies that also provide cheap VPS hosting services -- which are also using websites with a similar structure and templates as some of the services shutting down.
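As a rough illustration of that kind of check, here is a minimal Python sketch that resolves a set of hosting domains and groups them by shared server IP. The domain names are placeholders rather than the providers' actual hostnames, and a real investigation would lean on historical and reverse-IP datasets rather than a simple live lookup.

```python
import socket
from collections import defaultdict

# Hypothetical domain list; the real provider hostnames are not reproduced here.
domains = ["example-vps-one.com", "example-vps-two.com", "example-vps-three.com"]

by_ip = defaultdict(list)
for domain in domains:
    try:
        ip = socket.gethostbyname(domain)   # simple forward DNS lookup
    except socket.gaierror:
        continue                            # skip domains that no longer resolve
    by_ip[ip].append(domain)

# Domains sharing an IP address hint at shared hosting infrastructure.
for ip, hosts in by_ip.items():
    if len(hosts) > 1:
        print(f"{ip} is shared by: {', '.join(hosts)}")
```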

(h/t Bad Packets)

Read the original:
20 VPS providers to shut down on Monday, giving customers two days to save their data - ZDNet


Finally: AWS Gives Servers A Real Shot In The Arm – The Next Platform

Finally, we get to test out how well or poorly a well-designed Arm server chip will do in the datacenter. And we don't have to wait for any of the traditional and upstart server chip makers to convince server partners to build and support machines, and the software partners to get on board and certify their stacks and apps to run on the chip. Amazon Web Services is an ecosystem unto itself, and it owns a lot of its own stack, so it can just mic drop the Graviton2 processor on the stage at re:Invent in Las Vegas and dare Marvell, Ampere, and anyone else who cares to try to keep up.

And that is precisely what Andy Jassy, chief executive officer of AWS, did in announcing the second generation of server-class Arm processors that the cloud computing behemoth has created with its Annapurna Labs division, making it clear to Intel and AMD alike that it doesn't need X86 processors to run a lot of its workloads.

It's funny to think of X86 chips as being a legacy workload that costs a premium to make and therefore costs a premium to own or rent, but this is the situation that AWS is itself setting up on its infrastructure. It is still early days, obviously, but if even half of the major hyperscalers and cloud builders follow suit and build custom (or barely custom) versions of the Arm Holdings Neoverse chip designs, which are very good indeed and on a pretty aggressive cadence and performance roadmap, then a representative portion of annual X86 server chip shipments could move from X86 to Arm in a very short time, call it two to three years.

Microsoft has made no secret that it wants to have 50 percent of its server capacity on Arm processors, and has recently started deploying Marvell's Vulcan ThunderX2 processors in its Olympus rack servers internally. Microsoft is not talking about the extent of its deployments, but our guess is that it is on the order of tens of thousands of units, which ain't but a speck against the millions of machines in its server fleet. Google has dabbled in Power processors for relatively big iron and has done some deployments, but again we don't know the magnitude. Google was rumored to be the big backer that Qualcomm had for its Amberwing Centriq 2400 processor, and there are persistent whispers that it might be designing its own server and SmartNIC processors based on the Arm architecture, but given the licensing requirements, it seems just as likely that Google would go straight to the open source RISC-V instruction set and work to enhance that. Alibaba has dabbled with Arm servers for the past three years, and in July announced its own Xuantie 910 chip, based on RISC-V. Huawei Technology's HiSilicon chip design subsidiary launched its 64-core Kunpeng 920, which we presume is a variant of Arm's own Ares Neoverse N1 design and which we presume will be aimed at Chinese hyperscalers, cloud builders, telcos, and other service providers. We think that Amazon's Graviton2 probably looks a lot like the Kunpeng 920, in fact, and they probably borrow heavily from the Arm Ares design. As is the case with all Arm designs, they do not include memory controllers or PCI-Express controllers, which have to be designed or licensed separately from third parties.

This time last year, AWS rolled out the original Graviton Arm server chip, which had 16 vCPUs running at 2.3 GHz; it was implemented in 16 nanometer processes from Taiwan Semiconductor Manufacturing Corp. AWS never did confirm if the Graviton processor had sixteen cores with no SMT or eight cores with two-way SMT, but we think it does not have SMT and that it is just a stock Cosmos core, itself a tweaked Cortex-A72 or Cortex-A75 core, depending. The A1 instances on the EC2 compute facility at AWS could support up to 32 GB of main memory and had up to 10 Gb/sec of network bandwidth coming out of its server adapter and up to 3.5 Gb/sec of Elastic Block Storage (EBS) bandwidth. We suspect that this chip had only one memory controller with two channels, something akin to an Intel Xeon D aimed at hyperscalers. This was not an impressive Arm server chip at all, and more akin to a beefy chip that would make a very powerful SmartNIC.

In the history of AWS, a big turning point for us was when we acquired Annapurna Labs, which was a group of very talented and expert chip designers and builders in Israel, and we decided that we were going to actually design and build chips to try to give you more capabilities, Jassy explained in his opening keynote at re:Invent. While lots of companies, including ourselves, have been working with X86 processors for a long time (Intel is a very close partner and we have increasingly started using AMD as well), if we wanted to push the price/performance envelope for you, it meant that we had to do some innovating ourselves. We took this to the Annapurna team and we set them loose on a couple of chips that we wanted to build that we thought could provide meaningful differentiation in terms of performance and things that really mattered and we thought people were really doing it in a broad way. The first chip that they started working on was an Arm-based chip that we called our Graviton chip, which we announced last year as part of our A1 instances, which were the first Arm-based instances in the cloud, and these were designed to be used for scale out workflows, so containerized microservices and web-tier apps and things like that.

The A1 instances have thousands of customers, but as we have pointed out in the past and just now, it is not a great server chip in terms of its throughput, at least not compared to its peers. But AWS knew that, and so did the rest of us. This was a testing of the waters.

We had three questions we were wondering about when we launched the A1 instances, Jassy continued. The first was: Will anybody use them? The second was: Will the partner ecosystem step up, support the tool chain required for people to use Arm-based instances? And the third was: Can we innovate enough on this first version of this Graviton chip to allow you to use Arm-based chips for a much broader array of workloads? On the first two questions, we've been really pleasantly surprised. You can see this on the slide, the number of logos, loads of customers are using the A1 instances in a way that we hadn't anticipated, and the partner ecosystem has really stepped up and supported our base instances in a very significant way. The third question whether we can really innovate enough on this chip we just weren't sure about, and it's part of the reason why we started working a couple of years ago on the second version of Graviton, even while we were building the first version, because we just didn't know if we were going to be able to do it. It might take a while.

Chips tend to. And from what little we know, the Graviton2 is much more of a throughput engine and can also, it looks like, hold its own against modern X86 chips at the core level, too, where single thread performance is the gauge.

The Graviton2 chip has over 30 billion transistors and up to 64 vCPUs, and again, we think these are real cores and not the thread count across half that number of cores. We know that Graviton2 is a variant of the 7 nanometer Neoverse N1, which means it is a derivative of the Ares chip that Arm created to help get customers up to speed. The Ares Neoverse N1 has a top speed of 3.5 GHz, with most licensees driving the cores, which do not have simultaneous multithreading built in, at somewhere between 2.6 GHz and 3.1 GHz, according to Arm. The Ares core has 64 KB of L1 instruction cache and 64 KB of data cache, and the instruction caches across the cores are coherent on a chip. (This is cool.) The Ares design offers 512 KB or 1 MB of private L2 cache per core, and the core complex has a special high bandwidth, low latency pipe called Direct Connect that links the cores to a mesh interconnect that links all of the elements of the system on chip together. The way Arm put together Ares, it can scale up to 128 cores in a single chip or across chiplets; the 64-core variant had eight memory controllers and eight I/O controllers and 32 core pairs with their shared L2 caches.

We think Graviton2 probably looks a lot like the 64-core Ares reference design with some features added in. One of those features is memory encryption, which is done with 256-bit keys that are generated on the server at boot time and that never leave the server. (It is not clear what encryption technique is used, but it is probably AES-256.)

Amazon says that the Graviton2 chip can deliver 7X the integer performance and 2X the floating point performance of the first Graviton chip. That first stat makes sense at the chip level and the second stat must be at the core level or it makes no sense. (AWS was vague.) Going from 16 cores to 64 cores gives you 4X more integer performance, and moving from 2.3 GHz to 3.2 GHz would give you another 39 percent, and going all the way up to 3.5 GHz would give you another 50 percent on top of that, yielding 6X overall. The rest would be improvements in cache architecture, instruction per clock (IPC), and memory bandwidth across the hierarchy. Doubling up the width of floating point vectors is easy enough and normal enough. AWS says further that the Graviton2 chip has per-core caches that are twice as big and additional memory channels (it almost has to by definition) and that these features together allow a Graviton2 to access memory 5X faster than the original Graviton. Frankly, we are surprised that it is not more like 10X faster, particularly if Graviton2 has eight DDR4 memory channels running at 3.2 GHz, as we suspect that it does.
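To make that back-of-the-envelope reasoning explicit, here is a small Python sketch of the core-count and clock-speed scaling described above; the 3.2 GHz and 3.5 GHz figures are the speculated Graviton2 clocks from the paragraph, not confirmed specifications.

```python
# Reconstructing the paragraph's scaling estimate from cores and clock alone.
cores_g1, cores_g2 = 16, 64
clock_g1 = 2.3                       # GHz, original Graviton
for clock_g2 in (3.2, 3.5):          # GHz, assumed Graviton2 clock range
    scale = (cores_g2 / cores_g1) * (clock_g2 / clock_g1)
    print(f"{clock_g2} GHz: ~{scale:.1f}x from cores and clock alone")
# ~5.6x at 3.2 GHz, ~6.1x at 3.5 GHz; the remaining gap to the claimed 7x is
# attributed above to cache, IPC, and memory-bandwidth improvements.
```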

Here is where it gets interesting. AWS compared a vCPU running on the current M5 instances to a vCPU running on the forthcoming M6g instances based on the Graviton2 chip. AWS was not specific about what test was used on what instance configuration, so the following data could be a mixing of apples and applesauce and bowling balls. The M5 instances are based on Intel's 24-core Skylake Xeon SP-8175 Platinum running at 2.5 GHz; this chip is custom made for AWS, with four fewer cores and a slightly higher clock speed (400 MHz) than the stock Xeon SP-8176 Platinum part. Here is how the Graviton2 M6g instances stacked up against the Skylake Xeon SP instances on a variety of workloads on a per-vCPU basis:

Remember: These comparisons are pitting a core on the Arm chip against a hyperthread (with the consequent reduction in single thread performance to boost the chip throughput). These are significant performance increases, but AWS was not necessarily putting its best Xeon SP foot forward in the comparisons. The EC2 C5 instances are based on Cascade Lake Xeon SP processors, with an all-core turbo frequency of 3.6 GHz, and it looks like they have a pair of 24-core chips with HyperThreading activated to deliver 96 vCPUs in a single image. The R5 instances are based on Skylake Xeon SP-8000 series chips (which precise one is unknown) with cores running at 3.1 GHz; it looks like these instances also have a pair of 24-core chips with HyperThreading turned on. These are both much zippier than the M5 instances on a per-vCPU basis, and more scalable in terms of throughput across the vCPUs, too. It is very likely that the extra clock speed on these C5 and R5 instances would close the per-vCPU performance gap. (It is hard to say for sure.)

The main point here is that we suspect that AWS can make processors a lot cheaper than it can buy them from Intel; 20 percent is enough of a reason to do it, but Jassy says the price/performance advantage is around 40 percent. (Presumably that is comparing the actual cost of designing and creating a Graviton2 against what we presume is a heavily discounted custom Skylake Xeon SP used in the M5 instance type.) And because of that, AWS is rolling out Graviton2 processors to sit behind Elastic MapReduce (Hadoop), Elastic Load Balancing, ElastiCache, and other platform-level services on its cloud.

For the rest of us, there will be three different configurations of the Graviton2 chips available as instances on the EC2 compute infrastructure service:

The g designates the Graviton2 chip and the d designates that it has NVM-Express flash for local storage on the instance. All of the instances will have 25 Gb/sec of network bandwidth and 18 Gb/sec of bandwidth for the Elastic Block Storage service. There will also be bare metal versions, and it will be interesting to see if AWS implemented the CCIX interconnect to create two-socket or even four-socket NUMA servers or stuck with a single-socket design.

The M6g and M6gd instances are available now, and the compute and memory optimized versions will be available in 2020. The chip and the platform and the software stack are all ready, right now, from the same single vendor. When is the last time we could say that about a server platform? The Unix Wars. . . . three decades ago.

Read the original here:
Finally: AWS Gives Servers A Real Shot In The Arm - The Next Platform
