
Microsoft Azure Blazes The Disaggregated Memory Trail With zNUMA – The Next Platform

Dynamic allocation of resources inside of a system, within a cluster, and across clusters is a bin-packing nightmare for hyperscalers and cloud builders. No two workloads need the same ratios of compute, memory, storage, and network, and yet these service providers need to present the illusion of configuration flexibility and vast capacity. But capacity inevitably ends up being stranded.

It is absolutely unavoidable.

But because main memory in systems is very expensive, and will continue to grow more expensive over time relative to the costs of other components in the system, the stranding of memory capacity has to be minimized. And it is not as simple as just letting VMs grab the extra memory and hoping that the extra megabytes and gigabytes yield better performance when they are thrown at virtual machines running atop a hypervisor on a server. The number of moving parts here is high, but dynamically allocating resources like memory and trying to keep it from being stranded (meaning all of the cores in a machine have memory allocations and there is memory capacity left over that can't be used because there are no cores left to assign to it) is far better than having a static configuration of memory per core, such as the most blunt approach, which would be to take the memory capacity, divide it by the number of cores, and give each core the same sized piece.

If you like simplicity, that works. But we shudder to think of the performance implications that such a static linking of cores and memory might have. Memory pooling over CXL is taking off among the hyperscalers and cloud builders as they try to deploy that new protocol atop CPUs configured with PCI-Express 5.0 peripheral links. We covered Facebook's research and development recently as well as some other work being done at Pacific Northwest National Laboratory, and have discussed the prognostications about CXL memory from Intel and Marvell as well.

Microsoft's Azure cloud has also been working on CXL memory pooling as it tries to tackle stranded and frigid memory, the former being memory that cannot be used because there are no cores left on the hypervisor to tap into it, and the latter being memory that is allocated by the hypervisor to VMs but is nonetheless never actually used by the operating system and applications running in the VM.

According to a recent paper published by Microsoft Azure, Microsoft Research, and Carnegie Mellon University, DRAM can account for more than 50 percent of the cost of building a server for Azure, which is a lot higher than the average of 30 percent to 35 percent that we cited last week when we walked the Marvell CXL memory roadmap into the future. But this may be more a function of the deep discounting that hyperscalers and cloud builders can get in a competitive CPU market, with Intel and AMD slugging it out, while DRAM for servers is much more supply constrained, so Micron Technology, Samsung, and SK Hynix, as well as their downstream DIMM makers, can charge what are outrageous prices compared to historical trends because there is more demand than supply. And when it comes to servers, we think the memory makers like it that way.

Memory stranding is a big issue because that capital expense for memory is huge. If a hyperscaler or cloud builder is spending tens of billions of dollars a year on IT infrastructure, then it is spending billions of dollars on memory, and driving up memory utilization in any way has the potential to save that hyperscaler or cloud builder hundreds of millions of dollars a year.

How bad is the problem? Bad enough for Microsoft to cite a statistic from rival Google, which has said that the average utilization of the DRAM across its clusters is somewhere around 40 percent. That is, of course, terrible. Microsoft took measurements of 100 clusters running on the Azure cloud (that is clusters, not server nodes, and it did not specify the size of these clusters) over a 75-day period, and found out some surprising things.

First, somewhere around 50 percent of the VMs running on these Azure clusters never touch 50 percent of the memory that is configured to them when they are rented. The other interesting bit is that as more and more of the cores are allocated to VMs on a cluster, the share of the memory that becomes stranded rises. Like this:

To be specific, when 75 percent of cores in a cluster are allocated, 6 percent of the memory is stranded. This rises to 10 percent of memory when 85 percent of the cores are allocated to VMs, 13 percent at 90 percent of cores, and at full loading of cores it can hit 25 percent, with outliers pushing that to as high as 30 percent of DRAM capacity across the cluster being stranded. On the chart on the right above, the workload changed halfway through and there was a lot more memory stranding.

The other neat thing Microsoft noticed on its Azure clusters (which, again, have VMs of all shapes and sizes running real-world workloads for both Microsoft itself and its cloud customers) is that almost all VMs that companies deploy fit within one NUMA region on a node within the cluster. This is very, very convenient because spanning NUMA regions really messes with VM performance. NUMA spanning happens on about 2 percent of VMs and on less than 1 percent of memory pages, and that is no accident because the Azure hypervisor tries to schedule VMs (both their cores and their memory) on a single NUMA node by intent.

The Azure cloud does not currently pool memory and share it across nodes in a cluster, but that stranded and frigid DRAM could be moved to a CXL memory pool without any impact on performance, and some of the allocated local memory on the VMs in a node could be allocated out to a CXL memory pool, which Microsoft calls a zNUMA pool because it is a zero-core virtual NUMA node, and one that Linux understands because it already supports CPU-less NUMA memory extensions in its kernel. This zNUMA software layer is clever in that it uses statistical techniques to learn which workloads have memory latency sensitivity and which do not. If workloads don't have such sensitivity, they get their memory allocated all or in part out to the DRAM pool over CXL; if they do, then the software allocates memory locally on the node and also from that core-less frigid memory. Here is what the decision tree looks like to give you a taste:
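To make the idea concrete, here is a rough, hypothetical sketch in Python of the kind of placement logic described above. The predictor, thresholds, and telemetry signal are our own illustrative assumptions, not Microsoft's actual model or API.

    from dataclasses import dataclass

    @dataclass
    class VM:
        name: str
        memory_gb: float
        misses_per_sec: float  # illustrative telemetry signal, not Azure's feature set

    def predict_latency_sensitive(vm: VM, threshold: float = 1e6) -> bool:
        # Stand-in for the statistical model described in the paper: here we
        # just threshold a single counter, which is purely illustrative.
        return vm.misses_per_sec > threshold

    def place_vm_memory(vm: VM, pool_free_gb: float, max_pool_fraction: float = 0.5) -> dict:
        """Split a VM's memory between socket-local DRAM and the CXL zNUMA pool."""
        if predict_latency_sensitive(vm):
            # Latency-sensitive workloads get socket-local DRAM only.
            return {"local_gb": vm.memory_gb, "pool_gb": 0.0}
        # Insensitive workloads can take part of their allocation from the pool.
        pool_gb = min(vm.memory_gb * max_pool_fraction, pool_free_gb)
        return {"local_gb": vm.memory_gb - pool_gb, "pool_gb": pool_gb}

    print(place_vm_memory(VM("web-cache", 32, 2e6), pool_free_gb=256))
    print(place_vm_memory(VM("batch-job", 32, 1e4), pool_free_gb=256))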

This is a lot hairier than it sounds, as you will see from reading the paper, but the clever bit as far as we are concerned is that Microsoft has come up with a way to create CXL memory pools that doesn't mess with applications and operating systems, which it says is a key requirement for adding CXL extended memory to its Azure cloud. The Azure hypervisor did have to be tweaked to extend the API between the server nodes and the Autopilot Azure control plane to the zNUMA external memory controller (EMC), which has four 80-bit DDR5 memory channels and multiple CXL ports running over PCI-Express 5.0 links that implement the CXL.memory load/store semantics protocol. (We wonder if this is a Tanzanite device, which we talked about recently after Marvell acquired the company.) Each CPU socket in the Azure cluster links to multiple EMCs and therefore multiple blocks of external DRAM that comprise the pool.

The servers used in the Microsoft test are nothing special. They are two-socket machines with a pair of 24-core Skylake Xeon SP-8157M processors. It looks like the researchers emulated a CPU with a CXL memory pool by disabling all of the cores in one socket and making all of its memory available to the first socket over Ultra Path Interconnect links. It is not at all clear how such vintage servers would plug into the EMC device, but it would have to be over a PCI-Express 3.0 link since that is all that Skylake Xeon SPs support. We find it peculiar that the zNUMA tests were not run with Ice Lake Xeon SP processors with DDR4 memory on the nodes and PCI-Express 4.0 ports.
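For those who want a feel for this kind of emulation on their own hardware, one common approximation on a stock two-socket Linux box is to pin a workload's threads to one socket while forcing its allocations onto the other socket's DRAM, so every access crosses the inter-socket link. This is only an approximation of what the researchers did (they disabled the second socket's cores outright), and the workload command below is a placeholder.

    import subprocess

    # Run the workload's threads on socket 0 but satisfy every allocation from
    # socket 1's DRAM, so all memory traffic crosses the inter-socket link, a
    # crude stand-in for pool latency. Requires numactl to be installed.
    workload = ["./my_benchmark", "--size", "64G"]   # placeholder command
    subprocess.run(["numactl", "--cpunodebind=0", "--membind=1"] + workload, check=True)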

The DRAM access time on the CPU socket in a node was measured at 78 nanoseconds and the bandwidth was over 80 GB/sec from the socket-local memory. The researchers say that when using only zNUMA memory the bandwidth is around 30 GB/sec, or about 75 percent of the bandwidth of a CXL x8 link, and it added another 67 nanoseconds to the latency.

Here is what the zNUMA setup looks like:

Microsoft says that a CXL x8 link matches the bandwidth of a DDR5 memory channel. In the simplest configuration, with four or eight total CPU sockets, each EMC can be directly connected to each socket in the pod, and cable lengths are short enough that the latency out to the zNUMA memory is an additional 67 nanoseconds. If you want to hook the zNUMA memory into a larger pool of servers (say, a total of 32 sockets), then you can lower the amount of overall memory that gets stranded, but you have to add retimers to extend the cables, and that pushes the added latency out to zNUMA memory to around 87 nanoseconds.
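A quick back-of-envelope calculation with the figures quoted above shows what those extra nanoseconds mean relative to socket-local DRAM:

    local_ns = 78                    # socket-local DRAM access, as measured
    pool_small_ns = local_ns + 67    # EMC cabled directly to each socket
    pool_large_ns = local_ns + 87    # 32-socket pod with retimers on the cables

    for label, ns in [("local DRAM", local_ns),
                      ("pool, small pod", pool_small_ns),
                      ("pool, 32 sockets", pool_large_ns)]:
        print(f"{label}: {ns} ns ({ns / local_ns:.2f}x local)")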

Unstranding the memory and driving up overall utilization of the memory is a big deal for Microsoft, but there are performance implications of using the zNUMA memory:

Of the 158 workloads tested above, 20 percent had no slowdown using CXL memory, and 23 percent had a slowdown of 5 percent or less. Which is good. But as you can see, some workloads were hit pretty hard. About a quarter of the workloads had a 20 percent or greater performance hit from using zNUMA memory for at least some of their capacity and 12 percent of the workloads had their performance cropped by 30 percent or more. Applications that are already NUMA aware have been tweaked so they understand memory and compute locality well, and we strongly suspect that workloads will have to be tweaked to use CXL memory and controllers like the EMC device.

And just because we think all memory will have CXL attachment in the server over time does not mean we think that all memory will be local or that CXL somehow makes latency issues disappear. It makes the system a little more complicated than a big, fat NUMA box. But not impossibly more complicated, and that is why research like the zNUMA effort at Microsoft is so important. Such research points the way on how this can be done.

Here is the real point: Microsoft found that by pooling memory across 16 sockets and 32 sockets in a cluster, it could reduce the memory demand by 10 percent. That means cutting the cost of servers by 4 percent to 5 percent, and that is real money in the bank. Hundreds of millions of dollars a year per hyperscaler and cloud builder.
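The arithmetic behind that claim is easy to sanity check: if DRAM is as much as half the cost of a server and pooling trims DRAM demand by about 10 percent, the server bill shrinks by roughly 5 percent.

    dram_share_of_server_cost = 0.50   # upper end of the share cited in the paper
    memory_demand_reduction   = 0.10   # from pooling across 16 or 32 sockets

    savings = dram_share_of_server_cost * memory_demand_reduction
    print(f"Server cost savings: ~{savings:.0%}")   # ~5%, in line with the 4 to 5 percent above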

We are counting on the people creating the PCI-Express 6.0 and 7.0 standards and the electronics implementing these protocols to push down to reduce latencies as much as they push up to increase bandwidth. Disaggregated memory and the emergence of CXL as a universal memory fabric will depend on this.

Link:
Microsoft Azure Blazes The Disaggregated Memory Trail With zNUMA - The Next Platform


PT ALTO Network targets best-in-region service availability with 90% faster RTO from Veeam – Intelligent CIO

As a provider of vital transaction processing and payments infrastructure, PT ALTO Network aims to ensure that its services are always available when customers need them. To achieve its vision of becoming the most reliable payment infrastructure provider in Southeast Asia, the company looked to Veeam for a way to accelerate backup and restore processes.

If you use an ATM or POS terminal in Indonesia, there's a good chance that PT ALTO Network will process the transaction. For more than 25 years, the company has delivered ATM and POS switching services, and PT ALTO Network is now expanding its offering into digital/online payments processing.

As one of four domestic switching institutions or payment infrastructure service providers, we play a big role in the sustainability of the payment system in Indonesia, thus any downtime in our systems has a big impact on the Indonesian economy, said Hendri Desungku Wuntoro, IT Infrastructure Manager at PT ALTO Network.

For that reason, uptime is our top priority. Our long-term objective is to be the most reliable payment infrastructure company in Southeast Asia.

PT ALTO Network relies on a hybrid cloud infrastructure consisting of virtual servers running on-premises and on Docker microservices running in the Amazon Web Services (AWS) cloud to underpin its services.

For high availability, the company operates separate primary and Disaster Recovery data centers, each configured with 15 bare-metal servers running 300 VMware virtual machines (VMs) for production services.

Our company is regulated by the Indonesian central bank, which means we have stringent service-level agreements [SLAs] for availability and data protection, said Wuntoro. Depending on the specific SLA, we back up our data daily, weekly or monthly.

PT ALTO Network targeted drastic improvements to its backup process. In the past, the company managed its backup processes manually, which was time-consuming and involved hours of hands-on work each day. PT ALTO Network aimed to use leading-edge technologies to improve effectiveness, speed and accuracy in its backup process.

Manually backing up our data was inefficient, but the larger concern for the business was how long it took to restore our VMs, said Wuntoro. If one of our systems experienced an issue, this manual approach made it difficult and time-consuming, as it could take as long as five hours to rebuild it from scratch, which was a significant source of business risk. To solve that challenge, we looked for a fresh approach.

The Veeam solution

After assessing the data protection market, PT ALTO Network selected Veeam Availability Suite, including Veeam Backup & Replication and monitoring and analytics from Veeam ONE.

We felt that Veeam offered the best local support, which was extremely important to us, said Wuntoro. As well as scoring highly with trusted analysts like the technological research and consulting firm Gartner, the solution is fully certified by VMware, a must-have for PT ALTO Network.

Working with Veeam, the company ran a proof of concept (POC) to test how quickly it could back up and restore its systems. Based on the success of the POC exercise, PT ALTO Network engaged Veeam to deploy and configure the solution to protect all VMs across its business.

Towards the end of our POC, one of our servers suffered a crash, said Wuntoro. Fortunately, we'd already backed up the VMs using Veeam. We restored the system into production with just a couple of clicks. That positive experience convinced us that Veeam Availability Suite was the right choice for our business.

Today, PT ALTO Network uses the Veeam solution to back up 300 VMs in its on-premises environment. By deploying the solution on top of its VMs, the company avoids the need to procure additional hardware, helping to reduce operational costs.

We can now orchestrate all our backups from a single point of control, cutting our management activities from three hours to just five minutes per day, said Wuntoro. Veeam saves us time and helps us ensure that we are meeting our availability and data protection SLAs. For example, we now trigger backups automatically and receive instant alerts if they don't complete successfully. If we need to restore a VM, it's faster and easier than ever: from five hours before to as little as 30 minutes today.

PT ALTO Network has accelerated its growth for the past four years. Even as the company scales out its digital platforms, the Veeam solution helps it ensure that data protection processes continue to run smoothly and efficiently.

Over the last four years, our IT team has grown from two to eight full-time equivalents [FTEs], said Wuntoro. If we still relied on data protection processes using old methods, we would need 20 to 30 FTEs to do the same task as we do now.

Moreover, PT ALTO Network is already finding innovative ways to reuse its backup data to enhance availability.

We were facing an intermittent stability issue with one of our VMs, which proved tough to pin down, said Wuntoro. Using the Veeam solution, we cloned the server and restored it to a range of different hosts, helping us diagnose and fix the problem. We are confident that Veeam Availability Suite will play an important role as PT ALTO Network strives to become Southeast Asias most reliable payment infrastructure provider.


Read more here:
PT ALTO Network targets best-in-region service availability with 90% faster RTO from Veeam - Intelligent CIO


How to select the optimal container storage – The Register

Commercial enterprises are reducing their application development lifecycles to meet changing user demands, a trend which is in turn driving adoption of microservices application architectures. And in the cloud-native era, they usually turn to containers orchestrated by Kubernetes (K8s) for the job.

Created in 2014, Kubernetes is the portable, extensible open source platform which was built to manage containers on a large scale. It also supports application extensions and failover, and allows containers to be utilized in production, which has helped to promote container development.

A survey published by the Cloud Native Computing Foundation (CNCF) in March 2022 concluded that 96% of enterprises are using or evaluating K8s. Those companies usually start by running stateless applications, such as web services, on containers. But as the technology has developed and IT departments have become more familiar with its benefits, usage has extended to stateful applications including databases and middleware. The CNCF found that nearly 80% of customers plan to run stateful applications on containers for example.

To store the data created by those stateful applications on persistent disk, the K8s storage interface has been decoupled from the core K8s version releases. It now operates as an independent storage interface standard, the Container Storage Interface (CSI), for which many vendors have developed and released plug-ins.

The result is that CSI-based storage can be integrated into containers to enable K8s to directly manage storage resources, including basic tasks like create, mount, delete, and expand as well as advanced operations such as snapshot, clone, QoS, and active-active deployments.
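As a concrete illustration of what K8s directly managing storage looks like from the application side, here is a minimal sketch using the Kubernetes Python client to request a CSI-backed volume. The storage class name and namespace are assumptions, and a CSI driver plus kubeconfig access are required for this to run.

    from kubernetes import client, config

    config.load_kube_config()   # or config.load_incluster_config() inside a pod

    # Ask for a 100 GiB volume through a CSI-backed storage class. The class name
    # "my-csi-sc" is an assumption; it must map to a CSI plug-in installed in the cluster.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteMany"],   # shared access, e.g. for NAS-backed failover
            storage_class_name="my-csi-sc",
            resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)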

Matching storage to containerized applications

Based on stored data types, container storage can be classified into SAN, NAS, and object storage, each of which is best suited to different environments.

At the early stages of container evolution, enterprises used local disks to store database applications like MySQL. At that time, data volumes were small and containerized applications were not mission-critical so investment was limited, despite the fact that these databases needed high performance and availability, as well as low latency.

Local disks are no longer preferred however, largely because they don't support container failover in case of a node exception, forcing manual interventions which can take several hours. As well as poor availability and high maintenance, local disk storage also suffers from resource isolation: data is siloed on individual servers, leading to difficulties in sharing a storage resource pool and inefficient capacity utilization.

SAN storage offers superior performance, but fails to satisfy the high-availability needs of the database. Because it's bound to the containers of a failed node, it cannot automatically fail over, for example, meaning manual intervention is again required.

Some customers using MySQL databases also use enterprise-level NAS storage. Automatic failover is possible in this case because NAS supports multi-mounting. If a node is faulty and the containers fail over, they can be remounted on the destination drive. Data stored on NAS is also shared in multiple locations and does not need to be copied in failover scenarios. As such, recovery times are reduced to minutes, and availability can see as much as a 10X improvement. Storage utilization is also optimized because shared NAS capacity offers an overall TCO which is up to 30% lower than equivalent local disk storage solutions.

New applications such as AI training can demand random read/writes from billions of unstructured files with sizes ranging from several KBs to several MBs. They need concurrent access to dozens and even thousands of server GPU resources to run, and the underlying storage needs low latency to accelerate GPU response times and boost GPU utilization.

SAN storage is insufficient here because it doesn't allow data to be shared among large scale clusters which comprise thousands of compute servers. Equally, object storage performs poorly with random read/writes and only supports sequential read/writes of cold data retained for long periods, but seldom accessed, in archiving applications.

NAS storage circumvents limitations

NAS storage supports multi-node sharing and is the only applicable storage option in this scenario. A common option for enterprises sees them implement a distributed NAS solution based on Ceph/GlusterFS with local server disks, for example, where data is spread across multiple nodes. But it's important to note that network latency issues may impact its performance.

Huawei NAS enterprise storage can multiply the performance of this solution several times over using the same Ceph/GlusterFS configuration. For example, a large commercial bank uses Ceph distributed storage with local server disks, but the system supported only 20,000 IOPS in AI applications during a test. After replacing its existing storage system with the Huawei OceanStor Dorado NAS all-flash solution, the bank saw dual-controller performance easily reach 400,000 IOPS, a 20-fold increase in AI analysis efficiency.

The Huawei OceanStor Dorado delivers leading NAS performance and reliability for workloads which process large numbers of small files. It uses a globally shared distributed file system architecture and intelligent balancing technology to support file access on all controllers, eliminate cross-core and -controller transmission overheads and reduce network latency.

The intelligent data layout technology can accelerate file location and transmission. OceanStor Dorado has demonstrated 30% better performance for applications relying on small file input/output (I/O) compared to the industry benchmark, enabling enterprises to deal with large volumes of small files stored in containers with ease.

It offers five levels of reliability, covering disk, architecture, system, solution, and cloud. Techniques include active-active NAS access, which operates two storage controllers simultaneously to deliver fast failover and minimum interruption to NAS services in the event of one failing. All of that adds up to 7-nines reliability for always-on services and containerized data which is always available. A more detailed description of the Huawei OceanStor Dorado NAS all-flash storage system is available here.

Because containers may run on any server within a cluster and fail over from one server to another, container data needs to be shared among multiple nodes. So container storage solutions have to share data while handling concurrent random read/writes generated by high volumes of small files, particularly when it comes to supporting new application development. At the end of the day, a Kubernetes container can be summarized as an application-oriented environment that stores its data in files, which is what makes the Huawei OceanStor Dorado NAS all-flash storage system a good choice for the job.

Sponsored by Huawei.

Follow this link:
How to select the optimal container storage - The Register


Building the sustainable HPC environments of the future – ComputerWeekly.com

In this guest post, Mischa van Kesteren, sustainability officer at HPC systems integrator OCF runs through the wide variety of ways that large-scale computing environments can be made to run more energy efficiently.

Supercomputers are becoming more energy hungry. The pursuit of Moore's Law and ever greater hardware performance has led to manufacturers massively ramping up the power consumption of components.

For example, a typical high performance computing (HPC) CPU from 10 years ago would have had a thermal design power (TDP) of 115 Watts; today that figure is closer to 200.

Modern GPUs can exceed 400 Watts. Even network switches, which used to be an afterthought from a power consumption perspective, can now consume over 1kW of power in a single switch.

And the race to achieve exascale has pushed the power consumption of the fastest supercomputer on the planet from 7.9MW in 2012 to 29.9MW in 2022.

In this era of climate chaos, is this justifiable? Ultimately, yes. Whilst 29.9MW is enough electricity to power 22,000 average UK households, the research performed on these large systems is some of the most crucial to how we will navigate the challenges we are facing and those to come, whether that's research into climate change, renewable energy or combating disease.

It is vital, however, that we continuously strive to find ways of running HPC infrastructures as efficiently as possible.

The most common method of measuring the power efficiency of a datacentre is through its power usage effectiveness (PUE). Traditional air-cooled infrastructure blows air through the servers, switches and storage to cool their components, and then air-conditioning is used to remove the heat from that air before recirculating it. And this all consumes a lot of power.

The air-cooling often has a PUE in excess of two, meaning the datacentre consumes twice as much power as the IT equipment. The goal is to reduce the PUE of the HPC infrastructure as close to one as possible (or even lower).
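For reference, PUE is simply total facility power divided by the power drawn by the IT equipment, so a quick sketch with illustrative figures shows why an air-cooled PUE of two is so costly:

    def pue(it_kw: float, overhead_kw: float) -> float:
        """Power usage effectiveness: total facility power over IT equipment power."""
        return (it_kw + overhead_kw) / it_kw

    print(pue(500, 500))   # 2.0: cooling and overheads draw as much as the IT kit itself
    print(pue(500, 50))    # 1.1: roughly what direct liquid cooling can achieve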

A more efficient method is to cool the hot air with water. Water transfers heat over 20 times faster than air, making it far better for cooling hardware. Air-cooled components can use water through rear door heat exchangers, which place a large radiator (filled with cold water) at the rear of the rack, cooling all the hot air that is exhausted by the servers.

Get the flow rate and water temperature right and you can remove the need for air conditioning altogether. This can get the PUE down to closer to 1.4.

Alternatively, components can be fitted with water blocks on the CPU, GPU, networking and so on, which directly cool the components, removing the need for air cooling altogether. This is far more efficient, bringing the PUE down further, possibly to less than 1.1.

Ultimately, we need to do something with the waste heat. A good option is to make use of free cooling. This is where you use the air temperature outside to cool the water in your system. The highest outdoor temperature recorded in the UK was 38.7 C.

Computer components are rated to run at up to double that, so as long as the transfer medium is efficient enough (like water) you can always cool your components for just the energy used by the pumps. This is one of the reasons why you hear about datacentres in Norway and Iceland being so competitive: they can make use of free cooling far more readily due to their lower temperatures.

Taking things one step further, the heat can be used for practical purposes rather than exhausted into the air. There are a few innovative datacentres which have partnerships with local communities to provide heating to homes, or even the local swimming pool, from their exhaust heat. The energy these homes would have consumed to heat themselves has in theory been saved, which can bring the PUE of the total system below one.

The next step which is being investigated is to store the heat in salt, which can hold it indefinitely, to make allowances for the differences in heating requirements and compute utilisation. Imagine the knock-on effect of the traditional Christmas maintenance window where IT infrastructure is turned off just when those local households need heat the most.

One thing you may have noticed about all of these solutions is they are largely only practical at scale. It is not a coincidence that vast cloud datacentres and colocation facilities are the places where these innovations are being tested, that is where they work best. The good news is the industry seems to be moving in that direction anyway as the age of the broom cupboard server room is fading.

However, in the pursuit of economies of scale, public cloud providers are operating huge fleets of servers, many of which are underutilised. This can be clearly seen in the difference in price between on demand instances that run when you want them to (typically at peak times) and spot instances which run when it is most affordable for the cloud provider.

Spot instances can be up to 90% cheaper. As cloud pricing is based almost entirely on the power consumption of the instance you are running, there must be a huge amount of wasted energy costed into the price of the standard instances.

Making use of spot instances allows you to run HPC jobs in an affordable manner, and in the excess capacity of the cloud datacentres, improving their overall efficiency. If you are running your workloads on demand, however, you can make this inefficiency worse.

Luckily, HPC workloads often fit the spot model. Users are familiar with the interaction of submitting a job and walking away, letting the scheduler determine when the best time to run that job is.

Most of the major cloud providers offer the functionality to set a maximum price you are willing to pay when you submit a job and wait for the spot market to reach that price point.
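As a hypothetical example of that pattern, here is how a maximum spot price might be set on AWS using boto3; the AMI ID, instance type, and region are placeholders, and other clouds expose similar spot or preemptible options through their own APIs.

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-1")   # region is a placeholder
    response = ec2.request_spot_instances(
        SpotPrice="0.50",        # the maximum USD per hour you are willing to pay
        InstanceCount=4,
        Type="one-time",
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",   # placeholder HPC node image
            "InstanceType": "c5.18xlarge",        # placeholder instance type
        },
    )
    for req in response["SpotInstanceRequests"]:
        print(req["SpotInstanceRequestId"], req["State"])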

This is only one element of HPC energy efficiency; there is a whole other world of making job times shorter through improved coding, right-sizing hardware to fit workloads and enabling power-saving features on the hardware itself, to name a few.

HPC sustainability is a huge challenge that involves everyone who interacts with the HPC system, not just the designers and infrastructure planners. However, that is a good place to start. Talking to those individuals who can build in the right technologies from the start ensures that they will provide you with a sustainable HPC environment fit for the future.

Read more:
Building the sustainable HPC environments of the future - ComputerWeekly.com


Bitcoin to make new all-time-highs within 24 months: Coinshares CSO – Cointelegraph

Bitcoin (BTC) may have further to fall, but CoinShares chief strategy officer Meltem Demirors believes the top cryptocurrency will reach new all-time highs within the next 24 months.

Speaking on CNBC's Squawk Box on Monday, Demirors noted that Bitcoin has always been a cyclical asset, with drawdowns from peak to trough of 80 to 90% historically.

With Bitcoin currently sitting at about 65% down from its all-time highs in November 2021, Demirors believes there is still room for some downward correction.

However, Demirors noted there has been strong support around $20,000 and that she did not expect Bitcoin to fall below $14,000. She predicted the pain would be a distant memory by 2024, saying:

Bitcoin is currently priced at $19,401, down 2% in 24 hours and down 72% from its all-time high.

A reversal may be some time off, however, given that Demirors sees no near-term upside catalysts, which could signal more pain in store for weaker crypto projects.

We obviously had a lot of liquidations, a lot of insolvencies that had a massive impact on the market. [...] We're talking about $10, $20, $30 billion of capital that has basically evaporated overnight:

Demirors said she expected a large number of crypto assets to be wiped out during the bear market, similar to what has been seen in tech stocks.

There's a very long, long tail of crypto assets that I think will go to zero, that doesn't really have any long-term prospect, as we've seen with so many tech stocks as well.

Louis Schoeman, managing director at broker comparison site Forex Suggest, has a similar view. In a recent 9News report, he predicted that the current crypto downturn could kill off as much as 90 percent of all crypto projects.

This is a cleansing process, Schoeman said, adding that only the strongest crypto projects will survive this bear market.

But it also serves as a massive opportunity for many no-coiners to enter the crypto market for the first time. Fortune favors the brave in crypto right now.

Related: Despite 'worst bear market ever,' Bitcoin has become more resilient, Glassnode analyst says

Last month, billionaire entrepreneur Mark Cuban said he doesn't expect the crypto bear market to be over until there's a better focus on applications with business-focused utility.

Cuban also believes mergers between different protocols and blockchains will eventually see the crypto industry consolidate, as that's what happens in every industry.

Read more:
Bitcoin to make new all-time-highs within 24 months: Coinshares CSO - Cointelegraph


3 key metrics suggest Bitcoin and the wider crypto market have further to fall – Cointelegraph

The total crypto market capitalization has fluctuated in a 17% range in the $840 billion to $980 billion zone for the past 28 days. The price movement is relatively tight considering the extreme uncertainties surrounding the recent market sell-off catalysts and the controversy surrounding Three Arrows Capital.

From July 4 to 11, Bitcoin (BTC) gained a modest 1.8% while Ether (ETH) price stood flat. More importantly, the total crypto market is down 50% in just three months, which means traders are giving higher odds of the descending triangle formation breaking below its $840 billion support.

Regulation uncertainties continue to weigh down investor sentiment after the European Central Bank (ECB) released a report concluding that a lack of regulatory oversight added to the recent downfall of algorithmic stablecoins. As a result, the ECB recommended supervisory and regulatory measures to contain the potential impact of stablecoins in European countries' financial systems.

On July 5, Jon Cunliffe, the deputy governor for financial stability at the Bank of England (BoE) recommended a set of regulations to tackle the cryptocurrency ecosystem risks. Cunliffe called for a regulatory framework similar to traditional finance to shelter investors from unrecoverable losses.

The bearish sentiment from late June dissipated, according to the Fear and Greed Index, a data-driven sentiment gauge. The indicator reached a record low of 6/100 on June 19 but improved to 22/100 on July 11 as investors began to build confidence in a market cycle bottom.

Below are the winners and losers from the past seven days. Notice that a handful of mid-capitalization altcoins rallied 13% or higher even though the total market capitalization increased by 2%.

Aave (AAVE) gained 20% as the lending protocol announced plans to launch an algorithmic stablecoin, a proposal that is subject to the community's decentralized autonomous organization.

Polygon (MATIC) rallied 18% after projects formerly running in the Terra (LUNA) ecosystem, now called Terra Classic (LUNC), started to migrate over to Polygon.

Chiliz (CHZ) hiked 6% after the Socios.com app announced community-related features to boost user engagement and integration with third-party approved developers.

The OKX Tether (USDT) premium measures the difference between China-based peer-to-peer trades and the official U.S. dollar currency. Excessive cryptocurrency retail demand pressures the indicator above fair value at 100%. On the other hand, bearish markets likely flood Tether's (USDT) market offer, causing a 4% or higher discount.
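As a rough illustration (with made-up prices, not market data), the indicator can be thought of as the peer-to-peer USDT price expressed as a percentage of the official exchange rate:

    def usdt_premium(p2p_price_cny: float, usd_cny_rate: float) -> float:
        """Peer-to-peer USDT price as a percentage of the official USD rate."""
        return 100 * p2p_price_cny / usd_cny_rate

    print(usdt_premium(6.65, 6.72))   # ~99%: roughly a 1% discount
    print(usdt_premium(6.45, 6.72))   # ~96%: a 4% discount, a bearish signal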

Tether has been trading at a 1% or higher discount in Asian peer-to-peer markets since July 4. The indicator failed to display a sentiment improvement on July 8 as the total crypto market capitalization flirted with $980 billion, the highest level in 24 days.

To confirm whether the lack of excitement is confined to the stablecoin flow, one should analyze futures markets. Perpetual contracts, also known as inverse swaps, have an embedded rate that is usually charged every eight hours. Exchanges use this fee to avoid exchange risk imbalances.

A positive funding rate indicates that longs (buyers) demand more leverage. However, the opposite situation occurs when shorts (sellers) require additional leverage, causing the funding rate to turn negative.
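To put that in concrete terms, here is an illustrative conversion of a per-eight-hour funding rate into weekly and monthly costs; the 0.0167% figure is a made-up example chosen so the weekly total lands near the 0.35% Polkadot case cited below.

    rate_8h = 0.000167            # 0.0167% charged every eight hours (made-up figure)
    periods_per_week = 3 * 7      # three funding windows per day

    weekly = rate_8h * periods_per_week
    monthly = weekly * 30 / 7
    print(f"weekly: {weekly:.2%}, monthly: {monthly:.2%}")   # ~0.35% and ~1.5%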

Related: Analysts say Bitcoin range consolidation is most likely until a macro catalyst emerges

Perpetual contracts reflected a neutral sentiment as Bitcoin, Ethereum and Ripple (XRP) displayed mixed funding rates. Some exchanges presented a slightly negative (bearish) funding rate, but it is far from punitive. The only exception was Polkadot's (DOT) negative 0.35% weekly rate (equal to 1.5% per month), but this is not especially concerning for most traders.

Considering the lack of buying appetite from Asia-based retail markets and the absence of leveraged futures demand, traders can conclude that the market is not comfortable betting that the $840 billion total market cap support level will hold.

The views and opinions expressed here are solely those of the author and do not necessarily reflect the views of Cointelegraph. Every investment and trading move involves risk. You should conduct your own research when making a decision.

Read the original post:
3 key metrics suggest Bitcoin and the wider crypto market have further to fall - Cointelegraph


How This Member Of Parliament Rescued Bitcoin Amid New Regulatory Reforms In Europe – Forbes

Last week, the European Commission, European Union (EU) lawmakers, and member states (known as a trilogue in European politics) agreed on historic reforms for cryptocurrency regulation. I caught up with the Member of the European Parliament (MEP) who was in charge of drafting the Markets in Crypto-Assets (MiCA) legislation. MEP Stefan Berger not only led the drafting of the legislation in committee, but also was responsible for incorporating compromise amendments and resolutions.

It was important that in the end Parliament, Commission and Council took together the path of innovation and technology openness, instead the path of ban, said Berger. In March of 2022, Berger had dealt with an attempt to thwart the mandate toward a trilogue, where some sought to include a divisive provision that could have effectively banned bitcoin (BTC) over energy concerns.

In Berger's estimation, the agreed-upon regulations in MiCA will be a global role model that could influence how other countries move forward with crypto-asset regulations. "MiCA is a European success story. Europe is the first continent to launch a crypto-asset regulation and will be a global role model, said Berger to me in declaring the victory. Celebrating, Berger shared with me that, Particularly as rapporteur, this is a great feeling. We set clear rules for a harmonised market that will provide legal certainty for crypto-asset issuers, guarantee a level playing field for service providers and ensures high standards for consumers and investors.

Berger celebrated the success on Twitter, including his happiness at avoiding an outright ban on proof-of-work, when he stated, MiCA Trilogue Breakthrough! Europe is the first continent with crypto asset regulation. Parliament, Commission & Council have agreed on balanced #MiCA. For me as reporter is was important that there is no ban on technologies like #PoW...

Dr. Stefan Berger exclaims the excitement of a breakthrough where the European Parliament, Commission, and Council have agreed on a balanced MiCA. It was important to Berger that there be no ban on technologies like PoW.

While any immediate ban on bitcoin and proof-of-work in Europe has been avoided, a press release explaining the final version of MiCA does include some provisions affecting proof-of-work. Actors in the crypto-assets market will be required to declare information on their environmental and climate footprint...Within two years, the European Commission will have to provide a report on the environmental impact of crypto-assets and the introduction of mandatory minimum sustainability standards for consensus mechanisms, including the proof-of-work, said the release.

The press release also highlights new accountability standards for crypto-asset service providers (CASP) as well. With the new rules, [CASPs] will have to respect strong requirements to protect consumers' wallets and become liable in case they lose investors' crypto-assets, said the release.

Other key provisions in MiCA included how the European Banking Authority (EBA) will be tasked with maintaining a public register of non-compliant CASPs, and how all CASPs will need an authorisation in order to operate within the EU.

Regarding stablecoins, the press release notes that, Every so-called stablecoin holder will be offered a claim at any time and free of charge by the issuer, and the rules governing the operation of the reserve will also provide for an adequate minimum liquidity. Furthermore, all so-called stablecoins will be supervised by the European Banking Authority (EBA), with a presence of the issuer in the EU being a precondition for any issuance.

Non-fungible tokens (NFTs), i.e. digital assets representing real objects like art, music and videos, will be excluded from the scope except if they fall under existing crypto-asset categories, stated the release. However, MiCA requires that within 18 months, the European Commission will be tasked to prepare a comprehensive assessment and, if deemed necessary, a specific, proportionate and horizontal legislative proposal to create a regime for NFTs and address the emerging risks of such new market.

According to Berger, cryptocurrencies had been out of the scope of European legislation, with divergent laws existing between EU member states. So far, crypto-assets, such as cryptocurrencies, have been out of the scope of the European legislation and too many often divergent laws exists in Member States, said Berger.

Ultimately, Berger was consistent and influential in his role as a lead negotiator on the MiCA package in his desire to avoid a proof-of-work ban when challenged in March of 2022. His tweet on March 25 illustrates his excitement at the good news of maintaining his mandate going into the negotiations with the trilogue, at a time when even Berger had expressed sentiments about not being sure how this would turn out because of politics. Berger stated in the tweet, Good news! My mandate is NOT challenged. I will now go into the trilogue negotiations with the position that there will be no #PoW ban. The EU Parliament gives me tailwind & shows innovative strength.

Dr. Stefan Berger, lead Parliamentarian for the MiCA regulations in Europe, describes how his mandate would not be challenged and his position was maintained that there would be no #PoW ban.

For the United States, the issues related to the conflicts in state-by-state money transmission laws may face similar overhauls as both the White House and Congress will be highly focused on potentially sweeping federal legislation that could have exclusive jurisdiction over state laws. Additionally, the United Kingdom may feel similar pressure to react to how Europe has been a first-mover with crypto-asset regulation. The hypothesis of whether harmonizing consistent regulations across a continent can stabilize the currently tumultuous crypto-asset marketplace can now be tested and both the U.S. and U.K. will certainly be watching to see how the industry and marketplace reacts to the new MiCA laws in Europe.

Read the original:
How This Member Of Parliament Rescued Bitcoin Amid New Regulatory Reforms In Europe - Forbes


Ethereums Vitalik Buterin Claps Back at Bitcoin Maxis Who Mock Proof of Stake – Decrypt

Vitalik Buterin is taking to Twitter to defend Ethereum's move to proof of stake.

The Ethereum co-founder took a shot at Swan Bitcoin's managing editor Nick Payton, who argued Thursday that any cryptocurrency that powers a proof-of-stake blockchain (which uses validators with pledged, or staked, assets to verify transactions) is a security.

The fact that you can vote on something to change its properties is proof that it's a security, Payton said. The insult hits at a sore spot for a crypto industry that has for years battled with the Securities and Exchange Commission, and one that is particularly touchy for Ethereum investors, since the matter of whether or not ETH should be considered a security remains an open question.

Early Friday morning, Buterin called Payton's assertion a bare-faced lie.

It's amazing how some [proof-of-work] proponents just keep repeating the unmitigated bare-faced lie that [proof-of-stake] includes voting on protocol parameters (it doesn't, just like [proof-of-work] doesn't) and this so often just goes unchallenged, he said, adding, Nodes reject invalid blocks, in [proof-of-stake] and in [proof-of-work]. It's not hard.

Proof of work, which involves the participation of miners who devote large amounts of computing power to solve complex, mathematical problems, is currently how both Bitcoin and Ethereum validate transactions and secure their networks. Ethereum, however, is in the process of transitioning to proof of stake through a long-awaited update now known as the Merge.

In his defense of proof of stake, Buterin took his retort one step further with a tongue-in-cheek grammar correction for the editor.

In English when talking about things like proof of stake, we don't say "it's a security," we say "it's secure." I know these suffixes are hard though, so I forgive the error, Buterin said.

While Bankless founder Ryan Sean Adams called the retort the spiciest Vitalik tweet I've ever read, this is far from the first time Buterin has gotten into arguments with anti-Ethereum Bitcoiners online.

Earlier this month Buterin responded to Bitcoin maximalist Jimmy Song, who argued that proof of stake does not provide decentralized consensus because it does not, in Song's view, solve the Byzantine Generals Problem. Song was referring to the problem of achieving consensus without some centralization through a trusted single party. Consensus in crypto is achieved when multiple entities are all able to agree on the same data without the intervention of a central authority; this enables blockchain transactions.

But for Vitalik, Song's argument hinges on a technicality.

If there's a long-established tradition of people debating A vs B based on deep arguments touching on math, economics and moral philosophy, and you come along saying B is dumb because of a one-line technicality involving definitions, you're probably wrong, Buterin said.


Follow this link:
Ethereums Vitalik Buterin Claps Back at Bitcoin Maxis Who Mock Proof of Stake - Decrypt


Charlie Munger: Everybody Should Avoid Crypto ‘as if It Were an Open Sewer, Full of Malicious Organisms’ Featured Bitcoin News – Bitcoin News

Berkshire Hathaway Vice Chairman Charlie Munger, Warren Buffett's right-hand man, has a message for investors considering cryptocurrency. Never touch it, he stressed, adding that everyone should follow his example and avoid crypto as if it were an open sewer, full of malicious organisms.

Charlie Munger, Warren Buffett's right-hand man and longtime business partner, threw more insults at cryptocurrency in an interview with The Australian Financial Review, published Tuesday. Munger previously called bitcoin rat poison and said last year that he hated the success of BTC.

Noting that the crypto craze is a mass folly, he told the publication:

I think anybody that sells this stuff is either delusional or evil. I won't touch the crypto.

The Berkshire executive continued: I'm not interested in undermining the national currencies of the world.

Munger was then asked what advice he would give to other investors who may be considering investing in cryptocurrency. Total avoidance is the correct policy, he replied, adding:

Never touch it. Never buy it. Let it pass by.

Like Buffett, Munger believes that stocks of real cash-generating companies are better investments. Stocks have a real interest in real businesses, he stressed.

In contrast, Crypto is an investment in nothing, and the guy who's trying to sell you an investment in nothing says, I have a special kind of nothing that's difficult to make more of,' he described.

Munger emphasized: I don't want to buy a piece of nothing, even if somebody tells me they can't make more of it. I regard it as almost insane to buy this stuff or to trade in it. He elaborated:

I just avoid it as if it were an open sewer, full of malicious organisms. I just totally avoid and recommended everybody else follow my example.

Munger has never been a fan of bitcoin or any other cryptocurrency. In February, he said that the government should ban BTC, calling it a venereal disease. He has praised China several times in the past for banning crypto, stating that he wished cryptocurrency had never been invented. In May last year, he said that bitcoin was disgusting and contrary to the interest of civilization.

In May, Munger said: I try and avoid things that are stupid and evil and make me look bad in comparison to somebody else, and bitcoin does all three. He added, It's stupid because it's still likely to go to zero.

What do you think about the comments by Berkshire Hathaway Vice Chair Charlie Munger on crypto? Let us know in the comments section below.

A student of Austrian Economics, Kevin found Bitcoin in 2011 and has been an evangelist ever since. His interests lie in Bitcoin security, open-source systems, network effects and the intersection between economics and cryptography.

Image Credits: Shutterstock, Pixabay, Wiki Commons

Disclaimer: This article is for informational purposes only. It is not a direct offer or solicitation of an offer to buy or sell, or a recommendation or endorsement of any products, services, or companies. Bitcoin.com does not provide investment, tax, legal, or accounting advice. Neither the company nor the author is responsible, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any content, goods or services mentioned in this article.

More here:
Charlie Munger: Everybody Should Avoid Crypto 'as if It Were an Open Sewer, Full of Malicious Organisms' Featured Bitcoin News - Bitcoin News


Bitcoin was supposed to hedge against inflation: here's why it hasn't worked that way – CNBC

Bitcoin has plunged in value this year, weakening the argument often made by crypto enthusiasts that it can be an effective hedge against inflation during times of economic turmoil.

Bitcoin advocates have long argued that its scarcity would protect its value during times of rising inflation. Unlike money issued by central banks, whose supply can be increased at will, there is a fixed number of coins, which keeps them scarce.

Even before the market crashed, there was debate about whether or not bitcoin would hold its value. Billionaire investor Paul Tudor Jones was bullish on bitcoin as an inflation hedge, while Dallas Mavericks owner and investor Mark Cuban dismissed the idea as a "marketing slogan."

Another argument is that bitcoin, along with other similar cryptocurrencies, will have an intrinsic store of value over time as it becomes more accepted, like gold. Supporters believe it will be seen as an asset that won't depreciate over time.

However, this has not been proven to be true, at least not yet. The value of the cryptocurrency market overall has plummeted alongside rising inflation, with bitcoin losing half of its value since January. As of Friday, the price of bitcoin is $21,833, according to Coin Metrics.

With crypto, "the extent of [price] volatility is so significant, it's very hard for me to view it as a long-term store of value," Anjali Jariwala, certified financial planner and founder of Fit Advisors, tells CNBC Make It.

Jariwala says that crypto in general is a new type of asset that doesn't yet function either as a sought-after commodity like gold, or even as a currency, "because it's not easily exchanged for a good or service." Despite its scarcity, the price of a cryptocurrency like bitcoin is still based largely on consumer sentiment, she says.

"It's tricky because it's supposed to act like a currency, it's taxed like property and some people compare it to a commodity. At the end of the day, it really is its own asset class that doesn't have a pure definition."

Another consideration is that cryptocurrencies like bitcoin have only been around for just over a decade. Because of this, "there isn't enough history there in terms of historical data to really understand what purpose it serves as an investment," Jariwala says.

While cryptocurrencies like bitcoin are "not proven" to be a reliable, long-term store of value, they could still gain acceptance over time and become less volatile, Omid Malekan, an adjunct professor at Columbia Business School specializing in crypto and blockchain technology, tells CNBC Make It.

"Once volatility smooths out, we will have a better picture of how it responds to macro developments, like the rate of inflation or what the Fed is doing," he says, cautioning that current crypto prices could reflect all sorts of inputs aside from inflation, like too many overleveraged cryptocurrency lenders or a lack of regulation.

Either way, crypto as a whole remains a highly speculative investment. Jariwala recommends only investing with money you're prepared to lose. She also says to think of crypto investing as a long-term strategy and "stick to that strategy even during times like this."

Cryptocurrency might evolve into a more mature asset that can be a hedge against inflation. But "we just don't know yet, until we see more of a track history with it," says Jariwala.



Original post:
Bitcoin was supposed to hedge against inflation: here's why it hasn't worked that way - CNBC
