
Philippines Central Bank Grants First Cryptocurrency Exchange Licenses – CoinDesk

The central bank of the Philippines has granted licenses to two local bitcoin exchanges, according to reports.

Daily newspaper The Philippine Star reported the developments on Sunday, citing statements from central bank chief Nestor Espenilla Jr.

The Bangko Sentral ng Pilipinas first released its rules for domestic exchanges in February, seeking to lay down a foundation for the country's nascent cryptocurrency space. Yet the central bank has seen relatively little interest from prospective applicants, according to one official who commented to local media in late July.

That said, Espenilla, who spoke during a financial technology event over the weekend, indicated that the central bank is being proactive about bringing exchanges under its regulatory auspices.

"We see a rapid increase in the trajectory. It is coming from a small base but increasing that is why we decided to require them to register," he told attendees.

Espenilla also offered some figures on the local bitcoin trade, according to the news source, saying that exchanges are seeing as much as $6 million in volume a month, a figure roughly three times the $2 million per month seen last year.

"We are moving to regulate them," Espenilla emphasized.

Financial district of Manila image via Shutterstock


See the article here:
Philippines Central Bank Grants First Cryptocurrency Exchange Licenses - CoinDesk


AMD Releases Beta Graphics Driver for Better Cryptocurrency Mining – Bitcoin Magazine

A few days ago, AMD released the Radeon Software Crimson ReLive Edition Beta for Blockchain Compute driver. According to the release notes on the tech giant's website, the software optimizes performance for "Blockchain Compute Workloads," thereby boosting the efficiency of cryptocurrency mining rigs that use a GPU for mining (e.g., Ethereum mining rigs).

Currently, the graphics driver can be downloaded from AMD's official website. The beta software supports desktop GPUs from the AMD Radeon HD 7700 onward, and it can be installed on 64-bit Windows 7 (Service Pack 1 or higher required) and 64-bit Windows 10 systems. AMD highlighted in the release notes that the graphics driver is not intended to boost users' gaming performance. The company added that since this is beta software, it will not be supported with further updates, upgrades or bug fixes.

AMD's new beta driver is designed to fix an issue related to the DAG (directed acyclic graph) size. As the number of blocks in the Ethereum blockchain increases (a new block is generated roughly every 14 seconds), so does Ethereum's epoch count: for every epoch, or 30,000 blocks (a window of roughly 100 hours), a new DAG is generated. As the DAG size grows, the memory requirements for mining Ethereum increase. Once the workload's memory footprint overflows the graphics card's memory, part of it spills into main system memory, which is much slower to access than the GPU's VRAM. Slower memory access translates directly into a lower hashrate for the miner.
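To make the DAG arithmetic concrete, here is a minimal Python sketch using the commonly cited Ethash constants (30,000-block epochs, a roughly 1 GiB starting dataset that grows by about 8 MiB per epoch). It is an illustration of how DAG size follows block height, not code from AMD's driver.

```python
# Minimal sketch of Ethash DAG growth; constants are the commonly cited values
# and the size formula ignores Ethash's prime-number adjustment.

EPOCH_LENGTH = 30_000           # blocks per epoch
DATASET_BYTES_INIT = 2 ** 30    # ~1 GiB initial DAG size
DATASET_BYTES_GROWTH = 2 ** 23  # ~8 MiB of growth per epoch


def epoch_of(block_number: int) -> int:
    """Epoch number for a given block height."""
    return block_number // EPOCH_LENGTH


def approx_dag_size_gib(epoch: int) -> float:
    """Approximate DAG size in GiB for a given epoch."""
    return (DATASET_BYTES_INIT + DATASET_BYTES_GROWTH * epoch) / 2 ** 30


if __name__ == "__main__":
    block = 4_200_000  # a mid-2017 block height, chosen for illustration
    epoch = epoch_of(block)
    print(f"Block {block:,} falls in epoch {epoch}; "
          f"the DAG is roughly {approx_dag_size_gib(epoch):.2f} GiB")
```

Once that figure outgrows a card's VRAM, the spill into system memory described above is exactly what the new driver tries to mitigate.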

AMD's new beta driver appears to have fixed the DAG issue. According to TechPowerUp, there is only a minimal difference between mining different DAG sizes with the beta software. Compared to the old driver, the AMD Radeon RX Vega 64 8GB (1546 MHz/945 MHz) saw an 81 percent increase in hashrate when mining DAG 199.

The Reddit community has also confirmed that AMD's new update is delivering higher hashrates on their GPUs.

"My RX Vega went from 31 to 37 MH/s mining ETH only. Very nice improvement," wrote a user named Hot-Diggity-Daffodil.

"Just got these new drivers installed on one of my 6 GPU rigs. MSI RX 580 8GBs confirmed back up to 29.5 from 27.5. Installing on other rigs now. Using BBT modded ROMs," another user called TheHansGruber wrote in the /r/EtherMining subreddit.

AMD's beta driver will boost the performance of Ethereum mining rigs for a while. However, if Ethereum moves from proof of work to proof of stake, with a first step toward this model expected on November 1, GPUs will become less necessary over time. Under proof of stake, block creation is based on coin ownership rather than hash power.

Original post:
AMD Releases Beta Graphics Driver for Better Cryptocurrency Mining - Bitcoin Magazine


VR World Decentraland Raises $25.5 Million In Cryptocurrency – UploadVR

Decentraland is an open-source initiative that will allow users to create land and objects for use in its virtual space. While not dissimilar to Second Life, Decentraland distinguishes itself with blockchain technology, the same record-keeping innovation that powers Bitcoin, Ethereum, and other virtual currencies, which generates 10m x 10m blocks of space called LAND alongside the cryptocurrency MANA tokens that will power the virtual economy. One LAND costs 1,000 MANA tokens.

Decentraland recently finished its initial coin offering (ICO), a sort of cryptocurrency hybrid of a traditional IPO and a crowdfunding initiative, taking in $25.5 million in Ether, the Ethereum network's currency, in exchange for the first MANA tokens. Players will use these tokens to buy not just property, but goods and services on sale in the virtual world.

Initially set to go for nine days, or until $25 million was raised, the ICO began on August 17th with coins on sale for 0.080 ETH (approx. USD 26) per LAND and was slated to slowly rise to a maximum of 0.133 ETH (USD 43). However, demand was so overwhelming that 7,000 transactions could not be processed before the company hit its fundraising cap, ending the ICO.

Investors aren't the only ones who will own MANA tokens to start. According to the Decentraland blog, 40 percent of the token supply will be allocated to the launch contributors; 20 percent is reserved to incentivize content creators and developers to build inside of Decentraland; 20 percent will go to the development team, early contributors and advisors; and the remaining 20 percent will be held by the Decentraland Foundation.

The first parcels of virtual real estate will go on sale in a few months. Decentraland is still preparing a land allocation policy to ensure equitable distribution and to ensure buyers can procure contiguous blocks of LAND. In the trailer above, the platform can be seen running on the HTC Vive, though we're not sure if it's coming to the Oculus Rift too.


Read more:
VR World Decentraland Raises $25.5 Million In Cryptocurrency - UploadVR


How do you bring artificial intelligence from the cloud to the edge? – TNW

Despite their enormous speed at processing reams of data and providing valuable output, artificial intelligence applications have one key weakness: their brains are located thousands of miles away.

Most AI algorithms need huge amounts of data and computing power to accomplish tasks. For this reason, they rely on cloud servers to perform their computations, and aren't capable of accomplishing much at the edge: the mobile phones, computers and other devices where the applications that use them run.

In contrast, we humans perform most of our computation and decision-making at the edge (in our brain) and only refer to other sources (internet, library, other people) where our own processing power and memory won't suffice.

This limitation makes current AI algorithms useless or inefficient in settings where connectivity is sparse or absent, and where operations need to be performed in a time-critical fashion. However, scientists and tech companies are exploring concepts and technologies that will bring artificial intelligence closer to the edge.

A lot of the world's computing power goes to waste as millions of devices sit idle for a considerable amount of time. Being able to coordinate and combine these resources would let us make efficient use of computing power, cut down costs and create distributed servers that can process data and algorithms at the edge.

Distributed computing is not a new concept, but technologies like blockchain can take it to a new level. Blockchain and smart contracts enable multiple nodes to cooperate on tasks without the need for a centralized broker.

This is especially useful for the Internet of Things (IoT), where latency, network congestion, signal collisions and geographical distances are some of the challenges we face when processing edge data in the cloud. Blockchain can help IoT devices share compute resources in real time and execute algorithms without the need for a round-trip to the cloud.

Another benefit to using blockchain is the incentivization of resource sharing. Participating nodes can earn rewards for making their idle computing resources available to others.

A handful of companies have developed blockchain-based computing platforms. iEx.ec, a blockchain company that bills itself as the leader in decentralized high-performance computing (HPC), uses the Ethereum blockchain to create a market for computational resources, which can be used for various use cases, including distributed machine learning.

Golem is another platform that provides distributed computing on the blockchain, where applications (requestors) can rent compute cycles from providers. Among Golems use cases is training and executing machine learning algorithms. Golem also has a decentralized reputation system that allows nodes to rank their peers based on their performance on appointed tasks.

From landing drones to running AR apps and navigating driverless cars, there are many settings where the need to run real-time deep learning at the edge is essential. The delay caused by the round-trip to the cloud can yield disastrous or even fatal results. And in case of a network disruption, a total halt of operations is imaginable.

AI coprocessors, chips that can execute machine learning algorithms, can help alleviate this shortage of intelligence at the edge in the form of board integration or plug-and-play deep learning devices. The market is still new, but the results look promising.

Movidius, a hardware company acquired by Intel in 2016, has been dabbling in edge neural networks for a while, including developing obstacle navigation for drones and smart thermal vision cameras. Movidius Myriad 2 vision processing unit (VPU) can be integrated into circuit boards to provide low-power computer vision and image signaling capabilities on the edge.

More recently, the company announced its deep learning compute stick, a USB-3 dongle that can add machine learning capabilities to computers, Raspberry Pis and other computing devices. The stick can be used individually or in groups to add more power. This is ideal to power a number of AI applications that are independent of the cloud, such as smart security cameras, gesture-controlled drones and industrial machine vision equipment.

Both Google and Microsoft have announced their own specialized AI processing units. However, for the moment, they don't plan to deploy them at the edge and are using them to power their cloud services. But as the market for edge AI grows and other players enter the space, you can expect them to make their hardware available to manufacturers.


Currently, AI algorithms that perform tasks such as recognizing images require millions of labeled samples for training. A human child accomplishes the same with a fraction of the data. One of the possible paths for bringing machine learning and deep learning algorithms closer to the edge is to lower their data and computation requirements. And some companies are working to make it possible.

Last year Geometric Intelligence, an AI company that was renamed Uber AI Labs after being acquired by the ride-hailing company, introduced machine learning software that is less data-hungry than the more prevalent AI algorithms. Though the company didn't reveal the details, performance charts show that XProp, as the algorithm is named, requires far fewer samples to perform image recognition tasks.

Gamalon, an AI startup backed by the Defense Advanced Research Projects Agency (DARPA), uses a technique called Bayesian Program Synthesis, which employs probabilistic programming to reduce the amount of data required to train algorithms.

In contrast to deep learning, where you have to train the system by showing it numerous examples, BPS learns with few examples and continually updates its understanding with additional data. This is much closer to the way the human brain works.

BPS also requires considerably less computing power. Instead of arrays of expensive GPUs, Gamalon can train its models on the same processors contained in an iPad, which makes it more feasible for the edge.

Edge AI will not be a replacement for the cloud, but it will complement it and create possibilities that were inconceivable before. Though nothing short of general artificial intelligence will be able to rival the human brain, edge computing will enable AI applications to function in ways that are much closer to the way humans do.

This post is part of our contributor series. The views expressed are the author's own and not necessarily shared by TNW.


Original post:
How do you bring artificial intelligence from the cloud to the edge? - TNW


The rice of cloud, avocado of virtualization and salmon of doubt: Let’s eat storage sushi – The Register

We've got a few storage news sushi snacks to start off your week. Get your chopsticks out and lift up each of these little beauties to get a taste of who's doing what in the land of the data-baiters, virtualizer commercializers and the cloud crowd.

Data protection supplier Code42 has partnered with cloud-based e-discovery software maker Zapproved LLC to produce CloudPreserve for Code42 and Legal Hold Pro.

Legal Hold Pro is Zapproved's product, which automates legal holds and data preservation requests. Customers use Code42's endpoint data protection software to back up employee data to a secure on-premises or cloud location, and then Legal Hold Pro can be used to wall off part of it behind a legal hold barrier. That data is preserved for litigation and e-discovery purposes.

Corporate legal departments can select computers protected by Code42 as an additional preservation source within the Legal Hold Pro interface. Businesses have the ability to apply in-place preservation holds on custodian computer data directly from Legal Hold Pro.

The two say identified files on those computers can be collected every minute without any manual collection processes and without disrupting the custodian. Legal teams will have faster access to files for litigation and compliance purposes, point-in-time historical visibility to the documents, and protection from device tampering and ransomware-related file loss. Unlimited cloud storage ensures that the protected data is continually maintained and accessible.

End-point protector, file sharer and governancer (Is that a word? Ed) Druva announced Druva Cloud Platform to provide a unified control plane for data management services across endpoint, server, and cloud application data.

This Druva Cloud Platform features time-indexed metadata, global scale-out de-duplication, auto-tiering, RESTful APIs for access and ecosystem integration, and highly elastic search and analytics capabilities.

Jaspreet Singh, co-founder and CEO of Druva, said the Druva Cloud Platform is designed to provide data protection, governance, and intelligence across the full data footprint of the enterprise, all delivered as a service. Search, deletion, recovery, compliance monitoring and other functions are applicable across the full data set.

There is no dedicated hardware and storage infrastructure to drive up total cost of ownership, nor any of the associated software that must replicate data between systems to address discrete data challenges (e.g., backup, disaster recovery, compliance, eDiscovery, analytics).

Druva says server, endpoint, cloud workload, and application data are optimised as a single, globally de-duplicated data set, and natively managed in the cloud. Companies can achieve global visibility and policy management across all their data from a single control plane.
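As a rough illustration of what global deduplication means in practice, the sketch below hashes fixed-size chunks and stores each unique chunk only once; this is a generic content-hashing example, not Druva's implementation, and the chunk size is arbitrary.

```python
# Generic content-hash deduplication sketch (not Druva's code).
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB fixed-size chunks, for simplicity


def dedup_store(paths, store=None):
    """Read files in fixed-size chunks and keep only one copy of each unique chunk."""
    store = {} if store is None else store  # chunk hash -> chunk bytes
    manifests = {}                          # path -> ordered list of chunk hashes
    for path in paths:
        hashes = []
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                digest = hashlib.sha256(chunk).hexdigest()
                store.setdefault(digest, chunk)  # duplicate chunks are stored once
                hashes.append(digest)
        manifests[path] = hashes
    return store, manifests
```

Backing up the same file from many endpoints then adds almost nothing to the store, which is the effect Druva is describing at cloud scale.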

It claims the product's storage uses heuristics and machine learning to optimize and auto-tune data protection policies, resulting in significantly shorter backup windows.

Find out more here. Druva Cloud Platform is available in Tech Preview, shipping GA in calendar Q4 2017.

Findings of the Druva 2017 VMware Cloud Migration Survey are:

Panzura announced its Mobile Client which enables organizations to natively share files from the cloud to any device at any location.

It provides Enterprise File Sync and Share (EFSS) functionality so users can give employees access to the data via all iOS, Android, PC, and Mac devices.

Features include:

Hyper-converged virtual SAN storage software supplier StorMagic says its SvSAN product will form an integral part of Cisco's Secure Ops managed cybersecurity offering for OT (Operational Technology) networks, targeted at medium-sized and large enterprises.

Cisco's Secure Ops Solution offering helps businesses manage cybersecurity risk and compliance requirements in industrial automation environments. Example customers include Diamond Offshore, active, unsurprisingly, in offshore drilling and a provider of contract drilling services to the global energy industry. It is also used by a major oil and gas company to protect its ICS network.

StorMagic says SvSAN forms one of several tightly integrated products and services brought together to form a single product that offers network monitoring and data flow.

Secure Ops' original architecture ran in its data centre and was dependent on non-HA local storage to provide the required performance and uptime. Cisco revised the architecture to include a client-side compute infrastructure. This enabled Secure Ops to provide security protection at the client site in combination with a data centre component that collects and analyses critical information.

The updated design uses SvSAN for clusters of dense rack servers for high-availability. At each client site there is a cluster of two Cisco UCS C220 M4 rack servers that communicate back to another cluster of two Cisco UCS C240 M4 rack servers in a Secure Ops data centre. The servers are virtualized using VMware and the storage is virtualized by StorMagic using internal server disks. There is no need for external storage arrays at any of the sites. SvSAN mirrors data between servers to provide the required high availability at each location.

Enterprise backup and business continuity supplier Unitrends says its Recovery Series backup appliances and Unitrends Backup virtual appliances have hypervisor integration for Nutanix Acropolis Hypervisor (AHV).

It helps to eliminate the VMware tax as Unitrends' CEO Paul Brady explained in a canned quote: "Our joint customers running 3rd party hypervisors on Nutanix are looking for new ways to increase the ROI of their infrastructure, and have told us Acropolis is a great way to do that by eliminating the high cost of VMware licensing. Without proper backup, data management, and enterprise cloud continuity in place, customers could never make that ROI a reality."

Unitrends will extend its core data centre backup and recovery capabilities for Nutanix to the purpose-built Unitrends Cloud which provides continuity services that fill gaps enterprises face with hyperscale clouds like Amazon Web Services (AWS), such as recovery Service Level Agreements (SLAs), failover scalability, and cost-effective compliance for DR testing and long-term retention.

We've reached the end of the sushi buffet. We hope your storage news hunger has been sated.

Sponsored: The Joy and Pain of Buying IT - Have Your Say

Read the rest here:
The rice of cloud, avocado of virtualization and salmon of doubt: Let's eat storage sushi - The Register


70% of firms face skill shortages for server-based roles – Cloud Pro

IT organisations are finding it increasingly difficult to recruit for roles across traditional servers and converged infrastructure, due to a decline in the number of applicants with specialist skills, research has found.

A need to drive down the costs associated with hosting data in the public cloud has forced many businesses to either preserve or expand their on-premise servers, which has placed greater pressures on the hiring of employees with server-based skills.

'Voice of the Enterprise: Servers and Converged Infrastructure', which collected data from 525 web-based surveys and 19 phone interviews, found that almost 70% of organisations said that current candidates for those roles lack the skills and experience needed.

"Most IT managers are closely scrutinising their deployment options instead of blindly following the pack to IaaS and other off-premises cloud services," said Christian Perry, research manager at 451 Research.

"When determining the optimal mix of on and off-premises compute resources, there is no doubt this is hampered by the availability of specialist skills and regional availability," he added.

As cloud migration continues to increase, analysts expect that the talent pool of server specialists will continue to shrink.


The problem is exacerbated somewhat by a lack of available internal talent to help facilitate moves away from the public cloud, as the research suggests there has been a tendency in the past to hire those with only general skills rather than server specialists.

Almost 40% of respondents said that they focused on "IT generalists", citing the need to hire for developing methodologies such as automation and software-defined technologies.

"The time and resource savings from these new technologies results in a slightly reduced need for server specialists," added Perry. "The good news is that there remains a need for specialists across both standalone servers and converged and hyperconverged infrastructures."

View original post here:
70% of firms face skill shortages for server-based roles - Cloud Pro


Info on 1.8 million Chicago voters exposed on Amazon server – USA TODAY

A test voting card for a punch voting system. (Photo: Elizabeth Weise)

SAN FRANCISCO - Names, addresses, dates of birth and other information about Chicago's 1.8 million registered voters were left exposed and publicly available online on an Amazon cloud-computing server for an unknown period of time, the Chicago Board of Election Commissioners said.

The database file was discovered August 11 by a security researcher at Upguard, a company that evaluates cyber risk. The company alerted election officials in Chicago on August 12 and the file was taken down three hours later. The exposure was first made public on Thursday.

The database was overseen by Election Systems & Software, an Omaha, Neb.-based contractor that provides election equipment and software.

The voter data was a back-up file stored on Amazon's AWS servers and included partial Social Security numbers and, in some cases, driver's license and state ID numbers, Election Systems & Software said in a statement.

Amazon's AWS cloud service provides online storage, but configuring the security settings for that service is up to the user and is not set by Amazon. The default for all of AWS' cloud storage is to be secure, so someone within ES&S would have had to choose to configure it as public.
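To make the configuration point concrete, here is a hedged boto3 sketch that checks whether a bucket's ACL contains the public "AllUsers" grant that would expose a backup file like this one; the bucket name is hypothetical, and a real audit would also examine bucket policies.

```python
# Check an S3 bucket's ACL for the public AllUsers grant (illustrative only).
import boto3

ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"


def is_publicly_readable(bucket_name: str) -> bool:
    """Return True if the bucket's ACL grants anything to the AllUsers group."""
    s3 = boto3.client("s3")
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    for grant in acl["Grants"]:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") == ALL_USERS_URI:
            return True
    return False


if __name__ == "__main__":
    print(is_publicly_readable("example-voter-backups"))  # hypothetical bucket name
```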

The incident is an example of the potential problems raised by an increasingly networked and connected voting system whose security systems have not necessarily kept up, especially at a time when Russia is known to be probing U.S. election systems.

It's also the latest example of sensitive data left exposed on cloud computing servers, vulnerabilities that cybersecurity firm Upguard has been identifying. Similar configuration issues on Amazon cloud servers have left Verizon, Dow Jones and Republican National Committee data exposed.


"Every copy of data is a liability, and as it becomes easier, faster, and cheaper to transmit, store, and share data, these problems will get worse," said Ben Johnson, chief technical officer at California-based Obsidian Security, and a Chicago voter.

Electronic Systems & Softwareis in the process of reviewing allprocedures and protocols, including those of its vendors, to ensure all data and systems are secure and prevent similarsituations from occurring,it said in a statement.

No ballot information or vote totals were included in the database files and the information was not connected to Chicago's voting or tabulation systems, ES&S said.

"We were deeply troubled to learn of this incident, and very relieved to have it contained quickly," said Chicago Election Board Chairwoman Marisel Hernandez. "We have been in steady contact with ES&S to order and review the steps that must be taken, including the investigation of ES&S's AWS server," she said.

The database was discovered by Upguard's director of strategy, Jon Hendren. The company routinely scans for open and misconfigured files online and on AWS, the biggest provider of cloud computing services.

The database also included encrypted versions of passwords for ES&S employee accounts. "The encryption was strong enough to keep out a casual hacker but by no means impenetrable," said Hendren.

"It would take a nation state, but it could be done if you have sufficient computing power," he said. "The worst-case scenario is that they could be completely infiltrated right now," he said.

"If the passwords are weak, they could be cracked in hours or days. If they are credentials that ES&S employees use elsewhere (corporate VPN) without two-factor authentication, then the breach could be way more serious," said Tony Adams of Secureworks, an Atlanta-based computer security firm.
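The gap between "hours or days" and "a nation state" is mostly search-space arithmetic: the number of possible passwords divided by how many guesses per second an attacker can make. The figures below are rough assumptions for illustration, not numbers from the article, and the real difficulty also depends on which hashing scheme protected the ES&S passwords, which the report does not specify.

```python
# Back-of-the-envelope brute-force estimates; guess rates are rough assumptions.

def worst_case_days(charset_size: int, length: int, guesses_per_second: float) -> float:
    """Days needed to try every possible password of the given length."""
    return charset_size ** length / guesses_per_second / 86_400


# An 8-character lower-case password vs. a 12-character mixed-character one,
# against a single GPU rig (~1e10 guesses/s) and a large cluster (~1e13 guesses/s).
for length, charset in [(8, 26), (12, 94)]:
    for rate in (1e10, 1e13):
        days = worst_case_days(charset, length, rate)
        print(f"length={length} charset={charset} rate={rate:.0e}: {days:.3g} days")
```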

The implications of the exposure are much broader than Chicago because Election Systems & Software is the largest vendor of voting systems in the United States, said Susan Greenhalgh, an election specialist with Verified Voting, a non-partisan election integrity non-profit.

"If the breach in Chicago is an indicator of ES&S's security competence, it raises a lot of questions about their ability to keep both the voting systems they run and their own networks secure," she said.

Russia is known to have probed at least 38 state voter databases prior to the 2016 election, federal officials have said. Because of that, the fact that the Chicago data was available to anyone with an Internet account, even if they had to poke around a bit to find it, represents a risk, Obsidian Security's Johnson said.

"Its hard to say malicious actors have found the data, but it is likely some were already hunting for it. Now, with more headlines and more examples of where to look, you can bet that malicious actors have already written the equivalent of search engines to more automatically find these hidden treasures of sensitive data," Johnson said.


Read more:
Info on 1.8 million Chicago voters exposed on Amazon server - USA TODAY


Qualcomm moved its Snapdragon designers to its ARM server chip. We peek at the results – The Register

Hot Chips Qualcomm moved engineers from its flagship Snapdragon chips, used in millions of smartphones and tablets, to its fledgling data center processor family Centriq.

This shift in focus, from building the brains of handheld devices to concentrating on servers, will be apparent on Tuesday evening, when the internal design of Centriq is due to be presented at engineering industry conference Hot Chips in Silicon Valley.

The reassignment of a number of engineers from Snapdragon to Centriq may explain why the mobile side switched from its in-house-designed Kryo cores to using off-the-shelf ARM Cortex cores, or minor variations of them. Effectively, it put at least a temporary pause on fully custom Kryo development.

Not all the mobile CPU designers were moved, and people can be shifted back as required, we're told. Enough of the team remained on the mobile side to keep the Snapdragon family ticking over, The Register understands from conversations with company execs.

Late last year, Qualcomm unveiled the Snapdragon 835, its premium system-on-chip that will go into devices from top-end Android smartphones to Windows 10 laptops this year. That processor uses not in-house Kryo cores but slightly modified off-the-shelf CPU cores, likely a mix of four Cortex-A53s and four A72s or A73s licensed from ARM. Qualcomm dubs these "semi-custom" and "built on ARM Cortex technology."

In May, Qualcomm launched more high-end Snapdragons for smartphones: the 660 and the 630. However, the 660 uses eight Kryo cores cannibalized from the Snapdragon 820 series, and the 630 uses eight stock ARM Cortex-A53 cores.

This isn't to say ARM's stock cores are naff. This shift means Qualcomm's other designs (its GPUs, DSPs, machine-learning functions, and modems) have to shine to differentiate its mobile system-on-chips from rivals also using off-the-shelf Cortexes. It's a significant step for Qualcomm, which is primarily known for its mobile processors and radio modem chipsets.

For what it's worth, Qualcomm management say they're simply using the right cores at the right time on the mobile side, meaning the off-the-shelf Cortex CPUs are as good as their internally designed Snapdragon ones.

On Tuesday evening, an outline of the Centriq 2400 blueprints will be presented by senior Qualcomm staffers to engineers and computer scientists at Hot Chips in Cupertino, California. We've previously covered the basics of this 10nm ARMv8 processor line. Qualy will this week stress that although its design team drew from the Snapdragon side, Centriq has been designed from scratch specifically for cloud and server workloads.

Centriq overview ... Source: Qualcomm

This is where you can accuse Qualcomm of having its cake and eating it, though: in its Hot Chips slides, seen by The Register before the weekend, the biz boasts that Centriq uses a "5th generation custom core design" and yet is "designed from the ground up to meet the needs of cloud service providers."

By that, it means the engineers working on it, some of whom came over from the Snapdragon side, are on their fifth generation of custom CPU design, but started from scratch to make a server-friendly system-on-chip, said Chris Bergen, Centriq's senior director of product management.

However you want to describe it, looking at the blueprints, you can tell it's not exactly a fat smartphone CPU.

Its 48 cores, codenamed Falkor, run 64-bit ARMv8 code only. There's no 32-bit mode. The system-on-chip supports ARM's hypervisor privilege level (EL2), provides a TrustZone (EL3) environment, and optionally includes hardware acceleration for AES, SHA1 and SHA2-256 cryptography algorithms. The cores are arranged on a ring bus kinda like the one Intel just stopped using in its Xeons. Chipzilla wasn't comfortable ramping up the number of cores in its chips using a ring, opting for a mesh grid instead, but Qualcomm is happy using a fast bidirectional band.

The shared L3 cache is attached to the ring and is evenly distributed among the cores, it appears. The ring interconnect has an aggregate bandwidth of at least 250GB/s, we're told. The ring is said to be segmented, which we're led to believe means there is more than one ring. So, 24 cores could sit on one ring, and 24 on another, and the rings hook up to connect everything together.

Speaking of caches, Qualcomm is supposed to be shipping this chip this year in volume but is still rather coy about the cache sizes. Per core, there's a 24KB 64-byte-line L0 instruction cache, a 64KB 64-byte-line L1 I-cache, and a 32KB L1 data cache. The rest, the L2 and L3 sizes, are still unknown. The silicon is in sampling, and thus you have to assume Intel, the dominant server chipmaker, already has its claws on a few of them and has studied the design. Revealing these details wouldn't tip Qualcomm's hand to Chipzilla.

Get on my level ... The L1 and L0 caches

The L0 cache is pretty interesting: it's an instruction fetch buffer built as an extension to the L1 I-cache. In other words, it acts like a typical frontend buffer, slurping four instructions per cycle, but functions like a cache: it can be invalidated and flushed by the CPU, for example. The L2 cache holds both data and instructions, and is an eight-way job with 128-byte lines and a minimum latency of 15 cycles for a hit.

Let me level with you ... The L2 cache

The L3 cache has a quality-of-service function that allows hypervisors and kernels to organize virtual machines and threads so that, say, a high priority VM is allowed to occupy more of the cache than another VM. The chip can also compress memory on the fly, with a two to four cycle latency, transparent to software. We're told 128-byte lines can be squashed down to 64-byte lines, where possible, with error correction.

When Qualcomm says you get 48 cores, you get 48 cores. There's no hyperthreading or similar. The Falkors are paired into duplexes that share their L2 cache. Each core can be powered up and down, depending on the workload, from light sleep (CPU clock off) to full speed. It provides 32 lanes of PCIe 3, six channels of DDR4 memory with error correction and one or two DIMMs per channel, plus SATA, USB, serial and general purpose IO interfaces.

I've got the power ... Energy-usage controls

Digging deeper, the pipeline is variable length, can issue up to three instructions plus a direct branch per cycle, and has eight dispatch lanes. It can execute out of order, and rename resources. There is a zero or one cycle penalty for each predicted branch, a 16-entry branch target instruction cache, and a three-level branch target address cache.

Well oiled system ... The Centriq's pipeline structure

Make like a tree and get outta here ... The branch predictor

Hatched, matched, dispatched ... The pipeline queues

Loaded questions ... The load-store stages of the pipeline

It all adds up ... The variable-length integer-processing portion

The chip has an immutable on-die ROM that contains a boot loader that can verify external firmware, typically held in flash, and run the code if it's legit. A security controller within the processor can hold public keys from Qualcomm, the server maker, and the customer to authenticate this software. Thus the machine should only start up with trusted code, building a root of trust, provided no vulnerabilities are found in the ROM or the early stage boot loaders. There is a management controller on the chip whose job is to oversee the boot process.
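The root-of-trust idea is easier to see in miniature. The sketch below is purely illustrative, not Qualcomm's boot code: it uses the third-party `cryptography` package and a single RSA key to show the verify-before-execute step each boot stage performs on the next image.

```python
# Illustrative verified-boot step: refuse to run an image whose signature fails.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def firmware_is_trusted(image: bytes, signature: bytes, pubkey_pem: bytes) -> bool:
    """Return True only if `signature` over `image` verifies against the public key."""
    public_key = serialization.load_pem_public_key(pubkey_pem)
    try:
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False


def boot(stages, pubkey_pem: bytes) -> None:
    """Walk the boot chain; halt at the first stage that fails verification."""
    for name, image, signature in stages:
        if not firmware_is_trusted(image, signature, pubkey_pem):
            raise RuntimeError(f"refusing to boot: {name} failed verification")
        # ...hand control to the verified image here...
```

In the Centriq design the equivalent logic lives in immutable ROM, and the security controller can hold keys from Qualcomm, the server maker and the customer, rather than the single key used in this toy example.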

We'll be at Hot Chips this week, and will report back with any more info we can find. When prices, cache sizes and other details are known, we'll do a Xeon-Centriq-Epyc specification comparison.

Sponsored: The Joy and Pain of Buying IT - Have Your Say

Original post:
Qualcomm moved its Snapdragon designers to its ARM server chip. We peek at the results - The Register


Bitcoin Analysts Compete for the Highest Price Forecast – Bloomberg

Even the skeptics can't avoid weighing in on bitcoin.

It seems like everyone is coming up with a price forecast these days, with some of the biggest banks, including Goldman Sachs Group Inc., jumping into the action, while everyone from speculators to long-time investors is also making bets.

The consensus is that the biggest cryptocurrency will face some resistance around $4,500 to $4,800 and correct, then continue rallying. How high? Pantera Capital Management's Paul Veradittakit, Tom Lee at Fundstrat Global Advisors and John Spallanzani at GFI Group Inc. see it going to $6,000 by year-end, while Ronnie Moas at Standpoint Research says it will keep rising to $7,500 in 2018.

Bitcoin has been on a tear this year, more than tripling in value as it crossed the $4,000 mark and touched a record $4,477 last week. It's since retreated about 7 percent from the high as investors took profit and assessed whether the rally had gone too far. Growing adoption and institutional investor interest, agreement on a mechanism to speed up transactions and regulatory steps that will help the asset broaden its reach are some of the reasons that explain the gains.

"We're in a very healthy position right now," said Veradittakit, vice president of Pantera Capital, which has invested in bitcoin since 2014. "There's a lot of interest from traders and mainstream finance on the rise of all these new cryptocurrencies, but when they first get exposure into the space, they'll go into bitcoin. It has the most liquidity and biggest brand name."

Veradittakit said bitcoin will hover around current levels and rally further once the underlying technology is upgraded in November, when the block size in the bitcoin blockchain is set to double to two megabytes, increasing transaction speed. He's also encouraged by reports from the local exchanges Pantera invests in that cross-border transactions are increasing.


But the road ahead might get rocky. Goldman Sachs technical analyst Sheba Jafari wrote in a note to clients Aug. 13 that bitcoin could erase around 40 percent of its value after reaching $4,827. In a separate note, Goldman Sachs analysts said the space is getting big enough, at over $100 billion in market capitalization, that it warrants watching.

Spallanzani, chief macro strategist at GFI Group, also predicts a sizeable fall to as low as $3,000 unless it manages to break the $4,500 level it tested last week. But then it should rebound and climb to as high as $10,000 in 2018, he said.

"It will have to retrace a bit more before we have enough power to break through," Spallanzani said. He recommends buying bitcoin when it's above $3,800 and selling when it's below that level.

Not everyone is so bullish. Roy Sebag, who said he first invested in bitcoin in 2011, said he sold most of his 17,000 bitcoin between May and June because he believes the long-term value will be zero.

"It's completely devolved from the original promise," said Sebag, founder and chief executive officer of Goldmoney Inc., which oversees about $2 billion of assets. "Bitcoin and cryptocurrencies in general are exhibiting a mania, fueled by speculative fervor."

Amid the frenzy, some analysts have steered clear of making price predictions, while still dipping their toes in bitcoin waters.


Tom Price, a Morgan Stanley equity strategist, said bitcoin compares to gold in that both offer similar benefits as a store of value, such as being fungible, durable, portable, divisible and scarce. Still, a lot of time and trust-building will be needed before it becomes clear whether bitcoin will also undermine demand for the metal, he said.


Cryptocurrencies including bitcoin are still very volatile and thus not particularly safe, but that could change as their value rises and liquidity increases, wrote Bank of America Merrill Lynch strategists Martin Mauro, Cheryl Rowan and Matthew Trapp earlier this month. They score well when it comes to diversification, as their correlation to equities, bonds, commodities, currencies or selected measures of risk is near zero, the strategists said.

Longer term, bitcoin will climb to $25,000 by 2022, Fundstrat's Lee said, as recent regulatory approval for options trading and settlement implies a significant rise in institutional holdings of bitcoin, while he estimates user accounts are likely to rise 50 percent and usage per account to climb 30 percent.

Moas of Standpoint Research said in an Aug. 14 report that bitcoin could rise to $50,000 by 2027, as he expects the number of cryptocurrency users to grow from 10 million today to as many as 100 million in the next couple of years.

"It looks to me as though we are at the same point in the adoption curve as we were in 1995 with the Internet," Moas wrote. "Cryptocurrency is becoming more widely accepted by the day."

Visit link:
Bitcoin Analysts Compete for the Highest Price Forecast - Bloomberg


Bitcoin About To Burst – Seeking Alpha

Bitcoin (Pending:COIN) (OTCQX:GBTC) was in a bubble in late 2013. And again in 2017. Actually this year, there have been bubbles within bubbles, with the March to June parabolic rally followed by another +100% rise from July to August.

Where does it end?

Well, I read today it was going to the moon, which is interesting as I am coming to the exact opposite conclusion.

I think Bitcoin has either topped or will top on the next high in the $4,400s.

Here's why.

There's been no shortage of top callers this year and they have added fuel to the fire. However, I imagine most have given up after the move above $4,000.

Worse still, from what I see, a few are trying to buy this 10% dip for another high. When bears get greedy and try to speculate on one more high, it is a massive red flag for the rally.

I would like to point out that I have not been a bear or tried to call a top. Far from it; my June article, Bitcoin - We've Seen This Bubble Before (And It's Bullish), contained a target of >$4,000 and this chart:

The exact target was $4,347, but the main takeaway was to expect a blow-off move higher, which I think we can all agree we just had. $4,480 was the all-time high made last week, and the weekly candle from this point shows indecision and a lack of demand.

One of the reasons for the call higher was the observation Bitcoin has been in a bubble before - in 2012 to 2014 - and the 2017 rally was taking a comparable trajectory. Here is an updated chart:

The rallies aren't a perfect copy, but the stages of the trend and the accompanying sentiment are comparable.

Sentiment and how price moves are the most important factors of my analysis. I'm not analyzing the fundamentals as I don't believe fundamentals are responsible for the huge gains in 2017.

Here is what I said last time out:

The way price moves is a reflection of changing fundamentals, sentiment and positioning. We know what participants have done in the past under certain conditions and we know what they are doing now. We can't know for certain what people will do in the future, but participants and the decisions they make are fairly consistent; they respond in similar ways under similar conditions. It allows us to make an educated guess.

So my educated guess now is that Bitcoin is topping. The bubble callers in June were right about many things, but painfully early. Cryptocurrencies have proliferated and drawn in many inexperienced traders at inflated prices. I see brokers promoting cryptos all over the web.

Source: eToro

I think the only thing missing from the equation was the blow-off move to really spark the mania phase and flush out the bears. But now we've had it.

When bubbles pop, the usual reaction is for price to give up 80-90% of the gains and never fully recover. But Bitcoin is not "usual".

For a start, the way it recovered from the 2012-2014 bubble to form yet another one brings into doubt if these were indeed bubbles in the first place.

And the price movements echo a related instrument, gold (GLD), which rallied in comparable moves (or bubbles) over a much longer time period.

OK, so Bitcoin made the same moves in a tenth of the time, but the timing doesn't really bother me too much. It is actually pretty logical that cryptocurrencies move a lot faster than gold did back in the 1980s. And anyway, I'm more interested in the reaction in gold from the 2011 highs and if it can act as a guide.

Zooming into the way gold topped, there are again some similarities.

I distinctly remember traders (and even gold bears) buying gold for one more high as the pattern near the 2011 highs looked like a bullish triangle consolidation. That didn't turn out too well, and Bitcoin has a very similar pattern and associated sentiment.

Whether or not the comparison continues on the lower time frames remains to be seen, but I still think the general path of the gold decline from 2011 to 2015 could act as a decent guide. There could be a sharp drop, but importantly gold tells us not to expect a crash; cryptocurrencies are here to stay.

Over its four-year decline, gold worked its way to the last major consolidation area at $1,030 (March 2008-October 2009). A proportional move in Bitcoin targets the $2,800-2,900 consolidation range highs in a lot less time (more like 5 months than 50). The 38.2% Fibonacci retrace of the entire 2015 to 2017 rally comes in at $2,800, and this is a standard retrace for a powerful rally and therefore my target.
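The retracement target itself is simple arithmetic. A quick sketch, assuming an approximate 2015 low of $200 and the $4,480 high cited earlier (the author's exact reference prices may differ slightly):

```python
# Fibonacci retracement level of a move from `low` to `high`.
def fib_retrace(low: float, high: float, ratio: float = 0.382) -> float:
    """Price after giving back `ratio` of the advance from low to high."""
    return high - ratio * (high - low)


print(round(fib_retrace(200, 4480)))  # ~2845, i.e. the $2,800-2,900 area
```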

Based on evidence of trading patterns in gold and Bitcoin itself, plus all the usual telltale signs of a bubble, I think Bitcoin is in the process of topping and will soon fall back to $2,800-2,900.

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Additional disclosure: I would short a move to $4300 using a spreadbet on prices with my UK broker.

Read more:
Bitcoin About To Burst - Seeking Alpha
