Mozilla’s Send is basically the Snapchat of file sharing – The Verge

Mozilla has launched a new website that makes it really easy to send a file from one person to another. The site is called Send, and it's basically the Snapchat of file sharing: after a file has been downloaded once, it disappears for good.

That might sound like a gimmick, but it underscores what the site is meant for. It's designed for quick and private sharing between two people, not for long-term hosting or distributing files to a large group.

Though cloud hosting and local services like AirDrop have made sharing files much easier than it used to be, it can still be frustrating to get someone a file. Email attachments often cap out at 20MB or so. And while you can add something to a storage service like Dropbox, it's then sitting there taking up space, with no indication of whether the file has been downloaded yet and is safe to remove.

Send gets around all of that. It supports files up to 1GB, and after uploading something, it'll give you a link to send to someone else. That link will expire once they've downloaded it or once 24 hours have passed. So someone else's procrastination is really your biggest limitation here. Files are also encrypted as they're uploaded, and Mozilla says it does not have the ability to access the content of your encrypted file.
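
The expiring-link behavior is simple to picture as a bit of server-side bookkeeping: the link token is forgotten after the first successful download or after 24 hours, whichever comes first. The sketch below is a minimal illustration of that logic in Python, not Mozilla's actual implementation; the in-memory store, token format and expiry window are assumptions for the example.

```python
import secrets
import time
from typing import Optional

EXPIRY_SECONDS = 24 * 60 * 60  # assumed 24-hour window, per the article

# token -> (already-encrypted blob, upload timestamp); a stand-in for real storage
_store = {}

def upload(encrypted_blob: bytes) -> str:
    """Store an encrypted file and hand back a one-time download token."""
    token = secrets.token_urlsafe(16)
    _store[token] = (encrypted_blob, time.time())
    return token

def download(token: str) -> Optional[bytes]:
    """Return the blob at most once; expired or already-used tokens yield None."""
    entry = _store.pop(token, None)   # the first download removes the entry for good
    if entry is None:
        return None
    blob, uploaded_at = entry
    if time.time() - uploaded_at > EXPIRY_SECONDS:
        return None                   # older than 24 hours: treat as already gone
    return blob
```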

Mozilla is classifying Send as an experiment for now, so it's possible that the site won't be around forever. But the service already seems useful. And if it catches on, maybe it'll stick around as a way to keep sending stuff.

Read more from the original source:
Mozilla's Send is basically the Snapchat of file sharing - The Verge

Unisecure Data Centers Offers 15% Discount On Cloud Server Hosting Services – HostReview.com (press release)

06:34:00 - 02 August 2017

Philadelphia, US, August 2, 2017 | We are delighted to announce that Unisecure Data Centers is now offering 15% off cloud hosting services. This offer is applicable for the whole month of August 2017 and can be availed by all new customers looking for cloud hosting solutions, as well as by existing customers who want to shift to cloud computing services.

Unisecure has 20 years of broad experience in the web hosting and data center industry. The organization is known for its expertise, quality of delivery, and fast, responsive support available 24x7x365. Unisecure has earned the trust of clients around the world in almost 20 nations, with a customer base of 40,000 clients including several Fortune 500 organizations.

"As a premier data center company, we needed to offer something unique to our clients which will help them to take an advantage of this fast growing technology in the server industry. While we have declared a few markdown offers before, this is the biggest opportunity we are putting forth to the people who want to experience the world of cloudcomputing services & solutions. We believe that organizations are intending to move from an on-premise IT infrastructure to the cloud, these organizations should profit the advantage of this offer and band together with us for their different facilitating needs" said Benjamin, Vice President - Business Development at Unisecure.

Olivia, Head of Business Development, says: "One of our main reasons for bringing down cloud costs is to give small businesses an opportunity to explore and grow by reducing IT cost."

About Unisecure

Unisecure is a US-based dedicated web server hosting and data center services provider with several world-class data centers in the USA. Unisecure started in 1996 and has since successfully delivered numerous projects in the areas of data center services, dedicated servers, VPS hosting, colocation, cloud solutions and disaster recovery services.

Unisecure offers a 99.995% network uptime SLA guarantee. It has three privately owned, state-of-the-art data centers located in the US, catering to customers across the globe. The data centers are equipped with modern, redundant infrastructure, and Unisecure is among the few providers to offer both Linux and Windows hosting services. Its competitive offerings and service levels translate into customer satisfaction.

For more information, visit http://www.unisecure.com

Read more from the original source:
Unisecure Data Centers Offers 15% Discount On Cloud Server Hosting Services - HostReview.com (press release)

How The Cloud Will Disrupt The Ad Tech Stack – AdExchanger

The Sell Sider is a column written by the sell side of the digital media community.

Today's column is written by Danny Khatib, co-founder and CEO at Granite Media.

One of the most powerful aspects of the cloud platform is the innovation created by the unbundling of component services. There is a full menu of options for every hardware and software component, and companies can mix and match to achieve their desired configuration, trading off service and cost for each component. No more monolithic apps.

For the web stack, a company can rent elastic hardware from a primary service like Amazon Web Services or Google Cloud, plug in content delivery network services from a different vendor, install basic application monitoring from yet another vendor, and the list goes on. A company can also run its independent data stack in parallel: storing logs at one provider, using one of many data pipeline services, pushing data to a separate structured data warehouse while selecting a decoupled, best-in-class visualization tool to make sense of it all.

Changing any of these decisions at any layer has super-low friction and only requires one or two developers or operations employees to manage it all. Now that is disruption.

Isn't this how the ad tech stack should run, too? Let's imagine that future.

The ad stack of the future will be cloud-based, component-driven, functionally independent from parallel web and data stacks and will have every component decoupled and rebundled at the customer's discretion. Importantly, almost all layers of the existing ad stack will be reconceived as operational infrastructure, not as access to demand or supply.

The Basic Layer

Buyers and sellers will each run their own ad servers, and access to the general RTB bidstream between them will be a single component service for each party, which will often be managed by the cloud provider hosting the server. A server will be swappable without affecting access to the bidstream.

The bidstream itself will be a commodity delivery service, similar to basic web traffic: table stakes for cloud providers or component providers, with no charge other than the cloud resources used to manage the bidstream. The major cost decision point for both sellers and buyers will be the desired maximum queries per second to be supported by the server. If publishers want to manage more bids per second, they will have to pay for the resources to manage them, not for the value of the bids. Just imagine a content delivery network charging more for articles or users that monetize well. No, thank you.
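
To make the distinction concrete, here is a toy comparison of the two pricing models the column contrasts: paying for provisioned bidstream capacity like any other cloud resource versus paying a share of the value flowing through the pipe. All of the numbers are invented for illustration.

```python
def resource_based_cost(max_qps: int, cost_per_1k_qps_hour: float, hours: float) -> float:
    """Publisher pays for provisioned queries-per-second capacity, like any cloud resource."""
    return (max_qps / 1000) * cost_per_1k_qps_hour * hours

def value_based_cost(ad_revenue: float, take_rate: float) -> float:
    """Today's model: the pipe charges a percentage of the value flowing through it."""
    return ad_revenue * take_rate

# A publisher provisioning 50,000 QPS for a 30-day month versus paying 15% of $100k in revenue.
print(resource_based_cost(50_000, cost_per_1k_qps_hour=0.05, hours=24 * 30))  # 1800.0
print(value_based_cost(100_000, take_rate=0.15))                              # 15000.0
```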

Gone will be the days when publishers run 10 bidstreams in parallel, because there will no longer be a need to do so. Publishers will manage demand through one component pipe that doesn't affect other layers of the stack, and they will pay a cloud usage fee to manage it. Publishers will get a single unified auction, and buyers won't have to solve for deduplication anymore.

The Service Layer

Around this basic layer, component services will flourish. A publisher's server could run one of several available auction engines that house the priority and decisioning rules to select a winning bid. It will enable intelligent bid filtering services to manage bidstream cost, and also many different internal monitoring services for bidstreams, ad serving reports, custom metrics and so on. A separate cookie-matching service will be easy to plug in, as will a creative diagnostics service to help detect pesky redirects and creatives that hurt user engagement.

The buyer's server will run in a similar fashion, with campaign management as a component. Server logs will be pushed to a parallel data stack for offline analysis by yet another service. The client SDK to fetch ads from the server will be a separate component, probably just open-source software. It's not tied to a particular server, and it's most definitely not tied to a particular demand source.

The Transaction Layer

Best-of-breed components will be designed to tackle secure stream connections, identity verification, transaction confirmation and financial settlement.

In the offline world, a seller can choose to accept Visa, MasterCard or American Express, and the buyer decides which to use. Similarly, a seller's server will be able to disclose what transaction, verification and settlement providers are supported, and buyers can respond with which service to choose, such as a preference for Moat or Integral Ad Science, and how they prefer to pay.

The buyer will get to choose from a pre-approved list of vendors supported by the seller, then the seller's server will render the code and pixels required for third-party verification for a particular impression.
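
A minimal sketch of that negotiation, assuming nothing about real vendor APIs: the seller's server advertises which verification and settlement providers it supports, and the buyer's preferences are matched against that pre-approved list. The provider names and fields are illustrative only.

```python
# The seller's advertised capabilities; names are placeholders, with Moat and
# Integral Ad Science standing in as the verification vendors the column mentions.
SELLER_SUPPORTED = {
    "verification": ["moat", "integral_ad_science"],
    "settlement": ["settlement_provider_a", "settlement_provider_b"],
}

def negotiate(buyer_preferences: dict) -> dict:
    """For each service, pick the first buyer preference the seller also supports."""
    chosen = {}
    for service, supported in SELLER_SUPPORTED.items():
        for candidate in buyer_preferences.get(service, []):
            if candidate in supported:
                chosen[service] = candidate
                break
    return chosen

# The buyer prefers Integral Ad Science for verification; the seller supports it.
print(negotiate({"verification": ["integral_ad_science", "moat"],
                 "settlement": ["settlement_provider_b"]}))
# {'verification': 'integral_ad_science', 'settlement': 'settlement_provider_b'}
```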

Financial settlement might be bundled with transaction confirmation or offered as a separate service. Any services that don't involve the handling of money should charge based on resources used for a given number of transactions. Any service that involves financial settlement can charge a percentage of revenue, since there is financial risk to be managed and money is the underlying resource used. Again, this is all functionally independent of access to demand, supply or any other layer in the stack.

Instead of using ads.txt, publishers will manage a name server to verify identity and defend against fraud, similar to how domain name server resolution works.

The Data Layer

Ad networks will reshape themselves as data providers that plug into buyers' and sellers' servers but don't reroute the bidstream. Deals and unique data can be managed by inserting rules and attributes into both servers so a bid request can be signed with additional deal ID attributes before it is sent to a particular buyer's server, which has been configured to look for it.
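
A rough sketch of that data-layer idea, with field names loosely inspired by OpenRTB but otherwise assumed: a rule on the seller's server attaches deal ID attributes to a matching bid request before it is routed to the buyer configured to look for them.

```python
# Each rule: (predicate on the bid request, deal attributes to attach, target buyer).
# All identifiers and values here are hypothetical.
DEAL_RULES = [
    (lambda req: req.get("site") == "example-publisher.com",
     {"deal_id": "deal-123", "min_cpm": 4.50},
     "buyer-42"),
]

def annotate(bid_request: dict) -> dict:
    """Return a copy of the bid request with any matching deal attributes attached."""
    enriched = dict(bid_request)
    for predicate, attributes, buyer in DEAL_RULES:
        if predicate(bid_request):
            enriched.setdefault("deals", []).append({**attributes, "buyer": buyer})
    return enriched

print(annotate({"site": "example-publisher.com", "imp_id": "abc"}))
# {'site': 'example-publisher.com', 'imp_id': 'abc',
#  'deals': [{'deal_id': 'deal-123', 'min_cpm': 4.5, 'buyer': 'buyer-42'}]}
```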

There will be no more reselling problems because the bidstream integrity is preserved. Networks and other data providers can try several different business models, such as charging on transactions or revenue, depending on the unique insights they provide.

As we move away from monolithic apps in the ad stack toward cloud-based component services, buyers and sellers will absolutely win. For the ad tech ecosystem, there are large implications for who might be the long-term winners and losers and how consolidation will play out, but we'll leave those predictions for a future column.

Follow Danny Khatib (@khatibda), Granite Media (@Granite_Media) and AdExchanger (@adexchanger) on Twitter.

Original post:
How The Cloud Will Disrupt The Ad Tech Stack - AdExchanger

Packet launches edge compute service in 15 global locations – RCR Wireless News

Packet, a New York-based startup that specializes in bare metal infrastructure, recently launched its new edge compute service in 15 locations across the globe. Eleven of these locations are new and are online in Los Angeles, Seattle, Dallas, Chicago, Ashburn, Atlanta, Toronto, Frankfurt, Singapore, Hong Kong and Sydney.

Unlike its other locations, the new edge compute locations provide a single server configuration based on an Intel Skylake processor. However, Packet intends to bring the majority of its server options to these new locations later.

"While edge compute is still in its infancy, new experiences are driving demand for distributed infrastructure, especially as software continues its relentless pursuit down the stack," said Zachary Smith, a co-founder and CEO of Packet. "We believe that the developers building these new experiences are hungry for distributed, unopinionated and yet fully automated compute infrastructure and that's what we're bringing to the market today."

Packet was founded in 2014 as a way to provide developers with un-opinionated access to infrastructure. "I saw this huge conflict happening between this proprietary lock-in view of infrastructure that was being provided by the cloud, and open source software that effectively wanted to eat that value all the way down," said Smith. "We started Packet with the idea we can provide this highly automated, fundamental compute service with as little opinion as possible, while still meeting the demands of a millennial-based developer."

Packet currently has about 11,000 users on its platform. The company is expanding its services to make it more appealing for businesses that demand low latency communication. In addition, the company offers private deployments. "I think we are the only platform that is purely focused on providing developer automation, a.k.a. a cloud platform, without opinion," said Smith. "Every other cloud provider is based on multi-tenancy, or virtualization or some other service because they want to lock you in. We are really the only one that is out there to automate hardware."

Packet describes itself as the bare metal cloud company for developers. Bare metal cloud servers do not have a hypervisor. The company believes the next generation of cloud computing will require customized hardware, and that placing bare metal power at the edge will play a significant role in fueling the internet of things (IoT).

As these new edge centers grow, it is possible they will evolve into full data centers over time. "I could see these things essentially staying as pretty bespoke edge markets with a core market being very similar to major public clouds today," Smith said. "It's pretty difficult to put everything in every location."

The company intends to expand to new locations within the next six to 12 months, and add to the portfolio of its current locations.

See the original post here:
Packet launches edge compute service in 15 global locations - RCR Wireless News

IBM adds Optane to its cloud, only as storage and without GPUs – The Register

IBM's made good on its promise to fire up a cloud packing Intel's Optane non-volatile memory in the second half of 2017. But Big Blue has fallen short of the broad services suite it foreshadowed and can't even put Optane to work as memory.

Big Blue announced the availability of servers with Optane inside on Tuesday. You can run Intel's baby on selected IBM cloud bare metal configurations that give you the chance to provision a server with the 375GB Optane SSD DC P4800X. Because that's a PCIe device, you can either have an Optane or a GPU, not both.

Another limitation is that you can only use Optane as storage, which is nice because it's pleasingly fast storage. But if you wanted to try Optane as a massive pool of memory, a role Intel feels is particularly impactful, you can't do that. "Scheduled availability is to be determined," IBM says in its fine print.

The inability to use Optane as memory makes IBM's announcement of the service incongruent, as it lauds Optane as "the first product to combine the attributes of memory and storage, thereby delivering an innovative solution that accelerates applications through faster caching and faster storage performance to increase scale per server and reduce transaction costs for latency sensitive workloads."

But IBM can't do that now. And can't say when it will.

For now, Optane's only available in five IBM data centres. If latency between you and Dallas, London, Melbourne, Washington DC or San Jose, California, is going to be a problem, this service may not be for you.

We'd love to tell you more about the price of the service, but the online server configuration tool IBM suggests does not have an option for Optane that your correspondent could find. Nor is a price list apparent.

We can tell you that Optane-packing servers in the IBM cloud can run Windows Server 2012 or 2016, Red Hat Enterprise Linux 6.7 and up, or ESXi 5.5 and 6.0.

Read more from the original source:
IBM adds Optane to its cloud, only as storage and without GPUs - The Register

Hybrid cloud technology gets the most out of primary storage workloads – TechTarget

The promise of smooth-functioning, cost-effective hybrid cloud storage has long been of interest to IT professionals. "Hybrid" has been in the cloud lexicon from the beginning, when the National Institute of Standards and Technology issued its original definitions of various cloud deployment models.

Hybrid cloud storage broadens the workload deployment choice to more than one cloud and enables compelling use cases, such as off-site backup, disaster recovery and cloud bursting. Done right, an enterprise hybrid cloud improves IT agility while reducing cost.

Up until recently, however, major challenges kept companies from embracing the promise of hybrid clouds, particularly for primary storage. These obstacles fall into four broad categories.

Fortunately, as the cloud market and technologies mature, most of these adoption barriers are gradually being dismantled. Based on our recent research, IT manager confidence in public clouds has increased significantly in the past couple of years, leading to their adoption for an expanding set of workloads. In addition, rapidly maturing cloud storage, networking and orchestration technologies are bringing hybrid cloud primary storage closer to reality, while products that enable simple and streamlined data portability are beginning to alleviate lock-in concerns.

Even as these obstacles are lowered or removed, buyers need a way to sort and distinguish competitors. To accomplish that, let's look at a set of criteria that defines what we believe is the sweet spot: hybrid cloud services that allow you to better and more fully support primary apps and data.

To overcome the limitations of existing approaches and ensure that a hybrid cloud primary storage product meets all your needs, start with an on-premises private cloud. This must include self-service provisioning and pay-as-you-go billing for infrastructure and app services. Among other benefits, this approach will help your organization adopt a cloud services mindset in which resources are provisioned and services delivered on demand and paid for as they're consumed. A private cloud also will lay the groundwork for a hybrid IT infrastructure.

Beyond that starting point, here's a list of criteria that will let you get the most out of your enterprise hybrid cloud investment:

Product offerings that satisfy the majority of these criteria will more likely provide the choice, agility, control and cost you should expect from hybrid cloud technology storage. A full set of these characteristics is seldom found in a single product, however. Let's briefly consider the field of existing offerings to see how they deliver these capabilities.

Existing products that connect on-premises storage with a public cloud service come in several flavors. While several of these claim to offer hybrid cloud capabilities, some come closer than others to meeting our criteria:

Object storage products are on track to provide the majority of the hybrid cloud capabilities on our wish list, except most don't adequately support traditional file- and block-based applications. These may include the apps on which you may be running your business. If you're focused on moving legacy workloads to the cloud and running them in a hybrid fashion, then object storage software likely won't meet all of your needs.

As this category develops, we believe some products will support particular workloads and deployment scenarios, such as lifting and shifting existing apps to the cloud. Others will be more general purpose. Look for hybrid products that provide the scalability and flexibility you'll need to grow, along with automated cross-cloud orchestration and management to minimize hands-on admin support.

Hybrid cloud technology products open new possibilities for deploying production applications. For example, if you're already running workloads in the public cloud and find your monthly bill growing too large, these products give you the flexibility to run selected workload tiers -- such as the presentation layer -- in the public cloud where they will benefit from the elasticity, while bringing more cost-sensitive portions of the workload back on premises. Hybrid cloud may also benefit on-premises apps, such as data analytics workloads, by providing an opportunity to burst out workloads that run primarily on premises but access public cloud resources as needed.

However, hybrid cloud isn't a panacea, so choose your vendor wisely. Go beyond simply tire kicking and carefully evaluate each product against your objectives and existing environment to determine which is the best fit.

We believe that hybrid cloud will become a reality for production apps and their associated primary storage in 2017 and 2018. Keep an eye out for new approaches and products, including enhancements to those described in this article, particularly to cloud-enabled, software-defined storage. These promise to change the way we as IT professionals think about the hybrid cloud and its role in running the business.

Here is the original post:
Hybrid cloud technology gets the most out of primary storage workloads - TechTarget

IBM and Sony breakthrough on tape storage density could lower cold storage costs again – GeekWire

Sometimes it's easy to forget that we used to store all our data on magnetic tape. Yet tape storage is still a very cost-effective way to store rarely accessed data, and a new breakthrough from IBM that dramatically increases the capacity of tape storage might make for lower cloud storage costs if it catches on in mass production.

IBM researchers have figured out a way to store 201 gigabits of data on a square inch of tape, which as Ars Technica reports could allow partners like Sony Storage Media to create 330TB tape cartridges the size of the palm of your hand. Those cartridges, coupled together in massive arrays inside data centers, would keep tape alive as a storage option for people running their own data centers and could also encourage cloud providers that have targeted tape users to lower cloud storage prices.

Most data centers rely on solid-state drives (SSDs) for day-to-day storage because they are so much faster than tape drives at retrieving and storing data, but they are more expensive to acquire and maintain. Tape storage is almost as old as computing itself, and it is still used for what's referred to as cold storage, or data that doesn't need to be accessed very frequently, such as financial records. But there have been concerns that tape storage is reaching a practical limit in capacity, and as the Internet of Things becomes reality, data storage needs will explode.

IBM and Sony's breakthrough involves the use of sputter deposition, a method of adding layers to a material that has been used for years to make hard drives but hasn't been applied to tape before. It increases the storage density of the tape, and a new lubricant developed for the tape makes sure it moves smoothly at speed.
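
Back-of-the-envelope arithmetic shows how an areal density figure translates into cartridge capacity. The tape width and length below are assumptions picked to illustrate the calculation, not IBM's or Sony's published specifications.

```python
# Rough capacity estimate from areal density; dimensions are assumed values.
GBIT_PER_SQ_INCH = 201        # reported areal density
TAPE_WIDTH_INCHES = 0.5       # assumed half-inch tape
TAPE_LENGTH_METERS = 670      # assumed length of tape in one cartridge
INCHES_PER_METER = 39.37

area_sq_inches = TAPE_WIDTH_INCHES * TAPE_LENGTH_METERS * INCHES_PER_METER
capacity_terabytes = GBIT_PER_SQ_INCH * area_sq_inches / 8 / 1000  # gigabits -> terabytes

print(round(capacity_terabytes))  # ~331, in the ballpark of the 330TB figure cited
```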

It could take quite some time before this technology makes it into commercial products, and it won't be a cheaper alternative to modern tape storage for a while, until the production kinks are worked out. But it could keep tape storage alive as a cost-effective storage medium for several more years.

Continued here:
IBM and Sony breakthrough on tape storage density could lower cold storage costs again - GeekWire

WTF is bitcoin cash and is it worth anything? – TechCrunch

Early yesterday morning bitcoin's blockchain forked, meaning a separate cryptocurrency was created called bitcoin cash.

The way a fork works is instead of creating a totally new cryptocurrency (and blockchain) starting at block 0, a fork just creates a duplicate version that shares the same history. So all past transactions on bitcoin cash's new blockchain are identical to bitcoin core's blockchain, with future transactions and balances being totally independent from each other.

For practical matters, all this really means is that everyone who owned bitcoin before the fork now has an identical amount of bitcoin cash that is recorded in bitcoin cash's forked blockchain.
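
A toy model makes the shared-history point concrete: both chains keep every block up to the fork, so balances recorded before the split exist on both, while everything added afterwards diverges independently. This is purely illustrative, not how a node actually stores the chain.

```python
# Blocks mined before the fork are common to both chains.
shared_history = ["block0", "block1", "block2"]

bitcoin_core = shared_history + ["core_block3"]   # post-fork blocks on one chain
bitcoin_cash = shared_history + ["cash_block3"]   # post-fork blocks on the other

# Balances recorded in the shared history are duplicated on both chains,
# then evolve independently from the fork point onward.
balances_at_fork = {"alice": 2.0, "bob": 0.5}
core_balances = dict(balances_at_fork)
cash_balances = dict(balances_at_fork)

assert bitcoin_core[:3] == bitcoin_cash[:3]   # identical pre-fork history
assert bitcoin_core[3:] != bitcoin_cash[3:]   # independent after the fork
```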

But it's not exactly this easy. If you control your own private keys, or hold your bitcoin in an exchange that said it would credit users' balances with bitcoin cash, you're fine and can access your newfound cryptocurrency right now.

If you held your bitcoin with a provider like Coinbase, which said before the fork they aren't planning on distributing bitcoin cash to users or even interacting with the new blockchain at all, then you may be out of luck.

To be clear, this doesn't mean companies like Coinbase and Gemini are taking your bitcoin cash for themselves. It's just that they think it's a distraction and not really going to be worth anything in the long run. If this proves to be false and the coins hold value, these companies will most likely end up distributing them to users.

If you know anything about cryptocurrencies you know there are a ton of them. Like thousands of them. Some are legitimate and substantially different from (arguably better than) bitcoin, and some are pretty much just copycats trying to make a quick buck.

Bitcoin cash is just another modified cryptocurrency.

But it's getting more attention right now for a few reasons:

First, it was created as a result of forking bitcoin core, and not created from scratch. But this isn't new; other cryptocurrencies have also forked from bitcoin in the past, and are nowhere near as valuable as bitcoin cash currently is. That being said, it does mean that anyone who held bitcoin before yesterday now potentially has access to an equal amount of bitcoin cash, which is giving it a lot of attention, as people are saying it's free money.

Secondly, it's getting attention because the hard fork was timed to coincide with bitcoin core activating a change in its code called BIP 148, which was a highly publicized event in itself. This Bitcoin Improvement Proposal was the result of months of negotiation among major players and activated Segregated Witness, something that will help bitcoin core scale going forward.

Right now, bitcoin cash is actually worth quite a bit, on paper at least. Some are trading it at around a value of $400 per coin, which makes it the fourth-largest cryptocurrency by market cap right now.

But here's the thing: it's currently really hard to sell bitcoin cash. While some exchanges have added the new currency for trading, liquidity is super low, which is why some say the price is being artificially inflated. Because most exchanges aren't accepting deposits yet, the only bitcoin cash available to trade is currency that was credited by exchanges after the fork. Users holding bitcoin cash outside of exchanges, or in exchanges that don't support trading, are stuck waiting.

So the moral of the story is that there's probably a ton of bitcoin cash waiting to be sold, as soon as people can transfer it. That's because there's not a whole lot of incentive to keep the coins, especially when people think it is overvalued and want to quickly cash out. And the price has already fallen; take a look at the price movement today in USD. It's already down from a high of $680 to around $350 on Bitfinex, one exchange that is offering a market for the new currency.

Now this isn't to say it's going to be worthless. Just look at Ethereum Classic, a hard fork of Ethereum. After that fork it dropped to about $1 per ETC, but a few months later is now worth around $15 per ETC. Of course, this price pales in comparison to the $220 that regular Ethereum currently trades at.

By the way, if you're wondering why exchanges aren't accepting deposits of bitcoin cash, it's because it's nearly impossible to send bitcoin cash over the blockchain right now. This is because the newly forked blockchain hasn't yet adjusted its difficulty, which happens automatically every 2016 blocks. So it's taking way too long to mine blocks and confirm transactions. For reference, one block today took 10 hours to mine, compared to the 10 minutes it should. Most exchanges require 6 or 7 block confirmations before they credit a deposit, so you can see how it's basically impossible to move around bitcoin cash.
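
The arithmetic behind that bottleneck is straightforward. Using the figures in the article (roughly 10-hour blocks instead of 10-minute ones, about 6 confirmations before an exchange credits a deposit, and a difficulty retarget every 2016 blocks), a quick calculation shows why deposits were effectively frozen; the sketch below just multiplies those numbers out.

```python
MINUTES_PER_BLOCK_TARGET = 10         # bitcoin's normal block interval
MINUTES_PER_BLOCK_OBSERVED = 10 * 60  # roughly 10 hours per block, per the article
CONFIRMATIONS_REQUIRED = 6            # typical exchange deposit requirement
BLOCKS_PER_RETARGET = 2016            # difficulty adjusts every 2016 blocks

wait_normal_hours = CONFIRMATIONS_REQUIRED * MINUTES_PER_BLOCK_TARGET / 60
wait_observed_hours = CONFIRMATIONS_REQUIRED * MINUTES_PER_BLOCK_OBSERVED / 60

print(wait_normal_hours)    # 1.0 hour for 6 confirmations at the normal rate
print(wait_observed_hours)  # 60.0 hours at ~10 hours per block

# At that pace, even reaching the next scheduled retarget would take on the
# order of 2016 * 10 hours (about 840 days) absent any extra adjustment rule.
print(BLOCKS_PER_RETARGET * 10 / 24)  # 840.0
```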

So what's next? The general consensus in the cryptocurrency community is that most people are just going to sell bitcoin cash as soon as they get the chance to, which, if it happens, will further drive down the price. But there's always a chance that people will flock to this coin and it actually retains or appreciates in value. Essentially, like everything else in crypto, no one knows what's about to happen next.

Read more:
WTF is bitcoin cash and is it worth anything? - TechCrunch

CBOE plans to launch bitcoin futures, announces agreement with Winklevoss brothers’ digital currency exchange – CNBC

The Wall Street Journal first reported news of the agreement Tuesday.

"We very much look forward to responding to the growing interest in cryptocurrencies through the creation of bitcoin futures traded on a regulated derivatives exchange," CBOE Holdings Chairman and CEO Ed Tilly said in a release.

CBOE Holdings' other subsidiaries include the Bats exchanges.

In late April, the U.S. Securities and Exchange Commission said it would review its rejection of the Winklevoss brothers' application to list a bitcoin exchange-traded fund on the Bats BZX exchange.

The SEC declined to comment.

"By working with the team at CBOE, we are helping to make bitcoin and other cryptocurrencies increasingly accessible to both retail and institutional investors," Gemini CEO Tyler Winklevoss said in the release.

On July 24, the CFTC announced it approved digital currency-trading platform LedgerX for clearing derivatives, which would mark the first federally supervised options venue for bitcoin.

LedgerX said at the time it plans to launch bitcoin options in early fall for institutional investors, although those firms could, in turn, offer retail investor products.

Bitcoin has more than doubled in value this year, while rival digital currency Ethereum has gained more than 2,000 percent. The value of all digital currencies has jumped from around $20 billion at the beginning of this year to more than $100 billion, according to CoinMarketCap.

More here:
CBOE plans to launch bitcoin futures, announces agreement with Winklevoss brothers' digital currency exchange - CNBC

UASF Revisited: Will Bitcoin’s User Revolt Leave a Lasting Legacy? – CoinDesk

Attention this week has so far focused on a group of bitcoin users that successfully split off the blockchain to form their own cryptocurrency.

But fascinating as the real-time market creation of Bitcoin Cash has been, for those who have closely watched developments, August 1 marked another lesser-acknowledged milestone: the passing of the deadline for a controversial scaling proposal, Bitcoin Improvement Proposal (BIP) 148.

That's when a vocal group of users had scheduled a so-called "Independence Day." The goal was to push through a long-stalled coding optimization called Segregated Witness (SegWit), designed to increase and redefine the network's capacity. The software upgrade would find node operators (users who store transaction history) initiating the move, hoping to lead the way for miners and startups.

And while it's faded a bit into the background, you could argue that, even amidst a busy season for scaling proposals, BIP 148 was perhaps the most influential.

The scaling "agreement" Segwit2x followed soon after, proposing to add a feature that BIP 148 wouldn't have provided: a boost to the block size parameter. Bitcoin Cash was even more explicitly a response to BIP 148 hence, why both were scheduled for the same day.

But those two proposals had one other thing in common, and that was giving some degree of power over the software transition to the miners that secure the blockchain.

Before its introduction, for example, SegWit had stalled for months due to its reliance on the idea miners would signal support to activate the change. However, only about 25 to 50% of mining pools did so from November to June.

Then, suddenly, two weeks before the scheduled UASF, and with little time to spare, mining pools rallied around either Segwit2x or BIP 91 on its own, to activate SegWit.
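
Miner signaling of this kind reduces to counting flagged blocks in a rolling window and locking in once a threshold is met. The sketch below is a simplified model of that process; the 80% threshold over a 336-block window approximates BIP 91's parameters and is used here only for illustration.

```python
WINDOW = 336       # approximate BIP 91 measurement window, in blocks
THRESHOLD = 0.80   # approximate share of signaling blocks needed to lock in

def locked_in(signal_flags: list) -> bool:
    """True if any full window of consecutive blocks meets the signaling threshold."""
    for start in range(len(signal_flags) - WINDOW + 1):
        window = signal_flags[start:start + WINDOW]
        if sum(window) / WINDOW >= THRESHOLD:
            return True
    return False

# 50% signaling (roughly what was seen for months) never locks in...
print(locked_in([i % 2 == 0 for i in range(1000)]))  # False
# ...while near-unanimous signaling locks in within a single window.
print(locked_in([True] * 400))                        # True
```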

UASF supporters don't see this asa coincidence.

Blockchain startup founder Ragnar Lifthrasir, a public UASF proponent, told CoinDesk:

"UASF worked as designed and predicted, it is activating SegWit."

It's a narrative that adds evidence to the idea that some changes to the bitcoin protocol (and perhaps all public blockchains) are destined to be political.

As ethereum classic and bitcoin cash have now proved, there's capital to be created in splits. The more nuanced argument is that they also seek to aid research and understanding of the science behind open blockchains, though with economic risk to users.

In bitcoin, it could be said the scaling debate has called to mind the balance of power between its major network participants: startups, miners, developers and users. And the argument continues to be that UASF was a movement of the people, one that, like any social revolution, was perhaps destined to be feared by the powers that be.

While bitcoin users may be predisposed to such narratives, it's certainly one that has resonated with supporters.

"We found out that not just miners, but some VCs and bitcoin startups didn't like the power of users, that's why they came up with Segwit2x, to obscure UASF's success and precedent," Lifthrasir he said.

He argues that it was a question of incentives. Mining pools didn't want to risk that their 12.5 bitcoin block rewards (worth approximately $33,000 today) would be rejected, but they also didn't want to support the UASF effort.

"This means hashing power follows nodes and users, not the reverse," Lifthrasir argued, and he isn't the only UASF supporter to feel this way (or that this is important).

"Basically, BIP 148 was an early success," remarked Bitcoin Core developer Luke Dashjr, one of the UASF'smore ardent supporters.

In the end, BIP 148 was sidestepped by another network proposal. But the mere threat of action, supporters argue, was enough.

Calin Culianu, a developer for Bitcoin Cash, the version of the bitcoin protocol boasting no SegWit and 8MB blocks, even agreed that Segwit2x was likely a response to BIP 148 on some level.

Although, Culianu has a different way of thinking about it, arguing that BIP 148 supporters used scare tactics to make it sound like it had more support than it did.

"Miners got antsy, people got scared and everyone met in New York to hash out a plan," he said, alluding to how Segwit2x was originally determined.

The question now seems to be whether this tactic is good for bitcoin's development.

Culianu almost seesawed on the question of whether UASF was a good thing for his project, as it could be said it spurred the "big block movement" to action.

He concluded:

"UASF was the spark that made all this happen, for better or for worse."

Image via Michael del Castillo for CoinDesk

Here is the original post:
UASF Revisited: Will Bitcoin's User Revolt Leave a Lasting Legacy? - CoinDesk
