‘FG should create framework for cryptocurrency trading’ – Guardian

The slow adoption of cryptocurrency trading in Nigeria has been blamed on the Federal Government's failure to adopt a framework for implementing blockchain technology that would enable its participation. Many countries have already done so, but for various reasons it has not happened in Nigeria.

This was the view of Peter Ayoade Moradeyo, Chief Executive Officer/Principal Consultant of Crypto Plus Certified, in Lagos, who stressed that there was a need to educate Nigerians appropriately on what cryptocurrency really means and what disruption it will address.

Cryptocurrency is a digital asset built on blockchain technology, designed to work as a medium of exchange that uses cryptography to secure transactions and to control the creation of additional units of the currency. Cryptocurrencies are a subset of alternative currencies, or more specifically, digital currencies.
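The cryptographic side of that definition can be pictured with a toy sketch (an illustration only, not any real currency's scheme): hash a transaction record with SHA-256, and any tampering with the record changes the digest completely.

```python
import hashlib

def transaction_digest(sender: str, receiver: str, amount: float) -> str:
    """Hash a toy transaction record with SHA-256."""
    record = f"{sender}->{receiver}:{amount}".encode()
    return hashlib.sha256(record).hexdigest()

# Even a one-cent change produces a completely unrelated digest,
# which is what makes tampering with recorded transactions detectable.
print(transaction_digest("alice", "bob", 1.00))
print(transaction_digest("alice", "bob", 1.01))
```

Real cryptocurrencies build on this property, chaining hashes of blocks of transactions so that altering history invalidates every subsequent block.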

As it is today, reports have it that Automated Teller Machines (ATMs) for cryptocurrencies have been launched in Canada, the U.K., Germany, South Korea, Brazil, and India, among others, with the sole aim of aiding banking technology.

Moradeyo said the success story of cryptocurrency dates back to 2009, in the wake of the financial crisis, when Bitcoin was created on blockchain technology, adding that it was introduced first in Asian countries and spread gradually to Europe, the U.S. and elsewhere.

Here is the original post:
'FG should create framework for cryptocurrency trading' - Guardian

Kraken opens Dash trading as cryptocurrency trading volumes soar – LeapRate

Dash, the rising alternative to bitcoin, today announced a partnership with Kraken Digital Asset Exchange, one of the world's oldest bitcoin exchanges, with the largest selection of digital assets and national currencies.

Which DASH pairs will be available for trading?

The partnership between Dash and Kraken comes in the wake of a record surge for the cryptocurrency, which experienced a 6x increase in price (from $11 to $72 USD) and a 10x increase in trading volume (from $3 million to $30 million USD) across Q1.

Dash VP of Business Development, Daniel Diaz, said:

"Kraken is excited to offer Dash on their trading platform and our teams are working closely to ensure clients can begin trading the currency immediately. Kraken is an incredibly well established and well structured organization, and amongst the best in the exchange business. In terms of reputation, they represent the highest standard for client satisfaction. Dash is a project that has implemented very original ideas that resonate well with the market, and as a top-tier exchange, Kraken's mission is to provide clients with access to digital currencies that are in demand and provide value."

Following several business partnerships around the world, the implementation of the Sentinel software upgrade, and the announcement of a revolutionary decentralized payments system called Evolution, Dash has been on a record-breaking trajectory.

Its total market cap skyrocketed from $78 million USD (January 1) to an all-time high of $835 million USD (March 18), with new international markets unlocked alongside user demand.

Daniel Diaz continued:

"As the leading exchange in the Euro market, Kraken's global reach helps Dash successfully meet the needs of our users and investors. The entire integration experience was very positive and we have high expectations for the partnership going forward. This is a significant achievement for Dash because our ecosystem needs high-quality and trustworthy exchanges like Kraken to thrive, and we know they will play an important role as a fiat gateway."

Based in San Francisco with offices around the world, Kraken's trading platform is consistently rated the best and most secure digital asset exchange by independent news media. Kraken investors include Blockchain Capital, Digital Currency Group, Hummingbird Ventures, Money Partners Group, and SBI Investment.

Kraken is expected to offer Dash margin trading in the near future.

See more here:
Kraken opens Dash trading as cryptocurrency trading volumes soar - LeapRate

Using Komprise to Archive Cold Data to Cloud Storage – DABCC.com

This article discusses using Komprise to analyze data across on-premises storage to identify cold data and then move it to Google Cloud Storage.

Typically, 60% to 80% of data is infrequently accessed within months of creation, yet consumes the same expensive resources as hot data.

Komprise is analytics-driven data management software that analyzes data usage and growth across on-premises storage. Komprise identifies cold data and then moves it transparently to the appropriate class of Cloud Storage based on customer-defined policies.

To support both existing on-premises and new cloud-native use cases, the moved data is accessible both as files, exactly as before, and as files or objects in the cloud.
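The cold-data identification step can be pictured with a small sketch (a generic illustration of policy-based tiering, not Komprise's actual implementation): walk a directory tree and flag files whose last access time is older than a policy-defined cutoff.

```python
import os
import time

def find_cold_files(root: str, days: int = 180):
    """Yield paths under `root` not accessed in the last `days` days."""
    cutoff = time.time() - days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    yield path
            except OSError:
                continue  # file vanished or is unreadable; skip it
```

Note that access times are only reliable if the filesystem records them (many mounts use `noatime`), which is one reason production tools track usage analytics rather than relying on `atime` alone.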

As data footprints expand rapidly, Komprise is working with customers across industries such as financial services, healthcare, and engineering who are streamlining costs, building a path to the cloud, and increasing the resiliency of their data in use cases like the following:

Read the entire article here: Using Komprise to Archive Cold Data to Cloud Storage

More:
Using Komprise to Archive Cold Data to Cloud Storage - DABCC.com

FPT Telecom and IIJ launch cloud computing service in Vietnam – Nikkei Asian Review

HO CHI MINH CITY -- FPT Telecom of Vietnam and Internet Initiative Japan on Thursday launched a cloud computing service for individual, business and enterprise customers in Vietnam.

FPT Telecom has called the new service, FPT HI GIO Cloud, the first full-scale, full-spectrum and quality cloud computing service in Vietnam.

FPT and IIJ launched the FPT HI GIO Cloud service in Vietnam on Thursday.

Nguyen Van Khoa, general director of FPT Telecom, which is part of leading Vietnamese information technology group FPT, stressed that the new service would provide access to computing services via a stable network.

The product enables users to quickly launch virtual machines instead of investing in physical devices.

"We will lead the market in Vietnam to tap demand for cloud computing," General Director of IIJ Global Solutions Vietnam Ryo Matsumoto told the Nikkei Asian Review. FPT Telecom aims to acquire around 4,000 enterprise customers within a year. FPT Telecom and IIJ also aim to tap individual customers in a country where 70% of the more than 90 million population is expected to have access to the internet by 2020.

IIJ, one of Japan's leading internet and network solutions providers, has already launched similar cloud services in Singapore, Indonesia, and Thailand. FPT Telecom and IIJ are looking to launch additional joint projects related to security and network management in the coming years, according to Matsumoto.

In Vietnam, global players such as IBM, Google, Symantec, Amazon, Oracle and Microsoft have launched their own cloud and joint services by teaming up with local telecom companies and using mobile broadband infrastructure. FPT and IIJ's partnership will further intensify competition in this area.

Vietnam ranked 14th in the Asia-Pacific region in the Asia Cloud Computing Association's Cloud Readiness Index 2016, coming after Singapore, Malaysia, Thailand, the Philippines and Indonesia.

See the rest here:
FPT Telecom and IIJ launch cloud computing service in Vietnam - Nikkei Asian Review

Prepare your server fleet for a private cloud implementation – TechTarget

Private cloud services promise flexibility and scalability, while allowing organizations to maintain full control of their enterprise data centers. It's a compelling goal -- but private cloud implementation can be challenging and frustrating.

The path from a traditional data center to a private cloud starts at the lowest levels of the infrastructure. IT leaders must evaluate their current server fleet to ensure that each system offers the features needed to support virtualization and the subsequent cloud stack. Here are some considerations that can help sanity check whether your data center infrastructure is ready for private cloud implementation.

It's important to understand individual processor technologies and properly enable each feature before you deploy hypervisors and, eventually, the private cloud stack. For example, processors will invariably require hardware virtualization support through processor extensions, including Intel Virtualization Technology and AMD Virtualization. This technology typically includes support for the second level address translation required to translate physical memory space to virtual memory space at processor hardware speeds.
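On Linux, one way to confirm these extensions are present and exposed by the firmware is to inspect the CPU flags; a minimal sketch, assuming input in the `/proc/cpuinfo` format:

```python
def virtualization_support(cpuinfo_text):
    """Return 'Intel VT-x', 'AMD-V', or None based on the CPU flags line.

    On Linux, pass in the contents of /proc/cpuinfo.
    """
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:   # Intel Virtualization Technology
                return "Intel VT-x"
            if "svm" in flags:   # AMD Virtualization
                return "AMD-V"
    return None

# Typical usage on a Linux host:
# with open("/proc/cpuinfo") as f:
#     print(virtualization_support(f.read()))
```

If the flag is absent even on capable hardware, the extension is usually disabled in the BIOS/UEFI setup rather than missing from the processor.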

The path from a traditional data center to a private cloud starts at the lowest levels of the infrastructure.

Enable AMD No eXecute (NX) and the Intel eXecute Disable (XD) bits for processors, which will mark memory pages to prevent buffer overflow attacks and other malicious software exploits. You can typically enable processor extensions and NX/XD bits through the system BIOS or the Unified Extensible Firmware Interface (UEFI).

Consider the processor core/thread count for each server. Hypervisors, such as ESXi 6.0, demand a host server with at least two processor cores, but this is generally a bare minimum system requirement. Additional processor cores will vastly expand the number of VMs and workloads that each server can handle, and you can treat each additional processor thread as a separate core. For example, an AMD Opteron 6200 Series processor can support VMware ESXi 6.5 with eight cores and a total of 16 threads; an Intel Xeon E5-2600-v4 Series processor offers 24 cores and a total of 48 threads.

Finally, consider the availability of UEFI on the server. UEFI is a later type of system firmware -- a kind of advanced BIOS -- that allows additional flexible boot choices. For example, UEFI allows servers to boot from hard disk drives, optical discs and USB media -- all larger than 2 TB. However, it's important to evaluate the boot limitations of the hypervisor. As an example, ESXi 6.0 does not support network booting or provisioning with VMware Auto Deploy features under UEFI -- this requires traditional BIOS. If you change from BIOS to UEFI after you install a hypervisor, it might cause boot problems on the system. Consequently, it's a good idea to identify the firmware when identifying processors on each server.

Every VM or container exists and runs in a portion of a server's physical memory space, so memory capacity plays a critical role in server virtualization and in private cloud implementation. Hypervisors, such as ESXi, typically recommend a system with at least 8 GB to host the hypervisor and allow capacity for at least some VMs in production environments. Private cloud stacks such as OpenStack are even lighter, recommending only 2 GB for the platform -- each VM will demand more memory.

However, such memory recommendations are almost trivial when compared to the memory capacity of modern servers. As an example, a Dell R610 rackmount server is rated to 192 GB, while a Dell R720 is rated to 768 GB of memory capacity. This means existing enterprise-class servers already possess far more than the required minimum amount of memory needed for virtualization and a private cloud implementation. The real question becomes: how many VMs or containers do you intend to operate on the server, and how much memory will you provision to each instance? These considerations can vary dramatically between organizations.
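The provisioning question reduces to back-of-the-envelope arithmetic; a sketch (the 8 GB hypervisor reservation and per-VM sizing are assumptions for illustration):

```python
def max_vm_count(host_memory_gb, hypervisor_overhead_gb, vm_memory_gb):
    """How many equally sized VMs fit in a host's physical memory."""
    usable = host_memory_gb - hypervisor_overhead_gb
    return max(usable // vm_memory_gb, 0)

# A 192 GB host reserving 8 GB for the hypervisor, at 8 GB per VM:
print(max_vm_count(192, 8, 8))   # 23 VMs
# The same sizing on a 768 GB host:
print(max_vm_count(768, 8, 8))   # 95 VMs
```

Real capacity planning also accounts for memory overcommit, ballooning and headroom for failover, so treat this as an upper bound on static allocation rather than a target.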

As you virtualize, and place more workloads on, physical servers, network utilization increases dramatically. Network limitations can cause contention between workloads and result in network bandwidth bottlenecks that can impair the performance and stability of other workloads. This can be particularly troublesome during high-bandwidth tasks like VM backups, especially when multiple VMs attempt the same high-bandwidth tasks simultaneously.

This makes adequate bandwidth and network architecture choices critical on the road to private cloud implementation. A hypervisor, such as ESXi, typically demands at least one Gigabit Ethernet (GbE) port. Although a faster Ethernet port, such as 10 GbE, can alleviate bandwidth bottlenecks, it is often preferable to deploy two or more GbE ports instead. Multiple physical ports can present several important benefits. For example, you can combine multiple GbE ports to aggregate the bandwidth of slower, less expensive network adapters and cabling infrastructure. This can also build resilience, since a port failure at the server or corresponding switch port can fail over to another port.

Storage is another core attribute of virtualization, so pay close attention to issues like storage capacity. A hypervisor like ESXi typically needs about 10 GB of storage divided between a boot device -- which creates a VMFS volume -- and a scratch partition on the boot device. Private cloud platforms like OpenStack recommend at least 50 GB of disk space. The real capacity issue depends on the number of VMs and the amount of storage you allocate to each VM instance. An environment that uses a few fixed VM disk images may need less capacity than an environment that deploys many different VM images with various storage requirements. As a rule, 1 TB should be adequate for a typical virtualized server.

Local storage capacity is typically not a gating issue with modern servers and storage equipment. In actual practice, however, enterprise servers rarely depend on local per-server storage, and instead use shared storage systems. In this case, the primary server concern may be adequate local storage to boot the system, deferring to a storage area network (SAN) for VM and workload data retention. This means the server should include adequate SAN support, such as two or more dedicated Ethernet ports (i.e., iSCSI or FCoE) or Fibre Channel ports for redundant SAN connectivity. Disks should always provide some level of RAID support -- RAID 5 or even RAID 6 can offer strong data protection and the ability to rebuild to hot spare disks.

As more VMs coexist on fewer physical servers, a server fault or failure can impact more VMs, which can be disruptive. As a business embraces virtualization and moves toward private cloud implementation, the underlying server hardware should include an array of resiliency features that can forestall failures.

Critical server hardware should include redundant power supplies and intelligent, firmware-based self-diagnostics that can help technicians identify and isolate faults. Modern servers typically include a baseboard management controller capable of system monitoring and management. If a server fails, it may be crucial to remove and replace the failed unit quickly.

Inside the server, select and enable memory resilience features: advanced error correcting code to catch single- and multi-bit errors; memory mirroring; hot spares that can swap in a backup DIMM if one DIMM fails; and memory scrubbing -- sometimes called demand and patrol scrubbing -- which searches for and addresses memory errors on demand or at regular intervals.

Any capable configuration management tool or framework can summarize and report many of these attributes for you directly from the local configuration management database. This can ease the time-consuming and error-prone manual review of physical systems and hypervisors. But a review of servers and hypervisors is really just the start of a private cloud implementation -- they form the critical cornerstone for other components, like storage, networks and software stacks, within the infrastructure.

OpenStack support lifecycles grow for the enterprise

The on-premises vs. cloud computing battle continues

Don't label all infrastructure as a commodity

Continue reading here:
Prepare your server fleet for a private cloud implementation - TechTarget

Is That Old Cloud Instance Running? How Visibility Saves Money in the Cloud – Business 2 Community

Is that old cloud instance running?

Perhaps you've heard this around the office. It shouldn't be too surprising: anyone who's ever tried to load the Amazon EC2 console has quickly found how difficult it is to keep a handle on everything that is running. Only one region gets displayed at a time, which makes it common for admins to be surprised when the bill comes at the end of the month. In today's distributed world, it not only makes sense for different instances to be running in different geographical regions, but it's encouraged from an availability perspective.

On top of this multi-region setup, many organizations are moving to a multi-cloud strategy as well. Many executives are stressing to their operations teams that it's important to run systems in both Azure and AWS. This provides extreme levels of reliability, but also complicates the day-to-day management of cloud instances.

So is that old cloud instance running?

You may get a chuckle out of the idea that IT administrators can lose servers, but it happens more frequently than we like to admit. If you only ever log in to US-East1, then you might forget that your dev team that lives in San Francisco was using US-West2 as their main development environment. Or perhaps you set up a second cloud environment to make sure your apps all work properly, but forgot to shut them down prior to going back to your main cloud.

That's where a single-view dashboard can provide administrators with unprecedented visibility into their cloud accounts. This is a huge benefit that leads to cost savings right off the bat, as the cloud servers you forgot about or thought you turned off can be seen in a single pane of glass. Knowledge is power: now that you know an instance exists, you can turn it off. You also get an easy view into how your environment changes over time, so you'll be aware if instances get spun up in various regions.

This level of visibility also has a freeing effect, as it can lead you to utilize more regions without fear of losing instances. Many folks know they should be distributed geographically, but don't want to deal with the headache of keeping track of the sprawl. By tracking all of your regions and accounts in one easy-to-use view, you can start to fully benefit from cloud computing without wasting money on unused resources.
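The cross-region inventory such a dashboard builds can be sketched with boto3 (AWS's Python SDK); this assumes configured AWS credentials, and the summary helper works on any region-to-instances mapping:

```python
def instances_by_region():
    """Collect every EC2 instance ID, across all regions, into one dict."""
    import boto3  # assumes AWS credentials are configured in the environment
    ec2 = boto3.client("ec2", region_name="us-east-1")
    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
    inventory = {}
    for region in regions:
        client = boto3.client("ec2", region_name=region)
        inventory[region] = [
            inst["InstanceId"]
            for reservation in client.describe_instances()["Reservations"]
            for inst in reservation["Instances"]
        ]
    return inventory

def summarize(inventory):
    """One entry per region that actually has instances -- the single pane."""
    return {region: len(ids) for region, ids in inventory.items() if ids}

# summarize(instances_by_region()) might reveal instances in a region
# you had forgotten about, e.g. a dev environment left running in us-west-2.
```

A periodic run of something like this, diffed against the previous snapshot, is essentially the "how your environment changes over time" view described above.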

Go here to see the original:
Is That Old Cloud Instance Running? How Visibility Saves Money in the Cloud - Business 2 Community

Alphabet’s Verily shows off health-focused smartwatch – Ars Technica

Enlarge / The Verily Study Watch, strategically photographed to not show how thick it is.

Alphabet's Life Sciences division, called Verily, is giving the world a peek at its health-focused smartwatch. The Google sister company introduced the "Verily Study Watch" on its blog today, calling it an "investigational device" that aims to "passively capture health data" for medical studies.

Many wearables technically capture health data with simple heart-rate sensors, but Verily's watch aims to be a real medical device. The blog post says the device can track "relevant signals for studies spanning cardiovascular, movement disorders, and other areas." The Study Watch does this by using electrocardiography (ECG) and by measuring electrodermal activity and inertial movements.

The Study Watch beams this data to Verily's cloud infrastructure for all sorts of big-data analysis. Study Watch seems to be the Verily hardware platform of the future, with the company saying the watch will be used in several studies being run by Verily and its partners. The company specifically said the watch would be used in "Baseline Study," a Verily project that aims to measure what a healthy human looks like, and the "Personalized Parkinson's Project."

With the Study Watch meant to be an unobtrusive way to collect medical data, battery life is a concern. Verily promises "a long battery life of up to one week" for the device. The "always-on" display seems to be e-ink, which is practically a requirement for any watch with a week-long battery life. Verily also gave the watch enough storage to keep "weeks' worth of raw data" encrypted on the device, removing the need to frequently sync with cloud servers. There also isn't much in the way of user features: Study Watch displays the time and date, and that's it for now. The watch is capable of getting over-the-air software updates, though, so the interface might change.

There's no word on price, as the Study Watch is "not for sale." It's just something that will be given out to participants in Verily's medical studies.

More:
Alphabet's Verily shows off health-focused smartwatch - Ars Technica

Solving the puzzle of hybrid cloud [Q&A] – BetaNews

Many enterprises are moving towards hybrid cloud environments, but they face a challenge when it comes to working out how to control their cloud use effectively.

If they fail to do this and govern their cloud use properly, then any gains in agility they achieve will come with high costs and operational risks. We spoke to Andrew Hillier, CTO of Cirba, the company behind the Densify.com SaaS hybrid capacity analytics software, to find out how enterprises can bridge the gap between cloud hype and reality.

BN: What are the main factors driving the move towards cloud use?

AH: There are numerous factors that are driving the move towards cloud use for all sizes of enterprises. Agility is cited the most, as public cloud allows businesses to scale their operations up or down depending on needs, while quickly responding to the business. Cost competitiveness is another. We have even heard from some customers whose boards are pushing to get out of the business of owning and operating data centers. And even if that isn't the case, most organizations want to avoid expanding their physical footprint or building more data centers -- we have one customer that was running out of power in their data center and had the choice of relocating or using the cloud. The reasons are as varied as the organizations using cloud resources.

BN: What's the impact to the business of IT infrastructure in the cloud now being an operational rather than a capital expense?

AH: One of the benefits of old school data centers is that organizations knew how to plan out their costs. They typically over-provisioned to plan for growth and sunk a bunch of money into hardware and software. The move to cloud presents a more dynamic cost structure with variable usage, costs and even changing vendor catalogues and cost structures. That traditional planning exercise is still important, but rather than happening cyclically, it is now an ongoing, dynamic process. This means that everything needs to happen faster, and it requires consistency in approach, with the proper governance and policies in place to not make a mess or have costs spiral out of control. To constantly optimize public cloud OPEX, and to strike the right balance between incurring new costs versus leveraging the sunk CAPEX costs of existing infrastructure, requires more than spreadsheets and smart people. It needs analytics that can incorporate a dynamic model of demand and a deep, up-to-date understanding of the various hosting environments, to scientifically determine the best options for workload placements and resource allocations. We are finding that most companies don't have their heads around how to do this yet.

BN: How important is proper governance of cloud strategy?

AH: All of the benefits of migrating towards a cloud strategy -- flexibility, scalability, security, etc. -- are not worth anything if your IT department hasn't figured out how to properly manage and govern its cloud usage effectively. Proper governance is the most important aspect of a successful cloud migration strategy, but it is often an afterthought in many organizations, as early cloud adopters in Dev/Test didn't always need to deal with governance rules, and they didn't experience the long-term costs that can rack up when things move into production. In fact, I've heard surprised reactions from many IT decision makers who began the migration process and were shocked when expected savings turned into higher monthly costs.

To avoid this, companies need a formal set of criteria and guidelines for deciding where to place workloads in order to effectively leverage both on-premise and cloud infrastructure. And this isn't just a documentation and awareness exercise -- these criteria need to be codified into the analytics and automation systems being used, so the right decisions can be made rapidly and automatically. This will help not just in governance and compliance, but also in ongoing cost control. All in all, it's important that CTOs and IT teams remember that the reality of the cloud won't ever live up to the hype if it's not being governed and controlled properly.

BN: Does too much control risk loss of agility?

AH: It doesn't have to -- the ability to codify the ground rules and use analytics to optimize and automate cloud hosting decisions means that complex hybrid decisions can be made in real time, giving the business exactly what they need without slowing them down. But don't expect cloud provisioning and orchestration tools to have this level of intelligence -- being good at provisioning a new system is completely different from being very good at figuring out exactly where that system should be provisioned, especially in hybrid cloud. To automate these processes and not create an ungoverned mess, you need analytics that can model all necessary criteria for the workloads, the capabilities of the infrastructure and the governing policies.

BN: What's the future for hybrid cloud?

AH: One thing I think we can all agree on is that cloud is becoming more widely accepted and will play an increasing role in infrastructure strategies. How much ends up in the cloud is anyone's guess at this point, and will likely vary significantly for each organization depending on the nature of their applications and the patterns of their workloads, but we do know that customers see public cloud as the yardstick for speed and cost against which internal IT departments will be measured. I believe this will be the year that many companies will finally figure out how to leverage that yardstick to scientifically determine what to migrate to the cloud successfully, and what not to. By properly analyzing workloads, many organizations will undoubtedly find that a portion of their applications are more cost-effective and efficient in on-prem virtual environments, at least until their existing data centers run out of space.

And we shouldn't underestimate the power of virtualization -- the rise of interest in bare metal offerings, where you can bring your own hypervisor, is also changing the dynamic. The ability to intelligently stack workloads and overcommit resources is clearly very powerful, and also translates to the cloud as a powerful way to drive cost savings. VMware's partnership with AWS is going to enable this, and it could have a major impact on the way organizations adopt 'cloud,' as well as the systems they need to manage and optimize these hybrid environments.

Image Credit: Maksim Kabakou / Shutterstock

Excerpt from:
Solving the puzzle of hybrid cloud [Q&A] - BetaNews

Bitcoin bears ramp up bets virtual currency will fall – MarketWatch

Bets that the bitcoin price will fall have surpassed bets that it will rise for the first time since February on one of the world's largest digital currency exchanges, stoking contrarian speculation that the digital currency could be headed for a new all-time high.

Open short interest on Bitfinex surpassed long interest on Thursday for the first time since February. The last time shorts eclipsed longs, the bitcoin price gained nearly $300 over the following three weeks, rising from $993 on Feb 13 to an all-time high of $1,285.

Bitfinex, a top digital currency exchange by trading volume, is also one of only a handful to allow customers to trade on margin.

In financial markets, short interest is sometimes viewed as a contrarian indicator because it leaves assets vulnerable to what's known as a short squeeze. In a short squeeze, investors who bet against the asset are forced to buy it back to close out their positions at a loss, causing the price to move sharply in the other direction.
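The mechanics of that loss are easy to see with a toy profit-and-loss calculation (the position size here is an illustrative assumption, using the article's price levels):

```python
def short_pnl(entry_price, exit_price, quantity):
    """P&L of a short position: sell at entry_price, buy back at exit_price.

    Positive means profit; negative means the squeeze cost money.
    """
    return (entry_price - exit_price) * quantity

# Shorting 10 BTC at $993 and being forced to cover at $1,285:
print(short_pnl(993, 1285, 10))   # -2920, i.e. a $2,920 loss
```

Because the forced buybacks themselves add demand, each covering trade can push the price higher and squeeze the remaining shorts harder.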

"The buildup in shorts proves that the bitcoin scaling debate isn't over yet," said Chris Dannen, a founding partner at Iterative Instinct, a small New York-based private-equity fund that trades crypto-assets.

Dannen was referring to a rift in the bitcoin community over how to upgrade bitcoin's software to allow the network to process transactions more quickly and efficiently.

That debate has quieted down in recent weeks as bitcoin miners have backed away from a controversial proposal called Bitcoin Unlimited that would have raised the limit on how much transaction data can be stored in each block of the bitcoin blockchain.

Investors feared that, if support for the proposal passed a certain threshold, but fell short of unanimous adoption, it could split the network into two different coins.

Amith Nirgunarthy, director of marketing and high net-worth partnerships at Bitcoin IRA, was reluctant to read too much into the shift in positioning.

"There's been a lot of good news in the cryptocurrency space as of late," he said.

Earlier this week, Blockchain Capital, a venture fund focused on blockchain initiatives, closed its $10 million initial coin offering, the first of its kind in the U.S., after just six hours.

A blockchain is a decentralized, cryptographically secured ledger that powers digital currencies like bitcoin.

The bitcoin BTCUSD, +0.97% price retreated on Thursday after touching its highest level in three weeks. One coin was recently trading at $1,165.

View original post here:
Bitcoin bears ramp up bets virtual currency will fall - MarketWatch

Bitcoin After Eight Years: More Virtual Than Real? – Wall Street Journal – Wall Street Journal (blog) (subscription)


Excerpt from:
Bitcoin After Eight Years: More Virtual Than Real? - Wall Street Journal - Wall Street Journal (blog) (subscription)
