
School phones go on ‘the cloud’ – The Ridgefield Press

Out with the old, in with the new phones.

That's the motto Ridgefield Public Schools is embracing this summer after suffering 45 outages with its phone system over the last two years.

Dr. Robert Miller, the district's technology director, announced Monday that all nine schools have transitioned onto a cloud system that's hosted by TPX Communications.

"We've heard a lot of very positive thank-yous and feedback from the staff here with the first part of the rollout yesterday," he said Tuesday.

Miller told The Press that under the new system, which taxpayers approved at a $550,000 price tag this spring, every district phone connects out through the Internet over to TPX's cloud-based servers.

"The service provider is where it gets all the information to and from," he said. "It gets the calls and routes the calls, and it really becomes our service provider for our telephone services."

Miller said that the change decreases the odds of future outages.

"There's always disadvantages, but I wouldn't necessarily say that they're drawbacks," he said. "There's really a lot less that could go wrong."

On the cloud

The district started the bid process for a cloud-based system in March, distributing a request for proposal online and throughout social media.

Miller received a total of 12 proposals.

"We brought the top two vendors in for a product demonstration," he said. "When we were all done, our team debated pros and cons of each one."

The district ultimately chose TPX Communications.

"We decided that this one best met our needs."

Right timing

Miller said that replacing the old system was inevitable, and would have cost taxpayers more in the long run had the capital item not been approved in May.

"When you looked at the cost associated with it, we know that we have to incorporate the cost of upgrading the entire system anyway in a year or two, so if we invest anything in a minor upgrade, that upgrade will be wasted," he told The Press.

"We were also told that any time we had an outage, that we were rolling the dice," he said. "At any point, we may not be able to recover our equipment in the first place. So we wanted to go to a stable system as soon as possible."

Increased mobility

Miller said one of his favorite aspects of being on the cloud-based system is that the district can handle any type of situation.

"If we ever had a true emergency and I needed to move a building from one location to another, if Central Office is compromised and we have no power for five days, I can pick up our phones and take them over to the town library or another school, as long as there's Internet access," he said. "I can re-set up shop wherever we need to go."

Another feature of the cloud-based phone system is that high school teachers will be able to send or receive calls on their Chromebooks.

"It allows for more mobility across the district," Miller said.

Making the transfer

The move to the new phone system has two phases.

On Aug. 7, the first phase brought the offices of each school onto the new system.

The second phase, slated for Aug. 21, will incorporate the classrooms into the new system.

Miller acknowledged that the process of transferring phone systems is complex.

He has called the vendor at least once a week this summer.

"It's a lot of data. It's a lot of looking at each individual room across the district," he said. "There are different licenses associated with cloud services, so it's figuring out what features each individual room or person needs."

"It's been a lot of implementation on the back end, making sure the data is right and that the company can configure everything to our needs."

Training

Over the last two weeks, various training sessions have been held to help administrative staff smooth the transition to the new system.

The topics range from making phone calls, using voicemail and placing holds to switching from paper to digital faxes.

"There's a lot there," he said. "You kind of just have to use it and learn it on the fly. If you're using it, that's how you gain experience on it. Some people are more nervous than others, and that's normal. But for the most part, I've heard very positive feedback."

After classrooms transition to the new system later this month, Miller will adapt the training sessions to meet the needs of teachers.

"We'll be able to, even after school starts, get in some revisions and tweaks as we move forward," he said. "Every time you have a new system, you have to continuously reflect: How is it going? What are the changes we need to make? What's going well? What's not going well?"

"That cyclical pattern is something we need to make sure that's part of the process."

Read more here:
School phones go on 'the cloud' - The Ridgefield Press


Datrium Announces Split Provisioning For Simple Private Cloud Consolidation At Rackscale – Markets Insider

SUNNYVALE, Calif., Aug. 15, 2017 /PRNewswire/ -- Datrium, the leading provider of Open Converged Infrastructure for private clouds, today announced Datrium DVX 3.0 with Split Provisioning, a fundamentally new and different method of scaling converged infrastructure for private clouds. DVX with Split Provisioning is the industry's first server-powered converged infrastructure system to fully separate scaling of host storage speed and persistent capacity to simply and incrementally match resources to evolving tenant requirements. The new software provides scaling from 1-128 Compute Nodes, or up to 200 gigabytes per second (GB/s) IO bandwidth, and from 1-10 Data Nodes, or up to 1.7 petabytes (PB) of capacity, in a single composable pool. Datrium will demonstrate its DVX platform with Split Provisioning at the VMworld conference in Las Vegas, Aug. 27-31, booth #618.

Unlike Open Converged Infrastructure, Hyperconverged infrastructure (HCI) approaches maintain persistent data on every server and are limited to 2 or 3 simultaneous server failures before users experience a data outage. As a result, HCI users tend toward smaller clusters, across which servers and workloads should be homogeneous. According to Gartner, "in today's world, most hyperconverged deployments are in the range of 8 to 16 nodes,"1 so rackscale deployments need multiple clusters, each of which needs configuration and administration. Gartner has concluded that "many mainstream enterprise IT directors do not yet trust SDS in HCIS solutions to deliver multipetabyte capacity at scale for Tier 1 mixed workloads that require low latency."2

10X Greater Consolidation

With Split Provisioning, Datrium DVX scales compute with both primary and secondary storage resources in a radically new way. Compute Nodes run all workloads in local commodity flash for high performance and low latency. Each Compute Node writes persistent data to network-attached, capacity-optimized Data Nodes, which are similar to purpose-built backup appliances optimized for cost efficiency and sequential IO. Because Compute Nodes are stateless, any number of them can go down for service without affecting data availability.

Until now, Datrium DVX supported a single Data Node. With DVX 3.0 and Split Provisioning, a single Datrium DVX can pool multiple Data Nodes, each of which adds capacity and write bandwidth. While the minimum DVX configuration is one compute node and one data node, Split Provisioning allows scalability up to 128 compute nodes and 10 data nodes in a single system. Adding a Data Node to a pool is a one-command operation, after which the system automatically and non-disruptively rebalances capacity. System-wide performance and capacity increase with each node added, up to the maximums noted above.
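
As a rough back-of-envelope sketch of what those maxima imply per node (assuming roughly linear scaling as nodes are added, which the release implies but does not state):

```python
# Per-node figures implied by the DVX 3.0 maxima quoted above, assuming roughly
# linear scaling as nodes are added (an assumption, not a stated specification).

max_compute_nodes = 128
max_io_bandwidth_gbps = 200   # GB/s at full scale
max_data_nodes = 10
max_capacity_pb = 1.7         # PB at full scale

bw_per_compute_node = max_io_bandwidth_gbps / max_compute_nodes        # ~1.6 GB/s each
capacity_per_data_node_tb = max_capacity_pb * 1000 / max_data_nodes    # ~170 TB each

print(f"~{bw_per_compute_node:.2f} GB/s of IO bandwidth per Compute Node")
print(f"~{capacity_per_data_node_tb:.0f} TB of capacity per Data Node")
```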

"At Parametric, we provide engineered portfolio solutions to institutional investors and private clients built on quantitative, rules-based analysis. Our largest private cloud environment has grown 200-300 percent over the last year, and have received requests for 80 to 100 terabytes for a single project," said Ben Garforth, Parametric Portfolio Systems Engineering Manager. "With Datrium and Split Provisioning, I can now handle our massive data growth cost effectively and non-disruptively within a single resource pool. And better still, I don't need to manage a thingI check in on the Datrium solution once a month and that's about it."

"Datrium's premise has always been that their 'open convergence' method delivers better overall scalability, along with independent scalability of performance and capacity, compared to hyperconverged architectures in essence, private clouds with virtually limitless performance and scale with the ease of public clouds," said Arun Taneja, Founder and Consulting Analyst of Taneja Group. "Split Provisioning delivers further proof point for their claim. With the pace of innovation coming from Datrium, I can't wait to see what comes next."

Converging Compute with Primary and Secondary Storage

Datrium DVX converges compute with both primary and secondary storage, and Split Provisioning takes this to an entirely new scale. Compute Nodes write data to low cost, disk-based Data Nodes with always-on global deduplication, inline compression, erasure coding and Blanket Encryption. Data Cloud, Datrium's built-in cloud data management software, can now manage up to 1.2 million snapshots per DVX, replicable to other DVXs or to Amazon Web Services.

For additional data protection, DVX 3.0 now offers application-consistent snapshots for Microsoft Windows-based applications such as MS SQL Server. Datrium Volume Shadow Service Provider leverages instantaneous and scalable DVX Snapshots, eliminating VM stuns and related sluggish application performance for large data sets.

"As specialty appliances decline, the future of private clouds will look like DVX," said Brian Biles, Datrium CEO and Co-founder. "Split Provisioning enables an order of magnitude more application bandwidth than most all-flash arrays, much simpler rackscale consolidation than HCI, and better secondary storage than either. This is tomorrow calling."

[SPEC SHEET] Datrium DVX Specification Sheet
[DATA SHEET] Datrium DVX Data Sheet
[WHITEPAPER] Split Provisioning Technical Whitepaper
[BLOG] And The Third Shoe Drops

Pricing and Availability

Split Provisioning software is available immediately with Datrium DVX Software 3.0 at US list pricing of $12,000 per Compute Node.

1 Gartner: Key Differences Between Nutanix, VxRail and SimpliVity HCIS Appliances | 26 April 2017
2 Gartner: Beware the 'Myth-Conceptions' Surrounding Hyperconverged Integrated Systems | 18 February 2016, refreshed 02 June 2017
3 Performance based on Datrium Lab Testing. Bandwidth measured with 64K IO size, 100% Read. IOPS measured with 4K IO size, 100% Read | July 2017
4 From XtremIO specifications sheet

Industry Recognition

Gartner, Cool Vendors in Storage Technologies, 2016 (April 2016) - A Cool Vendor

Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About Datrium

Datrium is the leader in Open Converged Infrastructure for private clouds. Datrium converges storage and compute across primary application and data management workloads (modeled on public cloud IaaS versus traditional converged infrastructure or hyper-convergence) for vastly simpler performance, predictability and protection. The company is led by the founders and early top architects of Data Domain and VMware. For more information, visit http://www.datrium.com and follow @datrium on Twitter.

View original content with multimedia: http://www.prnewswire.com/news-releases/datrium-announces-split-provisioning-for-simple-private-cloud-consolidation-at-rackscale-300504060.html

SOURCE Datrium

Go here to see the original:
Datrium Announces Split Provisioning For Simple Private Cloud Consolidation At Rackscale - Markets Insider


Notes: Cloud Computing still in running for Travers – Albany Times Union

2017 Jim Dandy winner Good Samaritan breezed this morning on the Oklahoma Training Center track Friday Aug. 11, 2017 in Saratoga Springs, N.Y. (Skip Dickstein/Times Union)

Cloud Computing is cooled out after his four-furlong workout at Saratoga on Saturday. (Tim Wilkin / Times Union)

Cloud Computing and jockey Javier Castellano head to the track for a workout at Saratoga on Saturday. (Tim Wilkin / Times Union)

Cloud Computing gets a drink after his workout at Saratoga on Saturday. (Tim Wilkin / Times Union)

Saratoga Springs

Right after watching his Preakness winner Cloud Computing finish last in the Jim Dandy, the last thing trainer Chad Brown was thinking about was the Travers.

A spot in the $1.25 million Midsummer Derby is still no guarantee for Cloud Computing, but Brown is at least thinking about it.

"It's under consideration," Brown said outside his barn on the Oklahoma Training Track after watching Cloud Computing work four furlongs on the main track Saturday morning. "I haven't made up my mind yet. Initially, after his Jim Dandy race, I probably wouldn't go on after a poor effort like that."

After letting the race settle, Brown may be willing to forgive his horse for the Jim Dandy debacle. The track was deep that day, and Brown noted other horses struggled over it (Kentucky Derby winner Always Dreaming was third in the Jim Dandy).

Regular rider Javier Castellano was on board Cloud Computing for the work, which was clocked in 49.09 seconds.

"(Brown) wasn't looking for something timing-wise," Castellano said after the work. "He wanted a nice, comfortable work. He felt great. I just let him breeze in a nice, comfortable rhythm. He did it easy."

Brown was also happy with the work, saying the track was different on Saturday than it was two weeks ago when the Jim Dandy was run.

"He has trained well since," the Mechanicville trainer said. "He breezed terrific. We have to at least consider (the Travers)."

Other Travers works

Jim Dandy winner Good Samaritan worked four furlongs on Friday in 48.84 seconds and Girvin, winner of the Haskell Invitational at Monmouth Park, went four furlongs Saturday in 50.45. Both works came over the Oklahoma.

"His first work back," Good Samaritan's trainer Bill Mott said Saturday morning. "It was what it was supposed to be. He's feeling pretty good."

Trainer Joe Sharp was aboard the colt when he sent Girvin out to the Oklahoma at 5:30 a.m. Saturday, right after it opened.

Girvin finished 13th in the Kentucky Derby and was second behind Irap in the Grade III Ohio Derby in June.

"It's like he knows he is getting good and he's getting more confident in himself physically and mentally," Sharp said.

Sharp said Girvin will work again next week, probably in company, but it is yet to be decided whether the work will be on the Oklahoma or the main track.

Special race

The second of three graded stakes for 2-year-old colts will be run Sunday when nine juveniles contest the Grade II, $200,000 Saratoga Special at 6 1/2 furlongs.

Only two of the colts in the race have been in graded stakes. Copper Bullet, the 2-1 morning-line favorite, was second in the Grade III Bashford Manor at Churchill Downs on June 30, and the Todd Pletcher-trained Bal Harbour was fifth in the Grade II Sanford here on July 22.

Five of the colts entered in the race are jumping to a graded stake off a maiden win in their first start.

Included in that group is the Dale Romans-trained Hollywood Star, who won his debut at Churchill Downs by a half-length at six furlongs.

"What I learned in 2-year-old races is that you can't really handicap them," Romans said. "You don't really know the horses. This is a quality horse."

Hollywood Star, a son of Malibu Moon, went for $500,000 in the Keeneland September sale.

"He has trained better after that (first) race," Romans said. "I don't think he'll fool me. He has trained like a good one from day one."

twilkin@timesunion.com 518-454-5415 @tjwilkin

View original post here:
Notes: Cloud Computing still in running for Travers - Albany Times Union


Assessing the key reasons behind a multi-cloud strategy – Cloud Tech

Everyone who follows cloud computing agrees that we are starting to see more businesses utilise a multi-cloud strategy. The question this raises is: why is a multi-cloud strategy important from a functional standpoint, and why are enterprises deploying this strategy?

To answer this, let's define multi-cloud, since it means different things to different people. I personally like this one, as seen on TechTarget:

"the concomitant use of two or more cloud services to minimise the risk of widespread data loss or downtime due to a localised component failure in a cloud computing environment... a multi-cloud strategy can also improve overall enterprise performance by avoiding vendor lock-in and using different infrastructures to meet the needs of diverse partners and customers"

From my conversations with some cloud gurus and our customers, a multi-cloud strategy boils down to three things: risk mitigation, preventing vendor lock-in, and optimising each workload on the cloud best suited to it.

Let's look at each one.

Looking at our own infrastructure at ParkMyCloud, we use AWS, including services such as RDS, Route 53, SNS and SES. In a risk mitigation exercise, would we look for like services in Azure, and try to go through the technical work of mapping a 1:1 fit and building a hot failover in Azure? Or would we simply use a different AWS region, which uses fewer resources and less time?

You don't actually need multi-cloud to do hot failovers, as you can instead use different regions within a single cloud provider. But that's betting on the fact that those regions won't go down simultaneously. In our case we would have major problems if multiple AWS regions went down simultaneously, but if that happens we certainly won't be the only one in that boat.

Furthermore, a hot failover from one cloud provider to another (say, between AWS and Google) would require a degree of interoperability between the cloud providers, and of infrastructure and application integration, that is not widely available today.

Ultimately, risk mitigation just isn't the most significant driver for multi-cloud.

What happens when your cloud provider changes their pricing? Or your CIO says we will never be beholden to one IT infrastructure vendor, like Cisco on the network, or HP in the data centre? In that case, you lose your negotiating leverage on price and support.

On the other hand, look at Salesforce. How many enterprises use multiple CRMs?

Do you then have to design and build your applications to undertake a multi-cloud strategy from the get-go, so that transitioning everything to a different cloud provider will be a relatively simple undertaking? The complexity of moving your applications across clouds over a couple of months is nothing compared to the complexity of doing a real-time hot failover when your service is down. For enterprises this might be doable, given enough resources and time. Frankly, we don't see much of this.

Instead, I see customers using a multi-cloud strategy to design and build applications in the clouds best suited to optimising them. By the way, you can then use this leverage to help prevent vendor lock-in.

Hot failovers may come to mind first when considering why you would want to go multi-cloud, but what about normal operations when your infrastructure is running smoothly? Having access to multiple cloud providers lets your engineers pick the one that is the most appropriate for the workload they want to deploy. By avoiding the all or nothing approach, IT leaders gain greater control over their different cloud services. They can pick and choose the product, service or platform that best fits their requirements, in terms of time-to-market or cost effectiveness - then integrate those services. Also, this approach may help in avoiding problems that arise when a single provider runs into trouble.

A multi-cloud strategy addresses several inter-related problems. It's not just a technical avenue for hot failover. It includes vendor relationship management and the ability to optimise your workloads based on the strengths of your teams and each CSP's infrastructure.

By the way, when you deploy your multi-cloud strategy, make sure you have a management plan in place upfront. Too often, I hear from companies who deploy on multiple clouds but don't have a way to see or compare them in one place. So, make sure you have a multi-cloud dashboard in place to provide visibility that spans across cloud providers, their locations and your resources, for proper governance and control. This will help you get the most benefit out of a multi-cloud infrastructure.
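
As a hedged illustration of what the collection side of such a dashboard might look like, here is a minimal sketch showing the AWS feed only; it assumes boto3 credentials are already configured, and the region list and output fields are illustrative. Feeds from Azure, Google Cloud and other providers would be normalised into the same structure.

```python
import boto3

# Minimal AWS inventory feed for a multi-cloud dashboard (illustrative only).
# Assumes AWS credentials are configured; pagination is omitted for brevity.

def list_aws_instances(regions):
    inventory = []
    for region in regions:
        ec2 = boto3.client("ec2", region_name=region)
        for reservation in ec2.describe_instances()["Reservations"]:
            for instance in reservation["Instances"]:
                inventory.append({
                    "provider": "aws",
                    "region": region,
                    "id": instance["InstanceId"],
                    "type": instance["InstanceType"],
                    "state": instance["State"]["Name"],
                })
    return inventory

if __name__ == "__main__":
    for row in list_aws_instances(["us-east-1", "eu-west-1"]):
        print(row)
```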

Go here to see the original:
Assessing the key reasons behind a multi-cloud strategy - Cloud Tech


Intel runs rule over new data centre storage design – Cloud Tech

It is not quite available yet, but Intel has shed some light on its plans in the data centre storage space with the announcement of a new form factor which could enable up to one petabyte of storage in a 1U rack unit.

The new ruler form factor, named as such for self-evident reasons, shifts storage from the legacy 2.5 inch and 3.5 inch form factors that follow traditional hard disk drives and delivers on the promise of non-volatile storage technologies to eliminate constraints on shape and size, in Intel's words. The company adds that the product will come to market in the near future.

1U rackmounts are predominantly 19 inches wide and 1.75 inches high, although the depth can vary from 17.7 to 21.5 inches. As the numbers go up, the height essentially doubles, so a 5U mount can be 19.1 by 8.75 by 26.4 inches, while 7U, the highest, is 17 by 12.2 by 19.8 inches. To put one petabyte into perspective, it is enough storage to hold 300,000 HD movies.
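
As a quick sanity check on that comparison (assuming decimal units, and that an "HD movie" here means a file of a few gigabytes):

```python
# Sanity-checking the "300,000 HD movies" comparison, assuming decimal units.

petabyte_gb = 1_000_000   # 1PB expressed in GB
movies = 300_000

print(f"Implied size per HD movie: ~{petabyte_gb / movies:.1f} GB")   # ~3.3 GB
```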

Intel also had room for a couple more announcements. The company is targeting hard disk drive (HDD) replacement in the data centre with an updated SATA family of solid state disks (SSDs), aiming to reduce power and cooling as well as increase server efficiency. It also announced dual port Intel Optane SSDs and Intel 3D NAND SSDs, replacing SAS SSDs and HDDs; the former is available now, with the latter coming in the third quarter of this year.

Bill Leszinske, Intel vice president, said the company was driving forward an era of major data centre transformation. "These new ruler form factor SSDs and dual port SSDs are the latest in a long line of innovations we've brought to market to make storing and accessing data easier and faster, while delivering more value to customers," he said in a statement.

"Data drives everything we do, from financial decisions to virtual reality gaming, and from autonomous driving to machine learning, and Intel storage innovations like these ensure incredibly quick, reliable access to that data," Leszinske added.

According to a study from Intel and HyTrust released in April last year, two thirds of C-suite respondents said they expect increased adoption in the software defined data centre (SDDC) space.

Picture credit: Intel

View post:
Intel runs rule over new data centre storage design - Cloud Tech


Film maker shuns cloud storage for Spectra Logic tape – ComputerWeekly.com

Movie production house Smoke & Mirrors has deployed more than 7PB of Spectra Logic tape storage for backup and archiving. It shunned public cloud archiving services for security and cost reasons.


The company is based in London's Soho, with offices worldwide, and provides visual effects for advertising, film and music videos.

An average project is in the 5TB region, with about 12TB of data added per day.

Primary storage comprises around 1.5PB of Isilon scale-out NAS. Data had been shifted from Isilon to a Quantum tape library, but this was nearing end of life, and adding extra disk capacity elsewhere was cost-prohibitive.

The company decided to look at tape libraries and cloud storage. It had used a Quantum tape library in the past, but had experienced reliability issues with it.

Also, Spectra Logic tape libraries offered better storage density, said David Lennox, lead systems engineer at Smoke & Mirrors.

"We liked the way Spectra Logic packed tapes into something like drawers in a filing cabinet, unlike the Quantum, which were like books on a bookshelf," he said.

Smoke & Mirrors deployed a Spectra T950 tape library with eight LTO-6 drives, providing more than 7PB of capacity.

The move allowed the company to double the number of tape slots from 893 to 1,460 in two racks, compared with the previous four.
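
A rough capacity check, assuming every slot holds an LTO-6 cartridge (2.5TB native, up to 6.25TB at the nominal 2.5:1 compression ratio), suggests how the "more than 7PB" figure is reached:

```python
# Rough capacity check, assuming all 1,460 slots hold LTO-6 cartridges.

slots = 1460
native_tb = 2.5        # LTO-6 native capacity per cartridge
compressed_tb = 6.25   # LTO-6 at the nominal 2.5:1 compression ratio

print(f"Native capacity:     ~{slots * native_tb / 1000:.2f} PB")      # ~3.65 PB
print(f"Compressed capacity: ~{slots * compressed_tb / 1000:.2f} PB")  # ~9.12 PB

# The quoted 'more than 7PB' sits between these two figures, so it presumably
# assumes some degree of compression on the archived data.
```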

"The increased density helps hugely by saving space, which is important because we're in Soho, and by cutting power and air-conditioning costs," said Lennox.

As data is used it is backed up and archived to the Spectra Logic tape library using IBM Spectrum Protect (formerly Tivoli Storage Manager) backup software.

Smoke & Mirrors chose not to use public cloud storage to archive its data for reasons of security and cost, said Lennox.

"It was mainly security," he said. "Our customers don't want us to use the cloud. The feeling is it can be potentially accessed by anyone and there's not as much control as there is with tape in a server room. Also, there's so much data, we're archiving 16TB a night, that the cost of using cloud storage would be astronomical."
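
To see why, here is an illustrative, deliberately simplistic calculation of what archiving 16TB a night to public object storage might cost, assuming a hypothetical flat rate of $0.02 per GB-month and full retention; actual provider pricing, tiering and egress fees would change the numbers but not the shape of the curve.

```python
# Illustrative only: cumulative cost of archiving 16TB a night to object storage,
# assuming a hypothetical flat rate of $0.02 per GB-month and full retention.

price_per_gb_month = 0.02   # assumed rate, not a quoted provider price
nightly_tb = 16
stored_gb = 0.0

for month in range(1, 13):
    stored_gb += nightly_tb * 1000 * 30   # roughly 30 nights archived per month
    print(f"Month {month:2d}: ~{stored_gb / 1e6:.2f} PB stored, "
          f"~${stored_gb * price_per_gb_month:,.0f} per month")
```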

Follow this link:
Film maker shuns cloud storage for Spectra Logic tape - ComputerWeekly.com


NetApp Has Better Storage Trends than Pure, Says Maxim – Barron’s

Maxim Group's Nehal Chokshi this morning raises his rating on storage technology vendor NetApp (NTAP) to Buy from Hold, while cutting his rating on shares of competitor Pure Storage (PSTG) to Hold, writing that the former is seeing success with its newer products for cloud computing, which can boost profit, while Pure is at risk of betting against public cloud computing.

Chokshi raises his price target for NetApp to $56 from $46, writing that it can achieve a 25% operating profit margin five years from now, up from 17% at the moment.

Its newer products for cloud, writes Chokshi, are giving NetApp newfound differentiation:

NTAP has developed a portfolio of hybrid cloud data management products that include NetApp Private Storage (released in late 2012), ONTAP Cloud (launched in late 2014), Altavault (acquired from FFIV in 2015), CloudSync and Cloud Control (both released in late 2016). These products form an effective complete hybrid cloud data management capability that spans mission critical workload bursting from a private to a public cloud (NPS) to the more basic disaster recovery (AltaVault) capability. On our annual Silicon Valley bus tour, CEO George Kurian reiterated from the Analyst Day that the company has 1,500 hybrid cloud customers (we estimate out of ~60K customers) that are utilizing NetApp Private Storage (NPS) and thousands of customers that are utilizing one of NTAP's hybrid cloud products with the customer base growing triple digits. Our limited channel checks also indicate that NTAP has begun to gain mindshare with resellers in terms of the company's hybrid cloud capabilities, which is providing a significant differentiator for NTAP that is aligned with the longer-termed trend of hybrid cloud adoption.

In addition to cutting costs, NetApp can boost its operating profit margin by raising its gross profit margin thanks to newer products:

We note that management has changed comp plans beginning in the July Q (F1Q18) to stop sales reps from over-specifying competitive deals to ensure winning deals at the expense of GM. We also note that when differentiation for NTAP was high, product GM was as high as 60% (FY06 to FY11, when NTAP benefitted from strong alignment with virtualization). Given evidence that NTAP's differentiation has morphed towards its rich and mature portfolio of hybrid cloud data management products which we believe will prove to be a decade long trend as virtualization had been, we see potential for NTAP to drive product GM back to the 60% level over time (vs. FY17 level of 47.4%). Plugging 60% product GM in would then yield an overall GM increase of 700bp.
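
Working backwards from the quoted figures, the 700bp estimate implies a product share of revenue of roughly 55-56%; this share is an inference from the arithmetic, not a number Chokshi states.

```python
# Reverse-engineering the 700bp figure from the numbers quoted above.
# The implied product share of revenue is an inference, not a stated figure.

current_product_gm = 0.474   # FY17 product gross margin
target_product_gm = 0.60     # the 60% level cited for FY06-FY11
overall_gm_lift = 0.07       # 700 basis points

implied_product_share = overall_gm_lift / (target_product_gm - current_product_gm)
print(f"Implied product share of revenue: ~{implied_product_share:.0%}")  # ~56%
```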

As for Pure, whose price target Chokshi cuts to $15 from $20, he's got two concerns.

One is that the company is betting on corporate IT managers and therefore is in a sense betting against the spread of public cloud computing.

Pure is deeply invested in on-premise IT, writes Chokshi:

We note that the differentiating characteristics for PSTG continues to be to enable best-in-class on-premise IT data management capabilities, which makes PSTG a loved OEM with on-premise IT departments that naturally resist change. Given the pattern of PSTG's product development efforts consistently betting on IT departments resisting all other changes other than movement to data management simplicity and All Flash Arrays, we note that PSTG is also then betting against; (1) IT departments looking to simplify overall IT operations by utilizing the hypervisor as the substrate for automation, and (2) IT departments leveraging the benefits of public clouds while maintaining ownership of the IT (i.e. hybrid cloud).

Second, Chokshi thinks the company's newer FlashBlade product needs several more years to mature:

We note that the level of features that FlashBlade currently carries is still relatively limited, including missing the capability of Active Clusters that PSTG just introduced for their five year older product FlashArray, which in our view highlights that FlashBlade likely still has a multi-year path to becoming fully matured. We note that FY18 (Jan Q end) guidance embeds accelerating y/y revenue growth throughout the course of the fiscal year (from 31% y/y in the Apr Q to 34% y/y in the Jul Q to 37% y/y growth in the Oct Q to 44% y/y growth in the Jan Q), which is premised on FlashArray continuing to grow ~25% y/y and FlashBlade doubling y/y similar to how FlashArray doubled y/y at the same stage of its lifecycle of what FlashBlade is at currently. Given our analysis that FlashBlade still has a long ways to mature, analytically we see risk to the guidance assumption that FlashBlade is continuing to increase at a 2x y/y rate, though we also note that our checks did not produce a "smoking gun" either.

Shares of NetApp are up 54 cents, or 1%, at $42.03, in early trading, while shares of Pure are unchanged at $12.67.

View original post here:
NetApp Has Better Storage Trends than Pure, Says Maxim - Barron's


Cloud Backup vs Local Backup: Which Option Should I Choose To Protect My Data? – Fossbytes

With the rise of online businesses, companies are realizing the importance of using a backup solution that can make sure you are worry-free in situations like data loss. It doesn't matter what the size of your company is, every business needs a backup solution. This also applies to individual computing usage, as our computers have become the ultimate answer to all our digital needs.

All popular computer and smartphone operating systems come with different backup utilities to save your data. But, while creating a backup, we are often asked to consider different factors. One such factor to take care of is the primary backup type, i.e., local backup or cloud backup.

It goes without saying that the basic difference between cloud and local backup is the place your backup data is stored. While a cloud backup stores the data by sending a copy to a cloud-based server, a local backup could be a physical backup made to an internal/external storage disk or to an organization's own private cloud.
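
In code, the distinction is simply where the copy ends up. Here is a minimal sketch, with placeholder paths and bucket name, and S3 used purely as one example of a cloud target; a real backup tool would add scheduling, versioning, encryption and verification.

```python
import shutil

import boto3

def local_backup(source_file: str, backup_dir: str) -> None:
    """Local backup: copy the file to an internal/external disk or NAS path."""
    shutil.copy2(source_file, backup_dir)

def cloud_backup(source_file: str, bucket: str, key: str) -> None:
    """Cloud backup: send a copy to an object-storage bucket (S3 as an example)."""
    s3 = boto3.client("s3")
    s3.upload_file(source_file, bucket, key)

if __name__ == "__main__":
    # Paths and bucket name below are placeholders for illustration.
    local_backup("reports/q2.xlsx", "/mnt/backup-disk/")
    cloud_backup("reports/q2.xlsx", "example-backup-bucket", "reports/q2.xlsx")
```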

A big advantage of cloud backup is its accessibility. It allows you and your employees to access the backed up data from any place or device. With the cloud-powered solutions, one also gets the advantage of collaborative working technologies. Your data is also secure from on-site disasters.

But the most striking feature of a cloud backup is its scalability. Whenever you need to increase the storage space, simply ask your service provider to change your plans. Many providers bill as per the data used, and charge according to your monthly usage. There are tons of online options that could suit your needs.

In a recent study, cloud storage provider and expert Gradwell has shown how much space all your online documents will take up if they were to be stored physically.

For example, in the IT sector, laptops were found to be the most common device, with an average size of 700KB for an Excel file. According to the study, the 1,500,000 Excel files stored on a 1TB laptop would take up space equal to the physical size of a plane within only 3 days.
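
The headline arithmetic checks out: at 700KB per file, 1.5 million spreadsheets come to roughly a terabyte.

```python
# Checking the study's headline numbers: 1,500,000 Excel files at ~700KB each.

files = 1_500_000
avg_kb = 700

total_tb = files * avg_kb / 1e9   # KB -> TB (decimal units)
print(f"~{total_tb:.2f} TB")      # ~1.05 TB, i.e. roughly a full 1TB laptop drive
```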

Coming back to the Cloud vs Local debate, creating and maintaining a local storage system is expensive. You need to buy more hardware, ensure constant monitoring, and spend time maintaining it. A local backup is only recommended when you're struggling with an internet connection that isn't fast enough or is metered.

The local backup option also gets a thumbs up when you're too paranoid about your data security. This might not make sense given the military-grade encryption and protection mechanisms employed by cloud providers, but if one is able to create a centralized and hardened storage setup on one's own by spending heavy bucks, local backup could be a viable option.

Which backup solution do you prefer? Don't forget to share your views with us.

Originally posted here:
Cloud Backup vs Local Backup: Which Option Should I Choose To Protect My Data? - Fossbytes


Ex-MI5 Boss Evans: Don’t Undermine Encryption – Infosecurity Magazine

A former head of MI5 has argued against undermining end-to-end encryption in messaging apps like WhatsApp, claiming it will damage broader cybersecurity efforts.

Jonathan Evans, who left the secret service in 2013 and is now a crossbencher in the House of Lords, made the comments in an interview with BBC Radio 4's Today program on Friday.

Despite recognizing that end-to-end encryption has helped terrorists hide their communications from the security services, he distanced himself from outspoken critics of the technology, such as home secretary Amber Rudd.

"I'm not personally one of those who thinks we should weaken encryption because I think there is a parallel issue, which is cybersecurity more broadly," Evans argued.

"While understandably there is a very acute concern about counter-terrorism, it is not the only threat that we face. The way in which cyber-space is being used by criminals and by governments is a potential threat to the UK's interests more widely."

He argued that undermining encryption would actually make countless consumers and businesses less secure, and the country's economy as a whole worse off.

"It's very important that we should be seen and be a country in which people can operate securely. That's important for our commercial interests as well as our security interests, so encryption in that context is very positive," said Evans.

"As our vehicles, air transport, our critical infrastructure is resting critically on the internet, we need to be really confident that we have secured that because our economic and daily lives are going to be dependent on the security we can put in to protect us from cyber-attack."

Evans also had something to say about allegations of Russian interference in elections, claiming that he would be surprised if there'd been no attempts to sway UK votes in the past.

The former MI5 boss is not the first expert to have argued against the government forcing providers to undermine encryption so that the security services can access suspected terrorists' comms.

Former GCHQ boss Robert Hannigan claimed in July that so-called backdoors in such services are a threat to everybody and that it's not a good idea to weaken security for everybody in order to tackle a minority.

Read the original post:
Ex-MI5 Boss Evans: Don't Undermine Encryption - Infosecurity Magazine


‘Father of Financial Futures’ Seeks Cryptocurrency Hardware Patent – CoinDesk

A U.S. economist and businessman known for his work in spearheading the early development of futures contracts is seeking a cryptocurrency patent.

Richard Sandor, a former Chicago Board of Trade chief economist and vice president, advanced the utilization of financial futures back in the 1970s, earning him the moniker "the father of financial futures" and, later, "the father of carbon trading," according to Time.

Notably, perhaps, Sandor is now listed as the first of three inventors for the "Secure Electronic Storage Devices for Physical Delivery of Digital Currencies When Trading" patent application, released on August 10 by the U.S. Patent and Trademark Office.

Sandor is currently the chairman and CEO of Environmental Financial Products LLC, which is listed as the applicant for the patent. The application itself details a hardware concept for the storage of digital currencies tied to derivatives contracts.

It explains:

"The invention relates to a method to facilitate trading of digital currencies, which comprises electronically storing an amount of a digital currency on an electronic storage device or electronic registry; and physically storing the storage device or electronic registry in a secure, physical repository that is not publicly accessible with the storage device or electronic registry available for use in subsequent delivery of the digital currency."

It's the latest submission to focus on cryptocurrency-related derivatives, coming on the heels of news that options exchange CBOE is planning to launch products in this area later this year.

Firms like CME have also moved to obtain intellectual property tied to cryptocurrencies. As CoinDesk previously reported, CME's patent applications reveal an interest in bitcoin mining derivatives.

Richard Sandor image via Jon Lothian News/YouTube


Read the original:
'Father of Financial Futures' Seeks Cryptocurrency Hardware Patent - CoinDesk
