
StoneFly Integrates Veeam Backup Storage Platform With Microsoft Azure Govt Cloud – ExecutiveBiz (blog)

StoneFly has incorporated its cloud storage offering designed for Veeam Software's backup and disaster recovery tool into Microsoft's Azure Government cloud platform.

StoneFly's Scale-Out Cloud Storage for Veeam platform helps users manage multiple nodes for cloud storage and repositories through a common user interface, as well as access file storage, block storage and other storage resource types, StoneFly said Wednesday.

StoneFly's offering provides direct restore to Microsoft's government cloud platform and uses enterprise data services including encryption, deduplication, self-healing, snapshots and geo-replication.

The expansion into Microsoft Azure Government came a year after StoneFly, Veeam and Microsoft collaborated to deliver disaster recovery and backup platforms to government and commercial organizations as well as cloud service providers.

Through the partnership, StoneFly fielded its Scale-Out Cloud Storage for Veeam Cloud Connect offering to Microsoft Azure-linked data centers worldwide.

Other services that StoneFly offers for Microsoft Azure Government users include video surveillance monitoring support, network security for data centers and other disaster recovery assistance.

Read this article:
StoneFly Integrates Veeam Backup Storage Platform With Microsoft Azure Govt Cloud - ExecutiveBiz (blog)

Read More..

Microsoft Extends Cloud-Computing Arms Race to Africa – Wall Street Journal (subscription)


The data centers, which will serve customers of the software giant's Azure cloud-computing business, will be the first of their size built in Africa by one of the three major cloud-infrastructure providers: Microsoft, Amazon.com Inc. and Alphabet Inc ...
Microsoft to open Africa data centers to seek edge in cloud computing – Seattle Times
Microsoft Just Made a Big Bet on Africa – Fortune
Microsoft beats Google and Amazon to announce first African data ... – VentureBeat

Read the original post:
Microsoft Extends Cloud-Computing Arms Race to Africa - Wall Street Journal (subscription)

Read More..

Keying Longshot Cloud Computing in the Preakness – America’s Best Racing

On Saturday, racing fans across the country will turn their attention to the $1.5 million, Grade 1 Preakness Stakes at Pimlico Race Course in Maryland, the second leg of the Triple Crown. The race will also draw the attention of handicappers and bettors hoping to make a nice score on one of racing's biggest days.

Naturally, all eyes will be focused on #4 Always Dreaming, winner of the Kentucky Derby two weeks ago. Trained by Todd Pletcher, the son of Bodemeister is 4-for-4 this year and hasn't been seriously challenged during that timeframe; furthermore, his Derby win was achieved in eye-catching fashion, as he tracked a solid early pace before taking command to win easily.

Being a speed horse, Always Dreaming is perfectly suited to the Preakness Stakes, which has a tendency to favor horses racing on or near the lead. If he repeats his Derby performance, Always Dreaming will be very tough to beat, though there are a few reasons to consider playing against him. For one, Always Dreaming received a pretty clean trip in the Derby, avoiding trouble at the start and staying clear of traffic while racing near the rail, which may have been the best part of the track. He'll also be a very heavy favorite in the wagering, perhaps 3-5 or 4-5, which means that playing him to win won't be very appealing.

Always Dreaming could also face a serious challenge from #5 Classic Empire, who finished fourth in the Kentucky Derby after a troubled start left him farther off the pace than usual. With a clean run, Classic Empire might have finished much better in the Derby (he actually ran about 9 lengths farther than Always Dreaming), and as the reigning champion 2-year-old, his talent is undeniable. Prior to the Kentucky Derby, he overcame a tough trip to win the Grade 1 Arkansas Derby with a solid late rally, which marked the third Grade 1 victory of his career. He's very versatile in terms of running style, and might be just reaching his peak after missing a race and some training during the winter. Expect to see him much closer to the lead in the Preakness, which should give him every chance to run down Always Dreaming in the homestretch.

One longshot that I would strongly consider is #2 Cloud Computing, a lightly raced colt trained by Chad Brown. Cloud Computing was late getting to the races and didn't debut until Feb. 11, when he won a maiden race sprinting three-quarters of a mile at Aqueduct while defeating the next-out winner Mineralogy. Off of that solid effort, Cloud Computing made his stakes debut in the March 4 Gotham Stakes, where he finished a strong second despite his lack of experience.

Cloud Computing auditioned for a potential Kentucky Derby run when he contested the Grade 1 Wood Memorial on April 8, but a slow start left him off the pace while racing over a track that favored front-runners. Under the circumstances, he had little chance to catch the leaders, but he did well to finish a clear third.

By skipping the Kentucky Derby to await the Preakness, Cloud Computing has had plenty of time to prepare for what will be his toughest race to date. And while it's hard to say if he really wants to run this far, his pedigree suggests that the Preakness distance is within his capabilities.

Cloud Computing may also benefit from meeting a field that doesn't appear to have much speed on paper. In fact, according to his Brisnet pace figures (which attempt to quantify early speed), Cloud Computing is the most consistently fast horse in the Preakness field. I think he has a very big chance to finish in the trifecta, possibly even splitting Always Dreaming and Classic Empire for a spot in the exacta.

Since Cloud Computing's morning-line odds are solid (12-1), let's key him in our wagers to try to boost the potential payoffs, while also considering the speedy Arkansas Derby runner-up #10 Conquest Mo Money on one ticket. A quick check of how each ticket prices out follows the wagers below.

Wagering Strategy on a $20 Budget

$4 exacta: 4,5 with 4,5 ($8)

$3 exacta: 4,5 with 2 ($6)

$2 trifecta: 4,5 with 4,5 with 2 ($4)

$1 exacta: 2 with 4,5 ($2)

Wagering Strategy on a $30 Budget

$5 exacta: 4,5 with 4,5 ($10)

$4 exacta: 4,5 with 2 ($8)

$2 exacta: 2 with 4,5 ($4)

$2 trifecta: 4,5 with 4,5 with 2,10 ($8)
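If you want to double-check that the tickets above fit their budgets, each part-wheel's cost is simply its base bet times the number of ordered combinations of distinct horses it covers. Here is a minimal Python sketch of that arithmetic, using the $20-budget tickets as the example (the horse numbers and bet sizes are the ones listed above; the helper function is my own illustration):

```python
from itertools import product

def ticket_cost(base_bet, *positions):
    # Count every ordered combination of distinct horses across the listed
    # positions; the ticket costs the base bet times that count.
    combos = [c for c in product(*positions) if len(set(c)) == len(c)]
    return len(combos), base_bet * len(combos)

# The four $20-budget tickets listed above, using the same horse numbers.
tickets = [
    (4, [4, 5], [4, 5]),        # $4 exacta: 4,5 with 4,5
    (3, [4, 5], [2]),           # $3 exacta: 4,5 with 2
    (2, [4, 5], [4, 5], [2]),   # $2 trifecta: 4,5 with 4,5 with 2
    (1, [2], [4, 5]),           # $1 exacta: 2 with 4,5
]
for bet, *positions in tickets:
    combos, cost = ticket_cost(bet, *positions)
    print(combos, cost)          # 2 8 / 2 6 / 2 4 / 2 2 -- $20 in total
```

Each ticket covers two combinations, so the costs add up to $8 + $6 + $4 + $2 = $20. The $30-budget trifecta covers four combinations because the third spot can be filled by #2 or #10, which is why it costs $8.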

Good luck, and enjoy the race!

Go here to see the original:
Keying Longshot Cloud Computing in the Preakness - America's Best Racing

Read More..

US Universities Face Challenges With Cloud Computing Education … – The Data Center Journal

Despite an increasing need for cloud computing skills in the global workforce, a large number of colleges and universities are slow to offer courses in this field, according to a new report from Clutch, a leading B2B research firm.

LinkedIn named cloud and distributed computing its number one global skill of 2016, indicating high demand for hiring in 2017. Yet course offerings are still limited at many institutions of higher education.

Clutch interviewed four professors who teach cloud computing at Cornell University, Carnegie Mellon University, Georgetown University and the University of North Florida. The objective was to gain insight into the current state of cloud computing education, as well as its benefits to students and its most prominent challenges.

The professors indicated that, while cloud computing courses are becoming more common at universities and colleges, they are still far from routine. Through the interviews, Clutch identified three main obstacles that may be hindering a university's or college's ability to implement a cloud computing course: cost, the field's rapid pace of change, and a shortage of qualified instructors.

With regard to cost, Dr. Majd Sakr, a professor of computer science at Carnegie Mellon, said that negligent students led to runaway server usage costs in the early years of his cloud computing course. Cloud computing services are typically billed by usage, and unless that usage is purposefully regulated, students can accumulate high price tags on projects.
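To make that concrete, here is a hypothetical back-of-the-envelope check; the hourly rate, budget cap and hours below are my own illustrative assumptions, not figures from the Clutch report. The point is simply how quickly usage-based billing can outrun a course budget when instances are left running:

```python
# Hypothetical illustration only: rates and hours below are assumptions,
# not figures from the report. Usage-based billing grows with every
# instance-hour left running.
HOURLY_RATE = 0.50          # assumed $/hour for one mid-size cloud instance
STUDENT_BUDGET = 40.00      # assumed per-student budget for a course project

def projected_cost(instances, hours, rate=HOURLY_RATE):
    return instances * hours * rate

# Four instances forgotten over a 60-hour weekend:
cost = projected_cost(instances=4, hours=60)
print(round(cost, 2))              # 120.0 -- three times the assumed budget
print(cost <= STUDENT_BUDGET)      # False: usage has to be regulated
```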

Furthermore, Dr. Ken Birman, a professor at Cornell University, says that cloud computing cannot be taught until its pace of development slows down.

"As a purely pragmatic matter, we cannot teach the area until it begins to slow down and hold still for at least a few years at a time," he said.

However, fast-paced innovation may be an issue academia faces with increasing frequency in the future. The report suggests that cloud computing's innovation may never slow down.

Lastly, cloud computing's recent emergence as a field means that universities and colleges may be slow to find available expertise to properly teach the subject.

Institutes of higher education can potentially address this issue by offering cloud computing training to current faculty, or by reaching out to industry leaders interested in teaching, according to Dr. Sanjay Ahuja of the University of North Florida (UNF).

Despite these obstacles, Dr. Ahuja of UNF, Dr. Sakr of Carnegie Mellon and Professor Kevin McDonald of Georgetown University say they are strongly in support of cloud computing education at universities and colleges.

"It's becoming more important to understand cloud computing simply because it's being adopted quite rapidly now," said McDonald. "Having in-depth experience and knowledge of the cloud is probably a core competency going forward."

For the complete report, please visit: https://clutch.co/cloud/resources/cloud-computing-education-2017.


Continue reading here:
US Universities Face Challenges With Cloud Computing Education ... - The Data Center Journal

Read More..

Google Targets Nvidia With Learning-Capable Cloud TPU – ExtremeTech

Only a week after Nvidia's new AI-focused Volta GPU architecture was announced, Google aims to steal some of its thunder with its new second-generation Tensor Processing Unit (TPU), which it calls a Cloud TPU. While its first-generation chip was only suitable for inferencing, and therefore didn't pose much of a threat to Nvidia's dominance in machine learning, the new version is equally at home with both the training and running of AI systems.

At 180 teraflops, Google's Cloud TPU packs more punch, at least by that one measure, than the Volta-powered Tesla V100 at 120 teraflops (trillion floating-point operations per second). However, until both chips are available, it won't be possible to get a sense of a real-world comparison. Much like Nvidia has built servers out of multiple V100s, Google has also constructed TPU Pods that combine multiple TPUs to achieve 11.5 petaflops (11,500 teraflops) of performance.

For Google, this performance is already paying off. As one example, a Google model that required an entire day to train on a cluster of 32 high-end GPUs (probably Pascal) can be trained in an afternoon on one-eighth of a TPU Pod (a full pod is 64 TPUs, so that means on 8 TPUs). Of course, standard GPUs can be used for all sorts of other things, while the Google TPUs are limited to the training and running of models written using Google's tools.
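As a quick sanity check of those figures, using only the numbers quoted in this article rather than any official Google or Nvidia specification, the pod and one-eighth-pod math works out as follows:

```python
# Back-of-the-envelope check of the performance figures quoted above.
TPU_TFLOPS = 180        # one second-generation Cloud TPU
V100_TFLOPS = 120       # Nvidia Tesla V100, for comparison
TPUS_PER_POD = 64

print(TPU_TFLOPS / V100_TFLOPS)              # 1.5x on paper
print(TPU_TFLOPS * TPUS_PER_POD / 1000)      # 11.52 petaflops per pod
print(TPUS_PER_POD // 8)                     # 8 TPUs = one-eighth of a pod
```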

Google is making its Cloud TPUs available as part of its Google Compute offering, and says that they will be priced similarly to GPUs. That isn't enough information to say how they will compare in cost to renting time on an Nvidia V100, but I'd expect it to be very competitive. One drawback, though, is that the Google TPUs currently only support TensorFlow and Google's tools. As powerful as they are, many developers will not want to get locked into Google's machine learning framework.

While Google is making its Cloud TPU available as part of its Google Compute cloud, it hasn't said anything about making it available outside Google's own server farms. So it isn't competing with on-premises GPUs, and it certainly won't be available on competing clouds from Microsoft and Amazon. If anything, that is likely to deepen those companies' partnerships with Nvidia.

The other company that should probably be worried is Intel. It has been woefully behind in GPUs, which means it hasn't made much of a dent in the rapidly growing market for GPGPU (general-purpose computing on GPUs), of which machine learning is a huge part. This is just one more way that chip dollars that could have gone to Intel won't.

Big picture, more machine learning applications will be moving to the cloud. In some cases, if you can tolerate being pre-empted, it's already less expensive to rent GPU clusters in the cloud than it is to power them locally. That equation is only going to get more lopsided with chips like the Volta and the new Google TPU being added to cloud servers. Google knows that the key to increasing its share of that market is having more leading-edge software running on its chips, so it is making 1,000 Cloud TPUs available for free to researchers willing to share the results of their work.

The rest is here:
Google Targets Nvidia With Learning-Capable Cloud TPU - ExtremeTech

Read More..

Cloud provider snubs SAN for StorPool hyper-converged infrastructure – ComputerWeekly.com

London-based managed services provider Coreix has opted for StorPool software-defined storage in preference to SAN storage. The company has built hyper-converged infrastructure instead, using SuperMicro x86 boxes as a server and storage platform.


The move allowed Coreix to avoid a large capital outlay on SAN storage and instead scale up from a few servers.

Coreix provides hosting, managed services, private and hybrid cloud, servers and colocation from its London datacentres to about 600 clients using some 1,500 physical servers plus Dell and EMC storage arrays.

It was reluctant to spend a lot of money on large SAN arrays that don't last forever.

The company wanted to build a public cloud offering to provide enterprise-class applications to customers, but its initial efforts using CloudStack as a platform were frustrated by Dell iSCSI SAN storage that struggled to perform adequately, said Paul Davies, technical director at Coreix.

"We had issues of IOPS and resiliency, and the SANs were generally over-contested. A SAN can be extremely resilient, but to get the IOPS you need to spend £250,000," he said.

Coreix looked around for new products to support the offering. "We didn't want to spend on a chassis that could take 1PB from day one. SANs involve a lot of capex [capital expenditure]; it's cost-prohibitive for us. We needed a model where we could scale," said Davies.

Coreix deployed a hyper-converged architecture based on 10 SuperMicro servers with four KVM virtual machine hypervisors and StorPool storage, using OnApp's cloud orchestration platform. Total storage capacity is around 20TB using 600GB flash drives.

StorPool offers software-defined storage that pools capacity from commodity servers (it specifies recommended server components such as CPU, RAM and network card) fitted with SATA drives (HDD or flash) to provide performance of up to 100,000 IOPS per node.

It can provide hyper-converged infrastructure by using the same boxes to supply both server and storage capacity.
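As a rough, hypothetical sizing check of the Coreix deployment described above, and assuming raw capacity only (a real cluster would add replication or RAID overhead on top), the quoted figures line up roughly like this:

```python
import math

# Rough sizing check using only the figures quoted above. Raw capacity only;
# replication or RAID overhead would increase the drive count in practice.
TARGET_TB = 20
DRIVE_GB = 600
NODES = 10
IOPS_PER_NODE = 100_000       # StorPool's stated per-node ceiling

drives = math.ceil(TARGET_TB * 1000 / DRIVE_GB)
print(drives)                          # 34 drives, i.e. three to four per node
print(NODES * IOPS_PER_NODE)           # 1,000,000 IOPS theoretical aggregate
```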

For Coreix, the advantage of building systems in-house from commodity hardware is the ability to scale from a few instances of server and storage hardware without having to spend on a big-ticket SAN.

"It's about cost-efficiency and flexibility and not being tied to one vendor," said Davies. "We can put our own CPUs in and add storage. We can buy as we grow and don't have to buy a big chassis to start with. With a SAN you always get caught on something. It's just more cost-efficient to do it this way."

Originally posted here:
Cloud provider snubs SAN for StorPool hyper-converged infrastructure - ComputerWeekly.com

Read More..

Cisco’s servers are stuck in limbo, look likely to stay there – The Register

Comment Cisco has missed out on the shift from blade to rack servers, sales growth has turned negative, it doesn't sell to cloud providers and it has a small market share. Should it invest to grow or get out of servers altogether?

Cisco's third fiscal 2017 quarter results were disappointing, with a 1 per cent decline in revenue year-on-year to $11.9bn. The data centre segment, meaning UCS servers mostly, made $767m in revenue and was down 5 per cent. It constitutes just 6 per cent of Cisco's overall revenues.

In the previous quarter data centre revenues were $790m, down 4 per cent year-on-year, and in the quarter before that they were $834 million, down 3 per cent year-on-year. There is a pattern of decline here.

Stifel analyst and MD Aaron Rakers has charted this, showing Cisco data centre revenues and the year-on-year percentage change.

The chart shows actual numbers plus estimates looking at nine quarters.

UCS servers blazed a bright trail in the sky when they first arrived. What is going on?

Overall server sales are down, according to both Gartner and IDC. Dell and HPE lead the market, followed by IBM, Lenovo and Huawei.

IDC gave Cisco a 6.3 per cent market share in 2016's fourth quarter, with HPE having a 23.6 per cent share, Dell 17.6 per cent, IBM 12.3 per cent and Lenovo 6.5 per cent. Original design manufacturer (ODM) suppliers accounted for 7.9 per cent. Why is Cisco lagging?

Rakers charted quarterly server sales by architecture over the past few years.

Rack-optimised server sales are the big winners, with blade server sales second, a long way behind, and growth stopping. Density-optimised server sales are flattish, towers are in decline and large systems are the smallest category, although growing slightly.

Rakers next plotted Cisco's UCS server sales in the blade and rack segments, showing both revenues and revenue share percentages.

Most of Cisco's UCS revenues come from blade server sales, the declining second-placed architecture, and not rack servers, the main and growing segment. The conclusion is inescapable: Cisco has misread the server market badly, with revenue growth slowing drastically and then stopping from its first fiscal 2015 quarter, two and a half years ago.

Rakers said: "Cisco continues to face a misaligned portfolio for the mix from blade to rack servers, i.e. Cisco has ~30 per cent revenue share in blades; sub-4 per cent share in rack servers."

Cisco sells its servers to enterprises, not to the hyperscalers or cloud service providers, which tend to buy from ODMs such as Supermicro and Chinese server suppliers such as Inspur.

Cisco has been pushing its HyperFlex hyperconverged infrastructure appliance (HCIA), using OEM'd Springpath software. In March Cisco said it had gained 1,100 HyperFlex customers after nine months of sales. Nutanix has around 5,400 and we expect Dell EMC to be in that kind of area soon.

In its third-quarter results announcement Cisco did not update the 1,100 customer number. A Stifel survey of Cisco's VARs/resellers found 16 per cent thought HyperFlex was best positioned in the HCIA market while 40 per cent thought Nutanix was the leader. Some 66 per cent had sold HyperFlex systems into existing Cisco accounts, not new customers.

Rakers said that some 20 per cent of server revenues come from sales into the public cloud, and Cisco does not sell there, with ODMs and white box servers having around a 40 per cent share.

To sum up, Cisco's servers account for 6 per cent of its overall revenues, and these revenues have been declining for four quarters in a row. It has a 6.3 per cent share of the overall market, but a less than 5 per cent share in the biggest and growing rack server segment. Its progress in the HCIA market was off to a good start, but it lags a long way behind market leaders Nutanix and Dell. HPE, by buying SimpliVity, is becoming a stronger competitor.

Finally, it is not a supplier to the public cloud server market.

It seems to us that, to make progress with servers, Cisco needs to get into rack servers in a big way. But there is a more fundamental question: what is its goal here? Does it want to be a leading server supplier, up with Dell and HPE? Or is it content to have a sub-10 per cent share of the market, selling into its installed base and under continual attack from Dell, HPE and the various Chinese and ODM suppliers?

If it wants to get up with the leaders then it has to spend a lot of money on engineering development and so forth. That will be a hard call when overall revenues are declining, servers are just 6 per cent of its business and it's laying people off.

Perhaps Cisco should step back, take a deep breath, and decide to exit the server market, selling its UCS business to Lenovo, say. Perhaps on the other hand it could try something radical, like buying Supermicro.

That doesn't square with our view of Cisco having moved into servers as an adjacent market to its core networking business. It then moved into storage as an adjacent market to servers, and failed. We think Cisco sees its server market prospects as limited, and we can't see it making the investments needed to become a top four or five server supplier.

Looking ahead we reckon there'll likely be product line tweaking, statements of renewed commitment and determination, but little actual change in its situation. Servers are too large a part of Cisco's revenues to throw away, too small a part to be worth investing heavily in, and not in a dire enough situation to need a radical fix. They're stuck in limbo and look likely to stay there.

See more here:
Cisco's servers are stuck in limbo, look likely to stay there - The Register

Read More..

Why we still fear working in the cloud – Augusta Free Press

Published Thursday, May 18, 2017, 3:00 pm


Whenever there is a new technology, there is a normal amount of concern that comes with the territory. As the old saying goes, we naturally fear the unknown. But why do we still fear cloud computing when it has been around for so many years now and, in actuality, the Internet itself is the Cloud? Even so, when asked why they aren't using more cloud-based services, private individuals, nonprofits and businesses alike all express one or more of the following key concerns about working in the Cloud: security and privacy, and the quality of service they will receive.

The funny thing about the security concern is that most of the big hacks we have heard about in recent years have involved local, on-premises systems. For example, it wasn't a cloud server that was breached in the Oracle hack of 2016; it was the company's Micros point-of-sale division, leading to a significant amount of panic within its customer base. Then there were the breaches of the Trump real estate systems, which had nothing to do with a cloud-based platform because the system in use was terribly antiquated and based on local hard drives.

A bit of advice seems to be called for here in light of all the concerns over security and privacy. Most IT professionals understand that cloud servers are actually many times more secure than local mainframes, thanks to 24/7 on-site security teams and patches that are applied as soon as problems are identified, so you might want to invest in a little PR. A well-rounded digital marketing agency such as Single Grain not only provides backlinks and PPC ads but is also expert in content geared towards public relations. Let the public know that your cloud services have the latest security technology and what measures you take to ensure customers' safety. Marketing pros like Single Grain are able to overcome objections even before they are made, and sometimes this is exactly what you need to do when fear is out of control.

The second-greatest fear about cloud computing is service quality. Many startups have given the whole industry a bad name because they lacked the expertise or resources to offer the level of service their clients required. You see this time and again when major brands relegate their call centers to some obscure third-world country where the customer service reps don't speak English fluently enough to understand what you are asking them.

To address this fear, keep operations at home and seek out the top talent within your industry. Customers have a right to expect the service they are paying for, and if they encounter a glitch, no matter how trivial it may seem to you, it is nonetheless significant to them. If you've already gotten bad press, hire a digital marketing agency to run an all-out PR campaign. Nothing builds a business faster than word of mouth, but the opposite also holds true. Working in the cloud should be a worry-free process, so make every effort to offer that to your customers, and when all else fails, enlist the help of digital marketers to overcome any bad press. In the end, it pays.

Go here to see the original:
Why we still fear working in the cloud - Augusta Free Press

Read More..

Nvidia: This Could Work Out Great, Says Bernstein – Barron's

Bernstein analyst Stacy Rasgon started coverage of GPU chip maker Nvidia with the equivalent of a Buy rating and a $165 price target, arguing its market for ...
Nvidia makes the case for GPU accelerators – ZDNet


Read the rest here:
Nvidia: This Could Work Out Great, Says Bernstein - Barron's - Barron's

Read More..

Cloud Computing-New-age Mantra for Business – CIOReview

Zack Bishop, EVP and CIO, Renasant Bank. Zack Bishop joined Renasant Bank (RNST) in July 2013 as Executive Vice President and Chief Information Officer.

Is the corporate-owned data center a thing of the past?

With every company in the technology space heavily incentivizing its sales associates to push the products that generate the most recurring profit, finding relevant, unbiased information about building a data center in today's world was a challenge, so I decided to share my experience.

The Cloud has matured since a marketing division somewhere, a decade ago, renamed hosting with the buzzword "cloud computing," and now the Cloud is a household name in both the consumer and commercial space.

However, what is it, and what value does it provide to a large enterprise? Does it entirely replace the legacy data centers that large organizations have had since the 1960s?

My answer is maybe, but that maybe is based on numerous assumptions.

A few years back, when we outgrew our data center, the simple process of beginning to design a new data center became a financial planning analysis of comparative options. Do you expand the existing data center, build a new data center, utilize a co-location provider, move to a private cloud, or even a public cloud for some situations? You also have to consider a hybrid approach of two or more of these options.


The first thing we asked ourselves was this: Do we have the appropriate internal resources to manage the environment? Will we retain those associates? Are we in a location where we can recruit those associates if needed?

If the answer is "skilled, dedicated employees are not a problem," then you have options. If the answer is no, skip the remainder of the article and begin your cloud journey.

The associates are the most important part of this puzzle. There is nothing more frustrating than living the life of support tickets, service-level agreements and escalations, and passing these on to someone else who may or may not add value. Of course, there are some extremely reputable companies out there who strive for perfection; however, that is not necessarily the norm. My mission-critical systems (in my opinion) are more important than everyone else's mission-critical systems, and sometimes we need to control our own destiny.

The next step, once you validate your skilled associates, is a financial plan, and I am assuming that once you complete this exercise you will be in the same position we were in. Technology leaders are fortunate that the majority of data center infrastructure is very much commoditized, so if you have a decent spend, the chances of negotiating equivalent pricing with larger cloud providers are high. So in this situation, where spend is equal, what decision will you make?

Breaking down the financials

For the purpose of even numbers, let us say you need 100 racks, which is, give or take, 3,000-4,000 servers. If the typical rack consumes around 10 kW of power, you are looking at close to a megawatt of total power. Assuming you own the land, you will have fortified building construction expense, generators, UPS systems, fire suppression, and the list goes on and on. If you could do all of this for roughly $8 million, then after a mixture of 15- and 30-year depreciation schedules your monthly expense would be around $30,000. Another way to look at it is roughly $300 per month per rack (which excludes maintenance and recurring operational expenses). Add back in the total value of that cash during the depreciation period and you will be in the range of $500 per month per rack.

Sure, there is random stuff missing, and a vendor who resells competing products will challenge these numbers down to the penny. However, for a quick cost comparison, the above numbers are going to be close.

So how big is this building? The space that will house the servers will be approximately 2,500-3,000 sq. ft. Some say 20 sq. ft. per rack is acceptable; I would lean more towards 30 sq. ft., but the above estimates do not consider anything more than the minimal requirements. You have to consider how much room cooling will consume, and different cooling approaches consume different amounts of space. What about electrical rooms, staging areas, a carrier room, loading docks and so on? You can quickly move that total expense upwards based on these decisions.
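For readers who want to re-run the back-of-the-envelope math above, here is a small sketch. The blended 15- and 30-year depreciation is approximated as a flat 22.5-year straight line, which is my assumption rather than the author's actual schedule:

```python
# Re-running the article's build-your-own numbers. The blended 15/30-year
# depreciation is approximated as a flat 22.5-year straight line (my
# assumption, not the author's actual schedule).
RACKS = 100
KW_PER_RACK = 10
CAPEX = 8_000_000
DEPRECIATION_YEARS = 22.5       # midpoint of the 15- and 30-year mix
SQ_FT_PER_RACK = 30             # the author's higher space estimate

print(RACKS * KW_PER_RACK)                    # 1000 kW, i.e. close to 1 MW
monthly = CAPEX / (DEPRECIATION_YEARS * 12)
print(round(monthly))                         # ~29,630, the ~$30,000/month figure
print(round(monthly / RACKS))                 # ~296, the ~$300/month/rack figure
print(RACKS * SQ_FT_PER_RACK)                 # 3,000 sq. ft. of server space
```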

Comparing alternatives

Cloud computing is a challenge to compare. The partner you are working with will be more than happy to show you an ROI priced on network, compute, storage and support. However, since these items are never static in the real world, defining a baseline is virtually impossible. You have to look at current annual expense versus proposed annual expense. The above annual expense figure of $360,000 does not include the actual servers in the data center, the maintenance on that equipment, or any additional support agreements that may be in play.

Co-location facilities are easier to compare. The co-lo facility will handle power and cooling, install a fence around your rented space and send you a bill each month. In this scenario you can control your own destiny and not have to deal with data center management. Typical pricing is volume-based; you can expect to pay $200-$300 per month per rack. Therefore, it is slightly less expensive in most cases compared with building your own.
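Putting the build-your-own and co-location figures side by side for the same 100-rack example (the per-rack ranges are the article's; annualizing them is my own step) shows why co-lo comes out only slightly cheaper at the top of its range:

```python
# Annual bill for the 100-rack example, using the per-rack monthly ranges
# quoted in the article; the annualization is illustrative.
RACKS = 100
build_low, build_high = 300, 500      # own build, excluding/including value of cash
colo_low, colo_high = 200, 300        # typical co-location pricing

for label, low, high in [("build", build_low, build_high),
                         ("co-lo", colo_low, colo_high)]:
    print(label, low * RACKS * 12, "to", high * RACKS * 12)
    # build: 360000 to 600000 per year; co-lo: 240000 to 360000 per year
```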

We decided as an organization that a hybrid approach was the best choice for us. We fully embrace cloud computing where it makes sense; we constructed a data center on site for mission-critical systems, and entered into a co-location agreement for disaster recovery.

After an extensive RFP process, we chose Century Construction out of Tupelo, Mississippi, for data center construction and engineering, and T5 Data Centers for co-location.

We have been very happy with our decisions and continue to provide impressive top-tier service and security to our customers and associates.

Read more:
Cloud Computing-New-age Mantra for Business - CIOReview

Read More..