Category Archives: Cloud Servers

VMware officially lands on AWS cloud with new management and security features – SiliconANGLE News (blog)

Nearly a year after signing a landmark deal to bring its software-defined data center technology to the Amazon Web Services Inc. cloud, VMware Inc. kicked off its VMworld conference in Las Vegas today with the news that VMware Cloud on AWS is now generally available.

The service essentially enables the vast majority of companies that use VMware inside their data centers to use VMware software, which allows different operating systems and multiple applications to run on the same physical computer, with AWS services as well.

To date, companies have had difficulty moving workloads to Amazon’s cloud to take advantage of its more flexible and lower-cost computing and storage services, because many of their applications depended on VMware software that ran only on computers in company data centers. That presented customers with a tough choice: use the VMware technology they built their core applications on, but with none of the cost and flexibility of cloud computing, or use Amazon’s cloud, but without the VMware software their data centers are built on.

“They hated this binary decision that we were forcing on them,” AWS Chief Executive Andy Jassy (pictured, right) said during an appearance this morning at VMworld with VMware CEO Pat Gelsinger (left). Now, the executives said, customers can more easily use so-called hybrid cloud services that combine on-premises software and hardware with cloud services as needed.

“If this fully works, CIOs have no excuse in regard to moving VMware loads to the cloud,” said Holger Mueller, vice president and principal analyst at Constellation Research. “But let’s see if this works.”

VMware, part of Dell Technologies Inc.’s constellation of companies that also includes storage supplier Dell EMC, also announced a raft of services for the VMware Cloud today. Initially, VMware Cloud is available in the AWS U.S. West region, but other regions will be added throughout 2018. VMware said the integration will enable customers to run applications across operationally consistent vSphere-based private, public and hybrid cloud environments, with the option of expanding to AWS elastic or bare-metal infrastructure.

When the AWS-VMware deal was announced last October, it was apparent that it could reset the competitive environment in computing, in particular presenting new challenges for IBM Corp., which had signed its own deal with VMware earlier in 2016, as well as for Google Inc.’s cloud platform and Microsoft Corp., whose No. 2-ranked Azure public cloud had claimed the lead in hybrid cloud computing.

The arrangement with AWS offers some benefits for VMware, including a connection to the leading public cloud provider that its customers have been clamoring for. “When your own cloud fails, you need to join the ones that work,” Mueller told SiliconANGLE. VMware now focuses on add-on software, such as application security.

But it also means AWS could ultimately steal some of VMware’s customers, if it results in what Dave Vellante, chief analyst at SiliconANGLE Media’s Wikibon, has called a potential “one-way trip to Amazon cloudville.” Moreover, said Mueller, the arrangement doesn’t help Dell sell more servers into on-premises data centers.

As for Amazon, Mueller said, “AWS needs a piece of the on-premises enterprise load and this is the way.” He added that AWS’s offer to host VMware instances on so-called bare-metal servers, those with no operating software installed on them, indicates how much it needs VMware’s help to reach large enterprise customers, since AWS had generally eschewed bare-metal arrangements.

The offering will be delivered, sold and supported by VMware as an on-demand service. It’s powered by VMware Cloud Foundation, a software-defined data center platform that includes vSphere, VMware vSAN and VMware NSX virtualization technologies managed by VMware vCenter. The initial set of cloud services includes six modules:

Discovery centralizes inventory information and cloud accounts across AWS, Microsoft Azure and VMware clouds, making it easier for information technology departments to search for and identify workloads. Administrators can group cloud resources even if they span multiple clouds. Built-in search and filters enable administrators to filter resources based on cloud attributes.
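To make the idea of attribute-based filtering across clouds concrete, here is a minimal, purely illustrative sketch. The record fields, names and values are hypothetical and are not the Discovery service’s actual data model or API:

```python
# Illustrative only: a toy multi-cloud inventory filter in the spirit of
# what Discovery describes. All field names and values are made up.
inventory = [
    {"name": "web-01", "cloud": "aws",    "region": "us-west-2", "state": "running"},
    {"name": "db-01",  "cloud": "azure",  "region": "westus",    "state": "stopped"},
    {"name": "app-01", "cloud": "vmware", "region": "on-prem",   "state": "running"},
]

def filter_resources(records, **attrs):
    """Return records matching every given cloud attribute."""
    return [r for r in records if all(r.get(k) == v for k, v in attrs.items())]

# Filter across all clouds by a shared attribute:
running = filter_resources(inventory, state="running")
print([r["name"] for r in running])  # ['web-01', 'app-01']
```

The point of such a tool is that the same query works regardless of which cloud hosts each workload, because the inventory is normalized into one schema first.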

AppDefense protects applications by embedding application control and threat detection and response capabilities into vSphere-based environments. It’s tightly integrated with the NSX networking platform and operates within the vSphere hypervisor to create a knowledge base of the correct state and behavior of each endpoint for change detection.

Cost Insight helps organizations analyze their cloud spending and identify savings opportunities. It provides detailed visibility into public and private cloud costs on AWS, Azure and VMware environments and enables drill-down to identify cost drivers. Cost Insight also identifies stopped virtual machines and associated storage resources across public and private clouds to reduce waste.

Network Insight analyzes application traffic flows between different tiers, virtual and physical network layers, and public and private clouds. This supports application security and load balancing use cases, and makes it easier for cloud administrators to manage and troubleshoot large-scale NSX deployments.

NSX Cloud provides a single management console and common application program interface for monitoring and securing applications that span multiple private and public clouds. It features a micro-segmentation security policy that can be defined once and applied to application workloads running anywhere.

Wavefront is a metrics monitoring and analytics platform that gives developers insight into the performance of highly distributed cloud-native services to detect performance anomalies while enabling high availability. Operating at what VMware said is massive scale, Wavefront gives DevOps teams instant visualization of millions of data points per second, helping them resolve bottlenecks more efficiently and proactively.

VMware also said it’s expanding Cloud Foundation’s scope with new partner offerings. They include support from CenturyLink Inc., Rackspace Inc. and Fujitsu Ltd. New hardware platforms that support Cloud Foundation include Dell EMC’s VxRack SDDC, Hitachi Data Systems Corp.’s UCP-RS, Fujitsu Primeflex and Quanta Cloud Technology LLC’s QxStack.

VMware’s shares closed up nearly 2 percent today, at about $104.68 a share, on a relatively flat day for the overall market.

With reporting from Robert Hof

(* Disclosure: SiliconANGLE Medias video unit, theCUBE, is a paid media partner at VMworld. Stories on SiliconANGLE are written independently of coverage on theCUBE. Sponsors have no editorial influence on content on SiliconANGLE or theCUBE.)

VMware-on-AWS is live, and Virtzilla is now a proper SaaS player – The Register

VMware CEO Pat Gelsinger last week introduced the company’s second-quarter results by saying the company has embarked on “a multi-year journey from a compute virtualization company to offer a broad portfolio of products driving efficiency and digital transformation.”

And today at VMworld the company began to explain what that mouthful of jargon meant: a strategy to put the company at the center of multi-cloud management.

The clearest expression of Gelsinger’s words is its half-dozen new software-as-a-service offerings, namely Discovery, AppDefense, Cost Insight, Network Insight, NSX Cloud and Wavefront.

All six are subscription services, accessible through existing VMware accounts. And all six are new stuff for your VMware account manager, or channel partner, to suggest. If you’re one of the few who resisted the company’s No Naked vSphere push, VMware’s going to come at you again, this time as a software-as-a-service vendor.

The Register expects the company will push hardest with AppDefense, because it’s created a new business unit to back a product it feels is genuinely new. “Most security is about finding bad; we are about ensuring good,” says Tom Corn, senior veep of the Security Product group at VMware.

The Register revealed the basics of AppDefense well before its announcement. We had to wait for today to learn that it can build its whitelist of acceptable VM behaviour by interacting with either vCenter or automated provisioning tools like Jenkins or Maven. Linking with those tools is an effort to make AppDefense offer something to DevOps practitioners. It’s also trying to impress line-of-business types by offering them a mobile app that alerts them when applications misbehave, so that all stakeholders can participate in decisions about how to respond.

AppDefense will be sold as SaaS or on-premises software. Either way, it should do well: security types The Register’s virtualization desk have spoken to feel Virtzilla is onto something here!

VMware’s favourite news from this year’s event is that the company’s deal with Amazon Web Services has come to fruition. AWS now hosts servers running Cloud Foundation, the bundle of vSphere, VSAN, NSX and vCenter that is intended to mirror on-premises implementations.

It’s all available as of today, to run in AWS alone or in a hybrid cloud spanning an on-premises implementation.

For now, it’s only available in one AWS Region, US West, and you can only buy by the hour. One- and three-year subscriptions are due soon, as is a global rollout that will continue deep into 2018. There’s just one server type, too, and while vSphere lets you slice and dice it as it would any other server, there’s no hint of the varied instance types AWS and other clouds offer.

At least the server is pleasingly grunty. Each host has a pair of CPUs, 36 cores, 72 hyper-threads, 512GB of RAM, local flash storage (3.6TB cache, 10.7TB raw capacity tier). But you’ll need four hosts to build a cluster!
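Taking the per-host figures above at face value, a back-of-the-envelope sketch of what the minimum four-host cluster adds up to (illustrative arithmetic only, ignoring vSAN overheads and any capacity reserved by the service):

```python
# Per-host figures as quoted in the article.
host = {
    "cores": 36,       # physical cores across the two CPUs
    "threads": 72,     # hyper-threads
    "ram_gb": 512,
    "cache_tb": 3.6,   # local flash cache tier
    "capacity_tb": 10.7,  # raw capacity tier
}
cluster_size = 4  # minimum hosts needed to build a cluster

# Simple linear scaling to cluster totals.
cluster = {k: v * cluster_size for k, v in host.items()}
print(cluster["cores"], cluster["ram_gb"])  # 144 2048
```

So the entry-level footprint is 144 cores, 2TB of RAM and roughly 42.8TB of raw flash capacity, which is why the hourly pricing discussed below should be read as a per-cluster, not per-host, commitment.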

There is integration between VMware-on-AWS and some AWS services.

VMware will run and support the service, in contrast to the arrangement it has with IBM and the other ~4,300 vCloud Air Network partners that run vSphere-based clouds. Those partners get a new version of vCloud Director, plus more hardware partners ready to sell them servers preloaded with Cloud Foundation. And perhaps some worry beads, for stress relief and/or prayer as VMware challenges them like never before, because the new service integrates with some AWS services. We’re told that the VMware service lives in the same data centres as services like Lambda, so piping them into apps will be low-latency.

In the past VMware partners have told El Reg they feel VMware’s cloud partnerships aren’t bad for business, because they get users talking about vSphere-powered clouds. Now we hear some are re-thinking that position, but the pricing for VMware on Amazon may well ease their concerns, because it isn’t super-cheap.

Here’s the pricing scheme.

Remember: you’ll probably need at least four hosts, so actual costs will be rather more than the single-host cost.

VMware justifies these prices by saying they stack up well on total cost of ownership compared to either on-prem or public clouds.

Here’s the company’s math.

That calculation excludes bandwidth and IP address charges, and assumes VMs have a pair of vCPUs, 8GB RAM and 150GB of storage.
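Combining those VM assumptions with the host specs quoted earlier gives a rough sense of how many such VMs one minimum cluster could hold. This is an illustrative estimate only; it ignores hypervisor overhead, vSAN replication and CPU oversubscription policy, all of which change the real number:

```python
# Rough, illustrative capacity estimate: how many of the article's
# reference VMs (2 vCPUs, 8 GB RAM, 150 GB storage) fit on a four-host
# cluster, ignoring hypervisor overhead and vSAN replication.
hosts = 4
host_threads, host_ram_gb, host_storage_gb = 72, 512, 10.7 * 1000

vm_vcpus, vm_ram_gb, vm_storage_gb = 2, 8, 150

by_cpu = hosts * host_threads // vm_vcpus        # assume 1 vCPU per hyper-thread
by_ram = hosts * host_ram_gb // vm_ram_gb
by_storage = hosts * host_storage_gb // vm_storage_gb

print(by_cpu, by_ram, int(by_storage))  # 144 256 285
print("binding constraint:", min(by_cpu, by_ram, int(by_storage)))  # 144
```

Under these naive assumptions CPU runs out first, at around 144 such VMs per cluster; with the 2:1 or higher vCPU oversubscription common in practice, RAM would likely become the limit instead.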

VMware’s attempt to build a public cloud failed, as did its early SaaS forays.

The company’s now turned that around, because the AWS deal gives it unrivalled scale, albeit at a perhaps-unsettling price.

The new SaaS offerings do two things:

VMware has done astoundingly well to keep Hyper-V’s market share small. But anyone who needs new servers or storage now has to consider either hyperconverged infrastructure or Azure Stack, because both offer strong alternatives to traditional infrastructure. Azure Stack also makes hypervisors irrelevant, and therefore makes the idea of Windows-on-VMware look a bit archaic.

Starting with last week’s earnings call, and already in pre-VMworld briefings, VMware’s counter-argument is that it’s happy for you to use Azure in any form, so long as you don’t needlessly rip and replace perfectly good vSphere in order to buy into Microsoft’s hybrid vision.

The new SaaS tools give you reasons not to ditch vSphere, by making multi-cloud wrangling easier and making vCenter the place you’ll do it. AppDefense helps, too, because it looks a useful tool that won’t hurt even if only deployed as one layer of a defense-in-depth strategy. It needs vCenter, too. And if vCenter is the place to do some security, and do multi-cloud management, it’s a lot harder to contemplate ejecting it. That the VMware/AWS tie-up has quickly gone beyond IaaS and into AWS’ services also suggests Virtzilla has found its way into a position of cloudy strength.

For now, anyway. Clouds move fast, and so do strategies to catch them.

Socionext Partners with Advantech to Offer High-Density, Low-Cost … – Design and Reuse (press release)

Scalable, Robust, Low-power, and Easily Deployable Solutions for service providers and other video-intensive applications

SUNNYVALE, Calif. and MILPITAS, Calif., Aug. 31, 2017 — Socionext Inc., a world leader in hardware HEVC encoding, and Advantech, creator of innovative video acceleration solutions, today announced a strategic partnership to provide live hardware transcoding solutions for the data center supporting MPEG2, AVC (H.264), and HEVC (H.265).

Socionext’s real-time dense transcode solution, also known as the “Media Cloud”, enables advanced HEVC compression technology and real-time transcoding capabilities for OTT applications. Socionext’s extended partnership with Advantech includes the integration of Socionext’s Media Cloud technology into Advantech’s VEGA 7000 Family of High Density Video Servers to enable agile and cost-effective live UHD cloud services for the new video-centric era.

“We are seeing an increasing need to lower the cost of ownership by media, telecom and internet companies that are seeking to address the ever-increasing mass consumption of streaming high-quality video,” said David Lin, VP of Video Solutions at Advantech. “Socionext, as our valued partner, is able to solve the power, density, and performance technical design requirements we are looking for in order for us to develop a cost-competitive, highly-efficient transcoding solution with adaptive bitrate (ABR) streaming capabilities for live cloud media service providers.”

The Advantech VEGA 7000 is a family of accelerated video processing servers which combine best practices from video and IT within an off-the-shelf platform that has been optimized to efficiently scale throughput of high-density transcoding applications in live OTT and cloud workflows. Up to four VEGA-3318 accelerators can be integrated into a 1U server to deliver up to 32 x 4Kp60 live HEVC profiles per rack unit, the highest density available in the market. This allows for large-scale, energy- and cost-efficient data center deployments that benefit from a 20X rack space and power reduction when compared to non-accelerated solutions. Advantech VEGA solutions for the data center minimize development efforts by providing a comprehensive software package that features Linux and Windows SDKs, an FFmpeg plug-in and virtualization-friendly drivers supporting OpenStack. Advantech also offers hardware and software design and customization services for maximum deployment flexibility.
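The density claim above implies a simple per-accelerator figure, and scales linearly with rack space. A small sketch of that arithmetic (the 40U usable-rack figure is a hypothetical example, not from the press release):

```python
# Density arithmetic from the press release: up to four VEGA-3318
# accelerators per 1U server, delivering 32 x 4Kp60 live HEVC profiles
# per rack unit.
accelerators_per_1u = 4
profiles_per_1u = 32
profiles_per_accelerator = profiles_per_1u // accelerators_per_1u
print(profiles_per_accelerator)  # 8 -> eight 4Kp60 HEVC profiles per accelerator

# Scaling to a hypothetical 40U of usable rack space:
usable_u = 40
print(usable_u * profiles_per_1u)  # 1280 concurrent 4Kp60 profiles per rack
```

That linear scaling, rather than raw single-stream speed, is the selling point for the live OTT workflows the release targets.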

“Advantech offers decades of expertise in complex hardware and software system integration and design services,” said Yasuhiro Wakimoto, VP of the Enterprise Solution Business Unit at Socionext. “Socionext and Advantech have a long history together providing solutions for ‘live’ transcode broadcasting and processing large volumes of media data for video systems. This partnership further extends the close relationship.”

Advantech will demonstrate its VEGA 7000 Series of High Density Video Servers for the Media Cloud at IBC 2017 in Hall 11, Booth C32, taking place at RAI, Amsterdam, from September 15-19, 2017. For more information, visit or email

About Advantech

Founded in 1983, Advantech is a leader in providing trusted, innovative products, services, and solutions. Advantech VEGA Video Platforms and PCIe Adapters are designed to boost video infrastructure performance from acquisition to distribution at the lowest power budget while fully complying with the media industry’s needs. By providing access to the latest 4K/8K UHD video processing and IP media technologies on commercial off-the-shelf IT platforms, we accelerate the deployment of next-generation, open and more efficient video solutions across a wide range of applications, from broadcast encoding and high-density OTT transcoding to cloud, mobile and 360-degree video. Advantech’s standard portfolio can be tailored to meet a range of system requirements, significantly reducing time-to-market effort for our customers. For more information, visit

About Socionext Inc.

Socionext is a new, innovative enterprise that designs, develops and delivers System-on-Chip products to customers worldwide. The company is focused on imaging, networking, computing and other dynamic technologies that drive today’s leading-edge applications. Socionext combines world-class expertise, experience, and an extensive IP portfolio to provide exceptional solutions and ensure a better quality of experience for customers. Founded in 2015, Socionext Inc. is headquartered in Yokohama, and has offices in Japan, Asia, United States and Europe to lead its product development and sales activities.

Municipal adoption of the cloud – American City & County (blog)

By Pete Eichorn

Although state and federal agencies are migrating steadily to the cloud, municipal governments have been slower to jump on the trend. Cloud technology presents a challenge for municipal agencies, which often don’t know where to start, whom to ask for help, whether they need a third-party partner, or whether laws and statutes even allow them to move data to the cloud.

Try before you buy

Fortunately, city and county governments don’t have to go all in from the beginning. The cloud is a perfect small-scale testing ground. You can try cloud-based software as a service, or SaaS, which allows your team to use existing software, delivered via a Web browser, and pay for it on a usage-time basis or with a monthly fee that grants access to designated staff members.

For example, a county that wants to replace its legacy financial system to better adhere to its state’s financial reporting requirements and eliminate its long-standing manual processes might test SaaS through the cloud. The county would gain access to an existing, third-party software product that could house its data, provide real-time access to financial transactions and reports, and deliver other services the municipality might not even have anticipated.

The county would have avoided a large capital outlay. The cloud-based software would automatically scale when the county expected a larger demand on its financial services software and, if the software didn’t meet the county’s needs during the test period, the county would be free to walk away and try another solution.

The security question

Data is no less secure in the cloud than on an internal server. That said, government can’t outsource responsibility for its data. A cloud provider won’t be responsible if there is a breach, so you must know what you are buying, where your data is and how it is being safeguarded. If you decide to test a cloud-based service, get the provider to disclose, and include in your contract, its terms and conditions, security limits, liabilities and responsibilities.

Your responsibilities to secure data still exist in the cloud as they do in traditional technologies. You must still comply with applicable laws and follow best practices, such as ensuring that user accounts and passwords are managed properly.

Which cloud?

There are several kinds of clouds, and reasons you might choose one over another. Private clouds, maintained on-site, offer flexibility and control, and often are better suited when you want the cloud’s benefits and are willing to manage technology capabilities yourself. Public clouds, maintained via the internet on a cloud provider’s servers, are a good fit for accessing the latest technologies and for changing your financial management from a capital-expense to an operating-expense model.

Another option is hybrid cloud, which offers access to the best aspects of both public and private clouds. A hybrid cloud solution works well when you need to balance the needs of legacy systems and new, cloud capabilities.

Now is the time

More and more, software providers are moving toward offering their products and services via the cloud. Innovation will focus on SaaS, and updates and support for legacy systems will become rarer.

So now is unquestionably the time for municipal governments to commit to understanding the potential benefits and dipping their toes in the water with test projects.

Research the benefits and individual services various cloud providers offer and determine which meet your needs. If you are short on internal resources, look for a government services provider to help you navigate the options and set out a cloud adoption plan that integrates with your existing systems, security models and goals.


IBM cooks up a hardware architecture for tastier cloud-based services – TechTarget

IBM hopes to raise its competitive profile in cloud services when it introduces new hardware and cloud infrastructure by the end of this year or early 2018.

The company will add a new collection of hardware and software products that deliver artificial intelligence (AI) and cloud-based services faster and more efficiently.

Among the server-based hardware technologies are 3D Torus, an interconnection topology for message-passing multicomputer systems, and new accelerators from Nvidia, along with advanced graphics processing unit (GPU) chips. Also included is Single Large Expensive Disk technology, a traditional disk technology currently used in mainframes and all-flash-based storage, according to sources familiar with the company’s plans.

The architecture achieves sub-20-millisecond performance latencies by eliminating routers and switches, and it embeds those capabilities into chips that communicate more directly with each other, one source said.

The new collection of hardware applies some of the same concepts as IBM’s Blue Gene supercomputer, which were among those used to create Watson. In the model of those special-purpose machines, the new system is designed specifically to do one thing: Deliver AI-flavored cloud-based services.

These technologies, which can work with both IBM Power and Intel chips in the same box, will be used only in servers housed in IBM’s data centers. IBM will not sell servers containing these technologies commercially to corporate users. The new technologies could reach IBM’s 56 data centers late this year or early next year.

IBM’s cloud business has grown steadily from its small base over the past three to four years to revenues of $3.9 billion in the company’s second quarter reported last month and $15.1 billion over the past 12 months. The company’s annual run rate for as-a-service revenues rose 32% from a year ago to $8.8 billion.

At the same time, sales of the company’s portfolio of cognitive solutions, with Watson at its core, took a step back, falling 1% in the second quarter after 3% growth in this year’s first quarter.

That doesn’t represent a critical setback, but it has caused some concern, because the company hangs much of its future growth on Watson.

Three years ago, IBM sank $1 billion into setting up its Watson business unit in the New York City borough of Manhattan. IBM CEO Ginni Rometty has often cited lofty goals for the unit, claiming Watson would reach 1 billion consumers by the end of 2017, $1 billion in revenues by the end of 2018 and, eventually, $10 billion in revenue by an unnamed date. Achieving those goals requires a steady infusion of AI and machine learning technologies.

IBM executives remain confident, given the technical advancements in AI and machine learning capabilities built into Watson and a strict focus on corporate business users, while competitors — most notably Amazon — pursue consumer markets.

“All of our efforts around cognitive computing and AI are aimed at businesses,” said John Considine, general manager of cloud infrastructure at IBM. “This is why we have made such heavy investments in GPUs, bare-metal servers and infrastructure, so we can deliver these services with the performance levels corporate users will require.”

However, not everyone is convinced that IBM can reach its goals for cognitive cloud-based services, at least in the predicted time frames. And it will still be an uphill climb for Big Blue as it vies with cloud competitors that were faster out of the gate.

Lydia Leong, an analyst with Gartner, could not confirm details of IBM’s upcoming new hardware for cloud services, but pointed to the company’s efforts around a new cloud-oriented architecture dubbed Next Generation Infrastructure. NGI will be a new platform run inside SoftLayer facilities, but it’s built from scratch by a different team within IBM, she said.

IBM intends to catch up to the modern world of infrastructure with hardware and software more like those from competitors Amazon Web Services and Microsoft Azure, and thus deliver more compelling cloud-based services. NGI will be the foundation on which to build new infrastructure-as-a-service (IaaS) offerings, while IBM Bluemix, which remains a separate entity, will continue to run on top of bare metal.

Leong said she is skeptical, however, that any new server hardware will give the company a performance advantage to deliver cloud services.

“My expectation is IBM will not have a long-term speed advantage with this — I’m not even sure they will have a short-term one,” Leong said. “Other cloud competitors are intensely innovative and have access to the same set of technologies and tactical ideas, and they will move quickly.”

IBM has stumbled repeatedly with engineering execution in its cloud portfolio, which includes last year’s launch and demise of a new IaaS offering, OpenStack for Bluemix. “[IBM has] talked to users about this [NGI] for a while, but the engineering schedule keeps getting pushed back,” she said.

IBM now enters the cloud infrastructure market extremely late — and at a time when the core infrastructure war has been mostly won, Leong said. She suggested IBM might be better served to avoid direct competition with market leaders and focus its efforts where it has an established advantage and can differentiate with things like Watson.

Where does a business’s data live? – Information Age

Due to improved analytics and the rapidly growing benefits of cognitive computing, data is becoming an even more critical growth asset central to all business operations.

Not only this, but with the increase of cyber attacks and new regulation such as the General Data Protection Regulation (GDPR) soon coming into force, control over data is now at the top of all organisations’ agendas.

Now is the time to rethink architectures to enable companies to use data to drive digital transformation, while keeping full control of one of their key assets: data.

In order to compete in a market being transformed by the availability of cloud services, many businesses are realising that they need complete control and transparency of their data management where it is, who has access to it and how it is protected.

The way they collect, organise and activate this data will be essential to their future. This is why many businesses are now putting their data in the cloud ensuring their most critical asset is always in the hands of experts, with increased access to cognitive analytics.

Data on the move

In today’s digital environment, data is crucial to the operation of businesses. They need secure, compliant access to this data through a multitude of channels. For example, a bank needs access at the branch, online, and via mobile.

Wherever the operation is taking place, there needs to be a secure and compliant connection to the data centre, with consistent processes across the whole environment from mobile to mainframe.

Making this data available across geographies and channels only makes security more complex, with a growing range of endpoints and regulation to consider.

Data residency is important for organisations to consider when moving to the cloud. IBM, through its growing cloud footprint in Europe, the UK and other global centres, can offer clients the choice of where their data is stored.

The giant advocates for client choice of data residency while opposing government-mandated data localisation, with the exception of sensitive data, for example data related to national security or critical infrastructure.

The General Data Protection Regulation (GDPR) is another key piece of regulation to consider, which comes into force next year.

The aim of GDPR is to protect all EU citizens from privacy and data breaches in an increasingly data-driven world, massively increasing the scope of data protection and residency from the 1995 directive.

One of the major changes set to impact businesses is that serious breaches, which are likely to result in a risk to the rights and freedoms of individuals, must be reported within 72 hours. Those in breach can be fined up to 4% of annual global turnover or €20 million, whichever is greater.
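The penalty rule quoted above is a simple "greater of" formula, which a one-line sketch makes concrete (turnover figures below are hypothetical examples):

```python
# GDPR's maximum administrative fine as described in the article:
# up to 4% of annual global turnover or EUR 20 million, whichever is greater.
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    return max(0.04 * annual_turnover_eur, 20_000_000)

print(max_gdpr_fine(100_000_000))    # 20000000 -- the EUR 20M floor applies
print(max_gdpr_fine(2_000_000_000))  # 80000000.0 -- 4% of turnover dominates
```

The "whichever is greater" clause means even smaller organisations face a €20 million ceiling, which is why the exposure cannot be dismissed as proportional to size.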

The current models in place for handling data will need to change and businesses are looking to sophisticated cloud providers to assist with this complexity.

With the prospect of significant fines for those in breach, security is a top-line priority for many organisations. While some still view the cloud as less secure, enhanced security can be a key reason for moving to the cloud, but not all clouds are created equal.

Some cloud providers can often offer more advanced security expertise than is available in house, managing end-to-end security needs, from identity and access controls to data protection and even threat intelligence.

This gives the end customer complete control of the cloud including the application and the hypervisor, all the way to the processor level.

Knowing who has access to what and where, otherwise known as identity and access management, is crucial.

Cloud identity management, also known as Identity as a Service, gives businesses complete control over this, minimising the insider threat and ensuring there are no weak links in the chain.

Hardware can also prove vulnerable. With bare metal cloud servers, businesses can ensure that security and data geo-fencing is provided all the way to the processor level.

They get the flexibility and pay-as-you-go benefits of cloud without sacrificing control or visibility over the environment.

With logical access and security controls, businesses can be assured that workloads are only being run on trusted hardware in the designated geography.
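Conceptually, that assurance is a scheduling-time check: a workload is placed only on hardware that is both attested as trusted and located in the designated geography. A hypothetical sketch (host names, attestation flags and regions are invented for illustration):

```python
# Hypothetical sketch of a geo-fencing placement check: before a
# workload is scheduled, verify the target host is attested as trusted
# hardware AND sits in the required geography.

TRUSTED_HOSTS = {
    "bm-host-01": {"attested": True,  "region": "eu-west"},
    "bm-host-02": {"attested": True,  "region": "us-east"},
    "bm-host-03": {"attested": False, "region": "eu-west"},
}

def can_schedule(host: str, required_region: str) -> bool:
    """Allow a workload only on attested hardware in the required region."""
    info = TRUSTED_HOSTS.get(host)
    return bool(info) and info["attested"] and info["region"] == required_region

print(can_schedule("bm-host-01", "eu-west"))  # True
print(can_schedule("bm-host-03", "eu-west"))  # False: hardware not attested
```

In practice the attestation flag would come from a hardware root of trust rather than a static table, but the placement logic is the same.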


Businesses can be further reassured that the cloud is safe and secure with the introduction of the European Cloud Code of Conduct.

The Code is rigorous: companies that sign up must meet all of the requirements, with no room for opt-outs or exceptions.

Not only does the Code align with the current legal framework for data protection in Europe, but it also aligns with core elements of the GDPR.

One of the industries most concerned with security is financial services. Even businesses in this highly regulated industry are now expanding their cloud solutions.

While there is still a demand for in-house IT to run certain core systems and store particular data types, hybrid cloud is increasingly being seen as the most effective solution for this type of industry, integrating local, public and shared options.


One bank already taking advantage of these benefits is Boursorama, a subsidiary of the Société Générale Group and a leader in online banking, online brokerage and financial information services.

By deploying its services through a cloud infrastructure, Boursorama has been able to improve management of IT resources by quickly activating servers based on specific client needs and usage. It has also been able to strengthen the security and resilience of its websites.

Through a global, 24/7 managed backup solution for its back office that leverages one of IBM's business resiliency centres, Boursorama has been able to speed up recovery times, providing faster access to the crucial applications and data that are core to its business.

In a data-led economy, businesses today can't afford to lose track of their most valuable asset. So the question is: do you know where your data is?


See the original post:
Where does a business’s data live? – Information Age

Veeam follows Virtzilla’s cloud up the Amazon – The Register

Veeam now backs up VMware Cloud data on Amazon Web Services (AWS).

VMware has an arrangement with Amazon whereby it supplies AWS incarnations of servers running the VMware Cloud Foundation product bundle of vSphere, VSAN, NSX and vCenter. Cloud Foundation can run on premises or in AWS.

As Veeam prides itself on backing up data for Virtzilla's customers, it naturally goes where Virtzilla goes. So it's gone up the Amazon, and the Veeam Availability Suite is now available to customers of VMware Cloud on AWS.

Veeam president and joint-CEO Peter McKay said he was excited, because this news “reaffirms Veeam’s commitment to delivering Availability for any service, across any platform”.

It’s a relatively straightforward product extension by Veeam and a typically fast announcement. All other VMware-supporting data protection suppliers will inevitably have to follow suit.



Juniper adding microsegmentation to Contrail cloud – TechTarget

Juniper Networks Inc. has added tools for network microsegmentation in Contrail — an important feature for users of the software-defined networking controller, but a capability that’s unlikely to reverse Juniper’s decline in security revenues.

Juniper introduced the capability this week, along with other security features the company labeled as Juniper Contrail Security. In general, Juniper is focusing its latest stab at strengthening its security portfolio on companies with multiple data center environments in a Contrail cloud.

Microsegmentation tools, which have become a popular way to contain malware in the data center, allow corporate IT staff to build a zero-trust security zone around a set of resources, such as network segments and workloads. In network virtualization within SDN, microsegmentation adds firewall capabilities to east-west traffic.

VMware and Cisco have had microsegmentation capabilities in their SDN products, NSX and Application Centric Infrastructure (ACI), respectively, for several years. NSX has outpaced ACI deployments in the data center, primarily because microsegmentation has become its leading use case for protecting applications that run on top of VMware’s ubiquitous server virtualization products.

Companies use Juniper Contrail and vRouter — the vendor’s virtualized router software — to create a network overlay that extends across cloud-based environments in multiple data centers. The core users of Contrail and Juniper switches include cloud companies that provide infrastructure, platform or software as a service. Others include large financial institutions.

With the latest release, companies can use the Contrail cloud console to carve up their data center LAN and intradata-center WAN, and then create and distribute policies that establish restrictions on communications between network microsegments. Also, Juniper is providing tools that give companies the option of using third-party firewalls for policy enforcement.
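The policy model described here can be sketched as a default-deny allow-list between microsegments. This is an illustrative data structure, not Contrail's actual API; segment names and ports are invented:

```python
# Illustrative sketch of a zero-trust microsegmentation policy:
# an explicit allow-list of (source segment, destination segment, port).
# Any flow not on the list is dropped, including east-west traffic.

ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8080),
    ("app-tier", "db-tier", 5432),
}

def permit(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default-deny: only explicitly allowed segment pairs may talk."""
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS

print(permit("web-tier", "app-tier", 8080))  # True
print(permit("web-tier", "db-tier", 5432))   # False: web may not reach the database directly
```

The containment value comes from the default-deny posture: malware that compromises the web tier cannot pivot straight to the database, because no rule permits that east-west flow.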

The capability is available for cloud environments using bare-metal servers, Linux containers built and managed through the Kubernetes system, and OpenStack — the modular architecture for creating and managing large groups of virtual private servers. Kubernetes and OpenStack are open source technologies.

Juniper has contributed Contrail’s source code to the open source community through an initiative called OpenContrail. Contrail is a Juniper-supported binary version of OpenContrail, which is available under the Apache 2.0 license.

Juniper has contributed the source code of its latest security features to the OpenContrail community, said Pratik Roychowdhury, the product manager for Contrail. The site GitHub is the online repository for OpenContrail.

“Everything that I’m talking about in Contrail Security is out there [on GitHub],” Roychowdhury said. “Anyone can essentially go and take a look at the source code.”

Besides microsegmentation, Juniper has added other features to the Contrail console. They include a visual depiction of interactions between applications in hybrid cloud environments and analytics that detect anomalies and suggest corrective actions.

The latest features are useful to companies using Juniper switches or its SRX firewalls running alongside other vendors’ switches, said Lee Doyle, an analyst at Doyle Research and a TechTarget contributor. Either scenario would be helpful to Contrail adoption.

“Contrail is one of many SDN controllers that has struggled to break through [a competitive market],” Doyle said. “It’s not contributing a huge amount of revenue.”

What is contributing a growing share of Juniper's revenue is switching. In the quarter ended June 30, switching revenue grew nearly 32% year over year to $276 million. However, the company's overall switching market share is small at 3.4%, according to stock research firm Trefis.

Security, on the other hand, remains a weak spot in Juniper's portfolio. Security revenue has fallen from $670 million in 2012 to $318 million last year, according to Trefis. In the June quarter, it fell 12% to $68.7 million.

“Quite frankly, the focus right now on security has been on achieving stability and returning to growth,” Juniper CEO Rami Rahim said in an online transcript of the July earnings call with financial analysts. The transcript is available on the financial site Seeking Alpha.


The future of serverless cloud looks a lot like physical servers – TechRepublic

In the cloud, hardware no longer matters, especially as the world goes gaga for serverless. At least, that would be the case but for one inconvenient truth: Serverless is powered by… servers.

Even if one accepts that the cloud increasingly allows developers to focus on writing code and not bothering with how it's run, the hardware that powers the cloud looks set to matter for a long, long time, something Google's Kelsey Hightower humorously points out. In fact, in areas like edge computing, hardware has never mattered more, as former Goldman Sachs top technologist Don Duet told me in an interview. By Duet's reckoning, "The 'land grab' for the next generation of computing literally involves physical assets like land and fiber."


Duet believes this so strongly that he dumped his impressive Wall Street position to join an edge computing startup, Vapor IO, in Texas. Despite being able to architect the technology strategy for the world’s preeminent investment banking firm, Duet needed to break away to solve a serious problem, that of the speed of light:

In such a world, the cloud becomes an n-tier fabric that stretches from the centralized data center to the edge of the wireless network. The most interesting aspects of edge computing will emerge as the full capabilities of cloud computing get pushed to the edge.
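Duet's speed-of-light point can be made concrete with rough arithmetic: light in optical fibre propagates at roughly 200,000 km/s (about two-thirds of c), so propagation delay alone adds on the order of a millisecond per 100 km each way. A back-of-the-envelope sketch (distances are illustrative):

```python
# Back-of-the-envelope sketch of why distance drives latency.
# Light in fibre travels at roughly 200,000 km/s, i.e. 200 km per ms.

FIBRE_SPEED_KM_PER_MS = 200.0  # approximate propagation speed in fibre

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip propagation delay over fibre, in milliseconds."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

print(round_trip_ms(10))    # 0.1 ms to a nearby edge site
print(round_trip_ms(2000))  # 20.0 ms to a distant central data center
```

No amount of engineering removes that floor; the only fix is moving the compute closer, which is exactly the edge argument.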

Getting to the edge, however, means getting into the muck of physical infrastructure. To wit, Vapor IO signed a partnership with shared wireless infrastructure player Crown Castle to get access to over 40,000 cell towers and more than 60,000 miles of metro fiber, not to mention a growing small cell footprint. "A great deal of [edge computing success] depends on real estate and urban infrastructure," Duet says, and in this case it means a network of fully managed, programmable data centers across a nationwide footprint of edge locations.

Of course, Vapor IO is hardly unique in trying to deliver on edge computing. AWS announced Greengrass, and Microsoft has its Azure IoT. In Duet’s mind, however, these don’t go far enough, because they’re still too centralized, primarily focused on edge gateways and devices, and bringing only a small subset of cloud functionality to the edge.

For a true edge cloud, he argues, "[W]orkloads must run on cloud servers at the physical edge, adjacent to the devices, and directly cross-connected to the wireless network. IP addresses are presented at the edge nodes and handed off seamlessly at the edge, not resolved back in a central office location."

A true edge cloud, in other words, provides all the key attributes of a centralized cloud, only in the edge location, including elastic scalability on automatically provisioned equipment. It also delivers a direct connection to regional and centralized data centers, as well as the internet at large, providing fast and seamless tiers of service that are especially important for mission-critical computing. The key component is infrastructure: while much of the attention is on use cases such as autonomous cars and virtual reality, the platform itself must be reliable, secure, and highly distributed.

Not surprisingly, this “full cloud at the edge” requires a reconceptualization of data centers. Instead of billion-dollar behemoths stacked with servers, Vapor IO builds so-called “Vapor Chambers” that are nine feet in diameter, house 130 to 160 kW of compute power, and are completely self-contained and remotely operable. They’re micro data centers, if you will, and serve as a tangible reminder that as much as we may want to eliminate our concern for hardware in the cloud, it is the nuts-and-bolts of physical servers that ultimately deliver this “serverless” reality, particularly at the edge.

Has your organization made progress toward implementing serverless computing? Share your experiences and advice with fellow TechRepublic members.



Tachyum bets on flash storage to re-architect the cloud data center – ZDNet


Cloud datacenters rely on acres of disk drives to store data, and startup Tachyum aims to change that with an all-flash cloud. The secret sauce is a combination of transistor physics and advanced data encoding. How will it work?

Tachyum’s founder and CEO, Dr. Radoslav Danilak, is an experienced chip designer, architect, and entrepreneur. His earlier startups, SandForce and Skyera, focused on flash storage.

Tachyum includes flash storage in its value proposition, but doesn’t stop there. Tachyum is developing a “Cloud Chip” that is optimized for low-power performance, combined with a software layer that enables current applications to run on their new architecture.

You’ve likely noticed that while transistors continue to get smaller, chip speeds have not improved. Why is that?

Smaller chip feature sizes are great for building fast transistors, but the resistance of the on-chip interconnecting wires increases as they shrink. That makes data harder and slower to move, limiting performance.

Tachyum’s solution: dramatically decrease data movement by performing operations in storage, not CPU registers. Tachyum’s software layer enables compatibility for hyperscale data apps.

Because data movement is reduced, so are power and heat. Tachyum expects to put 100 servers in a 1U rackmount box, using a fraction of the power that x86 servers need.

Another major part of Tachyum's savings comes from using advanced erasure coding to eliminate the standard three copies of data that hyperscale storage systems typically require. These erasure codes are widely used today in large-scale active archives, but their computational and network requirements make them uneconomic in cloud datacenters.

Tachyum's cloud chip overcomes these problems by including many 100Gb Ethernet links and hardware that accelerates the erasure coding process. Instead of three copies of each file, the company claims a 1 percent increase in file size with better-than-RAID 6 data resilience, cutting required storage capacity by two-thirds and making all-flash affordable.
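The storage arithmetic behind that claim is straightforward: three-way replication stores every byte three times (200% overhead), while a Reed-Solomon-style (k data, m parity) erasure code stores k+m fragments for every k fragments of data. A sketch with illustrative parameters (Tachyum has not published its actual code geometry):

```python
# Sketch of storage overhead: replication vs. (k, m) erasure coding.
# Parameters are illustrative, not Tachyum's actual scheme.

def replication_overhead(copies: int) -> float:
    """Extra bytes stored per byte of data: 3 copies -> 2.0 (200%)."""
    return copies - 1.0

def erasure_overhead(k: int, m: int) -> float:
    """Overhead of a (k data, m parity) code: k=100, m=1 -> 0.01 (1%)."""
    return m / k

print(replication_overhead(3))   # 2.0
print(erasure_overhead(100, 1))  # 0.01
```

A wide stripe like (100, 1) keeps the overhead near 1 percent, but distributing and reconstructing that many fragments is exactly the computational and network cost that has kept erasure coding out of hot cloud storage, and that Tachyum's hardware acceleration is meant to absorb.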

With massive reductions in power consumption, storage footprint, and server hardware cost, Tachyum expects its cloud chip-based systems to come in at 1/4 the cost of current cloud systems. At the scale the cloud giants are operating, giving Tachyum a fraction of their hardware spend would save them billions annually.

Bravo to Tachyum for architecting a clean-sheet design for hyperscale computing. They say they have an FPGA prototype of their cloud chip today, and they plan to ship their ASIC version next year.

In the meantime they’re showing the cloud vendors what they have. Given the economics, I don’t doubt that they are getting serious attention.

What I find most interesting though, is their in-storage processing. Scale changes everything, and it may be that our standard von Neumann CPU architectures need overhauling for the age of Big Data.

It may never come to your laptop, but as more and more computing resides in data centers, an approach like Tachyum’s is needed to keep scaling the cloud.

Courteous comments welcome, of course.
