Category Archives: Cloud Servers

HPE Reports Q3 Gains Along With Cloud Deal – EnterpriseTech

Alumni/Shutterstock

Hewlett Packard Enterprise reported better-than-expected earnings for its third quarter ending July 31 despite what it described as continued “commodities cost pressures” related to tighter memory supplies for servers. It also announced a cloud acquisition.

The hybrid IT specialist also reported unexpected gains in its server business as it “pivots” beyond the datacenter to greener pastures such as HPC while mapping its strategy in the emerging all-flash storage market.

CEO Meg Whitman also addressed reports she was in the running to head the ride-share service Uber, acknowledging interest in the position but declaring she would remain as head of HPE.

The company reported third quarter revenues totaling $8.21 billion, ahead of analysts’ estimate of about $7.5 billion. The company (NYSE: HPE) also exceeded its earnings-per-share target while operating margins rose sequentially.

The results also reflect completion of the “spin-merge” of its enterprise software business to U.K. vendor Micro Focus. HPE stock jumped 5 percent in response to the quarterly results, and is up 3 percent since the beginning of the year.

As it exits the application software business to focus on systems software used in datacenters and cloud deployments, Whitman said future acquisitions in areas such as storage would focus on “organic growth.” To that end, the company announced earlier in the day it is acquiring the consulting firm Cloud Technology Partners.

Founded in 2010, the “cloud agnostic” consulting firm helps customers determine the appropriate environment for running specific workloads. Whitman said the acquisition strengthens HPE’s “hybrid IT” approach that spans datacenters, public and private clouds as well as systems software.

Asked whether the service would draw customers away from its on-premises IT services to public clouds, Whitman replied: “We will add our expertise with [on-premises] private cloud,” calling the acquisition “an opportunity to scale that practice.” She added: “There are some workloads that customers probably should move to a public cloud. Microsoft (NASDAQ: MSFT) Azure is HPE’s public cloud partner.”

Whitman and Tim Stonesifer, HPE’s chief financial officer, made frequent references to the impact of what Stonesifer called “ongoing commodities pressure” related to tight supplies of DRAMs used in servers.

“We continue to see commodities pressure,” Stonesifer noted, adding that the company expects rising memory prices at least through the end of this year. HPE reportedly plans to raise its server memory prices by 20 percent in response to accelerating adoption of higher density memory modules. Tightening supplies for server memories are also being driven by increased demand for DDR4 memory modules as new platforms such as Intel’s (NASDAQ: INTC) Skylake-SP, or “Purley,” are introduced.

As DRAM supplies tighten, memory prices have risen between 5 and 10 percent. “We’ll see what happens with commodity prices,” Whitman said.

Meanwhile, HPE’s overall server business grew during the third quarter, and it expects to gain market share in its core datacenter business while “pivoting” to higher-end markets such as HPC.

“We are cautiously optimistic about our server business,” Whitman added, especially with blade and next-generation servers “with security built into servers at the chip level.”

In the competitive storage business, HPE touts the introduction of all-flash arrays in its 3Par storage line along with its recent acquisition of all-flash and hybrid-flash array provider Nimble Storage. Competitors such as Dell EMC also are rolling out all-flash storage options while targeting their existing customer bases. “We have to go mine our installed base” as competitors have, Whitman said. “This is a fundamental [technology] in the datacenter.”

As for her dalliance with Uber, Whitman said: “In the end, it wasn’t the right thing,” adding, “I actually am not going anywhere.”


About the author: George Leopold

George Leopold has written about science and technology for more than 25 years, focusing on electronics and aerospace technology. He previously served as Executive Editor for Electronic Engineering Times.

Read the original:
HPE Reports Q3 Gains Along With Cloud Deal – EnterpriseTech

Want to do IoT right? You’ll need more storage, networking, servers, and cloud – TechRepublic

It’s no secret that the Internet of Things (IoT) has led to an explosion of data. In order to better manage that data, and get the most out of it, enterprise IT is making serious investments across its infrastructure stack.

According to a Tuesday report from 451 Research, some 32.4% of enterprise organizations are planning to increase their storage capacity over the next 12 months. Some 30.2% of organizations are planning to increase their network edge equipment over the next year, while 29.4% plan to increase server infrastructure, and 27.2% plan to increase off-premises cloud infrastructure.

The majority of respondents are planning to either increase their spending on IoT projects, or keep it the same, the report found. Of those surveyed, 65.6% planned to increase their spending on IoT over the next year, while only 2.7% were planning a reduction.


In terms of storage, 53.1% of companies store their IoT data at a company-owned data center, while 59.1% analyze it there. Some two-thirds of respondents said that the data would remain there for its lifespan, but the other one-third said they eventually move it to the public cloud. Cloud storage was more often used for historical use cases such as “regulatory reporting and trend analysis,” the report said, as it offers flexibility and cost savings.

Data processing is also happening at the edge, according to slightly fewer than half of the respondents. Analysis, aggregation, and filtering are happening on the IoT device (22.2%) or in IT infrastructure that is nearby (23.3%), respondents said.

“Companies are processing IoT workloads at the edge today to improve security, process real-time operational action triggers, and reduce IoT data storage and transport requirements,” Rich Karpinski, research director for Voice of the Enterprise: Internet of Things, said in the report. “While some enterprises say that in the future they will do more analytics, including heavy data processing and analysis driven by big data or AI, at the network edge, for now that deeper analysis is happening in company-owned datacenters or in the public cloud.”
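To make the pattern Karpinski describes concrete, here is a minimal, hypothetical Python sketch of the filter-trigger-aggregate step an IoT gateway might run before forwarding data off-premises; the readings, threshold, field names and the forward_to_cloud stub are illustrative assumptions, not anything drawn from the 451 Research survey.

```python
from statistics import mean

# Hypothetical raw readings from sensors attached to one gateway.
readings = [
    {"sensor": "temp-01", "value": 21.4},
    {"sensor": "temp-01", "value": 21.6},
    {"sensor": "temp-02", "value": 88.9},   # out-of-range spike
    {"sensor": "temp-02", "value": 22.1},
]

ALERT_THRESHOLD = 80.0  # illustrative real-time trigger

def process_at_edge(batch):
    """Filter, trigger, and aggregate locally; ship only the summary."""
    alerts = [r for r in batch if r["value"] > ALERT_THRESHOLD]
    by_sensor = {}
    for r in batch:
        by_sensor.setdefault(r["sensor"], []).append(r["value"])
    summary = {s: round(mean(vs), 2) for s, vs in by_sensor.items()}
    return alerts, summary

def forward_to_cloud(payload):
    # Stand-in for an HTTPS/MQTT upload to off-premises storage.
    print("uploading:", payload)

alerts, summary = process_at_edge(readings)
if alerts:
    print("local action triggered:", alerts)   # real-time operational trigger
forward_to_cloud(summary)                      # far smaller than the raw batch
```

The point of the sketch is simply that only the alert and the per-sensor summary leave the site, which is what cuts the storage and transport bill the report's respondents cite.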

Most companies are using IoT to improve their IT processes, the report found, specifically data center management, surveillance, and security monitoring. The report also noted, though, that facilities automation will become the most popular use case within two years.

As the use cases for IoT continue to grow, so does demand for tech workers skilled in the areas relevant to IoT. Nearly 50% of survey respondents said that they struggle with a “skills shortage” for work in IoT.

Image: iStockphoto/chombosan

Follow this link:
Want to do IoT right? You’ll need more storage, networking, servers, and cloud – TechRepublic

Unlocking the promise of a connected world through edge cloud … – ITProPortal

The Internet of Things has started to impact every aspect of our daily lives. Our appliances, cars, gadgets, communication devices, tools, and even some of our clothing have become nodes on the internet. By 2020, as many as 50 billion devices will be connected, so traffic generated by devices at the edge of the network will skyrocket, posing a monumental challenge to our networks and to central cloud computing. Thankfully, we can take advantage of the ever-increasing computing capabilities of edge devices to turn them into cloud servers and extend central-cloud capabilities to the edge. Edge cloud is to central cloud what WiFi is to cellular communication. Just as WiFi carries most of the global wireless traffic today, edge devices will soon manage most of the cloud computing burden.

According to Gartner’s 2017 Hype Cycle for Emerging Technologies, edge cloud computing is on the brink of becoming an innovation trigger. Microchips and sensors continue to become embedded in everyday objects, making edge cloud computing an immense opportunity. There are billions of potential edge servers today and there will be tens of billions more tomorrow.

An illustrative example: self-driving cars

Self-driving cars have many potential benefits: optimized traffic flow, improved fuel efficiency, reduced accidents, and drivers with more free time. However, a big challenge for self-driving cars is to find a cost-effective way to process the vast amounts of data that they generate. On average, every self-driving car generates approximately one GByte/sec of data, which is orders of magnitude more than the capacity of a 4G base station and an order of magnitude larger than that of a 5G base station. No network in the foreseeable future can carry all this data back to the central cloud for processing. Most of this data needs to be processed locally, and only a minimal set of processed data should be sent back to the central cloud for global coordination. Moreover, to meet the latency requirements for agile decision-making, self-driving cars should communicate in the fastest way possible. This demands instantaneous processing of information and, when possible, peer-to-peer communication. This is where distributed edge cloud computing comes into play, transforming cars into data centers on wheels where most of the communication and processing is performed as close as possible to the edge.

In a distributed edge cloud environment, every car can be a set of microservices that can sense other cars not only because of all the sensors but also because their microservices can communicate with microservices of other cars. To make this work, microservices (within a car and between cars) need to form ad-hoc clusters based on scopes such as proximity, network or account affinity. This way, cars can process the terabytes of data as quickly and as efficiently as possible leveraging not only the processing power at the central cloud but also their own collective computing, storage and memory resources in a collaborative fashion with other cars on the road.
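As a purely illustrative sketch of the “ad-hoc clusters by proximity scope” idea (the coordinates, radius and greedy grouping below are assumptions made for demonstration, not anything specified in the article), in Python:

```python
import math

# Illustrative only: group car "nodes" by physical proximity so their
# microservices can share work locally. Coordinates and radius are made up.
cars = {
    "car-A": (0.0, 0.0),
    "car-B": (0.1, 0.2),
    "car-C": (5.0, 5.1),
    "car-D": (5.2, 5.0),
}
RADIUS = 1.0  # assumed proximity scope, in arbitrary units

def near(a, b):
    return math.dist(cars[a], cars[b]) <= RADIUS

clusters = []
for car in cars:
    # Join the first existing cluster with a member in range, else start one.
    for cluster in clusters:
        if any(near(car, member) for member in cluster):
            cluster.add(car)
            break
    else:
        clusters.append({car})

print(clusters)   # two proximity clusters: {car-A, car-B} and {car-C, car-D}
```

A real system would add the network- and account-affinity scopes the author mentions and re-evaluate membership continuously as cars move, but the grouping step is the essence of the idea.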

Edge cloud computing is the next major computing revolution

Throughout the last few decades, computing has cycled between centralized and distributed. In the early days, computing was centralized with mainframes. In the mid-80s, with the advent of personal computers, the industry shifted to distributed computing. In the last decade, we witnessed the move to centralized cloud computing. Many falsely predicted that this was the holy grail of computing: we would move to an era of thin clients, where devices would be dumb screens and input devices and all the processing would be performed in data centres in the cloud. This made sense for some applications, such as music or video streaming or hosting some software applications. In all these cases, edge devices do not generate much data and are mostly passive receivers of information.

In the last few years, two major socio-technical trends have contributed to a fundamental change in production and consumption of data.

First, thanks to mobile internet and social media, ordinary people generate massive amounts of data, turning them from mere consumers into consumers and producers. For example, close to 500 million photos are uploaded to Facebook and Instagram and roughly 500 thousand hours of video are uploaded to YouTube daily; that is more content than the three major US networks generate in two years. Nor is this just a consumer phenomenon; for instance, more than 80% of businesses have started to leverage user-generated content in their marketing efforts.

Second, we have the rapid growth of IoT devices, where many new edge devices produce valuable data. There are already 20 billion connected devices, 2.5 billion of which are B2B IoT devices. Over 300M wearable devices were sold in 2017 alone. Many of these devices generate small amounts of data, but many generate massive amounts; for example, when video meets IoT, machine-learning algorithms applied to video feeds allow cameras to recognize people, objects, and situations automatically. There will be phenomenal growth in AR/VR in the gaming industry and even in the enterprise, starting with creative applications and quickly moving to brick-and-mortar industries and manufacturing. Robots will also produce massive amounts of data at the edge.

Clearly, we are amid an explosion of data generated at the edge, and the tsunami is yet to come. The question is whether our communication networks can scale to cope with the data generated at the edge. To try to answer this, we can look at two predictors: Moore’s law in computing and its equivalent in network bandwidth. History has shown that computing power roughly doubles every 18 months (or a hundred times every decade), whereas network bandwidth grows about 50 times every decade. In other words, even if the number of devices does not grow (which it clearly will), the communication network will be the bottleneck for the growth of IoT.
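A quick arithmetic check of that comparison, using only the rules of thumb quoted in the paragraph above (an 18-month doubling for compute, roughly 50x per decade for bandwidth):

```python
# Compute growth over one decade from the rules of thumb quoted above.
years = 10
compute_growth = 2 ** (years / 1.5)   # doubling every 18 months
network_growth = 50                   # ~50x per decade, per the article

print(f"compute: ~{compute_growth:.0f}x per decade")   # ~100x
print(f"network: ~{network_growth}x per decade")
print(f"gap after one decade: ~{compute_growth / network_growth:.1f}x")
# The ~2x gap compounds every decade, which is why the network, not the
# devices, becomes the bottleneck even if the device count stayed flat.
```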

Setting bandwidth aside, many IoT applications such as self-driving cars or tactile control communications in various industries require low latency response. In this case, even if the network capacity is miraculously increased to cope with the data, laws of physics inhibit remote processing of data in the central cloud due to large latencies in the long-haul transmission of data.

So, what is the solution? How can we cope with the explosion of data at the edge and the strict latency requirements of some IoT applications? The answer is distributed edge cloud computing. Edge cloud computing means that any device (or node) becomes a cloud server. As much as possible, data is processed at the edge of the network, as close to the originating source as possible, instead of everything being processed in the central cloud. This approach is faster, more efficient, and scalable: data can be immediately analysed and put into action, overcoming bandwidth limitations and latency constraints on the network. Edge cloud computing is essential to meeting stringent bandwidth and latency requirements while minimizing power consumption and infrastructure costs.

Edge cloud computing is a paradigm shift that enables every device, appliance, or gadget to communicate and share resources making them part of the solution for scaling of IoT. It allows drones and robots to harness their collective resources in industries such as manufacturing, oil and gas, agriculture or mining, delivering real-time data and improving business efficiency. This new computing model will revolutionize the world in ways that we may not be able to predict at this moment.

The great news is that the technology is ready for developers today. mimik has developed a fully distributed edge cloud platform that extends central cloud to the edge: mimik arms developers with a platform to unleash the power of edge devices. mimik SDK solves many of the current challenges that centralized cloud computing alone cannot address. We extend the power of the central cloud to the edge and boost its reach so that bandwidth, latency constraints, and infrastructure cost do not become the bottleneck for the healthy and scalable growth of IoT.

We need a paradigm shift that transforms tens of billions of devices from a challenge to an opportunity. IoT requires a revolution in computing that unlocks the power of connected devices. Distributed edge cloud is the ideal solution to harness computing resources at the edge, unlocking the promise of a smart connected world that will bring massive efficiencies to enterprises and digital freedom to consumers.

Siavash Alamouti, CEO at mimik

Image Credit: Jamesteohart / Shutterstock

Read more:
Unlocking the promise of a connected world through edge cloud … – ITProPortal

So you’re already in the cloud but need to come back down to Earth – The Register

We generally think of a transformation to a hybrid infrastructure as one where you’re going from a completely private setup to one that spans the public cloud and your private installation. But what if you started life as a small company with your systems entirely in the cloud? It’s not an unusual approach, as running up your initial services in the cloud is straightforward and avoids a big capital outlay. As a company grows it’s understandable that it might want to take on a private data centre, build an in-house support team and evolve to a two-site setup.

Step one is to consider why you’re bothering with an on-premises setup instead of a second cloud instance. The answer will generally be that you want something that’s closer to your office, with a potential performance improvement gained from such proximity. And that’s fine: what matters is that you’ve considered the options before deciding which way to go.

The next step is to think about where you’ll host your private data centre. As you’re already in the cloud, you have the opportunity to pick a data centre that’s close (electronically speaking) to the cloud centre you’re in. For example, you’re probably aware that AWS provides a Direct Connect facility that lets you hook straight into their infrastructure rather than accessing your cloud world over the internet. Check out the locations and you’ll see that the connectivity’s hosted at 51 well-known locations: Equinix in London, for example, or TierPoint in Seattle. Connectivity between your public and private components with a latency of just a few milliseconds is an attractive concept if you’re looking for high availability with seamless failover.

Next, you’ll need to think about the platform you’re using. Most of the time you’ll have used one or more of your cloud provider’s standard operating system templates, so it makes sense to run your local stuff on the same operating system flavour if you can. And of course you should use the same CPU architecture where you can too, so you can be assured that your apps will be portable.

So you’ve sorted the platform. Now you need to decide whether the on-premises setup is to be your primary or secondary installation. If it’s to be a secondary setup you should have a relatively straightforward job of adding new system and application-level components in as secondaries to your cloud-based apps.

If you decide to flip things around, you’ll have a more involved task of shifting the primary apps over and redeploying the cloud setup as the secondary installation. Either way, the happy news is that you’ve already gone through the non-trivial task of providing your office users with connectivity to the cloud installation, so hooking things up so they’re able to get to the private data centre, regardless of whether it’s the primary or the secondary, should be easier.

One further consideration with the choice of primary and secondary installations is the cost of data transfer. Shifting data out of a commercial cloud setup has a cost associated with it. Not a vast cost, I’ll grant you, but one that you do need to keep an eye on. Using Amazon as an example, moving a terabyte per month over the internet from the cloud setup to your private installation will cost you $90. That’s $900 for 10TB, or $7,800 for 100TB; even though the per-gigabyte cost tapers down, it never tails off to zero. What does this mean? Easy: if the cloud setup is the primary and it’s replicating application data to the private secondary, you’re paying a chunk of cash for it to do so.
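Those figures fall straight out of the tiered per-gigabyte egress pricing the big clouds publish. Here is a minimal sketch that reproduces them, assuming the approximate AWS internet data-transfer-out tiers in force at the time of writing (rates change, so treat the table as illustrative rather than authoritative):

```python
# Approximate AWS data-transfer-out tiers (USD per GB); check current pricing
# before relying on these numbers.
TIERS = [
    (10 * 1000, 0.090),    # first 10 TB per month
    (40 * 1000, 0.085),    # next 40 TB
    (100 * 1000, 0.070),   # next 100 TB
    (float("inf"), 0.050), # beyond 150 TB
]

def egress_cost(gb):
    """Cost of moving `gb` gigabytes out to the internet in one month."""
    cost, remaining = 0.0, gb
    for tier_size, rate in TIERS:
        used = min(remaining, tier_size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

for tb in (1, 10, 100):
    print(f"{tb:>3} TB/month -> ${egress_cost(tb * 1000):,.0f}")
# ~$90, $900 and $7,800 -- the figures quoted above.
```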

While we’re on the subject of data transfer, you also need to figure out how you’re going to do it. In these modern times, it’s a relative doddle to set up the major cloud providers’ storage instances so you can access them externally via standard protocols such as NFS. Alternatively you can look to the major storage vendors, who will sell you a funky gateway to install in your private data centre and handle the cloud magic for you.

The next consideration is licensing, and there are two aspects here. First is the basic fact that you’ll need to buy operating system and/or application licences for your private setup: sounds obvious, but you may never have had to consider this if you were using a pay-as-you-go model with pre-configured cloud app servers. Second is that if you want to go for a clustered or active/passive application setup, you may need to revisit the versions you use on the cloud servers as well as buying licences for your private setup. Take SQL Server, for example: if you’re running Standard Edition you can implement basic two-node high availability, but if you want something more advanced you’ll need to upgrade to Enterprise Edition. Same with Oracle: if you want to enable Data Guard between sites, that’ll need Enterprise Edition too.

Lastly, but by no means least, is your internal support team. They’ve probably spent a number of years fettling your cloud installation and fixing stuff when it broke, but their skillset will be at worst lacking and at best out of date when it comes to hosting, networking, hardware and hypervisor support.

Be prepared to invest in training so that you can be confident that the new kit you’re acquiring for your private data centre is properly supportable and hence properly supported. Yes, your typical infrastructure is easier to put together than it was a few years ago, but that doesn’t mean it’s trivial. And if you’re virtualising your private data centre (which you should), getting the hypervisor layer running and optimised will take time, effort and skill.

Going from a cloud-centric setup to a hybrid infrastructure isn’t rocket science, then, which is no great surprise, as any problem’s tractable if you design, plan and implement the solution properly. But going from cloud to hybrid has some differences from going from private to hybrid.

So you just need to think a bit before you do it.


Read more:
So you’re already in the cloud but need to come back down to Earth – The Register

Nasa: Our demands for repeat presidential election – Daily Nation

By PATRICK LANG’AT

The Raila Odinga-led National Super Alliance (Nasa) has written to the electoral agency with 25 new demands that they say should be fulfilled in the October 17 fresh poll.

In a letter signed by Mr Odinga’s chief agent and Nasa co-principal Musalia Mudavadi, the opposition has demanded a full audit of the elections technology, full access to the servers, a change of ballot printing firm, and the gazettement of 290 new constituency returning officers.

“It is therefore inappropriate and foolhardy for the IEBC to embark on the planning of the fresh election without full compliance with the orders of the Supreme Court in the redeployment of technology in the fresh presidential election. We therefore demand a full audit of technology in use in full compliance with the law,” Mr Mudavadi said in the four-page letter.

The Nasa team has not only opposed the Independent Electoral and Boundaries Commission’s date for the repeat poll, it has also questioned why the commission narrowed the field to only Mr Odinga and President Uhuru Kenyatta.

“Your interpretation that the election scheduled on the 17th October 2017 shall be in the style of a run-off contest of only two candidates is erroneous and unconstitutional. We also take reservation that you have not consulted the parties involved before making a determination on the date of the said election,” Mr Mudavadi told IEBC Chairman Wafula Chebukati.

In the audit, Nasa has demanded a scrutiny that it says should cover the ICT infrastructure, the list of support partners and their respective Service Level Agreements, details of the firewall configuration, including port configuration, as well as disclosure of all database transaction logs.

The team also wants a physical view and inspection of the IEBC servers, portal access to the cloud servers and IP addresses of all 20 servers; full access and copy of all servers and databases used by the IEBC; GPS coordinates of KIEMS; and Telkom and network structure with all service providers.

The opposition has also demanded the removal of senior personnel at the IEBC secretariat including Chief Executive Ezra Chiloba, his deputy Betty Nyabuto, James Muhati, the ICT director, Ms Immaculate Kassait, director of voter registration, Ms Praxedes Tororey, the head of legal team, with Mr Moses Kipkosgey being added to the earlier list.

“Independence of the IEBC is not negotiable. Nasa coalition demands that the fresh election should be administered by professional and non-partisan officials,” Mr Mudavadi said.

“We demand that officials who are partisan or perceived as such should step aside and or be suspended during the planning and execution of the fresh election.”

Further, the coalition demanded a full audit of the 45,000 voter identification and results transmission kits, worth Sh3.8 billion, that were provided by France-based Safran Morpho.

“We demand that Safran and Oracle provide full implementation information in relation to their involvement in the General Election held on 8th August 2017,” said Mr Mudavadi.

The team has also demanded a review of the voter register, and the 40,883 polling stations.

To enhance transparency, Nasa said the following specific demands must be met:

Appoint and gazette returning officers not among the Constituency Election Coordinators in consultations with political parties and candidates.

Establishment of a technical monitoring committee with representatives of the main political parties, coalitions or candidates to oversee implementation of the technology in use.

Stop use of Al-Ghurair to print ballot papers and results declaration forms

All Forms 34Bs should be pre-printed indicating the names of polling stations in the constituency and names of candidates

Election results to be announced at the constituency level. Results sent electronically must be accompanied by corresponding statutory result declaration forms

Candidates’ agents should be part of receiving teams at the constituency and national tallying centers, and be allowed to confirm entries before transmission

Establish defined roles of the security agencies and eliminate undue influence by the provincial administration and other public officials

Jubilee Party chose the October 17 date, claim Nasa leaders.

Follow this link:
Nasa: Our demands for repeat presidential election – Daily Nation

Chinese smartphone maker Xiaomi open to moving servers to India – Economic Times

NEW DELHI: Xiaomi said it was open to moving its servers to India subject to its cloud service provider partner setting up base in the country, amid increased government efforts to protect user data on mobile phones.

“All our servers are sitting on AWS (Amazon Web Services) in Singapore and the US. If AWS moves to India, we would be happy to work with them,” Manu Kumar Jain, managing director of India operations, told ET, making Xiaomi the first overseas company to openly offer to move its servers to India.

Handset companies typically don’t store data on their own servers but instead lease space from third-party cloud service providers such as AWS, Microsoft and Google. While AWS and Microsoft have already set up their centres in India, Google has also announced plans to do the same in the country to cater to a larger number of customers, especially those in the government or financial services industry, since regulations in those sectors don’t permit data to be transmitted outside the country.

“We last evaluated this about 2-3 years ago when we were moving our servers (from China). At that time there was no significant presence (of AWS) and it was much more difficult to have it here,” Jain said, when asked whether the company would move or add servers in India, which appears to be the larger aim of a government intent on securing data. Jain did not say whether the company was already in talks with Amazon to move its servers to India.

He added, though, that from an internet speed perspective, the connectivity between India and Singapore was one of the best. “We moved and thought it was pretty good. But if someone, AWS or equivalent, were to set up servers here (in India), we would be happy to work with them,” he said.

The company, which sells about 4-5 million smartphones a quarter, said its devices were super-secure: no data is taken without user consent, and the data that is taken is encrypted to the highest degree, such that it cannot be decrypted even if it is stolen.

Xiaomi’s views come at a time when the government is taking up the security of phones with all companies, including Apple and Samsung, and scrutinising the protection levels that all handset makers, a large majority of which are Chinese, were providing in India.

Another Chinese brand, One-Plus, has also said that it is prepared to respond to India’s data security and privacy concerns, since it sells the same smartphones in the US and Europe, where these concerns are already addressed.

“Currently, we have not received a direct request or requirement to set up servers or cloud storage in India. We are trying to get more clarity on that,” One-Plus CEO Peter Lau told ET.

Amid the recent India-China standoff at Dokalam, which has since been resolved, the IT and electronics ministry has asked over 30 smartphone companies for protocols used by them to ensure the security of mobile phones in the country.

While the government is evaluating responses on security preparedness, it may well ask all those selling devices in India to have locally based servers.

Officials in the ministry said while the issue of apps sweeping up excessive user data was worrying, the broader issue remained that of the security of information that could be going to third parties outside the country, especially to China.

Read the original:
Chinese smartphone maker Xiaomi open to moving servers to India – Economic Times

VMware officially lands on AWS cloud with new management and security features – SiliconANGLE News (blog)

Nearly a year after signing a landmark deal to bring its software-defined data center technology to the Amazon Web Services Inc. cloud, VMware Inc. kicked off its VMworld conference in Las Vegas today with the news that VMware Cloud on AWS is now generally available.

The service essentially enables the vast majority of companies that use VMware inside their data centers to use VMware software, which allows different operating systems and multiple applications to run on the same physical computer, with AWS services as well.

To date, companies have had difficulty moving workloads to Amazon’s cloud to take advantage of its more flexible and lower-cost computing and storage services because many of their applications depended on VMware software that only ran on computers in company data centers. That presented customers of each provider with a tough choice: use the VMware technology they built their core applications on, but with none of the cost and flexibility of cloud computing, or use Amazon’s cloud, but not with the VMware software their data centers are built on.

“They hated this binary decision that we were forcing on them,” AWS Chief Executive Andy Jassy (pictured, right) said during an appearance this morning at VMworld with VMware CEO Pat Gelsinger (left). Now, the executives said, customers can more easily use so-called hybrid cloud services that combine on-premises software and hardware with cloud services as needed.

“If this fully works, CIOs have no excuse in regard to moving VMware loads to the cloud,” said Holger Mueller, vice president and principal analyst at Constellation Research. “But let’s see if this works.”

VMware, part of Dell Technologies Inc.’s constellation of companies that also includes storage supplier Dell EMC, also announced a raft of services for the VMware Cloud today. Initially, VMware Cloud is available in the AWS U.S. West region, but other regions will be added throughout 2018. VMware said the integration will enable customers to run applications across operationally consistent vSphere-based private, public and hybrid cloud environments with the option of expanding to AWS elastic or bare-metal infrastructure.

When the AWS-VMware deal was announced last October, it was apparent that it could reset the competitive environment in computing, in particular presenting new challenges for IBM Corp., which had signed a deal with VMware earlier in 2016, Google Inc.’s cloud platform and Microsoft Corp., whose No. 2-ranked Azure public cloud had claimed the lead in hybrid cloud computing.

The arrangement with AWS offers some benefits for VMware, including a connection to the leading public cloud provider that its customers have been clamoring for. “When your own cloud fails, you need to join the ones that work,” Mueller told SiliconANGLE. VMware now focuses on add-on software, such as application security.

But it also means AWS could ultimately steal some of VMware’s customers if it results in what Dave Vellante, chief analyst at SiliconANGLE Media’s Wikibon, has called a potential “one-way trip to Amazon cloudville.” Moreover, said Mueller, the arrangement doesn’t help Dell sell more servers into on-premises data centers.

As for Amazon, Mueller said, “AWS needs a piece of the on-premises enterprise load and this is the way.” He added that the fact that AWS is offering to host VMware instances on so-called bare-metal servers, those with no operating software installed on them, indicates how much it needs VMware’s help to reach large enterprise customers, since AWS had generally eschewed bare-metal arrangements.

The offering will be delivered, sold and supported by VMware as an on-demand service. It’s powered by VMware Cloud Foundation, a software-defined data center platform that includes vSphere, VMware VSAN and VMware NSX virtualization technologies managed by VMware vCenter. The initial set of cloud services includes six modules:

Discovery centralizes inventory information and cloud accounts across AWS, Microsoft Azure and VMware clouds, making it easier for information technology departments to search for and identify workloads. Administrators can group cloud resources even if they span multiple clouds. Built-in search and filters enable administrators to filter resources based upon cloud attributes.

AppDefense protects applications by embedding application control and threat detection and response capabilities into vSphere-based environments. It’s tightly integrated with the NSX networking platform, and operates within the vSphere hypervisor to create a knowledge base of the correct state and behavior of each endpoint for change detection.

Cost Insight helps organizations analyze their cloud spending and identify savings opportunities. It provides detailed visibility into public and private cloud costs on AWS, Azure and VMware environments and enables drill-down to identify cost drivers. Cost Insight also identifies stopped virtual machines and associated storage resources across public and private clouds to reduce waste.

Network Insight analyzes application traffic flows between different tiers, virtual and physical network layers and public and private clouds. This supports application security and load-balancing use cases, and makes it easier for cloud administrators to manage and troubleshoot large-scale NSX deployments.

NSX Cloud provides a single management console and common application program interface for monitoring and securing applications that span multiple private and public clouds. It features a micro-segmentation security policy that can be defined once and applied to application workloads running anywhere.

Wavefront is a metrics monitoring and analytics platform that gives developers insight into the performance of highly-distributed cloud-native services to detect performance anomalies while enabling high availability. Operating on what VMware said is a massive scale, Wavefront gives DevOps teams instant visualization of millions of data points per second. This helps resolve bottlenecks more efficiently and proactively.

VMware also said it’s expanding Cloud Foundation’s scope with new partner offerings. They include support from CenturyLink Inc., Rackspace Inc. and Fujitsu Ltd. New hardware platforms that support Cloud Foundation include Dell EMC’s VxRack SDDC, Hitachi Data Systems Corp.’s UCP-RS, Fujitsu Primeflex and Quanta Cloud Technology LLC’s QxStack.

VMware’s shares closed up nearly 2 percent today, to about $104.68 a share, on a relatively flat day for the overall market.

With reporting from Robert Hof

(* Disclosure: SiliconANGLE Media’s video unit, theCUBE, is a paid media partner at VMworld. Stories on SiliconANGLE are written independently of coverage on theCUBE. Sponsors have no editorial influence on content on SiliconANGLE or theCUBE.)

Read more:
VMware officially lands on AWS cloud with new management and security features – SiliconANGLE News (blog)

VMware-on-AWS is live, and Virtzilla is now a proper SaaS player – The Register

VMworld 2017 VMware CEO Pat Gelsinger last week introduced the company’s second quarter results by saying the company has embarked on “a multi-year journey from a compute virtualization company to offer a broad portfolio of products driving efficiency and digital transformation.”

And today at VMworld the company began to explain what that mouthful of jargon meant: a strategy to put the company at the center of multi-cloud management.

The clearest expression of Gelsinger’s words is its half-dozen new software-as-a-service offerings, namely:

All six are subscription services, accessible through existing VMware accounts. And all six are new stuff for your VMware account manager, or channel partner, to suggest. If you’re one of the few who resisted the company’s No Naked vSphere push, VMware’s going to come at you again, this time as a software-as-a-service vendor.

The Register expects the company will come hardest with AppDefense, because it’s created a new business unit to back a product it feels is genuinely new to offer. “Most security is about finding bad, we are about ensuring good,” says Tom Corn, senior veep of the Security Product group at VMware.

The Register revealed the basics of AppDefense well before its announcement. We had to wait for today to learn that it can build its whitelist of acceptable VM behaviour by interacting with either vCenter or automated provisioning tools like Jenkins or Maven. Linking with those tools is an effort to make AppDefense offer something to DevOps practitioners. It’s also trying to impress line-of-business types by offering them a mobile app that alerts them when applications misbehave, so that all stakeholders can participate in decisions about how to respond.

AppDefense will be sold as SaaS or on-premises software. Either way, it should do well: security types The Register’s virtualization desk have spoken to feel Virtzilla is onto something here!

VMware’s favourite news from this year’s event is that the company’s deal with Amazon Web Services has come to fruition. AWS now hosts servers running Cloud Foundation, the bundle of vSphere, VSAN, NSX and vCenter that is intended to mirror on-premises implementations.

It’s all available as of today, to run in AWS alone or in a hybrid cloud spanning an on-premises implementation.

For now it’s only in one AWS Region, US West, and you can only buy by the hour. One- and three-year subscriptions are due soon, as is a global rollout that will start soon and continue deep into 2018. There’s just one server type, too, and while vSphere lets you slice and dice that as it would any other server, there’s no hint of the varied instance types AWS and other clouds offer.

At least the server is pleasingly grunty. Each host has a pair of CPUs, 36 cores, 72 hyper-threads, 512GB of RAM, local flash storage (3.6TB cache, 10.7TB raw capacity tier). But you’ll need four hosts to build a cluster!

There is integration between VMware-on-AWS and some AWS services.

VMware will run and support the service, in contrast to the arrangement it has with IBM and the other ~4,300 vCloud Air Network partners that run vSphere-based clouds. Those partners get a new version of vCloud Director, plus more hardware partners ready to sell them servers ready to roll with Cloud Foundation. And perhaps some worry beads, for stress relief and/or prayer as VMware challenges them like never before, because the new service integrates with some AWS services. We’re told that the VMware service lives in the same data centres as services like Lambda, so piping them into apps will be low-latency.

In the past VMware partners have told El Reg they feel VMware’s cloud partnerships aren’t bad for business, because they get users talking about vSphere-powered clouds. Now we hear some are re-thinking that position, but the pricing for VMware on Amazon may well crimp their concerns, because it isn’t super-cheap.

Here’s the pricing scheme.

Remember: you’ll probably need at least four hosts, so actual costs will be rather more than the single-host cost.

VMware justifies these prices by saying they stack up well on total cost of ownership compared with either on-prem or public clouds.

Here’s the company’s math.

That calculation excludes bandwidth and IP address charges, and assumes VMs have a pair of vCPUs, 8GB RAM and 150GB of storage.
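For a rough sense of what that VM profile means on the minimum four-host cluster described earlier, here is a naive packing calculation using the host specification quoted above; it deliberately ignores vSphere/VSAN overheads, failover headroom and storage policies, so real-world sizing would be lower:

```python
# Host figures from the article; the packing math is a naive illustration.
HOSTS = 4
host = {"threads": 72, "ram_gb": 512, "raw_storage_gb": 10_700}
vm   = {"vcpus": 2, "ram_gb": 8, "storage_gb": 150}   # VMware's TCO assumption

limits = {
    "by_cpu":     HOSTS * host["threads"]        // vm["vcpus"],
    "by_ram":     HOSTS * host["ram_gb"]         // vm["ram_gb"],
    "by_storage": HOSTS * host["raw_storage_gb"] // vm["storage_gb"],
}
print(limits)   # {'by_cpu': 144, 'by_ram': 256, 'by_storage': 285}
print("naive cluster capacity:", min(limits.values()), "VMs")
# CPU-bound at a strict 1 vCPU : 1 thread ratio; with vCPU overcommit
# (common in practice) RAM becomes the limit instead, which is why per-VM
# cost comparisons depend heavily on the consolidation ratio you assume.
```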

VMware’s attempt to build a public cloud failed, as did its early SaaS forays.

The company’s now turned that around, because the AWS deal gives it unrivalled scale, at a perhaps-unsettling price.

The new SaaS offerings do two things:

VMware has done astoundingly well to keep Hyper-V’s market share small. But anyone who needs new servers or storage now has to consider either hyperconverged infrastructure or Azure Stack because both offer strong alternatives to traditional infrastructure. Azure Stack also makes hypervisors irrelevant and therefore also makes the idea of Windows-on-VMware look a bit archaic.

Starting with last week’s earnings call and already in pre-VMworld briefings, VMware’s counter argument is that it’s happy for you to use Azure in any form. So long as you don’t needlessly rip and replace perfectly good vSphere in order to buy in to Microsoft’s hybrid vision.

The new SaaS tools give you reasons not to ditch vSphere, by making multi-cloud wrangling easier and making vCenter the place you’ll do it. AppDefense helps, too, because it looks a useful tool that won’t hurt even if only deployed as one layer of a defense-in-depth strategy. It needs vCenter, too. And if vCenter is the place to do some security, and do multi-cloud management, it’s a lot harder to contemplate ejecting it. That the VMware/AWS tie-up has quickly gone beyond IaaS and into AWS’ services also suggests Virtzilla has found its way into a position of cloudy strength.

For now, anyway. Clouds move fast, and so do strategies to catch them.


Here is the original post:
VMware-on-AWS is live, and Virtzilla is now a proper SaaS player – The Register

Socionext Partners with Advantech to Offer High-Density, Low-Cost … – Design and Reuse (press release)

Scalable, Robust, Low-power, and Easily Deployable Solutions for service providers and other video intensive applications

SUNNYVALE, Calif. and MILPITAS, Calif., Aug. 31, 2017 — Socionext Inc., a world leader in hardware HEVC encoding, and Advantech, creator of innovative video acceleration solutions, today announced a strategic partnership to provide live hardware transcoding solutions for the data center supporting MPEG2, AVC (H.264), and HEVC (H.265).

Socionext’s real-time dense transcode solution, also known as the “Media Cloud”, enables advanced HEVC compression technology and real-time transcoding capabilities for OTT applications. Socionext’s extended partnership with Advantech includes the integration of Socionext’s Media Cloud technology into Advantech’s VEGA 7000 Family of High Density Video Servers to enable agile and cost-effective live UHD cloud services for the new video-centric era.

“We are seeing an increasing need to lower the cost of ownership by media, telecom and internet companies that are seeking to address the ever-increasing mass consumption of streaming high-quality video,” said David Lin, VP of Video Solutions at Advantech. “Socionext, as our valued partner, is able to solve the power, density, and performance technical design requirements we are looking for in order for us to develop a cost-competitive, highly-efficient transcoding solution with adaptive bitrate (ABR) streaming capabilities for live cloud media service providers.”

The Advantech VEGA 7000 is a family of accelerated video processing servers which combine best video and IT practices within an off-the-shelf platform that has been optimized to efficiently scale throughput of high-density transcoding applications in live OTT and cloud workflows. Up to four VEGA-3318 accelerators can be integrated into a 1U server to deliver up to 32 x 4Kp60 live HEVC profiles per rack unit, the highest density available in the market. This allows for large-scale, energy- and cost-efficient data center deployments that benefit from a 20X rack space and power reduction when compared to non-accelerated solutions. Advantech VEGA solutions for the data center minimize development efforts by providing a comprehensive software package that features Linux and Windows SDKs, an FFmpeg plug-in and virtualization-friendly drivers supporting OpenStack. Advantech also offers hardware and software design and customization services for maximum deployment flexibility.
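For readers unfamiliar with what a live transcode job with multiple ABR profiles involves, here is a generic, software-only illustration driven from Python with stock FFmpeg; it does not use the Advantech/Socionext plug-in or any hardware offload, and the source name and bitrate ladder are assumed examples rather than vendor recommendations.

```python
import subprocess

SOURCE = "input.ts"            # hypothetical source stream or file
LADDER = [                     # assumed ABR ladder: (name, scale, bitrate)
    ("2160p", "3840:2160", "15M"),
    ("1080p", "1920:1080", "6M"),
    ("720p",  "1280:720",  "3M"),
]

for name, scale, bitrate in LADDER:
    # Software HEVC (libx265); a hardware plug-in would swap the encoder here.
    cmd = [
        "ffmpeg", "-y", "-i", SOURCE,
        "-vf", f"scale={scale}",
        "-c:v", "libx265", "-b:v", bitrate,
        "-c:a", "aac", "-b:a", "128k",
        f"out_{name}.mp4",
    ]
    subprocess.run(cmd, check=True)
```

The appeal of an accelerated server is simply doing dozens of such ladders in real time per rack unit instead of one or two in software.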

“Advantech offers decades of expertise in complex hardware and software system integration and design services,” said Yasuhiro Wakimoto, VP of the Enterprise Solution Business Unit at Socionext. “Socionext and Advantech have a long history together providing solutions for “Live” transcode broadcasting and processing large volume of media data for video systems. This partnership further extends the close relationship.”

Advantech will demonstrate its VEGA 7000 Series of High Density Video Servers for the Media Cloud at IBC 2017 in Hall 11, Booth C32, taking place at RAI, Amsterdam, from September 15-19, 2017. For more information, visit http://www.advantech.com/nc/spotlight/IBC2017 or email video.solutions@advantech.com.

About Advantech

Founded in 1983, Advantech is a leader in providing trusted, innovative products, services, and solutions. Advantech VEGA Video Platforms and PCIe Adapters are designed to boost video infrastructure performance from acquisition to distribution at the lowest power budget while fully complying with the media industry needs. By providing access to the latest 4K/8K UHD video processing and IP media technologies on commercial-off-the-shelf IT platforms we accelerate the deployment of next-generation, open and more efficient video solutions across a wide range of applications from broadcast encoding and high-density OTT transcoding to cloud, mobile and 360-degree video. Advantech’s standard portfolio can be tailored to meet a range of system requirements, significantly reducing time-to-market effort for our customers. For more information, visit http://www.video-acceleration.com.

About Socionext Inc.

Socionext is a new, innovative enterprise that designs, develops and delivers System-on-Chip products to customers worldwide. The company is focused on imaging, networking, computing and other dynamic technologies that drive today’s leading-edge applications. Socionext combines world-class expertise, experience, and an extensive IP portfolio to provide exceptional solutions and ensure a better quality of experience for customers. Founded in 2015, Socionext Inc. is headquartered in Yokohama, and has offices in Japan, Asia, United States and Europe to lead its product development and sales activities.

Read the original:
Socionext Partners with Advantech to Offer High-Density, Low-Cost … – Design and Reuse (press release)

Municipal adoption of the cloud – American City & County (blog)

By Pete Eichorn

Although state and federal agencies are migrating steadily to the cloud, municipal governments have been slower to jump on the trend. Cloud technology presents a challenge for municipal agencies, which often don’t know where to start, whom to ask for help, whether they need a third-party partner or whether laws and statutes even allow them to move data to the cloud.

Try before you buy

Fortunately, city and county governments don’t have to go all in from the beginning. The cloud is a perfect small-scale testing ground. You can try cloud-based software as a service, or SaaS, which allows your team to use existing software, delivered via a web browser, and pay for it on a usage-time basis or with a monthly fee that grants access to designated staff members.

For example, a county that wants to replace its legacy financial system to better adhere to its state’s financial reporting requirements and eliminate its long-standing manual processes might test SaaS through the cloud. The county would gain access to an existing, third-party software product that could house its data, provide real-time access to financial transactions and reports and deliver other services the municipality might not even have anticipated.

The county would have avoided a large capital outlay. The cloud-based software would automatically scale when the county expected a larger demand on its financial services software and, if the software didn’t meet the county’s needs during the test period, the county would be free to walk away and try another solution.

The security question

Data is no less secure in the cloud than on an internal server. That said, government can’t outsource responsibility for its data. A cloud provider won’t be responsible if there is a breach, so you must know what you are buying, where your data is and how it is being safeguarded. If you decide to test a cloud-based service, get the provider to disclose and include in your contract its terms and conditions, security limits, liabilities and responsibilities.

Your responsibilities to secure data still exist in the cloud as they do in traditional technologies. You must still comply with applicable laws and follow best practices, such as ensuring that user accounts and passwords are managed properly.

Which cloud?

There are several kinds of clouds, and reasons you might choose one over another. Private clouds, maintained on-site, offer flexibility and control, and often are better suited when you want the cloud’s benefits and are willing to manage technology capabilities yourself. Public clouds, maintained via the internet on a cloud provider’s servers, are a good fit for accessing the latest technologies and for changing your financial management from a capital expense to an operating expense model.

Another option is hybrid cloud, which offers access to the best aspects of both public and private clouds. A hybrid cloud solution works well when you need to balance the needs of legacy systems and new, cloud capabilities.

Now is the time

More and more, software providers are moving toward offering their products and services via the cloud. Innovation will focus on SaaS, and updates and support for legacy systems will become rarer.

So now is unquestionably the time for municipal governments to commit to understanding the potential benefits and dipping their toes in the water with test projects.

Research the benefits and individual services various cloud providers offer and determine which meet your needs. If you are short on internal resources, look for a government services provider to help you navigate the options and set out a cloud adoption plan that integrates with your existing systems, security models and goals.


Excerpt from:
Municipal adoption of the cloud – American City & County (blog)