Category Archives: Cloud Servers
Your private network in the cloud
Azure Virtual Network gives you an isolated and highly secure environment to run your virtual machines and applications. Use your own private IP addresses and define subnets, access control policies, and more. Use Virtual Network to treat Azure the same as you would your own datacenter.
Traffic between Azure resources, whether in a single region or across regions, stays on the Azure network: intra-Azure traffic doesn't flow over the Internet. Virtual machine-to-virtual machine, storage, and SQL traffic traverses only the Azure network, regardless of the source and destination Azure region. Inter-region virtual network-to-virtual network traffic also flows entirely across the Azure network.
In a virtual network, run your favorite network virtual appliances (WAN optimizers, load balancers, and application firewalls) and define traffic flows, allowing you to design your network with a greater degree of control.
Use Virtual Network to extend your on-premises IT environment into the cloud, much as you would set up and connect to a remote branch office. You have options for securely connecting to a virtual network: choose an IPsec VPN or a private connection using Azure ExpressRoute.
Use Virtual Network to build hybrid cloud applications that securely connect to your on-premises datacenter, so an Azure web application can access an on-premises SQL Server database or authenticate customers against an on-premises Active Directory service.
Use Virtual Network to build services that rely on Azure cloud services and Azure Virtual Machines. Use Azure web roles for your front end and virtual machines for back-end databases. Combine platform as a service (PaaS) and infrastructure as a service (IaaS) in a virtual network to get more flexibility and scalability when you're building apps.
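As a minimal sketch of the subnet-planning step described above, the snippet below uses Python's standard `ipaddress` module to carve a private address space into per-tier subnets. The CIDR ranges and tier names are illustrative assumptions, not Azure defaults.

```python
import ipaddress

# Illustrative private address space for a virtual network (RFC 1918 range).
vnet = ipaddress.ip_network("10.0.0.0/16")

# Carve the space into /24 subnets, e.g. one per tier (names are hypothetical).
subnets = list(vnet.subnets(new_prefix=24))
tiers = {
    "web-frontend": subnets[0],   # 10.0.0.0/24
    "app-backend": subnets[1],    # 10.0.1.0/24
    "sql-data": subnets[2],       # 10.0.2.0/24
}

for name, net in tiers.items():
    print(f"{name}: {net} ({net.num_addresses} addresses)")

# Check that a given VM address falls inside the backend subnet.
vm = ipaddress.ip_address("10.0.1.25")
print(vm in tiers["app-backend"])  # True
```

In practice the same ranges would then be supplied when creating the virtual network and subnets through the Azure portal, CLI, or templates.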
Find and take advantage of popular networking appliances, load balancers, and firewall and traffic management solutions in the Azure Marketplace.
Related products and services
Create highly available, infinitely scalable cloud applications and APIs
Dedicated private network fiber connections to Azure
Start your free account with Virtual Network
Q. When I upload a file to the Google Drive cloud, where is it actually stored, and is it protected from hackers?
A. According to Google, the files you add to your Google Drive app or folder are stored on servers in secure data centers. Your data stored with Google is encrypted during transfer from your computer and while it sits on Google Drive servers. (Many online file-storage sites offer similar protection, but check before you sign up if you have security concerns.)
Cloud services are often targets for thieves, but personal security by the account holder is often the weak link. Google recommends using two-step verification on Google Drive accounts to add an extra layer of protection, and it says to always fully log out of your account when using shared or public computers. The Safety Center area of the company's website has other tips and tools for many of its products, including Gmail, Android and Chrome.
In addition to providing a place to store files, Google Drive includes Google Docs, an online software suite for creating and collaborating on documents, spreadsheets and presentations. This summer, the company announced a desktop app for Windows and Mac, called Backup and Sync, that keeps Google Photos and Google Drive files backed up online.
Google Drive has been around since 2012, and some people have raised concerns about Google's scanning of user content for its own purposes, like selling targeted advertising. The company's overall Terms of Service document states: "Our automated systems analyze your content (including emails) to provide you personally relevant product features, such as customized search results, tailored advertising, and spam and malware detection. This analysis occurs as the content is sent, received and when it is stored."
Google's terms are not unlike those of other companies offering similar services and can be seen as the price of "free." If they bother you, you can look for a different service with more agreeable terms or encrypt the stored data yourself.
Read the original here:
Keeping Your Files Safe in Google’s Cloud – New York Times
5 Reasons SD-WAN, 4G LTE Are Cloud Essentials
These technologies provide critical WAN edge infrastructure for seamless and elastic connectivity to the cloud — and an optimal user experience.
Businesses of all sizes across many industries are embracing cloud-based IT solutions in lieu of traditional on-premises server deployments, but wide-area network (WAN) implications brought about by this transition aren’t immediately recognized.
Cloud-based applications for office automation, Web and video conferencing, file sharing, VoIP and collaboration — including Microsoft 365, Cisco Spark, Slack, Box, and others — are proliferating due to the reduced cost and complexity associated with rolling out and managing these services across a distributed enterprise.
However, unified communications- and other software-as-a-service (SaaS) applications create new demands on underlying WAN infrastructure, especially for remote branch sites. Two emerging WAN technologies make it easier for businesses to update their WAN infrastructures for the cloud era: software-defined WAN (SD-WAN) and 4G LTE.
Here are five reasons SD-WAN and 4G LTE provide essential capabilities for ensuring reliability and optimal performance of cloud-based applications:
In the cloud era, SD-WAN and 4G LTE technologies provide essential WAN edge infrastructure for seamless and elastic connectivity to the cloud — and an optimal user experience.
Read the original:
5 Reasons SD-WAN, 4G LTE Are Cloud Essentials – No Jitter
Canon USA Advances PRISMAsync Color Print Server in Version 5.2, Offering Cloud-Based PRISMAlytics Dashboard … – PR Newswire (press release)
Canon will debut PRISMAsync Print Server v5.2 at PRINT 17 (booth #1213), held in Chicago, Illinois from September 10 to September 14, 2017.
“The evolution of the PRISMAsync family of print servers reflects Canon’s technological leadership and our focus on creating products that can help users increase productivity and maximize efficiency,” said Toyotsugu Kuwamura, executive vice president and general manager, Business Imaging Solutions Group, Canon U.S.A., Inc. “Productivity is an evolving term and means so much more than just the number of pages printed per minute in today’s market. The new tools and features of the PRISMAsync solution are designed specifically to help print providers keep operators efficient and their imagePRESS engines running, as is our effort to maintain the functionality and user interaction of the PRISMAsync platform for color and monochrome presses as uniform as possible.”
Among new features included with PRISMAsync Color Print Server v5.2 are:
PRISMAlytics Dashboard1 – A cloud-based service that collects, organizes and displays performance trends from PRISMAsync-driven digital presses running version 5.2 or later. It records press data and transforms it into an easy-to-access, easy-to-interpret dashboard displaying usage, production and consumption-related information. This fact-based dashboard can help print floor managers study press performance on a daily basis and analyze trends over time, helping them identify areas for improvement and bring higher efficiency to their production environment.
Extended and Automated Color Management Tools – For color critical environments, PRISMAsync Print Server v5.2 brings new color controls that can be implemented on a job or system-wide basis, helping operators produce each job with high quality. From optimized black point compensation to special saturation rendering, operators can be confident a full range of tools for producing the best color are at hand.
In addition, with PRISMAsync v5.2 for the imagePRESS C10000VP/C8000VP, the presses’ Inline Spectrophotometric Sensors (ILS) can be used for color calibration and ICC output profile creation. Rather than relying on measuring color targets by hand, the imagePRESS C10000VP Series presses will be able to automatically print and measure color patches (including G7 targets), creating custom ICC output profiles in just a few minutes and with no operator intervention after initiation.
“We chose PRISMAsync to drive our imagePRESS color digital presses because G7 calibration is built into the RIP and there is no need for add-ons or third-party software,” said Brad Steven, IT/Digital Production Manager at Print-Tech, a G7 Master Qualified Printer in Ann Arbor, Michigan. “The efficiency and productivity of PRISMAsync print server and imagePRESS C10000VP is phenomenal. Our color calibration continues to be spot on. We are excited to implement the new release of PRISMAsync Version 5.2; it is expected to cut the time it takes to perform our in-RIP G7 calibration and color profiling nearly in half while assuring even greater consistency of measurement.”
New Scan/Copy Features – For copy/scan-intensive print operations such as in-plants, full support of Lightweight Directory Access Protocol (LDAP) search when scanning to email and a new simplified “Easy Copy” screen have been implemented, helping to bring new efficiencies to these environments. Furthermore, v5.2 enables user interface configuration by user role, allowing administrators to limit available settings and options for certain roles to better fit their operation’s workflow or task division.
PRISMAsync Print Servers are key components of the PRISMA Suite – Canon's integrated suite of workflow essentials for production print. The suite is designed to optimize the print production workflow from job submission to completion, enabling users to adapt, evolve and grow within ever-changing production print environments.
PRISMAsync Color Print Server v5.2 is now available through Canon Authorized Dealers and Canon Solutions America for the imagePRESS C10000VP/C8000VP and imagePRESS C850/C750, and brings their functionality in line with the PRISMAsync platform for varioPRINT monochrome digital presses. For current PRISMAsync users, Canon makes a strong effort to offer field upgrades that keep the PRISMAsync feature and usability level aligned with newly sold devices. As such, version 5.2 is now also available for imagePRESS C800/C700 and imagePRESS C70xxVPS Series2.
The new PRISMAlytics Dashboard is scheduled to be available from October 2017.
For more information about Canon U.S.A., visit http://www.usa.canon.com.
About Canon U.S.A., Inc.
Canon U.S.A., Inc., is a leading provider of consumer, business-to-business, and industrial digital imaging solutions to the United States and to Latin America and the Caribbean markets. With approximately $29 billion in global revenue, its parent company, Canon Inc. (NYSE: CAJ), ranks third overall in U.S. patents granted in 2016. Canon U.S.A. is committed to the highest level of customer satisfaction and loyalty, providing 100 percent U.S.-based service and support for all of the products it distributes in the United States. Canon U.S.A. is dedicated to its Kyosei philosophy of social and environmental responsibility. In 2014, the Canon Americas Headquarters secured LEED Gold certification, a recognition for the design, construction, operations and maintenance of high-performance green buildings. To keep apprised of the latest news from Canon U.S.A., sign up for the Company’s RSS news feed by visiting http://www.usa.canon.com/rss and follow us on Twitter @CanonUSA. For media inquiries, please contact firstname.lastname@example.org.
Based on weekly patent counts issued by United States Patent and Trademark Office.
Availability and specifications are subject to change without notice. Prices are set by dealers and may vary.
All referenced product names, and other marks, are trademarks of their respective owners.
1PRISMAsync System Version 5.2, enablement of data transfer via the Web, PRISMAlytics license and Google or Microsoft account required to gain access to PRISMAlytics. Subscription to a third party service required. Google and Microsoft accounts are subject to Google’s and Microsoft’s terms and conditions, respectively. Neither Canon, Inc., nor Canon U.S.A., Inc. represents or warrants any third party product, service or feature referenced hereunder.
2PRISMAlytics Dashboard is not supported for imagePRESS C7000VPS Series/C7010VPS Series/C7011VPS Series.
Canon U.S.A. website: http://www.usa.canon.com
For sales information/customer support: 1-800-OK-CANON
View original content with multimedia:http://www.prnewswire.com/news-releases/canon-usa-advances-prismasync-color-print-server-in-version-52-offering-cloud-based-prismalytics-dashboard-and-additional-functionality-300515633.html
SOURCE Canon U.S.A., Inc.
SHANGHAI, Sept. 6, 2017 /PRNewswire/ — Today at HUAWEI CONNECT 2017, Huawei released the new-generation intelligent cloud hardware platform – Atlas. Positioned for scenarios such as public cloud, artificial intelligence (AI), and high-performance computing (HPC), Atlas rides on the advantages of key technologies including heterogeneous resource pooling and intelligent orchestration to deliver new levels of hardware resource utilization and adapt more flexibly to service demands. As the result of Huawei’s Boundless Computing strategy, the Atlas platform makes Huawei well positioned to steer the infrastructure resource pool revolution.
With the rise of AI research and application, AI and cloud computing technologies will be integral to people’s daily lives. From applications such as safe city and smart manufacturing to autonomous driving, AI and cloud technologies are spreading. Meanwhile, pivotal to intelligence is having capabilities that allow numerous systems to process in real time the massive volumes of data generated from a world of full connectivity. For example, a city with 100,000 cameras installed can produce 100 billion car plate records and 1 trillion facial recognition records a year. It has become a tremendous challenge for traditional computing systems to deliver the required capabilities in order to process, query, and analyze data of such a high order of magnitude.
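As a rough sanity check on the surveillance figures above, the arithmetic below works out the per-camera event rates those annual totals imply. The calculation is back-of-the-envelope and the derived rates are not from the article.

```python
# Back-of-the-envelope check of the city-surveillance figures above.
cameras = 100_000
plate_records_per_year = 100e9   # 100 billion
face_records_per_year = 1e12     # 1 trillion

seconds_per_year = 365 * 24 * 3600

# Per-camera event rates implied by the annual totals.
plates_per_cam_per_sec = plate_records_per_year / cameras / seconds_per_year
faces_per_cam_per_sec = face_records_per_year / cameras / seconds_per_year

print(f"{plates_per_cam_per_sec:.3f} plate records per camera per second")
print(f"{faces_per_cam_per_sec:.3f} face records per camera per second")
```

That is roughly one plate record every 30 seconds and one facial-recognition record every 3 seconds per camera, around the clock, which gives a sense of the sustained ingest rate a traditional system would need to absorb.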
Built for the public cloud, AI, and HPC scenarios, the Atlas platform released by Huawei addresses the data processing challenges in the AI era. The Atlas platform is based on Huawei’s FusionServer G series heterogeneous servers. The FusionServer G series leverages technologies such as heterogeneous resource pooling and intelligent orchestration to pool resources like GPUs, HDDs, and SSDs, and provides hardware resources on demand to suit the needs of specific service models.
Atlas improves resource utilization efficiency by over 50% and boosts performance by over 10x compared with the traditional x86 architecture, while reducing the number of hardware device types customers must manage. Additionally, Atlas can provision logical servers in different resource configuration ratios and deploy them within seconds, significantly shortening the service rollout cycle while giving customers the flexibility to adapt to service changes.
“As the AI era is approaching, traditional hardware cannot meet the requirements on development of AI and cloud technologies,” said Qiu Long, President, IT Server Product Line, Huawei. “Positioned as a new-generation intelligent cloud hardware platform, Huawei’s Atlas pioneers heterogeneous resource pooling and intelligent orchestration technologies to bring resource utilization and performance to new high levels. Atlas is also the fruit of Huawei’s Boundless Computing strategy. Huawei has been inspired by the idea of making computing simple through continuous innovation, and keeps customers’ requirements in mind to build servers that are stable, reliable, and high-performing, to help customers better cope with the challenges for successful transformation in the AI and cloud era.”
Huawei has positioned Boundless Computing as a major step toward building a better connected world. The Boundless Computing strategy advocates innovations around computing and leverages innovative chips, heterogeneous computing, and edge computing in order to push the traditional computing boundaries that used to be confined to CPUs, servers, and data centers. At HUAWEI CONNECT 2017, Huawei released multiple brand-new public cloud services based on the Atlas hardware platform, including GPU and FPGA cloud services, empowering AI applications with cloud capabilities and enabling customers to access heterogeneous computing services with more ease and speed.
Huawei servers and the entire IT product line have entered the global IT mainstream. Huawei delivers leading performance and simplified, efficient IT infrastructure to help enterprises accomplish their digital transformation. So far, 197 companies on the Fortune Global 500 list and 45 companies in the Fortune Global 100 have chosen Huawei as their digital transformation partner.
HUAWEI CONNECT, Huawei’s flagship event for the global ICT industry, is taking place at the Shanghai New International Expo Centre from September 5-7, 2017 under the theme of “Grow with the Cloud”. At this global platform for open collaboration, Huawei together with its customers and partners will explore new growth opportunities through digital transformation. For more information, please visit http://www.huawei.com/en/events/huaweiconnect2017/
Hewlett Packard Enterprise reported better-than-expected earnings for its third quarter ending July 31 despite what it described as continued “commodities cost pressures” related to tighter memory supplies for servers. It also announced a cloud acquisition.
The hybrid IT specialist also reported unexpected gains in its server business as its “pivots” beyond the datacenter to greener pastures such as HPC while mapping its strategy in the emerging all-flash storage market.
CEO Meg Whitman also addressed reports she was in the running to head the ride-share service Uber, acknowledging interest in the position but declaring she would remain as head of HPE.
The company reported third quarter revenues totaling $8.21 billion, ahead of analysts' estimate of about $7.5 billion. The company (NYSE: HPE) also exceeded its earnings-per-share target while operating margins rose sequentially.
The results also reflect completion of the “spin-merge” of its enterprise software business to U.K. vendor Micro Focus. HPE stock jumped 5 percent in response to the quarterly results, and is up 3 percent since the beginning of the year.
As it exits the application software business to focus on systems software used in datacenters and cloud deployments, Whitman said future acquisitions in areas such as storage would focus on "organic growth." To that end, the company announced earlier in the day it is acquiring the consulting firm Cloud Technology Partners.
Founded in 2010, the “cloud agnostic” consulting firm helps customers determine the appropriate environment for running specific workloads. Whitman said the acquisition strengthens HPE’s “hybrid IT” approach that spans datacenters, public and private clouds as well as systems software.
Asked whether the service would draw customers away from its on-premises IT services to public clouds, Whitman replied, "We will add our expertise with [on-premises] private cloud," calling the acquisition "an opportunity to scale that practice." She added: "There are some workloads that customers probably should move to a public cloud." Microsoft (NASDAQ: MSFT) Azure is HPE's public cloud partner.
Whitman and Tim Stonesifer, HPE’s chief financial officer, made frequent references to the impact of what Stonesifer called “ongoing commodities pressure” related to tight supplies of DRAMs used in servers.
“We continue to see commodities pressure,” Stonesifer noted, adding that the company expects rising memory prices at least through the end of this year. HPE reportedly plans to raise its server memory prices by 20 percent in response to accelerating adoption of higher density memory modules. Tightening supplies for server memories are also being driven by increased demand for DDR4 memory modules as new platforms such as Intel’s (NASDAQ: INTC) Skylake-SP, or “Purley,” are introduced.
As DRAM supplies tighten, memory prices have risen between 5 and 10 percent. “We’ll see what happens with commodity prices,” Whitman said.
Meanwhile, HPE’s overall server business grew during the third quarter, and it expects to gain market share in its core datacenter business while “pivoting” to higher-end markets such as HPC.
“We are cautiously optimistic about our server business,” Whitman added, especially with blade and next-generation servers “with security built into servers at the chip level.”
In the competitive storage business, HPE touts the introduction of all-flash arrays in its 3Par storage line along with its recent acquisition of all-flash and hybrid flash array provider Nimble Storage. Competitors such as Dell EMC also are rolling out all-flash storage options while targeting their existing customer bases. “We have to go mine our installed base” as competitors have, Whitman said. “This is a fundamental [technology] in the datacenter.”
As for her dalliance with Uber, Whitman said: “In the end, it wasn’t the right thing,” adding, “I actually am not going anywhere.”
About the author: George Leopold
George Leopold has written about science and technology for more than 25 years, focusing on electronics and aerospace technology. He previously served as Executive Editor for Electronic Engineering Times.
Read the original:
HPE Reports Q3 Gains Along With Cloud Deal – EnterpriseTech
It’s no secret that the Internet of Things (IoT) has led to an explosion of data. In order to better manage that data, and get the most out of it, enterprise IT is making serious investments across its infrastructure stack.
According to a Tuesday report from 451 Research, some 32.4% of enterprise organizations are planning to increase their storage capacity over the next 12 months. Some 30.2% of organizations are planning to increase their network edge equipment over the next year, while 29.4% plan to increase server infrastructure, and 27.2% plan to increase off-premises cloud infrastructure.
The majority of respondents are planning to either increase their spending on IoT projects, or keep it the same, the report found. Of those surveyed, 65.6% planned to increase their spending on IoT over the next year, while only 2.7% were planning a reduction.
SEE: Special report: Harnessing IoT in the enterprise (free PDF)
In terms of storage, 53.1% of companies store their IoT data at a company-owned data center, while 59.1% analyze it there. Some two-thirds of respondents said that the data would remain there for its lifespan, but the other one-third said they eventually move it to the public cloud. Cloud storage was more often used for historical use cases such as “regulatory reporting and trend analysis,” the report said, as it offers flexibility and cost savings.
Data processing is also happening at the edge, according to slightly fewer than half of the respondents. Analysis, aggregation, and filtering are happening on the IoT device (22.2%) or in IT infrastructure that is nearby (23.3%), respondents said.
“Companies are processing IoT workloads at the edge today to improve security, process real-time operational action triggers, and reduce IoT data storage and transport requirements,” Rich Karpinski, research director for Voice of the Enterprise: Internet of Things, said in the report. “While some enterprises say that in the future they will do more analytics, including heavy data processing and analysis driven by big data or AI, at the network edge, for now that deeper analysis is happening in company-owned datacenters or in the public cloud.”
Most companies are using IoT to improve their IT processes, the report found: specifically, data center management, surveillance, and security monitoring. However, the report also noted that facilities automation will become the most popular use case within two years.
As the use cases for IoT continue to grow, so does demand for tech workers skilled in areas relevant to IoT. Nearly 50% of survey respondents said that they struggle with a “skill shortage” for work in IoT.
The Internet of Things has started to impact every aspect of our daily lives. Our appliances, cars, gadgets, communication devices, tools, and even some of our clothing have become nodes on the internet. By 2020, as many as 50 billion devices will be connected, and the skyrocketing growth of traffic generated by devices at the edge of the network will pose a monumental challenge to our networks and to central cloud computing. Thankfully, we can take advantage of the ever-increasing computing capabilities of edge devices to turn them into cloud servers and extend central-cloud capabilities to the edge. Edge cloud is to central cloud what WiFi is to cellular communication: just as WiFi carries most of the global wireless traffic today, edge devices will soon manage most of the cloud computing burden.
According to Gartner's 2017 Hype Cycle for Emerging Technologies, edge cloud computing is on the brink of becoming an innovation trigger. Microchips and sensors continue to become embedded in everyday objects, making edge cloud computing an immense opportunity. There are billions of potential edge servers today and there will be tens of billions more tomorrow.
An illustrative example: self-driving cars
Self-driving cars have many potential benefits: optimized traffic flow, improved fuel efficiency, reduced accidents, and drivers with more free time. However, a big challenge for self-driving cars is to find a cost-effective way to process the vast amounts of data that they generate. On average, every self-driving car generates approximately one gigabyte per second of data, which is orders of magnitude more than the capacity of a 4G base station and an order of magnitude more than a 5G base station. No network in the foreseeable future can send all this data back to the central cloud for processing. Most of this data needs to be processed locally, and only a minimal set of processed data should be sent back to the central cloud for global coordination. Moreover, to meet the latency requirements of fast decision-making, self-driving cars should communicate in the fastest way possible. This demands instantaneous processing of information and, when possible, peer-to-peer communication. This is where distributed edge cloud computing comes into play, transforming cars into data centers on wheels where most of the communication and processing is performed as close to the edge as possible.
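To make the scale of the mismatch concrete, the quick comparison below puts the per-car data rate next to assumed aggregate cell capacities. The base-station figures are illustrative assumptions for the sketch, not vendor specifications.

```python
# Rough comparison of one self-driving car's data rate with cellular capacity.
# The base-station capacities below are illustrative assumptions, not specs.
car_rate_gbit = 8.0          # ~1 GByte/s per car, expressed in Gbit/s
lte_cell_gbit = 0.15         # assumed ~150 Mbit/s aggregate 4G cell
five_g_cell_gbit = 1.0       # assumed ~1 Gbit/s aggregate early-5G cell

print(f"One car vs 4G cell: {car_rate_gbit / lte_cell_gbit:.0f}x the capacity")
print(f"One car vs 5G cell: {car_rate_gbit / five_g_cell_gbit:.0f}x the capacity")

# Data produced per hour of driving, in terabytes.
tb_per_hour = 1.0 * 3600 / 1000  # 1 GByte/s * 3600 s, converted to TB
print(f"Data per hour of driving: {tb_per_hour:.1f} TB")
```

Even under these generous assumptions, a single car saturates an entire cell many times over, which is why most of the processing has to stay local.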
In a distributed edge cloud environment, every car can be a set of microservices that can sense other cars not only because of all the sensors but also because their microservices can communicate with microservices of other cars. To make this work, microservices (within a car and between cars) need to form ad-hoc clusters based on scopes such as proximity, network or account affinity. This way, cars can process the terabytes of data as quickly and as efficiently as possible leveraging not only the processing power at the central cloud but also their own collective computing, storage and memory resources in a collaborative fashion with other cars on the road.
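A toy sketch of the proximity scoping described above: group nearby cars into ad-hoc clusters by snapping their positions to a coarse grid. The positions, cell size, and car identifiers are all illustrative; a real system would use live telemetry and richer scopes such as network or account affinity.

```python
from collections import defaultdict

# Toy "proximity scope": cars whose positions fall in the same grid cell
# form one ad-hoc cluster. CELL_SIZE and positions are illustrative.
CELL_SIZE = 100.0  # metres

def cluster_by_proximity(cars):
    """cars: dict of car_id -> (x, y) position in metres."""
    clusters = defaultdict(list)
    for car_id, (x, y) in cars.items():
        cell = (int(x // CELL_SIZE), int(y // CELL_SIZE))
        clusters[cell].append(car_id)
    return dict(clusters)

cars = {
    "car-a": (12.0, 40.0),
    "car-b": (85.0, 60.0),    # same 100 m cell as car-a
    "car-c": (430.0, 510.0),  # far away: its own cluster
}
print(cluster_by_proximity(cars))
```

Within each cluster, the member cars' microservices could then pool their computing, storage, and memory resources, as the article describes.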
Edge cloud computing is the next major computing revolution
Throughout the last few decades, computing has cycled between centralized and distributed. In the early days, computing was centralized on mainframes. In the mid-80s, with the advent of personal computers, the industry shifted to distributed computing. In the last decade, we witnessed the move to centralized cloud computing. Many falsely predicted that this was the holy grail of computing: we would move to an era of thin clients, where devices would be dumb screens and input devices and all the processing would be performed in data centres in the cloud. This made sense for some applications, such as music or video streaming or hosting some software applications. In all these cases, edge devices do not generate much data and are mostly passive receivers of information.
In the last few years, two major socio-technical trends have contributed to a fundamental change in production and consumption of data.
First, thanks to the mobile internet and social media, ordinary people generate massive amounts of data, turning them from mere consumers into both consumers and producers. For example, today close to 500 million photos are uploaded to Facebook and Instagram and roughly 500 thousand hours of video are uploaded to YouTube daily; this is more content than the three major US networks generate in two years. Nor is this only a consumer phenomenon; more than 80% of businesses have started to leverage user-generated content in their marketing efforts.
Second, we have the rapid growth of IoT devices, where many new edge devices produce valuable data. There are already 20 billion connected devices, 2.5 billion of which are B2B IoT devices. Over 300 million wearable devices were sold in 2017 alone. Many of these devices generate small amounts of data, but many generate massive amounts; for example, when video meets IoT, we apply machine-learning algorithms to video feeds, allowing cameras to recognize people, objects, and situations automatically. There will be phenomenal growth in AR/VR in the gaming industry and even the enterprise, starting with creative applications and quickly moving to brick-and-mortar industries and manufacturing. Robots will also be producers of massive amounts of data at the edge.
Clearly, we are amid an explosion of data generated at the edge, and the tsunami is yet to come. The question is: can our communication networks scale to cope with the data generated at the edge? To answer this, we can look at two predictors: Moore's law in computing and its equivalent in network bandwidth. History has shown that computing power roughly doubles every 18 months (about a hundredfold every decade), whereas network bandwidth grows about 50 times every decade. In other words, even if the number of devices does not grow (which it clearly will), the communication network will be the bottleneck for the growth of IoT.
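The two growth rates quoted above can be checked with a couple of lines of arithmetic, treating "doubles every 18 months" as compounding over a decade:

```python
# Compare the two growth rates quoted above over one decade:
# computing doubles roughly every 18 months; bandwidth grows ~50x per decade.
years = 10
compute_growth = 2 ** (years * 12 / 18)   # number of doublings in a decade
bandwidth_growth = 50

print(f"Compute growth over a decade:   ~{compute_growth:.0f}x")
print(f"Bandwidth growth over a decade: ~{bandwidth_growth}x")
print(f"Gap after one decade:           ~{compute_growth / bandwidth_growth:.1f}x")
```

Compute capacity grows roughly 100x per decade against bandwidth's 50x, so the gap between what edge devices can generate and what networks can carry roughly doubles every ten years.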
Setting bandwidth aside, many IoT applications such as self-driving cars or tactile control communications in various industries require low latency response. In this case, even if the network capacity is miraculously increased to cope with the data, laws of physics inhibit remote processing of data in the central cloud due to large latencies in the long-haul transmission of data.
So, what is the solution? How can we cope with the explosion of data at the edge and the strict latency requirements of some IoT applications? The answer is distributed edge cloud computing. Edge cloud computing means that any device (or node) can become a cloud server. As much as possible, data is processed at the edge of the network, as close to the originating source as possible, instead of everything being processed in the central cloud. This approach is faster, more efficient, and more scalable: data can be immediately analysed and put into action, overcoming bandwidth limitations and latency constraints on the network. Edge cloud computing is essential to meet stringent bandwidth and latency requirements while minimizing power consumption and infrastructure costs.
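A minimal sketch of the edge-side processing just described: rather than shipping every raw reading to the central cloud, an edge node keeps the raw data local and forwards only a compact summary. The field names and readings are illustrative, not from any particular platform.

```python
import json
import statistics

# Minimal sketch of edge-side aggregation: the edge node reduces a window
# of raw sensor readings to a small summary payload for the central cloud.
# Field names and values are illustrative assumptions.
def summarize_window(readings):
    """Reduce a window of raw readings to a compact summary."""
    return {
        "count": len(readings),
        "mean": round(statistics.fmean(readings), 2),
        "max": max(readings),
        "min": min(readings),
    }

raw = [21.0, 21.4, 22.1, 35.9, 21.2, 21.1]  # one local measurement window
payload = json.dumps(summarize_window(raw))
print(payload)

# Bandwidth saved: six readings in, one summary out.
print(f"sent {len(payload)} bytes instead of {len(json.dumps(raw))} bytes")
```

The central cloud still receives enough to coordinate globally (here, the spike captured in "max"), while the bulk of the data never leaves the edge.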
Edge cloud computing is a paradigm shift that enables every device, appliance, or gadget to communicate and share resources making them part of the solution for scaling of IoT. It allows drones and robots to harness their collective resources in industries such as manufacturing, oil and gas, agriculture or mining, delivering real-time data and improving business efficiency. This new computing model will revolutionize the world in ways that we may not be able to predict at this moment.
The great news is that the technology is ready for developers today. mimik has developed a fully distributed edge cloud platform that extends the central cloud to the edge: mimik arms developers with a platform to unleash the power of edge devices. The mimik SDK solves many of the current challenges that centralized cloud computing alone cannot address. We extend the power of the central cloud to the edge and boost its reach so that bandwidth, latency constraints, and infrastructure cost do not become the bottleneck for the healthy and scalable growth of IoT.
We need a paradigm shift that transforms tens of billions of devices from a challenge to an opportunity. IoT requires a revolution in computing that unlocks the power of connected devices. Distributed edge cloud is the ideal solution to harness computing resources at the edge, unlocking the promise of a smart connected world that will bring massive efficiencies to enterprises and digital freedom to consumers.
Siavash Alamouti, CEO at mimik
We generally think of a transformation to a hybrid infrastructure as one where you’re going from a completely private setup to one that spans the public cloud and your private installation. But what if you started life as a small company with your systems entirely in the cloud? It’s not an unusual approach, as running up your initial services in the cloud is straightforward and avoids a big capital outlay. As a company grows it’s understandable that it might want to take on a private data centre, build an in-house support team and evolve to a two-site setup.
Step one is to consider why you're bothering with an on-premises setup instead of a second cloud instance. The answer will generally be that you want something that's closer to your office, with a potential performance improvement gained from such proximity. And that's fine: what matters is that you've considered the options before deciding which way to go.
The next step is to think about where you'll host your private data centre. As you're already in the cloud, you have the opportunity to pick a data centre that's close (electronically speaking) to the cloud centre you're in. For example, you're probably aware that AWS provides a Direct Connect facility that lets you hook straight into their infrastructure rather than accessing your cloud world over the internet. Check out the locations and you'll see that the connectivity's hosted at 51 well-known locations: Equinix in London, for example, or TierPoint in Seattle. Connectivity between your public and private components with a latency of just a few milliseconds is an attractive concept if you're looking for high availability with seamless failover.
Next, you’ll need to think about the platform you’re using. Most of the time you’ll have used one or more of your cloud provider’s standard operating system templates, so it makes sense to run your local stuff on the same operating system flavour if you can. And of course you should use the same CPU architecture where you can too, so you can be assured that your apps will be portable.
So you’ve sorted the platform. Now you need to decide whether the on-premises setup is to be your primary or secondary installation. If it’s to be a secondary setup you should have a relatively straightforward job of adding new system and application-level components in as secondaries to your cloud-based apps.
If you decide to flip things around you'll have a more involved task: shifting the primary apps over and redeploying the cloud setup as the secondary installation. Either way, the happy news is that you've already gone through the non-trivial task of providing your office users with connectivity to the cloud installation, so hooking things up so they're able to get to the private data centre, regardless of whether it's the primary or the secondary, should be easier.
One further consideration with the choice of primary and secondary installations is the cost of data transfer. Shifting data out of a commercial cloud setup has a cost associated with it. Not a vast cost, I’ll grant you, but one that you do need to keep an eye on. Using Amazon as an example, moving a terabyte per month over the internet from the cloud setup to your private installation will cost you $90. That’s $900 for 10TB, or $7,800 for 100TB; even though the per-gigabyte cost tapers down, it doesn’t ever tail off at zero. What does this mean? Easy: if the cloud setup is the primary and it’s replicating application data to the private secondary, you’re paying a chunk of cash for it to do so.
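The figures above ($90 for 1TB, $900 for 10TB, $7,800 for 100TB) follow from AWS's tiered per-gigabyte internet egress pricing at the time of writing. A minimal sketch of the calculation (the tier boundaries and per-GB prices are assumptions based on that published price list, and will change over time):

```python
# Tiered egress-cost calculator. Tiers are (size_in_GB, price_per_GB),
# applied in order: first 10 TB at $0.09/GB, next 40 TB at $0.085/GB,
# next 100 TB at $0.07/GB. Illustrative prices only.
TIERS = [
    (10_000, 0.09),
    (40_000, 0.085),
    (100_000, 0.07),
]

def egress_cost(gb: float) -> float:
    """Cost in dollars of moving `gb` gigabytes out to the internet."""
    cost, remaining = 0.0, gb
    for tier_size, price_per_gb in TIERS:
        chunk = min(remaining, tier_size)
        cost += chunk * price_per_gb
        remaining -= chunk
        if remaining <= 0:
            break
    return cost

print(egress_cost(1_000))    # 1 TB/month  -> ~$90
print(egress_cost(10_000))   # 10 TB/month -> ~$900
print(egress_cost(100_000))  # 100 TB/month -> ~$7,800
```

Note how the 100TB figure is less than ten times the 10TB figure: the per-gigabyte price tapers as volume grows, but it never reaches zero, which is exactly why a cloud primary replicating to a private secondary keeps costing you money.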
While we’re on the subject of data transfer, you also need to figure out how you’re going to do it. In these modern times, it’s a relative doddle to set up the major cloud providers’ storage instances so you can access them externally via standard protocols such as NFS. Alternatively you can look to the major storage vendors, who will sell you a funky gateway to install in your private data centre and handle the cloud magic for you.
The next consideration is licensing, and there are two aspects here. First is the basic fact that you'll need to buy operating system and/or application licences for your private setup: this sounds obvious, but you may never have had to consider it if you were using a pay-as-you-go model with pre-configured cloud app servers. Second is that if you want to go for a clustered or active/passive application setup, you may need to revisit the versions you use on the cloud servers as well as buying licences for your private setup. Take SQL Server, for example: if you're running Standard Edition you can implement basic two-node high availability, but if you want something more advanced you'll need to upgrade to Enterprise Edition. Same with Oracle: if you want to enable Data Guard between sites, that'll need Enterprise Edition too.
Lastly, but by no means least, is your internal support team. They’ve probably spent a number of years fettling your cloud installation and fixing stuff when it broke, but their skillset will be at worst lacking and at best out of date when it comes to hosting, networking, hardware and hypervisor support.
Be prepared to invest in training so that you can be confident that the new kit you're acquiring for your private data centre is properly supportable and hence properly supported. Yes, your typical infrastructure is easier to put together than it was a few years ago, but that doesn't mean it's trivial. And if you're virtualising your private data centre, which you should, getting the hypervisor layer running and optimised will take time, effort and skill.
Going from a cloud-centric setup to a hybrid infrastructure isn't rocket science, then, which is no great surprise, as any problem's tractable if you design, plan and implement the solution properly. But going from cloud to hybrid has some differences from going from private to hybrid.
So you just need to think a bit before you do it.
By PATRICK LANG'AT
The Raila Odinga-led National Super Alliance (Nasa) has written to the electoral agency with 25 new demands that they say should be fulfilled in the October 17 fresh poll.
In a letter signed by Mr Odinga's chief agent and Nasa co-principal Musalia Mudavadi, the opposition has demanded a full audit of the election technology, full access to the servers, a change of ballot printing firm, and the gazettement of 290 new constituency returning officers.
"It is therefore inappropriate and foolhardy for the IEBC to embark on the planning of the fresh election without full compliance with the Orders of the Supreme Court in the redeployment of technology in the fresh presidential election. We therefore demand a full audit of technology in use in full compliance with the law," Mr Mudavadi said in the four-page letter.
The Nasa team has not only opposed the Independent Electoral and Boundaries Commission's date for the repeat poll, it has also questioned why the commission narrowed the field of candidates to only Mr Odinga and President Uhuru Kenyatta.
"Your interpretation that the election scheduled on the 17th October 2017 shall be in the style of a run-off contest of only two candidates is erroneous and unconstitutional. We also take reservation that you have not consulted the parties involved before making a determination on the date of the said election," Mr Mudavadi told IEBC Chairman Wafula Chebukati.
In the audit, Nasa has demanded a scrutiny that, they say, should include full information on the ICT infrastructure, the list of support partners and their respective Service Level Agreements, details of the firewall configuration, including port configuration, as well as disclosure of all database transaction logs.
The team also wants a physical view and inspection of the IEBC servers, portal access to the cloud servers and IP addresses of all 20 servers; full access and copy of all servers and databases used by the IEBC; GPS coordinates of KIEMS; and Telkom and network structure with all service providers.
The opposition has also demanded the removal of senior personnel at the IEBC secretariat including Chief Executive Ezra Chiloba, his deputy Betty Nyabuto, James Muhati, the ICT director, Ms Immaculate Kassait, director of voter registration, Ms Praxedes Tororey, the head of legal team, with Mr Moses Kipkosgey being added to the earlier list.
"Independence of the IEBC is not negotiable. Nasa coalition demands that fresh election should be administered by professional and non-partisan officials," Mr Mudavadi said.
“We demand that officials who are partisan or perceived as such should step aside and or be suspended during the planning and execution of the fresh election.”
Further, the coalition demanded a full audit of the 45,000 voter identification and results transmission kits, procured for Sh3.8 billion from the French firm Safran Morpho.
"We demand that Safran and Oracle provide full implementation information in relation to their involvement in the General Election held on 8th August 2017," said Mr Mudavadi.
The team has also demanded a review of the voter register, and the 40,883 polling stations.
To enhance transparency, Nasa said the following specific demands must be met:
Appoint and gazette returning officers not drawn from among the Constituency Election Coordinators, in consultation with political parties and candidates.
Establishment of a technical monitoring committee with representatives of the main political parties, coalitions or candidates to oversee implementation of the technology in use.
Stop use of Al-Ghurair to print ballot papers and results declaration forms
All Forms 34B should be pre-printed, indicating the names of polling stations in the constituency and the names of candidates
Elections results to be announced at the Constituency level. Results sent electronically must be accompanied by corresponding statutory result declaration forms
Candidates' agents should be part of the receiving teams at the constituency and national tallying centres, and be allowed to confirm entries before transmission
Establish defined roles of the security agencies and eliminate undue influence by the provincial administration and other public officials
Jubilee Party chose the October 17 date, claim Nasa leaders.
Source: Nasa: Our demands for repeat presidential election – Daily Nation