
Private Cloud Server Market Growth by Top Companies, Trends by Types and Application, Forecast to 2026 – Cole of Duty

Key companies profiled in the report include SugarSync.

Moreover, the Private Cloud Server report offers a detailed analysis of the competitive landscape across regions. The major service providers are highlighted along with a market overview, their business strategies, financials, recent developments and product portfolios. The report also contains significant data on market segmentation by type, application and regional landscape, and provides a brief analysis of the opportunities and challenges faced by the leading service providers. It is designed to deliver accurate market insights and a clear view of the current market status.

By Regions:

* North America (The US, Canada, and Mexico)

* Europe (Germany, France, the UK, and Rest of Europe)

* Asia Pacific (China, Japan, India, and Rest of Asia Pacific)

* Latin America (Brazil and Rest of Latin America.)

* Middle East & Africa (Saudi Arabia, the UAE, South Africa, and Rest of Middle East & Africa)

To get Incredible Discounts on this Premium Report, Click Here @ https://www.marketresearchintellect.com/ask-for-discount/?rid=190097&utm_source=NYH&utm_medium=888

Table of Content

1 Introduction of Private Cloud Server Market

1.1 Overview of the Market
1.2 Scope of Report
1.3 Assumptions

2 Executive Summary

3 Research Methodology

3.1 Data Mining
3.2 Validation
3.3 Primary Interviews
3.4 List of Data Sources

4 Private Cloud Server Market Outlook

4.1 Overview
4.2 Market Dynamics
4.2.1 Drivers
4.2.2 Restraints
4.2.3 Opportunities
4.3 Porter's Five Force Model
4.4 Value Chain Analysis

5 Private Cloud Server Market, By Deployment Model

5.1 Overview

6 Private Cloud Server Market, By Solution

6.1 Overview

7 Private Cloud Server Market, By Vertical

7.1 Overview

8 Private Cloud Server Market, By Geography

8.1 Overview
8.2 North America
8.2.1 U.S.
8.2.2 Canada
8.2.3 Mexico
8.3 Europe
8.3.1 Germany
8.3.2 U.K.
8.3.3 France
8.3.4 Rest of Europe
8.4 Asia Pacific
8.4.1 China
8.4.2 Japan
8.4.3 India
8.4.4 Rest of Asia Pacific
8.5 Rest of the World
8.5.1 Latin America
8.5.2 Middle East

9 Private Cloud Server Market Competitive Landscape

9.1 Overview
9.2 Company Market Ranking
9.3 Key Development Strategies

10 Company Profiles

10.1.1 Overview
10.1.2 Financial Performance
10.1.3 Product Outlook
10.1.4 Key Developments

11 Appendix

11.1 Related Research

Get Complete Report

@ https://www.marketresearchintellect.com/need-customization/?rid=190097&utm_source=NYH&utm_medium=888

About Us:

Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations with the aim of delivering functional expertise. We provide reports for all industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverage and more. These reports deliver an in-depth study of the market with industry analysis, market value for regions and countries and trends that are pertinent to the industry.

Contact Us:

Mr. Steven Fernandes

Market Research Intellect

New Jersey (USA)

Tel: +1-650-781-4080

Tags: Private Cloud Server Market Size, Private Cloud Server Market Trends, Private Cloud Server Market Growth, Private Cloud Server Market Forecast, Private Cloud Server Market Analysis

Our Trending Reports

Metal Cleaning Chemicals Market Size, Growth Analysis, Opportunities, Business Outlook and Forecast to 2026

Transcritical CO2 Market Size, Growth Analysis, Opportunities, Business Outlook and Forecast to 2026

Go here to see the original:
Private Cloud Server Market Growth by Top Companies, Trends by Types and Application, Forecast to 2026 - Cole of Duty


Swarm Theory: Lessons from nature in the advancement of robotics – Techerati

Can a robot truly be greater than the sum of its parts?

As part of a recent entry in Science Robotics, experts argued that Covid-19 could be a catalyst for developing robotic systems that can be "rapidly deployed with remote access [...] to front lines". It is often in times of great strife that innovation truly comes to the fore; the progress made across both public and private sectors in recent weeks is a tribute to just that, encompassing everything from advanced data analytics to the production of ventilators by the likes of McLaren, Mercedes and other F1 teams.

Robotics is no different. Robots are currently handling room service in isolation centres, patrolling the streets to help countries achieve social distancing policies, and helping to entertain the elderly. There are even robots whose purpose aligns perfectly with the specificities of this particular pandemic. UVD Robots, a company founded in 2016 by BlueOcean Robotics, produces a mobile bot with powerful UV lights built into the hardware. The robot can kill 99.99 per cent of all pathogens in the air using those light waves, a feature which will be most welcome in hospitals around the world currently.

To a certain degree, however, this form of robotics appears familiar, or at the very least the theory behind it does: goal-orientated, designed for a particular purpose, and often redundant if the task changes. These robots are a valuable, if not especially adaptable, part of the problem-solution matrix, in which value is measured by streamlining a process or making a specific task easier. The question moving forward ought to be: can a robot truly be greater than the sum of its parts? Answering this will open doors to further innovation and provide a foundation for producing robots that can contribute across multiple industries and tasks.

Much like our own society, robots can benefit from the output of a collective. Working together, we can often maximise our strengths and mask individual weaknesses. Swarm robotics, the concept that takes inspiration from bees and other social insects, is set to change everything we know about the Internet of Things (IoT). As a collection of connected devices at scale, swarm robotics represents the next stage of deployments and promises to revolutionise the applications of robots in more diverse environments.

The first aspect of that swarm robotics foundation is, in fact, swarm intelligence. This is where the connection between teams of insects and robots is the closest. It looks at how insects communicate and react as part of a larger group, and how the actions and movements of others impact the entire group's output.

In robotics, it's the shift from a central authority or brain function, most likely in the cloud, to one that is dispersed between every robot; one that is almost telepathic in its communication. We can think about it in terms of direction. The majority of robots today receive information vertically: sensors in a robot collect data at the edge, which is sent up to the cloud for processing and delivering actionable insights. For swarm robotics to function in real time, this flow of information must become horizontal in order for each individual robot to know the bigger picture. It is this decentralised processing of information that will empower robots to move beyond their own singular capabilities.
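
To make that horizontal flow concrete, here is a minimal, illustrative Python sketch. It is not tied to any real robotics framework and every name in it is invented: each robot simply shares its local sensor reading directly with its peers and computes the swarm-wide consensus itself, with no central brain involved.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Robot:
    """A toy swarm member that reaches consensus from peer messages alone."""
    name: str
    reading: float                       # local sensor value, e.g. obstacle distance
    peers: list = field(default_factory=list)

    def __post_init__(self):
        self.inbox = [self.reading]      # every robot starts with its own reading

    def broadcast(self):
        # Horizontal flow: push the local reading straight to every peer
        # instead of sending it up to a central cloud brain.
        for peer in self.peers:
            peer.receive(self.reading)

    def receive(self, value: float):
        self.inbox.append(value)

    def consensus(self) -> float:
        # Each robot ends up with the same picture of the swarm,
        # computed locally from what its peers told it.
        return mean(self.inbox)

# Wire up a fully connected three-robot swarm and let it agree.
swarm = [Robot("r1", 2.0), Robot("r2", 3.0), Robot("r3", 7.0)]
for robot in swarm:
    robot.peers = [other for other in swarm if other is not robot]
for robot in swarm:
    robot.broadcast()
print({robot.name: robot.consensus() for robot in swarm})   # all three converge on 4.0
```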

Developing a robot colony that reflects nature will have a vast array of benefits and use cases, but some of the most immediate include tackling oil or toxic waste spillages, treating coral and other submerged areas, and contributing to search and rescue operations in environments that have become inhospitable. In all of these situations, swarm robotics is ideally suited to the unknown or changing settings, as well as the need to react in real-time to situations that are not often possible to map or predict.

That, in a nutshell, is why swarm robotics is important. But to answer the question of why it is becoming a relevant topic right now, we must look closer at the technology stack underpinning it. The Robot Operating System (ROS) is a popular open-source platform already used for advanced robotics. Its flexibility and ease-of-use make it perfectly suited to a wide array of robotics applications. Now add to this the progress being made in tiny AI and security solutions, and the bedrock of swarm robotics is close to completion.

Academic researchers and tech companies alike are working on new algorithms to shrink larger, existing artificial intelligence models without losing the capabilities they offer. At the present moment, AI is demanding more computational power each day, which then spills over to cloud servers. This is neither conducive to swarm robotics, nor is it very secure.

Localised AI is better aligned to privacy, because data does not need to leave the IoT device. Robot manufacturers, meanwhile, can take this one step further, by minimising the attack surface of a bot to boost security. Finally, technology such as snaps allows software updates to be processed automatically and transactionally, to ensure a robot never fails at the base level.

Similar to developments within AI, swarm robotics is an example of technology and academia intersecting at a broader level. An increase in the number of mechatronics courses and other specialised degrees in recent years is helping to promote new practices in robotics theory at the very highest level, which in turn trickles down to physical developments. This convergence of mechanics and robotics studies has produced engineers ready-made for IoT deployments, further bolstering the likelihood we will see real-life use-cases of swarm robotics in the near future.

When this era does arrive, vast numbers of identical robots will be working in coordination for a larger collective goal. This will result in smarter and more agile deployments, directly at the edge. Inspiration can come from anywhere, and matching lessons from nature with robust technology solutions will benefit multiple industries, both in the short and long term.

Swarm robotics can help tackle situations we cannot currently foresee or plan for, and it is this distinguishing feature that will help elevate the entire robotics industry. Technology, above all, must be useful. Swarm robotics can provide the impetus for changing the perception of robotics from the cool and shiny applications we see today to genuinely helpful solutions.

Read more from the original source:
Swarm Theory: Lessons from nature in the advancement of robotics - Techerati


What are the Differences Between IaaS, PaaS, and SaaS? – stopthefud

As a solutions architect, I talk regularly to developers and architects about IaaS, PaaS, and SaaS. These terms, which are relatively new, can sometimes be confusing, so I'd like to give a brief description of what they are and how they differ from each other.

IaaS, PaaS, and SaaS stand for infrastructure, platform, and software as a service, respectively. The core idea behind aaS is that you can focus on building products and services critical to your business and let other companies build and manage the non-core services you need, whether as part of your own offering or just to run your business.

These days, you can add aaS to pretty much anything, like data (DaaS) or integration platform, which introduces the potentially confusing iPaaS, which differs from both IaaS and PaaS! In the last few years, new businesses have popped up providing X, whether it be a product, application, feature or anything really, as a service to other businesses or consumers.

Before we dive into what these 3 terms mean, let's take a step back and look at how we got here. Life was simple when all developers had to worry about were monolithic applications running on-premises. As a company, you would either have your own datacenter or rent some space at a datacenter managed by another company. Your networking and sysadmin teams would make sure all the servers were properly connected and working for your developers to use. Your developers would build robust and scalable applications (hopefully) that got deployed to your servers. Simple, right?

Most mid-sized businesses and large enterprises still manage their own servers and datacenters, but they are adopting the cloud. For almost a decade, most companies have realized they don't want to be in the business of managing their own infrastructure (servers/datacenters) and would rather have someone else, such as Amazon or Google, do it for them. Startups these days are going straight to the cloud, which allows them to be extremely flexible and scalable, amongst other things. This has led cloud providers such as AWS and GCP to provide you with servers and manage them for you: that's infrastructure as a service!

For example, you can easily spin up Solace PubSub+ Event Broker software on a Linux server using an Amazon AMI in less than a minute. Compare this to having to buy a server, set it up, install it in a datacenter, and deploy software on it. Depending on the size of the company, this can take weeks, if not months.
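
To make the IaaS idea concrete, here is a hedged sketch using AWS's boto3 SDK to launch a single instance from a machine image. The AMI ID, key pair name and tag are placeholders for illustration (not Solace's actual image); in practice you would substitute whatever image, region and instance size you need.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one instance from a prebuilt machine image (AMI).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder AMI ID
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",               # placeholder key pair
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "event-broker-demo"}],
    }],
)
print("Launched", response["Instances"][0]["InstanceId"])
```

Tearing the instance down again is a single terminate_instances call, which is exactly the elasticity discussed next.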

As companies started adopting IaaS and migrating to the cloud, it became apparent that it wasn't so easy. A key benefit of the cloud is elasticity: it's easy to spin up servers when demand is high and shut them down when demand is low. But developing applications for a limited set of servers in your own datacenter is one thing; developing applications to run in the cloud, where your application might be deployed to hundreds of servers on peak days like Black Friday and accessed over Internet or WAN links that might not be as stable as your own private datacenter, is quite another.

All of this came with a lot of overhead for developers, and for DevOps teams which now had to build and manage cloud-native applications. This led to platform as a service, which lets developers build cloud-native applications and handles the overhead associated with orchestrating them. Popular PaaS offerings include AWS Elastic Beanstalk, Pivotal Cloud Foundry, Kubernetes and OpenShift. I recently blogged about how to deploy Solace's broker in OpenShift.

As cloud migration has picked up, it has further fueled the build-vs-buy debate. Every company is faced with the option to either build something itself, and hence have complete control over it, or buy it from another company so it can focus on its core business at the cost of some control. Additionally, while you might have the resources to build the product initially, you still have to dedicate significant resources to maintain and upgrade it going forward. As a consequence, more and more companies are realizing that they would rather buy non-core products/software from other companies and focus on their core business. For example, you can use Solace Cloud to spin up a PubSub+ Event Broker without needing any hardware and have it managed by Solace.

The main difference between IaaS, PaaS, and SaaS is how much they abstract away from end-users. The end product is an application that you want to use. For that to happen, you need to manage: datacenters, servers, networks, virtualization, operating systems, runtime, storage, middleware, data, applications, and monitoring.

Here is a useful table which shows which parts are being abstracted away from the end-user in each model:
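
Broadly speaking, and with the caveat that the exact boundaries vary a little between providers, the split of responsibilities looks like this:

* On-premises: you manage everything, from the datacenter, networking, storage and servers up through virtualization, the operating system, runtime, middleware, data and the applications themselves.

* IaaS: the provider manages the datacenter, servers, storage, networking and virtualization; you manage the operating system, runtime, middleware, applications and data.

* PaaS: the provider additionally manages the operating system, runtime and middleware; you manage your applications and data.

* SaaS: the provider manages the entire stack; you simply use the application.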

With so many options, it can be a little daunting to decide which one is the best for your company. Thankfully, you don't have to pick just one. You can go with a hybrid model where your core applications run on-prem in your own datacenter and the rest of your in-house applications run on a PaaS such as OpenShift. You might also decide to have some database services run in the cloud, such as AWS Redshift. Finally, you might want to limit all your vendor software, such as HR/payroll management software (e.g. Workday), monitoring tools (e.g. New Relic), and event brokers (e.g. Solace), to SaaS.

Picking the right model comes down to how much flexibility you want and how little you want to manage. If a product is core to your business and you need to heavily customize it, then going with a SaaS model is not the right option. However, there is no need to have your own convoluted HR/payroll management system when a generic SaaS solution would suffice. Finally, you have to consider the sensitivity of these applications and the data they manage. Some applications manage crown jewel processes and information that you won't want to trust to a SaaS.

As you can see, there are a lot of similarities and differences between IaaS, PaaS, and SaaS. If you are a developer, you must be familiar with all three and how they differ. As an architect, you may be responsible for deciding which model, or hybrid approach, is right for your company based on your requirements and resources. Fortunately, you can pick which model works best for each of your applications. I hope I've helped you understand a little bit better which one might be right for you.

The post What are the Differences Between IaaS, PaaS, and SaaS? appeared first on Solace.





Read more:
What are the Differences Between IaaS, PaaS, and SaaS? - stopthefud


Zoom Settles with NY AG over Privacy and Security Concerns – Security Magazine


Read the original post:
Zoom Settles with NY AG over Privacy and Security Concerns - Security Magazine


Server sales went through the roof in the first three months of 2020. Enjoy it while it lasts, Dell, HPE, and pals – The Register

Global server shipments reached an industry record-breaking 3.3 million units in the first quarter of 2020, marking a 30 per cent year-on-year growth, Omdia analysts estimated this week.

That's not too surprising and it's not just due to the coronavirus crisis. The first quarter of 2020 followed a strong final quarter of 2019, in which shipments grew 27 per cent year-on-year and sequentially, we're told, as the data-center market expanded. Last year, 11.9 million servers shipped, up four per cent on 2018, it is estimated, raking in $78bn in sales. That revenue total is actually down six per cent on 2018 due to component prices, particularly memory, falling, and the lower costs passed onto customers as savings.

As you'd expect, the first-quarter surge this year largely came at the end of the three-month period when millions around the world stayed inside and worked from home to curb the coronavirus spread. All those video conference calls, cloud-based productivity and collaboration suite subscriptions, and business applications needed machines to run on.

With the manufacturing sector in China slowed by the virus outbreak in January and February, many server vendors reported constrained supply chains, the analysts noted. Once that cleared, however, shipments took off. The second quarter is also expected to produce huge numbers, and has pushed Omdia to bump up its full-year server sales estimates by 500,000 to more than 12.7 million units.

And who was buying? Cloud giants mainly from white-box makers, such as Foxconn and Quanta; and smaller cloud platforms, businesses, and telcos from traditional suppliers, such as Dell and HPE.

"Multiple data points from vendors and end-users indicate that during the first quarter of 2020, cloud service providers continued to expand their server installed base to accommodate a ramp-up of consumer and enterprise demand," said Omdia's Vlad Galabov, principal analyst for data center IT.

"Enterprises increased their investment in servers as these organizations prepare employees and business processes for remote working. Meanwhile, telecommunications network providers ramped up server deployments to cope with increased demand on wireless and wired networks. As a result, the server market attained a quarter of year-over-year growth exceeding 30 per cent."

Having said that, while it's been a boom time for server makers, any number of businesses and organizations are expected to cut their spending to weather this year's coronavirus economic storm, which will bring shipments of traditional servers into the enterprise back down to Earth. There may still be strong demand for traditional servers from second-tier clouds and telcos, though.

Meanwhile, tier-one cloud platforms are expected to continue buying up white boxes to support their swelling userbases.

"We now have multiple indicators that the start to 2020, and likely the entire first half of the year, will see the server market grow double digits over 2019," Galabov said.

"However, it's important to avoid underestimating how much the server market could be impacted by the looming global recession. Many enterprises and governments are likely to postpone investing in new servers in the second half of 2020. Additionally, despite vendors and distributors successfully managing the supply chain challenges in the first quarter of the year, Omdia continues to receive reports of strain and shortages in components globally."

Dell and HPE will have strong demand from enterprises setting up working-from-home conditions and doing final server deployments before lockdown...

Speaking to The Register on Wednesday, Galabov added: "I expect that in the first quarter, Dell and HPE will have strong demand from enterprises setting up working-from-home conditions and doing final server deployments before lockdown, and from tier-two cloud service providers and telecommunication network providers responding to growing demand for their services.

"As the year progresses, demand from enterprises will definitely soften, and it could offset or more-than-offset the goodness from the first quarter. I dont think telco and tier-two cloud service provider demand will be enough to offset enterprise weakness for Dell and HPE."

Furthermore, as organizations adopt cloud-hosted services during the pandemic, this could be a catalyst for customers shifting more workloads off premises, reducing data-center server sales.

Galabov said much of the long-term growth in the market will continue to be driven by the hyperscalers that prefer large volumes of white-box hardware. This means the big brand names will continue to fight over the tier-two cloud platforms, businesses, and network providers. By 2024, the analyst firm estimates, the market will reach a point where only one in every four servers will be owned and operated on-premises by an enterprise customer.

"HPE, Dell, and IBM have minimal shipments to hyperscale cloud service providers today," Galbov told us. "From that perspective they should not see much of the demand increase from these clients, however, they should benefit from strong demand from tier-two cloud-service providers, and telcos. This strong demand could partially offset weaker demand from enterprises."

For the final quarter of 2019, according to the analysts, IBM bagged an 8.3 per cent share of server industry revenues, from a 26 per cent year-on-year revenue increase due to z15 mainframe sales, and Huawei took a 6.5 per cent share after growing revenues 35 per cent. Dell-EMC took 17.4 per cent, and HPE 15.3 per cent, after suffering falling revenues.

The white-box makers, which took 18.7 per cent of the market, and a six per cent dip in sales, "established themselves as preferred server vendors to cloud service providers and have the most to gain from the cloud data center expansion in 2020," the analysts added.


Read more:
Server sales went through the roof in the first three months of 2020. Enjoy it while it lasts, Dell, HPE, and pals - The Register


Codestone helps shipping agent to cloud-based infrastructure – Codestone

A long-standing customer of Codestone, John Samonas & Sons Ltd. provides commercial, operational and technical management for a fleet of dry bulk carriers operating globally. Codestone has supported its on-premise IT infrastructure for over 10 years, assisting it with periodic refreshes of the platform. In order to move to a more robust and flexible environment, the organisation decided to implement a phased migration to a cloud-based solution.

As a result of its long and trusted relationship, and the clarity of its pre-sales advice, Codestone was chosen to begin the implementation of the new systems architecture.

Initially the solution will comprise the Microsoft 365 Business bundle, enabling core services such as email and personal and corporate data to be moved to Microsoft OneDrive and Microsoft Teams cloud applications. Codestone's Systemsure offering will provide 24/7 support, with backup resilience using VEEAM BUaaS. An on-premise server will be retained to run third-party applications on in-warranty hardware.

John Samonas & Sons will benefit from the flexibility of remote working, enabling easier collaboration between its UK and Greek staff. Enhanced security and compliance will be delivered by Microsoft 365 Business features, including multi-factor authentication and e-Discovery.

See the rest here:
Codestone helps shipping agent to cloud-based infrastructure - Codestone


Global Cloud Infrastructure Testing Market Research Report 2020 By Size, Share, Trends and Analysis up to 2025. – Cole of Duty

The recently published market study by GLOBAL MARKETERS.BIZ highlights the current trends that are expected to influence the dynamics of the Cloud Infrastructure Testing market in the upcoming years. The report examines the supply chain, cost structure, and recent developments pertaining to the Cloud Infrastructure Testing market, along with the impact of COVID-19 on these facets of the market. Further, the micro- and macro-economic factors that are likely to impact the growth of the Cloud Infrastructure Testing market are thoroughly studied in the presented market study.

Get PDF Sample Copy (including TOC, Tables, and Figures) @ https://www.globalmarketers.biz/report/technology-and-media/global-cloud-infrastructure-testing-market-2019-by-company,-regions,-type-and-application,-forecast-to-2024/130333#request_sample

Leading Players Are :

Compuware, Akamai, Spirent Communications, Ixia, Infosys, Huawei, Wipro, Insuper, Apica, Cloud Harmony, Core Cloud Inspect

Reasons to Trust Our Business Insights

Proven track record of delivering high-quality and insightful market studies

Data collected from credible sources including product managers, sales representatives, marketing executives, and more

Providing accurate insights for over ten industrial verticals

Swift delivery of reports with COVID-19 impact without any delays

Up-to-date market research and analytical tools used to curate market reports

Critical Data in the Cloud Infrastructure Testing Market Report

Company share analysis and competition landscape

Recent trends and notable developments in the Cloud Infrastructure Testing market space

Growth projections of each market segment and sub-segment during the forecast period

COVID-19 impact on the global Cloud Infrastructure Testing market

Recent innovations, product launches, and technological advances relevant to the Cloud Infrastructure Testing market

Regional Assessment

The regional assessment chapter in the report offers a thorough understanding of the potential growth of the Cloud Infrastructure Testing market across various geographies such as:

Application Assessment

The presented study examines the numerous applications of Cloud Infrastructure Testing and offers a fair assessment of the supply-demand ratio of each application, including:

Market Taxonomy

By Type

Server, Storage, Virtualization, Operating System

By Application

Banking, Financial Services, and Insurance; Telecom and IT; Government; Hospitality; Education; Public Sector and Utilities; Others

Enquire Here For Customization:

https://www.globalmarketers.biz/inquiry/customization/130333

By Region

North America

Latin America

Europe

China

Japan

SEA and Other APAC

MEA

Get Table of Contents with Charts, Figures & Tables https://www.globalmarketers.biz/report/technology-and-media/global-cloud-infrastructure-testing-market-2019-by-company,-regions,-type-and-application,-forecast-to-2024/130333#table_of_contents

The report resolves the following doubts related to the Cloud Infrastructure Testing market:

1. Who are the leading market players operating in the current Cloud Infrastructure Testing market landscape?

2. Which region is expected to dominate the Cloud Infrastructure Testing market in terms of market share and size during the forecast period?

3. What are the various factors that are likely to contribute to the growth of the Cloud Infrastructure Testing market in the upcoming years?

4. What is the most impactful marketing strategy adopted by players in the Cloud Infrastructure Testing market?

5. What is the projected CAGR growth of application 1 during the forecast period?

Read the original post:
Global Cloud Infrastructure Testing Market Research Report 2020 By Size, Share, Trends and Analysis up to 2025. - Cole of Duty


Digital Harmonic to Bring its Powerful AI-Driven Image and Video Enhancing Solution to the Federal Market – Business Wire

ELLICOTT CITY, Md.--(BUSINESS WIRE)--Turning night into day, clearing up fog, and removing cloud cover in image and video sources, in real time, Digital Harmonic's PurePixel solution will help the Federal mission secure critical areas and communities, enabling our military to make better-informed command and control decisions.

Digital Harmonic LLC, a leading image, video, and signal processing technology company, announced today they have chosen to collaborate with Dell Technologies OEM | Embedded & Edge Solutions to bring PurePixel, designed on a scalable suite of hardware solutions from edge devices to rack mounted servers, to federal agencies.

PurePixel is an upstream pre-processing component to enhance the quality and efficiency of machine learning (ML) and computer vision (CV) algorithms increasing the success of the output for real-time video. PurePixel also improves advanced analytics software with cloud-based ML algorithms for object recognition, object detection, image annotation/labeling, semantic image segmentation analysis, and edge computing capabilities.
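
As a rough illustration of where such an upstream pre-processing stage sits in a computer vision pipeline, the sketch below applies a generic contrast-enhancement step to each frame before it reaches a model. This is purely illustrative and is not Digital Harmonic's PurePixel algorithm; the input file name and the commented model call are placeholders.

```python
import cv2   # third-party: pip install opencv-python

def preprocess(frame):
    """Generic clean-up applied before inference (illustrative only)."""
    # Work on the luminance channel so colour information is preserved.
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    # Contrast Limited Adaptive Histogram Equalization lifts dark regions
    # without blowing out areas that are already bright.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)

cap = cv2.VideoCapture("night_footage.mp4")    # placeholder input source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cleaned = preprocess(frame)
    # detections = model.predict(cleaned)       # hand the cleaned frame to your CV model
cap.release()
```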

One of the options for delivering PurePixel to customers is on the Dell EMC PowerEdge C4140 server, which is an ultra-dense, accelerator-optimized rack server system purpose-built for artificial intelligence (AI) solutions with a leading GPU-accelerated infrastructure. The platform shines when enabling demanding workloads and enhancing low-latency performance for today's most advanced AI use cases.

"With the proliferation of optical sensors across all domains expected to reach 45 billion cameras in the next few years, the need for enhanced video processing becomes paramount to the success of AI/ML. Our process provides clarity when necessary to avoid the biases of trained models. We can think of no better collaborator than Dell Technologies to achieve the speed and scale needed to place our powerful technology in the hands of Federal customers to help give the US a competitive advantage," said Scott Haiges, CEO of Digital Harmonic.

The fundamental technology of Digital Harmonic was developed by Paul Reed Smith, namesake of PRS Guitars, and his father, an applied mathematician. What started as an experiment for measuring waveforms from a guitar string to create a new guitar synthesizer ended up producing technologies that will revolutionize the practice of signal and imaging processing.

To learn more, visit http://www.digitalharmonic.com.

Read the rest here:
Digital Harmonic to Bring its Powerful AI-Driven Image and Video Enhancing Solution to the Federal Market - Business Wire


Sorry if this seems latency obvious, but… you can always scale out your storage with end-to-end NVMe – The Register

Comment Data storage is one of the most complex areas of IT infrastructure, as it needs to fit in with a range of conflicting requirements. Storage architectures have to be fast enough to meet the demands of users and applications, but without breaking the budget. They must deliver enough capacity to meet ever growing volumes of data while being reliable.

Not surprisingly, it turns out that most organisations still have a need for on-premises storage infrastructure, despite the lure of cloud-based storage services that promise to take away all the complexity and replace it with per-gigabyte pricing models. This is for various reasons, such as concerns over data governance, security worries or performance issues with online storage.

Whatever the reason, on-premises storage for most organisations is not going to disappear anytime soon. But it is facing new challenges as data volumes continue to expand, new workloads are being introduced into the enterprise portfolio, and users demand ever higher performance.

For small to mid-market businesses, on-premises storage has long been delivered by network-attached storage (NAS) and storage area network (SAN) platforms, both of which were developed to provide a shared pool of storage. However, while a NAS box attaches directly to a corporate Ethernet LAN and serves up files, a SAN may comprise multiple storage devices connected to servers via a dedicated network, often using high-speed Fibre Channel links.

Another key difference between the two is that while NAS exposes a file system to the network via protocols such as NFS or SMB, a SAN provides block-level storage that looks like a locally attached drive to servers on the SAN. As SAN and NAS have matured, some storage arrays now offer unified block and file services in a single device. In this latter case, it is important that the storage platform you choose offers the ability to expand capacity seamlessly across both SAN and NAS environments as your business grows.

Enterprise workloads are evolving and calling for greater levels of performance. The volumes of data that organisations have to deal with also keep growing, and storage systems now have to serve applications such as analytics to make sense of all that data. Meanwhile, storage systems are also expected to deal with the introduction of new challenges such as DevOps, which brings a rapid cadence to the software development process and calls for the on-demand provisioning of new resources.

This demand for fast and easy access to data sets has driven the uptake of flash storage in the enterprise. First this took the form of hybrid storage arrays as a cache or hot tier front-end to a bunch of disk drives to accelerate read and write performance, but all-flash arrays started to gain market share as the relative cost of flash memory chips came down. IDC figures from the third quarter of 2019 show the all-flash array (AFA) market revenue was up 11.3 per cent year on year, for example.

Despite this, organisations are seeing that SAN and NAS systems may soon be too slow for some applications. Partly, this is because many flash-based solid state drives (SSDs) were manufactured with legacy host interfaces like SATA and SAS and came in disk drive form factors, in order to maintain compatibility with enterprise server and storage systems based on hard drives.

Because SSDs based on NAND flash are significantly faster than rotating hard drives, interfaces such as SAS and SATA are actually a bottleneck to the potential throughput of the drive. To fix this, SSD makers started to produce drives that used the PCIe bus, which offers higher speed and connects directly to the host processor in a server or storage controller, but there was no standard protocol stack to support this, which hindered the uptake of such solutions.

Fortunately, a group of storage and chip vendors got together and created Non-Volatile Memory Express (NVMe) as a new storage protocol optimised for high performance, designed from the ground up for non-volatile memory media.

NVMe includes a number of features that provide a significant improvement in I/O performance and reduced latency compared to legacy protocols. NVMe drives typically support four lanes of PCIe 3.0 each for a total of 4GBps bandwidth, compared with 12Gbps (about 1.5GBps) for SAS.
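
The arithmetic behind those figures is simple enough to sketch (encoding and protocol overheads are ignored here, so treat the results as rough ceilings rather than measured numbers):

```python
# Rough bandwidth ceilings quoted above, ignoring encoding/protocol overhead.
pcie3_lane_gbytes_per_s = 1.0     # a PCIe 3.0 lane at 8 GT/s is roughly 1 GB/s usable
nvme_lanes = 4

sas_link_gbits_per_s = 12         # SAS-3 link speed
bits_per_byte = 8

nvme_bandwidth = pcie3_lane_gbytes_per_s * nvme_lanes     # ~4 GB/s
sas_bandwidth = sas_link_gbits_per_s / bits_per_byte      # ~1.5 GB/s

print(f"NVMe over PCIe 3.0 x4: ~{nvme_bandwidth:.1f} GB/s")
print(f"SAS 12 Gb/s link:      ~{sas_bandwidth:.1f} GB/s")
print(f"Headroom:              ~{nvme_bandwidth / sas_bandwidth:.1f}x")
```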

Another feature is support for multiple I/O queues (up to 65,535 of them), taking advantage of the internal parallelism of NAND flash storage. Each queue can also support up to 65,535 commands, compared with a single queue for SATA and SAS interfaces, supporting 32 and 256 commands respectively. This means that NVMe storage systems should be far less prone to the performance degradation that SAS and SATA can experience when heavily loaded with requests for data.
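
To get a feel for what those deep queues buy you, the illustrative sketch below keeps a configurable number of 4K random reads in flight against a device or large file (the path is a placeholder, and reading a raw device needs root). It is not a rigorous benchmark; a real test would bypass the page cache with O_DIRECT and use a purpose-built tool such as fio. The point is simply that an NVMe drive can spread many outstanding requests across its hardware queues, whereas a SATA device funnels everything through a single 32-entry queue.

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

DEVICE = "/dev/nvme0n1"        # placeholder; any multi-gigabyte file also works
BLOCK_SIZE = 4096
REQUESTS = 4096
QUEUE_DEPTHS = (1, 32, 256)    # number of reads kept in flight at once

def read_block(fd: int, offset: int) -> int:
    """Issue one positioned 4K read and return the number of bytes read."""
    return len(os.pread(fd, BLOCK_SIZE, offset))

def run(queue_depth: int) -> float:
    fd = os.open(DEVICE, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)
    offsets = [random.randrange(0, size // BLOCK_SIZE) * BLOCK_SIZE
               for _ in range(REQUESTS)]
    start = time.perf_counter()
    # Each worker keeps one request outstanding, so max_workers approximates
    # the queue depth presented to the device.
    with ThreadPoolExecutor(max_workers=queue_depth) as pool:
        list(pool.map(lambda off: read_block(fd, off), offsets))
    elapsed = time.perf_counter() - start
    os.close(fd)
    return REQUESTS / elapsed

if __name__ == "__main__":
    for depth in QUEUE_DEPTHS:
        print(f"queue depth {depth:>3}: ~{run(depth):,.0f} random 4K reads/s")
```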

Perhaps equally importantly, NVMe allows applications to request data more or less directly from an SSD, while in the traditional storage protocol stack the commands have to pass through multiple layers on their way to the target drives. NVMe also provides a streamlined and simple command set that requires far fewer CPU cycles to process I/O requests than SAS or SATA, according to the NVMe standards body. This delivers higher IOPS per CPU instruction cycle and lower I/O latency in the host software stack. For the ultimate application solution performance, customers will want to architect end to end NVMe infrastructure where the host, network, and storage all incorporate NVMe technology.

Switching to NVMe thus speeds data transfers between the SSD and the host processor in a server, or in the case of a NAS box or SAN storage array, between the SSD and the controller. But this just shifts the bottleneck to the host interface or fabric that links the storage device to the systems it serves.

This is because a fabric such as Fibre Channel is essentially just a transport for SCSI commands, as is iSCSI (which transfers SCSI commands over TCP/IP, typically using Ethernet). However fast your NVMe array is, data is still being sent across the network using a legacy storage protocol stack, with the additional latency that implies.

One answer to this is to extend the NVMe protocol across the network, using it over fabric technologies that are already widely used for storage purposes in enterprises, such as Fibre Channel, and Ethernet. This has collectively become known as NVMe over Fabrics (NVMe-oF).

NVMe-oF differs according to the fabric technology employed. Running it over Fibre Channel (FC-NVMe) is relatively simple for environments already invested in that infrastructure. For others such as Ethernet, this may necessitate the use of special network adapter cards supporting Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE).

This end-to-end implementation of NVMe allows enterprises to scale out their storage infrastructure without losing any of the low latency advantages offered by the protocol, effectively by allowing requested data to be transferred directly from the storage device into the computer system's memory by the network adapter, needing no further intervention by the host processor.

Naturally, there is a great deal of technical detail work in making all of this new technology hang together seamlessly, and organisations will likely find it beneficial to work with a vendor partner that can supply not just the storage, but servers with the necessary adapters and even network switches optimised for NVMe-oF to provide a true end to end NVMe solution. Partnering with one partner can also offer complete support and components validated to work together in an end-to-end solution.

In fact, the growing complexity of storage means that for many small to mid-market businesses, choosing a data platform that can simplify many processes will be vital. This may include intelligently placing data into storage media tiers or providing quality of service (QoS) capabilities to ensure that business-critical applications get the system resources they need.

Having previously stated that organisations still need on-premises storage, it is also true that many are looking to take advantage of cloud-based storage wherever it is advantageous to do so. One of the reasons for doing so is economics, with many cloud providers offering storage services that cost just pennies per GB per month.

While on-premises storage systems have the performance needed by applications, they can easily get cluttered up with data that is no longer in everyday use, so a key feature for any SAN or NAS platform is to enable this to be migrated to a lower cost storage tier such as a cloud storage service, when speed of access is no longer so important.

Some storage array platforms go beyond this to allow customers to deploy a cloud-hosted version of the product onto one or more of the big public clouds such as AWS or Azure. With this scenario, organisations can easily put in place data protection and business continuity strategies by using the cloud-hosted version to replicate snapshots and disaster recovery copies from their on-premises system.

This enables them to build high availability using the consumption-based expense model of the cloud instead of the cost of a physical second site. It is also important to avoid cloud vendor lock-in and choose a solution that offers a choice of multiple cloud providers.

Multi-cloud management is coming to the forefront of the cloud discussion due to escalating cloud storage costs and the ability to pick and choose multiple vendors gives customers more business leverage.

As technologies such as NAND flash and storage-class memories mature and become less costly, storage arrays based on them will eventually edge out those based on disk drives. NVMe has been designed to make the most of the capabilities these offer and with NVMe-oF now delivering these benefits end-to-end in the data centre, organisations should consider this a key technology to future proofing their storage infrastructure.


Follow this link:
Sorry if this seems latency obvious, but... you can always scale out your storage with end-to-end NVMe - The Register


The role of the data centre in the future of Data Management – Data Economy

Given today's challenges with the at-home economy, schooling and Zooming, we need to focus more than ever on cleaning up our house and our data center. With the ongoing trend toward multiple computing models, with workloads spread across on-premise, public cloud and hybrid environments, data center managers require more visibility and operational control than ever before. Consequently, server asset management is essential when IT staff are making decisions based on the available computation and storage capacity. But with such an overwhelming number of IT assets to track and monitor, especially in large-scale data centers, the task of server asset management has gradually become an efficiency bottleneck.

Enterprises and cloud service providers (CSPs) very often manually maintain and manage server assets through a configuration management database (CMDB). Asset information includes CPU, memory, hard disk model, serial number, capacity and other information.

However, asset management solutions of this kind usually offer limited scope and cannot be easily integrated into existing systems. Moreover, this method presents a number of problems, such as low data entry efficiency, the failure to update data in real time, and the inability to track server component maintenance updates.

Additionally, many large data centers remain hamstrung by the outsourced hardware maintenance model.

With this approach, an operations and maintenance center confirms a hardware failure and then submits work orders to the onsite hardware supplier, and after the field personnel completes the batch replacement of parts, they provide feedback to the remote operations and maintenance center through the work order system.

This mode has glaring efficiency problems. The feedback information is slow, and manual remote login to the server is needed to confirm whether the parts were correctly replaced, as required.

Adopting a Lean Asset Management Approach for Improved Data Center Efficiency

Lean management practices, a function of lean manufacturing that sought to increase productivity and efficiency by eliminating waste in the automobile industry, date back to the 1940s and Taiichi Ohno, a Japanese industrial engineer who is considered the father of the Toyota Production System, an approach later adopted in the U.S. and worldwide.

With respect to lean asset management, Ohno advocated for a clear understanding of what inventory is required for a certain project, real-time visibility of what capacity is available and what is already committed, and a streamlined replenishment process. He also believed that inefficient processes will always cause delays, if not excess inventory (over-provisioning) and idle resources (underutilization).

Sound familiar?

Through the practice of lean asset management methodology in the data center, IT staff gain the ability to manage the server assets in a fine-grained manner, such as tracking the model, brand, capacity, serial number and other information of the main components of the server.

Lean asset management also enables IT teams to react quickly and efficiently when implementing change strategy. As any IT administrator will attest, change deployments and implementations can pose significant risk. When a deployed change affects systems in an unanticipated way, it can lead to service outages that negatively impact an organization's bottom line and its brand reputation.

By discovering changes in server components, the operations and maintenance team can track those changes in a timely manner and improve the efficiency of the component replacement process. It's also easier to collect information about the data center's computing resources on demand.

A Trusted Source for IT Asset Discovery and Management

Data center management solutions such as Intel Data Center Manager (Intel DCM) have the capability to automatically obtain server asset information such as CPU, memory, disk model, capacity, etc., for various brands/models through out-of-band methods.

External applications can obtain server asset information through APIs, which are provided by the data center management solution. External systems can automatically compare device component information, find and identify information, and changes in parts.
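
The exact API surface depends on the management product, so the sketch below is hypothetical: the endpoint URL, response shape and file name are invented for illustration and are not the real Intel DCM API. It shows the general pattern, though: pull the current component inventory over REST, diff it against the last saved snapshot, and flag any replaced parts automatically.

```python
import json
import requests   # third-party: pip install requests

INVENTORY_URL = "https://dcm.example.local/api/v1/assets"   # hypothetical endpoint
SNAPSHOT_FILE = "last_inventory.json"

def fetch_inventory() -> dict:
    """Return {server_id: {component_slot: serial_number}} from the management API."""
    resp = requests.get(INVENTORY_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()

def load_snapshot() -> dict:
    try:
        with open(SNAPSHOT_FILE) as fh:
            return json.load(fh)
    except FileNotFoundError:
        return {}

def diff_components(old: dict, new: dict) -> list:
    """List (server, slot, old_serial, new_serial) for every changed or added part."""
    changes = []
    for server, parts in new.items():
        before = old.get(server, {})
        for slot, serial in parts.items():
            if before.get(slot) != serial:
                changes.append((server, slot, before.get(slot), serial))
    return changes

if __name__ == "__main__":
    current = fetch_inventory()
    for server, slot, old_sn, new_sn in diff_components(load_snapshot(), current):
        print(f"{server}: {slot} replaced {old_sn} -> {new_sn}")
    with open(SNAPSHOT_FILE, "w") as fh:
        json.dump(current, fh, indent=2)   # becomes the baseline for the next run
```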

The following is a typical scenario. The remote operation and maintenance center of a CSP discovers a server component is faulty, and requests that the supplier replace the parts onsite at the data center.

The operator has no need to double-confirm the component by logging into the server after the parts have been replaced. Furthermore, the real-time asset information of the entire data center can be reported to the IT staff at any time and before they make any decision.

To support lean asset management methodology, Intel DCM offers many asset management features, such as organizing systems in physical or logical groups, easily searching for systems using their asset tags or other details, and importing and exporting a data center's inventory and hierarchy.

Along with Intel DCM's real-time power and thermal monitoring, and its middleware APIs that allow the software to easily integrate with other solutions, these features help companies avoid investing in additional asset management tools.

As organizations continue to leverage multiple computing models, further dispersing their workloads, and the data center itself becomes more complex, manual processes can't keep pace with the rate of change in the IT environment.

By adopting a lean asset management approach, supported by a data center management solution with IT asset discovery and management capabilities, data center managers benefit from a trusted source of information about asset ownership, interdependencies and utilization so that they can make informed decisions regarding the deployment, operations and maintenance of their servers and systems.

So after you've cleaned out your fifth closet at home, think about clearing your data center clutter using innovative automation tools to see these lean asset management principles at work; there's no question that Taiichi Ohno would be proud.

View post:
The role of the data centre in the future of Data Management - Data Economy
