
Centrica operations are boosted by development of cloud-based system – Information Age

Centrica plc has deployed Citrix cloud technology to simplify and standardise desktops for 18,000 employees, with just a five-month development cycle.

The change comes as Centrica aims to introduce more flexibility and scalability to its business.

The cloud-first energy and services company teamed up with Babic and its EUC service partner Fujitsu to develop a new solution based on Citrix Cloud services and improve its operating capacity.

The team built two completely new, standardised desktop solutions, simplifying 1,200 different applications that could ultimately be rolled out to 18,000 users around the world.

This marks a shift from Centrica's previous server technology, which was physically based in data centre sites. Moving these workloads to the cloud makes the business instantly more scalable.

Centrica believes the transition will help it expand its operations with greater cost efficiency and give its services more flexible capabilities, making them more adaptable for a seasonal business.


Moving the virtual desktop to the cloud has already allowed the business to simplify and standardise its desktop whilst also bringing greater flexibility and cost savings.

Centrica serves 26 million customers in the UK, Ireland and North America, and with the solutions now fully rolled-out in Europe, it will shortly deploy them to its North American locations as well.

The energy provider aims to have a standardised and centralised system to run all of its global services, making them more efficient for users and the overall system simpler for its 30,000 employees.

The team has also designed an on-premises desktop, located in the private data centre, for energy traders, whose trading software did not support a move to cloud-based technology.

By combining Citrix Cloud services and Microsoft Azure, Centrica can adapt operations, increasing or decreasing services per region in line with seasonal demand.

According to Centrica, the project is already cost-neutral, and the company is forecast to make cost savings in its first full year of operation, partly because the new model is consumption-based.
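Centrica has not published the specifics beyond what is described above, so the sketch below is purely illustrative of what consumption-based, seasonal capacity planning can look like; the regions, session counts and per-host capacity are invented for the example and do not reflect Centrica's or Citrix's actual tooling.

```python
import math

# Illustrative figures only: hypothetical peak desktop sessions per region
# and a seasonal demand factor for an energy business.
PEAK_SESSIONS = {"UK": 9000, "Ireland": 1500, "North America": 7500}
SEASONAL_FACTOR = {"winter": 1.0, "spring": 0.7, "summer": 0.5, "autumn": 0.8}
SESSIONS_PER_HOST = 40  # assumed capacity of a single virtual desktop host

def hosts_needed(region: str, season: str) -> int:
    """Number of desktop hosts to keep running for a region in a given season."""
    demand = PEAK_SESSIONS[region] * SEASONAL_FACTOR[season]
    return math.ceil(demand / SESSIONS_PER_HOST)

for season in SEASONAL_FACTOR:
    plan = {region: hosts_needed(region, season) for region in PEAK_SESSIONS}
    print(f"{season}: {plan}")
```

Under a consumption-based model, the monthly bill roughly tracks host counts like those printed above rather than a fixed data-centre footprint.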

The company is also benefiting hugely from its embrace of a robotic process automation (RPA) platform, with 350 bots now processing two million transactions, a 100% increase on the previous year.

Summarising the project, Steven Anderson, Fujitsu's global client executive for Centrica, said: "Centrica, partnering with Fujitsu and Citrix, have designed, deployed and migrated thousands of users to a global desktop at pace and, critically, with no disruption to the business. It has been achieved collaboratively in just a few months, something that usually takes large organisations years to complete, meaning Centrica is quickly benefiting from a single, consistent user experience and cost savings."


Cloud Server: Market 2020 What Factors will drive the Market in Upcoming Years Dell, HP, IBM, Oracle, Cisco, Fujitsu, Hitachi, NEC – News Times

Cloud Server Market (by major players, types, applications, and leading regions): segment outlook, business assessment, competition scenario, trends and forecast for the coming years. The Cloud Server report is based on a research methodology that provides an analytical inspection of the global market across the segments into which the industry is divided, together with a summary and sizing of the marketplace under various outlook scenarios. The report also covers the key players of the Cloud Server industry, including company overviews, product portfolios and revenue figures for the forecast period.

Dell, HP, IBM, Oracle, Cisco, Fujitsu, Hitachi, NEC

Product type segmentation: Public Cloud, Private Cloud, Hybrid Cloud, Community Cloud

Industry segmentation: SME, Large Enterprise

Which prime data figures are included in the Cloud Server market report?

What are the crucial aspects incorporated in the Cloud Server market report?

Who all can be benefitted out of this Cloud Server market report?

Major Players: The report provides company profiling for a decent number of leading players of the global Cloud Server market. It brings to light their current and future market growth taking into consideration their price, gross margin, revenue, production, areas served, production sites, and other factors.

Industry Overview: The first section of the research study touches on an overview of the global Cloud Server market, market status and outlook, and product scope. Additionally, it provides highlights of key segments of the global Cloud Server market, i.e. regional, type, and application segments.

Cloud Server Market Dynamics: The report shares important information on influence factors, market drivers, challenges, opportunities, and market trends as part of market dynamics.

Regional Market Analysis: It could be divided into two different sections: one for regional production analysis and the other for regional consumption analysis. Here, the analysts share gross margin, price, revenue, production, CAGR, and other factors that indicate the growth of all regional markets studied in the report, covering North America, Europe, Asia-Pacific, South America, the Middle East, and Africa.

Global Cloud Server Market Forecast: Readers are provided with production and revenue forecasts for the global Cloud Server market, production and consumption forecasts for regional markets, production, revenue, and price forecasts for the global Cloud Server market by type, and consumption forecast for the global Cloud Server market by application.

Cloud Server Market Competition: In this section, the report provides information on competitive situation and trends including merger and acquisition and expansion, market shares of top three or five players, and market concentration rate. Readers could also be provided with production, revenue, and average price shares by manufacturers.

Contact Us: Web: http://www.qurateresearch.com | E-mail: [emailprotected] | Ph: US +13393375221, IN +919881074592


Sysdig Provides the First Cloud-Scale Prometheus Monitoring Offering – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Sysdig, Inc., the secure DevOps leader, today announced cloud monitoring at scale with full Prometheus compatibility. Sysdig addresses the issues that hold teams back from the organization-wide adoption of Prometheus monitoring: scale, data retention, and enterprise access controls. Sysdig also introduces support for creating dashboards, alerts, and metric analytics based on PromQL, the query language for Prometheus. Sysdig is the only enterprise monitoring solution to be fully compatible with Prometheus. This allows customers to retain their investment in existing Prometheus exporters, configurations, alerts, and dashboards. With Sysdig, DevOps and cloud teams can scale their visibility, security, and troubleshooting capabilities with a supported platform that simplifies management.

Blog: Sysdig Introduces the First Cloud-Scale Prometheus Monitoring Offering

In a separate release today, the company announced PromCat.io, a free repository of curated Prometheus exporters, dashboards, and alerts to monitor any infrastructure, application, and service running in the cloud. Sysdig offers documentation and suggested configurations for PromCat integrations.

"As enterprises journey to the cloud, our focus is on helping them accelerate application development and deliver new, competitive capabilities. A core challenge in this is reducing the complexity in managing various workloads with technologies all over the world," said Jason McGee, chief technology officer and vice president, IBM Cloud Platform. "IBM Cloud is built on open technologies to address this issue and connect various IT and cloud environments. Since Prometheus is the key monitoring tool within many of IBM Cloud's open projects, Sysdig cloud monitoring with Prometheus compatibility provides development teams with a more holistic view that can help them further simplify operations."

Organizations are moving to the cloud and DevOps workflows to ship applications faster. However, meeting customer expectations requires complete visibility into infrastructure, services, and applications across multi and hybrid clouds and on-premises data centers.

Developers are rapidly adopting open source Prometheus to monitor the performance of their applications. With more than 13,500 code commits and 6,300 contributors, Prometheus adoption is accelerating. However, as organizations transition to full-scale production, they encounter scaling and workflow issues. Additional capabilities, including centralized and scalable metric stores, a unified view across clusters and services, and out-of-the-box insights, are needed to reduce risk and maintain application availability. Without a macro view of the environment, it is difficult to anticipate issues with microservices that have cross-platform dependencies.

New Features

Fully Compatible Prometheus Monitoring: As organizations scale cloud deployments, they want to retain the industry-standard monitoring approach their developers prefer. Sysdig is the only cloud-scale monitoring solution fully compatible with Prometheus and the PromQL query language. This enables DevOps teams to retain their investment in existing Prometheus exporters, configurations, alerts, and dashboards. The Sysdig platform enhances its existing capabilities with greater scale, visibility, security, troubleshooting, and support.
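Because the compatibility extends to PromQL itself, queries written against plain Prometheus should carry over unchanged. As a rough illustration (the endpoint URL and metric name are placeholders, not anything Sysdig-specific), a standard Prometheus-style HTTP API query looks like this:

```python
import requests

# Placeholder endpoint and metric name: any Prometheus-compatible query API
# and a typical exporter metric are assumed here.
PROM_URL = "http://prometheus.example.com:9090/api/v1/query"
QUERY = 'sum(rate(http_requests_total{job="api"}[5m])) by (instance)'

resp = requests.get(PROM_URL, params={"query": QUERY}, timeout=10)
resp.raise_for_status()

# The standard Prometheus response is a vector of (labels, [timestamp, value]).
for result in resp.json()["data"]["result"]:
    instance = result["metric"].get("instance", "unknown")
    _ts, value = result["value"]
    print(f"{instance}: {float(value):.2f} req/s")
```

The same expression could back a dashboard panel or an alert threshold, which is the point of keeping PromQL intact.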

Cloud Scale: With microservices and Kubernetes, scaling is a major hurdle. With Kubernetes, there is an increase in the number of objects and labels to track. With microservices, there is a dramatic increase in instances to monitor and therefore, the number of metrics to collect. Additionally, with Prometheus, companies are forced to monitor each Prometheus server on its own, making it difficult to view trends that would be visible from a unified view. Issues on microservices that have cross-platform dependencies may go unnoticed.

The Sysdig Secure DevOps Platform provides a scalable system that can handle more than 100 million metrics per second, and retain up to 13 months of data. Sysdig is the monitoring solution for IBM Cloud Platform, one of the largest Prometheus monitoring deployments today. With Sysdig, teams can adopt Prometheus compatible monitoring using an enterprise-ready platform.

Other Enhancements

"Prometheus brings tremendous value to developers, which is why we standardized our monitoring approach on the open source project," said Payal Chakravarty, vice president, product management at Sysdig. "There are, however, scaling challenges for the enterprise. By extending Prometheus monitoring, we're able to help enterprises use the Prometheus monitoring approach they love, while also giving them the scale, workflows, controls, and insights they need to maximize performance and availability."

The Sysdig Approach

To date, Sysdig is the only security company to support a secure DevOps approach by integrating monitoring and security into a single platform. Cloud teams can embed security, maximize availability, and ensure compliance. By integrating security into the DevOps process, teams can realize the business goals in their transition to cloud native. Sysdig is a strong advocate for open source technologies, including three open source technologies created by the company, sysdig, sysdig Inspect, and Falco, an open source cloud-native runtime security project. Currently, Falco is an incubation-level hosted project in the CNCF.

Availability

The features announced today are available to Sysdig customers now. Cloud scale will be available at the end of April.

Resources

About Sysdig

Sysdig enables companies to confidently run cloud-native workloads in production. With the Sysdig Secure DevOps Platform, cloud teams embed security, validate compliance and maximize performance and availability. The Sysdig platform is open by design with the scale, performance, and usability enterprises demand. The largest companies rely on Sysdig for cloud-native security and visibility. Learn more at http://www.sysdig.com.


Coronavirus: How organisations align IT costs with business value – Software Testing News

Businesses need to assess the impacts unforeseen disasters like the coronavirus can have beyond major event cancellations and manufacturing slowdowns.

A huge proportion of business operations is underpinned by technology, from the cloud servers hosting business-critical data right down to the devices employees work on.

When the IT teams that run these systems are themselves affected by disasters, it becomes a nightmare for IT departments not only to maintain usual operations as far as possible, but also to accurately quantify the impact they have had and to put in place steps to manage such incidents in future.

For many businesses, incidents like coronavirus have an impact on the value that IT employees and contractors actually deliver against their costs. Businesses that can track this (like those with agile processes) stand a much better chance of adapting in the face of such disasters.

If organisations aren't able to accurately assign labour costs to projects or products and their forecasted value, they'll start to see vast discrepancies between projected and delivered outcomes. And this is likely to be more pronounced if they're using a waterfall-style, project-based system.

As companies begin to re-evaluate their workflows with new precautions in mind, it's a good time to rethink how investment funding and governance processes are set up, whether that's for project-based or agile development.

Being able to appropriately prioritise investments, optimise resources and report on value is always going to be useful, but it's more important than ever when all of these things are in flux, as they are now.

It's only by being able to automatically update forecasts and track costs that organisations will truly be able to assess the impact of the coronavirus, and plan accordingly.


Insights on the ASEAN and Taiwan Cloud Infrastructure-as-a-Service Market – Forecast to 2025 – ResearchAndMarkets.com – Yahoo Finance

The "ASEAN and Taiwan Cloud Infrastructure-as-a-Service Market, Forecast to 2025" report has been added to ResearchAndMarkets.com's offering.

The ASEAN cloud infrastructure market will gravitate toward a hybrid deployment model (partially premises-based and partially cloud-based) in the next 3 to 5 years. The demand for managed cloud offerings has been increasing, particularly for Application-as-a-Service and Network-as-a-Service offerings. Enterprises are looking to deploy more open-source private cloud platforms across hybrid environments, as hybrid-IT emerges as a new norm. The adoption of multicloud solutions will accelerate owing to factors such as the migration of development applications to the public cloud and disaster recovery and business continuity purposes.

Active government initiatives such as Smart Cities in developing countries and key digital hubs present new monetization opportunities for service providers in developing verticalized cloud solutions to address governments' digital transformation ambitions. The demand for integrated management solutions is likely to increase in multicloud environments. Managed hybrid cloud platforms are in demand to accommodate workloads, ranging from enterprise applications (such as email, enterprise resource planning (ERP), and cloud-native applications) and analytics to the Internet of Things (IoT).

While cloud security and privacy remain major concerns for many companies in the ASEAN region, they present monetization opportunities for vendors to provide ancillary managed security, cloud, and other professional services. The expansion of open-source technologies, as well as advances in application programming interfaces (API)-accessible single-tenant cloud servers, helps promote acceptance toward managed cloud providers.

Established data center markets such as Singapore and countries experiencing high data center growth rates such as Malaysia, Indonesia, and Thailand are expected to be the key focus areas for cloud infrastructure market growth in the ASEAN region owing to the availability of necessary infrastructure and connectivity, as well as the presence of hyperscale cloud vendors. Stiff competition and increasing commoditization of pure-play storage and cloud offerings are driving providers to offer new services and innovate. These include bundling more managed services (such as managed security and analytics) into their cloud services portfolio, thereby shifting the focus away from being a pure reseller of public cloud services.

Key Issues Addressed

Key Topics Covered:

1. Executive Summary

2. Market Overview

3. Drivers and Restraints - Total Cloud IaaS Market

4. Key Trends

5. Forecasts and Trends - Total Cloud IaaS Market

6. Growth Opportunity and Call to Action

7. Singapore

8. Thailand

9. Indonesia

10. Vietnam

11. Taiwan

12. Malaysia

13. The Philippines

14. Hyperscale Cloud Providers Profiling

15. Key Cloud Providers

16. The Last Word

17. Appendix

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/wgjoh7

View source version on businesswire.com: https://www.businesswire.com/news/home/20200319005570/en/

Contacts

ResearchAndMarkets.com: Laura Wood, Senior Press Manager, press@researchandmarkets.com

For E.S.T. office hours call 1-917-300-0470; for U.S./CAN toll free call 1-800-526-8630; for GMT office hours call +353-1-416-8900.


Say Goodbye to legacy: Moving over to cloud-native availability solutions – Bdaily

In today's fast-growing digital world, services not only have to be accessible 24/7, but also have to meet strict SLAs. Achieving this is contingent on critical data being available and recoverable at all times to ensure business continuity. But common data threats and vulnerabilities might get in the way unless there's an effective data protection solution in place. The good news is that backup technologies and data protection have come a long way, and businesses can now be more resilient than ever.

More than a decade ago, in the era of physical server environments, legacy backup solutions were popular among organisations and widely used for data protection. But with the recent advances in data protection technologies, these solutions leave much to be desired. They remain disruptive, agent-based, time-consuming and resource-intensive tools, which were not designed for the cloud. Failing to adapt to today's highly virtualised world and to live up to the expectations of modern organisations has meant that legacy backup solutions have lagged behind.

The rapid expansion of cloud computing has undoubtedly brought along considerable benefits when it comes to data protection. It has made backing up business data cheaper, simpler and more flexible, and, to top that off, it is highly scalable. This market is dominated by cloud vendors like Amazon, Microsoft and Google. However, even these new solutions are, to an extent, caught in the trap of backing up legacy systems for customers operating in hybrid environments.

In addition, how safe is data really in the cloud? Businesses assume that data redundancy is a fail-safe method in case of unforeseen events. However, any errors, unintentional or malicious, are simply replicated across a business's systems. With cloud solutions, there is no possibility of restoring workloads from a specific point in time. Moreover, according to a recent report by McAfee, a large number of organisations that have chosen to store their sensitive data in public cloud environments are at higher risk of losing control of their data, and this is driving new areas of risk.

Yet, this digital transformation need not be a stumbling block for businesses. An all-in-one backup solution with cloud-native capabilities can be just the data protection solution that meets the challenges in virtualised and cloud environments.

Retiring legacy systems

Many organisations have already realised that they are wasting their server resources on outdated systems. Fast-growing virtualisation technologies offer a more viable and effective alternative, and this has led to a rush to invest in cloud technologies and virtualisation. In the process, many legacy backup systems that were once used on physical machines were simply moved over to virtual ones. And there's the rub.

Legacy solutions are scrambling to meet the needs of organisations in today's fast-moving and virtualised environments. They are slow with new releases, cumbersome to update and manage, and do not offer the features of cloud-native backup solutions. Moreover, the cloud alone cannot futureproof organisations and cover their backup and recovery needs. Essentially, using legacy systems in your company's production environment prevents you from operating your backup infrastructure effectively and efficiently. Luckily, there is an alternative solution for safeguarding your data.

The future is cloud-native

If organisations want an effective and efficient backup solution for reliable data recovery and restoration in any scenario, the only way to go is a modern, feature-rich, and robust cloud-native backup solution. While no solution is foolproof, a cloud-native backup solution comes pretty close. Native backup outperforms legacy backup solutions in several key areas:

Agentless: Cloud-native backup architecture does not require installing agents within the operating system; solutions integrate seamlessly with the virtual infrastructure and interact with it without the need for additional software.

Scalable: Without as many moving parts, native backup solutions face almost no limits on scaling fast, both horizontally and vertically, to meet evolving backup needs. In contrast to legacy backup solutions, they are less complex and do not need weeks to deploy.

Image-based: Cloud-native backup solutions are image-based; they create a copy of the operating system and all its data, saved as a single file that contains everything needed to recreate the virtual machine if something catastrophic happens.

Replication: With many legacy backup products, a backup is all you get. If organisations wish to perform data replication, they either have to provide their own replication solution or pay extra for additional licensing to unlock replication features. An agile native backup offering, on the other hand, allows organisations to replicate any data and store it in multiple locations to ensure it can be retrieved if system failure occurs.

Legacy backup solutions hardly cover an organisation's virtual environment backup and recovery needs, and combining them with cloud technologies only makes the backup and disaster recovery process more complex and time-consuming. As virtualisation evolves, organisations cannot afford to ignore the necessity of backing up their virtual data using an integrated, automated cloud-backup solution. Regardless of setup, organisations must realise that legacy backup solutions are of no help and are a thing of the past.


Thought Machine’s Paul Taylor: ‘Events like this will push banking into the cloud’ – AltFi


How coronavirus is leading to an overhaul in the very foundations of banking.

"It's an interesting time to be building cloud-based banking tech," Thought Machine's CEO Paul Taylor explains via video chat from his home office, a result of the current global coronavirus pandemic.

"Each event like this," he gestures around him, "pushes banks further towards realising they really have to get on top of cloud-centric banking once and for all."

Thought Machine was founded by Taylor along with a senior team of ex-Google employees in 2014 to try and fix the foundations of banking.

Core banking, the world in which Thought Machine operates, describes the back-end systems of banks that track account totals and transactions: the ledger of the bank, if you will.

Unlike the very bespoke systems built over decades and run by most traditional banks, Thought Machine's core banking system, called Vault, is a modern piece of software designed around APIs that can be run on Amazon Web Services, Google Cloud or any server in the world.

"We're not tied to physical infrastructure, we're not tied to particular data centres; it's pretty easy to move the systems for an entire bank from one physical location to another. That can be next to impossible in a traditional bank," says Taylor.

Fixing the foundations of banking isn't terribly sexy, but the outcome could one day be felt in the lives, and pockets, of millions.

"I'm not really being much of an oracle saying that banking is going mobile, we all know that. But banking is still stuck, it's troublesome, it's full of friction and it's expensive," says Taylor.

People might not see the direct cost, but they still feel it when overdraft fees and credit card fees kick in.

When the Chancellor Rishi Sunak announced a three-month mortgage holiday policy in response to Covid-19, Taylor says that in a Thought Machine system such a change would be as easy as clicking a parameter to make the amendment.
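Thought Machine has not published the mechanics of such a change, so the snippet below is a hypothetical sketch rather than Vault's actual API; it simply illustrates the idea that a repayment holiday can reduce to amending a dated parameter on a product definition instead of rewriting code.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical model, not Thought Machine's Vault API: product behaviour is
# driven by named parameters that can be amended at runtime.
@dataclass
class MortgageProduct:
    parameters: dict = field(default_factory=lambda: {
        "interest_rate": 0.0319,
        "repayment_holiday_active": False,
        "repayment_holiday_until": None,
    })

    def payment_due(self, scheduled_amount: float, today: date) -> float:
        until = self.parameters["repayment_holiday_until"]
        if self.parameters["repayment_holiday_active"] and until and today <= until:
            return 0.0  # interest may still accrue; omitted for brevity
        return scheduled_amount

product = MortgageProduct()
# "Clicking a parameter": the three-month holiday is a data change, not a code change.
product.parameters["repayment_holiday_active"] = True
product.parameters["repayment_holiday_until"] = date(2020, 6, 30)
print(product.payment_due(843.50, date(2020, 4, 1)))  # 0.0 while the holiday applies
```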

"Now there are people running around in banks with legacy platforms going, 'Oh my god, how are we going to get our mortgage holiday sorted out?'"

Clearly Taylor's position is winning over former critics; Thought Machine's list of clients has grown rapidly since 2014, with both fintechs like Atom Bank and incumbents like Lloyds Bank and Standard Chartered adopting its tech.

Earlier this month Thought Machine raised its largest round of funding yet, $83m, with new backers including Draper Esprit getting involved.

For Taylor, the cash injection unlocks two paths to future growth: international expansion into North America and Australia, a booming fintech hub, and expansion into new types of banking.

"Currently we're very much focused on retail banking, but small and medium-sized enterprise (SME) banking is pretty close, so we're looking at that, and then it's private banking and commercial or corporate banking," Taylor tells AltFi.

Whether or not Thought Machine wins over the world of core banking, we'll have to wait and see, but the sudden global shift to remote working and mobile banking is undoubtedly a boon for the change Taylor has been championing for the last six years.


Head in the Clouds: Managing Security in a Multi-Cloud World – TechSpective

The rise in business processes supported by cloud services is well evidenced by organizations using cloud services to store confidential data in the cloud environment. But when using cloud services, organizations are still unsure whether to entrust cloud service providers (CSPs) with their data. CSPs generally provide a certain level of security, as substantiated by multiple surveys, but cloud-related security incidents do occur.

CSPs cannot be solely responsible for the security of their customers' critical information assets. Cloud security relies equally on the customer's ability to implement the right level of information security controls. Nevertheless, the cloud environment is complex and diverse, which hinders a consistent approach to deploying and maintaining core security controls. It is vital that organizations are aware of and fulfill their share of the responsibility for securing cloud services to successfully address the cyber threats that increasingly target the cloud environment.

As organizations acquire new cloud services, they typically choose these from a selection of multiple CSPs and therefore need to deal with a multi-cloud environment, which is characterized by the use of two or more CSPs.

Organizations favor a multi-cloud environment because it allows them to pick and choose their preferred cloud services across different CSPs, including Amazon Web Services (AWS), Microsoft Azure, Google Cloud and Salesforce. However, each individual CSP adopts its own jargon, its own specific technologies and approaches to security management. The cloud customer therefore needs to acquire a wide range of skills and knowledge to use different cloud services from multiple CSPs securely.

Organizations require a range of different users to securely access cloud services from within the organization's network perimeter through secure network connections (e.g. via a gateway). However, organizations also need their cloud services to be accessed from outside the internal perimeter by business partners and users travelling off-site or working remotely, all connecting through a selection of secure network connections as dictated by the organization.

Securing the use of cloud services is a shared responsibility between the CSP and the cloud customer. The security obligations incumbent on the CSP are to protect the multi-tenant cloud environment, including the backend services and physical infrastructure, as well as to prevent the commingling of data between different customers.

While the CSP maintains much of the underlying cloud infrastructure, the cloud customer is responsible for securing its data and user management. Whether the customers responsibility extends to performing security configurations for applications, operating systems and networking will depend on the cloud service model selected.

This shared responsibility for security can create confusion and lead to over-reliance on the CSP to mitigate threats and prevent security incidents. It is essential that the cloud customer does not depend wholly on the CSP to deploy the appropriate security measures, but clearly understands how responsibility for security is shared with each CSP in order to identify and deploy the requisite security controls to protect the cloud environment.

An organization using an on-premises IT data center will know exactly where its critical and sensitive data resides and can exert full control over the movement of its data. This helps considerably when implementing security controls, whereas in the cloud environment, data moves in and out of an organization's perimeter more freely. This can obscure where critical and sensitive data is located and how it can be protected, which can hinder an organization's ability to effectively enforce the requisite security controls across all of its cloud services in line with compliance requirements.

While it is the cloud customer's responsibility to ensure the security of its data in the cloud environment, the customer's control over its data is intrinsically limited, since the data is stored by an external party (the CSP) in an off-site location, often in a different country. Moreover, CSPs will often leverage several data centers in geographically distinct locations to ensure the organization's data is stored on more than one server for reasons of resilience. This creates additional complexity in terms of managing data across borders, understanding where it is located at a given moment in time, determining the applicable legal jurisdiction and ensuring compliance with relevant laws and regulations, an obligation that rests fully with the cloud customer, not the CSP.

Modern organizations must operate at a fast pace, delivering new products and services to stay ahead of the competition. Many are therefore choosing to move ever further towards cloud computing, as the elasticity and scalability offered by cloud services provide the desired flexibility needed to compete. For an organization to have confidence that it can move to the cloud whilst ensuring that vital technological infrastructure is secure, a robust strategy is required.

The cloud environment has become an attractive target for cyber attackers, highlighting the pressing need for organizations to enhance their existing security practices. Yet consistently implementing the fundamentals of cloud security can be a complicated task due to the diverse and expanding nature of the cloud environment.

This is only one of countless challenges that organizations need to overcome to use cloud services securely. Organizations cannot rely purely on CSPs to secure their critical information assets but must accept their own share of responsibility. This responsibility calls for a combination of good governance, deployment of core controls and adoption of effective security products and services. Controls that cover network security, access management, data protection, secure configuration and security monitoring are not new to information security practitioners, but they are critical to using cloud services securely.

Moving forward, organizations can select from a variety of trends and technologies that will enable them to use cloud services securely from the adoption of new products to the embedding of improved processes, such as a focus on secure containers, where security is given greater emphasis during development.

Assuring that services are used securely will provide business leaders with the confidence they need to fully embrace the cloud, maximizing its potential and driving the organization forward into the future.


Laser Links will link small satellites to Earth and each other – Laser Focus World

Three decades ago, the first long-haul submarine fiber-optic cables brought international telecommunications traffic that had been routed through geosynchronous satellites back to Earth by offering lower latency and more bandwidth. Now, planned constellations of small satellites in low-Earth orbit hope to bring telecommunications back to space by offering lower latency and more bandwidth to nearly half the world's population who lack broadband Internet service. Laser links between those satellites will provide the backbone of the system (see Fig. 1).

The idea of laser communication in space dates back to the birth of the laser in 1960, when Theodore Maiman proposed it as a potential application. Yet not until 2008 was the first intersatellite laser link demonstrated between two low-Earth-orbit satellites, the European radar satellite TerraSAR-X and the U.S. military satellite Near Field Infrared Experiment (NFIRE), says Philipp Biller, product manager for laser communication systems at Tesat Spacecomm (Backnang, Germany). The laser terminals supplied by Tesat needed less than 25 seconds on average to lock onto each other and begin transmission in both directions at 5.6 Gbit/s. Test runs continued as long as the two spacecraft were in a line of sight from each other, an average of about 20 minutes during which the laser link between them turned about 80°. The experiments continued for several months, with bit error rates under 10⁻⁸ for ranges up to 8000 km.

NASA made headlines in 2013 by demonstrating the first laser link in deep space when its Lunar Laser Communication Demonstration (LLCD) transmitted data from the Moon at 622 Mbit/s. This marked a big improvement over data rates possible with radio waves.

The first commercial laser intersatellite link, Airbus's SpaceDataHighway, began operation in 2016, using Tesat laser communications terminals to transmit high-priority image data collected by four satellites in low-Earth orbit at rates to 1.8 Gbit/s to relays in geosynchronous orbit for radio transmission to the ground. The high-speed laser uplink in the low satellite tracks the geosynchronous satellite so it can relay images quickly, without having to wait to pass over a ground station. The geosynchronous satellite can then relay the images to cloud servers on the ground via its high-speed microwave link. The addition of a second geosynchronous satellite in 2019 increased relay speed.

Meanwhile, a dramatic new vision has emerged for laser communications in space: using laser-linked satellites in low-Earth orbit to deliver broadband Internet and other services to places beyond the reach of fiber-optic cables. Two key factors have driven that trend, notes Tim Yarnall of the MIT Lincoln Laboratory (Lexington, MA). "SpaceX has disrupted the launch services industry, making access to low-Earth orbit much less expensive," he says. "And a push to make small satellites has created a vendor base making all the things you need to build systems that would work in low-Earth orbit."

The idea is to orbit hundreds or thousands of small satellites between 300 and 2000 km above the ground and make sure that at least one satellite is always in the sky above any point being served. The satellites would orbit around the Earth in a number of rings, spaced at different altitudes and positions to avoid collisions. Satellites would be evenly spaced around each ring so laser links would always have one or two adjacent satellites in sight (see Fig. 2). An individual ground station would connect to a satellite passing overhead to send signals, and when that satellite moves out of range, it would hand the connection off to the next satellite following its orbit. Once in space, the signal then would be routed through a series of intersatellite links until it reached a satellite passing over the destination, which would route the signal to a local ground station.

FIGURE 2. Low-orbit satellites are evenly spaced so they can relay signals around the globe. One satellite ahead and one satellite behind in orbit is always visible in rings of 7 to 9 for this orbit. Redundancy is possible with rings of 10 to 14 satellites, in which each satellite could always link to two ahead and two behind, so failure of a single satellite would not break the link. (Courtesy of T. Yarnall, MIT Lincoln Laboratory)
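The ring sizes quoted in the caption follow from simple geometry: neighbouring satellites in the same circular orbit can link only if the straight line between them clears the Earth's limb. The sketch below checks that condition; the altitude and grazing margin are assumptions chosen for illustration, and the minimum ring size shifts with both.

```python
import math

R_EARTH_KM = 6371.0
GRAZE_MARGIN_KM = 80.0   # assumed clearance above the surface for the beam
ALTITUDE_KM = 1000.0     # assumed; the constellations discussed sit between 300 and 2000 km

def adjacent_sats_visible(n_sats_in_ring: int, altitude_km: float = ALTITUDE_KM) -> bool:
    """True if the chord between neighbouring satellites clears the Earth's limb."""
    r = R_EARTH_KM + altitude_km
    # Closest approach of the inter-satellite chord to the Earth's centre.
    closest_approach_km = r * math.cos(math.pi / n_sats_in_ring)
    return closest_approach_km > R_EARTH_KM + GRAZE_MARGIN_KM

for n in range(5, 15):
    status = "link clears Earth" if adjacent_sats_visible(n) else "blocked by Earth"
    print(f"{n:2d} satellites per ring: {status}")
```

At the assumed 1000 km altitude the nearest-neighbour link closes for rings of seven or more satellites, consistent with the lower end of the caption; checking the two-ahead redundancy case simply means doubling the angle inside the cosine.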

Microwave links are likely between ground stations and satellites because clouds and precipitation can block optical signals in air. Some proposals envision microwave links between satellites. However, laser links offer important advantages over radio frequencies for intersatellite links above the atmosphere: higher transmission bandwidth, smaller receivers, and tighter beams to enhance security. The biggest challenge in the optical band is the need to sustain laser connections between adjacent satellites as the whole constellation moves through space and the nearest neighbors of each satellite keep changing.

The plan is a huge leap beyond the limited bandwidth of today's satellite telephone systems, and has attracted wide attention from companies hoping to deliver broadband services to large areas poorly served by existing cables, including much of Africa, central Asia, the Arctic, and the rural central United States. Facebook early on proposed transmitting from high-flying drones, but is now considering satellites. Google's Project Loon flies balloons at altitudes of 18 to 25 km, and has a contract with Telkom Kenya to provide services in rural areas. Fleets of hundreds or thousands of low-orbit satellites could serve much broader areas, and would have much lower latency than the quarter-second round-trip time to geosynchronous orbit.

Amazon, Boeing, and SpaceX all have announced plans for thousands of small low-orbit satellites, and Facebook is also interested. In early May 2019, Aviation Week counted proposals from 32 companies for a total of 13,529 small communications satellites in low-Earth orbit. Only a few test satellites were in orbit then, but on May 23, 2019, a single SpaceX launch carried 60 test satellites for its Starlink system; three subsequent launches have added more satellites. The satellites lacked the laser transmitters planned for the full system, but had internal engines that raised them from the drop-off point at 440 km to final orbits at 550 km. Much more is in the offing: SpaceX is now talking about adding some 7,500 satellites to the 4,400 it originally planned, which would bring the Starlink constellation close to 12,000 satellites.

Radio-frequency transmission is less expensive, but lasers can offer more bandwidth for a modest premium. That has led laser developers to focus on 10 Gbit data rates so they can exceed the data rates available with microwaves.

The planned intersatellite laser links are largely based on technology developed for ground-based fiber-optic systems. Almost all transmitters in development are based on distributed-feedback diode lasers followed by fiber amplifiers, says Tim Yarnall of the MIT Lincoln Laboratory. Both the 1550 nm erbium band and the 1030 to 1080 nm ytterbium band are under consideration. "You can buy lots of good parts at 1550 nm," he says, "but power is easier to generate at one micrometer." Tesat uses 1064 nm diode-pumped solid-state lasers for its long-distance relays to geosynchronous orbit. A standards group, the Consultative Committee on Space Data Systems, recently recommended using 1550 nm for downlinks from orbit and 1 µm for uplinks to keep wavelengths separated. However, the technology is still evolving.

A 7 to 15 kg terminal could transmit 10 Gbit/s on a 4500 km intersatellite link, says Paul Cornwell of Mynaric AG (Gilching, Germany), a company founded a decade ago to develop wireless laser links for aerospace use. "The future is terabit speeds," he says. The company is looking at 100 Gbit/s, but its current products operate at 10 Gbit/s. Figure 3 shows a Mynaric ground-based laser terminal.

FIGURE 3. The Mynaric ground-based Rhino terminal. (Courtesy of Mynaric)

"Pointing, acquisition, and tracking is the most challenging problem in space laser communications," says Yarnall. A fundamental tradeoff is directivity versus power: the tighter the beam divergence, the higher the power on target, but the more demanding the requirements on directivity. Laser divergence can be as narrow as 10 µrad. "That may be worth it for the NASA [deep space] programs we support," he says. "Size, weight, and power are the key concerns, so they let us use very complex pointing control systems to get every last bit out of performance rather than spraying power in all directions."
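The tradeoff Yarnall describes can be put into numbers with a small far-field estimate. In the sketch below, the receive aperture is an assumed round figure and the uniform-spot model is deliberately crude, but it shows how quickly delivered power falls as the beam is allowed to spread:

```python
def spot_diameter_m(divergence_rad: float, range_m: float) -> float:
    """Far-field spot size for a full-angle divergence (small-angle approximation)."""
    return divergence_rad * range_m

def received_fraction(rx_aperture_m: float, spot_m: float) -> float:
    """Fraction of transmitted power caught by the receive aperture,
    crudely assuming a uniform, fully overlapping spot."""
    return min(1.0, (rx_aperture_m / spot_m) ** 2)

RANGE_M = 4500e3       # intersatellite link length quoted above
RX_APERTURE_M = 0.08   # assumed 8 cm receive telescope

for div_urad in (10, 50, 250):
    spot = spot_diameter_m(div_urad * 1e-6, RANGE_M)
    frac = received_fraction(RX_APERTURE_M, spot)
    print(f"{div_urad:>4} urad -> {spot:7.1f} m spot, ~{frac:.1e} of transmitted power captured")
```

A 10 µrad beam paints a spot roughly 45 m across at 4500 km; opening the divergence by a factor of 25 costs a factor of 625 in delivered power, which is why the pointing system earns its complexity.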

The economics are very different for constellations of hundreds or thousands of satellites in low-Earth orbit, where orbits decay within a few years, so satellites have to be replaced frequently. To control costs, satellites must be small, mass-produced, and mass-launched. Yet, they also must be sophisticated enough to keep their transmitters properly aligned to communicate with their neighbors in orbit, and the system must be resilient enough to keep on working or recover quickly after a single satellite fails.

It adds up to a tough set of requirements. The general consensus seems to be that the planned constellations need to be based on smallsats weighing up to 500 kg. That's just under one-tenth the 5300 kg of the geosynchronous satellites in the European SpaceDataHighway, small enough that several can be launched on a single rocket to reduce costs. SpaceX's Starlink satellites will weigh in at 227 kg and will carry four relay links, two to the satellites ahead and behind in orbit, and two others linked to the nearest satellites in other orbits. With each laser terminal expected to weigh 7 to 15 kg, they would account for 30 to 60 kg, an eighth to a quarter of the total satellite. Power, pointing and tracking, control systems, propulsion, and other equipment would account for the balance.

Most architectures call for radio-frequency downlinks from low-Earth orbits to ground stations because radio can deliver signals through clouds and precipitation that make laser downlinks unreliable. Tesat is an exception, offering a Tosiris laser downlink that can transmit up to 10 Gbit/s to ground stations with uplink beacons to aid in locking onto the receiver. To deal with weather, Biller says they install several ground stations over the area where the signal is to be delivered and have the system select the site with the clearest air to receive the signal. The system is designed to download satellite information, so it requires only several megabits per second for uplinks, which also are delivered via laser. Figure 4 compares Tesat laser transmitters for low-orbit-to-ground links and links between geosynchronous satellites.

FIGURE 4. Tesat laser terminals: (a) the 10 Gbit/s TOSIRIS terminal for low-Earth-orbit-to-ground transmission and (b) the LCT135 terminal for transmission between geosynchronous satellites separated by up to 80,000 km. (Courtesy of Tesat)

Satellites assembled from 10 by 10 by 10 cm modules, called cubesats, have been in the spotlight because they offer inexpensive access to space for projects such as laser guide star research. They are also being tested for communications. Kerri Cahoy's group at the MIT Space Telecommunications, Astronomy, and Radiation Laboratory has also developed tools to optimize pointing laser beams from cubesats, reducing beacon tracking errors to an average of 16 µrad. Tesat has delivered a 360 g, 9.5 × 9.5 × 3.25 cm laser communications terminal capable of transmitting 100 Mbit/s from low-Earth orbit to the ground to an unidentified U.S. customer.

However, the small size of cubesats limits their usefulness in constellations. "Cubesats are notoriously shy on power," says Yarnall, a limit on designing transmitters and control systems to link to other satellites in the constellation. Biller thinks cubesat communications can work for Earth observation or Internet of Things (IoT) links at low data rates, but that larger smallsats are needed for constellations.

Details of some major plans remain vague. Boeing has filed plans with the Federal Communications Commission for a constellation of some 3,000 low-Earth-orbit communication satellites, and has invested in Bridgecomm (Denver, CO), a maker of laser communication equipment for ground stations and low-orbit satellites operating in the 1550 nm band. Amazon has filed plans for a fleet of 3,236 satellites, called Project Kuiper Systems LLC, that would orbit at 590, 610, and 630 km to serve the 95% of the world's population living at latitudes up to 56° from the equator. But few other details are available about either big constellation. Many other companies also are working quietly on plans for building equipment or launching laser-linked satellites.

Other plans do not involve lasers. OneWeb (McLean, VA) plans a constellation of 588 low-Earth-orbit satellites, 49 in each of 18 orbital planes, that will connect end users through the satellites to ground-based gateways via microwaves, with no intersatellite connections. OneWeb launched six test satellites on July 16, 2019, and planned to launch over 30 more in December 2019, with more launches following regularly.

Worried that military communications and operations depend too heavily on large and vulnerable geosynchronous satellites, DARPA's Blackjack program is developing plans for a global high-speed network in low-Earth orbit. The goal is a more flexible and resilient system that could take advantage of commercial technology and would not depend on a few tempting targets in geosynchronous orbit.

The first generation of commercial laser communications terminals for low-Earth-orbit satellites is in development. "Timetables are difficult," says Cornwell, but Mynaric hopes to complete qualification of its 10 Gbit/s laser terminals before April 2020 and begin production later in the year. Tesat plans to test its 10 Gbit/s Tosiris satellite terminal on the International Space Station in 2020. Real hardware should soon be rolling off production lines. Tomorrow's 5G networks could use the capacity.

It's hard to believe that all of the roughly 20,000 planned satellites will be launched. On the other hand, if they all operate at 10 Gbit/s and each one transmits a single channel, the overall capacity is on the order of 200 Tbit/s. That's only about 35% larger than the 144 Tbit/s capacity of all six fiber pairs in the Pacific Light Cable, currently the world's highest for a single cable. It's no wonder that Cornwell says "the future is terabit speeds."


The trouble with cloud evaluations amid too many choices – TechTarget

Enterprises are presented with many challenges when they move to the cloud -- not the least of which is wading through the multitude of services to find the ones that will actually benefit their businesses.

Cloud providers churn out so many services these days that the race appears to have shifted from which one can offer the cheapest services to which one can offer the most services. AWS executives boasted about having more than 175 services at their 2019 user conference, while Microsoft and Google list an even higher number of services in their respective product directories.

The variety of cloud services only makes it harder to choose the right ones as part of a cloud evaluation. There are the staples of the public cloud, such as compute, networking, storage and databases; then there is the host of other IT sectors these vendors continue to encroach upon, whether it's AI, security or even esoteric categories like satellite communications. And within each of those groupings, there can be a dozen services to pick from, each with its own set of unique features.

"The increasing volume of services is definitely a challenge for pretty much all of my clients," said Sean Feeney, cloud practice director at Nerdery, an IT consultancy. "It could take an entire career to keep up with the actions of the cloud providers, much less implement them to add value to your business."

As part of its Cloud Price Index, 451 Research tracks 2 million product line items across AWS, Microsoft, Google, IBM and Alibaba. The total number of items it tracks doubled in 2019.

"It's the Wild West out there," said Owen Rogers, research vice president at 451. "As soon as an enterprise has made a commitment to a development, a new technology comes along which might be a better fit."

To illustrate the conundrum enterprises face when they try to keep pace with the cloud release cycles, Rogers used an example of a company that moved all its applications to cloud VMs a few years ago, only to see a raft of container-based services come along after the fact. The diversity of options can help add value and build more powerful apps, but it also creates headaches as IT teams figure out the best combination of services.

Another part of the problem is enterprises aren't fully aware of what's out there now, so they're missing opportunities to save money. For example, an IT team could move its on-premises database to SQL Server hosted on AWS as a cost-saving measure, but it might miss the fact that it could have saved as much as 80% on migration expenses if the team did it with AWS Database Migration Services, said Jeff Valentine, CTO at CloudCheckr, a cloud management platform provider.

"It's information overload," Valentine said. "No one can keep up with the constant barrage of changes."

Despite the bevy of options, enterprises shouldn't feel inundated to the point of paralysis. It's important to see what tools and services fit. Start by sorting out the core public cloud services and features you want to use, Feeney said. This could include VMs, containers or a PaaS platform.

"Whatever your base model is, that's going to be your biggest chunk of spend," he said. "All these other services they roll out with are often just iterative versions or features of an existing service, or they're very vertically aligned services."

Avoid services tailored to an industry you're not a part of, but otherwise encourage your developers to experiment, Feeney noted. And they should do so continuously. In the past, the bulk of the major product rollouts or updates would coincide with a vendor's primary user conference, but that's not the case today. Instead, cloud providers push out updates and services almost weekly.

"Build safe sandboxes to try these new services and find out if it improves workflow, saves cost or benefits [your system] otherwise," Feeney said.


And that experimentation should be a grassroots initiative, led either by developers or lines of business, Feeney said. A top-down approach won't work because company leaders often only catch the biggest announcements and are less likely to see something minor that might be highly valuable to the business.

Analyst firm IDC is actively encouraging its clients to experiment with new services and to have a process in place to ensure it's done regularly. While testing IT infrastructure on-premises often involves long, drawn out RFPs and beta tests, it's much more cost effective to evaluate public cloud services with a small pilot program, said Deepak Mohan, an IDC analyst.

"When a new database type gets announced, it literally costs you a few dollars to quickly spin it up and try it out," he said.

And the need to experiment is paramount today as organizations move beyond lift-and-shift and transition to different -- and often unfamiliar -- compute paradigms in the cloud. Companies will get the most benefit from the public cloud if they modernize their workloads and move closer to the 12-factor application model, Feeney said. The only way that happens is if IT teams dig into these services and find out how the tools can help them optimize their IT footprint.

"The optimization of cloud a few years ago was about working out how to squeeze costs on infrastructure," Rogers said. "The optimization of today is about helping enterprises constantly take advantage of new services to add value to the business, not just squeezing costs."

Of course, some companies can only tolerate so much risk. CloudCheckr has large enterprise clients that spend millions of dollars each month on the public cloud across dozens of departments and projects. For those companies, experimentation leads to fear and anxiety, because the wrong move, no matter how well intentioned, could run afoul of data privacy laws or create other issues that result in dire consequences for the business.

"They have hundreds of people in their organizations that could [make a mistake that accidentally] sends their CEO to jail or puts them on the front page of The Wall Street Journal," Valentine said.

Leaders at these companies don't want to give autonomous control at the department level because of a lack of governance and consistency, Valentine said. If different teams experiment across clouds, it could lead to service sprawl and confusion, and limit visibility across workloads.

An emerging trend among these types of companies is what is known as "cloud centers of excellence." There isn't a standardized approach to creating and implementing these centers, but the goal is to create cross-functional, centralized oversight over IT resources in the cloud. Input comes from stakeholders across a business, but it can also involve cloud providers and outside consultants. These cloud centers can provide guidance on new projects, so developers and lines of business comply with governance policies around cost, security and consistency.

However, enterprises shouldn't be draconian, Valentine said. Instead, put the right guardrails in place so employees can have some freedom while still complying with corporate security and compliance standards. That way, IT teams can confidently move fast and make important changes to get the most out of their cloud environments.
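What such a guardrail looks like varies widely, but as a minimal illustration (the required tags and approved regions below are invented for the example, not any provider's policy format), a pre-deployment check might simply reject resources that lack mandatory governance metadata:

```python
# Illustrative guardrail check; tag names and regions are assumptions for the example.
REQUIRED_TAGS = {"owner", "cost-centre", "data-classification"}
ALLOWED_REGIONS = {"eu-west-1", "eu-west-2"}

def guardrail_violations(resource: dict) -> list:
    """Return a list of policy violations for a proposed cloud resource."""
    violations = []
    missing = REQUIRED_TAGS - set(resource.get("tags", {}))
    if missing:
        violations.append(f"missing required tags: {sorted(missing)}")
    if resource.get("region") not in ALLOWED_REGIONS:
        violations.append(f"region {resource.get('region')!r} is not approved")
    return violations

proposal = {"region": "us-east-1", "tags": {"owner": "data-team"}}
print(guardrail_violations(proposal))
```

Teams keep their freedom to experiment inside the approved boundaries, while the central function gets the consistency and visibility it is after.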
