
Amazon details cause of AWS outage that hobbled thousands of online sites and services – GeekWire

A past AWS re:Invent conference. (GeekWire Photo)

A relatively small addition of capacity to the Amazon Kinesis real-time data processing service triggered a widespread Amazon Web Services outage last week, the company said in a detailed technical analysis over the weekend.

The addition caused all of the servers in the fleet to exceed the maximum number of threads allowed by an operating system configuration, the post said, describing a cascade of resulting problems that took down thousands of sites and services.

The outage impacted online services from big tech companies such as Adobe, Roku, Twilio, Flickr, Autodesk and others, including New York City's Metropolitan Transportation Authority. The Washington Post, which is owned by Amazon CEO Jeff Bezos, was also affected.

It was an especially ill-timed incident for Amazon, coming just days before its annual AWS re:Invent cloud conference, which kicks off Tuesday morning as a virtual event. Reliability has been a hotly debated topic between Amazon, Google, Microsoft and other major players in the cloud, each of whom experiences periodic outages.

The explanation underscores the interdependent nature of cloud services, as the problems with Kinesis impacted the Amazon Cognito authentication service, CloudWatch monitoring technology, Lambda serverless computing infrastructure, and other Amazon services.

"In the very short term, we will be moving to larger CPU and memory servers, reducing the total number of servers and, hence, threads required by each server to communicate across the fleet," the company said, describing one of the lessons learned from the incident. "This will provide significant headroom in thread count used as the total threads each server must maintain is directly proportional to the number of servers in the fleet."
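
To make that scaling relationship concrete, here is a small illustrative Python sketch. The fleet sizes and thread limit below are made-up numbers, not Amazon's actual figures; the point is only that per-server thread count grows with fleet size until it collides with an operating-system ceiling, which is why fewer, larger servers create headroom:

```python
# Illustrative only: hypothetical fleet sizes and OS limit, not AWS's real figures.
OS_THREAD_LIMIT = 10_000  # assumed per-process thread ceiling

def threads_per_server(fleet_size: int, threads_per_peer: int = 1) -> int:
    """Each front-end server keeps a thread per peer it must talk to."""
    return (fleet_size - 1) * threads_per_peer

for fleet in (500, 2_000, 8_000, 12_000):
    t = threads_per_server(fleet)
    status = "ok" if t < OS_THREAD_LIMIT else "EXCEEDS OS LIMIT"
    print(f"fleet={fleet:>6}  threads/server={t:>6}  {status}")

# Fewer, larger servers -> smaller fleet -> fewer threads each server must
# maintain, which is the headroom the post-mortem describes.
```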

Amazon apologized and said it would apply the lessons learned to further improve its reliability: "While we are proud of our long track record of availability with Amazon Kinesis, we know how critical this service, and the other AWS services that were impacted, are to our customers, their applications and end users, and their businesses. We will do everything we can to learn from this event and use it to improve our availability even further."

An Introduction to Cloud Computing | Ethical Hacking | EC-Council Blog – EC-Council Blog

Cloud computing has become one of the most deliberated topics among cybersecurity experts and IT professionals. And more recently, cloud computing in ethical hacking has taken up the spotlight. With the rise of cloud crimes, experts are looking into how ethical hacking principles can curb security issues and boost forensic investigations as well.

Cloud computing presents new paths for malicious hackers to exploit vulnerabilities, creating new categories of vulnerability and cloud security concerns. Moreover, investigating crimes in the cloud can be demanding.

This article serves as an introduction to cloud computing and its benefits. It also explains how cloud computing in ethical hacking can be useful.

Cloud computing describes the on-demand delivery of IT capabilities like storage, databases, servers, intelligence, analytics and networking through metered services. This lets you create, configure and customize applications either offline or online. The word "cloud" refers to the network, typically the internet, over which these services are delivered.

Previously, you could only store information locally. An on-premises data center required organizations to manage everything: procurement and virtualization, installation of an operating system, setting up the network and storage for data, and ongoing maintenance.

Cloud computing dramatically altered this state of affairs by off-shoring or outsourcing ICT duties to third-party services. They are not only responsible for procurement and maintenance, but they also offer a wide range of platforms and software as a service. Some cloud computing service providers include Amazon Web Services, IBM Cloud, Google Cloud Platform, Microsoft Azure, VMware, DigitalOcean, RackSpace, etc.

There are four popular cloud deployment models: public, private, hybrid and community clouds.

Cloud services are also categorized by the type of service offered: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS).

Cloud computing is highly valuable: it reduces upfront infrastructure costs, scales on demand and shifts day-to-day maintenance to the provider.

One of the major issues with cloud computing is security and privacy concerns over infrastructure and services provided by a third party. While vendors try to ensure secure networks, a data breach can affect consumers and their businesses. Another concern is the need for private data to be stored separately: if another customer falls victim to an attack, the availability and integrity of your data might be compromised. Common threats and attacks that can affect cloud computing include data breaches, account hijacking, insecure interfaces and APIs, and denial-of-service attacks.

Cloud computing services make business applications mobile and collaborative. However, there is always a risk of security and privacy breaches when handing sensitive data to vendors or other third parties. The fundamental ethical principles of IT remain unaffected even with the emergence of cloud computing infrastructure and services.

It is critical to revisit these principles, particularly since much of what used to be purely internal deliberation over operations and risk management has been handed to vendors and people who sit beyond immediate organizational control. These vendors become the main keepers of customer data, risk mitigation and functional operation. Therefore, they must understand the operational risks they are undertaking on behalf of their clients.

Clients have obligations as well, since they may in turn be providing services to their own customers. It is important to have in-depth knowledge of the technology employed and its associated risks; the simplest safeguard is due diligence when evaluating a third-party provider of cloud computing services.

At the end of the day, it all boils down to certain basic concepts: accountability, honesty, respect for privacy, and doing unto others as you would have done unto you. Cloud computing can be maximized only if genuine, long-term trust is established between clients and providers, and that trust requires a clear system of ethics. As such, the storage of client data in the cloud should follow stricter regulations.

EC-Council's Certified Ethical Hacker (CEH) credential is the most extensively recognized and respected certification in this industry. CEH is a knowledge-based exam that evaluates your competence in attack prevention, attack detection, information security threats and attack vectors, procedures, methodologies, and more.

The CEH credential certifies security officers, site administrators, auditors, cybersecurity professionals, and other cybersecurity enthusiasts in the specific network security discipline of ethical hacking from a vendor-neutral perspective. For more information, visit our course page now!

FAQs

What is CIA in ethical hacking?

It is an acronym for Confidentiality, Integrity and Availability. The CIA triad is the standard model used to assess an organization's information security, and these three properties should function as the mission of every security program.

Is cloud computing safe from hackers?

Certain risks are linked to cloud computing; in fact, its growth has made hacking more widespread. Data stored in the cloud is vulnerable to hackers, viruses and other malicious software. For instance, a malicious hacker can use stolen employee login credentials to remotely access critical data saved in the cloud.

Should I trust the cloud?

Yes, but not unconditionally. Your data is fairly safe in the cloud compared with your hard drive and other local storage devices, but a cloud service ultimately places your sensitive data in other people's hands. If privacy is not a major concern for you, there is little to worry about.

Can the cloud be secure?

Although security threats persist, network defenses and security measures reduce the odds of becoming a victim. Restricting cloud access through internal firewalls improves security, and encryption helps keep data safe from unauthorized access.
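
As a simple illustration of that last point, here is a minimal sketch of client-side encryption, so data is already ciphertext before it ever reaches a provider. It assumes the third-party `cryptography` package is available; the sample record and key handling are placeholders, and a real deployment would keep the key in a KMS or HSM you control:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, store this in a KMS/HSM you control
cipher = Fernet(key)

plaintext = b"customer record: account=12345, balance=100.00"
ciphertext = cipher.encrypt(plaintext)

# Upload `ciphertext` (never `plaintext`) to the cloud object store.
# Even if the stored object is exposed, it is unreadable without your key.
assert cipher.decrypt(ciphertext) == plaintext
```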

Google Cloud Will Not Be Able To Overtake Microsoft Azure – Forbes

ANKARA, TURKEY - MARCH 3: In this photo illustration, a mobile phone and computer screens display Microsoft and Google logos in Ankara, Turkey on March 3, 2020. (Halil Sagirkaya / Anadolu Agency)

Google Cloud certainly has the technical chops and engineering talent to compete with Microsoft Azure and Amazon's AWS when it comes to cloud infrastructure, edge computing and especially inferencing/training for machine learning models. However, Google may lack focus because Search and YouTube remain the main revenue drivers. That shows in the company's inability to ignite revenue growth in the cloud segment during a year when digital transformation has been accelerated by up to six years due to work-from-home orders.

In this analysis, we discuss why Google (Alphabet) may have missed a critical window this year on the infrastructure piece. We also analyze how Microsoft directed its efforts to successfully close the wide lead held by AWS. Lastly, we look at how all three companies will bring the battle to the edge in an effort to maintain market share in this secular and fiercely competitive category.

The three leading hyperscalers in the United States have diverse origins. Amazon found itself serendipitously holding server space year-round that it could rent out and was first to market by a wide lead. Amazon continues to release customization tools and cloud services for developers at a fast clip and this past week was no exception.

Microsoft's roots in the enterprise created a direct path to upsell on-premises customers and become the leader in hybrid. The majority of the Fortune 500 is on Azure because they want seamless security and APIs regardless of the environment.

Google is one of the largest cloud customers in the world due to its search engine and mass-scale consumer apps, and therefore, is often first to create cloud services and architectures internally that later lead to widespread adoption, such as Kubernetes. Machine learning is another piece where Google was one of the first to require ML inference for mass-scale models.

Despite all three having very talented teams of engineers and various areas of strength, we see AWS maintaining its lead and Microsoft Azure firmly holding the second-place spot. Keep in mind that Azure launched one year after Google Cloud yet has 3X the market share and is growing at a higher rate.

Google Cloud grew two percentage points, from 5% to 7%, since 2018, while Azure grew four percentage points, from 15% to 19%, over the same period. In the past year, Google Cloud saw a 1% gain compared to Azure's 2% gain, according to Canalys.

Azure is reported under Microsoft's Intelligent Cloud segment, but the company does break out Azure's growth rate, which was 48%. Google does not break out GCP specifically, but the Google Cloud segment grew 45% year-over-year, compared with Microsoft Azure's 48%.

Amazon Web Services is growing at 29%, which is substantial considering the law of large numbers. In the past two quarters, Google Cloud reported 43% year-over-year growth, and 52% in the quarter before that. Microsoft has seen a slightly gentler deceleration from 51%, which is down from the 80% range of almost two years ago.

The key thing here is that when Microsoft held the percentage of market share that GCP currently holds, Azure was growing in the 80-90% range. That is the range we should be seeing from Google Cloud if the company expects to catch up to Azure.
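
A quick, back-of-the-envelope way to see why those growth rates matter: compounding bases proportional to the Canalys share points cited above, at the reported growth rates (used here purely as illustrative inputs, not a forecast), shows that a slower-growing GCP never closes the gap. The Python sketch below makes that explicit:

```python
# Illustrative compounding of the figures cited above; not a forecast.
gcp, azure = 7.0, 19.0                 # bases proportional to share points
gcp_growth, azure_growth = 0.45, 0.48  # reported year-over-year growth rates

years = 0
while gcp < azure and years < 50:
    gcp *= 1 + gcp_growth
    azure *= 1 + azure_growth
    years += 1

if gcp >= azure:
    print(f"GCP catches Azure after roughly {years} years")
else:
    print("At these rates GCP never catches Azure; it needs the 80-90% "
          "growth Azure posted when it held a comparable share")
```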

In 2020, the term "digital transformation" has become a buzzword, with cloud companies seeing up to six years of acceleration. Nvidia is a bellwether for this, with triple-digit growth in its data center segment in both Q2 and Q3. Despite this catalyst, Google has lagged the category in Q2 and Q3 in terms of both growth and percentage share of market. If there were any year that Google Cloud could pull ahead, it should have been this year.

Alphabet has emphasized that GCP is a priority and that the company will be aggressively investing in the necessary capex. However, the window of opportunity was wide open this year, and aggressive investments would ideally have been made in 2017-2018 to stave off Azure's 80-90% high-growth years.

There is no argument that Alphabet is an innovator within cloud and a leader in its own right. Across public, private and hybrid cloud, containers are used by 84% of companies, and 78% of those are managed with Kubernetes, which has risen in popularity along with cloud-native apps, microservices architectures and an increase in APIs. Kubernetes was first created by Google engineers because the company ran everything in containers internally, powered by an internal platform called Borg that generated up to 2 billion container deployments a week. This led to automated rather than manual orchestration and also forced a move away from monolithic architectures, since server-side changes were required.

Kubernetes also helps with scaling, because it allows scaling of the container that needs more resources instead of the entire application. Microservices date back to Unix, while Kubernetes, the automation layer around containers, is what Google engineers invented before releasing it to the Cloud Native Computing Foundation for widespread adoption.
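
To illustrate that point about scaling one workload rather than the whole application, here is a minimal sketch using the official Kubernetes Python client; the namespace, deployment name and replica count are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()        # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

# Scale only the service that needs more resources, leaving everything else alone.
apps.patch_namespaced_deployment_scale(
    name="checkout-service",     # hypothetical deployment
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```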

Just as Google was one of the first to need automated orchestration for containerized cloud-native apps, the company was also one of the first to require low-power machine learning workloads. Its compute-intensive workloads were running on Nvidia's GPUs for both training and inferencing until Google made its own processing unit, the Tensor Processing Unit (TPU), to perform the workload at lower cost and higher performance.

Performance between TPUs and GPUs is often debated and depends on the current release (the A100 versus fourth-generation TPUs is the current matchup). However, the TPU does have undisputedly better performance per watt for power-constrained applications. Some of this comes with the territory of being an ASIC, which is designed to do one specific application very well, whereas GPUs can be programmed as more general-purpose accelerators. The benchmarks where TPUs compete are object detection, image classification, natural language processing and machine translation, all areas where Google's product portfolio of Search, YouTube, AI assistants and Google Maps, for example, excels.

Notably, TPUs are used internally at Google to drive down the costs and capex of its own AI and ML portfolio, and they are also available to users of Google's AI cloud services. For example, eBay adopted TPUs to build a machine learning solution that could recognize millions of product images.

Unless Google releases an internal technology as open source, it won't be adopted by competitors. This is where Nvidia's agnosticism becomes a positive: its GPUs are used universally by Amazon, Microsoft and Google, as well as Alibaba, Baidu, Tencent, IBM and Oracle. TPUs, meanwhile, create vendor lock-in, which most companies want to avoid in order to get the best capabilities across multiple cloud operators (i.e., multi-cloud). eBay is the exception here, as the company needs Google-level object detection and image classification.

In a similar vein of Google building early for its own internal requirements, BigQuery is a superior data warehouse that competes with Snowflake (I cover Snowflake with an in-depth analysis here). BigQuery's serverless design makes it easier to get started, because it removes the need for manual scaling and performance tuning. Dremel is the query engine behind BigQuery.
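
For a sense of what a "serverless" data warehouse means in practice, here is a small sketch using the google-cloud-bigquery Python client against one of Google's public datasets: you submit SQL and the service allocates capacity, with no cluster to size or tune. It assumes Application Default Credentials are already configured:

```python
from google.cloud import bigquery

bq = bigquery.Client()
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
# Dremel executes the query behind the scenes; there is nothing to provision.
for row in bq.query(query).result():
    print(row.name, row.total)
```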

BigQuery has a strong following, with nearly twice the number of companies as Snowflake, and is growing at around 40%. Because AWS was a first mover and has a large cloud IaaS market share, Redshift has the biggest market presence, but its growth is nearly flat at 6.5%.

The point being, Google has important areas of strength and first-hand experience, whether in data analytics, machine learning and inference, or cloud-native applications at scale. Google's search engine and other applications are often the first globally to challenge current architectures and inferencing capabilities.

However, as the contrast between the most recent Google and Microsoft earnings calls shows, Google has a hard time prioritizing cloud over its bigger revenue drivers. Microsoft, meanwhile, takes a no-holds-barred approach with one singular focus: Azure.

The two calls could not have carried more contrast. Google focused primarily on Search and YouTube, adding in the last half of the call that GCP is where the majority of its investments and new hires are directed. Notably, one analyst wondered whether the capex investments would eat into margins and produce enough returns.

Microsoft, on the other hand, held an hour-long call that was nearly all Azure, including what the company is doing right now to capture more market share, a laundry list of large enterprises coming on board, and strategic partnerships to strengthen its second-place standing. The company's beginning, middle and end was Azure and cloud services.

Here is a preview of how the two opened:

"Thanks for joining us today. This quarter, our performance was consistent with the broader online environment. It's also a testament to the investments we've made to improve Search and deliver a highly relevant experience that people turn to for help in moments big and small. We saw an improvement in advertiser spend across all geographies, and most verticals, with the world accelerating its transition to online and digital services. In Q3, we also saw strength in Google Cloud, Play and YouTube subscriptions.

"This is the third quarter we are reporting earnings during the COVID-19 pandemic. Access to information has never been more important. This year, including this quarter, showed how valuable Google's founding product, Search, has been to people. And importantly, our products and investments are making a real difference as businesses work [indiscernible] and get back on their feet. Whether it's finding the latest information on COVID-19 cases in their area, which local businesses are open, or what online courses will help them prepare for new jobs, people continue to turn to Google Search.

"You can now find useful information about offerings like no-contact delivery or curbside pickup for 2 million businesses on Search and Maps. And we have used Google's Duplex AI technology to make calls to businesses and confirm things like temporary closures. This has enabled us to make 3 million updates to business information globally.

"We know that people's expectations for instant, perfect search results are high. That's why we continue to invest deeply in AI and other technologies to ensure the most helpful search experience possible. Two weeks ago, we announced a number of search improvements, including our biggest advancement in our spelling systems in over a decade, a new approach to identifying key moments in videos, and one of people's favorites, hum to search, which will identify a song based on your humming." - Sundar Pichai, Q3 2020 earnings call

Compare this to the tone of Microsoft's earnings call:

"We're off to a strong start in fiscal 2021, driven by the continued strength of our commercial cloud, which surpassed $15 billion in revenue, up 31% year-over-year. The next decade of economic performance for every business will be defined by the speed of their digital transformation. We're innovating across the full modern tech stack to help customers in every industry improve time to value, increase agility, and reduce costs.

"Now, I'll highlight examples of our momentum and impact, starting with Azure. We're building Azure as the world's computer with more data center regions than any other provider, now 66, including new regions in Austria, Brazil, Greece, and Taiwan. We're expanding our hybrid capabilities so that organizations can seamlessly build, manage, and deploy their applications anywhere. With Arc, customers can extend Azure management and deploy Azure data services on-premises, at the edge, or in multi-cloud environments.

"With Azure SQL Edge, we're bringing the SQL data engine to IoT devices for the first time. And with Azure Space, we're partnering with SpaceX and SES to bring Azure compute to anywhere on the planet.

"Leading companies in every industry are taking advantage of this distributed computing fabric to address their biggest challenges. In energy, both BP and Shell rely on our cloud to meet sustainability goals. In consumer goods, PepsiCo will migrate its mission-critical SAP workloads to Azure. And with Azure for Operators, we're expanding our partnership with companies like AT&T and Telstra, bringing the power of the cloud and the edge to their networks. Just last week, Verizon chose Azure to offer private 5G mobile edge computing to their business customers." - Satya Nadella, fiscal Q1 2021 earnings call (calendar Q3 2020)

The calls continue in a similar manner, with Microsoft making clear it has its entire weight behind cloud while Google must continue to cater to its largest revenue drivers, search and consumer. The main takeaway from Google's call is that the company is investing in GCP, rather than any claim of market dominance or growth. Here are a few examples:

"As we've told you on these calls, given the progress we're making, and the opportunity for Google Cloud in this growing global market, we continue to invest aggressively to build our go-to-market capabilities, execute against our product roadmap, and extend the global footprint of our infrastructure."

And another: "An obvious example is Cloud. We do intend to maintain a high level of investment, given the opportunity we see. That includes the ongoing increases in our go-to-market organization, our engineering organization, as well as the investments to support the necessary capex. So, hopefully, that gives you a bit more color there."

And also: "And the point that both Sundar and I have underscored is that we are investing aggressively in Cloud, given the opportunity that we see. And, frankly, given the fact that we were later relative to peers, we're encouraged, very encouraged, by the pace of customer wins and the very strong revenue growth in both GCP and Workspace. We do intend to maintain a high level of investment to best position ourselves. And I kind of went through some of those items, the go-to-market team, the engineering team, and capex. And so we describe this as a multi-year path because we do believe we're still early in this journey."

The question remains whether investing aggressively will have the same impact now that digital transformation has already been accelerated by up to six years. Nobody could have predicted COVID-19 and the work-from-home orders, but the growth rates on large revenue bases show that AWS and Azure were better positioned to answer the demand.

The race for cloud IaaS dominance is only beginning, and the hyperscalers are not resting on their laurels as they compete for the edge. Major strategic partnerships are being struck with telecom companies to break open new use cases for decentralized applications and increased connectivity. Google mentioned Nokia in its earnings call, while Microsoft mentioned AT&T, Verizon and Telstra. Amazon also has partnerships with Verizon and Vodafone. (For brevity's sake, you can assume every telecom company either is partnered or will be partnering with multiple hyperscalers for edge computing.)

Here is a breakdown of the buildout and how these strategic partnerships plan to profit from 5G. The result will be new use cases, such as remote surgery, autonomous vehicles, AR/VR and a significant number of internet-of-things devices that aren't feasible with 4G or with today's centralized cloud IaaS servers.

Amazon's edge computing technologies are being rapidly built out. For example, Wavelength is being embedded in Vodafone's 5G networks throughout Europe in 2021 after two years in beta. This will provide ultra-low latency for application developers, enabled by 5G. On Vodafone's end, it has developed multi-access edge computing (MEC) for both 4G and 5G networks to process data and applications at the edge, lowering processing time from roughly 50-200 milliseconds to 10 milliseconds. Amazon is also expanding its Local Zones to offer low latency in metro areas, from L.A. to about a dozen cities in 2021.

To support its retail business, AWS built out 200 points of presence where serverless processing like Lambda can run. The network latency map will be enhanced by telco partnerships, with each telco contributing roughly 150 PoPs.

Azure has the largest global footprint among the cloud providers. Where AWS has been the long-standing developer preference, Microsoft is the C-suite/enterprise-preferred company across the Fortune 500. Microsoft's goal will be to move compute closer to end users and to offer Azure-hosted compute and storage as a single virtual network with security and routing.

Microsoft excelled at hybrid as a strategy for taking market share (which I also detailed as the investment thesis for my position in Microsoft after the company missed Q3 2018 earnings and prior to winning the JEDI contract). Azure Edge Zones extends the current hybrid network platform so that distributed applications can work across on-premises environments, public and private edge data centers, and public and private Azure IaaS. This allows the same security and APIs to work seamlessly across these hybrid environments, aiming to combine the compute and storage capabilities of Azure with the speed and low latency of the edge.

Google is also partnering with telecom companies such as AT&T to deploy Google hardware inside AT&T's network edge to run AI/ML models and other software for 5G solutions. Similar to AWS and Azure, the goal is to open up new use cases for industries such as retail, manufacturing and transportation.

Anthos for Telecom is a Kubernetes-orchestrated infrastructure that can be deployed anywhere, including on an AWS cluster. In this way, Google's strategy continues to amplify its strength, containerized network functions, to merge edge and core infrastructure. This helps with decentralized applications and could potentially compete with network slicing, to the point where AT&T could use local breakouts to offer a cloud service tier a few years from now.

We've seen Google build some of the best products for developers, from automating microservices and container orchestration with Kubernetes to ASIC chips (TPUs) that compete with the likes of Nvidia. I'm not betting against Google's talented engineers by any means; I'm simply observing that the infrastructure piece is leaning toward a duopoly at this time. Cloud is expensive at the capex level, so if Google doesn't find its footing, the margins driven by ads could take a hit in the near term.

Who will lead software and AI applications is impossible to predict (and when), as the main competitors will be hundreds, if not thousands, of startups. With that said, I personally own Amwell because Google is a backer and I think health care is an example of a vertical where Google's experience with data can deliver a serious competitive edge. To be clear, Alphabet may have an advantage in AI/ML software, whereas this analysis is about infrastructure. Perhaps there will be a catalyst in the future for Google Cloud to take more share, but the strategy is not evident at this time.

Beth Kindig owns shares of Microsoft and Amwell, which are mentioned in this analysis. The information contained herein is not financial advice.

Google builds out Cloud with Actifio acquisition – Blocks and Files

Google is buying Actifio, the data management and DR vendor, to beef up its Google Cloud biz. Terms are undisclosed, but the price may have been on the cheap side.

Actifio has been through a torrid time this year. The one-time unicorn refinanced for an unspecified sum at a near-zero valuation in May. It then instituted a 100,000:1 reverse stock split for common stock, which crushed the value of employees' and ex-employees' stock options.

Financial problems aside, Google Cloud is getting a company with substantial data protection and copy data management IP and a large roster of enterprise customers.

Matt Eastwood, SVP of infrastructure research at IDC, provided a supporting statement: "The market for backup and DR services is large and growing, as enterprise customers focus more attention on protecting the value of their data as they accelerate their digital transformations. We think it is a positive move for Google Cloud to increase their focus in this area."

Google said the acquisition will "help us to better serve enterprises as they deploy and manage business-critical workloads, including in hybrid scenarios." It also expressed commitment to "supporting our backup and DR technology and channel partner ecosystem, providing customers with a variety of options so they can choose the solution that best fits their needs."

This all suggests Actifio software will still be available for on-premises use.

Ash Ashutosh, Actifio CEO, said in a press statement: "We're excited to join Google Cloud and build on the success we've had as partners over the past four years. Backup and recovery is essential to enterprise cloud adoption and, together with Google Cloud, we are well-positioned to serve the needs of data-driven customers across industries."

Actifio was started by Ashutosh and David Chang in July 2009. The company took in $311.5m in total funding across A, B, C, D and F rounds; the last was a $100m round in 2018 at a $1.3bn valuation.

Google Cloud says Actifio's software:

5 advantages of a cloud disaster recovery plan – BAI Banking Strategies

Once upon a time, maintaining a physical datacenter or cloud-based backup was an expensive proposition that only the largest financial institutions could afford. The cost of facilities, coupled with the management burden of keeping data in sync and up to date, became a budgetary black hole for many institutions.

However, thanks to advances in virtualization and cloud technologies, modern data recovery options are now affordable for most banks and credit unions seeking to update their disaster recovery plan (DRP).

Even though natural disasters like hurricanes, tornados and ice storms tend to be rare, they have the potential to cause catastrophic damage to organizations that find themselves unprepared. Cyberattacks and data breaches, on the other hand, occur with increasing frequency. According to a recent Verizon report, 58% of all data breaches in 2020 targeted personal data.

But maintaining a DRP isn't just good risk management; there are compliance considerations as well. Disaster recovery planning for financial institutions is still required by regulators: GLBA, FFIEC, EFA and a host of other compliance requirements specific to financial institutions increase the compliance liability of banks and credit unions nationwide.

Currently, financial institutions have a few options for storing and recovering data during a disaster: on-premises backups, a secondary datacenter, or cloud-based recovery.

While on-premises, secondary-datacenter and cloud disaster recovery options are all viable in today's data-first financial sector, the cloud option offers a few advantages to institutions of every size.
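
As a rough sketch of what the cloud option can look like at its simplest, the snippet below copies a backup into versioned object storage with boto3. The bucket name and file paths are placeholders, and a real DR plan would add encryption, retention policies and cross-region replication:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-bank-dr-backups"   # hypothetical bucket name

# Versioning keeps prior copies so a corrupted or ransomed file can be rolled back.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)
s3.upload_file(
    "/backups/core-ledger-2020-12-01.dump",   # placeholder local backup file
    bucket,
    "core-ledger/2020-12-01.dump",
)
```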

The bottom line is that managing and storing data in the financial sector is a dynamic challenge that will only increase as digital channels further expand. Cloud data recovery offers a flexible, cost-effective and scalable option in a disaster recovery plan.

Steven Ward is vCIO manager at Computer Services Inc.

Your Digital Transformation Will Be as Successful as the Foundation It's Built On – CMSWire

PHOTO: Chris Nguyen

Reaching an enlightened state of digital transformation is a different journey, often with a different destination, for every company. The phrase "digital transformation" itself is open to interpretation. Some define it as simply moving to SaaS apps or cloud infrastructure. The purer definition, however, is about reimagining business processes to drive greater business success in the fast-paced digital age.

Working in the software space, I have a front-row seat to many companies' varying quests toward digital maturity. We tend to expect our largest financial institutions, insurance companies and, to some degree, healthcare companies to be digitally transformed and cloud-ready. This is due in part to the nature of their businesses, with sensitive consumer information and massive volumes of data, and in part to the fierceness of the competition: if they don't evolve quickly, someone else will steal precious market share. So while they're not always striving to be innovative, they're highly conscious of remaining competitive.

The more than 30 million small businesses in the US, however, are always weighing the cost of digital transformation against the cost of maintaining business continuity and being happy (and profitable) with the status quo. Small businesses account for 99.9% of our economic engine and jobs, and they all fall in different places along the spectrum of digital maturity.

One thing most of them have in common? They were not ready for 2020.

Here are three big-picture lessons companies across the board learned during the digital transformation crash course we all got this year.

Being agile is arguably more about culture than about adopting specific processes. Agile companies are trained to fail fast, iterate and keep moving forward. This requires complete transparency and open communication across teams; getting negative results is critical to learning and growing.

Because agile companies are constantly in experimentation mode, where friction is a part of growth rather than something to avoid, their teams know how to adapt quickly when less-than-ideal circumstances present themselves, no matter their origin.

Companies that hadn't established agile cultures prior to the COVID-19 lockdowns had difficulty adapting to 2020. Growth and velocity were likely secondary to simply keeping up.

Related Article: The 3 Fundamental Pillars of Organizational Agility

We've all heard stories of the critical server under someone's desk that's responsible for a core business function. For companies that still operate this way, whether due to slowness to adapt or fear that cloud servers aren't secure enough, 2020 was extremely stressful when they learned they weren't allowed in the office the next morning.

There are many areas that can be overlooked when you haven't embraced the cloud. For instance, do you use on-premises source control on servers with out-of-date hardware? Is your CRM a modern SaaS tool like Salesforce, or is it a legacy client-server product? Are there third-party connectors or adapters you have built internally over the years that aren't cloud-ready, for example a C++ plug-in that works perfectly on your LAN but has no Web API exposing the same functions over HTTP?
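
To make that last example concrete, here is a rough standard-library Python sketch of the missing piece: wrapping a LAN-only routine (stubbed out below, and with hypothetical endpoint and field names) in a small Web API so it can be reached over HTTP from outside the office network:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def legacy_lookup(order_id: str) -> dict:
    # Stand-in for the native, LAN-only routine described above.
    return {"order_id": order_id, "status": "shipped"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /orders/12345 -> JSON response
        order_id = self.path.rstrip("/").split("/")[-1]
        body = json.dumps(legacy_lookup(order_id)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```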

Every one of us can relate to this, because it wasn't too long ago that most organizations, even the largest enterprises, were still vulnerable to these issues. The takeaway is that on-prem and legacy systems can have a significant adverse effect when unforeseen emergencies happen in the physical environment.

Related Article: The Role of Distributed Cloud Computing in the Enterprise

While distributed teams aren't a new concept for most of us, a 100% remote workforce is.

SaaS communication and collaboration tools had already become part of some companies' everyday processes over the past few years, but the majority of businesses were still slow to adopt them. Many remained dependent on drive-by hallway conversations between team members or executives to get things done. For these companies, casual kitchen conversations that led to major business breakthroughs were the norm and were instilled in their culture.

Now, with workforces still mostly remote, driving business performance will depend on digital platforms that align teams and the company on goals, enable decision-making with analytics, hold teams and individuals accountable, and keep everyone in the know.

The takeaway? Establish a data culture. Use data to drive decision making and understand the data behind your key metrics.

Doing this will increase your team's performance by letting you prioritize the initiatives that help you meet larger organizational goals.

If I were to sum all of this up in a single thought, it would be: you must keep up with the rapid pace of technological change or you will not survive. What was once optional is now critical.

Jason is the SVP of Developer Tools at Infragistics, where for 16 years he's held roles at the intersection of tech evangelism and product management. He and his team spearhead the customer-driven, innovative features and functionality throughout all Infragistics testing, developer and user experience products.

3 cloud computing trends to watch in 2021 – TechHQ

Technology has enabled businesses to continue operating this year, and the cloud has taken center stage. Going forward, it will only play a larger role in the enterprise.

According to CloudTech, public cloud spending is expected to grow from US$229 billion in 2019 to US$500 billion in 2023, with a compound annual growth rate (CAGR) of 22.3%.

Key players Amazon Web Services (AWS), Microsoft Azure, Google Cloud, and Alibaba are expected to grow yet bigger, and by the end of 2021, 60% of companies will leverage containers on public cloud platforms and 25% of developers will use serverless.

As this market continues its growth and evolution, here are three trends to watch in 2021.

Serverless computing runs functions in the cloud only as they are needed. Enterprises rely on serverless computing because it frees them to work on core products without the pressure of operating or managing servers. Microsoft CEO Satya Nadella favors the serverless cloud, believing that serverless computing will not only handle back-end computing but also become key to the near future of distributed computing.

Serverless was among the top five fastest-growing PaaS cloud services for 2020, according to the Flexera 2020 State of the Cloud report.
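
For readers new to the model, the sketch below shows what serverless looks like to a developer, using the handler shape AWS Lambda expects for Python functions. The event fields are placeholders; the provider takes care of provisioning, patching and scaling:

```python
import json

def handler(event, context):
    # `event` carries the trigger payload (an HTTP request, a queue message,
    # an uploaded object, ...); the platform supplies it at invocation time.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```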

Choosing between public, private or hybrid cloud environments has proved challenging for some organizations; each offers advantages and disadvantages when it comes to flexibility, performance, security, and compliance.

According to Gartner, 58% of companies were using hybrid cloud in 2019, up 6 percentage points from 2018. Hybrid cloud benefits include speed, control, and security. In terms of speed, it optimizes the network to reduce latency and move data faster to where it needs to be. In terms of control, companies can customize their end of the hybrid cloud model, optimizing and adjusting it to their needs rather than entrusting everything to a third-party cloud provider.

The continued demand for hybrid cloud could lead the world's biggest providers to partially break out of their walled-garden approach. By collaborating and introducing some interoperability, they can continue to satisfy multi-cloud demands. This will enable better data sharing and access between partners, who may be working across diverse applications and standards.

A virtual cloud desktop means the software requirements of a device are fully managed by cloud service providers. The user just needs a screen and some basic hardware, while the rest of the processing is handled seamlessly by cloud-based services.

Virtual cloud desktop users pay only for cloud usage, eliminating the costs of acquiring powerful new hardware, updating existing hardware, and disposing of redundant computing equipment.

Sometimes known as desktop-as-a-service, this model of computing is offered by Amazon via the WorkSpaces platform and by Microsoft with Windows Virtual Desktop. Google also offers similar functionality through its Chromebook devices. In practice, this can increase efficiency across a workforce by ensuring everyone is using up-to-date, synchronized technology.
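
As a small taste of how that model is driven programmatically, the sketch below lists existing Amazon WorkSpaces virtual desktops with boto3; it assumes AWS credentials are already configured and that some WorkSpaces exist in the account:

```python
import boto3

ws = boto3.client("workspaces")

# Each entry is a cloud-hosted desktop assigned to a user; no local
# hardware beyond a screen and a thin client is required.
for desktop in ws.describe_workspaces()["Workspaces"]:
    print(desktop["WorkspaceId"], desktop["UserName"], desktop["State"])
```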

Kubernetes: What You Need To Know – Forbes

Kubernetes is a system that helps with the deployment, scaling and management of containerized applications. Engineers at Google built it to handle the explosive workloads of the company's massive digital platforms. Then, in 2014, the company made Kubernetes available as open source, which significantly expanded its usage.

Yes, the technology is complicated, but it is also strategic. This is why it's important for business people to have a high-level understanding of Kubernetes.

"Kubernetes is extended by an ecosystem of components and tools that relieve the burden of developing and running applications in public and private clouds," said Thomas Di Giacomo, who is the Chief Technology and Product Officer at SUSE. "With this technology, IT teams can deploy and manage applications quickly and predictably, scale them on the fly, roll out new features seamlessly, and optimize hardware usage to required resources only. Because of what it enables, Kubernetes is going to be a major topic in boardroom discussions in 2021, as enterprises continue to adapt and modernize IT strategy to support remote workflows and their business."

In fact, Kubernetes changes the traditional paradigm of application development. "The phrase 'cattle vs. pets' is often used to describe the way that using a container orchestration platform like Kubernetes changes the way that software teams think about and deal with the servers powering their applications," said Phil Dougherty, who is the Senior Product Manager for the DigitalOcean App Platform for Kubernetes and Containers. "Teams no longer need to think about individual servers as having specific jobs, and instead can let Kubernetes decide which server in the fleet is the best location to place the workload. If a server fails, Kubernetes will automatically move the applications to a different, healthy server."

There are certainly many use cases for Kubernetes. According to Brian Gracely, who is the Sr. Director of Product Strategy at Red Hat OpenShift, the technology has proven effective for:

Now, all of this is not to imply that Kubernetes is an elixir for IT. The technology does have its drawbacks.

"As the largest open-source platform ever, it is extremely powerful but also quite complicated," said Mike Beckley, who is the Chief Technology Officer at Appian. "If companies think their private cloud efforts will suddenly go from failure to success because of Kubernetes, they are kidding themselves. It will be a heavy lift to simply get up to speed because most companies don't have the skills, expertise and money for the transition."

Even the setup of Kubernetes can be convoluted. "It can be difficult to configure for larger enterprises because of all the manual steps necessary for unique environments," said Darien Ford, who is the Senior Director of Software Engineering at Capital One.

But over time, the complexities will get simplified. It's the inevitable path of technology. And there will certainly be more investments from venture capitalists to build new tools and systems.

"We are already seeing the initial growth curve of Kubernetes with managed platforms across all of the hyperscalers, like Google, AWS and Microsoft, as well as the major investments that VMware and IBM are making to address the hybrid multi-cloud needs of enterprise customers," said Eric Drobisewski, who is the Senior Architect at Liberty Mutual Insurance. "With the large-scale adoption of Kubernetes and the thriving cloud-native ecosystem around it, the project has been guided and governed well by the Cloud Native Computing Foundation. This has ensured conformance across the multitude of Kubernetes providers. What comes next for Kubernetes will be the evolution to more distributed environments, such as software-defined networks, extended with 5G connectivity that will enable edge and IoT-based deployments."

Tom (@ttaulli) is an advisor/board member to startups and the author of Artificial Intelligence Basics: A Non-Technical Introduction and The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems. He has also developed various online courses, such as for the COBOL and Python programming languages.

Amazon Web Service Explains Its Major Outage, And Other Small Business Tech News – Forbes

(Photo by Pedro Fiza/NurPhoto via Getty Images)

Here are five things in technology that happened this past week and how they affect your business. Did you miss them?

1 Amazon Web Services revealed what caused the major outage last week.

Many AWS operations were impacted last week by an outage in the Northern Virginia region that followed an addition of capacity to its Kinesis servers, which are used by other AWS services such as Cognito and CloudWatch as well as by developers. The capacity addition set off the outage, but it was not the sole reason for it. As capacity was being added, the front-end fleet servers started to exceed the number of threads permitted by the operating system and, when the maximum was reached, a domino effect created the outage. (Source: ZDNet)

Why this is important for your business:

Just to remind you: big brands like Netflix, Twitch, LinkedIn, Facebook and many others rely on AWS to deliver their cloud-based products and services. It's an $8 billion business for Amazon and a major part of the company's future strategic plans. And yet, even with all that Amazon money, those resources and that technical know-how, it still went down. The cloud is powerful. But it's also not infallible.

2 Shopify merchants generated record sales of $5.1 billion over the holiday weekend.

Merchants on the popular e-commerce platform Shopify broke Black Friday and Cyber Monday records, pulling in $5.1 billion for the holiday shopping weekend. In 2019, Shopify set a $2.9 billion record, which was broken this year by 5 p.m. on the Saturday after Black Friday, a 76 percent increase year-over-year. According to data released by Shopify, online sales also ramped up 19 days earlier than in years past, with an 84 percent jump in sales during the week of Thanksgiving. Weekend sales peaked on Black Friday by 12 p.m., reaching $102 million within a one-hour window. (Source: Motley Fool)

Why this is important for your business:

The smartest small business owners, retailers especially, who pivoted to ecommerce this year are reaping the benefits. It's platforms like Shopify and their competitors that enabled many small firms to stay in business and even profit during this unprecedented recession. This is not a trend. This is permanent.

3 Small businesses who pivoted to e-commerce saw record sales during Black Friday weekend.

Due to the difficulties the coronavirus pandemic brought on for retailers, nearly a fourth of small businesses had to close. The businesses that survived shifted to online sales in hopes of continuing operations. Many businesses that used to rely on in-store offers and special customer experiences have had to pivot to rely more heavily on their websites, with many offering pick-up and online ordering. According to information released by Adobe Analytics, small businesses have seen a 110 percent average increase in their online sales throughout the 2020 holiday season so far, with a big boost coming during Black Friday weekend. (Source: NBC News)

Why this is important for your business:

Uh, see number 2 above, please.

4 Small business digital platform GoSite landed $40 million.

GoSite, a small business digital platform, recently received $40 million in funding. The funds will go toward hiring needed personnel and will help develop and add more features and offerings for small businesses. (Source: Pymnts)

Why this is important for your business:

It's not just ecommerce that's helping small businesses navigate Covid; it's the ability to leverage all the digital tools that help a business grow. With so many small businesses needing to transition online due to the coronavirus, GoSite helps small businesses that operate online manage customer transactions as well as payments, bookings, reviews, messages, websites and listings, all in one place. Over the last year, the platform has doubled its users.

5 Microsoft Teams got an overhauled calling interface, CarPlay support, and more.

Microsoft announced changes and upgrades to calling features within Teams this past week, including updates to CarPlay support and the calling interface, to name a few. Some of these and additional features will be ready early next year. (Source: The Verge)

Why this is important for your business:

Get your people ready. The new calling interface will include calling history, voicemail and a contact list in one location, with the hope that Teams will be able to replace the traditional desk phone. Users will soon be able to transfer calls between desktop and mobile, allowing more mobility as many continue to work from home due to the pandemic. CarPlay will also make it easier for users to make or answer calls using Siri.

How AWS is computing the future of the cloud – SiliconANGLE News

The highlight of Amazon Web Services Inc.'s annual re:Invent conference is always the marathon three-hour keynote by Chief Executive Officer Andy Jassy, and despite the show going virtual this year, it'll be the same today.

As he always has, the longtime leader of Amazon.com Inc.'s cloud computing company will debut a dizzying array of new and upgraded services, which AWS is keeping close to the vest until Jassy's keynote starts at 8 a.m. PST today.

But in an exclusive video conversation, Jassy (pictured) revealed more than a few hints of what's coming: He offered a deep dive into the processors AWS's Annapurna operation has designed, the challenges and opportunities of serverless computing, its plan to democratize artificial intelligence and machine learning, and its let-a-thousand-flowers-bloom strategy for purpose-built databases. The conversation was lightly edited for clarity.

Look for more strategic and competitive insights from Jassy in my summary and analysis of the interview, as well as in two more installments coming in the next week or so, and in the first part that ran Monday. And check out all the re:Invent coverage through Dec. 18 by SiliconANGLE, its market research sister company Wikibon and its livestreaming studio theCUBE.

Q: What's your vision on the compute piece of cloud at AWS?

A: There are three basic modes of compute that people run. There are instances, which is really the traditional way people run compute, and often what people choose when they want deeper access to the resources on the machine. Then there are the smaller units of compute that are really coming on strong and growing at a very rapid clip: containers. And there's serverless. I think all three are here to stay and are all growing at a very rapid rate.

If I look at instances, we have an unmatched array of instances, not just in the number of instances but in the detail. We have the fastest networking instances, with 400 gigabit-per-second capabilities, the largest high-memory instances at 24 terabytes, and the largest storage instances. We've got the most powerful machine learning training instances and the most powerful inference instances. We have a very broad array of instances, with more coming in the next few weeks that will further extend that differentiation.

But I think that one of the things our customers are really excited about, and it's changing the way they think about compute on the instance side, is the Graviton2 chips that we have built and launched in families like our R6G, M6G and T4G. We've used Intel and AMD processors in our instances for a long time, and I actually expect that we will for a very long time. Those partnerships are deep, they matter and they will be around for a long time. But we know that if we want to continue to push the price-performance of these chips, and our customers want us to, we're going to have to design and build some of those chips ourselves.

Q: What's behind doing your own chip designs?

A: We bought a business, the Annapurna business, and they were a very experienced team of chip designers and builders. We put them to work on chips that we thought could really make a big difference to our customers. We started with generalized compute, and we built the first Graviton chips for the A1 instances we launched a few years ago, which really were for scale-out workloads like the web tier or microservices, things like that. It was 30% better price-performance and customers really liked them, but there were some limitations to their capabilities that made them much more appropriate for a smaller set of workloads.

We didn't know how fast customers would pick up those Graviton chips, but they adopted them a lot quicker than we even thought. And customers said, "Can you build a version of that chip that allows us to run all our workloads on it?" That's what we did with Graviton2, and if you look at the performance of what we've delivered with Graviton2 chips in those instances I mentioned, it's 40% better price-performance than the latest processors from the large x86 providers. That's a big deal. So we have customers trying to move as many workloads as they can, as quickly as possible, to these Graviton2 instances.
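
For teams weighing such a move, a first step might be discovering which instance types are Arm-based. Here is a hedged boto3 sketch of that check; it assumes configured AWS credentials and simply filters instance types by supported architecture:

```python
import boto3

ec2 = boto3.client("ec2")
arm_types = []

# Page through all instance types, keeping only those built on arm64
# (the architecture of the Graviton families mentioned above).
paginator = ec2.get_paginator("describe_instance_types")
for page in paginator.paginate(
    Filters=[{"Name": "processor-info.supported-architecture", "Values": ["arm64"]}]
):
    arm_types.extend(t["InstanceType"] for t in page["InstanceTypes"])

print(sorted(arm_types)[:10])   # expect families such as m6g.*, r6g.*, t4g.*
```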

Q: The Annapurna team doesn't get a lot of public attention. What else has it been doing?

A: We've also put that Annapurna team to work on some hard machine learning challenges. We felt like training was something that was reasonably well covered. The reality is that when you do big machine learning models at scale, 90% of your cost is on the inference, or the predictions. So we built a chip to optimize inference, called Inferentia, and that is already growing incredibly quickly. Alexa, which is one of the biggest machine learning models and inference machines around, already has 80% of its predictions being made through Inferentia. That's saving it 30% on cost and 25% on latency.

So we're continuing to build chips. We have the scale, the number of customers and the input from customers that allow us to optimize for the workloads that really matter to them.

Q: Turning to containers, what's your strategy given all the competition there?

A: Most providers have a single container offering, which is a managed Kubernetes offering. But we realized with builders that they don't believe in one tool to rule the world. Different developers and teams have different tastes, different interests and different needs. So if you're a developer who wants to optimize for the open-source Kubernetes framework, you'll use our Elastic Kubernetes Service (EKS), and it's growing incredibly quickly. If you're somebody who wants to optimize for the container framework with the deepest integration with AWS, you'll work with our Elastic Container Service, or ECS, because since we own it, we can launch everything right from the get-go without having to run it through anybody else. So you have deep integration.

And if you're running containers without thinking about servers or clusters, then you run Fargate. By far, the largest number of net new container customers to AWS get going using Fargate, because it's serverless and they don't have to worry about it. You and I talked about it a few years ago on theCUBE. I remember on your show I said that if Amazon were starting from scratch today, we would build it on top of Lambda and on top of our serverless services. I think it was Stu [Miniman] who said, "You're not serious about that." I said, "Oh no, I am serious about it."

Q: How serious are developers about it?

A: In 2020, half of the new applications that Amazon built were built on top of Lambda compute. I think the next generation of developers are going to grow up building in this serverless fashion, which is a combination of having an event-driven, serverless computing service like Lambda and a bunch of enabling services like API Gateway, our Event Bus, Event Watch and things like Step Functions for orchestration workflow, but also all the services that can set event-driven serverless triggers. We have 140 services at this point, which is seven times more than anybody else has.

You can really build end-to-end serverless workflows and applications that you couldn't a few years ago. I think compute is totally being reinvented, and we're working hard to give customers capabilities that are better, more cost-effective and more agile.
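
To make the event-driven pattern concrete, here is a minimal sketch of a Python Lambda handler responding to an API Gateway proxy event. The function behavior and payload fields are illustrative assumptions, not anything specified in the interview.

```python
import json

def lambda_handler(event, context):
    """Minimal sketch of an event-driven Lambda handler.

    Assumes the function is invoked through an API Gateway proxy
    integration, which passes query string parameters in the event.
    """
    # Pull an optional "name" query parameter out of the API Gateway event.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # Return a response in the shape the proxy integration expects.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```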

Q: One surprise success is the partnership between AWS and VMware, which many people back in 2016, when it was announced by you and VMware CEO Pat Gelsinger, thought was VMware simply capitulating to Amazon. How's it going, and does it represent a trend?

A: You're right that VMware Cloud on AWS, or VMC as the offering is called, has been a huge success for both VMware and for AWS. There have been a lot of offerings that purported to make it easier to do hybrid but really were a lot of hype and no traction. The traction on VMC is very significant: even just in the last year, double the number of nodes, double the number of VMs, and big enterprises making their move to the cloud through VMC together.

Because most of the world is virtualized on VMware, being able to use the same tools that you've used to run your infrastructure on VMware for many years to deploy and manage your AWS deployments is super-attractive. That's why it's compelling, but I will tell you that everybody aspires to have partnerships like that. We have a lot of them and lots of other companies do, but I would say that there aren't that many partnerships that work as well as the way it's working with VMware and AWS.

Q: Why did it work out that way?

A: It takes both companies really being willing to lean in and to commit engineering resources together, to build something and to get your [people in the field] connected. You can't just make a press release and then let it go, fire and forget. Those teams are meeting all the time at every level. And both Pat and I are very passionate and supportive and prioritize it. And we meet frequently together and with our teams. And I think those teams really function as one. I think our customers are seeing that. Even if you aspire to have a unique partnership like that, it takes a lot of work.

Q: How are developers viewing AWS now? How would you grade yourself in terms of ease of developer use and developer satisfaction?

A: Well, if you rate it based on how people are voting with their workloads and the number of applications and workloads people are running on AWS, I think we're doing quite well. But I would also argue that's not necessarily the right bar. I would say that we are perpetually dissatisfied; we want to make things as easy as possible for developers and to abstract away as much of the heavy lifting as we can. And I think we're going to be working on that forever.

If you look at containers and serverless, the smaller units of compute that more and more customers are moving to, we've radically improved how easy it is to get going. Customers are moving really fast, and Fargate is a totally different serverless offering that nobody else has. But I still think we have a long way to go to get to where we want to be.

Q: What are some of the challenges of serverless?

A: If you use containers and serverless together, or I'll say Lambda on the compute side, you actually want to be able to deploy both of them from the same tools. Nobody's made that easy. No one's made it possible today. Just think about the challenge, John, in the difference between deploying containers and deploying traditional instance-based servers. A traditional instance-based application is one code base. You use infrastructure-as-code tools like CloudFormation, you build a CI/CD pipeline to deploy it. It's a block of code. If you have to change it, you change the block of code.

That's very different from containers, where people are building in these smaller chunks, really microservices, where they all have their own code and their own CI/CD pipelines. There are lots of teams operating on the pieces that end up comprising a full application. For just one application with all those microservices, it's really hard to keep them consistent, to deploy in a high-quality way and to track what everybody's doing that contributes to that application. And there are loads and loads of those. There are no tools today, really no tools, that do that well. And that's something that really matters to developers and something we're working on.
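
As a rough illustration of the gap described above, the sketch below uses boto3 to update a Lambda function and force a redeploy of a Fargate-backed ECS service in one script, two separate APIs that teams currently have to stitch together themselves. All resource names are hypothetical.

```python
import boto3

lambda_client = boto3.client("lambda")
ecs_client = boto3.client("ecs")

def deploy(lambda_zip_path: str) -> None:
    """Sketch of a combined deploy: update Lambda code, then roll the ECS service.

    The function, cluster and service names below are placeholders,
    not anything from the interview.
    """
    # Push new code to the serverless half of the application.
    with open(lambda_zip_path, "rb") as f:
        lambda_client.update_function_code(
            FunctionName="orders-api",      # hypothetical function name
            ZipFile=f.read(),
        )

    # Force the container half (a Fargate-backed ECS service) to redeploy
    # so it picks up the latest image referenced by its task definition.
    ecs_client.update_service(
        cluster="orders-cluster",           # hypothetical cluster name
        service="orders-service",           # hypothetical service name
        forceNewDeployment=True,
    )

if __name__ == "__main__":
    deploy("build/orders-api.zip")
```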

Q: Last year on theCUBE, we were riffing on the fact that there are going to be thousands of databases out there, not one database to rule the world. First, I wanted to ask how customers are viewing database licensing issues that may affect which clouds they use.

A: For many years, most companies only ran on relational databases. And when you were in the neighborhood of gigabytes and sometimes terabytes of data, that might've been OK. But in this new world, where we're in the neighborhood of terabytes and petabytes, and even sometimes exabytes, those relational databases are not appropriate for a lot of those workloads.

If you look at the commercial-grade relational databases, which really had all of the workloads on them back when people were running relational for everything, they're basically Oracle and Microsoft SQL Server. And if you look at those two companies and those offerings, the offerings are expensive, proprietary and have high amounts of lock-in.

And then they have licensing terms that are really punitive, where they're constantly auditing their customers. And when they find things, they try to extract more money from them, or they'll let you off the hook if you buy more from them. And I think those companies have no qualms about changing the licensing terms midstream to benefit themselves.

Q: Examples?

A: Just look at what Microsoft did with SQL Server over the last year or two, where they basically told customers who had bought SQL Server licenses that they couldn't use them on any cloud other than Microsoft's. Now, is that better for customers? Hell no. Is it better for Microsoft? I think they think so. I happen to think it's really short-term thinking, because customers really resent that. And as quickly as they can flee, they will.

But I think customers in general are really fed up and sick of these commercial-grade, old-guard database providers who change the licensing terms and the pricing whenever they want to suit themselves. And I think it's why so many companies have moved as quickly as they can to open-source engines like MySQL. It's why we built Aurora, which is 100% compatible and has editions for MySQL and PostgreSQL. That's why it's been the fastest-growing service five, six years running.

So I think that customers are fed up with it. They're moving as fast as they can. We have an accelerating number of customers who are looking to move away, not just from Oracle but from SQL Server, because they're really sick of what's happening and they don't trust those companies anymore.

Q: More broadly, what's up with all these new databases from Amazon and others, and what's the value for customers?

A: In this new world, there's so much more data. What's happened over time is that people have realized that relational databases are more expensive, more complex and more than they need for a lot of use cases, and that they're better off with these purpose-built databases, like key-value stores, in-memory databases, graph databases, time series databases, document databases, all those types of things.

Most companies have got these central data lakes to run analytics and machine learning. And yet at the same time they're using more and more of these purpose-built databases and purpose-built analytics services like Athena and Redshift and EMR and Kinesis and things like that. A lot of customers are trying to come to grips with, "How do I think about having this data in the middle and this data in all of these external nodes, which I need for a lot of my applications for operational performance?"

What a lot of customers are asking for help on is how to move that data from the inside out, from the outside in, and from those purpose-built databases on the outside, along the perimeter, to other outside databases. Because if you can actually take some of those same views from databases and materialize them into other spots, that opens up all kinds of opportunities which today are really arduous and hard to do. And that's another area that we're really squarely focused on.
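
For a sense of what one of those purpose-built stores looks like in practice, here is a minimal boto3 sketch against DynamoDB, a key-value and document database of the kind described above. The table and attribute names are hypothetical.

```python
import boto3

# Hypothetical table with a composite key of device_id (partition)
# and timestamp (sort); DynamoDB is one example of a purpose-built,
# key-value/document store.
table = boto3.resource("dynamodb").Table("DeviceReadings")

# Write a single item keyed by device ID and timestamp.
table.put_item(
    Item={
        "device_id": "sensor-42",
        "timestamp": "2020-12-01T12:00:00Z",
        "temperature_c": 21,  # numbers, strings, maps and lists are all supported
    }
)

# Read it back by its full primary key; fast keyed lookups like this are
# the kind of operational performance purpose-built stores are chosen for.
item = table.get_item(
    Key={"device_id": "sensor-42", "timestamp": "2020-12-01T12:00:00Z"}
)["Item"]
print(item["temperature_c"])
```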

Q: One of the things we've always said is that the huge thing about cloud is horizontal scalability. You can have purpose-built databases, but if you can tie them together horizontally, that's a benefit, and you can still have vertical specialty for the application. So are the old-guard databases, these old mission-critical workloads, going to be replaced or cloudified, or what?

A: An accelerating number of companies are not just building their new databases from the get-go on top of things like Aurora or the purpose-built databases we have, but are migrating away from those old-guard databases as fast as they can. Since we built our Database Migration Service, more than 350,000 databases have been moved.

The Database Migration Service actually makes it quite doable to move the data and the database to a different engine, and the Schema Conversion Tool we have allows you to move those schemas. The last piece customers really want help with is how to move the application code that's unique to some of these databases, because some of these old-guard databases have built unique dialects that work only with their particular database engine.
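
As a rough sketch of what kicking off such a migration looks like programmatically, the boto3 call below creates a Database Migration Service replication task that does a full load followed by ongoing change data capture. Every ARN and name is a placeholder, and the source endpoint, target endpoint and replication instance are assumed to already exist.

```python
import json
import boto3

dms = boto3.client("dms")

# All ARNs below are placeholders; a real migration requires the source and
# target endpoints and a replication instance to have been created first.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="orders-oracle-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    # Do an initial full load, then keep replicating ongoing changes (CDC)
    # so the application can be cut over with minimal downtime.
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-orders-schema",
            "object-locator": {"schema-name": "ORDERS", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
print(task["ReplicationTask"]["ReplicationTaskArn"])
```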


Read the original:
How AWS is computing the future of the cloud - SiliconANGLE News

Read More..