Going from COBOL to Cloud Native

Virtually every technology publication these days is full of cloud stories, often about the success that has been achieved by webscale companies doing amazing things exclusively in the cloud. Unlike Netflix, Twitter and Facebook, however, most companies have a heritage that predates the availability of cloud computing.

Mark Hinkle

Mark has a long history in emerging technologies and open source. Before co-founding TriggerMesh, he was the executive director of the Node.js Foundation and an executive at Citrix, Cloud.com and Zenoss, where he led their open source efforts.

Unlike these younger companies, which had the benefit of growing to maturity in the cloud native era, there are myriad companies that may feel they are held hostage by legacy infrastructure that can't be migrated to the cloud for reasons of risk, compliance or compatibility.

Just because you have a legacy investment that would be disruptive to move doesn't mean you can't adopt cloud or cloud native systems that enable new digital initiatives while still capitalizing on those legacy investments. However, it does mean you need to find ways to integrate in a nondisruptive way.

There are a few practices you can put in place to get to a cloud native environment while still using your existing legacy investment. I advocate adopting cloud native practices and architecture patterns incrementally, which often starts with applying cloud computing architecture patterns on premises.

In the early days of the internet, the idea of stacks was prevalent. For delivering web-based services, Microsoft had the WISA stack (Windows, IIS, SQL Server and ASP) and open source users had the LAMP stack (Linux, Apache, MySQL, PHP). The LAMP stack was the most democratic, allowing you to choose the vendor for each layer, while a single-vendor stack gave you one throat to choke should something go awry. That freedom to choose the layers of the stack is a benefit many users of legacy technology may not realize today.

When you look at today's applications, the gold standard for reliability is Java. However, you need to manage the JVMs, tune the stack and rely on garbage collection to manage memory, and you need an application server to serve the instances. By taking a container-based approach to running individual services, you can leverage Kubernetes and Knative (both housed in the CNCF), which simplify things by automatically scaling containers up and down as needed.

Kubernetes and containers make application environments portable from on premises to the cloud and back again. One example of how you could get the best of both worlds is Spring Boot, an open source framework for Java developers aimed at cloud native deployments; Spring Boot applications can be packaged in containers that run on premises with Kubernetes or in the cloud.
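
To make that concrete, here is a minimal sketch of such a service. It assumes the standard spring-boot-starter-web dependency; the package, class name and endpoint are illustrative, not taken from any particular application.

```java
// A minimal Spring Boot service that can be packaged into a container image
// and run unchanged on premises under Kubernetes or on a cloud provider.
// Assumes the spring-boot-starter-web dependency; names here are illustrative.
package com.example.legacybridge;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class LegacyBridgeApplication {

    public static void main(String[] args) {
        SpringApplication.run(LegacyBridgeApplication.class, args);
    }

    // A simple status endpoint; in practice this service would front
    // calls into existing back-end systems.
    @GetMapping("/status")
    public String status() {
        return "{\"status\":\"ok\"}";
    }
}
```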

Using composable infrastructure is the best practice: taking the best technologies and solutions to build systems that are decoupled but well integrated. Gartner describes the composable enterprise as a business made from interchangeable building blocks that follows four principles: modularity, autonomy, orchestration and discovery. What is often overlooked is that any system or application can benefit from composability. Anything can be part of composable infrastructure, not just cloud services.

We experience batch processing every day. Our banks typically process our deposits overnight, and we don't see them in our banking app until after the batch runs. The same applies to utilities that process usage on a monthly basis; we only see our consumption once a month.

Batch processing was used because the load placed on the data warehouse could potentially interrupt or slow down business operations. So the goal is to move to an architecture that increases the speed of data delivery without interrupting current business operations. That's where extract, load and transform (ELT) and event-driven architecture (EDA) can help.

We often use the terms replicating data and syncing data interchangeably. Technically, there's an important difference. Replication implies a copy of the data (or some subset thereof) is maintained to keep the data closer to the user, often for performance or latency reasons. Synchronization implies that two or more copies of data are kept up to date, but not necessarily that each copy contains all the data, though there is the idea that some consistency is maintained between the data sources.

Using an event-streaming technology like Apache Kafka, you can replicate data from read-only data producers (databases, ERP systems and the like), keeping your attack surface smaller since you aren't granting write access to the database. You can also choose to replicate only what's needed for other systems, like mobile apps, web portals and other customer-facing systems, without having them place load on the canonical database.
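
As a rough sketch of the consuming side, the following uses the standard Apache Kafka Java client to read replicated change events and forward only the subset a customer-facing system needs. The topic name, consumer group and payload shape are hypothetical.

```java
// Sketch: consume change events replicated from a read-only producer and keep
// only what a customer-facing system needs. Uses the Apache Kafka Java client;
// the topic, group id and payload format are hypothetical.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AccountChangeConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "web-portal-replica");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("mainframe.accounts.changes"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Forward only the subset the portal needs; the canonical
                    // database never sees this read traffic.
                    System.out.printf("account %s changed: %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```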

Figure 1.1 Extract, transform, and load versus extract, load, and transform

When you look at any major cloud provider, the pattern of event-driven architecture is prevalent. In AWS, for example, services are decoupled and run in response to events. Event-driven architectures are made up of three types of components: event producers, event consumers and an event router.

While AWS deals exclusively in services, your enterprise likely has things like message buses and server software that logs activity on the server. These systems can be event producers. They can be streamed via Kafka or consumed from your log server directly by an event router. For this use case, I suggest the project I work on, the open source TriggerMesh Cloud Native Integration platform, to connect, split, enrich and transform these event sources.

For example, you can forward messages from your mainframe over the IBM MQ message bus to integrate your legacy systems with cloud services like Snowflake. Using the event payloads, you can replicate data without placing additional load on the producer, and you can transform or enrich each event on the fly into a format the event consumer can handle.
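
The mechanics of that forwarding step are simpler than they sound. The sketch below reads a message from a queue through the standard JMS API (the ConnectionFactory would come from the IBM MQ client libraries in practice) and wraps the payload in a small JSON event for a downstream consumer; the queue name and event fields are assumptions for illustration.

```java
// Sketch: read a record from an IBM MQ queue via the standard JMS API and wrap
// it in a simple JSON event for a downstream consumer such as a data warehouse.
// The ConnectionFactory would be built with the IBM MQ client libraries; the
// queue name and event fields here are illustrative assumptions.
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class MqToEventForwarder {

    private final ConnectionFactory connectionFactory;

    public MqToEventForwarder(ConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    public String nextEvent() {
        try (JMSContext context = connectionFactory.createContext()) {
            Queue queue = context.createQueue("LEGACY.ACCOUNT.UPDATES");
            JMSConsumer consumer = context.createConsumer(queue);
            String body = consumer.receiveBody(String.class, 5_000); // wait up to 5s
            if (body == null) {
                return null; // nothing arrived within the timeout
            }
            // Wrap the raw payload in a minimal event envelope; a real pipeline
            // would also map the copybook fields into named JSON attributes.
            return "{\"source\":\"ibm-mq\",\"type\":\"account.update\",\"data\":\""
                    + body.replace("\"", "\\\"") + "\"}";
        }
    }
}
```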

By decoupling the event consumer and producer, you can change destinations if you switch vendors (say, moving from AWS to Google) or add new targets where you want to replicate data. You also get the benefit of synchronizing data in real time rather than waiting for batched data to arrive.

EDA isn't a silver bullet. There are times when you may need to make synchronous API calls. Using APIs, you can make queries based on conditions that can't be anticipated. In that case, I am a fan of using open source, cloud native technologies like Kong's API gateway.
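
Here is a minimal sketch of that synchronous path: a blocking query sent through an API gateway using java.net.http (Java 11+). The gateway URL, path and query parameter are hypothetical.

```java
// Sketch: a synchronous query through an API gateway for the cases an event
// stream can't anticipate. Uses java.net.http (Java 11+); the gateway URL and
// query parameters are hypothetical.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReservationLookup {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api-gateway.example.com/reservations?customerId=42"))
                .header("Accept", "application/json")
                .GET()
                .build();

        // Blocking call: the caller needs an answer now, not an eventual event.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```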

When you talk about code, you might have heard the term WET (Write Everything Twice) as opposed to DRY (Don't Repeat Yourself). In the world of development, WET refers to poor code that needs to be rewritten, while DRY means writing more efficient code that doesn't need to be rewritten. In integration, it's not an exact correlation, but I believe synchronous API integration is often WET: you write to the API and then write the response that the API returns.

There are many good reasons to do this when you need to complete a complex integration that requires look-ups and a complex answer. However, it can be overkill.

Event-driven architecture (EDA) provides a path to DRY integration by exposing an event stream that can be consumed passively. There are many advantages. If you are forwarding changes via event streams, you can even do what's called change data capture (CDC).

Change data capture is a software process that identifies and tracks changes to data in a database. CDC provides real-time or near-real-time movement of data by moving and processing it continuously as new database events occur. Event-driven architectures can accomplish this by taking events that are already being written and streaming them to multiple consumers.
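
To illustrate the consuming end of CDC, here is a minimal sketch that applies change events to a local read replica. The event shape (an operation plus the after-image of the row) follows the common CDC pattern, but the record type and field names are illustrative.

```java
// Sketch: apply change data capture (CDC) events to a local read replica.
// The event shape (operation plus after-image) follows the common CDC pattern;
// the record type and field names are illustrative.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ReplicaUpdater {

    // Minimal change event: what happened, to which key, with what new value.
    record ChangeEvent(String op, String key, String after) {}

    private final Map<String, String> replica = new ConcurrentHashMap<>();

    public void apply(ChangeEvent event) {
        switch (event.op()) {
            case "insert", "update" -> replica.put(event.key(), event.after());
            case "delete" -> replica.remove(event.key());
            default -> { /* ignore unknown operations */ }
        }
    }

    public String lookup(String key) {
        return replica.get(key);
    }
}
```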

One of the most entrenched pieces of legacy technology corporations face on their way to the cloud is the mainframe. Until I went digging, I didn't realize the full extent of this: mainframes still run a large amount of COBOL. In fact, our whole financial system relies on technology that is unlikely to move to the cloud in the near future.

One of the most interesting and unforeseen integrations I have run into is the integration of mainframes with the cloud. While Amazon doesn't offer an AWS mainframe-as-a-service, there is a benefit to integrating workflows between mainframes and the cloud. One global rental car company I work with has an extensive workflow that takes data stored in IBM mainframe copybooks and transforms it into events that are consumed to automate workflows via AWS SQS.
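
As a simplified sketch of that pattern, the code below turns a fixed-width record (the shape a mainframe copybook typically describes) into a JSON event and publishes it to an SQS queue with the AWS SDK for Java v2. The field offsets, record layout and queue URL are hypothetical.

```java
// Sketch: turn a fixed-width record (the kind a mainframe copybook describes)
// into a JSON event and publish it to an SQS queue with the AWS SDK for Java v2.
// The field offsets, record contents and queue URL are hypothetical.
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class CopybookToSqs {
    public static void main(String[] args) {
        // Example 30-byte record: rental id (10), branch code (5), status (15).
        String record = "R000012345ATL01RETURNED       ";
        String json = String.format(
                "{\"rentalId\":\"%s\",\"branch\":\"%s\",\"status\":\"%s\"}",
                record.substring(0, 10).trim(),
                record.substring(10, 15).trim(),
                record.substring(15, 30).trim());

        try (SqsClient sqs = SqsClient.create()) {
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/rental-events")
                    .messageBody(json)
                    .build());
        }
        System.out.println("published: " + json);
    }
}
```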

There are many reasons you might want to forward mainframe traffic: not just workflow automation, but also data replication, real-time dashboards or taking advantage of cloud services that have no data center equivalent. And because you aren't logging in to the event-producing system, there can be a security benefit: a smaller attack surface that exposes only the event stream, not the host system.

I believe strongly that going forward there will be two main types of infrastructure: services delivered by cloud providers and open source software. Open source has eaten the world. Linux is the dominant operating system in the cloud and the data center. Kubernetes is becoming the open source cloud native fabric of the cloud. And there is an abundance of free and open source data center software from multibillion-dollar corporations, consortia and innovative start-ups alike.

One incredibly interesting example of composable infrastructure is the ONUG Cloud Security Notification Framework (CSNF). CSNF is an open source initiative led by FedEx, Raytheon and Cigna that tackles the difficulty of providing security assurance at scale across multiple clouds, a problem driven by the large volume of events and security state messaging. The problem is compounded when using multiple cloud service providers (CSPs) because of the lack of standardized events and alerts among CSPs.

Figure 1.2 Architecture diagram of composable infrastructure for ONUG Cloud Security Notification Framework

This gap translates into increased toil and decreased efficiency for the enterprise cloud consumer. CSNF, developed by the ONUG Collaborative's Automated Cloud Governance (ACG) Working Group, is working to create a standardization process without sacrificing innovation.

The interesting thing about CSNF is that it's a loosely coupled set of technologies that can incorporate both cloud services and on-premises technologies. While the initial goal is to normalize security events from cloud providers into a single format, it can also incorporate any number of other tools and data sources as appropriate.

While your existing infrastructure may not be completely modern, there's no reason you can't benefit from modern technologies and cloud services through integration. Integration is arguably the key to modernization without the dreaded lift and shift, and if you look at your integration layer today, there are a number of tactics worth considering.

For IT operations to thrive, they need to adopt agile practices like DevOps and technologies that are open source, event-driven and cloud native. Even if you have an IT heritage to consider, it doesn't mean you are stuck in the past. In the modern world of open source cloud native technologies, you can still reap the benefits without a wholesale move to the cloud.

