With BASIC celebrating its 60th birthday and SQL turning 50, I felt inspired to look back into software development’s past. Then I saw the diagram that Ian Miell (a partner at Container Solutions) originally made for a presentation, and I could immediately see that it would make a great device to hang some history on.
Not every tool has been placed in the diagram — only the ones that Ian felt represented a considered advance. For example, Ansible, a configuration tool I’m quite familiar with, is missing. Many developers in mid-career today will doubtless see Kubernetes as the recognizable end point of the “cloud native” tree. But this post is more about what came before. So let’s jump back.
While punch cards sound truly arcane, they were used in our school back in the 1980s. Pupils in very early Computer Studies classes wrote instructions with punch cards in a language called CESIL (Computer Education in Schools Instruction Language). These were sent to a mainframe to be processed, with the results coming back on a printout. Needless to say, very few kids got anything to run. And computers remained uncool.
While Graphical User Interfaces (GUIs) like Microsoft Windows helped democratize computing for the general population, it was the shell script where programmers first saw how a process could be controlled by a sequence of commands, and how this was a separate domain from the program code itself.
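As a small illustration (the file names and paths here are invented), a shell script of that era was nothing more than an ordered list of commands that the shell runs one after another:

```sh
#!/bin/sh
# Hypothetical housekeeping script: a process expressed as an
# ordered sequence of commands, separate from any program code.
cd /var/reports                 # move to the working directory
rm -f summary.txt               # clear the previous run's output
cat daily-*.log > summary.txt   # gather the day's logs into one file
wc -l summary.txt               # report how many lines were collected
```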
One of the first quiet revolutions was to stop thinking in terms of a sequence of commands. The conceptual jump from sequential coding was the declarative form — not that anyone used that term. This only became possible once there was enough memory and system capacity to separate the concept of what needed to be done from how to do it. SQL is a good example of a declarative language: we state what we want to create or see, but make no mention of exactly how or where (or even why) it should happen. This started the path of the computer being a tool for computing, with the what and the how retaining subtly separate identities. It also went slightly against the idea of the “programmer” as a worker who robotically entered lines of code, and ushered us toward the age of the “developer”.
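A single query makes the point (the table and column names are invented for illustration): we describe the result we want, and leave the indexes, scans and file layouts entirely to the database engine:

```sql
-- We declare the result we want; the database decides how to fetch it.
SELECT name, city
FROM customers
WHERE country = 'DE'
ORDER BY name;
```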
In the early 90s, when I first wanted to build an executable program in C, I needed Make. It was both a declarative tool and one of the earliest build automation tools. As we remembered when looking at Zig, building C means bringing the source code together, including header files, compiling the source into object code, and then linking in the required libraries to produce a single executable. So there was a chain of steps to be performed, and Make inferred them from the rules it was given and the type of the target file.
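A minimal sketch of such a Makefile (the file names are assumed, and recipe lines must be indented with tabs) shows that chain: each rule declares a target and its prerequisites, and Make works out which compile and link steps actually need to run:

```make
# Minimal sketch of a Makefile for a small C program.
# 'make report' rebuilds only the pieces whose sources have changed.
CC = cc
CFLAGS = -Wall

# Link the object files into the final executable.
report: main.o utils.o
	$(CC) -o report main.o utils.o

# Compile each source file into object code.
main.o: main.c utils.h
	$(CC) $(CFLAGS) -c main.c

utils.o: utils.c utils.h
	$(CC) $(CFLAGS) -c utils.c

# Remove build products.
clean:
	rm -f report main.o utils.o
```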
Looking across the diagram from Make, the tar file was one of the first organizational attempts to package portable sets of files for deployment. I would have seen the idea first in a zip file, but the concept was the same — it was used to make a target system look like the development system. This was an early glimpse of configuration management.
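The whole idea fits in two commands (the paths here are hypothetical): bundle a directory on the development machine, then unpack it in the same shape on the target:

```sh
# On the development machine: bundle the build output, preserving paths.
tar -czf app-1.0.tar.gz ./app-1.0

# On the target machine: unpack it so the layout matches the original.
tar -xzf app-1.0.tar.gz -C /opt
```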
Source control (or version control, to the right of tar on the diagram) took quite some time to become relevant. Files and storage were expensive, and programs were small. But as codebases and the time invested in them grew, and collaboration became commonplace, tooling was needed. CVS (Concurrent Versions System) was the first widely recognized client-server system that tracked changes in a code repository. I remember a conversation with my team about moving from SVN to Git. Git was not a simple sell, because it had the three basic steps of adding, committing and pushing code, versus the two of previous source control systems. Git treated your local machine as a valid repository.
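The difference in ceremony looked roughly like this (file and branch names invented): Subversion sent a change straight to the central server, while Git recorded a local commit before anything left your machine:

```sh
# Subversion: two steps, and the commit goes straight to the central server.
svn add report.c
svn commit -m "Add report generator"

# Git: three steps, because your local clone is itself a full repository.
git add report.c
git commit -m "Add report generator"   # recorded locally first
git push origin main                   # then shared with the remote
```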
Working with scripts — or recipes — for any of the main configuration managers (Ansible, Chef or Puppet) meant that by the 2000s developers had to be fully cognizant of a pipeline. This brought them closer to other parts of the production process, like Quality Assurance (QA), since testing now had a defined place further down the pipeline.
The “distributed” part of Git that mattered wasn’t that it didn’t need a central storage location — most organizations still ran one with Bitbucket, GitLab or GitHub — it was that the “source of truth” could be distributed across branches reasonably well. The differences between the “main branch” and the current “release branch” could be methodically understood. This was a major technique for maintaining sanity while collaborating. Branches could be coupled with environments, like Staging, Testing and Production.
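In practice (branch names assumed), that methodical understanding was often only a couple of commands away:

```sh
# Create a release branch from main and switch to it.
git checkout -b release-1.4 main

# Later: see exactly how the release branch has diverged from main.
git diff main..release-1.4 --stat

# List commits on the release branch that are not yet in main.
git log main..release-1.4 --oneline
```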
Java, the major language in this period, used Maven for dependency management to pull down missing artifacts. In an attempt to resolve everything, it would often pull down what felt like the entire internet to make sure your local repository had everything it needed to build your project.
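The declaration itself was small; this fragment (the version is illustrative) names an artifact by its coordinates, and Maven resolves it, along with everything it depends on, into your local repository:

```xml
<!-- Fragment of a pom.xml: declare what you need, not how to fetch it. -->
<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.13.2</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```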
Jenkins, the successful result of a project fork, was the key to the success of Continuous Integration/Continuous Delivery (CI/CD). It automated the process of pulling code from source control, building it, then delivering it to an environment, perhaps for automated testing. I remember someone creating physical traffic lights to show whether our central build was working or not. Trying to leave work on a Friday evening with the traffic lights at red was bad, and it got people into the habit of not checking in breaking changes at the end of the week.
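A modern Jenkins declarative pipeline captures that same loop. This sketch (the stage names and build commands are assumed, not taken from any real project) pulls the source, builds it and runs the tests on every change:

```groovy
// Sketch of a declarative Jenkinsfile: checkout, build, test.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }   // pull the code from source control
        }
        stage('Build') {
            steps { sh 'make' }      // build the project
        }
        stage('Test') {
            steps { sh 'make test' } // run the automated tests
        }
    }
}
```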
Finally, we come to the dawning of the cloud era. When you literally can’t touch your infrastructure, it becomes even more important to state in advance what you want to do. We are now in the 2010s. As teams of developers were by then usually handed laptops, there was a need to somehow capture the Linux experience on Windows or (if you were lucky) a Mac. I remember using early Virtual Machines (VMs) like Oracle’s VirtualBox. If we tried running a Linux OS with a GUI, the VM had to handle the tricky bits, like making sure your laptop’s mouse worked correctly within, say, Ubuntu.
The principle of isolation was exercised in VMs, and finally honed in the container, which didn’t try to abstract an entire physical machine.
Docker has been the lynchpin of cloud adoption as it allows the developer to commune with the container, without worrying so much about where the container is. The responsibility of worrying about the overall infrastructure could then shift elsewhere. Instead of a project, we now had an image.
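The shift from project to image shows up in how little a Dockerfile needs to say (the base image and file names below are assumed for illustration): the build describes the environment the code runs in, and the result is an artifact that behaves the same wherever the container lands:

```dockerfile
# Minimal sketch: package a small app and its runtime into one image.
# Start from a known base environment.
FROM python:3.12-slim
WORKDIR /app

# Bake the dependencies into the image itself.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Add the application code; the image, not the host, defines how it runs.
COPY app.py .
EXPOSE 8000
CMD ["python", "app.py"]
```

Building the image with "docker build" and starting it with "docker run" then behaves the same on a laptop as it does in the cloud.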
Today we are at the bottom of the diagram, where the focus has shifted to container management, orchestration, scaling and monitoring.
The cloud has presented us with new opportunities, and a host of different problems. One company, Amazon, has succeeded in controlling the mindset of cloud development — our artifacts or components are now EC2 instances and S3 buckets. Developers have been introduced to the vagaries of the internet, from peak capacity to the geography and legality of data storage.
Right now, we await further repercussions of Generative AI. One could argue that the concept of “cloud native” is no longer so prevalent, as the GitOps flagship Weaveworks is no longer active.
But the diagram isn’t necessarily about the cloud, or declarative programming, or even about DevOps. I’ve worked with most of the tools mentioned here without thinking in these terms. It is, of course, about a meandering journey in which we spend ever more time working out how to spend as little time as possible dealing with the results of changes.
The history here is as much about the value of collaboration as it is about the continuing search for the Source of Truth. And yet the future will still consist of a developer saying the words “but it works on my machine” for some time to come.