Why Capistrano Got Usurped by Docker and Then Kubernetes – The New Stack

While listening to the much-appreciated intellectual property and digital rights advocate Cory Doctorow reading a little of his new book, I heard him mention the place in California called Capistrano. But of course, I remembered Capistrano: a remote server automation tool popular in the early 2010s, effectively a pre-containers, pre-Kubernetes tool.

I'm sometimes interested in what happened to commonly used technology that lost popularity over time. Of course, Capistrano isn't actually dead, even if I am using the past tense to describe it. Open source tools never truly die; they just become under-appreciated (and possibly placed in the attic). I remember using Capistrano as a remote server automation tool a little over a decade ago. Using SSH, it would follow a script to deploy your updates to target servers. An update might be a new executable file, maybe some code, maybe some configuration, maybe some database changes. Great, but why look back at a system that is no longer in regular use?

Firstly, to understand trends it helps to look at past examples. It also helps to note the point at which something decreased in popularity, while checking that we haven't lost anything along the way. Current tech is just a blip on the timeline, and it is much easier to predict what is going to happen if you glance backwards occasionally. If you find yourself having to work on a deployment at a new site, it is good to have a grab bag of tools other than just your one personal favorite. You might even have to use Capistrano in an old stack. So let us evaluate the antique, to see what it might be worth.

Capistrano understood the basic three environments that you would work on: typically production, staging and development. A development environment is probably a laptop; a staging environment is probably some type of cloud server that QA can get at. Using these definitions, Capistrano could act on specific machines.
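In Capistrano 3, these stages live as separate files under config/deploy, alongside settings shared by every stage (this is the layout generated by a standard install):

```
config/
  deploy.rb          # settings shared by every stage
  deploy/
    production.rb    # stage-specific servers, users and roles
    staging.rb
```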

The basic command within Capistrano was the task. These were executed at different stages of a deployment. But to filter these, you used roles to describe which bit of the system you were working with:

role :app, "my-app-server.com"

role :web, "my-static-server.com"

role :db, "my-db-server.com"

This represents the application server (the thing generating dynamic content), the web server serving static pages, and the database as separate parts. You can, of course, create your own definitions.
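For example, a hypothetical extra role for a search server (the role name and hostname here are illustrative) could sit alongside the defaults:

```ruby
role :search, "my-search-server.com"
```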

Alternatively, you could focus more on environment separation, with roles operating underneath. For a description of production, we might set the following:

# config/deploy/production.rb

server "11.22.333.444", user: "ubuntu", roles: %w{app db web}
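A staging stage would get its own file in the same style; a minimal sketch, with a hypothetical host and user:

```ruby
# config/deploy/staging.rb (host and user are illustrative)
server "staging.example.com", user: "deploy", roles: %w{app db web}
```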

The default deploy task had a number of subtasks, each representing a stage of the deployment.

Here is an example of a customized deploy task. This Ruby-like code uses the roles to filter the task, as well as the stage of deployment. In this case, we update the style.css file just before we are done:

namespace :deploy do
  after :finishing, :upload do
    on roles(:web) do
      path = "web/assets"
      upload! "themes/assets/style.css", "#{path}"
    end
    on roles(:db) do
      # Migrate database
    end
  end
end

To fire this off on the command line once Capistrano was installed, you would name the stage and the task together:

cap production deploy

There is a default deploy flow as well as a corresponding rollback flow. Here is a more detailed look at how that could go:

deploy
  deploy:starting
    [before]
      deploy:ensure_stage
      deploy:set_shared_assets
    deploy:check
  deploy:started
  deploy:updating
    git:create_release
    deploy:symlink:shared
  deploy:updated
    [before]
      deploy:bundle
    [after]
      deploy:migrate
      deploy:compile_assets
      deploy:normalize_assets
  deploy:publishing
    deploy:symlink:release
  deploy:published
  deploy:finishing
    deploy:cleanup
  deploy:finished
    deploy:log_revision

You can see the hooks started, updated, published and finished, which correspond to the actions starting, updating, publishing and finishing. These are the points where custom tasks are hooked into the flow with before and after clauses, as we saw above.
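As a sketch, a custom task could be attached to one of these hooks like so (the task name and the command it runs are illustrative, not part of the default flow):

```ruby
# Hypothetical housekeeping task, fired once a deploy has fully finished
namespace :deploy do
  task :clear_cache do
    on roles(:app) do
      execute :rm, "-rf", "tmp/cache"
    end
  end
end

after "deploy:finished", "deploy:clear_cache"
```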

Note that after publishing, a current symlink pointing to the latest release is created or updated. If the deployment fails in any step, the current symlink still points to the old release.
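The mechanics can be sketched in plain Ruby (the directory and release names are illustrative): publishing amounts to repointing a single symlink, which is why a failed deploy leaves the previous release live.

```ruby
require "fileutils"
require "tmpdir"

# Repoint deploy_to/current at releases/<release> -- a sketch of what
# the deploy:symlink:release step does. Returns the new link target.
def publish!(deploy_to, release)
  target  = File.join(deploy_to, "releases", release)
  current = File.join(deploy_to, "current")
  FileUtils.ln_s(target, current, force: true)  # atomic-ish switch
  File.readlink(current)
end

Dir.mktmpdir do |deploy_to|
  FileUtils.mkdir_p(File.join(deploy_to, "releases", "20240101120000"))
  FileUtils.mkdir_p(File.join(deploy_to, "releases", "20240202120000"))

  publish!(deploy_to, "20240101120000")       # last good release
  puts publish!(deploy_to, "20240202120000")  # flip to the new one
end
```

If any earlier step raises, publish! is simply never called and "current" still points at the old release directory.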

The "run this, then run that" model wasn't always a good way of predicting what your system would be like after deployments. Tools like Chef were better at handling sprawling systems, because they started with a model and said "make this setup true". Chef worked in terms of convergence and idempotence: missing bits were added, but after that, re-applying the same steps didn't change anything. Hence, multiple executions of the same action did not cause side effects on the state.
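The idea can be sketched in a few lines of plain Ruby (the resource names are invented for illustration): converging applies only what is missing, so a second run is a no-op.

```ruby
# Desired state: what the machine should look like, not steps to run.
DESIRED = { "nginx" => "installed", "app_user" => "present" }

# Bring `system` in line with `desired`; return the resources that changed.
# Running it again against the converged state changes nothing (idempotence).
def converge(system, desired = DESIRED)
  desired.each_with_object([]) do |(resource, state), changes|
    next if system[resource] == state   # already correct: leave it alone
    system[resource] = state
    changes << resource
  end
end

machine = { "nginx" => "missing" }
p converge(machine)   # first run fixes the drift: ["nginx", "app_user"]
p converge(machine)   # second run is a no-op: []
```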

The flexibility of Capistrano would allow less experienced developers to build Jenga towers of working but unstable deployments.

By contrast, a single Docker image allowed systematic control of OS, packages, libraries and code. It also allowed a laptop and a cloud server to be treated similarly, just as places to run containers.
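A minimal sketch of that single artifact (the image name, package and paths are illustrative): the base OS, system packages and application code are all pinned in one place.

```dockerfile
# Everything the app needs, declared once and built into one image
FROM ruby:3.2-slim                 # pins the OS layer and Ruby version
RUN apt-get update && \
    apt-get install -y --no-install-recommends libpq5  # system packages
COPY . /app                        # the application code itself
WORKDIR /app
CMD ["ruby", "app.rb"]
```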

And finally, Kubernetes handled clusters without having to worry about slowdowns and time-outs. A fully transparent infrastructure, with the ability to list the services and the exact configurations needed to run every aspect, made life much easier for DevOps teams. Instead of changing already-running services, one could create new containers and terminate the old ones.
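That replace-rather-than-mutate model shows up in even a minimal Deployment manifest (names and the image tag are illustrative): changing the image tag makes Kubernetes roll out new Pods and terminate the old ones.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:2.0   # bump this tag to trigger a rolling update
```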

One other sticking point with Capistrano, from a modern point of view, is that it is built with Ruby. The Ruby language is unfairly tied to the fortunes of Ruby on Rails, which has fallen out of favor with the rise of Node.js and JavaScript. Overall, other languages and language trends have overtaken it in popularity: Python has become the favored scripting language, for example. The tasks shown above use a DSL that is effectively that of Ruby's Rake build tool.

Has anything been lost? Possibly. Having a set of customized tasks to make quick changes does encourage a hacking approach, but it also allowed for smaller, temporary, event-based changes: "make this change happen" as opposed to "I always want the server to look like this".

It might be better to say that tools like Capistrano appeared as a waypoint on a deployment journey for any team, before a wider view was needed. But even as a dusty relic, Capistrano remains a great modular tool for automating the deployment and maintenance of web applications.

As for Capistrano, the place in California? Bad news, I'm afraid.
