Development Tidbits


Outside of the development environment itself (maybe I should cover that in Linux Tidbits), modern tooling allows some very cool stuff, and here are a few ideas I recommend to make life easier and to quicken development cycles. In no particular order:


One-Command Processes

Most devops discipline comes down to getting the important things into a single step. Yes, that's "automation", but deciding where and what to optimize is the key, so this is a list of what I target and a bit on how.

A good philosophy when designing any software is to assume that anything you want to do once you will eventually want to do a million times. (A thousand seemed low).


One-Command Local Dev Deploy

Be able to spin up your development deployment with one command. Not your development environment, but a fully functional, if tiny, version of whatever you're developing. Generally I like either "docker-compose up" or "vagrant up": the latter when the infrastructure matters, the former for, well, anything that fits in a container independent of the OS layer.

There actually isn't much discipline required for this with modern tooling, other than to line up all your dev ducklings, which is both the goal and the hard work. Vagrant and Docker get this down to one or two files for you to manage (a Vagrantfile, or a docker-compose.yml plus Dockerfiles). The goal here is to make sure you understand your entire dependency tree. Yes, you can write code that works in your dev environment, but you have a given OS, given versions of everything, given hardware... using containers or virtual machines means you can rely on and move these things without worry, and it means you can prove you understand your complete stack.
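
As a minimal sketch of what the one command can look like (the wrapper name "dev-up.sh" and the file layout are illustrative assumptions, not something prescribed here):

  #!/usr/bin/env bash
  # dev-up.sh -- hypothetical one-command local dev deploy.
  # Assumes a docker-compose.yml sits at the repo root alongside this script.
  set -euo pipefail

  cd "$(dirname "$0")"            # always run from the repo root
  docker-compose up --build -d    # rebuild images and start the full stack, detached
  docker-compose ps               # show what came up

Swap in "vagrant up" when the OS layer matters; the point is that nobody ever has to remember more than the one command.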

Note that this isn't exactly the same as One-Command Dev/Test/Prod environments. We'll get to that, and maybe the only difference is some changes in a very top-level config file.


One-Command Machine Provisioning

In a big enough environment, just make things easy on yourself and set up One-Command machine provisioning. Need a new dev machine? Spin one up. Need a new prod machine? Spin one up. Virtualization at the hardware level has simplified this in recent years and it's worth taking advantage of. While there used to be some areas -- frequently database systems -- that resisted this because of the performance gains from running on-the-metal, that changed in the 2010s, and virtualization is now the norm rather than the exception. The cost benefits in uptime and maintenance well outweigh the price of the overhead on these systems, and the simplicity with which clusters, failover, backups, and all the things you don't want to think about can be automated with virtual machines makes this approach ideal.

This also dovetails nicely with the One-Command Dev Environments, which should, coupled with Ansible or Kubernetes or something, be able to easily scale up a production footprint if you need to. Really, this is the sort of thing that AWS and Azure have tried to perfect, largely because they can charge you for it.  :-) But however you handle it, scalability is the real goal here. And scalability isn't just for production; remember how we said we'd get to One-Command Test/Prod? This is where that happens, coupled with the tooling you had for dev.

We should only have to change a single variable value ("Dev", "Test", "Prod", "Customer-1", "Offsite-7", etc.) to specify the domain of the new machine, and that sole variable should drive machine requirements downstream (a sketch follows the list below). Ansible can be really useful for provisioning and deployment, but often multiple Dockerfiles, Vagrantfiles or build configurations - Maven/Gradle, Jenkins, whatever - are the approach here.

  • Dev probably spins things up in debug mode, test may or may not, and prod definitely shouldn't
  • Dev should include client GUI dev environments, etc., test should have only the test tooling it needs, and prod should have nothing outside of what is needed to run and monitor
  • Dev should take advantage of cool tooling such as live development (seeing changes reload live on running code); test and prod absolutely should not.
  • Test should be running performance tooling that may not be forced on dev (although it should be readily available). Any related overhead and code should be stripped from prod.
  • Test should ALWAYS be concerned with code coverage, style (linting), dependency version control and all those things. Prod should trust test to gate these things properly. Developers should run such tests on demand when it's efficient for them.
  • Prod is minimalist, and should include nothing that test doesn't test or that dev doesn't have access to

And you know, whatever else you need.
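
To make the single-variable idea concrete, here's a rough sketch (the file names, inventory layout, and playbook are all hypothetical) of a provisioning wrapper where one value drives everything downstream:

  #!/usr/bin/env bash
  # provision.sh -- hypothetical sketch: one value ("dev", "test", "prod",
  # "customer-1", ...) selects the compose overrides and the Ansible variables.
  set -euo pipefail

  ENV="${1:?usage: provision.sh <dev|test|prod|customer-1|...>}"

  # Each environment layers its own override file on a common base; that is
  # where debug mode, test tooling, and the prod-minimalist rules above live.
  docker-compose -f docker-compose.yml -f "docker-compose.${ENV}.yml" up -d --build

  # The same value flows into configuration management for host-level setup.
  ansible-playbook -i "inventories/${ENV}" site.yml -e "env=${ENV}"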


One-Command Test-and-Build

There are a few disciplines built in here. The value of "build once, deploy many" is tied up in this, as is the complete gating system between development and production. In practice, "One-Command" here is often just a merge to a master or release branch, if your tooling is set up. "One-Command" can often mean "One Step", but "Command" makes it more specific because we want the step to be as agnostic as possible to the tooling so that, if needed, we can replace it.

  • Take a look at what takes time in your test/build/deploy cycle and make sure it's only done once whenever it's needed.
  • If you have multiple code branches that you're confident do not rely on one another, you can gate their _entire test/build/deploy process_ separately. This can get messy in a monorepo (one reason I'm against them). It's also one reason I prefer the Unix philosophy of independent pieces which perform smaller tasks; you can perform version control at that small level and dependent development can forge ahead with the latest compatible release.
  • This should include automatic version numbering for minor releases. I like release numbering; I think it's important for lots of reasons - basic dependency management, a reasonable ability to roll back changes, and a simple way of communicating progress to people consuming your code. Probably lots of other reasons too. I like 3-number schemes (major.minor.bugfix, but that's a whole section in itself), but whatever works for you; minimally, the lowest version section should be automatically incremented on build, or at least gated if there's a change without a version number increment.
  • Release notes should be built automatically
  • Documentation should be a part of the build and release

Whatever else... but make things nice and complete and ONE STEP.
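
As a sketch of the shape this takes (every tool invoked here is a placeholder; the point is the single entry point):

  #!/usr/bin/env bash
  # build.sh -- hypothetical one-command test-and-build.
  set -euo pipefail

  ./run-tests.sh                               # whatever your test runner is; it must pass
  VERSION="$(git describe --tags --always)"    # derive a build identifier from the latest tag
  ./package.sh --version "$VERSION"            # build exactly one artifact for all targets
  # Release notes: everything merged since the last tagged release.
  git log "$(git describe --tags --abbrev=0)..HEAD" --oneline > release-notes.txt
  ./build-docs.sh                              # docs ship with the release, not after it
  echo "Built ${VERSION}"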

Build Gating

Just so we're clear: what we mean by "gating" is that we erect a gate... a barrier so that a build won't become a release unless all criteria are met. Tests pass, test coverage is adequate, version numbers are updated, etc. This is normally just a failure to complete a step in a development automation tool, accompanied by a nice email, some status updates and all that nicety (a sketch follows the list).

  • Test coverage
  • Tests pass
  • Static analysis (linting) passes
  • Version numbers update
  • No secrets are in the wrong place
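
In CI terms a gate is usually just a step that exits non-zero; a hedged sketch (the threshold, file names, and helper scripts are examples only):

  #!/usr/bin/env bash
  # gate.sh -- hypothetical build gate: any failing check blocks the release.
  set -euo pipefail

  ./run-tests.sh                       # tests must pass
  ./check-coverage.sh --minimum 80     # placeholder coverage check
  ./run-linters.sh                     # static analysis / style

  # Refuse to release if the version file didn't change relative to master.
  if git diff --quiet origin/master -- VERSION; then
      echo "Version number not bumped" >&2; exit 1
  fi

  # Crude secrets scan; a real scanner (git-secrets, trufflehog, etc.) is better.
  if grep -rIn --exclude-dir=.git -E 'BEGIN (RSA|EC|OPENSSH) PRIVATE KEY' .; then
      echo "Possible secret committed" >&2; exit 1
  fi
  echo "All gates passed"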


One-Command Production Deployment and Re-provisioning

Production, like dev and test, should be just the same, but, you know, different:

  • Strive for zero-downtime deployments. You have to plan this early; if you can parallelize your implementations so that multiple versions can run in production at once (i.e. slowly roll out changes to various production machines, moving production load around), that's ideal. Easier said than done and not always worth the time... some software may be ok with downtime and difficult to divide, so, you know, _Make Good Choices_.
  • I added "Re-Provisioning" here to mean several things, but in general the ability to deploy whatever you want wherever you want. Production deploys aren't just incrementing to the most recent version everywhere. Roll-backs should be easy in case of a problem. Deploying an ancient version somewhere for forensics may be useful. Repurposing machines between dev/test/prod or other areas; A/B testing, whatever.
  • Be sure to update everything. Not just your application code, but documentation, and take the opportunity to do system maintenance. Do you still defragment disks? Now's the time. Reboot machines? Why not. Of course if you've obviated the need for all that with VMs, then great, but if you need to, do it now, and do it ALL with One Command. This is even a good time to update ALL your secrets. Yes, ALL of them, every time... see "One-Command Security".
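
A sketch of what "deploy whatever you want wherever you want" can look like (the script name, the artifact fetcher, and the playbook are all illustrative):

  #!/usr/bin/env bash
  # deploy.sh -- hypothetical: deploy any built version to any target, which
  # makes a rollback just "deploy the previous version".
  set -euo pipefail

  VERSION="${1:?usage: deploy.sh <version> <target>}"    # e.g. 2.3.7
  TARGET="${2:?usage: deploy.sh <version> <target>}"     # e.g. prod, offsite-7, customer-1

  # Build once, deploy many: fetch the already-built artifact, never rebuild here.
  ./fetch-artifact.sh --version "$VERSION" --out ./artifact

  # The playbook rolls through hosts a few at a time so something is always serving.
  ansible-playbook -i "inventories/${TARGET}" deploy.yml -e "version=${VERSION}"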

One-Command Security

The idea of a single command "make-new-secrets.sh" should appeal to everyone. I've never truly gotten this to work, mostly because other priorities intervene, but I've gotten close and it's clearly not infeasible. Everything that is a secret should be handled as a secret by your central script -- SSL keys, passwords, SSH certs, service account credentials, whatever. Centralized secret stores (Vault, PasswordState, LastPass, etc.) aren't necessary for this, although they can help.

Of course you have to have permissions to change secrets -- either higher-level security or the old secrets themselves. But that's all manageable.
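
A skeleton of what such a script might look like (everything below is illustrative; the real list of secrets and where they get pushed is the hard, site-specific part):

  #!/usr/bin/env bash
  # make-new-secrets.sh -- hypothetical sketch of one-command secret rotation.
  set -euo pipefail

  # Generate fresh material. These generator commands are real; the destinations are not.
  DB_PASSWORD="$(openssl rand -base64 32)"
  ssh-keygen -t ed25519 -N '' -f ./new_deploy_key -q
  openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
      -subj "/CN=internal.example.test" \
      -keyout ./new_tls.key -out ./new_tls.crt

  # Push the new values wherever they live -- secret store, config management --
  # and cycle anything still holding the old ones. Placeholder steps below.
  ./push-secret.sh db_password "$DB_PASSWORD"
  ./rotate-service-accounts.sh
  ./redeploy-dependents.sh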

Do it

Anyway, just do it. Someday I'll post skeleton code, maybe, though it would get stale quickly; devops tooling for this has only recently become stable, open source, and widespread. Just Do It; the sooner you do, the sooner it will pay off.