Development Tidbits
Outside of the development environment itself (maybe I should cover that in Linux Tidbits), modern tooling enables some very cool stuff. Here are a few ideas I recommend to make life easier and to speed up development cycles. In no particular order:
One-Command Processes
Most devops discipline comes from getting important things down to a single step. Yes, that's "Automation", but deciding where and what to optimize is the key, so this is a list of what I target and a bit on how.
A good philosophy when designing any software is to assume that anything you want to do once, you will eventually want to do a million times. (A thousand seemed low).
One-Command Local Dev Deploy
Be able to spin up your development deployment with one command. Not your development environment, but a fully functional if tiny version of whatever you're developing. Generally I like either "docker-compose up" or "vagrant up"; the latter when infrastructure matters, clearly, and the former for, well, anything that fits in a container independent of the OS layer.
There actually isn't much discipline required for this with modern tooling, other than to line up all your dev ducklings, which is both the goal and the hard work. Vagrant and Docker get this down to a file or two for you to manage (a Vagrantfile, or a Dockerfile plus compose file). The goal here is to make sure you understand your entire dependency tree. Yes, you can write code that works in your dev environment, but you have a given OS, given versions of everything, given hardware... using containers or virtual machines means you can rely on and move these things with no concerns, and it means you can prove you understand your complete stack.
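A minimal sketch of what that one command can hide, assuming a docker-compose.yml or Vagrantfile at the repo root (the script name and layout here are illustrative, not a prescription):

```bash
#!/usr/bin/env bash
# dev-up.sh -- hypothetical one-command local dev deploy.
# Assumes either a docker-compose.yml or a Vagrantfile at the repo root.
set -euo pipefail

cd "$(dirname "$0")"    # run from wherever the script lives (the repo root)

if [ -f docker-compose.yml ]; then
    # Containerized stack: rebuild images and start everything detached.
    docker-compose up --build -d
elif [ -f Vagrantfile ]; then
    # Infrastructure matters: bring up the full VM(s) instead.
    vagrant up
else
    echo "No docker-compose.yml or Vagrantfile found; nothing to deploy." >&2
    exit 1
fi
```

Tearing it down should be just as cheap ("docker-compose down -v" or "vagrant destroy -f"), which is what makes it safe to treat the whole deployment as disposable.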
Note that this isn't exactly the same as One-Command Dev/Test/Prod environments. We'll get to that, and maybe the only difference is a few values in a very top-level config file.
One-Command Machine Provisioning
In a big enough environment, just make things easy on yourself and set up One-Command machine provisioning. Need a new dev machine? Spin one up. Need a new prod machine? Spin one up. Virtualization at the hardware level has simplified this in recent years and it's worth taking advantage of. While there used to be some areas -- frequently database systems -- that resisted this due to the performance gains from running on the metal, this changed in the 2010s and virtualization is now the norm rather than the exception. The cost benefits in uptime and maintenance well outweigh the price of the overhead on these systems, and the simplicity with which clusters, failover, backups, and all the things you don't want to think about can be automated with virtual machines makes this approach ideal.
This also dovetails nicely with the One-Command Dev Environments, which should, coupled with Ansible or Kubernetes or something, be able to easily scale up a production footprint if you need to. Really, this is the sort of thing that AWS and Azure have tried to perfect, largely because they can charge you for it. :-) But however you handle it, scalability is the real goal here. And scalability isn't just for production; remember how we said we'd get to One-Command Test/Prod? This is where that happens, coupled with the tooling you had for dev.
We should only have to change a single variable value ("Dev", "Test", "Prod", "Customer-1", "Offsite-7", etc.) to specify the domain of the new machine, and that sole variable should drive the machine's requirements downstream (a sketch of this follows the list below). Ansible can be really useful for provisioning and deployment, but often multiple Dockerfiles, Vagrantfiles or build configurations - Maven/Gradle, Jenkins, whatever - are the approach here. For example:
- Dev probably spins things up in debug mode, test may or may not, and prod definitely shouldn't
- Dev should include client GUI dev environments and the like, test should have only the test tooling it needs, and prod should have nothing beyond what is needed to run and monitor
- Dev should take advantage of cool tooling such as live development (seeing changes reload live on running code); test and prod absolutely should not.
- Test should be running performance tooling that may not be forced on dev (although it should be readily available there). Any related overhead and code should be stripped from prod.
- Test should ALWAYS be concerned with code coverage, style (linting), dependency version control and all those things. Prod should trust test to gate these things properly. Developers should run such tests on demand when it's efficient for them.
- Prod is minimalist, and should include nothing that test doesn't test against or that dev doesn't have access to
And you know, whatever else you need.
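As a rough sketch of how that single variable can drive everything downstream (the wrapper is the point here; the compose overrides, inventories and playbook names are all hypothetical):

```bash
#!/usr/bin/env bash
# provision.sh -- hypothetical single-variable provisioning wrapper.
# Usage: ./provision.sh Dev|Test|Prod|Customer-1|...
set -euo pipefail

TARGET_ENV="${1:?Usage: $0 <Dev|Test|Prod|...>}"
export TARGET_ENV    # the one variable everything else hangs off of

case "$TARGET_ENV" in
    Dev)
        # Debug mode, live reload, GUI tooling -- all the dev conveniences.
        docker-compose -f docker-compose.yml -f docker-compose.dev.yml up --build -d
        ;;
    Test)
        # Test tooling only: coverage, linting, the performance harness.
        ansible-playbook -i inventories/test site.yml --extra-vars "target_env=$TARGET_ENV"
        ;;
    Prod)
        # Minimalist: nothing beyond what's needed to run and monitor.
        ansible-playbook -i inventories/prod site.yml --extra-vars "target_env=$TARGET_ENV"
        ;;
    *)
        # Customer-1, Offsite-7, etc. each get their own inventory.
        ansible-playbook -i "inventories/$TARGET_ENV" site.yml --extra-vars "target_env=$TARGET_ENV"
        ;;
esac
```

Run "./provision.sh Test" and everything downstream keys off that one value; adding a new environment means adding an inventory (or a case), not a new process.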
One-Command Test-and-Build
There are a few disciplines built in here. The value of "build once, deploy many" is tied up in this, as is the complete gating system between development and production.
- Take a look at what takes time in your test/build/deploy cycle and make sure it's done only once each time it's needed.
- If you have multiple code branches that you're confident do not rely on one another, you can gate their _entire test/build/deploy process_ separately. This can get messy in a monorepo (one reason I'm against them). It's also one reason I prefer the Unix philosophy of independent pieces that perform smaller tasks; you can do version control at that small level, and dependent development can forge ahead with the latest compatible release.
- This should include automatic version numbering for minor releases. I like release numbering; I think it's important for lots of reasons - basic dependency management, a reasonable ability to roll back changes, and a simple way of communicating progress to people consuming your code. Probably lots of other reasons too. I like 3-number schemes (major.minor.bugfix, but that's a whole section in itself), but whatever works for you; minimally, the lowest version section should be automatically incremented on build, or at least gated if there's a change without a version number increment.
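A sketch of the automatic increment, assuming a plain-text VERSION file holding major.minor.bugfix and git tags as the record of releases (the file name and tag scheme are assumptions; many build tools can do this for you):

```bash
#!/usr/bin/env bash
# bump-version.sh -- hypothetical build-time bump of the lowest version section.
# Assumes a VERSION file containing "major.minor.bugfix" on its first line.
set -euo pipefail

VERSION_FILE="VERSION"
IFS='.' read -r MAJOR MINOR BUGFIX < "$VERSION_FILE"

# Increment the lowest section automatically on every build.
BUGFIX=$((BUGFIX + 1))
NEW_VERSION="${MAJOR}.${MINOR}.${BUGFIX}"
echo "$NEW_VERSION" > "$VERSION_FILE"

# Commit and tag so the release is traceable (and roll-back-able) later.
git add "$VERSION_FILE"
git commit -m "Release ${NEW_VERSION}"
git tag -a "v${NEW_VERSION}" -m "Release ${NEW_VERSION}"

echo "Built and tagged v${NEW_VERSION}"
```

The gating variant is the same idea in reverse: fail the build if the diff touches code but the VERSION file hasn't moved.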
One-Command Production Deployment and Re-provisioning
One-Command Security
The idea of a single command "make-new-secrets.sh" should appeal to everyone. I've never truly gotten this to work, but mostly because other priorities intervene, not because it's infeasible. Everything that is a secret should be a secret your central script knows about and can regenerate -- SSL keys, passwords, SSH certs, service account credentials, whatever. Centralized secret stores aren't necessary for this,