I’ve seen a few people around asking how to back up DNS from their various providers, and I feel it’s entirely the wrong direction to go. Having people manually make changes to a DNS service and then back up the records somehow leaves too much room for human error for my liking. I’ve also seen other setups using bind or another local DNS service, self-hosted. At least this allows for backups of the config, since bind’s configuration is pretty much plain text.
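To make the plain-text point concrete, here’s a minimal sketch of a bind zone file — the domain, addresses and serial are placeholders, not from any real setup:

```
$TTL 3600
$ORIGIN example.com.
@   IN  SOA  ns1.example.com. admin.example.com. (
        2024010101 ; serial
        3600       ; refresh
        900        ; retry
        604800     ; expire
        300 )      ; negative-cache TTL
    IN  NS   ns1.example.com.
ns1 IN  A    192.0.2.1
www IN  A    192.0.2.10
```

Commit something like that to a git repo and every DNS change becomes a reviewable diff, rather than a manual edit you have to remember to back up afterwards.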
I’ve been using all kinds of build and deployment systems over the years, both self-hosted and cloud based, for work and for my own personal projects.

The Old

I’ve been primarily using Jenkins for many years, as it’s open source and has a huge number of plugins and strong community support. Admittedly, things like TeamCity have a better UI, but Jenkins can do everything TeamCity can and isn’t closed source, so it gets the thumbs up from me.
Since I recently moved my blog to Hugo, I figured it’s a good time to write up a few bits about it. In the setup I went with, there are a few key pieces:

- Hugo
- Github Pages
- Cloudflare

There are alternatives to each, but these are the ones I’ve found to be simple to use and maintain, as well as being 100% free for personal use. Let’s start by setting up Hugo.
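A Hugo site needs little more than a config file to get going; a minimal config.toml sketch (the title, URL and theme name here are placeholders, not specifics from my setup):

```toml
baseURL = "https://example.github.io/"
languageCode = "en-us"
title = "My Blog"
theme = "some-theme"
```

From there, `hugo server -D` previews the site locally and `hugo` builds the static output into `public/` — that static output is what ends up on Github Pages.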
I previously had my blog hosted using Ghost, which, if you aren’t aware of it, is a node application similar to a lighter-weight Wordpress. The downside is that you need to run the node process to use it, so you need a server or VPS somewhere. For a low-traffic blog that’s a bit overkill, and since the content is static anyway, why would you want to run an active service for it?
Getting a remote SSL certificate from a server with openssl is pretty straightforward; it looks something like this:

openssl s_client -showcerts -connect www.dray.be:443

If you run that it will hang until the connection closes, though, since s_client never receives an EOF from your client, so adding a </dev/null at the end to redirect /dev/null to stdin fixes this. But if you’re connecting to a server with multiple domains hosted using SNI, it will only return the default certificate.
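For SNI hosts, you can tell s_client which hostname you’re after with -servername; a sketch (www.dray.be is the host from above, and the sed filter is just one way to trim the output down to the PEM blocks):

```shell
# Ask for a specific vhost's certificate via SNI (-servername),
# with </dev/null so s_client sees EOF and exits immediately.
openssl s_client -showcerts -connect www.dray.be:443 \
    -servername www.dray.be </dev/null 2>/dev/null |
  sed -n '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p'
```

The same trick works for piping the certificate straight into `openssl x509 -noout -text` to inspect it.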
I started using resin.io recently for a few small projects and love the idea of using docker as a deployment method. It lets you define your application and its requirements quite nicely, and in a relatively standardized way at that. But where it currently falls a little short is the ability to run multiple applications on a single node, although from what I’ve been seeing that is one of the most requested features and hopefully isn’t too far away.
GNU Parallel is a fantastic utility, and I’ve been using it more and more recently. Often I end up with a one-off task, write a quick 4-5 line bash script to do what I want, and that’s done. But sometimes there is a slow task that can be done in parallel, and that’s where it really shines. I recently wanted to make sure the 200-odd URLs in a html file were valid and returning 2xx responses, so I wrote a quick bash script to do so.
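A sketch of that kind of check, assuming GNU Parallel and curl are available (the function name, URL regex and job count are my own choices, not from the original script):

```shell
# Check every URL found in a html file and report non-2xx responses.
# Assumes GNU Parallel and curl are installed.
check_urls() {
  grep -Eo 'https?://[^"<> ]+' "$1" | sort -u |
    parallel -j20 'echo "$(curl -s -o /dev/null -w "%{http_code}" {}) {}"' |
    grep -v '^2'   # anything left is a problem URL
}
# Usage: check_urls page.html
```

curl’s `-w "%{http_code}"` with `-o /dev/null` prints just the status code, and parallel fans the requests out 20 at a time, which turns a minutes-long serial check into a few seconds.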
By default, Gnome lets you set a period of inactivity after which the system should suspend/hibernate/etc. This is fine for a desktop you’re actively using, but I also use Gnome on my media center, where it’s less than ideal. The use case I have is that I might play a 20-180 minute video throughout which I don’t want any power-saving features like screen dimming or sleep kicking in.
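One way to handle this is to wrap playback in gnome-session-inhibit, which holds the inhibit for exactly as long as the wrapped command runs; a sketch (the function name, flag set and mpv are my assumptions, not necessarily what I ended up with):

```shell
# Run a command with idle/suspend inhibited for its duration.
# Falls back to running the command directly when gnome-session-inhibit
# isn't available (e.g. outside a GNOME session).
play_inhibited() {
  if command -v gnome-session-inhibit >/dev/null 2>&1; then
    gnome-session-inhibit --inhibit idle:suspend "$@"
  else
    "$@"
  fi
}

# Usage: play_inhibited mpv movie.mkv
```

The nice property is that there is nothing to remember to undo: when the player exits, the inhibit is released and normal power saving resumes.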
All the time I see people trying to handle large numbers of files in the shell, and any of you who have tried this before will know that it is not pretty. Try doing an ls * in a folder with a few hundred thousand files and you’ll be lucky to have anything happen in a reasonable time frame. There are a few gotchas that apply to these sorts of situations. The first is that using ‘*’ in the command triggers shell globbing: the shell expands the ‘*’ into the full list of matching filenames before ls even runs. Take a folder structure like this, for example:
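A scaled-down sketch of the mechanics (the directory and file names are throwaway ones for illustration):

```shell
# Even 1000 files is enough to show what's going on.
dir=$(mktemp -d)
cd "$dir"
touch file{1..1000}

# 'ls *' makes the shell expand '*' into all 1000 names *before* ls runs;
# at a few hundred thousand files this is slow, and the expanded argument
# list can exceed the kernel's ARG_MAX limit, failing with
# "Argument list too long".

# find streams the entries out instead of building one giant argv:
find . -maxdepth 1 -type f | wc -l
```

The difference is that find writes names as it discovers them, so memory and argument-length limits never come into play no matter how many files there are.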
GPG has always been a bit of a double-edged sword. It’s fantastic in terms of security, reliability and ubiquity, sure, but it’s never been particularly easy to use. Once you get used to the CLI it’s not bad, though there is a bit of a learning curve, and finding the right person and the right key can require a bit of luck.