How a script to save myself time became a core application at work

Adam Savard
7 min read · Jan 16, 2022

When I was hired, there were three people at the company: one was the CEO, another handled business matters and HR, and the final person was my boss, the person I reported to directly. The CEO told me that my job was to “keep [my boss] sane” by offloading some of the programming work that had to be done.

Of course, being a fresh-faced not-yet-graduate, I didn’t really know what that meant. As I started to work, though, I began to see what the issues were.

It’s nobody’s fault that the tools at our disposal in Year One were pretty lackluster. The code had been rewritten from its original proof-of-concept stage (originally built in Visual Studio) into a pure JavaScript and PIXI.JS web application. Why move to PIXI.JS? Zero install required, which is something a government contractor adores. If all you have to do is whitelist a website instead of allowing an app installation, that’s easy. Of course it made sense to do this.

The code was actually pretty mature, I thought at the time, since there was a lot of it. I do mean a lot: somewhere between 10 and 12 thousand lines, by my estimation. I could go back to my earliest commits and check for sure, but that would mean facing my “hello world” days (and I’m not sure I’m mentally prepared for that).

Oh, you might be wondering what the application we’re developing is: imagine Sim City, but using real city data, simulating real people, and built entirely in JavaScript. (Sound crazy? The crazier thing is, it works. Not only that, it works well.)

Early Days

In those early days, development was simple; our data was a collection of JSON files which were included and swapped out at runtime, depending on what you selected from a pretty generic splash screen. Simple stuff, but not exactly scalable.
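Roughly speaking, the early data flow looked like the sketch below. This is a hypothetical reconstruction, not the real code; the scenario names and file paths are made up for illustration.

```javascript
// Hypothetical sketch of the early setup: the splash screen picks a scenario,
// and the app fetches the matching static JSON file at runtime.
const SCENARIOS = {
  downtown: 'data/downtown.json', // hypothetical file names
  suburbs: 'data/suburbs.json',
};

async function loadScenario(name) {
  const response = await fetch(SCENARIOS[name]);
  if (!response.ok) {
    throw new Error(`Could not load scenario "${name}": ${response.status}`);
  }
  return response.json(); // hand the parsed data to the simulation
}

loadScenario('downtown').then((data) => console.log('loaded', data));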

About six months into working there, a project we had contracted out came in: a shiny new API to help us manage our data. We were going to move from static JSON files to a database that could be modified much more easily. At the time, I wasn’t a huge fan of our choice of software (MongoDB); I had gone through school writing tons of SQL, so a real relational database was heavily imprinted on me. With an additional layer of abstraction through mongoose, though, my qualms were quickly put to rest. As an aside, I quickly learned why I was not in charge; there was no way I could have seen the big picture and how much this choice would impact us over time.
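For anyone unfamiliar with it, mongoose lets you describe documents with schemas and query them through models, which is what softened the blow of leaving SQL behind. A minimal sketch, with an entirely made-up “Building” model and connection string:

```javascript
// Minimal mongoose sketch -- the "Building" model, its fields, and the
// connection string are hypothetical, just to show the schema/model layer.
const mongoose = require('mongoose');

const buildingSchema = new mongoose.Schema({
  name: { type: String, required: true },
  type: String, // e.g. "residential", "commercial"
  population: Number,
});

const Building = mongoose.model('Building', buildingSchema);

async function main() {
  await mongoose.connect('mongodb://localhost:27017/city');
  const residential = await Building.find({ type: 'residential' });
  console.log(`Found ${residential.length} residential buildings`);
  await mongoose.disconnect();
}

main().catch(console.error);
```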

There was an issue with the API as I saw it, though: we didn’t have many routes for managing the data, mostly just retrieval. We actually populated our local instances through Postman, which made sense for 5 or 6 entries. The more work we did, though, the more I felt we needed to automate things. Efficient this was not, so I wanted to see if one task, populating a database from scratch, could be made more streamlined.

The Script

The initial version of the “population script”, as I dubbed it, was something written in a couple of hours in bash. I was on Linux, and I knew as much as a script kiddie would, so I took the initiative: we needed something for our server, which ran Linux, and I was sick of manually copy/pasting from Postman. After all, there was a lot of data and it was getting really annoying.

I presented it to my boss and was rather proud of it; it saved me a ton of time, since for testing purposes in the early days I would regularly nuke my local DB and repopulate from scratch. Oh, the joys of not knowing or caring about how to fix things…

My boss liked it, and asked how to run it. That’s when we hit a snag: I was (and still am) the only person running Linux in the company. Mac wasn’t a problem, since the bash scripts were extremely simple, but Windows? This was before WSL was easy to install, before Windows Terminal. There was no convincing anyone to install Linux, either: if it didn’t work for Windows, then we needed another solution.

The Script 2: Electric Boogaloo

So my boss gave me a task: instead of Bash, try writing the script in JavaScript. Make it a Node app; at least that way, if something went wrong, it would be easier to debug and fix (since we both knew JS, and only I was vaguely familiar with Bash). Alright then, to Node it is!

It was here that I started to learn Node JS; up until this point, I’d used it to serve things through Express, but that was it. No other experience. I found out that you could make a console application that could read and write files, send HTTP requests, the whole nine yards. In fact, I pretty much copy/pasted the auto-generated Node output from Postman.
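The core of that first Node version was nothing fancier than the sketch below. This is a hypothetical reconstruction: the route, file layout, and use of node-fetch are assumptions (the Postman-generated code would have used whatever HTTP client it emitted at the time).

```javascript
// populate.js -- hypothetical sketch of the first Node version:
// read a static JSON file and POST each entry to the API, one at a time.
const fs = require('fs');
const fetch = require('node-fetch'); // assumed client; any HTTP library works

const API_URL = 'http://localhost:3000/api/entries'; // assumed route

async function populate() {
  const entries = JSON.parse(fs.readFileSync('data/entries.json', 'utf8'));
  for (const entry of entries) {
    const res = await fetch(API_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(entry),
    });
    console.log(`${entry.name || 'entry'}: ${res.status}`);
  }
}

populate().catch(console.error);
```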

It worked! I didn’t know anything about modifying package.json, and the entire thing was pretty much one giant file, but that would do for now! So “node populate.js” became a regular thing we would type into our terminals, and all was well. That is, until we hired more devs and our data doubled overnight; we were also creating new data on the fly that couldn’t easily be stored in the static JS files I’d written. We needed another solution.

The Script 3: With A Vengeance

About 2 months into working with our new hires, my boss came up with an idea: rewrite the population script and make it more legible, modular and robust. Alright, easy enough.

One other criterion: add database version control.

Basically, we were asked to write a tool that could transfer data between two databases: a development DB to production, say, or a local dev DB to a cloud-hosted one. I didn’t know if it was possible without a ton of work validating the data. I had very little knowledge of how HTTP requests were handled. I was on my second rewrite of the small tool I thought would be a one-off. Well… you have to try, right?

And try we did. What we came up with was basically exactly what was asked of us: a tool that made populating a database easier. We had a “master” copy stored in the cloud that anyone with the tool could access. If you pulled from it, your local DB would be modified. If you pushed to it, the master DB would be modified. Basically, we made middleware for the middleware, or git (albeit really bad git) for databases. Once it was complete it felt glorious: robust and reliable, and we used it for well over a year. But something was rotten, and we only started smelling it in late August of 2021.
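In spirit, the push/pull logic boiled down to something like this sketch. It is a deliberate simplification; the endpoints, collection names, and the naive “wipe and replace” strategy are all assumptions, not the real tool.

```javascript
// Hypothetical sketch of the "git for databases" idea: pull replaces your
// local data with the master copy, push replaces master with your local data.
const fetch = require('node-fetch');

const MASTER_URL = 'https://master.example.com/api'; // assumed endpoints
const LOCAL_URL = 'http://localhost:3000/api';

async function getAll(baseUrl, collection) {
  const res = await fetch(`${baseUrl}/${collection}`);
  return res.json();
}

async function replaceAll(baseUrl, collection, documents) {
  // Naive "last write wins": wipe the target, then re-create every document.
  await fetch(`${baseUrl}/${collection}`, { method: 'DELETE' });
  for (const doc of documents) {
    await fetch(`${baseUrl}/${collection}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(doc),
    });
  }
}

async function pull(collection) {
  const masterDocs = await getAll(MASTER_URL, collection);
  await replaceAll(LOCAL_URL, collection, masterDocs);
}

async function push(collection) {
  const localDocs = await getAll(LOCAL_URL, collection);
  await replaceAll(MASTER_URL, collection, localDocs);
}

pull('buildings').catch(console.error);
```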

The Script 4: An Unexpected (yet completely warranted) Journey

So you know how you’ll be working on a project and only test the happy path? Well, that’s what we ended up doing with our shiny new CLI solution. There were bugs. There was zero validation. There was next to no error checking, and even better, sometimes it would say “success!” when nothing had even been transferred. To top it all off, it was slow: the last time I checked, an update took around 400 requests, each one made synchronously. Yeesh.

On top of that, we had added functionality to the app in a “make it work” fashion. You should never, ever do that, especially if you come to depend on the software you made.

After a few instances of the tool just not working, and of me, one of the lead devs who had created it, saying “I have no idea what I’m looking at”, my boss and I came to the same conclusion: we couldn’t keep using this. We needed a refactor. I went one better: we needed a complete reimplementation with extensibility and modularity in mind.

So for the last two weeks, that’s what I’ve done. I took our old tool and chucked it in the bin; it will forever be lost to git history. I started by reimplementing our old functionality, but with meaningful error messages. I added comments to the code (shocking, I know), created documentation, and simplified and streamlined the request process using the node-fetch API… Best of all, I created a new feature: pass in a flag to update only what’s been changed, and the API will do just that, asynchronously. We went from 5–10 minute commands to under 90 seconds. It was so fast that we couldn’t believe it. I still don’t, in all honesty.
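The two changes that bought most of that speed-up were diffing first, so only changed documents get sent, and firing those requests concurrently instead of one at a time. A rough sketch of the idea, where the endpoint, the `_id`-based diff, and the PUT route are assumptions:

```javascript
// Hypothetical sketch of the rewrite's two big wins: send only the documents
// that actually changed, and send them concurrently rather than one by one.
const fetch = require('node-fetch');

const API_URL = 'http://localhost:3000/api'; // assumed endpoint

// Keep only documents whose contents differ from what the target already has.
function diff(sourceDocs, targetDocs) {
  const targetById = new Map(targetDocs.map((d) => [d._id, d]));
  return sourceDocs.filter((doc) => {
    const existing = targetById.get(doc._id);
    return !existing || JSON.stringify(existing) !== JSON.stringify(doc);
  });
}

async function updateChanged(collection, sourceDocs, targetDocs) {
  const changed = diff(sourceDocs, targetDocs);
  // Fire the updates concurrently instead of awaiting each one in sequence.
  const results = await Promise.all(
    changed.map((doc) =>
      fetch(`${API_URL}/${collection}/${doc._id}`, {
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(doc),
      })
    )
  );
  console.log(`${results.length} documents updated`);
}
```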

I’m very proud of what I wrote over the last couple of weeks. It’s probably the best software I’ve ever written, and something I’ll keep going back to and tweaking because I love how it turned out. But does that mean this is the last time we’ll do a ground-up rewrite? Probably not.

What I’ve learned over the last few years of writing these tools is that your needs will change; you might need one tool today, but tomorrow you might need another. Maybe it doesn’t scale well, maybe there’s a better framework to use, and so on. But this is why I love software development at startups: you can develop a personal-use tool, and suddenly everyone else wants to use it, so you get to write something that you’re passionate about. For me, it’s terminal applications; they’re my first love. Simple, elegant, and they make me feel like hackerman while I use them. Take Dave Plummer and Task Manager as another example: something he wrote for himself became a tool used by millions of people every single day.

You’re never going to write perfect software. Heck, a lot of what you write is going to be garbage. But it’s moments like these where you get to finally fix the thing that’s been irking you that make it all worth it.


Adam Savard

A software developer living in Canada, with experience in JavaScript and the Pixi.JS framework