Clojure app setup for Auto-deploy with raw systemd


  • REPLACED [2022-11-11 Fri] The below is hopefully informative, but it actually only causes a thing to deploy once and then to re-deploy on system restart. For instructions that ACTUALLY auto-deploy, see https://tech.toryanderson.com/2022/11/11/systemd-devops-run-and-restart-services/

  • Updated [2022-09-19 Mon] Fixed error in deploy script that occurred if trying to restart but nothing was in the docket

  • Updated [2022-07-13 Wed] Enhanced the server-side deploy script to operate more transparently if files are missing. Also noted enabling of the systemd files, and cautioned not to open ports.

In this post, as much for my own notes as for anyone else’s instruction, I detail the setup of a deployment process for a long-running Clojure application. As background, we use Apache on Ubuntu Linux servers and I will be deploying uberjar files. We deploy both staging versions, for our clients to see and play with, and production versions once the staging version is approved. You will need full admin privileges on the server.

Suggestions are welcome.

This documentation includes instructions for:

  • Apache reverse-proxying
  • SystemD .service and .path files
  • Shell-scripting

Decide on the desired port

Ports in the 3000 range are usually safe for internal services, and any future apps on this server will each get their own port in the same range. In this case I chose port 3001 as my target, and future apps will be 3002, 3003, etc. This will come into play when we set up our reverse proxy and our startup scripts.
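Before committing to a port, it is worth checking that nothing is already listening on it. Here is a quick sketch (my own helper, not from the original setup) using bash’s built-in /dev/tcp device, so it needs no extra tools; run it with bash, not sh:

```shell
#!/bin/bash
# port_free: succeed if nothing on localhost accepts a connection on the port.
# Uses bash's /dev/tcp virtual device; a refused connection means the port is free.
port_free() {
    ! (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for port in 3001 3002 3003; do
    if port_free "$port"; then
        echo "port $port is free"
    else
        echo "port $port is in use"
    fi
done
```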

Build local deployment script that utilizes systemd startup

We make two shell scripts: one on our development (i.e. local) machine, which just builds and ships our uberjar from our code, and one that resides on the server, which receives the uberjar, backs up the old one before replacement, and then swaps the new one into its place.

Local publish_staging.sh

This lives in my project directory and utilizes ssh aliases in my ssh config so there are no passwords, so it’s easy to include in my version control. Note that I have a build profile for “Staging” that specifies details like which database to use. A similar one exists for prod, and its deploy file will look just like this. That one would end up being called publish_prod.sh.

#!/bin/bash
### Publishes the staging profile
lein clean
lein with-profile staging uberjar
scp target/MYAPP.jar humdev:/srv/webapps/MYAPP/docket/
# run start script here
ssh -t humdev "/srv/webapps/MYAPP/deploy.sh"
echo "placed on humdev and started"
exit 0

You see that the ssh line is executing a script on the humdev server called “deploy.sh”.

Build Server deployment script

On the server we have two stages of implementation: backup the existing jar-file into a dated location, and then deploy a new one to replace it. These are handled in one script, which is the same one called by our local deployment script above. We’ll be leveraging systemd to redeploy whenever the jar file is replaced.

It makes sense for our action to take place in our /srv/webapps/MYAPP/ directory, under which we create two other directories:

  • /docket where the local deployment will place its new jar file before it is swapped into action
  • /archives where we will put the backup of the running application that is being replaced
  • Our actual running jar will be at the surface level above these two directories, and will be called simply MYAPP.jar.
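The one-time layout creation can be sketched as a small function (the name make_layout is mine, not from the post); parameterizing on the root lets you try it in a scratch directory before pointing it at /srv/webapps:

```shell
#!/bin/bash
# make_layout: create the deployment directory skeleton under the given root.
make_layout() {
    local root="$1"
    mkdir -p "$root/docket" "$root/archives"
}

# On the real server:
# make_layout /srv/webapps/MYAPP
```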

Server deploy.sh

With those directories created, we also create our deploy.sh. It will tag and archive and then replace the actionable jar file:

#!/bin/bash
deployment_path='/srv/webapps/MYAPP'
date=$(date +%Y.%m.%d.%T)
filename="MYAPP.jar"
archive_filename="$filename.$date"
deployment_file="$deployment_path/$filename"
docket_file="$deployment_path/docket/$filename"

# Archive the existing jar, but only if a new one is waiting to replace it
if [[ -f "$docket_file" && -f "$deployment_file" ]]; then
    mv "$deployment_file" "$deployment_path/archives/$archive_filename" &&
        echo "File archived: $archive_filename"
else
    echo "No file to archive or no file in docket."
fi

# Deploy the new jar, if one is actually in the docket
if [[ -f "$docket_file" ]]; then
    mv "$docket_file" "$deployment_file" &&
        echo "deployment archived and repositioned"
fi

Remember to use absolute paths in the script; otherwise you may have issues with finding paths when you try to run the script remotely with ssh -t later.
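To sanity-check the archive-and-swap logic without touching the server, the same steps can be rehearsed in a throwaway directory (a sketch of my own; the v1/v2 text files stand in for real jars):

```shell
#!/bin/bash
# Rehearse the archive-and-swap in a scratch directory.
root=$(mktemp -d)
mkdir -p "$root/docket" "$root/archives"
echo v1 > "$root/MYAPP.jar"         # pretend this version is already live
echo v2 > "$root/docket/MYAPP.jar"  # pretend a new build just arrived

# Same logic as deploy.sh, just pointed at $root
if [[ -f "$root/docket/MYAPP.jar" && -f "$root/MYAPP.jar" ]]; then
    mv "$root/MYAPP.jar" "$root/archives/MYAPP.jar.$(date +%Y.%m.%d.%T)"
fi
[[ -f "$root/docket/MYAPP.jar" ]] && mv "$root/docket/MYAPP.jar" "$root/MYAPP.jar"

cat "$root/MYAPP.jar"   # prints v2: the new build is live, the old one archived
```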

Build SystemD startup file for .jar file

On the destination server I create two SystemD files for the application: the actual service, which determines how to start the thing, and a Path file, which causes it to restart whenever the jar is changed on disc (ie, whenever a new version is deployed).

Initialize user and groups

Add your desired users and groups.

groupadd -r appmgr
useradd -r -s /bin/false -g appmgr jvmapps
# id jvmapps
# uid=451(jvmapps) gid=449(appmgr) groups=449(appmgr)

.service file

I placed the service in the default systemd directory: /etc/systemd/system/MYAPP.service. This file tells systemd how to start my jar file, giving it the desired port.

[Unit]
Description=MYAPP Service

[Service]
Environment=MYAPP_PORT='3001'
WorkingDirectory=/srv/webapps/MYAPP
ExecStart=/usr/bin/java -Xms128m -Xmx256m -jar MYAPP.jar -p ${MYAPP_PORT}
User=jvmapps
Type=simple
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

.path file

This file tells systemd to redeploy when the jar file is changed, which happens whenever we place our new version into position. It goes in the same directory as the service file and is linked to it via the “Wants” line under [Unit]. We trigger a redeploy whenever there is a change to /srv/webapps/MYAPP/MYAPP.jar, the location we’ve decided to put our app.

[Unit]
Wants=MYAPP.service

[Path]
PathChanged=/srv/webapps/MYAPP/MYAPP.jar

[Install]
WantedBy=multi-user.target

Enable services and tell systemd about changes

We reload the daemon so it knows about our new service and path files, then enable both units so they come up at system startup.

$> sudo systemctl daemon-reload # tell systemd about the new unit files
$> sudo systemctl enable MYAPP.service # start our actual program every boot
$> sudo systemctl enable MYAPP.path # turn on the file watcher for future changes
$> sudo systemctl start MYAPP.service # start the app right now, without waiting for a reboot

With this we are almost done – our app will always be running on the server (as long as it runs at all) and will update whenever we put a new version of the jar file into the designated location. Now to make the last step, building the apparatus that takes our local code and results in a new jar going to the right place.

Build Apache conf file and enable

The conf file needs a consistent naming convention. I like this one: address-PORT.conf. For example, for this app it’s v2.MYAPP.byu.edu-3001.conf. Also make sure you have proxy enabled with sudo a2enmod proxy. You might need to enable other proxy mods as well, like proxy_html and proxy_http. They should all be included in your install of Apache, so no external downloads are necessary.

<VirtualHost *:80>
        ServerAdmin webmaster@localhost

        ServerName v2.MYAPP.byu.edu

# DON'T FORGET TRAILING SLASH!
        ProxyPass / http://127.0.0.1:3001/
        ProxyPassReverse / http://127.0.0.1:3001/

        ErrorLog ${APACHE_LOG_DIR}/MYAPP-error.log
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/MYAPP-access.log combined
</VirtualHost>

After that, just remember to enable the file with sudo a2ensite v2.MYAPP.byu.edu-3001, followed by an Apache reload with sudo apachectl graceful. Assuming you’ve worked out your DNS appropriately (which is beyond the scope of this post), you are ready to visit your site once you’ve deployed.

  • Gotcha: if you forget the trailing slash in the ProxyPass directives, only requests for the front page will succeed; all resources and deeper routes will fail because they are passed as, e.g. for a resource at /assets/css, <Site>assets/css instead of the desired <Site>/assets/css. Your browser will simply show proxy errors, but your server logs may show errors like

[Tue Sep 01 04:14:37.081598 2020] [proxy:error] [pid 17880] [client 10.0.82.130:38780] AH00898: DNS lookup failure for: 127.0.0.1:3001scripts returned by scripts/sigma.min.js, referer: http://v2.MYAPP.byu.edu

Notice the DNS lookup failure for: 127.0.0.1:3001scripts.

Conclusion & Further Work

Voila! We are up and running: simply re-run the local publish_staging.sh to trigger updates, and visitors will always see our latest staging build at v2.MYAPP.byu.edu. A similar process on a different server will serve our production build at MYAPP.byu.edu.

For our heavy-duty apps we overlay a Jenkins server on this, triggered by certain code pushes to Git; instead of the local build step above, Jenkins owns its own checkout of the code, which it builds and deploys.

Gotcha! Tips & Warnings

  • Ensure your server has java installed so it can run java -jar
  • Ensure your server has Apache ModProxy enabled so it can reverse-proxy: sudo a2enmod proxy
  • Take care that your apache conf has a trailing / at the end of its proxy lines
  • Ensure that appropriate users and groups exist before you specify them in your service file
    • Make sure your target files and directories have the right permissions/ownership with this user/group
  • Make sure your DNS provider is actually pointing the desired URL at your server
  • DON’T enable the specified port publicly on the server. The actual port that your apache is reverse-proxying to is for local access only; do not enable this to the internet.

Tory Anderson
Full-time Web App Engineer, Digital Humanist, Researcher, Computer Psychologist