Git-hooks and web development
17th June 2016 - Web, Programming, DevLog
In a previous post I wrote about how this website was created and how the content is managed with CouchCMS. However, there is more to this site than just the textual content. The recent post about my Nim implementation of TinyWM made the need for syntax-highlighted code apparent, and other recently added features include the home and RSS-feed buttons along with the functionality behind them. What all of these have in common is the need to modify the HTML, CSS, and/or JavaScript of the site. CouchCMS touts itself as a CMS for developers, which means that instead of a hosted interface for making such changes, as typically seen in layman-oriented CMSs, everything is done locally with your own tools. In fact, CouchCMS itself is also updated in a rather manual fashion by replacing files. To make such changes live, the files have to be pushed to the server. Typically this could be done over FTP, but that is rather bothersome, and since most of my projects are stored in Git repositories residing on the same server, I decided to look into Git's hook system.
The merits of Git hooks
For those not aware, Git supplies you with an event-based system for carrying out tasks on your repository. The events are called hooks, and they are simply scripts placed in a hooks folder, with their names deciding which event each script applies to. The hook that suits this purpose best is the post-receive hook. It fires as soon as a push operation has completed, passing information about the updated branches over stdin. There are many such hooks, activated at different points during various Git operations, and they serve as a powerful tool for completing tasks on Git repositories server-side.
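As a minimal illustration (the full hook used for this site is listed further down), a post-receive hook is nothing more than an executable script named post-receive in the repository's hooks directory (.git/hooks/ in a normal repository, hooks/ in a bare one), and for every updated branch it receives one line on stdin:
#!/bin/bash
# Minimal post-receive hook: report every ref that was just pushed.
# Each line on stdin has the form: <old-sha1> <new-sha1> <refname>
while read oldrev newrev refname
do
    echo "Ref $refname moved from $oldrev to $newrev"
done
Remember that Git only runs the hook if the file is executable, so a chmod +x is needed after creating it.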
The needs of the server
Simply cloning the repository into the HTTP-reachable part of the server wouldn't be a good idea, since anyone would then be able to browse the Git repository itself. Therefore the script only copies the content of the repository, not any of the metadata. As mentioned earlier, the hook also receives information over stdin about which branches were pushed. This allows me to have two monitored branches: one for production and a separate one for development. The development branch deploys to a different address and lets me test out changes on the server without interfering with potential visitors. The repository also contains quite a bit of CSS and JavaScript serving various purposes, and in a production environment there is no use for things such as optional whitespace and pretty, human-readable formatting. So when the production branch is pushed, the server automatically runs a server-side CSS/JavaScript compressor on any .css and .js files found within the repository. This lets me find bugs in the development branch, where line numbers still make sense, while sparing visitors the extra data transfer for this non-critical content.
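For completeness, the surrounding setup is the usual bare-repository pattern: the repository on the server is bare, the hook is dropped into its hooks folder, and the monitored branches are pushed to it from the development machine. A rough sketch, where the repository path, remote name, and address are placeholders rather than the actual ones used for this site:
# On the server: create a bare repository and install the hook (placeholder paths)
git init --bare ~/repos/website.git
cp post-receive ~/repos/website.git/hooks/post-receive
chmod +x ~/repos/website.git/hooks/post-receive

# On the development machine: add the server as a remote and push the monitored branches
git remote add server git@example.com:repos/website.git
git push server development    # deployed to the testing address by the hook
git push server production     # deployed to the live address and compressed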
Summary
All in all this solution works pretty well. The production branch can be pushed to whenever I have finished a feature, and the files are optimised automatically, while the development branch can be used for live testing. Any other branch is ignored and can therefore be used for working on new features in parallel, which can then be merged into the development branch for testing. One addition I might make in the future, should it prove to be a problem, is a branch that runs the compression but does not deploy to the main address. This could be used to detect any erroneous compressions that would break the site.
Note that this is far from a comprehensive overview of what can be done with Git hooks; in fact it uses only one of the many hooks Git supplies. Other uses that frequently get mentioned are automatic alerts for new commits to a shared repository, managing a web interface for the repository (akin to your own private GitHub), or even automatic server-side builds and test runs. The scripts follow below for convenience.
#!/bin/bash
# Read the pushed refs from stdin, one "<oldrev> <newrev> <refname>" line per branch
while read oldrev newrev refname
do
    # Resolve the branch name and set variables accordingly (locations removed).
    # All unknown branches are skipped.
    BRANCH=$(git rev-parse --symbolic --abbrev-ref "$refname")
    if [ "$BRANCH" == "development" ]; then
        UPDATE_DIRECTORY=<testing-directory>
        ACTIVE_BRANCH=development
    elif [ "$BRANCH" == "production" ]; then
        UPDATE_DIRECTORY=<live-directory>
        ACTIVE_BRANCH=production
    else
        continue
    fi
    # Set the directory and copy the files from the active branch
    GIT_WORK_TREE=$UPDATE_DIRECTORY git checkout "$ACTIVE_BRANCH" -f
    # If this is the production branch, run the compressor script from that directory
    if [ "$ACTIVE_BRANCH" == "production" ]; then
        (cd "$UPDATE_DIRECTORY" && ~/compress)
    fi
    # Set more web-friendly permissions on the files (actual permissions removed)
    find "$UPDATE_DIRECTORY" -type f -exec chmod <file-permissions> {} \;
    find "$UPDATE_DIRECTORY" -type d -exec chmod <directory-permissions> {} \;
done
And this is the compress script run by the hook above. It is placed in the home folder of the git user so that it can be shared across repositories.
#!/bin/bash
# Walk all files except those in the couch subdirectory (there are a lot of files there
# and nothing to be done with them, so skipping them just saves time)
while IFS= read -r f; do
    echo "Checking $f with extension ${f##*.}"
    # If the file extension is js or css the compressor runs on the file
    # and the output overwrites the original.
    if [ "${f##*.}" == "js" ]; then
        COMPRESSED=$(java -jar ~/yuicompressor-2.4.8.jar --type js "$f")
        echo "$COMPRESSED" > "$f"
    elif [ "${f##*.}" == "css" ]; then
        COMPRESSED=$(java -jar ~/yuicompressor-2.4.8.jar --type css "$f")
        echo "$COMPRESSED" > "$f"
    fi
done < <(find . -type f -not -path "./couch/*")
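As a side note, since the compressor only depends on the current working directory and the jar in the git user's home folder, it can also be tried out by hand on a throwaway checkout before trusting it in the hook. The repository address and scratch path below are only placeholders:
# Run the compressor manually against a scratch copy of the site (placeholder paths)
git clone <repository-address> /tmp/site-test
cd /tmp/site-test
~/compress
# Inspect the minified .css/.js files here before relying on the hook in production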