29a.ch experiments by Jonas Wagner


Recent Articles

Source Maps with Grunt, Browserify and Mocha


If you have been using source maps in your projects you have probably noticed that they do not work with all tools. For instance, if I run my browserified tests with mocha I get something like this:

ReferenceError: dyanmic0 is not defined
    at Context.<anonymous> (http://0.0.0.0:8000/test/tests.js:4342:44)
    at callFn (http://0.0.0.0:8000/test/mocha.js:4428:21)
    at timeslice (http://0.0.0.0:8000/test/mocha.js:5989:27)
    ...

That's not exactly helpful. After some searching I found a node module that solves this problem: node-source-map-support. It's easy to use and magically makes things work.

Simply:

npm install --save-dev source-map-support

and then add this somewhere in your initialization code:

require('source-map-support').install();

I place it in a file called dev.js that I include in all development builds.
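For the stack traces to map back to the original sources the Browserify bundle needs to be built with source maps enabled in the first place. With grunt-browserify that's just the debug flag, which inlines the source map into the bundle. A rough sketch, with made-up task and file names (depending on your grunt-browserify version the option sits either under browserifyOptions or directly under options):

// Gruntfile.js (sketch) - build a development bundle with an inline source map
module.exports = function (grunt) {
    grunt.initConfig({
        browserify: {
            dev: {
                src: ['src/dev.js', 'src/main.js'],
                dest: 'build/tests.js',
                options: {
                    // newer grunt-browserify versions pass this straight to browserify,
                    // older ones expect debug: true directly under options
                    browserifyOptions: { debug: true }
                }
            }
        }
    });
    grunt.loadNpmTasks('grunt-browserify');
    grunt.registerTask('default', ['browserify:dev']);
};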

Now you get nice stack traces in mocha, jasmine, q and most other tools:

ReferenceError: dyanmic0 is not defined
    at Context.<anonymous> (src/physics-tests.js:44:1)
    ...

Nicely enough, this also works together with Q's long stack traces:

require('source-map-support').install();
var Q = require('q');
Q.longStackSupport = true;
Q.onerror = function (e) {
    console.error(e && e.stack);
};

function theDepthsOfMyProgram() {
  Q.delay(100).then(function(){
  }).done(function explode() {
    throw new Error("boo!");
  });
}

Calling theDepthsOfMyProgram() will result in:

Error: boo!
    at explode (src/dev.js:12:1)
From previous event:
    at theDepthsOfMyProgram (src/dev.js:11:1)
    at Object./home/jonas/dev/sandbox/atomic-action/src/dev.js.q (src/dev.js:16:1)
    ...

That's more helpful. :) Thank you Evan!


Desktop rdiff-backup Script


I have recently revamped the way I back up my desktop. In this post I document the thoughts that went into this. This is mostly for myself, but you might still find it interesting.

To encrypt or not to encrypt

I do daily incremental backups of my desktop to an external hard drive. This drive is unencrypted.

Encrypting your backups has obvious benefits - it protects your data from falling into the wrong hands. But at the same time it also makes your backups much more fragile. A single corrupted bit can spell disaster for anything from a single block to your entire backup history. You also need to find a safe place to store a strong key - no easy task.

Most of my data I'd rather have stolen than lost. A lot of it is open source anyway. :)

The data that I'd rather lose than have fall into the wrong hands (mostly keys) is stored and backed up in encrypted form only. For this I use gpg-agent and ecryptfs.

Encrypting only the sensitive data rather than the whole disk increases the risk of it being leaked. Recovering those leaked keys would however require a fairly powerful adversary, one that would have other ways of getting its hands on that data anyway, so I consider this strategy a good tradeoff.

As a last line of defense I have an encrypted disk stored away offsite. I manually update it a few times a year to reduce the chance of losing all of my data in case of a break-in, fire or another catastrophic event.

Before showing you the actual backup script I'd like to explain why I'm back to using rdiff-backup for my backups.

Duplicity vs rdiff-backup vs rsync and hardlinks

Duplicity and rdiff-backup are some of the most popular options for doing incremental backups on Linux (ignoring the more enterprisey stuff like Bacula). rsnapshot, which uses rsync and hardlinks, is another one.

The main drawback of using rsync and hardlinks is that it stores a full copy of every file each time it changes. This can be a good tradeoff, especially when fast random access to historic backups is needed. Combined with snapshots this is what I would most likely use for backing up production servers, where getting back some (or all) files of a specific historic version as fast as possible is usually what is needed. For my desktop, however, incremental backups are more of a backup of a backup. Fast access is not needed, but I want to have the history around just in case I get the order of the -iname and -delete arguments to find wrong again without noticing.
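For reference, that find pitfall works like this: the expressions are evaluated left to right, so putting -delete first removes files before the name test ever gets applied.

# intended: delete only stale editor backup files
find . -iname '*~' -delete

# arguments swapped: -delete acts on everything find visits,
# the -iname test never gets a chance to filter
find . -delete -iname '*~'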

Duplicity backs up your data by producing compressed (and optionally encrypted) tars that contain diffs against a full backup. This allows it to work with dumb storage (like S3) and makes encrypted backups relatively easy. However, if even just a few bits get corrupted, any backups made after the corruption can become unreadable. This can be somewhat mitigated by doing frequent full backups, but that takes up space and increases the time needed to transfer backups.

rdiff-backup works the other way around. It always stores the most recent version of your data as a full mirror, so you can just cp that one file you need in a pinch. Increments are stored as 'reverse diffs' from the most current version, so if a diff is corrupted only historic data is affected, and corruption of one file only affects that file, which is what I prefer.
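Getting data back out is straightforward too. A quick sketch with made-up file paths: the current mirror can be copied directly, and older versions are restored with --restore-as-of (-r):

# the latest version is just a plain file in the mirror
cp /media/backup0/fortress-home/some/file ~/restored-file

# restore the state from 10 days ago
rdiff-backup -r 10D /media/backup0/fortress-home/some/file ~/restored-file

# list the increments that are available
rdiff-backup --list-increments /media/backup0/fortress-home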

The Script


Most backup scripts you find on the net are written for backing up servers or headless machines. For backing up desktop Linux machines the most popular solution seems to be deja-dup, which is a frontend for duplicity.

As I want to use rdiff-backup I hacked together my own script. Here is roughly what it does:

  • Mounts backup device by label via udisks
  • Communicates start of backup via desktop notifications using notify-send
  • Runs backup via rdiff-backup
  • Deletes old increments after 8 weeks
  • Communicates errors or success via desktop notifications

#!/bin/bash
BACKUP_DEV_LABEL="backup0"
BACKUP_DEV="/dev/disk/by-label/$BACKUP_DEV_LABEL"
BACKUP_DEST="/media/$BACKUP_DEV_LABEL/fortress-home"
BACKUP_LOG="$HOME/.local/tmp/backup.log"
BACKUP_LOG_ERROR="$HOME/.local/tmp/backup.err.log"
# delay backup a bit after the login
sleep 3600
# unmount if already mounted, ensures it's always properly mounted in /media
udisks --unmount $BACKUP_DEV
# Mounting disks via udisks, this doesn't require root
udisks --mount $BACKUP_DEV 2> $BACKUP_LOG_ERROR > $BACKUP_LOG
notify-send -i document-save Backup Started
rdiff-backup --print-statistics --exclude /home/jonas/Private --exclude MY_OTHER_EXCLUDES $HOME $BACKUP_DEST 2>> $BACKUP_LOG_ERROR >> $BACKUP_LOG
if [ $? != 0 ]; then
    echo "BACKUP FAILED!"
    # notification
    MSG=$(tail -n 5 $BACKUP_LOG_ERROR)
    notify-send -u critical -i error "Backup Failed" "$MSG"
    # dialog
    notify-send -u critical -t 0 -i error "Backup Failed" "$MSG"
    exit 1
fi
rdiff-backup --remove-older-than 8W $BACKUP_DEST
udisks --unmount $BACKUP_DEV
STATS=$(cat $BACKUP_LOG|grep '^Errors\|^ElapsedTime\|^TotalDestinationSizeChange')
notify-send -t 1000 -i document-save "Backup Complete" "$STATS"

This script runs whenever I log in. I added it via the Startup Applications settings in Ubuntu.
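Under the hood that just drops a .desktop file into ~/.config/autostart. If you prefer to create it by hand it looks roughly like this (the path to the script is just an example):

# ~/.config/autostart/backup.desktop
[Desktop Entry]
Type=Application
Name=Backup
Exec=/home/jonas/bin/backup.sh
X-GNOME-Autostart-enabled=true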

The backup ignores the ecryptfs Private folder but does include the encrypted .Private folder, thereby only backing up the ciphertexts of sensitive files.

I like using disk labels for my drives. The disk label can easily be set using e2label:

e2label /dev/sdc backup0

The offsite backups are done by manually mounting the LUKS-encrypted disk and running a simple rsync script. I might migrate this to Amazon Glacier at some point.
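The whole offsite procedure boils down to something like this sketch (device name and mount point are placeholders, not my actual setup):

# unlock and mount the offsite disk
sudo cryptsetup luksOpen /dev/sdX1 offsite
sudo mount /dev/mapper/offsite /mnt/offsite

# mirror the home directory, preserving hardlinks, ACLs and xattrs,
# again skipping the mounted plaintext Private folder
rsync -aHAX --delete --exclude Private/ $HOME/ /mnt/offsite/fortress-home/

# unmount and lock again
sudo umount /mnt/offsite
sudo cryptsetup luksClose offsite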

I hope this post is useful to someone including future me. ;)


smartcrop.js ken burns effect


This is an experiment that multiple people have suggested to me after I showed them smartcrop.js. The idea is to let smartcrop pick the start and end viewports for the ken burns effect. This could be useful for automatically creating slide shows from a bunch of photos. Given that smartcrop.js is designed for a different task, it does work quite well. But see for yourself.

I'm sure it could be much improved by actually trying to zoom in on the center of interest rather than just having it in frame. The actual animation was implemented using CSS transforms and transitions. If you want to have a look, you can find the source code on github.
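The core of such an animation is very little code. This is not the experiment's actual source, just a sketch of the approach: compute a transform that maps a crop rectangle onto the viewport and let a CSS transition interpolate between the start and end crop.

// sketch: animate an element between two crop rectangles
// (e.g. picked by smartcrop.js) using CSS transforms and transitions
function cropToTransform(crop, viewportWidth) {
    var scale = viewportWidth / crop.width;
    return 'scale(' + scale + ') translate(' + -crop.x + 'px,' + -crop.y + 'px)';
}

function kenBurns(img, startCrop, endCrop, duration) {
    var viewportWidth = img.parentNode.clientWidth;
    img.style.transformOrigin = '0 0';
    img.style.transition = 'none';
    img.style.transform = cropToTransform(startCrop, viewportWidth);
    // force a reflow so the start state is applied before the transition begins
    void img.offsetWidth;
    img.style.transition = 'transform ' + duration + 'ms ease-in-out';
    img.style.transform = cropToTransform(endCrop, viewportWidth);
}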


Introducing smartcrop.js


Image cropping is a common task in many web applications. Usually just cutting out the center of the image works out OK, but it's often a compromise and sometimes it fails miserably.


[Example image: Evelyn by AehoHikaruki]

Can we do better than that? I wanted to try.

Smartcrop.js is the result of my experiments with content-aware image cropping. It uses fairly simple image processing and a few rules to attempt to create better crops of images.

This library is still in its infancy but the early results look promising. So, true to the open source mantra of release early, release often, I'm releasing version 0.0.0 of smartcrop.js.

Source Code: github.com/jwagner/smartcrop.js

Examples: test suite with over 100 images and test bed to test your own images.

Command line interface: github.com/jwagner/smartcrop-cli
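Usage is deliberately simple. This is only a rough sketch; check the README for the exact API of the release you are using:

// rough usage sketch - see the README for the exact API
var img = document.getElementById('photo');
SmartCrop.crop(img, {width: 100, height: 100}, function (result) {
    // result.topCrop is the suggested crop rectangle: {x, y, width, height}
    console.log(result.topCrop);
});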


Wild WebGL Raymarching


It's been way too long since I released a demo, so the time was ripe to have some fun again. This time I looked into raymarching distance fields. I found that I got some wild results by limiting the number of samples taken along the rays.

Demo

View the demo

Behind the scenes

If you are interested in the details, view the source. I left it unminified for you. The interesting stuff is mainly in the fragment shader.

Essentially the scene is just an infinite number of spheres arranged in a grid. If it is properly sampled it looks pretty boring:

[Screenshot of the properly sampled (boring) version]

Yes, I do love functions gone wild and glitchy. :)
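If you don't want to dig through the shader, the basic ingredients look roughly like this. It's a sketch of the idea, not the demo's actual code: a distance function that repeats a sphere over a grid using mod, and a march loop whose sample count is kept deliberately low.

// sketch, not the demo's shader
float sphereGrid(vec3 p) {
    // repeat space every 4 units and put a unit sphere in each cell
    vec3 q = mod(p, 4.0) - 2.0;
    return length(q) - 1.0;
}

vec3 march(vec3 origin, vec3 dir) {
    float t = 0.0;
    // a properly sampled march would use many more iterations;
    // keeping this number low is what produces the wild, glitchy results
    const int SAMPLES = 16;
    for (int i = 0; i < SAMPLES; i++) {
        float d = sphereGrid(origin + dir * t);
        if (d < 0.001) break;
        t += d;
    }
    return origin + dir * t;
}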


New Website


Updating this website has been long overdue. It has been running for way too long on zine, a blog system that has not been maintained since 2009. After looking for a replacement and not finding anything I liked, I decided that it would be fun to write my own. ;)

I tried hard not to break any existing content. If something is not working anymore, let me know.

A few details about the system

My new website is built with a static website generator and served using nginx. This has the benefit of speed and trivial deployments via rsync. Comments are now handled with disqus.

The system is fairly simple: it takes index.html and meta.json files from content/ and indexes them into a bunch of JSON files in data/. The HTML content is then processed using Cheerio; this involves fixing relative links and extracting meta data. After this step the pages are rendered using a few Jade templates. All of this is held together by a small set of Grunt tasks.
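To give you an idea, the indexing step boils down to something like the following sketch. It is not the actual build code, and the helper and field names (like meta.slug) are made up:

// sketch of the indexing step, not the actual build code
var fs = require('fs');
var path = require('path');
var cheerio = require('cheerio');

function indexPage(dir) {
    var html = fs.readFileSync(path.join(dir, 'index.html'), 'utf8');
    var meta = JSON.parse(fs.readFileSync(path.join(dir, 'meta.json'), 'utf8'));
    var $ = cheerio.load(html);

    // fix relative links so the content can be rendered at its final URL
    $('a[href], img[src]').each(function () {
        var el = $(this);
        var attr = el.is('img') ? 'src' : 'href';
        var value = el.attr(attr);
        if (value && !/^(https?:)?\/\//.test(value) && value.charAt(0) !== '/') {
            el.attr(attr, '/' + path.join(meta.slug, value));
        }
    });

    // extract meta data, e.g. a summary for the article listing
    return {
        title: meta.title,
        summary: $('p').first().text(),
        html: $.html()
    };
}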

Goodbye old website


Fixing bash autocomplete on ubuntu 13.04


The updated git package in Ubuntu 13.04 changed the way bash completion works for git, which resulted in the following error:

completion: function `_git' not found.

This is because git completion now uses autoloading, which does not work with aliases. The solution is to simply source the completion function manually:

# from my bashrc
alias g='git'
source /usr/share/bash-completion/completions/git
complete -o default -o nospace -F _git g

