29a.ch experiments by Jonas Wagner


Recent Articles

Javascript Film Emulation

Written by Jonas Wagner

I hacked together a little analog film emulation tool in Javascript. It's based on the awesome work of Pat David. I wrote it mainly to play with some new tech but I liked the result enough to share it with you. You can try it here:

example image
View the Film Emulator

It also works on Android phones running Chrome, so give it a try!

How the Film Emulation works

I guess the most interesting part for most people is the actual film emulation code. It uses color lookup tables (cluts).

So in simplistic terms:

For every pixel in the image
Take its color values r, g, b
Look up its new color value in the lookup table
r', g', b' = colorLookupTable[r, g, b]
Set the pixel to the color values (r', g', b')

In practice there are a few more considerations. Most cluts don't contain values for all 16 777 216 (2^24) colors in the RGB space. A simplistic solution to this problem would be to always just use the closest color (nearest-neighbor interpolation). This is fast but results in very ugly banding artifacts.

So to keep things fast I use random dithering for the previews and trilinear filtering for the final output. The random dithering is probably a suboptimal choice, but it was easy to implement.
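
To make this concrete, here is a minimal sketch of the preview path as described above: nearest-neighbor lookup with a bit of random dithering. This is my own simplification for illustration, assuming the clut is a flat Uint8Array with clutSize entries per axis; it is not the actual film emulator code.

// Sketch: apply a 3D color lookup table to canvas ImageData using
// nearest-neighbor lookup plus a little random dithering to hide banding.
// The flat Uint8Array layout and `clutSize` are illustrative assumptions,
// not the format used by the actual film emulator.
function applyClut(imageData, clut, clutSize) {
    var data = imageData.data;
    var scale = (clutSize - 1) / 255;
    for (var i = 0; i < data.length; i += 4) {
        // the jitter breaks up the banding that plain rounding would cause
        var r = Math.round(data[i] * scale + Math.random() - 0.5),
            g = Math.round(data[i + 1] * scale + Math.random() - 0.5),
            b = Math.round(data[i + 2] * scale + Math.random() - 0.5);
        r = Math.min(Math.max(r, 0), clutSize - 1);
        g = Math.min(Math.max(g, 0), clutSize - 1);
        b = Math.min(Math.max(b, 0), clutSize - 1);
        // index into the flat (r fastest, then g, then b) layout
        var offset = ((b * clutSize + g) * clutSize + r) * 3;
        data[i] = clut[offset];
        data[i + 1] = clut[offset + 1];
        data[i + 2] = clut[offset + 2];
        // alpha (data[i + 3]) is left untouched
    }
    return imageData;
}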

You can find more details about how the lookup tables were created on Pat David's website.

Technology

As stated at the beginning I wrote this application to play with new technology, so there is a lot going on in this little application.

The entire code is written in Javascript (ES6 to be precise), which is then converted to more mainstream Javascript using babel.js.
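
Just to illustrate the kind of ES6 that babel.js compiles down to plain ES5 for older browsers (a made-up snippet, not code from the film emulator):

// ES6 features like const, arrow functions and classes, which babel.js
// rewrites into equivalent ES5. The names here are hypothetical.
const clamp = (value, min, max) => Math.min(Math.max(value, min), max);

class FilmPreset {
    constructor(name, clut) {
        this.name = name;
        this.clut = clut;
    }
}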

It uses the canvas API to access the pixel data of images and then processes it in web workers for parallelism, using transferable objects to avoid copies.
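
Roughly, the hand-off to a worker looks like this. The file name and message shape are made up for illustration and are not taken from the actual source:

// Sketch: send the pixel buffer of a canvas to a worker without copying it.
// 'filter-worker.js' and the message format are illustrative assumptions.
var canvas = document.querySelector('canvas');
var ctx = canvas.getContext('2d');
var worker = new Worker('filter-worker.js');
var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);

// The second argument is the transfer list: ownership of the underlying
// ArrayBuffer moves to the worker instead of being copied.
worker.postMessage({
    width: imageData.width,
    height: imageData.height,
    buffer: imageData.data.buffer
}, [imageData.data.buffer]);

worker.onmessage = function(e) {
    // copy the processed pixels back into the canvas
    var result = ctx.createImageData(e.data.width, e.data.height);
    result.data.set(new Uint8ClampedArray(e.data.buffer));
    ctx.putImageData(result, 0, 0);
};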

WebGL would obviously also be suitable for this task; I might even write an implementation in the future.

The css makes heavy use of flexible boxes and is written in scss. The icon font was generated using fontello.

The whole thing is built using grunt and browserify.

Of course these are just a few of the bits of tech that I played with to make this happen. If you want to know even more, just look at the source.

Source Code

You can find the source code of this tool on github. The code is not licensed under an open source license and does not come with all the data files, in order to prevent lazy people from just copying everything and pretending it is their own work. You are of course free to study the code and take bits and pieces; I consider this fair use. Just attribute them to me properly. If you have grander plans for it and the lack of a license prevents you from following up on them, feel free to contact me.


Full-text search example using lunr.js

Written by Jonas Wagner

I did a little experiment today. I added full-text search to this website using lunr.js. Lunr is a simple full-text search engine that can run inside of a web browser using Javascript.

Lunr is "a bit like Solr, but much smaller and not as bright", as the author Oliver beautifully puts it.

With it I was able to add full text search to this site in less than an hour. That's pretty cool if you ask me. :)

You can try out the search function I built on the articles page of this website.

I also enabled source maps so you can see how I hacked together the search interface. But let me give you a rough overview.

Indexing

The indexing is performed when I build the static site. It's pretty simple.

// dependencies used at build time
var lunr = require('lunr');
var cheerio = require('cheerio');
var fs = require('fs');

// create the index
var index = lunr(function(){
    // boost increases the importance of words found in this field
    this.field('title', {boost: 10});
    this.field('abstract', {boost: 2});
    this.field('content');
    // the id
    this.ref('href');
});

// this is a store with some document meta data to display
// in the search results.
var store = {};

entries.forEach(function(entry){
    index.add({
        href: entry.href,
        title: entry.title,
        abstract: entry.abstract,
        // hacky way to strip html, you should do better than that ;)
        content: cheerio.load(entry.content.replace(/<[^>]*>/g, ' ')).root().text()
    });
    store[entry.href] = {title: entry.title, abstract: entry.abstract};
});

fs.writeFileSync('public/searchIndex.json', JSON.stringify({
    index: index.toJSON(),
    store: store
}));

The resulting index is 1.3 MB; gzipping brings it down to a more reasonable 198 KB.

Search Interface

The other part of the equation is the search interface. I went for some simple jQuery hackery.

jQuery(function($) {
    var index,
        store,
        data = $.getJSON(searchIndexUrl);

    data.then(function(data){
        store = data.store;
        // create index
        index = lunr.Index.load(data.index);
    });

    $('.search-field').keyup(function() {
        var query = $(this).val();
        if(query === ''){
            jQuery('.search-results').empty();
        }
        else {
            data.then(function(data) {
                // perform the search once the index has been loaded
                var results = index.search(query);
                $('.search-results').empty().append(
                    results.length ?
                    results.map(function(result){
                        // wrap each result in a container element;
                        // calling .after() on a detached node would
                        // silently drop the abstract
                        var el = $('<div>').append($('<p>')
                            .append($('<a>')
                                .attr('href', result.ref)
                                .text(store[result.ref].title)
                            )
                        );
                        if(store[result.ref].abstract){
                            el.append($('<p>').text(store[result.ref].abstract));
                        }
                        return el;
                    }) : $('<p><strong>No results found</strong></p>')
                );
            }); 
        }
    }); 
});

Learn More

If you want to learn more about how lunr works I recommend reading this article by the author.

If you want to learn even more about search, then I can recommend this great free book on the subject: Introduction to Information Retrieval by Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze.


Source Maps with Grunt, Browserify and Mocha

Written by Jonas Wagner

If you have been using source maps in your projects you have probably also noticed that they do not work with all tools. For instance, if I run my browserified tests with mocha I get something like this:

ReferenceError: dyanmic0 is not defined
    at Context.<anonymous> (http://0.0.0.0:8000/test/tests.js:4342:44)
    at callFn (http://0.0.0.0:8000/test/mocha.js:4428:21)
    at timeslice (http://0.0.0.0:8000/test/mocha.js:5989:27)
    ...

That's not exactly very helpful. After some searching I found a node module to solve this problem: node-source-map-support. It's easy to use and magically makes things work.

Simply install the module:

npm install --save-dev source-map-support

and then add this somewhere in your initialization code:

require('source-map-support').install();

I place it in a file called dev.js that I include in all development builds.
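
For this to be useful the bundle needs to contain source maps in the first place. With grunt-browserify that roughly means turning on browserify's debug flag. Here is a sketch of such a Gruntfile; the paths and task names are placeholders, and depending on the grunt-browserify version the flag may live directly under options as debug:

// Gruntfile.js sketch: have browserify emit inline source maps.
// Paths and task names are illustrative, not from an actual project.
module.exports = function(grunt) {
    grunt.initConfig({
        browserify: {
            dev: {
                src: ['src/main.js'],
                dest: 'build/bundle.js',
                options: {
                    browserifyOptions: {
                        debug: true // inline source maps in the bundle
                    }
                }
            }
        }
    });
    grunt.loadNpmTasks('grunt-browserify');
    grunt.registerTask('default', ['browserify:dev']);
};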

Now you get nice stack traces in mocha, jasmine, q and most other tools:

ReferenceError: dyanmic0 is not defined
    at Context.<anonymous> (src/physics-tests.js:44:1)
    ...

Nicely enough this also works together with Q's long stack traces:

require('source-map-support').install();
var Q = require('q');
Q.longStackSupport = true;
Q.onerror = function (e) {
    console.error(e && e.stack);
};

function theDepthsOfMyProgram() {
  Q.delay(100).then(function(){
  }).done(function explode() {
    throw new Error("boo!");
  });
}

This will result in:

Error: boo!
    at explode (src/dev.js:12:1)
From previous event:
    at theDepthsOfMyProgram (src/dev.js:11:1)
    at Object./home/jonas/dev/sandbox/atomic-action/src/dev.js.q (src/dev.js:16:1)
    ...

That's more helpful. :) Thank you Evan!


Desktop rdiff-backup Script

Written by Jonas Wagner
screenshot

I have recently revamped the way I backup my desktop. In this post I document the thoughts that went into this. This is mostly for myself but you might still find it interesting.

To encrypt or not to encrypt

I do daily incremental backups of my desktop to an external hard drive. This drive is unencrypted.

Encrypting your backups has obvious benefits - it protects your data from falling into the wrong hands. But at the same time it also makes your backups much more fragile. A single corrupted bit can spell disaster for anything from a single block to your entire backup history. You also need to find a safe place to store a strong key - no easy task.

Most of my data I'd rather have stolen than lost. A lot of it is open source anyway. :)

The data that I'd rather lose than have fall into the wrong hands (mostly keys) is stored and backed up in encrypted form only. For this I use the gpg agent and ecryptfs.

Encrypting only the sensitive data rather than the whole disk increases the risk of it being leaked. Recovering those leaked keys would however require a fairly powerful adversary, which would have other ways of getting at that data anyway, so I consider this strategy to be a good tradeoff.

As a last line of defense I have an encrypted disk stored away offsite. I manually update it a few times a year to reduce the chance of losing all of my data in case of a break-in, fire or another catastrophic event.

Before showing you the actual backup script I'd like to explain why I'm back to using rdiff-backup for my backups.

Duplicity vs rdiff-backup vs rsync and hardlinks

Duplicity and rdiff-backup are some of the most popular options for doing incremental backups on Linux (ignoring the more enterprisey stuff like Bacula). Using rsnapshot, which relies on rsync and hardlinks, is another one.

The main drawback of using rsync and hardlinks is that it stores a full copy of every file when it changes. This can be a good tradeoff, especially when fast random access to historic backups is needed. Combined with snapshots this is what I would most likely use for backing up production servers, where getting back some (or all) files of a specific historic version as fast as possible is usually what is needed. For my desktop, however, incremental backups are more of a backup of a backup. Fast access is not needed, but I want to have the history around just in case I get the order of the -iname and -delete arguments to find wrong again without noticing.

Duplicity backs up your data by producing compressed (and optionally encrypted) tars that contain diffs against a full backup. This allows it to work with dumb storage (like S3) and makes encrypted backups relatively easy. However, if even just a few bits get corrupted, any backups made after the corruption can become unreadable. This can be somewhat mitigated by doing frequent full backups, but that takes up space and increases the time needed to transfer backups.

rdiff-backup works the other way around. It always stores the most recent version of your data as a full mirror, so you can just cp that one file you need in a pinch. Increments are stored as reverse diffs from the most current version, so if a diff is corrupted only historic data is affected. Corruption of a file will only corrupt that file, which is what I prefer.

The Script

screenshot

Most backup scripts you find on the net are written for backing up servers or headless machines. For backing up desktop Linux machines the most popular solution seems to be deja-dup, which is a frontend for duplicity.

As I want to use rdiff-backup I hacked together my own script. Here is roughly what it does:

  • Mounts backup device by label via udisks
  • Communicates start of backup via desktop notifications using notify-send
  • Runs backup via rdiff-backup
  • Deletes old increments after 8 weeks
  • Communicates errors or success via desktop notifications.
#!/bin/bash
BACKUP_DEV_LABEL="backup0"
BACKUP_DEV="/dev/disk/by-label/$BACKUP_DEV_LABEL"
BACKUP_DEST="/media/$BACKUP_DEV_LABEL/fortress-home"
BACKUP_LOG="$HOME/.local/tmp/backup.log"
BACKUP_LOG_ERROR="$HOME/.local/tmp/backup.err.log"
# delay backup a bit after the login
sleep 3600
# unmount if already mounted, ensures it's always properly mounted in /media
udisks --unmount $BACKUP_DEV
# Mounting disks via udisks, this doesn't require root
udisks --mount $BACKUP_DEV 2> $BACKUP_LOG_ERROR > $BACKUP_LOG
notify-send -i document-save Backup Started
rdiff-backup --print-statistics --exclude /home/jonas/Private --exclude MY_OTHER_EXCLUDES $HOME $BACKUP_DEST 2>> $BACKUP_LOG_ERROR >> $BACKUP_LOG
if [ $? != 0 ]; then
{
    echo "BACKUP FAILED!"
    # notification
    MSG=$(tail -n 5 $BACKUP_LOG_ERROR)
    notify-send -u critical -i error "Backup Failed" "$MSG"
    # dialog
    notify-send -u critical -t 0 -i error "Backup Failed" "$MSG"
    exit 1
} fi
rdiff-backup --remove-older-than 8W $BACKUP_DEST
udisks --unmount $BACKUP_DEV
STATS=$(cat $BACKUP_LOG|grep '^Errors\|^ElapsedTime\|^TotalDestinationSizeChange')
notify-send -t 1000 -i document-save "Backup Complete" "$STATS"

This script runs whenever I login. I added it via the Startup Applications settings in Ubuntu.

The backup ignores the ecryptfs Private folder but does include the encrypted .Private folder thereby only backing up the cipher texts of sensitive files.

I like using disk labels for my drives. The disk label can easily be set using e2label:

e2label /dev/sdc backup0

For the offsite backup I manually mount the LUKS-encrypted disk and run a simple rsync script. I might migrate this to Amazon Glacier at some point.

I hope this post is useful to someone including future me. ;)


smartcrop.js ken burns effect

Written by Jonas Wagner

This is an experiment that multiple people have suggested to me after I showed them smartcrop.js. The idea is to let smartcrop pick the start and end viewports for the ken burns effect. This could be useful to automatically create slide shows from a bunch of photos. Given that smartcrop.js was designed for a different task, it works quite well. But see for yourself.

I'm sure it could be much improved by actually trying to zoom in on the center of interest rather than just having it in frame. The actual animation was implemented using css transforms and transitions. If you want to have a look you can find the source code on github.
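
To sketch the idea (this is not the actual demo code): given two crop rectangles picked by smartcrop, the start and end states of the animation can be expressed as CSS transforms on the image, roughly like this, assuming the crops have the same aspect ratio as the viewport:

// Sketch: animate an image between two crop rectangles using CSS
// transforms and transitions. Crops are {x, y, width, height} and are
// assumed to match the viewport's aspect ratio.
function transformForCrop(crop, viewportWidth) {
    var scale = viewportWidth / crop.width;
    return 'scale(' + scale + ') translate(' + (-crop.x) + 'px, ' + (-crop.y) + 'px)';
}

function kenBurns(img, startCrop, endCrop, viewportWidth, duration) {
    img.style.transformOrigin = '0 0';
    // jump to the start crop without animating
    img.style.transition = 'none';
    img.style.transform = transformForCrop(startCrop, viewportWidth);
    // force a reflow so the start state is applied before transitioning
    void img.offsetWidth;
    img.style.transition = 'transform ' + duration + 'ms ease-in-out';
    img.style.transform = transformForCrop(endCrop, viewportWidth);
}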


Introducing smartcrop.js

Written by Jonas Wagner

Image cropping is a common task in many web applications. Usually just cutting out the center of the image works out ok, but it's often a compromise and sometimes it fails miserably.


Evelyn by AehoHikaruki

Can we do better than that? I wanted to try.

Smartcrop.js is the result of my experiments with content aware image cropping. It uses fairly simple image processing and a few rules to attempt to create better crops of images.

This library is still in its infancy but the early results look promising. So true to the open source mantra of release early, release often, I'm releasing version 0.0.0 of smartcrop.js.

Source Code: github.com/jwagner/smartcrop.js

Examples: test suite with over 100 images and test bed to test your own images.

Command line interface: github.com/jwagner/smartcrop-cli
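
If you just want to see what using it looks like, here is a rough usage sketch. The promise-based call shown follows the documented API; the exact calling convention of this early release may differ:

// Sketch: ask smartcrop.js for a 100x100 crop suggestion for an image
// element and draw the chosen region onto a canvas.
var img = document.querySelector('img');
smartcrop.crop(img, {width: 100, height: 100}).then(function(result) {
    var crop = result.topCrop; // {x, y, width, height, score}
    var canvas = document.createElement('canvas');
    canvas.width = 100;
    canvas.height = 100;
    canvas.getContext('2d').drawImage(
        img,
        crop.x, crop.y, crop.width, crop.height,
        0, 0, canvas.width, canvas.height
    );
    document.body.appendChild(canvas);
});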


Wild WebGL Raymarching

Written by Jonas Wagner

It's been way too long since I've released a demo. So the time was ripe to have some fun again. This time I looked into raymarching distance fields. I found that I got some wild results by limiting the number of samples taken along the rays.

Demo

screenshot
View the demo

Behind the scenes

If you are interested in the details, view the source. I left it unminified for you. The interesting stuff is mainly in the fragment shader.

Essentially the scene is just an infinite number of spheres arranged in a grid. If it is properly sampled it looks pretty boring:

boring

Yes, I do love functions gone wild and glitchy. :)
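
To give a flavour of what the shader does, here is the core idea sketched in plain JavaScript rather than GLSL (the constants and vector handling are illustrative, not taken from the demo): a distance function for an infinite grid of spheres, and a march along the ray capped at a fixed number of samples. Lowering that cap is what produces the wild, glitchy look.

// Sketch of the core idea in JavaScript (the demo does this in a GLSL
// fragment shader). Vectors are plain {x, y, z} objects here.
var SPACING = 4.0;  // grid spacing, illustrative value
var RADIUS = 1.0;   // sphere radius, illustrative value

function mod(a, n) {
    return ((a % n) + n) % n;
}

// Distance from point p to the nearest sphere in an infinite grid:
// wrap the point into one grid cell, then measure against a sphere
// centered in that cell.
function sceneDistance(p) {
    var x = mod(p.x, SPACING) - SPACING / 2;
    var y = mod(p.y, SPACING) - SPACING / 2;
    var z = mod(p.z, SPACING) - SPACING / 2;
    return Math.sqrt(x * x + y * y + z * z) - RADIUS;
}

// March along the ray, but cap the number of samples. With a generous
// cap this converges to the boring grid of spheres; with a low cap the
// rays overshoot and "miss", which is where the glitches come from.
function raymarch(origin, dir, maxSamples) {
    var t = 0;
    for (var i = 0; i < maxSamples; i++) {
        var p = {
            x: origin.x + dir.x * t,
            y: origin.y + dir.y * t,
            z: origin.z + dir.z * t
        };
        var d = sceneDistance(p);
        if (d < 0.001) return t; // hit
        t += d;                  // step forward by the safe distance
    }
    return -1; // no hit within the sample budget
}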

