Git visualization with gource

May 7th, 2012 by latz.twn

Are you using git, svn, mercurial or bazaar as your version control system, and have you ever wanted to visualize how your project developed over time? Gource is there to visualize all of this in a beautiful way. It takes the history of your svn/git/mercurial/bazaar repository and visualizes the changes over time, who made them, and so forth. On Debian/Ubuntu you can install it with:

sudo apt-get install gource

Now run the following, with /path/to/project being your project's root directory; note that you point gource at the .git subfolder. Run it and you should see the animation.

gource /path/to/project/.git/

Now, to export this to an MPEG-4 video, do the following:

gource /path/to/project/.git/ --stop-at-end --output-ppm-stream - | ffmpeg -y -b 6000k -r 60 -f image2pipe -vcodec ppm -i - -vcodec mpeg4 /tmp/gource.mp4

Here's an example I created from one of my projects.

Posted in Linux | No Comments »

Monitoring memory on Solaris

April 24th, 2012 by exhuma.twn

I am currently writing a new munin plugin to monitor memory usage on Solaris machines. Strangely, the existing plugins are fairly useless. The script is currently running on a test machine; if the results are satisfactory, I'll post them here. Stay tuned.

Posted in Linux | No Comments »


April 20th, 2012 by exhuma.twn

Yesterday evening I enabled comments on this blog via Disqus. This should make commenting easier and more accessible in the future.

Existing comments have been put into the Disqus importer queue and, according to Disqus, they should appear after 24 hours. So nothing is lost!

Posted in Uncategorized | 1 Comment »

Custom bash completion for fabric tasks

March 20th, 2012 by exhuma.twn

Here’s a small bash function to provide TAB-completion for fabric tasks.

Simply add the following to your ~/.bashrc:

You may already have a block like `if [ -f /etc/bash_completion ]` in there; in that case, simply add the extra line inside that block.
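Such a function can be sketched as follows; this is a minimal version, assuming fabric 1.x, where `fab --shortlist` prints one task name per line:

```shell
# Complete fabric task names on TAB. Assumes fabric 1.x, whose
# `fab --shortlist` prints the available task names, one per line.
_fab_complete() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    COMPREPLY=( $(compgen -W "$(fab --shortlist 2>/dev/null)" -- "$cur") )
}
complete -F _fab_complete fab
```

After re-sourcing your ~/.bashrc, typing `fab de<TAB>` completes to any task starting with "de".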

Have fun

Posted in Coding Voodoo, Linux | 2 Comments »

First steps with the closure library

March 1st, 2012 by exhuma.twn

I recently switched work, and after a lot of JavaEE development for the past 5 years, I am finally back home: web development. During my Java time, I did some small web bits here and there, mainly to keep up with evolution, and followed the massive move towards more JavaScript-heavy development. During my free time I took baby steps with a couple of JavaScript libraries, starting with Prototype, then MochiKit and MooTools to get some visual effects done. Without doubt, I preferred MochiKit because of its similarity to Python. I also dipped into Dojo because of its nice HTML form widgets, and fooled around with backbone.js. But eventually I ended up with jQuery, simply because it currently has the backing of The Community. All I've done in all of these libraries can only be summarized as "wetting my feet". At some point I consumed all the Crockford material I could find, and boy was that eye-opening. I realized I was not aware of the most important aspects of JavaScript (especially: "Everything is global! Use Namespaces!")!

The next thing that caught my eye was CoffeeScript. It allowed me to write pythonesque code and compile it right down to proper JavaScript.

So what has this to do with the closure library? Well, as stated above, I recently started web development again, and some parts of the project made it obvious that a good dash of JavaScript would help a lot. So I needed to decide which library to base my work on. My first reflex was jQuery. But out of sheer coincidence I stumbled across a blog post from someone discussing jQuery and closure. Unfortunately, I don't have the link anymore 🙁 The TL;DR of that blog post was, however (paraphrasing):

If I had known about closure before, I would have used it instead of jQuery, because it makes it easier to structure the source code.

This made me more than curious. So the next thing I did was read the closure tutorials, and also read the API top to bottom. I wanted to know what was in there before deeming it useful. And boy did it look interesting! The most interesting feature, as the author of said blog post pointed out, is the ability to easily structure your code. Using goog.provide and goog.require you can very easily split your JS into multiple, well-structured files, and the closure compiler will then re-combine them into one file, all properly ordered. To me, this feels a lot like writing Java or Python, which both allow you to structure your code very well. This also lets you write modules which can in turn be re-used with ease in other projects (or which you could publish).

For now, the only thing I miss from jQuery is its easy DOM querying. But I can happily live without it.

With my current experience, I can see the following benefits:

  • Makes it easy to structure code
  • The optional linter forces you to write proper, clean code, especially with the --strict flag. Using it right from the start enforces a uniform coding style, which makes the code more readable!
  • The compiler solves dependency chains with the help of goog.provide and goog.require, and can optionally minify your code.
  • The library feels very professional. It even has a Logger loosely inspired by java.util.logging, and it does exactly what you expect it to! This is awesome for debugging!
  • The library source code is very readable

And some disadvantages:

  • The community is much smaller than the one around jQuery. But so far I could figure out all I needed by reading the code.
  • It does not seem to have versioned downloads; you have to check out SVN trunk. I would prefer to be able to say "I am basing my work on version 1.0" instead of "trunk".
  • Instead of linking one CSS file (as with jQuery), you have to link multiple stylesheets and figure out which ones yourself. But given that the filenames are self-explanatory, that's not too difficult.
  • There are no predefined CSS themes. Something like jQuery's theme builder would be nice. But I assume the Closure Stylesheets project should make writing the CSS files easier. I have not yet looked into that one!

As I progress with the library I’ll post my findings…

Here are two related links I found while looking for the mentioned blog post:

Posted in JavaScript | No Comments »

JScript to query scheduled tasks

June 9th, 2011 by exhuma.twn

Soon, we will need to send out notifications as soon as something goes wrong with a scheduled task on Windows. The following JScript file runs natively on Windows and is capable of just that. It uses the command-line tool "schtasks" to query the information and wraps the result into a list of usable object instances.

It’s possible to use this list to react to important events in the job executions. For example, you could loop through the list and send emails to the appropriate people if the variable “lastResult” is non-zero.
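For comparison, the same check can be sketched in a Unix shell by parsing schtasks' CSV output directly. The column positions used here (2 = task name, 7 = last result) are an assumption based on `schtasks /query /v` and should be verified on your Windows version:

```shell
# Print every scheduled task whose last run returned a non-zero exit code.
# Column positions in the CSV output are assumptions; check them with
# `schtasks /query /v /fo csv` on your machine.
failed_tasks() {
    schtasks /query /v /fo csv /nh |
        awk -F'","' '$7 != "0" { print $2 ": exit code " $7 }'
}
```

You could pipe the result of `failed_tasks` into a mail command to notify the appropriate people.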

Posted in Coding Voodoo | No Comments »

Windows script to remove old files

June 8th, 2011 by exhuma.twn

Simple script… still, I thought I’d share…
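The original is a Windows script; as a rough sketch, the same idea in a Unix shell looks like this (directory and age cutoff are parameters, the defaults are illustrative):

```shell
# Delete all files older than a given number of days from a directory.
purge_old_files() {
    local dir="${1:?directory required}"
    local days="${2:-30}"   # default cutoff: 30 days (illustrative)
    find "$dir" -type f -mtime +"$days" -print -delete
}
```

For example, `purge_old_files /var/log/myapp 14` removes files older than two weeks (the path is, of course, just an example).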

Posted in Coding Voodoo | No Comments »

Change JPA EntityManager connection properties at Runtime

December 30th, 2010 by exhuma.twn

There are many situations in which you want to use different connection options for a JPA EntityManager. The most obvious are different user credentials (think of a user login screen, re-using those credentials to connect to the DB), or distinguishing between development, testing and production environments.

However, if you let NetBeans create the persistence configuration, it will hardcode all connection parameters into the persistence.xml file. When you retrieve an EntityManager instance, it will use this information to connect.

If you would instead like to set these at runtime, you can do the following:

  1. Remove the “properties” tag from persistence.xml. This may not be strictly necessary, but it makes clear that the properties are set in code.
  2. Create a “Map<String, String>” which will contain the properties. A list of standard properties can be found in the JPA specification.
  3. Use this map to create an EntityManagerFactory, and use that factory to create your EntityManager.

An example persistence.xml without properties

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0"
             xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
  <persistence-unit name="myTestPU" transaction-type="RESOURCE_LOCAL">
  </persistence-unit>
</persistence>

In case NetBeans created a “getEntityManager” method, you can safely replace it. Here is an example I currently have in use. The “appConf” instance is a singleton I use to store configuration data in the user's home folder. And yes, the password is stored in plain text, but for this test case I did not need to go any further:

    private EntityManager getEntityManager() {
        Map<String, String> dbProps = new HashMap<String, String>();
        dbProps.put("eclipselink.logging.level",
                appConf.get("eclipselink.logging.level", "INFO").toString());
        // The "javax.persistence.jdbc.*" keys are the standard JPA 2.0
        // property names; the JDBC URL format below is illustrative.
        // On Linux, the GSSAPI is not available. Use a default user/password
        // pair to connect.
        if ("Linux".equals(System.getProperty("os.name"))) {
            dbProps.put("javax.persistence.jdbc.url", String.format(
                    "jdbc:postgresql://%s/%s",
                    appConf.get("db.host", "my-default-host"),
                    appConf.get("db.database", "my-default-db")));
            dbProps.put("javax.persistence.jdbc.user",
                    appConf.get("db.user", "my-default-username").toString());
            dbProps.put("javax.persistence.jdbc.password",
                    appConf.get("db.password", "my-default-password").toString());
        } else {
            dbProps.put("javax.persistence.jdbc.url", String.format(
                    "jdbc:postgresql://%s/%s",
                    appConf.get("db.host", "my-default-host"),
                    appConf.get("db.database", "my-default-db")));
        }
        EntityManagerFactory fact = Persistence.createEntityManagerFactory("myTestPU", dbProps);
        return fact.createEntityManager();
    }
This example uses EclipseLink. All available properties can be found in the EclipseLink wiki.

Posted in Coding Voodoo | 14 Comments »

Plain Text markup (HTML, reST, markdown & co)

September 8th, 2010 by exhuma.twn

Two personal quests I've been on since I started working are the hunt for a good note-taking system and the hunt for a good programming font. This post is about the former…

My preferred medium to write down notes of any kind has been plain-text for a long time. This feels to me like the digital equivalent of pencil & paper. Mostly…

The big advantage is that it lets you focus on what's important: content! But it comes with one important challenge: how do you represent section headers, bulleted lists, hyperlinks, … in plain text?

While writing plain text files is quick and easy, sometimes you still want to convert them to a more visually appealing form. Say, for example, you'd like to incorporate a file into a web page; then converting it to HTML would be nice.

Over the years I've dealt with quite a few systems for plain-text markup. Without question, the most widely known markup language is HTML. Others which come to mind are javadoc, phpDocumentor, ROBODoc and doxygen. While these are meant to document code, I still count them as markup languages, as they have constructs to add style or semantics to the content. All of them are pretty useless as note-taking tools, as they stubbornly assume you are generating text for code.

About ten years back, while writing my own CMS, I needed a way to let users mark up content. Allowing HTML was not an option because it was hard to edit and hard to explain to non-tech-savvy users, especially at a time when the Internet was still something most people considered either “Magic” or “The Work of The Devil”. I didn't need much, so I concocted my own markup language, and with not even a one-page introduction, people got productive with it.

A few years later I found what I then considered “The Holy Grail” of markup: markdown. It's really well designed, and the resulting plain-text “source” is very concise and clean. There's not a lot of “noise” in the document. It is a perfect candidate for a note-taking format. The only thing that is really missing is tables.

Only recently (about two years back) did I come across reST (reStructuredText). It's been around for quite some time, and I'm surprised I didn't hear of it earlier. If I had to sum it up in one word, it would be, without any hesitation: mature! The “source” you write is very clean and readable. Some parts (f. ex.: section headings) are even cleaner than in markdown, whereas other areas (f. ex.: hyperlinks) are not. But it is much more complete than markdown!

One thing neither markdown nor reST does very well is interlinking documents. This is where Sphinx comes in. Sphinx is a documentation generator which takes a collection of reST files as input. Completely understanding its inner workings is not entirely straightforward, but easy enough, and it pays off: it generates downright beautiful HTML and PDF (through LaTeX) documents.

In summary, I would prefer reST to anything else in almost any case. It's simple for basic cases, but complete enough when a document starts to grow. Unfortunately, I have not yet found convincing parsers for PHP, so in that case I'd go with Markdown Extra. If I needed to write really large documents, I'd consider Sphinx, primarily because the generated HTML pages include an offline JavaScript search, and because of the nice default style 😉

Posted in Babble | No Comments »

SPSS, MS-SQL2008 & bigint

March 31st, 2010 by exhuma.twn

There seems to be an issue with SPSS reading data from an MS-SQL-Server instance, notably with the SQL data type “bigint”. Assume the following SPSS syntax:

   /SQL = 'SELECT year FROM  my_table'

If the field in question (in this case: “year”) is of SQL type “bigint”, then SPSS will show these values in the majority of cases as “MISSING”. Sporadically some values appear, but they are completely wrong.

Once the cause is known (a problem with the “bigint” type), the solution is straightforward: cast the type to another appropriate type which is understood by SPSS. Which type you choose obviously depends on the values stored in the affected fields. Casting blindly to “int” may (I haven't tested this!) result in strange values if the values lie outside the “int” range (-2^31 to 2^31-1). In that case you may need to cast to something alphanumeric like “varchar” and re-cast it in SPSS to “Numeric”. As said, I haven't tested this, but I thought it worth mentioning!

So, here’s the above query with the appropriate cast:

   /SQL = 'SELECT CONVERT(int, year) AS year FROM  my_table'

Note also that in this case you need to add an alias for the column (“… AS year”), otherwise SPSS will name it “VXXX” (where XXX is a sequential number).

I have tested this solution on all combinations of SPSS 11.5, SPSS 18, SQL-Server 2008 64bit, SQL-Server 2008 Express 32bit. And casting the value worked every time.

Depending on your use case, it may be helpful to create views which do the casting. I have not tried this yet, but I don't see a reason why it shouldn't work. Additionally, it is noteworthy that I have so far only encountered this problem with “bigint”. There may be problems with other types as well; I expect casting them to something else should work there too.

Posted in Coding Voodoo | No Comments »
