12 hours to rate a Rails application

Posted in conference, rails, ruby by elisehuard on March 29, 2010

Here’s the presentation I gave this weekend.

Of course, just as I was about to start, my laptop died and refused to start again. Lost me some precious minutes, but fortunately I had a USB key with a backup, and my coworker Alain Ravet lent me his laptop (Julian Fischer also kindly offered, but his computer didn’t like my presentation). Murphy’s law in all its glory. After that, things went more or less smoothly.

If you had the opportunity to attend, I would really appreciate your feedback on speakerrate, as I’m going to give this talk another couple of times. I sensed when it worked, and when it didn’t, but the more input, the better I can fine-tune it.

Fscking up with git and how to solve it

Posted in Uncategorized by elisehuard on March 24, 2010

I’ve been using git for nearly two years now, but this job is the first time I’m using it with more than 2 people on the same project. And it’s different, I can tell you.

In this case you need a workflow: we adopted this one, also described here.
It’s a good workflow. But human error means that you can go wrong.

Forgetting to branch

The workflow demands that you do all your work on branches – that way you avoid merges on master, and can maintain a nice straight and clear master branch.
But, as you start working on a new feature, you might forget to do just that:

git checkout -b boldly_go

And start coding and committing on your local master.
What now? Well, git offers an easy solution. First, tag your latest commit (give it a label):

git tag boldly_go_tag

Then reset master back to its remote position. The tag keeps your commits reachable, so --hard is safe here; without the tag, --hard would throw your work away:

git reset --hard origin/master

Finally, create a branch starting from the tagged commit (which is no longer on any branch), and continue working there:

git checkout -b boldly_go boldly_go_tag
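Put together, the rescue can be replayed in a throwaway shell session. All repo, file and branch names below are made up for the demo:

```shell
# Demo: rescue commits accidentally made on master.
# Runs entirely in a scratch directory; names are illustrative.
set -e
base=$(mktemp -d)
cd "$base"

# A stand-in "remote" with one commit.
git init -q origin-repo
cd origin-repo
git config user.email demo@example.com
git config user.name Demo
echo v1 > file.txt
git add file.txt
git commit -qm "initial"
cd ..

# Clone it and accidentally commit on the default branch.
git clone -q origin-repo work
cd work
git config user.email demo@example.com
git config user.name Demo
branch=$(git symbolic-ref --short HEAD)   # "master" on older git, possibly "main" today
echo feature > feature.txt
git add feature.txt
git commit -qm "boldly going on master by mistake"

# The rescue: tag, hard-reset the branch, branch off the tag.
git tag boldly_go_tag
git reset -q --hard "origin/$branch"
git checkout -q -b boldly_go boldly_go_tag
```

After this, the default branch matches the remote again, and the stray commit lives on its own branch.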

My last commit message lacks poetry

Or I forgot to add this one file. Solution:
git commit -a --amend
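As a quick illustration (scratch repo, made-up names): --amend folds the staged changes into the previous commit and lets you rewrite the message.

```shell
# Demo of amending the last commit in a scratch repo.
set -e
d=$(mktemp -d)
cd "$d"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo one > file.txt
git add file.txt
git commit -qm "no poetry"

# Oops: forgot a file, and the message could be better.
echo two > forgotten.txt
git add forgotten.txt
git commit -q --amend -m "a single commit, now with poetry"
```

There is still only one commit afterwards, now containing both files and the new message.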

Forget I ever did that last commit

Careful: this throws away the changes you made in that commit. And only do this if you haven’t pushed the commit yet, of course:
git reset --hard HEAD^

Going off the map

As you work on your branch, you want to rebase regularly against the remote master, to minimize later merges and avoid surprises. After every commit:
git checkout master
git pull
git checkout boldly_go
git rebase master

Here’s where you can go very, very wrong: as long as a rebase is in progress, you’re NOT on a branch. So if you come back from a coffee break, forget you’re mid-rebase, and continue working, you’re committing into the void!
$> git branch
* (no branch)

If you then try to rebase again to get up to date with master, you won’t find your commits where you expected them. Oh noes! Where did my work go?
No panic (to be honest, I did panic a little): git keeps all your commits. You can find them in the .git/objects directory, if you care to have a look. What you just did is create a ‘dangling commit’, that is, a commit not linked to any branch. Fortunately, there’s the aptly named git fsck command:
git fsck | grep commit
(the grep helps separate the commits from the other dangling stuff). Then use the SHA associated with your commit to create a tag:
git tag sos 9a4e6286aa2e2bd97334ad35b555169c2d3033b4
git checkout -b not_so_bold_now sos

this creates a branch based on the tag, and you can continue working.

Another good one to know in this situation is
git reflog
This shows the commits HEAD has pointed to, dangling ones included, with their messages, making it easier to find the one you’re looking for. Same procedure for the rest.
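The whole rescue can be replayed in a scratch repository. All names and messages below are made up for the demo:

```shell
# Demo: create a dangling commit off-branch, then rescue it.
set -e
d=$(mktemp -d)
cd "$d"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo base > file.txt
git add file.txt
git commit -qm "base"

# Simulate working while not on any branch (as mid-rebase).
git checkout -q --detach
echo stranded > file.txt
git commit -qam "stranded work"
sha=$(git rev-parse HEAD)

# Going back to the branch leaves "stranded work" dangling...
git checkout -q -
# ...but the reflog still knows about it:
git reflog | grep -q "stranded work"

# Rescue: tag the SHA and branch from the tag, as in the post.
git tag sos "$sha"
git checkout -q -b not_so_bold_now sos
```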

This and many other tips, and sources, can be found on git-ready.
Thanks to Alain Ravet for some of the tips 🙂

Scottish Ruby Conference

Posted in conference, rails, ruby by elisehuard on March 20, 2010

Time flies: next week already, I’m speaking at the Scottish Ruby Conference.
I’m going to talk about how to evaluate a Rails application very quickly; the title of my talk is “12 hours to rate a Rails application”.
This is useful in the following situations:

  • in case of acquisitions (I’ve been asked to look over an application as an outside expert)
  • when you’re going to take over a legacy codebase

I’m going the extra mile on metrics (or quantitative code analysis), which are excellent tools to judge a large-ish codebase.
Whoo! Large and interesting subject, all compressed into 45 minutes. A day may come when I’m totally relaxed about giving talks, but I’m not there yet. Fingers crossed.

Cancan: after a closer look

Posted in rails by elisehuard on March 13, 2010

Well, we’re a few weeks into using cancan, and my earlier enthusiasm has been tempered somewhat.
It turns out that Cancan, although well written, is an Opinionated plugin. It may have been intended only for very simple applications.

Let me explain. Authorization happens mostly at controller level. Cancan offers sweetened before_filters for this purpose. One of them is

load_and_authorize_resource

which will do some standard loading for you (nesting is possible). It is implied that the model has the same name as your controller: a CommentsController, say, will load and authorize based on the model Comment.

It’s flexible to a certain extent, because you can specify another model:

load_and_authorize_resource :class => Post

Besides that, you can decide to use your own before_filter to do your own custom loading of the model.
If you don’t want the loading, you can use

authorize_resource

(without the load)

But to me, we’re already in muddy waters. First off, I want authorization only: loading instance variables is not what I expect from this plugin. So I’ll stick with authorize_resource.

Secondly, we want to authorize a resource, not a model. A resource, in the REST sense, should be disconnected from the model; that much is implied by the MVC pattern. The resource is what we expose to the outside world, whether as a URL or in a more general API. Models are the developer’s business and nobody else’s. Linking the two is awfully restrictive: you’ll usually also have controllers that use several models, controllers that use a cache or set off background tasks, and so on.

This is possible with Cancan in a rather roundabout way, by using symbols when defining an authorization rule and making a before_filter.
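To make the idea concrete, here is a minimal, dependency-free sketch of authorization rules keyed on plain symbols, loosely modeled on cancan’s can/can? DSL. This is not the real gem’s code, just an illustration of why symbols decouple rules from models:

```ruby
# Toy ability class: rules are stored against plain symbols,
# so no model class is involved at all. Illustrative only.
class ToyAbility
  def initialize
    @rules = []
  end

  def can(action, subject)
    @rules << [action, subject]
  end

  def can?(action, subject)
    @rules.any? do |a, s|
      (a == :manage || a == action) && (s == :all || s == subject)
    end
  end
end

ability = ToyAbility.new
ability.can :read, :coffee         # a rule on a symbol, not a model

ability.can?(:read, :coffee)       # => true
ability.can?(:update, :coffee)     # => false
```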

This is why I decided to fork and add the required behaviour to the plugin. To quote what I added to the README:

If the resource is not linked to a model, you can use the authorize_resource filter with the :resource option. When the resource name is the controller name, use

authorize_resource :resource => :controller

(for instance CalendarsController will authorize on :calendar)
When another name is required, a symbol can be used.

 authorize_resource :resource => :coffee

This may be enough for us to be able to work with it … we’ll see.
Update: hooray! As of version 1.1.0, cancan takes :resource (instead of :class), and there have been many nice additions and changes besides. My fork can now quietly disappear. Thank you Ryan Bates!


Choosing an authorization framework for rails

Posted in rails, ruby by elisehuard on February 26, 2010

At my main customer’s we needed to choose an authorization framework. This is for a complex enterprise application, requiring fine-grained authorization on:

  • roles
  • actions
  • model: most users can only access their own objects.

I had a look around, and after some digging ended up with 3 candidates: Declarative Authorization, Grant and Cancan.

Grant fell out of the running almost immediately: it centers all authorization in the model, and I felt it was a bit too lightweight for our application.

Then I looked at declarative authorization and cancan.
At first sight, declarative authorization looked like the winner: I’m a believer in open-source natural selection, and with about 650 people watching the plugin on github, a lot of people seemed to have found it a good fit. It’s also been lovingly polished since September 2008, so the kinks have probably been ironed out.

I cloned both plugins, and looked at the code and documentation.
Cancan is partly based on declarative_authorization. What struck me at first sight is how simple cancan looked. Much less code, much less meta-monkey-magic. And a very friendly DSL and documentation.

And get this: I ran reek on both plugins (it’s a hobby of mine), and cancan came out practically clean! That’s like finding an alien in your living room: it *never* happens! Run reek on your own code, just for laughs, and you’ll see what I mean.

So we ended up choosing cancan. Although declarative_authorization might offer more features out of the box, we feel we’ll be able to extend cancan with much more ease, if that’s necessary at all. It feels better to have a clean, fathomable codebase than a larger engine. I’m aware that cancan has the unfair advantage of having learned from its predecessors, and kudos to the maintainers of declarative_authorization for having inspired others.

Note: I’m aware there are quite a few other plugins out there. If you found another one and you’re very happy about it, please share.


Leaving subversion for git

Posted in tools by elisehuard on February 19, 2010

Let’s be fair, subversion is an honourable version control system, better than CVS and some commercial VCS, and infinitely better than no version control at all.
Still, once you’ve tried git, there’s no going back (not willingly, anyway). The easy branching, rebase, cherry-pick, stash … did I mention I want to marry git when I grow up?

Many posts have been written on this subject, so I’ll just link to the ones that helped me, and add some comments where necessary.

svn to git repository

You want your whole history to be available on the git server. Easy to do, as described here. I would advise an intermediate step to get rid of all svn references.
Your branches might have names that look like paths. If this bothers you, and you’re still using them, you can create nicer names pointing at them by doing

git branch <nicer_name> <bothersome_name>

But: this doesn’t really cover the whole story. Your project might have svn:externals.

  • If these externals are served from outside, you could piston them. Piston now works equally well with subversion and git.
  • If these externals are in-house … well. Apply this how-to recursively to turn them into git repositories too, then git submodule add them. If this is sensitive and touching the external is a no-no, just piston them as well.
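For the in-house case, the submodule step looks roughly like this, sketched with local scratch repositories (all paths are made up for the demo; a real setup would use your git server’s URLs):

```shell
# Local-only sketch of adding a converted external as a submodule.
set -e
base=$(mktemp -d)

# A stand-in for the converted in-house external.
git init -q "$base/ext"
cd "$base/ext"
git config user.email demo@example.com
git config user.name Demo
echo lib > lib.txt
git add lib.txt
git commit -qm "external library"

# The main project picks it up as a submodule.
git init -q "$base/main"
cd "$base/main"
git config user.email demo@example.com
git config user.name Demo
echo app > app.txt
git add app.txt
git commit -qm "main project"

# Newer git blocks file:// submodules by default; allow it for the demo.
git -c protocol.file.allow=always submodule --quiet add "$base/ext" vendor/ext
git commit -qm "add external as submodule"
```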


I had a look at gitorious, but it doesn’t play well with postgres, and I couldn’t be bothered to install mysql just for this purpose.
So I decided on the simplest tool, which is already included in git: GitWeb. How-to is described here. Basically, if you want to try it out, go to a git repo and do:

git instaweb

if you’ve got lighttpd installed; otherwise add the option --httpd=webrick.
To serve it with Apache (or Nginx), follow the tutorial above – the ‘make’ step is not actually necessary, the gitweb module apparently detects all git repos on the system.

Update: to view local repositories on your own desktop, you can use gitg or gitk (if on linux) or gitx (if on mac).

There you go, time to git clone and start playing.

Note: as you can see from the links, the Pro Git book is a good reference for all things git.

Segfault in Ruby

Posted in ruby by elisehuard on February 12, 2010

Note: the following applies to the C-based ruby (MRI), not JRuby or IronRuby, obviously.
This is a sight most rubyists fear: the segmentation fault. You’re running your tests quite innocently, or your web server is doing its job, until BOOM!

[BUG] Segmentation fault

What just happened?
A segfault means your program tried to play fast and loose with memory it hasn’t been allocated. The operating system says ‘hey you!’. When this occurs on a *nix, the process receives a signal, SIGSEGV. The program crashes, and in doing so can leave a core dump (if core dumps are enabled), which is a recording of the state of the program at the time of the crash.

Ruby then traps this signal itself.
You’ll find the relevant code in signal.c of the ruby source:

install_sighandler(SIGSEGV, sigsegv);

and the sigsegv function is:

#ifdef SIGSEGV
static RETSIGTYPE sigsegv _((int));
static RETSIGTYPE
sigsegv(sig)
    int sig;
{
#ifdef HAVE_NATIVETHREAD
    if (!is_ruby_native_thread() && !rb_trap_accept_nativethreads[sig]) {
        sigsend_to_ruby_thread(sig);
        return;
    }
#endif
    rb_bug("Segmentation fault");
}
#endif
The rb_bug at the bottom is responsible for the message you see appearing when a segmentation fault happens.

That’s all well, you’ll say, but how do I solve this?
First off, you have to determine where the issue came from. That’s where the core dump can help you, by telling you whether the issue happened in ruby itself, or in its binding to another component, like a database driver or something similar.

FOSDEM 2010: all was well

Posted in conference, FOSDEM, open source by elisehuard on February 10, 2010

This weekend we had our 10th edition of FOSDEM, the Free and Open Source Software Developers’ European Meeting. It’s the second year that I’m part of the staff. My reason for joining the team was that since I use a lot of open source but contribute precious little (working on it), I might as well give something back in another way. Since then I’ve found another reason to enjoy working on FOSDEM: it’s amazing to bring about such a mammoth event with just a dozen people and a larger group of volunteers.

Since FOSDEM is entirely free and doesn’t require people to register, it’s always difficult to estimate attendance. Judging from the number of t-shirts and booklets that went out, and the constant throngs of geeks in the hallways, the general impression is that we had even more participants than last year. To the point that we’re starting to wonder whether we’re growing too large.

The organization was a success. Kudos to the whole team for doing a good job.
Sponsoring and donations were crisis-insensitive. The network was up on Friday night (respect to Gerry, Jerome, Peter, and all who made it happen), with a fiber gigabit uplink; during the conference the geeks didn’t even use 10% of the bandwidth, and in one of the hacker rooms there was a sign ‘please use more bandwidth’. The devrooms were mostly packed, and the main tracks were (I think) interesting. It’s not always easy to get brainiacs who can also speak in front of an audience, but I think we hit the spot most of the time.

Where last year I had to run around like a headless chicken, this year there were plenty of opportunities to sit down for 20 minutes at a time! Which meant I came out of it feeling marginally less exhausted than last year.

I think we can say that the organization has now reached a nice plateau, and everything was ticking along very smoothly. The only danger is to grow complacent – it’s never a good idea to let your guard down.

Heart-warming: faithful volunteers who help us until the bitter (or should I say dirty) end, a great atmosphere, and heartfelt thanks from participants. If you’ve got any feedback of your own, or tips you’d like to share, let us know.


Bye bye mac

Posted in tools by elisehuard on January 12, 2010

macbook pro
For about 2 years I’ve worked on a Macbook Pro. It’s been a mostly pleasant experience – smooth graphical interface, more than adequate hardware – it took a while getting used to, but it worked out OK.

Still, I find myself turning back to Linux for development.


I’m not the most organized person in real life, but I can be fairly anal about my file organization. And I find it quite an effort to keep my Mac’s file structure clean and simple.
First, there is Apple’s own directory structure: apparently they found it necessary to diverge from the BSD they’re based on, with a list of non-standard capitalized (!) directories (Library, System etc. – have a look at Applications, for Pete’s sake).

In my work I use a fair number of open source tools. The easiest way to get those on a mac is using macports. Macports installs things in /opt/local by default, so there’s a few things lying around in their own directory structure there.

The macports people do good work, but it’s difficult to keep up with releases, so you often need a newer version of a tool, or an extra library that hasn’t been packaged yet. So you compile. If you’re not careful, the compiled items end up in the usual linux directory structure (/usr, /usr/lib etc).

Result: something that works, but it can become a disorganized mess, which chafes a bit.
(and don’t get me started on Mac’s very own dynamic libraries and executables)


When I work, I’m mostly using terminals and the command line. Vi is my editor of choice (good vim rails plugins here). So all the nice graphical effects and mouse-driven applications don’t add much value for me.

Friends have introduced me to a great window manager on Linux, coincidentally called Awesome. This is a tiled window manager – which means that most windows don’t float, but are tiled, and make full use of the screen real estate. There are by default 10 desktops, allowing a good organization of windows. Navigation happens through key shortcuts. Shortcut keys, default applications, the whole interface can be customized using Lua. Now tell me that isn’t awesome.

Linux for the desktop

It used to be a pain in the neck to get Linux completely functional, especially on laptops. I remember poring over hardware manuals looking for chipsets, and endlessly trawling through forums to get X to work properly. Nowadays installing an ubuntu or a debian is mostly a matter of inserting a disk and clicking through the install.
Because you see, just because I *can* manually partition, hand-compile kernels and libraries, and fiddle about with settings, doesn’t mean I want to spend time doing that for my desktop. We’ve all got better things to do. Zack the Mac was a temporary solution to this issue.


Fourth (minor) reason to go back: well, Apple. They have the hardware, they have the software, they make me pay. It feels like submitting to the fantastic marketing machine Steve Jobs set up.

I like the mac hardware, and Mac OS X is fine for casual use (watching movies, email, blogging) and of course for iPhone development, so it’s not a definite parting. I might make my MBP dual-boot (I’m told Boot Camp makes this very easy). Let’s see how it goes!


26C3

Posted in open source, privacy by elisehuard on January 1, 2010

I’m not quite sure how I ended up at 26C3, but I had a blast.
From what I gather, the Chaos Communication Congresses are gatherings of geeks and utopians (or both), around security, privacy and hacking. And LEDs.

We arrived the night before the start of the conference. We were lucky to have our places in advance, because when we went to retrieve our bracelets, people were queuing up to get their places.

The location of the 26C3 (and a few previous ones) is fantastic. The Berlin Congress Center is a graceful example of 70s architecture of the ‘2001, welcome to the future’ category. It’s basically a block containing a cylindrical structure: the outside edges are the corridors, the inside disks are rooms. Saal 1 (under the cupola) in particular is phenomenal, but the rest of the building has lots of charm too.

The principle is that groups and projects book a table (or two), and gather round it to sit and hack, or just hang out. I ended up at the Debian table (‘debianist by association’), thanks to the friends I was traveling with. There were tables of people who had brought really old hardware, tables with robot arms, and laptops everywhere.

At 26C3 they set up all kinds of networks. The building is properly wired, and there were a few wifi networks that worked reasonably well. Then there was the DECT radio network, which meant everyone was walking around with your average domestic cordless phone. They even set up a GSM network, which didn’t work too consistently but was way cool nonetheless: instead of Vodafone et al. you had an in-house network, which even worked for normal domestic calls!
old hardware

The talks were streamed live. I didn’t attend that many talks in person, as the rooms were really packed. Besides, knowing that the conference recordings would be available later also made it less of an incentive to try and pile in. The talks I did attend/listen to were fascinating.

One talk I attended was about stylometry, or how in certain situations you can detect who the author of a text is by the word choice, grammar, etc. Which obviously means danger for whistle-blowers publishing anonymously against an abusive employer or an oppressive regime. The speaker was trying to ‘attack’ those techniques with pastiche or obfuscation. Another talk was about intelligence support systems, and their use by all kinds of organizations. I also followed a talk about attacks on PKI, which is interesting since my current work is all about PKI.
lego robots
The ground floor was mostly catering, and the lower floor was the hardware hackers’ floor. The catering floor was visited many times to get a dose of Club-Mate. Mate is a naturally caffeinated tea-like leaf from South America; Club-Mate is a soda version of it, and quite tasty and effective, as energy drinks go.

Then there were all the cool toys! You could buy kits of electronic circuits to assemble yourself. I bought and assembled the TV-B-Gone kit to switch off tvs, which worked, and a dotblox64 with lots of LEDs, which doesn’t work yet (because of slightly shoddy solderwork). There was a group making helicopters, and a group building and programming LEGO robots to fight each other. Geek heaven, or what.

lockpicking class
I also had a go at lockpicking, though I must admit I failed miserably, being quite clumsy (the instructions being in German might not have helped). The stories of the instructor (from lockpicking.org and the lockpicking club of Berlin) were interesting. He explained how locks usually work, the different kinds, and the pleasure and effort of figuring them out. He also bragged a little about his exploits, of course: it seems he has a master key to the Berlin underground, and to the Berlin public toilets.

The general vibe was one of love for freedom. Lots of subcultures were represented, though obviously the overarching one was geekiness. No judgments, no rules; things were built for fun, not necessity. A fairly mixed audience, a slightly subversive but enthusiastic spirit. I had a good time, and will enjoy watching some of the remaining talks at home.