Saturday, May 29, 2010

LLVM talks

A few very interesting presentations about LLVM from the LLVM developer's meeting 2009.

Wednesday, May 26, 2010

Find parent pid given a pid

A few days ago I needed to get the parent process id (pid) of a given pid on Linux. There are shell commands for that, e.g., ps -o ppid= -p the-pid, but I needed to get the parent pid programmatically from C.

For some reason the getppid function can only return the parent pid of the current process, not of an arbitrary process. I searched the web for an alternative solution but found nothing.

So I took a peek at the source for the ps command, which revealed that there is a function called get_proc_stats defined in proc/readproc.h that (among other things) returns the parent pid of a given pid. You need to install libproc-dev to get this function. You can then do:
#include <proc/readproc.h>
#include <stdio.h>

void printppid(pid_t pid) {
    proc_t process_info;
    get_proc_stats(pid, &process_info);
    printf("Parent of pid=%d is pid=%d\n", pid, process_info.ppid);
}
and compile it with gcc the-file.c -lproc. I hope you find this useful.

Sunday, May 16, 2010

A story of being humble with confidence

I just read a post over at SKORKS, a blog I read every now and then. He writes a bit about The Art Of Unix Programming, which reminded me of when I first heard of that book. I then remembered reading The Art of Humble Confidence and I felt that I really had to write something along those lines. Here goes.

It was 2006 and I was working with a handful of experienced colleagues on a project trying to radically increase the usefulness of two large applications by making them work together. This was an extremely interesting time for me and they taught me a lot that I value very highly today.

One day one of my colleagues who was sitting next door to my office knocked gently on my open door to get my attention. He said, "Sorry, am I interrupting you? Do you have a moment?" He was humble as always, speaking slowly and quietly to make sure that I wouldn't be anxious about what he'd say next.

"Do you remember what we talked about before?" Even though I wasn't really sure what he was referring to, I replied "sure", thinking that I'd get what he meant when he started to talk about it.

While he slowly pulled up a chair and sat down next to me he said, "did you consider what we said about the MessageReceiver class?" I now realized that he was referring to our discussion over coffee the day before. I nodded, remembering that he didn't really like how I designed some part of the system we were working on.

Though I couldn't really understand his argument from the day before, I had redesigned it anyway to be more in line with his suggestions. Making a poor design made me feel a bit bad, and not understanding why it was bad made me feel worse. But I didn't want to ask him to explain it (again) because I didn't want to look (more) stupid. That would be even worse. Or so I thought.

I guess he realized my anxiety about not properly understanding his design, because he next said "I did a pretty crappy job explaining why that class needed to be changed, right?" He smiled and chuckled, "I was always better at instructing computers than people." We laughed.

"Anyway", he said, "I read this book a bit yesterday and I think chapter 10 explains what I mean much better than I ever can." He handed me the book and said "you can borrow it if you like." He laughed again and added "but not for too long. I need to read it soon again since you ask so incredibly interesting and hard questions." He got up from the chair and said "let's go and get some coffee." He smiled and added "I'm 'starving'".

I grabbed my cup and we walked over to our colleagues' offices and asked them to join us. As we walked to the coffee machine I felt like I was in the greatest group of developers there was. Everyone was working toward our mutual goal while having fun, learning, and teaching at the same time.

My colleague had basically just asked me questions, yet managed to tell me something. Yes, he even managed to tell me what to do. But more importantly, he taught me that you will never know everything and that working in software is a constant process of learning.

Tuesday, May 11, 2010

My worst mistakes (since two days)

I just spent two days on writing, testing, rewriting, testing, debugging, etc, a piece of code only to find the error to be five misplaced pixels. Learn from my mistake: never use i and j as indices in nested loops.

What made this problem worse than it needed to be was that the erroneous code was in a test case. Learn from my mistake: never write complex control flow in your test code.

What made it take longer than it should have for me to find this mistake was that I didn't assert my assumptions. Learn from my mistake: always assert; never assume anything about any piece of code when you're chasing weird behavior.

Friday, May 7, 2010

Threading is not a model

I just saw the Threading is not a model talk by Joe Gregorio from PyCon 2010, which I found very interesting. It has some history of programming language development, and some discussion about the design of every-day physical stuff and every-day programming language stuff. I especially find the idea of sufficient irritation very funny and interesting. :)

The main part of the talk is about different ways of implementing concurrency, mainly CSP (Communicating Sequential Processes) and Actors. Interesting stuff presented by a good speaker.

Must-know tricks for safer macros

Almost every developer knows to stay away from (C-style) macros unless it's absolutely necessary to use them, or it saves a lot! of code to use them. And when they do use them, they make sure to write them properly. I won't go into the details of the traditional ways of writing good macros (as in "harder to misuse, but still possible") because there are several pages like that already (see the links above). Instead, I'll discuss an entirely different way of making macros safer.

Why macros are hard

Let's describe the problems with macros with an example. This simple macro definition multiplies the provided argument with itself to yield the square of the input:
#define SQUARE(x) x * x
Looks simple, right? Too bad, because it's not that simple. For example, what happens when the following code is evaluated?
int a = 2;
int b = SQUARE(a + 1);
I tell you what happens: all hell breaks loose! The above code is expanded into:
int a = 2;
int b = a + 1 * a + 1;
thus, b will equal 2 + 1 * 2 + 1 = 5. Not quite what we expected by looking at the code SQUARE(a + 1), right? All in all, macros look simple and harmless enough, but are not at all simple to get right. And definitely not harmless; on the contrary, it's extremely easy to get bitten horribly badly. We are now going to discuss how to make macros a bit safer to work with.

Making them softer: check your typing

Types are an important part of the C language and even more so of C++ with all its classes, templates, and function overloading. Macros, though, are simple text substitution without knowledge of types, so macros fit very badly into the normal type-centric C/C++ world we are used to working in.

For example, say you give a function the wrong types as arguments. What do you get? A type error. No code is emitted by the compiler. This is good. On the other hand, if you give a macro the wrong types as arguments, what do you get? If you're lucky, some kind of error; maybe a syntax error, maybe a semantic error. If you're unlucky, though, you won't get any error at all. The compiler will just silently emit ill-behaving code into the .o-file. This is extremely bad because we're fooled into believing our code works as we expect it to.

Luckily, there is a way of making macros safer in this regard. Let's take a simple, yet illustrative example: a macro called ZERO that takes one argument, which is a variable, and sets it to 0. The first version looks like this:
#define ZERO(variable) variable = 0;
and is intended to be used inside a function like this:
void func() {
    int i;
    ZERO(i)
    // more code here...
}
Simple, but not safe enough for our tastes. For example, this macro can be called as ZERO(var0 += var1) and it will produce code the compiler accepts, but that code does not have the behavior the macro was intended to have. The macro will expand this code to var0 += var1 = 0, which (I think) is equivalent to var1 = 0; var0 += 0. Whatever the expanded code does, it's not what we intended ZERO to do. In fact, ZERO was never designed to handle this kind of argument and should thus reject it with a compilation error. We will now discuss how to reject such invalid arguments. Here goes...

Halt! Identify yourself!

To make sure that the compiler emits an error when the ZERO macro is given a non-variable as argument, we rewrite it to:
#define ZERO(variable) \
{ enum variable { }; } \
variable = 0;
That is, an enumeration is declared with the same name as the argument inside a new scope. This makes sure that the argument is a valid identifier and not just any expression, since an expression can't be used as a name for an enumeration. For example, the previously problematic code, ZERO(var0 += var1), will expand to:
{ enum var0 += var1 { }; } var0 += var1 = 0;
which clearly won't compile. On the other hand given correct argument, e.g., the code ZERO(var0), we get
{ enum var0 { }; } var0 = 0;
which compiles and behaves as we expect ZERO to behave. Neat! Even neater, the compiler won't emit any code (in the resulting .o-file) for the extra "type-checking code" we added, because all it does is declare a type, and that type is never used in our program. Awesomeness!

So we now have a pattern for making sure that a macro argument is a variable: declare an enumeration (or a class or struct) inside a new scope with the same name as the variable. We can encapsulate this pattern in a macro VARIABLE and rewrite ZERO using it:
#define VARIABLE(v) { enum v { }; }
#define ZERO(variable) \
VARIABLE(variable) \
variable = 0;
Note that with a bit of imagination, the definition of ZERO can be read as the signature (VARIABLE(variable)) followed by the macro body (variable = 0;), making macros look more like the function definitions we are familiar with. This wraps up our discussion about variables as macro arguments. But read on, there's more!


Let's assume that we wish to generalize ZERO into another macro called ASSIGN that sets the provided variable to any constant integer expression, not just zero. For example, 1, 2, and 39 + 3 are valid arguments, but i + 2, 1.0, and f() are not, because those are not constant integers. One way of defining such a macro is as follows:
#define ASSIGN(variable, value) \
VARIABLE(variable) \
variable = value;
that is, we simply added an argument value that variable is assigned. Simple, but as usual with macros, very easy to misuse. For example, ASSIGN(myVar, myVar + 1) will assign a non-constant value to myVar, which is precisely what we didn't want ASSIGN to do.

To solve this problem, we recall that an enumerator (a member of an enumeration) can be assigned a constant integer value inside the enumeration declaration. This is exactly the kind of value we wish ASSIGN to accept; thus, we rewrite it into the following code:
#define ASSIGN(variable, value) \
VARIABLE(variable) \
{ enum { E = value }; } \
variable = value;
This version of ASSIGN only accepts variable names for its first argument and constant integers for its second argument. Note that the constant can be a constant expression, so things like ASSIGN(var0, 1 + C * D) will work as long as C and D are static const ints. If we extract the pattern for checking that an argument is a constant integer into CONST_INT, we get the following two definitions:
#define CONST_INT(v) { enum { E = v }; }
#define ASSIGN(variable, value) \
VARIABLE(variable) CONST_INT(value) \
variable = value;
As for the final version of ZERO, the definition of ASSIGN can be read as the signature of ASSIGN followed by its body.


Now we will modify ASSIGN into DECLARE: a macro that declares a variable of some type, which is provided as an argument to DECLARE. Similar to ASSIGN, DECLARE initializes the variable to the provided constant integer expression. Our first implementation of such a macro is:
#define DECLARE(type, variable, value) \
VARIABLE(variable) CONST_INT(value) \
type variable = value;
However, the compiler will accept code like DECLARE(int i =, j, 0) (assuming j is a declared variable and i is not). So following our habit from the previous examples, we wish to make it a bit safer by making sure the type argument actually is a type, e.g., int, MyClass, or MyTemplate<MyClass>. We do this by having the macro use type as a template argument, as follows:
template<typename TYPE> class TypeChecker { };
#define DECLARE(type, variable, value) \
VARIABLE(variable) CONST_INT(value) \
{ class Dummy : TypeChecker<type> { }; } \
type variable = value;
This definition is much safer from misuse than the previous one; code like DECLARE(int i =, j, 0) won't compile. If we extract the argument-checking code into a separate macro, TYPE, we get:
template<typename TYPE> class TypeChecker { };
#define TYPE(t) { class Dummy : TypeChecker<t> { }; }
#define DECLARE(type, variable, value) \
TYPE(type) VARIABLE(variable) CONST_INT(value) \
type variable = value;
As before, note that we can read this definition as two parts: first the macro signature and then the macro body. Compiler enforced documentation FTW!

Everyone's unique

To not make this post too long, I'll stop giving background and reasons for the rest of the type-checking macros I'll present from now on. I'll just briefly describe what they do.

The following macro makes sure that a list of identifiers only contains unique identifiers:
#define UNIQUE(is...) { enum { is }; }
Note that this macro requires that the compiler supports macro varargs. It is used as UNIQUE(name1, name2, name3) or UNIQUE(name1, name2, name1), where the former is ok, but the latter will emit an error.


These macros compare constant integer expressions in various ways. The basic idea here is that the size of an array must not be negative, and that the boolean value true is converted to 1 in an integer context while false is converted to 0. We use this to implement the macro IS_TRUE as follows:
#define IS_TRUE(a) { struct _ { int d0[!(a)]; int d1[-!(a)]; }; }
Many comparison macros are then trivially implemented using IS_TRUE, for example:
#define LESS_THAN(a, b) IS_TRUE(a < b)
#define EQUAL(a, b) IS_TRUE(a == b)
#define SMALL_TYPE(t) IS_TRUE(sizeof(t) < 8)
You may ask yourself why such a macro is needed. Shouldn't templates be used here instead? I agree, but there are some of us who are (un-)lucky enough to use C and not C++...

Let's get general

The general idea we've used so far is to have two parts of the macro: one part that implements the desired (visible) behavior, and another part that works like type-checking code. The type-checking code is implemented by having short, trivial, side-effect free pieces of code that will only compile under the assumptions you make about the arguments. For example, the argument is a variable, or the argument is a constant integer expression.

Of course, it may still be possible to fool the "type-checking" code, but it's much less likely to happen inadvertently, which is the most important case to catch.

Descriptive error message?

In short: no. The error messages you get from any of these type-checking macros are extremely non-descriptive. However, any error message, even a weird and non-descriptive one, is still better than no error message at all and ill-behaving binary code.

The sum of all fears

Does the approach described here solve all problems with macros? No, it does not. It does, however, make them less of an issue. It is possible to write macros that are type-safe and behave in a good way (by which I mean: either compile into correct code or do not compile at all). However, I'm pretty sure there are uses for macros that cannot be covered with this approach.

Despite this, I highly recommend using this idea when you write macros. It will make the macro better in most ways possible, e.g., safer and better documented. Compiler-enforced documentation, even! Just like type declarations in all your other C/C++ code. Neat, don't you think?

Sunday, May 2, 2010

Beautiful Dependency Injection in C++

Dependency injection is a very nice way of making classes testable and more reusable. An instance of a class Foo that a class NeedsFoo depends on is simply injected into NeedsFoo's constructor. That is, the client of NeedsFoo (the code instantiating it) controls how Foo instances are created; thus, NeedsFoo is more decoupled from the rest of the system. Why? Because any object that is a Foo can be used with NeedsFoo: subclasses of Foo, instances shared with other objects, or a Foo created in a special way (e.g., a singleton or a proxy to a remote object). Compare this to the traditional non-dependency injection way, where Foo is instantiated by NeedsFoo, thus making it impossible for the client to control what kind of instance of Foo is used by NeedsFoo.

Object life-time management complicated

Dependency injection is straightforward to do (correctly) in languages with automatic memory management like Java and Python, but it's much harder to get right in languages like C++ which force you to manage memory manually. Of course, it is possible to simply delete the injected object in NeedsFoo's destructor, like:
class NeedsFoo {
    Foo* foo;
public:
    NeedsFoo(Foo* foo) : foo(foo) { }
    ~NeedsFoo() { delete foo; }
    // Methods using foo that need to be tested.
};
However, this is far from an optimal solution, because now all objects given to any instance of NeedsFoo must be heap allocated, since we delete them in the destructor. So even when NeedsFoo is stack allocated (e.g., for performance reasons), its Foo object must be heap allocated. Besides being hard to get right, heap allocation is extremely costly compared to stack allocation. So if we want performance we're screwed, and since we're using C++ I assume that performance is important. If it's not, we could just as well use some other language.

Another, arguably more important, reason that doing delete in the destructor is bad! bad! bad! is that the injected object cannot easily be shared with some other part of the application: who knows when (and if) the object should be deleted...? NeedsFoo doesn't know, that's for sure.

How do we fix this? Well, we could simply remove the delete from the destructor:
class NeedsFoo {
    Foo* foo;
public:
    NeedsFoo(Foo* foo) : foo(foo) { }
    // Methods using foo that need to be tested.
};
Easy to test, right? Yes. Easy to use in production code? Well, kind of. Easy to use correctly in production code? Definitely not!

Why? Because, as said before, C++ lacks garbage collection, thus the injected object needs to be managed manually somehow; for example, using reference-counting pointers, or making sure that foo has the same lifetime as the NeedsFoo instance. We will focus on the latter for the remainder of this post. But first, a short discussion of why it is common that the injected objects (e.g., a Foo instance) should be destroyed at the same time as the object they are injected into (e.g., a NeedsFoo instance).

Let them here on be joined and never parted

Dependency injection is used a lot to make code more testable. However, in my experience it is often the case that if it weren't for testing, dependency injection wouldn't be needed, because it's always the same kind of objects that are injected (e.g., the Foo instance is always instantiated the same way for all NeedsFoo instances). In a way, using dependency injection is a kind of over-design: the code is more generic than it needs to be, thus more complex than it needs to be.

Despite this, we still do dependency injection and we still consider it part of a good design. Why? Because, as said, it makes the code testable. We consider testable code to be more important than simple design. I can accept that for languages with a garbage collector, but when you do the memory management manually the design gets much! much! much! more complex. Not good, not good. In fact it's terrible. The reason it's terrible is that memory management is a program-global problem that isn't easy to encapsulate.

For example, if I call malloc in my code and pass the pointer to a function defined in a library, how do I know if that memory is deallocated by the library? I can't know. Not without manually inspecting the source code of the library. How about using smart pointers? That doesn't help much. The library is most probably a C library. Crap.

So, how can we do dependency injection to simplify testing while keeping memory management simple? This is what we will discuss now, my friend. Let's go!

Object life-time management simplified

One way of making sure that a bunch of objects are created and destroyed at the same time is to stuff them into a class:
class Concrete : public NeedsFoo {
    Foo foo;
public:
    Concrete() : NeedsFoo(&foo) { }
};
Here, an instance (actually, an instance of a sub-type) of NeedsFoo is created and injected with a Foo object. Since both objects are held in a Concrete instance, the NeedsFoo and the Foo instances will be destroyed at the same time. Sweetness!

This approach works well under the assumption that we never need to inject a subclass of Foo into NeedsFoo. (Well, of course it's still possible to inject a subclass into NeedsFoo, but if we do that we are back to square one, since we need to manage those objects' lifetimes manually; Concrete is only usable for Foo objects, not subtypes of Foo.)

So, for classes that are injected with the same kind of instances, this approach solves the memory management problem. Goody-goody!

Lies, damn lies, and untested source code

Wait, what's that? Do you hear the cries? Breaking news! Lies on the Internet! Read all about it! It's sad, but true. And I'm a bit ashamed to tell the truth... but here goes.

In the code for Concrete above, the injected Foo object is not constructed when it's injected into NeedsFoo's constructor. Why? Because the constructors of base classes are called before the constructors of derived classes. In other words, NeedsFoo's constructor is called before Concrete's, thus before Foo's constructor. Similarly, when NeedsFoo's destructor is called, Foo has already been destroyed. Why? Because NeedsFoo's destructor is called after Concrete's.

So what does this mean for us? Is everything we've done here useless? Fortunately, it is not that bad. All we need to do is make sure Foo isn't used in the constructor or destructor of NeedsFoo. More precisely, we must not in any way touch the memory where Foo will be/was constructed. In fact, this is a good rule in any case, because constructors shouldn't do any work anyway. Constructors should ideally just initialize the class' members. Destructors shouldn't do any work either (except for destroying objects, closing sockets, etc., of course).

Making it testable (my favourite pastime)

Let's take a step back before we continue and look at the pieces we have. Note that NeedsFoo uses Foo, which is a concrete class, not an interface. Thus, there is no way to inject an object that does not share any implementation with Foo (e.g., because Foo's constructor is called even for derived classes). But this is precisely what we need for testing! Crap! How to solve this? Templates to the rescue!

I know what you think: what a stupid template-loving masochistic guy. Yeah, you're half-right. I'm a guy. But not a template-loving masochistic one. Just a guy who actually does think templates can be put to good use (as in: std::vector). However, templates can also be used in ways they weren't designed for (as in: most of Boost). There are better ways of doing so-called "template meta-programming" (affectionately called so by the Boost guys; disgustedly called so by sane people[1]). The D language is a good example of this. Check out this talk by the inventor of D, Walter Bright, about how meta-programming should be done.

Anyway, if we use templates we get:
template<class TFOO>
class NeedsFoo {
    TFOO foo;
public:
    NeedsFoo(TFOO foo) : foo(foo) { }
    // Methods using foo that need to be tested.
};
and the derived class that manages the injected objects life-time becomes:
class Concrete : public NeedsFoo<Foo*> {
    Foo foo;
public:
    Concrete() : NeedsFoo<Foo*>(&foo) { }
};
So, now we have a class that contains the code with interesting behavior (NeedsFoo) and a class that contains the construction and life-time management of the injected objects (Concrete). Furthermore, NeedsFoo can be instantiated with any kind of class that looks like a Foo class, e.g., a stub implementation of Foo. This means that the interesting code can be tested easily because of dependency injection, while still being easy to instantiate and pass around.

Also, note that the template part of the class is never visible in the production source code. That is, all other parts of the production code use Concrete, not NeedsFoo. Furthermore, the implementation of NeedsFoo does not need to be given in the header file, as is traditional for template classes. The reason is that we know all possible types TFOO will refer to. Thus, the build time will not increase significantly using this approach compared to the initial code we had.

Another important thing to note is that this style of dependency injection is in some ways more powerful than its Java-style ditto, because NeedsFoo can actually instantiate the template parameter class TFOO itself. Of course, that requires TFOO to have a matching constructor, though. In Java, you would need to write a factory to achieve something roughly equivalent.

Pros and Cons

There are several things to consider about the code we started with and what we ended up with. First of all, the code for NeedsFoo has become more complex. This is bad, but I would argue that the test cases for the production code will be much simpler. Thus, the total complexity (of production code + test cases) is less for the templated code than for the code we started with. This is good.

Second, the production code we ended up with is more generic yet maintains its original simple interface. I say this because the main implementation (NeedsFoo) is now a template; thus, any class that looks like Foo can be used instead of Foo. Yet the interface is simple, since the most common case (when Foo is used) is still simple because of the Concrete helper class. This is good.

Third, the performance of the templated code should be the same as the original code, because there are no unnecessary virtual calls. Virtual methods are traditionally used for making a class testable, but we managed to avoid those here. Instead we use templates, which means that method calls can be statically bound to a concrete method implementation. Thus, the compiler can do a better job at optimizing the code. This is good.

The major drawback of this approach is that the error messages emitted by the compiler usually get harder to read. This is common for templated code, though, since the error messages contain a bunch of references to templates and the types of the template parameters (here TFOO). For someone who is a bit familiar with the compiler's error messages, it shouldn't be very hard to figure out what they mean. However, compared to a non-templated solution, the error messages are much harder to understand for someone not used to them.

I recommend you try this way of doing dependency injection in C++. I'm not saying it's the best way under all circumstances, but I am saying it's a good card to have up your sleeve. A card that may come in handy sometime.

[1] The Boost guys have my fullest respect for what they are doing. It is a really impressive project that has helped me more times than I can count. But that doesn't change the fact that what they are doing is insane.

Saturday, May 1, 2010

Best TED-talk ever

TED is an extremely good place to find interesting (short) talks about various topics. You can literally spend days there watching really high quality presentations ranging from musical performances to science talks. Today, though, I saw something that beats every other TED-talk I've ever seen before.

It was short and witty, it was science and humor, and it was refreshingly different. And best of all, you actually learn something useful! Stop wasting your time here. Get over there and see it now!