Dependency injection is a very nice way of making classes testable and more reusable. An instance of a class Foo that a class NeedsFoo depends on is simply injected into NeedsFoo's constructor. That is, the client of NeedsFoo (the code instantiating it) controls how Foo instances are created; thus, NeedsFoo is more decoupled from the rest of the system. Why? Because any object that is a Foo can be used with NeedsFoo: subclasses of Foo, instances shared with other objects, or a Foo created in a special way (e.g., a singleton or a proxy to a remote object). Compare this to the traditional non-dependency-injection way, where Foo is instantiated by NeedsFoo itself, making it impossible for the client to control what kind of Foo instance is used by NeedsFoo.
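For comparison, here is a minimal sketch of that traditional style (Foo is assumed to be a concrete class, as in the rest of this post):

class NeedsFoo {
    Foo foo;  // hard-wired: always a plain Foo, constructed by NeedsFoo itself
public:
    NeedsFoo() { }
    // Methods using foo that need to be tested -- but there is no way to
    // substitute a different Foo (a stub, a shared instance, a proxy)
    // without editing this class.
};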
Object life-time management complicated
Dependency injection is straightforward to do (correctly) in languages with automatic memory management like Java and Python, but it's much harder to get right in a language like C++, which forces you to manage memory manually. Of course, it is possible to simply delete the injected object in NeedsFoo's destructor, like:
class NeedsFoo {
Foo* foo;
public:
NeedsFoo(Foo* foo) : foo(foo) { }
~NeedsFoo() { delete foo; }
// Methods using foo that need to be tested.
};
However, this is far from an optimal solution, because now all objects given to any instance of NeedsFoo must be heap allocated, since we delete them in the destructor. So even when NeedsFoo is stack allocated (e.g., for performance reasons), its Foo object must be heap allocated. Besides being hard to get right, heap allocation is much more costly than stack allocation. So if we want performance we're screwed, and since we're using C++ I assume that performance is important. If it's not, we could just as well use some other language.

Another reason, arguably more important, why doing delete in the destructor is bad! bad! bad! is that the injected object cannot easily be shared with some other part of the application: who knows when (and if) the object should be deleted...? NeedsFoo doesn't know it, that's for sure.
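To see why, consider this sketch (using the delete-in-destructor version of NeedsFoo above): the moment a Foo is shared, it gets deleted twice.

void client() {
    Foo* shared = new Foo;
    NeedsFoo a(shared);
    NeedsFoo b(shared);   // both a and b believe they own 'shared'
    // ... use a and b ...
}   // a and b are destroyed here; both destructors run delete on the
    // same pointer -- a double delete, i.e., undefined behaviour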
How do we fix this? Well, we could simply remove the delete from the destructor:
class NeedsFoo {
Foo* foo;
public:
NeedsFoo(Foo* foo) : foo(foo) { }
// Methods using foo that need to be tested.
};
Easy to test, right? Yes. Easy to use in production code? Well, kind of. Easy to use correctly in production code? Definitely not!

Why? Because, as said before, C++ lacks garbage collection, thus the injected object needs to be managed manually somehow. For example by using reference-counting pointers (a sketch follows below), or by making sure that foo has the same lifetime as the NeedsFoo instance. We will focus on the latter for the remainder of this post. But first, a short discussion of why it is common that the injected objects (e.g., a Foo instance) should be destroyed at the same time as the object they are injected into (e.g., a NeedsFoo instance).
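As an aside, the reference-counting route would look roughly like this with std::shared_ptr (std::tr1::shared_ptr or boost::shared_ptr on pre-C++11 compilers); we won't pursue it further here:

#include <memory>

class NeedsFoo {
    std::shared_ptr<Foo> foo;   // deleted when the last owner lets go
public:
    explicit NeedsFoo(std::shared_ptr<Foo> foo) : foo(foo) { }
    // Methods using foo that need to be tested.
};

// Client code:
//   std::shared_ptr<Foo> foo(new Foo);
//   NeedsFoo a(foo), b(foo);   // safe to share; no double delete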
Let them from here on be joined and never parted
Dependency injection is used a lot to make code more testable. However, in my experience it is often the case that if it weren't for testing, dependency injection wouldn't be needed, because it's always the same kind of objects that are injected (e.g., the Foo instance is always instantiated the same way for all NeedsFoo instances). In a way, using dependency injection is a kind of over-design: the code is more generic than it needs to be, and thus more complex than it needs to be.

Despite this, we still do dependency injection and we still consider it part of a good design. Why? Because, as said, it makes the code testable. We consider testable code to be more important than simple design. I can accept that for languages with a garbage collector, but when you do the memory management manually the design gets much! much! much! more complex. Not good, not good. In fact it's terrible. The reason it's terrible is that memory management is a program-global problem that isn't easy to encapsulate.
For example, if I call malloc in my code and pass the pointer to a function defined in a library, how do I know if that memory is deallocated by the library? I can't. Not without manually inspecting the source code of the library. How about using smart pointers? Doesn't help much. The library is most probably a C library. Crap.
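For instance (a made-up C API, purely to illustrate the ambiguity):

#include <cstdlib>

// A hypothetical C library function -- nothing in the signature says
// whether the library keeps the pointer, frees it, or expects us to.
extern "C" void lib_process(char* data);

void caller() {
    char* buf = static_cast<char*>(std::malloc(64));
    lib_process(buf);
    // Should we call std::free(buf) here? Only the library's documentation
    // (or source code) can tell us; the type system cannot.
}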
So, how can we do dependency injection to simplify testing while keeping memory management simple? This is what we will discuss now, my friend. Let's go!
Object life-time management simplified
One way of making sure that a bunch of objects are created and destroyed at the same time is to stuff them into a class:
class Concrete : public NeedsFoo {
Foo foo;
public:
Concrete() : NeedsFoo(&foo) { }
};
Here, an instance (actually an instance of a subtype) of NeedsFoo is created and injected with a Foo object. Since both objects are held in a Concrete instance, the NeedsFoo and the Foo instances will be destroyed at the same time. Sweetness!

This approach works well under the assumption that we never need to inject a subclass of Foo into NeedsFoo. (Well, of course it's still possible to inject a subclass into NeedsFoo, but if we do that we are back to square one, since we need to manage those objects' lifetimes manually; Concrete is only usable for Foo objects, not subtypes of Foo.)
So, for classes that are injected with the same kind of instances, this approach solves the memory management problem. Goody-goody!
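In other words, production code only ever sees Concrete, while a test can still construct a NeedsFoo directly and inject an object whose lifetime the test controls. A rough sketch (FakeFoo is a hypothetical test double derived from Foo):

// Production: one object, one lifetime -- the Foo lives and dies with Concrete.
void production_code() {
    Concrete widget;
    // ... use widget ...
}

// Test: inject a stack-allocated fake; no heap allocation, no delete anywhere.
void test_code() {
    FakeFoo fake;          // hypothetical: class FakeFoo : public Foo { ... };
    NeedsFoo sut(&fake);
    // ... exercise sut ...
}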
Lies, damn lies, and untested source code
Wait, what's that? Do you hear the cries? Breaking news! Lies on the Internet! Read all about it! It's sad, but true. And I'm a bit ashamed to tell the truth... but here goes.

In the code for Concrete above, the injected Foo object is not yet constructed when it's injected into NeedsFoo's constructor. Why? Because the constructors of base classes are called before the constructors (and member initializers) of derived classes. In other words, NeedsFoo's constructor is called before Concrete's, and thus before Foo's constructor. Similarly, when NeedsFoo's destructor is called, Foo has already been destroyed. Why? Because NeedsFoo's destructor is called after Concrete's.
So what does this mean for us? Is everything we've done here useless? Fortunately, it is not that bad. All we need to do is make sure Foo isn't used in the constructor or destructor of NeedsFoo. More precisely, we must not in any way touch the memory where Foo will be/was constructed. In fact this is a good rule in any case, because constructors shouldn't do any work anyway. Constructors should ideally just initialize the class's members. Destructors shouldn't do any work either (except for destroying objects, closing sockets, etc., of course).
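Concretely, the rule looks like this (a sketch of what to avoid; prepare() and flush() are made-up methods):

class NeedsFoo {
    Foo* foo;
public:
    NeedsFoo(Foo* foo) : foo(foo) {
        // OK:  just store the pointer.
        // BAD: foo->prepare();  // when injected from Concrete, *foo is not
        //                       // constructed yet -- undefined behaviour
    }
    ~NeedsFoo() {
        // BAD: foo->flush();    // when owned by Concrete, *foo has already
        //                       // been destroyed at this point
    }
    // Methods using foo that need to be tested.
};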
Making it testable (my favourite pastime)
Let's take a step back before we continue and look at the pieces we have. Note that NeedsFoo uses Foo, which is a concrete class, not an interface. Thus, there is no way to inject an object that does not share any implementation with Foo (e.g., because Foo's constructor is called even for derived classes). But this is precisely what we need for testing! Crap! How to solve this? Templates to the rescue!

I know what you think: what a stupid template-loving masochistic guy. Yeah, you're half right. I'm a guy. But not a template-loving masochistic one. Just a guy who actually does think templates can be put to good use (as in: std::vector). However, templates can also be used in ways they weren't designed for (as in: most of Boost). There are better ways of doing so-called "template meta-programming" (affectionately called so by the Boost guys; disgustedly called so by sane people[1]). The D language is a good example of this. Check out this talk by the inventor of D, Walter Bright, about how meta-programming should be done.
Anyway, if we use templates we get:
template<class TFOO>
class NeedsFoo {
TFOO foo;
public:
NeedsFoo(TFOO foo) : foo(foo) { }
// Methods using foo that need to be tested.
};
and the derived class that manages the injected object's lifetime becomes:
class Concrete : public NeedsFoo<Foo*> {
Foo foo;
public:
Concrete() : NeedsFoo<Foo*>(&foo) { }
};
So, now we have a class that contains the code with the interesting behavior (NeedsFoo) and a class that contains the construction and lifetime management of the injected objects (Concrete). Furthermore, NeedsFoo can be instantiated with any kind of class that looks like a Foo class, e.g., a stub implementation of Foo. This means that the interesting code can be tested easily because of dependency injection, while still being easy to instantiate and pass around.
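For example, a test can instantiate NeedsFoo with a stub that has no relation to Foo at all (FooStub is a hypothetical test double):

// A stand-alone stub: no inheritance from Foo required, it only has to
// look like a Foo to the methods NeedsFoo actually calls.
class FooStub {
public:
    // ... the subset of Foo's interface that NeedsFoo uses ...
};

void test_needs_foo() {
    FooStub stub;
    NeedsFoo<FooStub*> sut(&stub);   // inject the stub; no heap, no delete
    // ... exercise sut and verify its behaviour ...
}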
Also, note that the template part of the class is never visible in the production source code. That is, all other parts of the production code use Concrete, not NeedsFoo. Furthermore, the implementation of NeedsFoo does not need to be given in the header file, as is traditional for template classes. The reason is that we know all the possible types TFOO will refer to. Thus, the build time will not increase significantly using this approach compared to the initial code we had.
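What makes this possible is explicit template instantiation. A rough sketch (file names are just for illustration):

// needs_foo.cpp
#include "needs_foo.h"   // declares: template<class TFOO> class NeedsFoo;

// ... definitions of NeedsFoo<TFOO>'s member functions go here ...

// Explicitly instantiate the one type the production code needs.
template class NeedsFoo<Foo*>;

The test build then provides its own explicit instantiations (or simply includes the member definitions) for the stub types it uses.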
Another important thing to note is that this style of dependency injection is in some ways more powerful than its Java-style ditto, because NeedsFoo can actually instantiate the template parameter class TFOO itself. Of course, that requires TFOO to have a matching constructor. In Java, you would need to write a factory to achieve something roughly equivalent.
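For instance, a variant of NeedsFoo that constructs its own TFOO might look like this (a sketch; it assumes TFOO is stored by value and is default-constructible):

template<class TFOO>
class NeedsFoo {
    TFOO foo;   // NeedsFoo creates its own TFOO -- no factory needed
public:
    NeedsFoo() : foo() { }   // requires a matching (here: default) constructor
    // Methods using foo that need to be tested.
};

// Production:  NeedsFoo<Foo> n;       // owns its Foo, no wiring code at all
// Test:        NeedsFoo<FooStub> t;   // owns a stub instead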
Pros and Cons
There are several things to consider about the code we started with and what we ended up with. First of all, the code for NeedsFoo has become more complex. This is bad, but I would argue that the test cases for the production code will be much simpler. Thus, the total complexity (of production code + test cases) is less for the templated code than for the code we started with. This is good.

Second, the production code we ended up with is more generic, yet maintains its original simple interface. I say this because the main implementation (NeedsFoo) is now a template; thus, any class that looks like Foo can be used instead of Foo. Yet the interface is simple, since the most common case (when Foo is used) is still simple because of the Concrete helper class. This is good.
Third, the performance of the templated code should be the same as the original code, because there are no unnecessary virtual calls. Virtual methods are traditionally used for making a class testable, but we managed to avoid those here. Instead we use templates, which means that method calls can be statically bound to a concrete method implementation. Thus, the compiler can do a better job of optimizing the code. This is good.
The major drawback with this approach is that the error messages emitted by the compiler usually get harder to read. This is common for templated code, though, since the error messages contain a bunch of references to the template and the types of the template parameters (here TFOO). For someone who is a bit familiar with the compiler's error messages it shouldn't be very hard to figure out what the error messages mean. However, compared to a non-templated solution, the error messages are much harder to understand for someone not used to them.
I recommend you try this way of doing dependency injection in C++. I'm not saying it's the best way of doing it under all circumstances, but I am saying it's a good card to have up your sleeve. A card that may come in handy sometime.
[1] The Boost guys have my fullest respect for what they are doing. It is a really impressive project that has helped me more times than I can count. But that doesn't change the fact that what they are doing is insane.
8 comments:
Hey, check out the library I'm working on. The example is pretty cool, even if I do say so myself!
http://bitbucket.org/cheez/dicpp/wiki/Home
I think the template-based solution can only go so far.
First, in the templated version isn't it wrong to give a Foo object directly to the NeedsFoo() constructor, which expects TFOO (which in this case is Foo*)?
Second, isn't the argument in the "Lies, damn lies, and untested source code" part which is "In the code for Concrete above, the injected object Foo is not constructed when it's injected into NeedsFoo's constructor." also valid for the templated version?
Adnan, yes, you are right. The second version of Concrete (which inherits from NeedsFoo<Foo*>) should provide NeedsFoo with a Foo*, NOT a Foo. Thanks for pointing this out. I'll update the post.
Again you are right, the injected object is not constructed in the templated version either. However, this is not a problem unless the injected object is touched (as described under "Lies, damn lies").
Good post, I'm developing a C++ dependency injection framework and found your article quite interesting. When I release the 1.1.0 version maybe I'll post the link, since there are plenty of nice features. Again I used templates (but in a totally different way), mostly for solving problems that C++ without templates can't solve. Performance was not my goal, but I found that compilers heavily optimize templates, resulting in very lightweight and surprisingly fast code. (In my case errors are pretty easy to understand, but I'm working to make them even easier to understand.) This article will be a nice link under "further reading" on my web page :)
What does the '*' mean in the following line?
class Concrete : public NeedsFoo<Foo*>
Ryan: I assume you mean the line with the template instantiation? All it says is that the type "Foo*" (pointer to a Foo) is a template parameter to the base class.
I would be interested in measurements showing whether this code really is so much faster than the one with virtual method calls. Cons of using the templated solution are:
- code bloating, which may cause some near memory jumps to become far memory jumps, and reduce speed much more than one virtual indirection
- error messages will become unreadable, and the system less maintainable
There is one more speed pro which is - in my eyes - much more important than avoiding virtual calls: the templated solution allows function inlining instead of function calls. This is not possible with virtual methods and will provide much more speedup than merely avoiding the virtual dispatch.
This is a beautiful solution indeed. Thanks for the post!
I have a question though: why not store a reference in NeedsFoo instead of a pointer?