Ruby Driver Retooling

Hello, faithful techblog readers. Those who've been with us for a while may know me as "the guy who wrote all those MVCC posts". However, I'm also an in-the-trenches engineer here at NuoDB's hidden volcano lair. This is a good thing because it means that the forces of nature wrench my head out of the distributed data structure clouds every now and again and give me some contact with solid, concrete reality.

My latest contact with reality came in the form of retooling the guts of our ruby driver. For those out in the cold, Ruby is a high-level language with all kinds of interesting features such as first-class functions, continuations, automatic memory management, and much, much more. Since I'm a Lisp guy, I think of ruby as 'lisp without macros', with an Algol-like syntax that eschews the stark beauty of the s-expression. Ruby is also popular with the kids, and it enjoys a dedicated following in the web development community. All that said, I really don't know ruby. My experience is much more at the level of implementing ruby rather than implementing something in ruby. We've had support for ruby since our initial release, and we recently had a customer give our driver quite a workout. They found some bugs, and it was my job to go in and stomp my mighty feet until all bugs were squished.

"Wait a minute..." the more skeptical amongst you are assuredly asking, "...you just confessed that you aren't a ruby wizard. Why would they have you fix the driver?" Good question, skeptical audience. The answer lies in the way the ruby driver was initially implemented. It is described as a 'native' driver not because it's all written in ruby, but rather because it relies on native code to function. In particular, the ruby driver uses the ruby extension system to call into our native C++ driver code to do all that tedious bit-wrangling and networking. I may not know enough ruby to distinguish a gem from a rails, but I do know a lot about how language runtimes work and the nitty-gritty of getting a program to actually jab and poke the hardware the right way. So I got to learn a little ruby, learn a lot about their extension system and play around with something that was, quite frankly, a lot of fun.

Ruby is not C, but big chunks of it are implemented in C and every now and then people need to stitch a C/C++ codebase together with a ruby codebase and make them share data without any tedious copying (e.g. no passing values through a socket or anything). From the point of view of our driver, there were several issues that had to be dealt with:

  1. Converting C++ instances into something ruby could use
  2. Integration with the garbage collector
  3. Destructor invocation
  4. Ruby error handling vs. C++ exceptions
  5. Lazy fetching of metadata

The ruby extension system is quite nice, and a wonderland compared to the dense, hostile jungle that is JNI development. It already provides a set of tools for converting C primitives into ruby values and C pointers into ruby references. The approach taken by the ruby driver is to have a lightweight wrapper around a reference to the C++ class instances that our C++ driver produces. Easy peasy. The user wants a connection? Just build one in C++, then wrap it up and pass it back. Rinse and repeat for statements and result sets. Something like:

struct nuodb_result_set_handle {
    NuoDB::ResultSet* pointer;
};

Now comes the first challenge: integrating explicitly managed code with ruby's garbage collector. For those coming from the ruby side, C and its relatives require the programmer to explicitly allocate memory when needed, and then to keep track of it and explicitly deallocate it (we C people say 'free memory') when it is no longer needed. For those coming from the C side, ruby is a managed language, so it periodically walks the heap and frees up any memory not pointed at by the current program state. How to marry these two different approaches? The ruby extension API takes a pleasant approach, which is to abstract away the GC process as a simple mark-and-sweep collector. Mark-and-sweep collection is a two-phase approach to garbage collection. In the first phase, a set of initial references (the GC 'roots') is walked and everything they can reach is 'marked' as live. Then, the sweep phase goes through and disposes of those things that aren't marked. The way this is exposed to native code is that each extension object can register specific functions to be called during both the mark and sweep phases.

For NuoDB, this means that the C++ objects that are wrapping database state need to make sure that their dependencies are marked during the mark phase, and that everybody is cleaned up properly during the sweep/free phase. In the ruby driver, this means that when a result set is marked, its mark function needs to also mark the statement object that produced it (otherwise the statement might be freed while the result set was still live, which would lead to a truncated result set and errors). Likewise, a statement needs to mark its enclosing connection. At this point, we realize that our simple wrappers aren't going to hack it. So we're going to need something like:

struct nuodb_result_set_handle {
    NuoDB::ResultSet* pointer;
    VALUE statement;
};

And the mark function for result sets would just mark its statement. Something similar could be done for the other wrapped objects.

So, now we can allocate nuodb objects and use them from ruby and not have them break horribly while being used. If a result set is live, the statement that built it and the connection that created the statement will also be live. Cool. Now, how do we handle deallocating resources when they're done? Fortunately for us, the ruby runtime will call a free function we register with it when we wrap an object. This function is called as part of the 'sweep' phase of mark-and-sweep. NuoDB objects are more complicated than a simple structure: it isn't sufficient to just free the memory; the objects must be closed first, which releases resources on the database. This isn't shocking. In most systems, the database connection at least needs to be explicitly closed before it leaves scope; otherwise the database won't know that the client is done with it. So, the initial urge would be to register a free function that just calls close, then frees the memory. Unfortunately, this doesn't always work. Why it doesn't is an interesting question.

In the C/C++ world, the programmer is required to explicitly manage the entire lifecycle of any objects their program uses. Therefore, the C++ driver for NuoDB expects the programmer to close ResultSets before closing Statements and to close Statements before closing Connections. Furthermore, it is expected that the programmer will have the presence of mind to close things before deallocating them. This becomes problematic when integrating with ruby's GC because the GC sweep phase makes no guarantees about the order in which it will visit objects. So say one performs a select via ruby (which builds a statement and then a result set). Then both the result set and the statement go out of scope and are now 'garbage'. They both should be closed and freed. If the GC thread calls the free function on the result set first and then the statement, everything is fine. If the order is reversed, the C++ driver will throw an exception. Since the ruby programmer can't control the order the GC visits objects, we need to fix this problem in the ruby driver code itself. How do we fix this? By combining simple reference counting with a mechanism for deferring close.

First, let's tackle reference counting. The purpose of the reference count is so that a higher-level DB object can know if it has any lingering dependent objects. For example, whenever a result set is created on a statement, the act of creation will add 1 to the statement's reference count. When the result set is closed and freed, it will subtract 1 from the statement's reference count (and likewise for statements with respect to connections). When an object's reference count is 0, it is safe to be closed and freed. Now the free function, rather than unconditionally closing and freeing, will first check the reference count and do nothing if it's non-zero. Ok, so we've used reference counting to prevent premature closing and deallocation, but what if a statement can't be freed because the GC thread hit it before the ResultSet? Who will deallocate the statement? That's where deferred close comes in.

Deferred close is a term I came up with to describe the simple idea that the last dependent object to be freed is responsible for freeing its parent object. So, if a statement can't close and free itself because there are unclosed result sets still hanging off of it, it's the job of the last result set on the statement to close and free the statement. I call it the 'Last Kid in the Pool' principle. How can the result set know it needs to free its parent? By looking at the parent's reference count. So we now know that we need a reference count in each handle, and that each handle needs to get at its parent's handle. We also need a way to invoke a function on behalf of a parent in order to clean it up. To do this we use a C function pointer to a cleanup function, so that the child can invoke it directly when it detects that it was the last child keeping its parent alive. For those interested, the handle will look something like:

struct nuodb_handle {
  nuodb_handle* parent;
  int refCount;
  void (*cleanupFunc)(nuodb_handle*);
};
struct nuodb_result_set_handle : public nuodb_handle {
    NuoDB::ResultSet* pointer;
    VALUE statement;
};

Some of those readers familiar with C++ might be asking, 'why not use destructor magic to help with this?'. The answer is that the ruby extension interface is actually plain C, and that has a consequence: ruby exceptions are free to use setjmp/longjmp to implement exceptional control flow. This makes it nigh impossible to leverage well-scoped destruction to make things simpler. The internets seem to be pretty unanimous about avoiding complicated destructors in ruby extension code.

Now we have a set of basic types that can do the right thing in the face of ruby's GC. The next improvement was in metadata handling. When a result set wends its way back to the client, it's tagged with metadata about each of the returned columns, the most important part of which is the type. This type information lets us do two things: first, it lets us convert from a SQL type to the appropriate ruby type when asked; and second, it lets us detect programmer error when a value of an incorrect type is requested. The metadata must exist for results to be processed correctly, but the ruby program can also ask for the metadata directly. In the old code, this was done on demand, for every column of every row. The modest improvement of yours truly was to just grab it once when the result set first shows up and cache it in the handle. This has the nice effect of making the new driver much snappier than its prior incarnation.

This concludes my 'brief' overview of the changes I have recently wrought on the ruby driver. I found the ruby extension stuff really fun to use, and the problem was just one of those nice, self-contained and interesting pieces of software development. An experience I enjoyed so much I felt I just had to share it (and of course, brag about my memory and performance improvements).
