One thing I'm often asked to describe is the development process behind NuoDB's Cloud Database. The question usually takes the form of whether we use agile methodologies, waterfall, or something else.
Our overall guiding principle for software development is simply this: "Keep It Working." Everything we do is about building on a solid base and improving incrementally as we go. Our process also has built-in checks and balances. Explaining this from the top down:
It starts with our top-level strategic roadmap. The roadmap describes, at a high level, what we want to build and when. We put just enough thought into each feature on the roadmap to know roughly how we'll build it and how much effort it will take. Then we lay out the roadmap month by month, with features landing in particular monthly releases. For 2013, we released our 1.0 GA in January and have scheduled monthly releases throughout the year. Our release cadence is one feature release followed by two maintenance and bug-fix releases: in February we released 1.0.1, in March we released 1.0.2, in April we'll release 1.1, and so on. So in a nutshell, we do date-driven monthly releases.
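To make the cadence concrete, here is a minimal sketch of the 2013 schedule as a function: a feature release every third month, followed by two maintenance releases. The function name and the assumption that the pattern simply repeats are mine, not part of any published NuoDB tooling.

```python
def release_for_month(month):
    """Hypothetical sketch of the 2013 cadence: month 1 is January.
    Every third month carries a feature release; the two months
    after it carry maintenance/bug-fix releases."""
    feature = (month - 1) // 3  # 0 for Jan-Mar, 1 for Apr-Jun, ...
    patch = (month - 1) % 3     # 0 = feature release, 1-2 = maintenance
    if patch == 0:
        return f"1.{feature}" if feature else "1.0 GA"
    return f"1.{feature}.{patch}"

for m in range(1, 7):
    print(m, release_for_month(m))
# January -> 1.0 GA, February -> 1.0.1, March -> 1.0.2, April -> 1.1, ...
```

The point of a fixed formula like this is that dates drive releases, not the other way around: a feature that isn't ready simply slips to the next feature release.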
OK, so that's how we lay out our releases. Our engineers, who by the way are amazing, are assigned bug fixes or features by team leads. They start each work item by laying out their approach with their teammates, then work incrementally to complete it. We use a wiki to document features and an issue tracker to manage bugs and tasks. Before code is pushed to our main integration branch, engineers first develop their own tests for their features; they are also required to run our entire test suite to ensure regressions aren't introduced. Once the new code gets through the code-review and testing gauntlet, a developer pushes it to the appropriate development branch (we also develop in parallel against future releases). After the code is integrated, our automated tests kick in, so everyone on the team knows whether the new code has passed continuous testing on all the platforms we support (i.e., we practice what's commonly referred to as continuous integration). So in a nutshell, our development includes peer design review, peer code review, test-directed development, and continuous integration. Some of the tools we use: JIRA for bug and issue tracking, Confluence for our wiki, FishEye/Crucible for code reviews, and Git for source control.
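The "run the whole suite before you push" gate described above can be sketched as a small script. This is an illustrative sketch only, assuming a generic pytest-style suite and an integration branch named `integration`; the actual commands and branch names at NuoDB will differ.

```python
#!/usr/bin/env python3
"""Hypothetical pre-push gate: run the full test suite, and only
push to the integration branch if everything passes. The test
command and branch name below are illustrative assumptions."""
import subprocess
import sys


def tests_pass(test_cmd=("python", "-m", "pytest", "tests/")):
    # Run the whole suite; any non-zero exit code means failure.
    result = subprocess.run(test_cmd)
    return result.returncode == 0


def guarded_push(branch="integration"):
    if not tests_pass():
        sys.exit("Test suite failed -- fix regressions before pushing.")
    # Only reached when the full suite is green.
    subprocess.run(("git", "push", "origin", branch), check=True)
```

In practice this kind of local gate complements, rather than replaces, the server-side continuous integration run: the local run catches obvious regressions early, and CI repeats the check on every supported platform.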
Getting code developed and into the code base is a big part of the effort of getting a bug fix or feature into a release. We also run a much larger set of regression tests every night, including runs under Valgrind to find memory leaks. We track our code-coverage metrics so we can incrementally improve the test coverage of our product over time. For new features and substantial bug fixes, our QA team develops additional tests that are also incorporated into our automated test suite. Oh, and since we're a standards-compliant SQL database, we make use of plenty of additional test suites to constantly improve the coverage and quality of our product.
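As a sketch of what a nightly Valgrind step might look like, here is a small harness that runs a test binary under `valgrind --leak-check=full` and flags leaks by parsing the "definitely lost" line of the leak summary. The harness structure, binary names, and regular expression are my illustrative assumptions, not NuoDB's actual test infrastructure.

```python
"""Hypothetical nightly-harness step: run a test binary under Valgrind
and report the number of definitely-lost bytes from its leak summary."""
import re
import subprocess

# Valgrind's leak summary includes a line like:
#   ==1234==    definitely lost: 1,024 bytes in 2 blocks
LEAK_RE = re.compile(r"definitely lost: ([\d,]+) bytes")


def definitely_lost_bytes(valgrind_output):
    # Parse the leak summary; no match means no reported leaks.
    m = LEAK_RE.search(valgrind_output)
    return int(m.group(1).replace(",", "")) if m else 0


def run_under_valgrind(test_binary):
    proc = subprocess.run(
        ["valgrind", "--leak-check=full", test_binary],
        capture_output=True, text=True)
    # Valgrind writes its summary to stderr by default.
    return definitely_lost_bytes(proc.stderr)
```

A nightly job would loop this over every test binary and fail the run if any returns a non-zero leak count, turning leak detection into just another regression signal.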
So that's a glimpse into testing. We take it to the next level with scalability and performance testing. We've made a significant investment in on-premises hardware and virtualized cloud hardware to test our product at scale. We use standard benchmarks such as YCSB and DBT-2 (a clean-room clone of TPC-C), we've developed our own benchmarks, and we work closely with customers to test and tune our product against their workloads. We routinely run a large set of performance and scalability tests across dozens of machines every day; some of those tests verify that we achieve linear scalability across 100 or more machines. We also automatically check for performance regressions.
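An automated performance-regression check like the one mentioned can be as simple as comparing each benchmark's current throughput against a stored baseline with a tolerance. The benchmark names, 5% threshold, and data shape below are illustrative assumptions, not the actual thresholds used.

```python
"""Hypothetical performance-regression check: flag any benchmark whose
current throughput falls more than `tolerance` below its baseline."""


def regressions(baseline, current, tolerance=0.05):
    # baseline/current map benchmark name -> throughput (e.g. tps).
    flagged = {}
    for name, base_tps in baseline.items():
        cur_tps = current.get(name)
        if cur_tps is not None and cur_tps < base_tps * (1 - tolerance):
            flagged[name] = (base_tps, cur_tps)
    return flagged


baseline = {"ycsb_read": 10000.0, "dbt2_mixed": 4200.0}
current = {"ycsb_read": 10100.0, "dbt2_mixed": 3800.0}
print(regressions(baseline, current))  # dbt2_mixed dropped ~9.5%, beyond 5%
```

The tolerance matters: benchmark numbers are noisy, so a zero-tolerance check would produce constant false alarms, while too loose a tolerance lets small regressions accumulate unnoticed across releases.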
Our customer team comprises Technical Support, Quality Assurance, and Documentation. That team is part of the development process and works extremely closely with our customers. New features are tested and documented by that team, which also takes testing beyond what our developers do. They are the last line of defense before our product makes it out the door of our humble offices in Cambridge, MA. That team also manages the beta cycle for big new features being introduced, such as our Microsoft .NET support.
I've touched on some of the highlights of how we develop software at NuoDB, but really the most important aspect of what we do is how we operate as a team. We all have ownership of the success of the company and the product, and with that comes a great deal of pride in producing the best product we can. In an upcoming post I'll talk about NuoDB's open source development efforts. In the meantime, see the fruits of our efforts by downloading the NuoDB Cloud Database Management System.