“Orbit? Oh, elliptical, of course: for a while it would huddle against us and then it would take flight for a while. The tides, when the Moon swung closer, rose so high nobody could hold them back. There were nights when the Moon was full and very, very low, and the tide was so high that the Moon missed a ducking in the sea by a hair’s-breadth”
Italo Calvino, “The Distance of the Moon”
A few months ago HP came to us with something very cool. It’s called Project Moonshot, and it’s a new way of thinking about how you design infrastructure. Essentially, it’s a composable system that gives you serious flexibility and density.
A single Moonshot System is 4.3u tall and holds 45 independent servers connected to each other via 1-Gig Ethernet. There’s a 10-Gig Ethernet interface to the system as a whole, and management interfaces for the system and each individual server. The long-term design is to have servers that provide specific capabilities (compute, storage, memory, etc.) and can scale to up to 180 nodes in a single 4.3u chassis.
The initial system, announced this week, comes with a single server configuration: an Intel Atom S1260 processor, 8 Gigabytes of memory and either a 200GB SSD or a 500GB HDD. On its own, that’s not a powerful server, but when you put 45 of these into a 4.3 rack-unit space you get something in aggregate that has a lot of capacity while still drawing very little power (see below). The challenge, then, is how to really take advantage of this collection of servers.
NuoDB on Project Moonshot: Density and Efficiency
We’ve shown how NuoDB can scale a single database to large transaction rates. For this new system, however, we decided to try a different approach. Rather than make a single database scale to large volume, we decided to see how many individual, smaller databases we could support at the same time. Essentially, could we take a fully-configured HP Project Moonshot System and turn it into a high-density, low-power, easy-to-manage hosting appliance?
To put this in context, think about a web site that hosts blogs. Typically, each blog is going to have a single database supporting it (just like this blog you’re reading). The problem is that while a few blogs will be active all the time, most of them see relatively light traffic. This is known as a long-tail pattern. Still, because the blogs always need to be available, the backing databases always need to be running too.
This leads to a design trade-off. Do you map the blogs to a single database (breaking isolation and making management harder) or somehow try to juggle multiple database instances (which is hard to automate, expensive in resource-usage and makes migration difficult)? And what happens when a blog suddenly takes off in popularity? In other words, how do you make it easy to manage the databases and make resource-utilization as efficient as possible so you don’t over-spend on hardware?
As I’ve discussed on this blog, NuoDB is a multi-tenant system that manages individual databases dynamically and efficiently. That should mean that we’re a perfect fit for this very cool (pun intended) new system from HP.
After some initial profiling on a single server, we came up with a goal: support 7,200 active databases. You can read all about how we did the math, but essentially this was a balance between available CPU, Memory, Disk and bandwidth. In this case a “database” is a single Transaction Engine and Storage Manager pair, running on one of the 45 available servers.
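The full sizing analysis is in the linked post, but the headline numbers above imply a simple back-of-envelope check. This is purely illustrative arithmetic derived from the figures in this post, not the actual profiling model:

```python
# Back-of-envelope density check (illustrative only; the real sizing
# balanced CPU, memory, disk and bandwidth, as described in the linked post).
SERVERS = 45
TARGET_DATABASES = 7200
MEM_PER_SERVER_MB = 8 * 1024  # each server has 8 GB of memory

# 7,200 databases spread across 45 servers:
per_server = TARGET_DATABASES // SERVERS      # 160 TE/SM pairs per server

# Memory budget if memory were the only constraint:
mem_per_db = MEM_PER_SERVER_MB / per_server   # ~51 MB per database pair
```

In other words, hitting the target means every Transaction Engine/Storage Manager pair has to live within a footprint on the order of tens of megabytes, which is why careful per-server profiling mattered.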
When we need to start a database, we pick the server that’s least utilized. We choose this based on local monitoring at each server, rolled up through the management tier to the Connection Brokers. It’s simple to do given everything NuoDB already provides, and because we know what each server supports, we can compute a single capacity percentage for each one.
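The selection logic can be sketched in a few lines. This is a hypothetical illustration, not the NuoDB implementation: the `ServerStats` class, its fields, and `pick_server` are names I’m inventing here, and collapsing the metrics with `max` (so the most constrained resource dominates) is one plausible way to get a single capacity percentage.

```python
# Hypothetical sketch of least-utilized server selection. ServerStats and
# pick_server are illustrative names, not NuoDB APIs.
from dataclasses import dataclass

@dataclass
class ServerStats:
    host: str
    cpu_pct: float    # rolled-up CPU utilization (0-100)
    mem_pct: float    # rolled-up memory utilization
    disk_pct: float   # rolled-up disk utilization

    @property
    def capacity_pct(self) -> float:
        # Collapse the per-resource metrics into one number; the most
        # constrained resource dominates.
        return max(self.cpu_pct, self.mem_pct, self.disk_pct)

def pick_server(stats: list) -> ServerStats:
    """Return the least-utilized server for the next database start."""
    return min(stats, key=lambda s: s.capacity_pct)
```

A broker holding the rolled-up stats for all 45 servers would just call `pick_server` whenever a new Transaction Engine/Storage Manager pair needs a home.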
It gets better. Because a NuoDB database is made of an agile collection of processes, it’s very inexpensive to start or stop a database. So, in addition to monitoring for server capacity we also watch what’s going on inside each database, and if we think it’s been idle long enough that something else could use the associated resources more effectively we shut it down. In other words, if a database isn’t doing anything active we stop it to make room for other databases.
When an SQL client needs to access that database, we simply re-start it where there are available resources. We call this mechanism hibernating and waking a database. This on-demand resource management means that while some number of databases are actively running, we can support a much larger number in total (remember, we’re talking about applications that exhibit a long-tail access pattern). With this capability, our original goal of 7,200 active databases translates into 72,000 total supported databases. On a single 4.3u System.
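The hibernate/wake cycle described above can be sketched as a simple idle-timeout loop. Everything here is an assumption for illustration: the `Database` class, its `start`/`stop`/`touch` methods, and the 300-second timeout are stand-ins for the real management-tier machinery, not NuoDB calls.

```python
# Illustrative hibernate/wake sketch. The Database class and the timeout
# value are hypothetical, standing in for the real management tier.
import time

IDLE_TIMEOUT = 300  # seconds of inactivity before hibernation (assumed)

class Database:
    def __init__(self, name):
        self.name = name
        self.running = False
        self.last_active = time.monotonic()

    def touch(self):
        # Called on every SQL request; wakes the database if needed.
        if not self.running:
            self.start()
        self.last_active = time.monotonic()

    def start(self):
        self.running = True   # in reality: launch a TE/SM pair on a chosen server

    def stop(self):
        self.running = False  # in reality: shut the TE/SM pair down

def hibernate_idle(databases, now=None):
    """Stop any running database that has been idle longer than IDLE_TIMEOUT."""
    now = time.monotonic() if now is None else now
    for db in databases:
        if db.running and now - db.last_active > IDLE_TIMEOUT:
            db.stop()
```

Because stopping and starting a database is cheap, this loop can run aggressively: a long-tail workload keeps most databases hibernated most of the time, which is what turns 7,200 active databases into 72,000 supported ones.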
The final piece we added is what we call database bursting. If a single database gets really popular it will start to take up too many resources on a single server. If you provision another server, separate from the Moonshot System, then we’ll temporarily “burst” a high-activity database to that new host until activity dies down. It’s automatic, quick and gives you on-demand capacity support when something gets suddenly hot.
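The bursting decision itself reduces to a pair of thresholds. The thresholds and function below are assumptions for illustration only; the real trigger conditions aren’t specified in this post.

```python
# Hedged sketch of the "bursting" decision. Threshold values and the
# plan_placement function are illustrative assumptions, not NuoDB logic.
BURST_THRESHOLD = 0.5   # one database using >50% of its server (assumed)
COOL_THRESHOLD = 0.1    # activity level at which it can move back (assumed)

def plan_placement(db_load, on_burst_host):
    """Return 'burst', 'return', or 'stay' for a single database,
    given its fractional load and whether it's already been burst out."""
    if not on_burst_host and db_load > BURST_THRESHOLD:
        return "burst"    # temporarily relocate to the spare server
    if on_burst_host and db_load < COOL_THRESHOLD:
        return "return"   # activity died down; move back into the chassis
    return "stay"
```

The gap between the two thresholds provides hysteresis, so a database hovering near the limit doesn’t bounce back and forth between hosts.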
I’m not going to repeat too much here about how we drove our tests. That’s already covered in the discussion on how we’re trying to design a new kind of benchmark focused on density and efficiency. You should go check that out … it’s pretty neat. Suffice it to say, the really critical thing to us in all of this was that we were demonstrating something that solves a real-world problem under real-world load.
You should also go read about how we set up and ran on a Moonshot System. The bottom line is that the system worked just like you’d expect, and gave us the kinds of management and monitoring features to go beyond basic load testing.
We were really lucky to be given access to a full Moonshot System. It gave us a chance to test out our ideas, and we were actually able to do better than our target. You can see this in the view from our management interface running against a real system under our benchmark load: when we hit 7,200 active databases we were only at about 70% utilization, so there was a lot more room to grow. Huge thanks to HP for giving us time on a real Moonshot System to see all those ideas work!
Something that’s easy to lose track of in all this discussion is the question of power. Part of the value proposition from Project Moonshot is in energy efficiency, and we saw that in spades. Under load a single server only draws 18 Watts, and the system infrastructure is closer to 250 Watts. Taken together, that’s a seriously dense system that is using very little energy for each database.
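Putting the power numbers from this post together gives a rough per-database figure. This is simple arithmetic on the measurements quoted above, purely for illustration:

```python
# Rough per-database power figure from the measurements in this post.
SERVER_WATTS = 18     # per-server draw under load
SERVERS = 45
CHASSIS_WATTS = 250   # shared system infrastructure
ACTIVE_DBS = 7200

total_watts = SERVERS * SERVER_WATTS + CHASSIS_WATTS  # 1,060 W for the system
watts_per_db = total_watts / ACTIVE_DBS               # ~0.15 W per active database
```

Roughly a seventh of a watt per active database, and far less per supported database once hibernation is factored in.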
We were psyched to have the chance to test on a Moonshot System. It gave us the chance to prove out ideas around automation and efficiency that we’ll be folding into NuoDB over the next few releases. It also gave us the perfect platform to put our architecture through its paces and validate a lot about the flexibility of our core architecture.
We’re also seriously impressed by what we experienced from Project Moonshot itself. We were able to create something self-contained and easy to manage that solves a real-world problem. Couple that with the fact that a Moonshot System draws so little power, and the Total Cost of Ownership is impressively low. That’s probably the last point to make about all this: the combination of our two technologies gave us something where we could talk concretely about capacity and TCO, something that’s usually hard to do in such clear terms.
In case it’s not obvious, we’re excited. We’ve already been posting this week about some ideas that came out of this work, and we’ll keep posting as the week goes on. Look for the moonshot tag, and please follow up with comments if you’re curious about anything specific and would like to hear more!