Friday, July 15, 2011

Google Americas Faculty Summit Day 1: Cluster Management



On July 14 and 15, we held our seventh annual Faculty Summit for the Americas with our New York City offices hosting for the first time. Over the next few days, we will be bringing you a series of blog posts dedicated to sharing the Summit's events, topics and speakers. --Ed

At this year’s Faculty Summit, I had the opportunity to provide a glimpse into the world of cluster management at Google. My goal was to brief the audience on the challenges of this complex system and explain a few of the research opportunities that these kinds of systems provide.

First, a little background. Google’s fleet of machines is spread across many data centers, each of which consists of a number of clusters (sets of machines connected by a high-speed network). Each cluster is managed as one or more cells. A user (in this case, a Google engineer) submits jobs to a cell for it to run. A job could be a service that runs for an extended period, or a batch computation that runs to completion, such as a MapReduce that updates an index.
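
To make the vocabulary concrete, here is a minimal, hypothetical sketch of the concepts above (cells, jobs of either kind, and per-task resource requests) in Python. The class and field names are invented for illustration and do not reflect Google’s internal APIs.

```python
# Hypothetical data model; names and fields are illustrative only.
from dataclasses import dataclass, field
from enum import Enum

class JobKind(Enum):
    SERVICE = "service"  # runs for an extended period (e.g., a server)
    BATCH = "batch"      # runs to completion (e.g., a MapReduce)

@dataclass
class Job:
    name: str
    kind: JobKind
    task_count: int        # how many identical tasks to run
    cpus_per_task: float   # resource request per task
    ram_mb_per_task: int

@dataclass
class Cell:
    name: str
    machines: list                             # the machines managed as one unit
    jobs: list = field(default_factory=list)   # jobs submitted to this cell

    def submit(self, job: Job) -> None:
        """A user (a Google engineer) submits a job to a cell for it to run."""
        self.jobs.append(job)

# Example: submit a long-running service and a batch job to one cell.
cell = Cell(name="cell-a", machines=["m1", "m2", "m3"])
cell.submit(Job("web-frontend", JobKind.SERVICE, task_count=3, cpus_per_task=2.0, ram_mb_per_task=4096))
cell.submit(Job("index-update", JobKind.BATCH, task_count=2, cpus_per_task=1.0, ram_mb_per_task=2048))
```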

Cluster management operates at a very large scale: whereas most people would consider a storage system that holds a petabyte of data to be large, our storage systems send us an emergency page when they have only a few petabytes of free space remaining. This scale gives us opportunities (e.g., a single job may use several thousand machines at a time), but also many challenges (e.g., we constantly need to worry about the effects of failures). The cluster management system juggles the needs of a large number of jobs in order to achieve good utilization, trying to strike a balance between a number of conflicting goals.
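
As a toy illustration of balancing conflicting goals, the sketch below scores a candidate machine for a task by rewarding tight packing (utilization) while penalizing placing many tasks of the same job on one machine (spreading for failure tolerance). The weights, field names, and scoring rule are assumptions made for this example, not Google’s scheduling algorithm.

```python
# Hypothetical scoring function; the weights and fields are invented.
from collections import namedtuple

Machine = namedtuple("Machine", ["free_cpus", "total_cpus"])
Task = namedtuple("Task", ["cpus"])

def placement_score(machine, task, same_job_tasks_here, w_util=1.0, w_spread=2.0):
    """Higher is better; -inf means the task does not fit on this machine."""
    if machine.free_cpus < task.cpus:
        return float("-inf")
    # Reward machines that end up well utilized (bin-packing pressure)...
    utilization = 1.0 - (machine.free_cpus - task.cpus) / machine.total_cpus
    # ...but penalize stacking tasks of the same job on one machine, so a
    # single machine failure takes out fewer of that job's tasks.
    return w_util * utilization - w_spread * same_job_tasks_here

# A nearly full machine scores well on utilization, but loses if it already
# runs two tasks of the same job.
print(placement_score(Machine(free_cpus=4, total_cpus=32), Task(cpus=2), same_job_tasks_here=2))
```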

To complicate things, data centers can contain multiple types of machines, different network and power-distribution topologies, a range of OS versions, and so on. We also need to handle changes, such as rolling out a software or hardware upgrade, while the system is running.
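
Here is a minimal sketch of one such change, a rolling software upgrade that updates a few tasks at a time and halts if a batch looks unhealthy. The task representation and the health check are assumptions made for the example.

```python
# Hypothetical helper; the task representation and health check are assumptions.
def rolling_upgrade(tasks, new_version, batch_size=2, is_healthy=lambda t: True):
    """Upgrade `tasks` (dicts with a "version" key) a few at a time."""
    for i in range(0, len(tasks), batch_size):
        batch = tasks[i:i + batch_size]
        for task in batch:
            task["version"] = new_version  # in reality: drain, then restart on the new version
        if not all(is_healthy(t) for t in batch):
            return False  # halt the rollout; later tasks stay on the old version
    return True

# Example: upgrade four running tasks two at a time.
tasks = [{"name": f"task-{i}", "version": "1.0"} for i in range(4)]
rolling_upgrade(tasks, new_version="1.1")
```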

Our current cluster management system is about seven years old (several generations by the standards of most Google software) and, although it has been a huge success, it is beginning to show its age. We are currently prototyping a new system to replace it; most of my talk was about the challenges we face in building it. We are building the new system to handle larger cells, to look into the future (by means of a calendar of resource reservations) so it can provide predictable behavior, to support failures as a first-class concept, to unify a number of today’s disjoint systems, and to give us the flexibility to add new features easily. A key goal is that it provide predictable, understandable behavior to users and system administrators. The latter, for example, want answers to questions like “Are we in trouble? Are we about to be in trouble? If so, what should we do about it?”
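
To illustrate the “calendar of resource reservations” idea, here is a toy sketch that admits a reservation only if its demand fits under capacity in every time slot of the window it requests. It is purely illustrative; this post does not describe the real system’s design, and the single per-slot CPU capacity is an assumption.

```python
# Toy reservation calendar; capacity is a single CPU number per time slot.
def fits(calendar, capacity, start, end, demand):
    """True if `demand` fits under `capacity` in every slot of [start, end)."""
    return all(calendar.get(slot, 0) + demand <= capacity for slot in range(start, end))

def reserve(calendar, capacity, start, end, demand):
    """Admit the reservation only if the whole window fits; otherwise refuse."""
    if not fits(calendar, capacity, start, end, demand):
        return False  # refusing up front keeps future behavior predictable
    for slot in range(start, end):
        calendar[slot] = calendar.get(slot, 0) + demand
    return True

# Example: the second request overlaps slots 2-3, where it would exceed capacity.
cal = {}
print(reserve(cal, capacity=100, start=0, end=4, demand=60))  # True
print(reserve(cal, capacity=100, start=2, end=6, demand=60))  # False
```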

Putting all this together requires advances in a great many areas. I touched on a few of them, including scheduling and ways of representing and reasoning with user intentions. One of the areas that I think doesn’t receive nearly enough attention is system configuration—describing how systems should behave, how they should be set up, how those setups should change, etc. Systems at Google typically rely on dozens of other services and systems. It’s vital to simplify the process of making controlled changes to configurations that result in predictable outcomes, every time, even in the face of heterogeneous infrastructure environments and constant flux.
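
As one hedged illustration of “controlled changes with predictable outcomes”: treat a configuration as data, compute the diff against what is currently deployed, and apply it only after validators accept the change. The function names and the blast-radius rule below are hypothetical, invented for this example.

```python
# Hypothetical config-change workflow; the validator rule is invented.
def plan_change(current: dict, desired: dict) -> dict:
    """Return the settings whose values would change."""
    return {k: v for k, v in desired.items() if current.get(k) != v}

def apply_change(current: dict, desired: dict, validators) -> dict:
    """Apply a change only if every validator accepts the proposed diff."""
    diff = plan_change(current, desired)
    for check in validators:
        check(current, desired, diff)  # a validator raises to reject the change
    updated = dict(current)
    updated.update(diff)
    return updated

def small_blast_radius(current, desired, diff, limit=2):
    """Keep each rollout small so its effects stay predictable."""
    if len(diff) > limit:
        raise ValueError(f"change touches {len(diff)} settings; the limit is {limit}")

# Example: a two-setting change passes; a larger one would be rejected.
current = {"replicas": 3, "region": "us-east", "log_level": "info"}
desired = {"replicas": 5, "region": "us-east", "log_level": "debug"}
print(apply_change(current, desired, [small_blast_radius]))
```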

We’ll be taking steps toward these goals ourselves, but the intent of today’s discussion was to encourage people in the academic community to think about some of these problems and come up with new and better solutions, thereby raising the bar for us all.
