High-probability successes, by design

October 27, 2011

Boy, it’s hard to get back into the rhythm of blogging once you stop.  It’s been a busy few weeks — if you read the Initiative in Systems Pharmacology post you know a little bit about why, but also there have been a number of grant and fellowship deadlines, and on top of that we’re recruiting this year.  In short, the day job has been taking up (even) more of the evening than it usually does.  I like to be busy, but there is such a thing as going too far. However, somewhat to my surprise, I find myself missing blogging — the rest of my job doesn’t require me to read papers and think about them, so thinking about science can fall by the wayside if I’m not careful.  In some ways it’s like missing the pain from a nagging tooth, but any kind of absence can make the heart grow fonder.  (Have you missed me?)

So, to get back into the swing of things, here’s a paper that I read a while ago but never finished writing about (Barnes et al. 2011. Bayesian design of synthetic biological systems, PNAS doi:10.1073/pnas.1017972108).  It deals with ways to do a better job of designing biological systems.  A dominant argument in synthetic biology has been that the job of synthetic biologists is to make biology more modular, to take inspiration from the standardization of engineering parts such as screws and nuts and bolts (which were once wildly varied, but now have standard sizes and screw threads) and attempt to develop similar standardizations of biological parts.  This direction has had some successes, but it’s clear that there are challenges; and the challenges loom ever larger as the system one is trying to design becomes more complex.  Biology is implemented in probabilistic chemical reactions, not cold steel, and the analogy of mechanical engineering can only take us so far.  And so Barnes et al. argue that we should pay more attention to tools from statistics, specifically to Bayesian analysis.

Here’s their argument, which personally I find quite compelling.  Bayesian analysis is used in biology to try to pull network information out of large, noisy data sets.  The general idea is that, given observed data,  Bayesian analysis allows you to infer a range of possible network structures that are consistent with the data.  More importantly, it gives you a rigorous way of ranking how likely it is that a given network produced the data you observed.  (For a slightly less general idea, see this nice Primer by Sean Eddy — you can also find it here).  Tellingly, we call this “reverse engineering”.  The process of designing a system to produce the output you want could be viewed as the reverse of this (“forward engineering” or just “engineering”, perhaps?).  You can define the output you want to see in response to a signal, pretend that you just collected a dataset with the desired characteristics, and ask what kind of network might be able to produce those data.  You could even add experimental error to your pretend data, though that might seem perverse.  What you would get out of this exercise would be a rank-ordered list of network designs that could have produced the data you “observed”, in order of probability: in other words, a list of designs that can give you the output you desire, ranked according to how easy it should be to get the desired result.  Seems useful, no? Barnes et al. comment that “[t]he ability to model how a natural or synthetic system will perform under controlled conditions must in fact be seen as one of the hallmarks of success of the integrated approaches taken by systems or synthetic biology”.
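To make the “pretend data” idea concrete, here’s a toy sketch of design-as-inference in Python.  To be clear, this is not the machinery Barnes et al. actually use (they run approximate Bayesian computation with sequential Monte Carlo over simulated reaction dynamics); instead, two made-up one-line “circuits” stand in for candidate networks, and each design is scored by how often parameters drawn from a prior hit the desired output.  That score is exactly the kind of ranking described above:

```python
import random

random.seed(0)  # reproducible toy example

# Hypothetical design goal: a steady-state output of 0.5, within a tolerance.
TARGET = 0.5
TOL = 0.1

def design_a(k1, k2):
    # Made-up candidate circuit A: output set by a ratio of two rates.
    return k1 / (k1 + k2)

def design_b(k1, k2):
    # Made-up candidate circuit B: output set by a product of two rates.
    return k1 * k2

def acceptance_rate(design, n=10000):
    """Fraction of prior draws whose simulated output hits the target.

    This plays the role of the (unnormalized) model probability in a
    rejection-sampling version of approximate Bayesian computation.
    """
    hits = 0
    for _ in range(n):
        k1 = random.uniform(0.0, 2.0)  # flat prior over rate constants
        k2 = random.uniform(0.0, 2.0)
        if abs(design(k1, k2) - TARGET) < TOL:
            hits += 1
    return hits / n

for name, design in [("A", design_a), ("B", design_b)]:
    print(f"design {name}: hits target in ~{acceptance_rate(design):.0%} of prior draws")
```

In this toy case design A wins: it hits the target over a much wider slice of parameter space, so it should be the easier circuit to build.  The real analysis replaces these one-liners with stochastic simulations of full reaction networks, and uses sequential Monte Carlo rather than brute-force rejection to make the sampling tractable.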

Note that Bayesian analysis allows you to be rather broad-minded about the nature of the network that might produce your pretend data.  You can look at a very wide range of possible network structures if you have the computing power.  This is a big advantage, at least in principle, over other ways of designing systems. Most people start with a design that they think should work — based on the network analysis a human intellect is capable of, which varies with the human but is usually limited — and model how it might behave, to make sure that it has at least some chance of behaving as desired.  But this is a far cry from identifying the best network for the task.  Bayesian analysis offers the potential (again, if you have a large enough computer) of being able to search through all the network designs you can think of. You can then choose the best one for your particular desired behavior, i.e. the one that does what you want it to in the widest range of parameter space.   Of course, you can also limit the range of network designs considered — for example, you may want to say that you’re only willing to build a network with 3 or 4 nodes.  And you can also decide to include specific useful reactions; the mathematics for this already exists, because of methods developed to introduce “prior knowledge” into the analysis.

All of this is kind of nice because switching from reverse engineering (I know something about input/output relationships, what networks can explain the behavior?) to forward engineering (I know the behavior I want, how can I implement it?) turns bugs into features.  In the reverse engineering approach, it’s annoying if you end up with lots of possibilities – which one is correct?  (If you reject a hardwired view of biology, maybe several at once.  Who knows?) In the proposed forward direction, it’s great to have lots of possibilities.  This means you have more potential ways to get what you want.  Evolution must feel happy when this happens.

So does it work? Barnes et al. test three examples of network designs, and ask whether their approach would have led them to identify the best designs for a given behavior.  The examples are: three-node protein networks asked to produce perfect adaptation, with or without cooperativity (which they compare with the analysis here); bacterial two-component systems (which come in two flavors, orthodox and unorthodox, believed to have different properties such as varying robustness to noise and to the levels of circuit components); and stochastic toggle switches (bistable on/off switches) without cooperativity.  In all cases the Bayesian analysis seems to narrow the possible design space considerably, and in the right directions.  (If you happen to be interested in these network designs, check out the paper: in several cases, Barnes et al. were able to provide additional insight on top of reproducing what was already known.)
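If you haven’t met the toggle switch before, here’s a minimal deterministic sketch of the basic idea: two genes that repress each other, settling into one of two stable states depending on where they start.  This is not the paper’s model (Barnes et al. analyze the stochastic, non-cooperative case, which needs a stochastic simulator), and the parameters here (production rate a = 10, Hill coefficient n = 2, i.e. with cooperativity) are purely illustrative:

```python
def repressed_production(r, a=10.0, n=2):
    # Hill-type repression: production falls as the repressor level r rises.
    # a and n are illustrative values, not taken from Barnes et al.
    return a / (1.0 + r ** n)

def simulate_toggle(x0, y0, dt=0.01, steps=5000):
    # Euler integration of the classic two-gene toggle switch:
    #   dx/dt = a/(1 + y^n) - x,   dy/dt = a/(1 + x^n) - y
    x, y = x0, y0
    for _ in range(steps):
        dx = repressed_production(y) - x
        dy = repressed_production(x) - y
        x += dt * dx
        y += dt * dy
    return x, y

# Two different starting points settle into two different stable states:
print(simulate_toggle(5.0, 1.0))  # x ends high, y ends low
print(simulate_toggle(1.0, 5.0))  # y ends high, x ends low
```

With n = 1 (no cooperativity) this deterministic version loses its bistability, which is exactly why the stochastic non-cooperative switch the paper considers is an interesting design target.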

If you’re a synthetic biologist trying to build a network to do something new, being able to focus on just a few of the bewildering array of possible networks would seem to be a very good idea.  I look forward to news from our intrepid engineers about whether this approach actually works.

Barnes CP, Silk D, Sheng X, & Stumpf MP (2011). Bayesian design of synthetic biological systems. Proceedings of the National Academy of Sciences of the United States of America, 108 (37), 15190–15195. PMID: 21876136

You are currently reading High-probability successes, by design at It Takes 30.
