The (model) elephant in the room

March 5, 2012 § 5 Comments

[Image: Banksy’s Elephant in the Room]

Jeremy Gunawardena recently wrote a very nice minireview about the lessons of the Michaelis-Menten equation for model-building (also available here).  Michaelis-Menten is an equation with many lessons for modern systems biologists (as I’ve discussed before), and it is so deeply ingrained in biochemistry that most people who learn it probably regard it as simply a fact of life; but it is, instead, a simplified way of expressing certain facts about life, i.e. a model.  When Michaelis and Menten developed it, it was a highly theoretical construct that assumed the existence of a chimeric creature called the enzyme-substrate complex, which would not be observed until 30 years later.  Jeremy calls the enzyme-substrate complex the “elephant in the room”, and argues that what was remarkable about Michaelis and Menten’s accomplishment was not that their equation fitted the experimental data, but that by doing so it provided evidence for something unseen.  Provocatively, he argues that the speed with which biologists adopted the equation, despite this great hole in the evidence, indicates that biology is more theoretical than physics; and that this, in turn, is because biology is more complicated than physics and needs all the help it can get.  Go and read it, and discuss.
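
A minimal sketch (mine, not the minireview’s) makes the elephant visible: simulate the full mass-action scheme E + S ⇌ ES → E + P, in which the enzyme-substrate complex appears explicitly, and compare the result with the Michaelis-Menten initial rate v = Vmax·[S]/(Km + [S]).  All rate constants and concentrations below are arbitrary.

```python
# Illustrative sketch: the Michaelis-Menten rate law as an approximation
# to the full mass-action scheme E + S <-> ES -> E + P.
# Rate constants and concentrations are arbitrary, for illustration only.
import numpy as np
from scipy.integrate import odeint

k1, km1, kcat = 10.0, 1.0, 1.0   # binding, unbinding, catalytic rates
E_total, S0 = 0.1, 10.0          # total enzyme, initial substrate

def mass_action(y, t):
    E, S, ES, P = y
    v_bind = k1 * E * S - km1 * ES   # net formation of the ES complex
    v_cat = kcat * ES                # catalysis
    return [-v_bind + v_cat, -v_bind, v_bind - v_cat, v_cat]

t = np.linspace(0, 50, 500)
sol = odeint(mass_action, [E_total, S0, 0.0, 0.0], t)

# Michaelis-Menten prediction for the initial rate:
Km = (km1 + kcat) / k1
Vmax = kcat * E_total
v_mm = Vmax * S0 / (Km + S0)
print("peak [ES]:", sol[:, 2].max())   # the 'elephant', made explicit
print("MM initial rate:", v_mm)
```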

Gunawardena J (2012). Some lessons about models from Michaelis and Menten. Mol Biol Cell 23(4): 517-9. PMID: 22337858

Moving through the matrix

March 1, 2011 § Leave a comment

Blood vessel formation is one of the wonderful adaptive processes in biology.  If a tissue is under-oxygenated, it sends out a cry for help and lo and behold, a new blood vessel forms.  This is great if the rescued tissue was under-oxygenated because it was cut off from its normal supply by a wound.  It’s not so good if the under-oxygenated tissue is a tumor.  Tumors that successfully acquire a blood supply of their own can metastasize to different sites by travelling through the circulatory system, and grow much faster than avascular tumors.

So how do the new blood vessels actually form?  In the context of a tumor, what happens is roughly this: the tumor sends out protein signals such as VEGF (vascular endothelial growth factor), which diffuses through the tissue until it reaches an existing blood vessel.  The endothelial cells that line a blood vessel have receptors for VEGF, and they react to it by producing proteases that chew up the basement membrane supporting the vessel. The freed endothelial cells are then able to migrate into the extracellular matrix, again often chewing their way along using proteases such as matrix metalloproteinases. VEGF induces both chemotaxis and proliferation, so the new “sprout” moves towards the tumor (heading up the gradient of VEGF), creating a column of endothelial cells that will later hollow out, lay down a new basement membrane around itself, and become able to support blood flow to the tumor.  Presto, new blood vessel.  In fact, many new blood vessels: the original sprout will generally branch several times, creating a new network to feed the tumor.
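
To make the chemotaxis step concrete, here is a toy sketch (my own invention, not any of the models discussed here): an endothelial cell takes a biased random walk, weighting steps that carry it up a VEGF gradient.  The gradient shape, bias strength, and step counts are all arbitrary.

```python
# Toy sketch of chemotaxis: a biased random walk up a VEGF gradient.
# The gradient shape, bias strength, and step count are all arbitrary.
import random

def vegf(x):
    """Hypothetical VEGF concentration, peaking at the tumor at x = 100."""
    return max(0.0, 1.0 - abs(100.0 - x) / 100.0)

def step(x, bias=50.0):
    """Move left or right; steps up the gradient are weighted more heavily."""
    w_up = 1.0 + bias * max(0.0, vegf(x + 1.0) - vegf(x))
    w_down = 1.0 + bias * max(0.0, vegf(x - 1.0) - vegf(x))
    return x + 1.0 if random.random() < w_up / (w_up + w_down) else x - 1.0

x = 0.0                      # the cell starts at the parent vessel
for _ in range(2000):
    x = step(x)
print("final position:", x)  # the walk drifts towards the VEGF source
```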

Do we completely understand this process?  Well, no; for example, we have little understanding of why the sprouts branch. Jeremy Gunawardena pointed out a very nice modeling paper from a couple of years ago (Bauer et al. 2007. A cell-based model exhibiting branching and anastomosis during tumor-induced angiogenesis.  Biophys J 92: 3105-21) that used cell-based modeling to offer some interesting insights into the mechanisms that may be responsible for branching.  Alas, as far as I can tell from Google Scholar, this paper has only ever been cited by other modeling papers, although the question of what controls branching (and related issues, like the role of the cytoskeleton in branching) is an active area of research.


Modeling and the scientific method

February 15, 2011 § Leave a comment

Last Friday’s Theory Lunch was interesting for a reason the speaker didn’t entirely intend.  In the preamble for his talk, Daniel Beard wanted the audience to agree with him that the vision of the scientific method articulated by John R Platt — devise hypotheses, devise experiments to distinguish among them, perform said experiments, repeat (known as “strong inference”) — is the way all science should be done, and that systems biology has special importance as a way to articulate and test hypotheses.  He was unexpectedly ambushed by Tim Mitchison, who made a spirited argument that hypothesis-driven approaches often limit the size of the conceptual advance that can be made.  If I understood Tim correctly, he was arguing that forcing a hypothesis into clear alternatives that can immediately be tested almost always makes the question too small.  Big ideas, he said, usually start out fuzzy.  The Platt-style scientific method is useful for crisping up the edges of the big ideas, but not for having them in the first place.

Beard, who was courteous and open-minded throughout, allowed himself to be dragged away from his planned topic — not just (or even primarily) by Tim, but also by other TL participants.  [This does happen in TL.  It takes a strong-willed speaker to avoid being distracted by the barrage of semi-relevant questions.  I love the fact that TL participants aren’t afraid to ask questions, but sometimes I wish they would stay on point.]  That was a shame, because he had an interesting story to tell.  I thought it might be salutary to go through the story he might have told — if he’d been allowed to — and then discuss whether it is indeed an example of a Platt-style cycle of hypothesis and experiment.


Redefining optimal

November 16, 2010 § 3 Comments

When you’re trying to use models to probe the behavior of a complex biological system, there usually comes a point where you have to “fit parameters”.  This happens because the model builds up a macroscopic picture from underlying features that may be impossible to measure.  For example, in the case of tumor growth, your model might use local nutrient density as a parameter that affects the growth rate of individual cells in the tumor, and therefore the growth of the tumor overall.  But nutrient density might not be measurable directly, and so you would have to use experimental data on something that’s easier to measure (e.g. how rapidly tumors grow) to deduce how nutrient density changes across the tumor.  This might then allow you to predict what would happen in a different set of circumstances.  A good deal of work has gone into figuring out how to estimate model parameters from experimental data, because it’s difficult: you may have to computationally explore a huge space to find the parameter values that best fit your data, and you may find that your experimental data can’t distinguish among several different parameter sets that each fit the data quite well.  A recent paper (Fernández Slezak et al. (2010) When the Optimal Is Not the Best: Parameter Estimation in Complex Biological Models. PLoS One 5: e13283. doi:10.1371/journal.pone.0013283) highlights a disturbing problem of parameter estimation: the parameters you find by searching for the optimal fit between model and experiment may not be biologically meaningful.
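
As a concrete (and entirely toy) illustration of what “fitting parameters” means in practice, here is a sketch of fitting a two-parameter logistic growth curve to noisy, synthetic “tumor size” measurements; the model, the data, and the parameter names are all invented for illustration.

```python
# Toy sketch of parameter fitting: estimate growth rate r and carrying
# capacity K of a logistic tumor-growth model from noisy 'measurements'.
# All data here are synthetic; this is an illustration, not a real model.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, r, K):
    """Tumor size under logistic growth, starting from size 1."""
    return K / (1.0 + (K - 1.0) * np.exp(-r * t))

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 30)
true_r, true_K = 0.5, 100.0
data = logistic(t, true_r, true_K) + rng.normal(0, 3, t.size)  # noisy observations

popt, pcov = curve_fit(logistic, t, data, p0=[0.1, 50.0])
print("fitted r, K:", popt)                   # best-fit parameters
print("std errors:", np.sqrt(np.diag(pcov)))  # rough uncertainty estimates
```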

You might find this statement self-evident, and I’ll admit I didn’t fall off my chair either.  But bear with me, because this is a more interesting study than you may think.  The authors start with a model, built by others, of how solid tumors grow when they don’t have a blood supply.  The model recognizes that solid tumors are composed of a mixture of live and dead cells, and treats the nutrients released by dead cells as potential fuel for the live cells.  The question in the original model was how far a tumor can get in this avascular mode, and what factors lead to growth or remission. Fernández Slezak et al. aren’t interested in this question, though: they’re using the model as a test case to explore how easy it is to find parameters that bring the model into agreement with the experimental data.  This particular model has six free parameters (which is more than 4, and fewer than 30); this is manageable, though large.  I’ll mention two of the parameters, since they become important later: β, the amount of nutrient a cell consumes while undergoing mitosis; and C_c, the concentration of nutrient that maximizes the mitosis rate.  With a model of this size, many people would have had to settle for a rather sparse sampling of parameter space; but because some of the authors work at IBM, they had access to remarkable computational resources (several months of Blue Gene‘s compute time).
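
To see how “optimal” parameters can be slippery, here is a toy grid search (my sketch, nothing like the authors’ six-parameter Blue Gene computation): a model in which two invented parameters enter only as a product, so that many quite different parameter pairs fit the synthetic data essentially equally well.

```python
# Toy grid search showing parameter degeneracy: the two parameters enter
# the model only as a product, so many (beta, c) pairs fit equally well.
# Everything here is invented; the real study searched six parameters.
import numpy as np

def model(t, beta, c):
    """Hypothetical decay model; only the product beta*c is identifiable."""
    return np.exp(-beta * c * t)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 20)
data = model(t, 2.0, 0.5) + rng.normal(0, 0.01, t.size)   # synthetic data

fits = []
for beta in np.linspace(0.5, 4.0, 40):
    for c in np.linspace(0.1, 1.0, 40):
        err = np.sum((model(t, beta, c) - data) ** 2)
        fits.append((err, beta, c))

fits.sort()                       # sorts by error, smallest first
best_err = fits[0][0]
near_optimal = [(b, c) for err, b, c in fits if err < 2.0 * best_err]
print("near-optimal (beta, c) pairs:", len(near_optimal))
print("examples:", near_optimal[:3])
```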


Physical modeling of clot formation

September 24, 2010 § Leave a comment

Jeremy Gunawardena pointed me to a pair of papers documenting an impressive effort in multiscale modeling, aimed at connecting biochemical events with events that happen on the cellular and super-cellular scales (Xu et al. 2010. doi:10.1016/j.bpj.2009.12.4331; Xu et al. 2008. doi:10.1098/rsif.2007.1202; full references below).  These papers are fascinating for many reasons.  First, they describe a model for the formation of a blood clot that does quite a good job of recreating the dynamics of clot formation and the complex, inhomogeneous structure of the clot itself.  Second, the model is a great demonstration of how to merge models of different types of biological events that happen at different length scales (tens of nanometers to hundreds of micrometers) into one coherent whole; this is the kind of modeling we’ll need to get good at if we want to understand how molecular interactions influence the physiology of tissues and whole organisms, and the authors offer a very thoughtful discussion of why they selected the types of models they used at each scale and how they merged them.  And third, the authors are able to test model predictions against (their own) in vivo data, and they are finding interesting ways in which the model is wrong, so we’re learning something.


More models, better biochemistry

September 10, 2010 § Leave a comment

Peter Sorger, Will Chen and Mario Niepel have a new review out in Genes & Development, which looks to me as if it was only classified as a Review because the journal doesn’t have a category called Tutorial (Chen et al. 2010. Classic and contemporary approaches to modeling biochemical reactions. Genes Dev 24: 1861-75. PMID: 20810646).  It’s a very useful discussion of why and how to model, and it looks tailor-made for use in graduate-level courses.

Chen et al. start out by reminding us of the approximations we use every day and where they come from.  The first is mass-action kinetics, an approximation that lets us work with the idea of “concentration” but restricts us to situations where it’s reasonable to treat the species in a reaction as continuously distributed in a well-mixed setting, with little fluctuation in either the number of molecules available to react or the number of interactions between them.  This covers a good deal of eukaryotic biology, but not all of it.
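
Here is a sketch of where that approximation bites (my example, not from the review): simulate A + B → C exactly, one reaction at a time, with Gillespie’s algorithm, and compare with the deterministic mass-action prediction.  At high copy numbers the two agree; at low copy numbers fluctuations around the “concentration” picture are large.  All numbers are arbitrary.

```python
# Sketch: exact stochastic (Gillespie) simulation of A + B -> C, compared
# with the deterministic mass-action prediction. With many molecules the
# two agree; with few, fluctuations dominate. Numbers are arbitrary.
import numpy as np

def gillespie(a, b, k, t_max, rng):
    """Exact stochastic simulation of A + B -> C; returns final count of C."""
    t, c = 0.0, 0
    while a > 0 and b > 0:
        t += rng.exponential(1.0 / (k * a * b))  # waiting time to next reaction
        if t >= t_max:
            break
        a, b, c = a - 1, b - 1, c + 1
    return c

rng = np.random.default_rng(2)
kappa, t_max = 1.0, 5.0
for n0 in (10, 1000):                 # few molecules vs many
    k = kappa / n0                    # scale so both cases convert the same fraction
    runs = [gillespie(n0, n0, k, t_max, rng) for _ in range(200)]
    ode = n0 * kappa * t_max / (1 + kappa * t_max)   # mass-action prediction
    print(f"N0={n0}: stochastic {np.mean(runs):.1f} +/- {np.std(runs):.1f},"
          f" mass action {ode:.1f}")
```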


Raising the standard high

July 8, 2010 § 2 Comments

Jagesh Shah pointed out this Retrospective in MBoC (Drubin DG, Oster G. 2010. Experimentalist meets theoretician: a tale of two scientific cultures. Mol Biol Cell 21: 2099-101. PMID: 20444974), which tells the tale of a collaboration between theorists and experimentalists to investigate the mechanical forces involved in vesicle formation during endocytosis, an effort that resulted in an interesting theory paper (Liu J, Sun Y, Drubin DG, Oster GF. 2009. The mechanochemistry of endocytosis. PLoS Biol 7: e1000204. PMID: 19787029).

You should remember, as you read, that the piece is published in a journal aimed at cell biologists.  Some of the messages it delivers — theory really can be helpful!  You can learn to talk to theorists if you try! — are primarily aimed at this audience.  Other messages — mutual education, mutual respect, learn each other’s languages, patience, caffeine helps — are in my view generally applicable to collaborations of all kinds, including marriage (with the possible exception of the point about caffeine).  But none the worse for that.

The most interesting issue comes up at the end.  The authors, like many of our readers, ran into the problem that publishing an interdisciplinary paper is hard, and that the standards for publishing theory in biology are quite unclear: “Several reviewers wanted us to ‘prove’ that our model was true by performing additional experiments. But this was a modeling paper, not an experimental paper. So, what should a theory paper accomplish and what should be the criteria for evaluating such an article?”  It seems that the paper was rejected at least once for this kind of reason; the authors comment on the difficulty of pleasing a mixture of reviewers from experimental and theoretical backgrounds.  Many of you have been in the same situation and can sympathize.

The question of what the standard should be for theory papers in biology is an important, though complicated, issue for systems biologists to discuss. One point to be clear on, though it will make some people grumble, is that the standard is going to differ depending on how much recognition you are seeking for the work.  The paper in question ended up in PLoS Biology, a very fine journal aimed at a wide audience.  That makes me suspect that the journals that chose not to accept the paper were also general journals, most likely one or more of Nature, Science, or Cell.  And reviewers for these journals have a strong tendency to ask for something extra in a paper before they are willing to recommend acceptance.

