Microbes that see the future

May 19, 2011

Quite a few stories have come out recently about microorganisms that use one type of stress as a signal that they should prepare themselves for another stress. For example, an Escherichia coli bacterium on a piece of the salad you ate for lunch (let’s hope normal E. coli, not the pathogenic sort) may find itself traversing your digestive tract.  One of the first things it can observe about its new environment is that the temperature has gone up; soon afterwards, the level of oxygen goes down.  It turns out that the transcription of genes associated with dealing with oxygen starvation is induced by an increase in temperature.  It seems that E. coli has evolved a response that anticipates oxygen starvation when it sees a temperature elevation.  Another study found that when E. coli encounter lactose they upregulate the genes required for dealing with maltose (but not vice versa), mirroring the order in which the bacteria are likely to see these sugars as they traverse our guts.  In an artificial setting in which sugars are offered one after the other, wild-type E. coli grew better than a strain in which this anticipatory response is broken; in other words, the anticipatory response provided a fitness advantage.  There have been similar findings in other settings, for example the response of yeast to the conditions it encounters during the process of wine production.  Human pathogens such as Vibrio cholerae and Candida albicans appear to have responses like this as well.

Tzachi Pilpel and colleagues have contributed much to the idea that this so-called “predictive” behavior might be a general phenomenon.  In a recent paper, they set out to develop a theoretical framework for analyzing the costs and benefits of an anticipatory response (Mitchell and Pilpel, 2011, A mathematical model for adaptive prediction of environmental changes by microorganisms.  PNAS doi:10.1073/pnas.1019754108).  The problem here is simple: when a cell upregulates a set of genes that it doesn’t immediately need, it starts paying a cost.  Some time later, the genes start to be useful, and it gains an advantage.  How far ahead can you start paying the cost, and still find the advantage worthwhile?  And if the gap between the first signal and the second signal is variable, how fast does that erode your ability to gain an advantage from anticipation?

The general form of this problem may be familiar to you from the discussions people have about retirement planning. [Or perhaps it isn’t, if you live in a country where retirement planning is less of an obsession than it is in the US — every time I log into my bank account I get a screen asking me whether I’m saving enough for retirement.  The answer is always “probably not”, which is disheartening.  Then again, that’s what happens when you tell the calculator that you plan to live forever.]  When you start saving for retirement, you pay a cost; you reduce your standard of living now in an attempt to improve your standard of living later.  But, of course, you don’t know exactly how long you need your retirement savings to last.  So what’s the right amount to save?  I think people probably solve this problem less efficiently than bacteria.

The big difference between retirement saving and gene expression is that for gene expression compound interest goes in the wrong direction: the mRNA and protein you make early on will be degraded, or diluted out as the cells divide, and so the expression level tends towards a steady state.  The result is that too-early preparation does the microorganism no good at all; they pay the costs for a long time, and get the same benefit as they would have done if they had started much later.  But if the gap between the “alert” signal and the challenge itself is too short, it’s also difficult to prepare adequately.  Mitchell and Pilpel used two experimental systems to look at the balance between cost and benefit at different times: in one, they give a brief pulse of a lactose analog to E. coli growing on glycerol (a less preferred carbon source), followed at some variable interval by real lactose; in the other, they give a brief, mild heat shock followed by a potentially lethal heat shock some time later.  They then measure the relative fitness of the “conditioned” bacteria versus a control population that didn’t receive the “alert” signal.
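The exponential approach to steady state is easy to see in a few lines of Python (a sketch with made-up rates, just to illustrate why preparing too early buys nothing):

```python
import math

def protein_level(t, beta=1.0, alpha=0.5):
    """Protein level t generations after induction begins, with
    production rate beta and dilution rate alpha: the level approaches
    the steady state beta/alpha along an exponential curve."""
    return (beta / alpha) * (1.0 - math.exp(-alpha * t))

# Starting much earlier than a few dilution half-lives buys almost no
# extra protein -- the level saturates -- but the cost of expression
# keeps accruing the whole time.
for t in (1, 2, 5, 10, 20):
    print(f"t = {t:2d} generations: level = {protein_level(t):.3f} of max 2.0")
```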

In both experimental systems, the fitness curve is roughly like an inverted U.  As the time between alert and challenge increases, the fitness of the conditioned population initially goes up until it reaches a maximum.  Then it goes down, and in both cases falls well below 1: the cost of preparation impairs the fitness of the conditioned population if the delay is too long.  The fact that both of these curves were qualitatively the same encouraged the authors to believe that a single mathematical framework might be generally useful in the analysis of situations of this sort, and so they sharpened their pencils and got to work.

Conceptually, the mathematical analysis is simple enough.  After the “alert” signal, bacteria start producing protein at a certain rate, which is degraded (actually, diluted in this analysis) at a certain rate; it therefore approaches a steady state via an exponential curve.  This production has a cost, which reduces the growth rate of the bacterium.  When the challenge arrives, there’s a benefit that’s proportional to the level of protein that has accumulated at the time of the challenge; this now increases the growth rate of the bacterium.  The authors then integrate all of these effects, and get an expected overall benefit (or cost) of the preconditioning.

Note that we are talking about two different kinds of fitness here: in the case of lactose, the conditioning allows the bacteria to use a preferred food source more readily, which is nice but not essential; in contrast, in the case of the heat shock the level of the induced proteins may determine whether the cell lives or dies.  This does change the analysis a bit, since the increase in fitness in the former case is an integral over the time between the challenge and the point at which the control strain catches up in its protein expression level (minus the cost of the premature expression), whereas for the latter case there’s simply a yes/no question to answer — is the level high enough for survival?  But in both cases the model manages to fit the experimental data rather well.  (Parameter-fitting hawks may want to know that for the lactose case, all parameters were measured from experiment, while in the case of heat shock the degradation rate wasn’t known and had to be fit from other data.)

Mitchell and Pilpel then use their model to explore the question of when anticipatory responses would be expected to be selected, varying (1) the relative cost and benefit of the response, (2) the predictability of the challenge coming after the alert signal, and (3) the average delay between the alert and the challenge.  Quite large regions of the space they tested would be predicted to select for anticipatory responses.  Personally I was surprised by the fact that even in a situation where the bacterium gains only modestly by anticipating the challenge, the delay can be up to 10 generations provided that the cost is low and the challenge is highly predictable.  Of course, higher cost narrows the regions in which these kinds of responses make sense.  So stresses in which only one or a few proteins need to be induced seem more likely to show this anticipatory behavior than situations that require a genome-wide response.
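You can get a feel for this kind of parameter scan with a crude sketch (all the numbers here are my illustrative assumptions, not the paper's): vary the cost of pre-induction and the probability q that the challenge actually follows the alert, at a fixed delay.

```python
import math

def level(t, beta=1.0, alpha=0.5):
    return (beta / alpha) * (1.0 - math.exp(-alpha * t))

def expected_net(cost, q, delay, benefit=0.1, dt=0.01):
    """Expected net effect when the challenge follows the alert with
    probability q: the cost of preparing is paid either way, but the
    benefit is only collected if the challenge actually arrives."""
    paid = sum(cost * level(i * dt) * dt for i in range(int(delay / dt)))
    return q * benefit * level(delay) - paid

# Which (cost, predictability) combinations favour anticipation at a
# fixed five-generation delay?  Low cost tolerates poor predictability;
# higher cost demands a near-certain challenge; high cost never pays.
favourable = {(cost, q)
              for cost in (0.005, 0.02, 0.08)
              for q in (0.3, 0.6, 0.9)
              if expected_net(cost, q, delay=5.0) > 0}
```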

All of this assumes that the response to the “alert” signal is identical to the response to the challenge itself.  What if the bacterium instead induces a partial response to the “alert”, followed by a full response to the challenge?  This would reduce the cost of the response even further, but (depending on the delay, and the variability of the delay) wouldn’t necessarily reduce the overall benefit in the case of the lactose type of challenge.  The authors show that this kind of partial response would be expected to be beneficial in an even wider set of environments.  Indeed, this may well be what happens in some of the known cases of anticipatory responses.  Mitchell and Pilpel call this a “bet-hedging” strategy, a situation in which the bacterium finds it advantageous to invest a little energy to prepare for a situation that may well happen (but might not) — just as if you see your bus coming around the corner, you may invest a little energy in walking faster, even though it’s far from certain that the bus will arrive at the bus stop before you if you continue to walk at your previous pace.

Talking of bet-hedging, many of you will have noticed that the discussion so far ignores the potential for differences between individual bacteria in these responses.  That fact has not escaped the authors’ attention; they plan to extend their model to deal with such population-level heterogeneity in the future.

Mitchell A, & Pilpel Y (2011). A mathematical model for adaptive prediction of environmental changes by microorganisms. Proceedings of the National Academy of Sciences of the United States of America, 108 (17), 7271-6 PMID: 21487001

§ One Response to Microbes that see the future

  • nilesh nishant says:

    Environmental changes occur in microorganisms, so there should be a model to predict the adaptive changes; the concept is really great.  It would be even better if one could explain the differences between individual bacteria in these responses.


You are currently reading Microbes that see the future at It Takes 30.
