Cell Systems launches Math | Bio article format

October 22, 2015 § 1 Comment

The newest Cell sister journal, Cell Systems, just launched an interesting new format called Math | Bio.  In an editorial announcing the new format, Quincey Justman explains that by creating this new format the editors hope to encourage papers like John Hopfield’s 1974 work on kinetic proofreading — a paper that discussed an idea about how biology could work, without attempting in any way to prove that biology did work that way.   Let’s hope it’s successful! There are many nice things about this format.  What I like most is that it opens a channel between people who have theoretical ideas but no way to test them and people who may have relevant experimental results but are puzzled about how to interpret them. Also, if biology is more theoretical than physics — as Jeremy Gunawardena has argued — we need more and better channels to get the theory out there.

And it’s also nice, of course, that the inaugural Math | Bio paper is from our Department.  In it, Yoni Savir, Ben Tu from UT Southwestern, and Mike Springer describe a possible design for a biological linear rectifier.  A linear rectifier produces essentially no output below a threshold; above the threshold, its output increases linearly with input over a wide range of input values.  Savir et al. argue that such a device could be useful in many settings in biology, including nutrient regulation of growth rate and gene expression; and they show that a relatively simple motif involving competitive inhibition could behave in this way.
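
If you want to play with the idea, here’s a minimal sketch of how a threshold-linear response can emerge from tight binding.  It’s a generic toy model of molecular titration — a fixed pool of inhibitor soaks up the input molecule until the pool is saturated — rather than the specific circuit Savir et al. analyze, and all the names and numbers below are invented for illustration.

```python
# Toy model: a fixed inhibitor pool B binds the input ligand L one-to-one with
# dissociation constant Kd; the "output" is the free ligand left over.
# Illustrative only -- not the Savir et al. circuit itself.
import numpy as np

def free_ligand(L_total, B_total, Kd):
    """Exact equilibrium free-ligand concentration (quadratic solution)."""
    b = B_total - L_total + Kd
    return 0.5 * (-b + np.sqrt(b ** 2 + 4.0 * Kd * L_total))

L = np.linspace(0, 100, 11)
for Kd in (10.0, 0.01):  # loose vs. tight binding
    print(f"Kd={Kd:g}:", np.round(free_ligand(L, B_total=40.0, Kd=Kd), 1))

# With tight binding (small Kd), the output is ~0 until the input exceeds the
# inhibitor pool (threshold at L = 40), then rises linearly with slope ~1 and
# never saturates -- the rectified-linear shape described above.
```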

So, is anyone out there scratching their head over an output that seems weirdly linear and just won’t saturate?  You could be lucky: it might not be an artefact after all, but the first example of an exciting new form of biological signal processing. Check it out.

Cellular Morse Code

July 10, 2012 § Leave a comment

For decades now, the biological community has been focused on the question of how cells transmit information from place to place.  It’s a central problem if you want to understand pretty much anything about cell behavior.  A signal to grow, for example, might start when a growth factor arrives on the outside of a cell, say in your tissue culture dish when you add fresh medium with growth-factor-containing serum in it.  The information that it’s time to grow might be transmitted across the membrane by a membrane-spanning receptor, triggering a series of events such as a cascade of phosphorylations that cause enzymes within the cell to change activity.  The final result might be a change of activity of a transcription factor; the presence of a signal outside the cell has thus been converted into a change in the gene expression profile inside the nucleus of the cell.  We chiefly think of these processes as linear — a pathway — with a well-defined flow of information from A to B to C. We draw diagrams that show A near the cell membrane, passing information to B (closer to the nucleus) and then to C (closer still).  But of course this is just an analogy we use to make it easier for us to think about what’s going on, and like all convenient analogies it has the potential to be seriously misleading.  Our so-called “pathways” loop and branch and pass information forward and backward and sideways, losing precision all the way; A, B and C are most often distinguished by the timing of their activation, rather than by their location in the cell; and while it’s easy to tell a general story about how an external stimulus leads to a response inside the cell, it’s still hard to know why the response is the size it is, or happens at the time it does.

One of the most puzzling aspects of signal transduction is what happens when multiple signals impinge on the same mediator — when “paths” cross, or diverge, or merge.  In the case of the important anti-oncogene p53, we draw several paths coming in to p53 and several paths going out again.  The downstream consequences of p53 activation vary dramatically, from transient cell cycle arrest to senescence and apoptosis.  How does this single protein receive and transmit several different types of information?

One idea is that the p53 network is in fact many different, distinct pathways, each using a different p53 isoform (say, p53′, p53″ and so on).  All these pathways look as if they overlap because they all involve an increase in the total level of p53 protein, but p53 can be modified in many ways (phosphorylation, acetylation, ubiquitination, methylation…) at many different sites, producing modified versions of p53 that have varying functions.  It’s well established that this happens, and that the modifications do indeed modulate p53’s behavior.  But there’s another dimension, literally, to explore here: time.  Although activating the p53 pathway always causes p53 protein levels to increase — by definition — that doesn’t mean that the timing and duration of the response are always the same.  The role of protein dynamics in the transmission and processing of information in biology is seriously under-explored.

Here’s a dramatic example: exposure to gamma radiation, which causes double-strand breaks in DNA, leads to repeated individual pulses of p53 that have a stereotyped size and shape and appear at defined intervals.  Increasing the dose of radiation doesn’t increase the average size of the pulses; instead, it increases the number of pulses.  Irradiation with ultraviolet light also damages DNA, but the lesions are different: mostly pyrimidine dimers and other single-strand damage rather than double-strand breaks.  The response of p53 to UV is quite different from its response to gamma.  Instead of repeated pulses of unchanging average size, you get a single wave whose size varies depending on the amount of irradiation: the bigger the radiation dose, the bigger the wave.  But what do these differences mean?  The Lahav lab has been pursuing this question pretty much ever since the lab began, and now they think they have an answer (Purvis et al. (2012) p53 dynamics control cell fate. Science doi:10.1126/science.1218351).
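
To see the contrast concretely, here’s a cartoon of the two regimes — gamma dose setting the number of stereotyped pulses, UV dose setting the size of a single wave.  The pulse shapes, timings and dose scalings below are invented for illustration; this is a sketch of the qualitative behavior, not the Lahav lab’s model or data.

```python
# Cartoon of the two p53 regimes: gamma dose -> more fixed-size pulses;
# UV dose -> one wave whose amplitude scales with dose. Numbers are invented.
import numpy as np

t = np.linspace(0, 24, 481)  # hours

def gamma_response(dose):
    n_pulses = int(np.ceil(dose / 2.0))  # higher dose -> more pulses
    return sum(np.exp(-((t - (2.0 + 5.5 * k)) / 1.0) ** 2) for k in range(n_pulses))

def uv_response(dose):
    return dose * np.exp(-((t - 6.0) / 3.0) ** 2)  # higher dose -> bigger wave

for dose in (2, 4, 8):
    print(f"dose {dose}: gamma pulses = {int(np.ceil(dose / 2.0))}, "
          f"pulse height = {gamma_response(dose).max():.1f}, "
          f"UV wave height = {uv_response(dose).max():.1f}")
# The gamma pulse height stays ~1.0 at every dose; only the count changes.
```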

« Read the rest of this entry »

Learning from the enemy

March 2, 2012 § 1 Comment

Viruses are the ultimate hackers of biological systems.  Synthetic biologists might begin to catch up in a billion years or so, depending, of course, on how strong the evolutionary pressures on them are. But for now, for frighteningly elegant and complex interventions in cellular behavior, viruses are hard to beat.  And that means that when you find a virus messing with your system, you can learn a lot from how it achieves its effects.

A recent paper (Maynard et al. 2012. Competing pathways control host resistance to virus via tRNA modification and programmed ribosomal frameshifting.  Mol. Syst. Biol. 8:567) dissects a case in point. In earlier work, this group identified some pathways in E. coli that rather unexpectedly affected the efficiency of lambda phage replication.  For example, knocking out members of the 2-thiouridine synthesis pathway inhibited replication; conversely, knocking out members of a pathway involved in making iron-sulfur clusters increased replication.  These are both pathways that use sulfur, and so it was natural to suspect that the two activities are related, though the mechanism of neither effect was known.

What is 2-thiouridine good for? One of its uses is to modify certain tRNAs (those that accept Lys, Glu and Gln as payloads) with a thiol group, providing a clue that something to do with translation might be involved.  In fact, thiolation of these tRNAs is important for reducing ribosomal frame-shifting.  You might think that this is an unlikely place to look for effects on the virus.  But you’d be wrong.  It turns out that many viruses, including HIV, use a lovely strategy called programmed ribosomal frameshifting to make themselves more efficient by producing two proteins from one gene.  It works like this: when the ribosome reaches a so-called “slippery sequence”, it — um — slips, either backwards or forwards.  A ribosome that slips shifts into a new reading frame and no longer encounters the stop codon further along in the gene; so ribosomes that don’t slip make one protein, ending at the stop codon, while ribosomes that slip read through and make a longer one.  The ratio between the proteins is determined by the frequency of slippage, and the ratio matters because the two proteins have different functions.  In the case of lambda, the proteins made in this slippery fashion are called gpG and gpGT, and they seem to act as chaperones for the assembly of the phage’s tail.
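
As an aside, −1 frameshift sites of this kind classically sit at a “slippery” heptamer of the form X XXY YYZ — three identical bases, three more identical bases, then one more — as in HIV-1’s U UUU UUA.  Here’s a quick sketch of scanning an RNA sequence for such candidates; the motif and example are generic illustrations, and I’m not claiming this is lambda’s actual gpG/gpGT site.

```python
# Scan an RNA sequence for candidate -1 frameshift "slippery" heptamers of the
# classic X XXY YYZ form (e.g. HIV-1's UUUUUUA). Generic illustration only.
import re

SLIPPERY = re.compile(r"(?=([ACGU])\1{2}([ACGU])\2{2}[ACGU])")

def find_slippery(rna: str):
    """Return (position, heptamer) pairs for every candidate slippery site."""
    return [(m.start(), rna[m.start():m.start() + 7]) for m in SLIPPERY.finditer(rna)]

print(find_slippery("GGGAUUUUUUAGGG"))  # -> [(4, 'UUUUUUA')]
```

If a site slips with probability p, the read-through product makes up a fraction p of the output and the terminated product 1 − p — which is why anything that modulates slippage, tRNA thiolation included, shifts the gpG:gpGT ratio.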

« Read the rest of this entry »

Windows on the cellular soul

November 4, 2011 § 2 Comments

One of the things we wonder about a lot in biology is what is going on inside a cell.  We have many ways to get at partial answers — Western blots, GFP fusions, transcriptional profiling, various proteomic techniques — and the number and power of these approaches are increasing. Here’s a new window on the internal state of a cell that makes use of a fundamental process of biology: the presentation of peptides by the class I MHC complex (Caron et al. 2011.  The MHC I immunopeptidome conveys to the cell surface an integrative view of cellular regulation.  Mol. Syst. Biol. 7:533-547, doi:10.1038/msb.2011.68).

MHC class I transports peptides to the surface of cells. From Wikipedia, http://en.wikipedia.org/wiki/File:MHC_Class_I_processing.svg

To understand what’s going on here you need to know a little about the amazing mechanisms that underlie an immune response.  One of the problems the immune system has to solve is that viruses live inside cells, so in the early stages of a viral infection there may be little to see, and little for the immune system to respond to, in the extracellular environment.  And so, conveniently, we have evolved a system to allow the immune system to look inside the cell.  For historical reasons it’s called the Major Histocompatibility Complex class I, or MHC class I — it was discovered by George Snell as a genetic locus that controlled the rejection of skin grafts in mice (hence histocompatibility).  Basically what MHC class I molecules do is to go fishing inside the cell for peptides of a certain length, all of which come from proteins made within the cell.  These peptides are then captured in a “bear trap”-like structure at the top of the MHC molecule (which I have sketched here), transported to the cell surface, and offered up for recognition by T lymphocytes.

You don’t really need to know about the other parts of this system for the purposes of discussing this paper, but thanks to a mechanism called “tolerance”, briefly touched on here,  T lymphocytes generally manage not to respond to peptides that come from proteins made by the host — that’s you.  Instead, they focus on the foreign peptides, which are presumed to originate from viral proteins.  The point to remember is this: the MHC itself isn’t selective for viral peptides, but brings a broad sampling of what’s inside the cell to the cell surface.  It’s not an unbiased sample; peptides from some proteins are over-represented, others under-represented, and specific arrangements of amino acids are preferred for binding.  But it offers a view of what’s going on inside the cell that is hard to get any other way. The question is, what is this view telling us?  Caron et al. set out to answer this question by using a drug to manipulate the internal state of the cell, and looking with mass spec to see what happens to the peptides presented on MHC as a result.

« Read the rest of this entry »

High-probability successes, by design

October 27, 2011 § Leave a comment

Boy, it’s hard to get back into the rhythm of blogging once you stop.  It’s been a busy few weeks — if you read the Initiative in Systems Pharmacology post you know a little bit about why, but also there have been a number of grant and fellowship deadlines, and on top of that we’re recruiting this year.  In short, the day job has been taking up (even) more of the evening than it usually does.  I like to be busy, but there is such a thing as going too far. However, somewhat to my surprise, I find myself missing blogging — the rest of my job doesn’t require me to read papers and think about them, so thinking about science can fall by the wayside if I’m not careful.  In some ways it’s like missing the pain from a nagging tooth, but any kind of absence can make the heart grow fonder.  (Have you missed me?)

So, to get back into the swing of things, here’s a paper that I read a while ago but never finished writing about (Barnes et al. 2011. Bayesian design of synthetic biological systems, PNAS doi:10.1073/pnas.101792108).  It deals with ways to do a better job of designing biological systems.  A dominant argument in synthetic biology has been that the job of synthetic biologists is to make biology more modular, to take inspiration from the standardization of engineering parts such as screws and nuts and bolts (which were once wildly varied, but now have standard sizes and screw threads) and attempt to develop similar standardizations of biological parts.  This direction has had some successes, but it’s clear that there are challenges; and the challenges loom ever larger as the system one is trying to design becomes more complex.  Biology is implemented in probabilistic chemical reactions, not cold steel, and the analogy of mechanical engineering can only take us so far.  And so Barnes et al. argue that we should pay more attention to tools from statistics, specifically to Bayesian analysis.

Here’s their argument, which personally I find quite compelling.  Bayesian analysis is used in biology to try to pull network information out of large, noisy data sets.  The general idea is that, given observed data,  Bayesian analysis allows you to infer a range of possible network structures that are consistent with the data.  More importantly, it gives you a rigorous way of ranking how likely it is that a given network produced the data you observed.  (For a slightly less general idea, see this nice Primer by Sean Eddy — you can also find it here).  Tellingly, we call this “reverse engineering”.  The process of designing a system to produce the output you want could be viewed as the reverse of this (“forward engineering” or just “engineering”, perhaps?).  You can define the output you want to see in response to a signal, pretend that you just collected a dataset with the desired characteristics, and ask what kind of network might be able to produce those data.  You could even add experimental error to your pretend data, though that might seem perverse.  What you would get out of this exercise would be a rank-ordered list of network designs that could have produced the data you “observed”, in order of probability: in other words, a list of designs that can give you the output you desire, ranked according to how easy it should be to get the desired result.  Seems useful, no? Barnes et al. comment that “[t]he ability to model how a natural or synthetic system will perform under controlled conditions must in fact be seen as one of the hallmarks of success of the integrated approaches taken by systems or synthetic biology”.
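
For flavor, here’s a stripped-down sketch of that workflow using plain rejection sampling — the paper itself uses a more sophisticated sequential scheme (ABC SMC) and realistic dynamical models, so treat this as a cartoon.  The two candidate “designs”, the priors and the target response are all invented for illustration.

```python
# Rejection-ABC sketch of "forward engineering": pretend the desired switch-like
# response is observed data, then rank candidate designs by how often they
# reproduce it. Models, priors, and target are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
inputs = np.linspace(0.0, 10.0, 20)
target = inputs ** 4 / (5.0 ** 4 + inputs ** 4)  # the output we wish we had observed

def simulate(design, K):
    n = {0: 1, 1: 4}[design]  # design 0: graded response; design 1: cooperative
    return inputs ** n / (K ** n + inputs ** n)

accepted = [0, 0]
for _ in range(50_000):
    design = rng.integers(2)    # uniform prior over the two designs
    K = rng.uniform(1.0, 10.0)  # prior over the free parameter
    rms = np.sqrt(np.mean((simulate(design, K) - target) ** 2))
    if rms < 0.05:              # close enough to the desired behavior
        accepted[design] += 1

# Acceptance counts approximate the model posterior: a ranked list of designs,
# ordered by how easily each one produces the behavior we asked for.
print("accepted per design:", accepted)
```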

« Read the rest of this entry »

Designed by iGEM: implemented by nature

August 29, 2011 § 3 Comments

I’ve been thinking recently about this year’s iGEM Jamboree, which is coming up soon. For those of you who don’t know, iGEM, the International Genetically Engineered Machine competition, challenges undergraduate and high school students to make useful machines out of biological parts and implement them in living cells. The ideas are always interesting — usually somewhere between creative and wild, actually — and the Jamboree is where the different teams (165 of them this year) share their results, celebrate the new parts they’ve characterized, and generally have a good time. iGEM has turned out to be a major way for students from engineering and the quantitative sciences to get their first taste of biology.

iGEMmers are always on the lookout for biological modules that can be re-used for other purposes, and quorum sensing is something of a favorite. The system that produces the gas vesicles that let bacteria float also seems to be ripe for re-engineering. And so a recent paper that identifies a gas vesicle system controlled by quorum sensing caught my eye (Ramsay et al. 2011. A quorum-sensing molecule acts as a morphogen controlling gas vesicle organelle biogenesis and adaptive flotation in an enterobacterium. PNAS doi:10.1073/pnas.1109169108). Ooh, I thought — that looks interesting. You could target bacteria to something you want to float up — the Titanic, say — and turn on the gas vesicles when you have enough bacteria. And indeed, it’s a natural iGEM project; so much so that the 2008 Kyoto team already tried to do it. They did not, in fact, raise the Titanic, but they did show [pdf] that their engineered bacteria could move a ~10µm bead. One must start somewhere.

Many bacteria produce gas vesicles to regulate their buoyancy, but we don’t know all that much about how the production of these vesicles is regulated.  Ramsay et al.’s paper is the first to show that quorum sensing can control the production of these vesicles in nature.  Changes in the availability of light or oxygen have also been shown to increase vesicle production in  some cases. Thus, it’s thought that the vesicle-producing cells may turn up the gas when they find themselves drifting too far away from the air-water interface. It’s an alternative to turning on flagellum formation (which would allow swimming towards the surface), and under some circumstances appears to be a more energetically favorable option.

« Read the rest of this entry »

The fear chemical?

June 29, 2011 § Leave a comment

We often talk, usually rather vaguely, about instincts and how they shape our behavior (my instinctive reaction was…, etc.).  Predator-prey interactions are one place where instincts are real, and really matter. A cat that doesn’t realize that a little scuttling squeaky thing is also a good meal probably won’t be welcome in the barn of a corn farmer.  Similarly, if a mouse doesn’t know to avoid the lair of a fox without having to be trained in avoidance, that mouse is at severe risk of never getting the chance to pass on its genes.  Hard-wired responses to the smell of predators are well documented, but not well understood.  A new paper from a multi-disciplinary collaboration led by our close neighbor Stephen Liberles (and including our even closer neighbor Bob Datta) has identified one of the chemical components that lead to this response (Ferrero et al. 2011. Detection and avoidance of a carnivore odor by prey.  PNAS doi:10.1073/pnas.1103317108).

Ever since the pioneering work of Linda Buck and Richard Axel, we’ve known, roughly, where our experience of smell comes from: volatile odorants are detected by a large class of receptors expressed in the neurons of the olfactory epithelium.  Each neuron expresses just one receptor. A scent is typically a mixture of many odorants, each of which may bind to and trigger the activity of several receptors; the combination of neurons activated by a given scent is unique to that scent, and so each scent sends a distinct set of signals to our brains.  As a result of this combinatorial detection, we can discriminate an extremely wide range of scents with a relatively limited set of receptor molecules.  Stephen Liberles and Linda Buck later identified a second set of receptors that detect amine odorants, the “trace amine-associated receptors” or TAARs.  These receptors may detect some of the important signals mammals use to communicate with each other about, for example, their state of sexual receptiveness.  But in most cases the question of which odorant a specific receptor responds to has yet to be answered.  There’s no easy way to guess or deduce this: what you have to do is try various possible odorants and see which ones activate the receptors.  Luckily, since all odorant receptors respond to activation by causing an increase in the level of the second messenger cyclic AMP (cAMP), this is now possible to do in vitro: you express your receptor of interest in an ordinary cell line, making sure that the cell line has the appropriate machinery to connect the receptor to the adenylyl cyclase that makes cAMP.  Then you add a cAMP-responsive reporter gene.  And then you try every possible odorant you can think of, and look for blips in reporter gene expression.  Then you move on to the next odorant receptor, and do it all again.
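
A quick back-of-envelope calculation shows why combinatorial detection is so powerful.  Humans have on the order of 400 functional odorant receptors (an approximate figure); even the crudest reading of the code, treating each receptor as simply on or off, allows an astronomical number of activation patterns:

```python
# Back-of-envelope: treating each receptor as binary on/off (a crude
# simplification of real, graded responses), count the possible patterns.
n_receptors = 400  # approximate size of the functional human odorant receptor set
print(f"{2 ** n_receptors:.2e} possible activation patterns")  # ~2.58e+120
```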

« Read the rest of this entry »

The wisdom of cellular crowds

June 22, 2011 § Leave a comment

Once again, an interesting Theory Lunch talk has inspired me to write a blog post.  Last Friday’s talk was from Mike White, who described (among other things) his lab’s efforts to understand the transcriptional behavior of the prolactin gene.  This gene is primarily expressed in the pituitary, and controls the production of milk in breastfeeding mothers.  On a cell-by-cell level, its expression is very variable in pituitary tissue; neighboring cells express the gene to very different extents.  And yet the random expression patterns in individual cells together add up to a coordinated response at the tissue level.  If we hope to build from an understanding of how cells behave to an understanding of how organisms behave, we need to know what underlies this kind of “wisdom of crowds” effect.  And so White and colleagues set out to determine why this gene shows variable expression (Harper et al. 2011.  Dynamic analysis of stochastic transcription cycles.  PLoS Biology doi:10.1371/journal.pbio.1000607) and how this expression might be coordinated on a population level.

Cell-to-cell variability in the levels of proteins and mRNAs has been much studied in bacteria, where at least two factors are likely to be important: first, key regulatory molecules may be present in the cell at very low numbers, leading to randomness in gene activation; and second, unequal partitioning of components at cell division may create additional variation that is pretty much indistinguishable from the fluctuations caused by sporadic gene activation.  There have been fewer studies in eukaryotes, so far, but people are already speculating about additional sources of differences: perhaps genes are moved in and out of “transcription factories” at different times in different cells, or perhaps the differences are caused by chromatin remodeling.
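
The first of those factors is easy to see in simulation.  Here’s a minimal Gillespie simulation of the standard “random telegraph” model — a gene flipping stochastically between an active and an inactive state — with all rate constants invented for illustration:

```python
# Minimal Gillespie simulation of the random-telegraph gene model: the gene
# flips on/off at random, transcribes only while on, and mRNAs decay.
import numpy as np

rng = np.random.default_rng(1)

def simulate_cell(t_end=1000.0, k_on=0.01, k_off=0.05, k_tx=1.0, gamma=0.05):
    t, gene_on, mrna = 0.0, False, 0
    while True:
        rates = np.array([k_on * (not gene_on),  # gene turns on
                          k_off * gene_on,       # gene turns off
                          k_tx * gene_on,        # make one mRNA (only while on)
                          gamma * mrna])         # one mRNA decays
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return mrna  # mRNA count in this "cell" at the end of the run
        event = rng.choice(4, p=rates / total)
        if event == 0:   gene_on = True
        elif event == 1: gene_on = False
        elif event == 2: mrna += 1
        else:            mrna -= 1

counts = np.array([simulate_cell() for _ in range(200)])
print(f"mean mRNA = {counts.mean():.1f}, CV = {counts.std() / counts.mean():.2f}")
# Identical "cells" end up with very different mRNA counts, purely because
# gene activation is sporadic.
```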

« Read the rest of this entry »

Counting phosphorylations: one, two, many…

June 6, 2011 § Leave a comment

Jeremy Gunawardena’s lab just published a paper that should probably be required reading for anyone in the habit of attempting to measure the relative levels of phosphorylated proteins using Western blots (Prabakaran et al. 2011.  Comparative analysis of Erk phosphorylation suggests a mixed strategy for measuring phospho-form distributions.  Mol. Syst. Biol. 7:482).  If you are in that category, be warned: you will find this paper depressing.

What Prabakaran et al. wanted to do was to find a way of determining the pattern of phosphorylations on a protein.  They chose the simplest situation possible — Erk, a protein with just two phosphorylation sites — and set out to develop a reliable method for finding out how much of the protein was phosphorylated at only site 1, how much at only site 2, and how much on both sites.

Did you realize that with all our technology, we still can’t do this?  Many people don’t. Quantitative mass spectrometry techniques have recently made it possible to get a number for how much of the protein is phosphorylated at site 1 or site 2, but that still doesn’t tell you the distribution of the phosphoforms.  Suppose you have a protein that looks like this:

XXXS1XXX[cleavage site]XXXS2XXX

where S1 and S2 are the sites of phosphorylation. The [cleavage site], obviously, is the point where the enzyme you’re using to chop the protein into peptides for the mass spec does its work.  When you analyze your peptides, you will have no idea whether the XXX[P]S1XXX peptides you see come from a protein in which just S1 is phosphorylated, or a protein in which both S1 and S2 are phosphorylated.  So, if you see 50% [P]S1 and 50% [P]S2, you won’t know whether this reflects a situation in which both sites are phosphorylated independently (leading to a mixed population of proteins with only S1, only S2, both sites, or neither site phosphorylated) or a situation in which S2 is only phosphorylated after S1 (50% of the protein is phosphorylated on both sites, and 50% not at all).  This could easily be biologically important, don’t you think?
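
Here’s that example spelled out in a few lines of code: two very different phosphoform distributions that produce identical per-site measurements.

```python
# Two different phosphoform distributions that give identical per-site
# measurements after cleavage. Keys are (S1 phosphorylated?, S2 phosphorylated?).
independent = {(0, 0): 0.25, (1, 0): 0.25, (0, 1): 0.25, (1, 1): 0.25}
ordered     = {(0, 0): 0.50, (1, 0): 0.00, (0, 1): 0.00, (1, 1): 0.50}  # S2 only after S1

def site_occupancy(dist):
    s1 = sum(f for (p1, p2), f in dist.items() if p1)  # fraction [P]S1 seen by MS
    s2 = sum(f for (p1, p2), f in dist.items() if p2)  # fraction [P]S2 seen by MS
    return s1, s2

print(site_occupancy(independent))  # (0.5, 0.5)
print(site_occupancy(ordered))      # (0.5, 0.5) -- indistinguishable
```

The underlying problem is just counting: two sites give four phosphoforms, hence three free parameters (the fractions sum to one), but cleavage leaves you with only two per-site numbers — and the gap widens rapidly as the number of sites grows.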

« Read the rest of this entry »

Something new under the sun

May 31, 2011 § Leave a comment

One of the great surprises of the genomic era has been how similar the coding regions of genes are between species.  It seems that we have been underestimating the evolutionary role of altered regulation — increasing or decreasing the expression of a gene, in different places, at different times — relative to protein sequence changes.  So the question of how evolution produces novel patterns of expression of existing genes has become one of the hot topics of the day.  There are at least four ways you can imagine this happening via changes to the DNA near your gene of interest.  First, an enhancer that drives the expression of your gene could be created out of nothing by mutation, in a region of DNA that previously had nothing to do with regulating your gene.  Second, a pre-formed enhancer may “jump” into the region near your gene, carried by a transposon.  Third, an enhancer that was originally driving the expression of a neighboring gene may switch its activity to the promoter of your gene.  And finally, an existing enhancer that drives the expression of your gene at a particular time in development, or in a particular site in the body, may be modified by mutation such that it now causes expression at a different time and place.  This is called co-option.  The idea is that every functional enhancer involves many transcription factor binding sites: if an enhancer has binding sites for transcriptional activators A, B and C and repressor D, it will be active at times and places when A, B and C are present and D is not.  If evolution now adds activator site E, through a random mutation of a sequence that was quite similar to E anyway, then perhaps the enhancer can activate transcription at any time or place where you have three out of the four activating transcription factors: ABC still works, but so do ABE, ACE and BCE… and perhaps if you have all four, you can over-ride the presence of D.  I’m making this up, you understand, but that’s the general idea: you co-opt some of the pre-existing binding sites, add one or more new ones, and the result is that your gene of interest is expressed somewhere novel in time or space.
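
For concreteness, here’s that made-up enhancer logic written out as code — a toy, as I said above, not a claim about real enhancer grammar:

```python
# Toy version of the hypothetical enhancer logic above: activators A, B, C and
# (after co-option) E; repressor D. Made-up logic, matching the post's example.
def enhancer_active(tfs_present: set) -> bool:
    activators = len(tfs_present & {"A", "B", "C", "E"})
    if "D" in tfs_present:
        return activators == 4  # all four activators can over-ride the repressor
    return activators >= 3      # any three of the four activators suffice

print(enhancer_active({"A", "B", "C"}))            # True: ancestral pattern still works
print(enhancer_active({"A", "C", "E"}))            # True: expression somewhere new
print(enhancer_active({"A", "B", "D"}))            # False: repressed
print(enhancer_active({"A", "B", "C", "E", "D"}))  # True: all four over-ride D
```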

As usual for events that happened millions of years ago, specific examples of novelty are not that easy to identify with confidence.  But Sean Carroll and colleagues now think that they’ve spotted a new and interesting example of co-option (Rebeiz et al. 2011.  Evolutionary origin of a novel gene expression pattern through co-option of the latent activities of existing regulatory sequences.  PNAS doi:10.1073/pnas.1105937108).  What they did was to take a group of several closely related Drosophila species and identify 20 genes that might be expected to evolve relatively rapidly.  They then carefully examined the expression of these genes in several larval stages of each of the Drosophila species.  They saw many changes in expression, most of which turned out not to meet their definition of a novel expression pattern — for example, some changes that initially looked as if something new was happening turned out to be merely shifts of the timing of an expression pattern in one species relative to another.  But they did find one gene that had a unique pattern of expression in one species and no others, the Ned-1 gene.  In most of the Drosophila species studied, this gene is expressed in wing, leg and central nervous system tissues.  In one species, D. santomea, it’s also expressed in the developing optic lobe. Rebeiz et al. checked exhaustively that this pattern was neither a timing shift nor a remnant of an older expression pattern that had been lost in all of D. santomea’s relatives.  It really does look novel.

« Read the rest of this entry »
