Cell Systems launches Math | Bio article format

October 22, 2015 § 1 Comment

The newest Cell sister journal, Cell Systems, just launched an interesting new format called Math | Bio.  In an editorial announcing it, Quincey Justman explains that the editors hope the new format will encourage papers like John Hopfield's 1974 work on kinetic proofreading — a paper that discussed an idea about how biology could work, without attempting in any way to prove that biology did work that way.  Let's hope it's successful! There are many nice things about this format.  What I like most is that it opens a channel between people who have theoretical ideas but no way to test them and people who may have relevant experimental results but are puzzled about how to interpret them. Also, if biology is more theoretical than physics — as Jeremy Gunawardena has argued — we need more and better channels to get the theory out there.

And it's also nice, of course, that the inaugural Math | Bio paper is from our Department.  In it, Yoni Savir, Ben Tu from UT Southwestern, and Mike Springer describe a possible design for a biological linear rectifier: a device whose output is proportional to its input above a threshold, and stays linear over a wide range of input values instead of saturating.  Savir et al. argue that such a device could be useful in many settings in biology, including nutrient regulation of growth rate and gene expression; and they show that a relatively simple motif involving competitive inhibition could behave in this way.
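(A quick aside to make "linear rectifier" concrete — this is the generic textbook picture, not necessarily the specific circuit in the paper. An ideal linear rectifier computes

f(x) = \begin{cases} 0, & x \le \theta \\ k\,(x - \theta), & x > \theta \end{cases}

that is, nothing below a threshold \theta and a straight line above it. One textbook way a competition/sequestration motif can produce this shape is molecular titration: if the input X is soaked up by a tight-binding competitor present at total concentration C_{\mathrm{T}}, the free, active input is roughly

X_{\mathrm{free}} \approx \max(0,\, X_{\mathrm{T}} - C_{\mathrm{T}}),

giving a threshold at C_{\mathrm{T}} and a linear, unsaturating response above it. The symbols here are purely illustrative; whether this is how the Savir et al. motif actually works, you'll have to read the paper to find out.)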

So, is anyone out there scratching their head over an output that seems weirdly linear and just won't saturate?  You could be lucky: it might not be an artifact after all, but the first example of an exciting new form of biological signal processing. Check it out.

Calling future leaders in Synthetic Biology

July 16, 2012 § Leave a comment

SynBio LeAP: Synthetic Biology Leadership Accelerator Program
October 1–5, 2012
Airlie Center, near Washington, DC

• Do you have great ideas for advancing synthetic biology in the public interest?

• Do you want time, tools and partners to develop your ideas into action?

• Do you want to build a community working to best advance biotechnology?

Then join us as one of twenty emerging leaders who will spend a week developing plans for how they – and others – can best advance synthetic biology for the public good.

• Work with your peers, a professional creative facilitation team, and guest experts across sectors in biotechnology.

• Explore frameworks for assessing how biotechnologies can create public value.

• Develop leadership skills for engaging across diverse organizational contexts shaping biotechnology.

• Create actionable plans for mobilizing your ideas for best advancing synthetic biology.

• Share your plans with individuals and organizations that can support your goals beyond LeAP.

• Relax in Airlie’s beautiful grounds, enjoy great food and drink, network, and benefit from focused time to develop your ideas.

LeAP is shaped by your ideas and goals. If you want to lead a great future for – and through – synthetic biology, LeAP with us. We welcome participants across career stages, disciplines and sectors.

LeAP participation is fully sponsored by an open consortium of community funders and organizers, including the NSF, Alfred P. Sloan Foundation, SynBERC, BioBricks Foundation, and the Woodrow Wilson Center’s Synthetic Biology Project. If you or your organization is interested in supporting LeAP, please contact us.

Applications are now being accepted on a rolling basis. Limited spots are available and will fill up soon. Tell your friends and don’t wait to apply!

More information: synbioleap.org
Contact us: info AT synbioleap.org
Spread the word: #synbioLEAP

Cellular Morse Code

July 10, 2012 § Leave a comment

For decades now, the biological community has been focused on the question of how cells transmit information from place to place.  It’s a central problem if you want to understand pretty much anything about cell behavior.  A signal to grow, for example, might start when a growth factor arrives on the outside of a cell, say in your tissue culture dish when you add fresh medium with growth-factor-containing serum in it.  The information that it’s time to grow might be transmitted across the membrane by a membrane-spanning receptor, triggering a series of events such as a cascade of phosphorylations that cause enzymes within the cell to change activity.  The final result might be a change of activity of a transcription factor; the presence of a signal outside the cell has thus been converted into a change in the gene expression profile inside the nucleus of the cell.  We chiefly think of these processes as linear — a pathway — with a well-defined flow of information from A to B to C. We draw diagrams that show A near the cell membrane, passing information to B (closer to the nucleus) and then to C (closer still).  But of course this is just an analogy we use to make it easier for us to think about what’s going on, and like all convenient analogies it has the potential to be seriously misleading.  Our so-called “pathways” loop and branch and pass information forward and backward and sideways, losing precision all the way; A, B and C are most often distinguished by the timing of their activation, rather than by their location in the cell; and while it’s easy to tell a general story about how an external stimulus leads to a response inside the cell, it’s still hard to know why the response is the size it is, or happens at the time it does.

One of the most puzzling aspects of signal transduction is what happens when multiple signals impinge on the same mediator — when “paths” cross, or diverge, or merge.  In the case of the important anti-oncogene p53, we draw several paths coming in to p53 and several paths going out again.  The downstream consequences of p53 activation vary dramatically, from transient cell cycle arrest to senescence and apoptosis.  How does this single protein receive and transmit several different types of information?

One idea is that the p53 network is in fact many different, distinct pathways, each using a different modified form of p53 (say, p53′, p53″ and so on).  All these pathways look as if they overlap because they all involve an increase in the total level of p53 protein, but p53 can be modified in many ways (phosphorylation, acetylation, ubiquitination, methylation…) at many different sites, producing modified versions of p53 that have varying functions.  It's well established that this happens, and that the modifications do indeed modulate p53's behavior.  But there's another dimension, literally, to explore here: time.  Although activating the p53 pathway always causes p53 protein levels to increase — by definition — that doesn't mean that the timing and duration of the response are always the same.  The role of protein dynamics in the transmission and processing of information in biology is seriously under-explored.

Here's a dramatic example: exposure to gamma radiation, which causes double-strand breaks in DNA, leads to repeated individual pulses of p53 that have a stereotyped size and shape and appear at defined intervals.  Increasing the dose of radiation doesn't increase the average size of the pulses; instead, it increases the number of pulses.  Irradiation with ultraviolet light also causes damage to DNA, but the lesions are different: mostly pyrimidine dimers and single-strand damage rather than double-strand breaks.  The response of p53 to UV is quite different from its response to gamma.  Instead of repeated pulses of unchanging average size, you get a single wave whose size varies depending on the amount of irradiation: the bigger the radiation dose, the bigger the wave.  But what do these differences mean?  The Lahav lab has been pursuing this question pretty much ever since the lab began, and now they think they have an answer (Purvis et al. (2012) p53 dynamics control cell fate. Science doi:10.1126/science.1218351).
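If it helps to picture the two kinds of dynamics, here's a cartoon in code. It simply transcribes the description above — fixed-size pulses whose number grows with gamma dose, and a single wave whose amplitude grows with UV dose — with made-up numbers; it has nothing to do with the actual model in the paper.

import numpy as np

t = np.linspace(0, 24, 600)   # hours after damage (arbitrary time axis)

def gamma_response(dose, pulse_height=1.0, period=5.5, width=1.0):
    # Fixed-size pulses; higher dose -> more pulses (toy rule, not a fit).
    n_pulses = max(1, int(round(dose)))
    centers = [2.0 + i * period for i in range(n_pulses)]
    return sum(pulse_height * np.exp(-(t - c) ** 2 / (2 * width ** 2)) for c in centers)

def uv_response(dose, width=4.0):
    # A single wave whose amplitude scales with the dose.
    return dose * np.exp(-(t - 6.0) ** 2 / (2 * width ** 2))

low_gamma, high_gamma = gamma_response(2), gamma_response(6)
low_uv, high_uv = uv_response(0.5), uv_response(2.0)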

« Read the rest of this entry »

Fluorescent protein labeling: a cautionary tale

June 29, 2012 § 1 Comment

There was a time when we viewed bacterial cells as mere bags of randomly mixed molecules.  Lacking the obvious compartmentalization of eukaryotic cells, bacteria were viewed as being completely unstructured.  But increasing numbers of studies seem to show clearly defined localization patterns for proteins in bacteria.  One example is that the main proteases responsible for regulated proteolysis in bacteria — the Clp proteases (pronounced “clip”) — have been observed in several studies to form a single bright proteolytic focus, detected by fluorescent protein labeling.

The Paulsson lab spotted these observations and became intrigued.  One of the major interests in the lab is variation between individual cells at the RNA and protein level, and this looked like a place where significant variation might arise.  If all proteolysis in a cell is localized into a single spot, then when a cell divides something interesting must happen: either the spot also divides, or one of the two daughter cells gets all of the Clp proteases in the cell while the other daughter gets nothing.  The second option would lead to a potentially enormous difference in the ability of the two daughters to perform proteolysis.  So a graduate student, Dirk Landgraf, set out to look at whether this difference exists, and if so how long it lasts (Landgraf et al. 2012, Segregation of molecules at cell division reveals native protein localization.  Nature Methods doi: 10.1038/nmeth.1955).
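To see why a single indivisible focus would matter so much, here's a toy simulation (the numbers are hypothetical, not from the paper) comparing all-or-nothing inheritance of the proteases with ordinary binomial partitioning of freely diffusing complexes:

import numpy as np

rng = np.random.default_rng(0)
n_divisions = 10_000
n_proteases = 200   # hypothetical number of ClpP complexes per mother cell

# Scenario 1: a single indivisible focus -- one daughter inherits everything.
focus_daughter = np.where(rng.random(n_divisions) < 0.5, n_proteases, 0)

# Scenario 2: freely diffusing complexes split binomially between daughters.
binomial_daughter = rng.binomial(n_proteases, 0.5, size=n_divisions)

for name, daughter in [("single focus", focus_daughter), ("binomial", binomial_daughter)]:
    other = n_proteases - daughter
    rel_diff = np.abs(daughter - other) / n_proteases
    print(f"{name}: average |difference| between daughters = {rel_diff.mean():.2f} of the total")

With a single focus the daughters always differ by the entire complement; with binomial partitioning of a couple of hundred complexes they typically differ by only a few percent.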

The first step was to ask what happens to the proteolytic focus at cell division.  Landgraf et al. made movies of cells carrying fusions of a Clp family member, ClpP, with two different fluorescent proteins, Venus and superfolder GFP.  In each case they saw a single focus of fluorescence, and when the cell divided the whole fluorescent focus went to one daughter.  After a few generations, fluorescent foci (one per cell) reappeared in the line of cells descending from the other daughter.  This strongly suggested that there should be significant variation in the level of proteolysis going on in different cells. If regulated proteolysis is an important function for the cell — which we believe it is — this seems odd, and therefore interesting.  So the authors tested this possibility directly using another fluorescent tag (mCherry)  fused to a Clp substrate, allowing them to measure the variation in the degradation of the substrate in pairs of daughter cells from a single division event.

Clp foci (green) are present in only one of the two daughter cells after cell division.

This is where things get surprising, not to say shocking.  Yes, the lines in which ClpP was labeled with Venus or superfolder GFP showed very significant daughter-to-daughter variation.  But in the wild type strain, in which the ClpP was unmodified, very little daughter-to-daughter variation was seen.  The inescapable conclusion is that the fluorescent protein tags are changing the behavior of the protein being studied.  And this is not a small change: the whole notion that ClpP self-organizes into a single localized focus, which has led for example to the idea that protein degradation needs to be compartmentalized, appears to be an artifact.

Fluorescent proteins have swept the world of cell biology. What better way could there be to study the behavior of your favorite protein than to put a brightly glowing tag on it and watch it going about its normal business?  The images you get are beautiful and compelling, and make great figures in your paper. We’ve become so comfortable with the essential benignity of fluorescent protein fusions that we barely bother to worry about whether adding an extra 238 amino acids to a protein changes its behavior.  Partly this is because we can see so much with fluorescent protein fusions that we could never see before, so there is no easy way to be sure that the behavior of the protein under study hasn’t changed. But partly, too, it’s because the standard in the field has shifted.  Fluorescent proteins are the gold standard now.  If your results from an older and apparently cruder technique, such as immunofluorescence, don’t match the results from live-cell imaging using fluorescent proteins, then the immediate suspicion is that the older technique is wrong.  And probably this is often true.  What Landgraf et al. show, however, is that in the case of the Clp family the older methods are the better methods.  Immunofluorescent staining shows many small Clp foci, probably corresponding to individual protease complexes, located throughout the cell in the wild type, but also detects the large single clump induced when fluorescent tags are added.   « Read the rest of this entry »

Paleolithic Park

March 6, 2012 § Leave a comment

Perhaps not quite as exciting as revivified dinosaurs, but still amazing: plants from the late Paleolithic era are claimed to have been regenerated from fruit tissue preserved in Siberian permafrost (Yashina et al. 2012. Regeneration of whole fertile plants from 30,000-y-old fruit tissue buried in Siberian permafrost.  PNAS doi:10.1073/pnas.1118386109).  This has very little to do with systems biology, but I was interested and thought you would be too.  Perhaps I could trace some kind of connection (did you know that our Artist-no-longer-in-Residence, Brian Knep, shared two Academy Awards for his work on the movie Jurassic Park?) but it would be forced and hardly worth it.  Better to admit to mild frivolity.

The plant material in question came, not from an insect trapped in amber, but from fruits buried in burrows of an Arctic-dwelling squirrel.  Some of these burrows contain hundreds of thousands of fruits and seeds.  I guess when you’re a squirrel living in the Arctic, you grab what’s going while the grabbing is good.  Shortly after the squirrels stored their hoards, about 30,000 years ago, the area froze, was buried deep in icy sludge, and has never since melted.  Constant subzero temperatures, with all available water immobilized as ice, are the best conditions you’re likely to find for cryopreservation.  Although the oldest plant seed previously germinated was only 2,000 years old, the authors were bold enough to make an attempt to grow plants from ancient frozen seeds of the plant Silene stenophylla, the arctic campion.

The placental tissue in a campion fruit. From Yashina et al.

In the end, what Yashina et al. say they were able to grow was not the seeds themselves but an outgrowth of the placental tissue some of the immature seeds were embedded in.  The authors speculate that part of the reason they were successful with this tissue is that it has especially high levels of organic substances such as sucrose and phenolic compounds that would be expected to offer some protection against frost damage.  The plants derived from these placental tissues grew to maturity and were even capable of breeding.  They look somewhat different from modern Silene stenophylla, and they handle their flowering arrangements differently; flowers of the modern plants are always bisexual, whereas the ancient plants produced female flowers first, followed by bisexual flowers.

Micropropagation of plants from ancient placental tissue. From Yashina et al. 2012.

Though previous claims that plants have been grown from very old seeds have been debunked, the authors say that because the burrows were buried ~20-40 meters down and were apparently undisturbed they are confident that their samples were not contaminated with modern seeds.  They also performed direct radiocarbon dating on their samples.  And the plants that resulted were visibly different from their modern counterparts.  It’ll be fascinating to see the DNA sequence; I’m sure it’s on its way.

Sadly, the last author of the paper, Dr. David Gilichinsky, died just 2 days before the paper was published.

Yashina, S., Gubin, S., Maksimovich, S., Yashina, A., Gakhova, E., & Gilichinsky, D. (2012). Regeneration of whole fertile plants from 30,000-y-old fruit tissue buried in Siberian permafrost. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1118386109

The (model) elephant in the room

March 5, 2012 § 5 Comments

Banksy's Elephant In The Room

Jeremy Gunawardena recently wrote a very nice minireview about the lessons of the Michaelis-Menten equation for model-building (also available here).  Michaelis-Menten is an equation with many lessons for modern systems biologists (as I’ve  discussed before) and is so deeply ingrained in biochemistry that I am sure that most people who learn about it regard it as simply a fact of life; but instead, it is a simplified way of expressing certain facts about life, i.e. a model.  When Michaelis and Menten developed it, it was a highly theoretical construct that assumed the existence of a chimeric creature called the enzyme-substrate complex, which would not be observed until 30 years later.  Jeremy calls the enzyme-substrate complex the “elephant in the room”, and argues that what was remarkable about Michaelis and Menten’s accomplishment was not the fact that it fitted the experimental data, but that by doing so it provided evidence for something unseen.  Provocatively, he argues that the fact that MM was adopted so quickly by biologists, despite this great hole in the evidence, indicates that biology is more theoretical than physics; and that this, in turn, is because biology is more complicated than physics and needs all the help it can get.  Go and read it, and discuss.
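For reference, here is the model in its standard textbook form (the generic statement, not a quote from the review). The scheme

E + S \underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}} ES \overset{k_{\mathrm{cat}}}{\longrightarrow} E + P,

together with an assumption about the concentration of the ES complex (rapid equilibrium in Michaelis and Menten's original treatment, steady state in the later Briggs–Haldane version), gives the familiar rate law

v = \frac{V_{\max}\,[S]}{K_M + [S]},

with V_{\max} = k_{\mathrm{cat}}\,[E]_{\mathrm{T}} and, in the steady-state version, K_M = (k_{-1} + k_{\mathrm{cat}})/k_1. The unobserved ES complex isn't a detail here — the whole derivation hangs on it, which is exactly the elephant Jeremy is pointing at.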

Gunawardena J (2012). Some lessons about models from Michaelis and Menten. Molecular Biology of the Cell, 23(4), 517–519. PMID: 22337858

Learning from the enemy

March 2, 2012 § 1 Comment

Viruses are the ultimate hackers of biological systems.  Synthetic biologists might begin to catch up in a billion years or so, depending, of course, on how strong the evolutionary pressures on them are. But for now, for frighteningly elegant and complex interventions in cellular behavior, viruses are hard to beat.  And that means that when you find a virus messing with your system, you can learn a lot from how it achieves its effects.

A recent paper (Maynard et al. 2012. Competing pathways control host resistance to virus via tRNA modification and programmed ribosomal frameshifting.  Mol. Syst. Biol. 8: 567) dissects a case in point. In earlier work, this group identified some pathways in E. coli that rather unexpectedly affected the efficiency of lambda phage replication.  For example, knocking out members of the 2-thiouridine synthesis pathway inhibited replication; conversely, knocking out members of a pathway involved in making iron-sulfur clusters increased replication.  These are both pathways that use sulfur, and so it was natural to suspect that the two activities are related, though the mechanism behind each effect was unknown.

What is 2-thiouridine good for? One of its uses is to modify certain tRNAs (those that accept Lys, Glu and Gln as payloads) with a thiol group, providing a clue that something to do with translation might be involved.  In fact, thiolation of these tRNAs is important for reducing ribosomal frame-shifting.  You might think that this is an unlikely place to look for effects on the virus.  But you'd be wrong.  It turns out that many viruses, including HIV, use a lovely strategy called programmed ribosomal frameshifting to make themselves more efficient by producing two proteins from one gene.  It works like this: when the ribosome reaches a so-called "slippery sequence", it — um — slips, either backwards or forwards.  A ribosome that slips shifts into a different reading frame and misses the stop codon further along in the gene; so ribosomes that don't slip make one protein, terminating at the stop codon, while ribosomes that slip read through and make a longer one.  The ratio between the proteins is determined by the frequency of slippage, and the ratio matters because the two proteins have different functions.  In the case of lambda, the proteins made in this slippery fashion are called gpG and gpGT, and they seem to act as chaperones for the assembly of the phage's tail.
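As a toy illustration of how the slippage frequency sets the ratio (the probabilities below are invented, not measured values for lambda):

# Each ribosome either terminates at the normal stop codon (making gpG) or
# shifts frame at the slippery sequence and reads through (making gpGT).
def product_ratio(p_slip):
    gpG = 1.0 - p_slip
    gpGT = p_slip
    return gpG / gpGT

for p in (0.02, 0.04, 0.08):   # e.g. if losing tRNA thiolation roughly doubled slippage (hypothetical)
    print(f"slippage {p:.0%}: gpG/gpGT = {product_ratio(p):.0f}")

Doubling the slippage rate roughly halves the gpG:gpGT ratio — the kind of shift that could matter if tail assembly is sensitive to the proportions of the two proteins.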

« Read the rest of this entry »

If I understand it, can I build it?

February 16, 2012 § Leave a comment

It depends on your definition of "understand"… and possibly your definition of "build".  Thanks to Pam Silver, I've belatedly become aware of the CAGEN competition.  CAGEN, which somehow trips off the tongue less elegantly than iGEM (but perhaps I'm just not used to it yet), stands for Critical Assessment of Genetically Engineered Networks, and the competition sets challenges for the synthetic biology community that "if achieved, would imply that significant improvements in the state of the art have been made".  This year, the challenge is to design a circuit that provides a robust gene response: rapid expression of a fluorescent protein at a controlled level (moving quickly from 1x to 10x) upon the introduction of a chemical inducer, with minimal variation in expression between cells.  It should work both in E. coli and S. cerevisiae, be sustained over time, and have minimal temperature-dependent variation.  Specifics, including the metrics to be used, are on the Challenge page.
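Just for flavor, here's a hypothetical sketch of the kinds of summary numbers such a challenge presumably cares about — fold change, response time, cell-to-cell variation. To be clear, these are not the official CAGEN metrics (those are on the Challenge page); the function and its inputs are invented for illustration.

import numpy as np

def summarize_response(time_h, traces):
    # traces: array of shape (n_cells, n_timepoints), fluorescence after induction.
    # Hypothetical summary statistics, not the official CAGEN scoring.
    basal = traces[:, 0]
    final = traces[:, -1]
    fold_change = final.mean() / basal.mean()            # aiming for ~10x over basal
    cv_final = final.std() / final.mean()                # cell-to-cell variation at the end point
    mean_trace = traces.mean(axis=0)
    half_max = basal.mean() + 0.5 * (final.mean() - basal.mean())
    t_half = time_h[np.argmax(mean_trace >= half_max)]   # crude time to half-maximal response
    return fold_change, cv_final, t_half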

Think you have a design that will work?  You have until June 15 to submit it.

The laws of averages

February 7, 2012 § 3 Comments

The Hitchhiker's Guide to the Galaxy, that truly remarkable book, points out that since the size of the universe is infinite and the number of populated worlds is finite, the population of the universe is, on average, zero.  So although you might see people from time to time, they are most likely merely products of your imagination.  Arguing from averages is always tricky; many people in the department are fixated on the question of what happens when the average is not a good surrogate for what's happening to the individual, as for example when there are two populations behaving in distinct ways and the average captures neither behavior.  But a recent paper argues that there is quite a lot you can deduce about the physical limits to cell behavior by knowing the average behavior of the proteins that make up the cell (Dill, Ghosh and Schmit 2011.  Physical limits of cells and proteomes.  PNAS doi:10.1073/pnas.1114477108).  Actually the average alone is not enough: you need to know the distribution around the average as well.

The argument goes like this.  Because the mass of a cell is (on average, and excluding water) about 50% protein, the physical properties of the mixture of proteins that make up the proteome are likely to be important in dictating the physical properties of the cell itself.  You might think this is a rather unhelpful idea: if you need to measure the properties of individual proteins one by one and average them all together to determine the overall behavior of the proteome, then it may be easier to measure the physical properties of the cell directly.  But it turns out that many physical properties of proteins depend strongly on their length.  For example, the free energy of folding of a protein is directly correlated to the number of amino acids it’s made up of (let us, creatively, call this number N).  While the details of the structure of the protein — secondary structure, the number of hydrophobic amino acids, the number of salt bridges, etc. — may be important for individual proteins, on average these details appear to have only a minor effect.  This means that you can, in principle at least, figure out quite a lot about how a cell’s proteome responds to heat by simply knowing the relationship between N and folding free energy, and the average and distribution of N.  Which, in principle, you can get from genomic information.  Similarly, if you assume that proteins are in general globular, then the overall size of a protein depends fairly straightforwardly on N.  That means that the rate of diffusion of a protein also depends on N.  And if you know the distribution of N for a cell’s proteome, and the size of the cell, you also know something about the density of the intracellular environment.
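To make the kind of length-dependence being invoked concrete (these are generic back-of-the-envelope scalings, not the specific fits in the paper): for a roughly globular protein of N residues the volume scales with N, so the radius goes as

R \approx r_0\, N^{1/3},

and by Stokes–Einstein the diffusion coefficient goes as

D = \frac{k_B T}{6 \pi \eta R} \propto N^{-1/3},

where \eta is the viscosity of the surroundings and r_0 is just an illustrative constant. The folding free energy is handled in the same spirit: write it as a fitted function \Delta G(N, T), and then the only protein-specific input you need is N.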

So Dill et al. are suggesting, among other things, that you should be able to use sequence databases to predict the response of different cells to heat shock.  They go further than simply suggesting that it should be possible: they set out to do it.  First, they needed to figure out the relationship between N and the free energy of folding, ΔG.  Since the free energy of folding of a given protein must be dependent on temperature, T, they use T as a variable as well.  They use literature measurements of ΔG for 116 proteins to create two different approximations for the ΔG/N/T relationship, one for proteins from mesophiles (those of us who like to live at moderate temperatures) and the other for proteins from thermophiles (those who like to think they're hot, and live at 45°C or above).

Having done this, all we need to know is N to be able to determine ΔG for any given temperature.  Using the mean and variance of protein chain lengths in the organism's proteome, predicted from genome sequence information, you can get an approximate distribution of N for the whole proteome too.  By putting the two equations together (mesophile ΔG/N equation with N distributions from mesophiles, and thermophile ΔG/N equation with N distributions from thermophiles, naturally), Dill et al. can then produce an estimate for the distribution of stability of proteins in a given proteome.
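Here's a minimal sketch of that pipeline, just to show the flow from a chain-length distribution to a stability distribution and a "marginally stable" fraction. The ΔG(N, T) function below is a made-up placeholder with hypothetical coefficients, not the fitted mesophile equation from the paper, and the chain-length distribution is invented too.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical chain-length distribution for a mesophile proteome
# (in the paper, the mean and variance come from genome sequence data).
N = rng.gamma(shape=8, scale=40, size=4000).astype(int)   # mean ~320 residues

def delta_G(N, T_celsius):
    # Placeholder for the fitted mesophile dG(N, T) relationship, in kcal/mol.
    # Coefficients are made up; only the rough shape (roughly linear in N,
    # falling off as temperature rises) mimics the description in the paper.
    return 0.02 * N * (1.0 - ((T_celsius - 20.0) / 40.0) ** 2)

for T in (37, 41):
    dG = delta_G(N, T)
    frac_marginal = np.mean(dG < 3.0)   # "marginally stable" threshold used below
    print(f"T = {T} C: mean dG = {dG.mean():.1f} kcal/mol, "
          f"fraction below 3 kcal/mol = {frac_marginal:.0%}")

The real version differs mainly in where ΔG(N, T) comes from: a fit to measured stabilities of 116 proteins rather than a guess.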

This is already interesting because the thermophile protein stability equation is different from the mesophile equation — so ΔG depends not only on N but also on the class of organism.  And Dill et al. note that it isn’t entirely clear where the difference comes from.  Nevertheless, within each class of organisms there seems to be a reasonable linear relationship between ΔG and N.  So let’s just assume that all mesophile proteins behave the same way as each other, and take a look at a plot of the number of proteins versus stability in the genome of the biologist’s favorite organism, E. coli, at 37°C.  It has a pronounced skew and looks like this:

Figure 2 from Dill et al.

What this shows is that although the average protein is predicted to be fairly stable at 37°C (with a free energy of folding of about 6.8 kcal/mol), there are a few hundred proteins that are predicted to be only marginally stable (free energy of folding < 3 kcal/mol). So for E. coli, even a small change in temperature — say 4°C — would be predicted to destabilize about 16% of the proteins in the proteome.  Which would be bad; misfolded proteins are a problem, as we’ve discussed before. But just how bad would it be?

« Read the rest of this entry »

Repost: What are graduate school interviews like?

January 13, 2012 § Leave a comment

Lots of people found this post from Jue Wang useful last year, so here it is again.  Comments welcome!

Jue Wang writes:

To help 2011 graduate school interviewees, I collected some advice from current Systems Biology graduate students.  Here are their thoughts:

“Relax, have fun, don’t try too hard to impress people, and don’t get drunk at Marc’s house!” [Marc is our Chair of Department]

“Feel free to interrupt a professor’s monologue with questions. Dress warmly. Don’t be scared if your first interviewer makes you perform calculations on the fly :p”

“Have a good ‘elevator pitch’ for any research you’ve done, so you can explain it clearly and quickly when asked. You won’t be asked to recall your entire thesis, coursework, etc.”

“Don’t get thrown off if it seems like the interviewer is quizzing you. They just want to see you talk through some science with them, and don’t expect you to know all the answers. Just be honest about what you don’t know, and they’ll probably help you along.”

Update: another tidbit someone sent me today: “You should interpret the interview invitation as ‘We love you on paper, we’d like to know whether we like you in person. Hey, we hope you like us, too!'”

To give the recruits a little context to these remarks, I’ll just add a basic description of the interview weekend as my murky, pre-grad-school memory has it: a sweet hotel room, a few hours of what seemed like shooting the breeze with faculty members, meeting lots of people who inexplicably seem really excited to talk to you, and free food and drink everywhere. Did I mention the free food and drink everywhere??

I realize some might feel a little nervous about the process, especially if you take the "interview" part to mean something akin to the med school or Wall Street job interview process. In reality it's much more laid back. I was fortunate/foolish enough not to have given this much thought, but my first interview was with a faculty member who was much taller than I realized, and somehow I found this very intimidating. It didn't help my nerves that someone told me he was in the middle of writing a grant proposal and hadn't slept for 3 days, and he was slightly late so I had a good 10 minutes to just sit there and hope that if I said something stupid he'd be too tired to notice. As it turns out, neither my nervousness nor any evidence of his sleep deprivation lasted more than 2 minutes into the conversation, and we had a fascinating, sprawling discussion of his research and the things we were both interested in (with a slight bias toward the former). This is basically how most interviews go.

Some of my classmates had interviews in which they were asked specific scientific questions. This can be nerve-wracking, as at least a few of the G1’s can attest, but like my classmates mention above, it’s best to just take it in stride and talk through your answers. I’ve been told to avoid trying to seem like I know something I don’t, which is confusing advice because I can’t imagine many recruits who’d be consciously trying to mislead people in their interviews. My guess is that it can seem this way if you are not engaged in a conversation and just nod like a zombie, or if you gratuitously mention ideas and jargon out of fear. Another reason to sleep well the night before and have some kind of ‘elevator pitch’ for your past and future interests, so that you can speak plainly and concisely—and therefore seem genuinely interested—about science.

One other thing I remember is being impressed and therefore slightly intimidated by the other recruits. Maybe it’s a function of how diverse — and accomplished — Harvard SysBio’s recruitment base is, but it seemed like everyone was a star at something. Also, some recruits probably went to school in the area or even worked in the department during undergrad, so they sound like they’re already in grad school. These things are a net win though, because you end up having lots of great conversations (and some insider knowledge on the best places to get pizza and whatnot).

Finally I’ll reiterate the most important advice, at least for Boston, which is to dress warmly — the East coast is very cold in late January, and the weather is more unforgiving than any of your interviewers will be. Fortunately there will be plenty of chances to warm up with beer, food, and new friends during the whole experience, so enjoy it!
