March 2, 2012
Viruses are the ultimate hackers of biological systems. Synthetic biologists might begin to catch up in a billion years or so, depending, of course, on how strong the evolutionary pressures on them are. But for now, for frighteningly elegant and complex interventions in cellular behavior, viruses are hard to beat. And that means that when you find a virus messing with your system, you can learn a lot from how it achieves its effects.
A recent paper (Maynard et al. 2012. Competing pathways control host resistance to virus via tRNA modification and programmed ribosomal frameshifting. Mol. Syst. Biol. 8:567) dissects a case in point. In earlier work, this group identified some pathways in E. coli that rather unexpectedly affected the efficiency of lambda phage replication. For example, knocking out members of the 2-thiouridine synthesis pathway inhibited replication; conversely, knocking out members of a pathway involved in making iron-sulfur clusters increased replication. These are both pathways that use sulfur, and so it was natural to suspect that the two activities are related, though the mechanism of each effect was unknown.
What is 2-thiouridine good for? One of its uses is to modify certain tRNAs (those that accept Lys, Glu and Gln as payloads) with a thiol group, providing a clue that something to do with translation might be involved. In fact, thiolation of these tRNAs is important for reducing ribosomal frame-shifting. You might think that this is an unlikely place to look for effects on the virus. But you’d be wrong. It turns out that many viruses, including HIV, use a lovely strategy called programmed ribosomal frameshifting to make themselves more efficient by producing two proteins from one gene. It works like this: when the ribosome reaches a so-called “slippery sequence”, it — um — slips, either backwards or forwards. When the ribosome slips, it then misses a stop codon further along in the gene, so one protein is made that stops at the stop codon and the other is made as a result of read-through. The ratio between the proteins is determined by the frequency of slippage, and the ratio matters because the two proteins have different functions. In the case of lambda, the proteins made in this slippery fashion are called gpG and gpGT, and they seem to act as chaperones for the assembly of the phage’s tail.
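The frame-shift mechanism is easy to see with a toy example. Here is a short Python sketch of a −1 slip: the mRNA sequence, the position of the slippery run, and the slip point are all invented for illustration (this is not the real lambda G gene), but the reading-frame logic is the general one.

```python
STOP = {"UAA", "UAG", "UGA"}

def codons_until_stop(mrna, start):
    """Read codons from `start`, stopping after the first stop codon."""
    out = []
    for i in range(start, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        out.append(codon)
        if codon in STOP:
            break
    return out

# Toy mRNA with a run of Us as the "slippery sequence".
mrna = "AUGGCUUUUUUAUAAGGCACUGAUUGA"

# With no slip, translation terminates at the in-frame UAA:
normal = codons_until_stop(mrna, 0)
# ['AUG', 'GCU', 'UUU', 'UUA', 'UAA']

# If the ribosome slips back one nucleotide within the slippery run
# (re-reading position 8 after the third codon), it resumes in the -1
# frame; the old stop codon is no longer in frame, so translation reads
# through to a later stop, giving a longer protein:
shifted = codons_until_stop(mrna, 0)[:3] + codons_until_stop(mrna, 8)
# ['AUG', 'GCU', 'UUU', 'UUU', 'AUA', 'AGG', 'CAC', 'UGA']
```

The fraction of ribosomes that slip sets the ratio of the short product to the long one, which is exactly the quantity that tRNA thiolation tunes.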
August 11, 2011
Our immune system has quite a problem on its hands: it needs to notice and fight off invaders of all kinds, including bacteria and viruses that evolve extremely rapidly relative to us. There are two obvious strategies for dealing with such attackers: the first is to look for a hard-to-change tag that the attacker usually carries, rather as one army recognizes and attacks the uniform of another. This is the strategy a neutrophil uses in recognizing the formylated peptides produced by bacteria. The second is rather like the method used by the inhabitants of an isolated village when a visitor from the big city arrives: a local person knows everyone who “belongs”, and if you’re not recognized as belonging then you must be foreign. The immune system uses both strategies: the innate immune system, generally speaking, recognizes tags, while the adaptive immune system takes the “you’re not from around here” approach. To tell the difference between locals and invaders, the adaptive immune system uses a method that once upon a time seemed counterintuitive, but perhaps will not seem so to today’s readers. The method depends on exploration and selection: first, the cells of the adaptive immune system produce an array of recognition proteins with the widest possible range of reactivities, each of which could be helpful, or useless, or harmful. An individual cell expresses only one of these recognition proteins. Next, each recognition protein is tested for whether it reacts to “self”. If it does, the cell expressing it is killed. What’s left after this rather brutal procedure is a set of cells expressing recognition proteins that could react to almost anything; the only thing they won’t do, at least in theory, is attack the person or animal producing them. And thus, if you’re “not from around here”, you run into a rather violent reception, while if you’re a local you’re benignly ignored.
The recognition proteins of the adaptive immune system come in two flavors, T cell receptors and antibodies. Both use the same principle to create a wide range of recognition proteins: combinatorial gene rearrangement. The business end of an antibody, the part that has the potential to bind to a specific target (if the correct target comes along), is built of three separate segments, the V (variable), D (diversity), and J (joining) segments. There are many copies of each of these segments in the genome, and to produce an antibody the B cell carves up its DNA and rearranges it so that just one V segment sits next to one D segment and one J segment. Each antibody is made up of two copies each of two chains of different sizes, called “heavy” and “light”, and each chain uses its own set of gene segments (VDJ for heavy, VJ for light). For the human heavy chain there are between 55 and 65 Vs, 27 Ds and 6 Js, i.e. over 10,000 possible combinations. On top of this, there are special mechanisms that randomly chew up and add back nucleotides at the junctions between each segment (adding junctional diversity), and that encourage specific regions of the immunoglobulin gene to mutate rapidly (called somatic hypermutation). When a particular antibody turns out to be useful, the cell producing it is stimulated to divide (this is when somatic hypermutation happens), and the sequence of the gene producing the antibody becomes more strongly represented in the population.
One implication of all this is that you should be able to monitor immune responses by sequencing the antibody genes found in the cells that circulate in the blood. In fact, sequencing might be the only way to get a comprehensive, detailed picture of an antibody response. But there’s some question about whether even sequencing can look deeply enough, at least given current technology. A recent paper (Arnaout et al. 2011. High-resolution description of antibody heavy-chain repertoires in humans. PLoS ONE 6:e22365) now tackles this question directly. And the results are rather hopeful.
July 15, 2011
I talk a lot about drug-resistant bacteria and why we should worry about their inexorable rise — the most recent example of which is chronicled here. Now I want to offer you another thing to worry about: drug-resistant fungi. It’s the same general problem — when you use a drug that inhibits the growth of some organism, and you use it a lot, that organism has a real incentive to evolve around the drug. The special worry with fungi, though, is that we never had a huge array of useful drugs in the first place. The best broad-spectrum antifungals are the azole derivatives, such as fluconazole; these inhibit an essential enzyme that is the product of the gene ERG11. But — same old story — they’re gradually losing their effectiveness.
Why are effective, broad-spectrum antifungals so rare? The problem is not so much that fungi are hard to kill, it’s that they’re hard to kill without killing us as well. Fungi are eukaryotes, and the pathways they use to thrive and survive are awfully similar to the analogous pathways in us. Even fluconazole suffers from this problem: the ERG11 gene encodes a cytochrome P450 enzyme, and we humans have many similar enzymes; cross-reaction of fluconazole with the human enzymes causes significant toxicity. The fact that fungi and humans have so much in common has made it hard to identify single agents that reliably kill (or inhibit the growth of) one, while sparing the other.
A new study (Spitzer et al. 2011. Cross-species discovery of syncretic drug combinations that potentiate the antifungal fluconazole. Mol. Syst. Biol. 7:499) now offers hope that combinations of drugs will do better. Spitzer et al. started with the observation that although there are only 1100 genes in the yeast Saccharomyces cerevisiae (biologists’ favorite model fungus) that are essential under normal lab conditions, many more genes can become essential under other conditions. In particular, if you knock out one non-essential gene, you often find that you can no longer knock out certain other non-essential genes without killing the yeast. This is called synthetic lethality. Roy Kishony likes to use the following analogy to explain it: suppose you put an eyepatch over your right eye. This may make you look as if you’re auditioning for Pirates of the Caribbean 5 (or are we up to 6 now?), but it doesn’t completely prevent you from seeing. The same is true if you put the patch over your left eye. It’s only if you wear two patches, one over each eye, that you get the “synthetic lethal” effect on your vision. Similarly, knocking out two non-essential genes — or, in this case, inhibiting their products with drugs — may have a lethal effect even though targeting just one of the two genes doesn’t do much.
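The eyepatch logic can be stated in a few lines of Python. The gene names and the viability rule below are invented for illustration; this is a sketch of the concept, not a model of any real screen.

```python
# Toy model of synthetic lethality (hypothetical gene names).
ESSENTIAL = {"geneA"}  # knocking this out alone is lethal
SYNTHETIC_LETHAL = {frozenset({"geneB", "geneC"})}  # lethal only together

def viable(knockouts):
    """A strain dies if it loses an essential gene, or loses every
    member of a synthetic-lethal pair."""
    ko = set(knockouts)
    if ko & ESSENTIAL:
        return False
    return not any(pair <= ko for pair in SYNTHETIC_LETHAL)

print(viable({"geneB"}))           # True  - one eyepatch
print(viable({"geneC"}))           # True  - the other eyepatch
print(viable({"geneB", "geneC"}))  # False - both eyes covered
```

Replacing each “knockout” with a drug that inhibits the corresponding gene product gives the combination-screening idea that Spitzer et al. exploit.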
To expand the universe of useful antifungal drugs, Spitzer et al. wanted to look for synthetic lethal combinations that involve drugs not currently used as antifungals. They took a library of bioactive drugs, including a number of clinically approved drugs that are no longer covered by patents, and screened them in combination with fluconazole at a concentration where fluconazole isn’t effective on its own. They used four different fungi: our old friend S. cerevisiae, and the human pathogens Candida albicans, Cryptococcus neoformans, and Cryptococcus gattii. Almost 150 compounds — over 10% of the library — showed activity against one or more of the fungi, spanning some surprising drug classes, including antidepressants, antibiotics, and antipsychotics. (I guess even fungi get depressed.) These drugs were not active against fungi on their own, but in combination with fluconazole they had an effect. You could call them conditional antifungals.
June 10, 2011
We’ve talked before about microbes playing dead to avoid the effects of antibiotics. A recent paper (Baek et al. 2011. Metabolic regulation of mycobacterial growth and antibiotic sensitivity. PLoS Biol. 9:e1001065) identifies a new mechanism that Mycobacterium tuberculosis uses for switching into a low-metabolism, drug-tolerant state.
M. tuberculosis, as you undoubtedly know, is the bacterium that causes tuberculosis (TB). It’s a nasty pathogen, made worse by the fact that it’s really hard to kill. Treating tuberculosis involves a 6-month-long course of antibiotics — anything shorter, and not only does the infection come back, it comes back drug-resistant. Multidrug-resistant TB (MDR-TB) is an increasingly nasty public health problem. People just aren’t very good at taking pills for 6 months, without fail, even after they’ve started feeling better.
Why does M. tuberculosis take so long to kill? The way these bacteria survive is rather bold: they live inside macrophages, the cells that normally help get rid of bacteria, and indeed inside the vesicles (phagosomes) that are intended to chew them up. Here they grow, but very slowly: they divide maybe once every 100 hours. Many studies have shown that antibiotics generally do better at killing bacteria that are growing rapidly. Maybe this slow growth has something to do with the poor killing.
It’s a stressful environment inside a phagosome. If you’re a bacterium, this is an environment that’s designed to kill you. There’s not much oxygen, the pH is low, and important nutrients, including iron, are lacking. Simply restricting the oxygen supply in vitro causes the bacterium to become slow-growing and antibiotic-tolerant. Baek et al. used these hypoxic bacteria in a transposon-based genetic screen for mutants that don’t slow down their growth when oxygen is limited, to look for genes involved in the pathway that controls the growth shutdown. The mutants they find are… in genes to do with the production of fat. Triacylglyceride, to be precise. Huh?
April 25, 2011
If you’re interested in looking at how the migration of a microbe into new populations can affect its evolution, the ideal setting for your study is probably a situation where an infected population meets a population that has never been infected before. It helps if the contact between the two populations is limited, so that you can trace the infection more precisely; and it’s even better if the infection happens in neighboring populations at different times. All of these conditions applied in Canada from the early 18th century to the mid 19th century, when Mycobacterium tuberculosis was spread from French settlers to indigenous Canadians as a result of contacts made while trading furs. The resulting patterns of M. tuberculosis dispersal have now been described in a recent paper (Pepperell et al. 2011. Dispersal of Mycobacterium tuberculosis via the Canadian fur trade. Proc. Natl. Acad. Sci. USA, doi:10.1073/pnas.1016708108).
Much of Canada was completely isolated in the 18th century. The European settlers initially didn’t penetrate very far beyond the Atlantic seaboard. It was the fur trade that created the impetus for developing a vast network of transportation routes, largely based on canoes, that connected the interior with the growing settlements at the edges. The trade also offered career options for fur company employees: guides, translators, navigators and negotiators, and especially the voyageurs who traveled deep into the mysterious interior of Canada to bring back the furs. [The lives they lived look pretty miserable to us now: 14-16 hour days of constant paddling, occasionally interrupted by a portage, in which they would carry at least 180 pounds of furs — repeatedly — across rugged terrain. They often suffered hernias, and they ate mostly pemmican (dried bison meat), but they sang a lot, and so are now considered deeply romantic figures.]