To clot or not: the trigger for coagulation
October 21, 2010
Following up on the papers from the Alber lab I wrote about a few weeks ago, John Higgins pointed out this paper (Panteleev et al. 2010. Task-oriented modular decomposition of biological networks: trigger mechanism in blood coagulation. Biophys. J. 98 1751-1761), which also aims to use modeling to probe the mechanisms of clot formation. There’s an interesting contrast here between the different approaches used by the Alber lab and the authors of this paper. The Alber group embeds their model of the biochemical events of the coagulation cascade in three layers of models of the physics of clot formation: the change in behavior of platelets as they become activated, the shear force of blood flow, and the interactions between the clot and the flowing blood; this allows them to trace the effects of alterations in biochemical events all the way to the predicted behavior of the overall clot. In this paper, Panteleev et al. focus just on the cascade itself, and ask whether it can be broken down into different sub-parts with distinguishable tasks. This is a test of what could be a general divide-and-conquer strategy: identify subtasks, identify the components involved in each subtask, and determine which components are changing rapidly and need to be modeled explicitly, and which are changing slowly and can be approximated as “constant” (a.k.a. “separation of timescales”). If you can do all of this you will end up with a simple(r) model of the key events that drive the behavior you’re interested in, and it might even be simple enough to make you feel that you have an intuitive understanding of what’s going on.
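The separation-of-timescales idea is easy to sketch in code. Here is a toy two-variable system, nothing to do with clotting specifically and with all parameters invented for illustration: a fast species x chases a slow species y, so the quasi-steady-state approximation x ≈ y lets you drop an equation with little loss of accuracy.

```python
# Toy illustration of timescale separation (not the clotting model itself).
# x is a "fast" species that relaxes toward y on a short timescale eps;
# y is "slow". The quasi-steady-state approximation replaces x by its
# instantaneous steady value (x ~ y), halving the number of equations.

def full_step(x, y, dt, eps=0.01):
    """One Euler step of the full two-variable system."""
    dx = (y - x) / eps      # fast: x relaxes toward y on timescale eps
    dy = -0.5 * x           # slow: y is consumed by the fast species
    return x + dt * dx, y + dt * dy

def reduced_step(y, dt):
    """One Euler step after eliminating x via the approximation x ~ y."""
    return y + dt * (-0.5 * y)

x, y = 0.0, 1.0   # full model: x starts far from its quasi-steady value
yr = 1.0          # reduced model
dt = 0.001
for _ in range(2000):          # integrate out to t = 2
    x, y = full_step(x, y, dt)
    yr = reduced_step(yr, dt)
# After a brief transient, x locks onto y, and the one-equation
# reduced model tracks the slow variable of the full model closely.
```

The payoff is exactly the one described above: the reduced system is small enough to reason about directly, while the fast components are still "there" implicitly, folded into the quasi-steady-state assumption.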
In setting the stage for their approach, Panteleev et al. point out that the mapping between the biological reactions in the coagulation cascade and the task the cascade performs is not straightforward. Only two reactions in the network have obvious functions: binding of factor VIIa to tissue factor (TF) is responsible for recognizing the site of damage, and cleavage of fibrinogen to fibrin causes blood to form a gel, blocking the hole resulting from damage and preventing excessive leakage of precious bodily fluids. Why do we need the dozen or so factors that are involved in the whole cascade? That's a bit of a straw man, of course; the coagulation cascade is much more than a simple on/off switch. The clotting community, if that's what they call themselves, has recognized at least four subtasks this network needs to accomplish:
1. deciding whether a site of damage is serious enough to require a clot, based on the size and shape of the damaged region and the TF concentration;
2. propagating in space to generate a solid three-dimensional clot that will really plug the hole, not just a thin film of fibrin that could break open at any moment — and send lumps of fibrin downstream to cause problems elsewhere;
3. ensuring that the clot is localized to the damage site;
4. preventing clot-initiating factors from spreading and initiating additional clots.
Are specific sections of the cascade especially responsible for these tasks? Panteleev et al. chose to focus on the activation of clotting in a homogeneous system, ignoring spatial heterogeneity of the type that Alber and colleagues modeled. (They acknowledge that this is only an approximation of what happens in vivo, but argue that it’s not a bad place to start, and is in any case relevant to the in vitro clotting assays used to monitor the efficacy of anticoagulants in the clinic.) The question they set out to address is whether clotting has an activation threshold. In other words, if the damage is small enough, can the clotting system ignore it? Experimental work suggests that this may be so, but it’s not clear what kind of threshold it is, or where it comes from biochemically. Another observation they want to explain is that clotting, once initiated, is explosive: you go from no clot to 100% clot in just a couple of minutes.
To make a long story short, Panteleev et al. go through all the steps outlined above to determine which parts of the cascade are key for clot initiation. They start with a full model of the cascade, which they had previously published, and which uses parameters they determined from (their own) experimental observations such as videos of growing clots, as well as literature values. They systematically go through the model, setting the concentration and/or reaction rate of each component to zero in turn to determine how much effect this has on the kinetics of clotting. They use (their own) experimental results as well as a comparison with the full model to determine which perturbations significantly change the time to clot formation. Sometimes context matters: different factors are important at low TF (the more relevant condition when studying activation) versus high TF. Many of the components of the network reach levels that are — relatively speaking — constant rather quickly, and can be treated as being at quasi steady state. This allows them to reduce the key equations in the model from ~30 down to about 5 without much change in model behavior at early time points. Roughly speaking, the new model of what's important in clot triggering looks like this (note that just because a factor isn't explicit in the model doesn't mean it's not important; all it means is that it's not changing fast enough to matter for this task):
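The knockout scan itself is simple to sketch. Below is a toy surrogate: the factor names are real, but the `clot_time` function and its coefficients are invented purely for illustration (the actual test runs the full ~30-equation model). The logic is the one described above: zero out one component at a time and flag those whose removal changes the clotting time beyond some fold-change cutoff.

```python
# Hypothetical sketch of the "knock out one component at a time" scan.
# clot_time is an invented stand-in for integrating the full ODE model;
# its coefficients are arbitrary and chosen only to make factor V dominant.

baseline = {"factorV": 1.0, "factorVIII": 1.0, "TFPI": 1.0, "factorXI": 1.0}

def clot_time(p):
    """Toy surrogate: the lag time to clotting rises as factor levels fall."""
    return 1.0 / (0.1 + 0.8 * p["factorV"]
                  + 0.05 * p["factorVIII"]
                  + 0.05 * p["factorXI"])

t0 = clot_time(baseline)
effects = {}
for name in baseline:
    knockout = dict(baseline, **{name: 0.0})   # zero out one component
    effects[name] = clot_time(knockout) / t0   # fold-change in lag time

# Components whose removal shifts the lag more than 2-fold are treated as
# fast/essential for this task; the rest can be frozen at quasi steady state.
essential = [n for n, fold in effects.items() if fold > 2.0]
```

In this invented surrogate only factor V survives the cutoff, which is of course rigged; the point is the procedure, not the numbers.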
A model with fewer equations is not automatically easier to interpret. But in this case removing clutter does appear to bring clarity. Panteleev et al. compare the phase space diagram for the full model with that for the reduced model, and — although the overall behavior is similar — they can see important features in the reduced model that were obscured by the complexity of the full model. The most interesting of these is a saddle point that prevents runaway thrombin activation (at least in the model). But the key observation is that factor V activation is the only positive feedback on fibrin generation that is fast enough to make a difference to the kinetics of initial clot formation. And yes, there is a threshold for activation: there is no clotting at TF < 0.03 pM, and clotting is maximal at TF > 0.04 pM. This threshold is set by the factor V feedback.
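The threshold logic can be caricatured with a single equation (emphatically not Panteleev et al.'s actual model; the parameters here are invented so that the threshold happens to land between 0.03 and 0.04 pM). Thrombin T is produced at a TF-proportional basal rate plus a quadratic positive-feedback term standing in for factor V, and is cleared linearly: dT/dt = TF + k·T² − d·T. The low fixed point and the saddle that guards it annihilate when TF exceeds d²/(4k), at which point production outruns clearance everywhere and thrombin explodes.

```python
# A deliberately stripped-down caricature of the clotting trigger, with
# invented parameters: dT/dt = tf + k*T^2 - d*T. For tf below d^2/(4k)
# (= 1/28 ~ 0.0357 here) a stable low state coexists with a saddle, and a
# trajectory starting at T = 0 settles harmlessly; above it, T runs away.

def clots(tf, k=7.0, d=1.0, dt=0.001, t_end=100.0, blowup=50.0):
    """Integrate from T = 0 and report whether thrombin 'explodes'."""
    T = 0.0
    for _ in range(int(t_end / dt)):
        T += dt * (tf + k * T * T - d * T)   # basal + feedback - clearance
        if T > blowup:                        # past the saddle: runaway
            return True
    return False
```

Even this cartoon reproduces the qualitative picture in the paper's phase-space analysis: a sharp go/no-go boundary in TF, with the saddle acting as the gatekeeper between "ignore it" and explosive activation.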
To test this novel prediction, they look at clotting kinetics in normal and factor V-deficient plasma by adding in defined amounts of TF. Their experiments broadly track what the model predicts — there's a sharp threshold for TF concentration in normal plasma, whereas in factor V-deficient plasma you get a gradual increase in clotting instead of a sharp transition. The factor V-deficient plasma doesn't behave quite as expected, though, which leads them to wonder whether it really is completely deficient in factor V (even 1% activity would explain the differences they see) or whether another positive feedback is confounding their results. But it's clear that factor V is very important for the clotting threshold.
It’s interesting that there is no bistability in the system as they modeled it, in contrast to previous models. Instead there is a very sharp sigmoidal curve with a Hill coefficient of about 4. The difference arises because Panteleev et al. take into account evidence that the activators of factor X are inhibited by tissue factor pathway inhibitor. On the other hand, activated factor V accumulates in this model. The consequence is that there is only one true steady state in their model, at zero activation. Maybe this is important for preventing runaway activation — which would be very bad….
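To get a feel for how sharp a Hill coefficient of 4 actually is, it helps to compute the dose range needed to move a generic Hill curve from 10% to 90% response (a textbook calculation, not something taken from the paper):

```python
# Generic Hill dose-response: K is the half-maximal dose, n the coefficient.
def hill(x, K=1.0, n=4):
    return x**n / (K**n + x**n)

# Solving hill(x) = f for x gives x = K * (f / (1 - f))**(1/n), so the
# 10%-to-90% dose range spans a factor of 81**(1/n).
def fold_10_to_90(n):
    return (0.9 / 0.1) ** (1 / n) / (0.1 / 0.9) ** (1 / n)

fold_n4 = fold_10_to_90(4)   # cooperative: about a 3-fold range of dose
fold_n1 = fold_10_to_90(1)   # non-cooperative: an 81-fold range
```

So with n ≈ 4 the system flips from essentially off to essentially on over roughly a 3-fold change in TF, which is switch-like for practical purposes even without true bistability.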
The authors speculate that the role of the factor V-dependent triggering of explosive clotting is to make sure that clots are properly solid even in the presence of low TF, so that they can’t drift off downstream to cause pulmonary embolism and other such unpleasant things. This might help rationalize the observation that factor V deficiency has been reported in a few studies to be associated with thromboemboli — which is not what you would expect from a disorder that causes reduced clotting. It’s not obvious how you would test this idea experimentally, however; watch this space for the next installment.
Is this a general strategy for making sense of biological networks? It’s certainly true that it’s hard to understand why coagulation is so complex, and what the various reactions in the network evolved to do. Perhaps this new form of reductionism — studying what’s essential to accomplish one isolated task — can help make progress where the old version of reductionism failed. On the other hand, while modeling seems to be our only hope of making sense of pathways that are this complex, it’s clear that we are a long way from where we want to be. Why are some individuals prone to deep vein thromboses and “Economy Class Syndrome” while others aren’t? Which genetic/environmental/historical factors are important in clinical problems that are caused by inappropriate clotting? Are there better ways to prevent inappropriate clotting in at-risk patients than the blunt instrument of prescribing daily aspirins, or other anticoagulants? We’d all love to know, and maybe the model discussed here will provide insight — but right now, it’s just a model, and the predictions from it are hard to test rigorously. Before we can use models to help plan pharmacological interventions, we need ways to increase our confidence that the models we’re using are faithful to the in vivo events. That looks to me like the next great frontier in systems biology.
Panteleev MA, Balandina AN, Lipets EN, Ovanesov MV, & Ataullakhanov FI (2010). Task-oriented modular decomposition of biological networks: trigger mechanism in blood coagulation. Biophysical Journal, 98(9), 1751-1761. PMID: 20441738