Thursday, October 16, 2008

I'd better just change the subject

I should qualify this by saying that my field is physics, not economics, but I think there's a strong argument to be made that the financial crisis is not at all an indictment of the free market. My understanding is that there are three primary culprits: 1) a systematic underestimation of credit risk, 2) excessive subprime lending, and 3) mortgage derivatives built on top of that subprime lending (credit default swaps, and CDOs that apparently no one knew how to price).

(1) seems to me the underlying factor. One interpretation is that bankers simply used the wrong probability distribution to estimate risk: a normal instead of a Lorentz distribution. A Gaussian decays much faster than a Lorentz function, whose tails fall off as a power law, so a Gaussian model makes extreme events look far rarer than they actually are. Alright, but why? One answer, I guess, is that bankers (to paraphrase the unlamented Rumsfeld) didn't know what they didn't know. If you're modeling a chaotic system containing lots of recursive feedback loops, and things seem to be following a roughly bell-shaped curve to start with, shouldn't you examine the function's asymptotic behavior carefully to make sure it's actually a bell curve, and not, for example, a Lorentz function, which has a completely different scaling form? Another, possibly more convincing, answer is that bankers assumed that even though their mathematical model was incorrect, it wouldn't matter, because they could use (in my opinion, absurdly complicated) derivatives to push the risk off onto the big investment banks, by way of Fannie Mae and Freddie Mac. These are, of course, government-sponsored enterprises, and they had, as I understand it, fairly explicit instructions from Congress to encourage subprime lending. At least some of the complex credit derivatives, and the special legal classifications built around them, were created by Fannie and Freddie as well.
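
To make the fat-tail point concrete, here's a toy calculation (mine, not anything from the financial literature) comparing how often "extreme" events occur under a Gaussian versus a Cauchy/Lorentz distribution of comparable width:

```python
# Toy illustration: how much more likely an "n-sigma" event is under a
# fat-tailed Cauchy/Lorentz distribution than under a standard Gaussian.
import math

def gaussian_tail(x):
    # P(X > x) for a standard normal: 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def cauchy_tail(x):
    # P(X > x) for a standard Cauchy (Lorentzian): 1/2 - arctan(x)/pi
    return 0.5 - math.atan(x) / math.pi

for x in (2, 5, 10):
    g, c = gaussian_tail(x), cauchy_tail(x)
    print(f"x = {x:>2}: Gaussian {g:.2e}, Cauchy {c:.2e}, ratio {c/g:.1e}")
```

At five widths out, the Lorentzian event is already hundreds of thousands of times more likely than the Gaussian one. A risk model built on the wrong distribution isn't slightly wrong in the tails; it's wrong by orders of magnitude.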

So, as a political football (and it's nothing if not that), there's plenty of blame to go around. From what I've read, there was plenty of bona fide stupidity involved. (Anecdotally, the guys I knew in college who went into banking didn't seem like the brightest folks around, but they were geniuses compared to the people who wanted to go into politics.) My understanding is that both John McCain and Barack Obama were complicit, although they're both dissembling ferociously and scrambling for the moral high ground. Not having Rick Davis blathering on his behalf has probably helped Obama in this regard. Congressional Democrats have firmly exonerated themselves, which makes no sense, but the Republicans see the whole economic mess as such electoral poison (economic issues tend to favor the Democrats, etc.) that they're not making an issue of it. This is reasonable short-term (read: electoral) politics but disastrous long-term politics, to say nothing of policy. The worst part is that the argument is so simple: the Republicans could really drive home the point that 1) the housing and financial markets were actually heavily regulated, 2) these regulations were a big part of the problem, and 3) since it was therefore nothing like a free market, the crisis isn't an indictment of free market economics.

None of this tells us whether the bailout is good or bad, and I really have no idea what ought to be done at this point. But I think the near-universal consensus that this was caused by an unregulated market run amok is wrong, and while I do think the bailout itself has to be very carefully regulated, I don't think a blind charge into more regulation of this or other markets is necessarily going to help. Obama's been quite vocal about McCain's history of deregulation, but unfortunately, McCain's response has been to recklessly invent schemes to out-regulate the Democrats. But hey, who needs principles when you can change the subject?

Sigh.

Further thoughts: I guess I should append to this that the genuinely unregulated markets were the secondary markets (credit default swaps and collateralized debt obligations), and it's the fact that each default had a huge number of derivatives attached to it that allowed the subprime crisis to amplify to the point where it could sink these huge investment banks. This is, of course, what everyone's focusing on, and why the Republicans are so leery about confronting the issue: the deregulation of the secondary markets was, I think, pushed through by Republicans. But the key point that often gets missed is that this only became an issue because of the heavily (but poorly) regulated mortgage market. To borrow an analogy from electronics: everyone's up in arms trying to figure out how the signal got amplified (which is important), and completely ignoring the faulty wiring that produced the signal in the first place.

Tuesday, October 14, 2008

A cure for cancer

I thought of a way to cure cancer, using gold, light, and a genetic circuit. I'll update this later; I'm going to flesh this out for a fellowship proposal!

Monday, October 13, 2008

Thought of the day

Is the life of a graduate student much different from that of a monk? I guess I don't know that much about how monks live, but I spend most of my waking hours isolated, thinking. It's possible for me to go entire days in total silence. It's sometimes jarring for me to return to normal conversation, since the thought patterns accompanying it are so different. I tend to eat sparingly and simply because I can't really afford anything better. I haven't been in a real relationship for...I don't even want to think how long. A year and a half now, I guess.

Not sure how I ought to feel about this...

Saturday, October 04, 2008

What happens when you poke a red blood cell?

First of all, I've got to ask...why call it an erythrocyte? Red blood cell is so much better, just rolls off the tongue. Almost seems like something you'd want to know more about, just based on how great the name is! Almost. But! If you couple that great name with its simplicity - and the fact that they're damn important, and red blood cell structural deficiencies are implicated in a whole host of diseases, the most famous of which is probably sickle-cell anemia - then you've got something worth looking at, I'd say.

You can use an experimental technique called optical trapping to analyze the mechanical properties of red blood cells, and this can get you force-displacement data all the way down to the piconewton level. (For reference, the force exerted by the Earth's gravity on a typical person is between 600 and 700 newtons. A piconewton is 10^-12 newtons, or one trillionth of a newton. Impressively precise information, in other words!) Optical trapping works on the principle that when laser light refracts through a high-refractive-index dielectric (in this case, a tiny silica bead), its photons change momentum, and by conservation of momentum the bead feels a net force pulling it toward the beam's focal point. Attaching these microbeads to red blood cells lets you extract how much force is required to stretch the cell a given amount (the resulting plot is called a force-displacement or force-extension curve). The question is, can you use this information to build a model that accurately describes red blood cell deformation?
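
As an aside, the force readout itself is simple once the trap is calibrated: near its center, the trap behaves like a linear spring. A back-of-the-envelope sketch (my numbers, not from any particular experiment):

```python
# Near its center an optical trap acts like a Hookean spring, F = -k*x,
# so a calibrated trap stiffness converts a measured bead displacement
# directly into a force. The stiffness below is an assumed typical
# order of magnitude, not a value from the paper.
k = 0.05                 # trap stiffness in pN/nm (assumed typical value)
x = 100.0                # measured bead displacement in nm
force_pN = k * x         # restoring force magnitude in piconewtons
print(f"{force_pN:.1f} pN")           # -> 5.0 pN
print(f"{force_pN * 1e-12:.1e} N")    # the same force in newtons
```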

Well, you can try, and it turns out some pretty sharp folks have been trying for a while, since red blood cells are unusually tractable, as eukaryotic cells go, because they're so simple. There's no nucleus. No mitochondria, no insulin receptors, no organelles at all. They're just small, not-quite-donut-shaped bags of hemoglobin. This is nice, because it lets you focus just on that bag, and ask: what's its structure?

That simple question turns out to be fairly complicated, though. The RBC membrane (composed of a phospholipid bilayer, membrane proteins, and cholesterol molecules) sits atop a flexible grid of a structural protein called spectrin: a ropy mesh joined together in a network of interlinked triangles, with a complex of other structural proteins at each vertex. Think of a fat man lying on a hammock, and you've got the right basic idea (fat and cholesterol sitting on top, and a concave grid beneath...it's a better analogy than I realized, actually!). Each spectrin link is like a rope: two long, flexible polypeptide rods, very similar to one another, twisted together and running antiparallel between the junction vertices.

So it turns out materials scientists have already done the hard work of developing mathematical frameworks for different sorts of polymers, including the aptly-named worm-like chain (WLC) model. This is pretty much what it sounds like: a way of modeling a polymer as a continuously flexible rod, and it has been used to describe things as disparate as strands of DNA and strands of cooked spaghetti. By using this model to describe the force-displacement behavior of the individual spectrin molecules, we can extract three key pieces of information about each link: its equilibrium length, its maximum extension length, and its persistence length. Formally, if you describe the polymer as a curve parametrized by arc length, the correlation between the unit tangent vectors at two points decays exponentially with the arc length between them, and the persistence length is the decay length of that correlation. Less formally, the persistence length tells you how stiff your polymer is: if you poke one end of a piece of uncooked spaghetti, that affects the whole strand, but if you do the same thing to a piece of cooked spaghetti, only the end and a little length near it will move. It turns out this 'little length' is about 10 centimeters; that's the persistence length. In contrast, a DNA double-helix has a persistence length of around 50 nanometers: it's about 2 million times floppier!
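
To make this concrete, here's a minimal sketch of a WLC force law, using the standard Marko-Siggia interpolation formula with DNA-like parameters. I'm not claiming this is the exact variant the authors use; it's just the textbook form of the model:

```python
# Minimal sketch of the worm-like chain (Marko-Siggia interpolation):
# F(x) = (kT/p) * [ 1/(4(1 - x/L)^2) - 1/4 + x/L ]
# with contour length L, persistence length p, extension x.
import numpy as np

kBT = 4.11e-21   # thermal energy at ~298 K, in joules

def wlc_force(x, L, p):
    """Force (N) needed to hold a WLC of contour length L (m) and
    persistence length p (m) at end-to-end extension x (m)."""
    r = x / L  # fractional extension; must satisfy 0 <= r < 1
    return (kBT / p) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

# Example: a 1-micron DNA strand (p ~ 50 nm) pulled to half extension.
f = wlc_force(0.5e-6, L=1.0e-6, p=50e-9)
print(f"{f * 1e12:.2f} pN")  # -> about 0.1 pN
```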

What the WLC model gives you, mathematically, is the force exerted on each spectrin chain as a function of the chain's length. This is useful because you can integrate the force over the chain's extension to get the Helmholtz free energy contribution of each chain (the Helmholtz free energy is a thermodynamic state function that tells you, basically, how much work you can get out of an isothermal process). Do this for every chain in your system, add them all up, and add that sum to the total hydrostatic elastic energy stored in the membrane and assorted proteins, and you've got an expression for the in-plane free energy of the spectrin network. This in-plane free energy can, in turn, be summed with the bending free energy, as well as the surface-area and volume constraint terms. The upshot of all this is that you've now got a way to mathematically describe how a red blood cell responds to mechanical stress, by calculating how the total free energy of the system changes.
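
In code, the bookkeeping might look something like this. This is my own schematic of the sum described above, not the paper's actual expressions; wlc_force is the sketch from the previous block:

```python
# Schematic of the free energy bookkeeping (assumed structure):
# integrate the WLC force over extension to get each chain's free
# energy, sum over chains, then add the remaining contributions.
import numpy as np

def wlc_chain_energy(x, L, p, n=1000):
    """Free energy (J) stored in one chain stretched to extension x,
    computed as the work integral of wlc_force from 0 to x."""
    xs = np.linspace(0.0, x, n)
    fs = wlc_force(xs, L, p)
    # trapezoidal rule by hand, to avoid version-specific numpy helpers
    return float(np.sum((fs[1:] + fs[:-1]) * np.diff(xs) / 2.0))

def total_free_energy(chain_extensions, L, p,
                      E_hydrostatic, E_bending, E_area, E_volume):
    """Total free energy: in-plane (chains + hydrostatic elastic term)
    plus bending plus the area and volume constraint terms."""
    E_in_plane = sum(wlc_chain_energy(x, L, p) for x in chain_extensions)
    return E_in_plane + E_hydrostatic + E_bending + E_area + E_volume
```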

This model is very high-resolution: it gives you a description of the network all the way down to the individual junction complexes. The computational cost, however, is steep: each junction complex is a degree of freedom, and there are about 30,000 of them! A useful adjunct, then, would be a way of systematically coarse-graining the model to reduce the degrees of freedom and the corresponding computational cost. Coupling that with a coarse-grained flow model, it would be possible to simulate large numbers of RBCs in the bloodstream.

And, that's the punchline, of course...there was a nifty paper, published last month in Physical Review Letters, that outlined how you'd go about doing this.

So, how can you coarse-grain this model? Basically, you need to decrease the number of vertices you're considering. But how do you do that without losing accuracy, given that the original model was set up so that the number of vertices approximated the number of junction complexes in an actual red blood cell? One simple way is to derive coarse-grained versions of the parameters in the finer-grained model: that is, estimate the effective parameters (equilibrium length, persistence length, hydrostatic elastic energy, and spontaneous angle) from geometric arguments. The effective equilibrium length (and the maximum length, which is taken to be around triple the equilibrium length) is the equilibrium length in the finer-grained model multiplied by the square root of the ratio of the number of vertices in the finer model to that in the coarser one. A similar argument can be made for the spontaneous angle between adjacent triangles: the effective angle is the original angle multiplied by the ratio of the coarse to the fine equilibrium length.
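
Here's my reading of that geometric rescaling in code. The variable names and the 75 nm fine-grained equilibrium length are my own illustrative choices, not values from the paper:

```python
# Sketch of the geometric rescaling described above. N_f and N_c are
# the vertex counts of the fine and coarse models.
import math

def coarse_grain_lengths(l0_f, N_f, N_c):
    """Effective equilibrium and maximum lengths after coarse-graining."""
    l0_c = l0_f * math.sqrt(N_f / N_c)  # sqrt of fine-to-coarse vertex ratio
    lmax_c = 3.0 * l0_c                 # max length taken as ~3x equilibrium
    return l0_c, lmax_c

def coarse_grain_angle(theta_f, l0_f, l0_c):
    """Effective spontaneous angle between adjacent triangles."""
    return theta_f * (l0_c / l0_f)      # scaled by coarse/fine length ratio

# Example: going from 23,867 vertices down to 500 (lengths assumed).
l0_c, lmax_c = coarse_grain_lengths(l0_f=75e-9, N_f=23867, N_c=500)
print(f"l0: {l0_c*1e9:.0f} nm, lmax: {lmax_c*1e9:.0f} nm")
```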

Coarse-graining the parameters in the in-plane energy equation is more complicated. One way to do it is with a mean-field argument, which estimates the properties of the network while ignoring the correlations between vertices: that is, you estimate the physics of the whole network from that of a single vertex! Using this approach, you can derive expressions for the shear modulus (the ratio of shear stress to shear strain) and the bulk modulus (the resistance of the membrane to compression). This also provides a handy way to coarse-grain the persistence length: in the mean-field picture, the shear and bulk moduli are unchanged from their fine-grained values if the ratio of the equilibrium length to the maximum length is held fixed, and the persistence length is then rescaled as the original persistence length times the ratio of the fine-grained to the coarse-grained equilibrium length.
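
And the persistence-length rescaling, continuing the sketch above (again, the 15 nm fine-grained persistence length is just an assumed illustrative value):

```python
# My paraphrase of the mean-field argument: with l0/lmax held fixed,
# the shear and bulk moduli stay put provided the product p * l0 is
# kept constant, which gives the rescaling below.
def coarse_grain_persistence(p_f, l0_f, l0_c):
    """Effective persistence length after coarse-graining."""
    return p_f * (l0_f / l0_c)  # fine-to-coarse equilibrium-length ratio

# Continuing the 23,867 -> 500 vertex example (l0_c from above).
p_c = coarse_grain_persistence(p_f=15e-9, l0_f=75e-9, l0_c=518e-9)
print(f"p: {p_c*1e9:.2f} nm")  # the coarse model uses a shorter p
```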

Taken together, this gives us a framework for extracting a complete set of parameters for the model at any level of coarse-graining. Of course, although you can extract effective parameters for an arbitrary level of coarseness, this approach won't be useful if you pick a vertex number of, say, 3. So how far can you take it before you lose the ability to describe the cell deformation meaningfully? The most straightforward way to answer this, as well as to assess how useful the procedure is in general, is to just brute-force the question (AKA heave a big pile of simulations at your hapless cluster and spend a week drunk and high with a giddy pack of scantily-clad valley girl strippers in Vegas as your data collects itself, not that I would ever do that, of course). So, cheap suits and bowls of cocaine at the ready, the authors run a bunch of simulations and discover...the lower limit is about 100 vertices. Below that, the simulated cell's axial and transverse diameters start to deviate noticeably from experiment. Here's a snapshot of their data:

[Figure: axial and transverse force-displacement curves at four levels of coarse-graining (23867, 5000, 500, and 100 vertices) against experimental data, with an inset showing the effect of the persistence-length adjustment.]

The plot shows both axial and transverse force-displacement curves. The black diamonds are experimental data points, whereas the solid colored lines represent simulation results at different levels of coarseness (blue line: 23867 points, red: 5000, green: 500, magenta: 100). Except for the magenta axial (lower) curve, the simulated curves are in relatively good agreement with the experimental results. The inset gives an idea of how sensitive their model is to the way they adjusted the persistence length. All the curves except the magenta curve use the adjustment procedure described by the authors; the magenta curve retains the fine-grained value for the persistence length, and as you can see, the results are nothing like the desired linear relation!

So, this is interesting because it shows that their coarse-graining procedure produces results that are comparable to the much more computationally intensive fine-grained model, and fast is always good, of course, because faster = more time to tweak = less power used = less money used, etc. But the real value here is in using the coarse-grained model to do flow simulations of RBCs in circulation. They do this using the dissipative particle dynamics (DPD) method, which is a way of describing clusters of molecules moving together in a flow. The RBC and surrounding fluid are both modeled as DPD particles, and their interactions are modeled using soft quadratic potentials. The flow domain (the simulated capillary) is a tube 10 microns in diameter. The RBC starts out immersed in the fluid, at rest, in the middle of the tube, and then they watch the flow simulation develop. The deformation sequence for 500 vertices is shown below:

[Figure: deformation sequence of the 500-vertex RBC model in capillary flow, from the initial biconcave disc to a parachute shape and back.]

The 'parachute' shape observed in (c) is consistent with experimental observations, as is the eventual restoration of the RBC's biconcave shape. They also simulated the behavior of the RBC in a shear flow, and again their simulations seemed to match experimental evidence.
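
For what it's worth, here's my rough sketch of what a DPD pairwise force looks like, in the standard Groot-Warren form with illustrative parameters; I don't know the exact variant or values the authors used. Each pair of particles within a cutoff feels a soft conservative repulsion plus dissipative and random forces, tied together by a fluctuation-dissipation relation:

```python
# Minimal sketch of the standard DPD pairwise force (conservative +
# dissipative + random), in reduced units. Parameter values and names
# are illustrative, not the paper's.
import numpy as np

def dpd_pair_force(r_vec, v_vec, a=25.0, gamma=4.5, kBT=1.0,
                   r_c=1.0, dt=0.01, rng=np.random.default_rng()):
    """Force on particle i from particle j (3D vectors).
    r_vec = r_i - r_j, v_vec = v_i - v_j."""
    r = np.linalg.norm(r_vec)
    if r >= r_c:
        return np.zeros(3)                 # no interaction beyond cutoff
    e = r_vec / r                          # unit vector from j to i
    w = 1.0 - r / r_c                      # weight function w^R; w^D = w^2
    sigma = np.sqrt(2.0 * gamma * kBT)     # fluctuation-dissipation relation
    f_c = a * w * e                                # soft repulsion
    f_d = -gamma * w**2 * np.dot(e, v_vec) * e     # friction along pair axis
    f_r = sigma * w * rng.standard_normal() / np.sqrt(dt) * e  # random kick
    return f_c + f_d + f_r
```

The soft conservative term a*(1 - r/r_c) is what the 'soft quadratic potentials' language refers to: it's the force you get from a potential quadratic in (1 - r/r_c), so particles can overlap, which is what lets DPD represent whole clusters of molecules as single particles.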

Pretty neat, all in all. I still need to look into how DPD works (I have only the most general sense of the technique), as well as exactly how you do an optical tweezers experiment, since that data is kind of at the heart of all this!