Sunday, February 27, 2011

gas prices and oregon...

I noticed today a strange thing in Oregon.
Gas prices have changed, suddenly and drastically.

For almost the whole year I've been here (can't believe it has been that long!), gas prices have hovered around $2.80-3.00. That's not the cheapest in the country, but it has been pretty constant, and I like that.

I am not a fan of $3.65.
Now, I don't have to drive much, so it's not a big deal for me, and I guess it's just all this turmoil in the Middle East that's driving it, but still. As an Oregonian, I know that there are a lot of people whose livelihoods (in timber, especially) depend on driving and using gas. Hmmm... I feel bad for them and for our industry. Trust me, they are looking at alternative sources of fuel (wood chips), but that technology just isn't advancing as swiftly as most people would like.

Well, with spring (hopefully) coming, it will be a good incentive to walk around more. It's probably only a mile or so to the Safeway shops, Market of Fail, etc., and only 2 miles to places like Kimbos. Actually, school is only about 3 miles away if you take that path.

I am laughing a little now at how seriously I am considering getting a "Razor Scooter"-- 20 dollars to increase my speed enough to make the commute feasible on a tight schedule! (I'm not much of a biker, sadly, but that would also be fun.)

Thursday, February 24, 2011

Snow? What...?

There is a chance of snow in San Francisco!
This is crazy... I don't think it has snowed here in decades.

Wednesday, February 23, 2011

Why travel?

"A hundred reasons clamor for your going. You go to touch on human identities, to people an empty map... You go because you are still young and crave excitement, the crunch of your boots in the dust; you go because you are old and need to understand something before it's too late."

- Colin Thubron, Shadow of the Silk Road

Tuesday, February 22, 2011

Stupidity as an emergent property

An amusing observation I've heard a couple times is that a person is smart, but people are stupid.

At first glance, this seems like a silly (or even paradoxical) idea, but I think there's actually some truth to it. Suppose that, in the aggregate, irrational behavior in groups of people does not tend to cancel out, but rather, tends to be self-reinforcing. If a herd of horses running down a trail in the woods comes to a fork in the trail, the first horse has about a 50-50 chance of going right versus going left, but the second horse does not -- it will tend to follow the first horse. The third horse will have an even stronger tendency to imitate the first two, and so on. In this way, 'herd mentality' can be thought of as a sort of positive feedback loop.
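In fact, you can watch this happen in a toy simulation. Here's a quick Matlab sketch (my own toy model and parameter choices, nothing rigorous): each horse goes left with probability equal to the current fraction of horses that have gone left, in the style of a Pólya urn.

nHorses = 100;    % horses per herd
nHerds = 2000;    % number of independent herds to simulate
fracLeft = zeros(nHerds,1);
for h = 1:nHerds
    nL = 1;       % pseudo-counts, so the first horse is unbiased (50-50)
    nR = 1;
    for k = 1:nHorses
        if rand < nL/(nL+nR)    % follow the crowd with prob = current left fraction
            nL = nL + 1;
        else
            nR = nR + 1;
        end
    end
    fracLeft(h) = (nL-1)/nHorses;    % fraction of actual horses that went left
end
hist(fracLeft,20)
xlabel('fraction of herd that went left')
ylabel('number of herds')

If the horses chose independently, the histogram would be sharply peaked at 0.5; instead, it comes out roughly flat, meaning whole herds regularly stampede almost entirely one way. The individual 'irrationality' doesn't average out.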

Now, think of people. For example, people buying and selling stock. Clearly, trading stocks is a much more complicated situation than a hypothetical herd of horses running down a trail. However, what if the same sort of herd mentality exists there, as well? It may be true that, on average, stock traders generally behave as independent actors. However, competing with this rationality, I think it is reasonable to say that they have a (perhaps only slight) natural tendency to imitate each other. This tendency is safely ignored most of the time, but in the case where very many traders are suddenly all buying or selling the same stock, individual traders are going to be enormously tempted to imitate the large group. After all, who wants to be the only one left out of a really hot stock? Who wants to be the sucker last in line during a bank run? If you are the thirteenth horse on the trail, and all twelve of the horses before you went left, then you figure, maybe they know something you don't know. Chances are, you'll break left, too.

Just a thought I had. This seems consistent with the observation that stock price fluctuations follow a heavy-tailed (power-law) distribution; heavy tails tend to imply that some underlying coupling exists in the system. Now, if I could just get my hands on some stock price fluctuation data, I might be able to really take this somewhere...
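At least the check itself should be short, once I have the data. Something like this, where p is a (hypothetical, for now) vector of daily closing prices:

r = diff(log(p));           % log returns
[n,x] = hist(abs(r),50);    % histogram of the return magnitudes
loglog(x,n/sum(n),'o')      % a power-law tail shows up as a straight line
xlabel('|return|')
ylabel('frequency')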

Monday, February 21, 2011

Physics envy

In many parts of physics, theory drives experiment: theory is advanced, complete, and quantitative enough that it is able to lead the way into unknown territory, suggesting new experiments. However, this is not the case in almost all other fields that have a significant theoretical component. For example, in biology, theory is almost always retrodictive: the best it does, or even tries to do, is help better explain or quantify things that are already qualitatively understood through experiment. While this is certainly an important task, it does lead biological theorists (meaning, in this case, me... but also others I've spoken to!) to a certain 'physics envy.'

Because this is the current state of biological theory, when developing a new model, I've found there is an awkward balance between my inner gung-ho theorist, which tells me I should make all kinds of counter-intuitive, out-there assertions based on a sort of reductio ad absurdum application of my model, and my more practical, engineering-like mindset, which insists that I focus on interpreting data that already exists, and only make predictions which can be experimentally verified without ridiculous time and/or expense. Where they find common ground is that (1) the theory does need to agree with whatever data currently exists, and (2) it does need to make meaningful predictions, which are (at least in principle) testable experimentally.

(1) seems straightforward, but consider this. I'm in the final stages of building a model for the evolution of protein-protein interaction (PPI) networks. It agrees with the data pretty well for every standard network property I could think to measure -- degree, clustering, betweenness, eigenvalues, closeness, error tolerance...you name it! Altogether I've got 12 things I'm comparing, and it seems like my model nails pretty much all of them, in fruit flies, yeast, and humans. I've also confirmed that various other models do not capture all these features. So, off to a good start, right?
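(As an aside: most of these static checks are quick to run once the network is in memory. A sketch of two of them, assuming the network is stored as a symmetric 0/1 adjacency matrix A:

k = sum(A,2);           % degree of each node
T = diag(A^3)/2;        % number of triangles through each node
C = 2*T./(k.*(k-1));    % local clustering coefficient
C(k < 2) = 0;           % undefined for degree < 2, so set to 0
hist(k,30)              % degree distribution

The other properties take more machinery, but the pattern is the same.)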

Here's the complication: these are all static features. My model builds a network which ends up looking, at least topologically, very much like present-day PPI networks. The model also makes specific predictions about the evolution of the network (which is, of course, the point of the thing in the first place). For example, it describes (at a very rough level) the evolution of the first cell. The thing is, there's no data against which I can validate these predictions, and I feel that it would be very strange to make any grand claims regarding evolution without being able to at least qualitatively verify them.

Ironically, my goal with this model was to see how simple a model I could build that would still accurately represent the PPI network's structure. However, it turns out that the model's excessive generality works against it: it's hard to find things to test! This is my quandary at the moment: how do I make predictions about evolution which won't require millions of years to actually test? I am very interested in augmenting this model in various ways, incorporating functional and environmental factors into it. But the first step is to verify what I've got already, and that's tricky, precisely because it's so simple at the moment.

One idea (which was suggested by my wife) is that I should try to apply the model to allopatric speciation events -- that is, to organisms' sudden evolutionary bursts in response to geological or environmental changes. I think I can incorporate this kind of event into my model framework in a relatively natural way, and it has the great advantage that it happens on a rapid enough time scale that there is real data out there to compare my predictions with.

Sunday, February 20, 2011

Inset plots in Matlab

Here's a Matlab code snippet that I've found to be useful. Something I find myself needing to do pretty regularly is to create a plot that has a smaller plot as an inset. (This is common because many journals have pretty strict space limits, so you've got to try and pack as much info as possible into the small set of figures you're allowed...)

X = -5*pi:0.05:5*pi;
Y = sin(X)./X;    % element-wise division (X never hits exactly 0 here)

figure(1)
clf
% Main plot: just the x >= 0 half of the curve
plot(X,Y,'b-','LineWidth',1)
hold on
ylabel('sin(x)/x')
xlabel('0 \leq x \leq 5\pi')
title('Halfway to a mexican hat')
axis([0 5*pi -0.3 1.1])
h1 = figure(1);
h2 = get(h1,'CurrentAxes');    % handle to the main axes, in case you want to tweak it later
% Note: edit these numbers to change position and size of the inset
% plot (they are [left bottom width height], normalized to the figure)
h3 = axes('pos',[.575 .575 .31 .31]);
% Inset plot: the full range of x
plot(X,Y,'b-')
xlabel('-5\pi \leq x \leq 5\pi')
axis([-5*pi 5*pi -0.3 1.1])

When you run this, you get the following extremely sexy plot:
Full disclosure: I copied this code from somewhere else a while ago, but I can't remember where. (Cool story, huh?) Anyway, thought I'd post it here, in case it is useful to someone.

Saturday, February 19, 2011

Is this a deconvolution?

Matlab has a handy built-in function, conv2, to do two-dimensional convolutions. These are useful for a variety of reasons: in any situation where you have a 2-D array, and you want to adjust each value in the array according to the values of its neighbors, a 2-D convolution can do this quickly and painlessly. It's a standard tool used, for example, in image processing: programs like Photoshop use 2-D convolutions to filter images in various ways, such as blurring or sharpening them.
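For instance, a simple box blur is just a convolution with a small averaging kernel (the 'image' here is random noise, purely for illustration):

I = rand(100);                % stand-in for a 100x100 grayscale image
K = ones(3)/9;                % 3x3 averaging kernel
Iblur = conv2(I,K,'same');    % 'same' keeps the output the same size as I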

Here's a related problem I encountered recently. Suppose, instead of adjusting each array value as a weighted sum of its neighbors, you want to use each array value to adjust all of its neighbors. Specifically, you've got an array of mostly 1's, interspersed with the occasional 0, and you want to take each 0 and surround it with eight other 0's. One way to do this is to have a nested for-loop that checks whether each element is equal to 0, and if so, sets its neighbors to 0. This is the brute-force method (sketched below). One annoying feature of this approach, however, is that you have to include special exceptions for any values along the edges: for example, if there's a 0 in the first row of your array, then it only has five neighbors to set equal to 0 (or three, if it's in a corner). Also, if the array is of any reasonably large size, nested for-loops take forever and a day to run.
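For concreteness, the brute-force version might look something like this (a minimal sketch of my own; it handles the edges crudely by clamping the index ranges, and writes to a copy B so that newly created 0's don't trigger further zeroing):

A = ones(5);
A(1,1) = 0;
A(2,4) = 0;
B = A;                  % output copy
[nr,nc] = size(A);
for i = 1:nr
    for j = 1:nc
        if A(i,j) == 0
            % zero out the 3x3 neighborhood, clamped to the array edges
            B(max(i-1,1):min(i+1,nr), max(j-1,1):min(j+1,nc)) = 0;
        end
    end
end

Anyway, here's a code snippet I wrote that avoids some of these issues: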

A = ones(5);    % example array: all 1's...
A(1,1) = 0;     % ...with a couple of 0's sprinkled in
A(2,4) = 0;

% Pad A with 1's out to 7x7 in nine different ways: one centered copy (pA),
% plus eight copies (A1-A8) each shifted by one position in a different direction
pA = padarray(A,[1 1],1);
A1 = padarray(A,[2 2],1,'post');
A2 = padarray(padarray(A,[0 1],1),[2 0],1,'post');
A3 = padarray(padarray(A,[0 2],1,'pre'),[2 0],1,'post');
A4 = padarray(padarray(A,[1 0],1),[0 2],1,'post');
A5 = padarray(padarray(A,[1 0],1),[0 2],1,'pre');
A6 = padarray(padarray(A,[2 0],1,'pre'),[0 2],1,'post');
A7 = padarray(padarray(A,[0 1],1),[2 0],1,'pre');
A8 = padarray(A,[2 2],1,'pre');

% Element-wise product: a 0 in any shifted copy zeroes out that position
zA = pA.*A1.*A2.*A3.*A4.*A5.*A6.*A7.*A8;

% Chop off the padding to get back down to the original 5x5 size
zA(1,:) = [];
zA(:,1) = [];
zA(6,:) = [];
zA(:,6) = [];

So, what this does is sort of like a convolution. It first 'pads' the array with 1's along all its edges, then creates 8 arrays that are shifted to every position that needs to be zeroed out. Then it multiplies all the arrays together (element-wise multiplication, of course, not matrix multiplication!). The final four lines just chop off the 1's that the array was padded with.

While this works fine, and is much faster than looping through the indices explicitly, I have a nagging question of whether this can in fact be done using a standard 2-D convolution/deconvolution operation... Hmm. In any case, on the off chance that someone will find this code useful, it's here!

UPDATE (2/20/11): As KSP points out in the comments, this is really just a dilation (of the 0's, in the image-processing sense). With this in mind, it is straightforward to rewrite it as a convolution:

A = ones(5);
A(1,1) = 0;
A(2,4) = 0;
F = ones(3);    % 3x3 convolution filter
% Flip the 0's and 1's, smear the (now) 1's over their 3x3 neighborhoods,
% then flip back
zA = ~conv2(double(~A),F,'same');

(F is the convolution filter.) This does the same thing as the previous code snippet, but it runs about 3.5 times faster. Not sure how I didn't see that you could do it this way to begin with! D'oh.
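If you want to check the speedup yourself, here's a quick-and-dirty timing test on a bigger array (the 3.5x figure was on my machine, so your mileage may vary):

A = ones(2000);
A(rand(2000) < 0.001) = 0;    % sprinkle in a few 0's
F = ones(3);
tic, zA = ~conv2(double(~A),F,'same'); toc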

ah spring time

I'd like to point out that it's like 109 degrees at this same place in May because it's so exposed.

Godspeed latent heat flux.

I need some old harvest prescriptions to finish my work for this paper. They are located at Andrews. If this works okay, I'm uploading a time lapse of Andrews so that we can all see the fun weather.
If not, you can see it here:
http://webcam.oregonstate.edu/andrews/

Monday, February 14, 2011

One step closer to Honolulu


I applied to the conference, which means I now wait to hear whether the department will give me a grant!

Tuesday, February 01, 2011

Beautiful epiphytes