Saturday, January 30, 2010

The sacrificial lamb has been delivered

Whew...finally got the second draft of my manuscript in! I guess this is why it's usually considered a bad idea to completely scrap your existing introduction and theory sections and start over: it's a ridiculous amount of work. Now it's time to wait and see how long it takes the other folks on the project (including, of course, our in-house economist, statistician, and lumberjack-of-all-trades Master_of_Forests...hmm, should that be 'lumberjill' or something, or is lumberjack gender-neutral?) to tear it to pieces.

Friday, January 29, 2010

Who is Mr. Pareto?


Everywhere I look, I run into this guy, "Pareto." Pareto's this and Pareto's that. Pareto's Optimality. Pareto's Multipliers. Pareto Analysis. Pareto's Efficiency. Pareto's Flying Car and Elephant with Giant Zerg Eating Its Ears while Painting Its Hooves. Well, maybe I exaggerate a little bit.
I don't think I've actually heard of Pareto's Multipliers... I think that's just a term I've been using when I see Pareto mentioned in the context of Lagrange multipliers.

Which makes me wonder: just who is this Mr. Pareto... and how did he discover everything?

As it turns out, WIKIPEDIA, thank you, Vilfredo was an industrialist and economist enjoying the reign of "our good friend" Mussolini. Vilfredo was the darling of fascist economists and the father of power laws in relation to income distribution. And... I would explain more, but the wiki link to the Pareto Distribution exposed me to my favorite Greek symbol that I do not understand, also known as the "squiggle." So I will leave the explaining of Pareto's math to those who are not currently explaining a simple exponential function for 200 thesis pages and get back to my work. Meanwhile... I am sure you are glad to know about Vilfredo and the original context of his existence.

On a side note, Pareto Optimality is considered a main goal of natural resource economists today. Ah, it seems like all my favorite economic theorists have been associated with a rather unfortunate period and culture of history-- Ezra Pound's La Economiste, anyone? Brilliant work... bad timing.



Wednesday, January 27, 2010

Metonymy

So this is what happens (the image doesn't do this justice, but the replay does) if you spend all of your resources making archons:

LOOK HOW MUCH DAMAGE THEY'RE DOING!!!

But on to the serious stuff.

This morning I ran through Clemson's "Abernathy Park" on the lake. I didn't know we had a park on the lake, but we do, and it was nice. The water has been brown here for the past few days and I don't know why. It had been extremely green until yesterday, and then randomly it turned brown. The green was beautiful, like the Gildensee. It reminded me of good, alpine water. The brown, well... it just looks like a ton of kaolinite in suspension. Anyway, as I ran through Abernathy Park, I realized, "I have been here before." There was a playground at the edge of the park, near Riggs, where I live.

When I went to Physics Camp the first time as a kid, we went to that playground to learn about the basic stuff: force, mass, acceleration, gravity, etc. I remember I was messing around with something on the slide when the instructor said something along the lines of, "Did you know that everything in the universe is attracted to everything else through gravity? So every one of you is pulling the farthest-away stars toward you, just a little bit. But there was also the Big Bang, which also has to do with physics, and so everything is also moving away at a rapid speed." I remember that this news felt like a huge burden to me. I didn't quite understand the scale of the thing. I was attracting the stars to me, but the stars were moving away from me as well? The burden I felt was, well, how do I increase my influence so that the stars don't go away?

Now, obviously, there is a huge problem of understanding "magnitude" on both the time and force scale that I was totally missing, but to me this was just crushing news-- I couldn't figure out how to stop the expansion of the universe. So I walled off this problem in my heart and in my brain and tried not to think about it. Well, of course, when I saw that playground this morning, I thought of the "problem" again. I really think it is a great metonymy for how our minds continue to work-- in the light of something huge, we forget that we are epically small.

I often think, when I am doing research, that I need to quickly provide solutions for crucial problems that have not yet been resolved-- what's a good interest rate to choose for a 200-year investment? How do I predict tree height from diameter using only one variable measurement? How can I put a quantitative value on "the appreciation of wildlife"? How do I stop the expansion of the universe? It's good to run past the playground and to view questions through the metonymical lens-- every problem has a solution (I can't stop the universal expansion; there is some equation that no one knows that predicts tree height from diameter; etc.)-- but the best I can do for now is find a system that works and stick with it devoutly.

One slide and a few swings and many questions about the nature of everything.

Monday, January 25, 2010

So, you're a lumberjack?

Today I went to the woods to take some tree measurements. The stand we wanted to work on had been thinned, so we had to go to a different stand on North Forest. It hadn't been thinned recently, so there was a lot of understory growth and suppressed trees. It was a very windy day, and the young trees of this stand were so limber from their youth and from all the recent rain that they were blowing above us like green, tufted feathers.

Taking tree measurements is pretty easy and fun. You just go out there and measure diameter with a d-tape and height with a Haga altimeter (that's what we used today). Then you record and repeat. The whole time you are out there working, it's just kind of pleasant. I can't help but think, wow, I am so lucky that this is what I do.

One of the reasons I really enjoy the prof I am TA-ing for this term is that he is just absolutely passionate about trees. He really knows "everything there is to know" about ecology. Today as we were working he would go off and find leaves or cones from trees I didn't even notice; it turns out we were right next to Clemson's old arboretum, so a lot of strange species were out there. Like China-Fir or Table Mountain Pine. He just loves the woods. I love the woods.

To me, being in the woods feels like I am being healed. I breathe and the trees siphon through themselves all the junk and stress. I feel the wet soil encroach on my feet through my boots. Twigs scratch at my face. Dirt climbs under my fingernails. I feel completely alive.


A new desktop background

Well, I do love looking at the Golden Gate Bridge, but I don't think I've ever laughed as hard as I laugh when I think about the archon doing damage. It is so amusing that I decided to spice up my desktop background with this humor. Behold my new background. Observe that I am now officially L337.... or not.

Sunday, January 24, 2010

I'm doing damage!

Saturday, January 23, 2010

Marginal Analysis

I am reading up about shadow pricing and it reminded me of an article a friend once showed me about marginal analysis.

Behold the conundrum which brought an economics department to a standstill: Why do people walk up stairs, but not up escalators?

Before you get to reading, and discover the (obvious, but economically somewhat tenuous) solution, consider the margin:

Taking one step costs energy X
Moving up one stair benefits by distance Y

The marginal analysis of stair and escalator is [in the limited way I have presented it] equivalent... Why choose to "spend" on stairs, but not on escalators? Now, go back up, and read for yourself. Be amused at economists. We are very amused with ourselves.

Friday, January 22, 2010

Coppices

I have recently been trying to complete my newest S assignment, which is "add 40 references to your model background by Wednesday," by reading and seeking out some lovely documentation through CU's massive CAFLS access. Of course, for every one relevant article I read, I also read three that make me think, "oh, this is interesting, but how will I ever use it?" Well, this is about one of those articles...

With my study of urban forests, one thing I have failed to think about is coppice. Let me explain. Timber forests are often grown for height and trunk thickness; a "tall, straight bole" is considered to be the best kind of tree. If your tree has even a slight crook in it, you will lose value and grade because you get fewer boards out of it and the grain alignment is off, which reduces some of the wood product's properties. For example, wood has the most tensile strength "with the grain"-- which means if you have to cut your boards at an angle, you'll lose strength, and when you lose strength, you lose grade, and when you lose grade, you lose money... it's all a cycle.

The opposite is true in urban forestry. Wood is not even a concern, really, so trees, yes, are grown for height, but their breadth (crown diameter and crown height, and to some extent, I guess you could include LSA) is really what is of value. I see my dad trimming the crape myrtle at home all the time; when I ask why, the answer is "it grows more." What he doesn't realize is that he is actually practicing a silvicultural technique that was very prominent in medieval days and also in Japanese forestry: coppice.

The idea of coppice is to promote wide, low-growing trees. These trees, obviously, are not being grown for timber, but for other products. Early coppicing was used to get charcoal and wood for fuel. Recent silvicultural practices (including one of my favorite sites on the CEF) use the coppice-with-standards technique, which is where you have a sparse level of older, tall growth over a level of low growth. "Biomass" is the hit word right now in our field, so obviously this is going over very well. Let's say you have a hardwood forest and you want to grow timber trees but you also want steady cash flows... oh, it's so easy! You plant oaks, let them grow tall, and maintain a lower-level forest of, say, maple or beech. You harvest the maple or beech frequently for biomass products by simply performing a massive prune. This is different from, say, a seed-tree cut, where you've only got a few trees in your upper level-- in coppice with standards you've got about a 50% full overstory and a completely full understory.

So what does this have to do with urban forestry? Well, most of the valuation strategies I've been using have been, somewhat, based on strategies used in traditional timber situations-- not coppicing. But an urban forest behaves much more like a coppice-- its value is based on breadth/crown size/almost "biomass" rather than timber. I wonder, when looking at some of the measurements that are used for urban forests, whether we should be viewing them through the lens of coppice.

This is something to think about. Perhaps something to think about AFTER I've finished the work I'm currently doing. In the meantime... coppices for the win.

Wednesday, January 20, 2010

Where did all that data come from?

Here's something that really bugs me: when authors mention that they are using some "database with 1,000,000 data points" for some subject, and then totally fail to mention what database it is, or how they got access to it. This seems like something that should absolutely be a standard part of the citation process. For example, I've now read several papers by a certain author (who shall remain unnamed) who does this in every publication, and it drives me up the wall. I can't even double-check his conclusions, because he doesn't say how he got all this data!

Tuesday, January 19, 2010

3 AM at Lehotsky Hall

I am in my office.
It is three in the morning.
It is about 1000 degrees in here.
I am retyping my thesis to fit Clemson's formatting guidelines. They are INTENSE.

I am listening to Pandora radio... it reminds me of this summer and doing the Big Cruise office work all night long for several nights... and how R-squared ate about 6 bags of tortilla chips during this experience... and how we slept in the plantations and I had a hammock and everyone was jealous. Also, there were many snakes. Did you know snakes like to climb trees? They do. They will hang out on branches curled up into terrifying black "nests" of snake. These are obviously the "nice snakes" that only eat small animals and don't bite, but nonetheless, it's never pleasant to realize that the thing randomly touching your face... is a snake.

I am thinking... soon I will be on the west coast. This makes me happy. Finishing my thesis will also make me happy. Back to work!

Monday, January 18, 2010

Einstein

I've heard before that Einstein's dying words were "I wish I learned more math."

I didn't understand before, but I'm starting to see why he said that. Math is kind of like Wikipedia... every time you learn about one thing, you get curious to know about at least 10 more. Maybe it's an addiction. There are worse things to be addicted to, I suppose.

Today I saw a basic input-output (I-O) matrix (2 commodities). When I worked through the algebra behind it, I realized that I was actually solving for the "determinant." Ah, the link between theory and application has been made. This is good.
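Since I'll probably want this again later, here's a minimal sketch in R of what I mean (the technical coefficients and final demands below are made up purely for illustration): solving the two-commodity Leontief system (I - A)x = d is exactly where that determinant shows up.

# Two-commodity input-output sketch; all numbers are invented for illustration.
A <- matrix(c(0.2, 0.3,
              0.4, 0.1), nrow = 2, byrow = TRUE)   # A[i, j] = units of good i used per unit of good j
d <- c(100, 50)                                    # final demand for each commodity
Id <- diag(2)

det(Id - A)        # the determinant you end up computing when you grind through the algebra by hand
solve(Id - A, d)   # total output x satisfying x = A x + d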

Sunday, January 17, 2010

Longing for Kashi and the American Chestnut


In case you haven't noticed, I am not sure my brain really knows how to make metaphors correctly. Often I start thinking about one thing, which makes me think about another, and then I sort of sigh and relax into "the great smug curtain that goes around everything / has always been there / will always remain."

I used to be a Kashi fiend. I mean, I absolutely loved that stuff-- it was like rocket fuel for me. I could eat Kashi for three meals a day, every day. That cereal was great-- unsweet, crunchy, fibrous, delicious. I see it all the time-- on ads, in stores, etc., and recently, man, I've been wanting some Kashi.

The only problem is.... Kashi is known for its seven whole grains and sesame... and the first two of those grains are derived from... wheat.

Sadness. No, really. I went to my local "health food establishment" hoping to find a replacement for Kashi, something similar to Kashi, only sans wheat, and, surprisingly, nothing existed. There were lots of tasty "Leapin' Lemurs," which I also enjoy, and the "Nature's Harvest 'cheerios,'" but nothing with the delicious non-sweet healthy wonders of... Kashi. I was actually feeling rather frustrated, and contemplated creating my own flakes and twigs out of gluten-free grains (until I realized, but wait, I hate cooking! Grilling is okay, but I know I will never be a baker.) Anyway, today for a change of pace, I came to Greenville, still craving Kashi, and in a last-ditch effort, I went to Earthfare and found the cereal shown above.

Okay, so maybe it's closer to "Honey Bunches of Oats"... still, it filled me with great happiness to finally find a not-so-sweet "flakes + twigs" cereal. I honestly believe that one day I will be healed of this. I know that this should not be possible... but I believe in quantum mechanics, and God, and I think between the two of them, impossible stuff can happen.

This makes me think of the American Chestnut. What's so interesting about the story of the woods in the Southeast is how much the American Chestnut shaped the woods, yet there isn't an American Chestnut to be seen around here. Last fall, we went to North Carolina to look for American Chestnuts... we found one. It was about five feet tall, and there was a dead skunk near it, so we didn't spend much time with it.

Prior to 1904, American Chestnuts were the dominant tree in the overstory of the Eastern USA. When I say dominant, I mean that Chestnuts were roughly the size of the redwoods in California-- huge, beautiful trees, rich in tannins and providers of delicious mast for deer populations. Under the shade of the chestnuts, the oaks grew tall and strong (oaks can dig shade, but much of their competition, like poplar and maple, doesn't). Then, in 1904, Cryphonectria parasitica (the chestnut blight pathogen) was introduced to NY. In 40 or so years, all the chestnuts were wiped out. It was that fast.

Well, maybe I shouldn't say wiped out. We still have American Chestnut... shrubs. The CP pathogen causes cankers on the bole of the tree below the branches, so the canopy can't persist, but the tree can (and does) produce a bunch of sprouts below the canker. A tree once the size of the redwoods is now smaller than the crape myrtles that get planted in yards. It's kind of a sad story.

I'm a big fan of the idea that woods tell a story: when we look at the (eastern) woods today, we're seeing the aftermath of American Chestnuts. Without the safe canopy, the old oaks (that have reached a height greater than understory species) are still standing strong, because they grew in the time of the chestnuts. New oaks, born after the chestnuts, are kind of small and weak, and worse, overpowered by understory competition. I've heard from some people that (in part) because of the loss of chestnut, at some point, the eastern woods will just be poplar and maple. No oaks. In terms of ecosystem balance, this could be devastating. Oaks are a prime mast provider, but understory trees are not. A loss of oaks is a loss of good food for animals... I'm not trying to be an animal hugger and say "oh, the poor animals," but I am a huge fan of economics, and I know that when you change one supply in a multi-commodity system, you affect the supply and demand functions of every commodity in the system and the system equilibrium.

But that's the glory of the field of management. Management (at least for forests) is about maintaining an equilibrium in the face of a system that naturally tends toward something else. An effective management system will mimic and ultimately become a natural process for the area, but one that will achieve pre-specified goals. What's beautiful about this... and about not being able to eat Kashi, I suppose... is that management of a seemingly hopeless situation can produce stellar results. Chestnut is still struggling now, but there's a lot of devotion to its restoration. My intestines may be fighting now, but I'm devoted to healing them. Chestnut is not going down without a good silvicultural fight, and I don't think it will actually go down at all. If anything, I believe that chestnut will return to the overstory. And I believe that one day, I'll be healed. It's not at all a logical train of thought... but as S says frequently, "everything in Nature is logical, the hard part is figuring out how to understand it."

Well, since that was long enough, I'll just conclude and say that I'm glad to be here today, with my "pseudo-Kashi." It's forty-five and rainy again; the world is still elegant.

Saturday, January 16, 2010

"Woo-whee" as deconstructivist criticism

I love to run when it's forty-five degrees and rainy. It is my absolute favorite weather. I love it because it's just wonderfully silent and rhythmic--I never see anyone on these days-- it's just me, the rain, and the sound of my breathing-- slow and methodic. I thought this morning: I am so blessed; I am going to spend the next four years of my life in a place where it will be like this EVERY DAY (+ mountains).

But that's not what I am really going to talk about. You see, as I was enjoying this weather, I was thinking about Clemson. One of the things I really love about the program here is how intuitively we are trained to work the forest. Labs often involve going out into the woods and looking at the trees and the ground and trying to tell the forest's "story." What did it look like in 1600? In 1840? 1920? If you can't tell the story of the forest, you can't work with it effectively. You have to know the forest like you know your family-- you know its history, its composition, its behavior. It's funny-- the best advice I got from a professor here about how to improve my forestry was "walk around in the woods more." What a tough life :) . This morning, I wondered-- with all the precision agriculture techniques and the algorithms driving them out there, would it ever be possible to simulate the absolute intuition of an experienced forester? It's sort of like Garry Kasparov's situation. But I am not so sure that a real forester's experience could ever be trumped by a model (I would say "a constrained optimization," but that implies that there is a set of definite goals, which is not necessarily true in real situations). I needed an analogous way to assess this situation.

This made me think of literary criticism methods.

In the early 20th century, literary critics (and art critics) began to really push the limit of "what is a work?" They sought to break the boundary between something that is art and something that is real. To break that boundary, they had to define it. And they found that definition was significantly more complicated than expected. (If you read "Pale Fire" or "House of Leaves," you'll see this in full force). Of course, people have been looking at this for a long time, but there are three critical movements that I think really fostered ways we think about "what is a work?"

The first is the deconstructivist movement. Derrida. The idea behind deconstructive criticism is to look at juxtapositions of literary constructs in a work and examine symbolic fallacies. Take diction, for example: you might look at certain noun-noun or noun-verb or verb-adjective pairings (they don't have to be just dualistic combinations, but those are the easiest to describe) and think of all the possible historical and social symbolic references for each term (for example, "blood" might evoke "red," and "war," and "love," and "water," etc.), then look at the references for another term, compare them, and try to find combinations that "fall apart," which can basically be described as combinations that make you say "why would anyone write about that?" You go through the whole "work" this way. You can also deconstruct syntax, plot, character specifications, etc. The idea is that by deconstructing the literature you find the fallacies that prevent it from being "real."

The second movement is the... and I am somewhat hesitant to use this term loosely, but I will, "constructivist" movement. I am going to allude to an author here who is not exactly a typical constructivist, but I think he does a good job of describing the idea. Donald Davidson's poem "The Ninth Part of Speech" describes how a literary work is composed of eight parts of speech combined in such a way that different actions are "run" within the work. In a way, it is sort of like thinking of literature as a computer program. "Yes! (You) get the blue ball quickly" combines interjection + (implied) subject + verb (action) + adjective (specification) + noun (object) + adverb (specification of the verb). That sentence "constructs" a very specific scenario. I cannot go get the blue ball. You cannot go get the red ball. You cannot go get the blue cow. You cannot go get the blue ball slowly. Etc. Every construct destroys all the other possible constructs. However, that construction still does not exactly mimic reality. Davidson suggests that a "ninth part of speech," something that can be akin to, I guess, an "intuition" or a "soul"-- something greater than the sum of constructs in the context of all the impossible constructs-- cannot be written, and therefore no matter what constructions are employed, no literary work can ever have the dimensionality of reality.

The third movement is the vorticist movement. I don't think many literary professors would be happy that I mention this here, but I think it is the strongest method for trying to unite a work and reality. A vorticist constructs a "web" of meaning. I actually thought of vorticists yesterday when I looked at graph theory-- just like coloring the vertices of a graph in certain ways can create an impression of the whole graph (and there is a distinct theory behind how the vertices should be colored in order to avoid repetition), a vorticist's web (or vortex) of meaning combines parts of speech and their relevant symbols in such a way that the literary work creates an overall meaning greater than the sum of its parts. It is therefore considered to be a "vortex" with symbols, words, images, etc. on the periphery that sucks in the reader or viewer. Because the reader or viewer is required to participate in the full work in order to elucidate this deeper meaning, it is a powerful method of breaking the barrier between work and reality.

So, what does this have to do with foresters? I think, in a sense, that the deconstructive method is very similar to what a forester does in a forest. He or she has the intuition to look at problems on a microscopic scale and construct generalizations about the forest, and then to go back and assess those generalizations on the microscopic scale. I think of precision agriculture as being similar to the constructivist method, where different combinations and constraints can be put together in order to seek the optimal "solution" given landowner objectives. I wonder how the idea of the vorticists could be used as an analogy for forestry... is there some way that an intuitive, evolving model could be constructed... a model that is greater than the sum of its parts?... a model so good that, dare I say it, it might be a perfect representation of reality?

I'm not sure how I feel about this. Ethically, it feels unstable. Maybe it's just me, but I like thinking that the "woo-whee'ers" of the world will always have just that little edge on what I can do with my models.

Friday, January 15, 2010

Stay or jump...this shouldn't be complicated!

I've been trying to think of a way of modifying the standard transition matrix method to describe a process where you have many "particles," but they may only switch states in groups of some fixed size greater than one. For example, assuming the particles may only hop between states in pairs, but are otherwise independent, is there a simple way to predict the system's development over many time steps?

Here is the standard approach. Removing the "must hop in pairs" constraint, the transition matrix is a simple way of analyzing many time steps for a single particle. Suppose there are two states a particle may be in, state 1 and state 2 (denoted by s_1 and s_2). During each time step, a particle in state 1 has probability a_{11} of remaining in state 1, and probability a_{21} of switching to state 2. A particle in state 2 has probability a_{22} of remaining in state 2, and probability a_{12} of switching to state 1. The transition matrix, A, is

A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}

The probability of finding the particle in each state after t time steps is found from A^t. If we're given its initial location, then

\begin{pmatrix} s_1(t) \\ s_2(t) \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}^t \begin{pmatrix} s_1(0) \\ s_2(0) \end{pmatrix}

The components of the final state vector, s(t), are mutually exclusive: either the particle has found its way to state 1 or state 2. So the total probability of the particle being somewhere is just their sum (which is, of course, 1). To calculate the statistics for N independent particles, this sum is raised to the N power. For 1 time step, if the particle starts in state 1, this is a simple binomial:

Q = (a_{11} + a_{21})^N

Q, the sum over all available system states, is the partition function. For multiple time steps, Q is calculated as

Q = ([1 1] A^t s(0))^N

where [1 1] just "flattens" the final state vector into a sum, so the multiparticle expansion is all-to-all, as required. This is mathematically convenient in this case, but it's worth noting that the final state may be kept in vector form: we can still do an all-to-all multiplication by convolving the vector with itself (which is, in an unfortunate convention, denoted by *),

A^t s(0) * A^t s(0) * ··· * A^t s(0)

Writing the multiparticle expansion as a convolution seems like a pointless complication, since we're going to ultimately flatten the vector to calculate Q. However, keeping the vector's structure intact has the advantage of being able to keep track of pairs of particles switching states, because that information has not been flattened out. This is most clearly seen by writing the convolution in matrix form. The convolution matrix, C, will have the state vector as its first row, and a "shifted" state vector as its second row. (A matrix created using this rule is called a "circulant matrix.") After 1 time step,

C = \begin{pmatrix} a_{11} & a_{21} \\ a_{21} & a_{11} \end{pmatrix}

This circulant matrix is handy because it lets us calculate convolutions by doing ordinary matrix multiplication. The total expansion for a two-particle system (N = 2) is:

C^2 = \begin{pmatrix} a_{11}^2 + a_{21}^2 & 2 a_{11} a_{21} \\ 2 a_{11} a_{21} & a_{11}^2 + a_{21}^2 \end{pmatrix}

The internal structure of the circulant stores even powers of the hop probability a_{21}, corresponding to pairs of particles switching states, on its diagonal, and odd powers on its anti-diagonal. To see this more clearly, look at the circulant for a four-particle system,

C^4 = \begin{pmatrix} a_{11}^4 + 6 a_{11}^2 a_{21}^2 + a_{21}^4 & 4 a_{11}^3 a_{21} + 4 a_{11} a_{21}^3 \\ 4 a_{11}^3 a_{21} + 4 a_{11} a_{21}^3 & a_{11}^4 + 6 a_{11}^2 a_{21}^2 + a_{21}^4 \end{pmatrix}

So the partition function for 1 time step for pairs of particles can be calculated as half the trace of the circulant raised to the number of particles, (1/2) tr(C^N)! Alternatively, to store every third power of the hop probability on its diagonal (a "three particles must jump simultaneously" rule), a 3x3 circulant matrix may be used:

C = \begin{pmatrix} a_{11} & a_{21} & 0 \\ 0 & a_{11} & a_{21} \\ a_{21} & 0 & a_{11} \end{pmatrix}

and so on, for every fourth (4x4), fifth (5x5), etc. power.

This is not that useful as-is, because it's only for a single time-step, where it is not difficult to brute-force calculate every possible trajectory by hand. It would be very nice to be able to do this for multiple time steps: in effect, combining the transition matrix and circulant matrix to allow you to easily develop a multiparticle system for many time steps, while keeping the allowed trajectories separated from the forbidden trajectories inside the circulant's structure. This is a bit frustrating, because I think there should be a simple way to do this, but for some reason I just can't see what it is...
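In the meantime, to convince myself the one-step bookkeeping above actually works, here's a minimal numerical check in R (the stay/hop probabilities below are made up): half the trace of the circulant raised to the N should match a brute-force sum over all single-step outcomes in which an even number of particles hopped.

# One-time-step check of the circulant trick; the probabilities are invented.
a11 <- 0.7                         # probability of staying put
a21 <- 0.3                         # probability of hopping to the other state

mat_pow <- function(M, n) {        # matrix power by repeated multiplication
  out <- diag(nrow(M))
  for (i in seq_len(n)) out <- out %*% M
  out
}

C <- matrix(c(a11, a21,            # circulant built from the one-step state vector
              a21, a11), nrow = 2, byrow = TRUE)

N  <- 4                            # number of particles
CN <- mat_pow(C, N)
0.5 * sum(diag(CN))                # "pairs only" partition function: half the trace

# Brute force: enumerate all 2^N single-step outcomes, keep those with an even number of hops.
hops <- expand.grid(rep(list(c(0, 1)), N))    # 0 = stay, 1 = hop
prob <- apply(hops, 1, function(h) prod(ifelse(h == 1, a21, a11)))
sum(prob[rowSums(hops) %% 2 == 0])            # matches the half-trace above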

Sportswochen

I love Google Earth. Recently, I was thinking about Saalfelden... my Austrian home away from home. Saalfelden was beautiful! Unfortunately, when I lost my computer to crash-age, I lost all my pictures of Saalfelden (and Abers! :( ) but thanks to Google Earth, I can still get a glimpse of the Sportswochen.

So there it is (the stuff in the red circle)... where I spent the summer of 2005. Look at how close that mountain was! Yeah, I know. I'm blessed.

Thursday, January 14, 2010

Amortization tables... just another fun thing to do with finance

So, one of the most exciting parts of what I do is work with amortization tables. Amortization tables are essentially a way of keeping track of what portion of your loan payment goes toward reducing your principal, and which portion goes towards interest. In general, loan payments that we want to amortize are done on a month-to-month basis.

To figure out an amortization table for a loan is relatively easy; the first step, as usual, is to define your financial criteria: the loan amount, the number of years/periods, and the interest rate.

So, let's say we have a loan amount of $100,000, with a 30-year time frame, paid monthly, and an annual interest rate of 5%.

One of the most important things to understand about finance is that your "periods" are not how many years you have, but how many payments you are making. In this case, you are going to be making 12 x 30 = 360 payments. This means that the rate you actually apply each period is not 5%, but the monthly rate 0.05/12 ≈ 0.4167%. Compounding that twelve times gives an effective annual rate of (1 + 0.05/12)^12 - 1 ≈ 5.12%, which is slightly more than the stated 5%. The difference isn't HUGE on a small scale, but if your principal is big enough, your time frame long enough, or your compounding periods short enough, the difference can add up... for example

FV = 100(1.05)^1 = $105.00 (compounded annually)
FV = 100(1 + 0.05/12)^12 ≈ $105.12 (compounded monthly), and you can see where this is going. In other words, the more frequent the compounding, the higher the "effective" interest.

In the case of a loan, sadly, this works against you, but you can think about the implications of that yourself: just remember, when looking at a car payment, the rate is quoted yearly for a reason!

So, now that we know our monthly interest rate (the periodic rate), we can calculate the periodic payment using the annuity formula:

Payment = [interest x principal x (1 + interest)^(# of payments)] / [(1 + interest)^(# of payments) - 1]

So for our example, that's essentially: [0.0041667 x 100,000 x (1.0041667)^360] / [(1.0041667)^360 - 1]

In other words, we're looking at a monthly payment of $536.82.
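A quick sanity check of that number, as a minimal R snippet (same example figures as above):

# Monthly payment for the $100,000 / 30-year / 5% example above.
principal <- 100000
r <- 0.05 / 12                # monthly interest rate
n <- 30 * 12                  # number of monthly payments
payment <- r * principal * (1 + r)^n / ((1 + r)^n - 1)
round(payment, 2)             # 536.82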

But this is where the fun part comes in... AMORTIZATION... you see, because this is a loan, you really have to pay back both the amount of money you borrowed and the accumulated interest on that money. So, by the end of this lovely investment, you have pulled not $100,000 from your pockets, but $193,255.20! (which, conveniently, is $536.82 x 360) Why? Because finance is fun. Now, we can obviously deduce from the above equation why the monthly payment is greater than the "divide the loan by the number of payments" payment, but an amortization table lets us see how much of each loan payment goes towards the principal and how much goes towards the interest; it is, in a sense, the mechanism by which the formula is explained.

Our amortization table for the above would start like this:
MONTH   PAYMENT   TO INTEREST   TO PRINCIPAL   BALANCE
1       $536.82   $416.67       $120.15        $99,879.85
2       $536.82   $416.17       $120.66        $99,759.19
....

The underlying math is based on the concept that the loan balance is only reduced by the money paid to the principal, and not the money paid to the interest. Oh, those sneaky financial folk!

First month: loan balance = $100,000, monthly interest rate = 0.05/12 ≈ 0.0041667, balance x rate = $416.67 <-- the amount paid to interest

$536.82 - $416.67 = $120.15 <-- the money paid to principal


In the second month, the balance is reduced by only $120.15, leaving a balance that is larger than what we would "expect."

As you can guess, as time goes on, the amount of money paid to interest lessens, whereas the amount of money paid to principal increases; both always add up to $536.82, however.
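If you'd rather not grind out 360 rows by hand, here's a minimal R sketch that builds the whole table by looping that rule (same example numbers as above; nothing fancy):

# Full amortization table for the example above.
principal <- 100000
r <- 0.05 / 12
n <- 360
payment <- r * principal * (1 + r)^n / ((1 + r)^n - 1)

balance  <- principal
schedule <- data.frame()
for (month in 1:n) {
  to_interest  <- balance * r              # interest accrued on the current balance
  to_principal <- payment - to_interest    # the remainder reduces the balance
  balance      <- balance - to_principal
  schedule <- rbind(schedule,
                    data.frame(month        = month,
                               payment      = round(payment, 2),
                               to_interest  = round(to_interest, 2),
                               to_principal = round(to_principal, 2),
                               balance      = round(balance, 2)))
}
head(schedule, 2)    # matches the first two rows of the table above
tail(schedule, 1)    # the balance ends up at (essentially) zero
payment * n          # total paid over the life of the loan: about $193,255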

I was thinking earlier today that there is probably a good way to express this process with matrices. Now I am not so sure that is the case, but I am also not fully confident in matrices. Part of the glory of the amortization table is that it is so laborious. In fact, if you haven't already guessed, the word "amortization" comes from the Latin root "mort"-- which means death. No, not that it's deathly boring to run an amortization table, but that you can really dig yourself a ditch with how much interest you're paying.

Why is this important? In some accounting and property situations, your interest can be written off as deductible. Knowing how much interest you can deduct is important; this ability to deduct will obviously decrease over time. Additionally, your net worth is obviously affected by how much you actually "owe" (liabilities)... not how much you paid off in interest. In the long run, it is always valuable to look at an amortization table before getting into any kind of loan...

In short, amortization is the tedious way of stating the mantra of financial folk world wide: YOU BETTER BE KNOWIN' WHERE YOUR MONEY'S GOIN'.

Matrix multiplication

I'm probably going to be teaching an introductory linear algebra class this year, so I've spent some time trying to come up with an intuitive way to explain matrix multiplication. (Admittedly, half the reason I'm typing this up is to test out this awesome website for putting matrices into HTML!) Almost everyone with a high school education knows how to mechanically multiply two matrices together, but, for some reason, these mechanical instructions aren't always accompanied by an explanation of why you'd ever want to do this, or why it follows the weird rule that it does.

The cleanest, one-sentence explanation of matrix multiplication is that a matrix is just a special kind of function ("build a linear combination of the matrix's columns"), and multiplication of matrices is just the composition of functions: "build a linear combination of the right-hand matrix's columns, and use the result as the input for building a linear combination of the left-hand matrix's columns."

Here is a simple example that may help illustrate this somewhat abstract concept. Suppose you are looking at some kind of iterative process. For example, coupled probabilities of something happening: let's say that, each year, there's a 25% chance of someone living in South Carolina (x) moving to California (y), but only a 1% chance of someone living in California moving to South Carolina. (For simplicity, assume these are the only two options.) In this scenario, you start out in one of the two states: if you're living in South Carolina, then x_0 = 1 and y_0 = 0. If you're living in California, x_0 = 0 and y_0 = 1. After 1 year passes, what are your chances of being in either state, x_1 and y_1? These are linear equations:

x_1 = 0.75 x_0 + 0.01 y_0
y_1 = 0.25 x_0 + 0.99 y_0

After 2 years:

x_2 = 0.75 x_1 + 0.01 y_1 = 0.75 (0.75 x_0 + 0.01 y_0) + 0.01 (0.25 x_0 + 0.99 y_0)
y_2 = 0.25 x_1 + 0.99 y_1 = 0.25 (0.75 x_0 + 0.01 y_0) + 0.99 (0.25 x_0 + 0.99 y_0)

etc. for as many years as you want. Clearly, after a few years, these equations are going to turn into a confusing mess. A way of cleaning this up is to write the coefficients as a matrix:


 
\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} = \begin{pmatrix} 0.75 & 0.01 \\ 0.25 & 0.99 \end{pmatrix} \begin{pmatrix} x_0 \\ y_0 \end{pmatrix}

\begin{pmatrix} x_2 \\ y_2 \end{pmatrix} = \begin{pmatrix} 0.75 & 0.01 \\ 0.25 & 0.99 \end{pmatrix} \begin{pmatrix} x_1 \\ y_1 \end{pmatrix} = \begin{pmatrix} 0.75 & 0.01 \\ 0.25 & 0.99 \end{pmatrix}^2 \begin{pmatrix} x_0 \\ y_0 \end{pmatrix}

where side-by-side matrices indicate that they should be multiplied together following the standard rule of (element i,j of the resulting matrix) = (dot product of row i from the left matrix with column j from the right matrix). The matrix

 
\begin{pmatrix} 0.75 & 0.01 \\ 0.25 & 0.99 \end{pmatrix}
 

is the "transition matrix" for this process. Matrix multiplication is an easy way of encoding an iterative, coupled process. This is one of the reasons the eigenvalue decomposition is so useful: even though it's easier to multiply together lots of these matrices then to work out the iterative process by hand, it's even easier if you can diagonalize the matrix! (The introductory linear algebra class will also cover eigenvalues. I may type up an explanation for that later, too...)

We shall not cease...

This morning I was thinking about explorers. You know, I think I can name a few explorers (thank you, third grade), like Magellan, Lewis and Clark, Columbus, etc. but when I think about these explorers, I never think of "what country did they come from?" I mean, I know the answer to that question for the aforementioned three explorers, but generally I associate explorers with "what did they discover?"

I think that's the way to live. Not to be known for where you come from, but to be known for where you went.

Wednesday, January 13, 2010

The "Burdens of Proof" of Ecological Economics

Once again S blows my mind by lending me these amazing textbooks on the condition that I read them. So, I read them. Anyway, today I got a book on Ecological Economics. That's like manna from academic heaven to me. Well, there is this essay in there by a professor out at UCSB (citation below), and I think it's very interesting; it essentially sets out a list of ecological assumptions that need to be proved. What's so cool about it is that he really "hit the nail on the head" with these assumptions: he mentions so many economical principles that I take for granted. Without further ado, here they are:

The "golden rule" (akin to the rule of entropy according to this essay): "There are no free lunches"

Subsequent rules:
1. The economic world available to the current human population is limited to earth.
2. The first law: We can never do merely one thing
3. The second law: There's no one to throw away to
4. The third law: Population x per capita impact = total human impact on environment
5. Scale effects, though sometimes compensable, are inescapable
6. Cultural carrying capacity is inversely related to standards of living
7. The maximum is not the optimum
8. "The greatest good for the greatest numbers is nonsense"--> I love how he describes this. So I will quote it: "The theory of partial differential equations tells us that we cannot maximize for more than one variable at a time. Since the time honored utilitarian ideal is mathematical nonsense, it must be practical nonsense, also... we must decide whether we want to maximize the total number of human beings on earth, or to maximize their average-- not their total-- well being."
9. Attempts to create perfectly reliable machine-human couplings are inescapably self-defeating.
10. Thou shalt not transgress thy carrying capacity.
11. Every shortage of supply is equally a longage of demand.

I really think that's a great, concise way to sum up a lot of economic assumptions: Credit to the master below:

Hardin, Garrett. 1991. Paramount Divisions in Ecological Economics. Ecological Economics. Columbia University Press. pp. 48-57.

Dead sexy



Photo of the Martian dunes.

Cold Weather Running- 11 years of tips and tricks

While I can't solve the problem below (next on my list after the Riemann Hypothesis, of course), I thought this morning of something that I wanted to post on here. You see, this morning, it was a balmy 16 degrees in Clemson, and I decided to go running. Now, I've read a lot of articles in fine running literature such as "Runner's World" that detail the proper techniques for winter running, and I've got to say, I think that experience with winter running really dictates the proper way to handle it. With this in mind, I am going to share what my 11 years of running experience have taught me about running around when it's very cold outside.

1. If you make it out the door, consider yourself victorious. Yes, you may be able to log a 70 mile week in the summer, but that's much less feasible in the winter. And if you think before heading out the door "must run 11 miles today," it's easy to say "screw it, I'll just sleep." On the other hand, aiming for a four-mile run, that's only looking at 30 minutes of activity-- you can do that easily. The first step to cold weather running is DOING IT.

2. Always overdress. I struggled with this for years. I used to wear shorts in the winter, thinking pants hampered my ability to run fast. Then I realized: cold hampers my ability to run fast. If your quads are too tight to spread out because they are cold, your stride is going to suck. As a general rule, if the weather is less than 50 degrees, I wear pants (if I am racing, I'll extend that down to about 35 degrees, but do a good warm up inside and only go outside at the last minute). The same goes for shirts. I think the best attire is "something dry fit" + "a fleece" + "a windbreaker"... you won't sweat into your fleece, you'll stay warm, and the wind will be blocked. No, a really nice jacket isn't the same as layering-- the layers make friction and air pockets that keep you warmer. Trust me.

3. A good pair of socks is hard to find. You want a pair of socks that won't get wet, but are really thick. So thick they feel funny on your feet. I like skiing socks. If your feet get cold, you'll get cold. Believe me.

4. Mittens>Nothing>Gloves. Gloves suck. Don't use them. When you have nothing on your hands, you'll pull your hands into your sleeves and your fingers will touch, keeping you warm. When you wear mittens, your fingers also touch. But gloves separate your fingers. Macro: think of being in a sleeping bag with a friend or being in sleeping bags next to one another: sharing is warmer.

5. Wait 1.5 hours after sun-up. I don't know why this works, but it is substantially warmer 1.5 hours after sun-up than 1 hour after. Just do it, and thank me.

6. Run sunny roads when it's not windy and run trails when it's windy. Darkness and wind are cold. Avoid them when possible. However, wind is colder than darkness, I think, so if both are an issue, get in the trails. Also, running by lakes, buildings, and bodies of water is good: they radiate heat or something; it is always warmer running by a river in the winter than on a farm road.

7. Out-and-backs: Good and Bad. If you think you are up for it, an out-and-back will guarantee that you'll get in your miles. On the other hand, if it's just too cold, you are going to walk back, and that can be dangerous. Choose your routes wisely, and always have "take out points" along the way where you can bail if you feel awful.

8. When you start to retch because of cold, call it quits. Seriously. If your body's shunting blood from the stomach to keep you warm, that's a sign that it's too cold for the length of run you are doing. You don't need to die of exposure to get in a few extra runs.

9. Consistency is key. I've known plenty of runners who stayed in perfectly good shape running 3 miles six times per week, and plenty of runners who were out of shape running 10 miles twice a week. Length makes you mentally tough, but getting out there makes you fit.

10. Misery really does love company. We used to run "rant Sundays"-- a Sunday long run devoted to complaining-- in the winter when I was in college. Whatever it takes. When everyone is suffering, not everyone suffers as much.

11. Drink coffee/tea before you run, and eat a snack. I don't know why this works, either, but you will be much warmer while you run if you do this. A little warm fuel really helps.

12. Cover your neck. Zip your fleece all the way up, or wear a scarf if your chest gets hot. Guys, grow beards. Girls, wear your hair in a ponytail under your hat so it's against your neck.

13. Black pavement is bad. If the pavement looks black, it hasn't been newly cemented; that's ice. Don't run on it. Also, avoid pinestraw or anything next to a littered soda cup. No need to fall on your butt. Wooden bridges are also bad news.

14. Permafrost hurts your feet; it's easier to run on pavement in the winter because frozen, uneven ground= painful. Even if you can deal, you may get blisters, or you may get PF or ITBS. Not worth it.

Hopefully these will be inspirational for cold weather running.

A (logarithmic) leap of faith

I intended to relax tonight, but somehow, post-Starcraft, my mind ended up wandering back onto research. Funny how that works! Here's something random that I was mulling over: how do you know what form of constraint you should be using in a maximum entropy calculation? The standard variational principle for thermodynamics is maximum entropy: maximizing the entropy, S = -∑_i p_i ln p_i, subject to an average constraint on an experimental observable tells you what your probability distribution will look like at equilibrium. For example, a constraint on the system's average energy would be set as

⟨E⟩ = ∑_i E_i p_i

where the angle brackets denote an average.

"Free energy" is defined as the sum of entropy and energy, or F = S +⟨E⟩, and to maximize the entropy subject to the constraint on the average energy, we set the total derivative dF = dS + β dE⟩= 0, where β is the Lagrange multiplier for the constraint and dE⟩= ∑i Ei dpi. The total derivative is

dF = ∑i ( ln pi + 1 + α + β Ei ) dpi = 0

where α is the normalization multiplier, ∑i pi = 1, so that αi dpi = 0. The resulting probability distribution is exponential:

pi = e-1-α e-β Ei = Q-1 e-β Ei

where Q = ∑i e-β Ei is the partition function.

One assumption built into this calculation is that an average observed quantity implies a linear constraint on the underlying variable, E. For the observed average energy, this is empirically true, but for other quantities an experimental average can imply a nonlinear constraint on the underlying variable: an observed average kinetic energy, for example, constrains E = P^2 / 2m, which is quadratic in the momentum P. This quadratic constraint results in a Gaussian probability distribution (in P), rather than an exponential one:

p_i = Q^{-1} e^{-β P_i^2 / 2m}

This raises an interesting question: for an arbitrary observed quantity, how do we know, a priori, what the form of the constraint on the underlying variable should be? For a multiplicative process, a logarithmic constraint might be the most natural choice. (Ok, so that's not much of an argument. Well, it's late and let's face it, if you couldn't make half-baked arguments on your blog, there wouldn't be too many blogs around, would there?) In this case, for an observed value ⟨ln k⟩, the constraint would be

⟨ln k⟩ = ∑_i ln(k_i) p_i

This results in a probability distribution which follows a power law,

p_i = Q^{-1} k_i^{-β}

The challenge, then, is to decide what the most justifiable form is for the constraint, both in terms of curve fitting, as well as the underlying microscopic explanation. In the case of a logarithmic constraint, is there a solid argument, for example based on the principle that the constrained quantity must be extensive, that certain constraints should be logarithmic, rather than linear?
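As a toy version of that question, here's a minimal R sketch: pick a target value for ⟨ln k⟩ over states k = 1, ..., K (the numbers below are made up) and solve for the multiplier β that reproduces it with the power-law distribution p_i = Q^{-1} k_i^{-β}.

# Solve for the Lagrange multiplier of a logarithmic constraint; numbers are invented.
K          <- 100
target_lnk <- 1.5                          # assumed observed value of <ln k>

constraint_gap <- function(beta) {
  w <- (1:K)^(-beta)                       # unnormalized power-law weights
  p <- w / sum(w)                          # Q is just sum(w)
  sum(log(1:K) * p) - target_lnk           # zero when the constraint is satisfied
}

beta <- uniroot(constraint_gap, c(0.01, 10))$root
beta

p <- (1:K)^(-beta); p <- p / sum(p)
sum(log(1:K) * p)                          # ≈ target_lnk, as required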

Tuesday, January 12, 2010

SAS vs. R... a fight for dominance in the statistical packages world

So I am enrolled in a statistics course that is pretty cool. I will admit that I do not know nearly as much about statistics as I want to know, and I especially struggle with remembering which tests want the P-value to be less than the significance level and which want it to be more in order to reject H-naught (okay, well, saying that, I now remember: Shapiro-Wilk and Levene want it to be more, but I forget in key situations, like, uhm, exams).

Anyway, we've just received our first "homework" and as nerdy as this sounds, I am excited about it... except for one little thing... it's done on SAS.

Now, I can't claim to be an expert in SAS or in R, and maybe someone who is will give me a little tip on why my opinion is complete fail, but I really think R is much more user friendly, especially for someone like me who has very little programming experience outside of making macros in Excel (prior to the great loss of VBA apps in Excel 2008 for Mac... the horror!), making a few visual basic programs that can taunt the user with insults given a certain input, or making annoying pop ups when certain coordinates on a screen are moused over. In other words, I know almost nothing about programming.

For example, I am about to begin a program in SAS for making a correlation matrix. Right, simple, a correlation matrix. Let me show you what that would look like in R.

DATANAME <- as.matrix(read.table('/R/dataname.csv', sep = ","))
# possibly a few specifications about how to read the data included in that first bracket
DATACORR <- cor(DATANAME)   # base R's correlation function is cor(), not corr()
DATACORR

That's it!! That's the whole correlation matrix for R. It will show up immediately after you type in that second DATACORR (R likes to print automatically, oh, how I love R!)

Now let's talk about how we do THE EXACT SAME THING in SAS

DATA DATANAME;
  INPUT VAR1 VAR2 ... ;  /* not going to input vars here - worthless for the rant */
  /* here is where you input all of the relationships between the variables if you need to */
  /* next come the data lines themselves, because SAS really hates reading from files... or you
     can be lazy like me and use Excel's export features to make files that are suitable for SAS...
     and easily adaptable to the variables you want to input */
  DATALINES;
;
RUN;

PROC CORR NOSIMPLE DATA=DATANAME;
  VAR VAR1 VAR2 VAR3 ... ;
  WITH VAR1 VAR2 VAR3 ... ;
RUN; QUIT;

Now, I think we can all see the problem here; as your data set gets bigger and bigger, and it will, incorporating all this stuff into SAS... pain in the butt. For SAS experts, there may be some shortcut to doing this, but let's think about it for a second. I am not a SAS expert, and many people who need to use statistical packages for large data set applications (biologists, etc.) probably aren't programming experts, either... So this seems like a lot of trouble to me.

With that being said, I think a fight needs to go down: SAS vs. R... who will dominate the "Entry-Level Statistical Packages World"? I am obviously biased toward R. Here are some points to think about:

FOR SAS:
- more traditional package, has been around for a long time
- already incorporated into curriculums
- produces standard plots
- labels statistical tests and their results clearly (for example, on the SW test it will remind you that the p-value is the "Pr < W" column)
- can easily run "tests" for equality of B's that help choose model form
- traditional ("old-school") programming syntax

FOR R:
- really sweet graphics
- lots of cool packages to download
- easy commands to use
- reading files is not TOO difficult (it can even work with some Excel files-- which is amazing-- it knows the data entry from the function entry!)
- calculates statistical tests
- new-school syntax
- 3D graphics that look just amazingly sweet (2 comments on the graphics, I know, but you can make just about anything on there! You can layer it with a MAP for goodness sake... I mean... hot skippy, that's amazing!)

So... it's the duel of the century in nerd world. I guess as this class progresses we will see-- will my exposure to SAS build my love for it or will my heart remain devoted to R? More of this epic saga to come.

Monday, January 11, 2010

Mass Carriers and Britain from the Sky


Well, it appears that I am now a bunch of mass carriers... :)

Nothing wrong with that! In a world of computerized battling, I dominate. That being said, this is a world in which hordes of flying centipedes often randomly fly around, shooting green vomit out of their mouths, and armored tanks engage in battles with giant, glowing white men who can cast spells.

In other words, it is exactly like real life.

Well, carriers aside, I am here more to post another useless link that should keep you from doing work for at least the next few minutes:

You may think, well, that's not all that impressive, but one thing I have learned from working with remote sensing is... it's pretty hard to do! You've got to get those coordinates just right. In fact, one of the coolest things I have on my desk right now is a copy of something a friend of mine here at Clemson made for GIS class-- it's a similar image to the Britain one above, but instead it has "Clemson University."

So that impressed me for the day; I've heard also that Britain is currently "iced over" and that this can also be seen on Google Earth. I've been meaning to check this out. Speaking of ice, I had a pretty stellar run today; it was about 24 degrees here and I didn't think that the ice storm had actually affected Clemson, but part of my run was through the back of the forest by the dikes, and there were several trees that were wearing ice jackets. When I looked up over the golf course, some of the lower Appalachians were snow-dusted. One of my favorite things about this place is how well the people know these foothills. I mentioned this afternoon to someone that I had seen the snow-dusting; they asked where; I replied "on the fourth mountain over from Table Rock, and also the long flat one with two peaks." Immediately my respondent knew exactly what I was talking about. I thought during today's run, you know, Clemson isn't the most exciting place in the world, but it's wonderfully authentic. The people who live here know this land in such an intrinsic way, and as many deer as they shoot and as much beer as they drink, I think there's something to be said for that.

In light of the brilliant Pericles quotes included in previous posts, I feel I should also add in a particularly awesome quote... and may I be one of the few Christian folk out there who thinks that Nietzsche, yes, Nietzsche, says one of the most beautiful lines I have ever heard:

"I still live, I still think: I still have to live, for I still have to think. Sum, ergo cogito; cogito, ergo sum. Today everybody permits himself the expression of his wish and his dearest thought; hence, I, too, shall say what it is that I wish from myself today, and what was the first thought to run across my ehart this year-- what tought shall be for me the reason, the reason, warranty, and sweetness of my life henceforth. I want to learn more and more to see as beautiful what is necessary in things; then I shall be one of those who makes things beautiful. Amor fati: let that be my love henceforth! I do not want to wage war against what is ugly. I do not want to accuse. Looking away shall be my only negation. And all and all and on the whole, someday I wish to only be a yes-sayer."

Okay, well I hope you enjoyed, and at least maybe if you hadn't heard that Nietzsche quote you thought, well, that's an interesting quote.

By the way, Rice Chex are now gluten free, and that's a big win.

Friday, January 08, 2010

When Street Luge is just not hardcore enough

So, the other day I was curious-- what are the most extreme sports out there?-- My list, prior to being informed, would have been something like this:

1. Ultramarathoning
2. Kite-boarding
3. Elk hunting
4. Rugby (I mean, you've seen those dudes after a game, that's frickin' hardcore)
5. BJJ
6. White Water Kayaking
7. Mountain Boarding
8. Trick Bikes (like on the X-games, those bikes that do tricks similar to skateboards/roller skates)
9. Free climbing
10. Free running (parkour, I think, is the formal name)
11. Luge (in all forms... street, snow, ice, fire, whatever)....
12. "Ghost Ride the Whip" (may not actually be a sport but... come on, you've gotta love some ghost riding... for those of you who don't know what ghost riding is, please youtube it... hours of entertainment await)
13. Being an astronaut (not sure if that's a "sport" but I'd do it for recreation!)
14. Bowfishing (Hardcore points are owed to bowfishers-- They are shooting a FISH with a BOW AND ARROW!!-- the physics of doing that alone boggle me, not to mention the fact that you have to be able to shoot a bow and arrow extremely accurately)
15. Speed skating

Anyway, those seemed pretty awesome to me... until I looked on my good friend, the World Wide Web, and found out that I have a really sissy perspective on the term "hardcore."

Well, I found this:

Okay, so not all of it is that special, but check out sport number 2: Limbo Skating... I mean... that's ridiculous. Everything in my life that I thought was hardcore has just become equivalent to eating a pile of marshmallows.

I'm going to go make mass carriers now so that I feel like a bad @$$ again.