New blog

2 01 2012

I’d allowed my blogging to fall off last year, but for a few reasons I think I will have better luck this year. (Plus I’m making it a New Year’s resolution.) In the interest of starting with a clean slate, though, I’ve registered a new blog and new domain. Please visit it if you would like to read my writing: Traveling Lands Beyond





Fallacy

15 02 2011

I don’t know if there’s a word for this, but there should be: the fallacy that because you (or someone you know) reached A by going through B, you think that B is the only way to have achieved A. Even if you consciously know that there’s more than one way to reach A, your frame of thought is biased by the fact that you got there through B, not through C or D. When thinking about how others might reach A, or how you might reach it again, you may not give fair consideration to C and D. When making decisions or guiding others’ decisions, you choose B as the “best” or “safest” way to get to A because you did it yourself (or you know someone who did it), even if your evidence really only amounts to one data point. It’s a bit like the post hoc fallacy, but here we’re not disputing causality.

I am particularly thinking of this fallacy as it applies to large-scale, relatively non-repeatable events in life, like your choice of friends, your spouse, your career accomplishments, the way you raised your kids, etc. I think people tend to look for explanations where none may exist and may construct designs after the fact when none existed. Even people who don’t believe in God succumb to a tendency to write causality and narrative into their lives. There’s nothing wrong with this – only the most robotic of people wouldn’t imagine their lives as a story from time to time – but it may impair judgment, especially because it’s the relatively big, non-repeatable life decisions in which this fallacy is most likely to come up. Practice thousands of free throws and you’ll figure out the best way to stand and release the ball. But if you’ve never shot a basketball in your life and now you’ve got to make one big basket and you know someone else threw it like this and made it, you may make far too great of an effort to mimic the way that person did it.





The rules we follow

1 02 2011

I just finished reading No Country for Old Men, and in a book full of pithy one-liners, one stands out in my mind. Anton Chigurh is conversing with a man he is poised to kill, pointing a shotgun at him, and he asks him, “If the rule you followed led to this of what use was the rule?” The meaning of “rule” here could be anything from a general principle of life to his target’s most recent specific decision.

It’s a good question, a piercing question, and one worth asking ourselves even in situations where we don’t find ourselves staring down the wrong end of a gun. (Chigurh’s point, in part, is that if you think about this more regularly you’ll be less likely to find yourself in those situations.) I would break it down into the following sub-questions when posing it to myself:

  1. What were the rules I followed to get here?
  2. Am I happy here? What about where I’ve ended up is good and what is bad? (It would probably be useful to enumerate here, maybe like an accounting ledger.)
  3. How much of what got me here was luck, either good luck or bad luck?

And if 2 is unsatisfactory with respect to 3, then I’d better think about changing something about 1.





Musical chairs, part 2

30 01 2011

Please refer to the previous post for the simpler version of the problem below.

Question: let’s make things a little harder and assume that we have N people and N chairs, and that K people other than the last person sit randomly. (Assume N > 2 again, and assume K < N.) Anyone other than these K people sits in his own seat if it is unoccupied or a random seat otherwise. What is the probability that the last person will get his own seat? Again, please read no further if you’d like to work on this on your own.

Answer: last time I described the mathematical technique of induction to solve a problem of the form P(N). We now have two variables in our problem, so let’s denote the probability in question in this problem as P(N, K). This problem can also be solved by a more generalized form of induction. The idea remains the same: we prove a statement about P(N, K) for some simple base cases, and then we prove a recursive case where we show that, assuming the statement holds for certain values of N and K, the statement also holds for other values of N and K.

Our previous problem was a specific case of this problem where K = 1, and we proved that P(N, 1) = 1/2 for all N. (I also assumed then that it was only the first person who would sit randomly, but if someone later were to sit randomly instead, it would effectively be the same problem with a smaller value of N.) There, we proved that, given P(N, 1) = 1/2 for all N from 2 to n, P(n + 1, 1) = 1/2 as well. Because our recursive case allowed us to “climb” from any n to its successor n + 1, we were then able to say that P(N, 1) = 1/2 for all N greater than or equal to our base case of N = 2. In the multivariate case here, we likewise need to make sure our base cases and recursive cases are defined such that the recursive case allows us to “climb” from the base cases to any value of N and K.

The answer to this problem, and the statement which we will prove, is that P(N, K) = 1/(K + 1). The first set of base cases we will prove is K = 0. This is trivial, since if no one sits randomly, everyone of course sits in their assigned seats, and the probability that the last person sits in his own seat is indeed 1/(0 + 1) = 1. The second set of base cases we will prove is N = K + 1. In these cases everyone but the last person sits randomly, and so each chair is equally likely to be available for him at the end. Thus the probability that his own chair is the one available is 1/N = 1/(K + 1).

As a visual aid, let’s draw a grid where we illustrate the values of N and K for which we have proved the statement:

The gray squares represent cases where K >= N, and we don’t care about those. The blue squares represent our base cases; the lower row is the first set of base cases and the diagonal is the second set of base cases. The white squares represent unproven cases. Our task is now to prove a recursive statement that will allow us to “climb” from our base cases to any white square.

Consider P(n, k) for some unproven case (n, k). Let’s think about where the first person might sit. As before, there are three cases to consider. First, there is a 1/n chance that he will sit in the last person’s chair. In this case, the probability that the last person gets his chair is of course 0.

Second, there is a k/n chance that he will sit either in his own chair or one of the chairs belonging to the other k – 1 people who are going to be sitting randomly. Note that it doesn’t matter in this case whether he actually sits in his own seat or not. If he sits in a seat that is not technically his, we can for all practical purposes reassign the person who was supposed to sit there into seat number 1, because if that person ends up in seat 1 then everyone else’s seating will remain undisturbed by this switch. Under this scenario, we will be left with n – 1 people/seats remaining and k – 1 people left who will be sitting at random. The probability that the last person gets his seat in this case is therefore equal to P(n – 1, k – 1).

The remaining possibility is if the first person sits in a seat that is assigned to neither the last person nor to any of the first k people. There is an (n – k – 1)/n probability that this happens. In this case, there are also n – 1 people remaining but k people left who will be sitting at random, since the person who was supposed to sit where the first person sat will now be a random-sitter. So the probability that the last person gets his seat in this case is P(n – 1, k).

The probability P(n, k) can therefore be written as the probability of each of the three outcomes above multiplied by the probability that the last person gets his seat under each outcome:

P(n,k) = \frac{1}{n} \times 0 + \frac{k}{n} \times P(n-1,k-1) + \frac{n-k-1}{n} \times P(n-1,k)

P(n,k) = \frac{k}{n} \times P(n-1,k-1) + \frac{n-k-1}{n} \times P(n-1,k)

Let’s presume that P(N, K) = 1/(K + 1) for N = n – 1 and K = either k or k – 1. Then this expression simplifies to

P(n,k) = \frac{k}{n} \times \frac{1}{k} + \frac{n-k-1}{n} \times \frac{1}{k+1}

P(n,k) = \frac{1}{n} + \frac{n-k-1}{n(k+1)}

P(n,k) = \frac{k+1}{n(k+1)} + \frac{n-k-1}{n(k+1)}

P(n,k) = \frac{n}{n(k+1)}

P(n,k) = \frac{1}{k+1}

This presumption, combined with our base cases, is enough to prove that P(N, K) = 1/(K + 1) for any N ≥ 2 and 0 ≤ K < N. We can “climb” from our base cases to any value of (N, K) necessary; we can use this recursion to prove that the statement holds for all K < N for N = 3 (the base cases already cover N = 2), and then N = 4, and so on. This is easiest to see on our diagram from before. The below example shows us using our already proved base cases to complete the proof for all N = 3 cases:

Then we can prove it for all N = 4 cases:

And we can repeat this process to prove it for all N and K.

Computer programmers will recognize the parallels here between induction and dynamic programming, a technique by which you solve a problem by expressing it in terms of smaller sub-problems and solving those. To prove P(N, K) = 1/(K + 1), we start with small base-case values for N and K and build our way up to prove it for whatever value of N and K is desired.
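The recurrence above translates directly into code. Below is a short Python sketch (the function name is my own, not from the post) that memoizes P(n, k) in exactly the dynamic-programming spirit just described, and checks it against the closed form 1/(K + 1):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, k):
    """Probability the last of n people gets his own chair when the
    first k people sit at random (the recurrence derived above)."""
    if k == 0:
        return 1.0                # no one sits randomly
    if n == k + 1:
        return 1.0 / n            # everyone but the last sits randomly
    # First person takes the last person's chair (prob 1/n): contributes 0.
    # He takes one of the k random-sitters' chairs (prob k/n): P(n-1, k-1).
    # He takes anyone else's chair (prob (n-k-1)/n): P(n-1, k).
    return (k / n) * p(n - 1, k - 1) + ((n - k - 1) / n) * p(n - 1, k)

# The closed form says p(n, k) == 1/(k+1) regardless of n:
for n in range(2, 30):
    for k in range(0, n):
        assert abs(p(n, k) - 1 / (k + 1)) < 1e-12
```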






Musical chairs, part 1

30 01 2011

Question: (if I recall correctly, this was originally presented to me by a friend as a D.E. Shaw interview question) suppose you have 100 chairs and 100 people, each of whom is assigned his own chair. Now let’s suppose one person sits down in a random chair. Every subsequent person, one at a time, either sits in his assigned chair if it is empty, or sits in a random chair if it is occupied. What is the probability that the last person will sit in his assigned chair? Please read no further if you would like to work on this on your own.

Answer: the proof I know of to solve this problem uses a mathematical technique called induction. For those not familiar with it, induction involves proving something is true for all N by proving a “base case” for the smallest value of n that we care about and then proving a “recursive case” where we prove, given that the statement is true for all values of N from the base case up to n, that the statement is also true for n + 1. If we can prove both of these, we’ve effectively proved the statement for all N (greater than or equal to the base case) because we could start with the base case and then apply the recursive case repeatedly until we got to whatever value of N we wanted.

In the case of this problem, the answer is 1/2 for all N greater than 2, N being the number of people/chairs involved. (So the answer would still be 1/2 for 3 chairs, 99 chairs, or a million chairs.) Let’s denote the probability in question for N people/chairs as P(N). Our base case for this proof is to prove that P(2) = 1/2. It’s pretty obvious that this is the case; the first person has a 50-50 chance of sitting in either chair, so the second and final person has a 50-50 shot of having his own chair left over to sit in.

For our recursive case, we must prove that P(n + 1) = 1/2 under the supposition that P(N) = 1/2 for all N from 2 to n. The first person who sits down has a 1/(n + 1) chance of sitting in his assigned chair, in which case everyone else will also sit in their assigned chairs and the last person will get his assigned spot with probability 1. There is also a 1/(n + 1) chance that the first person will sit in the last person’s spot, in which case the last person will not get his chair no matter what.

Finally, there is an (n – 1)/(n + 1) chance that he sits in one of the other chairs. In that case, everyone will sit in his assigned chair until we reach the person who was supposed to sit in the chair occupied by the first person; that person will then sit in a random chair just like the first person did. For all practical purposes at this point, he may as well be assigned to the first person’s chair, because if he sits there then everyone else remaining gets his assigned spot. And just like the first person, he could either sit in his “own” chair (person 1’s chair), the last person’s chair, or one of the other chairs. The problem is the same as it was before, just for some smaller value of N. Effectively we’ve just kicked the can down the road. But we’ve done so in a helpful manner, because we’re operating under the presumption that the probability for all these smaller values of N is 1/2.

The probability that the last person gets his chair, P(n + 1), is the probability that each of these three scenarios occurs (first person sits in his own chair, the last person’s chair, or in someone else’s chair) multiplied by the respective probability that the last person gets his chair under each scenario:

P(n+1) = \frac{1}{n+1} \times 1 + \frac{1}{n+1} \times 0 + \frac{n-1}{n+1} \times \frac{1}{2}

P(n+1) = \frac{1}{n+1} + \frac{n-1}{2(n+1)}

P(n+1) = \frac{2 + n - 1}{2(n+1)}

P(n+1) = \frac{n + 1}{2(n+1)}

P(n+1) = \frac{1}{2}

So we’ve proved both the base case and the recursive case, and therefore the probability is 1/2 for all values of N, including 100.
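As a sanity check on the induction, a quick Monte Carlo simulation estimates the probability directly (a Python sketch; the function name and trial count are my own choices):

```python
import random

def last_gets_own_seat(n):
    """Simulate one round: person 0 sits at random; every later person
    takes his own seat if free, otherwise a random free seat.  Returns
    True if person n-1 ends up in seat n-1."""
    free = set(range(n))
    seat = random.choice(tuple(free))   # person 0 sits randomly
    free.remove(seat)
    for person in range(1, n):
        choice = person if person in free else random.choice(tuple(free))
        free.remove(choice)
        if person == n - 1:
            return choice == person

trials = 20_000
hits = sum(last_gets_own_seat(100) for _ in range(trials))
print(hits / trials)   # should hover around 0.5
```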





Tidal locking

21 01 2011

It’s well known that the same side of the Moon always faces the Earth. When I was little and I first heard this, I thought it was a crazy coincidence, but later I learned that it’s a gravitational effect called tidal locking and that a lot of the moons in the Solar System are tidally locked with respect to their planets.

Today I thought through some of the consequences and read more about tidal locking on the Internet. One of the things I wanted to see is whether a tidally locked moon orbiting a planet also causes the planet’s rotation to slow down, and the answer is yes. According to this presentation from the Ohio State astronomy department and this NASA website, Earth’s day is getting longer by 0.0023 seconds per century, although both note that this exact quantity is variable. On a geological timescale that’s a lot. The dinosaurs died off about 650,000 centuries ago, and if we assume (probably wrongly, but I’m just trying to give perspective on the magnitude of that number) that the 0.0023 has been a good average all this time, then the Earth day was 1,495 seconds shorter then than it is now, or almost 25 minutes. Someday, working “24/7” just won’t mean what it used to mean (although workaholics have a head start, because the Earth day is really just over 23 hours and 56 minutes).
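The back-of-the-envelope arithmetic is easy to verify (variable names below are just for illustration):

```python
# 0.0023 s/century of day lengthening, per the figures cited above,
# assumed (probably wrongly) to be a good average over geological time.
s_per_century = 0.0023
centuries_since_dinosaurs = 650_000

shorter_by_s = centuries_since_dinosaurs * s_per_century
print(shorter_by_s)        # ≈ 1495 seconds
print(shorter_by_s / 60)   # ≈ 24.9 minutes
```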

Also, I learned two interesting things about Pluto and its moon Charon. One is that they are comparable enough in size that they are tidally locked with respect to each other, so the same side of Pluto always faces the same side of Charon. (Wikipedia’s page on Charon includes an artist’s rendering of the surface and Charon looks just huge! Imagine living on a planet where the moon dominated the sky like that.) The second is that Charon is not necessarily pronounced “Karen” like the Styx ferryman, but “Sharon,” because the scientist who discovered it, Jim Christy, named it after his wife Charlene; Christy himself pronounces it “sh.”





Still alive

20 01 2011

I recently learned that there is a sequel to Portal coming out in early 2011. The original Portal was a fantastic computer game, breathtakingly original and thoughtful. For the uninitiated, the object of each level is simply to get to the goal. You have a gun that you can use to place portals in walls, one orange and one blue, and if anything enters one it comes out the other, at the same orientation and speed with which it entered. You end up using the portals to get over walls, cross pits, trigger switches from a distance, etc. The game is completely enjoyable even if you don’t play computer games. It’s also very cheap to buy at this point so I recommend that you (all of you) go out and get it. The only downside is that it is very short, and I hope that the sequel is longer.

Once you beat the game, several of the levels are available for replays with challenges where you have to clear a level using only a certain number of portals, or within a certain amount of time, or taking only a certain number of footsteps. I never solved all of these. One of the most fun gaming experiences I’ve had was the first fewest-portals challenge. Of course the first time I played through the level I solved it in trial-and-error fashion and shot a bunch of extra portals in the thinking process. But the method I’d used, even if I were to replay the level and shoot only the portals I thought I’d needed, would still have failed to obtain the “bronze medal” ranking. So I thought hard and eventually found a better way that got me the bronze.

But I had to use one or two fewer portals to get to silver. After already having trimmed down all the fat I could cut to get to bronze, I was initially at a total loss. So I thought harder and longer and tried more things, and after a lot of experimentation I found a third, fairly different method that got me to the exit. The gold medal was one fewer than silver, and again I was at a loss, so once again I thought hard and tried out new things. Most of these were total failures, but eventually I found a fourth way, substantially different from any of the methods I’d tried before, that finally got me the top rank! The game really gave me a sense of accomplishment and made me think and analyze in the same way that I’d have to think about and analyze many other challenging problems in life.

For another classic and crazy example of this kind of thinking (this won’t appeal to non-gamers nearly as much), have a look at Quake Done Quickest.





Musical list: where nothing, nothing ever happens

19 01 2011

(In which I produce a themed list of songs from my music collection, and talk briefly about each one)

Hüsker Dü – “Girl Who Lives On Heaven Hill” – A pounding rock song, with a straightforward drum beat, simple guitar line, and the Hüskers’ trademark fuzzy blare. Grant Hart serves up some forceful, declamatory vocals that get increasingly frenzied and incoherent towards the end of the song. You get some pretty fantastic screaming in the third verse. I think some people get too easily put off by the fuzz and the low-in-the-mix vocals, neither of which make for a very radio friendly sound. This band is the reason why I’ve memorized that ü is typed alt-0252.

Pavement – “Heaven Is A Truck” – Lounging and lazy, the sound of a weekend morning when you’re sitting around in a comfortable flannel that only has a few washes left before it falls apart. Nonsense lyrics common amongst Pavement songs: “I know that sharks, they don’t have wings.” Huh? I like the piano line that plays over the chorus, it really gets that yawning, shambolic feeling going. I consider Crooked Rain, Crooked Rain to be one of the best albums I own.

Elliott Smith – “St. Ides Heaven” – Smith made an art out of the little sounds that creep in between and around the notes of a song. Some singers turn away from the mic to breathe but I’m inclined to think that he leaned in closer. Also he lets his fingers run along the frets when he moves his left hand and you hear the clack of the strings against the metal. Smith is thought of more as a man and his guitar, but the part I like the best about this song is the soft cymbal tapping into the chorus. Also I like the line “The moon is a light bulb breaking.”

Talking Heads – “Heaven” – The chords and cadence of this song do a great job of capturing the feeling of disappointment. It’s the sound of repetition, of a letdown. Lyrically excellent, and you can pick out literary devices from this song as you do with formal English poetry. Everything comes in twos: David Byrne says one thing, and then he says pretty much the same thing again. “Could be so exciting, could be so much fun,” or “It will not be any different, it will be exactly the same,” but most of all the fact that the chorus is the same line repeated twice, and each line itself sub-repeats the clauses within the single sentence. “Heaven, heaven is a place, a place where nothing, nothing ever happens.” I once heard a busker playing this in Harvard Square and gave him some change.





Conservatism, the individual, and state-federal power

17 01 2011

One of the hallmarks of American conservatism, at least as I see it, is the claim that it prioritizes the protection of individual liberties above the power of government. But in its implementation in the Republican Party, the focus appears to be on protecting the powers of state government from the federal government, not individuals from government as a whole. It might be selection bias on my part – maybe I pay much closer attention to federal politics than to the politics of varied states of which I am not a resident – but I don’t think nearly as many Republicans spend time worrying about state governments encroaching on individuals as worry about the federal government encroaching on the states.

And on social issues like religious freedom, drug laws, and gay marriage, I think the Republicans are generally on the anti-liberty, pro-government side of the issue. To use the example of gay marriage, an argument I’ve heard from conservatives is that the states should individually have the power to define marriage, not the federal government. But why stop there; why not define it county by county, or town by town? Why aren’t conservatives worrying about the state’s powers encroaching on the rights and freedoms of individuals, rather than the federal government’s powers encroaching on the rights and freedoms of states? The state always seemed to me to be a fairly arbitrary point in the political hierarchy to throw up the bulwarks.

I do understand why someone might take this approach from a judicial perspective. The Constitution is written with a mind to federal powers versus state powers (note in particular the 10th Amendment, and also the Articles of Confederation, which the Constitution superseded). If you take a strict interpretive approach to the law – which I don’t personally, but which I can respect – then you will likewise need to keep a mind to federal-state legal boundaries. But from a legislative and executive perspective, where we try to decide the proper rules of law for the country we’d like to live in rather than solely to interpret preexisting law, I don’t think it makes sense to let our thinking be bound in this manner. I regard the emphasis on federal vs. state power as more a product of history, and as technology and growth shrink our world, local boundaries blur and fade over time. The law is our servant, not our master, and the fact that laws were cast in a certain way over 200 years ago should not hold us back from making good decisions about the law for the next 200 years.

As a liberal and a proponent of many of the Democratic Party’s positions, I can’t really say in good faith that “I don’t mean to pick on the Republicans,” because I kind of do. But I should point out that the Democrats have their own inconsistencies as well. In a two-party democracy like ours, each party cannot possibly attract about half of the US population and yet be an internally consistent, systematically sound platform of political views. The parties are coalitions born in no small part from practical necessity.





Warrants and call options

17 01 2011

Both a warrant and an equity call option give you the right to purchase a company’s stock at a certain point in time at a contractually determined strike price. The difference is that a warrant faces the company itself, whereas a traditional call option faces some other participant in the market. Exercising a warrant results in the company issuing more stock out into the public (increasing the float) and therefore dilutes the value of everyone else’s stock.

For example, suppose the market capitalization of a company is $100mm: 1mm shares worth $100 each. If you own call options on 250k shares struck at $50 and they expire today, you will surely exercise them. One or more market participants out there are obligated to deliver to you 250k shares for $50 a share, and since they’re worth $100 each, your position is worth $12.5mm. If you owned warrants struck at $50 rather than call options, you would still exercise them. But if the market is aware that you own these warrants, you could argue that the price of the underlying stock should fall to $80 after you exercise, because the float has increased from 1mm shares to 1.25mm shares on an otherwise identical company. So the position should only be worth $30 per share in this case, or $7.5mm.

If the market only thought things through to this depth and not further, then a savvy trader who owned the warrants would sell the stock for $100 a share and then exercise his warrants, leaving him with a profit of $50 per share anyway. But the market isn’t that easy to fool. As the stock price rallies and it becomes more likely that the warrants will be exercised, the market will suppress the price of the stock in accordance with the expected increase in the float associated with the warrants. The market value of the overall company will still fluctuate the same way as it did before, but the stock price will not fluctuate one-for-one with the overall market value. (Although of course a company that has issued warrants now has a liability on its books that grows as its stock price grows. It’s another complication for a fundamental valuation of a company to take into account.)

So in the above example, if the market values a company at $100mm and there are 1mm shares outstanding and 250k deep in-the-money warrants outstanding, the market should price the stock at $80, not $100, because the float is effectively 1.25mm shares. Before and after exercise there will actually be no change in the stock price, since the market has already priced in full warrant exercise. And the presence of the warrants, by suppressing the price of the stock, also suppresses the value of any traditional call options. Both the warrant and the call option are worth $30 per share at the time of exercise, given the $80 stock price.
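The dilution arithmetic can be sketched in a few lines of Python (the function and parameter names are my own). One refinement the simplified example glosses over: on exercise the company also receives the strike price in cash, which partly offsets the dilution; the optional flag below shows both treatments:

```python
def diluted_price(mkt_cap, shares, warrants, strike, include_proceeds=False):
    """Share price once the market treats deep in-the-money warrants as
    exercised.  The simplified example ignores the strike cash the company
    receives; pass include_proceeds=True to count it."""
    cash_in = warrants * strike if include_proceeds else 0.0
    return (mkt_cap + cash_in) / (shares + warrants)

# The example above: $100mm company, 1mm shares, 250k warrants struck at $50.
p = diluted_price(100e6, 1e6, 250e3, 50)
print(p)       # 80.0, as in the example
print(p - 50)  # 30.0 per share of warrant value at exercise

# Counting the $12.5mm of strike proceeds raises the diluted price:
print(diluted_price(100e6, 1e6, 250e3, 50, include_proceeds=True))  # 90.0
```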