Sunday, July 23, 2017

Sunday: I don't know.

Mostly I didn't know what I wanted to eat.  I eventually decided to go to Kono's, but then as I was driving, I didn't want to stop, so I just kept going, deciding along the way to do the complement of yesterday's dinner.
I got the super big one, because my plan is to eat part of it for dinner.  My tummy is grumbly, so that's probably going to be right after I finish this post.
Then I stopped at the park there.

According to the calculator, the tide level was about two and a half feet higher than normal.  That explains why this area that's never flooded was flooded.

Finally, I was reading a book, and it made the claim that if a trait occurs in 1 in 200 people, then knowing 200 people means you know somebody with it.  "That doesn't sound quite right," I thought to myself.  This is basically the same binomial distribution problem from earlier, just looking at the N_obs = 0 case.  So, plotting this up for three different probabilities to see how P_zero changes with the number of trials:
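The plot itself didn't survive here, so this is a minimal sketch of how it might be generated, assuming matplotlib.  The 1/50 and 1/200 probabilities come from the post; the third curve's 1/1000 is my guess:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import numpy as np

# Probability of zero successes in N independent trials: P_zero = (1 - p)^N
n_trials = np.arange(1, 2001)
for p in (1 / 50, 1 / 200, 1 / 1000):  # 1/1000 is a guess for the third curve
    plt.plot(n_trials, (1.0 - p) ** n_trials, label=f"p = 1/{round(1 / p)}")
plt.xlabel("N_trials")
plt.ylabel("P_zero")
plt.legend()
plt.savefig("p_zero.png")
```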
Side note, I do not like how snek handles grids.  You can plot everything using the plot object, but you cannot set the grid spacing.  That's an axis parameter, and you have to create an axis object by creating a subplot in the plot, even if you only want to have one plot.  Why is there this level of obscuration?
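For what it's worth, a sketch of the workaround being complained about, assuming matplotlib: grid spacing is controlled by tick locators, which live on an Axes object, so you have to get one even for a single plot.

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator

fig, ax = plt.subplots()  # one plot, but you still need the Axes object
ax.semilogx(range(1, 100), [1 / n for n in range(1, 100)])
# Grid spacing comes from the axis tick locators, not from plt.grid()
ax.yaxis.set_major_locator(MultipleLocator(0.1))
ax.grid(True, which="both")
fig.savefig("grid.png")
```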
Anyway, for the 200 case, knowing 200 people only means there's a ~37% chance of not knowing someone with the trait.  That's pretty large.  Then I thought, "this is just the same curve, scaled along the trials axis, isn't it?"
Yep.  This only looks reasonable in the semilog scale.  It'd be nice to have more gridlines, but nope.
So the 50% probability is at about 69% of the (N_trials * Pexp = 1) value.  These are specific values, and there should be some way to say "this is exactly equal to something".  I spent way too long trying to sort it out in terms of a Gaussian approximation to the binomial distribution before realizing that it's actually a Poissonian approximation I should be using.  Whoops.  As N_trials becomes big, the mean goes to N*P, with variance N*P*(1-P).  As P goes to zero, (1-P) goes to unity, the mean equals the variance, and both are equal to N*P.  This becomes the Poissonian lambda, so at the N*P = 1 point, the probability of observing zero is just 1 / e.  The 50% point comes from inverting the CDF, and I don't have an inverse upper incomplete gamma function, but estimating from the gamma function I do have puts the point around 0.69315, which is just ln(2): exp(-lambda) = 0.5 gives lambda = ln(2).  Cool.  Answers that question no one cared about.
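The two claims above, the 1/e value at N*P = 1 and the 50% point at N*P = ln(2) ≈ 0.69315, can be checked numerically with nothing but the standard library (no assumptions beyond the post's own numbers):

```python
import math

p = 1 / 200
# Exact binomial: chance of knowing nobody with a 1-in-200 trait after 200 people
p_zero = (1 - p) ** 200
print(p_zero, math.exp(-1))     # ~0.3670 vs 1/e ~ 0.3679

# Solve (1 - p)^N = 0.5 for N; N * p should land near ln(2)
n_half = math.log(0.5) / math.log(1 - p)
print(n_half * p, math.log(2))  # ~0.6914 vs ln(2) ~ 0.6931
```

The small gaps between the exact and approximate values shrink as p gets smaller, which is exactly the Poissonian limit being described.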


  • If they succeed in their repeal, they will bankrupt some people and kill others.  The two sets are not exclusive.
  • I chose the 1/50 above based on this article's estimate of the chance of catching a legendary after the battle, because this is literally the exact same binomial problem popping up today.  2%.  I'm not going to bother with it if the rate is that low.  The comments here sum up my thoughts pretty well.  This isn't worth it.
  • A cat story.
  • Hulk and Valkyrie.
