Take the data vector and run a k-means scan on it to determine candidate group means. Then, for each candidate mean, calculate the probability that it's a good representation of the data as P = prod(1 - normcdf(abs((X - K_mean) / sigma))). That gives you a weight for each of the candidate means, telling you which one is dominant. Then calculate per-point weights proportional to each point's probability under that best K_mean times the weight from the data value's own variance. This should sharply shrink the weights of points that don't match the best K_mean, decreasing the effect of outliers. Values close to the best K_mean (which, naively, should land near the median) still get slightly different weights, but they all end up weighted roughly the same. This should give a weighted mean that is fairly resistant to outliers.
This should be pretty easy to calculate, so that's nice. My only concern is that it breaks down as the fraction of outliers goes above 0.5. Then the K_mean is hard to select, and you start weighting the components equally. I guess that's the right way to do it, as when you have no clue about the answer, you should gradually return to a regular weighted mean.
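Roughly, in Python (a quick sketch of what I mean, assuming each data value comes with its own sigma; the choice of k=3 clusters and the function name are just placeholders):

    import numpy as np
    from scipy.cluster.vq import kmeans
    from scipy.stats import norm

    def robust_weighted_mean(x, sigma, k=3):
        # Outlier-resistant weighted mean via k-means candidate centers.
        x = np.asarray(x, dtype=float)
        sigma = np.asarray(sigma, dtype=float)

        # 1. k-means scan for candidate group means (1-D data, so one column).
        centers, _ = kmeans(x.reshape(-1, 1), k)
        centers = centers.ravel()

        # 2. Score each candidate: P = prod(1 - normcdf(|x - center| / sigma)),
        #    done in log space so the product doesn't underflow.
        tails = norm.sf(np.abs(x[None, :] - centers[:, None]) / sigma[None, :])
        log_p = np.sum(np.log(tails + 1e-300), axis=1)

        # 3. The dominant center is the one with the best score.
        best = centers[np.argmax(log_p)]

        # 4. Per-point weights: agreement with the best center times the usual
        #    inverse-variance weight, so points far from the best center get crushed.
        w = norm.sf(np.abs(x - best) / sigma) / sigma**2

        # 5. Weighted mean with those weights.
        return np.sum(w * x) / np.sum(w)

Feeding it something like robust_weighted_mean(values, errors), where most of the values cluster and a couple are wild, should pull the answer toward the cluster rather than the outliers.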
Anyway, math.
Birthday Panda looks terrified.

Watch out, Bear! That's probably hot!
I had the idea to reheat my chicken saltimbocca en papillote, which worked out well. It kept the moisture in and basically steamed the chicken back up to temperature, while allowing the bottom to warm up directly from the cookie sheet. Good idea, and I think I'll do it again tomorrow. Another good idea was spooning the sauce from yesterday into the bun, wrapping that in foil, and letting it steam as well. I didn't bother separating the fat out, making it like buttering the bread (with a bit of chicken jello for flavor). It helped keep the bun pliable when I was loading it with chicken slices. I do think I'll need to double-check the sage before cooking. Today's chicken was lacking that bright sage flavor that I want to cut through.