
I’ve decided to combine my weeklies for these two weeks because I’ve had to spend a lot of time traveling (Germany -> Strasbourg -> Paris -> Montpellier), and now I have a two-week block of all-day lectures for my Erasmus program.

What I did

  • The last two weeks have been full of classes and traveling. My friends at Williams are moving in for next semester, and the Germans are still giving lectures for the spring semester. Nevertheless, I’ve continued my Bayesian statistics course on Coursera.

What I learned

  • MLE = Maximum Likelihood Estimate. MLE is sort of like running a probability density in reverse: instead of computing how likely some data is given known parameters, you take the observed data and find the parameter value that is most likely to have produced it (a short sketch follows this list).
  • Expectation values (and variances) for different distributions, with a quick simulation check after this list:

    x ~ Bernoulli(p): \(E[x] = p\), \(\mathrm{Var}[x] = p(1-p)\)
    x ~ Binomial(n, p): \(E[x] = np\), \(\mathrm{Var}[x] = np(1-p)\)
    x ~ Exponential(lambda): \(E[x] = 1/\lambda\), \(\mathrm{Var}[x] = 1/\lambda^2\)
    x ~ Uniform(a, b): \(f(x \mid a, b) = \frac{1}{b-a}\) for \(a \le x \le b\), so
    \(E[x] = \int_a^b x\,f(x)\,dx = \frac{1}{b-a} \cdot \frac{1}{2}(b^2 - a^2) = \frac{a+b}{2}\)

  • Bayesian inference allows you to answer questions like “what’s the probability this is a trick coin?”, taking into account how likely you think it is that you were given a trick coin in the first place, the data from a few coin flips, and how likely a trick coin is to come up heads.
  • Prior vs. posterior. The prior is something like: I think there’s a 60% chance he gave me a loaded coin. The posterior is what you believe after seeing the data: the coin came up heads 4 times in 5 flips, so now you might say there’s something like a 90% chance the coin is loaded (a small numerical version of this update follows this list).
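
To make the MLE idea concrete, here’s a minimal sketch (my own example, not from the course) that estimates the Bernoulli parameter p from some made-up coin flips, once by a brute-force grid search over the likelihood and once with the closed-form answer (the sample mean):

```python
import numpy as np

flips = np.array([1, 0, 1, 1, 0, 1, 1, 1])  # made-up data: 1 = heads, 0 = tails

def log_likelihood(p, data):
    """Log-likelihood of Bernoulli(p) for a 0/1 data array."""
    return np.sum(data * np.log(p) + (1 - data) * np.log(1 - p))

# Brute-force: evaluate the log-likelihood on a grid of candidate p values
# and keep the argmax.
grid = np.linspace(0.01, 0.99, 99)
p_hat = grid[np.argmax([log_likelihood(p, flips) for p in grid])]

print(p_hat)         # grid-search MLE
print(flips.mean())  # closed-form MLE for Bernoulli: heads / n
```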
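
And a quick sanity check (again my own sketch, with arbitrary parameter values) that the closed-form expectations above match sample means from simulated draws:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000
p, n, lam, a, b = 0.3, 10, 2.0, 1.0, 5.0  # arbitrary parameter choices

# (sample mean from simulation, closed-form expectation) for each distribution
checks = {
    "Bernoulli":   (rng.binomial(1, p, n_samples).mean(),       p),
    "Binomial":    (rng.binomial(n, p, n_samples).mean(),       n * p),
    "Exponential": (rng.exponential(1 / lam, n_samples).mean(), 1 / lam),  # numpy takes scale = 1/lambda
    "Uniform":     (rng.uniform(a, b, n_samples).mean(),        (a + b) / 2),
}

for name, (sample_mean, theory) in checks.items():
    print(f"{name:>11}: sample mean {sample_mean:.3f} vs E[x] {theory:.3f}")
```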
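
Finally, a small numerical version of the trick-coin update (the numbers are assumptions for illustration, not the course’s): start from a 60% prior that the coin is loaded, assume a loaded coin lands heads 90% of the time and a fair coin 50%, then observe 4 heads in 5 flips and apply Bayes’ theorem:

```python
from math import comb

prior_loaded = 0.6                       # prior: 60% chance the coin is loaded
p_heads_loaded, p_heads_fair = 0.9, 0.5  # assumed per-flip heads probabilities
heads, flips = 4, 5                      # observed data

def binom_likelihood(p, k, n):
    """P(k heads in n flips | per-flip heads probability p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

like_loaded = binom_likelihood(p_heads_loaded, heads, flips)
like_fair = binom_likelihood(p_heads_fair, heads, flips)

# Bayes' theorem: posterior is likelihood x prior, normalized over both hypotheses.
posterior_loaded = (like_loaded * prior_loaded) / (
    like_loaded * prior_loaded + like_fair * (1 - prior_loaded)
)
print(posterior_loaded)  # roughly 0.76 with these assumed numbers
```

So with these made-up numbers, five flips move the belief from 60% to about 76%; the exact posterior depends entirely on the assumed heads probabilities.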

What I will do next

  • I’m in two weeks of classes for grad school, 9am – 5pm every day. There’s not a ton I can do in the meantime, but I’ll keep working on my statistics notes in the evenings, practice some common coding interview questions when I’m bored during class, and re-read my stats notes during lectures.