# Expectation Maximization with Coin Flips

Expectation Maximization (EM) is an iterative method for finding maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables.

The best introductory example I've come across, which considers a series of coin flips, is from the paper "What is the expectation maximization algorithm?", and is also covered in one of the University of Michigan's Machine Learning course lectures and discussion section notes.

While the original paper's and course's coverage of this example are good, I found that a few key details were glossed over. In this notebook I aim to lay everything out in its entirety, and I hope that understanding this example in detail will provide intuition for how EM works, laying the foundation for studying its theory and more complex examples.

Suppose your friend has posed a challenge: estimate the bias of two coins in her possession. They might be fair coins, or they might be more heavily weighted towards heads; you don't know. Here's the clue she's provided: a piece of paper with 5 records of an experiment where she's:

1. picked one of the two coins, and
2. flipped it 10 times, recording the outcomes.

Let's refer to these coins as coin A and coin B, and to their biases as $\theta_A$ and $\theta_B$.

How can you provide a reasonable estimate of each coin's bias? Let's first imagine that this piece of paper shows which coin was chosen for each trial. In this case it's easy, and boils down to estimating each bias independently.

Let's start with coin A: across three trials of 10 flips, there are 24 heads. So a reasonable estimate of the coin's bias would be $24/30 = 0.8$. To estimate the bias of coin B, we have 9 heads across 2 sets of 10 flips, for an estimated bias of $9/20 = 0.45$.

So when we know everything, in the complete-data case, our problem is pretty easy.
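To make the complete-data calculation concrete, here's a minimal Python sketch of the independent maximum likelihood estimates. The per-trial head counts below are assumptions chosen only to be consistent with the totals quoted above (24 heads across coin A's three trials, 9 heads across coin B's two); the actual flip sequences appear in the original paper.

```python
# Complete-data MLE sketch: estimate each coin's bias independently.
# Assumed per-trial head counts, consistent with the totals above
# (coin A: 24 heads / 30 flips, coin B: 9 heads / 20 flips).
trials = [("B", 5), ("A", 9), ("A", 8), ("B", 4), ("A", 7)]  # (coin, heads out of 10)
FLIPS_PER_TRIAL = 10

heads = {"A": 0, "B": 0}
flips = {"A": 0, "B": 0}
for coin, h in trials:
    heads[coin] += h
    flips[coin] += FLIPS_PER_TRIAL

theta_A = heads["A"] / flips["A"]  # 24/30 = 0.8
theta_B = heads["B"] / flips["B"]  # 9/20 = 0.45
print(f"theta_A = {theta_A:.2f}, theta_B = {theta_B:.2f}")
```

Because each trial's coin identity is known here, the estimate for each coin is just its total heads divided by its total flips, which is exactly the arithmetic worked out in the prose above.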