Do Complex Election Forecasting Models Actually Generate Better Forecasts?
day of the dalek writes:
We are just a few weeks away from the general election in the United States and many publications provide daily updates to election forecasts. One of the most well-known forecasting systems was developed by Nate Silver, originally for the website FiveThirtyEight. Although Silver's model is quite sophisticated and incorporates a considerable amount of data beyond polls, other sites like RealClearPolitics just use a simple average of recent polls. Does all of the complexity of models like Silver's actually improve forecasts, and can we demonstrate that they're superior to a simple average of polls?
Pre-election polls are a bit like a science project that uses a lot of sensors to measure the state of a single system. There's a delay between the time a sensor is polled for data and when it returns a result, so the project uses many sensors to get more frequent updates. However, the electronics shop only had a limited quantity of the highest-quality sensor, so the project also uses a lot of other sensors that have a larger bias, lower accuracy, or different methods of measuring the same quantity. The project then combines the noisy data from these heterogeneous sensors to try to produce the most accurate estimate of the state of the system.
Polls are like those noisy sensors in that each poll has its own methodology, has a margin of error that depends on its sample size, and may exhibit what Silver calls "house effects": a tendency for a particular polling firm's results to favor certain candidates or political parties. Some of the more complex election forecasting systems, like Silver's model, attempt to correct for these biases and give more weight to polls from firms with better-regarded methodologies and larger sample sizes.
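As a rough illustration of the general idea (a toy sketch, not Silver's actual model or any site's real method), an aggregator along these lines might subtract an estimated house effect from each poll and weight polls by the square root of their sample size, since a poll's margin of error shrinks roughly as 1/sqrt(n). The firm names, house-effect values, and poll numbers below are made up for the example:

from dataclasses import dataclass
from math import sqrt

@dataclass
class Poll:
    firm: str
    sample_size: int
    dem_share: float  # candidate's share of the two-party vote, in percent

# Hypothetical house effects (percentage points each firm tends to lean
# toward one candidate); a real model would estimate these from data.
HOUSE_EFFECTS = {"FirmA": +1.5, "FirmB": -0.8, "FirmC": 0.0}

def weighted_average(polls):
    """Adjust each poll for its firm's assumed house effect and weight it
    by sqrt(sample size). This is a toy aggregator for illustration only."""
    weighted_sum = 0.0
    total_weight = 0.0
    for poll in polls:
        adjusted = poll.dem_share - HOUSE_EFFECTS.get(poll.firm, 0.0)
        weight = sqrt(poll.sample_size)
        weighted_sum += weight * adjusted
        total_weight += weight
    return weighted_sum / total_weight

polls = [
    Poll("FirmA", 1200, 51.0),
    Poll("FirmB", 800, 48.5),
    Poll("FirmC", 2000, 50.2),
]
print(f"Adjusted weighted average: {weighted_average(polls):.1f}%")

A plain poll average, by contrast, would simply sum the dem_share values and divide by the number of polls, treating every firm and sample size identically.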
The purpose of these election forecasts is not to take a snapshot of the race at a particular point in time, but to predict the result on election day. For example, after a political party officially selects its presidential nominee at the party's convention, the candidate tends to receive a temporary boost in the polls, known as a "post-convention bounce". Although this effect is well documented across many election cycles, it is temporary, and polls taken during this period tend to overestimate the support the candidate will actually receive on election day. Many forecast models try to adjust for this bias when incorporating polls taken shortly after a convention.
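One simple way to picture such an adjustment (again purely illustrative; the 3-point bounce size and 10-day half-life below are assumptions, not figures from any real model) is to subtract an estimated bounce that decays with the number of days since the convention:

def convention_adjustment(days_since_convention, initial_bounce=3.0, half_life=10.0):
    """Estimated post-convention bounce (in percentage points) remaining
    after a given number of days. The initial bounce and half-life here
    are illustrative assumptions, not estimates from actual election data."""
    return initial_bounce * 0.5 ** (days_since_convention / half_life)

# A poll taken 5 days after the convention showing the nominee at 52%
raw_share = 52.0
adjusted_share = raw_share - convention_adjustment(5)
print(f"Adjusted for bounce: {adjusted_share:.1f}%")

A simple poll average makes no such correction, so in the weeks after a convention it will tend to track the bounce rather than the expected election-day result.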