MIT Researchers Say All Network Congestion Algorithms Are Unfair
takyon writes:
We're all using more data than ever before, and the bandwidth caps ISPs impose do little to slow people down; they're mostly a tool for making more money. Legitimate network management has to go beyond penalizing people for using more data, but researchers from MIT say the algorithms meant to handle that job don't work as well as we thought. A newly published study suggests it's impossible for these algorithms to distribute bandwidth fairly.
[...] The new study contends that there will always be at least one sender who gets screwed in the deal. This hapless connection gets essentially no bandwidth while the others take a share of what's available, a problem known as "starvation." The team developed a mathematical model of network congestion and ran the algorithms currently used to control congestion through it. No matter what they did, every scenario ended up shutting out at least one user.
The problem appears to be the overwhelming complexity of the internet. The algorithms use signals like packet loss and delay to estimate congestion, but delay can also fluctuate for reasons unrelated to congestion. This unpredictable variation, known as "jitter," is indistinguishable from congestion-induced queueing delay, and the researchers say it is what drives an algorithm toward starvation. The team defines this class of systems as "delay-convergent algorithms" and argues that for any algorithm in the class, starvation is inevitable.
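To make the failure mode concrete, here is a deliberately simplified, hypothetical sketch in Python (not the researchers' model): a delay-based rate controller that backs off whenever measured delay exceeds a target. Because it has no way to tell congestion-induced queueing delay from random jitter, noise alone can push it into backing off far more often than the real queue warrants.

```python
import random

def delay_based_update(rate, measured_rtt, base_rtt,
                       target_queue_delay=0.01, step=0.1):
    """Hypothetical delay-convergent rate update (illustrative only).

    If the measured queueing delay (RTT minus the propagation delay)
    exceeds a target, the sender backs off; otherwise it speeds up.
    The controller cannot tell whether the extra delay is real queueing
    or non-congestive jitter -- the ambiguity the MIT result turns on.
    """
    queue_delay = measured_rtt - base_rtt
    if queue_delay > target_queue_delay:
        return rate * (1 - step)   # interpret delay as congestion: slow down
    return rate * (1 + step)       # otherwise probe for more bandwidth

# Toy loop: the real queue is tiny, but jitter frequently pushes the
# measured delay over the target, so the sender keeps backing off anyway.
rate, base_rtt = 10.0, 0.05
for _ in range(20):
    jitter = random.uniform(0, 0.02)          # non-congestive delay noise
    measured = base_rtt + 0.005 + jitter      # small real queue + jitter
    rate = delay_based_update(rate, measured, base_rtt)
print(f"final rate: {rate:.2f}")
```

In a toy setup like this the sender merely oscillates or drifts downward; the paper's claim is about the multi-sender case, where one connection can end up absorbing essentially all of the back-off and starving.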
Read more of this story at SoylentNews.