Error'd: Servicing Machines
With all the investments that have recently been made in statistical/brute-force methods for simulating machine intelligence, it seems that we may at least, at last, have achieved the ability to err like a human.
Patient Paul O. demonstrates that bots don't work well before they've had their coffee any more than humans do. "I spent 55 minutes on this chat, most of it with a human in the end, although they weren't a lot brighter than the bot."
Brett N. discovers that Google's AI apparently has a viciously jealous streak: "I was just trying to log into my icloud account"
But despite stirrings of emotion, it's still struggling with its words sometimes, as Joe C. has learned. "Verified AND a shield of confidence!"
And an anonymous shopper shows us that eBay is working hard on an effective theory of mind. Like a toddler, just trying to figure out what makes the grownups tick.
Says the shopper, more mundanely, "I'm guessing what happened here is they made the recommendation code reference the original asking price, and didn't test with multiple counteroffers, but we'll never know."
Finally, Redditor R3D3 has uncovered an adolescent algorithm going through its Goth phase. "Proof the algorithms have a sense of humor?"