A very interesting article by Cathy O’Neil on the role of machine learning in our political and policy spheres. She makes a quick but pointed observation about Donald Trump:
He is not like normal people. In particular, he doesn’t have any principles to speak of, that might guide him. No moral compass. That doesn’t mean he doesn’t have a method. He does, but it’s local rather than global.
Trump’s goal is simply to “not be boring” at Trump rallies. He wants to entertain, and to be the focus of attention at all times. A born salesman.
What that translates to is a constant iterative process whereby he experiments with pushing the conversation this way or that, and he sees how the crowd responds. If they like it, he goes there. If they don’t respond, he never goes there again, because he doesn’t want to be boring. If they respond by getting agitated, that’s a lot better than being bored. That’s how he learns.
She brings this up, she explains, for two reasons. First, it’s a great way of understanding how machine learning algorithms can give us results we absolutely don’t want, even though they fundamentally lack prior agendas.

Second, some people actually think there will soon be algorithms that control us, operating “through sound decisions of pure rationality,” and that we will no longer have any use for politicians at all.
Boing Boing compares this same technique to machine-learning strategies:
This is one of the core strategies of machine-learning: random-walking to find a promising path, hill-climbing to optimize it, then de-optimizing in order to ensure that you haven’t plateaued at a local maximum (think of how an ant tries various directions to find a food source, then lays down a chemical trail that other ants reinforce as they follow it, but some will diverge randomly so that other, richer/closer food sources aren’t bypassed).
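The loop Boing Boing describes, random-walking to a promising spot, hill-climbing to optimize it, then restarting elsewhere so you don’t get stuck on a local maximum, can be sketched in a few lines. Everything here (the objective function, step sizes, restart count) is invented for illustration, not taken from the article:

```python
import random

def f(x):
    # A bumpy objective: a small local peak near x ≈ -2
    # and a higher global peak near x ≈ +2.
    return -(x**2 - 4)**2 + x

def hill_climb(f, x, step=0.1, iters=200):
    """Greedy hill-climbing: propose small random moves,
    keep any move that improves f."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

def search(f, lo, hi, restarts=20, seed=0):
    """Random-restart search: random-walk to a fresh starting
    point, climb to the nearby peak, and keep the best peak
    found. The restarts play the 'de-optimizing' role in the
    quote: they stop us from settling on a local maximum."""
    random.seed(seed)
    best = None
    for _ in range(restarts):
        start = random.uniform(lo, hi)   # random walk to a new spot
        x = hill_climb(f, start)         # climb the nearest hill
        if best is None or f(x) > f(best):
            best = x
    return best

best = search(f, -4, 4)  # lands near the global peak at x ≈ 2
```

A climber that only ever goes uphill from one starting point would happily stop at the lesser peak near x ≈ -2; the occasional random jump to a new location is what reveals the richer one.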
Read Cathy’s full story on why she thinks Donald Trump is like a biased machine learning algorithm.