Machine learning is a great thing. The fatal flaw is that our expectations of its capabilities exceed what these systems are actually designed to do, especially when they interact with people. Take the news feed on LinkedIn or Facebook, for instance. The design goal is "show people what they like to see," so people stay on the platform longer and consume more advertising. It works very well. The problem is that it collapses everyone's worldview, and people can no longer absorb new ideas and viewpoints. Just the opposite of being well read. The social good diminishes.
My point is that whatever the AI designer's target goal is, the system will become extremely efficient at pursuing it. But like all computer systems, these are extremely inflexible when it comes to macro impacts, yet we deploy them without understanding the unintended consequences. I like collision avoidance in a car because it enhances my driving skills. I don't like autonomous driving systems because they make me a stupid, disconnected driver.