Sophisticated data modeling is used today to automate all manner of decision making: who gets a loan, who gets hired, what insurance rates should be, how teachers are evaluated. The list goes on. Data modeling is fast, and it’s efficient. But are the answers always right or fair? In her talk, the cleverly titled “Weapons of Math Destruction,” Cathy O’Neil asks a sobering question about the algorithms that are increasingly affecting our lives: What if they are wrong or, more likely, what if the interpretations of their results are wrong? The problem, she suggests, goes deeper than an algorithm whose code took an erroneous turn. It’s the destructive cycle that results when these errors run unchecked: there is no accountability, no notice that things are amiss, and no fix because of blind faith in the data and the algorithm.
Models, O’Neil contends, are built with good intentions, but their workings can be opaque. Requests, such as loan applications, are processed en masse, and because so many of the decisions (say, approvals for those borrowers with strong credit) clearly look right, the harder calls are reinforced. The model looks spot on. Rarely does anyone look deep into the loop to make sure that the answers truly are right, nor is such feedback integrated into the model. Such a cycle can produce unfair outcomes for individuals, but it is also a lost opportunity for businesses. Consider the bank that keeps refusing loans to solvent people.
The call for algorithmic inspection and governance was also voiced by Alistair Croll in his talk “Big Data, Smart Agents, and Interruption: The Next Ten Years of Human-Computer Interaction.” In Croll’s view of the just-down-the-road future, it’s not only technology that will evolve but also the human species.
Croll sees a trifecta of forces shaping the next decade of technology: big data, smart agents (or artificial intelligence), and so-called interruptive interfaces that tell us, with sensory cues (for example, the Apple Watch’s buzzing the wearer’s wrist to announce the arrival of new e-mail), what we need to know at the right time. These forces will combine in various ways, some of them extremely beneficial: an artificial intelligence, for example, that can make a diagnosis faster than a human oncologist. Such advances will let us do a better job of managing and supplementing scarce resources.
But at the same time, Croll warns us, we will grow increasingly dependent on tools that “whisper in our ear.” These smart agents don’t just simplify our lives; they know our lives. This Super-Siri isn’t hard to imagine, not with the growing array of personal data that can be captured and processed: cell phone data that tracks where we have been, purchasing data that tells us what we like to buy, health data, social media data, travel data, e-mail, and so on. Such tools will give us “super powers” like perfect recall (as we can just turn to our devices for information). They’ll know what makes us happy. They’ll help us optimize our lives.
Croll thinks that we—a society that is excited about these new tools and is actively embracing them—may be missing the big picture: this technology trifecta may lead us to become a kind of human-machine hybrid. He suggests that we start thinking about the legal, ethical, privacy-related, and societal issues of that brave new world.
If a machine can direct us to the optimal path, and we get used to following it, Croll thinks that creativity may be stifled and that we will be less likely to make discoveries on our own. Thoughtfulness can change, too. If we get a digital reminder of a friend’s birthday and we send a card, did we do that because we really cared? Or was it because we have good software?
Perhaps it is only fitting, then, that the conference also featured the mathematician-philosopher Luc de Brabandere, a BCG Fellow. In his presentation, “Homo Informaticus,” he took us through a 4,000-year history of technology, showing how logic and mathematics were shaped by the major philosophers. Even today, we can see their influence at work. For example, big data is largely focused on the idea of inferring causes (say, customers’ needs) on the basis of effects (their purchases). That’s a notion that stems from the work of Thomas Bayes, an English philosopher. But in a world of ever more capable smart agents, perhaps de Brabandere’s own notion will prove particularly relevant: if artificial intelligence becomes a reality one day, it is because we have decided not to use ours anymore.
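The idea of inferring causes from effects can be made concrete with Bayes’ rule. The sketch below is purely illustrative, not drawn from any talk at the conference: the numbers are hypothetical, standing in for a retailer estimating the probability that a customer has a particular need (the cause) after observing a purchase (the effect).

```python
def posterior(prior, likelihood, false_positive_rate):
    """P(cause | effect) via Bayes' rule:

    P(C|E) = P(E|C) * P(C) / (P(E|C) * P(C) + P(E|not C) * P(not C))
    """
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical figures: 10% of customers have the need (prior);
# 80% of those who do make the purchase (likelihood);
# 5% make the same purchase without the need (false positives).
p = posterior(prior=0.10, likelihood=0.80, false_positive_rate=0.05)
print(round(p, 2))  # 0.64
```

Note how the observed effect raises the estimated probability of the cause from 10 percent to 64 percent, which is exactly the kind of inference, scaled up across millions of records, that big data systems automate.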