One-to-One (Personalized) Public Policy
One of the primary reasons I joined the University of Chicago was the chance to be at the intersection of Computation, Data, and Policy, and to work on large-scale social problems in a way that can have an impact on public policy. I’m often asked how Data/Analytics/Machine Learning can practically affect policy decisions, and the analogy I like to use comes from the marketing world.
One-to-One Marketing has been a buzzword in marketing for a very long time. The drivers: the need for personalization and relevance, and rising consumer expectations (driven by retailers like Amazon, Netflix, etc.). The enablers: technology, data, and analytics. Data allows us to personalize at scale. It’s very hard to scale personalization if you can’t predict the needs and behaviors of people at an individual level and then provide them with what they need. The same thing applies to public policy, where machine learning/analytics/data mining allows us to do two things at scale:
1. make predictions about individual people and their behaviors (or risk of certain outcomes)
2. figure out the optimal action/intervention for each individual that improves the outcomes for that individual.
Both of these, when aggregated over individuals, can be acted on by policymakers in several ways. The obvious one is to take the individual predictions and interventions and use them to craft a single, globally optimal policy. That would be equivalent to the old days of mass marketing: you had one channel, every consumer you reached got the same message, and that was the best you could do given that you had to pick a single optimal message. The more interesting, and hopefully better, approach is to create different policies for different types (or segments of) people, each designed to optimize the outcome for that segment (and, in the limit, for each individual) while also optimizing the overall global good. That’s what I would call one-to-one or personalized “public” policy. The goal of those policies would be to agree on a global metric we want to optimize, say improving the overall health of the population, the education level, or the employment rate, and then to personalize the interventions/actions at the individual level for every person (in the limit).
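The contrast between a single global policy and per-person interventions can be sketched numerically. In this toy example every quantity is invented: we assume some model has already produced a predicted outcome for each person under each candidate intervention, and we compare picking one intervention for everyone against picking the best one per person:

```python
import numpy as np

# Toy sketch: invented predictions, not a real model.
rng = np.random.default_rng(42)
n_people, n_interventions = 1_000, 3

# predicted_outcome[i, j] = predicted outcome for person i under intervention j
# (in practice these predictions would come from a machine learning model).
predicted_outcome = rng.normal(loc=0.5, scale=0.2, size=(n_people, n_interventions))

# "Mass marketing" era: one intervention for everyone,
# so pick the single intervention that is best on average.
global_policy = predicted_outcome.mean(axis=0).argmax()
global_value = predicted_outcome[:, global_policy].mean()

# Personalized policy: each person gets their own best intervention.
personal_value = predicted_outcome.max(axis=1).mean()

print(f"one-size-fits-all: {global_value:.3f}")
print(f"personalized:      {personal_value:.3f}")
```

By construction the personalized policy can never do worse than the single global policy on the predicted outcomes, since each person's best intervention is at least as good as the globally chosen one; the open questions are how good the predictions are and what global metric they should feed into.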
A critical component of this approach is that people have to agree on what a policy is trying to optimize. Is it the median level of a particular outcome, say poverty? Is it maximizing the 20th percentile? Is it minimizing the difference between the 75th and 25th percentiles? Getting people to agree on that goal is difficult, but having the conversation is key. And building systems that provide individual recommendations based on the global goal keeps that goal transparent, auditable, and open to debate, which I’d argue are all good things.
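To make concrete how different those candidate objectives can be, here is a minimal sketch on an invented outcome distribution (the numbers and the distribution are purely illustrative) computing each of the metrics mentioned above:

```python
import numpy as np

# Hypothetical outcome scores for a population (e.g., an income or health index);
# the distribution and parameters are invented purely for illustration.
rng = np.random.default_rng(0)
outcomes = rng.lognormal(mean=3.0, sigma=0.75, size=10_000)

# Three candidate policy objectives over the same population:
median_level = np.median(outcomes)                  # raise the typical person
p20_level = np.percentile(outcomes, 20)             # raise the worst-off fifth
spread = np.percentile(outcomes, 75) - np.percentile(outcomes, 25)  # shrink inequality

print(f"median:            {median_level:.1f}")
print(f"20th percentile:   {p20_level:.1f}")
print(f"75th-25th spread:  {spread:.1f}")
```

A policy that maximizes the median can leave the 20th percentile untouched, and one that minimizes the spread can do so by lowering the top rather than raising the bottom, which is exactly why agreeing on the metric up front matters.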
I see that as the next step as computer scientists, statisticians, and policymakers start interacting and working together, very much like the work we have started at the University of Chicago between the Harris Public Policy School, the Computation Institute, and the Computer Science Department. I’m excited about the possibilities these collaborations open up, and about seeing how we can build and use large-scale computational and data analysis tools to create these personalized policies.