Many machine learning practitioners advocate for the importance of ethical AI. But in practice, few ML teams put even basic fairness and bias checks in place.
This week, we’re covering a blog post that proposes an analogy to explain this phenomenon.
Production ML Papers to Know
Welcome to Production ML Papers to Know, a series from Gantry highlighting papers we think have been important to the evolving practice of production ML.
We have already covered a few papers in our newsletter, Continual Learnings, and on Twitter. Due to the positive reception, we decided to turn these into blog posts.
Responsible machine learning is like security
In Responsible Machine Learning is like Security, the author argues that, as with data security, investments in responsible ML (RML) are hard for any particular team to justify in the short term. Both are motivated by the risk of rare-but-costly, reputation-damaging incidents: the kind that don’t show up in your team’s OKRs.
In ML, these risks fall into a few buckets:
- The ethical risk of creating or perpetuating patterns of inequality
- The reputational risk of journalists and users linking your product to biased decisions
- The policy risk associated with violating emerging AI regulations
Your team might care about these risks in the abstract, but ML teams tend to be overworked as it is. It’s hard to prioritize tail risks when the models don’t work yet.
What does this mean for doing ML responsibly?
If responsible ML is like security, it may need to be prioritized like security as well. Just as every large organization has a central security org, maybe it should have a central RML org, too. A centralized RML team could prioritize the things that individual ML teams can’t, like:
- Performing risk assessments
- Defining standards
- Developing tools and best practices
- Supporting the individual teams
The Upshot
You might be asking: “I’m a builder, not an exec, does this apply to me?”
My favorite section of this post makes the connection between doing responsible ML and making your ML-powered product better in general.
RML is largely about understanding the subsegments of your data on which your model performs poorly. These analyses can identify biases in your model, but they can also surface other opportunities to make your product better (see the sketch after this list), like:
- Growing your userbase by finding high-potential but underperforming segments that are too small to show up in aggregate metrics today
- Making sure your model performs especially well for your most important users
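
To make that concrete, here is a minimal sketch of what a subsegment analysis might look like in pandas. The column names (segment, label, prediction) and the helper function are assumptions for illustration only; they aren’t from the post or from any particular tool.

```python
# Minimal sketch of per-segment ("slice") evaluation.
# Assumes a pandas DataFrame with hypothetical columns:
#   segment    - the subgroup each row belongs to (e.g. country, device type)
#   label      - ground-truth binary label
#   prediction - the model's binary prediction
import pandas as pd

def per_segment_accuracy(df: pd.DataFrame, segment_col: str = "segment") -> pd.DataFrame:
    """Accuracy and support for each segment, worst-performing segments first."""
    correct = df["label"] == df["prediction"]
    summary = (
        df.assign(correct=correct)
          .groupby(segment_col)["correct"]
          .agg(accuracy="mean", support="size")
          .sort_values("accuracy")  # worst segments rise to the top
    )
    # How far each segment sits below (or above) overall accuracy
    summary["gap_vs_overall"] = summary["accuracy"] - correct.mean()
    return summary

# Example usage (hypothetical evaluation DataFrame):
# report = per_segment_accuracy(eval_df)
# print(report.head(10))  # the 10 segments where the model does worst
```

The same report that flags a segment where the model is unfair can also flag a segment worth investing in, which is exactly the dual use the post points to.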
That leads me to an “alternate ending” for this post. RML is like security, but it’s also like product management. Maybe the best way to encourage ML teams to build responsibly is to make RML tools useful in their day-to-day work.
If that idea resonates, you’re going to love what we’re working on at Gantry :)