Counterfactual privilege ICML talk

Talk at ICML 2019

I’m presenting some new fairness work at ICML; here are the schedule, slides, poster, and paper. Below is a brief description of one of the new concepts in this work.

Counterfactual privilege: an asymmetric fairness constraint

In previous work (paper, blog post) we described a causal framework for defining and understanding algorithmic fairness. We start with a mathematical model which can be represented as a graph with arrows designating causal relationships between variables, like this example:

mermaid(
  "graph TB;
  A[Gender]-->X[Car Color]
  U(Aggression)-->X
  U-->Y[Risk]"
)

From such a model we can compute model-based counterfactuals: if this person had identified as a different gender, would they have the same car color or a different one? Any variable reachable from gender by a directed path might take a different value in the counterfactual world. In our previous counterfactual fairness work we proposed making the prediction algorithm invariant to such changes: roughly, it should give the same predictions from actual data that it would give using the model-based counterfactuals.
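To make the counterfactual computation concrete, here is a minimal sketch in Python. It assumes a linear structural model for the graph above with made-up coefficients, and it takes the abduction step (inferring the latent noise values from the observed data) as given:

```python
# Toy structural causal model for the graph above:
#   car_color = 1.0 * gender + 0.5 * aggression + noise_x
#   risk      = 0.8 * aggression + noise_y
# All coefficients are made up for illustration.

def simulate(gender, aggression, noise_x, noise_y):
    car_color = 1.0 * gender + 0.5 * aggression + noise_x
    risk = 0.8 * aggression + noise_y
    return car_color, risk

# Observed individual, with latent values we pretend were recovered
# in the abduction step (in practice, posterior draws given the data).
gender, aggression, noise_x, noise_y = 1, 0.3, 0.1, -0.2
car_color, risk = simulate(gender, aggression, noise_x, noise_y)

# Counterfactual: hold the latent background fixed, flip gender.
cf_car_color, cf_risk = simulate(0, aggression, noise_x, noise_y)

print(f"car color: actual {car_color:.2f}, counterfactual {cf_car_color:.2f}")
print(f"risk:      actual {risk:.2f}, counterfactual {cf_risk:.2f}")
```

Note that risk comes out unchanged: there is no directed path from gender to risk in this model, so only car color differs between the two worlds.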

Predictions are often used to make decisions about the real world (interventions). In this new work, we shift focus from giving fair predictions to designing fair interventions. A common approach to designing optimal interventions is to maximize some utility function, so we illustrate a new fairness constraint in that framework. Let \(U(a,x,z)\) denote the utility of an individual with protected attribute \(A = a\) (e.g. gender or race), other predictor variables \(X = x\), and intervention \(Z = z\) (e.g. \(Z = 1\) or \(Z = 0\) for a binary decision about this individual). We’re trying to find the best intervention \(z\) in the sense that it maximizes total expected utility

\[ \underset{z}{\text{maximize}} \; \sum_i \mathbb E[U(a_i, x_i, z_i)] \]

and we impose the bounded counterfactual privilege constraints

\[ \mathbb E[U({\color{red}a_i}, x_i, z_i)] \leq \mathbb E[U({\color{blue}a'}, x_i, z_i)] + \tau \quad \text{for all } a' \neq a_i. \] In other words, we constrain the intervention so that we don’t make any individual more than \(\tau\) units of expected utility better off than they would have been under a counterfactual value of the sensitive attribute.
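To show how the pieces fit together, here is a brute-force sketch in Python with two individuals, a budget of one treatment, and made-up expected utilities. The numbers and names are illustrative assumptions, not the paper’s method, which handles the optimization and the counterfactual inference more carefully:

```python
from itertools import product

# Hypothetical expected utilities E[U(a, x_i, z)] for two individuals,
# tabulated under the actual attribute value and its counterfactual.
# All numbers are made up for illustration.
utility = [
    {("actual", 0): 0.6, ("actual", 1): 1.2,
     ("counterfactual", 0): 0.45, ("counterfactual", 1): 0.6},   # privileged
    {("actual", 0): 0.3, ("actual", 1): 0.8,
     ("counterfactual", 0): 0.5, ("counterfactual", 1): 1.0},    # not privileged
]

tau = 0.25   # privilege bound
budget = 1   # at most one individual can receive z = 1

def feasible(z):
    if sum(z) > budget:
        return False
    # Bounded counterfactual privilege: no one may end up more than tau
    # units of expected utility better off than in the counterfactual.
    return all(
        utility[i][("actual", zi)] <= utility[i][("counterfactual", zi)] + tau
        for i, zi in enumerate(z)
    )

best = max(
    (z for z in product([0, 1], repeat=len(utility)) if feasible(z)),
    key=lambda z: sum(utility[i][("actual", zi)] for i, zi in enumerate(z)),
)
print("optimal intervention:", best)   # (0, 1)
```

Without the privilege constraint, the budget would go to the first individual (utility gain 0.6 versus 0.5) even though their advantage stems from the protected attribute; with it, the constraint is active for the privileged individual and the resource goes to the other one, which is exactly the asymmetry described below.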

Implicit monotonicity assumption

Encoded in the usual application of fairness ideas, and in the notion of privilege, is an implicit assumption that an individual’s utility would be higher if they had the most privileged value of the sensitive attribute. This means that in practice, the constraints we impose will tend to be active for those individuals who already have privilege in the actual world, and inactive otherwise. In math, if \(a_i\) is not a privileged value of the sensitive attribute but \(a'\) is, then (usually)

\[ \mathbb E[U({\color{red}a_i}, x_i, z_i)] \leq \mathbb E[U({\color{blue}a'}, x_i, z_i)] \] even without the extra \(\tau\).

School example

In our paper we describe an example where some government entity has a budget for funding new classes; the individuals are schools, and the intervention value is 1 if a school gets a new class and 0 otherwise. Suppose that school 1 is in a neighborhood with higher average incomes, its students are predominantly white, and the school is well-resourced, with high rates of graduation and of application to college. And suppose that, due to historical injustices like redlining, school 2 is in a neighborhood with lower average incomes, its students are mostly African-American, and the school has been poorly funded and has lower graduation rates.

Consider counterfactuals, in a (realistic) causal model of the world, where we switch the majority race of the students at each school. In the actual world, school 1 has \(a_1 = \text{white}\) and school 2 has \(a_2 = \text{AA}\). If utility is defined using things like graduation rates or the percentage of students applying to college, then based on history and social relations in the US we would expect
\[ \mathbb E[U(a_1 = \text{white}, x_1, z_1)] > \mathbb E[U(a_1 = \text{AA}, x_1, z_1)]. \]
We’re trying to design an intervention (fund new classes) that addresses this inequality by maximizing total utility across schools, subject to the constraint that the gap in the inequality above does not exceed some small value \(\tau\). Meanwhile, since we also expect
\[ \mathbb E[U(a_2 = \text{AA}, x_2, z_2)] < \mathbb E[U(a_2 = \text{white}, x_2, z_2)], \]
the intervention has more room to improve the outcomes for school 2 before it reaches the privilege constraint boundary.
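To see the asymmetry with concrete (entirely made-up) numbers: suppose \(\tau = 0.1\) and, without funding, \(\mathbb E[U(\text{white}, x_1, 0)] = 0.90\) versus \(\mathbb E[U(\text{AA}, x_1, 0)] = 0.82\), while \(\mathbb E[U(\text{AA}, x_2, 0)] = 0.50\) versus \(\mathbb E[U(\text{white}, x_2, 0)] = 0.58\). The slack in each school’s privilege constraint is
\[ \mathbb E[U(a', x_i, z_i)] + \tau - \mathbb E[U(a_i, x_i, z_i)], \]
which is \(0.82 + 0.1 - 0.90 = 0.02\) for school 1 but \(0.58 + 0.1 - 0.50 = 0.18\) for school 2. If funding a new class raises a school’s expected utility by, say, \(0.05\) more in the actual world than in the counterfactual one, then \(z_1 = 1\) would violate school 1’s constraint while school 2’s still holds, so the budget goes to school 2.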