Big Data Not Discriminatory?

Daniel Castro writes: Many individuals have made the claim that big data might lead to more discrimination. An online retailer, for example, might offer one price to Asian customers and another price to Latino customers. This idea appeared in the White House big data review led by John Podesta, where the final report stated that “An important conclusion of this study is that big data technologies can cause societal harms beyond damages to privacy, such as discrimination against individuals and groups.” This idea was also the subject of a recent Federal Trade Commission (FTC) workshop exploring what FTC Chairwoman Edith Ramirez termed “discrimination by algorithm.” While this type of discrimination is plausible, there are many compelling reasons why it may never come to pass. Moreover, the focus on how big data might be used to harm individuals has overshadowed the bigger opportunity to use data as a new tool in the fight for equality.
One reason concerns about discrimination are likely overblown is that many laws, including the Americans with Disabilities Act (ADA), the Genetic Information Nondiscrimination Act (GINA), the Fair Credit Reporting Act (FCRA), and the Employee Retirement Income Security Act (ERISA), protect consumers from employers, creditors, landlords, and others who may take adverse actions against them based on protected classes of information. Big data does not exempt businesses from following these laws. Earlier this year, for example, the FTC brought charges against and entered into a settlement with Instant Checkmate for violating the FCRA.

Of course, just because a business might be able to create a racist or sexist algorithm does not mean that it will do so. After all, most businesses are not actively seeking to discriminate against minorities; in fact, many companies are actively championing a more inclusive worldview. Moreover, even where there are “bad apples,” companies face significant market pressure not to engage in such behavior, a lesson that most executives have probably learned following the swift departure of sponsors after the disclosure of former Los Angeles Clippers owner Donald Sterling’s racist remarks. Big data has not changed these factors.

But the larger point is that the focus on preventing discrimination has diverted attention from the bigger opportunity: using big data to create a more inclusive society. There are at least three ways this can happen. First, automated processes can remove human biases from decision-making. For example, while loan officers or apartment managers may discriminate, perhaps even unintentionally, on the basis of age or race, computers can be programmed to ignore these variables. Second, data creates feedback loops that encourage people to treat others as individuals. While some taxi drivers have notoriously refused to pick up passengers because of the color of their skin, apps like Uber allow drivers to decide whether to give someone a ride based on the passenger’s rating, which is mostly tied to whether the rider is punctual and tidy. Third, data is a useful way to identify latent racism, such as discriminatory hiring practices or racial profiling by police. For example, data collected on the disparate impact of the New York City Police Department’s controversial stop-and-frisk policy has helped change opinions on the approach.
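The first and third points lend themselves to a concrete illustration. The sketch below, in Python with pandas (my choice of tooling; the post names none), shows the two steps a modeler might take: dropping protected attributes before a model ever sees them, and then auditing decision outcomes with a simple disparate-impact ratio. The column names, the toy data, and the 0.8 threshold (the rough “four-fifths rule” used in US employment-discrimination analysis) are illustrative assumptions, not anything from the post.

```python
# A minimal sketch under assumed column names and toy data.
# Illustrates two ideas from the paragraph above:
#   1. programming a decision process to ignore protected attributes, and
#   2. using the data itself to surface disparate impact in outcomes.
import pandas as pd

# Hypothetical decision records: one row per applicant.
df = pd.DataFrame({
    "race":     ["A", "A", "B", "B", "B", "A", "B", "A"],
    "income":   [55, 72, 60, 48, 90, 65, 52, 80],
    "approved": [1, 1, 1, 0, 1, 1, 0, 1],
})

PROTECTED = ["race"]  # attributes the model is programmed to ignore

# 1. Remove protected attributes (and the outcome) before any model is fit.
features = df.drop(columns=PROTECTED + ["approved"])

# 2. Audit outcomes anyway: approval rate per group, and the ratio of the
#    lowest rate to the highest. A ratio below 0.8 is the conventional
#    "four-fifths rule" red flag for possible disparate impact.
rates = df.groupby("race")["approved"].mean()
impact_ratio = rates.min() / rates.max()
print(rates)
print(f"disparate impact ratio: {impact_ratio:.2f}"
      + (" - possible disparate impact" if impact_ratio < 0.8 else ""))
```

One caveat worth noting: simply dropping a protected column does not by itself guarantee a non-discriminatory model, since correlated proxies (zip code, for instance) can reintroduce the same signal. That is one reason the auditing step in the third point matters even when the first step has been taken.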

In short, not only are fears that big data will lead to discrimination likely overblown, but they have clouded the debate. Those working to fight discrimination should look to data as a way to eliminate unjust biases and build a fairer, more transparent society.
