Mozilla experiment aims to reduce bias in code reviews

Mozilla is kicking off a new experiment for International Women’s Day, looking at ways to make open source software projects friendlier to women and racial minorities. Its first target? The code review process.

The experiment has two parts. The first is an effort to build a Firefox extension that gives programmers a way to anonymize pull requests, so reviewers see the code itself but not necessarily the identity of the person who wrote it. The second is gathering data about how sites like Bugzilla and GitHub work, to see how “blind reviews” might fit into established workflows.
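Mozilla hasn’t published the extension’s internals in this post, but conceptually the first part would work like any WebExtension that runs content scripts on code-review pages. Here is a minimal TypeScript sketch of that wiring, using Firefox’s contentScripts.register() API from a background script; the match patterns and file name are illustrative assumptions, not details of the actual add-on.

```typescript
// Background-script sketch: register a content script that anonymizes
// review pages on GitHub and Bugzilla. The match patterns and file name
// are illustrative assumptions, not the actual add-on's configuration.

// "browser" is the WebExtension API global available to Firefox add-ons.
declare const browser: {
  contentScripts: {
    register(options: {
      matches: string[];
      js: { file: string }[];
      runAt: "document_start" | "document_end" | "document_idle";
    }): Promise<unknown>;
  };
};

async function enableBlindReviews(): Promise<void> {
  await browser.contentScripts.register({
    matches: [
      "https://github.com/*",           // pull request pages
      "https://bugzilla.mozilla.org/*", // bug and review-request pages
    ],
    js: [{ file: "redact-author.js" }],
    runAt: "document_idle",
  });
}

void enableBlindReviews();
```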

The idea behind the experiment is a simple one: if the identity of a coder is shielded, there’s less opportunity for unconscious gender or racial bias to creep into decision-making. It’s similar to an experiment that began in the 1970s, when U.S. symphonies started using blind auditions to hire musicians. Instead of hand-picking known protégés, juries listened to candidates playing behind a screen. That change gave women an edge: they were 50 percent more likely to make it past the first audition if their gender wasn’t known. Over the decades, women gained ground, going from 10 percent representation in orchestras to 35 percent in the 1990s.

Mozilla is hoping to use a similar mechanism, anonymity, to make the code review process more egalitarian, especially in open source projects that rely on volunteers. Female programmers are underrepresented in the tech industry overall, and much less likely to participate in open source projects. Women account for 22 percent of computer programmers working in the U.S., but only 11 percent of them contribute to open source projects. A 2016 study of more than 50 GitHub repositories revealed that, in fact, women’s pull requests were approved more often than those of their male counterparts, by nearly 3 percent. However, if their gender was known, female coders were 0.8 percent less likely to have their code accepted.

What’s going on? There are two possible answers. One is that people have an unconscious bias against women who write code. If you suspect that might apply to you, there’s a test you can take to find out: do you have trouble associating women with scientific and technical roles?

Then there is a darker interpretation: that men are acting deliberately to keep computer programming a boys’ club, rather than accepting high-quality contributions from women, racial minorities, transgender individuals, and economically underprivileged folks.

A Commitment to Diversity

What does it mean to be inclusive and welcoming to female software engineers? It means, first of all, taking stock of what kind of people we think will do the best job creating software.

“When we talk about diversity and inclusion, it helps to understand the ‘default persona’ that we’re dealing with,” said Emma Humphries, an engineering program manager and bugmaster at Mozilla. “We think of a typical software programmer as a white male with a college education and full-time job that affords him the opportunity to do open source work, either as a hobby or as part of a job that directly supports open source projects.”

This default group comes with a lot of assumptions, Humphries said. They have access to high-bandwidth internet and computers that can run a compiler and development tools, as opposed to a smartphone or a Chromebook. “When we talk about including people outside of this idealized group, we get pushback based on those assumptions,” she said.

For decades, white men have dominated the ranks of software developers in the U.S. But that’s starting to change. The question is, how can we deal with biases that have been years in the making?

Inventing a Solution

Don Marti, a strategist in Mozilla’s Open Innovation group, decided to take on the challenge. Marti’s hypothesis: if I don’t know who requested the code review, then I won’t have any preconceived notions about how good or bad the code might be. He recruited Tomislav Jovanovic, a ten-year veteran of Mozilla’s open source projects, to create a blinding mechanism for code repositories like GitHub. That way, reviewers can’t see the gender, location, username, icon, or avatar associated with a particular code submission.
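The post doesn’t show the mechanism itself, but the core idea is simple: a content script finds the page elements that identify the author and blanks them out until the review is done. Below is a minimal sketch of that redaction step; the CSS selectors and placeholder text are hypothetical, not what the Blind Reviews add-on actually uses.

```typescript
// Content-script sketch: hide author identity on a code review page.
// The selectors below are hypothetical placeholders, not the real ones
// used by the Blind Reviews add-on.
const AUTHOR_SELECTORS = [
  ".author",        // username links
  ".avatar",        // profile images and icons
  ".user-location", // any location metadata
];

function redactAuthorInfo(root: ParentNode = document): void {
  for (const selector of AUTHOR_SELECTORS) {
    for (const el of Array.from(root.querySelectorAll<HTMLElement>(selector))) {
      // Remember the original content so it can be restored on request.
      el.dataset.blindOriginal = el.textContent ?? "";
      el.textContent = "[hidden for blind review]";
      // Images have no text to replace, so hide them instead.
      if (el instanceof HTMLImageElement) {
        el.style.visibility = "hidden";
      }
    }
  }
}

redactAuthorInfo();
```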

Jovanovic was eager to contribute. “I have been following tech industry diversity efforts for a long time, so the idea of using a browser extension to help with that seemed intriguing,” he said. “Even if we are explicitly trying to be fair, most of us still have some unconscious bias that may influence our reviews based on the author’s perceived gender, race, and/or authority.”

Bias goes the other way as well, in that reviewers might be less critical of work by their peers and colleagues. “Our mind often tricks us into skimming code submitted by known and trusted contributors,” Jovanovic said. “So hiding their identities can lead to more thorough reviews, and ultimately better code overall.”

Test and Measure

An early prototype of a Firefox Quantum add-on can redact the identity of a review requester on Bugzilla and of a pull request author on GitHub. It also provides a way to uncover that identity, in case you prefer to get a first look at the code without author info and then greet a new contributor, or refer to a familiar contributor by name, in your comments. Early users can also flag the final review as having been performed in “blind mode”, which helps gather information about who is getting their code accepted and how long the process takes.
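In code, the “uncover” and “blind mode” behaviors described above might look something like this sketch. The function names, the data-blind-original attribute (carried over from the earlier redaction sketch), and the storage format are assumptions made for illustration, not the prototype’s real API.

```typescript
// "browser" is the WebExtension API global available to Firefox add-ons.
declare const browser: {
  storage: {
    local: { set(items: Record<string, unknown>): Promise<void> };
  };
};

// Restore the identity details hidden by the redaction step (hypothetical).
function revealAuthorInfo(root: ParentNode = document): void {
  for (const el of Array.from(
    root.querySelectorAll<HTMLElement>("[data-blind-original]")
  )) {
    el.textContent = el.dataset.blindOriginal ?? "";
    el.style.visibility = "";
  }
}

// Record that a review was completed in blind mode, so acceptance rates and
// review times can later be compared between blind and non-blind reviews.
async function flagReviewAsBlind(reviewUrl: string): Promise<void> {
  await browser.storage.local.set({
    [`blindReview:${reviewUrl}`]: { blind: true, completedAt: Date.now() },
  });
}

// Example: a reviewer finishes a blind review, then reveals the author
// to greet them by name in the review comments.
void flagReviewAsBlind(location.href).then(() => revealAuthorInfo());
```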

Jovanovic is also gathering user input about what types of reviews could be blind by default and how to use a browser extension to streamline common workflows in GitHub. It’s still early days, but so far, feedback on the tests has been overwhelmingly positive.

Having a tool that can protect coders, no matter who they are, is a great first step toward building a meritocracy in a rough-and-tumble programmer culture. In recent years, there have been a number of high-profile cases of harassment at companies such as Google, GitHub, and Facebook. An even better step would be for companies, projects, and code repositories to adopt blind reviews as a mandatory part of their code review processes.

For folks who are committed to open source software development, the GitHub study was something of a downer. “I thought open source was this great democratizing force in the world,” said Larissa Shapiro, Head of Global Diversity and Inclusion at Mozilla. “But it does seem that there is a pervasive pattern of gender bias in tech, and it’s even worse in the open source culture.”

Small Bias, Big Impact

Bias in any context adds up to a whole lot more than hurt feelings. Gender and racial bias in peer reviews of code has far-reaching consequences. For programmers, completing software projects, including getting their code reviewed and approved, is how they stay productive and therefore valued. If a woman is not able to merge her code into a project, for whatever reason, it imperils her job.

“In the software world, code review is a primary tool that we use to communicate, to assign value to our work, and to establish the pecking order at work in our industry,” Shapiro said.

Ironically, demand for programming talent is high and expected to go higher. Businesses need programmers to help them build new applications, create and deliver quality content, and offer novel ways to communicate and share experiences online. According to the group Women Who Code, companies could see a massive shortfall of technical talent just two years from now, with as many as a million jobs going unfilled. At 59 percent of the U.S. workforce, women could help with that shortfall. However, they make up just 30 percent of workers in the tech industry today, and they are leaving tech at a higher rate than any other sector. So we’re not really heading in the right direction when it comes to encouraging women and other underrepresented groups to take on technical roles.

Maybe a clever bit of browser code can start to turn the tide. At the very least, we should all be invested in making open source more open to all, and accept high-quality contributions, no matter who or where they come from. The upside is there: Eliminate bias. Build better communities. Cultivate talent. Get better code, and complete projects faster. What’s not to like about that?

You can sign up for an email alert when the final version of the Blind Reviews Experiment browser extension becomes available later this year, and we’ll ask for your feedback on how to make the extension as efficient and effective as possible.
