Gender Bias in the Music Industry

Mohamed Kamal

CEO and Cofounder of Unbias

When studying new songs to predict if they will be successful, the Unbias AI considers several hundred audio qualities. One is female vocals. In fact, whether a song has male or female vocal features is a powerful indicator. And yet, this gendered vocal data reveals so much more.

Details the AI pulls from the audio files point to an unsettling truth: the industry is clearly biased against female artists.

The data sets produced by Unbias show that female vocal features have a negative correlation with high performance. Broadly, this means that the presence of female vocal features might make a song less likely to land on popular playlists.

In a song, if female vocal features fall closer to 1 on a 0-to-1 scale (indicating a high value of features), then the song in question is very likely to have a female vocalist — but also likely to be lower performing.
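For readers who think in code, the scale works roughly like this. A minimal sketch; the function name and the 0.5 threshold are my own illustration, not Unbias's actual model:

```python
# Hypothetical sketch of reading a 0-to-1 female-vocal feature score.
# The threshold and naming are illustrative assumptions.

def likely_female_vocals(score: float, threshold: float = 0.5) -> bool:
    """Return True when the feature score suggests a female vocalist."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must lie on the 0-to-1 scale")
    # Scores near 1 indicate a high value of female vocal features.
    return score >= threshold
```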

After observing this discrepancy, I set out to understand why. Did it stem perhaps from deeper biases ingrained in the music industry? The question matters for more than academic reasons or satisfying personal curiosity. The answer is about representation, about what kinds of musicians can begin and maintain successful careers.

Now, discerning this prejudice in the data does not prove causation; correlation alone cannot establish it. My gut says the biases in the industry are heavily spilling over into our models. I’d encourage academic and research-driven institutions to use our data in studies that gather rigorous evidence.

Unbias is a platform that music industry executives use to make business decisions. These decisions have the potential to stall or start artists’ careers. Given the gravity of the choices being made based partly on our data, it is important to me that our models give everybody an equal opportunity regardless of gender. To be an equitable platform, we need to have the ability to level the playing field, even if bias exists in the industry.

Our current data sets, which span most genres of music, consist of 65% songs with male vocals and 35% with female vocals. From a technical standpoint, there is no direct way to flag a song as male or female. Rather, the Unbias AI uses audio features to identify the likelihood that the vocals in a given song are male or female with a high degree of certainty.

These imbalanced percentages reflect imbalances in the industry at large. This is because Unbias pulls songs from public playlists to build its data sets.

This bias, of course, isn’t harmless. It manifests in which songs become hits, which get played, and which make playlists. The fact that 65% of songs have male vocals underlines the biased mindset of the industry’s gatekeepers (whether they are aware of this bias or not), the people who decide which songs have a shot to go mainstream. Our data shows that these gatekeepers have put women at a disadvantage.

To fully understand the effects of this bias against women, we had to examine some implicit biases in machine learning.

In machine learning, machines study data sets to “learn” before making determinations. When they study human activity — which has necessarily been shaped by the historical inequities built into day-to-day life — machines are learning from data that reflects human biases. Simply put, the data sets foundational to machine learning will be skewed by the same biases as the human activity they record.

This is no different when machines learn about the music business. Collecting a representative sample of songs means, inherently, collecting a data set with biases that occur in the related subset(s) of society. The bottom line: When Unbias gathers a data set based on thousands of songs, this data set will exhibit biases built into the music industry.

In a way, machine learning can hold a mirror up to human biases. And yet machine learning can do more damage — it has the potential to amplify them.

Decisions made on biased data sets have consequences. It happens like this: the outputs of machine-learning systems, and the human decisions those outputs shape, can entrench the original biases more deeply. These decisions and their ripple effects layer new bias on top of the bias that already exists, and that new layer feeds back into the training data. The result is that stereotypes amplify, users become more alienated, and distorted social expectations become more ingrained, all perpetuating the cycle of underrepresentation.

This bias feedback loop is prevalent in music recommendation systems, Google search results, facial recognition, and natural language predictions.
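The amplification dynamic can be sketched as a toy simulation. The numbers and mechanism here are illustrative assumptions, not Unbias data: a recommender that adds new songs in proportion to current exposure, with a small nudge toward the already-dominant group, drifts steadily away from the starting split.

```python
# Toy feedback-loop simulation (illustrative numbers, not real data).
# Each round, 100 new songs are added in proportion to current
# exposure, with `bias` nudging picks toward the dominant group --
# mimicking how biased outputs feed back into future training data.

def simulate_feedback(male_songs: int, female_songs: int,
                      rounds: int, bias: float = 0.1) -> float:
    for _ in range(rounds):
        male_share = male_songs / (male_songs + female_songs)
        # Exposure share, pushed further from 50/50 by the bias term.
        pick_male = min(1.0, max(0.0, male_share + bias * (male_share - 0.5)))
        new_male = round(100 * pick_male)
        male_songs += new_male
        female_songs += 100 - new_male
    return male_songs / (male_songs + female_songs)
```

Starting from a 65/35 split, the male share only grows over successive rounds, never returning to the original balance, which is the feedback loop in miniature.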

In the music industry, the data doesn’t lie: 65% male versus 35% female. And when we balance the data sets, we get the same result, which points to a systemic bias in the industry.
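One common way to balance such data sets is to downsample the overrepresented group so both contribute equally. Unbias’s actual balancing method isn’t described here, so this is a sketch under that assumption:

```python
import random

# Sketch of balancing by downsampling: trim the larger group so both
# groups contribute the same number of songs. (An assumption about
# method; the actual balancing procedure is not specified.)

def downsample_balance(male_songs: list, female_songs: list,
                       seed: int = 0) -> list:
    """Return a combined list with equal numbers from each group."""
    rng = random.Random(seed)
    n = min(len(male_songs), len(female_songs))
    return rng.sample(male_songs, n) + rng.sample(female_songs, n)
```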

Though this is one powerful data point, there will have to be other parts of the story. To further understand this bias against songs with female vocals, one would have to ask questions and look at the statistics beyond the scope of what Unbias does: extracting meaningful data from audio files.

For instance, we know there is a similarly unbalanced split between male and female artists when it comes to record deals. We know women in the industry are treated differently from men in terms of attire, acceptable performances, and sexual harassment. There are fewer women than men on boards of directors. Management companies represent more men than women. These stats might round out our findings.

So, too, might an analysis of male and female vocalists with similar pitch. How do high-pitch males compare to high-pitch females, low-pitch males to low-pitch females?

Conducting research to identify a direct causal relationship between acoustic gender features and high-performing songs will take years. Until then, the good news is that, when predicting whether a song will be high performing, we can remove the bias against female vocalists from the calculation.

We do that by explicitly ignoring the gender of the vocalists as the AI listens to a song. This, of course, means that the AI will be operating with one less data point. This will make the ensuing projections a shade less accurate — though the AI will still be using hundreds of data points.
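In code, excluding a feature before prediction can be as simple as dropping it from the feature vector. The feature names below are hypothetical illustrations; the real Unbias feature set is not public:

```python
# Hypothetical sketch of excluding gendered vocal features before
# prediction. Feature names are illustrative assumptions.

GENDERED_FEATURES = {"female_vocal_score", "male_vocal_score"}

def debias_features(features: dict) -> dict:
    """Return a copy of the feature vector without gendered features."""
    return {k: v for k, v in features.items() if k not in GENDERED_FEATURES}

song = {"tempo": 0.72, "energy": 0.88, "female_vocal_score": 0.91}
clean = debias_features(song)  # gendered score dropped, others kept
```

The prediction model then trains and scores on `clean`, trading one data point for a gender-blind calculation.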

Although there has been some change (for instance, four women won the top four prizes at the 2021 Grammys), the big picture is far from rosy: The data clearly shows that songs with female vocals are underrepresented in the industry.

There is a reason why women in the industry have to lift each other up. Still, Unbias can help counteract this bias on some level by using its AI to predict whether a song will be a hit based on other features. Even then, the results will be slightly skewed, as bias still exists in the industry.

While we are at the cusp of considerable advancement and breakthroughs in AI, the mechanisms to redress the harms caused by unfair decisions are slow to catch up. Very slow.
