In response to the #BlackLivesMatter movement, and in light of its recently released civil rights audit, Facebook is conducting a new investigation into potential bias within its algorithms.
Facebook and Instagram will both launch new examinations of their core algorithms, as reported by the Wall Street Journal.
As per the Wall Street Journal:
“The newly formed ‘equity and inclusion team’ at Instagram will examine how Black, Hispanic and other minority users in the U.S. are affected by the company’s algorithms, including its machine-learning systems, and how those effects compare with white users, according to people familiar with the matter.”
As noted, the move comes in response to growing demands for more inclusive representation at all levels. The recent #BlackLivesMatter protests highlighted several issues with the platform’s processes, including possible algorithmic biases, as did Facebook’s own civil rights audit, which was carried out over two years and released earlier this month.
As per the report by Facebook:
“Because algorithms work behind the scenes, poorly designed, biased, or discriminatory algorithms can silently create disparities that go undetected for a long time unless systems are in place to assess them.”
Facebook’s algorithms have unwittingly facilitated discriminatory practices in the past. In 2016, a ProPublica investigation found that Facebook’s demographic “cultural affinities” categories could be used to exclude certain racial groups from ad audiences, in violation of federal law.
Facebook subsequently removed advertisers’ ability to exclude racial groups from ad targeting. At the time, however, Facebook also acknowledged that many such targeting options had not been deliberately created, but had emerged from its machine-learning systems, which generate options based on usage patterns and the data available.
Last year, Facebook removed potentially discriminatory targeting options from housing, employment, and credit advertising. However, experts have noted that any algorithmic method built on collected data remains vulnerable to inherent bias.