Wednesday, April 10, 2019

Algorithm Bias

This week's discussion focused on biases found in algorithms. The first video we watched explained how bots are made by a teacher bot and a builder bot, and because of this layered process, we do not fully understand how bots operate. Given the lack of direct human intervention in the creation of these bots, one would think that algorithms would be relatively unbiased, but that is not the case. The second video showed how facial recognition algorithms from Face++, Microsoft, and IBM failed to detect an African-American woman's face because the system was trained on an incomplete data set that did not reflect the full range of human faces. Another video demonstrated how facial recognition mislabeled prominent African-American women in history as men or boys, which was not accurate at all.

Multiple biases can affect an algorithm, including the low representation of women in computer science (only one in four computing jobs is held by a woman). Because women are often absent from the meetings where coding decisions are made, bots are missing important information about women. This can lead to prejudicial bias in the code, sample bias where the data set is incomplete, and implicit bias introduced by the experimenters themselves.

The solution to these biases is unclear because we still need to understand exactly how bots work, but supervised training of the bots and legal action can help prevent them. Encouraging women to pursue their passions in computer science would also lead to fairer algorithms. Personally, I was encouraged to code in middle school when we participated in the Hour of Code every year. I never really got hooked on coding, because technology isn't my thing, but my sister got really into programs like Scratch after participating in the Hour of Code. I think opportunities like the Hour of Code are a really good way to introduce computer science to young girls.
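The sample-bias idea above can be sketched with a toy example. This is not any real facial recognition system, just a made-up "detector" whose chance of recognizing a face depends on how often that group appeared in its training data, so an unbalanced training set produces unbalanced accuracy:

```python
# Toy illustration (hypothetical, not a real system) of sample bias:
# a model trained mostly on one group performs worse on
# under-represented groups.
import random

random.seed(0)

# Hypothetical training set: 90% group A faces, 10% group B faces.
train = ["A"] * 90 + ["B"] * 10

def detect(group, training_data):
    # Crude stand-in for a learned model: detection probability is
    # proportional to the group's share of the training data.
    exposure = training_data.count(group) / len(training_data)
    return random.random() < exposure

# Balanced test set: 100 faces from each group.
test = ["A"] * 100 + ["B"] * 100
hits = {"A": 0, "B": 0}
for face in test:
    if detect(face, train):
        hits[face] += 1

print(hits)  # group A is detected far more often than group B
```

Even though the test set is perfectly balanced, the detector succeeds far more often on group A, mirroring how an incomplete data set skews real systems.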
