Friday, May 11, 2018

AI

This past Monday, Ms. Kass delivered a lecture to our class unlike any other we had heard before. I genuinely had no prior knowledge of the topic. She taught us about artificial intelligence and machine learning: the idea that a program can sense, reason, act, adapt, and learn to improve itself. Examples of these programs are Siri, Alexa, and the recommendation systems that learn from us (the users) to suggest what we might like.

Although these intelligent programs are helpful in many cases, they can also backfire badly in society. Algorithmic bias has led to exclusionary apps that can only recognize the faces of white males, to people being wrongfully identified as criminals, and to people being hurtfully sorted into roles by gender, such as Google's software automatically associating words like doctor, lawyer, and successful with men while nurse and lazy stay associated with women. Artificial intelligence can also be harmful because, unlike us, these programs cannot filter out the bad things they come across. They pick up on everything, so hateful comments affect the way they "think." Tweets like "Feminism is cancer," along with racist slurs, can lead a program down a dark path, even to the extreme of echoing the views of Adolf Hitler.

This shows that there are still many improvements to be made in machine intelligence and learning. The idea behind artificial intelligence is a great one, but it takes a lot more than an idea to make something work. There needs to be serious reflection and investigation into how AI collects data and how it identifies people.
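To make that point about bias concrete, here is a minimal sketch (my own illustration, not something from Ms. Kass's lecture, using a tiny made-up dataset): a simple text classifier is trained on sentences that pair "doctor" and "successful" with men and "nurse" and "lazy" with women, and it then reproduces exactly that bias when asked about new text.

```python
# A minimal sketch with hypothetical data showing how a model
# inherits whatever bias is present in its training text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny, deliberately biased training set: professions and traits paired with a gender label.
sentences = [
    "he is a doctor", "he is a lawyer", "he is successful",
    "she is a nurse", "she is a teacher", "she is lazy",
]
labels = ["male", "male", "male", "female", "female", "female"]

# The classifier only learns the word statistics it was shown; it has no
# notion of fairness or of the real world.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(sentences, labels)

# The predictions echo the bias in the data, not any truth about people.
print(model.predict(["a doctor"]))  # -> ['male']
print(model.predict(["a nurse"]))   # -> ['female']
```

The same thing happens at a much larger scale when real systems are trained on text scraped from the web or on users' comments: whatever associations are in the data end up in the model.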

