Readings:
- Trevor Paglen, "Seeing-machines"
- Tom Simonite, "Machines Taught by Photos Learn a Sexist View of Women"
Trevor Paglen's account of "seeing machines" argues that intentional, camera-based photography, as it has commonly been understood, is an increasingly minor contributor to the vast number of images produced in the world. This line of thinking leads him to the conclusion that most images made today aren't even made for humans, a point taken up by Simonite for WIRED. Simonite's article raises questions about machine learning trained on culturally constructed archives, both textual and pictorial.
Post a short response to these texts to your Tumblr. Consider one of the following questions:
- What pictures do you encounter (regularly, occasionally, or even just once) that don't have an easily identifiable author? Where do you encounter them? What perspectives (points of view) do these pictures seem to represent?
- How do you think AI represents a challenge to "truth" versus "aspiration"? When might you want a machine to "see" things in a manner that isn't concerned with bias, versus wanting machines to avoid cultural bias, like gender and profession? Try doing a Google image search for different professions. How do the results compare with your existing assumptions about who represents those professions? Do they confirm, surprise, contradict?