‘Nerd,’ ‘Nonsmoker,’ ‘Wrongdoer’: How Might A.I. Label You? Discussion Questions

If bias is inevitable, how should we assemble the teams that classify and organize the underlying databases? What criteria should they use, and how should we source our data in the future?

To what extent is it the responsibility of the designers of these products to be transparent about the process? I’m struck by the part that says, “Microsoft and IBM have updated their face-recognition services.” What does this mean?

If A.I. is always learning, is it possible for it to unlearn these biases? Why is it necessary for it to go through these racist and misogynistic labeling processes, and what kind of group was in charge of them in the first place?

How can we solve this crisis in how A.I. learns from data?

How can we ensure the security of personal information?

The article says, “The fundamental truth is that A.I. learns from humans — and humans are biased creatures. ‘The way we classify images is a product of our worldview,’ he said. ‘Any kind of classification system is always going to reflect the values of the person doing the classifying.’” What does this mean? How can we solve this problem?

Do such biases exist because A.I. is often a reflection of the views prevalent in society? If so, should only selected information be fed to such programs, and should the ways in which they process and respond to it be limited?

Will AI always be a mirror of who we are? 

Should we expect A.I. to have objective conclusions/opinions?

Would it be necessary in the future to have a comprehensive guide for algorithms? Would that guide itself be biased? How could we avoid that?

Some things this article makes me think of are Alexa and the concept of the ‘uncanny valley’ in artificial intelligence. Does the bias of an A.I. system reflect the bias of its designers and engineers? Who gets to determine these responses?