Are new technologies simply tools, or are they inherently imbued with values? 5 questions with Researcher, Broadcaster and Author, Stephanie Hare.
Stephanie Hare is the author of 'Technology is Not Neutral: A Short Guide to Technology Ethics'. She will open this year’s Rhodes Technology and Society Forum on Saturday 12 November, by speaking with Rumman Chowdhury, who until recently led Twitter’s AI Ethics Team. They will discuss how technology can support — or oppose — shared values of good governance, equity, and dignity for all.
Which values are represented in our technology today?
It varies depending on the technology!
Some experts argue that technology is neutral, and that the way we use it is what determines whether it's good or bad. However, this skips over the many considerations that determine whether a technology is ever funded or created – much less how or why it is created, and by whom, and who is deliberately or unintentionally excluded from this process – to say nothing of how the consequences, intended and unintended, of a technology are understood and mitigated. Long before we ever use a technology or experience it being used on us, many other people have been making decisions about it based on their values, and these decisions and values help shape our reality.
Once a technology is released as a product or service, we have another experience of values. As Professor Sheila Jasanoff of Harvard pointed out in ‘The Ethics of Invention’, the same technologies can be found from Kansas to Kabul, but people experience them differently depending on where they live, how much they earn, how well they are educated, and what they do for a living.
She and Caroline Criado Perez, author of ‘Invisible Women: Exposing Data Bias in a World Designed for Men’, note that part of the problem is that the ‘default human’ for whom most technologies and tools are designed is almost always a man, often a white, heterosexual man of a certain body shape and size – an assumed universality that simply does not exist.
This matters, both scholars explain, because the difference in impact is not limited to how we individually experience technological innovation – it can also change our relationships with one another, and even with our environment.
How can we balance ‘smart’ technologies so that they enhance, rather than diminish, our lives?
This depends on the reasons why we need or want 'smart' technologies. There might be examples where they are truly essential and helpful versus simply fun and convenient. The mere fact that we have agreed to call these technologies 'smart' makes them sound positive, but if we renamed them 'surveillance' technologies to reflect all their data collection, we might think more negatively about them. Do you want your devices gathering data about you in your home, in your car, and as you move through your life? Do you want them gathering information about your kids? Do you want the companies that make these devices to use your information for purposes other than those that relate to the device, or to share and sell your information to third parties?
How does surveillance affect our human rights?
Surveillance can be very harmful to our human rights, which is one reason why liberal democracies have tried to restrict its use in law. In recent decades, technology has transformed the possibilities for individual and mass surveillance by both the state and companies. Against them, our legal protections are woefully inadequate. Moreover, our regulators are in general pretty toothless, which means that even when there are laws in place to protect us, they are not well enforced. Meanwhile, there is a cultural dimension to the question, which is that our norms around human rights and surveillance may be shifting. To what extent are privacy and individual control over our data even possible today?
What do you wish everyone knew about Big Data?
That it's not something to simply shrug off. There's a reason companies and governments and hackers want our data, both at the individual and the collective level: knowledge is power. With our data they can do things that affect our lives and our life chances, and we most likely won't ever know about it. We need more transparency, explainability and enforceability around our data, and to elevate our society's understanding of what is already happening and what we can do about it.
Who is ultimately responsible for keeping us safe online?
There is no one person or entity that is responsible for keeping us safe online. We all have a part to play, and some people have more power and thus more responsibility than others. Yes, there are things we can do as individuals to protect ourselves, but ultimately online life is a complex structural problem that requires responses at many levels: from companies and governments, lawmakers and regulators, journalists who can help raise awareness and hold power to account, and teachers and parents who can and must help protect children and teach them how to protect themselves and get the best out of what life online can offer.