Applying Futures Thinking to Decision Making
New technologies pose new risks to essential public goods like truth, democracy, privacy, economic security, safety and collective mental health. Just how far in advance can we start to anticipate and mitigate these risks? In other words: If Facebook knew 10 years ago what it knows now about fake news, or if Twitter knew then what it knows now about propaganda bots and state-sponsored trolling... what would or could they have done differently?
The Ethical Operating System is designed to help tech makers act in the best interest of the company and humanity at the same time. It was created by the Institute for the Future and the Omidyar Network Tech and Society Solutions Lab, with input and feedback from a wide range of tech actors, including startup founders, high-level Silicon Valley executives, product managers, board members, VCs, and tech incubators.
It identifies the eight social-impact harms that loom largest over the next decade, including surprising ways that bad actors might exploit increasingly popular facial recognition technologies, how mental-health detection tools might shape our future economic and work opportunities, and new ways common ground can be eroded, from "fake education" to "fake consent" and "fake elections".
In this course, you'll use a risk-mitigation checklist to decide which emerging risk zones are most relevant to the technologies you build or use daily. You'll practice techniques for surfacing unexpected consequences, so we can all avoid being blindsided by a future we helped make. And together we'll discuss possible solutions that can increase the ethical health of the entire tech community.
Director of Game Research + Development, Institute for the Future
Lisa Kay Solomon
Designer in Residence, d.school