«When we live and work with computers so much, we tend to want to be like them. Today, we see ourselves more as computers. We try to “process” information or to “multi-task”. (…) We tend to think of ourselves in terms of our utility value». Douglas Rushkoff answered our questions from his place in New York. The American digital theorist reclaims control over the technologies we, the humans, have designed. His latest book, Team Human, has been translated into Italian: Ledizioni released it at the very beginning of July 2020, while Italy was experiencing the New Normal, both online and offline. What follows is the report of our virtual Q&A, one funny typo included.
You argue that there’s an “anti-human agenda” embedded in our technology. In your book you make clear that “Team Human is not against technology”, though.
Right. I’m against the anti-human agenda. I love the idea of computers and networks connecting people, allowing for free expression, helping with the redistribution of capital and resources. I don’t love that major corporations now use digital technology to promote addiction, manipulate people’s behavior, induce dangerous “fight-or-flight” mental states and turn humans against one another.
So I draw a distinction between technologies and how they are used.
I’m also aware, however, that different technologies have different biases. So while I agree that “guns don’t kill people, people kill people” I believe that guns are more biased toward killing people than pillows are. Even though both can be used for the purpose of murder. I’m aware that digital technologies, too, have certain biases. I write about them a lot in my books. Television, for example, promoted globalism – for better and for worse. The “media environment” created by television is part of what spawned the environmental movement and what took down the Berlin Wall; but it also led to global markets and runaway global capitalism. Digital technology has different biases. It’s very local and distinct – but it’s also polarizing and alienating. So we have to be careful not to use digital tech as a way of controlling others, or amplifying the extractive nature of capitalism.
If digital technologies can deter human connection and expression, during the pandemic they have also provided tools and informal paths to navigate a tough situation. Lots of people – my parents and a number of older people I know – changed their minds and finally saw the bright side of the web. At the same time, I couldn’t help thinking of the growing onlife divide, involving kids from low-income families and people with no digital “touch points”. It was like the world turning upside down: I could see the dark side of this tech deluge. What’s your point of you* on that?
*You mean point of view, but I kind of like “point of you”.
I think you’re asking about two things. Access and utility. Yes, I think the net has revealed its utility value to lots of people who thought of it as a social thing. Simple Google Docs are being used to distribute medical equipment or inform people about testing. The net is rising to the occasion of providing real functionality for real people. That said, access to the Internet has become a requirement for those hoping to get essential information, educate their kids, or apply for financial aid during the crisis. In more civilized countries, they are coming to realize that having some sort of internet access amounts to a basic human right – like literacy or access to money. How do I feel about that? I’m not sure. I think it’s only necessary when we are so dependent on top-down systems and giant global supply chains for our basic needs. We may become more local again, and then less dependent on global digital networks for our survival.
In a lecture of yours in 2018 you said: “the future became from a space for creativity to a place for speculation”.
In the last three months, what did the future become a place for?
I don’t know that that’s the exact quote. I think I was referring to our perception of the future – to the way we think about the future. So, in the early internet days, we looked at the future very creatively. We thought we could create any future we wanted. When the internet speculators came onto the scene, they wanted to make money. So they bet on the future. They wanted to be able to predict the future – to limit the possibilities – so that they could make the right bet.
Now, I think people simply want a future. Just any future at all. Many people are coming to the conclusion that our extinction has begun. Many people are aware that the probability of our civilization surviving the century is very very low. There will still be some humans on the planet, but not in the civilization as we now understand it. And without our civilization, it will be hard to continue storing the waste from nuclear power plants. Those “fuel rods” require a civilization’s worth of energy and effort to keep them cool. So I don’t think we can go back to some earlier, pre-industrial civilization.
So, in answer to your question, I think we are looking toward the future as a place we hope will still be there when we arrive.
You wrote: “All Digital People are paranoid”. I guess you aren’t talking about ALL, right?
I’m not sure of the context where I wrote that, but it sounds kind of true. I mean, digital technology induces a mind state something like paranoia. We get bits and pieces, and have to assemble the sense ourselves. In the book “Present Shock”, I call it “fractalnoia” – this need to connect the dots. In the US, we see it in the followers of QAnon, which doesn’t really say anything but puts out lots of little factoids that people have to assemble for themselves.
When you put the pieces together to make sense, it amounts to a kind of paranoia. That mindset is definitely more vulnerable to conspiracy theories. We think all the different points have to be connected. That they have to make some sense. That there is a unified theme or story – even if they are actually random.
“Having accepted our roles as processors in an information age, we strive to function as the very best computers we can be”. Could you expand on this thought of yours?
Well, I’ve explored it in whole books, like “Team Human”! But briefly, I’m suggesting that when we live and work with computers so much, we tend to want to be like them. Back in the Industrial Age, we saw people as machines. As clocks. A person could get “wound up”. Today, we see ourselves more as computers. We try to “process” information or to “multi-task.”
So when our understanding of what it means to be human is dominated by the metaphor of the computer, we tend to think of ourselves in terms of our utility value. How much can we output? How many connections can we maintain? How many ‘windows’ can we keep open at the same time? I think it’s a dehumanizing perspective on our existence. There’s no room for relationship, rapport, or even our living senses. I would rather measure my worth in hugs than in processing cycles.