In his article “Faux Friendship,” William Deresiewicz makes this frighteningly plausible argument: “We have given our hearts to machines, and now we are turning into machines” (12). The notion that technology is making us more like machines than human beings is an idea suggested by many of the texts we have read over the past couple of weeks. In my opinion, there is no denying the essential validity of these claims.
Deresiewicz discusses how this is already taking place at a social level. According to Deresiewicz, the Facebook phenomenon is changing the way we view such concepts as friendship and is reshaping our methods of social interaction. Yet Facebook is not the first technological development to change the way we relate to others. Technology, in general, seems to produce increasingly distanced and isolated lives. We work in small, confined cubicles, travel in enclosed vehicles, and even order food and obtain money from the bank with little to no social interaction. Compounding this isolation is the recent ability to work remotely. With the option of logging in to a company network and accessing a remote computer, many jobs do not even require individuals to leave the confines of their own homes.
At a cognitive level, a society accustomed to convenience will be increasingly content to let computers do all the work. Google’s belief that “we’d all ‘be better off’ if our brains were supplemented, or even replaced, by an artificial intelligence” (Carr) seems outrageous and unbelievable. However, isn’t this already taking place? Just think of how we write our papers. A word processor automatically corrects our spelling errors and even flags sentences that are improperly constructed. Eventually, individuals may only know how to spell the words a spell checker cannot fix for them. Similarly, when driving to a location, we no longer have to pull out a map and figure out the best route; we only need to type an address into a GPS, and we are automatically told where to go. Isn’t it true that our brains are already being “supplemented, or even replaced, by an artificial intelligence”?
The differentiation between humans and machines is steadily dissolving. Humans are becoming more like machines, and machines are becoming more like humans. Eventually, it will be much harder to tell the difference between the two. This seems to be what William Gibson anticipates in Johnny Mnemonic. Gibson’s story obscures the boundaries that have traditionally marked us off as human. The lines between human, machine, and even animal are significantly blurred, such that it becomes difficult to determine which of the characters, if any, are human and which are not. In addition to this distortion of identity is a depersonalization possessed by nearly every character in the story. Gibson’s characters are entirely monotone, unemotional, and unsympathetic. This cold, mechanical mood persists throughout the majority of Gibson’s text. Furthermore, in Johnny Mnemonic, the human brain is a mere commodity, a receptacle in which information can be stored, retrieved, and even bought and sold at will.
At first glance, Gibson’s text seems like a significant stretch. However, an honest evaluation reveals that the lines between human and machine are actually being blurred in our own society. Considering Google’s frightening perspective that “the human brain is just an outdated computer that needs a faster processor and a bigger hard drive” (Carr), Gibson’s literary world may not be too far off from our own potential future.
I have an interesting point of contrast that was brought up to me by someone who is a proponent of automation and standardization by technology when I posed your same question. He said (essentially): “What I want is for machines to automate the processes that I don’t care to do: basically everything trivial that takes up the cognitive energy I would like to spend on other things, such as deep, analytical thinking. I don’t care if these skills atrophy, because I don’t find them valuable.” So, for example, he didn’t mind spell check because he felt that spelling is not a necessary skill, and that if it is lost, humanity will not suffer for it. And how do we know that maintaining these skills isn’t distracting us from doing something more valuable with our brains? (Just trying to play devil’s advocate here 🙂 )
Strangely, I’ve never thought of it that way. When posed in that light, it is interesting to think about what we could be capable of, but there’s a huge problem with that idea. Yes, it’s nice to think that if we freed up cognitive energy in this way we would actually use it, but how realistic is that? I’m always telling myself that if I just had more time, I would get to all those unread books on my shelves (or whatever the case may be), but when I do have free time, I’m far more likely to waste it, sitting in front of the television or something similar.
I’m not saying that everyone is like me, but a fair percentage of the population probably is. And some portion is lazier, and some portion is more driven. Some portion would actually use the freed cognitive energy for intellectual pursuits. My guess, though, is that the majority would not. If we’re talking about a future in which technology may become ever more integrated into not just our daily lives but potentially our physical bodies as well, that is also a future of ever-increasing distraction and information overload. How many of us, with machines “to automate the processes that I don’t care to do,” would realistically end up spending a good chunk of that newfound free time essentially zoning out on Internet time-wasters of one form or another?
A slightly different response to Kristin’s comment, but perhaps along the same lines as Alissa’s. My experience is that spell checkers do not work reliably. I have run student essays through them only to find that errors slipped past uncaught. The result of this at the macrocosmic level might be the widespread production of texts without standardized spelling. That was the case in the Middle Ages, when reading habits were very different. Reading arguably required more concentration and attention as a result of less predictable spelling.
Kristin,
Thanks for the interesting reply. I’m glad to see some good discussion on this topic. To an extent, I agree with your friend’s perspective that technology can be useful when it frees us up for more valuable endeavors. However, I think that allowing ourselves to lose certain cognitive abilities could be dangerous. Call me old-fashioned, but I don’t think that knowing how to spell is a dispensable ability. The more competent we are with language and grammar, the more capable we will be of deep, analytical thinking as well. In other words, I think there is a direct relation between the two: being able to correctly structure words, sentences, paragraphs, etc. enhances our ability to structure thoughts and lines of reasoning. Still, I definitely see the appeal of your friend’s argument.