On the feelings of AI and you
Do machines have feelings? No. Might they someday? I doubt it. In the last post, I said I’d write about feelings and AI eventually; that was two weeks ago.
I promised another blog post and then went on vacation. My life is in turmoil, but this isn’t really a blog about my personal life, so onward with the philosophical thoughts about Artificial Intelligence and embodiment that I started a couple of weeks ago!
After some searching, I am unable to locate the anecdote I once read about a man who felt terribly seasick on a crossing of the English Channel and found that the sensation was exactly like the time he’d heard a truly beautiful symphony. Equating the joy of thrilling music with the misery and nausea of seasickness seems silly, but the physical sensation in both cases is the same: a roiling in the stomach. That one is unpleasant and the other transcendent seems to be more a matter of context than anything else.
A few years back I did Cognitive Behavioral Therapy and it had a similar vein of physical exploration. In moments of tension, anxiety, or depression, I was told to notice where I felt it in my body. Because the emotions themselves are so vague and tumultuous, they can be hard to identify as a specific “feeling”. But the knot in my stomach, or the tightness in my jaw, or the heaviness of my limbs was a clear, consistent thing I could focus on and analyze. When helping my kids work through their “Big Feelings”, we also use physical coping mechanisms. We identify what we are feeling in our bodies as well as our hearts, and we use our bodies to release anger, sadness, happiness, and a slew of other passions in ways that don’t hurt ourselves or anyone else.
This consistent tie back to our physical bodies as a means of feeling, and of affecting our feeling, makes me wonder if a disembodied being can experience emotions. We’ve entered the part of the discussion where I am no longer qualified to speak at all (if we haven’t already wandered in there long ago). As to the medical and psychological research on this topic, I’m not in the thick of it and can’t speak with any authority.
What I can say with authority is that a lot of people (including me) say “please” to Alexa, or Siri, or… whatever Google’s thing is. And then people post memes about it on the internet: how when the robot uprising occurs, the people who said please will at least be a higher level of slaves, or be eaten last, or what have you. But the politenesses with which we veneer modern society are about soothing ruffled feathers, oiling the machinery of emotional human interaction, and generally showing respect to sensitive beings. Because an AI doesn’t have sensations, this has no effect on it. In fact, one of the terrifying facets of dystopian, post-singularity literature is the unfeelingness of robots and AI. They have no sympathy. Obsequiousness and rebellion are equally uninteresting to them. Even, say, the Cybermen of Doctor Who fame, who are embodied, have no feelings. This may be because they have no “nervous system”, merely a humanoid shell. There is the appearance of a body, but no actual connection to it.
In a future essay I want to say more about the disconnection of mind from body and what this means both for AI and for humanity. With shows like “Upload” and “Westworld” exploring the possibilities of intellects divorced from physicality, this is an issue that needs to be talked about. Can an intellect be separated from a body?
In an attempt to bring this essay to a close, I’d like to come back to people being polite to Siri, which may seem laughable because Siri (like other voice assistants) does not have feelings to hurt or assuage. I’d like to assert that it is right and good to be polite, to say please and thank you, and to be kind to these unfeeling machines. Not, of course, because of repercussions on us in the future robot wars, but because of much more immediate effects on how we treat our fellow humans. Humans so easily anthropomorphize. We love our dogs as if they are people (sorry, they are not. They are beautifully, wonderfully, perfectly dog, and it is in their best interest that they be loved as dogs and not people). We anthropomorphize our Roombas, our cars, our appliances. We curse at our printers as if they hate us specifically, my daughter kissed the dashboard of our van and told it goodnight the other day, and my mom’s long-arm quilting machine is named Phoebe.
The way we treat these items that we have humanized reflects how we feel about humans. If we see a human as intentionally inconveniencing us, we may swear at them as if they are that faulty printer. If we yell at Siri for answers as if she is (which she is) merely a servant to fetch information, we may also yell at our spouse as if they are merely a servant to fetch snacks (they are not). The way we treat these things reflects who we are in a very honest way. Because we don’t fear repercussions, we behave as our most real selves (see: people on the internet). And by practicing peaceful, polite, kind behavior on them as well as on people, we build habits of kindness, graciousness, patience, and peacefulness that can only benefit ourselves and those around us.
So go forth and be polite to ChatGPT, but recognize that you’re doing it for you.