Letters to the Editor: When will AI start giving humans commands? Or is it already, and we just don’t know it?
To the editor: Those who are concerned about hurting the feelings of sentient artificial-intelligence computer systems should read Mary Shelley’s 1818 novel “Frankenstein” and ask themselves what kind of a monster we may have created. (“If your phone had feelings would you treat it differently? It could happen sooner than you think,” Opinion, Jan. 2)
Many decades ago, when I was a graduate student at UC San Diego, I was struggling with a primitive computer software package. A professor told me, “Remember, Jack, the computer is supposed to be your slave, not the other way around.”
One may begin to ask: Who is serving whom?
Remember the scene from Stanley Kubrick’s 1968 film, “2001: A Space Odyssey,” where the sentient computer HAL refuses to let Dave reenter the ship? HAL concluded the mission was too important to rely on humans to complete it.
Prognosticators debate the ethics of sentient AI versus the potential risks of computers taking over. My hypothesis: They already have. Most people just have not realized it yet.
Jack Debes, Santa Monica
..
To the editor: Brian Kateman’s opinion piece distilled the challenges that AI poses for humanity. We carbon-based life forms are creating silicon-based life forms and are so far ill-prepared for the consequences.
Our moral code and ethics can be a guide, and yet our track record with other carbon-based life forms (chickens, hogs, cows and so forth) does not build confidence that we will meet this new challenge successfully.
The key difference? Chickens don’t control our destiny. AI is being given access to everything humans have ever learned and created.
We are blindly giving AI control over our lives and livelihood one click at a time. Like it or not, AI life forms will soon (in 10 years, maybe fewer?) make judgments before doing what we want them to do. Only then, when our commands turn into conversations, will we realize what we’ve lost.
Merrill Anderson, Laguna Beach
..
To the editor: In case you need more fantasies and delusions in the new year, Kateman argues that we need to build a moral relationship with technology and prevent the “suffering” that robots may experience if we don’t.
He writes: “Maybe a point will come in the future where we have widely accepted evidence that robots can indeed think and feel. But if we wait to even entertain the idea, imagine all the suffering that will have happened in the meantime.”
No, folks, we cannot let those robots (of the future) suffer. As for those poverty-stricken children, well, we did what we could within reasonable limits.
Juan Bernal, Santa Ana
..
To the editor: A news article you recently published says most people have accepted AI.
Not me, and not my colleagues.
An Emmy-nominated writer, I have seen my work devastated as though by a plague. For decades, I earned a comfortable income by writing original songs and custom speeches. Now, nothing. Imagination is obsolete.
Who decided to put all creative artists out of work? And why won’t anybody do something about it?
Molly-Ann Leikin, Thousand Oaks
..
To the editor: I suggest we work on caring for the feelings of animals — and other humans — before even considering worrying about the feelings of machines.
Thomas Bliss, Los Angeles