Humans of 2022! Can AI robots be good, evil or have a soul? – Albany Times Union

Here’s how passionately the world’s rich and powerful crave brilliant artificial intelligence research.
Wealthy patrons recently flew Rensselaer Polytechnic Institute AI expert Selmer Bringsjord to a luxurious Italian castle, where they fed him and other scientists gourmet meals for two weeks while the researchers played games and solved puzzles exploring how machines learn.
And, yes, he had ample time to enjoy Tuscany. Bringsjord is an RPI professor of cognitive and computer sciences, logic and philosophy. He also directs the Rensselaer AI and Reasoning (RAIR) Laboratory, home to robots that look clever and cuddly enough to star in a sci-fi blockbuster.
RPI students work on robots with RPI computer science professor Selmer Bringsjord (standing), who also holds a philosophy degree. Bringsjord and RPI colleague Jim Hendler are artificial intelligence experts who explore whether a robot can ever develop what humans call a soul or morality. Theoretically, an AI could be programmed to be “good” or “evil,” which gives the question urgency.
As director of RPI’s Institute for Data Exploration and Applications, Jim Hendler has had his share of adventures. Hendler was a chief scientist at the U.S. Defense Advanced Research Projects Agency. The Air Force gave him an Exceptional Civilian Service Medal in 2002.
AI is a mystery to most Americans, who know of it primarily through science fiction, where AI robots evolve to be good (Data of “Star Trek: The Next Generation,” Bishop in “Aliens”) or evil (the Terminator, “Blade Runner” replicants).
In reality, no AI has exhibited spiritual awareness. Last year, the Atlantic Council reported that Russia and Ukraine have weaponized AI (Google, Rakuten and Samsung had Ukrainian AI research centers). With Russian troops massed along Ukraine’s border, it’s a good time to wonder: Can AI ever develop into a life form with a soul?
Hendler is an Orthodox Jew, and Bringsjord describes himself as a “non-denominational orthodox Christian.” Over dinner at Perreca’s, they shared their insights into that question, into whether AI can be programmed to be good or evil, and into who can be trusted to teach morals to AI.
Q: Stanford University science historian Adrienne Mayor believes ancient Buddhist texts and Jewish folklore’s Golem (figure of a man sculpted from mud then animated by a rabbi to protect Jews) describe AI. Does religion hold wisdom about how humans should regard AI?
Hendler: Talmudic scholars try to imagine what future science will create so they can argue principles for that future before it exists. AI has been debated since the 1800s. Interestingly, there’s a phrase in our Rosh Hashanah (New Year) prayer, “We are neither angels nor robots.” I once left a toy robot on my rabbi’s desk with a little sign that said: “unfair to robots.”
Certain prayers can only be said by 10 adult Jews…A golem can’t be one of the 10. I’ve been in debates on whether IBM’s Watson (supercomputer) can count as one of the 10. The real question is, who counts as human? Can a slave count as an adult human? Converts to Judaism? Women? Who do we see as human?
It takes an immense amount of data to teach a robot to lift a package. RPI professor of computer, web and cognitive sciences Jim Hendler and his students cover a wall with data that aims to teach an artificial intelligence to comprehend words. Hendler is devoutly Jewish and has discussed the possibility of an AI having a soul with his rabbi.
Q: What characteristics would AI need to display to prove it’s a life form?
Bringsjord: Self-awareness. A fungus is alive but not aware of its life or the world around it. Sophisticated, novel use of language. The other big essential is free will. But that term doesn’t resonate with AI experts these days.
Q: “Free will’s” too religion-ish?
Bringsjord: Probably. The term currently in vogue in AI is ‘radical autonomy.’
Hendler: It means an AI makes its own decisions about what to do and when to do it.
Bringsjord: It also means AI could decide that it’s fundamentally lacking and it needs to change itself. That would be an important leap. There are continuous arguments about how to define the moment when an AI demonstrates imagination, not just…
Hendler: … not just reciting what it’s been taught. Creativity is a key attribute. There’s a term for awareness called theory of mind (Editor’s note: It’s humans’ ability to comprehend beliefs, intent, emotions of others by remembering and analyzing their behavior to infer what each person will do next.) Understanding facial expressions, interpreting body language.
Hendler: These are questions people have asked for decades; we can’t answer them all tonight …
Bringsjord: Maybe with some more espresso …
Q: It takes an immense amount of data to teach a robot to lift a box. How much data is needed to teach a robot friendship?
Hendler: We don’t know. … An AI that can beat a world champion at chess may be unable to tie shoes. Someone taught the AI chess but not shoelaces. If machine learning is to give an AI empathy, it would need to feed the AI lots of scenarios: if X and Y happen, the human may react with Z and need this from you.
Q: Like a robot doctor learning a patient might need his hand held after getting bad news?
Hendler: Partly, yes.
Q: Would it take more data to teach an AI to be evil or good?
Hendler: Evil and good probably require the same effort and amounts of data. 
Bringsjord: A new book, “The Age of AI,” by Henry Kissinger, former Google CEO Eric Schmidt and MIT Schwarzman College of Computing dean Daniel Huttenlocher, argues it’s time to establish the human values AIs should learn.
Q: Kissinger’s conduct of the Vietnam War is bitterly controversial. Many might argue against him deciding morals for robots.
Bringsjord: People are right to question who should be teaching moral and human values to AI. It’s crucial. (He notes that President Joe Biden’s technology advisers urged him to appoint a panel of experts from science, philosophy, art, music, foreign affairs and more to create AI guidelines.)
The RPI robots look adorable, but could they ever be programmed to be killer robots? Theoretically, yes, but they would need an immense amount of data. For example, an AI robot could be trained to deactivate a bomb or beat a chess champion but be stumped if told to cut a slice of purple pie in half. The bomb-squad robot would need to be taught pie-cutting. As technology increases the speed and depth of AI learning, experts wonder if robots can be programmed to make morally sound decisions.
Recently, an AI completed Beethoven’s unfinished Tenth Symphony. An AI named CLIP responded to a request for paintings of “melancholy” with eerie human figures melting like candle wax against abstract backdrops of vivid colors and menacing shadows, stylistically akin to Francis Bacon, Salvador Dali or Edvard Munch’s “The Scream.”
Q: Is this evidence of AI creativity?
Bringsjord: Whenever you read about AI creating art, ask how humans were involved: Did humans give AI a predilection for specific types of painting? That’s different from a machine originating its own taste and choices.
Hendler: CLIP got thousands of images to illustrate various emotions. Selecting images classified under melancholy is different from comprehending and visually translating emotion.
Q: When do you think we’ll have an AI as empathetic as “Star Trek”’s Data?
Bringsjord: Not in our lifetimes. Probably not our children’s lifetimes.
Q: If AI is still so limited, why do scholars like those advising Biden or analyzing Russian weaponry sound panicky about AI that hates humans?
Hendler: We may not be able to create “Star Trek”’s android but something is coming. An AI that mimics human thought processes but has no morals or soul can be dangerous.
Bringsjord: People need to think carefully about the risks of creating an entity with intelligence but no soul. It might be more like HAL than Data. (laughs) HAL was relatable to movie audiences. In “2001: A Space Odyssey,” he had more emotion than the astronauts. And he fought for his life.
Q: Are your wives scientists who embrace a religious faith?
Hendler: My wife has a Ph.D. in molecular biology and is a cantor who composes music.
Bringsjord: My wife has a Ph.D. in educational psych and statistics, and is a vice chancellor for the SUNY system. As to her embracing a faith: yes.
Q: They’re brilliant! So, they understand your AI quest and what you’re looking for?
Bringsjord: (laughs) No.
Hendler: I don’t understand what I’m searching for!
Q: But the search is exciting?
Hendler: Absolutely!
Lynda Edwards is a reporter, editor of Faith & Values and content editor. She began her career at PBS Frontline and freelancing for The New York Times, Rolling Stone, The Washington Monthly and the Miami Herald. She was a Nightly Business Report associate producer at PBS and worked for The Village Voice in New York and Miami, The Associated Press, Gannett and newspapers in Arizona, South Florida, Tennessee and Colorado. You can reach her at 518-454-5403.