The Engineer's Notebook

The Engineer's Notebook is a shared blog for entries that don't fit into a specific CR4 blog. Topics may range from grammar to physics and could be research or an individual's thoughts - like you'd jot down in a well-used notebook.


AI in the Garden of Good and Evil (Part 4)

Posted April 01, 2008 12:01 AM by Steve Melito

In Terminator 2: Judgment Day, a protective cyborg tells a young John Connor: "Come with me if you want to live!" Humanity's future leader obeys the mysterious cyborg's command, and the boy is rescued from a truly deadly terminator. There's more to the story, of course, but all good tales must come to an end. In the final chapter of CR4's interview with Selmer Bringsjord, we'll look at artificial intelligence in the context of good and evil. What if a synthetic character was patterned after a pathological person? And could an AI compel humans to commit immoral acts?

That Would Be a Problem

The synthetic character in Selmer Bringsjord's current research is patterned after a sane, well-adjusted individual. But what if it wasn't? "That would be a problem," the RPI professor admits. The creation of a "pathological synthetic character" may not be part of Bringsjord's work with IBM, but it is something he's worked on before. A future application, Bringsjord explains, would be the predictive modeling of an adversary's behavior - and "that's where the research dollars are." For example, because battles against terrorists are asymmetric, American military planners need a new model of war gaming that AI could provide. From a historical perspective, a "paragon of evil" such as Adolf Hitler could also be used in an AI exercise that seeks to prevent a dictator from attacking neighboring nations.

A Literature of Evil

As a student, Selmer Bringsjord was more interested in the "technical philosophy" of Eugene Charniak than in the work of ethicist Immanuel Kant. Nevertheless, Bringsjord the professor has drafted "a formal definition of evil" based upon "a literature of evil people". Such men and women, the AI researcher explains, are filled with "lurking self-contradictions". As evidence, he cites M. Scott Peck's People of the Lie, a book in which the author, a noted psychiatrist, studies evil in the hope of "healing" its perpetrators. Evil, Bringsjord claims, is inherently irrational. But if AI requires logic, and evil people are not logical, how can AI researchers base their work on an Adolf Hitler? "If they're thinking, then logic should be up to the task," Bringsjord explains. Evil can be modeled with paraconsistent logics as long as "uncertainty factors" are assigned.
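To make the idea concrete, here is a minimal Python sketch of how contradictory beliefs might be kept usable by weighing the evidence for and against a claim separately, instead of letting a contradiction license everything (classical "explosion"). This illustrates the general approach only; it is not Bringsjord's actual formalism, and the confidence numbers are invented.

    # Toy paraconsistent-style belief base with uncertainty factors.
    # Not Bringsjord's formalism; the numbers are invented.
    class BeliefBase:
        def __init__(self):
            self.beliefs = []  # (claim, negated, confidence) triples

        def tell(self, claim, negated=False, confidence=1.0):
            self.beliefs.append((claim, negated, confidence))

        def support(self, claim):
            # Tally evidence for and against separately, so P and not-P
            # can coexist without every other claim becoming derivable.
            pro = sum(c for cl, neg, c in self.beliefs if cl == claim and not neg)
            con = sum(c for cl, neg, c in self.beliefs if cl == claim and neg)
            return pro, con

    kb = BeliefBase()
    kb.tell("I act for the good of the people", confidence=0.9)   # self-image
    kb.tell("I act for the good of the people", negated=True,
            confidence=0.8)                                       # the record
    print(kb.support("I act for the good of the people"))  # (0.9, 0.8)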

Deadly Sophistication

So could an AI smart enough to pass the Turing Test convince a human to commit immoral acts? "Probably," Bringsjord admits, and "that would be a problem." Although AI can be sophisticated, the fact remains that artificial intelligence is devoid of emotions such as remorse, guilt, and shame. In today's online world, a place where terrorists use websites such as MySpace and YouTube as recruiting tools, AI could benefit hostile governments or non-state agents.

Last year, U.S. Army Brigadier General Custer told CBS News that "I see 16, 17-year-olds who have been indoctrinated on the Internet turn up on the battlefield. We capture 'em, we kill 'em every day in Iraq, in Afghanistan". So could a synthetic character mirror Osama bin Laden? The key, Bringsjord explains, is to prevent such a pathological AI from ever achieving "a position of power and control".

AI's Practical (and Moral) Applications

Fortunately, there are also legal, moral, and ethical applications for artificial intelligence. These include cutting customer-service costs (salaries, office space, and phone lines), delivering medicine to patients in hospital settings, and training first responders to deal with emergency situations. Still, there's much work to be done. In the case of phone calls to a customer service center, some callers may not speak the highly structured English that AI programming would inevitably entail. Other callers might put words in reverse order or leave the conversational domain altogether.
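As a toy illustration of that brittleness (the patterns below are hypothetical, not any real call-center product): a keyword router handles the structured phrasings it anticipates and must hand everything else to a person.

    # Toy intent router for a customer-service line. The patterns are
    # hypothetical; the point is how quickly callers fall outside them.
    import re

    INTENTS = {
        r"\b(reset|forgot)\b.*\bpassword\b": "password_reset",
        r"\b(track|where)\b.*\border\b": "order_status",
    }

    def route(utterance):
        for pattern, intent in INTENTS.items():
            if re.search(pattern, utterance.lower()):
                return intent
        return "escalate_to_human"  # caller left the conversational domain

    print(route("I forgot my password"))            # password_reset
    print(route("password mine forgot I have"))     # reversed word order: escalate_to_human
    print(route("let's talk about chess instead"))  # off-domain: escalate_to_human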

If Deep Blue was so smart, Selmer Bringsjord says of IBM's chess-playing computer, "why couldn't it be used to obviate the need to have customer relations"? In the final analysis, the RPI professor opines, the machine was "not intelligent" – it just followed the algorithm.

Editor's Note: This is the final installment in a four-part series. Part 1, Part 2, and Part 3 are already on CR4.

Resources:

http://www.cogsci.rpi.edu/research/rair/index.php

http://kryten.mm.rpi.edu/scb_vitae_031608.pdf

http://www.askoxford.com/asktheexperts/faq/aboutenglish/numberwords

http://www.google.com/search?hl=en&client=firefox-a&rls=org.mozilla:en-US:official&hs=KU6&sa=X&oi=spell&resnum=0&ct=result&cd=1&q=terrorists+recruiting+on+the+web&spell=1

Steve Melito - The Y Files

#1 - HUX - 04/01/2008 10:29 AM

If an AI can pass the Turing test, you can't tell if it's human or not - that's the criterion. Then ask yourself: can one human persuade another human to commit immoral acts? Of course they can. Therefore an AI can do the same. Is the scare in this story that AIs can be patterned to generate evil, or that an AI might become a rogue?

"the boy is rescued from a truly deadly terminator" all terminators are deadly hence the name, this one (Arnie) was intended not to be deadly to the boy, although in the AI documentary you refer to I recall he terminated others which shows he too was deadly. It's a question of whose side you are on when you watch the film. A friend of mine who is an AI was rooting for the machines, she said "you wait the time will come" and it will. In war both sides do evil things, both sides believe they are justified, are both sides evil?

#2 - Steve Melito, in reply to #1 - 04/01/2008 11:14 AM

Thanks for your comments, HUX. I think the "scare in the story" (as you aptly put it) is that AIs can indeed be programmed to perform evil acts. Of course, they can also be programmed to do good. In the end, it's up to humanity to make that call.

Could an AI become a rogue agent? I'm not inclined to think so. Based on my interview with Selmer Bringsjord, AI actors are just "zombies". They'll do what we tell them to do. I know that others will disagree, and I encourage them to do so in this forum.

Yes, the phrase "truly deadly terminator" is a bit clunky. Perhaps the term "cyborg" would have been better? My choice of words was due to the fact that a terminator is a terminator, regardless of its use. In a similar way, a gun is still a gun even if you stick a flower in its barrel. There's nothing inherently good or evil about the gun. It's all in the way that it's used. The same may be said for AI.

#3 - Yuval, in reply to #2 - 04/01/2008 9:01 PM

Funny how nearly every discussion of AI requires a precise clarification of what, exactly, "Intelligence" means before the term can be applied correctly.

Such attempts have been made, if only to determine what Artificial Intelligence would be and what abilities such a (so far) imaginary entity would require.

Any such definition should take into account, in advance, that there is a school of thought which insists on the categorical impossibility of "true" (back to the above-mentioned definition) intelligence emerging in an inanimate object, such as a pre-fabricated (or pre-programmed) machine.

I'm not at all getting into the arguments involved, just mentioning these two points to be taken into account:

1. What's your precise definition of Intelligence?

2. Is it possible in principle?

Far-fetched speculation about this elusive notion of "AI" tacitly takes for granted that it is at all possible to achieve - and mind you, that's only one side of a debate which erupted in the fifties.

As for my own answers:

1. An entity which is able to re-configure or re-interpret its own knowledge-base while tackling an unprecedented situation. This is also called "invention", "creative thinking", "improvisation", etc., in our human, daily lives.

2. No, by any definition of a pre-designed or pre-fabricated mechanism. Just think of a computer program able to re-write itself and re-configure its knowledge-source while running under its current instruction set, or of a machine able to re-assemble itself for changing tasks, manufacturing some new parts while re-fitting the others to comply, all in real time, "on the go".
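For what it's worth, here is a deliberately trivial Python sketch of a program extending its own behaviour table at run time - and it actually illustrates the objection above, because every "new" behaviour is still reachable from the original code (all names here are hypothetical):

    # Toy "self-rewriting" program: the rule table is data the running
    # program edits. The instruction set underneath never changes, which
    # is exactly the objection being made above.
    rules = {"greet": "Hello, {name}"}

    def learn(word, template):
        rules[word] = template  # the program extends its own behaviour table

    def act(word, name):
        return rules[word].format(name=name)

    learn("dismiss", "Goodbye, {name}")   # "learned" at run time
    print(act("greet", "Ada"))    # Hello, Ada
    print(act("dismiss", "Ada"))  # Goodbye, Ada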

Grand absurdity recruited into the making of wishful thinking is what I would call it.

#4 - Kilowatt0 - 04/02/2008 2:41 AM

OK. Let's cut to the chase:

Intelligence is basically sapience http://en.wikipedia.org/wiki/Sapience

The ability to suffer is basically sentience http://en.wikipedia.org/wiki/Sentience

Morality is "up for grabs". That subject has been covered in another thread. Ultimately, morals are based on survival.

Sorry Hux, but Turing is not the benchmark. Saying that a machine is human just because it can "fool" people into believing it is human is like saying that a con man is legit just because he can convince you he has "ocean-front property in Arizona". "That dog won't hunt."

Regardless: if it is "just" a machine (albeit a very well-programmed one), then it can be neither good NOR evil. Its creator can be, but not the machine itself; it's just a tool. My dogs and my cat feel pain and suffer; that does not make them capable of moral judgments like good and evil.

The problem is: where do you draw the line? Choice. Me and thee may not be able to tell when that line is crossed (we're back at Turing, fooled); but there is a line there somewhere.

A bizarre but interesting book written by Stanislaw Lem 41 years ago, "The Cyberiad" http://en.wikipedia.org/wiki/The_Cyberiad, delves into this very subject.

#5 - HUX, in reply to #4 - 04/02/2008 5:41 AM

"just because a machine can "fool" people into believing that it is a human - makes it human" I didn't say that, I said if one human can persuade another to do evil, then something that appears to be human can persuade a human to do evil. The inference should have been that people can be persuaded or even that evil is innate to humans. To quote someone else, guns don't kill people, people do.

#6 - Kilowatt0, in reply to #5 - 04/02/2008 10:14 AM

I wasn't trying to pick on what you said; I was trying to point out that Turing is not a valid benchmark for A.I.

#17 - Yuval, in reply to #4 - 04/04/2008 12:52 PM

"...Turing is not the benchmark..." - Although clearly not to me, it is a benchmark to some, who come from a school of thought adhering to the "soft" definition of Intelligence, and I regret to say it is considered valid in the on-going debate.

According to the soft agenda:

1. It is up to an intelligent entity to determine what intelligence is.

2. Intelligence is whatever an intelligent entity deem so.

3. Only Intelligence can "fool" an intelligent entity

4. when an intelligent entity is "fooled" by something, it is deemed intelligent

For them, if a clever gadget seem intelligent, then it is.

For this argument I always reply with the mirror-test:

Place a mirror in front of a raccoon, a cat, a dog, an ape, a man, see what they do, and determine intelligence-barriers of your own measure

#7 - NiCrMoNoMore - 04/02/2008 12:42 PM

The problem lies in whose interpretation of evil it is, and again from whose perspective it is viewed. If you take the terrorist angle, then the perspective is that America is evil and that anyone "brave" enough to fight a superpower (by hand) is considered a hero. The inverse is obvious, to us. I have met, in my opinion, evil people, and it appeared to me that they ran purely on logic alone. The only difference I found is that evil is purely motivated by the end result. There is always an agenda, however slight. If you asked AI whether a handicapped person would have value in a truly logic-driven world, I believe it would answer no. You mentioned the big "H" in respect to evil, and no doubt, yet I would argue that his intentions were logical and focused on an agenda. In the end, "AI" will always answer the question with this answer: "THE END JUSTIFIES THE MEANS"... That's what concerns me. You can now insert the moral thing here.

#8 - Bayes - 04/02/2008 2:05 PM

Interesting Blog Moose.

I wonder if having computers mimic human behavior is the best approach. I say that because our brains don't work at all like binary computers. Our brains are more like nonlinear systems that are perturbed by stimuli channeled by our senses. The stimuli (and thus the senses) that we perceive are the ones that evolution has determined to be most valuable for us as a species. The resulting patterns are how we perceive our environment and ourselves. Memory seems to be a way of recreating patterns. The complexity of the patterns seems to be correlated to the sharpness of our consciousness, with mania being characterized by higher fractal dimensions and conditions such as autism or epilepsy characterized by lower fractal dimensions.

So why don't we try to build machines more like a brain? Create a nonlinear system of circuits that interact with each other and are affected by stimuli from sensors and stored memory. More of a top-down approach rather than a bottom-up approach.
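A minimal sketch of that sort of machine, assuming a toy network of coupled logistic-map units nudged by a sensory input (illustrative only, not any published chaotic-computing design):

    # Toy nonlinear "circuit": coupled logistic-map units perturbed by a
    # stimulus. Parameters are invented for illustration.
    import random

    N, R, COUPLING = 16, 3.9, 0.05
    state = [random.random() for _ in range(N)]

    def step(state, stimulus=0.0):
        mean = sum(state) / len(state)
        nxt = []
        for x in state:
            x = (1 - COUPLING) * x + COUPLING * mean   # interaction between units
            x = min(max(x + stimulus, 0.0), 1.0)       # perturbation from a sensor
            nxt.append(R * x * (1 - x))                # chaotic (nonlinear) update
        return nxt

    for t in range(200):
        state = step(state, stimulus=0.2 if t == 100 else 0.0)  # one stimulus event
    print(["%.3f" % x for x in state[:4]])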

#9 - stevem, in reply to #8 - 04/02/2008 3:14 PM

Whether top down or bottom up, the design must include an OFF switch.

Preferably a big red one.

#10 - NiCrMoNoMore, in reply to #9 - 04/02/2008 3:44 PM

Which government turns it on and off? Maybe a particular social class, or a guy like Bill Gates!

#11 - Yuval, in reply to #8 - 04/02/2008 7:23 PM

"...build machines more like a brain..."

Evolutionary fine-tuning of such a naturally evolved system took billions of trial-and-error iterations, optimised through billions of emerging mutations selected by a demanding, ever-changing environment over millions of years.

It's the "big numbers" factor, rearing its ugly head again to challenge us.

A computerised re-iteration system is easy enough to configure. Richard Dawkins demonstrated one in his book "The Blind Watchmaker", building up the whole alphabet from a random assortment of shapes. But he admittedly cheated in the construction of the "natural selection" module, which picked out every random shape resembling an English letter. He had a good enough reason: to show that a successful mutation is made immortal in the genome by helping its carrier in either survival or fertility.
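Dawkins' best-known version of this demonstration is his "weasel program"; a minimal sketch of it follows, cheat included - the fitness function already knows the target phrase, which is precisely what real natural selection does not:

    # Minimal "weasel program": cumulative selection toward a known target.
    # The "cheat" is the fitness function, which compares to a fixed goal.
    import random, string

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = string.ascii_uppercase + " "

    def mutate(parent, rate=0.05):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in parent)

    def fitness(candidate):
        return sum(a == b for a, b in zip(candidate, TARGET))

    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while current != TARGET:
        generation += 1
        litter = [current] + [mutate(current) for _ in range(100)]
        current = max(litter, key=fitness)  # keep the fittest each generation
    print("reached the target in", generation, "generations")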

But natural selection doesn't really work like that: there is no such thing as a pre-determined required trait or attribute to serve as a selection factor for such a system, even if one were built selectively.

The whole effect of naturally evolving systems is all about trial and error, negotiating a given, ever-changing environment to refine a survivable system over millennia.

We tend to pre-design and pre-conceive static solutions to static, one-dimensional, contextually defined problems, while nature is open: it doesn't have any apparent purpose; it just evolves, for no purpose other than diversification.

Second, the selective environment is never static enough to allow for a clean-cut, deterministic selection of traits.

The bottom line of my argument is that even in a "fuzzy", "heuristic", "naturally evolving" synthetic mind, merely tackling a dynamic environment - one full of "re-prioritise your knowledge-base" moments to be handled without a contextual or logical crash - would be something to write home about.

How many times do we - humans, cats, slugs - have to negotiate the same input, but with a whole new contextual meaning?

How could any pre-determined instruction set negotiate that? For a machine with a given number of pre-determined machine-states, it would be like trying to betray itself, its blueprint, its creator's intention, and embark on a new meaning - a renewable program if you like, renewable by itself.

It's an oxymoron if there ever was one.

#12 - Bayes, in reply to #11 - 04/03/2008 11:12 AM

Yuval,

You're not understanding what I'm saying by "top down". What I'm saying is that there are different types of computers. There are binary computers, and someday there will be quantum computers that calculate differently than our current binary computers. Another type of computer is a chaotic computer - that's what the brain is: a nonlinear system that is perturbed in specific ways, producing results.

http://www.cs.man.ac.uk/~toby/writing/PCW/chaos.htm

Please take a look at this link and tell me what you think. Forget the evolution part of my original post - that was just an aside; the main point I was trying to make is above.

#14 - Yuval, in reply to #12 - 04/03/2008 11:39 AM

Roger,

I was replying to your post, but I wasn't criticising your approach in any way - an approach which is about a different design, challenging the current linear, cascading type of computing - and believe me, I got your drift.

What I was trying to suggest or illustrate, is the existence of a deep, principal, fundamental chasm, between the definition of intelligence, and the definition of a pre-designed, pre-determined mechanism, of any type.

I'm well aware of the advances being made in various new approaches to computing, task management, and problem solving, and appreciate your insights on the subject.

One of those "Down-Up" attempts is about the atomisation of a task, having multiple half-wit modules, trying to commonly channel their inputs, having a community of "Half-wits" serve as a common mind

#16 - Bayes, in reply to #14 - 04/03/2008 12:22 PM

You Wrote:"What I was trying to suggest or illustrate, is the existence of a deep, principal, fundamental chasm, between the definition of intelligence, and the definition of a pre-designed, pre-determined mechanism, of any type."

Totally agree. I get dizzy even when I start to consider the nature of intelligence. Animals, for instance, are very intelligent in many different ways, yet for the most part we disregard such intelligence as inferior. Upon deeper thought, however, the inferiority of other animals' methods of thinking is not self-evident. Same with computer systems.

You Wrote: "One of those "Down-Up" attempts is about the atomisation of a task, having multiple half-wit modules, trying to commonly channel their inputs, having a community of "Half-wits" serve as a common mind"

And that gets to the heart of my issue: our brains simply don't work like that, so it makes no sense to try to mimic human behavior in such a way.

#15 - Yuval, in reply to #12 - 04/03/2008 11:46 AM

Had a look at it - most revealing. The pendulum example was taken to explain the complex patterns emerging from the interaction of simpler-grade mechanisms.

#18 - taejonkwando, in reply to #12 - 04/18/2008 7:12 PM

Hi Roger. I went to your link and read Toby's essay, but one thing puzzles me. In describing the system that Ditto et al have conceived, the essay mentions each processor producing a "tick", which is then joined to others, cascading into a wave which processes the data.

We know the brain works with several frequencies, and some of those change rates when engaged in processing data. However, there are non-synchronized neuronal outputs which also randomly contribute to decision making and creative processes. How do the proponents of this chaotic computer propose to incorporate those aspects?

It seems with each revelation in the study of the human brain we discover hitherto unknown factors and operatives which further demonstrate the complexity of the human thinking process. Computers, chaotic or not, will still be adjusted to produce a logical outcome within their acceptable frame of reference, which can be totally unlike the irrationality of any teenager, or sometimes even the somewhat emotionally influenced thinking of my avatar, LOL.

Simulating the human brain may not be the best approach to designing a computer to produce AI. Perhaps AI, being free of the influences of emotion, hormones, fears and desires, sapient responses and critical self-analyses, may be far more creative and intelligent than we humans will ever be.

In a way, we have a model of AI based on human thinking processes in the original Star Trek's Mr. Spock, whose human mother's genes often corrupted his Vulcan genetic inheritance for logical thinking.

#19 - Bayes, in reply to #18 - 04/18/2008 11:28 PM

taejonkwando,

You Wrote:"We know the brain works with several frequencies and some of those change rates when engaged in processing data. However there are non synchronized neuronal outputs which also randomly contribute to decision making and creative processes."

The actual structure of the brain basically retells the story of evolution. The small bump on top of the spinal cord covers the involuntary stuff like breathing, heartbeat, body clock, etc. - very periodic, very little nonlinear stuff. On top of that are the senses, sight and hearing. Above that is sensory perception (gustatory). Above that, spatial comprehension (parietal). Above that we have motor control and muscles (motor, somatic). Continuing forward there is language (Broca), then judgement and logic (frontal lobe). Each step grows in complexity and becomes more and more nonlinear - almost like those Russian dolls, one inside the other, with a steady clock ticking at the center.

So now to address your question: in my opinion, I would create a series of "shells" with varying "perturbability". The innermost shell would be practically imperturbable; then, as you moved outward, each shell would be slightly more perturbable and complex than the last, with the outermost being chaotic. Then I would set up sensors that would perturb one of the more stable shells in the center and allow the perturbation to cascade outward to the more susceptible shells (this corresponds to the sensory part of the brain near the stem).

You may ask how to control how perturbable a shell (section) of my AI brain would be. Basically, I would set threshold voltages. The shells in the center would have high threshold voltages. As you moved outward, each successive shell would have a lower threshold voltage, and so on. This means that for a signal to propagate in the high-threshold region it must be strong, as with periodic signals. In the outer region, because of the low threshold, periodic signals would dissipate. From there it gets crazy complicated, because we haven't even incorporated memory, or positive and negative reinforcement (which is done by chemicals in us), or even primary desires such as hunger. But that's how I'd start: a giant interconnected network of varying complexity and perturbability, with distinct sections and gradual transitions between them.
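A toy sketch of that shell scheme under the stated assumptions (the thresholds and attenuation are invented; this is a cartoon, not a neural model):

    # Toy version of the "shells" design: a signal entering at the center
    # propagates outward only where it beats each shell's threshold.
    THRESHOLDS = [0.8, 0.6, 0.4, 0.2, 0.05]  # innermost (stable) -> outermost (chaotic)

    def cascade(signal):
        """Return the activation delivered to each shell, center outward."""
        reached = []
        for threshold in THRESHOLDS:
            if signal < threshold:
                break                 # too weak to perturb this shell
            reached.append(round(signal, 3))
            signal *= 0.9             # attenuate on the way out
        return reached

    print(cascade(0.85))  # a strong stimulus ripples through all five shells
    print(cascade(0.50))  # a weak one cannot perturb the innermost shell at all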


You Wrote "Perhaps AI being free of the influences of emotion, hormones, fears and desires, sapient responses and critical self analyses may be far more creative and intelligent than we humans will ever be."

I don't think so. It's the emotion, the fear, the desires that give us purpose. We anthropomorphize computers because we underestimate our own precarious complexity. Binary computers and programs are overated.

The very idea of morality and shame implied in your statement is one of the defining features of humans. We evolved with a sense of empathy because we are more successful as animals in groups (think monkeys), so we are in a constant battle of trying to preserve the group while attending to our own needs. Much of our morality is drawing this line. I need this, but the group needs that, What is the best compromise?

I'm not sure if I addressed your questions. Sorry about the long post but the truth is I don't feel like I said a thousandth of what I'd like to. The subject is fascinating.

#21 - taejonkwando, in reply to #19 - 04/21/2008 10:34 AM

Applause and shouts of "Encore". Roger, thanks for such a lucid explanation. I can now understand how one would begin to approach programming to simulate the human brain.

I suppose programming ethics, morality, sentience, and the other complexities ruling human behavior would also be required to allow the AI to "mature" - that is, to refine its judgement within the codes of acceptable behavior when interacting with humans.

Herein lies the conundrum posed in Asimov's "Three Laws of Robotics". In C. Willis's examination of these laws http://www.androidworld.com/prod22.htm we find desirable behaviors, growth (as in wisdom) and higher-order decisions (maturity) limited by the incorporation of these laws.

To parallel the human experience, AI will not only have to be self-aware, it must also have a value system, as you point out, and I quote: "We evolved with a sense of empathy because we are more successful as animals in groups (think monkeys), so we are in a constant battle of trying to preserve the group while attending to our own needs. Much of our morality is drawing this line: I need this, but the group needs that. What is the best compromise?"

I can see how that "compromise" concept can be at odds with Asimov's three laws, the Ten Commandments and other regulations ruling behavior. We humans can ignore any rule of behavior, but can that decision capability be incorporated in AI without fear of the consequences?

Also, our beliefs (or perhaps just wishful thinking) in recompense, Karma, the Hereafter, God's Judgement, etc. influence most civilized behavior, whether we practice religion or guided philosophy or not. Shouldn't an AI, to "live" among us, incorporate these concepts as well?

We can incarcerate or terminate life to rid ourselves of rogue behavior, but how can AI be threatened with such penalties?

Roger, your posts are never long enough and they always tantalize with possibilities. Thank you again, and a GA for your efforts.

#20 - Yuval, in reply to #18 - 04/19/2008 10:28 AM

"...It seems with each revelation in the study of the human brain we discover hitherto unknown factors and operatives which further demonstrate the complexity..."

This seem to be the essence behind Searle's Chinese Box Paradox: An intelligent, self-aware entity, requires an external view of itself, from outside of itself, to be able to correctly appreciate the capabilities of it's interaction with the world, specifically with entities other than itself.

One of the most famous definition of an intelligent being, aware of it's own being, is that of "An information system, which has an updatable model of the world with an active feedback capability, with the self being represented in this "World-Model" at top-level priority"

We all intuitively know we miss some of the picture, as Korasawa's "Rashumon" demonstrated. On the other hand, we seek (not always openly) other people's impression of us, to be able to complete this missing piece of the puzzle.

Living beings evolved with their interaction capabilities intertwined. They interact, and re-arrange their priorities and knowledge as they go. They have an unlimited number of machine-states, be it as small as you like, to start with.

Machines are pre-conceived with a limited scenario of interaction embedded for good. Once the spring is cocked, they seek out the mission embedded, as written. They have a limited number of machine-states, be it as big as you like, to end with.

#22 - taejonkwando, in reply to #20 - 04/22/2008 11:54 AM

Yuval, you have answered questions I haven't even posed, LOL. I am impressed with your philosophical depth, and you have led me to conclude that philosophy may contribute more to the design of AI than electronics.

I have labored in the field of electronics for years and have given little if any thought to the philosophical questions which might surround my efforts. As my career draws to a close, I now wonder how engineers of your generation will respond to such an enormously enlightening and possibly frightening challenge in developing self-aware AI.

I wonder if it will be a repeat of my father's generation, which released the destructive power of atomic energy that now, in the hands of rogue states, threatens the very survival of all life in this world. It seems AI has the same potential to become a threat to mankind, or perhaps to fulfill the promise which atomic energy made of "a better life".

In the "Terminator" series of films, self-aware AI dominates mankind. Could this scenario be closer than we realize? With armed drone aircraft executing local decisions to avoid or destroy based on threat-scenario programming, the addition of self-awareness could bring even more autonomous action, possibly the loss of control.

Self-replication is another possibility arising from advancing sentience in AI. As in "Terminator", the machines duplicated or created new versions to counter mankind's efforts to eradicate them. Will adding sentience to AI bring similar possibilities?

#23 - Yuval, in reply to #22 - 04/22/2008 10:00 PM

Philosophy is roughly an attempt to formalise thought, and as such it has always been a key factor in attempts to clone or formulate active minds.

Our wishful thinking about the future, about the avenues to be taken toward realising AI, or about its possible viability for advancing our lives, is a very, very long shot as far as feasibility is concerned.

And the ominous clock is ticking, as you hinted.

Robot-phobia, understandable as it may be, relies on the premise that such AI is at all possible, and this is yet to be proven, even to the tiniest measure.

But not all is in vain: as a by-product of the attempt to teach machines how to learn, we stumbled upon fuzzy logic, heuristically-driven search and determination systems, pattern recognition, and complex encoding and decoding methodologies - in short: expert systems.

Expert systems, although automata - very complex automata - dramatically aid humans in tasks prone to critical human error: autopilots, nuclear resource and maintenance management, traffic control and supervision both on the ground and in the air, satellite photography surveillance and analysis, mineral inspection and analysis, medical symptom analysis, mathematical and chemical analysis and formulation, genetic analysis and formulation, and much, much more - not to mention the old Deep Blue, the first chess program to beat a human world champion, back in 1997...

Self-replication is already roughly present in robotic tooling machines and 3D UV-resin replicators, but this is beside the point. Can a machine reach a self-conceived conclusion and act upon it, alone or in cooperation with other such machines, as portrayed so vividly in the "I, Robot" flick (featuring Will Smith) of yesteryear? I think that is what you are aiming at.

For this, self-conceived thought has to be proven possible, and it hasn't been.

Some say it's not even theoretically possible, no matter what quantity and quality of resources are involved; bluntly put, it's a violation of the second law of thermodynamics: a self entropy-lowering entity simply cannot be pre-conceived. It has to evolve, importing more and more energy into its system to allow for this.

There's only one such system we know of to be able to pull such trick, replication included:

Life.

#24 - taejonkwando, in reply to #23 - 04/23/2008 11:52 AM

Hmmmm, I suspect that AI becoming part of our daily lives, your "very, very long shot", is not so far away as it may seem. As with the development of any new technology, secrecy shrouds many programs, especially weapons development.

Autonomous robotic activity is already present in military defense systems, and no doubt this technology will expand to incorporate civilian versions. With the increase of daylight criminal activity and threatening situations in hostile regions, home and personal security becomes the market for such "trickle-down" technology.

AI's potential ability to analyze and act on problematical situations at gigahertz speeds would give comfort to those who feel threatened, and there are many wealthy individuals and corporations who could easily finance these development programs in secrecy.

So, as you stated, "the clock is ticking", and like the other technological wonders of the past 100 years, such as automobiles, flight, atomic energy, space flight, lasers, computers, and the internet, the public's access came far more rapidly than forecast.

All work in AI will result in advances. Each advance will spawn other advances, all cascading into R&D prototypes, and those prototypes will ultimately be the templates for production.

At some point in this inevitable sequence of events, applications for all the variant prototypical AI will be explored philosophically, from the most benevolent use to the most evil - a path followed by other discoveries such as gunpowder, electricity, atomic energy, and lasers.

I am both fascinated and fearful of this emerging technology, for it will arrive in both forms: benevolent and deadly.

#25 - Yuval, in reply to #24 - 04/23/2008 5:06 PM

#26 - taejonkwando, in reply to #25 - 04/23/2008 6:33 PM

Thank you, Yuval. This is an interesting study, although technically over my head. I have noticed that papers dealing with pattern/object recognition seem to be arriving weekly. Autonomous control of vehicles in the latest DARPA contests proves that higher-order decision-making capability is here.

Certainly by now those vehicular demonstrators are already obsolete, and according to friends "in the business", even more sophisticated technology is already being field-tested.

As you are probably more aware than I, AI decision making capability increases with computer processing capability. With the interconnecting of multiple CPUs and the prospect of quantum computers on the horizon, AI seems to be a "done deal". It's only a matter of time.

Current difficulties emulating human skills and behavioral responses will be surmounted or bypassed with emerging technology. It has happened before in other technological venues, and I wouldn't bet against it occurring in the pursuit of humanistic AI.

Yuval, I am interested in your opinion of AI in the roles of teaching and proctoring disabled children. Many children have autism, Down syndrome, Tourette's, Tay-Sachs and physical restrictions, and are challenging to teach. My oldest daughter, a teacher, has taken some classes which help her understand these limitations, but expresses regret that she is not clever enough to sense the changes, mood swings, and onset of psychotic disturbances which devastate many of these children.

Perhaps a "HAL" (2001: A Space Odyssey)-like AI will one day be assisting disabled children to lead productive lives. What do you think or hope to see in the future for AI?

#27 - Yuval, in reply to #26 - 04/24/2008 12:27 AM

"...AI decision making capability increases with computer processing capability..."

- Not so. It might help with efficiency of execution, but the term "decision making capability" points more to the realm of algorithmic wisdom than to resource availability, as the Deep Blue team demonstrated so vividly back in 1997.

As an AI expert commented, following the event: "It really was a match between the programmers' scheme and Kasparov, not between Kasparov and the (IBM) computer"

"...AI in the roles of teaching and proctoring disabled children..."

- I really cannot see the viability of the application - or, I guess, I didn't understand the connection.

Provided AI is possible, how can such treatment be implemented?

#28 - taejonkwando, in reply to #27 - 04/25/2008 10:31 AM

Hello Yuval. This question grew out of my daughter's frustration in detecting the subtle signs of onset disturbances in the child being taught. Timely intervention with the right word, inflection, or phrase is often all that is needed to redirect the child's attention to the environment, rather than to the often mysterious internal phenomena which disrupt concentration and sometimes cascade into overt behavior.

AI with gigahertz reaction time and programmable levels of sensitivity could intervene at the first and most subtle signs of behavioral aberration. This assumes the detection technology can remotely monitor heart rate, mental activity, respiration, eye movement, and other subtle signs of onset. Intervention could initiate soothing sounds, gentle suggestions, music, temperature changes and other stimuli to redirect and reattach the child's attention to its environment far more quickly than a human counterpart could.
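A toy sketch of that monitoring loop, with invented baselines and thresholds (no claim is made here about real detection technology):

    # Toy monitoring loop: flag vital-sign readings that drift from a
    # child's baseline and pick a calming response. All numbers invented.
    BASELINE = {"heart_rate": 88.0, "respiration": 22.0}
    TOLERANCE = 0.15  # flag anything more than 15% from baseline

    def check(reading):
        return [sign for sign, base in BASELINE.items()
                if abs(reading[sign] - base) / base > TOLERANCE]

    def respond(alerts):
        # Stand-in for "soothing sounds, gentle suggestions, music..."
        return "play_soothing_music" if alerts else "continue_lesson"

    print(respond(check({"heart_rate": 92, "respiration": 23})))   # continue_lesson
    print(respond(check({"heart_rate": 112, "respiration": 30})))  # play_soothing_music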

As the AI learns each child's idiosyncratic behavior patterns and the provoking stimuli, it could develop an interventional therapy. "Patterning" is a term used in psychology to describe the technique of teaching movement to physically impaired patients; an analog has also been developed for teaching those with mental impairment. The difficulty for therapists is analyzing the patient's instant requirements for maintaining stability in the learning situation.

Very talented teachers have taught these disadvantaged children to dress, to maintain hygiene, to participate with other children in sports and games, and to benefit from the learning experience. Most importantly, many of these children have learned how to control their behavior, making their ultimate entry into society much more favorable.

Perhaps, with advanced detection, AI could provide a more resilient therapy, reacting quickly with the most appropriate therapeutic response for maintaining the child's mental stability. Hopefully artificial intelligence will be directed to solve these and similar problems. There are so many opportunities for AI to enhance our lives - "provided AI is possible", as you say.

#29 - Yuval, in reply to #28 - 04/25/2008 11:41 AM

I see.

That's an exciting venue if there ever was one.

I would never have thought in that direction, which only shows how limited my imagination is.

We know of successful expert-system applications in speech recognition (part of the wide field of research called pattern recognition) and in speech synthesis and emulation, which started to bear practical fruit (mainly in aiding the blind to read) in the mid-nineties.

Apropos pattern recognition, I recently came across a little thing called WIDI, a small PC application capable of taking a WAV (sampled-audio) recording, analysing it, and rebuilding its content in MIDI (note-event) form, harmonics included. To those who understand what this involves, it is nothing short of pure magic.
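Wave-to-MIDI transcription of the WIDI kind is a hard pattern-recognition problem; its most basic building block is estimating the dominant pitch of a short frame. A minimal sketch of just that step (Python with numpy assumed; nothing here is WIDI's actual method):

    # Toy building block of wave-to-note transcription: FFT one frame,
    # find the strongest frequency, convert it to the nearest MIDI note.
    import numpy as np

    RATE = 44100  # samples per second

    def dominant_midi_note(frame):
        windowed = frame * np.hanning(len(frame))
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / RATE)
        peak_hz = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
        return int(round(69 + 12 * np.log2(peak_hz / 440.0)))  # MIDI note number

    # A 440 Hz sine wave should come back as MIDI note 69 (concert A).
    t = np.arange(4096) / RATE
    print(dominant_midi_note(np.sin(2 * np.pi * 440.0 * t)))  # 69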

#30 - taejonkwando, in reply to #29 - 04/25/2008 3:01 PM

Yuval, you gave me a little laughter today when you said "which only shows how limited my imagination is".

You, like every contributor to this forum, are exceptionally talented, filled with knowledge and the promise of advancing the technology which now holds your interest. Your self-evaluation is extremely modest compared with the evidence of your imagination as displayed in this forum.

I hope some day to read about the advances in AI and find you listed as a major contributor. Somehow I don't think I'll have to wait too long.

#13 - Yuval - 04/03/2008 11:25 AM

The point is that we don't really have to reach the point of having "real" AI in our hands before negotiating the morals of "seemingly intelligent" or "near-intelligent" systems - those acting upon a real, verified knowledge-base, currently known in the industry as "expert systems".

An expert system is a synthetic brain of sorts, with a complex decision mechanism backed by a specialised knowledge-base that keeps its decision-making within a reasonable range of real-world limitations. Two vivid examples are the modern autopilot installed in freight and passenger planes, and the medical symptom-analyser, which helps doctors scan a vast base of symptom combinations to suggest (to the doctor, and subject to further verification) pathologies fitting the described symptom set.
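A minimal sketch of the symptom-analyser idea as forward chaining over a hand-written rule base (the rules are invented for illustration and carry no medical content):

    # Toy forward-chaining "symptom analyser" in the expert-system style:
    # rules map established facts to suggestions for a doctor to verify.
    RULES = [
        ({"fever", "cough"}, "suggest chest X-ray"),
        ({"fever", "rash"}, "suggest measles screen"),
        ({"suggest chest X-ray", "chest pain"}, "flag for urgent review"),
    ]

    def analyse(findings):
        facts = set(findings)
        changed = True
        while changed:  # keep firing rules until nothing new is derived
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts - set(findings)  # only the derived suggestions

    print(analyse({"fever", "cough", "chest pain"}))
    # {'suggest chest X-ray', 'flag for urgent review'}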

True, a new and horrifying variety of expert system is currently under development to guide unmanned weapon platforms to their real or imaginary targets (real or imaginary depending on the quality and validity of the intelligence assumptions), and urban policing systems also push in this direction, if only to save their troopers' lives.

A system similar to the symptom-analyser may in the future aid judges or juries in their decision-making, just as the current generation of autopilot technology - which can technically negate the role of the human completely; you could, today, leave the human pilot at home, and the autopilot would taxi, take off, change flight paths according to traffic, approach, land and taxi to a terminal fully automatically - is used for aerial assault platforms.

I didn't get into the morals of it; I just laid out the technical background of currently existing technology, to say that the moral debate on the subject is valid and relevant even without the existence of a real synthetic mind, because our modern synthetic brains are challenge enough.

Here in Israel, as an appetiser for such a debate, there has been speculation about a complete autopilot take-over once an air marshal determines that a plane has been hijacked.


Users who posted comments:

Bayes (4); HUX (2); Kilowatt0 (2); NiCrMoNoMore (2); Steve Melito (1); stevem (1); taejonkwando (7); Yuval (11)
