February 28, 2008

How to Bring the Troops Home and Make Everybody Happy


"Over 4,000 robots are currently deployed on the ground in Iraq and by October 2006 unmanned aircraft had flown 400,000 flight hours. Currently there is always a human in the loop to decide on the use of lethal force. However, this is set to change with the US giving priority to autonomous weapons - robots that will decide on where, when and who to kill." -- Killer Military Robots Pose Latest Threat To Humanity, Robotics Expert Warns [Doubters might want to check Gray's remarks in the comments to this post.]

In 1967, the Beat poet Richard Brautigan saw something of this future that is about to become the present. He had, as did many in that era, different expectations:


All Watched Over by Machines of Loving Grace

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
    (right now please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
    (it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

Richard Brautigan

First Published
San Francisco: The Communication Company, 1967.

Posted by Vanderleun at February 28, 2008 1:20 AM

Comments:


I, for one, welcome our new Robot overlords!

Posted by: John SF at February 28, 2008 1:53 AM

Gotta love the scare pieces with alarming statistics: ...4,000 robots on ground; 400,000 hrs flown...etc

He fails to mention a couple of critical points:
1) the ground robots are remote control bomb disposal units...hardly a threat;
2) the 400,000 hrs flown are >99% surveillance missions....very few platforms are modified to carry ordnance. Again, hardly a threat.

Humans are very much in-the-loop regarding the use of force...I'd even include the lawyers, who continually review the Rules of Engagement and who are part of the shoot/don't shoot decision cycle, in the category of humans.

Posted by: Bruce W. at February 28, 2008 4:57 AM

Heh.... I may as well tell you: this is the field I work in: System Software Safety.

That title is essentially a euphemism meaning:

"Now that computers with guns, missiles and lasers are killing planes, missiles and people, how can you be sure they follow the Rules of Engagement?"

It's a fascinating field, but not a well-appreciated one; at least not yet. The design engineers program in the 'Grace'; I make sure that it 'Loves' people. Well, if not 'Love', at least it doesn't harm the things with the right signature....

I loved that movie 'Westworld' when I was a kid--Yul Brynner as the killer robot gunslinger was the coolest thing ever. Now, I'm actually working on the first generations of automated killers. I think of the tagline daily at work: "Nothing can go wrong.... Go wrong..... Go wrong...."

Of course, it's a lot easier for people (taxpayers and congress critters) to swallow the concept when you say: "It's only built to kill missiles! Or artillery shells!"; but the Djinni is out of the silicon boule.

Lethal surveillance with racial recognition? Oh yeah, we'll go there.

People are pretty squeamish about the idea and keep wanting to put a 'human in the loop' to give the shoot/don't-shoot authority, but it doesn't take many tests or iterations to discover that the human is the weak link in the system; both in the Shoot and the Don't Shoot cases.

Guns don't kill people. Robots with guns kill people.

Posted by: Gray at February 28, 2008 6:09 PM

Great comment, Gray.

Thanks,
Gerard

Posted by: vanderleun at February 28, 2008 6:44 PM

Thanks, Gerard!

To dig deeper into it:

The robotics expert is also concerned with a number of ethical issues that arise from the use of autonomous weapons. He added: "Current robots are dumb machines with very limited sensing capability. What this means is that it is not possible to guarantee discrimination between combatants and innocents or a proportional use of force as required by the current Laws of War."

Totally wrong.

That's like saying:
"A tiger is a dumb machine with very limited sensing capability that cannot discriminate between prey and not-prey."

Autonomous systems now have far better sensing capabilities (radar, sonar, molecular sampling, pattern recognition, visible light, infrared) than very limited human sight or hearing.

It's clearly not the sensing capabilities that are the problem--the autonomous systems can engage targets that humans cannot even see or hear (or smell)!

He's squeamish about the 'ethics' of the whole thing, but he doesn't know how to state it.

The correct way to state it is:
"Are we sure that this autonomous system has the correct software and hardware inhibits to target only the signatures allowed by ROE while tracking all signatures in it's sensor field of view."

The software and hardware inhibits designed into the robot essentially mimic human 'ethics'. Those inhibits can be designed and tested to a risk of less than 1 incorrect engagement in 1 million hours of operation. For comparison, a well-trained human will make a mistake roughly once every thousand hours.
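
[Editor's note: purely as an illustration of what such a software inhibit might look like, here is a minimal Python sketch. Every name in it -- the Track fields, the ROE_ALLOWED signature list, the confidence threshold -- is an invented example under assumed requirements, not anything drawn from a real fire-control system.]

    from dataclasses import dataclass

    # Hypothetical signature categories the ROE permits the system to engage.
    ROE_ALLOWED = {"artillery_shell", "cruise_missile", "ballistic_rv"}

    @dataclass
    class Track:
        track_id: int
        signature: str            # classifier output, e.g. "artillery_shell"
        confidence: float         # classification confidence, 0.0 - 1.0
        in_engagement_zone: bool

    def engagement_authorized(track: Track, min_confidence: float = 0.999) -> bool:
        """Software inhibit: every condition must pass before weapon release;
        any failure keeps the system in track-only mode."""
        if track.signature not in ROE_ALLOWED:
            return False          # signature not permitted by the ROE
        if track.confidence < min_confidence:
            return False          # classification not certain enough
        if not track.in_engagement_zone:
            return False          # geometric inhibit
        return True

    # The system tracks everything in its field of view, but only one
    # of these two tracks clears the inhibit.
    airliner = Track(1, "airliner", 0.98, True)
    shell = Track(2, "artillery_shell", 0.9995, True)
    assert not engagement_authorized(airliner)
    assert engagement_authorized(shell)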

Let's do a thought experiment: you are in a pitch-black room with a mouse in it.
Do you want a human with a shotgun in the room with you trying to kill it? Or a Mouse-Killer robot with a shotgun, IR vision, mouse recognition, and a background target-deconfliction program running?

"It seems clear that there is an urgent need for the international community to assess the risks of these new weapons now rather than after they have crept their way into common use."

Well, what are the risks? That it kills the wrong thing? Or that it can't kill the right things fast enough?

In the case of shooting down an ICBM, let's bias the inhibits toward shooting 'cuz it's worse to lose a city to a nuclear missile than to incorrectly shoot down an airliner full of talented orphans.
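
[Editor's note: to put some made-up numbers on that bias, here is a hedged sketch of a simple two-outcome expected-cost model. The confidence at which engaging becomes the cheaper choice is just the ratio of the two costs; the cost figures and the function itself are illustrative assumptions, not doctrine.]

    def engage_threshold(cost_missed_intercept: float, cost_wrong_engagement: float) -> float:
        """Probability-of-threat above which engaging has lower expected cost
        than holding fire, in a simple two-outcome model."""
        return cost_wrong_engagement / (cost_missed_intercept + cost_wrong_engagement)

    # Illustrative, made-up costs: losing a city dwarfs a wrongful shoot-down,
    # so the confidence needed before engaging an inbound ICBM track is tiny.
    print(engage_threshold(cost_missed_intercept=1_000_000.0,
                           cost_wrong_engagement=1_000.0))
    # -> ~0.000999: engage even on a fairly uncertain classification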

If the risk is that it could shoot the wrong thing, an autonomous system is measurably 1000 times more discriminating than a human across more spectra than a human can even sense!

As far as "assessing these risks now"; It's too late.

We are still dealing with it a little euphemistically, 'cuz we are morally squeamish about letting humans be automatically killed by something non-volitional.

As a former infantryman and current Software System Safety Engineer, I don't see the problem....

But ultimately, it's still a free-willed, perhaps even God-fearing, human who programs it and sends it on its way to kill. The volition to kill and the killing are merely separated spatially and temporally, but not to fear: it's still Cain and Abel and the Battle of the Somme, etc....

Posted by: Gray at February 29, 2008 10:45 AM