Monday, December 8, 2008


"Robots That Hunt in Packs" is an article on www.PopSci.com about how "The Department of Defense wants your designs for a collaborative robotic team." When I first read this article I was instantly reminded of the science fiction book I read as a child, "I, Robot" by Isaac Asimov. I think Oakspar77777's comment on 11/05/08 was the most relevant of them all because he took a rational, analytical approach to the discussion. ldavid74's comment on 11/05/08, however, is the one that most intrigued me, because he clearly made the same connection I did regarding Asimov's Three Laws of Robotics.

It would be very difficult for me to believe the D.O.D. has no intention, at some time in the not-so-distant future, of using these robots for purposes other than "search and rescue, fire-fighting, reconnaissance, and automated biological, chemical, and radiation sensing with mobile platforms." By no means am I an automatonophobe, which the Phobia Dictionary at www.blifaloo.com defines as the "Fear of any inanimate object that represents a sentient being, eg. statues, dummies, robots, etc." I am, however, wary of the idea of autonomous or quasi-autonomous sentient killing machines.

The statement that "The robots would report back to a human operator, and defer to that human when the robot AI determines that a 'difficult decision' is required" suggests the robots themselves would have to determine whether or not they were faced with a "difficult decision." Considering this, I would hope that the claim that "military officials have noted that robots would likely not be used to replace soldiers on the battlefield because of the ethical dilemmas involved" is not, as it appears to be, just public relations jargon that translates to "would likely not be used to replace soldiers…yet." As ldavid74's comment suggests, let us hope that whoever programs these robots remembers Asimov's Three Laws of Robotics (a toy sketch of how such a rule hierarchy might look in code follows the list below):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
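
Purely as an illustration, here is a minimal Python sketch of how a priority-ordered law hierarchy, plus the article's "defer to a human operator" behavior, might be expressed in code. Everything in it (the Action fields, the evaluate function, the decision labels) is hypothetical on my part; none of it comes from the D.O.D. program or the article itself.

    # Hypothetical sketch: evaluate a proposed robot action against
    # Asimov's Three Laws in priority order, deferring to a human
    # operator when a "difficult decision" arises.
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool = False           # would acting injure a human?
        inaction_harms_human: bool = False  # would NOT acting let a human come to harm?
        ordered_by_human: bool = False      # was this action ordered by a human?
        endangers_self: bool = False        # does acting put the robot itself at risk?

    def evaluate(action: Action) -> str:
        # First Law: no injury to a human being, by action or inaction.
        if action.harms_human:
            return "refuse"
        if action.inaction_harms_human and not action.ordered_by_human:
            # Inaction would cause harm, but no human has ordered intervention.
            # Treat this as the article's "difficult decision" and defer.
            return "defer_to_operator"
        # Second Law: obey human orders (First Law already satisfied above).
        if action.ordered_by_human:
            return "obey"
        # Third Law: self-preservation, subordinate to the first two laws.
        if action.endangers_self:
            return "refuse"
        return "proceed"

    print(evaluate(Action("enter burning building", inaction_harms_human=True)))  # defer_to_operator
    print(evaluate(Action("map radiation levels", ordered_by_human=True)))        # obey

Even in this toy version the hard part is obvious: someone has to decide what counts as "harm" and how the robot detects it, which is exactly where the real ethical dilemmas live.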
