Results 1 to 7 of 7

Thread: Pentagon to build robot soldiers that 'won't commit war crimes'

  1. #1
    Postman vector7's Avatar
    Join Date
    Feb 2007
    Location
    Where it's quiet, peaceful and everyone owns guns
    Posts
    21,663
    Thanks
    30
    Thanked 73 Times in 68 Posts

    Default Pentagon to build robot soldiers that 'won't commit war crimes'

    Pentagon hires British scientist to help build robot soldiers that 'won't commit war crimes'

    The American military is planning to build robot soldiers that will not be able to commit war crimes like their human comrades in arms.

    By Tim Shipman in Washington
    Last Updated: 7:36AM GMT 01 Dec 2008


    The Pentagon aims to develop 'ethical' robot soldiers, unlike the indiscriminate
    T-800 killers from the Terminator films


    The US Army and Navy have both hired experts in the ethics of building machines to prevent the creation of an amoral Terminator-style killing machine that murders indiscriminately.

    By 2010 the US will have invested $4 billion in a research programme into "autonomous systems", the military jargon for robots, on the basis that they would not succumb to fear or the desire for vengeance that afflicts frontline soldiers.

    A British robotics expert has been recruited by the US Navy to advise them on building robots that do not violate the Geneva Conventions.

    Colin Allen, a scientific philosopher at Indiana University, has just published a book summarising his views, entitled Moral Machines: Teaching Robots Right From Wrong.

    He told The Daily Telegraph: "The question they want answered is whether we can build automated weapons that would conform to the laws of war. Can we use ethical theory to help design these machines?"

    Pentagon chiefs are concerned by studies of combat stress in Iraq that show high proportions of frontline troops supporting torture and retribution against enemy combatants.

    Ronald Arkin, a computer scientist at Georgia Tech who is working on software for the US Army, has written a report which concludes that robots, while not "perfectly ethical in the battlefield", can "perform more ethically than human soldiers".

    He says that robots "do not need to protect themselves" and "they can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events".

    Airborne drones are already used in Iraq and Afghanistan to launch air strikes against militant targets and robotic vehicles are used to disable roadside bombs and other improvised explosive devices.

    Last month the US Army took delivery of a new robot built by an American subsidiary of the British defence company QinetiQ, which can fire everything from bean bags and pepper spray to high-explosive grenades and a 7.62mm machine gun.

    But this generation of robots is still remotely operated by humans. Researchers are now working on "soldier bots" that would be able to identify targets and weapons, and to distinguish between enemy forces, such as tanks or armed men, and soft targets, such as ambulances or civilians.

    Their software would be embedded with rules of engagement conforming with the Geneva Conventions to tell the robot when to open fire.
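
    A minimal sketch of how such embedded rules of engagement might look in software is below. The protected classes, rules and function names are invented here purely for illustration; nothing of the actual design is described in the article.

        # Hypothetical rules-of-engagement check (illustrative names only).
        PROTECTED_CLASSES = {"civilian", "ambulance", "medic", "surrendering"}

        def may_open_fire(target_class, hostile_act):
            """Return True only if firing is permitted under the coded rules."""
            if target_class in PROTECTED_CLASSES:
                return False   # protected categories may never be engaged
            if not hostile_act:
                return False   # no positive identification of a hostile act
            return True        # otherwise treated as a lawful military objective

        # An armed man who is not committing a hostile act is not engaged.
        print(may_open_fire("armed_man", hostile_act=False))   # False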

    Dr Allen applauded the decision to tackle the ethical dilemmas at an early stage. "It's time we started thinking about the issues of how to take ethical theory and build it into the software that will ensure robots act correctly rather than wait until it's too late," he said.

    "We already have computers out there that are making decisions that affect people's lives but they do it in an ethically blind way. Computers decide on credit card approvals without any human involvement and we're seeing it in some situations regarding medical care for the elderly," a reference to hospitals in the US that use computer programmes to help decide which patients should not be resuscitated if they fall unconscious.

    Dr Allen said the US military wants fully autonomous robots because they currently use highly trained manpower to operate them. "The really expensive robots are under the most human control because they can't afford to lose them," he said.

    "It takes six people to operate a Predator drone round the clock. I know the Air Force has developed software, which they claim is to train Predator operators. But if the computer can train the human it could also ultimately fly the drone itself."

    Some are concerned that it will be impossible to devise robots that avoid mistakes, conjuring up visions of machines killing indiscriminately when they malfunction, like the robot in the film RoboCop.

    Noel Sharkey, a computer scientist at Sheffield University, best known for his involvement with the cult television show Robot Wars, is the leading critic of the US plans.

    He says: "It sends a cold shiver down my spine. I have worked in artificial intelligence for decades, and the idea of a robot making decisions about human termination is terrifying."

    http://www.telegraph.co.uk/news/worl...ar-crimes.html



    Nikita Khrushchev: "We will bury you."
    "Your grandchildren will live under communism."
    "You Americans are so gullible. No, you won't accept communism outright, but we'll keep feeding you small doses of socialism until you'll finally wake up and find you already have communism. We won't have to fight you. We'll so weaken your economy until you'll fall like overripe fruit into our hands."



  2. #2
    Expatriate American Patriot's Avatar
    Join Date
    Jul 2005
    Location
    A Banana Republic, Central America
    Posts
    48,612
    Thanks
    82
    Thanked 28 Times in 28 Posts

    Default Re: Pentagon to build robot soldiers that 'won't commit war crimes'

    Who the hell came up with that idiotic title???
    Libertatem Prius!






  3. #3
    Forum General Brian Baldwin's Avatar
    Join Date
    Jul 2005
    Location
    Missouri
    Posts
    1,869
    Thanks
    0
    Thanked 2 Times in 2 Posts

    Default Re: Pentagon to build robot soldiers that 'won't commit war crimes'

    And where in the hell is the fun in having combat robots that don't piss off Libs? Isn't that basically a Latte machine?
    Brian Baldwin

    Yea though I walk through the valley of the shadow of death I shall fear no evil.... For I am the meanest S.O.B. in the valley.


    "A simple way to take measure of a country is to look at how many want in... And how many want out." - Tony Blair on America



    It is the soldier, not the reporter, who has given us freedom of the press.

    It is the soldier, not the poet, who has given us freedom of speech.

    It is the soldier, not the campus organizer, who has given us the freedom to demonstrate.

    It is the soldier who salutes the flag, who serves beneath the flag, and whose coffin is draped by the flag, who allows the protester to burn the flag.

    -Father Denis O'Brien of the United States Marine Corps.



  4. #4
    Postman vector7's Avatar
    Join Date
    Feb 2007
    Location
    Where it's quiet, peaceful and everyone owns guns
    Posts
    21,663
    Thanks
    30
    Thanked 73 Times in 68 Posts

    Default Re: Pentagon to build robot soldiers that 'won't commit war crimes'

    Rapid-Fire Killer Robot Passes Flight Test

    Wednesday, December 10, 2008




    Rat-tat-tat-tat!

    That's the sound made by the "Multiple Kill Vehicle," a frightening but fascinatingly cool hovering robot meant to shoot down enemy ballistic missiles.

    Video of a Dec. 2 flight test conducted at Edwards Air Force Base in California by defense contractor Lockheed Martin has made it onto the Web, and it looks like something out of the "Terminator" movies.

    Rival defense contractor Raytheon is also working on its own multiple-kill-vehicle program.

    Inside a large steel cage, Lockheed's MKV lifts off the ground, moves left and moves right, rapidly firing all the while as flames shoot out of its bottom and sides.

    The plan is to mount one or more MKVs onto carrier missiles, which would launch into space to engage enemy nuclear-tipped ballistic missiles at the apogees, or peaks, of their trajectory arcs.

    Once in space, the MKVs would break off from the carrier vehicles, then use highly accurate targeting computers to shoot big bullets — in military speak, "kinetic interceptors" — to destroy the enemy warheads before they drop back down to Earth.

    None of this description does any justice to the video. You have to see it for yourself.


    http://www.foxnews.com/story/0,2933,464846,00.html




  5. #5
    Postman vector7's Avatar
    Join Date
    Feb 2007
    Location
    Where it's quiet, peaceful and everyone owns guns
    Posts
    21,663
    Thanks
    30
    Thanked 73 Times in 68 Posts

    Default Re: Pentagon to build robot soldiers that 'won't commit war crimes'

    From The Times
    February 16, 2009
    Military’s killer robots must learn warrior code


    Automatons revolt to form a dictatorship over humans in Asimov's I, Robot

    Leo Lewis

    Autonomous military robots that will fight future wars must be programmed to live by a strict warrior code or the world risks untold atrocities at their steely hands.

    The stark warning – which includes discussion of a Terminator-style scenario in which robots turn on their human masters – is issued in a hefty report funded by and prepared for the US Navy’s high-tech and secretive Office of Naval Research.

    The report, the first serious work of its kind on military robot ethics, envisages a fast-approaching era where robots are smart enough to make battlefield decisions that are at present the preserve of humans.

    Eventually, it notes, robots could come to display significant cognitive advantages over Homo sapiens soldiers.

    “There is a common misconception that robots will do only what we have programmed them to do,” Patrick Lin, the chief compiler of the report, said.

    “Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person.”

    The reality, Dr Lin said, was that modern programs included millions of lines of code and were written by teams of programmers, none of whom knew the entire program: accordingly, no individual could accurately predict how the various portions of large programs would interact without extensive testing in the field – an option that may either be unavailable or deliberately sidestepped by the designers of fighting robots.

    The solution, he suggests, is to mix rules-based programming with a period of “learning” the rights and wrongs of warfare.
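
    One way to picture that mix in code: fixed rules act as hard constraints that filter out forbidden actions outright, and a learned preference then chooses among whatever remains. A toy sketch, with invented action names and an invented update rule:

        # Toy mix of fixed rules and learned preferences (illustrative only).
        FORBIDDEN = {"fire_on_civilian", "fire_on_surrendered"}    # fixed rules

        weights = {"fire_on_combatant": 0.5, "hold_fire": 0.5}     # learned part

        def choose_action(candidates):
            legal = [a for a in candidates if a not in FORBIDDEN]  # rules filter first
            return max(legal, key=lambda a: weights.get(a, 0.0))   # learning decides

        def learn(action, feedback):
            """Nudge the learned preference up or down after human review."""
            weights[action] = weights.get(action, 0.0) + 0.1 * feedback

        action = choose_action(["fire_on_civilian", "fire_on_combatant", "hold_fire"])
        learn(action, feedback=-1)    # a reviewer judged the choice too hasty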

    A rich variety of scenarios outlining the ethical, legal, social and political issues posed as robot technology improves are covered in the report. How do we protect our robot armies against terrorist hackers or software malfunction? Who is to blame if a robot goes berserk in a crowd of civilians – the robot, its programmer or the US president? Should the robots have a “suicide switch” and should they be programmed to preserve their lives?

    The report, compiled by the Ethics and Emerging Technology department of California State Polytechnic University and obtained by The Times, strongly warns the US military against complacency and shortcuts as military robot designers engage in the “rush to market” and the pace of advances in artificial intelligence increases.

    Any sense of haste among designers may have been heightened by a US congressional mandate that by 2010 a third of all operational “deep-strike” aircraft must be unmanned, and that by 2015 one third of all ground combat vehicles must be unmanned.

    “A rush to market increases the risk for inadequate design or programming. Worse, without a sustained and significant effort to build in ethical controls in autonomous systems . . . there is little hope that the early generations of such systems and robots will be adequate, making mistakes that may cost human lives,” the report noted.

    A simple ethical code along the lines of the “Three Laws of Robotics” postulated by Isaac Asimov, the science fiction writer, will not be sufficient to ensure the ethical behaviour of autonomous military machines.

    “We are going to need a code,” Dr Lin said. “These things are military, and they can’t be pacifists, so we have to think in terms of battlefield ethics. We are going to need a warrior code.”

    Isaac Asimov’s three laws of robotics
    1 A robot may not injure a human being or, through inaction, allow a human being to come to harm
    2 A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law
    3 A robot must protect its own existence as long as such protection does not conflict with the First or Second Law
    Introduced in his 1942 short story Runaround
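
    The three laws amount to a strict priority ordering, which a toy encoding makes plain (the field names below are hypothetical, and this illustrates the ordering only, not a workable battlefield ethics):

        # Toy encoding of Asimov's three laws as a priority check (illustrative).
        def permitted(action):
            """Vet a proposed action against the three laws, in priority order."""
            # First Law: no injury to a human, by act or by inaction.
            if action.get("harms_human") or action.get("lets_human_come_to_harm"):
                return False
            # Second Law: obey orders, unless obedience violates the First Law.
            if action.get("disobeys_order") and not action.get("order_harms_human"):
                return False
            # Third Law: self-preservation, subordinate to the first two laws.
            if action.get("needlessly_destroys_self"):
                return False
            return True

        # Refusing an order is permitted when the order would harm a human.
        print(permitted({"disobeys_order": True, "order_harms_human": True}))  # True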




  6. #6
    Postman vector7's Avatar
    Join Date
    Feb 2007
    Location
    Where it's quiet, peaceful and everyone owns guns
    Posts
    21,663
    Thanks
    30
    Thanked 73 Times in 68 Posts

    Default Re: Pentagon to build robot soldiers that 'won't commit war crimes'

    Can "Terminators" Actually be our Salvation?

    A Conversation with Peter Asaro
    Written By: R.U. Sirius and Surfdaddy Orca
    Date Published: May 19, 2009



    In a fascinating paper entitled “How Just Could a Robot War Be?”, philosopher Peter Asaro of Rutgers University explores a number of robot war scenarios.

    Asaro imagines a situation in which a nation is taken over by robots -- a sort of revolution or civil war. Would a third party nation have a just cause for interceding to prevent this?

    Asaro concludes that the use of autonomous technologies such as robot soldiers is neither “completely morally acceptable nor completely morally unacceptable” according to the just war theory formulated by Michael Walzer.

    Just war theory defines the principles underlying most of the international laws regulating warfare, including the Geneva and Hague Conventions. Walzer's classic book Just and Unjust Wars was a standard text at the West Point Military Academy for many years, although it was recently removed from the required reading list.

    Asaro asserts that robotic technology, like all military force, could be just or unjust, depending on the situation.

    h+: We're using semi-autonomous robots now in Iraq and, of course, we've been using smart bombs for some time now. What is the tipping point – at what point does a war become a “robot war”?

    PETER ASARO: There are many kinds of technologies being used already by the U.S. military, and I think it is quite easy to see the U.S. military as being a technological system. I wouldn't call it robotic yet, though, as I think there is something important about having a "human-in-the-loop," even if the military is trying to train soldiers to behave "robotically" and follow orders without question.

    I think there is always a chance that a soldier will question a bad order, even if they are trained not to, and there is a lot of pressure on them to obey.

    Ron Arkin is a roboticist at Georgia Tech who has designed an architecture for lethal robots that allows them to question their orders. He thinks we can actually make robots super-moral, and thereby reduce civilian casualties and war crimes.


    I think Ron has made a good start on the kinds of technological design that might make this possible. The real technical and practical challenges are in properly identifying soldiers and civilians.

    The criteria for doing this are obscure, and humans often make mistakes because information is ambiguous, incomplete, and uncertain. A robot and its computer might be able to do what is optimal in such a situation, but that might not be much better than what humans can do.

    More importantly, human soldiers have the capacity to understand complex social situations, even if they often make mistakes because of a lack of cultural understanding.

    I think we are a long way from achieving this with a computer, which at best will be using simplified models and making numerous potentially hazardous assumptions about the people they are deciding whether or not to kill.
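
    That point about ambiguous, incomplete and uncertain information can be made concrete: a targeting computer ultimately reduces the question to thresholds, and everything hinges on where they are set. A toy sketch, with invented numbers:

        # Toy fire/hold decision from an uncertain classifier (invented thresholds).
        def decide(p_combatant, p_collateral,
                   min_confidence=0.95, max_collateral=0.01):
            """Hold fire unless identification is near-certain and risk is tiny."""
            if p_combatant >= min_confidence and p_collateral <= max_collateral:
                return "engage"
            return "hold"

        print(decide(p_combatant=0.90, p_collateral=0.02))   # "hold": too uncertain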

    Also, while it would surely be better if no soldiers were killed, having the technological ability to fight a war without casualties would certainly make it easier to wage unjust and imperial wars. This is not the only constraint, but it is probably the strongest one in domestic U.S. politics of the past 40 years or so.

    By the way, I see robots primarily as a way to reduce the number of soldiers needed to fight a war. I don't see them improving the capabilities of the military, but rather just automating them. The military holds an ideal vision of itself as operating like a well-oiled machine, so it seems that it can be rationalized, automated and roboticized. The reality is that the [human] military is a complex socio-technical system, and the social structure does a lot of hidden work in regulating the system and making it work well. Eliminating it altogether holds a lot of hidden dangers.

    h+: Does robotic warfare heighten the possibility of accidental war, or might it guard against it?

    PA: There was a news item in March 2008 about a unit of the Swiss Army, about 170 infantry soldiers, entering Liechtenstein at night by way of a dark forest. This turned out to be an accident – they were lost during a training exercise – so there wound up being no international incident. If there had been tensions between the countries, there could have been a just cause for Liechtenstein to declare war on Switzerland on the basis of an aggression.

    Of course, Liechtenstein does not even have an army. But something similar happened in 2002, when a platoon of British Royal Marines accidentally invaded a Spanish beach instead of Gibraltar.

    I think the same is true of machines. They could inadvertently start a war, though this depends both on the technology malfunctioning and on the human political leadership desiring a war. Many wars have been started on false pretenses, or misconstrued or inadvertent acts: consider the sinking of the Maine in Havana or the Gulf of Tonkin incident.

    h+: You talk about the notion that robots could have moral agency – even superior moral agency – to human soldiers. What military would build such a soldier? Wouldn't such a soldier be likely to start overruling the military commanders on policy decisions?

    PA: I think there are varying degrees of moral agency, ranging from amoral agents to fully autonomous moral agents. Our current robots are between these extremes, though they definitely have the potential to improve.

    I think we are now starting to see robots that are capable of taking morally significant actions, and we're beginning to see the design of systems that choose these actions based on moral reasoning. In this sense, they are moral, but not really autonomous because they are not coming up with the morality themselves... or for themselves.

    They are a long way from being Kantian moral agents – like some humans – who are asserting and engaging their moral autonomy through their moral deliberations and choices. [Philosopher Immanuel Kant's “categorical imperative” is the standard of rationality from which moral requirements are derived.]

    We might be able to design robotic soldiers that could be more ethical than human soldiers.

    Robots might be better at distinguishing civilians from combatants; or at choosing targets with lower risk of collateral damage, or understanding the implications of their actions. Or they might even be programmed with cultural or linguistic knowledge that is impractical to train every human soldier to understand.

    Ron Arkin thinks we can design machines like this. He also thinks that because robots can be programmed to be more inclined to self-sacrifice, they will also be able to avoid making overly hasty decisions without enough information. Ron has also designed an architecture for robots to override their orders when they see them as being in conflict with humanitarian laws or the rules of engagement. I think this is possible in principle, but only if we really invest time and effort into ensuring that robots really do act this way. So the question is how to get the military to do this.
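
    A bare-bones sketch of that kind of override layer follows: a governor vets each commanded action against coded constraints before anything is executed. The names here are invented, and Arkin's actual architecture is far richer than this.

        # Bare-bones "ethical governor" sketch (invented names, illustrative only).
        class EthicalGovernor:
            def __init__(self, constraints):
                self.constraints = constraints   # callables: order -> violation or None

            def vet(self, order):
                for check in self.constraints:
                    violation = check(order)
                    if violation:
                        return ("refuse", violation)   # override: explain, don't execute
                return ("execute", None)

        def no_protected_targets(order):
            if order["target"] in {"ambulance", "civilian"}:
                return "protected target under the laws of war"
            return None

        governor = EthicalGovernor([no_protected_targets])
        print(governor.vet({"target": "ambulance"}))   # ('refuse', 'protected target ...')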

    It does seem like a hard sell to convince the military to build robots that might disobey orders. But they actually do tell soldiers to disobey illegal orders. The problem is that there are usually strong social and psychological pressures on soldiers to obey their commanders, so they usually carry them out anyway. The laws of war generally only hold commanders responsible for war crimes for this reason. For a killing in war to truly be just, the one doing the killing must actually be on the just side in the war. In other words, the combatants do not have equal liability to be killed in war. For a robot to be really sure that any act of killing is just, it would first have to be sure that it was fighting for a just cause. It would have to question the nature of the war it is fighting in, and it would need to understand international politics and so forth.

    The robots would need to be more knowledgeable than most of the high school graduates who currently get recruited into the military. As long as the war is just and the orders are legal, then the robot would obey, otherwise it wouldn't. I don't think we are likely to see this capability in robots any time soon.

    I do think that human soldiers are very concerned about morality and ethics, as they bear most of the moral burdens of war. They are worried about the public reaction as well, and want to be sure that there are systems in place to prevent tragic events that will outrage the public. It's not impossible to try to control robot soldiers in this way. What we need is both the political will, and the technological design innovation to come together and shape a new set of international arms control agreements that ensures that all lethal robots will be required to have these types of ethical control systems.

    Of course, there are also issues of proliferation, verification and enforcement for any such arms control strategy. There is also the problem of generating the political will for these controls. I think that robotic armies probably have the potential to change the geo-political balance of power in ways far more dramatic than nuclear arms.

    We will have to come up with some very innovative strategies to contain and control them. I believe that it is very important that we are not naive about what the implications of developing robotic soldiers will mean for civil society.




  7. #7
    Senior Member Toad's Avatar
    Join Date
    Dec 2007
    Location
    Minot, ND
    Posts
    1,409
    Thanks
    0
    Thanked 0 Times in 0 Posts

    Default Re: Pentagon to build robot soldiers that 'won't commit war crimes'

    I don't know. Their carbon footprint appears to be pretty high...

Thread Information

Users Browsing this Thread

There is currently 1 user browsing this thread. (0 members and 1 guest)

Bookmarks

Posting Permissions

  • You may not post new threads
  • You may not post replies
  • You may not post attachments
  • You may not edit your posts