Entirely South West

BRISTOL TECHNOLOGY NEWS


Bristol professionals speak out against rights for robots


PROFESSIONALS working in the field of robotics are backing calls to halt European Union plans to grant robots electronic personhood, fearing it would pose a threat to humanity.




European lawmakers and manufacturers are debating the legal status of robots - and whether machines or the human beings behind them should bear responsibility for their actions.

Professor Sanja Dogramadzi, of the Bristol Robotics Laboratory at the University of the West of England (UWE), and Philip Graves, of GWS Robotics in Queen Charlotte Street, have spoken out against the plans.

Professor Dogramadzi is among 156 artificial intelligence experts to have written an open letter to the European Commission this month [April 5, 2018] warning that granting robots legal personhood would be ‘inappropriate’ from a ‘legal and ethical perspective’.

She said: “Our society is responsible that robots equipped with AI operate according to strict safety and ethics rules created by the governing bodies. Creators of AI are solely responsible for its actions.

“I signed the petition because I agreed with its demands – to establish governing principles of AI rather than give AI a legal status.”

Philip Graves, of GWS Robotics in Bristol, has previously said programmers and operators should maintain responsibility for the machines - even with the development of Artificial Intelligence.

Philip, who has been computer programming since the 1980s, said: “To establish such rights for robots could be extremely dangerous for humanity.

“It could elevate robots, which are essentially machines constructed and initially programmed by humans, to the status of organic beings over which we have no right of control.

“I believe we should legislate from the standpoint that they are machines, under full human responsibility and without independent rights.”

The panel of artificial intelligence experts hails from 14 European countries and includes computer scientists, law professors and CEOs.

A European Parliament report from early 2017 suggests that self-learning robots could be granted ‘electronic personalities’.




This would allow robots to be insured individually and held liable for damages if they hurt people or damage property.

Supporters say it would merely put robots on a par with corporations, which already have such status.

But Philip said: “Robots are ultimately digital processors of code that has been input by a programmer.

“Their use is determined by how they are programmed, which is where the responsibility lies.”

He said enforcing legal responsibility for actions taken by robots may be helpful - but that giving rights to a non-human would be extreme.

He said: “It may be practical from an insurance standpoint to protect against damages which may be caused by a robot, while remaining clear that the robot’s actions are the ultimate responsibility of its programmers.

“But to give true personhood to a robot, which includes rights as well as responsibilities, could be seen as extreme.

“Personhood does not exist in law even for non-human animals or other life-forms, whose due rights can be demonstrated from an ethical perspective to be much greater than those of artificially intelligent machines.”

Futurologist and animal rights advocate George Dvorsky, who created a manifesto of rights for robots, wrote in Gizmodo: “By willingly and knowingly granting personhood status to entities that aren’t actually persons, we’re both diminishing what it means to be a person and ignoring living entities who are truly deserving of personhood status, namely nonhuman animals such as whales, dolphins, elephants, and other highly sapient creatures.”

Tesla and SpaceX CEO Elon Musk has repeatedly said society needs to be more concerned about safety with the increased use of artificial intelligence.

"If you're not concerned about AI safety, you should be," Musk has tweeted.









