In 1942, science fiction writer Isaac Asimov formulated The Three Laws of Robotics - a set of rules to prevent robots from harming humanity.
They state that:
A robot may not injure a human being or, through inaction, allow a human being to come to harm;
A robot must obey orders given it by human beings except where such orders would conflict with the First Law;
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Now it seems that, once again, science fiction has become reality.
The British Standards Institution, the UK's national standards body, has published its first official set of ethical guidelines relating to robots.
The catchily named BS 8611 guidelines start by echoing Asimov's Three Laws, stating that: "Robots should not be designed solely or primarily to kill or harm humans."
They go on to specify that "It should be possible to find out who is responsible for any robot and its behaviour," and that "Humans, not robots, are the responsible agents".
The guidelines also explore more complex issues, such as prejudice, deception and addiction, and ask whether it’s desirable for a robot to form an emotional bond with a human.
"Using robots and automation techniques to make processes more efficient, flexible and adaptable is an essential part of manufacturing growth," said Dan Palmer, head of manufacturing at the British Standards Institution (BSI).
"For this to be acceptable, it is essential that ethical issues and hazards such as dehumanisation of humans or over-dependence on robots, are identified and addressed.
"This new guidance on how to deal with various robot applications will help designers and users of robots and autonomous systems to establish this new area of work."
The BSI notes that potential ethical hazards arise from the growing number of robots and autonomous systems being used in everyday life - and that these ethical hazards have broader implications than physical hazards.
The guidelines aim to eliminate or reduce the risks associated with these ethical hazards to an "acceptable level".
So that's the robot apocalypse averted for now then.