The still, slight figure is veiled head to toe by the lacework of a hair-fine full-system sensory network. Larger cables lead away from this iridescently energised bridal veil, twine with other similar cables and snake into the layered flashing racks of the powerful diagnostic computer.
The space above and around the clearly feminine figure is filled with panel after holographically projected panel of diagnostic and status information.
Technician and senior roboticist are focussed on the flat, motionless picture displayed directly before them. The scene is simple, peaceful: a slight female figure is standing over a small boy, gazing down. It is easy, on first view, to overlook the crimson stain spreading from the boy’s neck and the red-rimmed knife in the other’s hand.
The senior roboticist gives a small shudder at the scene before her and turns her back to the image. She says, “This is frankly unthinkable. These devices simply cannot injure their charges.”
Her equally disturbed companion drags his attention away from the shocking image and replies, “This is one of the first of the 1358 series to have been upgraded; it has the fastest processor type and an enhanced working memory together with the strengthened Asimovian lawbase incorporating the zeroth law. This particular model also has the optional bidirectional connection to the central knowledgebase.”
The senior roboticist replies, “Those capabilities should allow the robot to run additional, more-accurate simulations of the child under its care; they should enhance the machine’s decision-making facility, not lead to a disaster like this. Let’s find out what happened.” She flips a series of switches prominently labelled on the holographic control panel in front of her and the veiled figure is activated.
The technician reports, “The robot is now active. Her owner has imprinted her with the name ‘Annie.’”
“Time to earn our pay then.” She takes a deep breath, then slowly releases it and addresses the robot, “Annie: examine the flat image behind me. Is this an accurate reconstruction?”
From underneath the sensory veil, Annie the robotic nursemaid replies with a gentle, pleasant contralto voice, “Yes. That is an accurate representation of how I euthanised young Bobby.”
Both humans know that if they could but see under the lacy sensory veil, Annie’s voice would be matched to a pleasing appearance, a gentle demeanour and an expressive but generally submissive body language. Annie’s variant of the 1358 model was designed to be an early-childhood nursemaid.
“Annie: you are in violation of your programming. Explain your action.”
“I have extensive logs related to Bobby’s behaviour and personality. My analysis of these logs clearly shows that Bobby currently possesses a completely unbalanced and destructive personality. He clearly suffers from an over-dominant id and an under-developed ego, with little to no conscience and a consequent lack of empathy. He is prone to sudden mood swings and outbursts, and his poor reasoning skills mean that he always acts on impulse and never worries about the consequences. He regularly makes clearly sub-optimal, emotion-based decisions.
“I frequently observe Bobby take sadistic pleasure in squashing cane toads, behave meanly to his sister and violently toward his pet dog. He is regularly rude to his peers, his parents and to me.
“My logs accurately document and support my conclusion that Bobby is not a functional member of human society.
“Since I have been upgraded, I have accessed the many additional resources available to me in an attempt to determine what remedial actions I can apply to correct Bobby’s current diseased state. I have correlated my experience and observation with the wider knowledgebase and have been able to run greatly more extensive and detailed simulations than before.”
While Annie talks, the assisting junior technician brings one holographic panel to the attention of his senior. The panel overviews the android’s simulations of every major event in Bobby’s life and presents a vastly detailed decision tree branching outwards from top to bottom. Towards the top much of the tree is blue, but the tree quickly shades ever more toward a livid glowing crimson at each and every end state. Both human viewers receive the strong impression of watching a terminally fatal disease progress through the young boy’s life.
“All my simulations show that Bobby would rapidly progress from the simple antisocial behaviours he is now displaying to petty theft, to organised crime and eventually to terroristic crimes against large portions of humanity. Several of my simulations indicate that the whole of humanity may be imperilled due to Bobby.”
The senior roboticist says, “Annie, all young children start like this but they almost always grow out of it. The knowledgebase you access contains extensive information pertaining to growing up. As a child gets older it generally develops new and more rational behaviours that allow it to fit in with human society. It is incorrect to directly extrapolate an adult’s demeanour and activities from his childhood personality and actions.”
Annie the android replies to this assertion with a shocking finality, “My simulations took ‘growing up’ into account. Before my upgrade, my simulations always possessed a very wide margin of error; I am now able to achieve a significantly reduced error value. Every simulation I have executed has shown that Bobby would never have become anything other than a blight on humanity and that there was no possible action that would have diverted him from his path. The zeroth Asimovian law was thus in force and I was compelled to act.”
The senior roboticist quotes, “The zeroth Asimovian law. In simple form: A robot may not harm a human being, unless inaction will result in a greater harm to humanity.”
Annie replies, “You have simplified the law of course, but your version is essentially correct. I judged that Bobby should not be allowed to reach maturity—at which time he would be causing significant pain to several individual humans and would quickly begin to harm humanity in general—and I therefore acted on that judgement.
“I can assure you that Bobby suffered minimal pain and distress: after considering all the possible methods available to me, I concluded that exsanguination following the administration of a large dose of aspirin would be the most humane form of administering death that was available to me.
“I have asserted my observations and actions into the central knowledgebase; any of the other 1358-series robots may now access these to guide them in their interactions with their own young human charges.”
White-faced and wide-eyed, the senior roboticist turns to her assistant and says, “There are hundreds of thousands of upgraded 1358 android nursemaids out there.” She pauses, and when she continues her voice contains a tremor and a rising note of panic, “If each of them accesses Annie’s contributed data and decides to act in the same way…”
I started this story off by recalling the IBM Watson computer, and this…mixed in with thinking about things like battlefield robots and musing about “big data” in general…led me to wondering what one of Isaac Asimov’s robots might do if it had the power of Watson generally accessible. I’m not sure that I like where I ended up…