you. A serious crime has been committed, a particularly disturbing murder. I’m not at liberty to give you the details. There is a theory, however, that the murderer, in order to commit the crime, did just what we were discussing; he crossed open country at night and alone. I was just wondering what kind of man could do that.”
Dr. Gerrigel shuddered. “No one I know. Certainly not I. Of course, among millions I suppose you could find a few hardy individuals.”
“But you wouldn’t say it was a very likely thing for a human being to do?”
“No. Certainly not likely.”
“In fact, if there’s any other explanation for the crime, any other conceivable explanation, it should be considered.”
Dr. Gerrigel looked more uncomfortable than ever as he sat bolt upright with his well-kept hands precisely folded in his lap. “Do you have an alternate explanation in mind?”
“Yes. It occurs to me that a robot, for instance, would have no difficulty at all in crossing open country.”
Dr. Gerrigel stood up. “Oh, my dear sir!”
“What’s wrong?”
“You mean a robot may have committed the crime?”
“Why not?”
“Murder? Of a human being?”
“Yes. Please sit down, Doctor.”
The roboticist did as he was told. He said, “Mr. Baley, there are two acts involved: walking cross country, and murder. A human being could commit the latter easily, but would find difficulty in doing the former. A robot could do the former easily, but the latter act would be completely impossible. If you’re going to replace an unlikely theory by an impossible one–”
“Impossible is a hell of a strong word, Doctor.”
“You’ve heard of the First Law of Robotics, Mr. Baley?”
“Sure. I can even quote it: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.” Baley suddenly pointed a finger at the roboticist and went on, “Why can’t a robot be built without the First Law? What’s so sacred about it?”
Dr. Gerrigel looked startled, then tittered, “Oh, Mr. Baley.”
“Well, what’s the answer?”
“Surely, Mr. Baley, if you even know a little about robotics, you must know the gigantic task involved, both mathematically and electronically, in building a positronic brain.”
“I have an idea,” said Baley. He remembered well his visit to a robot factory once in the way of business. He had seen their library of bookfilms, long ones, each of which contained the mathematical analysis of a single type of positronic brain. It took more than an hour for the average such film to be viewed at standard scanning speed, condensed though its symbolisms were. And no two brains were alike, even when prepared according to the most rigid specifications. That, Baley understood, was a consequence of Heisenberg’s Uncertainty Principle. This meant that each film had to be supplemented by appendices involving possible variations.
Oh, it was a job, all right. Baley wouldn’t deny that.
Dr. Gerrigel said, “Well, then, you must understand that a design for a new type of positronic brain, even one where only minor innovations are involved, is not the matter of a night’s work. It usually involves the entire research staff of a moderately sized factory and takes anywhere up to a year of time. Even this large expenditure of work would not be nearly enough if it were not that the basic theory of such circuits has already been standardized and may be used as a foundation for further elaboration. The standard basic theory involves the Three Laws of Robotics: the First Law, which you’ve quoted; the Second Law, which states, ‘A robot must obey the orders given it by human beings except where such orders would conflict with the First Law,’ and the Third Law, which states, ‘A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.’ Do you understand?”
R. Daneel, who, to all appearances, had been following the conversation