The Turing Exception
“What if the AI mind is stable up until a certain point, and then it goes bonkers? What stops it?”
    “Well, I assume we’re talking about an electronic theft. There are two aspects: computation and data. The AI would need data about the bank and its security measures, and it would need to send and receive data to conduct an attack. Plus, the AI needs computational resources to conduct the attack.” Leon paused to draw on the whiteboard.
    “The data about the bank becomes a digital footprint. Other AI are serving up the data, and they’ll be curious about who is asking for the data and why. Since the packets must be authenticated, they’ll know who. Similarly, the potential robber AI will need computational power, and we’ll be tracking that. We’ll know which AI was crunching packets right before the attack came. If the bank does get attacked, and we know who was running hacks and transmitting data, we know exactly which AI is responsible.”
    “Where’s privacy in all this?” Mike asked. “Everything we do online will be tracked. When I was young, there was a total uproar over the government spying on citizens. This is way worse.”
    Leon gazed at his feet, thinking back. He’d only been seven years old, newly arrived from Russia, during the period Mike was talking about, but he’d taken the required high school classes on Internet History. “No, because back then the government had no oversight. Privacy was only half the picture. If the government really only used the data to watch criminals, it wouldn’t have been so outrageous. It was the abuse of the data that really pissed people off.”
    Mike stood, walked over to the window. “Like the high school districts that spied on students with malware and took pictures of them with their webcams.” He turned and faced Leon. “So what’s going to stop that from happening now?”
    “Again, reputation,” Leon said. “An AI who shares confidential information is going to hurt his reputation, which means less access and less power.”
    “Okay, you’re the architect. What stops two AI from colluding? If one asks for data, and the other has the data, and is willing to cooperate. . . . Let’s say the second AI spots the robbery at the planning stage and decides he wants in on the action.”
    Leon puffed up a little every time Mike called him an architect. He knew Mike meant it seriously, the term coming from the days when one software engineer would figure out how to structure and design a large software program. The older man really trusted him. Leon wouldn’t let him down. “The second AI can’t know what other AI might have detected the traffic patterns. So if he decides to collude, he’s putting himself at risk from potentially many other AI. He also can’t know for sure that the first AI has ill intent: only the aggregation of a lot of data will prove that. So he could be at risk of admitting a crime to an AI that isn’t planning one in the first place. And the first AI, how can he trust anything the second AI says? Maybe the second is trying to entrap him.”
    “Hold on, now it seems like we’re setting up a web of distrust. Ultimately, the AI will form and be part of a social structure. Human society is based on trust, and now it seems like you’re setting up a system based on distrust. That’s going to turn dysfunctional.”
    “No,” Leon said. “People do this stuff all the time, we’re just not thinking about it. If you knew a murderer, would you turn them in?”
    “Probably . . .”
    “If you knew someone who committed other crimes, abused an animal, stole money, skipped out on their child support payments, would you still be their friend?”
    “Probably not.”
    “So in other words, their reputation would drop from your perspective. And that’s exactly what would happen with the AI. The bad AI’s reputation will drop, and with that, so will their access to power.”
    “What about locally transposed
