There was no way to grab fifty printers again to monitor all the traffic through our system, so I had to watch only those lines that he’d be likely to use. Saturday morning, he’d entered through one of our four Tymnet connections, so that seemed like a good place to start.
I couldn’t buy, steal, or borrow four printers for a few weeks, so I went out begging. One physics professor gave me a beat-up old Decwriter, delighted that someone would take the ten-year-old heap off his hands. A secretary donated a spare IBM PC in exchange for my teaching her how to use spreadsheet programs. A combination of cookies, coaxing, and conniving led to two more obsolete printers. We were back in business, recording all our Tymnet traffic.
Wednesday afternoon marked a week since we’d first detected the hacker. Berkeley was sunny, though I could barely see the windows from across the maze of cubicles. Dave’s watchdog was awake, the printers busy chattering with every keystroke, and I was absentmindedly thinking of infrared emissions from the Pleiades star cluster. Suddenly, the terminal beeped twice: Sventek’s account was active. My adrenaline pumped as I ran to the switchyard; the top of the ream of paper showed the hacker had logged in at 2:26 and was still active.
Letter by letter, the printer spat out the hacker’s keystrokes.
Logged into the Unix-4 computer as Sventek, he first listed the names of everyone connected. Lucky—there was nobody but the usual gang of physicists and astronomers; my watchdog program was well concealed within the Unix-8 computer. “Looking over your shoulder again,” I thought. “Sorry, nobody here but us astrophysicists,” I whispered to the terminal.
All the same, he scanned all the processes running. The Unix command ps prints the status of processes. Out of habit, I typed in ps -axu, the last three characters telling mother Unix to report everyone’s status. The intruder, however, entered ps -eafg. Strange. I’d never seen anyone use the g flag. Not that he discovered much: just a few scientific analysis programs, and a cranky typesetting program—and a network link to the Unix-8 system.
It’d taken him just three minutes to discover the Unix-8 computer, loosely linked to the Unix-4 system. But could he get in? With the Unix rlogin command he tried a half-dozen times, knocking on the door of the Unix-8 machine with Sventek’s account name and password. No luck. Dave had nailed that door closed.
Apparently satisfied that nobody was watching him, he listed the system password file. There wasn’t much for him to see there: all the passwords are encrypted and then stored. An encrypted password looks like gibberish; without solving an extremely difficult cipher, the password file gave the hacker little more than a dream.
He didn’t become super-user; rather he checked that the Gnu-Emacs file hadn’t been modified. This ended any doubts about whether the same hacker was connected: nobody else would search out the security hole in our system. At 2:37, eleven minutes after he logged in, he abruptly logged off the Unix-4 computer. But not before we’d started the trace.
Tymnet! I’d forgotten to warn their network operations center that they’d have to trace some connections. I hadn’t even asked whether they could trace their own network. Now, watching the printer copy every key that the hacker pressed, there were only minutes to get the trace.
Ron Vivier traces Tymnet’s network within North America. While I talked to him on the phone, I could hear him punching keys on his terminal. In a staccato voice, he asked for our node’s address. At least I’d prepared that much. In a couple minutes, Ron had traced the connection from LBL’s Tymnet port into an Oakland Tymnet office, where someone had dialed in from a telephone.
According to Ron, the hacker had called Tymnet’s modem in Oakland, just three miles from our lab.
It’s easier to call straight into our Berkeley lab than to go through Oakland’s Tymnet office. Why call through Tymnet when you can dial directly into our system? Calling direct would eliminate Tymnet’s intermediate connections and might be a tad more reliable. But calling via Tymnet added one more layer to trace.
The hacker had called the local Tymnet access number instead of our lab. It was like taking the interstate to drive three blocks. Whoever was at the other end of the line knew how to hide. Ron Vivier gave his condolences—I hadn’t wanted just some Tymnet telephone number; I was hunting for a person.
Well, we were on the trail, but there were bends in the road. Somehow, we’d have to trace the phone call, and phone traces meant court orders. Phooey.
When the hacker logged off, I looked up from the printout. Like a firehouse dog, Roy Kerth had picked up the news and made it down to the switchyard. So had Dave and Wayne.
When Ron hung up, I announced, “He’s calling Oakland Tymnet. So he must be from around here. If he were in Peoria, he’d save his nickel and call the Peoria Tymnet modem.”
“Yeah, you’re probably right.” Roy didn’t look forward to losing a bet.
Dave wasn’t thinking about the phone trace. “This ps -eafg command bothers me,” he said. “I can’t say why, it just doesn’t taste right. Maybe it’s just paranoia, but I’m sure that I’ve seen that combination before.”
“To hell with Unix. Serves us right for running such a dog operating system.” Wayne saw a chance to bait Dave. “Hey, that password file isn’t much use to him, is it?”
“Only if he owns a supercomputer. You’d need one to unravel the encryption. Unix isn’t VMS—it’s got the tightest cipher locks around,” Dave countered.
Roy had heard it before; he saw himself as above the war of the operating systems. “Looks like you need some phone traces, Cliff.”
I didn’t like his choice of pronoun, but, yes, that was the point. “Any ideas on where to start?”
“Let your fingers do the walking.”
The morning after we watched the hacker break into our computer, the boss met with Aletha Owens, the lab’s attorney. Aletha didn’t care about computers, but had a wary eye for problems on the horizon. She wasted no time in calling the FBI.
Our local FBI office didn’t raise an eyebrow. Fred Wyniken, special agent with the Oakland resident agency, asked incredulously, “You’re calling us because you’ve lost seventy-five cents in computer time?” Aletha tried explaining information security, and the value of our data. Wyniken interrupted and said, “Look, if you can demonstrate a loss of more than a million dollars, or that someone’s prying through classified data, then we’ll open an investigation. Until then, leave us alone.”
Right. Depending on how you looked at it, our data was worth either nothing or zillions of dollars. How much is the structure of an enzyme worth? What’s the value of a high-temperature superconductor? The FBI thought in terms of bank embezzlement; we lived in a world of research. Classified data? We weren’t a military base or an atomic weapons lab.
Yet we needed the FBI’s cooperation. When the hacker next popped his periscope above the water, we’d probably track him to Tymnet’s Oakland telephone access number. From there, I hoped a phone trace would lead to him. But I’d heard that the phone company wouldn’t trace a line without a search warrant. And we needed the FBI to get that warrant.
After hitting the FBI’s brick wall, Aletha called our local District Attorney. The Oakland DA didn’t fool around: “Someone’s breaking into your computer? Hell, let’s get a warrant and trace them lines.” The FBI might not give a damn, but our local prosecutors took us seriously. Still, they would have to convince a judge. Our warrant was at least a week away.
Just after five, Dave stopped by and started talking about the break-in.
“Cliff, the hacker’s not from Berkeley.”
“How do you know?”
“You saw that guy typing in the ps -eafg command, right?”
“Yeah, here’s the printout,” I replied. “It’s just an ordinary Unix command to list all the active processes—‘ps’ means print status, and the four letters modify the display. In a sense, they’re like switches on a stereo—they change the way the command works.”
 
“Cliff, I can tell you’re used to Berkeley Unix. Ever since Berkeley Unix was invented, we’ve mechanically typed ‘ps’ to see what’s happening on the system. But tell me, what do those four letters modify?”
Dave knew my ignorance of obscure Unix commands. I put up the best front I could: “Well, the e flag means list both the process name and environment, and the a flag lists everyone’s process—not just your process. So the hacker wanted to see everything that was running on the system.”
“OK, you got half of ’em. So what are the g and f flags for?”
“I dunno.” Dave let me flounder until I admitted ignorance.
“You ask for a g listing when you want both interesting and uninteresting processes. All the unimportant jobs, like accounting, will show up. As will any hidden processes.”
“And we know he’s diddling with the accounting program.”
Dave smiled. “So that leaves us with the f flag. And it’s not in any Berkeley Unix. It’s the AT&T Unix way to list each process’s files. Berkeley Unix does this automatically, and doesn’t need the f flag. Our friend doesn’t know Berkeley Unix. He’s from the school of old-fashioned Unix.”
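For my logbook I spelled out the two habits side by side (the flag meanings as Dave and I read them; they differ a little from one flavor of Unix to another):

ps -axu     my Berkeley habit:   a = everyone’s processes, x = even the jobs
            without a terminal, u = the user-oriented listing
ps -eafg    the hacker’s habit:  e = each process and its environment,
            a = everyone’s processes, f = an AT&T flag that Berkeley Unix
            never needs, g = the uninteresting and hidden jobs as well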
The Unix operating system was invented in the early 1970s at AT&T’s Bell Laboratories in New Jersey. In the late ’70s, Unix zealots from Bell Labs visited the Berkeley campus, and a new, richer version of Unix was developed. Along with hot tubs, leftist politics, and the free speech movement, Berkeley is known for its Unix implementation.
A schism developed between advocates of the small, compact AT&T Unix and the more elaborate Berkeley implementation. Despite conferences, standards, and promises, no consensus has appeared, and the world is left with two competing Unix operating systems.
Of course, our lab used Berkeley Unix, as do all right-thinking folks. East Coast people were said to be biased towards AT&T Unix, but then, they hadn’t discovered hot tubs either.
From a single letter, Dave ruled out the entire computing population of the West Coast. Conceivably, a Berkeley hacker might use an old-fashioned command, but Dave discounted this. “We’re watching someone who’s never used Berkeley Unix.” He sucked in his breath and whispered, “A heathen.”
Wayne didn’t give a damn about Unix. As a VMS junkie, Wayne was an infidel. Moreover, he felt the hacker couldn’t learn anything from our password file: “Look, there’s no way that anyone can decrypt those passwords. About all he’s learned is our names. Why bother?”
I’d rolled this around in my mind. Passwords are at the heart of security on a big computer. Home computers don’t need passwords: there’s only one user. Anyone at the keyboard can access any program. But when ten or twenty people share a single system, the computer must be certain that the person behind the terminal isn’t an imposter.
Like an electronic signature, passwords verify the authenticity of a transaction. Automatic teller machines, telephone credit cards, electronic funds transfer networks, even some home telephone-answering machines depend on passwords. By filching or forging passwords, a hacker can create counterfeit wealth, steal services, or cover bounced checks. When money was stored in vaults, safecrackers attacked the combination locks. Now that securities are just bits in a computer’s memory, thieves go after the passwords.
When your computer has fifty or a hundred users, you might just store each person’s password in a file. When the user tries to log on, ask for his password and compare that to what’s in your file. In a friendly environment, no problem. But how do you keep someone from sneaking a peek at that password file? Well, protect the password file so that only the system can read it.
Even if you protect the password file, every now and then all the files will be copied onto backup tapes. Even a novice programmer could read those tapes on another computer and list the contents of the password file. File protection alone isn’t enough.
In 1975, Bob Morris and Fred Grampp of Bell Laboratories developed a way to protect passwords, even when files weren’t secure. They would rely on encryption, rather than file protection. If you choose the password “cradle,” the computer doesn’t simply store your choice in a file of passwords. Instead, Unix scrambles the letters into an encrypted word, say, “pn6yywersyq.” Your encrypted password is stored, not the plain text.
So a Unix password file might look something like this:
Aaron: fnqs24lkcvs
Blacker: anvpqw0xcsr
Blatz: pn6yywersyq
Goldman: mwe785jcy12
Henderson: rp2d9cl49b7
Following each account name is the encrypted password. Like Wayne said, stealing the password file just gives you a list of people.
The computer program that encrypts “cradle” into “pn6yywersyq” is built upon a trapdoor algorithm: a process that’s easy to do, but difficult to undo. When Sally Blatz logs in, she types in her account name, Blatz, and then her password, cradle. The system encrypts the password into pn6yywersyq, and compares that to the entry in the password file. If the encrypted entries don’t match, Sally is booted off the machine. The plain text password itself isn’t compared; its encryption is. Password security depends on the trapdoor function.
Trapdoor functions are mathematical ratchets: you can turn them forwards, but not backwards. They quickly translate text into ciphers. To make these locks pickproof, it’s got to be impossible to reverse the algorithm.
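On our Unix systems, the ratchet is the crypt routine in the C library. In rough outline (a sketch only, not the actual login code, and with the salt details simplified), the check works something like this:

#include <string.h>
#include <crypt.h>    /* crypt(); on some Unixes it's declared in <unistd.h>,
                         and most systems want -lcrypt at link time */

/* A sketch of the password check, not the real login source.
   "stored" is the scrambled entry from the password file; by
   convention, its first two characters are the salt. */
int password_ok(const char *typed, const char *stored)
{
    char salt[3];
    salt[0] = stored[0];
    salt[1] = stored[1];
    salt[2] = '\0';

    /* Run the typed password forward through the trapdoor... */
    const char *scrambled = crypt(typed, salt);

    /* ...and compare ciphertext to ciphertext.  The plain text
       is never stored and never compared directly. */
    return scrambled != NULL && strcmp(scrambled, stored) == 0;
}

Checking a password this way takes an instant; recovering “cradle” from “pn6yywersyq” means running the ratchet backwards, which is exactly what the trapdoor is built to prevent.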
Our trapdoors were built upon the Data Encryption Standard (DES), created by IBM and the National Security Agency. We’d heard rumors that the electronic spooks of NSA weakened the DES. They hobbled it just enough to be crackable by NSA, but kept it strong enough to resist the efforts of ordinary mortals. The grapevine said that this way NSA could crack the code and read messages, but nobody else could.
The cryptographic DES program within our Unix computer was public. Anyone could study it. NSA had analyzed its strengths and weaknesses, but these reports were secret. Occasionally, we’d heard rumors of someone cracking this cipher, but none of these panned out. Until NSA published its analyses of the DES, we’d have no choice but to trust that our encryption was strong enough.
Wayne and I had watched the hacker break in and steal our password file. The hacker now knew the names of a few hundred scientists. He might as well have asked for our telephone book—at least that included addresses. Unless he owned a Cray supercomputer, he couldn’t invert the trapdoor function, and our passwords remained safe.
Wayne was still worried. “Maybe this guy’s stumbled on some brilliant way to reverse the trapdoor function. Let’s be a tad careful and change our important passwords.”
I could hardly object. The system password hadn’t been changed for a couple of years, and had outlasted people who had been hired and fired. I didn’t mind changing my password; to be safe, I used a different password on each computer. If the hacker managed to figure out my password from the Unix-4 computer, he’d still have no way to guess it on the others.
Before pedaling home, I again studied the printout of the previous day’s session. Buried in the ten pages were clues to the hacker’s persona, location, and intentions. But too much conflicted: we’d traced him through Tymnet into Oakland, California, yet Dave didn’t believe he was from Berkeley. He had copied our password file, but our encryption turned those passwords into gibberish. What was he doing with our encrypted passwords?
In some ways, this was like astronomy. We passively observed a phenomenon, and from a few clues tried to explain the event and find the location of the source. Astronomers are accustomed to quietly gathering data, usually by freezing behind a telescope on a mountaintop. Here the data appeared sporadically, from an unknown source. Instead of thermodynamics and optics, I needed to understand cryptography and operating systems. Somehow, a physical connection existed between our system and a distant terminal. By applying ordinary physics, it must be possible to understand what was happening.
Physics: there was the key. Record your observations. Apply physical principles. Speculate, but only trust proven conclusions. If I were to make any progress, I’d have to treat the task as a freshman physics problem. Time to update my notebook.
And just in time. Wednesday, September 10, at 7:51 A.M., the hacker appeared in our system for six minutes. Long enough to ring the alarm on my terminal, but not enough time to do anything about it. I had stayed at home that night: “Five days at the lab is enough,” Martha said.
I wasn’t at the lab to watch, but the printer saved three pages of the hacker’s trail. He had logged into our Unix-4 computer as Sventek. Well, I understood that—he had Sventek’s password, and had entered from Tymnet.
But he didn’t hang around my Unix-4 computer. Instead he leap-frogged through it and landed in the Milnet. Now it was no news flash that the Milnet existed—it’s a part of the Internet, a computer network that cross-links a hundred other networks. From our Unix computer, we can reach the Internet, and from there, the Milnet.
The Milnet belongs to the Department of Defense.
My hacker connected to Milnet address 26.0.0.113, logged in there as “Hunter,” and checked that he had a copy of Gnu-Emacs, then disappeared.
When I biked in around noon, there was no trace to follow upstream. But the hacker left an indelible trail downstream. Where was that Milnet address? The Network Information Center decoded it for me: the U.S. Army Depot, in Anniston, Alabama. The home of the Army’s Redstone missile complex, two thousand miles away from Berkeley.
In a couple minutes, he’d connected through our lab and into some Army base. The printout left little doubt that this was the hacker. Nobody but the hacker would use Sventek’s account. And who else would check for the Gnu-Emacs security hole on some computer in Alabama?
Nobody was around to tell me to ignore it, so I called Anniston information. Sure enough, the Anniston Army Depot had a computer center, and eventually I found Chuck McNatt, the Anniston Unix wizard.
“Hi, Chuck. You don’t know me, but I think we found someone screwing around with your computer.”