Cuckoo's Egg Page 2
Back to the accounting system for an afternoon. I found that the five-minute time difference between the time stamps came from our various computers’ clocks drifting over the months. One of our computers’ clocks lost a few seconds every day.
But all of Sventek’s activities should have appeared in both tallies. Was this related to last week’s accounting problem? Had I screwed things up when I poked around last week? Or was there some other explanation?
That afternoon, I sat through an impressively boring lecture on the structure of galaxies. The learned professor not only spoke in a monotone, but filled the chalkboard with a snake’s nest of mathematical equations.
Trying to stay awake, I tossed around the problems I’d bumped into. Someone screwed up when adding a new account. A week later, Sventek logs in and tries to break into some computer in Maryland. The accounting record for that event seems garbled. Sventek’s unavailable. Something’s amiss. It’s almost as if someone’s avoiding the accounting program.
What would it take, I wondered, to use our computers for free? Could someone have found a way around our accounting system?
Big computers have two types of software: user programs and systems software. Programs that you write or install yourself are user programs—for example, my astronomy routines which analyze a planet’s atmosphere.
Alone, user programs can’t do much. They don’t talk directly to the computer; rather, they call upon the operating system to manipulate the computer. When my astronomy program wants to write something, it doesn’t just slap a word on my screen. Instead, it passes the word to the operating system, which, in turn, tells the hardware to write a word.
The operating system, along with the editors, software libraries, and language interpreters, makes up the systems software. You don’t write these programs—they come with the computer. Once they’re set up, nobody should tamper with them.
The accounting program is systems software. To modify or bypass it, you have to either be system manager, or somehow have acquired a privileged position within the operating system.
OK, how do you become privileged? The obvious way is to log onto our computer with the system manager’s password. We hadn’t changed our password in months, but nobody would have leaked it. And an outsider would never guess our secret password, “wyvern”—how many people would think of a mythological winged dragon when guessing our password?
But even if you became system manager, you wouldn’t fool with the accounting software. It’s too obscure, too poorly documented. Anyway, I’d seen that it worked.
Wait—our home-brew software worked properly. Someone had added a new account without using it. Perhaps they didn’t know about it. If someone had come in from the cold, they’d be unaware of our local wrinkles. Our system managers and operators knew this. Joe Sventek, even in England, surely would know.
But what about someone from the outside—a hacker?
The word hacker has two very different meanings. The people I knew who called themselves hackers were software wizards who managed to creatively program their way out of tight corners. They knew all the nooks and crannies of the operating system. Not dull software engineers who put in forty hours a week, but creative programmers who can’t leave the computer until the machine’s satisfied. A hacker identifies with the computer, knowing it like a friend.
Astronomers saw me that way. “Cliff, he’s not much of an astronomer, but what a computer hacker!” (The computer folks, of course, had a different view: “Cliff’s not much of a programmer, but what an astronomer!” At best, graduate school had taught me to keep both sides fooled.)
But in common usage, a hacker is someone who breaks into computers.* In 1982, after a group of students used terminals, modems, and long distance telephone lines to break into computers in Los Alamos and the Columbia Medical Center, the computing people suddenly became aware of the vulnerability of our networked systems.
Every few months, I’d hear a rumor about someone else’s system being invaded; usually this was at universities, and it was often blamed on students or teenagers. “Brilliant high school student cracks into top security computer center.” Usually it was harmless and written off as some hacker’s prank.
Could the movie War Games actually happen—might some teenage hacker break into a Pentagon computer and start a war?
I doubted it. Sure, it’s easy to muck around in computers at universities, where no security is needed. After all, colleges seldom even lock the doors to their buildings. I imagined that military computers were a whole different story—they’d be as tightly secured as a military base. And even if you did get into a military computer, it’s absurd to think you could start a war. Those things just aren’t controlled by computers, I thought.
Our computers at Lawrence Berkeley Laboratory weren’t especially secure, but we were required to keep outsiders away from them and make an effort to prevent their misuse. We weren’t worried about someone hurting our computers, we just wanted to keep our funding agency, the Department of Energy, off our backs. If they wanted our computers painted green, then we’d order paintbrushes.
But to make visiting scientists happy, we had several computer accounts for guests. With an account name of “guest” and a password of “guest,” anyone could use the system to solve their problems, as long as they didn’t use more than a few dollars of computing time. A hacker would have an easy time breaking into that account—it was wide open. This would hardly be much of a break-in, with time limited to one minute. But from that account, you could look around the system, read any public files, and see who was logged in. We felt the minor security risk was well worth the convenience.
Mulling over the situation, I kept doubting that a hacker was fooling around in my system. Nobody’s interested in particle physics. Hell, most of our scientists would be delighted if anyone would read their papers. There’s nothing special here to tempt a hacker—no snazzy supercomputer, no sexy trade secrets, no classified data. Indeed, the best part of working at Lawrence Berkeley Labs was the open, academic atmosphere.
Fifty miles away, Lawrence Livermore Labs did classified work, developing nuclear bombs and Star Wars projects. Now, that might be a target for some hacker to break into. But with no connections to the outside, Livermore’s computers can’t be dialed into. Their classified data’s protected by brute force: isolation.
If someone did break into our system, what could they accomplish? They could read any public files. Most of our scientists set their data this way, so their collaborators can read it. Some of the systems software was public as well.
Though we call this data public, an outsider shouldn’t wander through it. Some of it’s proprietary or copyrighted, like our software libraries and word processing programs. Other databases aren’t for everyone’s consumption—lists of our employees’ addresses and incomplete reports on work in progress. Still, these hardly qualify as sensitive material, and it’s a long way from classified.
No, I wasn’t worried about someone entering our computer as a guest and walking off with somebody’s telephone number. My real concern centered on a much bigger problem: could a stranger become a super-user?
To satisfy a hundred users at once, the computer’s operating system splits the hardware resources much as an apartment house splits a building into many apartments. Each apartment works independently of the others. While one resident may be watching TV, another talks on the phone, and a third washes dishes. Utilities—electricity, phone service, and water—are supplied by the apartment complex. Every resident complains about slow service and the exorbitant rents.
Within the computer, one user might be solving a math problem, another sending electronic mail to Toronto, yet a third writing a letter. The computer utilities are supplied by the systems software and operating system; each user grumbles about the unreliable software, obscure documentation, and the exorbitant costs.
Privacy within the apartment house is regulated by locks and keys. One resident can’t enter another’s apartment without a key, and (if the walls are sturdy) one resident’s activity won’t bother another. Within the computer, it’s the operating system that ensures user privacy. You can’t get into someone’s area without the right password, and (if the operating system is fair about handing out resources) one user’s programs won’t interfere with another’s.
But apartment walls are never sturdy enough, and my neighbor’s parties thunder into my bedroom. And my computer still slows down when there are more than a hundred people using it at one time. So our apartment houses need superintendents, and our computers need system managers, or super-users.
With a passkey, the apartment house superintendent can enter any room. From a privileged account, the system manager can read or modify any program or data on the computer. Privileged users bypass the operating system protections and have the full run of the computer. They need this power to maintain the systems software (“Fix the editor!”), to tune the operating system’s performance (“Things are too slow today!”), and to let people use the computer (“Hey, give Barbara an account.”)
Privileged users learn to tread lightly. They can’t do much damage if they’re only privileged to read files. But the super-user’s license lets you change any part of the system—there are no protections against the super-user’s mistakes.
Truly, the super-user is all-powerful: he controls the horizontal, he controls the vertical. When daylight savings time comes around, she resets the system clock. A new disk drive? He’s the only one who can graft the necessary software into the system. Different operating systems have various names for privileged accounts—super-user, root, system manager—but these accounts must always be jealously guarded against outsiders.
What if an outside hacker became privileged on our system? For one thing, he could add new user accounts.
A hacker with super-user privileges would hold the computer hostage. With the master key to our system, he could shut it down whenever he wishes, and could make the system as unreliable as he wishes. He could read, write, or modify any information in the computer. No user’s file would be protected from him when he operates from this privileged high ground. The system files, too, would be at his disposal—he could read electronic mail before it’s delivered.
He could even modify the accounting files to erase his own tracks.
The lecturer on galactic structure droned on about gravitational waves. I was suddenly awake, aware of what was happening in our computer. I waited around for the question period, asked one token question, then grabbed my bike and started up the hill to Lawrence Berkeley Labs.
A super-user hacker. Someone breaks into our system, finds the master keys, grants himself privileges, and becomes a super-user hacker. Who? How? From where? And, mostly, why?
* What word describes someone who breaks into computers? Old style software wizards are proud to be called hackers, and resent the scofflaws who have appropriated the word. On the networks, wizards refer to these hoodlums of our electronic age as “crackers” or “cyberpunks.” In the Netherlands, there’s the term “computervredebreuk”—literally, computer peace disturbance. Me? The idea of a vandal breaking into my computer makes me think of words like “varmint,” “reprobate,” and “swine.”
It’s only a quarter mile from the University of California to Lawrence Berkeley Labs, but Cyclotron Road is steep enough to make it a fifteen-minute bike ride. The old ten-speed didn’t quite have a low enough gear, so my knees felt the last few hundred feet. Our computer center’s nestled between three particle accelerators: the 184-inch cyclotron, where Ernest Lawrence first purified a milligram of fissionable uranium; the Bevatron, where the anti-proton was discovered; and the Hilac, the birthplace of a half-dozen new elements.
Today, these accelerators are obsolete—their mega-electron volt energies long surpassed by giga-electron volt particle colliders. They’re no longer winning Nobel prizes, but physicists and graduate students still wait six months for time on an accelerator beamline. After all, our accelerators are fine for studying exotic nuclear particles and searching out new forms of matter, with esoteric names like quark-gluon plasmas or pion condensates. And when the physicists aren’t using them, the beams are used for biomedical research, including cancer therapy.
Back in the heyday of World War II’s Manhattan Project, Lawrence’s cyclotron was the only way to measure the cross sections of nuclear reactions and uranium atoms. Naturally, the lab was shrouded in secrecy; it served as the model for building atomic bomb plants.
During the 1950s, Lawrence Berkeley Laboratory’s research remained classified, until Edward Teller formed the Lawrence Livermore Laboratory an hour’s drive away. All the classified work went to Livermore, while the unclassified science remained in Berkeley.
Perhaps to spread confusion, both laboratories are named after California’s first Nobel Laureate, both are centers for atomic physics, and both are funded by the Atomic Energy Commission’s offspring, the Department of Energy. That’s about the end of the similarity.
I needed no security clearance to work in the Berkeley Lab—there’s no classified research, not a military contract in sight. Livermore, on the other hand, is a center for designing nuclear bombs and Star Wars laser beams. Hardly the place for a long-haired ex-hippie. While my Berkeley Lab survived on meager scientific grants and unreliable university funding, Livermore constantly expanded. Ever since Teller designed the H-bomb, Livermore’s classified research has never been short of funds.
Berkeley no longer has huge military contracts, yet openness has its rewards. As pure scientists, we’re encouraged to research any curious phenomena, and can always publish our results. Our accelerators might be peashooters compared to the behemoths at CERN in Switzerland, or Fermilab in Illinois; still, they generate huge amounts of data, and we run some respectable computers to analyze it. In fact, it’s a source of local pride to find physicists recording their data at other accelerators, then visiting LBL to analyze their results on our computers.
In raw number-crunching power, Livermore’s computers dwarfed ours. They regularly bought the biggest, fastest, and most expensive Crays. They need ’em to figure out what happens in the first few nanoseconds of a thermonuclear explosion.
Because of their classified research, most of Livermore’s computers are isolated. Of course, they have some unclassified systems too, doing ordinary science. But for their secret work—well, it’s not for ordinary mortal eyes. These classified computers have no connections to the outside world.
It’s just as impossible to import data into Livermore from the outside. Someone designing nuclear bomb triggers using Livermore’s classified computers has to visit the lab in person, bringing his data in on magnetic tape. He can’t use the dozens of networks crossing the country, and can’t log in from home, to see how his program is running. Since their computers are often the first ones off the production line, Livermore usually has to write their own operating systems, forming a bizarre software ecology, unseen outside of their laboratory. Such are the costs of living in a classified world.
While we didn’t have the number-crunching power of Livermore, our computers were no slouches. Our Vax computers were speedy, easy to use, and popular among physicists. We didn’t have to invent our own operating systems, since we bought Digital’s VMS operating system, and grabbed Unix from campus. As an open lab, our computers could be networked anywhere, and we supported scientists from around the world. When problems developed in the middle of the night, I just dialed the LBL computer from my home—no need to bicycle into work when a phone call might solve it.
But there I was, bicycling up to work, wondering if some hacker was in our system. This just might explain some of my accounting problems. If some outsider had picked the locks on our Unix operating system and acquired super-user privileges, he’d have the power to selectively erase the accounting records. And, worse, he could use our network connections to attack other computers.
I ducked my bike into a corner and jogged over to the cubicle maze. By now it was well past five, and the ordinary folks were at home. How could I tell if someone was hacking inside our system? Well, we could just send an electronic mail message to the suspicious account, saying something like, “Hey, are you the real Joe Sventek?” Or we could disable Joe’s account, and see if our troubles ended.
My thoughts about the hacker were sidetracked when I found a note in my office: the astronomy group needed to know how the quality of the telescope’s images degraded if they loosened the specifications for the mirrors. This meant an evening of model building, all inside the computer. I wasn’t officially working for them anymore, but blood’s thicker than water … by midnight, I’d plotted the graphs for them.
The next morning, I eagerly explained my suspicions about a hacker to Dave Cleveland: “I’ll bet you cookies to doughnuts it’s a hacker.”
Dave sat back, closed his eyes, and whispered, “Yep, cookies for sure.”
His mental acrobatics were almost palpable. Dave managed his Unix system with a laid-back style. Since he competed for scientists with the VMS systems, he had never screwed down the security bolts on his system, figuring that the physicists would object and take their business elsewhere. By trusting his users, he ran an open system and devoted his time to improving their software, instead of building locks.
Was someone betraying his trust?
Marv Atchley was my new boss. Quiet and sensitive, Marv ran a loose group that somehow managed to keep the computers running. Marv stood in contrast to our division head, Roy Kerth. At fifty-five, Roy looked like Rodney Dangerfield as a college professor. He did physics in the grand style of Lawrence Laboratory, bouncing protons and antiprotons together, looking at the jetsam from these collisions.
Roy treated his students and staff much as his subatomic particles: keep them in line, energize them, then shoot them into immovable objects. His research demanded heavy number crunching, since his lab generated millions of events each time the accelerator was turned on. Years of delays and excuses had soured him on computer professionals, so when I knocked on his door, I made sure we talked about relativistic physics and ignored computing.