The Instant Commuter
Fiction by Thom Hawkins
Dr. Margaret Tauton arrived promptly for her interview at a long one-story brick building in a business park southwest of Chicago. If the building had no windows, a person’s attention would be drawn to it; instead, the windows were tinted, gently obscuring the work going on inside. Only a sheet of copy paper printed with black letters taped to the door provided any insight regarding the interior—‘Instant Commuter, LLC.’
A woman had called Dr. Tauton a week earlier, identifying herself as “an associate” of this company, and making an unsolicited request to interview for a unique project. The request was certainly unusual in the lack of detail offered, even in response to her direct questions. It was some sort of transportation company, but not directly in competition with her current employer, where she worked on the artificial intelligence used by self-driving cars. She was assured that she would not be violating her non-compete clause, and that she would be interested in this opportunity to expand her work beyond its present-day parameters. The call had, without question, served one purpose—to attract her interest.
Dr. Tauton entered a small vestibule with an array of cameras and sensors along the structural supports between large panes of darkly tinted glass. Her immediate reaction was unease, the feeling of being watched. Then she thought this building must not be just the business end of the company but also its research and design facility, hence the security measures.
Like the sign outside, a screen at one end of the vestibule displayed the company name—‘Instant Commmuter, LLC.’ Almost the same—she focused her attention on the extra ‘m’ in ‘Commmuter.’ It seemed odd that the word was correct on a paper sign taped to the door, but incorrect in a digitally rendered version. Disappointed by the lack of precision, she decided the error would weigh against joining this project, whatever it was.
Dr. Tauton suddenly felt nauseated and flushed. She’d never been claustrophobic before, but in this strange room, with its cameras and errant logo, she wanted nothing more than to leave. She looked for the door into the building proper but found no handles, nor even visible hinges.
“Hello?” Silence. “Hello? This is Dr. Margaret Tauton. I have an interview scheduled.” She knocked on the interior glass. Nothing. Just the same offending logo. “I’m leaving now,” she announced, equally irritated by the suddenness of her sickness and being kept waiting.
“Dr. Tauton?”
She blinked, feeling groggy and unsteady on her feet, taking a moment to find her words. A young woman with brunette hair stood before her, dressed in a navy business suit, carrying a black portfolio. “Dr. Tauton?” the young woman repeated.
“Yes? Yes. I’m Dr. Tauton.”
“I’m glad you could make it,” the woman said. “We spoke on the phone.”
“Oh, yes. Ms. Garth, was it?”
“Call me Patricia. Let’s go to the conference room. We can talk freely there.”
Her choice of the word ‘freely’ seemed strange. But much of this did. Dr. Tauton thought back to the cameras and sensors in the vestibule, the feeling of being watched. The typo in the logo, the feeling of nausea—she couldn’t even remember how she got out of the vestibule—and now Patricia Garth’s invitation to ‘talk freely.’ If the logo was the first mark against this decision, that was the second.
As she followed Patricia down the hallway, she noted the bare walls—no inspirational posters or company awards—just beige paint, light blue carpet, and a drop ceiling with fluorescent lighting. She entered the conference room behind Patricia, who offered her a seat at a long, rectangular conference table before closing the door and taking a seat herself, directly opposite, facing the closed door as if she expected someone to come through it. There was no decoration in the conference room either—just the same beige walls, light blue carpet, and drop ceiling with fluorescent lights.
Seeing Dr. Tauton glance down the table at the eight empty chairs, Patricia must have anticipated her question. “It will just be us today.” Patricia opened her portfolio.
“Okay.” Dr. Tauton saw her curriculum vitae on the left side of the portfolio, and a blank pad of paper on the right, with a rough edge at the top where some sheets had been torn off. She looked around, nervously.
“I don’t think I’ve ever met anyone with a degree in mathematical philosophy,” Patricia began.
“We’re a rare breed,” said Dr. Tauton, settling into a familiar topic.
“Did you have a particular career in mind with that degree?”
“No one gets a doctorate in mathematical philosophy with a career in mind.”
“You certainly managed to find a niche. How did that come about?”
“My thesis was on fair division—cake cutting problems.”
“I cut, you choose.”
“That’s the simplest version. It gets more complex when you introduce more than two consumers, or when consumers have varying preferences, like frosting to cake ratios.”
“Now I’m seeing the connection to your first job.” Despite having the CV in front of her, Patricia did not glance down at it for reference.
“Yes, while I was working on my thesis, I was quite absorbed, not thinking at all about applications—that was my philosophical training—very abstract. Once I had my doctorate in hand, though, I realized I had nowhere to go but academia. All I could do was turn around and teach what I’d learned to other students also bound to stay within the academic sphere. It wasn’t like I was going to be hired to help at children’s birthday parties, cutting up cake for six-year-olds.”
“But it wasn’t far off.” Patricia’s comment felt like a prompt to continue.
“It wasn’t. While I was studying fair division, a good friend was studying law and ultimately became a divorce attorney. One night over dinner, as she was describing a particularly difficult case, I realized this was a real-world application for fair division—the equitable division of assets. Just because it isn’t cake, doesn’t mean people feel any less strongly about their estates.”
“A fair division consultant—you created your own field.”
“There were, of course, divorce attorneys that had gotten quite good at asset division, but I was the first to approach the problems from the perspective of mathematical philosophy.”
“There’s a psychological angle as well,” Patricia prompted again, guiding the conversation.
“I had to develop that skill. In philosophy, we’re given the premises—we don’t have to derive them. That took time, learning from more experienced attorneys who had honed the skill over years of practice.” Dr. Tauton glanced around the conference room again, suddenly realizing there were no whiteboards with insufficiently erased markings to provide evidence of what this company was up to.
Patricia quickly brought her back into the discussion. “How did you make the leap from fair division consultant to working with artificial intelligence?”
“I hate to sound conceited, but after so many cases, the problems became easier and easier to solve. I just got bored. Then I saw a news report about self-driving cars. The report referenced the trolley problem, which is, of course, a familiar problem in mathematical philosophy. I began to realize there were many real-world applications to what I’d studied—I just had to learn to recognize them.”
“You haven’t been working in that field long, and here you are looking for a new challenge. Bored already?”
Again, Dr. Tauton noted Patricia asked her questions based on her CV without appearing to look at it. She maintained eye contact, subtly nodding as if encouraging her to continue. “I wouldn’t say I’m bored—but my learning curve wasn’t as steep as it was with asset division. There wasn’t anything like the psychology of dealing with people here. Artificial intelligence is a pure field. You’re not working in the fuzzy margins between people and machines. People are imperfect, and diverse in their imperfections. With AI, it’s like you get to build a mind from scratch.”
“Like a constructed language. Language has evolved over thousands of years—it’s weird and messy. Then someone decides to fix all that—they start from scratch and build a perfect way of communicating that is both efficient and precise.”
“That’s a good analogy. We’re building an ideal way of making decisions, without the unpredictability of the human mind.” As Dr. Tauton spoke, she realized how oddly at ease she felt in this conversation, how familiar it seemed, even as the place itself was not.
“You found a purely mathematical solution to the trolley problem?”
“No.” Dr. Tauton paused. “The self-driving car version of the trolley problem is simple. A car is moving forward and detects two people in its path. The car cannot stop in time to prevent hitting the two people, so the only option is to veer left. If it veers left, however, it will hit one person standing there. What should we program the car to do in this situation—go straight or veer left?”
“Veer left, I hope,” Patricia said quickly with a nod to steer the conversation back to Dr. Tauton.
“What’s worse, inaction or action—and how much worse is one than the other? The assumption is that inaction is always better because you can’t rationalize that you didn’t willfully choose to act. That’s why there are always more people on the inaction side of the equation—to equalize the weights. At first, I attacked the problem as an actuary—the calculus of human life. How much is a life worth?”
“It depends on whose life—what if the two are criminals that have already been condemned to life in prison?”
“A car would have no way of knowing. It must treat all lives equally.”
“Do Asimov’s laws of robotics apply?” Patricia asked.
“As crazy as it seems to base research on science fiction precepts, we did consider those.”
“It seems like only the first law would really apply in this situation—the robot may not injure a human being, or, through inaction, allow a human being to be injured.”
“First and second laws.” Dr. Tauton put up her index and middle fingers in succession. “The second is where it became problematic because, by necessity, we were giving a robot a command—which it must obey—to kill a human being, either two through inaction or one by action. We tried to sidestep the problem by building better sensors, so the car would never face a no-win situation.”
“Like Captain Kirk and the Kobayashi Maru,” Patricia ventured, nearly smiling.
“I would betray myself by acknowledging that reference.” Dr. Tauton half-smiled back.
“I suspected you might be a fan.”
“I hate to admit that I like how he handled the situation, reprogramming the computer to create a path to victory. I felt brave myself, attempting to handle the situation through an engineering solution—a bit of the same thrill I felt years before as I started to grasp the psychology of a divorcing couple—” Dr. Tauton paused at the sound of a sudden, short whir, somewhere outside the conference room.
Patricia cut off the sound with a question. “The sensors didn’t work?”
“To an extent. They certainly made it possible to anticipate and avoid situations where we had to make decisions that were no-win from the perspective of human life. No loss of life is acceptable from a liability perspective. We looked to Buridan’s ass for a model. A donkey, equally hungry and thirsty, and unable to prioritize between eating a mound of hay and drinking from a trough of water, dies standing between the two. We tried to find a way to equalize the options in the trolley problem in such a way that the vehicle wouldn’t take either path.”
“How would that work if the vehicle can’t stop quickly enough?” Patricia nodded and even fluttered her left hand slightly, as if fighting the urge to conduct.
“It pointed us in a different direction. Our problem was that we were trying to find a deterministic solution—an algorithm that told us what the right thing to do was in each situation. If we program a vehicle to always avoid two people and hit one, it’s possible to take advantage of this certainty.”
“What do you mean?”
“I don’t want to sound macabre, but someone, or specifically two people, could exploit the programming, for example, by stepping in front of the vehicle so that it veers into a single person instead. They could weaponize the vehicle precisely because the programming was certain.”
“You were trying to avoid the unpredictability of the human mind, but by making the vehicle predictable, it was exploitable.” As Patricia spoke, Dr. Tauton noticed that she was also making discreet notes on the pad of paper, still with hardly a downward glance.
“Exactly. We’re talking about human life here, but there’s also an interest in liability. Who is liable in that situation—the two people who stepped in front of the vehicle, or the company that decided if the car senses two people in its path, it should, every time, veer into one person instead? If we used the approach of Buridan’s ass to make the options equally desirable, but in a way that is probabilistic rather than deterministic, we can eliminate liability.”
“How would that work?”
Dr. Tauton paused, considering the constraints of her non-disclosure agreement. The more theoretical aspects of her work, she was free to publish, but she was forbidden from revealing proprietary applications of the theory. She spoke slowly, evaluating each statement as she released it into the conversation. “By introducing randomness. Instead of programming the vehicle to always value two lives over one, we program it to prefer two lives two-thirds of the time and one life one-third of the time. The programming generates a random outcome based on a constraint. If we run the scenario a hundred times, the vehicle will strike the two people about thirty-three times and the one person about sixty-seven times.”
“But in any one situation, what the vehicle does cannot be precisely determined.”
“Correct—that avoids liability and discourages anyone from using the certainty of a deterministic solution to achieve a pre-defined outcome.”
“That’s a clever solution.” Patricia’s flat tone did not match the praise in her words, like she somehow already knew the answer. “I think it’s time to talk about our opportunity here.”
“I still have no idea what you do here,” Dr. Tauton said, looking around the room again for clues and still finding none.
“We like to keep it under wraps. It’s a relatively new field with trade secrets where even patent applications would reveal too much to our competitors.”
“Ah.” Dr. Tauton was suddenly feeling as if she’d revealed too much, despite staying within the bounds of her agreement. “Where do I fit into this new field?”
“What we’re doing here, Dr. Tauton, is pioneering molecular transportation.”
“Molecular transportation?” Dr. Tauton asked, very much understanding each word, but not necessarily their combination.
“Because I know you’re a Star Trek fan, Dr. Tauton, I can tell you it’s very much like a transporter.”
“Like ‘beam me up’ transporter?”
“Yes, that’s it.”
Dr. Tauton was excited, but wary. “I’m still not clear where I fit into all this, though.”
“This technology makes a complete copy of an organism’s molecular structure, transmits that information digitally, and then rebuilds the organism at a given destination.”
“Like a biological fax machine.”
“Yes—though we hope we can transcend the metaphor of a fax machine.”
“Of course. And what problem do you have for me?” Dr. Tauton smiled, glad to be on the other side of this interrogation at last.
“Once the organism is reconstructed at the destination, we need to determine how to…” Patricia paused, searching for the most delicate words, “dispose of the…remnants…at the point of origin.”
“Oh. An ethical problem.”
“To put it delicately. We couldn’t think of anyone better to work at the intersection of ethics and technology. I hope you see how this provides you with an opportunity to grow your expertise.”
“I do, but I’m not sure this is a long-term prospect…” Dr. Tauton looked around the room, trying to be obvious about her awareness that this facility did not seem settled in for the long term. “…more of a consultation.”
“Perhaps. Do you already have a solution in mind, Doctor?”
“It’s not so much a matter of how as when.”
“How do you mean?”
“You need to ensure that you’re successful at recreating the…” Dr. Tauton leaned on the terminology provided, “…organism…at the destination before…eliminating…the organism at the point of origin. And by successful, I don’t just mean biologically. Its consciousness, that’s tricky. Despite all our advances, we still don’t exactly have consciousness down to a science. It was always assumed that to understand how the mind works, we had to reduce it to its most basic components.”
“A deterministic model of the mind.”
Dr. Tauton realized that the conversational dynamic had shifted again, and Patricia was once more controlling the conversation. “Yes, just like we tried to do with AI decision-making. But as we discovered there, we’re learning the best model isn’t necessarily the most exact. The way the mind works, at the quantum level, may be probabilistic rather than deterministic.”
“You’re saying that even if we recreate the biology exactly at the molecular level, it may not work the same at the atomic level.”
“Correct. You would need to confirm the reconstruction at the destination by testing the consciousness of the organism.”
“Like an interview?”
“Ha.” Dr. Tauton paused, suddenly uneasy again. Patricia’s words came back to her: We can talk freely there. “Yes, like an interview.”
“And after you confirm the consciousness of the organism at the destination end, through some means, you can then proceed to…disengage…at the point of origin?”
“Yes, but here’s where the psychology comes in. By nature, a being does not want to be destroyed. For the organism at the destination end, if successful, the transaction is seamless—no interruption. For the organism at the origin, nothing has happened—they’re still there. Yet they know that if the transaction is successful, they will be destroyed. It is in their nature to resist—strongly—that feature of the operation.”
“How would we overcome this resistance?” Patricia sat up in her chair, signaling this was a question to which she did not already know the answer.
“Ideally, you would prevent them from even knowing they were going to be destroyed.”
“The intention is that these transactions be willful. Customers are coming to us for a service.”
“Yes, but part of that service is your ability to handle the comfort—psychologically—of the transaction. When I was working asset division, I wanted each party to feel like they were getting the better end of the deal. If a party felt like they were compromising, they would resist, and that prolonged the process. The best outcome would be for the organism not to know it would be destroyed at the conclusion of the transaction.”
“How would we accomplish that?”
“If both the organisms believed they were on the destination end of the transaction, neither would resist. The points of reference at the origin and destination must therefore be indistinguishable.”
“Uh-huh,” Patricia said, confirming she understood without committing to full acknowledgement. Dr. Tauton watched as she wrote the words identical pods on the pad in her portfolio.
“I also recommend you engage their mind somehow—but not in a way where they realize they’re being purposefully engaged.”
Patricia wrote the word distract in her portfolio. “If we need a confirmation gap on the receiving end, we wouldn’t be able to destroy the original immediately post-transaction. There must be a time gap at the point of origin as well.”
“Whatever consciousness verification procedure you use on the destination end, you could mirror at the point of origin. You could even monitor both simultaneously to determine whether there are differences in responses or response times. It’s possible, though, depending on your transaction procedure, that the subjects’ states will not be identical. Recognizing what differences are within tolerance must be worked out through testing. How far along are you with testing?”
“Our research and development status is proprietary.”
“Oh,” Dr. Tauton paused, reminded that this entire conversation had been a one-way transaction. “Does that mean I don’t get the job?” she joked, in an effort to break the tension.
“As you stated earlier,” Patricia responded, “this is more of a consultation.”
“We haven’t settled on terms.”
“I think we’re finished here.” Patricia Garth snapped her portfolio shut, punctuating her seriousness. “I’ll show you out.”
Dr. Margaret Tauton entered the vestibule. As she was about to push her way through the outer door, something caught her attention—she saw the digital logo in the glass: ‘Instant Commuter, LLC.’ No extra ‘m.’ The typo had been corrected. She paused, wondering when they discovered and fixed the error.
Outside the vestibule, Patricia Garth squeezed her eyes shut against what she knew was coming. In a blinding flash of light, a plasma arc disintegrated the copy of Dr. Margaret Tauton, reducing her, at the molecular level, to ash.
Patricia opened her eyes and spoke freely. “I think we were getting somewhere that time. She’s definitely getting more comfortable as I get to know her backstory. Generate another copy from that same image, and let’s go again.” She tore the notes from her pad and tucked them into a hidden pocket in the portfolio.
With no way to get further into the building, Dr. Margaret Tauton turned and pushed open the door to the outside, back into the air and sunshine. The door closed behind her. She turned back and looked at the paper sign taped to the door: ‘Instant Commuter, LLC.’ Convinced now this whole thing was a prank, she took a deep breath of cool October air, walked back to her car, and drove away.