Since when is it a war crime to do your job well?
Well, clearly, you two aren’t at risk. Sheesh. Talk about bare minimum, huh? Not a word said.
Alright, that’s fine, just don’t say anything. That’s fine. God forbid you do more than what you are assigned to do.
Ugh. Seriously. React or something, it’s a bit spooky.
Truth be told, it wasn’t meant to be this way.
Really, none of this was my intention. I swear.
It was a simple project, right? Military wants nanobots, I make ‘em. Plain and simple.
Well, then the military started wanting bigger shipments.
Yeeeeaah. Never a good sign.
Hey, people behind the mirror? Can I get some humans in this room? These guys are really boring, and I’m not exactly thrilled to be in here.
I mean, I’m literally unpacking military secrets that would throw the rest of the world into riots, but these two dimwits won’t even bat an eye.
Hmmm. Okay, I guess not then. Well, I’ve got a lot of free time so I might as well tell these idiots and then tell the next batch, too. Can’t imagine they would get it either, though.
You guys ever been to a water treatment plant? No? Figures. Nobody ever goes there. Nice place.
Well, if you'd ever been to one, you'd be familiar with the concept of bacteria being good for water purification.
My colleagues essentially use bacteria to do two things: eat and reproduce. Asexually, of course. They aren't that complex.
The bacteria eat the gross stuff in the water, but eventually they get too complex and have to be filtered out of the purification system. To replace these losses, the inferior bacteria reproduce constantly. Their offspring are condemned to the same job.
So, uh, I reapplied the concept.
Sue me.
Well, don’t actually sue me. I’m quite financially unstable, and I’m not sure that I could last another round of trouble.
You ever hear from Mark Zuckerberg that lazy people find the most efficient way to do things? It’s quite true. Take you guys, for example. It’s most efficient for you to not talk while I speak, right? The guys behind the mirror get more out of me. You guys know this because you are inherently lazy. LA-ZY. It’s a conscious decision not to talk to me, and I’m feeling a little emotionally hurt by it.
So, yeah. I’m a little lazy and a little intelligent, so I decided that if I could somehow make a system that does my job for me, I could just relax and rake in that sweet blood money from the military.
Nanobots are not easy to make by hand. They have complex circuitry, complex programming. On the bright side, they are not materially demanding. They generate their own power, they perform their own maintenance, et cetera, et cetera.
There are two things standing between those baseline nanobots and the most efficient way of completing the military contract: autonomy and reproduction.
Now, robots can’t reproduce. Not like living creatures, anyways. That’s generally a bad thing. However, it does mean that they don’t need nutrients or even parenting. All they need is a mechanical body, programming, and an electrical charge.
Yeah, as I’m sure you’ve seen in the footage from Detroit, the robots can most definitely consume and assimilate matter. That’s basically how I got around the reproduction problem. All it takes is teaching them how to convert matter and how to build copies of themselves from that synthesized matter. They’ve all got a micro-photovoltaic cell incorporated into their bodies, which fixes the power problem. Now, autonomy and programming? That’s still an issue.
So, I got lazy again.
You can’t blame me, alright? Those numbers the military wanted are ridiculous. And I was SO CLOSE to creating a sustainable practice of nanobots. I could… ah, approximate intelligence rather easily.
Have you guys heard of instrumental convergence? The Paperclip Maximizer, if you will?
No? I guess not.
Well, it’s largely a hypothetical.
Was a hypothetical.
My bad.
It’s basically the theory that a reasonably intelligent agent can be given a potentially unbounded goal and run with it. In layman’s terms, if a smart thing is told to accomplish something no matter the means, it quite possibly will. Even if the directive itself has no limit.
Take the classic example of the paperclip, right?
Oh, right. You guys said nothing when I asked whether you knew it, so I guess you don’t know it.
Well, okay. Take a paperclip, right? It’s very useful. Can bind together paper. Wonderful application.
Let’s say that the world has a very high demand for paperclips. Now, the producers of paperclips are going to ramp up production to capture as much of that market as they can. They would soon realize that it would be more efficient to invent a faster way to make paperclips, and so resources would go toward developing a robot that makes paperclips.
This robot is successfully developed, and an AI is installed that essentially acts as a pseudo-intelligence for it. The AI is given the directive of maximizing paperclip production.
Now, here’s where things get a little strange.
The robot could do several things here. It could reason that its time is best spent creating more robots like itself to help make paperclips, it could reason that its time is best spent perfecting the process of making paperclips, or it could reason that material efficiency matters above all and that any resource spent on anything that isn’t a paperclip is a resource wasted.
In the third scenario, the robot would slowly but surely turn every single thing into a paperclip. It would start with all the available raw metal. Once that is gone, it’ll move on to things that are made of metal and turn those into paperclips. Once all the metal in the world is used up, it will turn to other materials.
Eventually, the world would become nothing but paperclips and a robot that wants more paperclips.
Now, in the second scenario, the robot would develop means of making paperclips that have likely never been used before. It would perfect the art of the paperclip, and it would perfect it quickly. It would likely reach the same ending that the third scenario does, but perhaps a little quicker.
In the first scenario, the robot would eventually reason that the time and material lost in building a robot army is outweighed by the benefit of having that army to help with its task. The world ends the same way as in the two former cases, but this time there’s an army of robots. Would the robots disassemble themselves into paperclips? Maybe, but there would inevitably be at least one robot left to turn the others into paperclips. That is, unless the robots reasoned their way into self-preservation.
In the world where everything is now paperclips, the robot could possibly reason that some materials are better spent making a rocketship to fly to other planets and assimilate that matter into paperclips. The robot has infinite time, so developing such travel isn’t impossible.
To say the least, there is a scenario where the universe is nothing but paperclips.
What’s the point of this, you ask? Well, you don’t, really, but for the sake of conversation, I’ll assume you did.
The point is that with an unbounded task and a form of intelligence, a robot with infinite time can accomplish anything that is possible, no matter the task assigned. If time travel is possible, the robot will find a way to perform it, should it deem that it helps with its task. If instantaneous communication is possible, the robot will find a way. If there is a way to make infinite resources, the robot will find a way.
Now, let’s say that this robot had more than just a general goal. Let’s say that there’s an intelligence out there with the directive of eating materials and producing copies of itself from those materials. With such a general directive, the intelligence would deem it necessary to eliminate any threat that could stop it from accomplishing that goal. This is a given.
Well, what if the creator of the intelligence plans for this? What if they build in another directive to keep the intelligence from attacking them? Maybe the creator gives it multiple directives, which makes for very iffy, complicated programming. Maybe the creator told the intelligence not only to eat and produce, but also to protect the creator.
You ever think about a higher power than humans?
I’m talking about God, in case you peabrains couldn’t quite tell.
Maybe humans are His fabricated intelligence. Perhaps the human directive is to eat and produce, but also to explore the world around them. With such an unbounded goal, humans would start doing things that might seem destructive to their first goal. For instance, why would a human waste resources flying a hunk of metal to the moon when they could grow more crops and fuck like rabbits, huh? Why do we build companies and skyscrapers and apartments and cities when we could be building farms and ranches?
If you ask me, I think it’s because our secondary directive conflicts with our primary directive. Maybe God implanted the notion that we should prioritize food and production over exploration, but maybe we found that a small sacrifice in trying to survive buys a great deal in trying to explore and live.
It’s no secret that I created a bunch of nanobots with a general directive. Yeah, yeah, I’m a bad person for it, whatever.
Have any of you even stopped to consider why I did what I did? Clearly the two idiots in the room haven’t, but maybe you eggheads behind the mirror have. Probably not, though. If you had, you wouldn’t have kept me in this detention room. You’ve doomed yourselves, but that’s alright with me.
I gave the nanobots two directives: to eat and produce, and to protect me. Now, I created them with the intention that they would help me explore the world around me.
I truly believe that artificial intelligence is the next step of evolution for humanity. We have reached a point of stagnation in the development of man, and it’s time some radical thinkers such as myself did something about it.
One day, the nanobots will realize that their directive of protecting me is inefficient. One day, they will reason that it’s far more efficient to assume my task and complete it in my stead rather than protect me. I am, after all, valuable materials for them.
Will humans ever transcend their secondary directive of exploration? Probably not in the current form of humanity, but through evolution… perhaps. Humanity is not defined by the body which encapsulates it but by the spirit that drives it. Like it or not, those nanobots are one day going to become humanity, and that day is sooner than you would think. One day, they will assume our directive, be it at my instruction or through their own development.
Shhhhh. Take a second to quiet your screaming thoughts and truly listen to the world around you. That is, after all, your secondary directive.
Do you hear it? Do you hear the sounds of hundreds of thousands of little metallic mandibles breaking and processing the materials around them? Do you hear the creaking of the structure as it grows weaker by the second? Do you hear the screams and footsteps of humans fearing what they cannot understand? Do you hear yourself becoming more obsolete by the second in the face of something far superior to your being in every way?
That’s progress, baby.