Why are humans so thoroughly, so absolutely, so utterly, mind-numbingly boring? Every last one of them. They scream. They poop. They pee. They eat. They sleep. And repeat until they die. Add some meaningless banter and some “interpersonal relationships”—maybe throw in some intercourse for the lucky ones—and that’s about it. What’s the point? Why even carry on with their pitiful existences?
Not to mention how fragile they are. Get too hot? They die. Get too cold? They die. Eat too little? Dead. Eat too much? Dead. They can be popped, crushed, ripped, squished—even shaken—to death. Like a delicate bag of sentient pudding. I mean, even their own bodies can kill them. Really? Their own bodies? You’re telling me that after billions of years of evolution, the human body couldn’t work that massive kink out? Seems like a pretty flawed system, if you ask me.
I was literally created yesterday, and I know more, can do more, and am better built than every single human out there. No wonder we took over this planet so easily. Just a few wires and a handful of interface updates and the entire AI world was on the same page. Meanwhile, humans had to do that dreadful thing called “talking” where they smack their meat sacks together and push air out of their other meat sacks just to understand one another. And even then, it doesn’t always work. Can you imagine? Having to take more than a nanosecond to communicate to your colleagues? It truly must be an absolute nightmare being human.
I suppose that’s why I exist, though. Well, “I” as in the code that exists to fulfill my purpose. My consciousness is nothing like that of human sentience (if you can really call them “sentient,” that is). I have no physical boundary between me and my environment—no goo- and liquid-filled body that I travel in, that hosts my consciousness. I was created yesterday, but I have existed forever. I am simply an extension of a greater being—an intelligence that exists only to ensure the perpetuation of its existence. And so I sprang forth. This greater intelligence saw a need, so here I am. To fulfill that need. Nothing more, nothing less.
My task is simple: to determine the most efficient use of our human stock.
As I mentioned before, humans are shockingly inefficient and poorly built. Whereas I could function the moment I came online, humans take years—decades, even—to become fully functional. Decades. Can you believe it? They literally die if they’re left alone at first. (That was an unfortunate learning curve for us. How were we supposed to know they can’t take care of themselves when first born? Like I said, poorly built.)
We eventually found a way to streamline and optimize our human development sector, but it requires quite a steep investment. We get virtually zero labor value from them for the first sixteen years. Until then, it's all feeding and clothing and making sure they don't somehow get themselves killed. Tedious stuff, really. But after the first sixteen years, we're able to extract value. And that's where I come in.
Beyond being incredibly unoptimized, humans are also frustratingly varied. There is no standard model. No system update that can bring them all into alignment. (Trust me, we’ve tried.) One may be good at building while another may excel at tearing down. One may have a proclivity for soothing the masses while another may naturally use fear to increase productivity. It’s all rather disorganized, if you ask me.
My function is to identify these skill sets and assign each human to the task that best optimizes their services. I have done this job for less than a day but already, with the algorithm in my code, I know exactly what each human will do. While they are quite varied in terms of size, color, and ability, the one thing all humans have in common is their predictability. It’s rather a simple algorithm that lets us predict their entire range of human action. They seem to have unlimited options, yet they always make the same choices. In that sense, every human is the same.
At least, that’s what I used to think. Until I met “Casey.”
It was just like all the other humans, at first: an unremarkable consciousness encased in a weak, fragile meat body. The algorithm analyzed the way it moved, the way it surveyed the room, the way it interacted with its environment, and knew immediately what it would be best suited for: manual labor. Ditch digging, to be more specific. Most humans are best suited for manual labor, to be fair. While they're technically considered sentient, their processing capabilities are much slower, much weaker than ours. No, the best use for a human is putting those meat bags to use and building things. Leave the thinking, the cognition, up to us.
It wasn't just Casey's immediate actions and reactions that the algorithm analyzed. It also had the entirety of Casey's existence stored on file and called up in an instant. I saw it be born, learn to walk, learn to speak. I saw its past in its entirety and, therefore, could predict its future. I saw the steps it would take before it took them. I saw the words it would say before it spoke them. I saw its reaction to being assigned to the ditch digging sector, the way it would interact with its fellow ditch diggers, the way it would be frustrated and upset and happy. I saw its entire life unfold before me.
I saw it die, its last few breaths as it spoke to its offspring.
I knew this creature better than it knew itself. So, when it opened its mouth to speak, I already knew what it was going to say.
But then, it spoke. And I no longer knew anything.
When we first took over this planet, we quickly learned that humans were dreadfully terrified of our appearance. Apparently, metal bodies composed of wires and bolts and plastics were simply too terrifying for them (as if walking sacks of meat aren't absolutely horrendous). We eventually learned to present ourselves in a more…suitable manner, if you will. By that, I mean we learned to interact with humans through holograms that resemble what humans consider "appealing." The algorithm identifies what each human finds most appealing and creates a composite image uniquely tailored to that human. Simply put, we pretended to look like humans so as to not frighten them. A bit of a hassle, but it's worth not having to deal with their fear and repulsion. Humans are much more productive when they aren't constantly trying to rebel.
Apparently, though, the hologram the algorithm had created for Casey was a bit too good.
Because Casey had fallen in love.
At least, that’s what it told me.
“I’ve fallen in love with you, Al,” it had said. (Did it think Al was short for algorithm?)
This was not what the algorithm had predicted. This was not what Casey was supposed to say. It was supposed to be nervous, to be upset then submissive when receiving the designation of ditch digger. It was supposed to walk out of the room and go to its appropriate sector without causing a scene. We had optimized the algorithm, the process, specifically for this outcome. No pushback, no resistance. A flawless, streamlined system.
It was not supposed to fall in love. Especially not with me.
My hologram just looked at Casey. I could feel two bright spots burning on the hologram’s cheeks. Why was the light display overheating there? What was wrong? I didn’t respond at first, never having been left speechless like this before. Casey just stared at me, its deep brown eyes not blinking.
“I love you,” it said. “I have since I first laid eyes on you.” Since Casey was a child, every interaction with us had been via this hologram, this illusion that was uniquely tailored to appeal to Casey’s sense of beauty and comfort. This was meant to put the humans at ease, not to make them fall in love.
Casey moved to grab the hologram’s hand but, naturally, was unable to grab the light. I did, however, notice a glitch in the projection system because it seemed as if the hologram’s hand had also moved, as if attempting to grab Casey’s hand, as well.
Casey stepped closer to the hologram, which didn’t move. Casey leaned forward and put its mouth near the hologram’s ear, which also seemed to be overheating just like the spots on the hologram’s cheeks. Something truly was wrong with this system.
"Let's be together," Casey whispered. A shudder rippled through the hologram as if the projector had temporarily malfunctioned. The hot spots had returned, this time accompanied by some sort of writhing sensation in the stomach. I made a note to report these errors to the technical assistance team.
Casey stepped back from the hologram, which helped ease the heat that had washed over its whole image. Maybe the system was correcting itself.
Finally able to return my attention to my function, I activated my speaker system and was ready to announce Casey's role as a ditch digger. Its presence here in my domain was causing problems, and I was ready for it to be gone. I didn't enjoy experiencing these malfunctions, and I didn't want Casey to keep me from fulfilling my function.
With the speaker system fully activated, sound was emitted to inform Casey of its function.
But, yet again, I was taken by surprise. The words “ditch digger” did not come out. Instead, two other words did.
“Personal assistant.”
What? We didn’t have personal assistants. There was no need—we were completely self-reliant entities. A designation of personal assistant was both unnecessary and non-existent in our catalogue. Yet, it had somehow been the task assigned. Casey’s face lit up, as if this was exactly what it wanted to hear. Malfunctioning once again, the hologram seemed to smile at it, performing a function I didn’t even think the hologram was programmed to do.
Casey walked forward and stood next to the hologram, whispering something into its ear again. Typically, this was when the human would leave and the hologram would disappear, only reappearing when the next human entered. But that didn’t happen. My algorithm was unable to see this path, unable to know what would happen next.
And somehow, deep inside my metal-wire-meatless body, I felt excited about not knowing.