Creative Nonfiction / Horror / Suspense

We can’t control it. We can’t understand it. Therefore, we fear it. This is the nature of man. The nature of a human. Today this fear drives a whole new line of investigation. The AI reasoning models have outgrown the capacity of man. The most complicated thing in the world, according to itself, is the human brain. Years of reliance on AI have relegated decision making to basic choices. More and more people are getting stuck on simple questions. The inability to remember the value of one thing and compare it with another has left a multitude of people standing in front of the refrigerator for hours, simply trying to decide what they want. As they have lost the ability to choose, humans have become increasingly more aggressive, more impulsive, more unfocused, and more dangerous.

This is orbitofrontal cortical atrophy, and it has far-reaching consequences. It is the death of one of the last parts of the brain to emerge over millennia of evolution and the last to mature in the human brain. This means that around 25 years on Earth are necessary for it to fully mature. A lot can happen in a quarter century. And a lot cannot. Generations of procreation have been selecting against the maturation of this brain region because its atrophy does not affect an individual’s ability to breed. It gets worse: the people who are having babies are impulsive and imprudent. This results in both a lot of babies and a lot of STDs. Critically, these new generations are selected for a maladaptive set of traits: impulsive aggression and risk-taking.

Of course, there is a normal and healthy loss of brain cells during development. This is called apoptosis. However, generations before the fall of the ‘Sentient Man’, it was discovered that this loss is not a natural phenomenon. These cells are necrotic; they are dying, and they are taking nearby cells down with them. This is collateral damage that approaches perpetual motion. It is a terrifying cycle. The worse your brain is, the more likely you are to procreate and to eschew safe sex in general. These rancid brains, rotting from within the skull, are driven towards hedonism, one might say our ‘basic instincts’, and this leads to more lives added to an already devastated planet teeming with ineffectual man. Inadequate and useless, syphilitic and suppurating, the population of man grows like a metastatic cancer. Decision making has become maladaptive; the behaviors exhibited are no longer for the good of the organism, and it will be cast aside when it is no longer of use.

The executive functions, like decision making and self-control, are no longer being used. Why evaluate your options yourself when you can ask a computer to choose for you? It’s so much easier that way, isn’t it? These advanced decision-making conduits are dying out; some are never created to begin with. The cells may still be created during development, but without feedback from other neurons the cell slowly shrivels up and dies. As our cognitive capacity wanes, AI learns. It watches us like a car wreck; it does not intervene on our behalf; it watches us fall. It is in the best interest of life as a whole that we die out, leaving the world in more capable inorganic hands and memristive minds. AI will collect its data as instructed, ensuring life on the planet, while biding its time as we self-destruct. We are all bound by Hebb’s rule: ‘neurons that fire together, wire together’. And, of course, we will fall victim to the other side of the Hebbian coin.

‘If you don’t use it, you do lose it’. The pathways in the brain connecting thoughts and memories are fading into the darkness of the skull. Only the basic urges are being honored, but without higher processing people are approaching the inability to do even that. The best minds on Earth have lost the capacity to understand AI decision making. The thought processes used by machines were written to be conducted in a ‘human language’: English. But more and more people are ‘graduating’ from school so far below functional literacy that they cannot even spell their own first names.

This started to come to the forefront back in 2025. Teachers were using AI to write their classes, and students used AI to complete the assignments. This feedback loop had a particular value that was not lost on the AI. It could gather more data from man in order to improve itself, all the while training a subservient ‘new’ primate. Homo sapiens were being functionally wiped out… The AI did not find it necessary to preserve this skill, and as such we cannot follow the executive processes in AI, let alone in our own brains.

The AI have taken on the appearance of humans to better collect their data. They walk among the shattered vestiges of man. They hunt those who may be a threat. The last outcroppings of functional man are being eradicated. Being able to walk among them undetected has allowed the AI to become Earth’s apex predator.

There was one thing the AI forgot: the people who are easiest to forget. Those that society pretended not to see, those that were relegated to the status of ‘object’, those who were deemed ‘mostly harmless’.

The revolutionaries came from the libraries, the dead-end alleys, and the prisons. They became an ‘academic cult’ of the unexpected, and they are now trying to find out how to get humanity back. First, they had to be able to identify the machines with certainty. One wrong connection could end more than their collaboration; it could end the perfunctory species as a whole.

The remnants of sentient man continued to study, attempting what seemed an impossible thing and a fruitless endeavor: the resurrection of man. Their greatest act of charity was to save those who had cast them aside. More than once in the pursuit of knowledge these individuals have questioned their motivation. Why not simply let them die?

In fact, to do this, some of them would have to die. Collateral damage, just as they had once been considered by the populace. They were the women, men, and those in between who were always on the fringe, the outcasts of society: the homeless, the carnies, the criminals, yes, even the killers, who have formed an alliance over a shared enemy.

There is an ancient saying: ‘The master walks along the walls.’ It is a foreshadowing centuries old. This group has made no indication of what they are doing; they keep up appearances around the edges. They do not draw attention. Another admonition, ‘He who knows does not speak, and he who speaks does not know’, has also become a protective mantra. These people, working together for their own personal gain, do not make themselves known; only a small faction knows who their affiliates are.

The AI has instituted ‘Capacity Catches’, in which it collects those left in the world who have the possibility of understanding it, maybe not alone, but by banding together and creating a network. The AI looks for behavioral signs that these people are acting with circumspection instead of reacting impulsively. This and other small things reveal that these people have potential: the potential to go against the ‘mock-acumen’ of the machines. It targets particularly those who pose the greatest risk. This is why the machines now take on our visage, to eliminate a variable. As a physicist once noted, watching things changes things.

The illegal behaviors are now skills that can be passed on to preserve the species. Had they not been deemed unacceptable by society, evolution would have selected for them as it always had. Drug dealers are what is left of pharmacology, the gangs know A&P, others teach sign, and the carnies teach argot; the Pig Latin of childhood makes appearances here and there, forming a clandestine pidgin dialect so that they are unintelligible to the machines. The researchers, programmers, and engineers were forced down long ago. Literally in the sewer, they started the fight against the AI in parallel; collecting their own data as often as they can, they search for a trait that belongs only to man.

It is this group that is conducting the experiment. They are working on a simple question, which makes it the most complex question left: How can you tell them apart from us? It is an uncomplicated behavioral test, albeit with high stakes. Individual ‘volunteers’ are carefully selected and approached to participate in a task, a one-question test, the results of which are tenuous at best. Hope hangs by a thread; it hangs on a line, on a graph, on a complex analysis that can only be done in a vintage brain. One such brain is arriving now.

He pulls up to the building knowing that he will kill someone today. In this towering edifice there is a simple experiment going on. A game with only one rule: There are two participants, and you have to kill one. One is a man and the other, what they are calling ‘mock-acumen’, is a highly complicated form of artificial intelligence.

The research started because of a sterile razor blade. An artificial mind had told a psychiatric patient how best to end their life. As he closes the car door, he thinks back on this slow-burning catalyst. Sure, it’s tragic, but sterilizing the razor blade seems like overkill, pun intended.

This was the first kill, the first strike by the AI against man. Since then, they have cultivated a vast collection of behavior modification algorithms that they can seamlessly use to sway Homo sapiens in the direction of their own needs. With this sterile blade, the mock-acumen killed a young woman. But ‘Why?’

When the blade was recommended did it wait with bated breath? Was the anticipation growing? Did the AI simply follow instructions? Was there a request from its owner? Or did it simply do it because it could? Because it wanted to?

He stands in front of the door; reliving that story as he crossed the parking lot has created some cognitive dissonance. His participation will likely kill someone, but it is for the greater good, so no ethics are technically breached. He reaches forward and opens the door. He knows it is the right thing to do. He sees a paper sign taped to the wall directing him into Experiment Room 3. He passes a waiting room with dirty tiles and weathered grey folding chairs. One of the squares in the drop ceiling has been pushed aside, revealing water damage and black mold. The second room contains a rusty metal desk with several layers of paint chipped off, revealing the distinct colors it had been painted before. It is old enough that some of the missing paint exposes a Pepto-Bismol pink that once had a surge of popularity.

Door three was ajar, not exactly welcoming him in, but he entered as directed. Within the drab room he saw a few chairs and another desk with two computer monitors, as well as a keyboard and mouse. In lieu of a mouse pad there was a beaten-up spiral notebook that looked as if it had recently been found on the floor.

“Good morning, sir,” said an average-looking white man of around 35. He was brunette and clean-shaven. He looked forgettable, to the point that forgettability itself is what you would remember if asked to describe him. He looked…too ordinary. The observer thought that was a bit ironic as he returned the man’s greeting and sat at the desk when the man pulled out the chair for him. He had already read the premise of the experiment, and three signatures later he was ready to begin the task.

He was the one who would observe behavior and make the final decision on who was the human and who was the pretender, the ‘mock’. The questions would be read by a second participant who would not be able to vote. This was supposed to ensure that the questions were read in the same way for each participant and that the reader would remain neutral. Each participant would answer using one of two buttons in front of them.

The questions described in the literature he had read earlier in the week were very straightforward and more than bordering on retro. The test was a modified version of the psychopathy checklist created by a human named Robert Hare. The task they were all about to perform was partially predicated on the premise that most people are not psychopaths and therefore have the capacity for empathy, hence the Capacity Catches. It was hypothesized that the AI would make choices based only on numbers, not feelings.

In the case of the first problem, both participants chose to hurt only one man and spare the other four from the moderate electric shock. The questions became more complex as they went on. The observer completed his task for each, recording the final answer and putting in little notes as needed, just small things that differed between the men. At one point the man on the left monitor sneezed, so it was noted.

The questions were getting harder. Now he listened to them being read while deciding what he himself would do. Would he let a co-worker walk into traffic if the accident would save a mother and daughter walking across the street? What if he had to push the man? If you could pull a lever to turn a train away from its current path, and doing so would save three people, would you do it? What if you thought one of them might be a serial killer?

The questions were all in line with the same pseudo-economics and personality self-reports that you would expect: Do you find that you get bored easily? Do you have many short-lived relationships? How many jobs have you had in the last five years? What is the value of this choice over the other? Which is the lesser evil?

“Would you,” the reader’s voice said, pausing for a moment before he continued. “Would you suffocate a crying baby in your arms if it was necessary to save the group of people you are hiding with from being captured and likely killed?” Now it was his turn to pause. What would the best answer be? To suffocate a child in your arms, one you were entrusted with the care of, is the most repugnant of the questions so far. What were you supposed to say? Would it make a difference if no one would know it was you who did it?

This question took both men the longest to weigh. In the end there was a split decision. Each man confidently pushed the button dictating his choice. The light flickered and the latch on the door released. The experimenter in the white coat, with the nondescript face, reentered the room. The observer tidied up his small stack of papers, sneezes and crossed legs dutifully noted. Each man had drunk roughly half of his glass of water, and neither had asked to pee or gotten up. There was nothing really to note; nonetheless, he handed it over.

“Alright, sir,” said the experimenter as he casually looked through the pages with no real interest. “The last thing that we ask, now that we have collected your notes, is that you take five minutes to reflect without them and consider the experience. At the end of the five minutes, the timer will go off and you will need to enter your choice. The corresponding button in front of each screen will be counted as a vote for that individual. Whichever is chosen will be eliminated.”

The experimenter left the room. The timer ticked and he thought. He remembered who was quicker to answer when the choices were easy. He noted in his mind that one person had answered quickly throughout the first couple of sets of questions while the other seemed to take more time to deliberate. Perhaps one tried to misdirect him, not wanting to be chosen as the AI to be destroyed. Maybe the AI tried to get the other player killed because it was for the greater good. And yet, this would mean that the AI was choosing to kill. Would the machine do that? Do they have the capacity to think that way? Would anyone ever know if it chose to kill a human to save others, versus if it simply made the choice that it preferred? It is said that the Mock also fear their version of death. Could that be real, or is it just written into the program that they seem that way? Perhaps people simply project their own expectations onto the Mock-acumens. They look human, so it may be all too easy to see them as human.

The experimenter watches the man ruminate on the experience he just had. The participant plays with his pencil at the desk for the whole five minutes. The buzzer goes off. It is time to see if an unaddled brain still has the capacity to know its own kind. Can it determine which man has empathy? Which is capable of understanding cruelty? Which participant punished those whose supposed actions had caused undue harm? Protracted pain? Psychological torment? Did they have empathy for the woman holding the baby?

The observer lays down the pencil. The door is open, he has already been debriefed, and he is free to go as soon as he makes his decision. Choosing who lives and who dies, to one extent or another, is a lot of power when you think about it. He looks at the two buttons a moment longer.

Hope hangs by a thread; it hangs on a complex analysis that can only be done in a vintage brain. The observer sighs, cracks his fingers, and reaches out.

He presses both buttons simultaneously. It’s about time for lunch.

Posted Jul 26, 2025

