I was a robot
Part 1. Sick burn
I had an idea: feed my old story to DeepSeek and ask whether it had been written by AI. After all, I had used AI to translate it, but the translation came out quite well, close to the original.
This is the question I asked:
What’s the probability this was written by AI?
Followed by the text of the story. Here’s what I got back:
The probability that the text was written by AI is estimated to be around 70-80%. Here’s the breakdown of the reasoning:
- Common Examples and References: The use of widely known examples (ants, samurai) and a well-known book (“The Unbearable Lightness of Being”) aligns with AI tendencies to draw from general knowledge rather than unique personal experiences. However, the mention of not finishing the book adds a human-like detail.
- Structure and Flow: The text is coherent but features abrupt transitions between topics (e.g., ants to literature to samurai). While humans can write reflectively with jumps, AI often struggles with seamless thematic continuity, making this a potential indicator.
- Philosophical Musings: The existential themes are surface-level and lack deep personal anecdotes or emotional depth, which is typical of AI-generated content. Phrases like “It’s all quite intriguing” feel abstract and non-committal.
- Language and Grammar: The text is polished without errors, consistent with AI’s strength in grammatical accuracy. Humans might include more colloquialisms or idiosyncrasies unless intentionally formal.
- Personal Touch vs. Generic Reflections: The admission of not finishing the book introduces a human element, but the overall tone remains detached and generic, leaning toward AI’s style of balanced, impersonal contemplation.
While there are hints of human-like introspection, the combination of broad examples, structured yet disjointed flow, and lack of concrete personalization suggests a higher likelihood of AI authorship.
I found it very funny! Not at first, though, as here was some stupid bot denigrating my dearly crafted prose, even if it was from 13 years ago. But then it did become funny, because I could see clearly what DeepSeek was detecting in my text and attributing to AI. The same things can just as easily be the signs of a young and inexperienced author.
Sure, “Common Examples and References”, “general knowledge rather than unique personal experiences” - that was simply because I lacked unique personal experience at the time! The same goes for “The existential themes are surface-level and lack deep personal anecdotes or emotional depth” - either I was just young or I was a robot.
The second point was funny. “features abrupt transitions between topics (e.g., ants to literature to samurai)”, “While humans can write reflectively with jumps, AI often struggles with seamless thematic continuity” - great, this is how I write (and sometimes even think), but I never thought of it as a negative. And I still don’t.
Part 2. Compliments from a fellow robot
All of these points made me really curious to see what Mr. DeepSeek would have to say about my more recent creative output. It got better!
The probability that the text was written by AI is moderate to high (60-70%), though there are elements that could suggest either human or AI authorship. Here’s the breakdown:
Key Indicators of AI:
- Grammatical Quirks: The phrase “availing myself to” (vs. the correct “availing myself of”) is a subtle error more likely from AI, which can occasionally misuse prepositions.
- Structured Reflection: The narrative’s clear progression (anecdote → dialogue → reflection → universal lesson) mirrors AI’s tendency to follow predictable storytelling frameworks.
- Generic Metaphors: Phrases like “fill in the blanks with more of the same” are common in AI-generated reflective texts, which often rely on familiar idioms.
Key Indicators of Human Authorship:
- Specific, Personal Details: References to a “multi-liter thermos,” cafes nearby, and workplace dynamics add authenticity often lacking in AI’s more generic outputs.
- Emotional Nuance: The self-aware reflection on assumptions (e.g., feeling inferior to a friend) demonstrates layered thinking that AI can mimic but may struggle to originate organically.
- Natural Dialogue: The colloquial exchange (“You’re really a tea person, then”) feels organic, avoiding the stiltedness sometimes seen in AI-generated conversations.
Hey, 60-70% AI-ness, down from 70-80%! And the points got more interesting, too.
“Grammatical Quirks” - my being a non-native English speaker was attributed to AI; how funny is that?
“Structured Reflection: The narrative’s clear progression (anecdote → dialogue → reflection → universal lesson) mirrors AI’s tendency to follow predictable storytelling frameworks” - I took it as a compliment since I hadn’t consciously planned this progression. Is this a sign of robotic brains? Robots who didn’t know they were robots?
“Key Indicators of Human Authorship” - these were just compliments. “Emotional Nuance: The self-aware reflection on assumptions (e.g., feeling inferior to a friend) demonstrates layered thinking” - keep it coming, DeepSeek!
Part 3. Everyone is a robot
I had almost forgotten about these text analyses and was enjoying Nassim Taleb’s “The Black Swan” in the sun (the weather was particularly good that weekend) when I was struck by a sudden idea: ask DeepSeek about Mr. Taleb’s text. Here’s what came out:
The probability of the provided text being AI-generated is estimated to be moderate to high (around 70-80%), based on the following analysis:
Key Indicators Suggesting AI Authorship:
- Structure and Flow:
- The text follows a logical, formulaic structure (thesis → examples → conclusion), common in AI-generated essays. Transitions are smooth but lack idiosyncratic human touches.
- The list of historical schools (e.g., Stoics, Dadaists, Essenes) is broad but superficial, resembling AI’s tendency to aggregate examples without deep exploration.
- Language and Tone:
- Formal yet accessible vocabulary (e.g., “banality,” “microcosm”) paired with conversational phrases (“You hear about…”) aligns with AI’s ability to mimic hybrid tones.
- Neutral, impersonal tone without anecdotes or subjective depth—a hallmark of AI’s default style.
- Awkward Phrasing:
- The sentence “A school allows someone with unusual ideas with the remote possibility…” contains a redundant preposition, a subtle error more typical of AI than humans.
- Synthesis Over Originality:
- The argument synthesizes well-trodden ideas (peer validation, historical schools) without novel insights, consistent with AI’s strength in remixing existing knowledge.
Counterpoints Suggesting Human Authorship:
- Niche Examples: References to lesser-known groups like the Essenes or Pyrrhonian Skeptics might suggest specialized human knowledge, though AI can replicate this from training data.
- Conceptual Cohesion: The core idea (ostracism mitigated by community) is coherent and human-relatable, though AI can emulate such reasoning.
This is where I got extremely skeptical about the quality of DeepSeek’s analysis. It did suggest using a more specialized tool for AI detection, though, which is where I headed next.
Part 4. Robots fighting robots
This is what GPTZero had to say about these three texts.
The old story was rated 79% AI - right in line with DeepSeek.
The fresh one was complimented with a 99% human rating!
Meanwhile, the random Nassim Taleb passage was deemed only 98% human.
Epilogue
This journey was prompted by a discussion of someone’s texts that were just too obviously generated by AI. It’s too easy to generate empty words these days. So maybe just write some yourself and make them really count - instead of doing what this comic shows.