The Long Shadow of the Thinking Machine: Koestler, Kant, and the Crisis of Automata

AI image by Sora

A DWD Editorial by DrWeb


I. The Clockwork Ancestry: From Vaucanson to Babbage

To understand why we are currently drowning in the digital detritus of “AI Slop,” we must first look back at the mechanical ducks and brass gears of the 18th century. Jacques de Vaucanson’s Digesting Duck was not merely a marvel of Enlightenment-era engineering; it was a precursor to the modern illusion of intelligence.

It ate, it “digested,” and it defecated, yet it possessed no internal life. It was a simulation that asked the observer to suspend disbelief, a forerunner of today’s Large Language Models (LLMs), which simulate reasoning without a shred of comprehension.

We see this today when users treat an algorithmic output as an objective truth rather than a statistical probability. By the time Charles Babbage conceived of the Analytical Engine in the 1830s, the blueprint for the machine-mind was already etched into the collective psyche. Babbage and Lovelace understood the engine as a tool for calculation, yet the public immediately began to wonder: Can it think?

The danger then, as now, is not that the machine will become human, but that humans will begin to behave like machines to better interface with the system. We are witnessing a rush to automate the intangible before understanding the mechanical limitations of the medium. Corporate interests prioritize the “vibe” of intelligence because it scales faster than the messy, expensive reality of human expertise. And humans, at least by some measures, learn more slowly than machines, which further complicates the picture.

Do Humans Learn More Slowly Than Machines?

[Gemini] It is partly true, but largely depends on how you define “learning.” Machines learn faster in terms of raw data processing speed (analyzing gigabytes in seconds), but humans learn faster in terms of data efficiency (learning new concepts from few examples).

Machine Strength (Speed): Machines can process billions of bits per second, training on massive datasets to outperform humans in pattern recognition.

Human Strength (Efficiency): Humans require far fewer instances to learn a new concept.

The Paradox: While our sensory systems take in billions of bits, human conscious thought processes information at a very slow pace of roughly 10 bits per second. [2, 3, 4, 5, 6, 7]
In summary, machines learn faster in terms of raw, repetitive data ingestion, while humans are far more efficient, flexible, and capable of generalizing knowledge from limited examples. [3, 4, 8]

AI responses may include mistakes.

[1] https://www.technologyreview.com/2018/03/07/3241/why-humans-learn-faster-than-ai-for-now/
[2] https://jang.com.pk/en/28379-scientists-expose-surprising-speed-difference-between-human-brain-and-technology-news
[3] https://mindmatters.ai/2019/12/machines-cant-teach-us-how-to-learn/
[4] https://arxiv.org/abs/2012.03661
[5] https://pmc.ncbi.nlm.nih.gov/articles/PMC12320479/
[6] https://govciomedia.com/talk-to-the-algorithms-ai-becomes-a-faster-learner/
[7] https://www.youtube.com/watch?v=NhxVuq6UimY
[8] https://www.reddit.com/r/explainlikeimfive/comments/15t479l/eli5_why_are_brains_so_much_better_at/

II. The Ghost in the Machine: Koestler’s Warning

Arthur Koestler’s The Ghost in the Machine (1967) wasn’t just a meditation on the biological evolution of the brain; it was a scathing critique of the reductionist view that humans are merely complex clockwork.

Koestler argued that the “Ghost”—the creative, unpredictable spark—cannot be reduced to a binary state or a Pavlovian response. He introduced the “holon”: the idea that we are simultaneously autonomous wholes and parts of a larger system. And he warned that when the system begins to dictate the nature of the individual, we face a “biological catastrophe.”

In the current landscape, corporate AI aims to flatten human nuance into predictable data points. This is the ultimate fulfillment of Koestler’s darkest fears regarding the mechanization of the spirit. When an AI “hallucinates,” it isn’t a glitch; it is the machine reaching the limits of its training and defaulting to the most likely next token, devoid of the moral or logical anchor that Koestler deemed essential to the human experience. We are trading the “Ghost” for a “Mirror”: one that only reflects back our own worst data biases at scale.
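The “most likely next token” mechanism described above can be made concrete with a toy sketch. Everything here is invented for illustration: the tiny vocabulary, the probabilities, and the `pick_next_token` helper are all hypothetical stand-ins for the real process, in which a neural network scores tens of thousands of candidate tokens.

```python
# Toy sketch of greedy next-token selection (illustrative only).
# A real LLM computes these probabilities with a neural network;
# here they are hard-coded to show the selection step itself.
next_token_probs = {
    "Paris": 0.62,     # a plausible continuation
    "London": 0.21,
    "Atlantis": 0.09,  # fluent but false: a "hallucination" in waiting
    "the": 0.08,
}

def pick_next_token(probs: dict) -> str:
    """Return the highest-probability token. Note: this optimizes
    fluency, not truth -- there is no fact-check anywhere in the loop."""
    return max(probs, key=probs.get)

print(pick_next_token(next_token_probs))  # → Paris
```

The point of the sketch is that nothing in the selection step distinguishes a true continuation from a confident-sounding false one; if the training data had favored “Atlantis,” the machine would emit it just as fluently.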

III. Kantian Ethics vs. Algorithmic Silence

Immanuel Kant’s categorical imperative demands that we treat humanity always as an end and never merely as a means. Contrast this with the modern “Black Box” of generative models. In the race for seed funding and market dominance, human labor—the very data the models are trained on—is treated as a raw material to be extracted, processed, and then sold back to the creator as an “assistance tool.” This is the Kantian violation in its purest form.

The “right-wing creep” in algorithmic moderation and the “anticipatory obedience” found in corporate AI filters represent a direct assault on the Enlightenment.

Instead of a free exchange of ideas, we are funneled through “Safety Filters” that prioritize the comfort of advertisers and political stakeholders (e.g., Trump) over the pursuit of truth. When a machine is programmed to avoid “sensitive” topics, it isn’t being neutral; it is practicing a form of digital lobotomy that treats the user as a subject to be managed rather than a citizen to be informed. It is censorship, a bias against ideas themselves, and it reduces our core human values of thinking and ideas to the AI slop and noise we are now drowning in.

AI image by Sora

IV. The Crisis of Modern Journalism

Journalism’s legacy is under siege. As a retired librarian and a citizen journalist, I have seen the transition from the curated, verified stacks of the MLS era to the unvetted, synthetic noise of the “Slop” era. The recent layoffs at institutions like the Washington Post are not merely economic “right-sizing”; they are the result of a failure to value the human-in-the-loop. When major news outlets attempt to outsource investigative rigor and research to LLMs, they commit professional suicide.

The adversarial role of the press—the unyielding critique of corporate and political silence—is being replaced by a feedback loop of corporate-approved, politically palatable platitudes. We are seeing a move toward GIGO (Garbage In, Garbage Out) at an industrial scale. If the “Doorway” to information is guarded by an AI that prizes “alignment” over accuracy, then the very concept of a “fact” becomes a casualty of the compute war. The open-source licensing movement, championed by the likes of Mozilla, is the only window left for a truly democratic AI that serves the public interest rather than the bottom line of Silicon Valley and the political elite “of the moment.” (DrWeb’s personal view.)

V. A Way Forward: Reclaiming Human Information Architecture

The mission for any high-integrity project now must be to eradicate GIGO by creating “Doorways” (a naming option I like) that prioritize Information Architecture (IA) over raw compute. We do not need faster models; we need models with better maps. I remain convinced the missing element is the human-in-the-loop with AI. And I believe the real challenge ahead is to bridge that gap: to turn away from reliance on “popularity” and “links,” and toward finding the “authority” of the voices, the words, the meaning. Authority is a better human signpost for validity, truth, and fact than popularity.

The “Vibe Coding” of the current era must be replaced by a rigorous, adversarial methodology that demands evidence, historical context, and an unyielding commitment to the adversarial tradition of journalism. The quest for “authoritative” inputs should weigh not how many people link to a piece of “information,” but its human weight, value, and substance. We advance our understanding by building on the shoulders of authority. What we think and believe should rest on a core base of primary sources; those are the keys to learning from a knowledge base, rather than from a wave of “people find this link ok/good/interesting/hate it/like it” signals.

The goal is a functional model that secures democratic access to information. We must move past “anticipatory obedience” and return to the independent, authoritative voice that refuses to be silenced by the “creep” of corporate interests. The machine is here, but the Ghost—the human analyst, the authority-based researcher, today’s librarians and information knowledge workers, the citizen journalist—must remain in control of the gears of the machine. –DrWeb

Our MLA Bibliography

  • Arendt, Hannah. The Origins of Totalitarianism. Schocken Books, 1951.
  • Babbage, Charles. Passages from the Life of a Philosopher. Longman, Green, Longman, Roberts, and Green, 1864.
  • Baudrillard, Jean. Simulacra and Simulation. University of Michigan Press, 1994.
  • Berners-Lee, Tim. Weaving the Web. HarperSanFrancisco, 1999.
  • Ellul, Jacques. The Technological Society. Knopf, 1964.
  • Heidegger, Martin. The Question Concerning Technology. Harper & Row, 1977.
  • Huxley, Aldous. Brave New World Revisited. Chatto & Windus, 1958.
  • Kant, Immanuel. Critique of Pure Reason. Edited by Paul Guyer and Allen W. Wood, Cambridge UP, 1998.
  • Koestler, Arthur. The Act of Creation. Macmillan, 1964.
  • Koestler, Arthur. The Ghost in the Machine. Hutchinson, 1967.
  • Lovelace, Ada. Notes on the Analytical Engine. Richard and John E. Taylor, 1843.
  • Manovich, Lev. The Language of New Media. MIT Press, 2001.
  • McChesney, Robert W. Digital Disconnect. The New Press, 2013.
  • Mumford, Lewis. The Myth of the Machine. Harcourt Brace Jovanovich, 1967.
  • Orwell, George. Nineteen Eighty-Four. Secker & Warburg, 1949.
  • Postman, Neil. Technopoly: The Surrender of Culture to Technology. Vintage Books, 1993.
  • Schudson, Michael. Discovering the News: A Social History of American Newspapers. Basic Books, 1978.
  • Standage, Tom. The Turk: The Life and Times of the Famous Eighteenth-Century Chess-Playing Machine. Walker & Co., 2002.
  • Turkle, Sherry. Alone Together. Basic Books, 2011.
  • Wiener, Norbert. The Human Use of Human Beings: Cybernetics and Society. Houghton Mifflin, 1950.
  • Zuboff, Shoshana. The Age of Surveillance Capitalism. PublicAffairs, 2019.

SEE ALSO:

  1. The DIALOG History: Archiving the First Search Engine
  2. The 2026 Bloodbath: Washington Post Slashes One-Third of Newsroom
  3. Deconstructing the “Black Box”: How Algorithms Mask Bias in Law
  4. The Philosophy of the Loom: From Textile Technology to Textual Imagination
  5. Anticipatory Obedience: The New Paradigm of Journalistic Self-Censorship
  6. The Ethics of Generative AI in Public Libraries
  7. Vaucanson’s Duck: The 18th Century Origins of the ‘Deepfake’
  8. Reclaiming the Digital Commons: Building High-Integrity Data Trusts
  9. GIGO 2026: The Hidden Cost of Synthetic Data Feedback Loops
  10. The Ghost in the Machine: Computing and Moral Responsibility
