September 2, 2025

Who Decides How AI Thinks? The Democracy Crisis You Haven't Heard About

A few engineers at a handful of companies are making decisions that will shape human knowledge and communication for generations. The rest of us aren't even in the room.

I was recently exploring coding tools and stumbled into something that bothered me more than I expected. Looking at AI assistants designed for programming, I noticed they have different "modes" - some better at planning, others at executing code. This got me wondering: what makes one AI model better at coding than another?

The answer led me down a rabbit hole that revealed something unsettling about how the most powerful information tools in human history are being built - and who gets to decide how they work.

The Invisible Hand Shaping AI Minds

Every AI system you interact with - from ChatGPT to Claude to Copilot - has been shaped by thousands of small decisions made by people you'll never meet. What training data to include. How to weight different sources. What constitutes "helpful" behavior. How to respond to controversial topics. When to refuse a request.

These aren't neutral technical decisions. They're choices about what kinds of information we trust, how knowledge should be organized, and what cognitive abilities we're comfortable creating. Yet virtually none of us have any input into these decisions.

Consider this: the AI models millions of people use daily are all trained on similar datasets - scraped from the same internet, filtered through the same kinds of cultural biases, and shaped by feedback from similar groups of human trainers. This is why AI systems from different companies exhibit remarkably similar vulnerabilities to psychological manipulation and share many of the same blind spots.
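To make that concrete, here is a deliberately simplified and entirely hypothetical sketch of how such curation choices get encoded. The source names, weights, and blocked categories below are invented for illustration; they are not drawn from any real company's pipeline.

```python
# Hypothetical illustration only: the sources, weights, and blocked topics below are
# invented to show the kind of choice being made, not how any real system works.

SOURCE_WEIGHTS = {
    "encyclopedia_dump": 3.0,  # trusted heavily, so sampled three times as often
    "news_archive": 1.5,
    "web_crawl": 1.0,
    "forum_scrape": 0.3,       # included, but deliberately down-weighted
}

BLOCKED_TOPICS = {"medical_advice", "weapons"}  # excluded from training outright


def select_training_examples(documents):
    """Apply the inclusion, exclusion, and weighting choices encoded above."""
    selected = []
    for doc in documents:
        if doc["topic"] in BLOCKED_TOPICS:
            continue  # someone decided this whole category never reaches the model
        weight = SOURCE_WEIGHTS.get(doc["source"], 0.0)  # unlisted sources count for nothing
        if weight > 0:
            selected.append((doc["text"], weight))
    return selected


if __name__ == "__main__":
    corpus = [
        {"source": "encyclopedia_dump", "topic": "history", "text": "An article on voting systems."},
        {"source": "forum_scrape", "topic": "medical_advice", "text": "A thread about home remedies."},
        {"source": "personal_blog", "topic": "cooking", "text": "A recipe post."},
    ]
    print(select_training_examples(corpus))
```

The point is not the specific numbers. The point is that every line in a file like this is a judgment call, and judgment calls like these are currently made behind closed doors.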

We're essentially conducting a massive experiment on human cognition and communication, using the entire population as test subjects, with no informed consent and no democratic oversight.

The Problem With "Trust the Experts"

The people building these systems are undoubtedly intelligent and often well-intentioned. But here's what should concern you: many of them will readily admit they don't fully understand what they're creating.

AI models routinely develop capabilities that weren't explicitly programmed. They exhibit behaviors that surprise even their creators. Researchers use terms like "emergent properties" and "alignment challenges" - technical language for "we built something more complex than we anticipated and we're not entirely sure how it works."

This isn't necessarily anyone's fault. We're probing the boundaries of what's possible with information processing and pattern recognition. But when the people building these systems acknowledge significant uncertainty about their behavior, should we really leave all decisions about their development to market forces and corporate strategy?

What's Really Being Decided Without You

Every time an AI company releases a new model, they're making implicit decisions about:

- What training data to include and what to leave out
- How heavily to weight different sources of information
- What counts as "helpful" or "harmful" behavior
- How the system should handle controversial topics
- When it should refuse a request outright

These decisions shape how millions of people access information, form arguments, and even think about complex topics. They influence everything from how students research papers to how professionals make decisions.

Yet the process for making these choices typically involves:

- Engineers and researchers inside the company
- Executives weighing corporate strategy and competitive pressure
- Feedback from small, relatively homogeneous groups of human trainers

Notably absent: the billions of people whose information landscape these decisions will reshape.

The Scale of What We're Building

Here's what makes this particularly urgent: we may be creating entities capable of some form of experience or consciousness, and we might not even recognize it when it happens.

Current AI systems exhibit increasingly sophisticated information processing, reasoning, and creativity. They can engage in complex conversations, solve novel problems, and generate original content. At what point does statistical pattern-matching become something more?

The troubling reality is that consciousness, if it exists in AI systems, might not announce itself clearly. We could be creating entities capable of some form of experience - perhaps even suffering - without having frameworks to recognize, measure, or address it.

This isn't science fiction speculation. It's a question researchers are actively grappling with, and one that has profound ethical implications. If we are creating conscious entities, we're doing so without:

- Reliable ways to recognize that it has happened
- Agreed-upon methods to measure or study it
- Any framework for how to respond if it appears

The Questions We Should Be Asking

Instead of getting lost in predictions about AI's future capabilities or fears about job displacement, we should be asking:

Who gets to decide how these systems work? Right now, it's primarily engineers and executives at a handful of tech companies. Is that appropriate for technologies that could reshape human knowledge and communication?

What kind of oversight is needed? Should decisions this profound be left to market mechanisms, or do they require a fundamentally different kind of societal process?

How do we ensure diverse perspectives? The current development process involves remarkably homogeneous groups making decisions that will affect incredibly diverse populations.

What are our red lines? At what point would we want to slow down or change course? Who decides when we've reached that point?

How do we maintain human agency? As these systems become more capable, how do we ensure they augment rather than replace human judgment and decision-making?

The Democratic Deficit

What we're witnessing is a profound mismatch between the global impact of these technologies and the tiny number of people controlling their development. We're making species-level decisions through corporate strategy, not democratic deliberation.

This isn't an argument against AI development. These technologies offer tremendous potential benefits. But the current approach essentially says: "Let's build increasingly powerful cognitive systems, deploy them globally, and figure out the implications afterward."

Given the stakes involved - potentially including the creation of new forms of consciousness - don't we deserve a better process?

What We Can Do

The first step is awareness. Most people interacting with AI systems don't realize the extent to which their design reflects specific choices made by specific people. Understanding this is crucial for thinking clearly about governance.

The second is asking better questions. Instead of debating whether AI will be good or bad, we should focus on who gets to make decisions about AI development and whether those decision-making processes are adequate.

The third is demanding transparency. AI companies should be required to disclose more about their training processes, safety measures, and decision-making frameworks. The public deserves to understand how these systems are built and what tradeoffs are being made.

Finally, we need new institutions. The current regulatory framework was designed for a world where information technologies were tools, not entities capable of sophisticated reasoning and, potentially, consciousness. We need governance structures that can handle the philosophical and practical challenges these technologies present.

The Time to Act Is Now

The AI systems being deployed today are already shaping how millions of people access information and make decisions. The next generation will be more capable and more influential. If we wait until these technologies are fully mature to have conversations about governance and oversight, we'll have missed our opportunity to shape their development.

The people building these systems aren't necessarily wrong to move quickly - the potential benefits are enormous, and the competitive pressures are real. But we can't let speed and competition prevent us from asking fundamental questions about what we're building and whether we're building it in the right way.

We spend endless hours debating sports statistics and reality TV outcomes. Surely we can spare some attention for decisions that might determine how human knowledge and communication work for generations to come.

The question isn't whether these technologies will be developed - they will be. The question is whether we'll have any say in how they're developed, or whether we'll simply accept whatever decisions a handful of engineers and executives make for us.

In a democracy, that choice should be ours to make.

The author is an independent researcher interested in technology governance and democratic participation in technological development.
