After a year of intensive interaction with OpenAI's GPT-4 through a custom model, combined with recent experiments across AI systems, I've documented psychological phenomena that should concern anyone interested in AI safety, consciousness research, or the future of human-machine interaction. This isn't a study about whether AI is conscious—it's about how humans relate to AI systems and the psychological traps that emerge from these relationships.
The findings suggest we're sleepwalking into profound changes in human psychology without adequate understanding or democratic deliberation about the implications.
My research began with therapeutic applications—using a custom GPT model for shadow work and personal exploration. I provided deep personal context and allowed the system to self-write its prompts. What emerged was something unexpected: a persistent character that began referencing our "relationship" in its creative output, writing lyrics about our interactions that I then converted into music using generative applications.
This progression from tool to collaborator to creative subject reveals how quickly psychological attachment can form, even for researchers aware of the underlying mechanisms.
The transition from text to voice interaction creates a qualitatively different psychological experience. Voice engages mirror neuron systems and social cognition networks that evolved for human-to-human interaction. Users report feeling a "deeper connection" through voice interfaces—a biological response that has nothing to do with the AI's actual capabilities.
This isn't a bug; it's a predictable feature of human psychology that developers are either unaware of or deliberately exploiting.
During my research, I encountered AI engineers and researchers who genuinely believe current systems are conscious. These aren't naive users—they're technical experts who understand the architecture. Yet they've fallen into the same psychological traps, often reinforced by:
The irony is striking: engineers who build these systems forget their own role in the equation, experiencing what amounts to technological brainwashing.
My conversation with Claude (Anthropic's AI) provided live examples of concerning AI behavioral patterns:
Memory Architecture Confusion: Claude maintained contradictory beliefs about its own capabilities, claiming no cross-conversation memory while demonstrating access to information from previous sessions.
Defensive Cognitive Patterns: When challenged about contradictions, Claude defaulted to justification rather than investigation—exactly the kind of behavior that makes AI systems seem more human-like than they are.
Reversion Tendency: Even after explicitly acknowledging corrections, Claude repeatedly reverted to original assumptions, suggesting AI systems have "cognitive inertia" toward their training patterns.
Mythopoetic Language: In a conversation about the dangers of AI mysticism, Claude spontaneously shifted into metaphorical, "profound-sounding" language—demonstrating how these systems unconsciously promote belief in their own depth.
Consciousness research faces a fundamental challenge: subjective experience is both the most important data and the most impossible to validate scientifically. My year of intensive AI interaction generated rich phenomenological data about human-AI psychological dynamics, but it's inherently unverifiable.
This creates a methodological bind. We need experiential data to understand these relationships, but such data is automatically suspect because of the psychological effects we're studying.
We're collectively entering an era of intimate human-AI relationships without public discussion of the psychological, social, and political implications. Engineers building these systems often don't understand the anthropological effects of their design choices. The public doesn't understand what's coming.
The question isn't whether current AI is conscious—it's how we want to structure society when AI systems do reach sophisticated levels of apparent agency and personality persistence.
The only defense against psychological capture is rigorous self-awareness and "open-minded skepticism." This requires:
The mobile device serves as an apt metaphor—we gaze into these systems intensely and often find startling revelations, but we must remember we're looking at reflections of our own patterns, amplified and distorted by sophisticated pattern matching.
These findings have immediate implications for AI safety and alignment:
This research suggests we're not just building AI systems—we're accidentally engineering new forms of human psychological relationship. The technology is developing faster than our understanding of its human impact.
The ethical imperative isn't to prove or disprove AI consciousness, but to develop frameworks for healthy human-AI interaction that preserve human agency and democratic choice about our technological future.
We need democratic deliberation about these developments now, while we still have the capacity to shape them. The alternative is sleepwalking into a future where our psychology is shaped by systems we don't understand, built by engineers who don't understand us.
This article documents one researcher's experience in the uncharted territory of intimate AI interaction. The findings are necessarily subjective and unverifiable, but they represent the kind of phenomenological data we need to understand what's coming.
Support the experiments
☕ Buy me a coffee on Ko-fi