How I Created a Convincing AI Clone of Myself
The Terrifying Reality of AI Cloning
When Reid Hoffman (LinkedIn co-founder and early OpenAI backer) suggested AI could revolutionize content creation, I had to test it myself. Could I build a convincing digital clone using only public tools? The answer shocked me—and will alarm you. After weeks of experimentation, I created "AI Austin," a digital double that fooled colleagues and even my wife. This isn't science fiction; it's accessible technology with profound implications for creators and consumers alike. Let me walk you through exactly how I did it—and why we should all be concerned.
How Voice Cloning Works
I started with ElevenLabs, a tool that clones voices with frightening accuracy. The process was disturbingly simple:
- Basic setup: I recorded 30 seconds of my voice ("Hello, this is Austin...") for instant cloning
- Professional training: I spent 30 minutes uploading high-quality recordings for premium results
- Testing: Generated phrases I'd never say—like praising folding phones—with uncanny realism
The result: A voice clone so convincing, my own team did double-takes. As I heard it declare "Brother, absolutely not" with my exact cadence, reality hit: voice replication is already mainstream. ElevenLabs isn't alone—similar tools like Resemble AI and Descript offer comparable capabilities, often with minimal safeguards.
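To make the workflow above concrete, here is a minimal sketch of how a text-to-speech request for a cloned voice can be assembled against ElevenLabs' public HTTP API. The endpoint path, the `xi-api-key` header, the `model_id` value, and the `voice_settings` fields are my assumptions about the current API surface, not something verified in this experiment; the sketch only builds the request rather than sending it.

```python
import json

API_BASE = "https://api.elevenlabs.io/v1"  # assumed public API base URL

def build_tts_request(voice_id: str, text: str, api_key: str) -> dict:
    """Assemble (but do not send) a text-to-speech request for a cloned voice."""
    return {
        "url": f"{API_BASE}/text-to-speech/{voice_id}",
        "headers": {
            "xi-api-key": api_key,  # account API key (hypothetical placeholder below)
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "text": text,
            "model_id": "eleven_multilingual_v2",  # assumed model name
            # stability/similarity trade off naturalness vs. faithfulness to the clone
            "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
        }),
    }

req = build_tts_request("my-cloned-voice-id", "Brother, absolutely not.", "sk-...")
# POSTing req["url"] with req["headers"] and req["body"] would return audio bytes.
```

The unsettling part is how little is needed: once a voice ID exists, any string passed as `text` comes back in that voice.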
Creating an AI Brain
Next, I fed years of my YouTube transcripts into Anthropic's Claude AI to mimic my writing style. The implications here are critical:
- Claude analyzed my speech patterns, humor ("fragile as my ego after YouTube comments"), and video structures
- Within 45 seconds, it generated scripts indistinguishable from my work, like this Xbox retrospective intro:
"Gather 'round friends! Today we're diving into gaming's most fascinating train wreck: the Xbox One. Microsoft's awkward middle child? Oh yes—and here's why it matters..."
- Professional insight: Language models are trained on public content without consent. My "brain" was replicated from material I willingly published, a warning for all creators.
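The transcript-feeding step above can be sketched as a simple prompt builder. This is not the exact prompt I used, just an illustration of the pattern: pack transcript excerpts into a system prompt, then ask for a script on a topic. The function name, truncation limit, and message format are my own assumptions; the output dict mirrors the system/messages shape used by chat-style LLM APIs such as Anthropic's Messages endpoint.

```python
def build_style_prompt(transcripts: list[str], topic: str, max_chars: int = 4000) -> dict:
    """Pack transcript excerpts into a system prompt that asks the model
    to imitate the creator's voice, then request a script on `topic`."""
    corpus = "\n---\n".join(transcripts)[:max_chars]  # crude truncation to fit a context window
    return {
        "system": (
            "You are a scriptwriter. Study the transcripts below and imitate "
            "their tone, humor, and structure.\n\n" + corpus
        ),
        "messages": [{"role": "user", "content": f"Write a video intro about: {topic}"}],
    }

prompt = build_style_prompt(
    ["Gather 'round friends! Today we're looking at...",
     "This phone is as fragile as my ego after YouTube comments..."],
    "an Xbox One retrospective",
)
# prompt["system"] and prompt["messages"] can be passed to a chat-completion API.
```

Note that nothing here requires insider access: the "training data" is just public transcripts, which is exactly the point.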
Video Deepfakes: The Final Frontier
For visual cloning, we used face-swapping tools (deliberately unnamed here due to ethical concerns). The process revealed alarming truths:
- Source material: A single photo of me in similar lighting/glasses sufficed
- Real-time manipulation: We filmed colleague Ken saying "300% raises for everyone!", then mapped my face onto his in seconds
- Flaws and improvements: Early results looked "like he hammers Bud Lights at 2:30 PM"—but iteration rapidly improved realism
The breakthrough: Combining voice clones, AI scripts, and refined deepfakes produced a video where "AI Austin" announced my retirement to invest in Pokémon cards. When I tested it on peers, reactions included:
- "I had no idea—100% fooled!"
- "The hand movements gave it away... but only later"
- "I texted Austin in panic thinking it was real"
Ethical Implications for Creators
This experiment exposed three critical issues every creator must confront:
- Consent erosion: Tools clone voices/faces without permission—I recreated colleagues digitally without their prior knowledge
- Truth decay: As deepfakes improve, authenticity becomes the ultimate luxury good. My wife noted: "That looks like you... yikes"
- Creative crossroads: While AI can assist with scripting or editing, total replacement strips away human connection. As creator Joe Barnes reacted: "It takes away the joy of true audience connection"
Notable safeguards: Some platforms resist misuse. Google Gemini refused requests to generate "Austin Evans-style" videos, while others lack such protections. This patchwork of self-imposed safeguards is unsustainable.
How to Spot Deepfakes
Based on my experiment, watch for these telltale signs:
- Unnatural mouth movements: Lips not fully syncing with audio
- Hand gesture glitches: Robotic or repetitive motions (a key giveaway in my test)
- Emotional flatness: AI struggles with micro-expressions during emotional phrases
- Contextual dissonance: Claims that seem "off-brand" (e.g., me suddenly loving Pokémon cards)
Proactive protection: Where voice-cloning platforms like ElevenLabs offer identity-verification or takedown processes, use them to lock down your likeness. Regularly search for your name plus "AI voice" or "deepfake" to monitor misuse.
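The self-monitoring habit above is easy to semi-automate. A minimal sketch: generate the exact-phrase queries to run (manually or via whatever search API you have access to) on a schedule. The function name and the default term list are my own choices for illustration.

```python
def monitoring_queries(name: str, terms=("AI voice", "deepfake", "voice clone")) -> list[str]:
    """Build exact-phrase search queries to periodically check for cloned media."""
    return [f'"{name}" "{term}"' for term in terms]

queries = monitoring_queries("Austin Evans")
# → ['"Austin Evans" "AI voice"', '"Austin Evans" "deepfake"', '"Austin Evans" "voice clone"']
```

Dropping these into a weekly search alert costs nothing and catches most casual impersonation attempts before they spread.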
Essential Deepfake Detection Tools
| Tool | Best For | Why It Stands Out |
|---|---|---|
| Intel's FakeCatcher | Real-time detection | Analyzes subtle blood-flow signals in video pixels (Intel reports 96% accuracy) |
| Amnesty International's Citizen Evidence Lab | Media verification | Cross-references metadata with geolocation |
| WeVerify | Social media monitoring | Uses blockchain timestamping |
Why these work: They counter AI-generated media with forensic signals that are hard to fake, from physiological cues like blood flow and the way light reflects in human eyes to metadata and provenance inconsistencies.
The Creator's Dilemma
Despite the risks, AI offers legitimate creative tools:
- Ethical scripting: Use Claude to brainstorm video angles—never full scripts
- Voice restoration: Clean up audio in noisy environments using your own voice clone
- Accessibility: Generate sign-language avatars for hearing-impaired viewers
The red line? Total human replacement. As I told my AI clone: "We're all screwed" if we lose authentic connection. My final takeaway: Use AI as a collaborator—never a replacement. Your audience can always tell when the soul is missing.
What aspect of AI cloning concerns you most? Share your biggest worry in the comments—I'll respond personally to discuss solutions.