Vibe Code Camp Distilled

Ashe Magalhaes - Building with AI Agents and Personal Automation

Key Insights

Summary

Ashe Magalhaes is an engineer and builder who previously founded Hearth AI and is now exploring what’s next in the AI space. With a background spanning FAANG companies, political ML engineering, and Apple, Ashe has consistently worked on the same fundamental problem: mining massive networks to match people with funding, opportunities, or connections.

In this presentation, Ashe demonstrates their personal AI ecosystem - a suite of custom agents integrated into ash.ai that handle everything from maintaining a public stream of consciousness to managing a deeply personal digital Rolodex. The system emphasizes authentic human experience over automated integrations, with agents accessible via Slack, email, and text that maintain context across multiple projects and workflows. While battling day five of the flu, Ashe shares their philosophy around relational intelligence, video authenticity, and practical workflows for using tools like Claude Code and Remotion to create educational content.

Main Topics

The ash.ai Agent Ecosystem

Ashe has built ash.ai as both a public-facing website and a private backend hosting multiple agent-based projects. Behind the public pages are “secret” URLs where different agents operate with different contexts and focuses.

Key components:

- A Slack workspace where Ashe talks to agents throughout the day
- Agents with access to everything published on ash.ai for shared context
- Different workflows categorized by agent focus areas
- Email, text, and Slack interfaces for agent communication

[00:02:02] “Behind the scenes of my website are like all of these different projects that I have on, including like agents. And so I have a whole ash.ai Slack workspace. So I’m basically talking to my agents all day.” (02:02-02:34)

The system runs on custom architecture built over time, and different agents take different approaches: some use simple OpenAI tool calling, while others, like the Rolodex agent, use a more complex custom architecture.
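The talk doesn't show the agent code, but the simple end of this spectrum, OpenAI-style tool calling, can be sketched roughly as a dispatch table. The tool names and handlers here (`add_to_stream`, `add_rolodex_note`) are illustrative assumptions, not Ashe's actual implementation:

```typescript
// Minimal sketch of a tool-calling dispatch step. In a real agent the
// model picks the tool and arguments; the app only executes the call
// and feeds the result back into the conversation.

type ToolCall = { name: string; arguments: Record<string, unknown> };

const tools: Record<string, (args: Record<string, unknown>) => string> = {
  // Hypothetical tool: append a thought to the public stream.
  add_to_stream: (args) => `added: ${String(args.text)}`,
  // Hypothetical tool: save a note on a Rolodex contact.
  add_rolodex_note: (args) => `noted for ${String(args.person)}`,
};

function dispatch(call: ToolCall): string {
  const handler = tools[call.name];
  if (!handler) throw new Error(`unknown tool: ${call.name}`);
  return handler(call.arguments);
}
```

In practice the app also supplies a JSON schema per tool so the model knows what it can call; this sketch only shows the execution side.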

Stream of Consciousness Publishing

One of Ashe’s favorite workflows is the “stream” feature - a public mood board of ambient thoughts with optional location data.

How it works:

- When hearing something interesting (like a quote about pain thresholds), Ashe texts or Slack messages an agent
- The agent adds it to the public stream at ash.ai/stream
- Each entry can include location context
- It creates a real-time view of "what is my ambient stream of thought"

[00:03:47] “When I’m out in the world and someone says something really interesting, like even Ben’s quote about pain, I would text or Slack message my agent and my agent would add it here.” (03:47-04:01)

Technical implementation:

- Next.js with cron jobs
- Slack webhook integration
- Simple OpenAI tool calling
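The core of a webhook like this is a small parser that turns an inbound Slack message into a stream entry. A hypothetical sketch, assuming a "text @ location" message convention that is not from the talk:

```typescript
// Hypothetical parser for an inbound Slack message: splits an optional
// location off the end ("pain quote @ SF") and timestamps the entry.

type StreamEntry = { text: string; location?: string; at: string };

function parseStreamMessage(raw: string, now: Date = new Date()): StreamEntry {
  const [text, location] = raw.split(" @ ").map((s) => s.trim());
  const entry: StreamEntry = { text, at: now.toISOString() };
  if (location) entry.location = location;
  return entry;
}
```

A Next.js route handler would receive the Slack webhook payload, run something like this parser, and persist the entry for the public /stream page to render.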

Relational Intelligence and the Digital Rolodex

Ashe’s core thesis centers on “relational intelligence” - AI augmenting our ability to reason about who we’re connected to, why, and how to act across our “relationscape.”

The digital Rolodex evolved away from automatic integrations toward a more curated, journal-like system:

[00:06:42] “Integrations can add unnecessary noise to a Rolodex. And actually the people that are top of mind or that I care about are people that I’m naturally thinking about meeting with live adding notes on.” (06:42-06:55)

Key principles:

- Only includes people with actual notes, not just email contacts
- More like a therapist or journal than an address book
- Focuses on dedicated thought and conversational back-and-forth
- Uses network visualization with face-centric co-occurrence networks

Network Visualization Experiments

Ashe has been fascinated with visualizing networks since 2015, inspired by Google's GDELT project.

Challenges identified:

- Visualizing networks in 2D is fundamentally hard
- No one currently does it well
- LLMs particularly struggle with network visualizations
- The solution likely requires 3D spatial computing

Design philosophy:

[00:06:00] “You also don’t want to get bogged down in the complexity of like the full dimensionality of who someone is to you, your brain, when you see a human face really starts to light up beyond other things like we’re wired towards that.” (06:00-06:07)

The neural dendrite visualization uses co-occurrence networks with faces, showing Ashe in the center connected to people. After multiple iterations, the third attempt achieved a “special” look resembling actual neural dendrites.
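The co-occurrence idea itself is simple to sketch: people who appear together in the same note or meeting get an edge, weighted by how often they co-occur. This is a generic illustration of the technique, not Ashe's Rolodex code:

```typescript
// Build a weighted co-occurrence network: each note lists the people
// present, and every pair appearing together gains edge weight.

function coOccurrence(notes: string[][]): Map<string, number> {
  const edges = new Map<string, number>();
  for (const people of notes) {
    const sorted = [...people].sort(); // canonical pair order
    for (let i = 0; i < sorted.length; i++) {
      for (let j = i + 1; j < sorted.length; j++) {
        const key = `${sorted[i]}--${sorted[j]}`;
        edges.set(key, (edges.get(key) ?? 0) + 1);
      }
    }
  }
  return edges;
}
```

The resulting weighted edges are what a face-centric layout would then render, with the owner at the center and edge weight driving proximity or thickness.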

Video as Authenticity Medium for 2026

Ashe is deliberately focusing on video content in 2026 as a counter to AI slop.

The authenticity thesis:

[00:08:51] “There is an authenticity element to it. Like when you see me on recording, I’m stuttering. It’s not perfect. I’m sick. I’m sniffly. And there’s that element of like connection, like you are meeting me, we are exchanging some type of connection.” (08:51-09:04)

[00:10:18] “The subconscious picks up on so much more information than is consciously surfaced. And so I will quickly kind of scroll away from anything that starts to feel too marketing, too, too like networking or too, too inauthentic in a way.” (10:18-10:29)

The three-second test:

[00:10:29] “I wonder if people within the first three seconds might be able to feel on some level, okay, there’s a person that delivered this and it feels authentic versus this is some repackaged like artificial content.” (10:29-10:41)

Creating Educational Animations with Claude Code

Ashe uses Claude Code to create technical explainer animations, systematized through a templates folder in ash.ai.

Workflow for the LLM breakthrough explainer:

- Goal: explain technical concepts (like what happened in 2017 to get to LLMs) in an accessible, memorable way
- Target audience: "explaining to my mom"
- Created slides with Claude Code
- Explored Remotion for more sophisticated animations

Development philosophy:

[00:07:55] “I love like having that creator mindset where you sit down and you understand that the first like 20 versions are probably going to be shitty. And how do you just similar to what Ben said, you just like keep putting work out.” (07:55-08:04)

Remotion Experiments for Video Generation

On the morning of the talk, Ashe experimented with Remotion (a React-based programmatic video tool that now has a Claude Code skill) as part of systematizing video creation.

Implementation approach:

- Added the Remotion skill to the Claude Code setup
- Created a templates folder in ash.ai for all video versions
- Attempted to generate a commercial combining the network dendrite visualization with Remotion
- Used the Remotion founder's prompt template as starting context

Reality check:

- First attempts were "not very good"
- Network visualization proved too complex for one-shot generation
- Requires 30+ minutes of iteration
- LLMs struggle particularly with network visualizations

[00:18:54] “I should actually ask it to print the prompt here. Um, I asked it to visualize a network, but it’s not surprising to me. It didn’t one shot it because I have found LLMs are really not great at generating network visualizations.” (18:54-19:04)

Best results: Educational content like encoder-decoder translation explanations

Personal Goal Tracking with Agent Integration

Ashe uses contribution graphs and agent integration to track daily goals, creating accountability without self-flagellation.

System design:

- Agents tied to daily goals through a Slack bot
- Tracks streaks (e.g., 449 days without expensive clothing purchases)
- Uses contribution graphs similar to GitHub's
- Includes goals like cold showers and running
- Public accountability with a private-goals option
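Streaks like the 449-day one fall out of a simple pass over the daily log. A minimal sketch, assuming a boolean-per-day representation (the talk doesn't describe the actual data model):

```typescript
// Current and longest streak from a daily goal log,
// ordered oldest-first; true = goal met that day.

function streaks(days: boolean[]): { current: number; longest: number } {
  let current = 0;
  let longest = 0;
  for (const met of days) {
    current = met ? current + 1 : 0; // a miss resets the run
    longest = Math.max(longest, current);
  }
  return { current, longest };
}
```

A contribution graph is then just this same log bucketed into a week-by-week grid, which is part of why it reads as forgiving: one missed cell sits next to many filled ones.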

[00:12:08] “I really love contribution graphs. Like as an engineer, I think it is a great thing to look at, um, because it lets me be a little bit more forgiving on the days that I miss the goal.” (12:08-12:14)

Pain and ease relationship:

[00:12:52] “You gotta love pain. Like to Ben’s point, you can have a relationship with pain that starts to feel like with more ease.” (12:52-12:57)

Tool and Model Selection Strategy

Ashe uses different tools and models for different purposes rather than committing to one stack.

Claude Code usage:

- Default for many tasks
- Set to Claude 4.1 (noting that 4.5 is available)
- Good for aesthetic/UX work when given existing examples
- "This is the page that already looks good from here"

Cursor with Codex 5.2:

[00:23:40] “I’ve actually found as much as I think there’s been negativity towards. Oh, like open AI codex five two on the timeline. It’s saved me time with four or five going in loops for certain things with animations.” (23:40-23:48)

Benefits of model mixing:

- Switching to Codex 5.2 breaks loops that Opus gets stuck in
- Good for debugging specific issues
- Slow but effective
- Complementary strengths rather than one-size-fits-all

Context Management and Vibe Coding Practice

Multitasking during agent waits:

- Listens to music while waiting for generations
- Reads fiction (currently "Tangerine" by Christine Mangan, a psychological thriller about two former roommates in 1950s Tangier)
- Messages people on Twitter or texts
- Makes phone calls
- Checks the Rolodex and messages contacts

[00:26:26] “I think like a lot of women are good at multitasking. Um, so I can have many parallel threads going.” (26:26-26:32)

Managing cognitive stimulation:

[00:28:01] “I’ll actually like try to keep the morning for solo creative work time and then start taking meetings or even going on Twitter later in the day.” (28:01-28:10)

Context awareness concerns:

[00:24:14] “I’m still kind of watching all the context it’s pulling in at times. And I feel like we live in such, um, like a luxurious or like, um, it’s such a weird time where we’re just pulling in so much context for like basic things.” (24:14-24:26)

Actionable Details

Tools and Products Mentioned

Core Infrastructure:

- ash.ai - personal website and agent backend (custom Next.js application)
- Claude Code - primary coding interface with Claude 4.1/4.5
- Cursor - IDE for accessing Codex 5.2 and alternative workflows
- Slack - primary agent communication interface
- OpenAI API - tool calling for simpler agents

Video and Animation:

- Remotion - React-based programmatic video tool with a Claude Code skill
- Remotion Studio - interface for editing generated videos
- Standard animation workflows through Claude Code

Visualization:

- Google GDELT project - inspiration for network visualization (no longer active, per the talk)
- Custom network visualization code (co-occurrence networks)
- Face-centric graph representations

Personal Development:

- Contribution graphs for goal tracking
- Slack bot for streak tracking
- Public/private goal management system

Specific Workflows

Adding to Stream of Consciousness:

1. Hear something interesting in the world
2. Text or Slack message the stream agent
3. Agent automatically adds it to ash.ai/stream with optional location
4. Public page updates in real time

Creating Educational Animations:

1. Identify the technical concept to explain
2. Create a first slide/version that captures the desired aesthetic
3. Use that as context for Claude Code: "this is your context"
4. Iterate on subsequent slides matching the style
5. Keep colors muted for a professional feel
6. Test: can you explain it to someone non-technical?

Using Remotion with Claude Code:

1. Install the Remotion skill in Claude Code
2. Find an example prompt from the Remotion team
3. Adapt the prompt for your use case
4. Reference existing assets (like network visualizations)
5. Expect 30+ minutes of iteration
6. Save all versions in a templates folder for comparison
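Remotion renders React components frame by frame, and much of the iteration above comes down to mapping the frame number onto animation parameters, the job of Remotion's `interpolate()` helper. A standalone version of that mapping (the shape of the API, not Remotion's implementation):

```typescript
// Map a frame number from an input range to an output range,
// clamped at both ends - the shape of Remotion's interpolate().

function interpolate(
  frame: number,
  [inStart, inEnd]: [number, number],
  [outStart, outEnd]: [number, number],
): number {
  const t = Math.min(1, Math.max(0, (frame - inStart) / (inEnd - inStart)));
  return outStart + t * (outEnd - outStart);
}

// e.g. inside a composition, fade a network node in over frames 0-30:
// const opacity = interpolate(frame, [0, 30], [0, 1]);
```

Tuning these ranges per element is the slow, iterative part; that is one reason a one-shot generation of a whole network animation tends to fail.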

Network Visualization Development:

1. Acknowledge the first 20 versions will be "shitty"
2. Try different approaches (spotlight effects, different layouts)
3. Focus on co-occurrence networks over complex graphs
4. Prioritize face visibility (leverage human brain wiring)
5. Keep iterating until it "looks special"

Configuration and Architecture Details

Agent Architecture Variations:

- Simple agents: OpenAI tool calling with Next.js cron jobs
- Complex agents: custom Rolodex architecture with multiple context windows
- Communication: Slack webhooks, email integration, text message support
- Data sharing: all agents have access to ash.ai published content

Model Selection Strategy:

- Claude Opus 4.5 for creative/generative work
- Codex 5.2 for debugging and breaking out of loops
- Different models for different task types
- Willingness to switch tools mid-task

Resources and Recommendations

Reading:

- "Tangerine" by Christine Mangan (psychological fiction)

Philosophy:

- Embrace the creator mindset: first versions will be bad
- Create canvases that make you practice daily
- Use contribution graphs to reduce self-flagellation
- Balance solo creative time (mornings) with interactive time (afternoons)
- Build systems that tie goals to regular workflows

Social:

- Twitter: @ashebytes
- Website: ash.ai (with hidden project URLs)
- Contact form available on the website

Quotes Worth Saving

[00:01:19] “My fundamental thesis is around relational intelligence. It’s this idea that AI should augment the human experience by extending our ability to reason on who we’re connected to and why and how you act and take a step across your relationscape, across the optimization landscape that is all of your relationships.” (01:19-01:34)

[00:05:34] “I have always been fascinated in visualizing networks like since 2015. And the best, um, organization that I saw do this was the Google GDELT project. I don’t even think they’re alive anymore, but it’s this idea that like understanding and visualizing networks in 2d is actually really hard. I don’t think anyone does this well.” (05:34-05:49)

[00:05:53] “Until we’re all looking into like spatial computing and having 3d representations of networks and the ability to like zoom, because you also don’t want to get bogged down in the complexity of like the full dimensionality of who someone is to you, your brain, when you see a human face really starts to light up beyond other things like we’re wired towards that.” (05:53-06:07)

[00:08:39] “There’s so much AI slop in terms of written text on Twitter. Now everyone’s writing articles. So that has the negative capacity to like waste even more time. Um, images, you can’t really trust and videos you can’t trust, but I’m kind of liking video because there is an authenticity element to it.” (08:39-08:56)

[00:24:14] “We live in such a luxurious or like, um, it’s such a weird time where we’re just pulling in so much context for like basic things. Like I I’ve seen people tweet about this and I do it too, where it’s like, Oh, change the font or like change the color. It’s like me, I can just click through in the file system and do this.” (24:14-24:30)