CJ Hess - Visual Planning with Flowy and 10X Engineering
Key Insights
- Story point compensation changes everything: Getting paid per story point instead of per hour fundamentally transforms engineering incentives - the faster you ship quality features using AI, the more money you make, creating perfect alignment between AI adoption and compensation.
- Monorepos are the secret to AI productivity: Working across front-end and back-end in a single repository allows Claude to build complete data flows in one shot, while split codebases require constant context switching and explanations that kill momentum.
- Visual planning beats text prompts: Creating flow diagrams and UI mockups as JSON files that Claude can both read and generate provides far better fidelity than ASCII diagrams or lengthy text descriptions - “I got tired of looking at these ASCII boxes.”
- Voice transcription removes the laziness tax: When typing, you unconsciously abbreviate prompts to save effort, but when speaking you naturally provide more detail and context, resulting in better AI outputs with less iteration.
- Skills over MCPs for context control: Skills and CLIs often provide the same capabilities as MCPs but with dramatically less context bloat, making them preferable for production work where precision matters.
Summary
CJ Hess is an engineer at 10X, a consulting firm with a unique business model: engineers get paid by story points rather than hourly rates, creating direct incentives to leverage AI for maximum productivity. CJ has a background in mobile development from Carnegie Mellon and previously worked as a contractor before joining 10X, where the compensation model aligned perfectly with his experience of getting more done with AI tools but being paid less on hourly billing.
In this session, CJ demonstrates Flowy, a weekend project he built that functions as a “local Figma” - a visual planning tool that saves flow diagrams and UI mockups as JSON files in your IDE. This allows Claude to work with precise visual specifications rather than relying on text descriptions or ASCII diagrams. He uses it to plan and build features in a demo app, showing how visual planning accelerates the development workflow from concept to implementation. The approach combines plan mode in Claude Code with visual diagrams to create a more designer-friendly development process.
Main Topics
The 10X Business Model
10X operates with two service lines: AI transformation consulting (helping businesses find opportunities to integrate AI workflows) and development services. What makes it unique is the compensation model for engineers.
How it works: Engineers are paid per story point rather than per hour. Each sprint, the team scopes work into story points, and it’s up to the engineers to build it. As tools improve and engineers get better at using them, their capacity expands, allowing them to take on more work across different domains.
The incentive shift: Traditional hourly billing creates perverse incentives - you’re rewarded for taking longer. The story point model reverses this: “We really focus on the output side. It’s like, hey, this sprint, what feature can we ship and how can we scope that to fit within this? And then how can we knock it out of the park? And everyone’s incentivized to do better on that output as opposed to, you know, take their time and just rack up more hours.” (00:02:44)
Monorepos as the Foundation
CJ is adamant about monorepos being essential for AI-assisted development. The traditional approach of separate front-end, back-end, and DevOps teams in different codebases creates coordination bottlenecks.
The problem with split codebases: “If you’re running Claude just within the front end, you know, you’re going to spend half your time explaining what data is coming in, it might not be right, you know, it’s going to be a lot of back and forth.” (00:03:43)
The monorepo advantage: “When you’re just sitting in the monorepo, you can kind of build the whole data flow. You know, you can say, I need this new feature. It’s going to store this data. You’re going to pull this data. We’re going to start at the database, build it up through the back end, and then we’re going to build the front end for it. And that just lets us move so much faster.” (00:03:53)
Working with client codebases: For existing clients without monorepos, they create “Frankenstein monorepos” by running Claude in a higher directory that encompasses both front-end and back-end, allowing cross-referencing even if the repos aren’t truly unified.
Flowy - Visual Planning for AI Development
Flowy is a local web app that functions like a lightweight Figma, but saves everything as JSON files that live in your project directory. Claude can both read these files to understand visual designs and generate new diagrams based on prompts.
What it replaces: Instead of writing long text descriptions or looking at ASCII diagrams in markdown files, Flowy provides a visual canvas for:
- Flow charts showing navigation and state transitions
- UI mockups showing layout and components
- System diagrams for architecture
The workflow: CJ creates markdown plans that reference Flowy diagrams. The first steps often involve creating the visual diagrams, then subsequent steps use those diagrams as specifications. “I’ll make the markdown plan and kind of the first few steps are to make these diagrams.” (00:08:20)
Technical implementation: The JSON files contain basic shape information - IDs, positions, labels, connections. CJ notes this works much better with newer models: “I don’t think Sonnet 4 would do this great, for example, just because of all the spatial reasoning it has to do about where different things live in the diagram.” (00:11:17)
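The session doesn't show Flowy's actual schema, but a diagram file along these lines presumably reduces to nodes (shapes with IDs, positions, labels) and connections between them. A minimal sketch of reading such a file - the `nodes`/`edges` field names and structure here are assumptions for illustration, not Flowy's real format:

```python
import json

# Hypothetical Flowy-style diagram file: shapes with ids, positions,
# and labels, plus labeled connections. Field names are illustrative.
diagram_json = """
{
  "nodes": [
    {"id": "start", "label": "Learn tab", "x": 0, "y": 0},
    {"id": "quiz", "label": "Quiz page", "x": 200, "y": 0}
  ],
  "edges": [
    {"from": "start", "to": "quiz", "label": "tap quiz card"}
  ]
}
"""

def describe(diagram: dict) -> list[str]:
    """Render each connection as 'A -> B (label)' so the flow can be
    sanity-checked in text, without the visual canvas."""
    labels = {n["id"]: n["label"] for n in diagram["nodes"]}
    return [
        f'{labels[e["from"]]} -> {labels[e["to"]]} ({e["label"]})'
        for e in diagram["edges"]
    ]

print(describe(json.loads(diagram_json)))
```

Because the file is plain JSON sitting in the project directory, both Claude and the Flowy canvas can read and write the same source of truth, which is the fidelity advantage CJ describes over ASCII diagrams.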
Built with Ralph Loop: Flowy itself was created using Ralph Loop on a weekend: “This was me experimenting with a Ralph loop on the weekend.” (00:10:57)
Voice-to-Text for Prompting
CJ recently converted to using Whisperflow for voice transcription after being “a hater of all the transcription type tools for a while.”
The speed advantage: “Just being able to kind of ramble and talk much faster than you can type makes it so much quicker to interact with Claude. I didn’t realize how much of my time was truly just writing prompts.” (00:15:40)
The detail advantage: “When I’m typing, there’s some inherent laziness where I don’t want to spend the time to type out this perfectly exact prompt. I’m fine if it infers a couple of things. But if I’m just rambling, I’m way more specific. I’m way more detailed. I’ll give it context that might even not be related from what I think. But it actually helps the model solve the problem.” (00:15:51)
Office vs. weekend usage: “When we’re here in the office, I don’t do it as much. But absolutely, Claude on the weekends, I’m only talking.” (00:16:14)
Live Demo: Building a Quiz Feature
CJ demonstrated the full workflow by building a quiz feature for a Claude Code guide app:
Step 1 - Creating the plan: Started with a markdown plan outlining a quiz feature, specifying that Flowy diagrams should be created for:
- Navigation flow (how users move through the app to access the quiz)
- Gameplay flow (the quiz state machine)
- UI mockups (what each screen looks like)
Step 2 - Generating diagrams: Prompted Claude to “use the flowy skill and create the diagrams and mockups” (00:12:12). Claude generated:
- Navigation flow showing main app → learn tab → quiz card → quiz pages
- Gameplay flow showing question states, correct/incorrect branches, and completion
- UI mockups for each screen
Step 3 - Manual refinement: Made edits directly in the Flowy web interface, like removing a box from the mockup, then asked Claude “I made some changes to the mockup. What did I change?” to confirm understanding.
Step 4 - Implementation: Simply told Claude “based on the flowcharts and the plan, build it” (00:18:02) and it generated the full feature.
Debugging: When the initial implementation had a navigation bug, CJ’s approach was to reload first (checking for hot reload issues), then give Claude the error with specific instructions: “This error is happening when I press get started. Reference the original plans. Did we follow them correctly? If so, retrace the navigation flowchart and the code and tell me what is causing this error in the chat.” The “in the chat” part is key: “I often like to add an ‘in the chat’ to have almost a side conversation. Claude is historically eager and just giving it that prompt without that, it’d probably start writing a bunch of code.” (00:25:05)
Skills vs. MCPs
CJ has a strong preference for Claude Code skills over Model Context Protocol (MCP) servers.
Why avoid MCPs: “I’m anti-MCP. I find that skills and CLIs are often just as capable and also kind of give you way less context bloat. I’m liking the skill paradigm more and more because of that.” (00:21:50)
Flowy’s skills: Created two separate skills - one for flow charts and one for UI mockups. Each contains:
- How Flowy works
- How layouts and JSON structure work
- Specific goals for that diagram type
The Figma MCP limitation: Even though Figma has an MCP, it’s better for writing than reading: “That MCP is good, from what I’ve found, for writing. So it’ll kind of make a design really well. But if I need that to come into Claude and be read, I’ve struggled to have it kind of do a one-to-one match. So I almost want the specificity of that full JSON file to actually build this out in the project I’m working on.” (00:22:13)
Planning Workflow and LLM Chaining
CJ uses a sophisticated planning process that chains multiple LLM calls.
Dictation to structure: “A lot of the times I’ll use Whisperflow or something to transcribe or dictate what I’m going to say. I’ll just ramble, you know, give it as much context as possible. And then I have a prompt that roughly creates a structured plan.” (00:14:07)
Plan validation: Goes even further than most developers: “There are definitely times where I even go a step further and I’m like, write out all the code file by file. Give me some diffs. And I almost want to do a code review before we’re actually touching the files.” (00:06:40)
Passing prompts through LLMs: “I do love the idea of almost passing prompts through an LLM. Like, in a way, that’s what we’re doing in plan mode, right? It’s, hey, I want to build this, go learn some more, and then write a big markdown file that I’m basically going to use as a prompt for another agent.” (00:20:07)
Ralph Loop - Where and When
CJ has experimented with Ralph but is selective about its use.
Good for side projects: “It’s almost like as a developer, I’ve become more of a designer… this Flowy implementation was, you know, a single prompt.” Used it to build Flowy itself over a weekend.
Not for production: “If it’s, you know, some production system, I’d be terrified of Ralph. I feel like I use a lot of that human in the loop control.” (00:26:18)
The abstraction problem: “It almost feels like a little bit of an abstraction. That’s great when I’m building something like Flowy where I’m not super opinionated and I’m kind of trying to get to an MVP on a side project. But if it’s, you know, some production system, I’d be terrified of Ralph.” (00:26:03)
Summary: “Perfect for Flowy on the weekend, not for a 10X client.” (00:26:25)
Actionable Details
Tools and Products Mentioned
- Claude Code: Primary development environment, used in plan mode for visual planning workflow
- Flowy: CJ’s weekend project - local visual planning tool that saves diagrams as JSON. Runs at localhost, accessible via browser
- Whisperflow: Voice transcription tool for dictating prompts
- Monologue (Every’s product): Voice-to-text tool with “Modes” feature that edits transcription based on target application (ChatGPT, Warp, Cursor). Has a notes feature for capturing ideas while hiking. iOS launch scheduled for February 9th
- Ralph Loop: Used for building side projects like Flowy, avoided for client work
Flowy Skills Setup
Created two separate skills in Claude Code:
1. Flowy Flowcharts Skill: Contains how flow charts work in Flowy, JSON structure, layout rules, and goals for flow diagrams
2. Flowy UI Mockups Skill: Contains how UI mockups work in Flowy, styling options, and goals for mockups
Both skills explain the JSON format and spatial reasoning required to work with Flowy diagrams.
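Claude Code skills live in the project as a directory with a SKILL.md whose frontmatter tells Claude when to load it. A hedged sketch of what the flowcharts skill's file could look like - the name, description, and rules below are invented for illustration, not CJ's actual skill:

```markdown
---
name: flowy-flowcharts
description: Create and edit Flowy flowchart JSON files for navigation
  flows and state machines
---

# Flowy Flowcharts

Flowy stores each diagram as a JSON file in the project directory.

## Layout rules (illustrative)
- Every node needs a unique id, a label, and an x/y position.
- Leave horizontal space between connected nodes so edges stay readable.

## Goals
- Model navigation and state transitions only; UI layout belongs to the
  separate UI mockups skill.
```

Splitting flowcharts and mockups into two skills keeps each one's instructions small, which is the context-bloat argument CJ makes against MCPs.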
Plan Mode Workflow
- Use voice transcription to dictate feature requirements and context
- Run transcription through LLM to create structured markdown plan
- First plan steps: Create Flowy diagrams (flowcharts, mockups, system diagrams)
- Reference diagrams in subsequent plan steps
- Optional: Have Claude generate code diffs in the plan before touching files
- Execute plan to implement
- Debug by referencing original plans and asking Claude to trace through flowcharts
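Put together, a plan file following this workflow might look like the sketch below; the file names and step wording are invented for illustration, not taken from the session:

```markdown
# Plan: Quiz feature

## Steps
1. Use the Flowy flowcharts skill to create
   `diagrams/quiz-navigation.flowy.json`:
   main app -> learn tab -> quiz card -> quiz pages.
2. Use the Flowy flowcharts skill to create
   `diagrams/quiz-gameplay.flowy.json`:
   question -> correct/incorrect -> completion.
3. Use the Flowy UI mockups skill to create a mockup for each screen.
4. Implement navigation per `quiz-navigation.flowy.json`.
5. Implement the quiz state machine per `quiz-gameplay.flowy.json`.
6. Build each screen to match its mockup.
```

The diagram-creation steps come first so every implementation step downstream has a concrete visual specification to reference.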
Debugging Pattern
When encountering bugs:
1. Check if it’s a hot reload issue (reload the app)
2. Copy the exact error message
3. Tell Claude when/how the error occurs
4. Ask Claude to reference original plans and diagrams
5. Add “in the chat” to force explanation before code changes
6. Manually inspect files in IDE while Claude analyzes
Monorepo Workarounds
For clients without true monorepos:
- Run Claude in a parent directory that encompasses both front-end and back-end repos
- Creates a “Frankenstein monorepo” that still allows cross-referencing
- Still better than working in completely separate contexts
Quotes Worth Saving
“I just felt like I was getting so much more work done and that made me start to think about things like, OK, I’m outputting more, but I’m really getting paid less at the end of the day.” (00:01:02)
On the realization that led him to 10X’s story-point model
“When you’re just sitting in the monorepo, you can kind of build the whole data flow. You know, you can say, I need this new feature. It’s going to store this data. You’re going to pull this data. We’re going to start at the database, build it up through the back end, and then we’re going to build the front end for it. And that just lets us move so much faster, both from the coding side with Claude, but even just coordination, right?” (00:03:53)
On why monorepos are essential for AI-assisted development
“I got tired of looking at these ASCII boxes.” (00:12:31)
On the motivation for building Flowy
“When I’m typing, there’s some inherent laziness where I don’t want to spend the time to type out this perfectly exact prompt. I’m fine if it infers a couple of things. But if I’m just rambling, I’m way more specific. I’m way more detailed. I’ll give it context that might even not be related from what I think. But it actually helps the model solve the problem.” (00:15:51)
On why voice transcription produces better prompts than typing
“I’m anti-MCP. I find that skills and CLIs are often just as capable and also kind of give you way less context bloat.” (00:21:50)
On preferring skills over Model Context Protocol
“I do love the idea of almost passing prompts through an LLM. Like, in a way, that’s what we’re doing in plan mode, right? It’s, hey, I want to build this, go learn some more, and then write a big markdown file that I’m basically going to use as a prompt for another agent.” (00:20:07)
On LLM chaining and plan mode as a prompt refinement system