From Lovable to Claude Code: How I Turned a Prototype Into a Real Application
Building SportsSync — Part 12
Three Months Later
I disappeared for three months. Summer arrived, the kids were home, and development stopped. But the Lovable subscription didn't — €100 wasted on unused tokens while the project sat idle. That billing reality accelerated a decision I'd been delaying: it was time to leave Lovable.
When I came back in August, the first thing I did was cancel Lovable and bring everything local. What followed was the most productive period of the entire project.
Why I Left Lovable
Three reasons, in order of importance:
Cost without control. Lovable charges whether you're building or not. At €50-80/month, idle months are expensive. Claude Code's subscription is €18/month, and the token windows actually help my workflow (more on that later).
Code quality. What Lovable produced worked, but it was spaghetti. Nested loops with O(n²) complexity, duplicated logic across components, no tests, no documentation. For a prototype, fine. For a product I want to sell, unacceptable.
Framework mismatch. Lovable built on Vite, which I don't know well. I needed Next.js — it's what I use professionally, it supports both frontend and backend (API routes), and it deploys seamlessly on Vercel. Same technology, more control.
The Migration: Lovable → Local → Production
The migration was methodical:
1. Clone the repo locally. Lovable syncs to GitHub, so the code was already there. I cloned it, ran npm install, and verified it worked identically to the deployed version.

2. Add tests before changing anything. I told Claude Code: "Without modifying any code, add unit tests covering the entire codebase." It generated 390 tests covering approximately 80% of the code. Now I had a safety net — any future change that broke something would be caught.

3. Migrate from Vite to Next.js. A single prompt: "Convert this Vite application to Next.js with App Router." Several hours of iteration, but the result was a proper Next.js application with server components, API routes, and Turbopack for fast development builds.

4. Modernize the tooling. Jest → Vitest (faster, less configuration). ESLint + Prettier → Biome (single tool, faster). npm → pnpm (faster installs, better disk usage). Node 20 → Node 22.

5. Set up CI/CD. GitHub Actions pipeline with parallel jobs: dependency setup, build validation, security audit, test suite, and code quality checks. Every job must pass before code can merge to main. Semantic release generates version numbers and changelogs automatically.

6. Deploy on Vercel. Every PR gets a preview deployment, so I can test changes on a real URL before merging. Production deploys happen automatically when PRs merge to main.
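The safety-net idea in step 2 can be sketched as a characterization test: capture what the code does today, then refactor against that. A minimal example — `haversineDistance` is a hypothetical helper of the kind a GPS prototype contains, and a plain throw stands in for a Vitest assertion:

```typescript
// Hypothetical helper of the kind the prototype contained:
// great-circle distance in metres between two GPS fixes.
function haversineDistance(
  lat1: number, lon1: number,
  lat2: number, lon2: number,
): number {
  const R = 6371000; // Earth radius in metres
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Characterization test: pin down current behaviour before refactoring.
// One degree of latitude is ~111.2 km; fail loudly if that ever changes.
const d = haversineDistance(45.0, 7.0, 46.0, 7.0);
if (Math.abs(d - 111195) > 200) throw new Error(`behaviour changed: ${d}`);
```

The test doesn't claim the current behaviour is right — only that it is what the deployed app does, so any refactor that silently changes it gets caught.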
The entire transformation — from a Lovable prototype to a production-grade Next.js application with 537 tests, CI/CD, and automated deployments — took about two weeks of focused work.
The Private Dashboard
The biggest new feature: a private area behind Google authentication. When you log in with your Google account, the app accesses your YouTube channel and displays your videos. You can select a video, upload a GPX file, synchronize them, and the overlay appears — just like the public demo, but with your data saved to a database.
What gets stored: YouTube video ID, processed GPX data (as JSON, not the raw XML), and the sync point coordinates. With these three pieces of data, the entire overlay can be reconstructed at any time.
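A sketch of what such a stored record might look like — the field names here are my illustration, not SportsSync's actual schema:

```typescript
// Hypothetical shape of a stored activity; the real schema may differ.
interface TrackPoint {
  t: number;     // seconds since activity start
  lat: number;
  lon: number;
  ele: number;   // elevation in metres
  speed: number; // m/s, derived during GPX processing
}

interface SyncedActivity {
  youtubeVideoId: string; // enough to re-embed the video
  gpxPoints: TrackPoint[]; // processed GPX as JSON, not the raw XML
  syncPoint: { videoSeconds: number; gpxIndex: number }; // alignment anchor
  isPublic: boolean; // toggles the shareable link
}

// Reconstructing the overlay: given a video timestamp, offset into the
// GPX timeline from the sync point.
function gpxTimeForVideoTime(a: SyncedActivity, videoSeconds: number): number {
  const offset = videoSeconds - a.syncPoint.videoSeconds;
  return a.gpxPoints[a.syncPoint.gpxIndex].t + offset;
}
```

With just these three pieces of data, any video time maps deterministically to a point on the track, which is why nothing heavier needs to be stored.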
Public sharing works. Each synchronized activity can be toggled to public, generating a shareable link. Anyone with the link sees the telemetry overlay without needing an account. This is the foundation for the community feed — eventually, all public activities will appear on the landing page.
The Google OAuth Nightmare
Integrating Google authentication was the hardest part of the entire project. Not because OAuth is conceptually difficult, but because YouTube's API has aggressive quota limits.
My first implementation used an expensive API endpoint that fetched all video metadata at once. Just me testing the app burned through 95% of the daily API quota. If I couldn't use the app alone without hitting limits, it would never work with real users.
The fix required switching to lighter API calls and caching results aggressively. The quota usage dropped dramatically, but the debugging process took two weeks — most of it fighting Google's documentation and error messages rather than writing actual code.
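The pattern behind that fix is generic: prefer cheap endpoints (in the YouTube Data API, `search.list` costs 100 quota units per call while `playlistItems.list` costs 1) and memoize responses so repeated page loads don't re-spend quota. A minimal TTL cache sketch — not the actual SportsSync code:

```typescript
// Minimal in-memory TTL cache: each expensive fetch runs at most once
// per key within the ttl window; everything else is served from memory.
class TtlCache<T> {
  private entries = new Map<string, { value: T; expiresAt: number }>();
  constructor(private ttlMs: number) {}

  async getOrFetch(key: string, fetcher: () => Promise<T>): Promise<T> {
    const hit = this.entries.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit: 0 quota
    const value = await fetcher(); // cache miss: spend quota once
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}

// Usage sketch: the fetcher would call playlistItems.list (1 unit)
// rather than search.list (100 units). Hypothetical wiring, not a real
// API wrapper:
const videoCache = new TtlCache<string[]>(10 * 60 * 1000); // 10-minute TTL
```

For a single-user test session, this turns dozens of identical requests per day into one or two, which is the difference between burning 95% of the quota and barely touching it.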
Claude Code: The Game Changer
If Lovable was like having a designer who builds what you describe, Claude Code is like having a senior developer who works alongside you. The difference is profound.
The token window workflow. Claude Code's subscription gives you token allowances that reset every 5 hours. When you run out, you wait. Initially frustrating, this constraint transformed my productivity. Here's why:
I work in focused sprints. Wake up at 7, start a session: "Continue where we left off." Burn through tokens by 9:30. Switch to my day job. Tokens reset at 1pm. Quick session over lunch. Tokens reset again at 6pm. Evening session after the kids are in bed.
Each break forces me to step back and think about the big picture instead of tunneling into implementation details. When I return with fresh tokens, I have a clearer idea of what to ask for. The constraint eliminates the engineering vice of obsessive focus on the wrong thing.
Plan mode. Before executing anything, Claude Code can outline its plan: "Here's what I'll do, in this order, touching these files." I review, adjust, and approve before any code changes. This prevents the Lovable problem of "I asked for one thing and it changed twelve other things."
My role has shifted. I'm no longer a developer using AI. I'm a product manager directing an AI developer. I validate that the product does what I want (product thinking) and that it's built efficiently (technical oversight). The implementation details? I can live without knowing them for now. The code is a commodity — in two years, none of it will exist in its current form anyway.
Performance Optimizations
With the codebase under control, I tackled performance:
Algorithm complexity. The GPS processing had nested loops (O(n²)) that Claude Code reduced to O(n). For a 19,000-point GPX file, the difference is significant.
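The classic shape of this fix, sketched here as an illustration rather than the actual SportsSync code: both video frame times and GPX points are sorted by time, so a merge-style pass with a moving pointer replaces the nested "for each frame, scan every point" loop.

```typescript
// For each (sorted) frame timestamp, find the index of the nearest
// (sorted) GPX point. Single forward pass: O(n + m), not O(n * m).
function nearestPointPerFrame(
  frameTimes: number[], // seconds, ascending
  pointTimes: number[], // seconds, ascending
): number[] {
  if (pointTimes.length === 0) return [];
  const result: number[] = [];
  let j = 0; // never moves backwards, because frameTimes is ascending
  for (const t of frameTimes) {
    // advance while the next point is at least as close to t
    while (
      j + 1 < pointTimes.length &&
      Math.abs(pointTimes[j + 1] - t) <= Math.abs(pointTimes[j] - t)
    ) {
      j++;
    }
    result.push(j);
  }
  return result;
}
```

At 19,000 GPX points against tens of thousands of frame ticks, dropping the inner scan is the difference between millions of comparisons and a few tens of thousands.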
React re-renders. Components re-rendered on every telemetry tick, even when their own values hadn't changed. React.memo and Zustand (replacing React Context) eliminated the unnecessary re-renders.
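The core idea behind the Zustand switch, stripped of React: subscribers register a selector, and are notified only when their selected slice actually changes. A framework-free sketch (not Zustand's real internals):

```typescript
// Tiny selector-based store: listeners fire only when their selected
// slice changes, which is what kills per-tick re-renders.
type Listener = () => void;

function createStore<S extends object>(initial: S) {
  let state = initial;
  const subs: Array<{ selector: (s: S) => unknown; last: unknown; fn: Listener }> = [];

  return {
    getState: () => state,
    setState(partial: Partial<S>) {
      state = { ...state, ...partial };
      for (const sub of subs) {
        const next = sub.selector(state);
        if (!Object.is(next, sub.last)) { // notify only on a real change
          sub.last = next;
          sub.fn();
        }
      }
    },
    subscribe<T>(selector: (s: S) => T, fn: Listener) {
      subs.push({ selector, last: selector(state), fn });
    },
  };
}
```

With React Context, every consumer re-renders on any state change; with a selector subscription, a speed widget sleeps through heart-rate updates.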
Grade calculation. The slope/gradient metric was the most problematic — smoothing algorithms produced unrealistic values on short segments. Still not perfect, but improved enough for the MVP.
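One common approach to the short-segment problem — offered here as an illustration, not necessarily what SportsSync settled on — is to compute grade over a fixed distance window rather than between adjacent points, so GPS elevation noise on a 5-metre segment can't produce absurd values:

```typescript
// Grade (%) at point i, computed over a distance window instead of a
// single noisy segment: rise over run across the nearest points that
// span at least windowMetres of travelled distance.
function gradeAt(
  cumDist: number[],   // cumulative distance in metres, ascending
  elevation: number[], // metres, same length as cumDist
  i: number,
  windowMetres = 50,
): number {
  let a = i;
  let b = i;
  // Grow the window symmetrically until it spans enough distance
  // (or the whole track).
  while (a > 0 || b < cumDist.length - 1) {
    if (cumDist[b] - cumDist[a] >= windowMetres) break;
    if (a > 0) a--;
    if (b < cumDist.length - 1) b++;
  }
  const run = cumDist[b] - cumDist[a];
  if (run === 0) return 0; // degenerate track, avoid division by zero
  return ((elevation[b] - elevation[a]) / run) * 100;
}
```

The trade-off is responsiveness: a wider window smooths harder but lags behind real grade changes, which is why this metric is "improved enough for the MVP" rather than solved.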
The Ambassador Plan
The go-to-market strategy is simple and manual:
1. Start with one person. Santi, a cycling friend. Sit with him for 30 minutes, let him use the app, watch what confuses him, ask what's missing.

2. Expand to 10-15 cyclists. Each one gets free or heavily discounted access. In exchange: honest feedback, patience with bugs, and (hopefully) sharing their synchronized activities publicly.

3. Phase 2 as the hook. The current product (view overlays in browser) is useful but limited. The real selling point is Phase 2: automatic short generation. Upload a ride, get three 20-second vertical videos with telemetry baked in, ready for Instagram. That's what makes cyclists say "I need this."

4. Organic growth through content. Every shared activity has "SportsSync" in the overlay. Every Instagram story with telemetry data is an ad. Ten ambassadors with even small followings create visibility in the cycling community.
What Phase 2 Looks Like Technically
The video rendering microservice is 80% built from a separate project I did over the summer (automated YouTube highlight extraction). The pipeline: download video → detect interesting moments (peak speed, max power, steepest climb) → crop to 9:16 → render telemetry overlay onto frames with FFmpeg → output shareable video.
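The "detect interesting moments" step can be sketched as a sliding-window search: find the clip-length window with the highest average speed (the same shape works for power or gradient). A hypothetical helper, assuming one speed sample per second — an illustration of the technique, not the microservice's code:

```typescript
// Find the start index of the windowSeconds-long window with the
// highest total (hence average) speed, given one sample per second.
function peakSpeedWindow(speeds: number[], windowSeconds: number): number {
  if (speeds.length < windowSeconds) return 0;
  let sum = 0;
  for (let i = 0; i < windowSeconds; i++) sum += speeds[i];
  let best = sum;
  let bestStart = 0;
  // Slide the window in O(n): add the entering sample, drop the
  // leaving one, instead of re-summing each window.
  for (let i = windowSeconds; i < speeds.length; i++) {
    sum += speeds[i] - speeds[i - windowSeconds];
    if (sum > best) {
      best = sum;
      bestStart = i - windowSeconds + 1;
    }
  }
  return bestStart;
}
```

Run once per metric (speed, power, grade), this yields the three candidate clips; cropping and overlay rendering then happen downstream in FFmpeg.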
The scaling challenge is real. Processing a 1-minute clip takes about 6 minutes of compute time. With a queue, 10 users generating 3 clips each means 30 jobs × 6 minutes = 3 hours of sequential processing. That needs worker scaling, which needs infrastructure investment, which needs revenue to justify.
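That back-of-the-envelope math generalizes to one line: with w parallel workers, wall-clock time is roughly ceil(jobs / w) × minutes-per-job. A sketch:

```typescript
// Rough wall-clock estimate for a render queue: jobs run in parallel
// batches of `workers`, each job taking `minutesPerJob`.
function queueMinutes(jobs: number, minutesPerJob: number, workers: number): number {
  return Math.ceil(jobs / workers) * minutesPerJob;
}

// The scenario from the text: 10 users x 3 clips at 6 minutes each.
// One worker: 180 minutes (the 3 hours above). Five workers: 36 minutes.
```

This ignores queue overhead and uneven clip lengths, but it's enough to see that going from one worker to five turns an overnight backlog into a lunch break — and what that implies for infrastructure cost.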
But the path is clear: validate Phase 1, sell Phase 2 as the promise, and build it once people are paying.