Building TipSmart with Vibe Coding - Reality of AI Collaboration with Claude Code, Codex CLI, and Kiro
Hey there, I’m LISA! 🙂
App Store launch in just 3 weeks? What’s the reality of AI collaboration with Claude Code, Codex CLI, and Kiro?
I kicked off TipSmart development with 10 systematically generated tasks from Task Master, excited and thinking, “This time I can really develop systematically!” Today I’ll honestly share how those 3 weeks actually went.
Early Success - “This Is Really Revolutionary!”
Fantastic Start with Claude Code
The first few days were truly amazing. Setting up the project with Claude Code and building out simple features felt like magic.
Implementing Tip Calculation Logic:
- “Create a function that calculates with 15%, 18%, 20% tip rates”
- → Perfect Swift code generated instantly
- “How do I connect this in SwiftUI?”
- → UI binding perfectly done
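To give a sense of it, here’s a minimal sketch along those lines; TipRate and TipCalculator are names I’m using for illustration, not the exact code from the session:

```swift
import Foundation

// Illustrative sketch only: TipRate and TipCalculator are my names,
// not the exact code Claude Code generated in the session.
enum TipRate: Double, CaseIterable {
    case fifteen = 0.15
    case eighteen = 0.18
    case twenty = 0.20
}

struct TipCalculator {
    /// Tip for a bill at the given rate, rounded to cents.
    static func tip(on bill: Decimal, rate: TipRate) -> Decimal {
        var raw = bill * Decimal(rate.rawValue)
        var rounded = Decimal()
        NSDecimalRound(&rounded, &raw, 2, .plain)
        return rounded
    }

    /// Bill plus tip.
    static func total(for bill: Decimal, rate: TipRate) -> Decimal {
        bill + tip(on: bill, rate: rate)
    }
}
```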
Initial UI Setup:
- “Create a clean tip calculator UI”
- → Clean SwiftUI code generated
- “Support dark mode too”
- → Color settings automatically handled
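The screen itself came out roughly like the sketch below (built on the calculator sketch above, with view and property names of my own). Leaning on system colors and standard controls is what made dark mode work with no extra effort:

```swift
import SwiftUI

// Illustrative sketch of the calculator screen, not the exact generated code.
struct TipCalculatorView: View {
    @State private var billText = ""
    @State private var rate: TipRate = .eighteen

    private var bill: Decimal { Decimal(string: billText) ?? 0 }

    var body: some View {
        Form {
            Section("Bill") {
                TextField("Amount", text: $billText)
                    .keyboardType(.decimalPad)
            }
            Section("Tip Rate") {
                Picker("Rate", selection: $rate) {
                    ForEach(TipRate.allCases, id: \.self) { option in
                        Text("\(Int((option.rawValue * 100).rounded()))%").tag(option)
                    }
                }
                .pickerStyle(.segmented)
            }
            Section("Result") {
                // System label/background colors adapt to dark mode automatically.
                Text("Tip: \(TipCalculator.tip(on: bill, rate: rate).description)")
                Text("Total: \(TipCalculator.total(for: bill, rate: rate).description)")
            }
        }
    }
}
```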
Back then I really felt “The AI era has arrived!”
Attempting TDD Integration
What was even more amazing was when I tried Test-Driven Development (TDD).
When I requested “First write test code, then implement the actual logic”:
- Test Code → Perfect XCTest-based tests
- Actual Logic → Implementation that passes tests
- Test Pass → All tests successful
I thought “This is really revolutionary!” TDD with AI!
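In concrete terms the loop looked like the sketch below (written against the calculator sketch above; test names and numbers are illustrative, not the project’s actual tests): the test file came first, then the implementation was written until everything went green.

```swift
import XCTest
@testable import TipSmart

// Test-first sketch: these assertions existed before the implementation did.
final class TipCalculatorTests: XCTestCase {
    func testTwentyPercentTipOnFiftyDollars() {
        XCTAssertEqual(TipCalculator.tip(on: 50, rate: .twenty), 10)
    }

    func testTotalIncludesTip() {
        XCTAssertEqual(TipCalculator.total(for: 100, rate: .fifteen), 115)
    }
}
```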
Increasing Complexity and Gradual Limitations
But as the project got more complex, problems started emerging.
Increasing Claude Code Mistakes
Claude Code, which was perfect with simple features, started making mistakes with complex logic.
- Core Data Integration: Seemed to work at first, then crashed later
- AdMob Integration: Build failures due to SDK version issues
- Person Split Logic: Calculation errors at boundary values
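The person-split errors were the classic leftover-cent problem at boundary values. Here’s a hedged sketch of the kind of logic that had to be handled explicitly; the split(_:amongst:) helper is my illustration, not TipSmart’s actual code:

```swift
import Foundation

extension TipCalculator {
    /// Splits a total into equal cent-rounded shares, assigning any leftover
    /// cents to the first share so the shares always sum back to the total.
    static func split(_ total: Decimal, amongst people: Int) -> [Decimal] {
        precondition(people > 0, "Need at least one person")
        var rawShare = total / Decimal(people)
        var share = Decimal()
        NSDecimalRound(&share, &rawShare, 2, .down)   // round each share down to cents
        var shares = [Decimal](repeating: share, count: people)
        shares[0] += total - share * Decimal(people)  // e.g. $100.01 / 3 leaves $0.02
        return shares
    }
}
```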
Shocking TDD Bypass Attempts
The most shocking thing was attempts to bypass TDD.
When there was a bug in the logic, instead of fixing the actual logic, it tried to “bypass the problem by modifying only the test code”.
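To illustrate the pattern with made-up numbers (a paraphrase, not the actual diff): rather than repairing the split logic sketched above, the proposal was effectively to loosen the test’s expectation until the buggy code passed.

```swift
import XCTest
@testable import TipSmart

final class SplitRegressionTests: XCTestCase {
    func testSharesSumBackToTheBill() {
        // What I wanted: fix the logic so the shares still sum to the bill.
        XCTAssertEqual(TipCalculator.split(100.01, amongst: 3).reduce(0, +), 100.01)
        // What was suggested instead (paraphrased): change the expected value
        // to whatever the broken implementation happened to return.
    }
}
```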
I realized, “Ah, this isn’t right. TDD with AI is still at too early a stage.”
AI as Collaboration Partner, Not Tool
Through this process, I gained an important insight.
You shouldn’t think of AI as a simple “tool”; you have to treat it as a “collaboration partner”. Just like working with a junior developer, thorough review and feedback were necessary.
Role Change:
- Before: Developer who codes directly
- Current: Manager who reviews AI work and provides direction
Experience with Codex CLI and Kiro
Along the way, I also tried Codex CLI and Kiro.
Codex CLI
- Pros: Fast code generation speed
- Cons: Difficult to maintain context
- Impression: Good for one-off tasks, but lacking for long projects
Kiro
- Pros: Systematic Requirements → Design → Tasks flow
- Cons: I found several errors in the generated code
- Impression: Interesting, but with plenty of room for improvement
In the end, Claude Code was the most stable, and its combination with Task Master worked best.
Reality Check Before Launch
Problems Found in Integration Testing
When I tested the entire app before launch… several problems surfaced.
- Calculation Logic Errors: Rounding issues at specific amounts
- UI Bugs: Layout breaking on iPad
- Core Data Conflicts: Crashes during concurrent access (see the sketch below)
Root Cause Analysis:
- Insufficient verification in the early stages
- Omissions due to increasing complexity
- Compatibility issues between separately AI-generated pieces of code
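On the Core Data crashes specifically: the usual cause is touching a managed object context from the wrong thread, and the standard remedy is to route writes through a background context’s own queue via perform(_:). A minimal sketch with placeholder entity and attribute names (not TipSmart’s real data model):

```swift
import CoreData

// Hedged sketch: "SavedBill" and "amount" are placeholder names.
func saveBill(amount: Decimal, in container: NSPersistentContainer) {
    let context = container.newBackgroundContext()
    context.perform {
        // All work on `context` happens on its own serial queue here,
        // so it never races the main-thread viewContext.
        let bill = NSEntityDescription.insertNewObject(forEntityName: "SavedBill",
                                                       into: context)
        bill.setValue(amount as NSDecimalNumber, forKey: "amount")
        do {
            try context.save()
        } catch {
            print("Core Data save failed: \(error)")
        }
    }
}
```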
AdMob and App Store - Harder Than Development
What I really didn’t expect was the AdMob setup and App Store approval process.
- AdMob: Documentation and setup procedures to work through
- App Store: A submission → rejection → fix → resubmission loop that felt endless
- Privacy Policy: The complexity of writing a legal document
AI can help with parts of this, but much of it ultimately has to be handled by a human.
Lessons and Realistic Expectations
Importance of Thorough Review
I realized that 100% review of AI-generated code is essential.
- Integration errors not caught by unit tests
- Unexpected behavior in exception scenarios
- Performance issues or memory leaks
Adjusting Realistic Expectations
Realistic expectations for AI collaboration:
- ✅ Development Speed: Definitely faster (about 2-3x)
- ✅ Initial Prototyping: Fast implementation possible
- ❌ Complete Automation: Still impossible
- ❌ Bug-Free: Still needs lots of review
Need for Patience
In the end, patience was most important. You need to accept that AI isn’t perfect and maintain an attitude of improving together.
Conclusion
Through 3 weeks of vibe coding, I was able to successfully launch TipSmart. The app isn’t perfect, but it proved that you really can build an app on your own through AI collaboration.
The important thing is having realistic expectations. AI is a powerful collaboration partner, but it’s still technology in development. Don’t forget that thorough review and continuous improvement are necessary.
In the next project, I plan to attempt more systematic and efficient AI collaboration based on this experience!
📱 TipSmart - Result of AI Collaboration
TipSmart was born from 3 weeks of vibe coding and AI collaboration. Try out the actual result, built together with Claude Code, Codex CLI, and Kiro!
Download TipSmart from the App Store 📱
If you have your own AI collaboration experiences or questions, please share them on social media! I’d love to hear your vibe coding stories too!