Within 48 hours of launching create-llm on Hacker News, I shipped the most requested feature: a chat interface.
The Problem
You just trained your own LLM. It took hours (or minutes with the nano template). The model is saved. Now what?
Most tutorials end here. You have a trained model sitting in a checkpoint file, but no easy way to actually talk to it.
I realized this was a massive gap. People don’t just want to train models—they want to use them.
The Solution: Built-in Chat Interface
After training completes, you now see two options:
Training complete! What would you like to do?
1. Continue training (more epochs)
2. Chat with your model
Choose option 2, and a ChatGPT-like interface opens in your browser. Powered by Gradio. No setup. No configuration. Just works.
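Under the hood, that menu is just a prompt and a dispatch. A rough sketch, illustrative only — launch_chat and continue_training are placeholder names, not the actual create-llm internals:

choice = input("Enter choice (1 or 2): ").strip()
if choice == "2":
    launch_chat()          # placeholder: starts the Gradio app sketched below
else:
    continue_training()    # placeholder: resumes from the saved checkpoint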
How It Works
When you finish training with create-llm:
- Model saves automatically to checkpoints/
- You choose “Chat” from the menu
- Gradio interface launches in your browser (localhost:7860)
- Start chatting with your trained LLM
That’s it. Your custom model, your own chat interface, running locally.
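If you're curious what "no setup" looks like in code, this is roughly the shape of it. A minimal sketch, assuming a generate() helper that samples from the trained checkpoint (the real create-llm wiring may differ):

import gradio as gr

def reply(message, history):
    # placeholder: generate() samples a continuation from your checkpoint
    return generate(message)

gr.ChatInterface(fn=reply, title="Chat with your model").launch()
# launch() serves on localhost:7860 by default

One function plus gr.ChatInterface is the entire UI. Gradio handles the frontend.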
Why Gradio?
I chose Gradio for three reasons:
- Fast to implement – got it working in a few hours
- Zero frontend code – Python only, no React or HTML needed
- Clean interface – looks professional out of the box
Plus, it's what the ML community already uses. If you've tried a demo on Hugging Face Spaces, chances are it was Gradio.
Real Example
I trained a nano model (7M parameters) on Shakespeare in 5 minutes on my laptop:
npx create-llm shakespeare-bot
cd shakespeare-bot
python train.py
# ... training happens ...
# Choose: Chat with your model
Browser opens. I type: “To be or not to be”
The model continues in Shakespearean style. It’s not GPT-4, but it’s MY model. Running on MY machine. That feels different.
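For the curious: what the chat box is doing is plain autoregressive sampling — extending your prompt one token at a time. A hedged sketch, where load_model is a stand-in for however you load the checkpoint (not create-llm's actual API):

import torch

model, tokenizer = load_model("checkpoints/latest.pt")  # placeholder loader
ids = torch.tensor([tokenizer.encode("To be or not to be")])

model.eval()
with torch.no_grad():
    for _ in range(100):                               # generate 100 new tokens
        logits = model(ids)[:, -1, :]                  # next-token logits
        probs = torch.softmax(logits / 0.8, dim=-1)    # temperature 0.8
        next_id = torch.multinomial(probs, 1)          # sample one token
        ids = torch.cat([ids, next_id], dim=1)

print(tokenizer.decode(ids[0].tolist()))

Lower the temperature and it plays it safe; raise it and the bard gets weirder.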
What People Are Saying
Since launching 48 hours ago:
- 140+ GitHub stars (and climbing)
- #5 on Show HN
- 11 forks from developers worldwide
- Real users training their first LLMs
The chat feature was the most requested addition. Now it’s live.
Try It Yourself
Train your first LLM in 5 minutes:
# 1. Create project
npx create-llm my-first-llm
# 2. Choose template (nano = fastest)
# 3. Let it train
# 4. Choose "Chat with your model"
# 5. Browser opens, start chatting
That's it. No GPU required for the nano template. Works on Mac, Linux, and Windows.
What’s Next
The roadmap, based on user feedback:
- Better error messages for CUDA setup issues
- Model deployment options (API, Docker)
- Fine-tuning on custom datasets (easier workflow)
- Progress bars during training
- Template refactoring (cleaner codebase)
The Real Goal
I built create-llm because I was frustrated. LLM tutorials were either too basic (“hello world”) or too complex (“here’s 500 lines of setup”).
I wanted something like create-next-app but for LLMs. One command. Working project. Learn by doing.
The chat interface completes that vision. You can now:
- Create a project (1 command)
- Train a model (a few minutes)
- Chat with it (in your browser)
- Learn by experimenting (change configs, see results)
All in under 10 minutes.
Join the Journey
This project went from idea to 140+ stars in 48 hours. The chat feature shipped within 24 hours of launch.
I’m building in public. Shipping fast. Learning from users.
Try create-llm:
- GitHub: github.com/theaniketgiri/create-llm
- Star it if you find it useful
- Open issues for bugs/features
- PRs welcome
Follow my journey:
- Twitter: @theaniketgiri
- More articles coming on training tips, architecture decisions, and lessons learned
One More Thing
If you train a model with create-llm, I’d love to hear about it. What did you train? What dataset? How did it turn out?
Drop a comment or tweet at me. Let’s learn together.
TL;DR: Added a Gradio-powered chat interface to create-llm. Train your LLM, then chat with it immediately. No setup. Try it: npx create-llm my-bot