Model Playground
One-Command ML Deployment
The gap between "this model works on my laptop" and "this model is available as an API" is surprisingly wide. Model Playground closes it.
You describe a model in a config file — its dependencies, its entry point, its GPU requirements — and run ./playground run. Thirty seconds later, you have a live API endpoint on Cloud Run with GPU support, OpenAPI docs, optional auth, and CORS config. It scales to zero when idle.
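A config along these lines might capture that description. This is a sketch only: the file name, field names, and schema are my assumptions, not taken from the project.

```yaml
# playground.yaml -- hypothetical model config; all field names are illustrative
name: depth-anything-3
entrypoint: serve.py          # module that exposes the model's predict function
dependencies:                 # installed on top of the shared base image
  - torch>=2.2
  - transformers
gpu:
  type: nvidia-l4             # Cloud Run GPU type
  count: 1
scaling:
  min_instances: 0            # scale to zero when idle
```

With a file like this in place, `./playground run` has everything it needs to build the image and deploy the service.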
The system uses a base Docker image with common ML packages pre-installed. Each model config adds its specific dependencies on top. It currently supports Depth Anything 3, DepthSplat (Gaussian Splats), RI3D, MVControl, and ACE-Step 1.5 for music generation.
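The layering could look roughly like this; the base image tag, package list, and paths are assumptions for illustration, not the project's actual Dockerfile.

```dockerfile
# Shared base: common ML stack baked in once, reused by every model
FROM pytorch/pytorch:2.2.0-cuda12.1-cudnn8-runtime AS playground-base
RUN pip install --no-cache-dir transformers accelerate fastapi uvicorn

# Per-model layer: only this model's extra dependencies go on top,
# so rebuilds and cold starts stay fast
FROM playground-base
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . /app
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8080"]
```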
The part I'm most proud of is the MCP server integration. Model Playground exposes itself as an MCP server, which means Claude Code can directly call deployed models during a coding session. You can be writing code and say "generate a depth map of this image" and it routes to your deployed Depth Anything instance.
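Under the hood, a tool call like that bottoms out in an ordinary HTTP request to the deployed endpoint. A minimal sketch of building such a request, where the route, payload shape, and field names are all assumptions:

```python
import base64
import json
import urllib.request

def depth_map_request(image_path: str, endpoint: str) -> urllib.request.Request:
    """Build a JSON POST request for a hypothetical deployed depth-estimation API.

    The payload shape ({"image": <base64>}) is illustrative, not the
    project's actual API contract.
    """
    with open(image_path, "rb") as f:
        payload = {"image": base64.b64encode(f.read()).decode("ascii")}
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

The MCP server's job is to wrap calls like this in tool definitions, so the coding agent never has to know the endpoint URL or payload format.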
When you're experimenting with multiple models — trying to find the right one for a feature, or comparing outputs — the deployment friction kills velocity. Model Playground makes it trivial to spin up, test, and tear down models.