Custom Engines, Broken Rigs, and AI Costs
The "Build Your Own Engine" Moment Is Here
Multiple experienced developers — some with seven-figure engine licensing histories — independently arrived at the same conclusion this week: general-purpose engines no longer make sense for AI-assisted solo or small-team development.
The argument runs like this: if AI can scaffold a rendering pipeline, physics harness, or input system in hours, the overhead of learning and fighting a general engine exceeds the cost of building exactly what your game needs. One developer who previously held a $1.5M Unreal license put it bluntly: he'll never use an off-the-shelf engine again at this pace.
The evidence is stacking up:
- A developer with no prior Rust experience shipped a full game engine (ForgeEngine) in Rust using Opus 4.5, including a particle system and a mesh LOD pipeline that handles skinned meshes cross-platform. The key insight: Rust's compiler errors are so descriptive that AI agents "love" working in it.
- Another is porting Unreal Engine 5's rendering backend to WebGPU, replacing the RHI layer with SPIR-V to WGSL translation — detailed, low-level graphics work guided by AI.
- A C# developer is building a MonoGame + DirectX 11 stack with no engine at all, using AI for the boilerplate that engines used to justify.
Meanwhile, traditional IDE usage is cratering. Multiple developers reported uninstalling Visual Studio and canceling JetBrains subscriptions because they haven't touched them in months.
This doesn't mean Unity and Unreal are dead — large teams with existing pipelines aren't going anywhere. But for new projects at the 1-5 person scale, the calculus has flipped: the engine tax may now exceed the build-it-yourself tax.
Practical takeaways
- Evaluate whether your next project needs a general engine or a purpose-built stack. The break-even has shifted.
- Rust and C++ with AI pair well for engine work — strong type systems give agents better error signals.
- Keep game logic portable regardless. The distribution layer still matters (see below).
Rigging Is the Last Unsolved Piece in the 3D Asset Pipeline
The AI-generated 3D model pipeline has quietly become functional end-to-end — except for one stubborn bottleneck: rigging.
Mesh generation (Tripo, Meshy) now produces usable topology with built-in retopology bringing million-poly outputs down to 15-25K faces. Texturing has multiple viable paths, including runtime LLM-generated PBR workflows. Animation has mocap tools like Cartwheel that accept uploaded models and return game-ready clips. But reliable automatic rigging — especially for non-humanoid characters — remains the gap.
One developer summarized it: "You already have good modeling, optimized retopo, good animation libraries, motion tracking... skinning is a pain in the ass." There's reason for optimism — humanoid robotics research is generating training data that could transfer — but the weights aren't public yet.
Tripo emerged as the consensus leader for mesh generation this week, with developers praising texture quality and the end-to-end workflow over competitors. Meshy still has advocates for runtime/API use cases where no human input is needed.
The biggest image generation news: NanoBanana 2 from Google DeepMind launched and immediately became the tool of the week for 2D asset generation. Developers reported "markedly better" output for game tiles, heightmaps, and concept art. The thinkingLevel: high API parameter significantly improves spatial reasoning. One developer built a full PBR texture pipeline by cutting material maps from NanoBanana 2 output. Limitations remain — sprite sheet generation still fails at mid-cycle consistency — but for static assets, it's a clear step up.
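One way to cut material maps from a single generated image is to prompt for a 2x2 material atlas and crop it into quadrants. The atlas layout below (albedo, normal, roughness, height) is an assumed prompt convention, not a documented NanoBanana 2 output format; in practice you would crop with an image library like Pillow, and this pure-Python sketch shows only the indexing:

```python
def split_material_atlas(pixels):
    """Split a 2x2 atlas (nested row-major pixel list) into four maps.

    Quadrant order is an assumed prompt convention:
      top-left albedo, top-right normal,
      bottom-left roughness, bottom-right height.
    """
    h, w = len(pixels) // 2, len(pixels[0]) // 2
    return {
        "albedo":    [row[:w] for row in pixels[:h]],
        "normal":    [row[w:] for row in pixels[:h]],
        "roughness": [row[:w] for row in pixels[h:]],
        "height":    [row[w:] for row in pixels[h:]],
    }
```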
Practical takeaways
- Use Tripo for mesh generation with its native retopo tool; skip third-party retopology for most game-ready models.
- Build a Gemini-for-vision + Opus-for-code preprocessing pipeline: have Gemini analyze reference images, then feed descriptions to Opus for implementation.
- Test NanoBanana 2 with thinkingLevel: high for any spatial or architectural asset work.
- For rigging, Cartwheel handles humanoids well. Non-humanoid rigging remains manual — watch Kinemo for progress.
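A minimal sketch of the vision-then-code handoff described above. The model names, prompt wording, and payload shapes here are illustrative assumptions for structuring the two stages, not a documented API contract for either provider:

```python
# Stage 1: ask a vision model to describe a reference image.
# Stage 2: feed that description to a coding model for implementation.
# Field names and prompts below are placeholders, not real API schemas.

def build_vision_request(image_path, model="gemini-3.1"):
    """Payload asking the vision model to describe a reference image."""
    return {
        "model": model,
        "parts": [
            {"image": image_path},
            {"text": "Describe this reference image for a game artist: "
                     "layout, palette, materials, and notable shapes."},
        ],
    }

def build_code_request(description, model="opus"):
    """Payload asking the coding model to implement from the description."""
    return {
        "model": model,
        "messages": [
            {"role": "user",
             "content": "Implement this asset in Three.js:\n" + description},
        ],
    }
```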
The AI Gamedev Spending Confessional
An unusually candid thread this week revealed the raw economics of AI-assisted game development — and the spread is enormous.
At the top end, developers reported $10K+/month on API calls and tool subscriptions, with one noting previous Cursor bills of $25K/month before optimizing. At the other end, developers are shipping playable games for $30-100 total in AI costs.
The most common optimization pattern: tool cycling. Developers are rotating between Claude Code, Cursor, Codex, and Antigravity to stay within rate limits, spending roughly $800/month combined instead of burning through a single provider. Others are exploiting subscription arbitrage — Google AI Ultra at $200-250/month reportedly provides access that would cost $5-7K in direct API calls.
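The rotation pattern can be sketched as a simple round-robin with a per-tool request budget before switching. The tool names and budget number here are placeholders; real rate limits vary by provider and plan:

```python
from collections import deque

class ToolCycler:
    """Rotate across coding tools, switching after a fixed request budget.

    A toy model of "tool cycling": real rate limits are time-windowed and
    provider-specific, so the budget here is a stand-in.
    """
    def __init__(self, tools, budget=50):
        self.tools = deque(tools)
        self.used = 0          # requests spent on the current tool
        self.budget = budget   # requests allowed before rotating

    def next_tool(self):
        if self.used >= self.budget:
            self.tools.rotate(-1)  # move on to the next provider
            self.used = 0
        self.used += 1
        return self.tools[0]
```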
The model consensus that emerged:
- Opus 4.6 is the generalist king for game development, especially 3D spatial work with Three.js and Godot
- Gemini 3.1 edges ahead for vision/multimodal tasks, particularly copying from reference images
- Codex 5.3 preferred for smart contract work, ML pipelines, and thorny algorithmic problems
- Open-source models were broadly dismissed — "not even close" to frontier models for complex gamedev tasks
- Minimax 2.5 flagged as a budget alternative approaching Opus 4.5 quality
The practical implication: AI cost is now a real line item in game budgets, and the variance between disciplined and undisciplined usage is 10-100x. Teams with tight eval loops, constrained context windows, and multi-tool strategies are getting comparable output at a fraction of the spend.
Practical takeaways
- Audit your AI spend. The difference between $100/month and $10K/month is workflow discipline, not output quality.
- Use vision models (Gemini) for preprocessing, coding models (Opus) for implementation — don't force one model to do both.
- Tool cycling across providers is the current meta for staying under rate limits.
Projects Worth Watching
- ForgeEngine: Full game engine built in Rust by a non-Rust developer using AI, with particle systems and cross-platform mesh LOD. Site
- DriftClub: Browser-based touge racing with Three.js, custom physics, and PartyKit multiplayer — first alpha track playable. Play
- Inner Space MMO: 3D spaceship MMO built on SpacetimeDB with quests, outposts, and cinematics, from the developer who open-sourced a 2D survival MMORPG earlier in the week. Play
- Skyrim-in-the-Browser: NIF and HKX loaders for Three.js that render Skyrim assets including lipsync data — aimed at letting modders preview and edit files in seconds instead of minutes in Creation Kit. No public repo yet.
- AI Game Flow Framework: A graph-based "Langchain for games" with a visual node editor, supporting character memory, level creation, economy systems, and generative model nodes, with Unity/Unreal plugins planned. Currently internal; developer is considering a public release.
Tools & Drops
- NanoBanana 2: Google DeepMind's image model, rapidly adopted for game tiles, heightmaps, and PBR texture generation. Use thinkingLevel: high for spatial work.
- Cartwheel: Upload 3D models, run mocap, download game-ready animation. Site
- GDAI MCP: Paid plugin connecting AI agents directly to the Godot editor with visual feedback. One developer with zero Godot experience used it to port ESA orbital dynamics libraries into GDScript — rated "10/10." (no affiliation with Procedural Press)
- Open-source Agent DAW: A digital audio workstation built for AI agents, aimed at procedural soundtrack generation. Post
Retro Audio Generation Is Getting Specific
For teams building pixel-art or retro-styled games, generic AI music generators produce the wrong aesthetic. A more targeted approach emerged this week: prompt Suno with exact chip specifications.
The technique: specify channel counts and sound chip palettes — "8-bit Ricoh 2A03, 2x Pulse, 1x Triangle, 1x Noise, 1x DPCM" — to constrain output toward authentic NES/SNES/Genesis sounds. Developers reported dramatically better results versus generic "chiptune" prompts.
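The chip-spec prompt can be templated so the channel layout stays exact across tracks. This is just string assembly; nothing here is a Suno API, and the default mood text is a made-up example:

```python
def chiptune_prompt(chip, channels, mood="upbeat overworld theme"):
    """Build a Suno prompt from an exact sound-chip channel spec.

    channels: sequence of (count, voice) pairs, e.g. (2, "Pulse").
    The "8-bit <chip>, <channels>, <mood>" shape mirrors the prompt
    format reported to work; the mood default is illustrative.
    """
    spec = ", ".join(f"{count}x {voice}" for count, voice in channels)
    return f"8-bit {chip}, {spec}, {mood}"
```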
For even more control, the LLM-to-MIDI pipeline is maturing: use Claude or GPT to generate MIDI data via pretty_midi in Python, then render through tracker software like Renoise. Results are "not bad, not great" but improving.
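A pure-Python sketch of the parsing half of that pipeline: it converts note names an LLM might emit into MIDI pitch numbers. The (name, start, end) tuple format is an assumption for illustration; the actual rendering step would go through pretty_midi and then a tracker like Renoise:

```python
# Convert LLM-emitted note tuples, e.g. ("C4", 0.0, 0.5), into MIDI
# pitch events. The tuple format is assumed for illustration; rendering
# would hand these to pretty_midi.Note and then a tracker.

_OFFSETS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
            "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def note_to_midi(name):
    """'C4' -> 60 under the middle-C = 60 convention pretty_midi uses."""
    pitch, octave = name[:-1], int(name[-1])
    return (octave + 1) * 12 + _OFFSETS[pitch]

def to_events(notes):
    """Turn (name, start_sec, end_sec) tuples into MIDI-writer-ready dicts."""
    return [{"pitch": note_to_midi(n), "start": s, "end": e}
            for n, s, e in notes]
```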
Industry News
- GDC 2026 State of the Game Industry report landed: 2,300+ respondents, with rising GenAI skepticism among traditional studios and significant platform/engine shifts. Report
- Unity AI Gateway signups are open, with a dedicated "Unity AI Beta" session at GDC (March 12) signaling near-term in-editor AI productization. Early Access
- play.fun's Capybara Simulator hit ~100K players in 2 days, with players earning roughly $10K from a game where the point is to sit still — a data point for tokenized web game distribution. play.fun