Tobrun's Builder Blog
Articles

Posts tagged “vLLM”

A running stream of posts on software engineering and technical practices.


Showing posts tagged “vLLM”.

Local LLM with OpenCode
Jan 16, 2026
OpenCode vLLM LLM Local AI

How to configure OpenCode to use any OpenAI-compatible API endpoint, such as a local vLLM server.
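
As a taste of what the article covers: once a local vLLM server is running, any OpenAI client can talk to it. Below is a minimal sketch using the official `openai` Python package; the base URL, port, and model id are assumptions to adapt to your own setup.

```python
# Minimal sketch: talk to a local OpenAI-compatible endpoint (e.g. vLLM).
# The base URL, port, and model id below are assumptions; match your server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local vLLM address
    api_key="not-needed",                 # local servers typically ignore this
)

reply = client.chat.completions.create(
    model="my-local-model",  # hypothetical id; use the model your server loads
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(reply.choices[0].message.content)
```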

Read article →
Running Mistral Vibe with a Local Model
Dec 11, 2025
Mistral Vibe vLLM LLM

A practical guide for configuring Mistral Vibe to use local models such as Devstral-2-123B via vLLM.
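
Before wiring a client like Mistral Vibe to a local model, it helps to confirm which model ids the vLLM server actually exposes. A small sketch, assuming the server runs on localhost:8000 and implements the standard OpenAI-compatible /v1/models endpoint:

```python
# List the model ids a local OpenAI-compatible server advertises.
# The URL is an assumption; vLLM serves /v1/models on its API port.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:8000/v1/models") as resp:
    data = json.load(resp)

for model in data["data"]:
    print(model["id"])  # the id to reference in your client configuration
```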

Read article →
Deploying vLLM on your Linux Server
Dec 3, 2025
vLLM Linux LLM

A complete step-by-step guide for installing vLLM, configuring systemd, setting up virtual environments, and troubleshooting GPU-backed inference servers.
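
For a flavor of the troubleshooting side: vLLM's OpenAI-compatible server exposes a /health endpoint, which is handy for readiness checks after a systemd restart. A minimal polling sketch, assuming the default localhost:8000 address:

```python
# Poll a vLLM server's /health endpoint until the engine reports ready.
# Host and port are assumptions; model loading can take several minutes.
import time
import urllib.error
import urllib.request

URL = "http://localhost:8000/health"  # assumed server address

for _ in range(60):
    try:
        with urllib.request.urlopen(URL, timeout=2) as resp:
            if resp.status == 200:
                print("vLLM server is healthy")
                break
    except (urllib.error.URLError, OSError):
        pass  # server not up yet; keep polling
    time.sleep(5)
else:
    raise SystemExit("vLLM server did not become healthy in time")
```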

Read article →

© 2026 Tobrun Van Nuland

Made with ❤️ and one more cup of ☕ than planned