
How to Deploy an LLM on Serverless

Key idea:

Serverless GPU platforms made LLM hosting accessible by 2026: (1) Modal.com ($0.0005/s A10G): Python-native, cold start 2-5 s; (2) RunPod Serverless ($0.0003/s): cheaper, template-based; (3) Replicate ($0.001/s): pre-built models ready to call; (4) Cloudflare Workers AI: edge deployment, limited model catalog. Alternative: self-host on a bare-metal GPU ($4-10/h). Pay-per-request suits variable traffic; bare-metal wins for sustained load.

Below: step-by-step, working examples, common pitfalls, FAQ.


Step-by-Step Setup

  1. Pick provider: Modal (Python-friendly), RunPod (cheapest), Replicate (fastest start)
  2. Upload model weights — public (HF Hub) or private bucket
  3. Define handler: generate(prompt: str) -> str
  4. Configure GPU: A10G ($0.0005/s), A100 ($0.0015/s), H100 ($0.003/s)
  5. Set concurrency: batching for throughput, or one call per request for lower latency
  6. Deploy + test endpoint
  7. Monitor: latency, cold starts, GPU utilisation
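As a sanity check on step 4, per-request GPU cost follows directly from the per-second rates listed above; the 2 s of GPU time per request used below is an illustrative assumption, not a benchmark:

```python
# $/s rates for the GPU tiers from step 4
RATES = {'A10G': 0.0005, 'A100': 0.0015, 'H100': 0.003}

def cost_per_1k_requests(gpu: str, seconds_per_request: float = 2.0) -> float:
    # Cost of serving 1,000 requests at the given per-second rate
    return RATES[gpu] * seconds_per_request * 1000

for gpu in RATES:
    print(f'{gpu}: ${cost_per_1k_requests(gpu):.2f} per 1k requests')
```

At 2 s/request this works out to roughly $1, $3, and $6 per thousand requests for A10G, A100, and H100 respectively, which is why GPU choice dominates the bill at scale.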

Working Examples

Modal Python (Llama 3 70B)

    import modal

    app = modal.App('llama-inference')
    image = modal.Image.debian_slim().pip_install('vllm')

    @app.function(gpu='A100-80GB', image=image, timeout=600)
    def generate(prompt: str) -> str:
        from vllm import LLM
        # NB: this loads the model on every call; real deployments keep the
        # engine cached across requests (e.g. a Modal class with a lifecycle hook)
        llm = LLM('meta-llama/Llama-3-70B-Instruct')
        return llm.generate(prompt)[0].outputs[0].text

    @app.local_entrypoint()
    def main():
        print(generate.remote('Hello'))

RunPod Serverless

    # handler.py; deploy via the UI plus a Dockerfile (vllm or llama.cpp image)
    import runpod

    llm = ...  # engine initialised once at container start (this is the cold-start cost)

    def handler(event):
        prompt = event['input']['prompt']
        output = llm.generate(prompt)
        return {'text': output}

    runpod.serverless.start({'handler': handler})

Replicate (pre-built models)

    import replicate

    output = replicate.run(
        'meta/llama-3-70b-instruct',
        input={'prompt': 'Hello', 'max_tokens': 512},
    )

Cloudflare Workers AI

    export default {
      async fetch(request, env) {
        const { prompt } = await request.json();
        const resp = await env.AI.run('@cf/meta/llama-3-8b-instruct', { prompt });
        return Response.json(resp);
      },
    };

vLLM locally (Docker)

    $ docker run --gpus all -p 8000:8000 vllm/vllm-openai:latest \
        --model meta-llama/Llama-3-70B-Instruct \
        --max-model-len 8192
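The vllm/vllm-openai image exposes an OpenAI-compatible HTTP API on port 8000. A minimal client sketch: the endpoint URL assumes the Docker command above is running locally, so the actual network call is left commented out and the snippet only builds the request body:

```python
import json

def completion_payload(prompt: str, max_tokens: int = 256) -> dict:
    # OpenAI-compatible /v1/completions body; the model name must
    # match the --model flag passed to the server
    return {
        'model': 'meta-llama/Llama-3-70B-Instruct',
        'prompt': prompt,
        'max_tokens': max_tokens,
    }

body = json.dumps(completion_payload('Hello')).encode()

# To actually send (requires the server above to be up):
# import urllib.request
# req = urllib.request.Request(
#     'http://localhost:8000/v1/completions', data=body,
#     headers={'Content-Type': 'application/json'})
# print(urllib.request.urlopen(req).read().decode())
```

Because the API is OpenAI-compatible, any OpenAI client library pointed at the local base URL also works, which keeps application code portable across providers.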

Common Pitfalls

  • Cold start: first call after idle = 5-30s. Mitigate: keep-warm ping every 5 min (but you pay for idle)
  • Model size: 70B weights = 140 GB download on cold. Use a provider with pre-cached models
  • Timeout: serverless typically 5-10 min max. Long-running tasks — async queue
  • Vendor lock: migrating from Modal to RunPod — rewrite handler. Use abstractions (vLLM, TGI)
  • Cost shock: a GPU $/s rate seems tiny, but 1 s/request × 10k requests/day × 30 days × H100 at $0.003/s = $900/month
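The cost-shock arithmetic is just multiplication, so it's worth making explicit as a one-liner you can plug your own traffic into:

```python
def monthly_cost(sec_per_request: float, requests_per_day: int,
                 dollars_per_sec: float, days: int = 30) -> float:
    # Total GPU-seconds billed over a month at a per-second rate
    return sec_per_request * requests_per_day * dollars_per_sec * days

# H100 at $0.003/s, 1 s per request, 10k requests/day
print(f'${monthly_cost(1.0, 10_000, 0.003):.0f}/month')
```

Running the numbers for the H100 scenario gives about $900/month, which is why sustained traffic at this volume often justifies a dedicated or bare-metal GPU instead.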


Frequently Asked Questions

Cold start — how to solve?

Keep-warm strategy: ping every 5 min keeps container alive. Cost ~$0.10/hour idle. Or use a provider with faster cold start (Modal 2s).
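At the ~$0.10/hour idle figure above, the keep-warm trade-off is easy to price out. A sketch; actual idle rates vary by provider and GPU:

```python
IDLE_RATE = 0.10  # $/hour while kept warm (illustrative figure from above)

def keep_warm_cost(hours_warm_per_day: float, days: int = 30) -> float:
    # Monthly cost of pinging a container often enough to keep it alive
    return IDLE_RATE * hours_warm_per_day * days

# Around-the-clock keep-warm:
print(f'${keep_warm_cost(24):.0f}/month')
```

Keeping a container warm 24/7 at this rate comes to roughly $72/month, so it only pays off once cold starts are hurting enough users to justify that fixed cost.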

Modal vs RunPod?

Modal: Python-native UX, pricier. RunPod: cheaper, requires Docker image. For prototyping — Modal. For production scale — RunPod.

Do I need vLLM?

For production serving — yes. 2-5x throughput over raw transformers. PagedAttention + continuous batching.
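A toy model of why continuous batching helps: with static batching, every request in a batch waits for the longest sequence to finish decoding, while continuous batching frees a slot as soon as any sequence completes. The token counts below are made up for illustration:

```python
def static_batch_latencies(lengths):
    # Static batching: the whole batch returns when the longest
    # sequence finishes decoding
    return [max(lengths)] * len(lengths)

def continuous_batch_latencies(lengths):
    # Continuous batching (vLLM-style): each sequence completes after
    # its own number of decode steps; its slot is refilled immediately
    return list(lengths)

lengths = [50, 60, 80, 512]  # decode steps per request (illustrative)
avg = lambda xs: sum(xs) / len(xs)

print(avg(static_batch_latencies(lengths)))     # every request pays for the 512-step outlier
print(avg(continuous_batch_latencies(lengths)))
```

In this toy batch, average latency drops from 512 steps to 175.5 because the three short requests no longer wait on the outlier; real gains depend on the mix of sequence lengths in your traffic.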

Is Replicate pricing OK?

Good for low volume + pre-built models. Dedicated deployment (Modal, RunPod) is cheaper for >1M tokens/day.
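A rough break-even sketch using the per-second rates from this article; the throughput figures (~30 tok/s single-stream, ~8x effective throughput with continuous batching on a dedicated endpoint) are assumptions for illustration, not measurements:

```python
TOKENS_PER_DAY = 1_000_000

# Pay-per-call at Replicate's ~$0.001/s, decoding one request at a time
replicate_per_day = TOKENS_PER_DAY / 30 * 0.001

# Dedicated A100 at ~$0.0015/s, with continuous batching lifting
# effective throughput ~8x over single-stream decoding
dedicated_per_day = TOKENS_PER_DAY / (30 * 8) * 0.0015

print(f'Replicate: ${replicate_per_day:.2f}/day, '
      f'dedicated: ${dedicated_per_day:.2f}/day')
```

Under these assumptions the dedicated deployment is several times cheaper at 1M tokens/day, which is the intuition behind the break-even point above; your actual crossover depends on model size and batching efficiency.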