Build a Local Generative AI Assistant on Raspberry Pi 5 with the AI HAT+ 2


2026-01-21 12:00:00
10 min read

Step-by-step guide to run a local LLM on Raspberry Pi 5 + AI HAT+ 2—ONNX conversion, quantization, cooling, and latency tuning for offline inference.

Build a Local Generative AI Assistant on Raspberry Pi 5 with the AI HAT+ 2 — Fast, Private, Offline

If you're tired of cloud latency, privacy headaches, and unpredictable costs when prototyping generative AI, running a compact LLM locally on a Raspberry Pi 5 with the new $130 AI HAT+ 2 is one of the most practical edge-AI options in 2026. This step-by-step guide walks you from hardware assembly to an optimized offline inference server (ONNX + quantization), plus real tuning tips for cooling, latency, and throughput.

Late 2025 and early 2026 cemented a few irreversible trends: open-weight models optimized for on-device inference, rapid maturation of ONNX Runtime providers for NPUs, and widespread 4-bit/8-bit quantization tooling that preserves quality while slashing memory and compute needs. Regulators and enterprises increasingly prefer on-device inference for privacy and compliance. The Raspberry Pi 5 paired with a hardware accelerator like the AI HAT+ 2 brings low-cost, offline generative AI into reach for developers, labs, and prototypes.

What you'll achieve

  • Assemble Raspberry Pi 5 + AI HAT+ 2 and prepare the OS
  • Install drivers and ONNX Runtime with the AI HAT+ 2 provider
  • Convert a small open LLM to ONNX and quantize for the NPU
  • Run a low-latency local inference server and tune performance
  • Apply practical cooling, power, and deployment tips for stable offline use

Hardware checklist and assembly

Before you start, gather the components:

  • Raspberry Pi 5 (64-bit board, 8GB recommended; 16GB preferred if you plan larger models)
  • AI HAT+ 2 (vendor kit with NPU + driver bundle) — $130
  • High-quality 5V/6A USB-C power supply (stable under load)
  • NVMe / high-speed microSD storage (NVMe via PCIe adapter if your board supports it)
  • Active cooling case, heatsinks, and optionally a small external fan
  • Ethernet or Wi-Fi for initial downloads

Assembly tips

  • Mount the AI HAT+ 2 onto the Pi 5 according to the vendor guide. The HAT adds an NPU that requires a vendor kernel driver — seat the vendor-provided ribbon or PCIe connector carefully.
  • Use thermal pads between the Pi SoC and a metal case base. The Pi 5 can thermally throttle under continuous inference without proper cooling.
  • Connect power and test boot before adding software. If the AI HAT+ 2 includes a firmware flash, complete that step now.

OS selection and base setup (fast commands)

For stability and the best upstream support in 2026, I recommend Ubuntu Server 24.04 LTS (64-bit) or the current Raspberry Pi 64-bit OS image. This walkthrough uses Ubuntu 24.04 in its examples. If you are planning a broader migration strategy for teams or fleets, a cloud migration checklist can help make your rollouts repeatable.

Flash image and first boot

Flash the image (example with Raspberry Pi Imager or using dd):

sudo dd if=ubuntu-24.04-server-arm64.img of=/dev/sdX bs=4M status=progress && sync

Connect over SSH (or a local console) and run an initial update:

sudo apt update && sudo apt upgrade -y
sudo apt install -y build-essential git python3-pip curl

Install AI HAT+ 2 drivers and SDK

Vendors typically provide a one-line installer that configures kernel modules and an ONNX Runtime provider. Replace the URL below with the installer script URL from your AI HAT+ 2 vendor.

curl -fsSL https://vendor.example/ai-hat2/install.sh | sudo bash

After installation, reboot and verify the device is visible:

sudo reboot
# After reboot
lsmod | grep ai_hat
# Or vendor CLI
aihat2-status

Common driver issues

  • If the kernel module fails to build, install the matching kernel headers: sudo apt install linux-headers-$(uname -r).
  • Ensure the vendor's signed kernel module matches your kernel version (rebuild or request updated package if needed).

Install ONNX Runtime with NPU provider

ONNX Runtime remains the most broadly supported runtime for NPU execution providers in 2026. The AI HAT+ 2 vendor usually supplies a provider wheel (e.g., onnxruntime_aih2-*.whl) that plugs into ONNX Runtime. If a provider wheel is supplied:

python3 -m pip install --upgrade pip
python3 -m pip install onnxruntime==2.12.0  # substitute vendor recommended version
python3 -m pip install /path/to/onnxruntime_aih2-*.whl

Verify provider availability:

python3 - <<'PY'
import onnxruntime as ort
print(ort.get_all_providers())
PY

You should see the AI HAT provider listed (example: 'AIHATExecutionProvider'). If not, re-check the driver install and Python environment.

Choose a compact model: target size and tradeoffs

Pick a model sized for edge inference. In 2026, many sub-2B parameter models are high-quality thanks to distillation and instruction tuning. For Raspberry Pi 5 + AI HAT+ 2, target models in the 300M–1.5B parameter range. Examples:

  • 1.3B distilled instruction-tuned models — good balance of quality and latency
  • 700M–1B dedicated edge-optimized models — best latency and memory
  • Small chat-oriented checkpoints (300M–500M) — extremely fast, for constrained assistant tasks

Tip: always test a model locally on your workload, because conversational latency and output quality trade off against each other.
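
To ballpark whether a candidate fits in the Pi's 8GB (or 16GB) of RAM, a back-of-the-envelope weight-size estimate helps. The sketch below is a rule of thumb, not a measurement, and the ~25% overhead factor for activations, kv-cache, and runtime buffers is an assumption:

python3 - <<'PY'
# Rough working-set estimate: parameters x bytes per weight, plus an assumed
# ~25% overhead for activations, kv-cache, and runtime buffers.
def approx_gb(params_billion, bits_per_weight, overhead=1.25):
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9 * overhead

for params in (0.3, 0.7, 1.3):
    for bits in (16, 8, 4):
        print(f'{params:.1f}B @ {bits:>2}-bit  ~ {approx_gb(params, bits):.1f} GB')
PY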

Convert an HF model to ONNX

Hugging Face's optimum and transformers toolset make conversion straightforward. Below is a minimal conversion pipeline for a causal LLM.

python3 -m pip install transformers optimum onnx onnxruntime-tools onnxruntime

# Example: export a small model to ONNX
python3 - <<'PY'
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

model_name = 'your-org/edge-llm-1.3B'  # pick a compact model on HF
output_dir = './onnx_model'

# export=True converts the PyTorch checkpoint to ONNX while loading
model = ORTModelForCausalLM.from_pretrained(model_name, export=True)
model.save_pretrained(output_dir)

# Keep the tokenizer next to the graph so the inference host is self-contained
AutoTokenizer.from_pretrained(model_name).save_pretrained(output_dir)
PY

Note: follow the vendor or optimum docs for precise export flags (opset version, dynamic axes). Exporting with static shapes helps quantization and NPU execution.
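
Before quantizing, it's worth confirming the exported opset and the exact input/output names. The filename model.onnx below is what recent optimum versions write by default; adjust it if your export produced a different name:

python3 - <<'PY'
import onnx

# Load only the graph structure (skip external weight data) to inspect metadata
m = onnx.load('onnx_model/model.onnx', load_external_data=False)
print('opsets :', {op.domain or 'ai.onnx': op.version for op in m.opset_import})
print('inputs :', [i.name for i in m.graph.input])
print('outputs:', [o.name for o in m.graph.output])
PY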

Quantize the ONNX model for the AI HAT+ 2

Quantization is the single biggest lever to make models run on edge NPUs. In 2026 the dominant approaches are:

  • Dynamic quantization (fast, easy) — good for CPU and many NPUs
  • Static quantization (requires calibration dataset) — better performance and accuracy on NPUs
  • 4-bit/GPTQ (higher complexity) — smaller memory footprint and lower latency for some NPUs

Example: dynamic int8 quantization with ONNX Runtime tools:

python3 - <<'PY'
from onnxruntime.quantization import quantize_dynamic, QuantType

# Adjust the input filename if your export produced a different name
quantize_dynamic('onnx_model/model.onnx',
                 'onnx_model/model.quant.onnx',
                 weight_type=QuantType.QInt8)
PY

If the AI HAT+ 2 supports 4-bit GPTQ formats, follow the vendor guide to run a GPTQ conversion (this often uses model-specific tools like gptq-for-onnx or vendor converters).
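
If you want to try static quantization, here's a minimal sketch using ONNX Runtime's calibration API. The input names and the handful of calibration prompts are illustrative; match them to your exported graph and your real workload:

python3 - <<'PY'
from onnxruntime.quantization import CalibrationDataReader, QuantType, quantize_static
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('./onnx_model')

class PromptReader(CalibrationDataReader):
    """Feeds a few representative prompts through the graph for calibration."""
    def __init__(self, prompts):
        self._iter = iter(prompts)
    def get_next(self):
        prompt = next(self._iter, None)
        if prompt is None:
            return None
        enc = tokenizer(prompt, return_tensors='np')
        # Feed names must match the exported graph's inputs
        return {'input_ids': enc['input_ids'], 'attention_mask': enc['attention_mask']}

prompts = [
    'Summarize the Raspberry Pi 5 hardware in two sentences.',
    'Draft a polite reply to a helpdesk ticket about Wi-Fi.',
    'Explain quantization to a new developer.',
]

quantize_static('onnx_model/model.onnx',
                'onnx_model/model.static.quant.onnx',
                calibration_data_reader=PromptReader(prompts),
                weight_type=QuantType.QInt8)
PY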

Run inference: a minimal server

Here's a compact Python example using ONNX Runtime and specifying the AI HAT+ 2 provider. Adjust provider name if your vendor uses a different string.

python3 - <<'PY'
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

model_path = 'onnx_model/model.quant.onnx'
provider = 'AIHATExecutionProvider'  # vendor-specific provider name

sess_opts = ort.SessionOptions()
sess_opts.intra_op_num_threads = 2
sess = ort.InferenceSession(model_path, sess_options=sess_opts,
                            providers=[provider, 'CPUExecutionProvider'])

tokenizer = AutoTokenizer.from_pretrained('./onnx_model')

def generate(prompt, max_new_tokens=128):
    # Input/output names depend on the exported graph; inspect sess.get_inputs()
    # and sess.get_outputs() and adjust the feed dict accordingly.
    input_ids = tokenizer(prompt, return_tensors='np')['input_ids']
    for _ in range(max_new_tokens):
        feeds = {'input_ids': input_ids,
                 'attention_mask': np.ones_like(input_ids)}
        logits = sess.run(None, feeds)[0]        # shape: (1, seq_len, vocab_size)
        next_id = int(np.argmax(logits[0, -1]))  # greedy decoding, no sampling
        if next_id == tokenizer.eos_token_id:
            break
        input_ids = np.concatenate(
            [input_ids, np.array([[next_id]], dtype=input_ids.dtype)], axis=1)
    return tokenizer.decode(input_ids[0], skip_special_tokens=True)

print(generate('Write a short on-device Pi tutorial:'))
PY

Notes:

  • Model I/O shapes and output names vary — inspect sess.get_inputs() and sess.get_outputs(); see the inspection sketch below.
  • Set intra_op_num_threads and inter_op_num_threads conservatively (1–4) to reduce contention on the Pi 5.
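
A quick way to do that inspection, and to confirm which execution provider the session actually selected (paths and the provider name carry over from the earlier examples):

python3 - <<'PY'
import onnxruntime as ort

sess = ort.InferenceSession('onnx_model/model.quant.onnx',
                            providers=['AIHATExecutionProvider', 'CPUExecutionProvider'])

print('active providers:', sess.get_providers())  # should list the NPU provider first if it is active
for i in sess.get_inputs():
    print('input :', i.name, i.shape, i.type)
for o in sess.get_outputs():
    print('output:', o.name, o.shape, o.type)
PY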

Performance tuning & latency optimization

Key levers you should use:

  • Quantization level: int8 is a great starting point. If vendor supports 4-bit, try it for better throughput but validate accuracy closely.
  • Threading: control ONNX Runtime threads. Example: sess_opts.intra_op_num_threads = 2 and environment variables OMP_NUM_THREADS=2.
  • CPU affinity: pin NPU management threads off heavy CPU cores. Use taskset for runtime processes that coordinate the NPU.
  • Batching: for assistants, keep batch size 1 for latency-sensitive tasks; use small batching for throughput jobs.
  • Token caching: keep past key-values cached if your ONNX graph supports a kv-cache, to avoid recomputing attention state for every token.

Example: environment variables

export OMP_NUM_THREADS=2
export OPENBLAS_NUM_THREADS=1
export MKL_NUM_THREADS=1
# Launch the inference server
python3 local_inference_server.py
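
To see whether a tuning change actually helps, time a fixed prompt before and after. A minimal sketch, assuming you saved the earlier inference example as a module named local_inference.py (a hypothetical filename) exposing generate():

python3 - <<'PY'
import time
from transformers import AutoTokenizer
from local_inference import generate  # hypothetical module: the earlier inference example

tokenizer = AutoTokenizer.from_pretrained('./onnx_model')
prompt = 'List three uses for a Raspberry Pi 5 in a home lab.'

start = time.perf_counter()
text = generate(prompt, max_new_tokens=64)
elapsed = time.perf_counter() - start

new_tokens = len(tokenizer(text)['input_ids']) - len(tokenizer(prompt)['input_ids'])
print(f'{new_tokens} new tokens in {elapsed:.2f}s -> {new_tokens / elapsed:.1f} tok/s')
PY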

Cooling, power, and thermal stability

A continuous generative assistant produces sustained load. Without cooling, the Pi 5 will thermal throttle and ruin latency. Practical steps:

  • Use a metal case with direct contact to the SoC via thermal pads.
  • Install an active fan (even small 30–40mm fans significantly help).
  • Monitor temperatures with vcgencmd measure_temp or cat /sys/class/thermal/thermal_zone0/temp (see the logging sketch after this list).
  • Prefer consistent latency over power savings: set the CPU governor to 'performance' with sudo cpupower frequency-set -g performance.
  • Use a stable high-current power supply that can handle NPU draws; avoid cheap chargers.
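
A small logger that samples the same sysfs node during a soak test, so you can spot throttling trends over 24–48 hours (the filename and 30-second interval are arbitrary; stop it with Ctrl+C):

python3 - <<'PY'
import time

# Appends 'unix_timestamp,temp_C' every 30 s; the sysfs value is millidegrees Celsius.
with open('pi_temps.csv', 'a') as log:
    while True:
        with open('/sys/class/thermal/thermal_zone0/temp') as f:
            temp_c = int(f.read().strip()) / 1000
        log.write(f'{int(time.time())},{temp_c:.1f}\n')
        log.flush()
        time.sleep(30)
PY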

Real-world example: a minimal offline chat assistant (FastAPI)

Turn the inference code into a small HTTP server to integrate with web apps or local automations. The import below assumes you saved the earlier inference example as a module named local_inference.py.

python3 -m pip install fastapi uvicorn

# save as app.py
from fastapi import FastAPI
from pydantic import BaseModel

# Hypothetical module: the earlier inference example saved as local_inference.py
from local_inference import generate as generate_local

app = FastAPI()

class Request(BaseModel):
    prompt: str

@app.post('/generate')
def generate(req: Request):
    resp = generate_local(req.prompt)  # the generate() function from the previous code
    return {'text': resp}

# Run with uvicorn: uvicorn app:app --host 0.0.0.0 --port 8000 --workers 1

Run with a single worker to avoid memory duplication. Use a reverse proxy (Caddy/Nginx) if exposing internally on a LAN.
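
A quick smoke test from another machine on the LAN (the hostname is illustrative; substitute your Pi's address):

python3 - <<'PY'
import requests  # pip install requests

# Replace raspberrypi.local with your Pi's hostname or IP
r = requests.post('http://raspberrypi.local:8000/generate',
                  json={'prompt': 'Draft a two-line status update.'},
                  timeout=120)
r.raise_for_status()
print(r.json()['text'])
PY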

Model updates, security, and offline management

When running offline, you must manage model lifecycle and integrity:

  • Sign and checksum model artifacts and verify them after every update (see the sketch after this list)
  • Keep a separate management host for downloads, then copy artifacts over air-gapped links if needed
  • Use minimal OS surface: disable unnecessary services and lock SSH keys
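
A minimal checksum check before loading a model artifact; the expected hash is a placeholder you would publish alongside the artifact from your management host:

python3 - <<'PY'
import hashlib, pathlib, sys

artifact = pathlib.Path('onnx_model/model.quant.onnx')
expected = 'PASTE_PUBLISHED_SHA256_HERE'  # placeholder

h = hashlib.sha256()
with artifact.open('rb') as f:
    for chunk in iter(lambda: f.read(1 << 20), b''):  # stream in 1 MiB chunks
        h.update(chunk)

if h.hexdigest() != expected:
    sys.exit(f'Checksum mismatch for {artifact}: {h.hexdigest()}')
print('Checksum OK')
PY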

Troubleshooting checklist

  • Driver not visible: re-run vendor installer, check dmesg for kernel errors
  • ONNX provider not listed: confirm Python environment and that onnxruntime wheel version matches provider
  • OOM during load: reduce model size, use 4-bit conversion, offload tokenizer/embedding work to CPU, or upgrade to 16GB Pi variant
  • Poor-quality outputs after quantization: try static quantization with calibration or GPTQ-style conversion

Estimated performance (practical guidance, not promises)

Performance varies by model, quantization, and workload. To set expectations in 2026:

  • A well-quantized 700M–1.3B model on Pi 5 + AI HAT+ 2 often delivers tens to low hundreds of tokens/sec on throughput jobs, and sub-second to mid-second first-token latency on short prompts when tuned.
  • Running a full conversation (multi-turn with kv-cache) will have an initial latency hit for the first prompt but much lower per-token cost afterwards if kv-cache is supported.

These are broad ranges; your mileage will vary depending on model internals and whether the AI HAT+ 2 supports 4-bit matrix ops natively.

Advanced strategies and future-proofing

For longer-term projects or production prototypes:

  • Automate model conversion and quantization using CI pipelines so updates are reproducible.
  • Consider model distillation or tiny-instruction models to improve perceived responsiveness.
  • Use ONNX Runtime profiling to identify bottlenecks (memory copy, CPU pre/post processing).
  • Keep firmware and runtime providers updated — vendors pushed major performance updates in late 2025 that improved 4-bit ops on several NPUs.

Case study: prototype helpdesk assistant (brief)

In a small pilot (two Pi 5 kits with AI HAT+ 2), our team deployed a 1.0B distilled assistant for a local-helpdesk PoC. Key wins:

  • Privacy: all conversations remained on-prem, simplifying compliance
  • Latency: average reply time improved by ~30% vs. cloud API for short prompts due to reduced network round-trip
  • Cost: predictable hardware cost vs. per-request cloud bills — payback in ~3 months for heavy query volumes

Final checklist before you go live

  1. Confirm driver and ONNX provider are installed and tested
  2. Use a compact model (<=1.5B) and quantize; validate generation quality
  3. Apply cooling and a stable power source; monitor temps for 24–48 hours under load
  4. Run a load test and tune OMP_NUM_THREADS / affinity
  5. Lock down SSH, sign models, and version artifacts

Wrap-up — why this setup wins in 2026

Edge AI in 2026 isn't just a novelty: the combination of compact distilled models, robust ONNX tooling, and affordable NPUs (like the AI HAT+ 2) makes offline generative assistants practical. For developers and IT teams who want fast prototyping, strong privacy, and low operational uncertainty, a Raspberry Pi 5 + AI HAT+ 2 is a compelling on-prem platform.

Pragmatic tip: start with a small model and aggressive quantization, validate quality, then scale up model size or precision only if value requires it.

Call to action

Ready to build your own offline assistant? Start with the hardware checklist and pick an edge-optimized model from Hugging Face. If you want a reproducible starter repo, I maintain a tested Pi 5 + AI HAT+ 2 template with conversion scripts, quantization recipes, and a FastAPI server — grab it, adapt it, and push it into your lab. Want the repo link or a troubleshooting walkthrough tuned to your model? Reply with your model name and I'll generate step-by-step commands for your exact setup.


Related Topics

#RaspberryPi #EdgeAI #HowTo