{% extends "base.html" %} {% block title %}LLM Settings - AUTARCH{% endblock %} {% block content %}
Configured backend: {{ llm_backend }}. Select a tab, fill in the settings, click Save & Activate, then Load Model to initialise.
Local GGUF models may take 10–60 s to load depending on size. The page will wait — check the Debug Log for live output.
Scans for .gguf, .ggml, and .bin files, as well as SafeTensors model directories.
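A minimal sketch of what such a scan might look like; the extension list and the rule for recognising a SafeTensors directory are assumptions for illustration, not AUTARCH's actual implementation.

```python
# Hypothetical model scan: loose GGUF/GGML/bin files plus any directory
# that contains .safetensors shards (treated as one model).
from pathlib import Path

MODEL_EXTENSIONS = {".gguf", ".ggml", ".bin"}  # assumed list


def scan_models(root: str) -> list[Path]:
    """Return loose model files and SafeTensors model directories under root."""
    found = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix.lower() in MODEL_EXTENSIONS:
            found.append(path)
        elif path.is_dir() and any(path.glob("*.safetensors")):
            found.append(path)
    return found
```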
Requires an Anthropic account. Get your API key from the console.
Also compatible with any OpenAI-format endpoint (LiteLLM, Ollama via /v1, vLLM, LocalAI, etc.): just set the Base URL to your local server.
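Setting the Base URL works because all of these servers accept the same chat-completions request shape. A stdlib-only sketch of that request, with the URL and model name as placeholders (the Ollama port is just an example):

```python
# Illustrative only: builds (but does not send) an OpenAI-format
# chat-completions request against a local server.
import json
from urllib.request import Request

base_url = "http://localhost:11434/v1"  # placeholder, e.g. a local Ollama server
payload = {
    "model": "local-model",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello"}],
}
req = Request(
    f"{base_url}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer not-needed",  # most local servers ignore the key
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here to stay offline.
```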