Running your own AI in-house – Part 2 – LLM/LVM – Ollama on the Web (API) and Code Editor
Let's continue our article series by taking a look at the HTTP API, optimizing the systemd service, putting Nginx in front of it with HTTPS, and requiring authentication; finally, we configure our code editor to use our remote instance.
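As a quick taste of what's ahead, here is a minimal call against Ollama's HTTP API on its default port 11434; this assumes the service from Part 1 is running locally and that a model such as `llama3` has already been pulled (swap in whichever model you installed):

```bash
# Minimal, non-streaming generation request against a local Ollama instance.
# 11434 is Ollama's default listen port; "llama3" is an assumed model name.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

By the end of this part, the same request will go through Nginx over HTTPS with authentication, so the instance can be used safely from other machines.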