
[Windows] Let’s use local LLMs with Open WebUI! (Docker + Ollama) Installation and Usage Instructions

Open WebUI is an open-source front-end tool that connects with locally running LLMs (e.g., LLaMA, Mistral, Gemma, etc.) to provide a ChatGPT-like interactive UI. Below is a step-by-step guide on how to use it for beginners.

Table of Contents

Background: Overview of Open WebUI

Open WebUI connects to local LLM back ends (e.g., Ollama, LM Studio, llama.cpp, etc.) via an HTTP API and provides a web UI for chatting with them.
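
If Ollama is already installed, a quick way to confirm that its HTTP API is reachable is to list the local models. This is a minimal check, assuming Ollama is listening on its default port 11434:

# List the models the local Ollama server exposes over its HTTP API
# (assumes Ollama is running on the default port 11434)
curl http://localhost:11434/api/tags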

Usage steps (assuming Docker)

Here is how to use it in environments where Docker is available (Windows/macOS/Linux).

Preparation: Installing Docker Desktop for Windows

Follow these instructions to install Docker Desktop for Windows and launch it in Linux container mode.

Make sure Docker (Linux container mode) is running

Open PowerShell or Command Prompt

  1. Search for “PowerShell” in the Start menu and launch it
  2. Or you can use “Command Prompt (cmd)”

Run the following command from PowerShell or Command Prompt to check your docker version:

docker version

Example output: if the version information is displayed correctly, you are good to go.

Client:
 Version:           28.0.4
 API version:       1.48
 Go version:        go1.23.7
 Git commit:        b8034c0
 Built:             Tue Mar 25 15:07:48 2025
 OS/Arch:           windows/amd64
 Context:           desktop-linux

Server: Docker Desktop 4.40.0 (187762)
 Engine:
  Version:          28.0.4
  API version:      1.48 (minimum version 1.24)
  Go version:       go1.23.7
  Git commit:       6430e49
  Built:            Tue Mar 25 15:07:22 2025
  OS/Arch:          linux/amd64
  Experimental:     false
・・・
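
As an additional sanity check, you can ask Docker which container OS type it is currently using; in Linux container mode this should print linux (a minimal sketch using docker’s --format option):

# Should print "linux" when Docker Desktop is in Linux container mode
docker info --format "{{.OSType}}"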

Launching Open WebUI (Docker)

The following command launches Open WebUI in a Docker container, makes it accessible on port 3000, connects it to the Ollama server, runs it in the background, and persists its data.

Run the command:

docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://192.168.1.10:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

*If port 3000 is already in use, change it to another value, such as 3001:8080.
Example URL to access: http://localhost:3001
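
The command above assumes Ollama is reachable at 192.168.1.10:11434. If Ollama is instead running directly on the same Windows machine as Docker Desktop, the special hostname host.docker.internal usually resolves to the host from inside the container; a variant of the command under that assumption:

docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://host.docker.internal:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main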

Command description

Item | Meaning | Explanation
docker run | Docker command | Creates a new Docker container and starts it.
-d | Detached mode | Runs the container in the background (logs are not shown in the terminal).
-p 3000:8080 | Port mapping | Forwards host port 3000 to port 8080 inside the container, so it is reachable in a browser at http://localhost:3000. *If port 3000 is already in use, change it to another value such as 3001:8080 (example URL: http://localhost:3001).
-e OLLAMA_BASE_URL=http://192.168.1.10:11434 | Environment variable | Tells Open WebUI the URL of the Ollama API. This example assumes Ollama is running at 192.168.1.10:11434.
-v open-webui:/app/backend/data | Data volume (persistence) | Mounts a Docker volume named open-webui at the app’s data directory /app/backend/data. Open WebUI settings, conversation history, etc. are kept there.
--name open-webui | Container name | Names this container open-webui. Note that two containers with the same name cannot exist at the same time.
--restart always | Auto-restart policy | Docker automatically restarts the container when it stops. It is also restarted automatically after the PC reboots, which makes it suitable for running as a service.
ghcr.io/open-webui/open-webui:main | Docker image to use | Pulls the official Open WebUI image with the main tag from the GitHub Container Registry and starts it.
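
After running the command, you can check whether the container started correctly and inspect its startup logs; a short verification sketch:

# Confirm the container is running and see its port mapping
docker ps --filter name=open-webui

# Follow the container's startup logs (press Ctrl+C to stop following)
docker logs -f open-webui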

Note: Main docker commands involved

Command | Purpose
docker run | Creates a new container and starts it (first run, or when recreating).
docker start <container name> | Starts an existing, stopped container (without recreating it).
docker restart <container name> | Stops and restarts the container in one command.
docker stop <container name> | Stops the container temporarily (does not delete it).
docker rm <container name> | Deletes the container permanently (together with its name and configuration).
docker ps -a | Lists all containers, including stopped ones.
docker volume ls | Lists Docker volumes.

Example 1) Stopping → starting open-webui

docker stop open-webui
docker start open-webui

Example 2) Delete → recreate open-webui

docker rm open-webui
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://192.168.1.10:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
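
If the reason for recreating the container is to update Open WebUI itself, pulling the latest image before the docker run step is usually enough; data stored in the open-webui volume is kept across recreation:

# Fetch the newest image before recreating the container
docker pull ghcr.io/open-webui/open-webui:main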

Where Open WebUI’s files are stored when you run the above command

Item | Location
Executable/code | Inside the container (stored under /var/lib/docker/ on the host; direct access is not recommended).
Data (e.g., chat history) | Saved in the Docker volume specified with -v open-webui:/app/backend/data.
Web UI | Served by the front-end running inside the container.
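
If you want to see where the named volume actually lives, docker volume inspect shows its mount point. Note that with Docker Desktop for Windows this path is inside the Docker (WSL 2) VM, not a regular Windows folder:

# Show details of the volume, including its Mountpoint
docker volume inspect open-webui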

Preparation: Installing Ollama (Windows version)

This article explains the steps to install Ollama and launch a local LLM.

Verify that Ollama is running

ollama run <model name>

For example, run ollama run llama3 to start a model.
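
If the model has not been downloaded yet, ollama run will fetch it first; you can also pull it in advance and check what is installed. A short sketch, with llama3 as an example model name:

# Download a model ahead of time (example: llama3)
ollama pull llama3

# List the models that are installed locally
ollama list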

Access with a browser

Visit the following URL:

http://localhost:<port number>

Replace the port number with the value you specified (e.g., 3001) as needed.

http://localhost:3000

A login/registration screen is displayed, so set any name and password to register and log in.

Results of accessing http://localhost:3000/

Talk to a model

  • The first time you start it, the models available in Ollama are detected and used automatically.
  • If you have multiple models, you can switch between them for each chat.
  • You can also edit prompt templates, system prompts, and more.

How to start it from the second time onward

Launch Docker Desktop (Linux container mode)

On Windows, make sure Docker Desktop is running (in Linux container mode).

Launch Ollama (example: ollama run llama3)

ollama run <model name>

Ollama runs as a background daemon, so once started it stays resident.
Closing the terminal may also stop the Ollama process; in that case, start it again with the same command.
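
To check whether the Ollama daemon is currently up, you can query its API port; the root endpoint typically replies with a short status message (assuming the default port 11434):

# Responds with "Ollama is running" when the daemon is up
curl http://localhost:11434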

Launch Open WebUI

If you are running Open WebUI as a Docker container:
*open-webui is the container name; if you gave the container a different name, start it with that name.

docker start open-webui

Access the Open WebUI in your browser

http://localhost:<port number>

By default, the port number is 3000 or 8080, depending on how you launched the container.
You can check the actually exposed port number with docker ps.
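
For a compact view of just the container names and their published ports, docker ps accepts a format string; a minimal sketch:

# Show container names together with their published ports
docker ps --format "table {{.Names}}\t{{.Ports}}"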

Official documentation

Frequently asked questions

Question | Answer
The model does not appear | Ollama is not running, or the API connection URL (OLLAMA_BASE_URL) is specified incorrectly.
I want to use a port other than 3000 | Change the host side of -p, e.g., -p 3001:8080.
Does it understand Japanese? | Yes, if the model supports it (LLaMA3, Qwen, Gemma, etc.).
Does it work with models that use the GPU? | If the model starts in Ollama, there is no problem; it depends on your GPU environment.

