Ollama Introduction
Ollama is a platform that lets users download and run large language models locally. It provides an easy-to-use text generation interface, similar to OpenAI's, and requires no development experience to interact with a model directly. Ollama also supports hot-swapping models, giving users flexibility and variety.
Install Ollama
To install Ollama, visit the download page on the official website: Ollama Download Page. There you can choose the version that matches your operating system. Currently, Ollama supports macOS 11 Big Sur or later.
macOS users
For macOS users, you can click the download link directly to obtain the Ollama zip archive: Download for macOS.
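Alternatively, if you have Homebrew set up, Ollama can also be installed from the terminal:

```
# install the Ollama command-line tool via Homebrew
brew install ollama
```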
Windows users
For Windows users, follow the installation steps at the link above. During installation, you can register to receive notifications about new updates.
Using Ollama
After the installation is complete, you can view Ollama's available commands from the command line. For example, in Windows PowerShell, type `ollama` to display the help information and available commands.
```
PS C:\Users\Admin> ollama
Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.
PS C:\Users\Admin>
```
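As the last line of the help output notes, every subcommand has its own help page. For example, to list the flags that `ollama run` accepts:

```
PS C:\Users\Admin> ollama run --help
```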
Download and use large models
Ollama’s model library provides a variety of large language models to choose from. You can find and download the model you need by visiting the Ollama Model Library.
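Once you have picked a model from the library, download it with the `pull` command; gemma:2b here is just one example, and it is among the models installed later in this article:

```
# download the gemma 2B model from the Ollama registry
ollama pull gemma:2b
```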
View installed models
After installing a model, use the `ollama list` command to view the list of installed models.
```
PS C:\Users\Admin> ollama list
NAME            ID              SIZE      MODIFIED
gemma:2b        b50d6c999e59    1.7 GB    About an hour ago
llama2:latest   78e26419b446    3.8 GB    9 hours ago
qwen:latest     d53d04290064    2.3 GB    8 hours ago
PS C:\Users\Admin>
```
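You can also inspect an installed model with the `show` subcommand from the help output above. As a minimal sketch (the exact flags may vary by version; check `ollama show --help`):

```
# print the Modelfile of an installed model
ollama show gemma:2b --modelfile
```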
Run the model
With the `ollama run` command, you can run a specific model. For example, `ollama run qwen` will start the qwen model and open an interactive chat session in the terminal.
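Under the hood, Ollama exposes an HTTP API on port 11434 (the same API that OpenWebUI, introduced below, talks to). As a minimal sketch, you can send a one-off generation request with curl; the prompt here is only an example:

```
# ask a locally running model for a completion via the REST API
curl http://localhost:11434/api/generate -d '{
  "model": "qwen",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

In PowerShell, invoke `curl.exe` explicitly, since `curl` there is an alias for Invoke-WebRequest.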
Introduction to OpenWebUI
OpenWebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI that supports fully offline operation and is compatible with both Ollama and OpenAI-compatible APIs. It gives users a visual interface, making interaction with large language models more intuitive and convenient.
Install OpenWebUI
- If you have Ollama on your computer, use the following command:
```
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
- If Ollama is on a different server, set `OLLAMA_BASE_URL` to that server's URL:
```
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
- After the installation is complete, you can access OpenWebUI at http://localhost:3000.
You will now find that [Select a model] lets you choose the model we just downloaded. This gives us a visual interface similar to ChatGPT, and it can even load multiple models at once for side-by-side conversation and comparison.
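If the page does not come up, you can verify that the container is running and inspect its logs with standard Docker commands:

```
# confirm the open-webui container is up
docker ps --filter name=open-webui
# follow its logs to look for errors
docker logs -f open-webui
```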