Ollama Settings

Ollama is an excellent tool for running large language models locally. WordOllama lets you call locally running Ollama models directly from Word/WPS.

Prerequisites

  • Ollama installed
  • At least one model downloaded (e.g. qwen2.5:7b)
  • Ollama running in the background
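The prerequisites above can be checked programmatically. A minimal sketch (the parsing assumes the usual `ollama list` output: a header row followed by one model per line; the helper names are our own):

```python
import shutil
import subprocess

def ollama_installed() -> bool:
    """True if the ollama CLI can be found on PATH."""
    return shutil.which("ollama") is not None

def downloaded_models() -> list[str]:
    """Model names reported by `ollama list` (empty if the CLI is missing)."""
    if not ollama_installed():
        return []
    out = subprocess.run(["ollama", "list"], capture_output=True, text=True)
    # Skip the header row; the first column of each line is the model name
    return [line.split()[0] for line in out.stdout.splitlines()[1:] if line.strip()]
```

If `downloaded_models()` comes back empty, pull a model first (e.g. `ollama pull qwen2.5:7b`).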

Configuration Steps

  1. Open the WordOllama settings interface
  2. Select Ollama as the AI provider
  3. Enter http://localhost:11434 as the API address (default)
  4. Save the configuration, then select your model in the WordOllama panel

Recommended Models

  • qwen2.5:7b: strong Chinese-language capability, excellent overall performance
  • qwen2.5:3b: lightweight version, suitable for low-spec computers
  • llama3.2:3b: Meta's lightweight model, good English performance
  • deepseek-r1:8b: DeepSeek model with strong reasoning ability
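Once configured, WordOllama talks to the same HTTP API you can call yourself. A minimal sketch of a request to Ollama's `/api/generate` endpoint at the default address (the function names are our own; the endpoint and fields are Ollama's public API):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default address

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,    # e.g. "qwen2.5:7b"
        "prompt": prompt,
        "stream": False,   # one complete response instead of a token stream
    }

def generate(model: str, prompt: str, base_url: str = OLLAMA_URL) -> str:
    """Send a prompt to a locally running model and return its response text."""
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        base_url + "/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]
```

For example, `generate("qwen2.5:7b", "Hello")` returns the model's reply as a string, provided Ollama is running and the model is downloaded.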

Verify Connection

Click the "Test Connection" button in the settings interface. If configured correctly, a success message will appear.

Troubleshooting

Can't connect to Ollama?

  • Make sure Ollama is running: run ollama list in a terminal
  • The default address is http://localhost:11434; make sure the address configured in WordOllama matches it
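If the connection keeps failing, it helps to narrow down whether anything is listening on the port at all, separately from any HTTP-level problem. A small sketch using only a TCP connection attempt (host and port are Ollama's defaults):

```python
import socket

def ollama_port_open(host: str = "localhost", port: int = 11434) -> bool:
    """True if something accepts TCP connections on the given host:port."""
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False
```

If this returns False, Ollama is not running (or is listening on a different address); if it returns True but WordOllama still cannot connect, the problem lies above the network layer, e.g. a mistyped API address.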

Model loading slow?

The first request to a model loads it into memory, which can take a while; please be patient. Subsequent requests reuse the loaded model and are much faster.
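Ollama unloads a model after it has been idle for a few minutes, so the load cost can recur. Its generate/chat API accepts a keep_alive field controlling how long the model stays in memory; a sketch of such a request body (the "10m" duration is just an example value):

```python
def build_request_with_keep_alive(model: str, prompt: str,
                                  keep_alive: str = "10m") -> dict:
    """Request body for /api/generate that keeps the model loaded longer.

    keep_alive accepts durations like "10m" or "1h"; -1 keeps the model
    loaded indefinitely, and 0 unloads it right after the response.
    """
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "keep_alive": keep_alive,
    }
```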