
ollama-bot


Interact with Ollama LLMs using the LXMFy bot framework.

![showcase](lxmfy-ollama-showcase.png)

Setup

curl -o .env https://raw.githubusercontent.com/lxmfy/ollama-bot/main/.env-example

Edit .env with your Ollama API URL, model, and LXMF address.
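
The authoritative variable names live in the downloaded .env-example; the sketch below is only illustrative of the kind of values you fill in (the names shown are assumptions, except CONTEXT_FILES, which the repository's .env-example documents for attaching context files to the system prompt):

```ini
# Illustrative .env sketch -- check .env-example for the real variable names
OLLAMA_API_URL=http://localhost:11434   # where your Ollama server listens
OLLAMA_MODEL=llama3                     # any model you have pulled locally
LXMF_ADDRESS=<your LXMF address hash>
CONTEXT_FILES=                          # optional comma-separated file paths
```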

Installation and Running

Using Makefile

Requires poetry and make to be installed.

make install
make run

Using pipx

pipx install git+https://github.com/lxmfy/ollama-bot.git
lxmfy-ollama-bot

Using Poetry directly

poetry install
poetry run lxmfy-ollama-bot

Docker

Using Makefile

make docker-pull
make docker-run

Using Docker directly

First, pull the latest image:

docker pull ghcr.io/lxmfy/ollama-bot:latest

Then, run the bot, mounting your .env file:

docker run -d \
  --name ollama-bot \
  --restart unless-stopped \
  --network host \
  -v $(pwd)/.env:/app/.env \
  ghcr.io/lxmfy/ollama-bot:latest
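
If you prefer Docker Compose, the flags above translate directly into a compose file. This is a sketch mirroring the `docker run` invocation, not an official compose file shipped by the repository:

```yaml
services:
  ollama-bot:
    image: ghcr.io/lxmfy/ollama-bot:latest
    restart: unless-stopped
    network_mode: host          # matches --network host
    volumes:
      - ./.env:/app/.env        # matches the .env bind mount
```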

Commands

Command prefix: /

/help - show help message
/about - show bot information

Chat

Send any message without the / prefix to chat with the AI model.

The bot will automatically respond using the configured Ollama model.

Note: The bot only uses Ollama's /api/generate endpoint, so it won't remember your previous messages.
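
The statelessness follows from the shape of the request itself: /api/generate carries a single prompt string per call, while Ollama's /api/chat endpoint accepts a messages list that can carry conversation history. A minimal sketch of the two payload shapes (model name and helper names are illustrative, not from the bot's code):

```python
import json

def build_generate_payload(model: str, prompt: str) -> dict:
    # /api/generate: each request contains only one prompt, so the model
    # sees nothing from earlier exchanges -- hence no memory.
    return {"model": model, "prompt": prompt, "stream": False}

def build_chat_payload(model: str, history: list) -> dict:
    # /api/chat: the caller appends every turn to `history`, so the model
    # receives the full conversation on each request.
    return {"model": model, "messages": history, "stream": False}

# Stateless request -- no history field anywhere in the payload.
print(json.dumps(build_generate_payload("llama3", "Hello!")))

# Stateful alternative -- prior turns ride along in `messages`.
history = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there."},
    {"role": "user", "content": "What did I just say?"},
]
print(json.dumps(build_chat_payload("llama3", history)))
```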