ollama-bot
Interact with Ollama LLMs using the LXMFy bot framework.
Setup
curl -o .env https://raw.githubusercontent.com/lxmfy/ollama-bot/main/.env-example
Edit .env with your Ollama API URL, model name, and LXMF address.
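The file is a plain KEY=value dotenv file. A hypothetical sketch of what it might contain — the variable names and values below are illustrative placeholders, not necessarily the actual keys; consult .env-example for the real ones:

```
# Hypothetical example values -- see .env-example for the actual keys
OLLAMA_API_URL=http://localhost:11434
OLLAMA_MODEL=llama3
LXMF_ADDRESS=<your LXMF destination hash>
```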
Installation and Running
Using Makefile
Requires poetry and make to be installed.
make install
make run
Using pipx
pipx install git+https://github.com/lxmfy/ollama-bot.git
lxmfy-ollama-bot
Using Poetry directly
poetry install
poetry run lxmfy-ollama-bot
Docker
Using Makefile
make docker-pull
make docker-run
Using Docker directly
First, pull the latest image:
docker pull ghcr.io/lxmfy/ollama-bot:latest
Then, run the bot, mounting your .env file:
docker run -d \
--name ollama-bot \
--restart unless-stopped \
--network host \
-v $(pwd)/.env:/app/.env \
ghcr.io/lxmfy/ollama-bot:latest
Commands
Command prefix: /
/help - show help message
/about - show bot information
Chat
Send any message without the / prefix to chat with the AI model.
The bot will automatically respond using the configured Ollama model.
Note: The bot only uses Ollama's /api/generate endpoint, so it won't remember your previous messages (no conversation history is kept between requests).
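The statelessness follows from the shape of that endpoint: each call to /api/generate carries a single standalone prompt and no message history. A minimal sketch of building such a request with only the standard library — the function name, URL, and model are placeholder assumptions, not the bot's actual code:

```python
import json
import urllib.request


def build_generate_request(api_url, model, prompt):
    """Build a POST request for Ollama's /api/generate endpoint.

    Only the current prompt is sent -- there is no message-history
    field, which is why each exchange is independent of the last.
    """
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for the full response in one JSON object
    }
    return urllib.request.Request(
        f"{api_url}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Placeholder URL and model; the request is built here, not sent.
req = build_generate_request("http://localhost:11434", "llama3", "Hello!")
```

By contrast, Ollama's /api/chat endpoint accepts a list of prior messages, which is what conversational memory would require.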
