
🦙 llama-telegram-bot

What?

A chatbot for Telegram powered by llama.cpp. Try the live instance here: @telellamabot

How?

llama-telegram-bot is written in Go and uses go-llama.cpp, a Go binding for llama.cpp.

Quick Start

Getting started is simple: all parameters are passed as environment variables.

  1. MODEL_PATH=/path/to/model
  2. TG_TOKEN=your_telegram_bot_token_here
  3. Q_SIZE=1000 - task queue limit (optional, default: 1000)
  4. N_TOKENS=1024 - number of tokens to predict (optional, default: 1024)
  5. N_CPU=4 - number of CPU cores to use (optional, default: all available)
  6. SINGLE_MESSAGE_PROMPT - prompt template for a direct message to the bot (default in .env.example)
  7. REPLY_MESSAGE_PROMPT - prompt template used when you reply to the bot's answer (default in .env.example)
  8. STOP_WORD - character sequence at which prediction stops (default in .env.example)
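
The variables above can be collected in a `.env` file. A minimal sketch (the model filename and token are illustrative placeholders; the real defaults and prompt templates live in `.env.example`):

```
MODEL_PATH=/models/llama-7b.Q4_K_M.gguf
TG_TOKEN=123456:ABC-your-telegram-bot-token
Q_SIZE=1000
N_TOKENS=1024
N_CPU=4
```

Optional variables left out here (`SINGLE_MESSAGE_PROMPT`, `REPLY_MESSAGE_PROMPT`, `STOP_WORD`) fall back to the values shipped in `.env.example`.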

Docker Compose

Local build (Preferred)

  1. git clone https://github.com/thedmdim/llama-telegram-bot
  2. cp .env.example .env and edit .env as needed
  3. docker compose up -d

Pull from Docker Hub

  1. git clone https://github.com/thedmdim/llama-telegram-bot
  2. cp .env.example .env and edit .env as needed
  3. docker compose -f docker-compose.hub.yml up -d

Build and run as binary

You need to have Go and CMake installed.

  1. git clone --recurse-submodules https://github.com/thedmdim/llama-telegram-bot
  2. cd llama-telegram-bot && make
  3. go build .
  4. env TG_TOKEN=<your_telegram_bot_token> MODEL_PATH=/path/to/your/model ./llama-telegram-bot

