Appendices

User Manual

Click to view the user manual

Deployment Manual

Follow the instructions below for deployment



📌 Prerequisites



Install the following tools:
1. Conda
2. Docker & Docker Compose
3. Node.js (v18+)
4. npm (bundled with Node.js)
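Before continuing, it can help to confirm that the installed Node.js meets the v18+ requirement above. A minimal POSIX-shell sketch (the `node_major` and `check_node` helpers are hypothetical, not part of the repository):

```shell
# Hypothetical helper: extract the major version from a Node.js version string.
node_major() {
  ver="${1#v}"        # strip a leading "v", e.g. v18.17.0 -> 18.17.0
  echo "${ver%%.*}"   # keep only the major component
}

# Hypothetical helper: report whether a version string satisfies v18+.
check_node() {
  if [ "$(node_major "$1")" -ge 18 ]; then
    echo "ok"
  else
    echo "too old: need Node.js v18+"
  fi
}
```

Then run, for example, `check_node "$(node --version)"` to verify the local installation.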

01

Backend Setup

Clone the repository and set up the conda environment:


git clone https://github.com/Hao-Hao211/ChatLincs.git
cd ChatLincs
conda env create --platform osx-arm64 -f chatlincs.yml
conda activate chatlincs


Launch Docker services:


cd backend/app
docker compose up -d


Note: The ollama service will attempt to download the llava:7b model automatically on first launch. If the download fails (e.g. due to network restrictions), you can pull the model manually on your host machine:


ollama pull llava:7b


Then ensure the model is saved under the mounted volume path (e.g. ./ollama-models) so it can be used inside the container.
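The volume mapping described above might look like the following fragment. This is a sketch, not the repository's actual backend/app/docker-compose.yml; the real service names, ports, and paths may differ (11434 and /root/.ollama are Ollama's documented defaults, ./ollama-models is the host path from the note above):

```yaml
# Hypothetical docker-compose.yml fragment illustrating the ollama volume mount.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"                   # default Ollama API port
    volumes:
      - ./ollama-models:/root/.ollama   # host path where manually pulled models live
```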

02

Frontend Setup

Install dependencies:


cd ChatLincs/frontend
npm install --legacy-peer-deps

03

Run Backend (Flask)

Run Flask server:


cd backend
python run.py


Backend API at: http://localhost:5000

04

Run Frontend (Next.js)

Launch frontend server:


cd frontend
npm run dev


Frontend at: http://localhost:3000
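With both servers running, a quick smoke check can confirm they respond. A sketch assuming curl is available on the host (`wait_for` is a hypothetical helper, not part of the repository):

```shell
# Hypothetical helper: poll a URL until it responds, or give up after N attempts.
wait_for() {
  url="$1"
  attempts="${2:-30}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "up"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timeout"
  return 1
}
```

For example: `wait_for http://localhost:5000 30 && wait_for http://localhost:3000 30` checks the backend and frontend URLs from the steps above.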

Development Blog

Check out our development blog here:

Monthly Videos