李伟@五瓣科技 / agentchat · Commits

Commit 2349111b, authored May 31, 2025 by Wade

add lightrag dbgpt graph

Parent: a1665fde
Changes: 6 changed files, with 858 additions and 0 deletions (+858 -0)
docs/swagger.yaml (+191 -0)
idx.go (+223 -0)
plugins/graphrag/dbgpt-graphrag.toml (+96 -0)
plugins/graphrag/docker-compose.yml (+72 -0)
plugins/lightrag/docker-compose.yml (+71 -0)
plugins/lightrag/env (+205 -0)
docs/swagger.yaml · View file @ 2349111b
@@ -62,5 +62,196 @@ paths:
                    type: string
                    description: Error message
                    example: "Invalid request body"
  /idx/milvus:
    post:
      summary: Store Milvus index data
      description: Stores question, answer, and summary data for Milvus indexing.
      tags:
        - Indexing
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                question:
                  type: string
                  description: The question to store
                  example: "What is AI?"
                answer:
                  type: string
                  description: The answer to store
                  example: "AI is artificial intelligence..."
                summary:
                  type: string
                  description: The summary of the Q&A
                  example: "AI overview"
                username:
                  type: string
                  description: The username of the requester
                  example: "john_doe"
                user_id:
                  type: string
                  description: The unique identifier for the user
                  example: "user_12345"
              required:
                - question
                - answer
      responses:
        '200':
          description: Successful storage of Milvus index data
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: integer
                    description: The ID of the stored record
                    example: 1
        '400':
          description: Invalid input
          content:
            application/json:
              schema:
                type: object
                properties:
                  error:
                    type: string
                    description: Error message
                    example: "Invalid request body"
        '500':
          description: Server error
          content:
            application/json:
              schema:
                type: object
                properties:
                  error:
                    type: string
                    description: Error message
                    example: "Failed to store data"
  /idx/graphrag:
    post:
      summary: Store GraphRAG index data
      description: Stores question, answer, and summary data for GraphRAG indexing.
      tags:
        - Indexing
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                question:
                  type: string
                  description: The question to store
                  example: "What is NLP?"
                answer:
                  type: string
                  description: The answer to store
                  example: "NLP is natural language processing..."
                summary:
                  type: string
                  description: The summary of the Q&A
                  example: "NLP overview"
                username:
                  type: string
                  description: The username of the requester
                  example: "john_doe"
                user_id:
                  type: string
                  description: The unique identifier for the user
                  example: "user_12345"
              required:
                - question
                - answer
      responses:
        '200':
          description: Successful storage of GraphRAG index data
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: integer
                    description: The ID of the stored record
                    example: 1
        '400':
          description: Invalid input
          content:
            application/json:
              schema:
                type: object
                properties:
                  error:
                    type: string
                    description: Error message
                    example: "Invalid request body"
        '500':
          description: Server error
          content:
            application/json:
              schema:
                type: object
                properties:
                  error:
                    type: string
                    description: Error message
                    example: "Failed to store data"
  /index:
    post:
      summary: Trigger indexing of existing QA data
      description: Triggers the indexing process for existing QA data in the database using the pgvector extension.
      tags:
        - Indexing
      requestBody:
        required: false
        content:
          application/json:
            schema:
              type: object
              properties:
                apiKey:
                  type: string
                  description: The API key for authentication
                  example: "sk-1234567890abcdef"
      responses:
        '200':
          description: Indexing process completed successfully
          content:
            application/json:
              schema:
                type: object
                properties:
                  message:
                    type: string
                    description: Success message
                    example: "Indexing completed successfully"
        '400':
          description: Invalid input
          content:
            application/json:
              schema:
                type: object
                properties:
                  error:
                    type: string
                    description: Error message
                    example: "Invalid API key"
        '500':
          description: Server error
          content:
            application/json:
              schema:
                type: object
                properties:
                  error:
                    type: string
                    description: Error message
                    example: "Failed to index data"
components:
  schemas: {}
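For orientation, here is a minimal Go sketch of a client call against the /idx/milvus endpoint specified above. The base URL http://localhost:3400 is an assumption (the server's listen port comes from a flag that is not part of this diff); the payload and response shapes follow the schema.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Request body per the /idx/milvus schema: question and answer are
	// required; summary, username, and user_id are optional.
	payload, _ := json.Marshal(map[string]string{
		"question": "What is AI?",
		"answer":   "AI is artificial intelligence...",
		"summary":  "AI overview",
		"username": "john_doe",
		"user_id":  "12345", // numeric string; see the note below
	})
	// localhost:3400 is a placeholder; the real port is set by the server's flags.
	resp, err := http.Post("http://localhost:3400/idx/milvus",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out struct {
		ID int64 `json:"id"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println("stored record id:", out.ID)
}

One caveat: although the spec's example for user_id is "user_12345", the Go handler in idx.go below parses the field with strconv.ParseInt, so only numeric strings such as "12345" are accepted.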
idx.go · 0 → 100644 · View file @ 2349111b
package main

import (
	"context"
	"database/sql"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"strconv"
	"time"

	"github.com/firebase/genkit/go/ai"
	"github.com/firebase/genkit/go/genkit"
	"github.com/wade-liwei/agentchat/plugins/milvus"
)

// startServer registers the HTTP handlers and blocks serving on the configured port.
func startServer(g *genkit.Genkit, db *sql.DB, indexer ai.Indexer, retriever ai.Retriever, embedder ai.Embedder, apiKey string) {
	http.HandleFunc("/idx/milvus", handleIndex(indexer, embedder, "Milvus"))
	http.HandleFunc("/idx/graphrag", handleIndex(indexer, embedder, "GraphRAG")) // if GraphRAG support is needed
	http.HandleFunc("/index", handleIndexTrigger(g, db, indexer, apiKey))
	http.HandleFunc("/askQuestion", handleAskQuestion(g, retriever))
	addr := fmt.Sprintf(":%s", *port)
	log.Printf("Starting server on %s", addr)
	if err := http.ListenAndServe(addr, nil); err != nil {
		log.Fatalf("Server failed: %v", err)
	}
}

// handleIndex returns a handler that stores a question/answer pair
// (plus optional summary and user metadata) as a single document.
func handleIndex(indexer ai.Indexer, embedder ai.Embedder, indexType string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost {
			http.Error(w, `{"error":"Method not allowed"}`, http.StatusMethodNotAllowed)
			return
		}
		var req IndexRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, `{"error":"Invalid request body"}`, http.StatusBadRequest)
			return
		}
		if req.Question == "" || req.Answer == "" {
			http.Error(w, `{"error":"Missing required fields: question and answer"}`, http.StatusBadRequest)
			return
		}
		var userID *int64
		if req.UserID != nil {
			id, err := strconv.ParseInt(*req.UserID, 10, 64)
			if err != nil {
				http.Error(w, `{"error":"Invalid user_id format"}`, http.StatusBadRequest)
				return
			}
			userID = &id
		}
		// Build the text content.
		text := req.Question + " " + req.Answer
		if req.Summary != nil {
			text += " " + *req.Summary
		}
		// Build the metadata.
		metadata := map[string]interface{}{
			"username": req.Username,
			"user_id":  userID,
		}
		// Generate a unique ID (the idField in the Milvus plugin's schema is AutoID, so this is only used for the response).
		id := time.Now().UnixNano()
		// Create the document.
		doc := &ai.Document{
			Content:  []*ai.Part{ai.NewTextPart(text)},
			Metadata: metadata,
		}
		// Write to Milvus through the Indexer.
		err := ai.Index(r.Context(), indexer, ai.WithDocs(doc))
		if err != nil {
			log.Printf("Failed to index %s data: %v", indexType, err)
			http.Error(w, `{"error":"Failed to store data"}`, http.StatusInternalServerError)
			return
		}
		resp := IndexResponse{ID: id}
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusOK)
		if err := json.NewEncoder(w).Encode(resp); err != nil {
			log.Printf("Failed to encode response: %v", err)
		}
	}
}

// handleIndexTrigger returns a handler that, after checking the API key,
// re-indexes all existing rows of the qa table.
func handleIndexTrigger(g *genkit.Genkit, db *sql.DB, indexer ai.Indexer, expectedAPIKey string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost {
			http.Error(w, `{"error":"Method not allowed"}`, http.StatusMethodNotAllowed)
			return
		}
		var req IndexTriggerRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, `{"error":"Invalid request body"}`, http.StatusBadRequest)
			return
		}
		if req.APIKey != expectedAPIKey {
			http.Error(w, `{"error":"Invalid API key"}`, http.StatusBadRequest)
			return
		}
		if err := indexExistingRows(r.Context(), db, indexer); err != nil {
			log.Printf("Failed to index data: %v", err)
			http.Error(w, `{"error":"Failed to index data"}`, http.StatusInternalServerError)
			return
		}
		resp := IndexTriggerResponse{Message: "Indexing completed successfully"}
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusOK)
		if err := json.NewEncoder(w).Encode(resp); err != nil {
			log.Printf("Failed to encode response: %v", err)
		}
	}
}

// handleAskQuestion returns a handler that retrieves the documents most
// similar to the question and returns their concatenated text.
func handleAskQuestion(g *genkit.Genkit, retriever ai.Retriever) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPost {
			http.Error(w, `{"error":"Method not allowed"}`, http.StatusMethodNotAllowed)
			return
		}
		var input struct {
			Question string `json:"Question"`
			Show     string `json:"Show"`
		}
		if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
			http.Error(w, `{"error":"Invalid request body"}`, http.StatusBadRequest)
			return
		}
		if input.Question == "" || input.Show == "" {
			http.Error(w, `{"error":"Missing required fields: Question and Show"}`, http.StatusBadRequest)
			return
		}
		// Build the query document.
		queryDoc := &ai.Document{
			Content: []*ai.Part{ai.NewTextPart(input.Question)},
		}
		// Retrieve with the Retriever.
		retrieverOptions := &milvus.RetrieverOptions{
			Count:      3, // fetch the top 3 results
			MetricType: "L2",
		}
		result, err := ai.Retrieve(r.Context(), retriever,
			ai.WithQuery(queryDoc),
			ai.WithOptions(retrieverOptions))
		if err != nil {
			log.Printf("Failed to retrieve data: %v", err)
			http.Error(w, `{"error":"Failed to process question"}`, http.StatusInternalServerError)
			return
		}
		// Build the response (the retrieved results can be post-processed as needed).
		var responseText string
		for _, doc := range result.Documents {
			for _, part := range doc.Content {
				if part.IsText() {
					responseText += part.Text + "\n"
				}
			}
		}
		resp := struct {
			Response string `json:"response"`
		}{Response: responseText}
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusOK)
		if err := json.NewEncoder(w).Encode(resp); err != nil {
			log.Printf("Failed to encode response: %v", err)
		}
	}
}

// indexExistingRows loads every row of the qa table and indexes the
// concatenated question/answer/summary text as one document per row.
func indexExistingRows(ctx context.Context, db *sql.DB, indexer ai.Indexer) error {
	rows, err := db.QueryContext(ctx, `SELECT id, question, answer, summary FROM qa`)
	if err != nil {
		return err
	}
	defer rows.Close()
	var docs []*ai.Document
	for rows.Next() {
		var id int64
		var question, answer, summary sql.NullString
		if err := rows.Scan(&id, &question, &answer, &summary); err != nil {
			return err
		}
		content := question.String
		if answer.Valid {
			content += " " + answer.String
		}
		if summary.Valid {
			content += " " + summary.String
		}
		docs = append(docs, &ai.Document{
			Content: []*ai.Part{ai.NewTextPart(content)},
			Metadata: map[string]interface{}{
				"id": id,
			},
		})
	}
	if err := rows.Err(); err != nil {
		return err
	}
	return ai.Index(ctx, indexer, ai.WithDocs(docs...))
}
\ No newline at end of file
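idx.go references IndexRequest, IndexResponse, IndexTriggerRequest, IndexTriggerResponse, and a *port flag that are defined elsewhere in the package and not shown in this diff. For readability, here is a plausible sketch of those declarations, inferred from the swagger schema and the handlers' field accesses; the committed definitions may differ.

package main

import "flag"

// The flag name and default are assumptions; only the *port dereference is visible in the diff.
var port = flag.String("port", "3400", "HTTP listen port")

// Inferred from the swagger schema and the handlers' field accesses.
type IndexRequest struct {
	Question string  `json:"question"`
	Answer   string  `json:"answer"`
	Summary  *string `json:"summary,omitempty"`
	Username string  `json:"username,omitempty"`
	UserID   *string `json:"user_id,omitempty"`
}

type IndexResponse struct {
	ID int64 `json:"id"`
}

type IndexTriggerRequest struct {
	APIKey string `json:"apiKey"`
}

type IndexTriggerResponse struct {
	Message string `json:"message"`
}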
plugins/graphrag/dbgpt-graphrag.toml · 0 → 100644 · View file @ 2349111b
[system]
# Load language from the environment variable (it is set by the hook)
language = "${env:DBGPT_LANG:-zh}"
log_level = "INFO"
api_keys = []
encrypt_key = "your_secret_key"
# Server Configurations
[service.web]
host = "0.0.0.0"
port = 5670
#[service.web.database]
#type = "sqlite"
#path = "pilot/meta_data/dbgpt.db"
[service.web.database]
type = "mysql"
host = "${env:MYSQL_HOST:-127.0.0.1}"
port = "${env:MYSQL_PORT:-3306}"
database = "${env:MYSQL_DATABASE:-dbgpt}"
user = "${env:MYSQL_USER:-root}"
password = "${env:MYSQL_PASSWORD:-aa123456}"
[rag]
chunk_size=1000
chunk_overlap=0
similarity_top_k=5
similarity_score_threshold=0.0
max_chunks_once_load=10
max_threads=1
rerank_top_k=3
[rag.storage]
[rag.storage.vector]
type = "chroma"
persist_path = "pilot/data"
[rag.storage.graph]
type = "tugraph"
host="tugraph"
port=7687
username="admin"
password="73@TuGraph"
# enable_summary="True"
# community_topk=20
# community_score_threshold=0.3
# triplet_graph_enabled="True"
# extract_topk=20
# document_graph_enabled="True"
# knowledge_graph_chunk_search_top_size=20
# knowledge_graph_extraction_batch_size=20
# enable_similarity_search="True"
# knowledge_graph_embedding_batch_size=20
# similarity_search_topk=5
# extract_score_threshold=0.7
# enable_text_search="True"
# text2gql_model_enabled="True"
# text2gql_model_name="qwen2.5:latest"
# Model Configurations
[models]
[[models.llms]]
name = "Qwen/Qwen2.5-Coder-32B-Instruct"
provider = "proxy/siliconflow"
api_key = "${env:SILICONFLOW_API_KEY}"
[[models.embeddings]]
name = "BAAI/bge-m3"
provider = "proxy/siliconflow"
api_key = "${env:SILICONFLOW_API_KEY}"
#[models]
#[[models.llms]]
#name = "${env:LLM_MODEL_NAME:-gpt-4o}"
#provider = "${env:LLM_MODEL_PROVIDER:-proxy/openai}"
#api_base = "${env:OPENAI_API_BASE:-https://api.openai.com/v1}"
#api_base = "${env:OPENAI_API_BASE:-https://aihubmix.com/v1}"
#api_key = "${env:OPENAI_API_KEY}"
#[[models.embeddings]]
#name = "${env:EMBEDDING_MODEL_NAME:-text-embedding-3-small}"
#provider = "${env:EMBEDDING_MODEL_PROVIDER:-proxy/openai}"
#api_url = "${env:EMBEDDING_MODEL_API_URL:-https://aihubmix.com/v1/embeddings}"
#api_key = "${env:OPENAI_API_KEY}"
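The config above leans on DB-GPT's ${env:NAME:-default} interpolation syntax. The substitution is performed by DB-GPT itself; purely as a reference for the convention (not DB-GPT's actual implementation), a small Go sketch that resolves such placeholders the same way:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Matches ${env:NAME:-default} as well as ${env:NAME} (empty default).
var placeholder = regexp.MustCompile(`\$\{env:([A-Za-z_][A-Za-z0-9_]*)(?::-([^}]*))?\}`)

// expand replaces each placeholder with the value of the named environment
// variable, falling back to the inline default when the variable is unset.
func expand(s string) string {
	return placeholder.ReplaceAllStringFunc(s, func(m string) string {
		parts := placeholder.FindStringSubmatch(m)
		if v, ok := os.LookupEnv(parts[1]); ok {
			return v
		}
		return parts[2]
	})
}

func main() {
	// With MYSQL_HOST unset, this prints: host = "127.0.0.1"
	fmt.Println(expand(`host = "${env:MYSQL_HOST:-127.0.0.1}"`))
}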
plugins/graphrag/docker-compose.yml · 0 → 100644 · View file @ 2349111b
# To run this docker compose file, you need the SiliconFlow API key in your environment:
# SILICONFLOW_API_KEY=${SILICONFLOW_API_KEY} docker compose up -d
services:
  db:
    image: mysql/mysql-server
    environment:
      MYSQL_USER: 'user'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'aa123456'
    ports:
      - 3306:3306
    volumes:
      - dbgpt-myql-db:/var/lib/mysql
      - ./docker/examples/my.cnf:/etc/my.cnf
      - ./docker/examples/sqls:/docker-entrypoint-initdb.d
      - ./assets/schema/dbgpt.sql:/docker-entrypoint-initdb.d/dbgpt.sql
    restart: unless-stopped
    networks:
      - dbgptnet
  webserver:
    image: eosphorosai/dbgpt-openai:latest
    command: dbgpt start webserver --config /app/configs/dbgpt-graphrag.toml
    environment:
      - SILICONFLOW_API_KEY=${SILICONFLOW_API_KEY}
      - MYSQL_PASSWORD=aa123456
      - MYSQL_HOST=db
      - MYSQL_PORT=3306
      - MYSQL_DATABASE=dbgpt
      - MYSQL_USER=root
      - OPENAI_API_KEY=sk-UIpD9DohtE0Ok4wtFdC21668Dc3241629e8aA05d5dAeFdA1
    volumes:
      - ./configs:/app/configs
      - /data:/data
      # Optionally mount your models into the container
      - /data/models:/app/models
      - dbgpt-data:/app/pilot/data
      - dbgpt-message:/app/pilot/message
    depends_on:
      - db
      - tugraph
    ports:
      - 5670:5670/tcp
    # The webserver may fail at first: it must wait until all SQL scripts in /docker-entrypoint-initdb.d have finished executing.
    restart: unless-stopped
    networks:
      - dbgptnet
    ipc: host
  tugraph:
    image: tugraph/tugraph-runtime-centos7:4.5.1
    command: lgraph_server -d run --enable_plugin true
    ports:
      - 7070:7070
      - 7687:7687
      - 9090:9090
    container_name: tugraph_demo
    restart: unless-stopped
    networks:
      - dbgptnet
volumes:
  dbgpt-myql-db:
  dbgpt-data:
  dbgpt-message:
  dbgpt-alembic-versions:
networks:
  dbgptnet:
    driver: bridge
    name: dbgptnet
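As the comment on the webserver service warns, depends_on only orders container startup; MySQL still needs time to execute the init SQL in /docker-entrypoint-initdb.d, so the webserver may restart a few times until it succeeds. A minimal Go sketch that polls the published webserver port (5670, per the compose file) before sending traffic; the address and retry budget are arbitrary choices, not part of this commit:

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

// waitForPort polls a TCP address until it accepts connections or the
// retry budget is exhausted.
func waitForPort(addr string, retries int, delay time.Duration) error {
	for i := 0; i < retries; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("%s not reachable after %d attempts", addr, retries)
}

func main() {
	if err := waitForPort("localhost:5670", 30, 2*time.Second); err != nil {
		log.Fatal(err)
	}
	log.Println("DB-GPT webserver is up")
}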
plugins/lightrag/docker-compose.yml · 0 → 100644 · View file @ 2349111b
services:
  pgvector:
    image: pgvector/pgvector:pg16
    profiles:
      - pgvector
    restart: always
    environment:
      PGUSER: ${PGVECTOR_PGUSER:-postgres}
      # The password for the default postgres user.
      POSTGRES_PASSWORD: ${PGVECTOR_POSTGRES_PASSWORD:-lightrag123456}
      # The name of the default postgres database.
      POSTGRES_DB: ${PGVECTOR_POSTGRES_DB:-lightrag}
      # postgres data directory
      PGDATA: ${PGVECTOR_PGDATA:-/var/lib/postgresql/data/pgdata}
      # pg_bigm module for full text search
      PG_BIGM: ${PGVECTOR_PG_BIGM:-false}
      PG_BIGM_VERSION: ${PGVECTOR_PG_BIGM_VERSION:-1.2-20240606}
    volumes:
      - ./volumes/pgvector/data:/var/lib/postgresql/data
      - ./pgvector/docker-entrypoint.sh:/docker-entrypoint.sh
    entrypoint: ['/docker-entrypoint.sh']
    networks:
      - lightrag-net
    healthcheck:
      test: ['CMD', 'pg_isready']
      interval: 1s
      timeout: 3s
      retries: 30
  neo4j:
    image: neo4j:latest
    volumes:
      - ./neo4j/logs:/logs
      - ./neo4j/config:/config
      - ./neo4j/data:/data
      - ./neo4j/plugins:/plugins
    environment:
      # This sets the username and password
      - NEO4J_AUTH=neo4j/mysecretpassword123
    ports:
      - "7474:7474" # Neo4j Browser HTTP
      - "7687:7687" # Bolt protocol for applications
    restart: always
    networks:
      - lightrag-net
  lightrag:
    container_name: lightrag
    image: ghcr.io/hkuds/lightrag:latest
    build:
      context: .
      dockerfile: Dockerfile
      tags:
        - ghcr.io/hkuds/lightrag:latest
    ports:
      - "${PORT:-9621}:9621"
    volumes:
      - ./data/rag_storage:/app/data/rag_storage
      - ./data/inputs:/app/data/inputs
      - ./config.ini:/app/config.ini
      - ./.env:/app/.env
    env_file:
      - .env
    restart: unless-stopped
    networks:
      - lightrag-net
    depends_on:
      # <--- add the dependency here
      - neo4j
networks:
  lightrag-net:
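Before pointing LightRAG's PGVectorStorage at the pgvector service above, it can be worth confirming that the vector extension is actually available in the target database. A minimal Go sketch, assuming the compose defaults for user/password/database and that the Postgres port is reachable (note the compose file as written publishes no host port for pgvector, so this would run inside the compose network or after adding a ports mapping):

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres driver; any driver works
)

func main() {
	// Credentials match the compose defaults (PGVECTOR_POSTGRES_PASSWORD
	// and PGVECTOR_POSTGRES_DB fall back to these values).
	dsn := "host=localhost port=5432 user=postgres password=lightrag123456 dbname=lightrag sslmode=disable"
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Ensure the pgvector extension exists before LightRAG uses PGVectorStorage.
	if _, err := db.Exec("CREATE EXTENSION IF NOT EXISTS vector"); err != nil {
		log.Fatal(err)
	}
	var version string
	if err := db.QueryRow("SELECT extversion FROM pg_extension WHERE extname = 'vector'").Scan(&version); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pgvector version:", version)
}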
plugins/lightrag/env · 0 → 100644 · View file @ 2349111b
### This is a sample .env file
### Server Configuration
HOST=0.0.0.0
PORT=9621
WEBUI_TITLE='My Graph KB'
WEBUI_DESCRIPTION="Simple and Fast Graph Based RAG System"
OLLAMA_EMULATING_MODEL_TAG=latest
# WORKERS=2
# CORS_ORIGINS=http://localhost:3000,http://localhost:8080
### Login Configuration
# AUTH_ACCOUNTS='admin:admin123,user1:pass456'
# TOKEN_SECRET=Your-Key-For-LightRAG-API-Server
# TOKEN_EXPIRE_HOURS=48
# GUEST_TOKEN_EXPIRE_HOURS=24
# JWT_ALGORITHM=HS256
### API-Key to access LightRAG Server API
# LIGHTRAG_API_KEY=your-secure-api-key-here
# WHITELIST_PATHS=/health,/api/*
### Optional SSL Configuration
# SSL=true
# SSL_CERTFILE=/path/to/cert.pem
# SSL_KEYFILE=/path/to/key.pem
### Directory Configuration (defaults to current working directory)
### Should not be set when deploying with Docker (set by the Dockerfile instead of .env)
### Default values are ./inputs and ./rag_storage
# INPUT_DIR=<absolute_path_for_doc_input_dir>
# WORKING_DIR=<absolute_path_for_working_dir>
### Max nodes returned from graph retrieval
# MAX_GRAPH_NODES=1000
### Logging level
# LOG_LEVEL=INFO
# VERBOSE=False
# LOG_MAX_BYTES=10485760
# LOG_BACKUP_COUNT=5
### Logfile location (defaults to current working directory)
# LOG_DIR=/path/to/log/directory
### Settings for RAG query
# HISTORY_TURNS=3
# COSINE_THRESHOLD=0.2
# TOP_K=60
# MAX_TOKEN_TEXT_CHUNK=4000
# MAX_TOKEN_RELATION_DESC=4000
# MAX_TOKEN_ENTITY_DESC=4000
### Entity and relation summarization configuration
### Language: English, Chinese, French, German ...
SUMMARY_LANGUAGE=English
### Number of duplicated entities/edges that triggers an LLM re-summary on merge (at least 3 is recommended)
# FORCE_LLM_SUMMARY_ON_MERGE=6
### Max tokens for entity/relations description after merge
# MAX_TOKEN_SUMMARY=500
### Number of documents processed in parallel (less than MAX_ASYNC/2 is recommended)
# MAX_PARALLEL_INSERT=2
### Chunk size for document splitting, 500~1500 is recommended
# CHUNK_SIZE=1200
# CHUNK_OVERLAP_SIZE=100
### LLM Configuration
ENABLE_LLM_CACHE=true
ENABLE_LLM_CACHE_FOR_EXTRACT=true
### Timeout in seconds for the LLM; None for an infinite timeout
TIMEOUT=240
### Some models like o1-mini require temperature to be set to 1
TEMPERATURE=0
### Max concurrent LLM requests
MAX_ASYNC=4
### MAX_TOKENS: max tokens sent to the LLM for entity/relation summaries (must be below the model's context size)
### MAX_TOKENS: set as the num_ctx option for Ollama by the API server
MAX_TOKENS=32768
### LLM Binding type: openai, ollama, lollms, azure_openai
#LLM_BINDING=openai
#LLM_MODEL=gpt-4o
#LLM_BINDING_HOST=https://api.openai.com/v1
#LLM_BINDING_API_KEY=your_api_key
### OpenAI-compatible example
LLM_BINDING=openai
LLM_MODEL=deepseek-chat
LLM_BINDING_HOST=https://api.deepseek.com
LLM_BINDING_API_KEY=sk-9f70df871a7c4b8aa566a3c7a0603706
### Optional for Azure
# AZURE_OPENAI_API_VERSION=2024-08-01-preview
# AZURE_OPENAI_DEPLOYMENT=gpt-4o
### Embedding Configuration
### Embedding Binding type: openai, ollama, lollms, azure_openai
#EMBEDDING_BINDING=ollama
#EMBEDDING_MODEL=bge-m3:latest
#EMBEDDING_DIM=1024
#EMBEDDING_BINDING_API_KEY=your_api_key
# If the embedding service is deployed within the same Docker stack, use host.docker.internal instead of localhost
#EMBEDDING_BINDING_HOST=http://localhost:11434
# EMBEDDING_BINDING=ollama
EMBEDDING_BINDING=openai  # changed from 'ollama' to 'openai'
EMBEDDING_MODEL=BAAI/bge-large-zh-v1.5
EMBEDDING_DIM=1024
EMBEDDING_BINDING_API_KEY=sk-ogigzbogipwhtkwvwnoeiovjdalkotopnpkwkxlvsvjsmyms
EMBEDDING_BINDING_HOST=https://api.siliconflow.cn/v1
### Number of chunks sent to Embedding in a single request
# EMBEDDING_BATCH_NUM=32
### Max concurrent Embedding requests
# EMBEDDING_FUNC_MAX_ASYNC=16
### Maximum tokens sent to Embedding for each chunk (no longer in use?)
# MAX_EMBED_TOKENS=8192
### Optional for Azure
# AZURE_EMBEDDING_DEPLOYMENT=text-embedding-3-large
# AZURE_EMBEDDING_API_VERSION=2023-05-15
### Data storage selection
# LIGHTRAG_KV_STORAGE=PGKVStorage
# LIGHTRAG_VECTOR_STORAGE=PGVectorStorage
# LIGHTRAG_DOC_STATUS_STORAGE=PGDocStatusStorage
# LIGHTRAG_GRAPH_STORAGE=Neo4JStorage
### TiDB Configuration (Deprecated)
# TIDB_HOST=localhost
# TIDB_PORT=4000
# TIDB_USER=your_username
# TIDB_PASSWORD='your_password'
# TIDB_DATABASE=your_database
### Separates the data of different LightRAG instances (being deprecated)
# TIDB_WORKSPACE=default
### PostgreSQL Configuration
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USER=your_username
POSTGRES_PASSWORD='your_password'
POSTGRES_DATABASE=your_database
POSTGRES_MAX_CONNECTIONS=12
### Separates the data of different LightRAG instances (being deprecated)
# POSTGRES_WORKSPACE=default
### Neo4j Configuration
NEO4J_URI=neo4j+s://xxxxxxxx.databases.neo4j.io
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD='your_password'
### Independent AGM configuration (not for AGM embedded in PostgreSQL)
# AGE_POSTGRES_DB=
# AGE_POSTGRES_USER=
# AGE_POSTGRES_PASSWORD=
# AGE_POSTGRES_HOST=
# AGE_POSTGRES_PORT=8529
# AGE graph name (applies to PostgreSQL and independent AGM)
### AGE_GRAPH_NAME is deprecated
# AGE_GRAPH_NAME=lightrag
### MongoDB Configuration
MONGO_URI=mongodb://root:root@localhost:27017/
MONGO_DATABASE=LightRAG
### Separates the data of different LightRAG instances (being deprecated)
# MONGODB_GRAPH=false
### Milvus Configuration
MILVUS_URI=http://localhost:19530
MILVUS_DB_NAME=lightrag
# MILVUS_USER=root
# MILVUS_PASSWORD=your_password
# MILVUS_TOKEN=your_token
### Qdrant
QDRANT_URL=http://localhost:16333
# QDRANT_API_KEY=your-api-key
### Redis
REDIS_URI=redis://localhost:6379
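Because EMBEDDING_DIM must match the dimension the embedding model actually returns (1024 for BAAI/bge-large-zh-v1.5 configured above), a quick probe of the OpenAI-compatible endpoint can catch a mismatch before LightRAG starts. A minimal Go sketch, assuming the standard OpenAI embeddings request/response shape and the variable names from this file:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Standard OpenAI-style embeddings request against EMBEDDING_BINDING_HOST.
	body, _ := json.Marshal(map[string]interface{}{
		"model": os.Getenv("EMBEDDING_MODEL"), // e.g. BAAI/bge-large-zh-v1.5
		"input": []string{"hello world"},
	})
	req, err := http.NewRequest("POST",
		os.Getenv("EMBEDDING_BINDING_HOST")+"/embeddings", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("EMBEDDING_BINDING_API_KEY"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Data []struct {
			Embedding []float64 `json:"embedding"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	// The reported length should equal EMBEDDING_DIM (1024 here).
	fmt.Println("dimension:", len(out.Data[0].Embedding))
}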