Commit c262ad2e authored by duanjinfei's avatar duanjinfei

update document

parent 3fa66718
......@@ -9,7 +9,7 @@ updatedAt: Thu Jul 18 2024 13:43:11 GMT+0000 (Coordinated Universal Time)
## Introduction
This document will guide developers on how to use the aonweb library to call the AI Face Swap API.
## Prerequisites
......@@ -17,63 +17,48 @@ This document will guide developers on how to use the aonet library to call the
- `aonweb` library installed
- Valid Aonet APPID
## Basic Usage
### 1. Import Required Modules
```js
import { AI, AIOptions } from 'aonweb';
```
### 2. Initialize AI Instance
```js
const ai_options = new AIOptions({
    appId: 'your_app_id_here',
    dev_mode: true
});

const aonweb = new AI(ai_options);
```
### 3. Prepare Input Data
```js
const data = {
    input: {
        "swap_image": "https://aonweb.ai/pbxt/JoBuzfSVFLb5lBqkf3v9xMnqx3jFCYhM5JcVInFFwab8sLg0/long-trench-coat.png",
        "target_image": "https://replicate.delivery/pbxt/JoBuz3wGiVFQ1TDEcsGZbYcNh0bHpvwOi32T1fmxhRujqcu7/9X2.png"
    }
};
```
### 4. Call the AI Model
```js
const price = 8; // Cost of the AI call
try {
    const response = await aonweb.prediction("/predictions/ai/face-swap", data, price);
    // Handle response
    console.log("Face swap result:", response);
} catch (error) {
    // Error handling
    console.error("Error performing face swap:", error);
}
```
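Network calls to the prediction endpoint can fail transiently, so it may be worth wrapping the call above in a small retry helper. This is a sketch under assumptions: the helper name, attempt count, and backoff delays are illustrative choices, not part of the aonweb API.

```js
// Hypothetical retry helper: retries an async call with exponential backoff.
// The default attempt count and delays are illustrative assumptions.
async function withRetry(fn, attempts = 3, baseDelayMs = 500) {
    let lastError;
    for (let i = 0; i < attempts; i++) {
        try {
            return await fn();
        } catch (error) {
            lastError = error;
            // Wait baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ... between attempts
            await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
        }
    }
    throw lastError;
}

// Usage (assuming an initialized `aonweb` client, `data` payload, and `price`):
// const response = await withRetry(() =>
//     aonweb.prediction("/predictions/ai/face-swap", data, price)
// );
```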
### Parameter Description
......
---
title: AnyText API Usage Guide
slug: 0tF--any
createdAt: Tue Jul 30 2024 05:31:14 GMT+0000 (Coordinated Universal Time)
updatedAt: Wed Jul 31 2024 09:01:31 GMT+0000 (Coordinated Universal Time)
---
# AnyText API Usage Guide
## Introduction
This document will guide developers on how to use the `aonweb` library to invoke the AnyText API, which generates and edits text rendered within images.
## Prerequisites
- Node.js environment
- `aonweb` library installed
- Valid Aonet APPID
## Installation
Ensure the `aonweb` library is installed. If not, you can install it using npm:
```bash
npm install aonweb
```
## Usage Instructions
### 1. Import the `aonweb` Library
```javascript
const AI = require("aonweb");
```
### 2. Configure Options
Create an `options` object containing your APPID:
```javascript
const options = {
appid: "your_APPID"
};
```
Make sure to replace `"your_APPID"` with your actual Aonet APPID.
### 3. Initialize AI Instance
Initialize the AI instance using the configuration options:
```javascript
const aonweb = new AI(options);
```
### 4. Invoke AnyText API
Use the `prediction` method to call the AnyText API:
```javascript
async function generateAnyText() {
  try {
    let response = await aonweb.prediction("/predictions/ai/anytext", {
      input: {
        "mode": "text-generation",
        "prompt": "photo of caramel macchiato coffee on the table, top-down perspective, with \"Any\" \"Text\" written on it using cream",
        "seed": 200,
        "draw_pos": "https://replicate.delivery/pbxt/LIHKXdjxOWFe7HqP1rliIsghRab48EVQRzGNwQ9RgyO5V03d/gen9.png",
        "ori_image": "https://replicate.delivery/pbxt/LIHMZ8cCvmndHNVufiSuKZA4mnokuSOy87cYqhvs4Diei7sL/edit9.png",
        "img_count": 2,
        "ddim_steps": 20,
        "use_fp32": false,
        "no_translator": false,
        "strength": 1,
        "img_width": 512,
        "img_height": 512,
        "cfg_scale": 9,
        "a_prompt": "best quality, extremely detailed, 4k, HD, super legible text, clear text edges, clear strokes, neat writing, no watermarks",
        "n_prompt": "low-res, bad anatomy, extra digit, fewer digits, cropped, worst quality, low quality, watermark, unreadable text, messy words, distorted text, disorganized writing, advertising picture",
        "sort_radio": "",
        "revise_pos": false
      }
    });
    console.log("AnyText result:", response);
  } catch (error) {
    console.error("Error calling the AnyText API:", error);
  }
}
generateAnyText();
```
### Parameter Description
- `mode`: str, the operation mode of the model; fixed value (e.g. `"text-generation"`).
- `prompt`: str, the prompt describing the desired image content.
- `seed`: int, random seed, range -1 to 99999999.
- `draw_pos`: url, URL of an image marking the positions where the text should be generated.
- `ori_image`: url, URL of the image to be edited.
- `img_count`: int, number of images to generate, range 1 to 16.
- `ddim_steps`: int, number of sampling steps, range 1 to 100.
- `use_fp32`: bool, whether to run inference in FP32 precision.
- `no_translator`: bool, whether to disable the built-in prompt translator.
- `strength`: float, control strength of the text control module, range 0.0 to 2.0.
- `img_width`: int, image width; valid only in text-generation mode, range 256px to 768px.
- `img_height`: int, image height; valid only in text-generation mode, range 256px to 768px.
- `cfg_scale`: float, Classifier-Free Guidance (CFG) strength, range 0.1 to 30.0.
- `a_prompt`: str, additional prompt words, typically used to enhance image quality.
- `n_prompt`: str, negative prompt words.
- `sort_radio`: str, position sorting priority for the drawn text positions.
- `revise_pos`: bool, whether to automatically revise the drawn text positions.
## Considerations
- Ensure the provided image URLs are publicly accessible and of good quality for optimal results.
- The API may take some time to process the image and generate results, so plan for this latency in your application.
- Handle potential errors, such as network issues, invalid input, or API limitations.
- Adhere to terms of use and privacy regulations, especially when processing images containing sensitive information.
- Enter descriptive prompts (both Chinese and English are supported) in `prompt`. Enclose each line of text to be generated in double quotes, then hand-draw the position for each line of text in sequence to generate an image. The quality of the generated image depends heavily on how the text positions are drawn, so do not draw them too casually or too small. The number of positions must match the number of text lines, and each position's size should match the length or height of the corresponding text line as closely as possible.
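The quoting convention above can be encapsulated in a small helper. This is a sketch: the function name and the phrasing template are assumptions, not part of the AnyText API — only the rule that each text line must be enclosed in double quotes comes from the documentation.

```javascript
// Hypothetical helper: builds an AnyText prompt by appending each text line
// wrapped in double quotes, per the quoting rule described above.
function buildAnyTextPrompt(description, textLines) {
    const quoted = textLines.map((line) => `"${line}"`).join(' ');
    return `${description}, with ${quoted} written on it`;
}

// Example:
// buildAnyTextPrompt('photo of a cake', ['Happy', 'Birthday'])
// → 'photo of a cake, with "Happy" "Birthday" written on it'
```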
## Example Response
The API response will include the image content after text generation or text editing. Parse and use the response data according to the actual API documentation.
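Because the exact response shape is specified in the API documentation rather than here, a defensive accessor can normalize the result before use. This sketch assumes a common pattern where `response.output` holds either a single URL string or an array of URL strings — verify against the actual API documentation before relying on it.

```javascript
// Hypothetical response handler. The response shape is an assumption:
// `response.output` is taken to be a URL string or an array of URL strings.
function extractImageUrls(response) {
    if (!response || response.output == null) return [];
    return Array.isArray(response.output) ? response.output : [response.output];
}
```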
## Advanced Usage
- Implement batch image processing by processing multiple image files in a loop or concurrently.
- Add a user interface to allow users to upload their image files or provide image URLs.
- Implement real-time text recognition by integrating the API into live image streams.
- Integrate post-processing features for text, such as punctuation addition, semantic analysis, or sentiment analysis.
- Consider implementing multi-language support to handle images in different languages as needed.
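The batch-processing idea above can be sketched with a concurrency-limited mapper, so large batches do not flood the API with simultaneous requests. The helper itself is generic; the usage comment assumes an initialized client and a `price` value as in the other guides, and the field names there are taken from the earlier example.

```javascript
// Batch helper: runs an async task over many items with a bounded number of
// in-flight calls. Results are returned in the original item order.
async function mapWithConcurrency(items, limit, task) {
    const results = new Array(items.length);
    let next = 0;
    async function worker() {
        while (next < items.length) {
            const i = next++; // safe: no await between check and increment
            results[i] = await task(items[i], i);
        }
    }
    const workers = Array.from({ length: Math.min(limit, items.length) }, worker);
    await Promise.all(workers);
    return results;
}

// Usage (assuming an initialized aonweb client and a base input object):
// const responses = await mapWithConcurrency(imageUrls, 2, (url) =>
//     aonweb.prediction("/predictions/ai/anytext", { input: { ...baseInput, "ori_image": url } }, price)
// );
```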
By following this guide, you should be able to effectively use the AnyText API for generating and editing text in images within your applications. If you have any questions or need further clarification, feel free to ask.
......@@ -9,7 +9,7 @@ updatedAt: Thu Jul 18 2024 06:42:25 GMT+0000 (Coordinated Universal Time)
## Introduction
This document will guide developers on how to use the `aonweb` library to invoke the FunASR API, which is used for Automatic Speech Recognition (ASR).
## Prerequisites
......@@ -19,18 +19,18 @@ This document will guide developers on how to use the `aonet` library to invoke
## Installation
Ensure the `aonweb` library is installed. If not, you can install it using npm:
```bash
npm install aonweb
```
## Usage Instructions
### 1. Import the `aonweb` Library
```javascript
const AI = require("aonweb");
```
### 2. Configure Options
......@@ -50,7 +50,7 @@ Make sure to replace `"your_APPID"` with your actual Aonet APPID.
Initialize the AI instance using the configuration options:
```javascript
const aonweb = new AI(options);
```
### 4. Invoke FunASR API
......@@ -60,9 +60,9 @@ Use the `prediction` method to call the FunASR API:
```javascript
async function performSpeechRecognition() {
try {
let response = await aonweb.prediction("/predictions/ai/funasr", {
input: {
"awv": "https://aonweb.ai/mgxm/d9fa255c-4c47-4fec-99ce-f190539f10c4/olle.mp3",
"batch_size": 300
}
});
......
......@@ -9,7 +9,7 @@ updatedAt: Thu Jul 18 2024 13:44:27 GMT+0000 (Coordinated Universal Time)
## Introduction
This document describes how to use the aonweb library to call the IDM-VTON AI model. This model is used for virtual try-on, allowing specified clothing images to be applied to human images.
## Prerequisites
......@@ -17,60 +17,50 @@ This document describes how to use the aonet library to call the IDM-VTON AI mod
- `aonweb` library installed
- Valid Aonet APPID
## Basic Usage
### 1. Import Required Modules
```js
import { AI, AIOptions } from 'aonweb';
```
### 2. Initialize AI Instance
```js
const ai_options = new AIOptions({
    appId: 'your_app_id_here',
    dev_mode: true
});

const aonweb = new AI(ai_options);
```
### 3. Prepare Input Data
```js
const data = {
    input: {
        "seed": 42,
        "steps": 30,
        "garm_img": "https://replicate.delivery/pbxt/KgwTlZyFx5aUU3gc5gMiKuD5nNPTgliMlLUWx160G4z99YjO/sweater.webp",
        "human_img": "https://replicate.delivery/pbxt/KgwTlhCMvDagRrcVzZJbuozNJ8esPqiNAIJS3eMgHrYuHmW4/KakaoTalk_Photo_2024-04-04-21-44-45.png",
        "garment_des": "cute pink top"
    }
};
```
### 4. Call the AI Model
```js
const price = 8; // Cost of the AI call
try {
    const response = await aonweb.prediction("/predictions/ai/idm-vton", data, price);
    // Handle response
    console.log("IDM-VTON Response:", response);
} catch (error) {
    // Error handling
    console.error("Error generating:", error);
}
```
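If the response contains result image URLs, a small helper can derive a safe local filename for saving the try-on output. This is a sketch: the helper name is hypothetical, and the assumption that results arrive as URL strings should be checked against the actual API documentation.

```js
// Hypothetical post-processing helper: derives a local filename from a
// result image URL, falling back to a default when the URL is malformed.
function filenameFromUrl(url, fallback = 'output.png') {
    try {
        const pathname = new URL(url).pathname;
        const base = pathname.split('/').filter(Boolean).pop();
        return base || fallback;
    } catch (e) {
        return fallback;
    }
}

// Example:
// filenameFromUrl('https://replicate.delivery/pbxt/abc/9X2.png') → '9X2.png'
```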
......@@ -107,16 +97,16 @@ Use try-catch blocks to catch and handle possible errors.
## Example Code
```js
const AI = require("aonweb");
async function runIDMVTON() {
const options = {
auth: process.env.AONET_API_KEY // Store API key in environment variable
};
const aonweb = new AI(options);
try {
const response = await aonweb.prediction("/predictions/ai/idm-vton", {
input: {
"seed": 42,
"steps": 30,
......@@ -128,7 +118,7 @@ async function runIDMVTON() {
console.log("IDM-VTON Result:", response);
// Further processing of the response...
} catch (error) {
console.error("Error in IDM-VTON process:", error);
// Error handling...
}
}
......@@ -138,4 +128,4 @@ runIDMVTON();
## Conclusion
By following this guide, you should be able to successfully integrate and use the IDM-VTON AI model for virtual try-on application development. If you encounter any issues or need further assistance, please refer to the official aonweb documentation or contact technical support.
......@@ -9,7 +9,7 @@ updatedAt: Thu Jul 18 2024 13:38:59 GMT+0000 (Coordinated Universal Time)
## Introduction
This document will guide developers on how to use the aonweb library to call the LLaMA 3 API for generating natural language text.
## Prerequisites
......@@ -17,40 +17,31 @@ This document will guide developers on how to use the aonet library to call the
- `aonweb` library installed
- Valid Aonet APPID
## Basic Usage
### 1. Import Required Modules
```js
import { AI, AIOptions } from 'aonweb';
```
### 2. Initialize AI Instance
```js
const ai_options = new AIOptions({
    appId: 'your_app_id_here',
    dev_mode: true
});

const aonweb = new AI(ai_options);
```
### 3. Prepare Input Data
```js
const data = {
    input: {
        // Fill in the LLaMA 3 input fields here (e.g. "top_p": 1)
    }
};
```
### 4. Call the LLaMA 3 API
......@@ -60,7 +51,7 @@ Use the `prediction` method to call the LLaMA 3 API:
```js
async function generateText() {
try {
let response = await aonweb.prediction("/predictions/ai/lllama3:0.0.8",
{
input: {
"top_p": 1,
......
......@@ -29,10 +29,11 @@ import { AI, AIOptions } from 'aonweb';
```js
const ai_options = new AIOptions({
    appId: 'your_app_id_here',
    dev_mode: true
});
const aonweb = new AI(ai_options);
```
### 3. Prepare Input Data
......@@ -40,19 +41,19 @@ const aonet = new AI(ai_options);
```js
const data = {
input: {
"prompt": "portrait, impressionist painting, loose brushwork, vibrant color, light and shadow play",
"cfg_scale": 1.2,
"num_steps": 4,
"image_width": 768,
"num_samples": 4,
"image_height": 1024,
"output_format": "webp",
"identity_scale": 0.8,
"mix_identities": false,
"output_quality": 80,
"generation_mode": "fidelity",
"main_face_image": "https://replicate.delivery/pbxt/Kr6iendsvYS0F3MLmwRZ8q07XIMEJdemnQI3Cmq9nNrauJbq/zcy.webp",
"negative_prompt": "flaws in the eyes, flaws in the face, flaws, lowres, non-HDRi, low quality, worst quality,artifacts noise, text, watermark, glitch, deformed, mutated, ugly, disfigured, hands, low resolution, partially rendered objects, deformed or partially rendered eyes, deformed, deformed eyeballs, cross-eyed,blurry"
}
};
```
......@@ -62,10 +63,12 @@ const data = {
```js
const price = 8; // Cost of the AI call
try {
const response = await aonweb.prediction("/predictions/ai/pulid", data, price);
// Handle response
console.log("pulid result:", response);
} catch (error) {
// Error handling
console.error("Error generating:", error);
}
```
......
......@@ -9,7 +9,7 @@ updatedAt: Thu Jul 18 2024 13:42:11 GMT+0000 (Coordinated Universal Time)
## Introduction
This document will guide developers on how to use the aonweb library to call the SadTalker API, which is used to generate AI-driven talking avatars.
## Prerequisites
......@@ -17,68 +17,52 @@ This document will guide developers on how to use the aonet library to call the
- `aonweb` library installed
- Valid Aonet APPID
## Basic Usage
### 1. Import Required Modules
```js
import { AI, AIOptions } from 'aonweb';
```
### 2. Initialize AI Instance
```js
const ai_options = new AIOptions({
    appId: 'your_app_id_here',
    dev_mode: true
});

const aonweb = new AI(ai_options);
```
### 3. Prepare Input Data
```js
const data = {
    input: {
        "still": true,
        "enhancer": "gfpgan",
        "preprocess": "full",
        "driven_audio": "https://aonweb.ai/pbxt/Jf1gczNATWiC94VPrsTTLuXI0ZmtuZ6k0aWBcQpr7VuRc5f3/japanese.wav",
        "source_image": "https://replicate.delivery/pbxt/Jf1gcsODejVsGRd42eeUj0RXX11zjxzHuLuqXmVFwMAi2tZq/art_1.png"
    }
};
```
### 4. Call the AI Model
```js
const price = 8; // Cost of the AI call
try {
const response = await aonweb.prediction("/predictions/ai/sadtalker", data, price);
// Handle response
} catch (error) {
// Error handling
}
```
### Parameter Description
- `still`: Boolean, set to true to generate a static image instead of a video.
......
......@@ -9,7 +9,7 @@ updatedAt: Thu Jul 18 2024 13:41:12 GMT+0000 (Coordinated Universal Time)
## Introduction
This document will guide developers on how to use the aonweb library to call the Stable Diffusion 3 API for generating AI art images.
## Prerequisites
......@@ -17,52 +17,30 @@ This document will guide developers on how to use the aonet library to call the
- `aonweb` library installed
- Valid Aonet APPID
## Basic Usage
### 1. Import Required Modules
```js
import { AI, AIOptions } from 'aonweb';
```
### 2. Initialize AI Instance
```js
const ai_options = new AIOptions({
    appId: 'your_app_id_here',
    dev_mode: true
});

const aonweb = new AI(ai_options);
```
### 3. Prepare Input Data
```js
const data = {
    input: {
        "cfg": 3.5,
        "prompt": "a photo of vibrant artistic graffiti on a wall saying \"SD3 medium\"",
        "aspect_ratio": "3:2",
        // ... additional input fields elided in this diff ...
        "output_quality": 90,
        "negative_prompt": ""
    }
};
```
### 4. Call the AI Model
```js
const price = 8; // Cost of the AI call
try {
const response = await aonweb.prediction("/predictions/ai/stable-diffusion-3", data, price);
// Handle response
} catch (error) {
// Error handling
}
```
### Parameter Description
- `cfg`: Number, controls how closely the generated image adheres to the prompt. Higher values make the image more accurate but may reduce creativity.
......
......@@ -9,7 +9,7 @@ updatedAt: Thu Jul 18 2024 13:40:04 GMT+0000 (Coordinated Universal Time)
## Introduction
This document will guide developers on how to use the aonweb library to call the XTTS-V2 API, which is used for voice cloning and text-to-speech conversion.
## Prerequisites
......@@ -17,65 +17,50 @@ This document will guide developers on how to use the aonet library to call the
- `aonweb` library installed
- Valid Aonet APPID
## Basic Usage
### 1. Import Required Modules
```js
import { AI, AIOptions } from 'aonweb';
```
### 2. Initialize AI Instance
```js
const ai_options = new AIOptions({
    appId: 'your_app_id_here',
    dev_mode: true
});

const aonweb = new AI(ai_options);
```
### 3. Prepare Input Data
```js
const data = {
    input: {
        "text": "Hi there, I'm your new voice clone. Try your best to upload quality audio",
        "speaker": "https://aonweb.ai/pbxt/Jt79w0xsT64R1JsiJ0LQRL8UcWspg5J4RFrU6YwEKpOT1ukS/male.wav",
        "language": "en",
        "cleanup_voice": false
    }
};
```
### 4. Call the AI Model
```js
const price = 8; // Cost of the AI call
try {
    const response = await aonweb.prediction("/predictions/ai/xtts-v2", data, price);
    // Handle response
    console.log("XTTS-V2 result:", response);
} catch (error) {
    // Error handling
    console.error("Error generating cloned voice:", error);
}
```
### Parameter Description
......
......@@ -30,5 +30,5 @@ const ai_options = new AIOptions({
appId: REPLACE_APP_ID //replace app id
})
const aonweb = new AI(ai_options)
```
\ No newline at end of file
......@@ -8,7 +8,7 @@ const darkCodeTheme = require("prism-react-renderer/themes/dracula");
const config = {
title: "AON",
tagline: "AON ai prediction",
url: "https://aonet.ai",
baseUrl: "/",
onBrokenLinks: "throw",
onBrokenMarkdownLinks: "warn",
......@@ -16,7 +16,7 @@ const config = {
// GitHub pages deployment config.
// If you aren't using GitHub pages, you don't need these.
organizationName: "aonet", // Usually your GitHub org/user name.
projectName: "docusaurus", // Usually your repo name.
presets: [
......