This document will guide developers on how to use the aonweb library to call the XTTS-V2 API, which is used for voice cloning and text-to-speech conversion.
## Prerequisites
- Node.js environment
- `aonweb` library installed
- Valid Aonet APPID
## Basic Usage
### 1. Import Required Modules
```js
import { AI, AIOptions } from 'aonweb';
```
### 2. Initialize AI Instance
```js
const ai_options = new AIOptions({
    appId: 'your_app_id_here',
    dev_mode: true
});
const aonweb = new AI(ai_options);
```
### 3. Prepare the Input Data
```js
const data = {
    input: {
        "text": "Hi there, I'm your new voice clone. Try your best to upload quality audio",
        "speaker": "https://example.com/voice_sample.wav", // placeholder URL; replace with your own voice sample
        "language": "en",
        "cleanup_voice": true // example value
    }
};
```
Parameter descriptions:
- `text`: String, the text content to be converted into speech.
- `speaker`: String, the URL of the audio file used as the voice sample for cloning.
- `language`: String, specifies the language of the text, with "en" indicating English.
- `cleanup_voice`: Boolean, whether to perform cleanup processing on the generated voice.
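Before sending the request, the fields above can be sanity-checked locally. The sketch below is optional; the field names come from the parameter list, but the specific validation rules are assumptions rather than requirements imposed by `aonweb`:

```javascript
// Optional sketch: validate the input object before calling the API.
// The checks below are assumptions, not rules enforced by aonweb.
function validateInput(input) {
  const errors = [];
  if (typeof input.text !== "string" || input.text.trim() === "") {
    errors.push("text must be a non-empty string");
  }
  try {
    new URL(input.speaker); // the speaker sample must be a valid, publicly accessible URL
  } catch {
    errors.push("speaker must be a valid URL");
  }
  if (typeof input.language !== "string" || input.language.length === 0) {
    errors.push('language must be a language code such as "en"');
  }
  if (typeof input.cleanup_voice !== "boolean") {
    errors.push("cleanup_voice must be a boolean");
  }
  return errors; // an empty array means the input looks well-formed
}
```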
### Notes
- Ensure that the provided audio URL is publicly accessible and of good quality to achieve the best cloning effect.
- The API may take some time to process the input and generate the result; consider implementing appropriate wait or loading states.
- Handle possible errors, such as network issues, invalid input, or API limitations.
- Adhere to the terms of use and privacy regulations, especially when handling voice samples of others.
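Because the API may take time to respond and transient network failures are possible, the waiting and error-handling advice above can be implemented with a small retry helper. This is a generic sketch, not part of the `aonweb` library; the function and option names are illustrative:

```javascript
// Generic retry helper with exponential backoff (not part of aonweb).
// `task` is any async function, e.g. a wrapper around the API call.
async function withRetry(task, { attempts = 3, baseDelayMs = 500 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await task();
    } catch (err) {
      lastError = err;
      // Wait longer after each failure: 500 ms, 1000 ms, 2000 ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError; // all attempts failed; surface the last error to the caller
}
```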
### Example Response
The API response will contain the URL of the generated cloned voice or other relevant information. Parse and use the response data according to the actual API documentation.
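As an illustration of parsing such a response, the sketch below assumes the result arrives as an object whose `output` field holds the generated audio URL. That field name is an assumption; verify it against the actual API documentation:

```javascript
// Sketch: pull the generated audio URL out of a response object.
// The `output` field name is an assumption; check the actual API docs.
function extractAudioUrl(response) {
  if (!response || typeof response.output !== "string") {
    throw new Error("Unexpected response shape: missing output URL");
  }
  return response.output;
}
```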