This example demonstrates LFM2.5-Audio-1.5B running entirely in your browser using WebGPU and ONNX Runtime Web.
You can find all the code in this Hugging Face Space, including a deployed version you can interact with here, zero setup required.
- Clone the repository:

  ```bash
  git clone https://huggingface.co/spaces/LiquidAI/LFM2.5-Audio-1.5B-transformers-js/
  cd LFM2.5-Audio-1.5B-transformers-js
  ```
- Make sure you have `npm` (Node Package Manager) installed on your system:

  ```bash
  npm --version
  ```

  If the previous command throws an error, you don't have `npm` and must install it to build this demo. If you come from the Python world, you can think of `npm` as the Node.js equivalent of `pip`.
- Install the dependencies specified in `package.json` with `npm`:

  ```bash
  npm install
  ```
- Start the development server:

  ```bash
  npm run dev
  ```
  The dev server will start and provide you with a local URL (typically `http://localhost:5173`) where you can access the app in your browser.
- ASR (Speech Recognition): Transcribe audio to text
- TTS (Text-to-Speech): Convert text to natural speech
- Interleaved: Mixed audio and text conversation
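Under the hood, a Transformers.js app typically selects the execution device and quantization level when it creates a pipeline. The sketch below is a hypothetical illustration, not the Space's actual code: `buildPipelineOptions` is an invented helper, and the model id and option values are assumptions based on the ONNX repo and WebGPU target mentioned in this README.

```javascript
// Hypothetical helper (not part of the Space's code): choose Transformers.js
// pipeline options depending on whether WebGPU is available in this browser.
function buildPipelineOptions(webgpuAvailable) {
  return {
    device: webgpuAvailable ? "webgpu" : "wasm", // WASM acts as a CPU fallback
    dtype: "q4",                                 // quantized ONNX weights
  };
}

// In the app, this could feed a pipeline load roughly like:
//   const { pipeline } = await import("@huggingface/transformers");
//   const asr = await pipeline(
//     "automatic-speech-recognition",
//     "LiquidAI/LFM2.5-Audio-1.5B-ONNX",
//     buildPipelineOptions("gpu" in navigator),
//   );
//   const { text } = await asr(audioData);
```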
- A browser with WebGPU support (Chrome/Edge 113+)
- Enable WebGPU at `chrome://flags/#enable-unsafe-webgpu` if needed
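Before loading the model, the app can check for WebGPU support by looking for `navigator.gpu`. A minimal sketch, with a hypothetical `hasWebGPU` helper written as a pure function (so it can be exercised outside a browser):

```javascript
// Hypothetical helper: returns true if the given navigator-like object
// exposes the WebGPU entry point (`navigator.gpu` in supporting browsers).
function hasWebGPU(nav) {
  return !!nav && typeof nav === "object" && "gpu" in nav;
}

// In the browser, call it with the global `navigator`:
//   if (!hasWebGPU(navigator)) {
//     // show a warning, or fall back to the WASM backend
//   }
```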
Uses quantized ONNX models from LiquidAI/LFM2.5-Audio-1.5B-ONNX.
Model weights are released under the LFM 1.0 License.