Oxide-Lab

A modern desktop application (Rust + Tauri v2 + Svelte 5 + Candle from Hugging Face) for chatting with AI models, running entirely locally on your computer. No subscriptions and no data sent to the internet: just you and your personal AI assistant.

https://oxidelab.tech/


English Русский Português


Oxide Lab Logo

Private AI chat desktop application with local LLM support.
All inference happens on your machine — no cloud, no data sharing.

GitHub Stars Awesome Tauri Awesome Svelte

Oxide Lab Chat Interface

📚 Table of Contents

  • ✨ What is this?
  • 🎬 Demo
  • 🚀 Key Features
  • 🛠️ Installation & Setup
  • 📖 How to Start Using
  • 🖥️ System Requirements
  • 🤖 Supported Models
  • 🛡️ Privacy and Security
  • 🙏 Acknowledgments
  • 📄 License

✨ What is this?

Oxide Lab is a native desktop application for running large language models locally. Built with Rust and Tauri v2, it provides a fast, private chat interface without requiring internet connectivity or external API services.

🎬 Demo

https://github.com/user-attachments/assets/0b9c8ff9-7793-4108-8b62-b0800cbd855e

https://github.com/user-attachments/assets/27c1f544-69e0-4a91-8fa5-4c21d67cb7c7

https://github.com/user-attachments/assets/ce5337d5-3e63-4263-b6a7-56e6847bbc71

🚀 Key Features

  • 100% local inference — your data never leaves your machine
  • Multi-architecture support: Llama, Qwen2, Qwen2.5, Qwen3, Qwen3 MoE, Mistral, Mixtral, DeepSeek, Yi, SmolLM2
  • GGUF and SafeTensors model formats
  • Hardware acceleration: CPU, CUDA (NVIDIA), Metal (Apple Silicon), Intel MKL, Apple Accelerate
  • Streaming text generation
  • Multi-language UI: English, Russian, Brazilian Portuguese
  • Modern interface built with Svelte 5 and Tailwind CSS

🛠️ Installation & Setup

Prerequisites

  • Node.js (for frontend build)
  • Rust toolchain (for backend)
  • For CUDA: NVIDIA GPU with CUDA toolkit
  • For Metal: macOS with Apple Silicon

Development
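The original commands for this section were not preserved in this copy. A typical Tauri v2 + Svelte development workflow looks like the following; the npm script names are assumptions, so check package.json for the actual ones:

```shell
# Install frontend dependencies
npm install

# Start the app in development mode:
# runs the Svelte dev server and launches the Tauri window with hot reload
npm run tauri dev
```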

Build
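A release build with the standard Tauri CLI would be invoked roughly like this (assuming the usual `tauri` npm script):

```shell
# Build the frontend and produce release bundles/installers
# for the current platform (output lands under src-tauri/target/release)
npm run tauri build
```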

Quality Checks
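For a Svelte 5 + TypeScript frontend, quality checks typically look like the following; the exact script names are assumptions and may differ in package.json:

```shell
# Type-check the Svelte / TypeScript frontend
npm run check

# Lint and format the frontend code
npm run lint
npm run format
```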

Rust-specific (from src-tauri/)
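The standard Cargo checks for the Rust backend, run from src-tauri/, would be along these lines (the acceleration feature names such as "cuda" or "metal" are assumptions; see Cargo.toml for the real ones):

```shell
cd src-tauri

# Format and lint the Rust backend
cargo fmt --all
cargo clippy --all-targets -- -D warnings

# Run the test suite
cargo test

# Optional: enable hardware acceleration at build time
cargo build --release --features cuda
```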

📖 How to Start Using

  1. Build or download the application
  2. Download a compatible GGUF or SafeTensors model (e.g., from Hugging Face)
  3. Launch Oxide Lab
  4. Load your model through the interface
  5. Start chatting
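Step 2 can also be done from the command line with the Hugging Face CLI. The repository and file names below are illustrative only; any GGUF chat model of a supported architecture will do:

```shell
# Install the Hugging Face CLI and fetch a small quantized GGUF model
pip install -U huggingface_hub
huggingface-cli download Qwen/Qwen2.5-0.5B-Instruct-GGUF \
  qwen2.5-0.5b-instruct-q4_k_m.gguf --local-dir ./models
```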

🖥️ System Requirements

  • Windows, macOS, or Linux
  • Minimum 4 GB RAM (8+ GB recommended for larger models)
  • For GPU acceleration:
    • NVIDIA: CUDA-compatible GPU
    • Apple: M1/M2/M3 chip (Metal)

🤖 Supported Models

Architectures with full support:

  • Llama (1, 2, 3), Mistral, Mixtral, DeepSeek, Yi, SmolLM2, CodeLlama
  • Qwen2, Qwen2.5, and their MoE variants
  • Qwen3, Qwen3 MoE

Formats:

  • GGUF (quantized models)
  • SafeTensors

🛡️ Privacy and Security

  • All processing happens locally on your device
  • No telemetry or data collection
  • No internet connection required for inference
  • Content Security Policy (CSP) enforced

🙏 Acknowledgments

This project is built on top of excellent open-source work:

  • Candle — ML framework for Rust (HuggingFace)
  • Tauri — Desktop application framework
  • Svelte — Frontend framework
  • Tokenizers — Fast tokenization (HuggingFace)

See THIRD_PARTY_LICENSES.md for full dependency attribution.

📄 License

Apache-2.0 — see LICENSE

Copyright (c) 2025 FerrisMind