RemiAI Open Source Framework

License: MIT | Build: Electron | Model: GGUF

A "No-Setup" Local AI Framework for Students

This project is an open-source, offline AI chat application designed for students and colleges. It lets you run powerful LLMs (such as Llama 3 and Mistral) on your laptop without a GPU, an internet connection, Python, or a complicated installation.

Note: No GPU is required. The app runs response generation (inference) on your laptop's CPU. If you modify the project code to use another model, make sure you use .gguf-format weights only; regular weights such as .safetensors are not supported by this application.

πŸš€ Quick Start (One-Line Command)

If you have Git and Node.js installed, open your terminal (Command Prompt or PowerShell) and run:

For PowerShell:

git clone https://huggingface.co/remiai3/RemiAI_Framework; cd RemiAI_Framework; git lfs install; git lfs pull; npm install; npm start

For CMD:

git clone https://huggingface.co/remiai3/RemiAI_Framework && cd RemiAI_Framework && git lfs install && git lfs pull && npm install && npm start

⚠️ IMPORTANT: Git LFS Required

This repository uses Git Large File Storage (LFS) for the AI engine binaries. If you download the ZIP or clone without LFS, the app will not work (Error: "RemiAI engine missing").

πŸ“¦ Packaging into an .exe Installer

To pack the entire project into a Windows installer, run:

npm run dist

This will create an installer in the release folder that you can share with friends!

If packaging or bundling fails, open PowerShell as an administrator and run the command above again; this usually resolves permission-related errors.
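The `dist` script and the `release` output folder suggest an electron-builder setup. A minimal `package.json` excerpt for such a config might look like the sketch below; every value here is an assumption, not the project's actual configuration:

```json
{
  "scripts": {
    "dist": "electron-builder"
  },
  "build": {
    "appId": "com.remiai.app",
    "directories": { "output": "release" },
    "win": { "target": "nsis" },
    "asarUnpack": ["**/*.gguf", "**/*.dll", "**/*.exe"]
  }
}
```

Unpacking the model and engine binaries from the asar archive (as sketched in `asarUnpack`) is a common requirement when an Electron app spawns bundled executables.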


πŸ’» Manual Installation

1. Requirements

  • Node.js (with npm)
  • Git
  • Git LFS

2. Download & Setup

  1. Download the project zip (or clone the repo).
  2. Extract the folder.
  3. Open a terminal inside the extracted folder.
  4. Pull Engine Files (Critical Step):
    git lfs install
    git lfs pull
    
  5. Install the Node.js dependencies:
    npm install
    

3. Run the App

Simply type:

npm start

The application will launch, the AI engine will start in the background, and you can begin chatting immediately!


πŸ“¦ Features

  • Zero Python Dependency: We use compiled binaries (.dll and .exe included) so you don't need to install Python, PyTorch, or set up virtual environments.
  • Plug & Play Models: Supports .gguf format.
    • Want a different model? Download any .gguf file, rename it to model.gguf, and place it in the project root.
  • Auto-Optimization: Automatically detects your CPU features (AVX vs AVX2) to give you the best speed possible.
  • Privacy First: Runs 100% offline. No data leaves your device.
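The AVX-vs-AVX2 auto-optimization can be pictured as picking the fastest compatible binary from the CPU's feature flags. This is a sketch only; the binary file names are assumptions:

```javascript
// Illustrative sketch of auto-optimization: choose the fastest bundled
// engine binary the CPU supports (file names are assumptions).
function pickEngineBinary(cpuFlags) {
  if (cpuFlags.includes("avx2")) return "engine-avx2.exe";
  if (cpuFlags.includes("avx")) return "engine-avx.exe";
  return "engine-generic.exe"; // safe fallback for older CPUs
}
```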

❓ Troubleshooting

Error: "RemiAI Engine Missing"

This means you downloaded the Git LFS "pointer" files (about 130 bytes each) instead of the real engine binaries. Fix:

  1. Open terminal in project folder.
  2. Run git lfs install
  3. Run git lfs pull
  4. Restart the app.

πŸ› οΈ Credits & License

  • Created By: RemiAI Team
  • License: MIT License.
    • You are free to rename, modify, and distribute this application as your own project!

Note on Models: The application uses .gguf-format weights only; this keeps it CPU-friendly so it can run without any GPU.

πŸ“Š Model Details

  • Format: GGUF
  • Model size: 3B params
  • Architecture: gemma2
  • Base model: google/gemma-2-2b (quantized)