OmniLLM is an offline, LAN-hosted web application that consolidates multiple self-hosted LLMs, letting users interact with different AI models without internet access.
Key Features:
· Multiple LLMs – Supports models such as Llama 2, Mistral, and Gemma.
· Offline & LAN-Based – Runs entirely on local hardware, accessible via a web interface.
· Model Switching – Users can choose between different LLMs for different tasks (see the routing sketch after this list).
· Fast & Secure – No cloud dependency, ensuring privacy and low-latency responses.
· User-Friendly Interface – Simple UI for prompt input and model management.
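The model-switching flow can be illustrated with a minimal gateway sketch. It assumes the models are served by a local Ollama instance (the project description does not name its backend); the `/api/chat` endpoint, model tags, and port are illustrative, not OmniLLM's actual API.

```python
# Minimal LAN gateway sketch: one HTTP endpoint that forwards a prompt to a
# user-selected local model. Assumes an Ollama server on the same machine
# (http://localhost:11434); endpoint path and model list are illustrative.
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed local backend
AVAILABLE_MODELS = {"llama2", "mistral", "gemma"}   # illustrative model tags

@app.route("/api/chat", methods=["POST"])
def chat():
    body = request.get_json(force=True)
    model = body.get("model", "llama2")
    if model not in AVAILABLE_MODELS:
        return jsonify({"error": f"unknown model: {model}"}), 400
    # Forward the prompt to the local model server; no external network calls.
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": body.get("prompt", ""), "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return jsonify({"model": model, "response": resp.json().get("response", "")})

if __name__ == "__main__":
    # Bind to 0.0.0.0 so other machines on the LAN can reach the gateway.
    app.run(host="0.0.0.0", port=8080)
```

Under these assumptions, a client on the same LAN would POST `{"model": "mistral", "prompt": "..."}` to `http://<server-ip>:8080/api/chat`, and switching models is just a matter of changing the `model` field.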
Project Lead: Haseeb Sajid
Supervisor: Asif Muhammad