Beta v0.9.9 · Zen Kernel · Local AI · Zero Telemetry

AI-NATIVE LINUX OS.

AI is not an app you open — it is the invisible layer underneath everything. The shell knows what you mean. The file system knows what your files contain. The editor knows what you are building.

Built on Arch Linux · Zen Kernel · Ollama · SLIWM. Yours, forever.

📦 v0.9.9 · Dec 2025 · 🔒 100% Local AI · Zen Kernel · ∞ Freedom
slyme@slyme-os:~ — slyme-shell v1.0
$ slyme-ai status
● slyme-ai daemon active (PID 1337)
Model: mistral:7b-instruct · VRAM: 4.2GB / 8GB
Fast model: phi3:mini · Latency: ~80ms
Socket: /run/slyme-ai.sock · ONLINE
$ !! find all python files modified today larger than 10kb
→ Translating to command...
find . -name "*.py" -newermt today -size +10k 2>/dev/null
[confirm? y/N] y
./src/daemon.py · ./src/shell_hook.py · ./lib/context.py
$ slyme-find "notes about async rust"
Searching semantic index (47,291 files)...
→ 3 matches · 98% / 91% / 84% relevance
~/notes/rust-async-patterns.md
~/projects/slyme-ai/NOTES.md
$
420MB ISO Size
100% Local AI — No Cloud
0 Telemetry / Tracking
Your Freedom

The slyme-ai Daemon

Every component of Slyme OS talks to a single AI middleware daemon over a Unix socket. Swap models, add routing, scale inference — without touching anything else.
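The dual-model routing the daemon performs can be sketched as a simple per-request heuristic. The model names come from this page; the 200-character cutoff and the `needs_reasoning` flag are illustrative assumptions, not the daemon's actual rule:

```python
def pick_model(prompt: str, needs_reasoning: bool = False) -> str:
    """Route a request to the fast or deep model.

    phi3:mini (~80ms) handles short shell-style prompts; mistral:7b-instruct
    takes anything long or explicitly flagged as complex. The length
    threshold here is an assumption for illustration.
    """
    if needs_reasoning or len(prompt) > 200:
        return "mistral:7b-instruct"
    return "phi3:mini"
```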


  • Dual Model Routing
    Fast path (Phi-3 mini, ~80ms) for shell suggestions. Deep path (Mistral 7B) for complex reasoning. Automatically selected per request.
  • 🧠 System Context Object
    Every AI call is auto-enriched with the current directory, recent commands, active file, clipboard, and focused window. You never need to explain what you're doing.
  • 🔌 Universal Unix Socket API
    Any script, plugin, or app sends a prompt via /run/slyme-ai.sock and gets a response. AI integration = one line of code for any tool.
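The "one line of code" integration above can be sketched as a small Unix-socket client. The socket path is from this page; the wire format (newline-delimited JSON, `{"prompt": ...}` in, `{"response": ...}` out) is an assumption — check the daemon's actual protocol docs:

```python
import json
import socket

SLYME_SOCK = "/run/slyme-ai.sock"  # daemon socket path from the docs

def ask(prompt: str, sock_path: str = SLYME_SOCK) -> str:
    """Send one prompt to the slyme-ai daemon and return its reply.

    Assumes a newline-delimited JSON protocol; the real daemon may
    use a different framing.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
        client.connect(sock_path)
        client.sendall((json.dumps({"prompt": prompt}) + "\n").encode())
        reply = client.makefile("r").readline()
    return json.loads(reply)["response"]
```

Any tool that can open a Unix socket — shell script, editor plugin, window manager hook — can use the same call.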
SLYME-AI DAEMON
systemd service · /run/slyme-ai.sock
Clients: 🐚 slyme-shell · 🪟 SLIWM WM · 📁 slyme-find · 📋 slyme-clip · ✏️ slyme.nvim · 📊 slyme-mon

Every Layer, AI-Aware

Not a distro with Ollama bolted on. An OS re-architected from the ground up so intelligence runs through every component.

01 🐚
AI Shell Intelligence
Type !! + plain English to generate commands. Auto-explains errors. Predicts next command from context. Session recap in one command.
02 🪟
Context-Aware WM
SLIWM detects your work mode — Code, Write, Research, Comms — and auto-arranges windows. Global AI scratchpad on any key with full context injected.
03 🔍
Semantic File Search
slyme-find indexes your entire home directory with embeddings. Search 50,000 files with natural language — finds what files mean, not just their names.
04 📋
AI Clipboard
slyme-clip persists everything you copy, auto-tags by category, and lets you search history semantically. "That Go snippet from last week" — it finds it.
05 ✏️
slyme.nvim
Local LLM code completion, inline error explanations, git commit generation, test writer, refactor assistant — all offline, all private, Copilot-class quality.
06 🛡️
Zero Telemetry
Your prompts, your files, your history — none of it leaves your machine. Ever. No cloud dependency, no API keys, no accounts required. Air-gap ready.
07
Zen Kernel
Low-latency optimized kernel for real-time AI inference. Reduced jitter, better VRAM scheduling, snappier IPC — built for running LLMs on consumer hardware.
08 📊
Smart System Monitor
slyme-monitor doesn't just show numbers — it interprets them. "Slow because ffmpeg is competing with Ollama for VRAM. Kill it or switch to phi3:mini."
09 🔧
Suckless Core
DWM-based SLIWM, ST terminal, dmenu. Minimal C, simple source, fully hackable. The base stays lean — every AI feature is modular and opt-in.
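The semantic search described in card 03 can be illustrated with a tiny cosine-similarity ranker. The real slyme-index uses a neural embedding model (nomic-embed-text, per the roadmap) over a SQLite vector store; the bag-of-words "embedding" here is purely a stand-in for demonstration:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words counts. Stands in for the
    neural embeddings the real indexer produces."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, index: dict, k: int = 3) -> list:
    """Rank indexed files by similarity to a natural-language query."""
    q = embed(query)
    ranked = sorted(index.items(), key=lambda kv: cosine(q, kv[1]), reverse=True)
    return [path for path, _ in ranked[:k]]
```

Building the index is one comprehension over file contents; the query then matches on meaning-bearing words rather than filenames.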

Install Slyme OS

Download the ISO, verify integrity, and boot. The installer detects your GPU, recommends models, and downloads them during setup. First boot is AI-ready.

1
Download & Verify
Get the ISO from the official release and verify the SHA256 checksum before flashing.
sha256sum slyme-os-v0.9.9.iso
2
Flash to USB
Use dd or balenaEtcher to write the ISO to a USB drive (4GB+ recommended).
dd if=slyme-os-v0.9.9.iso of=/dev/sdX bs=4M status=progress conv=fsync
3
Boot & Install
Boot from USB. The terminal installer detects your GPU, selects optimal models, and sets up slyme-ai automatically.
4
First Boot
slyme-ai daemon starts automatically. Your shell, WM, and all tools are AI-ready immediately. No configuration needed.
slyme-ai status
Component   Minimum             Recommended
CPU         x86_64, 4 cores     8+ cores, AVX2
RAM         8GB                 16GB+
VRAM        4GB (phi3:mini)     8GB+ (mistral:7b)
Storage     40GB SSD            100GB NVMe
GPU         Optional (CPU ok)   NVIDIA / AMD
NVIDIA RTX 30/40 series ✓ · AMD RX 6000/7000 ✓ · CPU-only mode ✓ · Apple Silicon: soon
No GPU? No problem. The installer detects your hardware and auto-selects the optimal model — phi3:mini on CPU-only systems runs at ~2 tok/s. Still useful, still private.
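The installer's model auto-selection can be sketched as a VRAM lookup mirroring the requirements table above. The thresholds follow that table; the function name and exact cutoffs are assumptions about the installer's logic, not its real implementation:

```python
def select_model(vram_gb=None):
    """Pick a default model from detected VRAM (None = no GPU).

    Mirrors the hardware table: 8GB+ runs mistral:7b-instruct, 4GB+
    runs phi3:mini, and CPU-only systems fall back to phi3:mini
    (~2 tok/s). Thresholds are illustrative assumptions.
    """
    if vram_gb is None:          # no GPU detected: CPU-only mode
        return "phi3:mini"
    if vram_gb >= 8:
        return "mistral:7b-instruct"
    return "phi3:mini"
```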

Development Roadmap

18 months to the most AI-native Linux experience in existence. Each phase ships something real, not vaporware.

Phase 0
Now · Weeks 1–3
Foundation — Cleanup & Community Launch
Real SHA256 verification · Documentation site · GitHub org + README · Hardware compat table · r/unixporn launch post · Demo video
Phase 1
Months 1–2
The AI Daemon & Shell Intelligence
slyme-ai daemon (Python/Go) · Unix socket API · Dual model routing · Natural language shell (!!) · Error explainer · Ghost suggestions · slyme-recap
Phase 2
Months 2–4
Window Manager & Clipboard Intelligence
Context-aware workspaces · AI Scratchpad (Super+Space) · Smart window titling · slyme-clip persistent clipboard · Semantic clipboard search
Phase 3
Months 3–5
Semantic File System
slyme-index embedding daemon · nomic-embed-text integration · SQLite vector store · slyme-find natural language search · ranger AI file summaries
Phase 4
Months 4–6
slyme.nvim — Editor Intelligence
Local LLM code completion · Inline error explainer · Git commit generator · Test writer · Refactor assistant · slyme-review PR reviewer
Phase 5
Months 6–9
System Intelligence & Polish
slyme-monitor TUI · Intelligent log analysis · Whisper voice interface · New terminal installer · Hardware auto-config · v2.0 launch

Why Slyme OS Exists

The terminal generation was told AI means giving up privacy, paying monthly fees, and trusting corporations with their most sensitive work. We built Slyme OS to prove that's a lie.

Intelligence should be local. Privacy should be absolute. Your tools should make you faster, not dependent. The future of computing is not cloud-first — it is yours-first.

Local AI — no cloud, no keys, no accounts
Zero telemetry — your data never leaves your machine
Open source — read, fork, patch, own everything
Suckless — minimal, auditable, hackable source
Free base — core OS always free, forever
No subscriptions for core features
No corporate cloud AI backends
No bloat, no defaults you didn't choose