Single binary. Zero dependencies. Health probes, AI crash diagnosis, Telegram alerts, Prometheus metrics, reverse proxy, cloud fleet management, TUI dashboard -- all in Rust.
id     name         status   pid     cpu     mem     uptime
----------------------------------------------------------------
a1b2   api-server   online   12345   12.3%   128MB   2d 14h
c3d4   worker       online   12346   3.1%    64MB    2d 14h
e5f6   worker       online   12347   2.8%    62MB    2d 14h
g7h8   frontend     online   12348   1.0%    32MB    1d 08h
Every feature you need to manage processes in production. No plugins, no runtime dependencies.
HTTP, TCP, and script-based health checks. Processes only go online after probes pass.
Exponential backoff (100ms to 30s). Circuit breaker stops restart loops.
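The restart schedule can be sketched in a few lines of Python (illustrative only, not mhost's actual implementation):

```python
# Capped exponential backoff: 100ms, 200ms, 400ms, ... up to a 30s ceiling.
# Illustrative sketch only -- not mhost's actual code.
def backoff_delays(base_ms=100, cap_ms=30_000, max_attempts=10):
    delay = base_ms
    for _ in range(max_attempts):
        yield min(delay, cap_ms)
        delay *= 2

delays = list(backoff_delays())
print(delays[:4])   # [100, 200, 400, 800]
# After max_attempts consecutive failures, a circuit breaker stops
# restarting so a crash-looping process can't spin forever.
```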
Dependency ordering with topological sort. Start databases before APIs.
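Dependency ordering boils down to a topological sort. A minimal sketch using Kahn's algorithm (illustrative, not mhost's code):

```python
from collections import deque

# deps maps each process to the processes it depends on.
# Illustrative sketch of topological start ordering, not mhost's code.
def start_order(deps):
    procs = set(deps) | {d for ds in deps.values() for d in ds}
    indegree = {p: 0 for p in procs}
    dependents = {p: [] for p in procs}
    for proc, requires in deps.items():
        for r in requires:
            indegree[proc] += 1
            dependents[r].append(proc)
    ready = deque(sorted(p for p in procs if indegree[p] == 0))
    order = []
    while ready:
        p = ready.popleft()
        order.append(p)
        for d in dependents[p]:
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    if len(order) != len(procs):
        raise ValueError("dependency cycle detected")
    return order

# The database starts before the API, which starts before the worker.
print(start_order({"api": ["db"], "worker": ["db", "api"]}))
```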
Telegram, Slack, Discord, Email, PagerDuty, Teams, Ntfy, Webhook.
FTS5 full-text search. JSON auto-parse. GELF, Loki, Elasticsearch sinks.
CPU, memory, uptime, restarts exported to /metrics. Alert rules.
Host-based routing, load balancing, auto-TLS via Let's Encrypt.
Git pull + pre/post hooks + graceful reload. One-command rollback.
Split-pane terminal UI. Process table, CPU/memory sparklines, live log tail.
Control processes from Telegram or Discord. Role-based permissions.
AI agent that monitors, diagnoses, and acts on its own. Configurable autonomy.
Real-time browser UI. Live SSE log streaming. Dark theme.
Rolling restarts with health-check gate. Users never notice.
Built-in file watcher + auto-restart. Replaces nodemon.
Built-in HTTP benchmark with concurrent workers. Reports RPS, latency.
Scale up a canary, monitor errors, auto-promote or rollback.
Capture full process state as a snapshot. Restore it instantly.
Check certificate expiry for any URL. Warns when certs expire.
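A check like this can be done with a plain TLS handshake. The sketch below shows the general idea in Python (an assumed approach, not mhost's implementation):

```python
import socket
import ssl
from datetime import datetime, timezone

# Sketch of a certificate-expiry check (assumed approach, not mhost's code):
# complete a TLS handshake, read the peer certificate's notAfter field,
# and report how many days remain.
def parse_not_after(not_after: str) -> datetime:
    # Certificates report expiry as e.g. "Jun  1 12:00:00 2030 GMT"
    dt = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return dt.replace(tzinfo=timezone.utc)

def days_until_expiry(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    remaining = parse_not_after(cert["notAfter"]) - datetime.now(timezone.utc)
    return remaining.days
```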
25 endpoints with bearer token auth, WebSocket streaming, HMAC webhooks.
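HMAC webhook verification generally works like this (the scheme below is a generic SHA-256 sketch; mhost's exact header name and format may differ):

```python
import hashlib
import hmac

# Generic HMAC-SHA256 webhook verification sketch -- header name and
# signing scheme are assumptions, not mhost's documented format.
def verify_signature(secret: bytes, body: bytes, signature_hex: str) -> bool:
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(expected, signature_hex)

sig = hmac.new(b"s3cret", b'{"event":"crash"}', hashlib.sha256).hexdigest()
print(verify_signature(b"s3cret", b'{"event":"crash"}', sig))  # True
```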
Direct integration with 10 cloud providers. Provision, deploy, scale.
The first process manager with built-in LLM capabilities. Diagnose crashes, query logs in English, generate configs, get optimization suggestions.
Crash diagnosis (database.js:42): Connection pool exhausted after 15 concurrent requests exceeded the pool limit of 10.
Suggested config: max_connections: 25, idle_timeout: 30000
Alert rule: memory > 256MB for 5m
Log query answer: 128.4 MB across 1 instance. The workers are at 64 MB each.

An autonomous AI agent with a persistent brain. It watches your processes 24/7, diagnoses crashes before you notice, heals known issues instantly, and learns from every fix.
Polls all processes every 30s. Reads logs, checks health scores, detects anomalies.
Brain checks memory: known pattern? Apply playbook instantly. Unknown? Consult LLM with full context.
Restarts, scales, rolls back. Supervised mode asks you first. Autonomous mode just does it.
Every successful fix becomes a new playbook. Next time, zero API cost. The brain gets smarter.
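The decision loop above can be sketched as a playbook lookup with an LLM fallback (a toy sketch; pattern strings and actions here are illustrative):

```python
# Toy sketch of the brain's decision loop: match a crash against known
# playbooks first, and consult an LLM only for unknown patterns.
# Patterns and actions are illustrative, not mhost's actual playbooks.
PLAYBOOKS = {
    "EADDRINUSE": "kill stale listener, then restart",
    "OOMKilled": "restart with a higher memory limit",
}

def diagnose(log_line, llm=lambda text: "consulting LLM..."):
    for pattern, action in PLAYBOOKS.items():
        if pattern in log_line:
            return action          # known pattern: instant fix, zero API cost
    return llm(log_line)           # unknown: escalate with full context

print(diagnose("Error: listen EADDRINUSE :::3000"))
```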
Port conflicts, OOM kills, connection refused, permission errors, disk full, crash loops, memory leaks. All fixed automatically — zero API calls.
Every process gets scored based on uptime, restarts, errors, and memory trends. Score drops below 50? The brain investigates automatically.
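A scoring function of that shape might look like this (a hypothetical formula for illustration only; mhost's actual weighting is internal and may differ):

```python
# Hypothetical health-score formula -- the weights below are made up
# for illustration and are not mhost's actual scoring.
def health_score(uptime_ratio, restarts_last_hour, error_rate, mem_trend_pct):
    score = 100.0
    score -= (1.0 - uptime_ratio) * 40        # downtime penalty
    score -= min(restarts_last_hour, 5) * 8   # restart churn
    score -= min(error_rate * 100, 20)        # log error rate
    score -= max(mem_trend_pct, 0) * 0.5      # rising memory trend
    return max(score, 0.0)

# A flapping, leaky process drops below the investigation threshold of 50.
print(health_score(uptime_ratio=0.9, restarts_last_hour=4,
                   error_rate=0.1, mem_trend_pct=30))
```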
Successful fixes become reusable playbooks. The brain remembers 500 incidents, and known patterns are healed in milliseconds — no LLM needed.
Autonomous — heals and notifies. Supervised — asks before acting. Manual — only responds to your messages. You choose the leash.
Web dashboard, zero-downtime deploys, live log streaming, and a dev mode that replaces nodemon. Everything from development to production.
mhost dashboard
Real-time browser UI accessible from any device. Process table, log viewer, action buttons, SSE live streaming. Dark theme, responsive, zero dependencies.

mhost reload
Restart without dropping a single request. New instances start, health checks pass, then old instances are killed. Users never notice.

--follow
Real-time log tail like tail -f. Timestamps, process names, colored JSON parsing, grep filtering — all streaming live.

mhost dev
Replaces nodemon + dotenv + concurrently. File watcher, auto-restart, .env loading, colorized inline logs. One command for development.
Connect your servers with mhost connect and get a unified control plane for your entire fleet.
See all servers worldwide with real-time status, metrics, and health scores.
Deploys, diagnoses, optimizes, and auto-heals across your entire fleet.
Canary, blue-green, rolling deploys with auto-rollback on failure.
Collaborative incident response with AI-powered diagnosis and timeline.
Track spending across providers. Get actionable recommendations to save money.
Logs, metrics, and alerts from every server in one unified dashboard.
Server        Status   mhost   Processes   CPU
----------------------------------------------------
prod-api      ● up     yes     4/4         45%
prod-worker   ● up     yes     3/3         72%
prod-db       ● up     yes     1/1         12%
Start free. Scale as you grow. No hidden fees.
brew install maqalaqil/tap/mhost
npm install -g @maqalaqil93/mhost
cargo install mhost
curl -fsSL mhostai.com/install.sh | sh
irm mhostai.com/install.ps1 | iex
git clone && cargo build --release

Define processes, health checks, groups, notifications, metrics, deploy hooks, and AI config in a single TOML, YAML, or JSON file. Environment variable expansion with ${VAR} syntax.
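A minimal config file might look like this (the key names below are illustrative assumptions, not the documented schema):

```toml
# Illustrative sketch only -- key names are assumptions, not the documented schema.
[processes.db]
command = "postgres -D /var/lib/pg"

[processes.api-server]
command = "node server.js"
depends_on = ["db"]
env = { DATABASE_URL = "${DATABASE_URL}" }

[processes.api-server.health]
type = "http"
url = "http://localhost:3000/healthz"
interval = "10s"
```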
Modular Rust workspace. Each crate has one responsibility. The CLI and daemon compile into two binaries with zero runtime dependencies.
Get help, share ideas, and connect with other mhost users on Discord.
Join Discord

One command to install. One file to configure. Zero dependencies.