Execute commands via LLM with natural language.
The `cicerone do` command enables natural language interaction with your infrastructure. It parses your query, routes it to the appropriate admin host, and returns actionable responses.
Query the default admin.
```
$ cicerone do "check system status"
Querying default admin (darth-ai)...

System Status for darth-ai:
• CPU: 15% usage
• Memory: 3.2 GB / 16 GB (20%)
• Disk: 45 GB / 200 GB (22%)
• Uptime: 21 days
```
Target a specific admin host.
```
$ cicerone do "local: what processes are using the most memory?"
Querying local...

Top memory consumers:
1. java (PID 1234) - 2.1 GB
2. postgres (PID 5678) - 1.8 GB
3. nginx (PID 9012) - 512 MB
```
When an admin has a library connected, responses include context from your documents.
```
$ cicerone do "prod-ai: how do I restart the API?"

Based on your runbooks:
1. SSH to production:
   $ ssh deploy@prod-server
2. Navigate to API directory:
   $ cd /opt/api
3. Restart service:
   $ systemctl restart api-service

Reference: runbooks/api-restart.md
```
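A library-aware response like the one above presumably amounts to retrieving relevant document snippets and including them in the prompt sent to the LLM. The sketch below illustrates that pattern in the most naive way possible; everything in it (function name, keyword-overlap scoring, prompt layout) is invented for illustration and is not cicerone's actual implementation, which would likely use embedding-based retrieval.

```python
def build_prompt(query: str, library: dict[str, str], top_n: int = 2) -> str:
    """Pick the library documents sharing the most words with the
    query and prepend them as context. Illustrative sketch only."""
    q_words = set(query.lower().split())
    # Rank documents by crude keyword overlap with the query
    scored = sorted(
        library.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in scored[:top_n])
    return f"Context from your documents:\n{context}\n\nQuestion: {query}"

prompt = build_prompt(
    "how do I restart the API?",
    {"runbooks/api-restart.md": "To restart the API run systemctl restart api-service"},
)
```

The document name is carried through to the prompt, which is what lets the response cite `runbooks/api-restart.md` as a reference.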
Query all available admins for comparison.
```
$ cicerone do "compare memory usage across all servers"
Querying: darth-ai, local, prod-ai

Results:
• darth-ai: 32 GB total, 12 GB used (37%)
• local: 16 GB total, 8 GB used (50%)
• prod-ai: 64 GB total, 48 GB used (75%) ⚠️ HIGH

Recommendation: prod-ai is using 75% memory.
```
| Syntax | Description | Example |
|---|---|---|
| `<query>` | Query default admin | `cicerone do "check disk"` |
| `<admin>: <query>` | Query specific admin | `cicerone do "local: check disk"` |
| `ask <admin> to <query>` | Natural language format | `cicerone do "ask local to check disk"` |
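Resolving these three syntaxes to an (admin, query) pair could look roughly like the sketch below. This is a hypothetical illustration, not cicerone's actual parser; the function name, regexes, and default admin are assumptions.

```python
import re

def parse_query(raw: str, default_admin: str = "darth-ai") -> tuple[str, str]:
    """Resolve a `cicerone do` argument to an (admin, query) pair.
    Hypothetical sketch; the real CLI may tokenize differently."""
    # "ask <admin> to <query>" natural language form
    m = re.match(r"ask\s+(\S+)\s+to\s+(.+)", raw, re.IGNORECASE)
    if m:
        return m.group(1), m.group(2)
    # "<admin>: <query>" prefix form
    m = re.match(r"(\S+):\s*(.+)", raw)
    if m:
        return m.group(1), m.group(2)
    # Bare query goes to the default admin
    return default_admin, raw

parse_query("check disk")              # ('darth-ai', 'check disk')
parse_query("local: check disk")       # ('local', 'check disk')
parse_query("ask local to check disk") # ('local', 'check disk')
```

Checking the `ask … to …` form first matters: otherwise a query like `ask local to check disk` has no colon and would fall through to the default admin.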
Before using `cicerone do`, you need to configure admin hosts:

```
# Create local admin (Ollama running locally)
$ cicerone admin new local --llm-url http://localhost:11434

# Create remote admin
$ cicerone admin new prod-ai --host 192.168.1.100 --groups devops

# View all admins
$ cicerone admin show

# Test query
$ cicerone do "local: what's the system status?"
```
See Admin Commands for more details on configuring admins.