Shell Sage is an open-source, AI-powered terminal companion designed to enhance command-line interface (CLI) workflows through integration with various local and cloud AI models. This tool aims to make CLI operations more intuitive and safer for developers by generating shell commands, debugging errors, and offering a seamless switch between local privacy and cloud performance.
Installation: Users can set up Shell Sage by cloning the GitHub repository and running the installation script. Setup requires an internet connection, Python 3.8 or higher, and at least 4 GB of RAM for reasonable performance with local models.
$ git clone https://github.com/dheerajcl/Shellsage.git
$ cd Shellsage && ./install.sh
Local and Cloud Processing: Shell Sage supports both local processing via Ollama integration and multiple cloud providers including Groq, OpenAI, Anthropic, Fireworks, and DeepSeek. This flexibility allows users to choose between privacy-first local processing and the faster, internet-dependent API mode.
$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama pull <model_name>
$ shellsage config --mode local
Interactive Setup Wizard: The setup wizard guides users through configuring their operation mode, selecting a local model, or choosing a cloud API provider.
$ shellsage setup
Error Diagnosis and Safe Execution: Shell Sage enhances CLI safety by detecting potentially dangerous commands and suggesting safer alternatives. It also includes interactive confirmation for destructive commands to prevent accidental data loss.
$ rm -rf /important-folder
# Shell Sage flags the absolute path and suggests: rm -rf ./important-folder
Natural Language Command Processing: Users can input commands in natural language, which Shell Sage translates into appropriate shell commands. This feature supports complex workflows, such as backing up specific file types.
$ shellsage ask "backup all .js files"
Hybrid Modes: Users can seamlessly switch between local and cloud modes, depending on their needs for privacy or performance.
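Based on the `--mode` flag shown earlier, a mode switch might look like the following sketch; `--mode cloud` is assumed by symmetry with the documented `--mode local` and may differ in the actual CLI:

```shell
# Privacy-first: process prompts on-device via Ollama
$ shellsage config --mode local

# Performance-first: route prompts to a configured cloud provider
# (the "cloud" value is an assumption, not confirmed by the docs above)
$ shellsage config --mode cloud
```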
Multi-Provider Support: The tool supports a variety of AI providers, allowing users to switch among different cloud services or local models easily.
Model Management: Shell Sage facilitates easy management of multiple local and cloud models, enabling users to switch according to the task requirements.
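For local models, management largely delegates to the Ollama CLI. The commands below use Ollama's `list` and `pull` subcommands, with `llama3` as an example model name only:

```shell
# See which models are already available locally
$ ollama list

# Fetch another model to switch to (llama3 is just an example name)
$ ollama pull llama3
```

After pulling a new model, re-running `shellsage setup` lets you select it through the wizard described above.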
How does local mode differ from cloud mode? Local mode processes data on the user's machine, ensuring privacy and data security, while cloud mode processes data on external servers, which may offer greater computational power and speed.
Which models are officially supported? Shell Sage supports several models in both local and cloud configurations, including but not limited to models from Ollama, Groq, OpenAI, Anthropic, Fireworks, and DeepSeek.
How do I switch providers? You can switch providers through the interactive setup wizard or by editing the configuration files directly, giving you the flexibility to choose the most suitable AI service for each task.
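As an illustration of the configuration-file route, a Shell Sage config might contain entries like the following; the key names and file location here are hypothetical, so consult the repository's documentation for the real schema:

```shell
# Hypothetical config entries (key names are assumptions, not the documented schema)
MODE=cloud            # "local" or "cloud"
PROVIDER=groq         # groq | openai | anthropic | fireworks | deepseek
API_KEY=your-key-here # credential for the chosen cloud provider
```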
Shell Sage is committed to enhancing the productivity and safety of developers working in command-line environments, providing a versatile tool that adapts to various privacy and performance needs.