Promptheus Documentation
Welcome to the comprehensive documentation for Promptheus - the AI-powered prompt optimization tool that transforms raw prompts into polished, effective queries for any LLM.
🔮 What is Promptheus?
Promptheus uses advanced AI to analyze your basic prompts and transform them into detailed, structured instructions that get superior results from any large language model. Think of it as prompt alchemy - turning base prompts into gold.
Core Capabilities
- Adaptive Intelligence: Automatically detects task types and applies appropriate refinement strategies
- Multi-Provider Support: Works with six major AI providers (Google, Anthropic, OpenAI, Groq, Qwen, and GLM)
- Interactive Refinement: Asks clarifying questions to understand your intent
- Powerful CLI: Command-line interface with keyboard shortcuts and pipeline integration
- Beautiful Web UI: Modern alchemical-themed interface for prompt enhancement
- History Management: Track and reload your prompt evolution
- Dynamic Model Discovery: Fetches current model information from models.dev API
Installation
User Installation
Install Promptheus via pip for standard usage:
pip install promptheus
Verify the installation:
promptheus --version
Developer Installation
For development work, install from source:
# Clone the repository
git clone https://github.com/abhichandra21/Promptheus.git
cd Promptheus
# Install in editable mode with dev dependencies
pip install -e ".[dev]"
# Run tests to verify
pytest -q
System Requirements
- Python 3.10 or higher
- At least one configured LLM provider API key
- Internet connection for API calls
Configuration
API Key Setup
Promptheus requires at least one AI provider API key. Create a .env file in your project directory:
# Google AI Studio
GOOGLE_API_KEY=your_google_key_here
# Anthropic Claude
ANTHROPIC_API_KEY=your_anthropic_key_here
# OpenAI
OPENAI_API_KEY=your_openai_key_here
# Groq
GROQ_API_KEY=your_groq_key_here
# Alibaba Cloud Qwen (DashScope)
DASHSCOPE_API_KEY=your_qwen_key_here
# Zhipu AI GLM
ZHIPUAI_API_KEY=your_glm_key_here
Interactive Authentication
Use the interactive auth command for guided setup:
# Interactive provider selection
promptheus auth
# Specific provider
promptheus auth google
promptheus auth anthropic
# Skip validation (for testing)
promptheus auth openai --skip-validation
Environment File Discovery
Promptheus searches upward from your current directory for .env files, stopping at project root markers:
- `.git` directory (repository root)
- `pyproject.toml` (Python project root)
- `setup.py` (Python project root)
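For example, running Promptheus from a nested subdirectory still picks up the `.env` at the repository root (a small illustration of the search behavior above; the directory names are hypothetical):
# myproject/
# ├── .git/            <- search stops here (repository root)
# ├── .env             <- this file is loaded
# └── src/app/         <- current working directory
cd myproject/src/app
promptheus "Summarize the project README"   # reads API keys from myproject/.env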
Quick Start
Basic Usage
Transform a prompt with a single command:
promptheus "Write a blog post about AI ethics"
Skip Questions Mode
For quick enhancements without interactive questions:
promptheus -s "Explain quantum computing"
Interactive Mode
Start a persistent session for multiple prompts:
promptheus
Specify Provider
Choose a specific AI provider:
promptheus --provider google "Create a marketing plan"
CLI Basics
Command Structure
The basic Promptheus command structure:
promptheus [options] [prompt]
Input Methods
Promptheus supports multiple input methods:
| Method | Syntax | Example |
|---|---|---|
| Inline | `promptheus "text"` | `promptheus "Write a haiku"` |
| File flag | `-f <path>` | `promptheus -f prompt.txt` |
| @ shorthand | `@<path>` | `promptheus @prompt.txt` |
| Standard input | Pipe | `cat prompt.txt \| promptheus` |
Operational Modes
Adaptive Mode (Default)
Promptheus automatically detects whether your task is analytical or creative:
- Analysis tasks: Skips questions, applies direct enhancement
- Generation tasks: Asks clarifying questions for better results
# Adaptive mode detects task type
promptheus "Analyze this codebase" # Skips questions
promptheus "Write a story" # Asks questions
Skip Questions Mode
Bypass questions and enhance directly:
promptheus -s "Your prompt here"
promptheus --skip-questions "Your prompt here"
Refine Mode
Force interactive questions regardless of task type:
promptheus -r "Your prompt here"
promptheus --refine "Your prompt here"
Single Execution Mode
Process one prompt and exit:
promptheus "Generate a function"
Interactive Mode (REPL)
Launch interactive mode for a persistent session:
promptheus
Keyboard Shortcuts
| Key | Action |
|---|---|
| Enter | Submit prompt |
| Shift+Enter | Insert newline (multiline mode) |
| Option/Alt+Enter | Insert newline (alternative) |
| ↑/↓ Arrows | Navigate command history |
| Tab | Command completion |
| Ctrl+C | Cancel current operation |
Slash Commands
| Command | Description |
|---|---|
| `/help` | Display available commands and keyboard shortcuts |
| `/history` | Display recent prompts and refinements |
| `/load <n>` | Load prompt at index n from history |
| `/clear-history` | Purge all history entries |
| `/copy` | Copy last refined result to clipboard |
| `/status` | Display current session configuration |
| `/set provider <name>` | Change the active AI provider (google, anthropic, openai, groq, qwen, glm) |
| `/set model <name>` | Change the active model |
| `/toggle refine` | Toggle refinement mode on/off |
| `/toggle skip-questions` | Toggle question bypass mode |
Pipeline Integration
Promptheus is designed for seamless integration with Unix pipelines and command substitution workflows.
Basic Pipeline Operations
# Automatic quiet mode when piped
promptheus "Write a story" | cat
# File redirection
promptheus "Write docs" > output.txt
# Integration with AI tools
promptheus "Create a haiku" | claude exec
Command Substitution
# Feed refined output to external tools
claude "$(promptheus 'Write a technical prompt')"
# Variable capture for scripting
REFINED=$(promptheus "Optimize this query")
echo "$REFINED" | mysql -u user -p
Unix Utility Integration
# Save and display simultaneously
promptheus "Long explanation" | tee output.txt
# Filter output
promptheus "List best practices" | grep -i "security"
# Count words
promptheus "Write essay" | wc -w
# Transform output
promptheus "Generate list" | sed 's/^/- /' > checklist.md
JSON Processing
# Extract fields with jq
promptheus -o json "Create API schema" | jq '.endpoints'
# Format and save
promptheus -o json "Config template" | jq '.' > formatted.json
Advanced Patterns
# Batch processing
cat prompts.txt | while read -r line; do
promptheus "$line" >> results.txt
done
# Conditional execution
if promptheus "Check status" | grep -q "success"; then
echo "All systems operational"
fi
# Multi-stage pipeline
promptheus "Draft outline" | \
promptheus "Expand this outline" | \
tee expanded.txt
Command Flags Reference
| Flag | Description |
|---|---|
| `-s, --skip-questions` | Bypass interactive questions, apply direct enhancement |
| `-r, --refine` | Force interactive refinement workflow |
| `-o, --output-format` | Output format: plain (default) or json |
| `-c, --copy` | Copy refined prompt to clipboard |
| `-f, --file` | Read prompt from file |
| `--provider` | Specify AI provider |
| `--model` | Specify model identifier |
| `--help` | Display usage information |
| `--version` | Display version information |
Flag Composition
Flags can be combined for powerful workflows:
# Skip questions + copy to clipboard + JSON output
promptheus -s -c -o json "Pitch deck outline"
# Custom provider + skip questions + file input
promptheus --provider google -s -f brief.md
Web Interface Overview
🌐 Available Now
The Promptheus Web UI is now available! It brings all the power of the CLI to a modern web application, with a beautiful alchemical-themed interface for prompt enhancement.
Web UI Commands
# Start the web server and open browser automatically
promptheus web
# Start with custom port
promptheus web --port 3000
# Start without opening browser
promptheus web --no-browser
# Start with custom host
promptheus web --host 0.0.0.0
Key Features
- Visual prompt refinement workflow
- Real-time collaborative editing
- Provider selection interface
- History browsing and management
- Export options (PDF, Markdown, Plain Text)
- Dark/light theme support
- Keyboard shortcuts matching CLI
- Model caching from models.dev API
Web UI Features
Alchemical Interface
The web interface features a beautiful alchemical theme that mirrors the transformation process:
- Animated transitions showing prompt transformation
- Visual feedback for AI processing
- Intuitive controls and navigation
- Responsive design for all devices
Interactive Workflow
- Step-by-step question interface
- Real-time preview of refined prompts
- One-click iteration and tweaking
- Side-by-side comparison view
Provider and Model Management
The Web UI includes comprehensive provider and model management:
- Dynamic model listing from models.dev API
- Cache refresh controls
- Validation of provider configurations
- Real-time provider status indicators
Web UI Workflow
The Alchemical Process
- Gather Raw Materials: Enter your basic prompt idea
- Apply AI Heat: Answer clarifying questions
- Transmutation: Watch as AI refines your prompt
- Extract the Gold: Receive your polished prompt
The web interface guides you through each step with visual feedback and helpful hints.
Provider Support
Promptheus supports six major AI providers through a unified interface. No vendor lock-in - use any provider you prefer.
Google
- API Key: `GOOGLE_API_KEY`
- Get Key: Google AI Studio
Anthropic Claude
- API Key: `ANTHROPIC_API_KEY`
- Get Key: Anthropic Console
OpenAI
- API Key: `OPENAI_API_KEY`
- Get Key: OpenAI Platform
Groq
- API Key: `GROQ_API_KEY`
- Get Key: Groq Console
Alibaba Cloud Qwen
- API Key: `DASHSCOPE_API_KEY`
- Get Key: DashScope Console
Zhipu AI GLM
- API Key: `ZHIPUAI_API_KEY`
- Get Key: Zhipu AI Console
Provider Selection
# Auto-detect from available API keys
promptheus "Your prompt"
# Explicit provider selection
promptheus --provider google "Your prompt"
promptheus --provider anthropic "Your prompt"
# Specific model selection
promptheus --model gpt-4o "Your prompt"
promptheus --model claude-3-5-sonnet-20241022 "Your prompt"
List Available Models
# List all models
promptheus list-models
# Filter by provider
promptheus list-models --providers openai,google
Dynamic Model Discovery
Model information is now dynamically fetched from the models.dev API:
- Models are cached with a 24-hour expiration
- Cache is refreshed automatically when expired
- Cache is persisted to `~/.promptheus/models_cache.json`
- Cache can be manually refreshed in the Web UI Settings panel
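To force a refresh from the command line, one option is to delete the cache file; Promptheus should re-fetch model data the next time it is needed (this assumes the cache path listed above and that no other Promptheus process is writing to it):
# Inspect the cached model data (requires jq; assumes the cache is a single JSON object)
jq 'keys' ~/.promptheus/models_cache.json
# Remove the cache to trigger a fresh fetch on the next run
rm ~/.promptheus/models_cache.json
promptheus list-models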
History Management
Promptheus automatically tracks your refined prompts for easy retrieval and reuse.
Command-Line Interface
# Display all history
promptheus history
# Display last 50 entries
promptheus history --limit 50
# Clear all history
promptheus history --clear
Interactive Mode Commands
# In interactive mode
/history # View history
/load 3 # Load entry at index 3
/clear-history # Purge all history
Storage Details
- Location: `~/.promptheus/` (Unix) or `%APPDATA%/promptheus` (Windows)
- Format: JSONL with metadata
- Privacy: Local storage only, no external transmission
- Content: Timestamps, task types, original and refined versions
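Because entries are stored as plain JSONL, you can inspect them with standard tools. A quick sketch, assuming the history file sits directly under `~/.promptheus/` (the exact filename may vary by version, so list the directory first):
# See which files Promptheus keeps in its data directory
ls ~/.promptheus/
# Pretty-print the most recent entry (requires jq; substitute the actual filename)
tail -n 1 ~/.promptheus/history.jsonl | jq .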
Context-Aware Behavior
History is automatically enabled or disabled based on context:
- Interactive terminals: History enabled by default
- Pipelines/scripts: History disabled by default
- Manual override: `PROMPTHEUS_ENABLE_HISTORY=1` or `=0`
# Enable history explicitly
PROMPTHEUS_ENABLE_HISTORY=1 promptheus "analyze data"
# Disable history explicitly
PROMPTHEUS_ENABLE_HISTORY=0 promptheus "secret project"
Authentication Management
Interactive Setup
The easiest way to configure API keys:
# Interactive provider selection
promptheus auth
# Direct provider specification
promptheus auth google
promptheus auth anthropic
promptheus auth openai
Authentication Workflow
- Select provider from menu (or specify directly)
- System displays API key URL
- Enter API key (masked input)
- System validates key against provider API
- Valid key saved to `.env` file
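A typical first-run sequence ties these steps to the validation tools described later:
# Configure a key, confirm connectivity, then refine a first prompt
promptheus auth google
promptheus validate --test-connection
promptheus "Write a haiku about alchemy"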
Skip Validation
For testing or offline configuration:
promptheus auth openai --skip-validation
Manual Configuration
Alternatively, manually edit your .env file:
# .env file
GOOGLE_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here
Output Formats
Plain Text (Default)
Standard plain text output suitable for reading and piping:
promptheus "Write a haiku"
promptheus -o plain "Write a story"
JSON Output
Structured JSON format with metadata for programmatic processing:
promptheus -o json "Create API schema"
JSON output includes:
- Original prompt
- Refined prompt
- Task classification
- Provider and model used
- Timestamp
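If you are unsure which keys your installed version emits, list them before writing extraction filters (requires jq):
promptheus -o json "Create API schema" | jq 'keys'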
Processing JSON Output
# Extract refined prompt
promptheus -o json "prompt" | jq -r '.refined_prompt'
# Extract metadata
promptheus -o json "prompt" | jq '.metadata'
# Format and save
promptheus -o json "prompt" | jq '.' > output.json
Environment Variables
Provider Configuration
# Override auto-detection
export PROMPTHEUS_PROVIDER=google
# Options: google, anthropic, openai, groq, qwen, glm
# Override default model
export PROMPTHEUS_MODEL=gemini-2.0-flash-exp
History Management
# Enable/disable history
export PROMPTHEUS_ENABLE_HISTORY=1 # Enable
export PROMPTHEUS_ENABLE_HISTORY=0 # Disable
# Values: 1/0, true/false, yes/no, on/off
Logging Configuration
# Enable debug mode
export PROMPTHEUS_DEBUG=1
# Set log level
export PROMPTHEUS_LOG_LEVEL=INFO
# Options: DEBUG, INFO, WARNING, ERROR, CRITICAL
# JSON log format
export PROMPTHEUS_LOG_FORMAT=json
# Log to file
export PROMPTHEUS_LOG_FILE=app.log
Configuration Precedence
Settings are resolved in this order (highest to lowest priority):
- Explicit CLI arguments
- Environment variables
- Auto-detection from `.env` file
- Provider defaults
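As a concrete illustration, an explicit CLI flag overrides an exported environment variable (assuming API keys for both providers are configured):
# Environment variable sets the default provider (lower priority)
export PROMPTHEUS_PROVIDER=openai
# CLI flag wins: Google is used for this run
promptheus --provider google "Draft a changelog"
# No flag: the environment variable applies and OpenAI is used
promptheus "Draft a changelog"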
Validation Tools
Configuration Validation
Verify your setup is correct:
# Basic validation
promptheus validate
# Test live API connections
promptheus validate --test-connection
# Validate specific providers
promptheus validate --providers google,anthropic
Model Discovery
# List all available models
promptheus list-models
# Filter by providers
promptheus list-models --providers openai
# Include non-text-generation models
promptheus list-models --include-nontext
# Limit output
promptheus list-models --limit 10
Environment Template
Generate a template .env file:
# Single provider
promptheus template openai > .env
# Multiple providers
promptheus template openai,google,anthropic > .env
Troubleshooting
Installation Issues
Problem: Command not found
# If installed from source, reinstall in editable mode from the repository root
pip install -e .
# Confirm the executable is on your PATH
which promptheus
# Fall back to invoking the module directly
python -m promptheus.main "Test"
Provider Configuration
Problem: No API key found
# Verify .env file exists and has keys
cat .env
# Check environment variables
env | grep -E '(GOOGLE|ANTHROPIC|OPENAI|GROQ|DASHSCOPE|ZHIPUAI)'
# Validate configuration
promptheus validate --test-connection
Clipboard Issues
Problem: Clipboard not working
Linux: Install xclip or xsel
sudo apt-get install xclip
macOS/Windows: Native clipboard support included
WSL: May require X server configuration
Python Version
Problem: Provider not compatible with Python 3.14
Some provider SDKs may not yet support Python 3.14:
- Google: Full support via google-genai SDK
- Others: Use Python 3.10-3.13 until their SDKs add support
Use virtual environments to manage multiple Python versions
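For example, you can pin an older interpreter inside a virtual environment (this assumes python3.12 is already installed on your system):
# Create and activate a Python 3.12 environment, then install Promptheus into it
python3.12 -m venv .venv
source .venv/bin/activate
pip install promptheus
promptheus --version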
Debug Mode
Enable verbose output for troubleshooting:
# Enable debug mode
PROMPTHEUS_DEBUG=1 promptheus "test prompt"
# Set log level
PROMPTHEUS_LOG_LEVEL=DEBUG promptheus "test"
# Log to file
PROMPTHEUS_LOG_FILE=debug.log promptheus "test"
Command Reference
Primary Commands
| Command | Description |
|---|---|
| `promptheus [prompt]` | Transform a prompt (or start interactive mode) |
| `promptheus auth` | Interactive API key configuration |
| `promptheus validate` | Validate API configuration |
| `promptheus list-models` | List available providers and models |
| `promptheus history` | View prompt history |
| `promptheus template` | Generate .env template |
| `promptheus completion` | Generate shell completion script |
| `promptheus web` | Launch Web UI server |
Common Flag Combinations
# Skip questions + copy
promptheus -s -c "Generate docs"
# JSON output + specific provider
promptheus -o json --provider google "Create schema"
# File input + skip questions
promptheus -s -f prompt.txt
# Custom provider + model
promptheus --provider openai --model gpt-4o "Analyze code"
MCP Server
🔌 Model Context Protocol Server
Promptheus includes a built-in MCP server that exposes prompt refinement capabilities as standardized tools for integration with MCP-compatible clients and AI toolchains.
Starting the MCP Server
# Start the MCP server
promptheus mcp
# Or run directly with Python
python -m promptheus.mcp_server
Prerequisites
- MCP package installed: `pip install mcp` (included in requirements.txt)
- At least one provider API key configured
Available MCP Tools
refine_prompt
Intelligent prompt refinement with optional clarification questions.
- Inputs: prompt (required), answers (optional), answer_mapping (optional), provider (optional), model (optional)
- Response Types:
{"type": "refined", "prompt": "...", "next_action": "..."}- Success{"type": "clarification_needed", "questions_for_ask_user_question": [...], "answer_mapping": {...}}- Questions needed{"type": "error", "error_type": "...", "message": "..."}- Error
tweak_prompt
Apply targeted modifications to existing prompts.
- Inputs: prompt (required), modification (required), provider (optional), model (optional)
- Returns: `{"type": "refined", "prompt": "..."}`
list_models
Discover available models from configured providers.
- Inputs: providers (optional), limit (optional), include_nontext (optional)
- Returns: `{"type": "success", "providers": {...}}`
list_providers
Check provider configuration status.
- Returns: `{"type": "success", "providers": {...}}`
validate_environment
Test environment configuration and API connectivity.
- Inputs: providers (optional), test_connection (optional)
- Returns: `{"type": "success", "validation": {...}}`
Prompt Refinement Workflow
Step 1: Initial Refinement Request
{
"tool": "refine_prompt",
"arguments": {
"prompt": "Write a blog post about machine learning"
}
}
Step 2: Handle Clarification Response
{
"type": "clarification_needed",
"task_type": "generation",
"questions_for_ask_user_question": [
{
"question": "Who is your target audience?",
"header": "Q1",
"multiSelect": false,
"options": [
{"label": "Technical professionals", "description": "Technical professionals"},
{"label": "Business executives", "description": "Business executives"}
]
}
],
"answer_mapping": {
"q0": "Who is your target audience?"
}
}
Step 3: Final Refinement with Answers
{
"tool": "refine_prompt",
"arguments": {
"prompt": "Write a blog post about machine learning",
"answers": {"q0": "Technical professionals"},
"answer_mapping": {"q0": "Who is your target audience?"}
}
}
# Response:
{
"type": "refined",
"prompt": "Write a comprehensive technical blog post about machine learning fundamentals targeted at software engineers...",
"next_action": "This refined prompt is now ready to use..."
}
AskUser Integration
The MCP server supports two modes:
- Interactive Mode: When AskUserQuestion is available, questions are asked automatically
- Structured Mode: Returns clarification_needed with formatted questions for client handling
MCP Integration Examples
# Basic MCP client integration
REFINED_PROMPT=$(mcp-client call refine_prompt --prompt "Create API docs" | jq -r '.prompt')
echo "$REFINED_PROMPT" | claude exec --generate-docs
# Batch processing with MCP
for prompt in "Write blog" "Create tutorial" "Draft email"; do
mcp-client call refine_prompt --prompt "$prompt" > "refined_${prompt// /_}.json"
done
Troubleshooting
MCP Package Not Installed
pip install mcp
# Or install with development dependencies
pip install -e ".[dev]"
Missing Provider API Keys
# Use list_providers to check status
mcp-client call list_providers
# Configure API keys
promptheus auth google
promptheus auth openai
Usage Examples
Content Generation
# Blog post with interactive questions
promptheus "Write a blog post about AI"
# Quick content without questions
promptheus -s "Draft a product announcement"
# Copy result to clipboard
promptheus -c "Create social media posts"
Code Analysis
# Analyze code (auto-skips questions)
promptheus "Analyze the main.py file"
# Code review prompt
promptheus "Review this pull request for security issues"
# Generate test cases
promptheus --provider google "Create unit tests for authentication"
Pipeline Workflows
# Generate and save
promptheus "Write tutorial" > tutorial.md
# Batch processing
cat topics.txt | while read -r topic; do
promptheus "$topic" >> explanations.md
done
# JSON processing
promptheus -o json "API spec" | jq '.endpoints' > api.json
# Multi-stage refinement
promptheus "Draft outline" | \
promptheus "Expand this" | \
tee final.txt
Interactive Session Workflow
# Start interactive mode
promptheus
# Switch provider
> /set provider google
# Check status
> /status
# Process prompts
> Write a technical guide
# View and reload history
> /history
> /load 3
# Toggle modes
> /toggle skip-questions
> /toggle refine
Contributing
Getting Started
# Clone repository
git clone https://github.com/abhichandra21/Promptheus.git
cd Promptheus
# Install dev dependencies
pip install -e ".[dev]"
# Run tests
pytest -q
# Format code
black .
Contribution Guidelines
- Scope: One feature or fix per pull request
- Commits: Use imperative mood ("Add feature" not "Added feature")
- Tests: Add tests for new functionality
- Documentation: Update relevant docs
- Code style: Run `black .` before committing
- Security: Never log API keys or credentials
Adding New Providers
- Implement the `LLMProvider` interface in `providers.py`
- Add configuration to `providers.json`
- Update environment detection in `config.py`
- Add to the provider factory function
- Update documentation
- Add tests
Issue Reporting
Report issues at: github.com/abhichandra21/Promptheus/issues
Include:
- Python version
- Operating system
- Provider being used
- Steps to reproduce
- Error messages or unexpected behavior
- Current model cache status (if related to model discovery)