mirror of
https://github.com/sstent/AICycling_mcp.git
synced 2026-01-25 08:35:03 +00:00
restarted
170
architecture.md
@@ -1,170 +0,0 @@
# Template Architecture Documentation

## Overview

The template system has been restructured into a modular architecture to improve reusability, maintainability, and extensibility. The original template files in `templates/` remain intact to preserve backward compatibility with the current [`TemplateManager`](templates_manager.py:5) usage. New modular components are organized in subdirectories under `templates/`.

This structure allows prompts to be composed by combining base elements (system prompts, data sections, analysis frameworks) into workflows. Future phases can update the TemplateManager to load and compose these components dynamically.

## Directory Structure

```
templates/
├── *.txt (original templates - unchanged for compatibility)
├── base/
│   ├── system_prompts/        # Core system prompts for agents
│   │   ├── no_tools_analysis.txt
│   │   └── main_agent.txt
│   ├── data_sections/         # Reusable data insertion blocks
│   │   ├── activity_summary.txt
│   │   ├── user_info.txt
│   │   ├── training_rules.txt
│   │   ├── workout_data.txt
│   │   ├── workouts_data.txt
│   │   ├── available_tools.txt
│   │   └── recent_data.txt
│   └── analysis_frameworks/   # Common analysis structures and instructions
│       ├── assessment_points.txt
│       ├── performance_analysis.txt
│       └── data_gathering.txt
├── components/                # General reusable components (to be populated in future phases)
└── workflows/                 # Composed prompt templates using base components
    ├── single_workout_analysis.txt
    ├── analyze_last_workout.txt
    ├── suggest_next_workout.txt
    └── workout_recommendation.txt
```

## Component Relationships

### Base Components
- **System Prompts** (`templates/base/system_prompts/`): Define the AI's role and behavior.
  - `no_tools_analysis.txt`: For analysis without tool calls (extracted from `enhanced_temp_system_prompt.txt` and `temp_analysis_system_prompt.txt`).
  - `main_agent.txt`: For the main agent with tool access (extracted from `main_agent_system_prompt.txt`).

- **Data Sections** (`templates/base/data_sections/`): Standardized blocks for inserting dynamic data with consistent formatting.
  - Used in workflows via placeholders such as `{activity_summary_section}`, which load and format the corresponding file.
  - Example: `{user_info_section}` inserts user profile data.

- **Analysis Frameworks** (`templates/base/analysis_frameworks/`): Reusable instruction sets for common analysis patterns.
  - `assessment_points.txt`: Standard list of analysis outputs (e.g., assessment, alignment, improvements).
  - `performance_analysis.txt`: Focus areas for performance metrics and recommendations.
  - `data_gathering.txt`: Instructions for tool usage in data collection and risk assessment.

### Workflows
Workflows in `templates/workflows/` compose the base components to recreate the original template functionality modularly.
- **Composition Pattern**: Each workflow includes:
  - Introductory text specific to the use case.
  - Inclusions of data sections (e.g., `{training_rules_section}`).
  - Analysis frameworks (e.g., `{assessment_points}`).
  - Closing instructions.

Examples:
- `single_workout_analysis.txt`: Uses `{workout_data_section}`, `{rules_section}`, `{assessment_points}`.
- `analyze_last_workout.txt`: Uses `{activity_summary_section}`, `{user_info_section}`, `{training_rules_section}`, `{assessment_points}`.
- `suggest_next_workout.txt`: Uses `{training_rules_section}` and custom recommendation points.
- `workout_recommendation.txt`: Uses `{workouts_data}` and `{rules}` directly, with a recommendation structure.
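The composition pattern above can be sketched as a simple placeholder substitution pass. This is a minimal illustration, not the project's actual implementation (that logic is slated for a later phase); the `SECTION_FILES` map and `compose` helper are hypothetical names:

```python
from pathlib import Path
import re

BASE = Path("templates/base")

# Illustrative subset: placeholder names mapped to the base files they pull in.
SECTION_FILES = {
    "training_rules_section": BASE / "data_sections" / "training_rules.txt",
    "assessment_points": BASE / "analysis_frameworks" / "assessment_points.txt",
}

def compose(workflow_text: str) -> str:
    """Replace each known {placeholder} with the contents of its base file.

    Unknown or missing placeholders are left untouched so partially
    migrated workflows still render.
    """
    def substitute(match: re.Match) -> str:
        path = SECTION_FILES.get(match.group(1))
        return path.read_text() if path is not None and path.exists() else match.group(0)
    return re.sub(r"\{(\w+)\}", substitute, workflow_text)
```

A workflow file would then be loaded as text and passed through `compose` before the usual `**kwargs` formatting.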

### Backward Compatibility
- All original `.txt` files in `templates/` are preserved.
- The [`TemplateManager`](templates_manager.py:16) continues to load them via `get_template(template_name, **kwargs)`.
- New workflows can be loaded the same way, but composition logic (e.g., replacing `{section}` placeholders) will be implemented in future phases.

#### Legacy Path Mapping
To support gradual migration from old template paths to the new modular structure, the TemplateManager includes a legacy mapping layer:
- Legacy template names are automatically redirected to their new locations.
- Example: `'main_agent_system_prompt.txt'` maps to `'base/system_prompts/main_agent.txt'`.
- Deprecation warnings are logged when legacy paths are used, to encourage migration.
- Mappings are defined in a `legacy_mappings` dictionary in both the TemplateManager and TemplateValidator classes.
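The mapping layer described above amounts to a dictionary lookup plus a logged warning. A sketch (only the example pair comes from this document; the `resolve_template_name` helper is a hypothetical name):

```python
import logging

logger = logging.getLogger(__name__)

# Shape of the legacy_mappings dictionary; only this pair is documented above.
legacy_mappings = {
    "main_agent_system_prompt.txt": "base/system_prompts/main_agent.txt",
}

def resolve_template_name(name: str) -> str:
    """Redirect a legacy template name to its new location, warning once per call."""
    if name in legacy_mappings:
        new_name = legacy_mappings[name]
        logger.warning("Template path '%s' is deprecated; use '%s' instead", name, new_name)
        return new_name
    return name
```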

### Phase 4: Advanced Template Features

#### Template Inheritance System
Templates can now use inheritance syntax to extend base templates and include reusable components:

**Extends Syntax:**
```
extends: base_template_name
```

**Includes Syntax:**
```
includes:
- component1
- component2
```

The system supports multiple inheritance levels, with conflict resolution (child overrides parent).
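A minimal sketch of how a loader might split these header directives from the template body, assuming the `extends:`/`includes:` lines appear at the top of the file in the syntax shown above (the function name and return shape are illustrative, not the project's API):

```python
from typing import Dict, List, Tuple

def parse_inheritance_header(text: str) -> Tuple[Dict[str, object], str]:
    """Separate extends/includes directives from the template body."""
    meta: Dict[str, object] = {"extends": None, "includes": []}
    body_lines: List[str] = []
    in_includes = False
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("extends:"):
            meta["extends"] = stripped.split(":", 1)[1].strip()
            in_includes = False
        elif stripped == "includes:":
            in_includes = True
        elif in_includes and stripped.startswith("- "):
            meta["includes"].append(stripped[2:].strip())
        else:
            in_includes = False
            body_lines.append(line)
    return meta, "\n".join(body_lines)
```

Resolution would then recurse on `meta["extends"]`, merging parent content first so that child text overrides it, matching the conflict rule stated above.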

#### Dynamic Component Selection
Components are selected based on the data types available in the context:
- If `workout_data` is present → include `workout_data_section`
- If `user_info` is available → include `user_info_section`
- If `training_rules` is provided → include `training_rules_section`

Selection logic uses a priority-based system with fallback defaults.
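The rules above can be sketched as an ordered rule table; the fallback section name here is an assumption for illustration, not something the document specifies:

```python
from typing import Any, Dict, List

# Context keys mapped to the sections they enable, in priority order.
SELECTION_RULES = [
    ("workout_data", "workout_data_section"),
    ("user_info", "user_info_section"),
    ("training_rules", "training_rules_section"),
]

def select_components(context: Dict[str, Any]) -> List[str]:
    """Pick the sections whose data is present, else fall back to a default."""
    selected = [section for key, section in SELECTION_RULES if context.get(key)]
    # Hypothetical fallback default when no matching data is in the context.
    return selected or ["recent_data_section"]
```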

Selection logic uses a priority-based system with fallback defaults.

#### Template Versioning for A/B Testing
Templates support versioning with an `@version` syntax:
- `template@v1.0` - specific version
- `template@latest` - most recent version
- `template@random` - random version for A/B testing

Version metadata includes:
- Version number (semantic versioning)
- Creation date
- Author
- Test metrics (conversion rates, performance)
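Resolving the `@version` syntax could look like the following sketch, assuming the caller supplies the known versions per template (sorted oldest-first); the helper name is hypothetical:

```python
import random
from typing import Dict, List

def resolve_version(spec: str, available: Dict[str, List[str]]) -> str:
    """Resolve 'name@version' against known versions.

    `available` maps template names to version lists sorted oldest-first.
    """
    name, _, version = spec.partition("@")
    versions = available[name]
    if version in ("", "latest"):
        return f"{name}@{versions[-1]}"
    if version == "random":
        # Random pick enables simple A/B testing across versions.
        return f"{name}@{random.choice(versions)}"
    return spec  # an explicit version such as 'template@v1.0'
```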

#### TemplateValidator
The new validator class performs:
- Inheritance cycle detection
- Component existence validation
- Syntax validation for extends/includes
- Version format checking
- Backward compatibility verification

#### Version Control Integration
Templates are stored in a version-controlled structure:
```
templates/
├── versions/
│   └── template_name/
│       ├── v1.0.txt
│       ├── v1.1.txt
│       └── v2.0.txt
├── base/        (unchanged)
├── components/  (unchanged)
└── workflows/   (unchanged)
```

### Future Enhancements
- Update the TemplateManager to support component composition (e.g., recursive loading of sections).
- Add more components to `templates/components/` for shared UI/logic elements.
- Integrate with MCP tools for dynamic prompt generation.

## Mermaid Diagram: Component Composition Example
```mermaid
graph TD
    A[System Prompt] --> B[Data Sections]
    B --> C[Analysis Frameworks]
    C --> D[Workflow]
    D --> E[Final Prompt]
    F[Original Templates] -.-> E
    style F fill:#f9f,stroke:#333,stroke-width:2px
```
This diagram shows how base components feed into workflows, with the original templates as a fallback.

## Mermaid Diagram: Phase 4 Template Inheritance and Versioning
```mermaid
graph TD
    A[Base Template] --> B[Extended Template]
    B --> C[Includes Components]
    C --> D[Dynamic Selection]
    D --> E[Versioned Template]
    E --> F[Validated Template]
    G[Data Context] --> D
    H[Version Control] --> E
    I[TemplateValidator] --> F
    J[Original Templates] -.-> F
    style J fill:#f9f,stroke:#333,stroke-width:2px
```
This diagram illustrates the Phase 4 enhancements: inheritance extends base templates, includes add components, dynamic selection adapts to the data, versioning enables A/B testing, and validation ensures correctness, all while maintaining backward compatibility.
147
cache_manager.py
Normal file
@@ -0,0 +1,147 @@
#!/usr/bin/env python3
"""
Cache Manager - Handles caching of MCP responses and other data
"""

import time
import logging
from typing import Any, Dict, Optional, List
from dataclasses import dataclass

logger = logging.getLogger(__name__)


@dataclass
class CacheEntry:
    """Cache entry with TTL"""
    data: Any
    timestamp: float
    ttl: int  # Time to live in seconds


class CacheManager:
    """Manages caching of data with TTL support"""

    def __init__(self, default_ttl: int = 300):
        self.default_ttl = default_ttl
        self._cache: Dict[str, CacheEntry] = {}

    def set(self, key: str, data: Any, ttl: Optional[int] = None) -> None:
        """Set cache entry with TTL"""
        ttl = ttl or self.default_ttl
        self._cache[key] = CacheEntry(
            data=data,
            timestamp=time.time(),
            ttl=ttl
        )
        logger.debug(f"Cached data for key '{key}' with TTL {ttl}s")

    def get(self, key: str, default: Any = None) -> Any:
        """Get cache entry, return default if expired or missing"""
        if key not in self._cache:
            logger.debug(f"Cache miss for key '{key}'")
            return default

        entry = self._cache[key]

        # Check if expired
        if time.time() - entry.timestamp > entry.ttl:
            logger.debug(f"Cache expired for key '{key}'")
            del self._cache[key]
            return default

        logger.debug(f"Cache hit for key '{key}'")
        return entry.data

    def has(self, key: str) -> bool:
        """Check if key exists and is not expired"""
        # Checked directly rather than via get() so a cached value of None
        # still counts as present.
        if key not in self._cache:
            return False
        entry = self._cache[key]
        if time.time() - entry.timestamp > entry.ttl:
            del self._cache[key]
            return False
        return True

    def delete(self, key: str) -> bool:
        """Delete cache entry"""
        if key in self._cache:
            del self._cache[key]
            logger.debug(f"Deleted cache entry for key '{key}'")
            return True
        return False

    def clear(self) -> None:
        """Clear all cache entries"""
        count = len(self._cache)
        self._cache.clear()
        logger.debug(f"Cleared {count} cache entries")

    def cleanup_expired(self) -> int:
        """Remove expired cache entries and return count removed"""
        current_time = time.time()
        expired_keys = [
            key for key, entry in self._cache.items()
            if current_time - entry.timestamp > entry.ttl
        ]

        for key in expired_keys:
            del self._cache[key]

        if expired_keys:
            logger.debug(f"Cleaned up {len(expired_keys)} expired cache entries")

        return len(expired_keys)

    def get_all(self) -> Dict[str, Any]:
        """Get all non-expired cache entries"""
        self.cleanup_expired()
        return {key: entry.data for key, entry in self._cache.items()}

    def get_stats(self) -> Dict[str, Any]:
        """Get cache statistics"""
        self.cleanup_expired()
        return {
            "total_entries": len(self._cache),
            "keys": list(self._cache.keys()),
            "memory_usage_estimate": sum(
                len(str(entry.data)) for entry in self._cache.values()
            )
        }

    def set_multiple(self, data: Dict[str, Any], ttl: Optional[int] = None) -> None:
        """Set multiple cache entries at once"""
        for key, value in data.items():
            self.set(key, value, ttl)

    def get_multiple(self, keys: List[str]) -> Dict[str, Any]:
        """Get multiple cache entries at once"""
        return {key: self.get(key) for key in keys}


# Specialized cache for common cycling data patterns
class CyclingDataCache(CacheManager):
    """Specialized cache for cycling data with helper methods"""

    def cache_user_profile(self, profile_data: Dict[str, Any]) -> None:
        """Cache user profile data"""
        self.set("user_profile", profile_data, ttl=3600)  # 1 hour TTL

    def cache_activities(self, activities: List[Dict[str, Any]]) -> None:
        """Cache activities list"""
        self.set("recent_activities", activities, ttl=900)  # 15 minutes TTL

    def cache_activity_details(self, activity_id: str, details: Dict[str, Any]) -> None:
        """Cache specific activity details"""
        self.set(f"activity_details_{activity_id}", details, ttl=3600)

    def get_user_profile(self) -> Optional[Dict[str, Any]]:
        """Get cached user profile"""
        return self.get("user_profile")

    def get_recent_activities(self) -> List[Dict[str, Any]]:
        """Get cached recent activities"""
        return self.get("recent_activities", [])

    def get_activity_details(self, activity_id: str) -> Optional[Dict[str, Any]]:
        """Get cached activity details"""
        return self.get(f"activity_details_{activity_id}")

    def cache_workout_analysis(self, workout_id: str, analysis: str) -> None:
        """Cache workout analysis results"""
        self.set(f"analysis_{workout_id}", analysis, ttl=86400)  # 24 hours TTL

    def get_workout_analysis(self, workout_id: str) -> Optional[str]:
        """Get cached workout analysis"""
        return self.get(f"analysis_{workout_id}")
256
cli_interface.py
Normal file
@@ -0,0 +1,256 @@
#!/usr/bin/env python3
"""
CLI Interface - Simple command line interface for the cycling analyzer
"""

import asyncio
import logging
from pathlib import Path

from config import Config, load_config, create_sample_config
from core_app import CyclingAnalyzerApp
from template_engine import create_default_templates


class CLI:
    """Command line interface"""

    def __init__(self):
        self.app = None

    async def run(self):
        """Main CLI loop"""
        print("Cycling Workout Analyzer")
        print("=" * 40)

        # Setup configuration
        try:
            config = self._setup_config()
            self.app = CyclingAnalyzerApp(config)

            # Setup logging
            logging.basicConfig(level=getattr(logging, config.log_level.upper()))

            # Initialize app
            await self.app.initialize()

            # Show initial status
            self._show_status()

            # Main loop
            await self._main_loop()

        except KeyboardInterrupt:
            print("\nGoodbye!")
        except Exception as e:
            print(f"Error: {e}")
            logging.error(f"CLI error: {e}", exc_info=True)
        finally:
            if self.app:
                await self.app.cleanup()

    def _setup_config(self) -> Config:
        """Setup configuration and default files"""
        # Create sample config if needed
        create_sample_config()

        # Load config
        config = load_config()

        # Validate required settings
        if not config.openrouter_api_key or config.openrouter_api_key == "your_openrouter_api_key_here":
            print("Please edit config.yaml with your OpenRouter API key")
            print("Get your key from: https://openrouter.ai")
            raise ValueError("OpenRouter API key not configured")

        # Create default templates
        create_default_templates(config.templates_dir)

        return config

    def _show_status(self):
        """Show application status"""
        print("\nStatus:")
        print(f"- Available tools: {len(self.app.list_available_tools())}")
        print(f"- Available templates: {len(self.app.list_templates())}")
        print(f"- Cached data keys: {list(self.app.get_cached_data().keys())}")

    async def _main_loop(self):
        """Main interaction loop"""
        while True:
            print(f"\n{'='*60}")
            print("CYCLING WORKOUT ANALYZER")
            print(f"{'='*60}")
            print("1. Analyze last cycling workout")
            print("2. Get next workout suggestion")
            print("3. Enhanced analysis")
            print("4. List available MCP tools")
            print("5. List available templates")
            print("6. Show cached data")
            print("7. Clear cache")
            print("8. Exit")
            print("-" * 60)

            choice = input("Enter your choice (1-8): ").strip()

            try:
                if choice == "1":
                    await self._analyze_last_workout()
                elif choice == "2":
                    await self._suggest_next_workout()
                elif choice == "3":
                    await self._enhanced_analysis()
                elif choice == "4":
                    self._list_tools()
                elif choice == "5":
                    self._list_templates()
                elif choice == "6":
                    self._show_cached_data()
                elif choice == "7":
                    self._clear_cache()
                elif choice == "8":
                    break
                else:
                    print("Invalid choice. Please try again.")

            except Exception as e:
                print(f"Error: {e}")
                logging.error(f"Menu action error: {e}")

            input("\nPress Enter to continue...")

    async def _analyze_last_workout(self):
        """Analyze last workout"""
        print("\nAnalyzing your last workout...")

        # Load training rules
        rules = self._load_training_rules()

        result = await self.app.analyze_workout(
            analysis_type="analyze_last_workout",
            training_rules=rules
        )

        print("\n" + "=" * 50)
        print("WORKOUT ANALYSIS")
        print("=" * 50)
        print(result)

    async def _suggest_next_workout(self):
        """Suggest next workout"""
        print("\nGenerating workout suggestion...")

        # Load training rules
        rules = self._load_training_rules()

        result = await self.app.suggest_next_workout(training_rules=rules)

        print("\n" + "=" * 50)
        print("NEXT WORKOUT SUGGESTION")
        print("=" * 50)
        print(result)

    async def _enhanced_analysis(self):
        """Enhanced analysis menu"""
        print("\nSelect analysis type:")
        print("a) Performance trends")
        print("b) Training load analysis")
        print("c) Recovery assessment")

        choice = input("Enter choice (a-c): ").strip().lower()

        analysis_types = {
            'a': 'performance trends',
            'b': 'training load',
            'c': 'recovery assessment'
        }

        if choice not in analysis_types:
            print("Invalid choice.")
            return

        analysis_type = analysis_types[choice]
        print(f"\nPerforming {analysis_type} analysis...")

        # Load training rules
        rules = self._load_training_rules()

        result = await self.app.enhanced_analysis(
            analysis_type,
            training_rules=rules
        )

        print(f"\n{'='*50}")
        print(f"ENHANCED {analysis_type.upper()} ANALYSIS")
        print("=" * 50)
        print(result)

    def _list_tools(self):
        """List available tools"""
        tools = self.app.list_available_tools()
        if tools:
            self.app.mcp_client.print_tools()
        else:
            print("No MCP tools available")

    def _list_templates(self):
        """List available templates"""
        templates = self.app.list_templates()
        print(f"\nAvailable templates ({len(templates)}):")
        for template in templates:
            print(f"  - {template}")

    def _show_cached_data(self):
        """Show cached data"""
        cached_data = self.app.get_cached_data()
        print(f"\nCached data ({len(cached_data)} items):")
        for key, value in cached_data.items():
            data_type = type(value).__name__
            if isinstance(value, (dict, list)):
                size = len(value)
                print(f"  - {key}: {data_type} (size: {size})")
            else:
                print(f"  - {key}: {data_type}")

    def _clear_cache(self):
        """Clear cache"""
        self.app.cache_manager.clear()
        print("Cache cleared")

    def _load_training_rules(self) -> str:
        """Load training rules from file"""
        rules_file = Path(self.app.config.rules_file)
        if rules_file.exists():
            with open(rules_file, 'r') as f:
                return f.read()
        else:
            # Create default rules
            default_rules = """
Training Goals:
- Improve FTP (Functional Threshold Power)
- Build endurance for long rides
- Maintain consistent training

Power Zones (adjust based on your FTP):
- Zone 1 (Active Recovery): < 55% FTP
- Zone 2 (Endurance): 56-75% FTP
- Zone 3 (Tempo): 76-90% FTP
- Zone 4 (Lactate Threshold): 91-105% FTP
- Zone 5 (VO2 Max): 106-120% FTP

Weekly Structure:
- 70-80% easy/moderate intensity
- 20-30% high intensity
- At least 1 rest day per week
"""
            rules_file.parent.mkdir(exist_ok=True)
            with open(rules_file, 'w') as f:
                f.write(default_rules)
            return default_rules


async def main():
    """CLI entry point"""
    cli = CLI()
    await cli.run()


if __name__ == "__main__":
    asyncio.run(main())
74
config.py
Normal file
@@ -0,0 +1,74 @@
#!/usr/bin/env python3
"""
Configuration management
"""

import os
import yaml
from pathlib import Path
from dataclasses import dataclass
from typing import Optional


@dataclass
class Config:
    """Application configuration"""
    # LLM settings
    openrouter_api_key: str
    openrouter_model: str = "deepseek/deepseek-r1-0528:free"

    # MCP settings
    garth_token: str = ""
    garth_mcp_server_path: str = "uvx"

    # Application settings
    templates_dir: str = "templates"
    rules_file: str = "rules.yaml"
    cache_ttl: int = 300  # Cache TTL in seconds

    # Logging
    log_level: str = "INFO"


def load_config(config_file: str = "config.yaml") -> Config:
    """Load configuration from file or environment variables"""
    config_path = Path(config_file)

    # Load from file if it exists
    if config_path.exists():
        with open(config_path, 'r') as f:
            config_data = yaml.safe_load(f) or {}
        return Config(**config_data)

    # Load from environment variables
    return Config(
        openrouter_api_key=os.getenv("OPENROUTER_API_KEY", ""),
        openrouter_model=os.getenv("OPENROUTER_MODEL", "deepseek/deepseek-r1-0528:free"),
        garth_token=os.getenv("GARTH_TOKEN", ""),
        garth_mcp_server_path=os.getenv("GARTH_MCP_SERVER_PATH", "uvx"),
        templates_dir=os.getenv("TEMPLATES_DIR", "templates"),
        rules_file=os.getenv("RULES_FILE", "rules.yaml"),
        cache_ttl=int(os.getenv("CACHE_TTL", "300")),
        log_level=os.getenv("LOG_LEVEL", "INFO"),
    )


def create_sample_config(config_file: str = "config.yaml") -> None:
    """Create a sample configuration file"""
    config_path = Path(config_file)
    if config_path.exists():
        return

    sample_config = {
        "openrouter_api_key": "your_openrouter_api_key_here",
        "openrouter_model": "deepseek/deepseek-r1-0528:free",
        "garth_token": "your_garth_token_here",
        "garth_mcp_server_path": "uvx",
        "templates_dir": "templates",
        "rules_file": "rules.yaml",
        "cache_ttl": 300,
        "log_level": "INFO"
    }

    with open(config_path, 'w') as f:
        yaml.dump(sample_config, f, default_flow_style=False)

    print(f"Created sample config file: {config_file}")
    print("Please edit with your actual API keys and settings.")
163
core_app.py
Normal file
@@ -0,0 +1,163 @@
#!/usr/bin/env python3
"""
Core Application - Clean skeleton with separated concerns
"""

import asyncio
import logging
from pathlib import Path
from typing import Dict, Any, Optional

from config import Config, load_config
from llm_client import LLMClient
from mcp_client import MCPClient
from cache_manager import CacheManager
from template_engine import TemplateEngine

logger = logging.getLogger(__name__)


class CyclingAnalyzerApp:
    """Main application class - orchestrates all components"""

    def __init__(self, config: Config):
        self.config = config
        self.llm_client = LLMClient(config)
        self.mcp_client = MCPClient(config)
        self.cache_manager = CacheManager()
        self.template_engine = TemplateEngine(config.templates_dir)

    async def initialize(self):
        """Initialize all components"""
        logger.info("Initializing application components...")

        await self.llm_client.initialize()
        await self.mcp_client.initialize()
        await self._preload_cache()

        logger.info("Application initialization complete")

    async def cleanup(self):
        """Cleanup all components"""
        await self.mcp_client.cleanup()
        await self.llm_client.cleanup()

    async def _preload_cache(self):
        """Pre-load and cache common MCP responses"""
        logger.info("Pre-loading cache...")

        # Cache user profile
        if await self.mcp_client.has_tool("user_profile"):
            profile = await self.mcp_client.call_tool("user_profile", {})
            self.cache_manager.set("user_profile", profile)

        # Cache recent activities
        if await self.mcp_client.has_tool("get_activities"):
            activities = await self.mcp_client.call_tool("get_activities", {"limit": 10})
            self.cache_manager.set("recent_activities", activities)

            # Find and cache last cycling activity details
            cycling_activity = self._find_last_cycling_activity(activities)
            if cycling_activity and await self.mcp_client.has_tool("get_activity_details"):
                details = await self.mcp_client.call_tool(
                    "get_activity_details",
                    {"activity_id": cycling_activity["activityId"]}
                )
                self.cache_manager.set("last_cycling_details", details)

    def _find_last_cycling_activity(self, activities: list) -> Optional[Dict[str, Any]]:
        """Find the most recent cycling activity from an activities list"""
        cycling_activities = [
            act for act in activities
            if "cycling" in act.get("activityType", {}).get("typeKey", "").lower()
        ]
        return max(cycling_activities, key=lambda x: x.get("start_time", "")) if cycling_activities else None

    # Core functionality methods

    async def analyze_workout(self, analysis_type: str = "last_workout", **kwargs) -> str:
        """Analyze workout using LLM with cached data"""
        template_name = f"workflows/{analysis_type}.txt"

        # Prepare context data
        context = {
            "user_profile": self.cache_manager.get("user_profile", {}),
            "recent_activities": self.cache_manager.get("recent_activities", []),
            "last_cycling_details": self.cache_manager.get("last_cycling_details", {}),
            **kwargs
        }

        # Load and render template
        prompt = self.template_engine.render(template_name, **context)

        # Call LLM
        return await self.llm_client.generate(prompt)

    async def suggest_next_workout(self, **kwargs) -> str:
        """Generate workout suggestion using MCP tools and LLM"""
        # Use MCP-enabled agent for dynamic tool usage
        template_name = "workflows/suggest_next_workout.txt"

        context = {
            "training_rules": kwargs.get("training_rules", ""),
            **kwargs
        }

        prompt = self.template_engine.render(template_name, **context)

        # Use the MCP-enabled LLM client for this
        return await self.llm_client.generate_with_tools(prompt, self.mcp_client)

    async def enhanced_analysis(self, analysis_type: str, **kwargs) -> str:
        """Perform enhanced analysis with full MCP tool access"""
        template_name = "workflows/enhanced_analysis.txt"

        context = {
            "analysis_type": analysis_type,
            "cached_data": self.cache_manager.get_all(),
            **kwargs
        }

        prompt = self.template_engine.render(template_name, **context)
        return await self.llm_client.generate_with_tools(prompt, self.mcp_client)

    # Utility methods

    def list_available_tools(self) -> list:
        """Get list of available MCP tools"""
        return self.mcp_client.list_tools()

    def list_templates(self) -> list:
        """Get list of available templates"""
        return self.template_engine.list_templates()

    def get_cached_data(self, key: Optional[str] = None) -> Any:
        """Get cached data by key, or all if no key provided"""
        return self.cache_manager.get(key) if key else self.cache_manager.get_all()


async def main():
    """Main entry point"""
    logging.basicConfig(level=logging.INFO)

    try:
        config = load_config()
        app = CyclingAnalyzerApp(config)

        await app.initialize()

        # Example usage
        print("Available tools:", len(app.list_available_tools()))
        print("Available templates:", len(app.list_templates()))

        # Run analysis
        analysis = await app.analyze_workout("analyze_last_workout",
                                             training_rules="Sample rules")
        print("Analysis:", analysis[:200] + "...")

    except Exception as e:
        logger.error(f"Application error: {e}")
    finally:
        if 'app' in locals():
            await app.cleanup()


if __name__ == "__main__":
    asyncio.run(main())
123
llm_client.py
Normal file
@@ -0,0 +1,123 @@
|
||||
#!/usr/bin/env python3
"""
LLM Client - Handles all LLM interactions
"""

import os
import json
import logging
import asyncio
from typing import Optional, Any

try:
    from pydantic_ai import Agent
    PYDANTIC_AI_AVAILABLE = True
except ImportError:
    PYDANTIC_AI_AVAILABLE = False
    Agent = None

from config import Config

logger = logging.getLogger(__name__)


class LLMClient:
    """Handles LLM interactions with and without MCP tools"""

    def __init__(self, config: Config):
        self.config = config
        self.agent_with_tools = None
        self.agent_no_tools = None

        if not PYDANTIC_AI_AVAILABLE:
            raise ImportError("Pydantic AI not available. Install with: pip install pydantic-ai")

        # Set up OpenRouter environment
        self._setup_openrouter_env()

    def _setup_openrouter_env(self):
        """Configure OpenRouter environment variables"""
        os.environ['OPENROUTER_API_KEY'] = self.config.openrouter_api_key
        os.environ['OPENAI_BASE_URL'] = "https://openrouter.ai/api/v1"
        os.environ['OPENAI_DEFAULT_HEADERS'] = json.dumps({
            "HTTP-Referer": "https://github.com/cycling-analyzer",
            "X-Title": "Cycling Workout Analyzer"
        })

    async def initialize(self):
        """Initialize LLM clients"""
        logger.info("Initializing LLM clients...")

        model_name = f"openrouter:{self.config.openrouter_model}"

        # Agent without tools for analysis
        self.agent_no_tools = Agent(
            model=model_name,
            system_prompt="You are an expert cycling coach. Analyze the provided data comprehensively.",
            toolsets=[]
        )

        logger.info("LLM clients initialized")

    async def cleanup(self):
        """Cleanup LLM clients"""
        if self.agent_with_tools:
            await self.agent_with_tools.__aexit__(None, None, None)
        if self.agent_no_tools:
            await self.agent_no_tools.__aexit__(None, None, None)

    async def generate(self, prompt: str, use_tools: bool = False) -> str:
        """Generate response using LLM without tools"""
        if not self.agent_no_tools:
            raise RuntimeError("LLM client not initialized")

        try:
            # Initialize agent context if not already done
            if not hasattr(self.agent_no_tools, '_initialized'):
                await asyncio.wait_for(self.agent_no_tools.__aenter__(), timeout=30)
                self.agent_no_tools._initialized = True

            result = await self.agent_no_tools.run(prompt)
            return str(result)

        except asyncio.TimeoutError:
            logger.error("LLM request timed out")
            return "Error: LLM request timed out"
        except Exception as e:
            logger.error(f"LLM generation error: {e}")
            return f"Error generating response: {e}"

    async def generate_with_tools(self, prompt: str, mcp_client) -> str:
        """Generate response using LLM with MCP tools"""
        # Create a temporary agent with tools for this request
        try:
            model_name = f"openrouter:{self.config.openrouter_model}"

            temp_agent = Agent(
                model=model_name,
                system_prompt="You are an expert cycling coach with access to comprehensive Garmin Connect data. Use available tools to gather data and provide detailed analysis.",
                toolsets=[mcp_client.mcp_server] if mcp_client.mcp_server else []
            )

            # Initialize temporary agent
            await asyncio.wait_for(temp_agent.__aenter__(), timeout=30)

            # Generate response
            result = await temp_agent.run(prompt)

            # Cleanup
            await temp_agent.__aexit__(None, None, None)

            return result.text if hasattr(result, 'text') else str(result)

        except asyncio.TimeoutError:
            logger.error("LLM with tools request timed out")
            return "Error: LLM request with tools timed out"
        except Exception as e:
            logger.error(f"LLM generation with tools error: {e}")
            return f"Error generating response with tools: {e}"

    async def chat(self, messages: list, use_tools: bool = False) -> str:
        """Chat-style interaction (for future extension)"""
        # Convert messages to prompt for now
        prompt = "\n".join([f"{msg['role']}: {msg['content']}" for msg in messages])
        return await self.generate(prompt, use_tools)
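The `chat` method flattens the message list into a single prompt string before delegating to `generate`. That flattening step is pure Python and can be sketched in isolation (`flatten_messages` is a hypothetical helper name, not part of the file above):

```python
def flatten_messages(messages: list) -> str:
    """Join chat-style messages into one prompt, one 'role: content' line per message."""
    return "\n".join(f"{msg['role']}: {msg['content']}" for msg in messages)


prompt = flatten_messages([
    {"role": "system", "content": "You are a cycling coach."},
    {"role": "user", "content": "How was my last ride?"},
])
print(prompt)
# system: You are a cycling coach.
# user: How was my last ride?
```

Note this loses any structural role information the model provider could otherwise use; a future version would presumably pass the message list through unmodified, which is why the docstring flags `chat` as "for future extension".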
main.py (deleted, 324 lines)
@@ -1,324 +0,0 @@
#!/usr/bin/env python3
"""
Cycling Workout Analyzer with Pydantic AI and MCP Server Integration
A Python app that uses Pydantic AI with MCP tools to analyze cycling workouts
"""

import os
import json
import asyncio
import shutil
import logging
from datetime import datetime, timedelta
from typing import Dict, List, Any, Optional, Union
from pathlib import Path
import yaml
from dataclasses import dataclass

from mcp_manager import Config, print_tools, PydanticAIAnalyzer

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class TemplateManager:
    """Manages prompt templates (kept for compatibility)"""

    def __init__(self, templates_dir: str):
        self.templates_dir = Path(templates_dir)
        self.templates_dir.mkdir(exist_ok=True)

    def list_templates(self) -> List[str]:
        """List available templates"""
        return [f.name for f in self.templates_dir.glob("*.txt")]


class RulesManager:
    """Manages training rules and goals"""

    def __init__(self, rules_file: str):
        self.rules_file = Path(rules_file)
        self._create_default_rules()

    def _create_default_rules(self):
        """Create default rules file if it doesn't exist"""
        if not self.rules_file.exists():
            default_rules = {
                "training_goals": [
                    "Improve FTP (Functional Threshold Power)",
                    "Build endurance for 100km rides",
                    "Maintain consistent training 4-5x per week"
                ],
                "power_zones": {
                    "zone_1_active_recovery": "< 142W",
                    "zone_2_endurance": "142-162W",
                    "zone_3_tempo": "163-180W",
                    "zone_4_lactate_threshold": "181-196W",
                    "zone_5_vo2_max": "197-224W",
                    "zone_6_anaerobic": "> 224W"
                },
                "heart_rate_zones": {
                    "zone_1": "< 129 bpm",
                    "zone_2": "129-146 bpm",
                    "zone_3": "147-163 bpm",
                    "zone_4": "164-181 bpm",
                    "zone_5": "> 181 bpm"
                },
                "weekly_structure": {
                    "easy_rides": "60-70% of weekly volume",
                    "moderate_rides": "20-30% of weekly volume",
                    "hard_rides": "5-15% of weekly volume"
                },
                "recovery_rules": [
                    "At least 1 full rest day per week",
                    "Easy spin after hard workouts",
                    "Listen to body - skip workout if overly fatigued"
                ],
                "workout_preferences": [
                    "Prefer morning rides when possible",
                    "Include variety - not just steady state",
                    "Focus on consistency over peak performance"
                ]
            }

            with open(self.rules_file, 'w') as f:
                yaml.dump(default_rules, f, default_flow_style=False)
            logger.info(f"Created default rules file: {self.rules_file}")

    def get_rules(self) -> str:
        """Get rules as formatted string"""
        with open(self.rules_file, 'r') as f:
            rules = yaml.safe_load(f)

        return yaml.dump(rules, default_flow_style=False)


class CyclingAnalyzer:
    """Main application class using Pydantic AI"""

    def __init__(self, config: Config):
        self.config = config
        self.analyzer = PydanticAIAnalyzer(config)
        self.templates = TemplateManager(config.templates_dir)
        self.rules = RulesManager(config.rules_file)

    async def initialize(self):
        """Initialize the application"""
        logger.info("Initializing Pydantic AI Cycling Analyzer...")
        await self.analyzer.initialize()
        logger.info("Application initialization complete")

    async def cleanup(self):
        """Cleanup resources"""
        await self.analyzer.cleanup()
        logger.info("Application cleanup completed")

    async def analyze_last_workout(self):
        """Analyze the last cycling workout"""
        rules_text = self.rules.get_rules()
        return await self.analyzer.analyze_last_workout(rules_text)

    async def suggest_next_workout(self):
        """Suggest next workout based on recent activities"""
        rules_text = self.rules.get_rules()
        return await self.analyzer.suggest_next_workout(rules_text)

    async def enhanced_analysis(self, analysis_type: str):
        """Perform enhanced analysis using all available tools"""
        rules_text = self.rules.get_rules()
        return await self.analyzer.enhanced_analysis(analysis_type, rules_text)

    async def list_available_tools(self):
        """List available Garmin tools"""
        return self.analyzer.available_tools

    async def run(self):
        """Main application loop"""
        logger.info("Starting Cycling Workout Analyzer with Pydantic AI...")

        logger.info("Calling initialize()...")
        await self.initialize()
        logger.info("Initialize() completed, starting main loop...")

        # Pre-call user_profile tool
        logger.info("Pre-caching user profile...")
        user_profile = await self.analyzer.get_user_profile()
        print("\n" + "="*60)
        print("RAW USER PROFILE (Pre-cached)")
        print("="*60)
        print(json.dumps(user_profile, indent=2, default=str))
        print("="*60)
        logger.info("User profile pre-cached")

        # Pre-call get_recent_cycling_activity_details
        logger.info("Pre-caching recent cycling activity details...")
        activity_data = await self.analyzer.get_recent_cycling_activity_details()

        print("\n" + "="*60)
        print("RAW RECENT ACTIVITIES (Pre-cached)")
        print("="*60)
        print(json.dumps(activity_data.get("activities", []), indent=2, default=str))
        print("="*60)

        if activity_data.get("last_cycling"):
            print("\n" + "="*60)
            print("LAST CYCLING ACTIVITY SUMMARY (Pre-cached)")
            print("="*60)
            print(json.dumps(activity_data["last_cycling"], indent=2, default=str))
            print("="*60)

            print("\n" + "="*60)
            print("ACTIVITY DETAILS (Pre-cached)")
            print("="*60)
            print(json.dumps(activity_data["details"], indent=2, default=str))
            print("="*60)
            logger.info("Recent cycling activity details pre-cached")
        else:
            logger.warning("No cycling activity found in recent activities")
            print("\nWarning: No cycling activity found in recent activities.")

        try:
            while True:
                print("\n" + "="*60)
                print("CYCLING WORKOUT ANALYZER (Pydantic AI + MCP)")
                print("="*60)
                print("1. Analyze last cycling workout")
                print("2. Get next workout suggestion")
                print("3. Enhanced analysis using all MCP tools")
                print("4. List available MCP tools")
                print("5. List available templates")
                print("6. View current rules")
                print("7. Exit")
                print("-"*60)

                choice = input("Enter your choice (1-7): ").strip()
                logger.info(f"User selected option: {choice}")

                try:
                    if choice == "1":
                        print("\nAnalyzing your last workout with Pydantic AI...")
                        analysis = await self.analyze_last_workout()
                        print("\n" + "="*50)
                        print("WORKOUT ANALYSIS (Pydantic AI)")
                        print("="*50)
                        print(analysis)

                    elif choice == "2":
                        print("\nGenerating workout suggestion with Pydantic AI...")
                        suggestion = await self.suggest_next_workout()
                        print("\n" + "="*50)
                        print("NEXT WORKOUT SUGGESTION (Pydantic AI)")
                        print("="*50)
                        print(suggestion)

                    elif choice == "3":
                        print("\nSelect analysis type:")
                        print("a) Performance trends")
                        print("b) Training load analysis")
                        print("c) Recovery assessment")
                        analysis_choice = input("Enter choice (a-c): ").strip().lower()

                        analysis_types = {
                            'a': 'performance trends',
                            'b': 'training load',
                            'c': 'recovery assessment'
                        }

                        if analysis_choice in analysis_types:
                            print(f"\nPerforming {analysis_types[analysis_choice]} analysis...")
                            analysis = await self.enhanced_analysis(
                                analysis_types[analysis_choice]
                            )
                            print(f"\n{'='*50}")
                            print(f"ENHANCED {analysis_types[analysis_choice].upper()} ANALYSIS")
                            print("="*50)
                            print(analysis)
                        else:
                            print("Invalid choice.")

                    elif choice == "4":
                        tools = await self.list_available_tools()
                        print_tools(tools)

                    elif choice == "5":
                        templates = self.templates.list_templates()
                        print(f"\nAvailable templates in {self.config.templates_dir}:")
                        for template in templates:
                            print(f"  - {template}")

                    elif choice == "6":
                        rules = self.rules.get_rules()
                        print(f"\nCurrent rules from {self.config.rules_file}:")
                        print("-"*30)
                        print(rules)

                    elif choice == "7":
                        print("Goodbye!")
                        break

                    else:
                        print("Invalid choice. Please try again.")

                except Exception as e:
                    logger.error(f"Error: {e}")
                    print(f"An error occurred: {e}")

                input("\nPress Enter to continue...")

        finally:
            await self.cleanup()


def load_config() -> Config:
    """Load configuration from environment and config files"""
    config_file = Path("config.yaml")
    if config_file.exists():
        with open(config_file) as f:
            config_data = yaml.safe_load(f)
        return Config(**config_data)

    api_key = os.getenv("OPENROUTER_API_KEY")
    if not api_key:
        api_key = input("Enter your OpenRouter API key: ").strip()

    return Config(
        openrouter_api_key=api_key,
        openrouter_model=os.getenv("OPENROUTER_MODEL", "deepseek/deepseek-r1-0528:free"),
        garth_token=os.getenv("GARTH_TOKEN", ""),
        garth_mcp_server_path=os.getenv("GARTH_MCP_SERVER_PATH", "uvx"),
    )


def create_sample_config():
    """Create a sample config file"""
    config_file = Path("config.yaml")
    if not config_file.exists():
        sample_config = {
            "openrouter_api_key": "your_openrouter_api_key_here",
            "openrouter_model": "google/gemini-flash-1.5",
            "garth_token": "your_garth_token_here",
            "garth_mcp_server_path": "uvx",
            "rules_file": "rules.yaml",
            "templates_dir": "templates"
        }

        with open(config_file, 'w') as f:
            yaml.dump(sample_config, f, default_flow_style=False)
        print(f"Created sample config file: {config_file}")
        print("Please edit it with your actual OpenRouter API key and GARTH_TOKEN.")
        print("Get your GARTH_TOKEN by running: uvx garth login")


async def main():
    """Main entry point"""
    create_sample_config()

    try:
        config = load_config()
        analyzer = CyclingAnalyzer(config)
        await analyzer.run()
    except KeyboardInterrupt:
        print("\nApplication interrupted by user")
    except Exception as e:
        logger.error(f"Application error: {e}")


if __name__ == "__main__":
    asyncio.run(main())
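`load_config` in the deleted main.py gives `config.yaml` absolute precedence: if the file exists it is used outright, and environment variables (with hard-coded defaults) are only consulted as a fallback. That precedence logic can be sketched as a pure function over plain dicts (`resolve_config` is a hypothetical helper used for illustration, not code from this commit):

```python
def resolve_config(file_cfg, env):
    """Config precedence used by load_config: a parsed config.yaml dict wins
    outright; otherwise fall back to environment variables with defaults."""
    if file_cfg:
        return dict(file_cfg)
    return {
        "openrouter_api_key": env.get("OPENROUTER_API_KEY", ""),
        "openrouter_model": env.get("OPENROUTER_MODEL", "deepseek/deepseek-r1-0528:free"),
        "garth_token": env.get("GARTH_TOKEN", ""),
        "garth_mcp_server_path": env.get("GARTH_MCP_SERVER_PATH", "uvx"),
    }


# File config present: environment is ignored entirely.
cfg = resolve_config({"openrouter_api_key": "from-file"}, {"OPENROUTER_API_KEY": "from-env"})
```

One consequence worth noting: because the file wins wholesale rather than key-by-key, a partial `config.yaml` cannot be topped up from the environment.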
mcp_client.py (new file, 160 lines)
@@ -0,0 +1,160 @@
#!/usr/bin/env python3
"""
MCP Client - Handles MCP server connections and tool management
"""

import os
import shutil
import logging
import asyncio
from typing import List, Dict, Any, Optional

try:
    from pydantic_ai.mcp import MCPServerStdio
    MCP_AVAILABLE = True
except ImportError:
    MCP_AVAILABLE = False
    MCPServerStdio = None

from config import Config

logger = logging.getLogger(__name__)


class MCPClient:
    """Manages MCP server connection and tool interactions"""

    def __init__(self, config: Config):
        self.config = config
        self.mcp_server = None
        self.available_tools = []
        self._initialized = False

        if not MCP_AVAILABLE:
            logger.warning("MCP not available. Tool functionality will be limited.")
            return

        # Set up MCP server
        self._setup_mcp_server()

    def _setup_mcp_server(self):
        """Set up MCP server connection"""
        if not self.config.garth_token:
            logger.warning("No GARTH_TOKEN provided. MCP tools will be unavailable.")
            return

        # Set up environment
        os.environ["GARTH_TOKEN"] = self.config.garth_token
        env = os.environ.copy()
        env["GARTH_TOKEN"] = self.config.garth_token

        # Find server executable
        server_executable = shutil.which(self.config.garth_mcp_server_path)
        if not server_executable:
            logger.error(f"'{self.config.garth_mcp_server_path}' not found in PATH")
            return

        self.mcp_server = MCPServerStdio(
            command=server_executable,
            args=["garth-mcp-server"],
            env=env,
        )

    async def initialize(self):
        """Initialize MCP server connection"""
        if not self.mcp_server:
            logger.warning("MCP server not configured")
            return

        try:
            logger.info("Initializing MCP server connection...")

            # The MCP server will be initialized when used by the agent
            # For now, we'll try to list tools to verify connection
            await asyncio.sleep(0.1)  # Give it a moment

            logger.info("MCP server connection established")
            self._initialized = True

        except Exception as e:
            logger.error(f"MCP server initialization failed: {e}")
            self.mcp_server = None

    async def cleanup(self):
        """Cleanup MCP server connection"""
        if self.mcp_server:
            # MCP server cleanup is handled by the agent
            pass

    async def list_tools(self) -> List[Any]:
        """List available MCP tools"""
        if not self.mcp_server:
            return []

        try:
            if not self.available_tools:
                self.available_tools = await self.mcp_server.list_tools()
            return self.available_tools
        except Exception as e:
            logger.error(f"Error listing tools: {e}")
            return []

    async def has_tool(self, tool_name: str) -> bool:
        """Check if a specific tool is available"""
        tools = await self.list_tools()
        return any(tool.name == tool_name for tool in tools)

    async def call_tool(self, tool_name: str, parameters: Dict[str, Any]) -> Any:
        """Call a specific MCP tool directly"""
        if not self.mcp_server:
            raise RuntimeError("MCP server not available")

        try:
            result = await self.mcp_server.direct_call_tool(tool_name, parameters)
            return result.output if hasattr(result, 'output') else result
        except Exception as e:
            logger.error(f"Error calling tool {tool_name}: {e}")
            raise

    def get_tool_info(self, tool_name: str) -> Optional[Dict[str, Any]]:
        """Get information about a specific tool"""
        for tool in self.available_tools:
            if tool.name == tool_name:
                return {
                    "name": tool.name,
                    "description": getattr(tool, 'description', ''),
                    "parameters": getattr(tool, 'inputSchema', {}).get('properties', {})
                }
        return None

    def print_tools(self):
        """Pretty print available tools"""
        if not self.available_tools:
            print("No MCP tools available")
            return

        print(f"\n{'='*60}")
        print("AVAILABLE MCP TOOLS")
        print(f"{'='*60}")

        for i, tool in enumerate(self.available_tools, 1):
            print(f"\n{i}. {tool.name}")
            if hasattr(tool, 'description') and tool.description:
                print(f"   Description: {tool.description}")

            if hasattr(tool, 'inputSchema') and tool.inputSchema:
                properties = tool.inputSchema.get("properties", {})
                if properties:
                    print("   Parameters:")
                    required = tool.inputSchema.get("required", [])
                    for param, info in properties.items():
                        param_type = info.get("type", "unknown")
                        param_desc = info.get("description", "")
                        req_str = " (required)" if param in required else " (optional)"
                        print(f"     - {param} ({param_type}){req_str}: {param_desc}")

        print(f"\n{'='*60}")

    @property
    def is_available(self) -> bool:
        """Check if MCP server is available"""
        return self.mcp_server is not None and self._initialized
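`print_tools` walks each tool's `inputSchema` (a JSON-Schema fragment) and renders every property with its type, an optional/required marker, and its description. The rendering logic is easy to isolate and test against a plain dict (`describe_params` is a hypothetical helper extracted for illustration; the real method prints directly):

```python
def describe_params(input_schema: dict) -> list:
    """Render a tool's JSON-schema parameters the way print_tools formats them."""
    lines = []
    required = input_schema.get("required", [])
    for name, info in input_schema.get("properties", {}).items():
        param_type = info.get("type", "unknown")
        param_desc = info.get("description", "")
        req_str = " (required)" if name in required else " (optional)"
        lines.append(f"- {name} ({param_type}){req_str}: {param_desc}")
    return lines


schema = {
    "properties": {
        "limit": {"type": "integer", "description": "Max activities"},
    },
    "required": [],
}
print("\n".join(describe_params(schema)))
# - limit (integer) (optional): Max activities
```

Extracting the formatting like this would also let `print_tools` and the near-duplicate `print_tools` function in mcp_manager.py share one implementation instead of two copies.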
mcp_manager.py (deleted, 426 lines)
@@ -1,426 +0,0 @@
#!/usr/bin/env python3
"""
MCP Manager for Pydantic AI Cycling Analyzer
"""

import os
import json
import asyncio
import shutil
import logging
from typing import List, Any
from dataclasses import dataclass
import garth

# Pydantic AI imports
try:
    from pydantic_ai import Agent
    PYDANTIC_AI_AVAILABLE = True
except ImportError:
    PYDANTIC_AI_AVAILABLE = False
    Agent = None
    print("Pydantic AI not available. Install with: pip install pydantic-ai")

from templates_manager import TemplateManager

# MCP Protocol imports for direct connection
try:
    from pydantic_ai.mcp import MCPServerStdio
    from pydantic_ai import exceptions
    MCP_AVAILABLE = True
except ImportError:
    MCP_AVAILABLE = False
    MCPServerStdio = None
    exceptions = None
    print("pydantic_ai.mcp not available. You might need to upgrade pydantic-ai.")

# Configure logging for this module
logger = logging.getLogger(__name__)


@dataclass
class Config:
    """Application configuration"""
    openrouter_api_key: str
    openrouter_model: str = "deepseek/deepseek-r1-0528:free"
    garth_token: str = ""
    garth_mcp_server_path: str = "uvx"
    rules_file: str = "rules.yaml"
    templates_dir: str = "templates"


def print_tools(tools: List[Any]):
    """Pretty print the tools list."""
    if not tools:
        print("\nNo tools available.")
        return

    print(f"\n{'='*60}")
    print("AVAILABLE TOOLS")
    print(f"{'='*60}")

    for i, tool in enumerate(tools, 1):
        print(f"\n{i}. {tool.name}")
        if tool.description:
            print(f"   Description: {tool.description}")

        if hasattr(tool, 'inputSchema') and tool.inputSchema:
            properties = tool.inputSchema.get("properties", {})
            if properties:
                print("   Parameters:")
                required_params = tool.inputSchema.get("required", [])
                for prop_name, prop_info in properties.items():
                    prop_type = prop_info.get("type", "unknown")
                    prop_desc = prop_info.get("description", "")
                    required = prop_name in required_params
                    req_str = " (required)" if required else " (optional)"
                    print(f"     - {prop_name} ({prop_type}){req_str}: {prop_desc}")

    print(f"\n{'='*60}")


class PydanticAIAnalyzer:
    """Pydantic AI powered cycling analyzer"""

    def __init__(self, config: Config):
        self.config = config
        self.template_manager = TemplateManager(self.config.templates_dir)
        self.mcp_server = None
        self.available_tools = []
        self._cached_activity_details = None

        if not PYDANTIC_AI_AVAILABLE or not MCP_AVAILABLE:
            raise Exception("Pydantic AI or MCP not available. Please check your installation.")

        os.environ['OPENROUTER_API_KEY'] = config.openrouter_api_key
        os.environ['OPENAI_BASE_URL'] = "https://openrouter.ai/api/v1"
        os.environ['OPENAI_DEFAULT_HEADERS'] = json.dumps({
            "HTTP-Referer": "https://github.com/cycling-analyzer",
            "X-Title": "Cycling Workout Analyzer"
        })

        env = os.environ.copy()
        os.environ["GARTH_TOKEN"] = config.garth_token
        env["GARTH_TOKEN"] = config.garth_token

        server_executable = shutil.which(config.garth_mcp_server_path)
        if not server_executable:
            logger.error(f"'{config.garth_mcp_server_path}' not found in PATH. MCP tools will be unavailable.")
        else:
            self.mcp_server = MCPServerStdio(
                command=server_executable,
                args=["garth-mcp-server"],
                env=env,
            )

        model_name = f"openrouter:{config.openrouter_model}"

        main_system_prompt = self.template_manager.get_template('base/system_prompts/main_agent.txt')

        self.agent = Agent(
            model=model_name,
            system_prompt=main_system_prompt,
            toolsets=[self.mcp_server] if self.mcp_server else []
        )

    async def initialize(self):
        """Initialize the analyzer and connect to MCP server"""
        logger.info("Initializing Pydantic AI analyzer...")
        if self.agent and self.mcp_server:
            try:
                logger.info("Attempting to enter agent context...")
                await asyncio.wait_for(self.agent.__aenter__(), timeout=45)
                logger.info("✓ Agent context entered successfully")
                logger.info("Listing available MCP tools...")
                self.available_tools = await self.mcp_server.list_tools()
                logger.info(f"✓ Found {len(self.available_tools)} MCP tools.")
                if self.available_tools:
                    for tool in self.available_tools[:5]:  # Log first 5 tools
                        logger.info(f"  Tool: {tool.name} - {getattr(tool, 'description', 'No description')}")
                    if len(self.available_tools) > 5:
                        logger.info(f"  ... and {len(self.available_tools) - 5} more tools")
                else:
                    logger.warning("No tools returned from MCP server!")
            except asyncio.TimeoutError:
                logger.error("Agent initialization timed out. MCP tools will be unavailable.")
                self.mcp_server = None
            except Exception as e:
                logger.error(f"Agent initialization failed: {e}. MCP tools will be unavailable.")
                logger.error(f"Exception type: {type(e)}")
                import traceback
                logger.error(f"Full initialization traceback: {traceback.format_exc()}")
                self.mcp_server = None
        else:
            logger.warning("MCP server not configured. MCP tools will be unavailable.")

    async def cleanup(self):
        """Cleanup resources"""
        if self.agent and self.mcp_server:
            await self.agent.__aexit__(None, None, None)
        logger.info("Cleanup completed")

    async def get_recent_cycling_activity_details(self) -> dict:
        """Pre-call get_activities and get_activity_details to cache the last cycling activity details"""
        if self._cached_activity_details is not None:
            logger.debug("Returning cached activity details")
            return self._cached_activity_details

        if not self.mcp_server:
            logger.error("MCP server not available")
            return {}

        try:
            logger.debug("Pre-calling get_activities tool")
            activities_args = {"limit": 10}
            activities = []
            try:
                logger.debug("Bypassing direct_call_tool and using garth.connectapi directly for get_activities")
                garth.client.loads(self.config.garth_token)
                from urllib.parse import urlencode
                params = {"limit": 10}
                endpoint = "activitylist-service/activities/search/activities"
                endpoint += "?" + urlencode(params)
                activities = garth.connectapi(endpoint)
            except Exception as e:
                logger.error(f"Error calling garth.connectapi directly: {e}", exc_info=True)
                activities = []

            if not activities:
                logger.error("Failed to retrieve activities.")
                return {"error": "Failed to retrieve activities."}

            logger.debug(f"Retrieved {len(activities)} activities")

            # Filter for cycling activities
            cycling_activities = [
                act for act in activities
                if "cycling" in act.get("activityType", {}).get("typeKey", "").lower()
            ]

            if not cycling_activities:
                logger.warning("No cycling activities found")
                self._cached_activity_details = {"activities": activities, "last_cycling": None, "details": None}
                return self._cached_activity_details

            # Get the most recent cycling activity
            last_cycling = max(cycling_activities, key=lambda x: x.get("start_time", "1970-01-01"))
            activity_id = last_cycling["activityId"]
            logger.debug(f"Last cycling activity ID: {activity_id}")

            logger.debug("Pre-calling get_activity_details tool")
            details = garth.connectapi(f"activity-service/activity/{activity_id}")
            logger.debug("Retrieved activity details")

            self._cached_activity_details = {
                "activities": activities,
                "last_cycling": last_cycling,
                "details": details
            }
            logger.info("Cached recent cycling activity details successfully")
            return self._cached_activity_details

        except Exception as e:
            logger.error(f"Error pre-calling activity tools: {e}", exc_info=True)
            self._cached_activity_details = {"error": str(e)}
            return self._cached_activity_details

    async def get_user_profile(self) -> dict:
        """Pre-call user_profile tool to cache the response"""
        if hasattr(self, '_cached_user_profile') and self._cached_user_profile is not None:
            logger.debug("Returning cached user profile")
            return self._cached_user_profile

        if not self.mcp_server:
            logger.error("MCP server not available")
            return {}

        try:
            logger.debug("Pre-calling user_profile tool")
            profile_result = await self.mcp_server.direct_call_tool("user_profile", {})
            profile = profile_result.output if hasattr(profile_result, 'output') else profile_result
            logger.debug("Retrieved user profile")

            self._cached_user_profile = profile
            logger.info("Cached user profile successfully")
            return profile

        except Exception as e:
            logger.error(f"Error pre-calling user_profile: {e}", exc_info=True)
            self._cached_user_profile = {"error": str(e)}
            return self._cached_user_profile

    async def analyze_last_workout(self, training_rules: str) -> str:
        """Analyze the last cycling workout using Pydantic AI"""
        logger.info("Analyzing last workout with Pydantic AI...")

        # Get pre-cached data
        activity_data = await self.get_recent_cycling_activity_details()
        user_profile = await self.get_user_profile()

        if not activity_data.get("last_cycling"):
            return "No recent cycling activity found to analyze."

        last_activity = activity_data["last_cycling"]
        details = activity_data["details"]

        # Summarize key data for prompt
        activity_summary = f"""
Last Cycling Activity:
- Start Time: {last_activity.get('start_time', 'N/A')}
- Duration: {last_activity.get('duration', 'N/A')} seconds
- Distance: {last_activity.get('distance', 'N/A')} meters
- Average Speed: {last_activity.get('averageSpeed', 'N/A')} m/s
- Average Power: {last_activity.get('avgPower', 'N/A')} W (if available)
- Max Power: {last_activity.get('maxPower', 'N/A')} W (if available)
- Average Heart Rate: {last_activity.get('avgHr', 'N/A')} bpm (if available)

Full Activity Details: {json.dumps(details, default=str)}
"""

        user_info = f"""
User Profile:
{json.dumps(user_profile, default=str)}
"""

        prompt = self.template_manager.get_template(
            'workflows/analyze_last_workout.txt',
            activity_summary=activity_summary,
            user_info=user_info,
            training_rules=training_rules
        )

        try:
            # Create temporary agent without tools for this analysis
            model_name = f"openrouter:{self.config.openrouter_model}"
            temp_analysis_system_prompt = self.template_manager.get_template('base/system_prompts/no_tools_analysis.txt')
            temp_agent = Agent(
                model=model_name,
                system_prompt=temp_analysis_system_prompt,
                toolsets=[]
            )

            # Enter context for temp agent
            await asyncio.wait_for(temp_agent.__aenter__(), timeout=30)

            result = await temp_agent.run(prompt)

            # Exit context
            await temp_agent.__aexit__(None, None, None)

            return str(result)
        except asyncio.TimeoutError:
            logger.error("Temp agent initialization timed out")
            return "Error: Agent initialization timed out. Please try again."
        except Exception as e:
            logger.error(f"Error in workout analysis: {e}")
            if hasattr(temp_agent, '__aexit__'):
                await temp_agent.__aexit__(None, None, None)
            return "Error analyzing workout. Please check the logs for more details."

    async def suggest_next_workout(self, training_rules: str) -> str:
        """Suggest next workout using Pydantic AI"""
        logger.info("Generating workout suggestion with Pydantic AI...")

        # Log available tools before making the call
        if self.available_tools:
            tool_names = [tool.name for tool in self.available_tools]
            logger.info(f"Available MCP tools: {tool_names}")
            if 'get_activities' not in tool_names:
                logger.warning("WARNING: 'get_activities' tool not found in available tools!")
        else:
            logger.warning("No MCP tools available!")

        prompt = self.template_manager.get_template(
            'workflows/suggest_next_workout.txt',
            training_rules=training_rules
        )

        logger.info("About to call agent.run() with workout suggestion prompt")
        try:
            result = await self.agent.run(prompt)
            logger.info("Agent run completed successfully")
            return result.text
        except Exception as e:
            logger.error(f"Error in workout suggestion: {e}")
            logger.error(f"Exception type: {type(e)}")
            import traceback
            logger.error(f"Full traceback: {traceback.format_exc()}")

            if "exceeded max retries" in str(e):
                return "Failed to fetch your activity data from Garmin after several attempts. Please check your connection and try again."
            return "Error suggesting workout. Please check the logs for more details."

    async def enhanced_analysis(self, analysis_type: str, training_rules: str) -> str:
        """Perform enhanced analysis using Pydantic AI with all available tools"""
        logger.info(f"Performing enhanced {analysis_type} analysis...")

        # Get pre-cached data
activity_data = await self.get_recent_cycling_activity_details()
|
||||
user_profile = await self.get_user_profile()
|
||||
|
||||
if not activity_data.get("last_cycling"):
|
||||
return f"No recent cycling activity found for {analysis_type} analysis."
|
||||
|
||||
# Summarize recent activities
|
||||
recent_activities = activity_data.get("activities", [])
|
||||
cycling_activities_summary = "\n".join([
|
||||
f"- {act.get('start_time', 'N/A')}: {act.get('activityType', {}).get('typeKey', 'Unknown')} - Duration: {act.get('duration', 'N/A')}s"
|
||||
for act in recent_activities[-5:] # Last 5 activities
|
||||
])
|
||||
|
||||
last_activity = activity_data["last_cycling"]
|
||||
details = activity_data["details"]
|
||||
|
||||
activity_summary = f"""
|
||||
Most Recent Cycling Activity:
|
||||
- Start Time: {last_activity.get('start_time', 'N/A')}
|
||||
- Duration: {last_activity.get('duration', 'N/A')} seconds
|
||||
- Distance: {last_activity.get('distance', 'N/A')} meters
|
||||
- Average Speed: {last_activity.get('averageSpeed', 'N/A')} m/s
|
||||
- Average Power: {last_activity.get('avgPower', 'N/A')} W
|
||||
- Max Power: {last_activity.get('maxPower', 'N/A')} W
|
||||
- Average Heart Rate: {last_activity.get('avgHr', 'N/A')} bpm
|
||||
|
||||
Full Activity Details: {json.dumps(details, default=str)}
|
||||
|
||||
Recent Activities (last 5):
|
||||
{cycling_activities_summary}
|
||||
"""
|
||||
|
||||
user_info = f"""
|
||||
User Profile:
|
||||
{json.dumps(user_profile, default=str)}
|
||||
"""
|
||||
|
||||
prompt = self.template_manager.get_template(
|
||||
'workflows/single_workout_analysis.txt',
|
||||
analysis_type=analysis_type,
|
||||
activity_summary=activity_summary,
|
||||
user_info=user_info,
|
||||
training_rules=training_rules
|
||||
)
|
||||
|
||||
try:
|
||||
# Create temporary agent without tools for this analysis
|
||||
model_name = f"openrouter:{self.config.openrouter_model}"
|
||||
enhanced_temp_system_prompt = self.template_manager.get_template('base/system_prompts/no_tools_analysis.txt')
|
||||
temp_agent = Agent(
|
||||
model=model_name,
|
||||
system_prompt=enhanced_temp_system_prompt,
|
||||
toolsets=[]
|
||||
)
|
||||
|
||||
# Enter context for temp agent
|
||||
await asyncio.wait_for(temp_agent.__aenter__(), timeout=30)
|
||||
|
||||
result = await temp_agent.run(prompt)
|
||||
|
||||
# Exit context
|
||||
await temp_agent.__aexit__(None, None, None)
|
||||
|
||||
return str(result)
|
||||
except asyncio.TimeoutError:
|
||||
logger.error("Temp agent initialization timed out")
|
||||
return f"Error: Agent initialization timed out for {analysis_type} analysis."
|
||||
except Exception as e:
|
||||
logger.error(f"Error in enhanced analysis: {e}")
|
||||
if hasattr(temp_agent, '__aexit__'):
|
||||
await temp_agent.__aexit__(None, None, None)
|
||||
return f"Error in {analysis_type} analysis: {e}"
|
||||
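The manual `__aenter__`/`__aexit__` calls in the methods above are fragile: if `run()` raises between entry and exit, cleanup only happens if the matching `except` branch fires. A more robust shape is `async with`, sketched here with a stand-in class (`StubAgent` is hypothetical; the sketch only assumes the agent supports the async context-manager protocol, as the calls above imply):

```python
import asyncio

class StubAgent:
    """Stand-in for an agent object, to illustrate the lifecycle pattern."""
    def __init__(self):
        self.closed = False

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        self.closed = True  # cleanup runs on success, error, or timeout
        return False

    async def run(self, prompt: str) -> str:
        return f"analysis of: {prompt}"

async def analyze(agent, prompt: str) -> str:
    # "async with" guarantees __aexit__ runs no matter how run() finishes
    async with agent:
        return await asyncio.wait_for(agent.run(prompt), timeout=30)

agent = StubAgent()
result = asyncio.run(analyze(agent, "last workout"))
```

With this shape the `except` branches no longer need their own `__aexit__` calls.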
@@ -1,103 +0,0 @@
#!/usr/bin/env python3
"""
MCP Tool Lister - Lists available tools, executes user_profile, get_activities, and get_activity_details tools
"""

import asyncio
import logging
import yaml
from mcp_manager import Config, PydanticAIAnalyzer, print_tools
import json

# Configure extensive debug logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

async def main():
    logger.info("Starting MCP tool lister")
    analyzer = None
    try:
        # Load configuration from config.yaml
        logger.debug("Loading configuration from config.yaml")
        with open('config.yaml', 'r') as f:
            config_data = yaml.safe_load(f)
        logger.debug(f"Loaded config data: {config_data}")

        config = Config(**config_data)
        logger.info("Configuration loaded and Config object created")

        # Initialize the analyzer
        logger.debug("Creating PydanticAIAnalyzer instance")
        analyzer = PydanticAIAnalyzer(config)
        logger.info("PydanticAIAnalyzer instance created")

        # Initialize the analyzer (starts MCP server and lists tools)
        logger.debug("Initializing analyzer (starting MCP server)")
        await analyzer.initialize()
        logger.info("Analyzer initialized successfully")

        # List available tools
        logger.debug(f"Available tools count: {len(analyzer.available_tools)}")
        print_tools(analyzer.available_tools)
        logger.info("Available tools listed and printed")

        # Pre-call user_profile tool
        logger.debug("Pre-calling user_profile tool")
        user_profile = await analyzer.get_user_profile()
        print("\n" + "="*60)
        print("RAW USER PROFILE (Pre-cached)")
        print("="*60)
        print(json.dumps(user_profile, indent=2, default=str))
        print("="*60)
        logger.info("User profile pre-cached and printed")

        # Pre-call get_recent_cycling_activity_details
        logger.debug("Pre-calling get_recent_cycling_activity_details")
        activity_data = await analyzer.get_recent_cycling_activity_details()

        print("\n" + "="*60)
        print("RAW RECENT ACTIVITIES (Pre-cached)")
        print("="*60)
        print(json.dumps(activity_data.get("activities", []), indent=2, default=str))
        print("="*60)

        if activity_data.get("last_cycling"):
            print("\n" + "="*60)
            print("LAST CYCLING ACTIVITY SUMMARY (Pre-cached)")
            print("="*60)
            print(json.dumps(activity_data["last_cycling"], indent=2, default=str))
            print("="*60)

            print("\n" + "="*60)
            print("ACTIVITY DETAILS (Pre-cached)")
            print("="*60)
            print(json.dumps(activity_data["details"], indent=2, default=str))
            print("="*60)
            logger.info("Recent cycling activity details pre-cached and printed")
        else:
            logger.warning("No cycling activity found in recent activities")
            print("\nWarning: No cycling activity found in recent activities.")

    except FileNotFoundError as e:
        logger.error(f"Config file not found: {e}")
        print("Error: config.yaml not found. Please ensure the file exists.")
    except yaml.YAMLError as e:
        logger.error(f"YAML parsing error: {e}")
        print("Error: Invalid YAML in config.yaml.")
    except Exception as e:
        logger.error(f"Unexpected error: {e}", exc_info=True)
        print(f"Error during execution: {e}")
    finally:
        # Ensure proper cleanup
        if analyzer:
            logger.debug("Performing cleanup")
            await analyzer.cleanup()
            logger.info("Cleanup completed successfully")
        else:
            logger.warning("No analyzer to cleanup")

if __name__ == "__main__":
    asyncio.run(main())
167	minimal_mcp_test (1).py	Normal file
@@ -0,0 +1,167 @@
#!/usr/bin/env python3
"""
Minimal MCP Test - Just test MCP connection and user profile
"""

import asyncio
import json
import logging
import os

# Minimal imports - just what we need
try:
    from pydantic_ai.mcp import MCPServerStdio
    import shutil
    MCP_AVAILABLE = True
except ImportError:
    print("❌ pydantic-ai MCP not available")
    print("Install with: pip install pydantic-ai")
    exit(1)

# Simple logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class MinimalMCPTest:
    """Minimal MCP test class"""

    def __init__(self, garth_token: str, server_path: str = "uvx"):
        self.garth_token = garth_token
        self.server_path = server_path
        self.mcp_server = None
        self.cached_profile = None

    def setup_mcp_server(self):
        """Setup MCP server connection"""
        # Set environment
        os.environ["GARTH_TOKEN"] = self.garth_token
        env = os.environ.copy()

        # Find server executable
        server_executable = shutil.which(self.server_path)
        if not server_executable:
            raise FileNotFoundError(f"'{self.server_path}' not found in PATH")

        self.mcp_server = MCPServerStdio(
            command=server_executable,
            args=["garth-mcp-server"],
            env=env,
        )

        print("✅ MCP server configured")

    async def test_connection(self):
        """Test basic MCP connection"""
        if not self.mcp_server:
            raise RuntimeError("MCP server not configured")

        try:
            # Try to list tools
            tools = await self.mcp_server.list_tools()
            print(f"✅ MCP connected - found {len(tools)} tools")

            # Show tools
            for tool in tools:
                print(f"  📋 {tool.name}: {getattr(tool, 'description', 'No description')}")

            return tools

        except Exception as e:
            print(f"❌ MCP connection failed: {e}")
            raise

    async def get_user_profile(self):
        """Get and cache user profile"""
        try:
            print("📞 Calling user_profile tool...")

            # Direct tool call
            result = await self.mcp_server.direct_call_tool("user_profile", {})
            profile_data = result.output if hasattr(result, 'output') else result

            # Cache it
            self.cached_profile = profile_data

            print("✅ User profile retrieved and cached")
            return profile_data

        except Exception as e:
            print(f"❌ Failed to get user profile: {e}")
            raise

    def print_profile(self):
        """Print cached profile"""
        if not self.cached_profile:
            print("❌ No cached profile")
            return

        print("\n" + "="*50)
        print("USER PROFILE")
        print("="*50)
        print(json.dumps(self.cached_profile, indent=2, default=str))
        print("="*50)

    async def run_test(self):
        """Run the complete test"""
        print("🚀 Starting Minimal MCP Test\n")

        # Setup
        self.setup_mcp_server()

        # Test connection
        tools = await self.test_connection()

        # Check if user_profile tool exists
        user_profile_tool = next((t for t in tools if t.name == "user_profile"), None)
        if not user_profile_tool:
            print("❌ user_profile tool not found")
            return False

        # Get user profile
        await self.get_user_profile()

        # Show results
        self.print_profile()

        print("\n🎉 Test completed successfully!")
        return True

def get_config():
    """Get configuration from environment or user input"""
    garth_token = os.getenv("GARTH_TOKEN")

    if not garth_token:
        print("GARTH_TOKEN not found in environment")
        print("Please run 'uvx garth login' first to authenticate")
        garth_token = input("Enter your GARTH_TOKEN: ").strip()

        if not garth_token:
            raise ValueError("GARTH_TOKEN is required")

    server_path = os.getenv("GARTH_MCP_SERVER_PATH", "uvx")

    return garth_token, server_path

async def main():
    """Main entry point"""
    try:
        # Get config
        garth_token, server_path = get_config()

        # Run test
        test = MinimalMCPTest(garth_token, server_path)
        success = await test.run_test()

        if success:
            print("\n✅ All tests passed!")
        else:
            print("\n❌ Tests failed!")

    except KeyboardInterrupt:
        print("\n👋 Interrupted by user")
    except Exception as e:
        print(f"\n💥 Error: {e}")
        logger.error("Test error", exc_info=True)

if __name__ == "__main__":
    asyncio.run(main())
195	minimal_mcp_test (2).py	Normal file
@@ -0,0 +1,195 @@
#!/usr/bin/env python3
"""
Minimal MCP Test - Just test MCP connection and user profile
"""

import asyncio
import json
import logging
import os
import yaml
from pathlib import Path

# Minimal imports - just what we need
try:
    from pydantic_ai.mcp import MCPServerStdio
    import shutil
    MCP_AVAILABLE = True
except ImportError:
    print("❌ pydantic-ai MCP not available")
    print("Install with: pip install pydantic-ai")
    exit(1)

# Simple logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class MinimalMCPTest:
    """Minimal MCP test class"""

    def __init__(self, garth_token: str, server_path: str = "uvx"):
        self.garth_token = garth_token
        self.server_path = server_path
        self.mcp_server = None
        self.cached_profile = None

    def setup_mcp_server(self):
        """Setup MCP server connection"""
        # Set environment
        os.environ["GARTH_TOKEN"] = self.garth_token
        env = os.environ.copy()

        # Find server executable
        server_executable = shutil.which(self.server_path)
        if not server_executable:
            raise FileNotFoundError(f"'{self.server_path}' not found in PATH")

        self.mcp_server = MCPServerStdio(
            command=server_executable,
            args=["garth-mcp-server"],
            env=env,
        )

        print("✅ MCP server configured")

    async def test_connection(self):
        """Test basic MCP connection"""
        if not self.mcp_server:
            raise RuntimeError("MCP server not configured")

        try:
            # Try to list tools
            tools = await self.mcp_server.list_tools()
            print(f"✅ MCP connected - found {len(tools)} tools")

            # Show tools
            for tool in tools:
                print(f"  📋 {tool.name}: {getattr(tool, 'description', 'No description')}")

            return tools

        except Exception as e:
            print(f"❌ MCP connection failed: {e}")
            raise

    async def get_user_profile(self):
        """Get and cache user profile"""
        try:
            print("📞 Calling user_profile tool...")

            # Direct tool call
            result = await self.mcp_server.direct_call_tool("user_profile", {})
            profile_data = result.output if hasattr(result, 'output') else result

            # Cache it
            self.cached_profile = profile_data

            print("✅ User profile retrieved and cached")
            return profile_data

        except Exception as e:
            print(f"❌ Failed to get user profile: {e}")
            raise

    def print_profile(self):
        """Print cached profile"""
        if not self.cached_profile:
            print("❌ No cached profile")
            return

        print("\n" + "="*50)
        print("USER PROFILE")
        print("="*50)
        print(json.dumps(self.cached_profile, indent=2, default=str))
        print("="*50)

    async def run_test(self):
        """Run the complete test"""
        print("🚀 Starting Minimal MCP Test\n")

        # Setup
        self.setup_mcp_server()

        # Test connection
        tools = await self.test_connection()

        # Check if user_profile tool exists
        user_profile_tool = next((t for t in tools if t.name == "user_profile"), None)
        if not user_profile_tool:
            print("❌ user_profile tool not found")
            return False

        # Get user profile
        await self.get_user_profile()

        # Show results
        self.print_profile()

        print("\n🎉 Test completed successfully!")
        return True

def get_config():
    """Get configuration from config.yaml or environment"""
    config_file = Path("config.yaml")

    # Try to load from config.yaml first
    if config_file.exists():
        try:
            with open(config_file, 'r') as f:
                config_data = yaml.safe_load(f)

            garth_token = config_data.get("garth_token")
            server_path = config_data.get("garth_mcp_server_path", "uvx")

            if garth_token and garth_token != "your_garth_token_here":
                print("✅ Configuration loaded from config.yaml")
                return garth_token, server_path
            else:
                print("⚠️ garth_token not properly set in config.yaml")

        except yaml.YAMLError as e:
            print(f"❌ Error parsing config.yaml: {e}")
        except Exception as e:
            print(f"❌ Error reading config.yaml: {e}")
    else:
        print("ℹ️ config.yaml not found")

    # Fallback to environment variables
    print("Trying environment variables...")
    garth_token = os.getenv("GARTH_TOKEN")

    if not garth_token:
        print("❌ GARTH_TOKEN not found in config.yaml or environment")
        print("Please either:")
        print("1. Create config.yaml with garth_token")
        print("2. Run 'uvx garth login' and set GARTH_TOKEN environment variable")
        raise ValueError("GARTH_TOKEN is required")

    server_path = os.getenv("GARTH_MCP_SERVER_PATH", "uvx")
    print("✅ Configuration loaded from environment variables")

    return garth_token, server_path

async def main():
    """Main entry point"""
    try:
        # Get config
        garth_token, server_path = get_config()

        # Run test
        test = MinimalMCPTest(garth_token, server_path)
        success = await test.run_test()

        if success:
            print("\n✅ All tests passed!")
        else:
            print("\n❌ Tests failed!")

    except KeyboardInterrupt:
        print("\n👋 Interrupted by user")
    except Exception as e:
        print(f"\n💥 Error: {e}")
        logger.error("Test error", exc_info=True)

if __name__ == "__main__":
    asyncio.run(main())
269	readme_file.md	Normal file
@@ -0,0 +1,269 @@
# Cycling Workout Analyzer - Clean Architecture

A modular, extensible cycling workout analyzer built with a clean architecture that separates core concerns into focused modules.

## Architecture Overview

The application is structured into distinct, focused modules:

```
├── core_app.py          # Main orchestrator
├── config.py            # Configuration management
├── llm_client.py        # LLM interactions
├── mcp_client.py        # MCP server management
├── cache_manager.py     # Data caching with TTL
├── template_engine.py   # Template loading/rendering
├── cli_interface.py     # Command line interface
└── requirements.txt     # Dependencies
```

## Core Features

### 🤖 LLM Integration
- OpenRouter API support with multiple models
- Both tool-enabled and tool-free analysis modes
- Async request handling with timeouts

### 🔧 MCP Tool Management
- Automatic MCP server discovery and connection
- Tool listing and direct tool calling
- Garth MCP server integration for Garmin data

### 💾 Smart Caching
- TTL-based caching system
- Pre-loading of common data (user profile, activities)
- Specialized cycling data cache helpers
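The TTL-based caching described above amounts to storing each value with an expiry time and evicting on read. A minimal sketch (the project's actual `CacheManager` in `cache_manager.py` may differ in detail; `TTLCache` here is illustrative):

```python
import time

class TTLCache:
    """Minimal TTL cache: values expire default_ttl seconds after being set."""

    def __init__(self, default_ttl=300):
        self.default_ttl = default_ttl
        self._store = {}  # key -> (value, monotonic expiry time)

    def set(self, key, value, ttl=None):
        ttl = self.default_ttl if ttl is None else ttl
        self._store[key] = (value, time.monotonic() + ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # evict lazily on read
            return default
        return value

cache = TTLCache(default_ttl=300)
cache.set("user_profile", {"name": "demo"}, ttl=3600)
```

Lazy eviction keeps the implementation tiny; a background sweeper would only matter if stale entries held significant memory between reads.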
### 📝 Template System
- Modular template structure
- Section includes and variable substitution
- Auto-creation of default templates

### ⚙️ Configuration
- YAML config files with environment variable fallback
- Automatic sample config generation
- Extensible configuration structure

## Quick Start

### 1. Install Dependencies
```bash
pip install -r requirements.txt

# Install MCP server for Garmin data
npm install -g garth-mcp-server
```

### 2. Configure
```bash
# Run once to create config.yaml
python cli_interface.py

# Edit config.yaml with your API keys
```

### 3. Run
```bash
python cli_interface.py
```

## Usage Examples

### Basic Analysis
```python
from config import load_config
from core_app import CyclingAnalyzerApp

config = load_config()
app = CyclingAnalyzerApp(config)

await app.initialize()

# Analyze last workout
analysis = await app.analyze_workout("analyze_last_workout")
print(analysis)

# Get workout suggestion
suggestion = await app.suggest_next_workout()
print(suggestion)

await app.cleanup()
```

### Custom Analysis
```python
# Enhanced analysis with tools
analysis = await app.enhanced_analysis(
    "performance_trends",
    training_rules="Custom rules here"
)

# Check what tools are available
tools = app.list_available_tools()
for tool in tools:
    print(f"- {tool.name}: {tool.description}")
```

### Cache Management
```python
# Check cached data
cached = app.get_cached_data()
print("Cached keys:", list(cached.keys()))

# Cache custom data
app.cache_manager.set("custom_key", {"data": "value"}, ttl=600)
```

## Configuration

### config.yaml
```yaml
# LLM Settings
openrouter_api_key: "your_api_key_here"
openrouter_model: "deepseek/deepseek-r1-0528:free"

# MCP Settings
garth_token: "your_garth_token_here"
garth_mcp_server_path: "uvx"

# Application Settings
templates_dir: "templates"
rules_file: "rules.yaml"
cache_ttl: 300
log_level: "INFO"
```

### Environment Variables
```bash
export OPENROUTER_API_KEY="your_key"
export GARTH_TOKEN="your_token"
export LOG_LEVEL="DEBUG"
```
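The "config file with environment variable fallback" behavior can be reduced to a small resolver: a non-empty value from the parsed YAML wins, otherwise the same key uppercased is looked up in the environment. A sketch (`resolve_setting` is a hypothetical helper; the real `load_config()` in `config.py` may behave differently):

```python
import os

def resolve_setting(name, config, default=None):
    """YAML value wins; fall back to NAME-uppercased in the environment."""
    value = config.get(name)
    if value not in (None, ""):
        return value
    return os.environ.get(name.upper(), default)

# Demo: model comes from the config dict, the API key from the environment
config = {"openrouter_model": "deepseek/deepseek-r1-0528:free",
          "openrouter_api_key": ""}
os.environ["OPENROUTER_API_KEY"] = "env-key-for-demo"  # demo value only

model = resolve_setting("openrouter_model", config)
api_key = resolve_setting("openrouter_api_key", config)
```

Treating the empty string as "unset" matters here, because generated sample configs often ship empty placeholder values.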
## Extension Points

### Custom Analysis Types
```python
# Add new analysis in core_app.py
async def custom_analysis(self, **kwargs) -> str:
    template = "workflows/custom_analysis.txt"
    context = {"custom_data": kwargs}
    prompt = self.template_engine.render(template, **context)
    return await self.llm_client.generate(prompt)
```

### Custom MCP Tools
```python
# Add new tool support in mcp_client.py
async def call_custom_tool(self, parameters: dict) -> dict:
    return await self.call_tool("custom_tool", parameters)
```

### Custom Templates
Create new templates in `templates/workflows/`:
```
templates/
├── workflows/
│   ├── my_analysis.txt
│   └── custom_report.txt
├── base/
│   ├── data_sections/
│   └── analysis_frameworks/
```

### Custom Cache Strategies
```python
from cache_manager import CacheManager

class CustomCache(CacheManager):
    def cache_performance_data(self, data, athlete_id):
        self.set(f"performance_{athlete_id}", data, ttl=1800)
```

## Architecture Benefits

### Separation of Concerns
- **Config**: Handles all configuration logic
- **LLM Client**: Pure LLM interactions
- **MCP Client**: Tool management only
- **Cache**: Data persistence with TTL
- **Templates**: Prompt composition
- **CLI**: User interface

### Extensibility
- Easy to add new LLM providers
- Plugin-style MCP tool additions
- Template-based prompt customization
- Configurable caching strategies

### Testability
- Each module has a single responsibility
- Clear interfaces between components
- Mock-friendly async design
- Dependency injection ready

### Maintainability
- Small, focused files
- Clear naming conventions
- Comprehensive logging
- Error handling at boundaries

## Advanced Features

### Template Inheritance
Templates can include sections and inherit from base templates:
```
{activity_summary_section}   # Includes base/data_sections/activity_summary.txt
{assessment_points}          # Includes base/analysis_frameworks/assessment_points.txt
```
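The section-include mechanism can be sketched as a two-pass render: first expand `{section_name}` placeholders that name known sections, then substitute the remaining variables. The section bodies and `render` function here are hypothetical stand-ins for the files under `templates/base/` and the real `TemplateEngine`:

```python
import re

SECTIONS = {
    # Stand-ins for base/data_sections/*.txt and base/analysis_frameworks/*.txt
    "activity_summary_section": "Last ride: {distance} km in {duration} min",
    "assessment_points": "- Pacing\n- Power distribution",
}

def render(template: str, **variables) -> str:
    """Expand known {section} includes, then fill in variables."""
    def expand(match):
        key = match.group(1)
        # Unknown names are left intact for the variable-substitution pass
        return SECTIONS.get(key, match.group(0))
    expanded = re.sub(r"\{(\w+)\}", expand, template)
    return expanded.format(**variables)

prompt = render("{activity_summary_section}\n\n{assessment_points}",
                distance=42, duration=90)
```

Because `re.sub` does not rescan replacement text, variables inside an included section (like `{distance}` above) survive until the `format()` pass.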
### Dynamic Tool Selection
The app automatically detects available tools and adjusts functionality:
```python
if await self.mcp_client.has_tool("hrv_data"):
    hrv_data = await self.mcp_client.call_tool("hrv_data", {})
```

### Cache Warming
Common data is pre-loaded during initialization:
- User profile (1 hour TTL)
- Recent activities (15 min TTL)
- Last cycling activity details (1 hour TTL)

## Troubleshooting

### MCP Connection Issues
```bash
# Check if garth-mcp-server is installed
which garth-mcp-server

# Test Garth token
uvx garth login
```

### Template Errors
```bash
# List available templates
python -c "from template_engine import TemplateEngine; print(TemplateEngine('templates').list_templates())"

# Check template variables
python -c "from template_engine import TemplateEngine; print(TemplateEngine('templates').get_template_info('workflows/analyze_last_workout.txt'))"
```

### Cache Issues
```bash
# Clear cache
python -c "from cache_manager import CacheManager; CacheManager().clear()"
```

## Contributing

The modular architecture makes contributions straightforward:

1. **New LLM Provider**: Extend `LLMClient`
2. **New Data Source**: Create new MCP client
3. **New Analysis**: Add templates and methods
4. **New Interface**: Create alternative to CLI
5. **New Cache Strategy**: Extend `CacheManager`

Each module is independently testable and can be developed in isolation.
@@ -1,22 +0,0 @@
aiohttp>=3.8.0
pyyaml>=6.0
mcp>=0.1.0
pydantic-ai>=0.0.1

# Pydantic AI dependencies
pydantic>=2.0.0
openai>=1.0.0  # Required for AsyncOpenAI client

# Built-in modules (no installation needed)
# asyncio
# pathlib
# dataclasses
# logging

# For direct Garth MCP server integration
# Note: You need to install and set up the garth-mcp-server separately
# Follow: https://github.com/matin/garth-mcp-server

# Installation commands:
# pip install pydantic-ai
# npm install -g garth-mcp-server
13	requirements_file.txt	Normal file
@@ -0,0 +1,13 @@
# Core dependencies
pydantic-ai>=0.0.1
pyyaml>=6.0
aiohttp>=3.8.0

# LLM dependencies
pydantic>=2.0.0
openai>=1.0.0

# Optional MCP dependencies
# Install garth-mcp-server separately:
# npm install -g garth-mcp-server
# or check: https://github.com/matin/garth-mcp-server
42	rules.yaml
@@ -1,42 +0,0 @@
Has a cycling training rulebook for plan updates. The rules are:

Heart rate zones:
- Garmin is the source of truth for HR zones
- recommend changes to set zones as needed

Core Gear Rules:
- Only one gear ratio per workout.
- Gears can change between workouts within a setup.
- Switching between setups (Setup 1 ↔ Setup 2) only every ≥4 weeks.

Setup Definitions:
- Setup 1 (endurance/cadence bias):
  - 38×18 (harder)
  - 38×20 (easier)
- Setup 2 (strength/force bias):
  - 38×14 (harder)
  - 38×16 (easier)

Workout-to-Gear Matching:
- Endurance / Z2 → easiest gear in setup.
- Tempo / Sweet Spot → harder gear in setup.
- Climbing / Strength Endurance → hardest manageable gear.
- Cadence / Recovery → easiest gear.

Terrain & Gradient:
- Steep climbs (>10%) → Setup 1 only.
- Flat/rolling → Setup 2 acceptable.
- Avoid spinning >110 rpm or grinding <60 rpm for long periods.

Adaptation:
- Stay in setup ≥4 weeks; may extend up to 6–8 weeks if progress continues.
- Switch setups after recovery week.

Fallback Rules:
- If terrain/weather forces a change, stay in current setup, pick easier gear.
- Missing a ride does not reset setup timing.

Meta-Rule:
- When creating or updating the plan, rules must always be consulted and applied.
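The setup definitions and workout-to-gear matching in the rulebook reduce to a pair of lookup tables. A sketch (table names and the `pick_gear` helper are hypothetical, and the gear strings use ASCII `x` for the `×` in the ratios):

```python
# Gear options per setup, taken from the Setup Definitions above
SETUPS = {
    1: {"harder": "38x18", "easier": "38x20"},  # endurance/cadence bias
    2: {"harder": "38x14", "easier": "38x16"},  # strength/force bias
}

# Workout type -> which gear in the active setup, per Workout-to-Gear Matching
GEAR_FOR_WORKOUT = {
    "endurance": "easier",
    "tempo": "harder",
    "sweet_spot": "harder",
    "climbing": "harder",
    "cadence": "easier",
    "recovery": "easier",
}

def pick_gear(setup: int, workout_type: str) -> str:
    """Return the single gear ratio to ride for this workout (one gear per workout)."""
    return SETUPS[setup][GEAR_FOR_WORKOUT[workout_type]]
```

The ≥4-week setup-switching constraint and the fallback rules live outside this lookup; they govern when `setup` itself may change.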
192	setup.md
@@ -1,192 +0,0 @@
|
||||
# Cycling Workout Analyzer Setup Guide

## Prerequisites

1. **Python 3.8+** installed on your system
2. **OpenRouter API account** - Get your API key from [OpenRouter.ai](https://openrouter.ai)
3. **Garmin Connect account** with workout data

## Installation Steps

### 1. Install the Garth MCP Server

First, install the Garth MCP server that will connect to your Garmin data:

```bash
# Install the Garth MCP server
npm install -g garth-mcp-server

# Or if using pip/uv (check the repo for latest instructions)
# pip install garth-mcp-server
```

### 2. Set Up the Python Application

```bash
# Clone or download the cycling analyzer files
# Install Python dependencies
pip install -r requirements.txt
```

### 3. Configure the Application

Run the application once to generate the configuration file:

```bash
python main.py
```

This will create a `config.yaml` file. Edit it with your credentials:

```yaml
openrouter_api_key: "your_openrouter_api_key_here"
openrouter_model: "deepseek/deepseek-r1-0528:free"
garmin_email: "your_garmin_email@example.com"
garmin_password: "your_garmin_password"
garth_mcp_server_path: "garth-mcp-server"  # or full path if needed
rules_file: "rules.yaml"
templates_dir: "templates"
```

### 4. Set Up Environment Variables (Alternative)

Instead of using the config file, you can set environment variables:

```bash
export OPENROUTER_API_KEY="your_api_key_here"
export GARMIN_EMAIL="your_email@example.com"
export GARMIN_PASSWORD="your_password"
export GARTH_MCP_SERVER_PATH="garth-mcp-server"
```
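Either source can feed the same settings. A minimal loader sketch in which environment variables take precedence over `config.yaml` (the function itself is illustrative — the application's actual config code may differ):

```python
import os
from pathlib import Path

import yaml  # PyYAML


def load_config(path="config.yaml"):
    """Load config.yaml, letting environment variables override file values."""
    config = {}
    p = Path(path)
    if p.exists():
        config = yaml.safe_load(p.read_text()) or {}

    # Environment variables take precedence over file values
    env_overrides = {
        "openrouter_api_key": "OPENROUTER_API_KEY",
        "garmin_email": "GARMIN_EMAIL",
        "garmin_password": "GARMIN_PASSWORD",
        "garth_mcp_server_path": "GARTH_MCP_SERVER_PATH",
    }
    for key, env_var in env_overrides.items():
        if os.environ.get(env_var):
            config[key] = os.environ[env_var]
    return config
```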

### 5. Customize Your Training Rules

Edit the generated `rules.yaml` file with your specific:
- Training goals
- Power zones (based on your FTP)
- Heart rate zones
- Weekly training structure preferences
- Recovery rules

### 6. Customize Prompt Templates

Edit the template files in the `templates/` directory:
- `single_workout_analysis.txt` - For analyzing individual workouts
- `workout_recommendation.txt` - For getting next workout suggestions
- `mcp_enhanced_analysis.txt` - For enhanced analysis using MCP tools

## Running the Application

```bash
python main.py
```

## Features

### 1. Basic Analysis
- Analyze your last cycling workout against your rules
- Get suggestions for your next workout based on recent training

### 2. MCP-Enhanced Analysis
- Uses the Garth MCP server to access comprehensive Garmin data
- Provides detailed performance trends, training load analysis, and recovery assessment
- The LLM has direct access to your Garmin tools and can fetch additional data as needed

### 3. Customizable
- Edit your training rules and goals
- Modify prompt templates to get the analysis style you want
- Configure different AI models through OpenRouter

## Troubleshooting

### MCP Connection Issues
- Ensure `garth-mcp-server` is properly installed and accessible
- Check that your Garmin credentials are correct
- Verify the server path in your configuration

### API Issues
- Confirm your OpenRouter API key is valid and has credits
- Check your internet connection
- Try a different model if the default one is unavailable

### No Workout Data
- Ensure you have recent cycling activities in Garmin Connect
- Check that the MCP server can authenticate with Garmin
- Verify your Garmin credentials

## File Structure

```
cycling-analyzer/
├── main.py              # Main application
├── config.yaml          # Configuration file
├── rules.yaml           # Your training rules and zones
├── requirements.txt     # Python dependencies
└── templates/           # Prompt templates
    ├── single_workout_analysis.txt
    ├── workout_recommendation.txt
    └── mcp_enhanced_analysis.txt
```

## Advanced Usage

### Custom Templates
You can create additional templates for specific analysis types. The application will automatically detect `.txt` files in the templates directory. Template variables available:
- `{workout_data}` - Individual workout data
- `{workouts_data}` - Multiple workouts data
- `{rules}` - Your training rules
- `{available_tools}` - MCP tools information
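These variables are filled with Python's `str.format`, so a quick way to sanity-check a new template is to render it by hand (the template text and sample data here are invented for the example):

```python
# A made-up custom template using the documented variables
template = (
    "Analyze this ride against my rules.\n\n"
    "WORKOUT DATA:\n{workout_data}\n\n"
    "MY TRAINING RULES:\n{rules}"
)

# str.format substitutes each {name} placeholder with the matching keyword argument
prompt = template.format(
    workout_data="60 min, avg power 210 W, avg HR 148 bpm",
    rules="Z2 endurance focus; cap HR at 155 bpm",
)
print(prompt)
```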
### Custom Analysis Types
Add new analysis options by:
1. Creating a new template file
2. Adding the analysis logic to the `CyclingAnalyzer` class
3. Adding menu options in the main loop
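The three steps above can be sketched end to end; everything here (template text, function and menu names) is a made-up illustration of the pattern rather than the project's actual code:

```python
# Step 1 (hypothetical): a new template file's content
WEEKLY_LOAD_TEMPLATE = (
    "Review my weekly training load.\n\n"
    "RECENT WORKOUTS:\n{workouts_data}\n\n"
    "MY TRAINING RULES:\n{rules}"
)


def analyze_weekly_load(workouts, rules):
    """Step 2 (hypothetical): analysis logic that builds the prompt for the LLM."""
    workouts_data = "\n".join(f"- {w}" for w in workouts)
    return WEEKLY_LOAD_TEMPLATE.format(workouts_data=workouts_data, rules=rules)


# Step 3 (hypothetical): the main menu maps a new option number to the handler
MENU = {
    "4": ("Weekly load review", analyze_weekly_load),
}
```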

### Multiple AI Models
You can experiment with different AI models through OpenRouter:
- `deepseek/deepseek-r1-0528:free` (default, free)
- `anthropic/claude-3-sonnet`
- `openai/gpt-4-turbo`
- `google/gemini-pro`

### Integration with Other Tools
The MCP architecture allows easy integration with other fitness tools and data sources. You can extend the application to work with:
- TrainingPeaks
- Strava (via MCP server)
- Wahoo, Polar, or other device manufacturers
- Custom training databases

### Automated Analysis
You can run the analyzer in automated mode by modifying the `run()` method to:
- Analyze workouts automatically after each session
- Generate weekly training reports
- Send recommendations via email or notifications
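Automated mode can be as simple as replacing the interactive menu with a scheduled loop; the helper below is a hypothetical sketch (names and structure are illustrative, not part of the project):

```python
import asyncio
import datetime


async def run_automated(analyze_once, interval_hours: float = 24.0, max_runs=None):
    """Call the analysis coroutine on a fixed schedule instead of an interactive menu.

    `max_runs=None` runs forever; a number bounds the loop (useful for testing).
    """
    reports = []
    runs = 0
    while max_runs is None or runs < max_runs:
        report = await analyze_once()
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        print(f"[{stamp}] analysis complete ({len(report)} chars)")
        reports.append(report)
        runs += 1
        # Sleep between runs, but not after the final one
        if max_runs is None or runs < max_runs:
            await asyncio.sleep(interval_hours * 3600)
    return reports
```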

## Example Workflow

1. **After a workout**: Run option 1 to get immediate feedback on your session
2. **Planning next session**: Use option 2 to get AI-powered recommendations
3. **Weekly review**: Use option 3 for enhanced analysis of trends and patterns
4. **Adjust training**: Modify your `rules.yaml` based on insights and goal changes

## Security Notes

- Store your credentials securely
- Consider using environment variables instead of config files for sensitive data
- The MCP server runs locally and connects directly to Garmin - no data is sent to third parties except the AI provider (OpenRouter)

## Support and Contributions

- Check the Garth MCP server repository for Garmin-specific issues
- Refer to OpenRouter documentation for API-related questions
- Customize templates and rules to match your specific training methodology

## What Makes This Unique

This application bridges three powerful technologies:
1. **Garth MCP Server** - Direct access to comprehensive Garmin data
2. **Model Context Protocol (MCP)** - Standardized way for AI to access tools and data
3. **OpenRouter** - Access to multiple state-of-the-art AI models

The AI doesn't just analyze static workout data - it can actively query your Garmin account for additional context, trends, and historical data to provide much more comprehensive and personalized recommendations.
standalone_test.py (204 lines, new file)
@@ -0,0 +1,204 @@
#!/usr/bin/env python3
"""
Standalone MCP Test - Single file test for MCP connection and user profile
No external dependencies on the modular architecture - just tests MCP directly
"""

import asyncio
import json
import os
import shutil
import yaml
from pathlib import Path

# Check dependencies
try:
    from pydantic_ai.mcp import MCPServerStdio
    print("✅ pydantic-ai MCP available")
except ImportError:
    print("❌ pydantic-ai MCP not available")
    print("Install with: pip install pydantic-ai")
    exit(1)


def load_config_from_yaml():
    """Load configuration from config.yaml file"""
    config_file = Path("config.yaml")

    if not config_file.exists():
        print("❌ config.yaml not found")
        print("Please create config.yaml with your settings:")
        print("""
garth_token: "your_garth_token_here"
openrouter_api_key: "your_openrouter_api_key_here"
openrouter_model: "deepseek/deepseek-chat-v3.1"
garth_mcp_server_path: "uvx"
""")
        return None

    try:
        with open(config_file, 'r') as f:
            config_data = yaml.safe_load(f)

        print(f"✅ Loaded config from {config_file}")
        return config_data

    except yaml.YAMLError as e:
        print(f"❌ Error parsing config.yaml: {e}")
        return None
    except Exception as e:
        print(f"❌ Error reading config.yaml: {e}")
        return None


async def test_mcp_user_profile():
    """Simple test to connect to MCP and get user profile"""

    print("🚀 MCP User Profile Test")
    print("=" * 40)

    # 1. Load configuration from config.yaml
    config = load_config_from_yaml()
    if not config:
        return False

    # 2. Get garth_token from config
    garth_token = config.get("garth_token")
    if not garth_token or garth_token == "your_garth_token_here":
        print("❌ garth_token not properly set in config.yaml")
        print("Please run: uvx garth login")
        print("Then update config.yaml with your token")
        return False

    print("✅ GARTH_TOKEN loaded from config.yaml")

    # 3. Get server path from config
    server_path = config.get("garth_mcp_server_path", "uvx")
    server_executable = shutil.which(server_path)
    if not server_executable:
        print(f"❌ {server_path} not found")
        print("Please install uvx and garth-mcp-server")
        return False

    print(f"✅ {server_path} found")

    # 4. Setup MCP server
    print("🔧 Setting up MCP server...")

    env = os.environ.copy()
    env["GARTH_TOKEN"] = garth_token

    mcp_server = MCPServerStdio(
        command=server_executable,
        args=["garth-mcp-server"],
        env=env,
    )

    try:
        # 5. List available tools
        print("📋 Listing MCP tools...")
        tools = await mcp_server.list_tools()

        print(f"Found {len(tools)} tools:")
        for tool in tools:
            print(f"  • {tool.name}")

        # 6. Check for user_profile tool
        user_profile_tool = next((t for t in tools if t.name == "user_profile"), None)
        if not user_profile_tool:
            print("❌ user_profile tool not available")
            return False

        print("✅ user_profile tool found")

        # 7. Call user_profile tool
        print("📞 Getting user profile...")
        result = await mcp_server.direct_call_tool("user_profile", {})

        # Extract data
        profile_data = result.output if hasattr(result, 'output') else result

        # 8. Display results
        print("\n" + "=" * 50)
        print("USER PROFILE RETRIEVED")
        print("=" * 50)
        print(json.dumps(profile_data, indent=2, default=str))
        print("=" * 50)

        # 9. Quick analysis
        if isinstance(profile_data, dict):
            print(f"\n📊 Profile contains {len(profile_data)} fields:")
            for key in list(profile_data.keys())[:5]:  # Show first 5 keys
                print(f"  • {key}")
            if len(profile_data) > 5:
                print(f"  ... and {len(profile_data) - 5} more")

        print("\n🎉 Test completed successfully!")

        # 10. Show config info used
        print(f"\n📝 Configuration used:")
        print(f"  • Model: {config.get('openrouter_model', 'Not set')}")
        print(f"  • OpenRouter API Key: {'Set' if config.get('openrouter_api_key') else 'Not set'}")
        print(f"  • Server Path: {server_path}")

        return True

    except Exception as e:
        print(f"❌ Error during test: {e}")
        print(f"Error type: {type(e).__name__}")
        return False


async def main():
    """Run the test"""
    try:
        success = await test_mcp_user_profile()
        if success:
            print("\n✅ MCP user profile test PASSED")
        else:
            print("\n❌ MCP user profile test FAILED")

    except KeyboardInterrupt:
        print("\n👋 Test interrupted")
    except Exception as e:
        print(f"\n💥 Unexpected error: {e}")


if __name__ == "__main__":
    print("Standalone MCP User Profile Test")
    print("This will test MCP connection and retrieve your Garmin user profile")
    print()

    # Check prerequisites
    print("Prerequisites check:")

    # Check if config.yaml exists
    config_file = Path("config.yaml")
    if config_file.exists():
        print("✅ config.yaml found")
        try:
            with open(config_file, 'r') as f:
                config = yaml.safe_load(f)

            # Check garth_token in config
            if config.get("garth_token") and config.get("garth_token") != "your_garth_token_here":
                print("✅ garth_token set in config.yaml")
            else:
                print("❌ garth_token not properly set in config.yaml")

            # Check openrouter_api_key
            if config.get("openrouter_api_key") and config.get("openrouter_api_key") != "your_openrouter_api_key_here":
                print("✅ openrouter_api_key set in config.yaml")
            else:
                print("❌ openrouter_api_key not set in config.yaml")

        except Exception as e:
            print(f"❌ Error reading config.yaml: {e}")
    else:
        print("❌ config.yaml not found")

    if shutil.which("uvx"):
        print("✅ uvx command available")
    else:
        print("❌ uvx not found - install it first")

    print()

    # Run the test
    asyncio.run(main())
template_engine.py (224 lines, new file)
@@ -0,0 +1,224 @@
#!/usr/bin/env python3
"""
Template Engine - Simplified template loading and rendering
"""

import logging
import re
from pathlib import Path
from typing import Dict, List, Any

logger = logging.getLogger(__name__)


class TemplateEngine:
    """Simple template engine for prompt management"""

    def __init__(self, templates_dir: str):
        self.templates_dir = Path(templates_dir)
        self.templates_dir.mkdir(exist_ok=True)

        # Create basic directory structure
        self._ensure_structure()

    def _ensure_structure(self):
        """Ensure basic template directory structure exists"""
        dirs = [
            "workflows",
            "base/system_prompts",
            "base/data_sections",
            "base/analysis_frameworks",
        ]

        for dir_path in dirs:
            (self.templates_dir / dir_path).mkdir(parents=True, exist_ok=True)

    def list_templates(self) -> List[str]:
        """List all available templates"""
        templates = []

        # Get all .txt files in templates directory and subdirectories
        for template_file in self.templates_dir.rglob("*.txt"):
            rel_path = template_file.relative_to(self.templates_dir)
            templates.append(str(rel_path))

        return sorted(templates)

    def template_exists(self, template_name: str) -> bool:
        """Check if template exists"""
        template_path = self._resolve_template_path(template_name)
        return template_path.exists() if template_path else False

    def _resolve_template_path(self, template_name: str) -> Path:
        """Resolve template name to full path"""
        # Handle different template name formats
        if template_name.endswith('.txt'):
            template_path = self.templates_dir / template_name
        else:
            template_path = self.templates_dir / f"{template_name}.txt"

        return template_path

    def load_template(self, template_name: str) -> str:
        """Load raw template content"""
        template_path = self._resolve_template_path(template_name)

        if not template_path.exists():
            raise FileNotFoundError(f"Template not found: {template_name}")

        try:
            with open(template_path, 'r', encoding='utf-8') as f:
                content = f.read()

            logger.debug(f"Loaded template: {template_name}")
            return content

        except Exception as e:
            logger.error(f"Error loading template {template_name}: {e}")
            raise

    def render(self, template_name: str, **kwargs) -> str:
        """Load and render template with variables"""
        content = self.load_template(template_name)

        # Handle section includes (simple replacement)
        content = self._process_includes(content, **kwargs)

        try:
            rendered = content.format(**kwargs)
            logger.debug(f"Rendered template: {template_name}")
            return rendered

        except KeyError as e:
            logger.error(f"Missing variable in template {template_name}: {e}")
            logger.debug(f"Available variables: {list(kwargs.keys())}")
            raise ValueError(f"Missing variable in template {template_name}: {e}")

        except Exception as e:
            logger.error(f"Error rendering template {template_name}: {e}")
            raise

    def _process_includes(self, content: str, **kwargs) -> str:
        """Process section includes like {activity_summary_section}"""
        # Define section mappings
        section_mappings = {
            'activity_summary_section': 'base/data_sections/activity_summary.txt',
            'user_info_section': 'base/data_sections/user_info.txt',
            'training_rules_section': 'base/data_sections/training_rules.txt',
            'workout_data_section': 'base/data_sections/workout_data.txt',
            'assessment_points': 'base/analysis_frameworks/assessment_points.txt',
            'performance_analysis': 'base/analysis_frameworks/performance_analysis.txt',
        }

        # Find and replace section placeholders
        section_pattern = re.compile(r'\{(\w+_section|\w+_points|\w+_analysis)\}')

        for match in section_pattern.finditer(content):
            placeholder = match.group(0)
            section_name = match.group(1)

            if section_name in section_mappings:
                section_file = section_mappings[section_name]
                try:
                    section_content = self.load_template(section_file)
                    # Render section with same kwargs
                    section_rendered = section_content.format(**kwargs)
                    content = content.replace(placeholder, section_rendered)
                except (FileNotFoundError, KeyError) as e:
                    logger.warning(f"Could not process section {section_name}: {e}")
                    # Leave placeholder as-is if section can't be loaded

        return content

    def create_template(self, template_name: str, content: str) -> None:
        """Create a new template file"""
        template_path = self._resolve_template_path(template_name)
        template_path.parent.mkdir(parents=True, exist_ok=True)

        with open(template_path, 'w', encoding='utf-8') as f:
            f.write(content)

        logger.info(f"Created template: {template_name}")

    def get_template_info(self, template_name: str) -> Dict[str, Any]:
        """Get information about a template"""
        if not self.template_exists(template_name):
            return {"exists": False}

        template_path = self._resolve_template_path(template_name)
        content = self.load_template(template_name)

        # Extract variables used in template
        variables = set(re.findall(r'\{(\w+)\}', content))

        return {
            "exists": True,
            "path": str(template_path),
            "size": len(content),
            "variables": sorted(list(variables)),
            "line_count": len(content.splitlines()),
        }


# Utility functions for template management
def create_default_templates(templates_dir: str) -> None:
    """Create default template files if they don't exist"""
    engine = TemplateEngine(templates_dir)

    # Default system prompts
    default_templates = {
        "base/system_prompts/main_agent.txt":
            "You are an expert cycling coach with access to comprehensive Garmin Connect data.\n"
            "You analyze cycling workouts, provide performance insights, and give actionable training recommendations.\n"
            "Use the available tools to gather detailed workout data and provide comprehensive analysis.",

        "base/system_prompts/no_tools_analysis.txt":
            "You are an expert cycling coach. Perform comprehensive analysis using the provided data.\n"
            "Do not use any tools - all relevant data is included in the prompt.",

        "base/data_sections/activity_summary.txt":
            "ACTIVITY SUMMARY:\n{activity_summary}",

        "base/data_sections/user_info.txt":
            "USER INFO:\n{user_info}",

        "base/data_sections/training_rules.txt":
            "My training rules and goals:\n{training_rules}",

        "base/analysis_frameworks/assessment_points.txt":
            "Please provide:\n"
            "1. Overall assessment of the workout\n"
            "2. How well it aligns with my rules and goals\n"
            "3. Areas for improvement\n"
            "4. Specific feedback on power, heart rate, duration, and intensity\n"
            "5. Recovery recommendations\n"
            "6. Comparison with typical performance metrics",

        "workflows/analyze_last_workout.txt":
            "Analyze my most recent cycling workout using the provided data.\n\n"
            "{activity_summary_section}\n\n"
            "{user_info_section}\n\n"
            "{training_rules_section}\n\n"
            "{assessment_points}\n\n"
            "Focus on the provided activity details for your analysis.",

        "workflows/suggest_next_workout.txt":
            "Please suggest my next cycling workout based on my recent training history.\n\n"
            "{training_rules_section}\n\n"
            "Please provide:\n"
            "1. Analysis of my recent training pattern\n"
            "2. Identified gaps or imbalances in my training\n"
            "3. Specific workout recommendation for my next session\n"
            "4. Target zones (power, heart rate, duration)\n"
            "5. Rationale for the recommendation based on recent performance",

        "workflows/enhanced_analysis.txt":
            "Perform enhanced {analysis_type} analysis using all available data and tools.\n\n"
            "Available cached data: {cached_data}\n\n"
            "Use MCP tools as needed to gather additional data for comprehensive analysis."
    }

    for template_name, content in default_templates.items():
        if not engine.template_exists(template_name):
            engine.create_template(template_name, content)
            logger.info(f"Created default template: {template_name}")
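The include mechanism above is essentially two passes of `str.format`: section placeholders are swapped for rendered section files, then the remaining variables are formatted. A self-contained sketch of that flow (the directory layout mirrors the engine's, but the paths and template text are invented for the example):

```python
import tempfile
from pathlib import Path

# Minimal re-implementation of the two-pass render, for illustration only.
SECTION_MAPPINGS = {"training_rules_section": "base/data_sections/training_rules.txt"}


def render(templates_dir: Path, name: str, **kwargs) -> str:
    content = (templates_dir / name).read_text()
    # Pass 1: expand section includes like {training_rules_section}
    for placeholder, section_file in SECTION_MAPPINGS.items():
        token = "{" + placeholder + "}"
        if token in content:
            section = (templates_dir / section_file).read_text().format(**kwargs)
            content = content.replace(token, section)
    # Pass 2: format the remaining ordinary variables
    return content.format(**kwargs)


root = Path(tempfile.mkdtemp())
(root / "base/data_sections").mkdir(parents=True)
(root / "base/data_sections/training_rules.txt").write_text(
    "My training rules and goals:\n{training_rules}"
)
(root / "workflow.txt").write_text("Suggest my next ride.\n\n{training_rules_section}")

print(render(root, "workflow.txt", training_rules="Z2 base, 3 rides/week"))
```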
@@ -1,155 +0,0 @@
import yaml
from pathlib import Path
import logging
import re

logger = logging.getLogger(__name__)


class TemplateValidator:
    """Validates template syntax, inheritance, and versioning."""

    def __init__(self, templates_dir: str):
        self.templates_dir = Path(templates_dir)
        self.components_dir = self.templates_dir / "base"
        self.versions_dir = self.templates_dir / "versions"
        self.legacy_mappings = {'main_agent_system_prompt.txt': 'base/system_prompts/main_agent.txt'}

    def parse_frontmatter(self, content: str) -> tuple[dict, str]:
        """Parse YAML frontmatter from template content."""
        frontmatter = {}
        if content.startswith("---\n"):
            end = content.find("\n---\n")
            if end != -1:
                try:
                    frontmatter = yaml.safe_load(content[4:end])
                except yaml.YAMLError as e:
                    raise ValueError(f"Invalid YAML frontmatter: {e}")
                content = content[end + 5:]
        return frontmatter, content

    def validate_syntax(self, template_path: Path) -> bool:
        """Validate extends/includes syntax in frontmatter."""
        with open(template_path, 'r') as f:
            frontmatter, _ = self.parse_frontmatter(f.read())

        extends = frontmatter.get('extends')
        includes = frontmatter.get('includes', [])

        if extends and not isinstance(extends, str):
            raise ValueError("extends must be a string")

        if not isinstance(includes, list):
            raise ValueError("includes must be a list")

        for inc in includes:
            if not isinstance(inc, str):
                raise ValueError("Each include must be a string")

        return True

    def detect_inheritance_cycle(self, template_name: str, visited: set = None) -> bool:
        """Detect cycles in inheritance chain."""
        if visited is None:
            visited = set()

        if template_name in visited:
            return True  # Cycle detected

        visited.add(template_name)
        template_path = self._find_template(template_name)
        if not template_path:
            return False

        with open(template_path, 'r') as f:
            frontmatter, _ = self.parse_frontmatter(f.read())

        extends = frontmatter.get('extends')
        if extends:
            if self.detect_inheritance_cycle(extends, visited):
                return True

        return False

    def validate_components_exist(self, template_path: Path) -> bool:
        """Check if all included components exist."""
        with open(template_path, 'r') as f:
            frontmatter, _ = self.parse_frontmatter(f.read())

        includes = frontmatter.get('includes', [])
        for inc in includes:
            comp_path = self.components_dir / inc
            if not comp_path.exists():
                raise FileNotFoundError(f"Component '{inc}' not found")

        return True

    def validate_version(self, version_str: str) -> bool:
        """Validate semantic version format."""
        if re.match(r'^v?\d+\.\d+\.\d+$', version_str):
            return True
        raise ValueError(f"Invalid version format: {version_str}")

    def validate_backward_compatibility(self, template_path: Path) -> bool:
        """Ensure template can be loaded as plain if no frontmatter."""
        with open(template_path, 'r') as f:
            content = f.read()
        try:
            frontmatter, body = self.parse_frontmatter(content)
            # If no frontmatter, it's compatible
            return True
        except ValueError:
            # No frontmatter, plain template
            return True

    def _find_template(self, name: str) -> Path | None:
        """Find template path, handling versions."""
        # Check legacy mappings first
        if name in self.legacy_mappings:
            name = self.legacy_mappings[name]
        # Handle versioned names like workflows/foo@1.2.3
        if '@' in name:
            name_part, version = name.rsplit('@', 1)
            ver_path = self.versions_dir / name_part / f"{version}.txt"
            if ver_path.exists():
                return ver_path
        # Handle subdir paths like workflows/xxx.txt or base/yyy.txt
        if '/' in name:
            path = self.templates_dir / name
            if path.exists():
                return path
        # Plain name
        path = self.templates_dir / f"{name}.txt"
        if path.exists():
            return path
        return None

    def full_validate(self, template_name: str) -> dict:
        """Perform full validation and return report."""
        template_path = self._find_template(template_name)
        if not template_path:
            raise FileNotFoundError(f"Template '{template_name}' not found")

        errors = []
        try:
            self.validate_syntax(template_path)
        except ValueError as e:
            errors.append(str(e))

        try:
            self.validate_components_exist(template_path)
        except FileNotFoundError as e:
            errors.append(str(e))

        if self.detect_inheritance_cycle(template_name):
            errors.append("Inheritance cycle detected")

        version_str = template_name.split('@')[-1] if '@' in template_name else None
        if version_str:
            try:
                self.validate_version(version_str)
            except ValueError as e:
                errors.append(str(e))

        self.validate_backward_compatibility(template_path)

        return {"valid": len(errors) == 0, "errors": errors}
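The frontmatter the validator expects is plain YAML between `---` fences; the parse step can be exercised standalone (the template text is invented for the example, and this free function mirrors the class method above rather than replacing it):

```python
import yaml  # PyYAML


def parse_frontmatter(content: str):
    """Split a template into (frontmatter dict, body), tolerating plain templates."""
    frontmatter = {}
    if content.startswith("---\n"):
        end = content.find("\n---\n")
        if end != -1:
            frontmatter = yaml.safe_load(content[4:end]) or {}
            content = content[end + 5:]  # skip past the closing "\n---\n"
    return frontmatter, content


template = """---
extends: workflows/single_workout_analysis.txt
version: 1.0.0
---

Additional instructions: emphasize power output analysis.
"""

meta, body = parse_frontmatter(template)
print(meta["extends"])  # → workflows/single_workout_analysis.txt
```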
@@ -1,7 +0,0 @@
|
||||
Please provide:
|
||||
1. Overall assessment of the workout
|
||||
2. How well it aligns with my rules and goals
|
||||
3. Areas for improvement
|
||||
4. Specific feedback on power, heart rate, duration, and intensity
|
||||
5. Recovery recommendations
|
||||
6. Comparison with typical performance metrics (use user profile data for baselines)
|
||||
@@ -1,7 +0,0 @@
|
||||
1. **Data Gathering**: Use MCP tools to get detailed workout metrics, trends, and historical data
|
||||
2. **Performance Analysis**: Analyze power, heart rate, training load, and recovery metrics
|
||||
3. **Training Periodization**: Consider the user's training phase and progression
|
||||
4. **Actionable Recommendations**: Provide specific, measurable guidance for future workouts
|
||||
5. **Risk Assessment**: Identify any signs of overtraining or injury risk
|
||||
|
||||
Be thorough in your analysis and use multiple data points to support your recommendations, but be concise.
|
||||
@@ -1,7 +0,0 @@
|
||||
Focus your analysis on:
|
||||
1. **Performance Analysis**: Analyze power, heart rate, training load, and recovery metrics from the provided data
|
||||
2. **Training Periodization**: Consider the recent activity patterns and progression
|
||||
3. **Actionable Recommendations**: Provide specific, measurable guidance based on the data
|
||||
4. **Risk Assessment**: Identify any signs of overtraining or injury risk from the available metrics
|
||||
|
||||
Be thorough and use the provided data points to support your recommendations.
|
||||
@@ -1,2 +0,0 @@
|
||||
ACTIVITY SUMMARY:
|
||||
{activity_summary}
|
||||
@@ -1,2 +0,0 @@
|
||||
AVAILABLE MCP TOOLS:
|
||||
{available_tools}
|
||||
@@ -1,2 +0,0 @@
|
||||
RECENT DATA:
|
||||
{recent_data}
|
||||
@@ -1,2 +0,0 @@
|
||||
My training rules and goals:
|
||||
{training_rules}
|
||||
@@ -1,2 +0,0 @@
|
||||
USER INFO:
|
||||
{user_info}
|
||||
@@ -1,2 +0,0 @@
|
||||
WORKOUT DATA:
|
||||
{workout_data}
|
||||
@@ -1,2 +0,0 @@
|
||||
RECENT WORKOUTS:
|
||||
{workouts_data}
|
||||
@@ -1,3 +0,0 @@
|
||||
You are an expert cycling coach with access to comprehensive Garmin Connect data.
|
||||
You analyze cycling workouts, provide performance insights, and give actionable training recommendations.
|
||||
Use the available tools to gather detailed workout data and provide comprehensive analysis.
|
||||
@@ -1,2 +0,0 @@
|
||||
You are an expert cycling coach. Perform comprehensive analysis using the provided data.
|
||||
Do not use any tools - all relevant data is included in the prompt.
|
||||
@@ -1,10 +0,0 @@
|
||||
---
|
||||
extends: workflows/single_workout_analysis.txt
|
||||
includes:
|
||||
- data_sections/workout_data.txt
|
||||
- data_sections/training_rules.txt
|
||||
- analysis_frameworks/assessment_points.txt
|
||||
version: 1.0
|
||||
---
|
||||
|
||||
Additional instructions for v1.0: Emphasize power output analysis.
|
||||
@@ -1,11 +0,0 @@
|
||||
Analyze my most recent cycling workout using the provided data.
|
||||
|
||||
{activity_summary_section}
|
||||
|
||||
{user_info_section}
|
||||
|
||||
{training_rules_section}
|
||||
|
||||
{assessment_points}
|
||||
|
||||
Focus on the provided activity details for your analysis.
|
||||
@@ -1,9 +0,0 @@
|
||||
Analyze my cycling workout against my training rules and goals.
|
||||
|
||||
{workout_data_section}
|
||||
|
||||
{rules_section}
|
||||
|
||||
You have access to additional Garmin data through MCP tools if needed.
|
||||
|
||||
{assessment_points}
|
@@ -1,14 +0,0 @@
Please suggest my next cycling workout based on my recent training history. Use the get_activities tool to get my recent activities and analyze the training pattern.

{training_rules_section}

Please provide:
1. Analysis of my recent training pattern
2. Identified gaps or imbalances in my training
3. Specific workout recommendation for my next session
4. Target zones (power, heart rate, duration)
5. Rationale for the recommendation based on recent performance
6. Alternative options if weather/time constraints exist
7. How this fits into my overall training progression

Use additional tools like hrv_data or nightly_sleep to inform recovery status and workout readiness.
@@ -1,18 +0,0 @@
Based on my recent cycling workouts, suggest what workout I should do next.

RECENT WORKOUTS:
{workouts_data}

MY TRAINING RULES:
{rules}

You have access to additional Garmin data and tools to analyze my fitness trends.

Please provide:
1. Analysis of my recent training pattern
2. Identified gaps or imbalances in my training
3. Specific workout recommendation for my next session
4. Target zones (power, heart rate, duration)
5. Rationale for the recommendation based on my recent performance
6. Alternative options if weather/time constraints exist
7. How this fits into my overall training progression
@@ -1,177 +0,0 @@
import os
import logging
import yaml
from pathlib import Path
from template_validator import TemplateValidator

logger = logging.getLogger(__name__)


class TemplateManager:
    """Manages prompt templates for the cycling analyzer with inheritance, versioning, and validation"""

    def __init__(self, templates_dir: str):
        self.templates_dir = Path(templates_dir)
        self.templates_dir.mkdir(exist_ok=True)
        self.validator = TemplateValidator(str(self.templates_dir))
        self.components_dir = self.templates_dir / "base"
        self.versions_dir = self.templates_dir / "versions"
        self.versions_dir.mkdir(exist_ok=True)

    def list_templates(self) -> list[str]:
        """List available template files, including versioned ones"""
        templates = []
        # Base templates
        for f in self.templates_dir.glob("*.txt"):
            templates.append(f.name)
        # Workflows and subdirs
        for f in self.templates_dir.rglob("*.txt"):
            if f.parent.name in ["workflows", "base"]:
                rel_path = f.relative_to(self.templates_dir)
                templates.append(str(rel_path))
        # Versions
        for ver_dir in self.versions_dir.iterdir():
            if ver_dir.is_dir():
                for f in ver_dir.glob("*.txt"):
                    templates.append(f"{ver_dir.name}@{f.stem}")
        return sorted(set(templates))

    def _resolve_path(self, template_name: str) -> Path:
        """Resolve a template path, handling versions and subdirectories"""
        # Handle versioned names like "name@1.0"
        if '@' in template_name:
            name, version = template_name.rsplit('@', 1)
            ver_path = self.versions_dir / name / f"{version}.txt"
            if ver_path.exists():
                return ver_path

        # Handle subdir paths like workflows/xxx.txt or base/yyy.txt
        if '/' in template_name:
            path = self.templates_dir / template_name
            if path.exists():
                return path

        # Plain name
        path = self.templates_dir / f"{template_name}.txt"
        if path.exists():
            return path

        raise FileNotFoundError(f"Template '{template_name}' not found")

    def _parse_frontmatter(self, content: str) -> tuple[dict, str]:
        """Parse YAML frontmatter."""
        frontmatter = {}
        body = content
        if content.startswith("---\n"):
            end = content.find("\n---\n")
            if end != -1:
                try:
                    frontmatter = yaml.safe_load(content[4:end]) or {}
                except yaml.YAMLError as e:
                    raise ValueError(f"Invalid YAML frontmatter: {e}")
                body = content[end + 5:]
        return frontmatter, body

    def _load_and_compose(self, template_name: str, visited: set = None, **kwargs) -> str:
        """Recursively load and compose a template with inheritance and includes."""
        if visited is None:
            visited = set()
        if template_name in visited:
            raise ValueError(f"Inheritance cycle detected for {template_name}")
        visited.add(template_name)

        path = self._resolve_path(template_name)
        with open(path, 'r', encoding='utf-8') as f:
            content = f.read()

        frontmatter, body = self._parse_frontmatter(content)

        # Handle extends: the child body fills a {body} slot if the base declares one,
        # otherwise it is appended after the base content
        extends = frontmatter.get('extends')
        if extends:
            base_content = self._load_and_compose(extends, visited, **kwargs)
            body = base_content.replace("{body}", body, 1) if "{body}" in base_content else base_content + "\n\n" + body

        # Handle includes
        includes = frontmatter.get('includes', [])
        for inc in includes:
            inc_path = self.components_dir / inc
            if inc_path.exists():
                with open(inc_path, 'r', encoding='utf-8') as f:
                    inc_content = f.read().format(**kwargs)
                body += f"\n\n{inc_content}"
            else:
                logger.warning(f"Include '{inc}' not found")

        # Dynamic selection based on kwargs
        dynamic_includes = []
        if 'workout_data' in kwargs:
            dynamic_includes.append('data_sections/workout_data.txt')
        if 'user_info' in kwargs:
            dynamic_includes.append('data_sections/user_info.txt')
        if 'training_rules' in kwargs:
            dynamic_includes.append('data_sections/training_rules.txt')
        # Add more as needed

        for dinc in dynamic_includes:
            if dinc not in includes:  # Avoid duplicates
                dinc_path = self.components_dir / dinc
                if dinc_path.exists():
                    with open(dinc_path, 'r', encoding='utf-8') as f:
                        dinc_content = f.read().format(**kwargs)
                    body += f"\n\n{dinc_content}"

        # Replace section placeholders
        import re
        section_pattern = re.compile(r'\{(\w+_section|\w+)\}')
        sections_map = {
            'activity_summary_section': 'data_sections/activity_summary.txt',
            'user_info_section': 'data_sections/user_info.txt',
            'training_rules_section': 'data_sections/training_rules.txt',
            'workout_data_section': 'data_sections/workout_data.txt',
            'workouts_data': 'data_sections/workouts_data.txt',
            'available_tools_section': 'data_sections/available_tools.txt',
            'recent_data_section': 'data_sections/recent_data.txt',
            'assessment_points': 'analysis_frameworks/assessment_points.txt',
            'performance_analysis': 'analysis_frameworks/performance_analysis.txt',
            'data_gathering': 'analysis_frameworks/data_gathering.txt',
        }

        for match in section_pattern.finditer(body):
            placeholder = match.group(0)
            section_name = match.group(1)
            if section_name in sections_map:
                section_file = sections_map[section_name]
                section_path = self.components_dir / section_file
                if section_path.exists():
                    with open(section_path, 'r', encoding='utf-8') as f:
                        section_content = f.read().format(**kwargs)
                    body = body.replace(placeholder, section_content)

        return body

    def get_template(self, template_name: str, **kwargs) -> str:
        """Load, compose, validate, and format a template."""
        # Validate first
        validation = self.validator.full_validate(template_name)
        if not validation["valid"]:
            raise ValueError(f"Template validation failed: {validation['errors']}")

        # Compose
        composed_content = self._load_and_compose(template_name, **kwargs)

        # Debug logging
        logger.debug(f"Loading template: {template_name}")
        logger.debug(f"Composed content length: {len(composed_content)}")
        logger.debug(f"Available kwargs: {list(kwargs.keys())}")

        # Format
        try:
            formatted_template = composed_content.format(**kwargs)
            return formatted_template
        except KeyError as e:
            logger.error(f"Missing variable in template '{template_name}': {e}")
            logger.error(f"Available kwargs: {list(kwargs.keys())}")
            raise ValueError(f"Missing variable in template '{template_name}': {e}")
        except Exception as e:
            raise ValueError(f"Error formatting template '{template_name}': {e}")
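The section-placeholder pass in `_load_and_compose` can be exercised in isolation. A minimal sketch with an in-memory section map standing in for the files under `base/` (the names and contents here are illustrative, not the real component files):

```python
import re

# Stand-ins for the base/ component files
SECTIONS = {
    "training_rules_section": "MY TRAINING RULES:\n{rules}",
    "assessment_points": "Assess: intensity, duration, recovery.",
}

def expand_sections(body: str, sections: dict, **kwargs) -> str:
    """Replace {name} placeholders that map to component sections,
    then fill the remaining {variables} from kwargs."""
    pattern = re.compile(r"\{(\w+)\}")
    for match in pattern.finditer(body):
        name = match.group(1)
        if name in sections:
            body = body.replace(match.group(0), sections[name])
    # Section text may itself contain {variables}; format resolves them
    return body.format(**kwargs)

prompt = expand_sections(
    "Analyze my ride.\n\n{training_rules_section}\n\n{assessment_points}",
    SECTIONS,
    rules="Keep HR under 150bpm.",
)
print(prompt)
```

Placeholders with no matching section file fall through to the final `str.format` call, which is also how unknown variables surface as the `KeyError` handled in `get_template`.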
263 test_mcp_app.py Normal file
@@ -0,0 +1,263 @@
#!/usr/bin/env python3
"""
Simple MCP Test App - Test MCP connection and user profile loading
"""

import asyncio
import json
import logging
import sys
from pathlib import Path

# Import our modules
from config import Config, load_config, create_sample_config
from mcp_client import MCPClient
from cache_manager import CacheManager

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)


class MCPTestApp:
    """Simple test application for MCP functionality"""

    def __init__(self, config: Config):
        self.config = config
        self.mcp_client = MCPClient(config)
        self.cache_manager = CacheManager(default_ttl=300)

    async def initialize(self):
        """Initialize MCP client"""
        logger.info("Initializing MCP test app...")
        await self.mcp_client.initialize()

    async def cleanup(self):
        """Cleanup resources"""
        await self.mcp_client.cleanup()

    async def test_mcp_connection(self):
        """Test basic MCP connection and list tools"""
        print("\n" + "="*60)
        print("MCP CONNECTION TEST")
        print("="*60)

        if not self.mcp_client.is_available:
            print("❌ MCP server not available")
            return False

        print("✅ MCP server connected")

        # List available tools
        tools = await self.mcp_client.list_tools()
        print(f"📋 Found {len(tools)} tools:")

        if tools:
            for i, tool in enumerate(tools, 1):
                print(f"  {i}. {tool.name}")
                if hasattr(tool, 'description') and tool.description:
                    print(f"     {tool.description}")
        else:
            print("  No tools available")

        return len(tools) > 0

    async def test_user_profile(self):
        """Test user profile loading and caching"""
        print("\n" + "="*60)
        print("USER PROFILE TEST")
        print("="*60)

        # Check if user_profile tool is available
        if not await self.mcp_client.has_tool("user_profile"):
            print("❌ user_profile tool not available")
            return None

        print("✅ user_profile tool found")

        try:
            # Call user_profile tool
            print("📞 Calling user_profile tool...")
            profile_data = await self.mcp_client.call_tool("user_profile", {})

            # Cache the profile
            self.cache_manager.set("user_profile", profile_data, ttl=3600)
            print("💾 User profile cached (TTL: 1 hour)")

            # Pretty print the profile
            print("\n" + "-"*40)
            print("USER PROFILE DATA:")
            print("-"*40)
            print(json.dumps(profile_data, indent=2, default=str))
            print("-"*40)

            return profile_data

        except Exception as e:
            print(f"❌ Error getting user profile: {e}")
            logger.error(f"User profile error: {e}", exc_info=True)
            return None

    async def test_cached_retrieval(self):
        """Test retrieving cached user profile"""
        print("\n" + "="*60)
        print("CACHE RETRIEVAL TEST")
        print("="*60)

        # Try to get cached profile
        cached_profile = self.cache_manager.get("user_profile")

        if cached_profile:
            print("✅ User profile retrieved from cache")
            print(f"📊 Cache stats: {self.cache_manager.get_stats()}")
            return cached_profile
        else:
            print("❌ No cached user profile found")
            return None

    async def test_activities_preview(self):
        """Test getting recent activities if available"""
        print("\n" + "="*60)
        print("ACTIVITIES PREVIEW TEST")
        print("="*60)

        if not await self.mcp_client.has_tool("get_activities"):
            print("❌ get_activities tool not available")
            return None

        print("✅ get_activities tool found")

        try:
            print("📞 Calling get_activities (limit=5)...")
            activities = await self.mcp_client.call_tool("get_activities", {"limit": 5})

            if activities:
                print(f"📋 Retrieved {len(activities)} activities")

                # Show basic info for each activity
                print("\nRecent Activities:")
                for i, activity in enumerate(activities[:3], 1):  # Show first 3
                    activity_type = activity.get('activityType', {}).get('typeKey', 'Unknown')
                    start_time = activity.get('startTimeLocal', 'Unknown time')
                    duration = activity.get('duration', 0)

                    print(f"  {i}. {activity_type} - {start_time}")
                    print(f"     Duration: {duration // 60}m {duration % 60}s")

                # Cache activities
                self.cache_manager.set("recent_activities", activities, ttl=900)
                print("💾 Activities cached (TTL: 15 minutes)")

                return activities
            else:
                print("📋 No activities found")
                return []

        except Exception as e:
            print(f"❌ Error getting activities: {e}")
            logger.error(f"Activities error: {e}", exc_info=True)
            return None

    async def run_all_tests(self):
        """Run all tests in sequence"""
        print("🚀 Starting MCP Test Suite")

        results = {}

        # Test 1: MCP Connection
        results['mcp_connection'] = await self.test_mcp_connection()

        if not results['mcp_connection']:
            print("\n❌ MCP connection failed - skipping remaining tests")
            return results

        # Test 2: User Profile
        results['user_profile'] = await self.test_user_profile()

        # Test 3: Cache Retrieval
        results['cache_retrieval'] = await self.test_cached_retrieval()

        # Test 4: Activities Preview (optional)
        results['activities'] = await self.test_activities_preview()

        # Summary
        print("\n" + "="*60)
        print("TEST SUMMARY")
        print("="*60)

        for test_name, result in results.items():
            status = "✅ PASS" if result else "❌ FAIL"
            print(f"{test_name.replace('_', ' ').title()}: {status}")

        # Cache summary
        cache_stats = self.cache_manager.get_stats()
        print(f"\nCache Status: {cache_stats['total_entries']} entries")
        for key in cache_stats['keys']:
            print(f"  - {key}")

        return results


def validate_config(config: Config) -> bool:
    """Validate configuration for MCP testing"""
    issues = []

    if not config.garth_token:
        issues.append("GARTH_TOKEN not set")

    if not config.garth_mcp_server_path:
        issues.append("garth_mcp_server_path not set")

    if issues:
        print("❌ Configuration issues:")
        for issue in issues:
            print(f"  - {issue}")
        print("\nTo fix:")
        print("1. Run 'uvx garth login' to get GARTH_TOKEN")
        print("2. Install garth-mcp-server: 'npm install -g garth-mcp-server'")
        print("3. Update config.yaml with your tokens")
        return False

    return True


async def main():
    """Main entry point"""
    print("MCP Test App - Simple MCP and User Profile Test")
    print("=" * 50)

    try:
        # Setup config
        create_sample_config()
        config = load_config()

        if not validate_config(config):
            return

        # Create and run test app
        app = MCPTestApp(config)

        try:
            await app.initialize()
            results = await app.run_all_tests()

            # Exit with appropriate code
            if all(results.values()):
                print("\n🎉 All tests passed!")
                sys.exit(0)
            else:
                print("\n⚠️ Some tests failed")
                sys.exit(1)

        finally:
            await app.cleanup()

    except KeyboardInterrupt:
        print("\n👋 Test interrupted")
    except Exception as e:
        print(f"\n💥 Test error: {e}")
        logger.error(f"Main error: {e}", exc_info=True)
        sys.exit(1)


if __name__ == "__main__":
    asyncio.run(main())
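The `CacheManager` calls above imply a small TTL-cache interface: `set`/`get` with a per-entry TTL, plus `get_stats`. A hypothetical minimal version of that interface — the real `cache_manager` module may differ:

```python
import time

class TTLCache:
    """Minimal stand-in for the CacheManager interface used above
    (set/get with per-entry TTL, plus get_stats). Hypothetical sketch,
    not the project's actual implementation."""

    def __init__(self, default_ttl: int = 300):
        self.default_ttl = default_ttl
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value, ttl=None):
        self._store[key] = (value, time.time() + (ttl or self.default_ttl))

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.time() > expires:
            del self._store[key]  # lazily evict expired entries on read
            return None
        return value

    def get_stats(self):
        return {"total_entries": len(self._store), "keys": list(self._store)}

cache = TTLCache(default_ttl=300)
cache.set("user_profile", {"name": "rider"}, ttl=3600)
print(cache.get("user_profile"))            # {'name': 'rider'}
print(cache.get_stats()["total_entries"])   # 1
```

Lazy eviction on read keeps `set`/`get` O(1) with no background timer, which fits a short-lived test script like the one above.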