working - split out prompt templates

This commit is contained in:
2025-09-24 14:07:02 -07:00
parent 30f83567f4
commit c1ba7813b9
23 changed files with 532 additions and 82 deletions

170
architecture.md Normal file
View File

@@ -0,0 +1,170 @@
# Template Architecture Documentation
## Overview
The template system has been restructured into a modular architecture to improve reusability, maintainability, and extensibility. The original template files in `templates/` remain intact to preserve backward compatibility with the current [`TemplateManager`](templates_manager.py:5) usage. New modular components are organized in subdirectories under `templates/`.
This structure allows for composing prompts by combining base elements (system prompts, data sections, analysis frameworks) into workflows. Future phases can update the TemplateManager to load and compose these components dynamically.
## Directory Structure
```
templates/
├── *.txt (original templates - unchanged for compatibility)
├── base/
│ ├── system_prompts/ # Core system prompts for agents
│ │ ├── no_tools_analysis.txt
│ │ └── main_agent.txt
│ ├── data_sections/ # Reusable data insertion blocks
│ │ ├── activity_summary.txt
│ │ ├── user_info.txt
│ │ ├── training_rules.txt
│ │ ├── workout_data.txt
│ │ ├── workouts_data.txt
│ │ ├── available_tools.txt
│ │ └── recent_data.txt
│ └── analysis_frameworks/ # Common analysis structures and instructions
│ ├── assessment_points.txt
│ ├── performance_analysis.txt
│ └── data_gathering.txt
├── components/ # General reusable components (to be populated in future phases)
└── workflows/ # Composed prompt templates using base components
├── single_workout_analysis.txt
├── analyze_last_workout.txt
├── suggest_next_workout.txt
└── workout_recommendation.txt
```
## Component Relationships
### Base Components
- **System Prompts** (`templates/base/system_prompts/`): Define the AI's role and behavior.
- `no_tools_analysis.txt`: For analysis without tool calls (extracted from `enhanced_temp_system_prompt.txt` and `temp_analysis_system_prompt.txt`).
- `main_agent.txt`: For the main agent with tool access (extracted from `main_agent_system_prompt.txt`).
- **Data Sections** (`templates/base/data_sections/`): Standardized blocks for inserting dynamic data with consistent formatting.
- Used in workflows via placeholders like `{activity_summary_section}`, which load and format the corresponding file.
- Example: `{user_info_section}` inserts user profile data.
- **Analysis Frameworks** (`templates/base/analysis_frameworks/`): Reusable instruction sets for common analysis patterns.
- `assessment_points.txt`: Standard list of analysis outputs (e.g., assessment, alignment, improvements).
- `performance_analysis.txt`: Focus areas for performance metrics and recommendations.
- `data_gathering.txt`: Instructions for tool usage in data collection and risk assessment.
### Workflows
Workflows in `templates/workflows/` compose the base components to recreate original template functionality modularly.
- **Composition Pattern**: Each workflow includes:
- Introductory text specific to the use case.
- Inclusions of data sections (e.g., `{training_rules_section}`).
- Analysis frameworks (e.g., `{assessment_points}`).
- Closing instructions.
Examples:
- `single_workout_analysis.txt`: Uses `{workout_data_section}`, `{rules_section}`, `{assessment_points}`.
- `analyze_last_workout.txt`: Uses `{activity_summary_section}`, `{user_info_section}`, `{training_rules_section}`, `{assessment_points}`.
- `suggest_next_workout.txt`: Uses `{training_rules_section}` and custom recommendation points.
- `workout_recommendation.txt`: Uses `{workouts_data}` and `{rules}` directly, with recommendation structure.
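The composition step itself is slated for a later phase; as a minimal sketch (the `compose` helper and its signature are hypothetical, not the shipped implementation), placeholder substitution could look like this:

```python
import re

def compose(workflow_text: str, sections: dict[str, str]) -> str:
    """Replace {name} placeholders that map to known section texts.

    Unknown placeholders (e.g. data placeholders like {training_rules})
    are left intact for a later .format(**kwargs) pass.
    """
    def _sub(match: re.Match) -> str:
        return sections.get(match.group(1), match.group(0))
    return re.sub(r"\{(\w+)\}", _sub, workflow_text)
```

Note that a section's own body may contain data placeholders, which survive this pass and are filled in later with the caller's keyword arguments.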
### Backward Compatibility
- All original `.txt` files in `templates/` are preserved.
- The [`TemplateManager`](templates_manager.py:16) continues to load them via `get_template(template_name, **kwargs)`.
- New workflows can be loaded similarly, but composition logic (e.g., replacing `{section}` placeholders) will be implemented in future phases.
#### Legacy Path Mapping
To support gradual migration from old template paths to the new modular structure, the TemplateManager includes a legacy mapping layer:
- Legacy template names are automatically redirected to their new locations.
- Example: `'main_agent_system_prompt.txt'` maps to `'base/system_prompts/main_agent.txt'`.
- Deprecation warnings are logged when legacy paths are used to encourage migration.
- Mappings are defined in the `legacy_mappings` dictionary in both the TemplateManager and TemplateValidator classes.
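A minimal sketch of the redirection (the mapping shows the one entry confirmed in this commit; the `resolve_legacy` function name is illustrative):

```python
import logging

logger = logging.getLogger(__name__)

# Only this entry is confirmed in the commit; more may be added later.
LEGACY_MAPPINGS = {
    "main_agent_system_prompt.txt": "base/system_prompts/main_agent.txt",
}

def resolve_legacy(name: str) -> str:
    """Redirect a legacy template name, logging a deprecation warning."""
    if name in LEGACY_MAPPINGS:
        new_name = LEGACY_MAPPINGS[name]
        logger.warning("Template '%s' is deprecated; use '%s'", name, new_name)
        return new_name
    return name  # Already a new-style path; pass through unchanged
```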
### Phase 4: Advanced Template Features
#### Template Inheritance System
Templates can now use inheritance syntax to extend base templates and include reusable components:
**Extends Syntax:**
```
extends: base_template_name
```
**Includes Syntax:**
```
includes:
- component1
- component2
```
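Combined, the frontmatter sits between `---` delimiters ahead of the template body, as in the versioned workflow added in this commit:

```
---
extends: workflows/single_workout_analysis.txt
includes:
  - data_sections/workout_data.txt
  - data_sections/training_rules.txt
  - analysis_frameworks/assessment_points.txt
version: 1.0
---
Additional instructions for v1.0: Emphasize power output analysis.
```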
The system supports multiple inheritance levels with conflict resolution (child overrides parent).
#### Dynamic Component Selection
Components are selected based on available data types in the context:
- If `workout_data` is present → include `workout_data_section`
- If `user_info` available → include `user_info_section`
- If `training_rules` provided → include `training_rules_section`
Selection logic uses a priority-based system with fallback defaults.
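The rules above can be sketched as a priority-ordered list with a fallback default (names and the fallback component here are illustrative, not the shipped implementation):

```python
# Priority-ordered rules: (context key, component to include when present).
SELECTION_RULES = [
    ("workout_data", "data_sections/workout_data.txt"),
    ("user_info", "data_sections/user_info.txt"),
    ("training_rules", "data_sections/training_rules.txt"),
]

# Used when no context keys match; assumed default for this sketch.
DEFAULT_COMPONENTS = ["analysis_frameworks/assessment_points.txt"]

def select_components(context: dict) -> list[str]:
    """Pick components whose data is present, in priority order."""
    chosen = [comp for key, comp in SELECTION_RULES if key in context]
    return chosen or DEFAULT_COMPONENTS
```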
#### Template Versioning for A/B Testing
Templates support versioning with `@version` syntax:
- `template@v1.0` - specific version
- `template@latest` - most recent version
- `template@random` - random version for A/B testing
Version metadata includes:
- Version number (semantic versioning)
- Creation date
- Author
- Test metrics (conversion rates, performance)
#### TemplateValidator
The new `TemplateValidator` class performs:
- Inheritance cycle detection
- Component existence validation
- Syntax validation for extends/includes
- Version format checking
- Backward compatibility verification
#### Version Control Integration
Templates are stored in a version-controlled structure:
```
templates/
├── versions/
│ ├── template_name/
│ │ ├── v1.0.txt
│ │ ├── v1.1.txt
│ │ └── v2.0.txt
├── base/ (unchanged)
├── components/ (unchanged)
└── workflows/ (unchanged)
```
### Future Enhancements
- Update TemplateManager to support component composition (e.g., recursive loading of sections).
- Add more components to `templates/components/` for shared UI/logic elements.
- Integrate with MCP tools for dynamic prompt generation.
## Mermaid Diagram: Component Composition Example
```mermaid
graph TD
A[System Prompt] --> B[Data Sections]
B --> C[Analysis Frameworks]
C --> D[Workflow]
D --> E[Final Prompt]
F[Original Templates] -.-> E
style F fill:#f9f,stroke:#333,stroke-width:2px
```
This diagram shows how base components feed into workflows, with originals as a fallback.
## Mermaid Diagram: Phase 4 Template Inheritance and Versioning
```mermaid
graph TD
A[Base Template] --> B[Extended Template]
B --> C[Includes Components]
C --> D[Dynamic Selection]
D --> E[Versioned Template]
E --> F[Validated Template]
G[Data Context] --> D
H[Version Control] --> E
I[TemplateValidator] --> F
J[Original Templates] -.-> F
style J fill:#f9f,stroke:#333,stroke-width:2px
```
This diagram illustrates the Phase 4 enhancements: inheritance extends base templates, includes add components, dynamic selection adapts to data, versioning enables A/B testing, and validation ensures correctness, while maintaining backward compatibility.

View File

@@ -112,7 +112,7 @@ class PydanticAIAnalyzer:
         model_name = f"openrouter:{config.openrouter_model}"
-        main_system_prompt = self.template_manager.get_template('main_agent_system_prompt.txt')
+        main_system_prompt = self.template_manager.get_template('base/system_prompts/main_agent.txt')
         self.agent = Agent(
             model=model_name,
@@ -280,7 +280,7 @@ class PydanticAIAnalyzer:
         """
         prompt = self.template_manager.get_template(
-            'analyze_last_workout_prompt.txt',
+            'workflows/analyze_last_workout.txt',
             activity_summary=activity_summary,
             user_info=user_info,
             training_rules=training_rules
@@ -289,7 +289,7 @@ class PydanticAIAnalyzer:
         try:
             # Create temporary agent without tools for this analysis
             model_name = f"openrouter:{self.config.openrouter_model}"
-            temp_analysis_system_prompt = self.template_manager.get_template('temp_analysis_system_prompt.txt')
+            temp_analysis_system_prompt = self.template_manager.get_template('base/system_prompts/no_tools_analysis.txt')
             temp_agent = Agent(
                 model=model_name,
                 system_prompt=temp_analysis_system_prompt,
@@ -328,7 +328,7 @@ class PydanticAIAnalyzer:
             logger.warning("No MCP tools available!")
         prompt = self.template_manager.get_template(
-            'suggest_next_workout_prompt.txt',
+            'workflows/suggest_next_workout.txt',
             training_rules=training_rules
         )
@@ -390,7 +390,7 @@ class PydanticAIAnalyzer:
         """
         prompt = self.template_manager.get_template(
-            'enhanced_analysis_prompt.txt',
+            'workflows/single_workout_analysis.txt',
             analysis_type=analysis_type,
             activity_summary=activity_summary,
             user_info=user_info,
@@ -400,7 +400,7 @@ class PydanticAIAnalyzer:
         try:
            # Create temporary agent without tools for this analysis
            model_name = f"openrouter:{self.config.openrouter_model}"
-            enhanced_temp_system_prompt = self.template_manager.get_template('enhanced_temp_system_prompt.txt')
+            enhanced_temp_system_prompt = self.template_manager.get_template('base/system_prompts/no_tools_analysis.txt')
             temp_agent = Agent(
                 model=model_name,
                 system_prompt=enhanced_temp_system_prompt,

155
template_validator.py Normal file
View File

@@ -0,0 +1,155 @@
import re
import logging
from pathlib import Path

import yaml

logger = logging.getLogger(__name__)


class TemplateValidator:
    """Validates template syntax, inheritance, and versioning."""

    def __init__(self, templates_dir: str):
        self.templates_dir = Path(templates_dir)
        self.components_dir = self.templates_dir / "base"
        self.versions_dir = self.templates_dir / "versions"
        self.legacy_mappings = {'main_agent_system_prompt.txt': 'base/system_prompts/main_agent.txt'}

    def parse_frontmatter(self, content: str) -> tuple[dict, str]:
        """Parse YAML frontmatter from template content; return (frontmatter, body)."""
        frontmatter = {}
        if content.startswith("---\n"):
            end = content.find("\n---\n")
            if end != -1:
                try:
                    frontmatter = yaml.safe_load(content[4:end]) or {}
                except yaml.YAMLError as e:
                    raise ValueError(f"Invalid YAML frontmatter: {e}")
                content = content[end + 5:]
        return frontmatter, content

    def validate_syntax(self, template_path: Path) -> bool:
        """Validate extends/includes syntax in frontmatter."""
        with open(template_path, 'r') as f:
            frontmatter, _ = self.parse_frontmatter(f.read())
        extends = frontmatter.get('extends')
        includes = frontmatter.get('includes', [])
        if extends and not isinstance(extends, str):
            raise ValueError("extends must be a string")
        if not isinstance(includes, list):
            raise ValueError("includes must be a list")
        for inc in includes:
            if not isinstance(inc, str):
                raise ValueError("Each include must be a string")
        return True

    def detect_inheritance_cycle(self, template_name: str, visited: set = None) -> bool:
        """Detect cycles in the inheritance chain."""
        if visited is None:
            visited = set()
        if template_name in visited:
            return True  # Cycle detected
        visited.add(template_name)
        template_path = self._find_template(template_name)
        if not template_path:
            return False
        with open(template_path, 'r') as f:
            frontmatter, _ = self.parse_frontmatter(f.read())
        extends = frontmatter.get('extends')
        if extends and self.detect_inheritance_cycle(extends, visited):
            return True
        return False

    def validate_components_exist(self, template_path: Path) -> bool:
        """Check that all included components exist."""
        with open(template_path, 'r') as f:
            frontmatter, _ = self.parse_frontmatter(f.read())
        includes = frontmatter.get('includes', [])
        for inc in includes:
            comp_path = self.components_dir / inc
            if not comp_path.exists():
                raise FileNotFoundError(f"Component '{inc}' not found")
        return True

    def validate_version(self, version_str: str) -> bool:
        """Validate version format."""
        # Accept MAJOR.MINOR or MAJOR.MINOR.PATCH (versions in this repo use v1.0 style)
        if re.match(r'^v?\d+\.\d+(\.\d+)?$', version_str):
            return True
        raise ValueError(f"Invalid version format: {version_str}")

    def validate_backward_compatibility(self, template_path: Path) -> bool:
        """Ensure the template can be loaded as plain text if it has no frontmatter."""
        with open(template_path, 'r') as f:
            content = f.read()
        try:
            self.parse_frontmatter(content)
            return True
        except ValueError:
            # Frontmatter is malformed, but the file is still loadable as plain text
            return True

    def _find_template(self, name: str) -> Path | None:
        """Find template path, handling legacy names, versions, and subdirs."""
        # Check legacy mappings first
        if name in self.legacy_mappings:
            name = self.legacy_mappings[name]
        # Handle versioned names like 'template@v1.0'
        if '@' in name:
            name_part, version = name.rsplit('@', 1)
            ver_path = self.versions_dir / name_part / f"{version}.txt"
            if ver_path.exists():
                return ver_path
        # Handle subdir paths like workflows/xxx.txt or base/yyy.txt
        if '/' in name:
            path = self.templates_dir / name
            if path.exists():
                return path
        # Plain name: try as given (legacy callers pass full filenames like 'foo.txt'),
        # then with a .txt extension appended
        path = self.templates_dir / name
        if path.exists():
            return path
        path = self.templates_dir / f"{name}.txt"
        if path.exists():
            return path
        return None

    def full_validate(self, template_name: str) -> dict:
        """Perform full validation and return a report."""
        template_path = self._find_template(template_name)
        if not template_path:
            raise FileNotFoundError(f"Template '{template_name}' not found")
        errors = []
        try:
            self.validate_syntax(template_path)
        except ValueError as e:
            errors.append(str(e))
        try:
            self.validate_components_exist(template_path)
        except FileNotFoundError as e:
            errors.append(str(e))
        if self.detect_inheritance_cycle(template_name):
            errors.append("Inheritance cycle detected")
        version_str = template_name.split('@')[-1] if '@' in template_name else None
        if version_str:
            try:
                self.validate_version(version_str)
            except ValueError as e:
                errors.append(str(e))
        self.validate_backward_compatibility(template_path)
        return {"valid": len(errors) == 0, "errors": errors}

View File

@@ -1,20 +1,7 @@
Analyze my most recent cycling workout using the provided data.
ACTIVITY SUMMARY:
{activity_summary}
USER INFO:
{user_info}
My training rules and goals:
{training_rules}
Please provide:
1. Overall assessment of the workout
2. How well it aligns with my rules and goals
3. Areas for improvement
4. Specific feedback on power, heart rate, duration, and intensity
5. Recovery recommendations
6. Comparison with typical performance metrics (use user profile data for baselines)
Focus on the provided activity details for your analysis.
6. Comparison with typical performance metrics (use user profile data for baselines)

View File

@@ -1,15 +1,3 @@
You are an expert cycling coach with access to comprehensive Garmin Connect data through MCP tools.
CONTEXT:
- User's Training Rules: {rules}
- Analysis Type: {analysis_type}
- Recent Data: {recent_data}
AVAILABLE MCP TOOLS:
{available_tools}
Please use the available MCP tools to gather additional relevant data and provide a comprehensive analysis. Focus on:
1. **Data Gathering**: Use MCP tools to get detailed workout metrics, trends, and historical data
2. **Performance Analysis**: Analyze power, heart rate, training load, and recovery metrics
3. **Training Periodization**: Consider the user's training phase and progression

View File

@@ -1,14 +1,4 @@
Perform a comprehensive {analysis_type} analysis using the provided cycling training data.
Do not call any tools - all core data is already loaded. Base your analysis on the following information:
{activity_summary}
{user_info}
My training rules and goals:
{training_rules}
Focus your {analysis_type} analysis on:
Focus your analysis on:
1. **Performance Analysis**: Analyze power, heart rate, training load, and recovery metrics from the provided data
2. **Training Periodization**: Consider the recent activity patterns and progression
3. **Actionable Recommendations**: Provide specific, measurable guidance based on the data

View File

@@ -0,0 +1,2 @@
ACTIVITY SUMMARY:
{activity_summary}

View File

@@ -0,0 +1,2 @@
AVAILABLE MCP TOOLS:
{available_tools}

View File

@@ -0,0 +1,2 @@
RECENT DATA:
{recent_data}

View File

@@ -0,0 +1,2 @@
My training rules and goals:
{training_rules}

View File

@@ -0,0 +1,2 @@
USER INFO:
{user_info}

View File

@@ -0,0 +1,2 @@
WORKOUT DATA:
{workout_data}

View File

@@ -0,0 +1,2 @@
RECENT WORKOUTS:
{workouts_data}

View File

@@ -1,17 +0,0 @@
Analyze my cycling workout against my training rules and goals.
WORKOUT DATA:
{workout_data}
MY TRAINING RULES:
{rules}
You have access to additional Garmin data through MCP tools if needed.
Please provide:
1. Overall assessment of the workout
2. How well it aligns with my rules and goals
3. Areas for improvement
4. Specific feedback on power, heart rate, duration, and intensity
5. Recovery recommendations
6. Comparison with my typical performance metrics

View File

@@ -1,2 +0,0 @@
You are an expert cycling coach. Analyze the provided cycling workout data and give actionable insights.
Do not use any tools - all data is provided in the prompt.

View File

@@ -0,0 +1,10 @@
---
extends: workflows/single_workout_analysis.txt
includes:
- data_sections/workout_data.txt
- data_sections/training_rules.txt
- analysis_frameworks/assessment_points.txt
version: 1.0
---
Additional instructions for v1.0: Emphasize power output analysis.

View File

@@ -0,0 +1,11 @@
Analyze my most recent cycling workout using the provided data.
{activity_summary_section}
{user_info_section}
{training_rules_section}
{assessment_points}
Focus on the provided activity details for your analysis.

View File

@@ -0,0 +1,9 @@
Analyze my cycling workout against my training rules and goals.
{workout_data_section}
{rules_section}
You have access to additional Garmin data through MCP tools if needed.
{assessment_points}

View File

@@ -1,8 +1,6 @@
-Please suggest my next cycling workout based on my recent training history. Use the get_activities tool
-to get my recent activities and analyze the training pattern.
+Please suggest my next cycling workout based on my recent training history. Use the get_activities tool to get my recent activities and analyze the training pattern.
-My training rules and goals:
-{training_rules}
+{training_rules_section}
 Please provide:
 1. Analysis of my recent training pattern

View File

@@ -1,38 +1,175 @@
 import os
 import logging
+import re
+import yaml
 from pathlib import Path
+from template_validator import TemplateValidator
+logger = logging.getLogger(__name__)
 class TemplateManager:
-    """Manages prompt templates for the cycling analyzer"""
+    """Manages prompt templates for the cycling analyzer with inheritance, versioning, and validation"""
     def __init__(self, templates_dir: str):
         self.templates_dir = Path(templates_dir)
         self.templates_dir.mkdir(exist_ok=True)
+        self.validator = TemplateValidator(str(self.templates_dir))
+        self.components_dir = self.templates_dir / "base"
+        self.versions_dir = self.templates_dir / "versions"
+        self.versions_dir.mkdir(exist_ok=True)
     def list_templates(self) -> list[str]:
-        """List available template files"""
-        return [f.name for f in self.templates_dir.glob("*.txt")]
+        """List available template files, including versioned ones"""
+        templates = []
+        # Base templates
+        for f in self.templates_dir.glob("*.txt"):
+            templates.append(f.name)
+        # Workflows and base subdirectories (check the top-level path component
+        # so nested files like base/system_prompts/main_agent.txt are included)
+        for f in self.templates_dir.rglob("*.txt"):
+            rel_path = f.relative_to(self.templates_dir)
+            if rel_path.parts[0] in ("workflows", "base"):
+                templates.append(str(rel_path))
+        # Versions
+        for ver_dir in self.versions_dir.iterdir():
+            if ver_dir.is_dir():
+                for f in ver_dir.glob("*.txt"):
+                    templates.append(f"{ver_dir.name}@{f.stem}")
+        return sorted(set(templates))
+    def _resolve_path(self, template_name: str) -> Path:
+        """Resolve template path, handling versions and subdirs"""
+        # Handle versioned names like 'template@v1.0'
+        if '@' in template_name:
+            name, version = template_name.rsplit('@', 1)
+            ver_path = self.versions_dir / name / f"{version}.txt"
+            if ver_path.exists():
+                return ver_path
+        # Handle subdir paths like workflows/xxx.txt or base/yyy.txt
+        if '/' in template_name:
+            path = self.templates_dir / template_name
+            if path.exists():
+                return path
+        # Plain name: try as given (legacy callers pass full filenames like
+        # 'foo.txt'), then with a .txt extension appended
+        path = self.templates_dir / template_name
+        if path.exists():
+            return path
+        path = self.templates_dir / f"{template_name}.txt"
+        if path.exists():
+            return path
+        raise FileNotFoundError(f"Template '{template_name}' not found")
+    def _parse_frontmatter(self, content: str) -> tuple[dict, str]:
+        """Parse YAML frontmatter."""
+        frontmatter = {}
+        body = content
+        if content.startswith("---\n"):
+            end = content.find("\n---\n")
+            if end != -1:
+                try:
+                    frontmatter = yaml.safe_load(content[4:end]) or {}
+                except yaml.YAMLError as e:
+                    raise ValueError(f"Invalid YAML frontmatter: {e}")
+                body = content[end + 5:]
+        return frontmatter, body
+    def _load_and_compose(self, template_name: str, visited: set = None, **kwargs) -> str:
+        """Recursively load and compose template with inheritance and includes."""
+        if visited is None:
+            visited = set()
+        if template_name in visited:
+            raise ValueError(f"Inheritance cycle detected for {template_name}")
+        visited.add(template_name)
+        path = self._resolve_path(template_name)
+        with open(path, 'r', encoding='utf-8') as f:
+            content = f.read()
+        frontmatter, body = self._parse_frontmatter(content)
+        # Handle extends
+        extends = frontmatter.get('extends')
+        if extends:
+            base_content = self._load_and_compose(extends, visited, **kwargs)
+            # Simple override: child body fills a {body} slot in the base if
+            # present, otherwise it is appended after the base content
+            if "{body}" in base_content:
+                body = base_content.replace("{body}", body, 1)
+            else:
+                body = base_content + "\n\n" + body
+        # Handle includes
+        includes = frontmatter.get('includes', [])
+        for inc in includes:
+            inc_path = self.components_dir / inc
+            if inc_path.exists():
+                with open(inc_path, 'r') as f:
+                    inc_content = f.read().format(**kwargs)
+                body += f"\n\n{inc_content}"
+            else:
+                logger.warning(f"Include '{inc}' not found")
+        # Dynamic selection based on kwargs
+        dynamic_includes = []
+        if 'workout_data' in kwargs:
+            dynamic_includes.append('data_sections/workout_data.txt')
+        if 'user_info' in kwargs:
+            dynamic_includes.append('data_sections/user_info.txt')
+        if 'training_rules' in kwargs:
+            dynamic_includes.append('data_sections/training_rules.txt')
+        # Add more as needed
+        for dinc in dynamic_includes:
+            if dinc not in includes:  # Avoid duplicates
+                dinc_path = self.components_dir / dinc
+                if dinc_path.exists():
+                    with open(dinc_path, 'r') as f:
+                        dinc_content = f.read().format(**kwargs)
+                    body += f"\n\n{dinc_content}"
+        # Replace section placeholders
+        section_pattern = re.compile(r'\{(\w+_section|\w+)\}')
+        sections_map = {
+            'activity_summary_section': 'data_sections/activity_summary.txt',
+            'user_info_section': 'data_sections/user_info.txt',
+            'training_rules_section': 'data_sections/training_rules.txt',
+            'workout_data_section': 'data_sections/workout_data.txt',
+            'workouts_data': 'data_sections/workouts_data.txt',
+            'available_tools_section': 'data_sections/available_tools.txt',
+            'recent_data_section': 'data_sections/recent_data.txt',
+            'assessment_points': 'analysis_frameworks/assessment_points.txt',
+            'performance_analysis': 'analysis_frameworks/performance_analysis.txt',
+            'data_gathering': 'analysis_frameworks/data_gathering.txt',
+        }
+        for match in section_pattern.finditer(body):
+            placeholder = match.group(0)
+            section_name = match.group(1)
+            if section_name in sections_map:
+                section_file = sections_map[section_name]
+                section_path = self.components_dir / section_file
+                if section_path.exists():
+                    with open(section_path, 'r', encoding='utf-8') as f:
+                        section_content = f.read().format(**kwargs)
+                    body = body.replace(placeholder, section_content)
+        return body
     def get_template(self, template_name: str, **kwargs) -> str:
-        """Load and format a template with provided variables"""
-        template_path = self.templates_dir / template_name
-        if not template_path.exists():
-            raise FileNotFoundError(f"Template '{template_name}' not found in {self.templates_dir}")
-        with open(template_path, 'r', encoding='utf-8') as f:
-            template_content = f.read()
+        """Load, compose, validate, and format a template."""
+        # Validate first
+        validation = self.validator.full_validate(template_name)
+        if not validation["valid"]:
+            raise ValueError(f"Template validation failed: {validation['errors']}")
+        # Compose
+        composed_content = self._load_and_compose(template_name, **kwargs)
         # Debug logging
-        logger = logging.getLogger(__name__)
         logger.debug(f"Loading template: {template_name}")
-        logger.debug(f"Template content length: {len(template_content)}")
+        logger.debug(f"Composed content length: {len(composed_content)}")
         logger.debug(f"Available kwargs: {list(kwargs.keys())}")
         # Format
         try:
-            formatted_template = template_content.format(**kwargs)
+            formatted_template = composed_content.format(**kwargs)
             return formatted_template
         except KeyError as e:
-            logger.error(f"Template content preview: {template_content[:200]}...")
             logger.error(f"Missing variable in template '{template_name}': {e}")
             logger.error(f"Available kwargs: {list(kwargs.keys())}")
             raise ValueError(f"Missing variable in template '{template_name}': {e}")