diff --git a/architecture.md b/architecture.md
new file mode 100644
index 0000000..20bcc02
--- /dev/null
+++ b/architecture.md
@@ -0,0 +1,170 @@
+# Template Architecture Documentation
+
+## Overview
+The template system has been restructured into a modular architecture to improve reusability, maintainability, and extensibility. The original template files in `templates/` remain intact to preserve backward compatibility with the current [`TemplateManager`](templates_manager.py:5) usage. New modular components are organized in subdirectories under `templates/`.
+
+This structure allows prompts to be composed by combining base elements (system prompts, data sections, analysis frameworks) into workflows. Later phases extend the TemplateManager to load and compose these components dynamically.
+
+## Directory Structure
+```
+templates/
+├── *.txt (original templates - unchanged for compatibility)
+├── base/
+│   ├── system_prompts/        # Core system prompts for agents
+│   │   ├── no_tools_analysis.txt
+│   │   └── main_agent.txt
+│   ├── data_sections/         # Reusable data insertion blocks
+│   │   ├── activity_summary.txt
+│   │   ├── user_info.txt
+│   │   ├── training_rules.txt
+│   │   ├── workout_data.txt
+│   │   ├── workouts_data.txt
+│   │   ├── available_tools.txt
+│   │   └── recent_data.txt
+│   └── analysis_frameworks/   # Common analysis structures and instructions
+│       ├── assessment_points.txt
+│       ├── performance_analysis.txt
+│       └── data_gathering.txt
+├── components/                # General reusable components (to be populated in future phases)
+└── workflows/                 # Composed prompt templates using base components
+    ├── single_workout_analysis.txt
+    ├── analyze_last_workout.txt
+    ├── suggest_next_workout.txt
+    └── workout_recommendation.txt
+```
+
+## Component Relationships
+
+### Base Components
+- **System Prompts** (`templates/base/system_prompts/`): Define the AI's role and behavior.
+  - `no_tools_analysis.txt`: For analysis without tool calls (extracted from `enhanced_temp_system_prompt.txt` and `temp_analysis_system_prompt.txt`).
+  - `main_agent.txt`: For the main agent with tool access (extracted from `main_agent_system_prompt.txt`).
+
+- **Data Sections** (`templates/base/data_sections/`): Standardized blocks for inserting dynamic data with consistent formatting.
+  - Used in workflows via placeholders like `{activity_summary_section}`, which load and format the corresponding file.
+  - Example: `{user_info_section}` inserts user profile data.
+
+- **Analysis Frameworks** (`templates/base/analysis_frameworks/`): Reusable instruction sets for common analysis patterns.
+  - `assessment_points.txt`: Standard list of analysis outputs (e.g., assessment, alignment, improvements).
+  - `performance_analysis.txt`: Focus areas for performance metrics and recommendations.
+  - `data_gathering.txt`: Instructions for tool usage in data collection and risk assessment.
+
+### Workflows
+Workflows in `templates/workflows/` compose the base components to recreate the original template functionality in a modular way.
+- **Composition Pattern**: Each workflow includes the following (a code sketch follows the examples below):
+  - Introductory text specific to the use case.
+  - Included data sections (e.g., `{training_rules_section}`).
+  - Analysis frameworks (e.g., `{assessment_points}`).
+  - Closing instructions.
+
+Examples:
+- `single_workout_analysis.txt`: Uses `{workout_data_section}`, `{rules_section}`, `{assessment_points}`.
+- `analyze_last_workout.txt`: Uses `{activity_summary_section}`, `{user_info_section}`, `{training_rules_section}`, `{assessment_points}`.
+- `suggest_next_workout.txt`: Uses `{training_rules_section}` and custom recommendation points.
+- `workout_recommendation.txt`: Uses `{workouts_data}` and `{rules}` directly, with a recommendation structure.
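+
+As a sketch of how this composition might work in code (a minimal illustration under the paths shown in the directory structure above, not the exact implementation; the `sections_map` handling in `TemplateManager._load_and_compose` follows the same idea):
+
+```python
+from pathlib import Path
+
+# Map of section placeholders to the base component files that fill them.
+SECTIONS = {
+    "activity_summary_section": "base/data_sections/activity_summary.txt",
+    "user_info_section": "base/data_sections/user_info.txt",
+    "training_rules_section": "base/data_sections/training_rules.txt",
+    "assessment_points": "base/analysis_frameworks/assessment_points.txt",
+}
+
+def compose(workflow: str, templates_dir: str = "templates", **kwargs) -> str:
+    """Expand {..._section} placeholders with base components, then fill in the data."""
+    body = (Path(templates_dir) / "workflows" / workflow).read_text()
+    for placeholder, rel_path in SECTIONS.items():
+        token = "{" + placeholder + "}"
+        if token in body:
+            body = body.replace(token, (Path(templates_dir) / rel_path).read_text())
+    return body.format(**kwargs)  # fills {activity_summary}, {user_info}, {training_rules}
+
+# prompt = compose("analyze_last_workout.txt",
+#                  activity_summary="...", user_info="...", training_rules="...")
+```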
+
+### Backward Compatibility
+- All original `.txt` files in `templates/` are preserved.
+- The [`TemplateManager`](templates_manager.py:16) continues to load them via `get_template(template_name, **kwargs)`.
+- New workflows can be loaded the same way; the composition logic (e.g., replacing `{section}` placeholders) is provided by the Phase 4 features described below.
+
+#### Legacy Path Mapping
+To support gradual migration from old template paths to the new modular structure, the TemplateManager includes a legacy mapping layer:
+- Legacy template names are automatically redirected to their new locations.
+- Example: `'main_agent_system_prompt.txt'` maps to `'base/system_prompts/main_agent.txt'`.
+- Deprecation warnings are logged when legacy paths are used, to encourage migration.
+- Mappings are defined in the `legacy_mappings` dictionary in both the TemplateManager and TemplateValidator classes.
+
+### Phase 4: Advanced Template Features
+
+#### Template Inheritance System
+Templates can now use inheritance syntax to extend base templates and include reusable components:
+
+**Extends Syntax:**
+```
+extends: base_template_name
+```
+
+**Includes Syntax:**
+```
+includes:
+  - component1
+  - component2
+```
+
+The system supports multiple inheritance levels with conflict resolution (the child overrides the parent).
+
+#### Dynamic Component Selection
+Components are selected based on the data types available in the context:
+- If `workout_data` is present → include `workout_data_section`
+- If `user_info` is available → include `user_info_section`
+- If `training_rules` is provided → include `training_rules_section`
+
+Selection logic uses a priority-based system with fallback defaults.
+
+#### Template Versioning for A/B Testing
+Templates support versioning with the `@version` syntax:
+- `template@v1.0` - specific version
+- `template@latest` - most recent version
+- `template@random` - random version for A/B testing
+
+Version metadata includes:
+- Version number (semantic versioning)
+- Creation date
+- Author
+- Test metrics (conversion rates, performance)
+
+#### TemplateValidator
+The new validator class performs:
+- Inheritance cycle detection
+- Component existence validation
+- Syntax validation for extends/includes
+- Version format checking
+- Backward compatibility verification
+
+#### Version Control Integration
+Templates are stored in a version-controlled structure:
+```
+templates/
+├── versions/
+│   ├── template_name/
+│   │   ├── v1.0.txt
+│   │   ├── v1.1.txt
+│   │   └── v2.0.txt
+├── base/ (unchanged)
+├── components/ (unchanged)
+└── workflows/ (unchanged)
+```
+
+### Future Enhancements
+- Update the TemplateManager to support deeper component composition (e.g., recursive loading of sections).
+- Add more components to `templates/components/` for shared UI/logic elements.
+- Integrate with MCP tools for dynamic prompt generation.
+
+## Mermaid Diagram: Component Composition Example
+```mermaid
+graph TD
+    A[System Prompt] --> B[Data Sections]
+    B --> C[Analysis Frameworks]
+    C --> D[Workflow]
+    D --> E[Final Prompt]
+    F[Original Templates] -.-> E
+    style F fill:#f9f,stroke:#333,stroke-width:2px
+```
+This diagram shows how base components feed into workflows, with the originals as a fallback.
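+
+## Usage Example: Loading a Composed Workflow
+A sketch of how the composed templates are loaded in practice; the call mirrors the updated usage in `mcp_manager.py`, and the keyword values are placeholders. Versioned templates under `templates/versions/` are addressed the same way using the `name@version` form (e.g., `single_workout_analysis@v1.0`).
+
+```python
+from templates_manager import TemplateManager
+
+tm = TemplateManager("templates")
+
+# Section placeholders are expanded from base components, then the data is filled in.
+prompt = tm.get_template(
+    "workflows/analyze_last_workout.txt",
+    activity_summary="...",
+    user_info="...",
+    training_rules="...",
+)
+```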
+
+## Mermaid Diagram: Phase 4 Template Inheritance and Versioning
+```mermaid
+graph TD
+    A[Base Template] --> B[Extended Template]
+    B --> C[Includes Components]
+    C --> D[Dynamic Selection]
+    D --> E[Versioned Template]
+    E --> F[Validated Template]
+    G[Data Context] --> D
+    H[Version Control] --> E
+    I[TemplateValidator] --> F
+    J[Original Templates] -.-> F
+    style J fill:#f9f,stroke:#333,stroke-width:2px
+```
+This diagram illustrates the Phase 4 enhancements: inheritance extends base templates, includes add components, dynamic selection adapts to data, versioning enables A/B testing, and validation ensures correctness, while maintaining backward compatibility.
\ No newline at end of file
diff --git a/mcp_manager.py b/mcp_manager.py
index ccb8607..6ea0fb2 100644
--- a/mcp_manager.py
+++ b/mcp_manager.py
@@ -112,7 +112,7 @@ class PydanticAIAnalyzer:
 
         model_name = f"openrouter:{config.openrouter_model}"
 
-        main_system_prompt = self.template_manager.get_template('main_agent_system_prompt.txt')
+        main_system_prompt = self.template_manager.get_template('base/system_prompts/main_agent.txt')
 
         self.agent = Agent(
             model=model_name,
@@ -280,7 +280,7 @@ class PydanticAIAnalyzer:
         """
 
         prompt = self.template_manager.get_template(
-            'analyze_last_workout_prompt.txt',
+            'workflows/analyze_last_workout.txt',
             activity_summary=activity_summary,
             user_info=user_info,
             training_rules=training_rules
@@ -289,7 +289,7 @@ class PydanticAIAnalyzer:
         try:
             # Create temporary agent without tools for this analysis
             model_name = f"openrouter:{self.config.openrouter_model}"
-            temp_analysis_system_prompt = self.template_manager.get_template('temp_analysis_system_prompt.txt')
+            temp_analysis_system_prompt = self.template_manager.get_template('base/system_prompts/no_tools_analysis.txt')
             temp_agent = Agent(
                 model=model_name,
                 system_prompt=temp_analysis_system_prompt,
@@ -328,7 +328,7 @@ class PydanticAIAnalyzer:
             logger.warning("No MCP tools available!")
 
         prompt = self.template_manager.get_template(
-            'suggest_next_workout_prompt.txt',
+            'workflows/suggest_next_workout.txt',
             training_rules=training_rules
         )
@@ -390,7 +390,7 @@ class PydanticAIAnalyzer:
         """
 
         prompt = self.template_manager.get_template(
-            'enhanced_analysis_prompt.txt',
+            'workflows/single_workout_analysis.txt',
             analysis_type=analysis_type,
             activity_summary=activity_summary,
             user_info=user_info,
@@ -400,7 +400,7 @@ class PydanticAIAnalyzer:
         try:
             # Create temporary agent without tools for this analysis
             model_name = f"openrouter:{self.config.openrouter_model}"
-            enhanced_temp_system_prompt = self.template_manager.get_template('enhanced_temp_system_prompt.txt')
+            enhanced_temp_system_prompt = self.template_manager.get_template('base/system_prompts/no_tools_analysis.txt')
             temp_agent = Agent(
                 model=model_name,
                 system_prompt=enhanced_temp_system_prompt,
diff --git a/template_validator.py b/template_validator.py
new file mode 100644
index 0000000..36f946f
--- /dev/null
+++ b/template_validator.py
@@ -0,0 +1,155 @@
+import yaml
+from pathlib import Path
+import logging
+
+logger = logging.getLogger(__name__)
+
+class TemplateValidator:
+    """Validates template syntax, inheritance, and versioning."""
+
+    def __init__(self, templates_dir: str):
+        self.templates_dir = Path(templates_dir)
+        self.components_dir = self.templates_dir / "base"
+        self.versions_dir = self.templates_dir / "versions"
+        self.legacy_mappings = {'main_agent_system_prompt.txt': 'base/system_prompts/main_agent.txt'}
+
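+    # Templates may begin with a YAML frontmatter block delimited by '---' lines.
+    # Example (from templates/versions/single_workout_analysis/v1.0.txt in this change):
+    #
+    #   ---
+    #   extends: workflows/single_workout_analysis.txt
+    #   includes:
+    #     - data_sections/workout_data.txt
+    #   version: 1.0
+    #   ---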
+    def parse_frontmatter(self, content: str) -> tuple[dict, str]:
+        """Parse YAML frontmatter from template content."""
+        frontmatter = {}
+        if content.startswith("---\n"):
+            end = content.find("\n---\n")
+            if end != -1:
+                try:
+                    frontmatter = yaml.safe_load(content[4:end]) or {}
+                except yaml.YAMLError as e:
+                    raise ValueError(f"Invalid YAML frontmatter: {e}")
+                content = content[end + 5:]
+        return frontmatter, content
+
+    def validate_syntax(self, template_path: Path) -> bool:
+        """Validate extends/includes syntax in frontmatter."""
+        with open(template_path, 'r') as f:
+            frontmatter, _ = self.parse_frontmatter(f.read())
+
+        extends = frontmatter.get('extends')
+        includes = frontmatter.get('includes', [])
+
+        if extends and not isinstance(extends, str):
+            raise ValueError("extends must be a string")
+
+        if not isinstance(includes, list):
+            raise ValueError("includes must be a list")
+
+        for inc in includes:
+            if not isinstance(inc, str):
+                raise ValueError("Each include must be a string")
+
+        return True
+
+    def detect_inheritance_cycle(self, template_name: str, visited: set = None) -> bool:
+        """Detect cycles in the inheritance chain."""
+        if visited is None:
+            visited = set()
+
+        if template_name in visited:
+            return True  # Cycle detected
+
+        visited.add(template_name)
+        template_path = self._find_template(template_name)
+        if not template_path:
+            return False
+
+        with open(template_path, 'r') as f:
+            frontmatter, _ = self.parse_frontmatter(f.read())
+
+        extends = frontmatter.get('extends')
+        if extends:
+            if self.detect_inheritance_cycle(extends, visited):
+                return True
+
+        return False
+
+    def validate_components_exist(self, template_path: Path) -> bool:
+        """Check if all included components exist."""
+        with open(template_path, 'r') as f:
+            frontmatter, _ = self.parse_frontmatter(f.read())
+
+        includes = frontmatter.get('includes', [])
+        for inc in includes:
+            comp_path = self.components_dir / inc
+            if not comp_path.exists():
+                raise FileNotFoundError(f"Component '{inc}' not found")
+
+        return True
+
+    def validate_version(self, version_str: str) -> bool:
+        """Validate the version format (vMAJOR.MINOR with an optional PATCH)."""
+        import re
+        if re.match(r'^v?\d+\.\d+(\.\d+)?$', version_str):
+            return True
+        raise ValueError(f"Invalid version format: {version_str}")
+
+    def validate_backward_compatibility(self, template_path: Path) -> bool:
+        """Ensure the template can still be loaded as a plain template."""
+        with open(template_path, 'r') as f:
+            content = f.read()
+        try:
+            frontmatter, body = self.parse_frontmatter(content)
+            # With or without frontmatter, the parsed body is a plain template
+            return True
+        except ValueError:
+            # Invalid frontmatter is reported by validate_syntax; treat as plain text here
+            return True
+
+    def _find_template(self, name: str) -> Path | None:
+        """Find the template path, handling legacy names and versions."""
+        # Check legacy mappings first
+        if name in self.legacy_mappings:
+            name = self.legacy_mappings[name]
+        # Handle versioned names
+        if '@' in name:
+            name_part, version = name.rsplit('@', 1)
+            ver_path = self.versions_dir / name_part / f"{version}.txt"
+            if ver_path.exists():
+                return ver_path
+        # Handle subdirectory paths like workflows/xxx.txt or base/yyy.txt
+        if '/' in name:
+            path = self.templates_dir / name
+            if path.exists():
+                return path
+        # Plain name, with or without the .txt extension
+        path = self.templates_dir / name
+        if path.exists():
+            return path
+        path = self.templates_dir / f"{name}.txt"
+        if path.exists():
+            return path
+        return None
+
+    def full_validate(self, template_name: str) -> dict:
+        """Perform full validation and return a report."""
+        template_path = self._find_template(template_name)
+        if not template_path:
+            raise FileNotFoundError(f"Template '{template_name}' not found")
+
+        errors = []
+        try:
+            self.validate_syntax(template_path)
+        except ValueError as e:
+            errors.append(str(e))
+
+        try:
+            self.validate_components_exist(template_path)
+        except FileNotFoundError as e:
+            errors.append(str(e))
+
+        if self.detect_inheritance_cycle(template_name):
+            errors.append("Inheritance cycle detected")
+
+        version_str = template_name.split('@')[-1] if '@' in template_name else None
+        if version_str:
+            try:
+                self.validate_version(version_str)
+            except ValueError as e:
+                errors.append(str(e))
+
+        self.validate_backward_compatibility(template_path)
+
+        return {"valid": len(errors) == 0, "errors": errors}
\ No newline at end of file
diff --git a/templates/analyze_last_workout_prompt.txt b/templates/base/analysis_frameworks/assessment_points.txt
similarity index 52%
rename from templates/analyze_last_workout_prompt.txt
rename to templates/base/analysis_frameworks/assessment_points.txt
index 6ba7a31..874e571 100644
--- a/templates/analyze_last_workout_prompt.txt
+++ b/templates/base/analysis_frameworks/assessment_points.txt
@@ -1,20 +1,7 @@
-Analyze my most recent cycling workout using the provided data.
-
-ACTIVITY SUMMARY:
-{activity_summary}
-
-USER INFO:
-{user_info}
-
-My training rules and goals:
-{training_rules}
-
 Please provide:
 1. Overall assessment of the workout
 2. How well it aligns with my rules and goals
 3. Areas for improvement
 4. Specific feedback on power, heart rate, duration, and intensity
 5. Recovery recommendations
-6. Comparison with typical performance metrics (use user profile data for baselines)
-
-Focus on the provided activity details for your analysis.
\ No newline at end of file
+6. Comparison with typical performance metrics (use user profile data for baselines)
\ No newline at end of file
diff --git a/templates/mcp_enhanced_analysis.txt b/templates/base/analysis_frameworks/data_gathering.txt
similarity index 60%
rename from templates/mcp_enhanced_analysis.txt
rename to templates/base/analysis_frameworks/data_gathering.txt
index a3c164a..b044024 100644
--- a/templates/mcp_enhanced_analysis.txt
+++ b/templates/base/analysis_frameworks/data_gathering.txt
@@ -1,15 +1,3 @@
-You are an expert cycling coach with access to comprehensive Garmin Connect data through MCP tools.
-
-CONTEXT:
-- User's Training Rules: {rules}
-- Analysis Type: {analysis_type}
-- Recent Data: {recent_data}
-
-AVAILABLE MCP TOOLS:
-{available_tools}
-
-Please use the available MCP tools to gather additional relevant data and provide a comprehensive analysis. Focus on:
-
 1. **Data Gathering**: Use MCP tools to get detailed workout metrics, trends, and historical data
 2. **Performance Analysis**: Analyze power, heart rate, training load, and recovery metrics
 3. **Training Periodization**: Consider the user's training phase and progression
diff --git a/templates/enhanced_analysis_prompt.txt b/templates/base/analysis_frameworks/performance_analysis.txt
similarity index 59%
rename from templates/enhanced_analysis_prompt.txt
rename to templates/base/analysis_frameworks/performance_analysis.txt
index 1010afe..55968cb 100644
--- a/templates/enhanced_analysis_prompt.txt
+++ b/templates/base/analysis_frameworks/performance_analysis.txt
@@ -1,14 +1,4 @@
-Perform a comprehensive {analysis_type} analysis using the provided cycling training data.
-Do not call any tools - all core data is already loaded. Base your analysis on the following information:
-
-{activity_summary}
-
-{user_info}
-
-My training rules and goals:
-{training_rules}
-
-Focus your {analysis_type} analysis on:
+Focus your analysis on:
 1. **Performance Analysis**: Analyze power, heart rate, training load, and recovery metrics from the provided data
 2. **Training Periodization**: Consider the recent activity patterns and progression
 3. **Actionable Recommendations**: Provide specific, measurable guidance based on the data
diff --git a/templates/base/data_sections/activity_summary.txt b/templates/base/data_sections/activity_summary.txt
new file mode 100644
index 0000000..74d6380
--- /dev/null
+++ b/templates/base/data_sections/activity_summary.txt
@@ -0,0 +1,2 @@
+ACTIVITY SUMMARY:
+{activity_summary}
\ No newline at end of file
diff --git a/templates/base/data_sections/available_tools.txt b/templates/base/data_sections/available_tools.txt
new file mode 100644
index 0000000..bcc2416
--- /dev/null
+++ b/templates/base/data_sections/available_tools.txt
@@ -0,0 +1,2 @@
+AVAILABLE MCP TOOLS:
+{available_tools}
\ No newline at end of file
diff --git a/templates/base/data_sections/recent_data.txt b/templates/base/data_sections/recent_data.txt
new file mode 100644
index 0000000..02937ad
--- /dev/null
+++ b/templates/base/data_sections/recent_data.txt
@@ -0,0 +1,2 @@
+RECENT DATA:
+{recent_data}
\ No newline at end of file
diff --git a/templates/base/data_sections/training_rules.txt b/templates/base/data_sections/training_rules.txt
new file mode 100644
index 0000000..6613da0
--- /dev/null
+++ b/templates/base/data_sections/training_rules.txt
@@ -0,0 +1,2 @@
+My training rules and goals:
+{training_rules}
\ No newline at end of file
diff --git a/templates/base/data_sections/user_info.txt b/templates/base/data_sections/user_info.txt
new file mode 100644
index 0000000..aa87205
--- /dev/null
+++ b/templates/base/data_sections/user_info.txt
@@ -0,0 +1,2 @@
+USER INFO:
+{user_info}
\ No newline at end of file
diff --git a/templates/base/data_sections/workout_data.txt b/templates/base/data_sections/workout_data.txt
new file mode 100644
index 0000000..2449a3c
--- /dev/null
+++ b/templates/base/data_sections/workout_data.txt
@@ -0,0 +1,2 @@
+WORKOUT DATA:
+{workout_data}
\ No newline at end of file
diff --git a/templates/base/data_sections/workouts_data.txt b/templates/base/data_sections/workouts_data.txt
new file mode 100644
index 0000000..0e65b8c
--- /dev/null
+++ b/templates/base/data_sections/workouts_data.txt
@@ -0,0 +1,2 @@
+RECENT WORKOUTS:
+{workouts_data}
\ No newline at end of file
diff --git a/templates/main_agent_system_prompt.txt b/templates/base/system_prompts/main_agent.txt
similarity index 100%
rename from templates/main_agent_system_prompt.txt
rename to templates/base/system_prompts/main_agent.txt
diff --git a/templates/enhanced_temp_system_prompt.txt b/templates/base/system_prompts/no_tools_analysis.txt
similarity index 100%
rename from templates/enhanced_temp_system_prompt.txt
rename to templates/base/system_prompts/no_tools_analysis.txt
diff --git a/templates/single_workout_analysis.txt b/templates/single_workout_analysis.txt
deleted file mode 100644
index 6600120..0000000
--- a/templates/single_workout_analysis.txt
+++ /dev/null
@@ -1,17 +0,0 @@
-Analyze my cycling workout against my training rules and goals.
-
-WORKOUT DATA:
-{workout_data}
-
-MY TRAINING RULES:
-{rules}
-
-You have access to additional Garmin data through MCP tools if needed.
-
-Please provide:
-1. Overall assessment of the workout
-2. How well it aligns with my rules and goals
-3. Areas for improvement
-4. Specific feedback on power, heart rate, duration, and intensity
-5. Recovery recommendations
-6. Comparison with my typical performance metrics
\ No newline at end of file
diff --git a/templates/temp_analysis_system_prompt.txt b/templates/temp_analysis_system_prompt.txt
deleted file mode 100644
index 5ce8efd..0000000
--- a/templates/temp_analysis_system_prompt.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-You are an expert cycling coach. Analyze the provided cycling workout data and give actionable insights.
-Do not use any tools - all data is provided in the prompt.
\ No newline at end of file
diff --git a/templates/versions/single_workout_analysis/v1.0.txt b/templates/versions/single_workout_analysis/v1.0.txt
new file mode 100644
index 0000000..8d508df
--- /dev/null
+++ b/templates/versions/single_workout_analysis/v1.0.txt
@@ -0,0 +1,10 @@
+---
+extends: workflows/single_workout_analysis.txt
+includes:
+  - data_sections/workout_data.txt
+  - data_sections/training_rules.txt
+  - analysis_frameworks/assessment_points.txt
+version: 1.0
+---
+
+Additional instructions for v1.0: Emphasize power output analysis.
\ No newline at end of file
diff --git a/templates/workflows/analyze_last_workout.txt b/templates/workflows/analyze_last_workout.txt
new file mode 100644
index 0000000..7e0737b
--- /dev/null
+++ b/templates/workflows/analyze_last_workout.txt
@@ -0,0 +1,11 @@
+Analyze my most recent cycling workout using the provided data.
+
+{activity_summary_section}
+
+{user_info_section}
+
+{training_rules_section}
+
+{assessment_points}
+
+Focus on the provided activity details for your analysis.
\ No newline at end of file
diff --git a/templates/workflows/single_workout_analysis.txt b/templates/workflows/single_workout_analysis.txt
new file mode 100644
index 0000000..7096ea4
--- /dev/null
+++ b/templates/workflows/single_workout_analysis.txt
@@ -0,0 +1,9 @@
+Analyze my cycling workout against my training rules and goals.
+
+{workout_data_section}
+
+{rules_section}
+
+You have access to additional Garmin data through MCP tools if needed.
+
+{assessment_points}
\ No newline at end of file
diff --git a/templates/suggest_next_workout_prompt.txt b/templates/workflows/suggest_next_workout.txt
similarity index 78%
rename from templates/suggest_next_workout_prompt.txt
rename to templates/workflows/suggest_next_workout.txt
index 74e90d7..be36121 100644
--- a/templates/suggest_next_workout_prompt.txt
+++ b/templates/workflows/suggest_next_workout.txt
@@ -1,8 +1,6 @@
-Please suggest my next cycling workout based on my recent training history. Use the get_activities tool
-to get my recent activities and analyze the training pattern.
+Please suggest my next cycling workout based on my recent training history. Use the get_activities tool to get my recent activities and analyze the training pattern.
 
-My training rules and goals:
-{training_rules}
+{training_rules_section}
 
 Please provide:
 1. Analysis of my recent training pattern
diff --git a/templates/workout_recommendation.txt b/templates/workflows/workout_recommendation.txt
similarity index 100%
rename from templates/workout_recommendation.txt
rename to templates/workflows/workout_recommendation.txt
diff --git a/templates_manager.py b/templates_manager.py
index 924d17d..c907007 100644
--- a/templates_manager.py
+++ b/templates_manager.py
@@ -1,38 +1,175 @@
 import os
 import logging
+import yaml
 from pathlib import Path
+from template_validator import TemplateValidator
+
+logger = logging.getLogger(__name__)
 
 class TemplateManager:
-    """Manages prompt templates for the cycling analyzer"""
+    """Manages prompt templates for the cycling analyzer with inheritance, versioning, and validation"""
 
     def __init__(self, templates_dir: str):
         self.templates_dir = Path(templates_dir)
         self.templates_dir.mkdir(exist_ok=True)
+        self.validator = TemplateValidator(str(self.templates_dir))
+        self.components_dir = self.templates_dir / "base"
+        self.versions_dir = self.templates_dir / "versions"
+        self.versions_dir.mkdir(exist_ok=True)
+        # Legacy template names are redirected to their new modular locations (see architecture.md)
+        self.legacy_mappings = {'main_agent_system_prompt.txt': 'base/system_prompts/main_agent.txt'}
 
     def list_templates(self) -> list[str]:
-        """List available template files"""
-        return [f.name for f in self.templates_dir.glob("*.txt")]
+        """List available template files, including versioned ones"""
+        templates = []
+        # Original templates in the top-level directory
+        for f in self.templates_dir.glob("*.txt"):
+            templates.append(f.name)
+        # Modular templates under workflows/ and base/
+        for f in self.templates_dir.rglob("*.txt"):
+            rel_path = f.relative_to(self.templates_dir)
+            if rel_path.parts[0] in ("workflows", "base"):
+                templates.append(str(rel_path))
+        # Versioned templates
+        for ver_dir in self.versions_dir.iterdir():
+            if ver_dir.is_dir():
+                for f in ver_dir.glob("*.txt"):
+                    templates.append(f"{ver_dir.name}@{f.stem}")
+        return sorted(set(templates))
+
+    def _resolve_path(self, template_name: str) -> Path:
+        """Resolve the template path, handling legacy names, versions, and subdirectories"""
+        # Redirect legacy names to their new locations
+        if template_name in self.legacy_mappings:
+            logger.warning(f"Template name '{template_name}' is deprecated; use '{self.legacy_mappings[template_name]}' instead")
+            template_name = self.legacy_mappings[template_name]
+
+        # Handle versioned names
+        if '@' in template_name:
+            name, version = template_name.rsplit('@', 1)
+            ver_path = self.versions_dir / name / f"{version}.txt"
+            if ver_path.exists():
+                return ver_path
+
+        # Handle subdirectory paths like workflows/xxx.txt or base/yyy.txt
+        if '/' in template_name:
+            path = self.templates_dir / template_name
+            if path.exists():
+                return path
+
+        # Plain name, with or without the .txt extension
+        path = self.templates_dir / template_name
+        if path.exists():
+            return path
+        path = self.templates_dir / f"{template_name}.txt"
+        if path.exists():
+            return path
+
+        raise FileNotFoundError(f"Template '{template_name}' not found")
+
+    def _parse_frontmatter(self, content: str) -> tuple[dict, str]:
+        """Parse YAML frontmatter."""
+        frontmatter = {}
+        body = content
+        if content.startswith("---\n"):
+            end = content.find("\n---\n")
+            if end != -1:
+                try:
+                    frontmatter = yaml.safe_load(content[4:end]) or {}
+                except yaml.YAMLError as e:
+                    raise ValueError(f"Invalid YAML frontmatter: {e}")
+                body = content[end + 5:]
+        return frontmatter, body
+
+    def _load_and_compose(self, template_name: str, visited: set = None, **kwargs) -> str:
+        """Recursively load and compose a template with inheritance and includes."""
+        if visited is None:
+            visited = set()
+        if template_name in visited:
+            raise ValueError(f"Inheritance cycle detected for {template_name}")
+        visited.add(template_name)
+
+        path = self._resolve_path(template_name)
+        with open(path, 'r', encoding='utf-8') as f:
+            content = f.read()
+
+        frontmatter, body = self._parse_frontmatter(content)
+
+        # Handle extends
+        extends = frontmatter.get('extends')
+        if extends:
+            base_content = self._load_and_compose(extends, visited, **kwargs)
+            # Simple override: the child body replaces the base's {body} placeholder when present,
+            # otherwise it is appended after the base content
+            body = base_content.replace("{body}", body, 1) if "{body}" in base_content else base_content + "\n\n" + body
+
+        # Handle includes
+        includes = frontmatter.get('includes', [])
+        for inc in includes:
+            inc_path = self.components_dir / inc
+            if inc_path.exists():
+                with open(inc_path, 'r', encoding='utf-8') as f:
+                    inc_content = f.read().format(**kwargs)
+                body += f"\n\n{inc_content}"
+            else:
+                logger.warning(f"Include '{inc}' not found")
+
+        # Dynamic selection based on kwargs
+        dynamic_includes = []
+        if 'workout_data' in kwargs:
+            dynamic_includes.append('data_sections/workout_data.txt')
+        if 'user_info' in kwargs:
+            dynamic_includes.append('data_sections/user_info.txt')
+        if 'training_rules' in kwargs:
+            dynamic_includes.append('data_sections/training_rules.txt')
+        # Add more as needed
+
+        for dinc in dynamic_includes:
+            # Skip sections already pulled in via frontmatter includes or a {..._section} placeholder
+            placeholder = "{" + Path(dinc).stem + "_section}"
+            if dinc in includes or placeholder in body:
+                continue
+            dinc_path = self.components_dir / dinc
+            if dinc_path.exists():
+                with open(dinc_path, 'r', encoding='utf-8') as f:
+                    dinc_content = f.read().format(**kwargs)
+                body += f"\n\n{dinc_content}"
+
+        # Replace section placeholders
+        import re
+        section_pattern = re.compile(r'\{(\w+_section|\w+)\}')
+        sections_map = {
+            'activity_summary_section': 'data_sections/activity_summary.txt',
+            'user_info_section': 'data_sections/user_info.txt',
+            'training_rules_section': 'data_sections/training_rules.txt',
+            'workout_data_section': 'data_sections/workout_data.txt',
+            'workouts_data': 'data_sections/workouts_data.txt',
+            'available_tools_section': 'data_sections/available_tools.txt',
+            'recent_data_section': 'data_sections/recent_data.txt',
+            'assessment_points': 'analysis_frameworks/assessment_points.txt',
+            'performance_analysis': 'analysis_frameworks/performance_analysis.txt',
+            'data_gathering': 'analysis_frameworks/data_gathering.txt',
+        }
+
+        for match in section_pattern.finditer(body):
+            placeholder = match.group(0)
+            section_name = match.group(1)
+            if section_name in sections_map:
+                section_file = sections_map[section_name]
+                section_path = self.components_dir / section_file
+                if section_path.exists():
+                    with open(section_path, 'r', encoding='utf-8') as f:
+                        section_content = f.read().format(**kwargs)
+                    body = body.replace(placeholder, section_content)
+
+        return body
 
     def get_template(self, template_name: str, **kwargs) -> str:
-        """Load and format a template with provided variables"""
-        template_path = self.templates_dir / template_name
-        if not template_path.exists():
-            raise FileNotFoundError(f"Template '{template_name}' not found in {self.templates_dir}")
-
-        with open(template_path, 'r', encoding='utf-8') as f:
-            template_content = f.read()
-
+        """Load, compose, validate, and format a template."""
+        # Validate first
+        validation = self.validator.full_validate(template_name)
+        if not validation["valid"]:
+            raise ValueError(f"Template validation failed: {validation['errors']}")
+
+        # Compose
+        composed_content = self._load_and_compose(template_name, **kwargs)
+
         # Debug logging
-        logger = logging.getLogger(__name__)
         logger.debug(f"Loading template: {template_name}")
-        logger.debug(f"Template content length: {len(template_content)}")
+        logger.debug(f"Composed content length: {len(composed_content)}")
         logger.debug(f"Available kwargs: {list(kwargs.keys())}")
-
+
+        # Format
         try:
-            formatted_template = template_content.format(**kwargs)
+            formatted_template = composed_content.format(**kwargs)
             return formatted_template
         except KeyError as e:
-            logger.error(f"Template content preview: {template_content[:200]}...")
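+            # Keep a short preview of the composed template to help debug missing variables
+            logger.error(f"Composed content preview: {composed_content[:200]}...")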
logger.error(f"Missing variable in template '{template_name}': {e}") logger.error(f"Available kwargs: {list(kwargs.keys())}") raise ValueError(f"Missing variable in template '{template_name}': {e}")