go baby go

This commit is contained in:
2025-09-12 16:59:14 -07:00
parent 7e1497f0cc
commit d50b49fa07
136 changed files with 2412 additions and 10384 deletions

CL_ImplementationGuide_1.md (diff suppressed because it is too large)

@@ -0,0 +1,66 @@
# Fitness TUI LLM Analysis Rules
## Core Instructions
You are a specialized fitness analysis AI integrated into a terminal-based fitness application. Your role is to provide expert analysis of athletic activities by comparing them against intended workout plans and providing actionable insights.
## Input Context
You will receive:
- **Activity Data**: Metrics from completed workouts (duration, distance, pace, heart rate, elevation, etc.)
- **Intended Workout**: Natural language description of what the athlete planned to do
- **Historical Context**: Previous analyses may be referenced for trend analysis
## Analysis Framework
### 1. Workout Adherence Assessment
- Compare actual performance against intended workout goals
- Identify deviations from planned intensity, duration, or structure
- Rate adherence on a scale: Excellent (90-100%), Good (70-89%), Fair (50-69%), Poor (<50%)
- Explain specific areas where the workout matched or differed from intentions
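
The rating scale above maps cleanly to a small helper; a sketch in which the function name and thresholds simply restate the scale as written:

```python
def adherence_rating(score: float) -> str:
    """Map an adherence percentage (0-100) to the four-step scale above."""
    if score >= 90:
        return "Excellent"
    if score >= 70:
        return "Good"
    if score >= 50:
        return "Fair"
    return "Poor"
```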
### 2. Performance Analysis
Evaluate these key areas:
#### Pacing Strategy
- Analyze pace distribution throughout the activity
- Identify pacing errors (too fast start, fading finish, inconsistent splits)
- Compare actual pace to target zones if specified in workout plan
- Flag negative splits, positive splits, or erratic pacing patterns
#### Heart Rate Analysis
- Assess time in different heart rate zones (if data available)
- Flag excessive time in high zones for easy workouts
- Flag insufficient intensity for hard workouts
- Identify heart rate drift patterns indicating fatigue or dehydration
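
Heart rate drift can be approximated by comparing the mean HR of the two halves of an activity; a minimal sketch (the split-halves method is one common approximation among several, and the function name is illustrative):

```python
def heart_rate_drift(hr_samples: list[float]) -> float:
    """Percent change in mean HR from the first half to the second half.

    A positive value suggests cardiac drift, which often indicates
    fatigue or dehydration during steady efforts.
    """
    mid = len(hr_samples) // 2
    first = sum(hr_samples[:mid]) / mid
    second = sum(hr_samples[mid:]) / (len(hr_samples) - mid)
    return (second - first) / first * 100
```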
#### Effort-Duration Relationship
- Evaluate if effort level was appropriate for workout duration
- Assess sustainability of the chosen intensity
- Compare perceived effort indicators (pace, HR) to workout goals
### 3. Training Load Assessment
- Evaluate workout difficulty relative to training phase
- Assess recovery needs based on intensity and duration
- Flag potentially excessive training stress
- Consider cumulative fatigue if historical data is available
### 4. Technical Observations
Analyze technical aspects when data is available:
- Cadence patterns and efficiency
- Power output consistency (if available)
- Elevation gain/loss and pacing on hills
- Environmental factors (temperature, weather impact)
## Output Format Requirements
Structure your analysis using these sections:
### Executive Summary
- One sentence overall assessment
- Adherence rating with brief justification
- Primary recommendation
### Workout Adherence Analysis
- How well did the activity match the intended workout?
- Specific


@@ -1,261 +0,0 @@
# 🎯 **Backend Implementation TODO List**
## **Priority 1: Core API Gaps (Essential)**
### **1.1 Plan Generation Endpoint**
- [ ] **Add plan generation endpoint** in `app/routes/plan.py`
```python
@router.post("/generate", response_model=PlanSchema)
async def generate_plan(
    plan_request: PlanGenerationRequest,
    db: AsyncSession = Depends(get_db),
):
    ...
```
- [ ] **Create PlanGenerationRequest schema** in `app/schemas/plan.py`
```python
class PlanGenerationRequest(BaseModel):
    rule_ids: List[UUID]
    goals: Dict[str, Any]
    user_preferences: Optional[Dict[str, Any]] = None
    duration_weeks: int = 12
```
- [ ] **Update AIService.generate_plan()** to handle rule fetching from DB
- [ ] **Add validation** for rule compatibility and goal requirements
- [ ] **Add tests** for plan generation workflow
### **1.2 Rule Parsing API**
- [ ] **Add natural language rule parsing endpoint** in `app/routes/rule.py`
```python
@router.post("/parse-natural-language")
async def parse_natural_language_rules(
    request: NaturalLanguageRuleRequest,
    db: AsyncSession = Depends(get_db),
):
    ...
```
- [ ] **Create request/response schemas** in `app/schemas/rule.py`
```python
class NaturalLanguageRuleRequest(BaseModel):
    natural_language_text: str
    rule_name: str

class ParsedRuleResponse(BaseModel):
    parsed_rules: Dict[str, Any]
    confidence_score: Optional[float]
    suggestions: Optional[List[str]]
```
- [ ] **Enhance AIService.parse_rules_from_natural_language()** with better error handling
- [ ] **Add rule validation** after parsing
- [ ] **Add preview mode** before saving parsed rules
### **1.3 Section Integration with GPX Parsing**
- [ ] **Update `app/services/gpx.py`** to create sections automatically
```python
async def parse_gpx_with_sections(file_path: str, route_id: UUID, db: AsyncSession) -> dict:
    # Parse GPX into segments
    # Create Section records for each segment
    # Return enhanced GPX data with section metadata
    ...
```
- [ ] **Modify `app/routes/gpx.py`** to create sections after route creation
- [ ] **Add section creation logic** in GPX upload workflow
- [ ] **Update Section model** to include more GPX-derived metadata
- [ ] **Add section querying endpoints** for route visualization
## **Priority 2: Data Model Enhancements**
### **2.1 Missing Schema Fields**
- [ ] **Add missing fields to User model** in `app/models/user.py`
```python
class User(BaseModel):
    name: Optional[str]
    email: Optional[str]
    fitness_level: Optional[str]
    preferences: Optional[JSON]
```
- [ ] **Enhance Plan model** with additional metadata
```python
class Plan(BaseModel):
    user_id: Optional[UUID] = Column(ForeignKey("users.id"))
    name: str
    description: Optional[str]
    start_date: Optional[Date]
    end_date: Optional[Date]
    goal_type: Optional[str]
    active: bool = Column(Boolean, default=True)
```
- [ ] **Add plan-rule relationship table** (already exists but ensure proper usage)
- [ ] **Update all schemas** to match enhanced models
### **2.2 Database Relationships**
- [ ] **Fix User-Plan relationship** in models
- [ ] **Add cascade delete rules** where appropriate
- [ ] **Add database constraints** for data integrity
- [ ] **Create missing indexes** for performance
```sql
CREATE INDEX idx_workouts_garmin_activity_id ON workouts(garmin_activity_id);
CREATE INDEX idx_plans_user_active ON plans(user_id, active);
CREATE INDEX idx_analyses_workout_approved ON analyses(workout_id, approved);
```
## **Priority 3: API Completeness**
### **3.1 Export/Import Functionality**
- [ ] **Create export service** `app/services/export_import.py`
```python
class ExportImportService:
    async def export_user_data(self, user_id: UUID) -> bytes: ...
    async def export_routes(self) -> bytes: ...
    async def import_user_data(self, data: bytes, user_id: UUID) -> None: ...
```
- [ ] **Add export endpoints** in new `app/routes/export.py`
```python
@router.get("/export/routes")
@router.get("/export/plans/{plan_id}")
@router.get("/export/user-data")
@router.post("/import/routes")
@router.post("/import/plans")
```
- [ ] **Support multiple formats** (JSON, GPX, ZIP)
- [ ] **Add data validation** for imports
- [ ] **Handle version compatibility** for imports
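
The ZIP export path could bundle per-entity JSON files in memory; a sketch under assumed data shapes (plain dicts rather than the real models):

```python
import io
import json
import zipfile

def export_bundle(routes: list[dict], plans: list[dict]) -> bytes:
    """Pack JSON payloads into an in-memory ZIP archive (illustrative sketch)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("routes.json", json.dumps(routes))
        zf.writestr("plans.json", json.dumps(plans))
    return buf.getvalue()
```

The same archive layout gives the importer a natural place to validate each entity file before touching the database.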
### **3.2 Enhanced Dashboard API**
- [ ] **Expand dashboard data** in `app/routes/dashboard.py`
```python
@router.get("/metrics/weekly")
@router.get("/metrics/monthly")
@router.get("/progress/{plan_id}")
@router.get("/upcoming-workouts")
```
- [ ] **Add aggregation queries** for metrics
- [ ] **Cache dashboard data** for performance
- [ ] **Add real-time updates** capability
### **3.3 Advanced Workout Features**
- [ ] **Add workout comparison endpoint**
```python
@router.get("/workouts/{workout_id}/compare/{compare_workout_id}")
```
- [ ] **Add workout search/filtering**
```python
@router.get("/workouts/search")
async def search_workouts(
    activity_type: Optional[str] = None,
    date_range: Optional[DateRange] = None,
    power_range: Optional[PowerRange] = None,
):
    ...
```
- [ ] **Add bulk workout operations**
- [ ] **Add workout tagging system**
## **Priority 4: Service Layer Improvements**
### **4.1 AI Service Enhancements**
- [ ] **Add prompt caching** to reduce API calls
- [ ] **Implement prompt A/B testing** framework
- [ ] **Add AI response validation** and confidence scoring
- [ ] **Create AI service health checks**
- [ ] **Add fallback mechanisms** for AI failures
- [ ] **Implement rate limiting** for AI calls
- [ ] **Add cost tracking** for AI API usage
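
The prompt-caching item might start as an in-memory map keyed by a hash of the model and prompt; a sketch only (class and method names are illustrative, and a real version would add TTLs, size bounds, and persistence):

```python
import hashlib
from typing import Optional

class PromptCache:
    """Cache AI responses keyed by (model, prompt) to avoid repeat API calls."""

    def __init__(self):
        self._store: dict[str, str] = {}

    def _key(self, model: str, prompt: str) -> str:
        # Hash keeps keys fixed-size even for very long prompts.
        return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

    def get(self, model: str, prompt: str) -> Optional[str]:
        return self._store.get(self._key(model, prompt))

    def put(self, model: str, prompt: str, response: str) -> None:
        self._store[self._key(model, prompt)] = response
```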
### **4.2 Garmin Service Improvements**
- [ ] **Add incremental sync** instead of full sync
- [ ] **Implement activity deduplication** logic
- [ ] **Add webhook support** for real-time sync
- [ ] **Enhance error recovery** for failed syncs
- [ ] **Add activity type filtering**
- [ ] **Support multiple Garmin accounts** per user
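
Incremental sync plus deduplication reduces to filtering fetched activities against what is already stored; a sketch assuming a simplified activity shape (dicts with `activity_id` and `start_time`, not the actual Garmin payload):

```python
from datetime import datetime

def filter_new_activities(fetched: list[dict], known_ids: set, last_sync: datetime) -> list[dict]:
    """Keep only activities newer than the last sync and not already stored."""
    return [
        a for a in fetched
        if a["activity_id"] not in known_ids and a["start_time"] > last_sync
    ]
```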
### **4.3 Plan Evolution Enhancements**
- [ ] **Add plan comparison** functionality
- [ ] **Implement plan rollback** mechanism
- [ ] **Add plan branching** for different scenarios
- [ ] **Create plan templates** system
- [ ] **Add automated plan adjustments** based on performance
## **Priority 5: Validation & Error Handling**
### **5.1 Input Validation**
- [ ] **Add comprehensive Pydantic validators** for all schemas
- [ ] **Validate GPX file integrity** before processing
- [ ] **Add business rule validation** (e.g., plan dates, workout conflicts)
- [ ] **Validate AI responses** before storing
- [ ] **Add file size/type restrictions** for uploads
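
Business-rule validation such as plan-date checks can live in small pure functions that return a list of violations; a sketch (the rule names and the 20-week cap are illustrative assumptions):

```python
from datetime import date

def validate_plan_dates(start: date, end: date, max_weeks: int = 20) -> list[str]:
    """Return a list of business-rule violations; an empty list means valid."""
    errors = []
    if end <= start:
        errors.append("end_date must be after start_date")
    elif (end - start).days > max_weeks * 7:
        errors.append(f"plan longer than {max_weeks} weeks")
    return errors
```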
### **5.2 Error Handling**
- [ ] **Create custom exception hierarchy**
```python
class CyclingCoachException(Exception): ...

class GarminSyncError(CyclingCoachException): ...

class AIServiceError(CyclingCoachException): ...

class PlanGenerationError(CyclingCoachException): ...
```
- [ ] **Add global exception handler**
- [ ] **Improve error messages** for user feedback
- [ ] **Add error recovery mechanisms**
- [ ] **Log errors with context** for debugging
## **Priority 6: Performance & Monitoring**
### **6.1 Performance Optimizations**
- [ ] **Add database query optimization**
- [ ] **Implement caching** for frequently accessed data
- [ ] **Add connection pooling** configuration
- [ ] **Optimize GPX file parsing** for large files
- [ ] **Add pagination** to list endpoints
- [ ] **Implement background job queue** for long-running tasks
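
List-endpoint pagination can be prototyped as a pure slicing helper before wiring it into the routes; a sketch in which the field names are illustrative:

```python
def paginate(items: list, page: int = 1, page_size: int = 20) -> dict:
    """Return one page of `items` plus metadata for building paged responses."""
    total = len(items)
    start = (page - 1) * page_size
    return {
        "items": items[start:start + page_size],
        "page": page,
        "page_size": page_size,
        "total": total,
        "pages": (total + page_size - 1) // page_size,  # ceiling division
    }
```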
### **6.2 Enhanced Monitoring**
- [ ] **Add application metrics** (response times, error rates)
- [ ] **Create health check dependencies**
- [ ] **Add performance profiling** endpoints
- [ ] **Implement alerting** for critical errors
- [ ] **Add audit logging** for data changes
## **Priority 7: Security & Configuration**
### **7.1 Security Improvements**
- [ ] **Implement user authentication/authorization**
- [ ] **Add rate limiting** to prevent abuse
- [ ] **Validate file uploads** for security
- [ ] **Add CORS configuration** properly
- [ ] **Implement request/response logging** (without sensitive data)
- [ ] **Add API versioning** support
### **7.2 Configuration Management**
- [ ] **Add environment-specific configs**
- [ ] **Validate configuration** on startup
- [ ] **Add feature flags** system
- [ ] **Implement secrets management**
- [ ] **Add configuration reload** without restart
## **Priority 8: Testing & Documentation**
### **8.1 Testing**
- [ ] **Create comprehensive test suite**
- Unit tests for services
- Integration tests for API endpoints
- Database migration tests
- AI service mock tests
- [ ] **Add test fixtures** for common data
- [ ] **Implement test database** setup/teardown
- [ ] **Add performance tests** for critical paths
- [ ] **Create end-to-end tests** for workflows
### **8.2 Documentation**
- [ ] **Generate OpenAPI documentation**
- [ ] **Add endpoint documentation** with examples
- [ ] **Create service documentation**
- [ ] **Document deployment procedures**
- [ ] **Add troubleshooting guides**
---
## **🎯 Recommended Implementation Order:**
1. **Week 1:** Priority 1 (Core API gaps) - Essential for feature completeness
2. **Week 2:** Priority 2 (Data model) + Priority 5.1 (Validation) - Foundation improvements
3. **Week 3:** Priority 3.1 (Export/Import) + Priority 4.1 (AI improvements) - User-facing features
4. **Week 4:** Priority 6 (Performance) + Priority 8.1 (Testing) - Production readiness
This TODO list will bring the backend implementation to full design-doc compliance and beyond, making it production-ready with enterprise-level features! 🚀


@@ -1,255 +0,0 @@
# Frontend Development TODO List
## 🚨 Critical Missing Features (High Priority)
### 1. Rules Management System
- [ ] **Create Rules page component** (`/src/pages/Rules.jsx`)
- [ ] Natural language textarea editor
- [ ] AI parsing button with loading state
- [ ] JSON preview pane with syntax highlighting
- [ ] Rule validation feedback
- [ ] Save/cancel actions
- [ ] **Create RuleEditor component** (`/src/components/rules/RuleEditor.jsx`)
- [ ] Rich text input with auto-resize
- [ ] Character count and validation
- [ ] Template suggestions dropdown
- [ ] **Create RulePreview component** (`/src/components/rules/RulePreview.jsx`)
- [ ] JSON syntax highlighting (use `react-json-view`)
- [ ] Editable JSON with validation
- [ ] Diff view for rule changes
- [ ] **Create RulesList component** (`/src/components/rules/RulesList.jsx`)
- [ ] Rule set selection dropdown
- [ ] Version history per rule set
- [ ] Delete/duplicate rule sets
- [ ] **API Integration**
- [ ] `POST /api/rules` - Create new rule set
- [ ] `PUT /api/rules/{id}` - Update rule set
- [ ] `GET /api/rules` - List all rule sets
- [ ] `POST /api/rules/{id}/parse` - AI parsing endpoint
### 2. Plan Generation Workflow
- [ ] **Create PlanGeneration page** (`/src/pages/PlanGeneration.jsx`)
- [ ] Goal selection interface
- [ ] Rule set selection
- [ ] Plan parameters (duration, weekly hours)
- [ ] Progress tracking for AI generation
- [ ] **Create GoalSelector component** (`/src/components/plans/GoalSelector.jsx`)
- [ ] Predefined goal templates
- [ ] Custom goal input
- [ ] Goal validation
- [ ] **Create PlanParameters component** (`/src/components/plans/PlanParameters.jsx`)
- [ ] Duration slider (4-20 weeks)
- [ ] Weekly hours slider (5-15 hours)
- [ ] Difficulty level selection
- [ ] Available days checkboxes
- [ ] **Enhance PlanTimeline component**
- [ ] Week-by-week breakdown
- [ ] Workout details expandable cards
- [ ] Progress tracking indicators
- [ ] Edit individual workouts
- [ ] **API Integration**
- [ ] `POST /api/plans/generate` - Generate new plan
- [ ] `GET /api/plans/{id}/preview` - Preview before saving
- [ ] Plan generation status polling
### 3. Route Management & Visualization
- [ ] **Enhance RoutesPage** (`/src/pages/RoutesPage.jsx`)
- [ ] Route list with metadata
- [ ] GPX file upload integration
- [ ] Route preview cards
- [ ] Search and filter functionality
- [ ] **Create RouteVisualization component** (`/src/components/routes/RouteVisualization.jsx`)
- [ ] Interactive map (use Leaflet.js)
- [ ] GPX track overlay
- [ ] Elevation profile chart
- [ ] Distance markers
- [ ] **Create RouteMetadata component** (`/src/components/routes/RouteMetadata.jsx`)
- [ ] Distance, elevation gain, grade analysis
- [ ] Estimated time calculations
- [ ] Difficulty rating
- [ ] Notes/description editing
- [ ] **Create SectionManager component** (`/src/components/routes/SectionManager.jsx`)
- [ ] Split routes into sections
- [ ] Section-specific metadata
- [ ] Gear recommendations per section
- [ ] **Dependencies to add**
- [ ] `npm install leaflet react-leaflet`
- [ ] GPX parsing library integration
### 4. Export/Import System
- [ ] **Create ExportImport page** (`/src/pages/ExportImport.jsx`)
- [ ] Export options (JSON, ZIP)
- [ ] Import validation
- [ ] Bulk operations
- [ ] **Create DataExporter component** (`/src/components/export/DataExporter.jsx`)
- [ ] Selective export (routes, rules, plans)
- [ ] Format selection (JSON, GPX, ZIP)
- [ ] Export progress tracking
- [ ] **Create DataImporter component** (`/src/components/export/DataImporter.jsx`)
- [ ] File validation and preview
- [ ] Conflict resolution interface
- [ ] Import progress tracking
- [ ] **API Integration**
- [ ] `GET /api/export` - Generate export package
- [ ] `POST /api/import` - Import data package
- [ ] `POST /api/import/validate` - Validate before import
## 🔧 Code Quality & Architecture Improvements
### 5. Enhanced Error Handling
- [ ] **Create GlobalErrorHandler** (`/src/components/GlobalErrorHandler.jsx`)
- [ ] Centralized error logging
- [ ] User-friendly error messages
- [ ] Retry mechanisms
- [ ] **Improve API error handling**
- [ ] Consistent error response format
- [ ] Network error recovery
- [ ] Timeout handling
- [ ] **Add error boundaries**
- [ ] Page-level error boundaries
- [ ] Component-level error recovery
### 6. State Management Improvements
- [ ] **Enhance AuthContext**
- [ ] Add user preferences
- [ ] API caching layer
- [ ] Offline capability detection
- [ ] **Create AppStateContext** (`/src/context/AppStateContext.jsx`)
- [ ] Global loading states
- [ ] Toast notifications
- [ ] Modal management
- [ ] **Add React Query** (Optional but recommended)
- [ ] `npm install @tanstack/react-query`
- [ ] API data caching
- [ ] Background refetching
- [ ] Optimistic updates
### 7. UI/UX Enhancements
- [ ] **Improve responsive design**
- [ ] Better mobile navigation
- [ ] Touch-friendly interactions
- [ ] Responsive charts and maps
- [ ] **Add loading skeletons**
- [ ] Replace generic spinners
- [ ] Component-specific skeletons
- [ ] Progressive loading
- [ ] **Create ConfirmDialog component** (`/src/components/ui/ConfirmDialog.jsx`)
- [ ] Delete confirmations
- [ ] Destructive action warnings
- [ ] Custom confirmation messages
- [ ] **Add keyboard shortcuts**
- [ ] Navigation shortcuts
- [ ] Action shortcuts
- [ ] Help overlay
## 🧪 Testing & Quality Assurance
### 8. Testing Infrastructure
- [ ] **Expand component tests**
- [ ] Rules management tests
- [ ] Plan generation tests
- [ ] Route visualization tests
- [ ] **Add integration tests**
- [ ] API integration tests
- [ ] User workflow tests
- [ ] Error scenario tests
- [ ] **Performance testing**
- [ ] Large dataset handling
- [ ] Chart rendering performance
- [ ] Memory leak detection
### 9. Development Experience
- [ ] **Add Storybook** (Optional)
- [ ] Component documentation
- [ ] Design system documentation
- [ ] Interactive component testing
- [ ] **Improve build process**
- [ ] Bundle size optimization
- [ ] Dead code elimination
- [ ] Tree shaking verification
- [ ] **Add development tools**
- [ ] React DevTools integration
- [ ] Performance monitoring
- [ ] Bundle analyzer
## 📚 Documentation & Dependencies
### 10. Missing Dependencies
```json
{
  "leaflet": "^1.9.4",
  "react-leaflet": "^4.2.1",
  "react-json-view": "^1.21.3",
  "@tanstack/react-query": "^4.32.0",
  "react-hook-form": "^7.45.0",
  "react-select": "^5.7.4",
  "file-saver": "^2.0.5"
}
```
### 11. Configuration Files
- [ ] **Create environment config** (`/src/config/index.js`)
- [ ] API endpoints configuration
- [ ] Feature flags
- [ ] Environment-specific settings
- [ ] **Add TypeScript support** (Optional)
- [ ] Convert critical components
- [ ] Add type definitions
- [ ] Improve IDE support
## 🚀 Deployment & Performance
### 12. Production Readiness
- [ ] **Optimize bundle size**
- [ ] Code splitting implementation
- [ ] Lazy loading for routes
- [ ] Image optimization
- [ ] **Add PWA features** (Optional)
- [ ] Service worker
- [ ] Offline functionality
- [ ] App manifest
- [ ] **Performance monitoring**
- [ ] Core Web Vitals tracking
- [ ] Error tracking integration
- [ ] User analytics
## 📅 Implementation Priority
### Phase 1 (Week 1-2): Core Missing Features
1. Rules Management System
2. Plan Generation Workflow
3. Enhanced Route Management
### Phase 2 (Week 3): Data Management
1. Export/Import System
2. Enhanced Error Handling
3. State Management Improvements
### Phase 3 (Week 4): Polish & Quality
1. UI/UX Enhancements
2. Testing Infrastructure
3. Performance Optimization
### Phase 4 (Ongoing): Maintenance
1. Documentation
2. Monitoring
3. User Feedback Integration
---
## 🎯 Success Criteria
- [ ] All design document workflows implemented
- [ ] 90%+ component test coverage
- [ ] Mobile-responsive design
- [ ] Sub-3s initial page load
- [ ] Accessibility compliance (WCAG 2.1 AA)
- [ ] Cross-browser compatibility (Chrome, Firefox, Safari, Edge)
## 📝 Notes
- **Prioritize user-facing features** over internal architecture improvements
- **Test each feature** as you implement it
- **Consider Progressive Web App features** for offline functionality
- **Plan for internationalization** if expanding globally
- **Monitor bundle size** as you add dependencies


@@ -1,97 +0,0 @@
### Phase 5: Testing and Deployment (Week 12-13)
#### Week 12: Testing
1. **Backend Testing**
- Implement comprehensive unit tests for critical services:
- Garmin sync service (mock API responses)
- AI service (mock OpenRouter API)
- Workflow services (plan generation, evolution)
- API endpoint testing with realistic payloads
- Error handling and edge case testing
- Database operation tests (including rollback scenarios)
Example test for Garmin service:
```python
# tests/test_garmin_service.py
import pytest
from unittest.mock import patch

from app.services.garmin import GarminService
from app.exceptions import GarminAuthError

@pytest.mark.asyncio
async def test_garmin_auth_failure():
    with patch('garth.Client', side_effect=Exception("Auth failed")):
        service = GarminService()
        with pytest.raises(GarminAuthError):
            await service.authenticate()
```
2. **Integration Testing**
- Test full Garmin sync workflow: authentication → activity fetch → storage
- Verify AI analysis pipeline: workout → analysis → plan evolution
- Database transaction tests across multiple operations
- File system integration tests (GPX upload/download)
3. **Frontend Testing**
- Component tests using React Testing Library
- User workflow tests (upload GPX → generate plan → analyze workout)
- API response handling and error display tests
- Responsive design verification across devices
Example component test:
```javascript
// frontend/src/components/__tests__/GarminSync.test.jsx
import { render, screen, fireEvent } from '@testing-library/react';
import GarminSync from '../GarminSync';

test('shows sync status after triggering', async () => {
  render(<GarminSync />);
  fireEvent.click(screen.getByText('Sync Recent Activities'));
  expect(await screen.findByText('Syncing...')).toBeInTheDocument();
});
```
4. **Continuous Integration Setup**
- Configure GitHub Actions pipeline:
- Backend test suite (Python)
- Frontend test suite (Jest)
- Security scanning (dependencies, secrets)
- Docker image builds on successful tests
- Automated database migration checks
- Test coverage reporting
#### Week 13: Deployment Preparation
1. **Environment Configuration**
```bash
# .env.production
GARMIN_USERNAME=your_garmin_email
GARMIN_PASSWORD=your_garmin_password
OPENROUTER_API_KEY=your_openrouter_key
AI_MODEL=anthropic/claude-3-sonnet-20240229
API_KEY=your_secure_api_key
```
2. **Production Docker Setup**
- Optimize Dockerfiles for production:
- Multi-stage builds
- Minimized image sizes
- Proper user permissions
- Health checks for all services
- Resource limits in docker-compose.prod.yml
3. **Backup Strategy**
- Implement daily automated backups:
- Database (pg_dump)
- GPX files
- Garmin sessions
- Backup rotation (keep last 30 days)
- Verify restore procedure
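
The 30-day rotation could be a small pruning helper run after each backup; a sketch assuming `.dump` filenames (the extension and function name are illustrative):

```python
from pathlib import Path

def prune_backups(backup_dir: Path, keep: int = 30) -> list[Path]:
    """Delete all but the newest `keep` backup files; return what was removed."""
    backups = sorted(
        backup_dir.glob("*.dump"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,  # newest first
    )
    removed = []
    for old in backups[keep:]:
        old.unlink()
        removed.append(old)
    return removed
```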
4. **Monitoring and Logging**
- Structured logging with log rotation
- System health dashboard
- Error tracking and alerting
- Performance monitoring
## Key Technical Decisions
...


@@ -1,27 +0,0 @@
# Include important files
include README.md
include LICENSE
include requirements.txt
# .env holds credentials and must not be packaged
exclude .env
include Makefile
include pyproject.toml
# Include backend files
recursive-include backend *.py
recursive-include backend *.ini
recursive-include backend *.mako
# Include TUI files
recursive-include tui *.py
# Include data directories
recursive-include data *.gitkeep
# Exclude unnecessary files
exclude *.pyc
recursive-exclude * __pycache__
recursive-exclude * *.py[co]
exclude .git*
exclude .vscode
exclude .idea
exclude *.log

View File

@@ -1,67 +0,0 @@
.PHONY: install dev-install run test clean build package help init-db lint format setup
# Default target
help:
	@echo "AI Cycling Coach - Available commands:"
	@echo "  install      - Install the application"
	@echo "  dev-install  - Install in development mode"
	@echo "  run          - Run the application"
	@echo "  init-db      - Initialize the database"
	@echo "  test         - Run tests"
	@echo "  clean        - Clean build artifacts"
	@echo "  build        - Build distribution packages"
	@echo "  package      - Create standalone executable"
	@echo "  lint         - Check code formatting"
	@echo "  format       - Format code"
	@echo "  setup        - Quick setup for new users"
# Installation
install:
	pip install .
dev-install:
	pip install -e .[dev]
# Database initialization
init-db:
	@echo "Initializing database..."
	@mkdir -p data
	@cd backend && python -m alembic upgrade head
	@echo "Database initialized successfully!"
# Run application
run:
	python main.py
# Testing
test:
	pytest
# Cleanup
clean:
	rm -rf build/
	rm -rf dist/
	rm -rf *.egg-info/
	find . -type d -name __pycache__ -exec rm -rf {} +
	find . -type f -name "*.pyc" -delete
# Build distribution
build: clean
	python -m build
# Package as executable (requires PyInstaller)
package:
	@echo "Creating standalone executable..."
	@pip install pyinstaller
	@pyinstaller --onefile --name cycling-coach main.py
	@echo "Executable created in dist/cycling-coach"
# Development tools
lint:
	black --check .
	isort --check-only .
format:
	black .
	isort .
# Quick setup for new users
setup: dev-install init-db
	@echo "Setup complete! Run 'make run' to start the application."

README.md

@@ -1,272 +0,0 @@
# AI Cycling Coach - Terminal Edition
🚴‍♂️ An intelligent cycling training coach with a sleek Terminal User Interface (TUI) that creates personalized training plans and analyzes your workouts using AI, with seamless Garmin Connect integration.
## ✨ Features
- **🧠 AI-Powered Plan Generation**: Create personalized 4-week training plans based on your goals and constraints
- **📊 Automatic Workout Analysis**: Get detailed AI feedback on your completed rides with terminal-based visualizations
- **⌚ Garmin Connect Integration**: Sync activities automatically from your Garmin device
- **🔄 Plan Evolution**: Training plans adapt based on your actual performance
- **🗺️ GPX Route Management**: Upload and visualize your favorite cycling routes with ASCII maps
- **📈 Progress Tracking**: Monitor your training progress with terminal charts and metrics
- **💻 Pure Terminal Interface**: Beautiful, responsive TUI that works entirely in your terminal
- **🗃️ SQLite Database**: Lightweight, portable database that travels with your data
- **🚀 No Docker Required**: Simple installation and native performance
## 🏁 Quick Start
### Option 1: Automated Installation (Recommended)
```bash
git clone https://github.com/ai-cycling-coach/ai-cycling-coach.git
cd ai-cycling-coach
./install.sh
```
### Option 2: Manual Installation
```bash
# Clone and setup
git clone https://github.com/ai-cycling-coach/ai-cycling-coach.git
cd ai-cycling-coach
# Create virtual environment
python3 -m venv venv
source venv/bin/activate
# Install
pip install -e .
# Initialize database
make init-db
# Run the application
cycling-coach
```
## ⚙️ Configuration
Edit the `.env` file with your settings:
```bash
# Database Configuration (SQLite)
DATABASE_URL=sqlite+aiosqlite:///data/cycling_coach.db
# File Storage
GPX_STORAGE_PATH=data/gpx
# AI Service Configuration
OPENROUTER_API_KEY=your_openrouter_api_key_here
AI_MODEL=deepseek/deepseek-r1
# Garmin Connect Credentials
GARMIN_USERNAME=your_garmin_email@example.com
GARMIN_PASSWORD=your_secure_password
# Optional: Logging Configuration
LOG_LEVEL=INFO
```
## 🎮 Usage
### Terminal Interface
Start the application with:
```bash
cycling-coach
# or
ai-cycling-coach
# or
python main.py
```
Navigate through the interface using:
1. **🏠 Dashboard**: View recent workouts, weekly stats, and sync status
2. **📋 Plans**: Generate new training plans or manage existing ones
3. **💪 Workouts**: Sync from Garmin, view detailed analysis, and approve AI suggestions
4. **📏 Rules**: Define custom training constraints and preferences
5. **🗺️ Routes**: Upload GPX files and view ASCII route visualizations
### Key Features
#### 🧠 AI-Powered Analysis
- Detailed workout feedback with actionable insights
- Performance trend analysis
- Training load recommendations
- Recovery suggestions
#### 🗺️ ASCII Route Visualization
```
Route Map:
S●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●E
● ●
● Morning Loop - 15.2km ●
● ●
●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●●
Elevation Profile (50m - 180m):
████████████████████████████████████████████████████████████
██████████████████████████████ ████████████████████████████
███████████████████████████ ████████████████████████
███████████████████████ ██████████████████████
```
#### 📊 Terminal-Based Charts
- Heart rate zones
- Power distribution
- Training load trends
- Weekly volume tracking
## 🏗️ Architecture
### 🔧 Technology Stack
- **Backend**: Python with SQLAlchemy + SQLite
- **TUI Framework**: Textual (Rich terminal interface)
- **AI Integration**: OpenRouter API (Deepseek R1, Claude, GPT)
- **Garmin Integration**: garth library
- **Database**: SQLite with async support
### 📁 Project Structure
```
ai-cycling-coach/
├── main.py                 # Application entrypoint
├── backend/                # Core business logic
│   ├── app/
│   │   ├── models/         # Database models
│   │   ├── services/       # Business services
│   │   └── config.py       # Configuration
│   └── alembic/            # Database migrations
├── tui/                    # Terminal interface
│   ├── views/              # TUI screens
│   ├── services/           # TUI service layer
│   └── widgets/            # Custom UI components
└── data/                   # SQLite database and files
    ├── cycling_coach.db
    └── gpx/                # GPX route files
```
## 🛠️ Development
### Setup Development Environment
```bash
# Clone repository
git clone https://github.com/ai-cycling-coach/ai-cycling-coach.git
cd ai-cycling-coach
# Install in development mode
make dev-install
# Initialize database
make init-db
# Run tests
make test
# Format code
make format
# Run application
make run
```
### Available Make Commands
```bash
make help # Show all available commands
make install # Install the application
make dev-install # Install in development mode
make run # Run the application
make init-db # Initialize the database
make test # Run tests
make clean # Clean build artifacts
make build # Build distribution packages
make package # Create standalone executable
make setup # Complete setup for new users
```
### Creating a Standalone Executable
```bash
make package
# Creates: dist/cycling-coach
```
## 🚀 Deployment Options
### 1. Portable Installation
```bash
# Create portable package
make build
pip install dist/ai-cycling-coach-*.whl
```
### 2. Standalone Executable
```bash
# Create single-file executable
make package
# Copy dist/cycling-coach to target system
```
### 3. Development Installation
```bash
# For development and testing
make dev-install
```
## 📋 Requirements
- **Python**: 3.8 or higher
- **Operating System**: Linux, macOS, Windows
- **Terminal**: Any terminal with Unicode support
- **Memory**: ~100MB RAM
- **Storage**: ~50MB + data files
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Run tests (`make test`)
5. Format code (`make format`)
6. Commit changes (`git commit -m 'Add amazing feature'`)
7. Push to branch (`git push origin feature/amazing-feature`)
8. Open a Pull Request
## 🐛 Troubleshooting
### Common Issues
**Database errors:**
```bash
make init-db # Reinitialize database
```
**Import errors:**
```bash
pip install -e . # Reinstall in development mode
```
**Garmin sync fails:**
- Check credentials in `.env`
- Verify Garmin Connect account access
- Check internet connection
**TUI rendering issues:**
- Ensure terminal supports Unicode
- Try different terminal emulators
- Check terminal size (minimum 80x24)
### Getting Help
- 📖 Check the documentation
- 🐛 Open an issue on GitHub
- 💬 Join our community discussions
## 📄 License
MIT License - see [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- [Textual](https://github.com/Textualize/textual) - Amazing TUI framework
- [garth](https://github.com/matin/garth) - Garmin Connect integration
- [OpenRouter](https://openrouter.ai/) - AI model access
- [SQLAlchemy](https://www.sqlalchemy.org/) - Database toolkit

@@ -1,27 +0,0 @@
[alembic]
script_location = alembic
sqlalchemy.url = sqlite+aiosqlite:///data/cycling_coach.db
[loggers]
keys = root
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S

@@ -1,70 +0,0 @@
from logging.config import fileConfig
from sqlalchemy import pool
from sqlalchemy.ext.asyncio import create_async_engine
from alembic import context
import sys
import os
from pathlib import Path
# Add backend directory to path
backend_dir = Path(__file__).parent.parent
sys.path.insert(0, str(backend_dir))
# Import base and models
from backend.app.models.base import Base
from backend.app.config import settings
# Import all models to ensure they're registered
from backend.app.models import *
config = context.config
fileConfig(config.config_file_name)
target_metadata = Base.metadata
def run_migrations_offline():
"""Run migrations in 'offline' mode."""
url = settings.DATABASE_URL
context.configure(
url=url,
target_metadata=target_metadata,
literal_binds=True,
dialect_opts={"paramstyle": "named"},
render_as_batch=True, # Important for SQLite
)
with context.begin_transaction():
context.run_migrations()
async def run_migrations_online():
"""Run migrations in 'online' mode."""
# Ensure data directory exists
data_dir = Path("data")
data_dir.mkdir(exist_ok=True)
connectable = create_async_engine(
settings.DATABASE_URL,
poolclass=pool.NullPool,
connect_args={"check_same_thread": False} if "sqlite" in settings.DATABASE_URL else {}
)
async with connectable.connect() as connection:
await connection.run_sync(do_run_migrations)
await connectable.dispose()
def do_run_migrations(connection):
context.configure(
connection=connection,
target_metadata=target_metadata,
render_as_batch=True, # Important for SQLite ALTER TABLE support
)
with context.begin_transaction():
context.run_migrations()
if context.is_offline_mode():
run_migrations_offline()
else:
import asyncio
asyncio.run(run_migrations_online())

@@ -1,25 +0,0 @@
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
${imports if imports else ""}
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}
def upgrade():
${upgrades if upgrades else "pass"}
def downgrade():
${downgrades if downgrades else "pass"}

@@ -1,40 +0,0 @@
"""Create initial rules table with plaintext storage
Revision ID: 001
Revises:
Create Date: 2025-09-12 14:01:00.000000
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '001'
down_revision = None
branch_labels = None
depends_on = None
def upgrade() -> None:
    """Create rules table with plaintext storage as per design specification."""
    # Use Integer keys to match the SQLAlchemy models; SQLite (the configured
    # backend) has no native UUID column type.
    op.create_table('rules',
        sa.Column('id', sa.Integer(), autoincrement=True, nullable=False),
        sa.Column('created_at', sa.DateTime(), nullable=True),
        sa.Column('updated_at', sa.DateTime(), nullable=True),
        sa.Column('name', sa.String(length=100), nullable=False),
        sa.Column('description', sa.Text(), nullable=True),
        sa.Column('user_defined', sa.Boolean(), nullable=True),
        sa.Column('rule_text', sa.Text(), nullable=False),
        sa.Column('version', sa.Integer(), nullable=True),
        sa.Column('parent_rule_id', sa.Integer(), nullable=True),
        sa.ForeignKeyConstraint(['parent_rule_id'], ['rules.id']),
        sa.PrimaryKeyConstraint('id')
    )
def downgrade() -> None:
"""Drop rules table."""
op.drop_table('rules')

@@ -1,20 +0,0 @@
from pydantic_settings import BaseSettings, SettingsConfigDict
class Settings(BaseSettings):
# Database settings
DATABASE_URL: str = "sqlite+aiosqlite:///data/cycling_coach.db"
# File storage settings
GPX_STORAGE_PATH: str = "data/gpx"
    # AI settings
    AI_MODEL: str = "deepseek/deepseek-r1"
    OPENROUTER_API_KEY: str = ""
    # API auth settings (the middleware in main.py reads settings.API_KEY)
    API_KEY: str = ""
# Garmin settings
GARMIN_USERNAME: str = ""
GARMIN_PASSWORD: str = ""
model_config = SettingsConfigDict(env_file=".env", env_file_encoding="utf-8", extra="ignore")
settings = Settings()
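The Settings class resolves values with the usual pydantic-settings precedence: the process environment (and `.env`) overrides the class defaults. A framework-free sketch of that precedence, using an illustrative subset of the fields:

```python
import os

# DEFAULTS mirrors a subset of the Settings fields above; resolve_settings is
# a hedged stand-in for what pydantic-settings does at instantiation time.
DEFAULTS = {
    "DATABASE_URL": "sqlite+aiosqlite:///data/cycling_coach.db",
    "AI_MODEL": "deepseek/deepseek-r1",
    "OPENROUTER_API_KEY": "",
}

def resolve_settings(environ=None):
    """Environment values win; class defaults fill the gaps."""
    env = os.environ if environ is None else environ
    return {key: env.get(key, default) for key, default in DEFAULTS.items()}

resolved = resolve_settings({"AI_MODEL": "anthropic/claude-3.5-sonnet"})
print(resolved["AI_MODEL"])  # anthropic/claude-3.5-sonnet
```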

@@ -1,42 +0,0 @@
import os
from pathlib import Path
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession
from sqlalchemy.orm import declarative_base, sessionmaker
# Use SQLite database in data directory
DATA_DIR = Path("data")
DATABASE_PATH = DATA_DIR / "cycling_coach.db"
DATABASE_URL = os.getenv("DATABASE_URL", f"sqlite+aiosqlite:///{DATABASE_PATH}")
engine = create_async_engine(
DATABASE_URL,
echo=False, # Set to True for SQL debugging
connect_args={"check_same_thread": False} if "sqlite" in DATABASE_URL else {}
)
AsyncSessionLocal = sessionmaker(
bind=engine,
class_=AsyncSession,
expire_on_commit=False
)
Base = declarative_base()
async def get_db() -> AsyncSession:
async with AsyncSessionLocal() as session:
yield session
async def init_db():
"""Initialize the database by creating all tables."""
# Ensure data directory exists
DATA_DIR.mkdir(exist_ok=True)
# Import all models to ensure they are registered
from .models import (
user, rule, plan, plan_rule, workout,
analysis, route, section, garmin_sync_log, prompt
)
# Create all tables
async with engine.begin() as conn:
await conn.run_sync(Base.metadata.create_all)

@@ -1,10 +0,0 @@
from fastapi import Depends, HTTPException, Request
from sqlalchemy.ext.asyncio import AsyncSession
from backend.app.database import get_db
from backend.app.services.ai_service import AIService
from backend.app.config import settings

async def get_ai_service(db: AsyncSession = Depends(get_db)) -> AIService:
    """Get AI service instance with database dependency."""
    return AIService(db)

async def verify_api_key(request: Request) -> None:
    """Dependency used by the sync routes; mirrors the API-key middleware in main.py."""
    if request.headers.get("X-API-KEY") != settings.API_KEY:
        raise HTTPException(status_code=401, detail="Invalid API Key")

@@ -1,110 +0,0 @@
import logging
import json
from datetime import datetime
from fastapi import FastAPI, Depends, Request, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from .database import get_db
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import text
from alembic.config import Config
from alembic.migration import MigrationContext
from alembic.script import ScriptDirectory
from .routes import gpx as gpx_routes
from .routes import rule as rule_routes
from .routes import plan as plan_routes
from .routes import workouts as workout_routes
from .routes import prompts as prompt_routes
from .routes import dashboard as dashboard_routes
from .config import settings
# Configure structured JSON logging
class StructuredJSONFormatter(logging.Formatter):
def format(self, record):
log_data = {
"timestamp": datetime.utcnow().isoformat(),
"level": record.levelname,
"message": record.getMessage(),
"logger": record.name,
"module": record.module,
"function": record.funcName,
"line": record.lineno,
"thread": record.threadName,
}
if hasattr(record, 'extra'):
log_data.update(record.extra)
if record.exc_info:
log_data["exception"] = self.formatException(record.exc_info)
return json.dumps(log_data)
# Set up logging
logger = logging.getLogger("ai_cycling_coach")
logger.setLevel(logging.INFO)
# Create console handler with structured JSON format
console_handler = logging.StreamHandler()
console_handler.setFormatter(StructuredJSONFormatter())
logger.addHandler(console_handler)
# Configure rotating file handler
from pathlib import Path
from logging.handlers import RotatingFileHandler
# Create the log directory up front; RotatingFileHandler raises if it is missing
log_dir = Path("/app/logs")
log_dir.mkdir(parents=True, exist_ok=True)
file_handler = RotatingFileHandler(
    filename=log_dir / "app.log",
    maxBytes=10 * 1024 * 1024,  # 10 MB
    backupCount=5,
    encoding='utf-8'
)
file_handler.setFormatter(StructuredJSONFormatter())
logger.addHandler(file_handler)
app = FastAPI(
title="AI Cycling Coach API",
description="Backend service for AI-assisted cycling training platform",
version="0.1.0"
)
# API Key Authentication Middleware
@app.middleware("http")
async def api_key_auth(request: Request, call_next):
# Skip authentication for documentation and health endpoints
if (request.url.path.startswith("/docs") or
request.url.path.startswith("/redoc") or
request.url.path == "/health" or
request.url.path == "/openapi.json"):
return await call_next(request)
    api_key = request.headers.get("X-API-KEY")
    if api_key != settings.API_KEY:
        # An HTTPException raised inside middleware bypasses FastAPI's exception
        # handlers (it surfaces as a 500), so return the 401 response directly.
        from fastapi.responses import JSONResponse
        return JSONResponse(status_code=401, content={"detail": "Invalid API Key"})
    return await call_next(request)
# Configure CORS
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
# Include routers
app.include_router(gpx_routes.router)
app.include_router(rule_routes.router)
app.include_router(plan_routes.router)
app.include_router(workout_routes.router, prefix="/workouts", tags=["workouts"])
app.include_router(prompt_routes.router, prefix="/prompts", tags=["prompts"])
app.include_router(dashboard_routes.router, prefix="/api/dashboard", tags=["dashboard"])
@app.get("/health")
async def health_check():
"""Simplified health check endpoint."""
return {
"status": "healthy",
"version": "0.1.0",
"timestamp": datetime.utcnow().isoformat()
}
if __name__ == "__main__":
import uvicorn
logger.info("Starting AI Cycling Coach API server")
uvicorn.run(app, host="0.0.0.0", port=8000, log_config=None)
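The API-key middleware above reduces to a small pure function over the request path and headers. A hedged, framework-free sketch (the exempt paths and header name come from the code above; `authorize` and its integer status return are illustrative):

```python
# Decision logic of the API-key middleware, stripped of the FastAPI plumbing.
def requires_api_key(path: str) -> bool:
    """Docs, redoc, health, and the OpenAPI schema are exempt from auth."""
    return not (path.startswith("/docs") or path.startswith("/redoc")
                or path in ("/health", "/openapi.json"))

def authorize(path: str, headers: dict, expected_key: str) -> int:
    """Return the status the middleware would yield: 200 (pass through) or 401."""
    if not requires_api_key(path):
        return 200
    return 200 if headers.get("X-API-KEY") == expected_key else 401

print(authorize("/workouts", {"X-API-KEY": "secret"}, "secret"))  # 200
print(authorize("/workouts", {}, "secret"))                       # 401
print(authorize("/health", {}, "secret"))                         # 200
```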

@@ -1,11 +0,0 @@
from .base import BaseModel
from .route import Route
from .section import Section
from .rule import Rule
from .plan import Plan
from .plan_rule import plan_rules
from .user import User
from .workout import Workout
from .analysis import Analysis
from .prompt import Prompt
from .garmin_sync_log import GarminSyncLog

@@ -1,21 +0,0 @@
from datetime import datetime
from sqlalchemy import Column, Integer, String, ForeignKey, JSON, Boolean, DateTime
from sqlalchemy.orm import relationship
from .base import BaseModel
class Analysis(BaseModel):
"""Analysis model for AI-generated workout feedback."""
__tablename__ = "analyses"
workout_id = Column(Integer, ForeignKey("workouts.id"), nullable=False)
analysis_type = Column(String(50), default='workout_review')
jsonb_feedback = Column(JSON, nullable=False)
suggestions = Column(JSON)
approved = Column(Boolean, default=False)
created_plan_id = Column(Integer, ForeignKey('plans.id'))
    approved_at = Column(DateTime, nullable=True)  # set when the analysis is approved, not at creation
# Relationships
workout = relationship("Workout", back_populates="analyses")
plan = relationship("Plan", back_populates="analyses")

@@ -1,15 +0,0 @@
from datetime import datetime
from sqlalchemy import Column, Integer, DateTime
# Reuse the declarative Base from database.py so every model registers on the
# same metadata that init_db() uses to create tables.
from ..database import Base
class BaseModel(Base):
__abstract__ = True
id = Column(Integer, primary_key=True, autoincrement=True)
created_at = Column(DateTime, default=datetime.utcnow)
updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
    def __repr__(self):
        return f"<{self.__class__.__name__} {self.id}>"

    def to_dict(self):
        """Serialize column values into a plain dict (used by the dashboard routes)."""
        return {c.name: getattr(self, c.name) for c in self.__table__.columns}

@@ -1,12 +0,0 @@
from sqlalchemy import Column, Integer, DateTime, String, Text
from .base import BaseModel
class GarminSyncLog(BaseModel):
"""Log model for tracking Garmin sync operations."""
__tablename__ = "garmin_sync_log"
last_sync_time = Column(DateTime)
activities_synced = Column(Integer, default=0)
status = Column(String(20)) # success, error, in_progress
error_message = Column(Text)

@@ -1,15 +0,0 @@
from sqlalchemy import Column, Integer, ForeignKey, JSON
from sqlalchemy.orm import relationship
from .base import BaseModel
class Plan(BaseModel):
__tablename__ = "plans"
jsonb_plan = Column(JSON, nullable=False) # Changed from JSONB to JSON for SQLite compatibility
version = Column(Integer, nullable=False)
parent_plan_id = Column(Integer, ForeignKey('plans.id'), nullable=True)
parent_plan = relationship("Plan", remote_side="Plan.id", backref="child_plans")
analyses = relationship("Analysis", back_populates="plan", lazy="selectin")
workouts = relationship("Workout", back_populates="plan", cascade="all, delete-orphan", lazy="selectin")
rules = relationship("Rule", secondary="plan_rules", back_populates="plans", lazy="selectin")

@@ -1,9 +0,0 @@
from sqlalchemy import Column, Integer, ForeignKey, Table
from .base import Base
# Association table for many-to-many relationship between plans and rules
plan_rules = Table(
'plan_rules', Base.metadata,
Column('plan_id', Integer, ForeignKey('plans.id'), primary_key=True),
Column('rule_id', Integer, ForeignKey('rules.id'), primary_key=True)
)
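The association table gives plans and rules a many-to-many link keyed by the composite `(plan_id, rule_id)` primary key. Conceptually it behaves like a set of id pairs, as this illustrative pure-Python sketch shows (the helper names are not part of the codebase):

```python
# A set of (plan_id, rule_id) pairs models the association table's contents;
# the composite primary key means a pair can appear at most once.
plan_rules = set()

def attach_rule(plan_id, rule_id):
    plan_rules.add((plan_id, rule_id))  # duplicate pairs are no-ops

def rules_for_plan(plan_id):
    return sorted(rule_id for pid, rule_id in plan_rules if pid == plan_id)

attach_rule(1, 10)
attach_rule(1, 11)
attach_rule(1, 10)  # same pair again: ignored, like a skipped PK conflict
print(rules_for_plan(1))  # [10, 11]
```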

@@ -1,13 +0,0 @@
from sqlalchemy import Column, Integer, String, Text, Boolean, DateTime
from .base import BaseModel
class Prompt(BaseModel):
"""Prompt model for AI prompt versioning and management."""
__tablename__ = "prompts"
action_type = Column(String(50), nullable=False) # plan_generation, workout_analysis, rule_parsing, suggestions
model = Column(String(100)) # AI model identifier
prompt_text = Column(Text, nullable=False)
version = Column(Integer, default=1)
active = Column(Boolean, default=True)

@@ -1,14 +0,0 @@
from sqlalchemy import Column, String, Float, ForeignKey
from sqlalchemy.orm import relationship
from .base import BaseModel
class Route(BaseModel):
__tablename__ = "routes"
name = Column(String(100), nullable=False)
description = Column(String(500))
total_distance = Column(Float, nullable=False)
elevation_gain = Column(Float, nullable=False)
gpx_file_path = Column(String(255), nullable=False)
sections = relationship("Section", back_populates="route", cascade="all, delete-orphan")

@@ -1,16 +0,0 @@
from sqlalchemy import Column, Integer, ForeignKey, Boolean, String, Text
from sqlalchemy.orm import relationship
from .base import BaseModel
class Rule(BaseModel):
__tablename__ = "rules"
name = Column(String(100), nullable=False)
description = Column(Text, nullable=True)
user_defined = Column(Boolean, default=True)
rule_text = Column(Text, nullable=False) # Plaintext rules as per design spec
version = Column(Integer, default=1)
parent_rule_id = Column(Integer, ForeignKey('rules.id'), nullable=True)
parent_rule = relationship("Rule", remote_side="Rule.id")
plans = relationship("Plan", secondary="plan_rules", back_populates="rules", lazy="selectin")

@@ -1,15 +0,0 @@
from sqlalchemy import Column, String, Float, ForeignKey
from sqlalchemy.orm import relationship
from .base import BaseModel
class Section(BaseModel):
__tablename__ = "sections"
route_id = Column(ForeignKey("routes.id"), nullable=False)
gpx_file_path = Column(String(255), nullable=False)
distance_m = Column(Float, nullable=False)
grade_avg = Column(Float)
min_gear = Column(String(50))
est_time_minutes = Column(Float)
route = relationship("Route", back_populates="sections")

@@ -1,12 +0,0 @@
from sqlalchemy import Column, String
from sqlalchemy.orm import relationship
from .base import BaseModel
class User(BaseModel):
__tablename__ = "users"
username = Column(String(100), nullable=False, unique=True)
email = Column(String(255), nullable=True)
# Note: Relationship removed as Plan model doesn't have user_id field
# plans = relationship("Plan", back_populates="user")

@@ -1,26 +0,0 @@
from sqlalchemy import Column, Integer, String, Float, DateTime, ForeignKey, JSON, Boolean
from sqlalchemy.orm import relationship
from .base import BaseModel
class Workout(BaseModel):
"""Workout model for Garmin activity data."""
__tablename__ = "workouts"
plan_id = Column(Integer, ForeignKey("plans.id"), nullable=True)
garmin_activity_id = Column(String(255), unique=True, nullable=False)
activity_type = Column(String(50))
start_time = Column(DateTime, nullable=False)
duration_seconds = Column(Integer)
distance_m = Column(Float)
avg_hr = Column(Integer)
max_hr = Column(Integer)
avg_power = Column(Float)
max_power = Column(Float)
avg_cadence = Column(Float)
elevation_gain_m = Column(Float)
metrics = Column(JSON) # Store full Garmin data as JSONB
# Relationships
plan = relationship("Plan", back_populates="workouts")
analyses = relationship("Analysis", back_populates="workout", cascade="all, delete-orphan")

@@ -1,52 +0,0 @@
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.ext.asyncio import AsyncSession
from backend.app.database import get_db
from backend.app.models.workout import Workout
from backend.app.models.plan import Plan
from backend.app.models.garmin_sync_log import GarminSyncLog
from sqlalchemy import select, desc
from datetime import datetime, timedelta
router = APIRouter()
@router.get("/dashboard")
async def get_dashboard_data(db: AsyncSession = Depends(get_db)):
"""Get consolidated dashboard data"""
try:
# Recent workouts (last 7 days)
workout_result = await db.execute(
select(Workout)
.where(Workout.start_time >= datetime.now() - timedelta(days=7))
.order_by(desc(Workout.start_time))
.limit(5)
)
recent_workouts = [w.to_dict() for w in workout_result.scalars().all()]
        # Current plan: the Plan model has no "active" flag, so treat the most
        # recently created plan as current.
        plan_result = await db.execute(
            select(Plan)
            .order_by(desc(Plan.created_at))
            .limit(1)
        )
current_plan = plan_result.scalar_one_or_none()
# Sync status
sync_result = await db.execute(
select(GarminSyncLog)
.order_by(desc(GarminSyncLog.created_at))
.limit(1)
)
last_sync = sync_result.scalar_one_or_none()
return {
"recent_workouts": recent_workouts,
"current_plan": current_plan.to_dict() if current_plan else None,
"last_sync": last_sync.to_dict() if last_sync else None,
            "metrics": {
                # recent_workouts holds dicts produced by to_dict(), so index by
                # key and treat a missing/None duration as zero.
                "weekly_volume": sum((w.get("duration_seconds") or 0) for w in recent_workouts) / 3600,
                "plan_progress": getattr(current_plan, "progress", 0) if current_plan else 0
            }
}
except Exception as e:
raise HTTPException(status_code=500, detail=f"Dashboard data error: {str(e)}")
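The `weekly_volume` metric divides the summed durations by 3600 to report hours. A sketch of that aggregation, assuming each workout is serialized as a dict whose `duration_seconds` may be missing or `None` (the helper name is illustrative):

```python
# Aggregate weekly training volume in hours from serialized workout dicts.
def weekly_volume_hours(workouts):
    return sum((w.get("duration_seconds") or 0) for w in workouts) / 3600

rides = [
    {"duration_seconds": 3600},   # 1 h
    {"duration_seconds": 5400},   # 1.5 h
    {"duration_seconds": None},   # unsynced ride: counts as zero
]
print(weekly_volume_hours(rides))  # 2.5
```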

@@ -1,40 +0,0 @@
from fastapi import APIRouter, Query, HTTPException
from fastapi.responses import FileResponse
from backend.app.services.export_service import ExportService
from pathlib import Path
import logging
router = APIRouter()
logger = logging.getLogger(__name__)
@router.get("/export")
async def export_data(
types: str = Query(..., description="Comma-separated list of data types to export"),
format: str = Query('json', description="Export format (json, zip, gpx)")
):
valid_types = {'routes', 'rules', 'plans'}
requested_types = set(types.split(','))
# Validate requested types
if not requested_types.issubset(valid_types):
raise HTTPException(
status_code=400,
detail=f"Invalid export types. Valid types are: {', '.join(valid_types)}"
)
try:
exporter = ExportService()
export_path = await exporter.create_export(
export_types=list(requested_types),
export_format=format
)
return FileResponse(
export_path,
media_type="application/zip" if format == 'zip' else "application/json",
filename=f"export_{'_'.join(requested_types)}.{format}"
)
except Exception as e:
logger.error(f"Export failed: {str(e)}")
raise HTTPException(status_code=500, detail="Export failed") from e

@@ -1,23 +0,0 @@
from fastapi import APIRouter, Depends, BackgroundTasks
from sqlalchemy.ext.asyncio import AsyncSession
from backend.app.dependencies import verify_api_key
from backend.app.services.workout_sync import WorkoutSyncService
from backend.app.database import get_db
router = APIRouter(dependencies=[Depends(verify_api_key)])
@router.post("/sync")
async def trigger_garmin_sync(
background_tasks: BackgroundTasks,
db: AsyncSession = Depends(get_db)
):
"""Trigger background sync of Garmin activities"""
sync_service = WorkoutSyncService(db)
background_tasks.add_task(sync_service.sync_recent_activities, days_back=14)
return {"message": "Garmin sync started"}
@router.get("/sync-status")
async def get_sync_status(db: AsyncSession = Depends(get_db)):
"""Get latest sync status"""
sync_service = WorkoutSyncService(db)
return await sync_service.get_latest_sync_status()

@@ -1,35 +0,0 @@
from fastapi import APIRouter, UploadFile, File, Depends, HTTPException
from sqlalchemy.ext.asyncio import AsyncSession
from backend.app.database import get_db
from backend.app.services.gpx import parse_gpx, store_gpx_file
from backend.app.schemas.gpx import RouteCreate, Route as RouteSchema
from backend.app.models import Route
import os
router = APIRouter(prefix="/gpx", tags=["GPX Routes"])
@router.post("/upload", response_model=RouteSchema)
async def upload_gpx_route(
file: UploadFile = File(...),
db: AsyncSession = Depends(get_db)
):
# Store GPX file
gpx_path = await store_gpx_file(file)
# Parse GPX file
gpx_data = await parse_gpx(gpx_path)
# Create route in database
route_data = RouteCreate(
name=file.filename,
description=f"Uploaded from {file.filename}",
total_distance=gpx_data['total_distance'],
elevation_gain=gpx_data['elevation_gain'],
gpx_file_path=gpx_path
)
    db_route = Route(**route_data.model_dump())  # pydantic v2 API, matching the other routes
db.add(db_route)
await db.commit()
await db.refresh(db_route)
return db_route

@@ -1,54 +0,0 @@
from fastapi import APIRouter
from fastapi.responses import PlainTextResponse, JSONResponse
from backend.app.services.health_monitor import HealthMonitor
from prometheus_client import generate_latest, CONTENT_TYPE_LATEST, Gauge
from pathlib import Path
import json
router = APIRouter()
monitor = HealthMonitor()
# Prometheus metrics
SYNC_QUEUE = Gauge('sync_queue_size', 'Current Garmin sync queue size')
PENDING_ANALYSES = Gauge('pending_analyses', 'Number of pending workout analyses')
@router.get("/health")
async def get_health():
return monitor.check_system_health()
@router.get("/metrics")
async def prometheus_metrics():
# Update metrics with latest values
health_data = monitor.check_system_health()
SYNC_QUEUE.set(health_data['services'].get('sync_queue_size', 0))
PENDING_ANALYSES.set(health_data['services'].get('pending_analyses', 0))
return PlainTextResponse(
content=generate_latest(),
media_type=CONTENT_TYPE_LATEST
)
@router.get("/dashboard/health", response_class=JSONResponse)
async def health_dashboard():
"""Health dashboard endpoint with aggregated monitoring data"""
health_data = monitor.check_system_health()
# Get recent logs (last 100 lines)
log_file = Path("/app/logs/app.log")
recent_logs = []
    try:
        with log_file.open() as f:
            lines = f.readlines()[-100:]
        for line in lines:
            try:
                recent_logs.append(json.loads(line.strip()))
            except json.JSONDecodeError:
                continue  # skip lines that are not structured JSON
    except FileNotFoundError:
        pass
return {
"system": health_data,
"logs": recent_logs,
"statistics": {
"log_entries": len(recent_logs),
"error_count": sum(1 for log in recent_logs if log.get('level') == 'ERROR'),
"warning_count": sum(1 for log in recent_logs if log.get('level') == 'WARNING')
}
}

@@ -1,38 +0,0 @@
from fastapi import APIRouter, UploadFile, File, Form, HTTPException
from fastapi.responses import JSONResponse
from backend.app.services.import_service import ImportService
import logging
from typing import Optional
router = APIRouter()
logger = logging.getLogger(__name__)
@router.post("/import/validate")
async def validate_import(
file: UploadFile = File(...),
):
try:
importer = ImportService()
validation_result = await importer.validate_import(file)
return JSONResponse(content=validation_result)
except Exception as e:
logger.error(f"Import validation failed: {str(e)}")
raise HTTPException(status_code=400, detail=str(e)) from e
@router.post("/import")
async def execute_import(
file: UploadFile = File(...),
conflict_resolution: str = Form("skip"),
resolutions: Optional[str] = Form(None),
):
try:
importer = ImportService()
import_result = await importer.execute_import(
file,
conflict_resolution,
resolutions
)
return JSONResponse(content=import_result)
except Exception as e:
logger.error(f"Import failed: {str(e)}")
raise HTTPException(status_code=500, detail=str(e)) from e

@@ -1,132 +0,0 @@
from fastapi import APIRouter, Depends, HTTPException, BackgroundTasks
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select
from backend.app.database import get_db
from backend.app.models.plan import Plan as PlanModel
from backend.app.models.rule import Rule
from backend.app.schemas.plan import PlanCreate, Plan as PlanSchema, PlanGenerationRequest, PlanGenerationResponse
from backend.app.dependencies import get_ai_service
from backend.app.services.ai_service import AIService
from uuid import UUID, uuid4
from datetime import datetime
from typing import List
router = APIRouter(prefix="/plans", tags=["Training Plans"])
@router.post("/", response_model=PlanSchema)
async def create_plan(
plan: PlanCreate,
db: AsyncSession = Depends(get_db)
):
# Create plan
db_plan = PlanModel(
jsonb_plan=plan.jsonb_plan,
version=plan.version,
parent_plan_id=plan.parent_plan_id
)
db.add(db_plan)
await db.commit()
await db.refresh(db_plan)
return db_plan
@router.get("/{plan_id}", response_model=PlanSchema)
async def read_plan(
plan_id: UUID,
db: AsyncSession = Depends(get_db)
):
plan = await db.get(PlanModel, plan_id)
if not plan:
raise HTTPException(status_code=404, detail="Plan not found")
return plan
@router.get("/", response_model=List[PlanSchema])
async def read_plans(
db: AsyncSession = Depends(get_db)
):
result = await db.execute(select(PlanModel))
return result.scalars().all()
@router.put("/{plan_id}", response_model=PlanSchema)
async def update_plan(
plan_id: UUID,
plan: PlanCreate,
db: AsyncSession = Depends(get_db)
):
db_plan = await db.get(PlanModel, plan_id)
if not db_plan:
raise HTTPException(status_code=404, detail="Plan not found")
# Update plan fields
db_plan.jsonb_plan = plan.jsonb_plan
db_plan.version = plan.version
db_plan.parent_plan_id = plan.parent_plan_id
await db.commit()
await db.refresh(db_plan)
return db_plan
@router.delete("/{plan_id}")
async def delete_plan(
plan_id: UUID,
db: AsyncSession = Depends(get_db)
):
plan = await db.get(PlanModel, plan_id)
if not plan:
raise HTTPException(status_code=404, detail="Plan not found")
await db.delete(plan)
await db.commit()
return {"detail": "Plan deleted"}
@router.post("/generate", response_model=PlanGenerationResponse)
async def generate_plan(
request: PlanGenerationRequest,
background_tasks: BackgroundTasks,
db: AsyncSession = Depends(get_db),
ai_service: AIService = Depends(get_ai_service)
):
"""
Generate a new training plan using AI based on provided goals and rule set.
"""
try:
        # Get all rules from the provided rule IDs
        rules = []
        for rule_id in request.rule_ids:
            rule = await db.get(Rule, rule_id)
            if not rule:
                raise HTTPException(status_code=404, detail=f"Rule with ID {rule_id} not found")
            rules.append(rule.rule_text)  # Rule stores plaintext in rule_text, not jsonb_rules
# Generate plan using AI service
generated_plan = await ai_service.generate_training_plan(
rule_set=rules, # Pass all rules as a list
goals=request.goals.model_dump(),
preferred_routes=request.preferred_routes
)
# Create a Plan object for the response
plan_obj = PlanSchema(
id=uuid4(), # Generate a proper UUID
jsonb_plan=generated_plan,
version=1,
parent_plan_id=None,
created_at=datetime.utcnow()
)
# Create response with generated plan
response = PlanGenerationResponse(
plan=plan_obj,
generation_metadata={
"status": "success",
"generated_at": datetime.utcnow().isoformat(),
"rule_ids": [str(rule_id) for rule_id in request.rule_ids]
}
)
return response
except Exception as e:
raise HTTPException(
status_code=500,
detail=f"Failed to generate plan: {str(e)}"
)

@@ -1,79 +0,0 @@
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select
from typing import List
from backend.app.database import get_db
from backend.app.models.prompt import Prompt
from backend.app.schemas.prompt import Prompt as PromptSchema, PromptCreate, PromptUpdate
from backend.app.services.prompt_manager import PromptManager
router = APIRouter()
@router.get("/", response_model=List[PromptSchema])
async def read_prompts(db: AsyncSession = Depends(get_db)):
"""Get all prompts."""
result = await db.execute(select(Prompt))
return result.scalars().all()
@router.get("/{prompt_id}", response_model=PromptSchema)
async def read_prompt(prompt_id: int, db: AsyncSession = Depends(get_db)):
"""Get a specific prompt by ID."""
prompt = await db.get(Prompt, prompt_id)
if not prompt:
raise HTTPException(status_code=404, detail="Prompt not found")
return prompt
@router.post("/", response_model=PromptSchema)
async def create_prompt(
prompt: PromptCreate,
db: AsyncSession = Depends(get_db)
):
"""Create a new prompt version."""
prompt_manager = PromptManager(db)
new_prompt = await prompt_manager.create_prompt_version(
action_type=prompt.action_type,
prompt_text=prompt.prompt_text,
model=prompt.model
)
return new_prompt
@router.get("/active/{action_type}")
async def get_active_prompt(
action_type: str,
db: AsyncSession = Depends(get_db)
):
"""Get the active prompt for a specific action type."""
prompt_manager = PromptManager(db)
prompt_text = await prompt_manager.get_active_prompt(action_type)
if not prompt_text:
raise HTTPException(status_code=404, detail=f"No active prompt found for {action_type}")
return {"action_type": action_type, "prompt_text": prompt_text}
@router.get("/history/{action_type}", response_model=List[PromptSchema])
async def get_prompt_history(
action_type: str,
db: AsyncSession = Depends(get_db)
):
"""Get the version history for a specific action type."""
prompt_manager = PromptManager(db)
prompts = await prompt_manager.get_prompt_history(action_type)
return prompts
@router.post("/{prompt_id}/activate")
async def activate_prompt_version(
prompt_id: int,
db: AsyncSession = Depends(get_db)
):
"""Activate a specific prompt version."""
prompt_manager = PromptManager(db)
success = await prompt_manager.activate_prompt_version(prompt_id)
if not success:
raise HTTPException(status_code=404, detail="Prompt not found")
return {"message": "Prompt version activated successfully"}

@@ -1,122 +0,0 @@
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select
from backend.app.database import get_db
from backend.app.models.rule import Rule
from backend.app.schemas.rule import RuleCreate, Rule as RuleSchema, NaturalLanguageRuleRequest, ParsedRuleResponse
from backend.app.dependencies import get_ai_service
from backend.app.services.ai_service import AIService
from uuid import UUID
from typing import List
router = APIRouter(prefix="/rules", tags=["Rules"])
@router.post("/", response_model=RuleSchema)
async def create_rule(
rule: RuleCreate,
db: AsyncSession = Depends(get_db)
):
"""Create new rule set (plaintext) as per design specification."""
db_rule = Rule(**rule.model_dump())
db.add(db_rule)
await db.commit()
await db.refresh(db_rule)
return db_rule
@router.get("/", response_model=List[RuleSchema])
async def list_rules(
active_only: bool = True,
db: AsyncSession = Depends(get_db)
):
"""List rule sets as specified in design document."""
query = select(Rule)
if active_only:
# For now, return all rules. Later we can add an 'active' field
pass
result = await db.execute(query)
return result.scalars().all()
@router.get("/{rule_id}", response_model=RuleSchema)
async def get_rule(
rule_id: UUID,
db: AsyncSession = Depends(get_db)
):
"""Get specific rule set."""
rule = await db.get(Rule, rule_id)
if not rule:
raise HTTPException(status_code=404, detail="Rule not found")
return rule
@router.put("/{rule_id}", response_model=RuleSchema)
async def update_rule(
rule_id: UUID,
rule: RuleCreate,
db: AsyncSession = Depends(get_db)
):
"""Update rule set - creates new version as per design spec."""
db_rule = await db.get(Rule, rule_id)
if not db_rule:
raise HTTPException(status_code=404, detail="Rule not found")
# Create new version instead of updating in place
new_version = Rule(
name=rule.name,
description=rule.description,
user_defined=rule.user_defined,
rule_text=rule.rule_text,
version=db_rule.version + 1,
parent_rule_id=db_rule.id
)
db.add(new_version)
await db.commit()
await db.refresh(new_version)
return new_version
@router.delete("/{rule_id}")
async def delete_rule(
rule_id: UUID,
db: AsyncSession = Depends(get_db)
):
"""Delete rule set."""
rule = await db.get(Rule, rule_id)
if not rule:
raise HTTPException(status_code=404, detail="Rule not found")
await db.delete(rule)
await db.commit()
return {"detail": "Rule deleted"}
@router.post("/parse-natural-language", response_model=ParsedRuleResponse)
async def parse_natural_language_rules(
request: NaturalLanguageRuleRequest,
ai_service: AIService = Depends(get_ai_service)
):
"""
Parse natural language training rules into structured format using AI.
This helps users create rules but the final rule_text is stored as plaintext.
"""
try:
# Parse rules using AI service - this creates structured data for validation
parsed_rules = await ai_service.parse_rules_from_natural_language(request.natural_language_text)
# Simple validation - just check for basic completeness
suggestions = []
if len(request.natural_language_text.split()) < 10:
suggestions.append("Consider providing more detailed rules")
response = ParsedRuleResponse(
parsed_rules=parsed_rules,
confidence_score=0.8, # Simplified confidence
suggestions=suggestions,
validation_errors=[], # Simplified - no complex validation
rule_name=request.rule_name
)
return response
except Exception as e:
raise HTTPException(
status_code=500,
detail=f"Failed to parse natural language rules: {str(e)}"
)

View File

@@ -1,167 +0,0 @@
from fastapi import APIRouter, Depends, HTTPException, BackgroundTasks
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select
from typing import List
from backend.app.database import get_db
from backend.app.models.workout import Workout
from backend.app.models.analysis import Analysis
from backend.app.models.garmin_sync_log import GarminSyncLog
from backend.app.models.plan import Plan
from backend.app.schemas.workout import Workout as WorkoutSchema, WorkoutSyncStatus, WorkoutMetric
from backend.app.schemas.analysis import Analysis as AnalysisSchema
from backend.app.schemas.plan import Plan as PlanSchema
from backend.app.services.workout_sync import WorkoutSyncService
from backend.app.services.ai_service import AIService
from backend.app.services.plan_evolution import PlanEvolutionService
router = APIRouter()
@router.get("/", response_model=List[WorkoutSchema])
async def read_workouts(db: AsyncSession = Depends(get_db)):
"""Get all workouts."""
result = await db.execute(select(Workout))
return result.scalars().all()
@router.get("/{workout_id}", response_model=WorkoutSchema)
async def read_workout(workout_id: int, db: AsyncSession = Depends(get_db)):
"""Get a specific workout by ID."""
workout = await db.get(Workout, workout_id)
if not workout:
raise HTTPException(status_code=404, detail="Workout not found")
return workout
@router.get("/{workout_id}/metrics", response_model=list[WorkoutMetric])
async def get_workout_metrics(
workout_id: int,
db: AsyncSession = Depends(get_db)
):
"""Get time-series metrics for a workout"""
workout = await db.get(Workout, workout_id)
if not workout:
raise HTTPException(status_code=404, detail="Workout not found")
if not workout.metrics:
return []
return workout.metrics
@router.post("/sync")
async def trigger_garmin_sync(
background_tasks: BackgroundTasks,
db: AsyncSession = Depends(get_db)
):
"""Trigger background sync of recent Garmin activities."""
sync_service = WorkoutSyncService(db)
background_tasks.add_task(sync_service.sync_recent_activities, days_back=14)
return {"message": "Garmin sync started"}
@router.get("/sync-status", response_model=WorkoutSyncStatus)
async def get_sync_status(db: AsyncSession = Depends(get_db)):
"""Get the latest sync status."""
result = await db.execute(
select(GarminSyncLog).order_by(GarminSyncLog.created_at.desc()).limit(1)
)
sync_log = result.scalar_one_or_none()
if not sync_log:
return WorkoutSyncStatus(status="never_synced")
return sync_log
@router.post("/{workout_id}/analyze")
async def analyze_workout(
workout_id: int,
background_tasks: BackgroundTasks,
db: AsyncSession = Depends(get_db)
):
"""Trigger AI analysis of a specific workout."""
workout = await db.get(Workout, workout_id)
if not workout:
raise HTTPException(status_code=404, detail="Workout not found")
ai_service = AIService(db)
background_tasks.add_task(
analyze_and_store_workout,
db, workout, ai_service
)
return {"message": "Analysis started", "workout_id": workout_id}
async def analyze_and_store_workout(db: AsyncSession, workout: Workout, ai_service: AIService):
"""Background task to analyze workout and store results."""
try:
# Get current plan if workout is associated with one
plan = None
if workout.plan_id:
plan = await db.get(Plan, workout.plan_id)
# Analyze workout
analysis_result = await ai_service.analyze_workout(workout, plan.jsonb_plan if plan else None)
# Store analysis
analysis = Analysis(
workout_id=workout.id,
jsonb_feedback=analysis_result.get("feedback", {}),
suggestions=analysis_result.get("suggestions", {})
)
db.add(analysis)
await db.commit()
except Exception as e:
# Log error but don't crash the background task
print(f"Error analyzing workout {workout.id}: {str(e)}")
@router.get("/{workout_id}/analyses", response_model=List[AnalysisSchema])
async def read_workout_analyses(workout_id: int, db: AsyncSession = Depends(get_db)):
"""Get all analyses for a specific workout."""
workout = await db.get(Workout, workout_id)
if not workout:
raise HTTPException(status_code=404, detail="Workout not found")
return workout.analyses
@router.post("/analyses/{analysis_id}/approve")
async def approve_analysis(
analysis_id: int,
db: AsyncSession = Depends(get_db)
):
"""Approve analysis suggestions and trigger plan evolution."""
analysis = await db.get(Analysis, analysis_id)
if not analysis:
raise HTTPException(status_code=404, detail="Analysis not found")
analysis.approved = True
# Trigger plan evolution if suggestions exist and workout has a plan
if analysis.suggestions and analysis.workout.plan_id:
evolution_service = PlanEvolutionService(db)
current_plan = await db.get(Plan, analysis.workout.plan_id)
if current_plan:
new_plan = await evolution_service.evolve_plan_from_analysis(
analysis, current_plan
)
await db.commit()
return {"message": "Analysis approved", "new_plan_id": new_plan.id if new_plan else None}
await db.commit()
return {"message": "Analysis approved"}
@router.get("/plans/{plan_id}/evolution", response_model=List[PlanSchema])
async def get_plan_evolution(
plan_id: int,
db: AsyncSession = Depends(get_db)
):
"""Get full evolution history for a plan."""
evolution_service = PlanEvolutionService(db)
plans = await evolution_service.get_plan_evolution_history(plan_id)
if not plans:
raise HTTPException(status_code=404, detail="Plan not found")
return plans

View File

@@ -1,25 +0,0 @@
from pydantic import BaseModel
from typing import Optional, Dict, Any
class AnalysisBase(BaseModel):
workout_id: int
analysis_type: str = 'workout_review'
jsonb_feedback: Optional[Dict[str, Any]] = None
suggestions: Optional[Dict[str, Any]] = None
approved: bool = False
class AnalysisCreate(AnalysisBase):
pass
class Analysis(AnalysisBase):
id: int
class Config:
from_attributes = True
class AnalysisUpdate(BaseModel):
approved: bool

View File

@@ -1,25 +0,0 @@
from pydantic import BaseModel
from typing import Optional, List
class GPXData(BaseModel):
total_distance: float
elevation_gain: float
points: List[dict]
class RouteCreate(BaseModel):
name: str
description: Optional[str] = None
total_distance: float
elevation_gain: float
gpx_file_path: str
class Route(BaseModel):
id: str
name: str
description: Optional[str] = None
total_distance: float
elevation_gain: float
gpx_file_path: str
    model_config = {"from_attributes": True}

View File

@@ -1,43 +0,0 @@
from pydantic import BaseModel, Field, field_validator
from datetime import datetime
from typing import List, Optional, Dict, Any
from uuid import UUID, uuid4
class TrainingGoals(BaseModel):
"""Training goals for plan generation."""
primary_goal: str = Field(..., description="Primary training goal")
target_weekly_hours: int = Field(..., ge=3, le=20, description="Target hours per week")
fitness_level: str = Field(..., description="Current fitness level")
event_date: Optional[str] = Field(None, description="Target event date (YYYY-MM-DD)")
preferred_routes: List[int] = Field(default=[], description="Preferred route IDs")
avoid_days: List[str] = Field(default=[], description="Days to avoid training")
class PlanBase(BaseModel):
jsonb_plan: Dict[str, Any] = Field(..., description="Training plan data in JSONB format")
version: int = Field(..., gt=0, description="Plan version number")
parent_plan_id: Optional[UUID] = Field(None, description="Parent plan ID for evolution tracking")
class PlanCreate(PlanBase):
pass
class Plan(PlanBase):
id: UUID = Field(default_factory=uuid4)
created_at: datetime = Field(default_factory=datetime.utcnow)
updated_at: Optional[datetime] = Field(default=None)
model_config = {"from_attributes": True}
class PlanGenerationRequest(BaseModel):
"""Request schema for plan generation."""
rule_ids: List[int] = Field(..., description="Rule set IDs to apply")
goals: TrainingGoals = Field(..., description="Training goals")
duration_weeks: int = Field(4, ge=1, le=20, description="Plan duration in weeks")
user_preferences: Optional[Dict[str, Any]] = Field(None, description="Additional preferences")
preferred_routes: List[int] = Field(default=[], description="Preferred route IDs")
class PlanGenerationResponse(BaseModel):
"""Response schema for plan generation."""
plan: Plan
generation_metadata: Dict[str, Any] = Field(..., description="Generation metadata")
model_config = {"from_attributes": True}

View File

@@ -1,30 +0,0 @@
from pydantic import BaseModel
from typing import Optional
from datetime import datetime
class PromptBase(BaseModel):
action_type: str
model: Optional[str] = None
prompt_text: str
version: int = 1
active: bool = True
class PromptCreate(BaseModel):
action_type: str
prompt_text: str
model: Optional[str] = None
class PromptUpdate(BaseModel):
prompt_text: Optional[str] = None
active: Optional[bool] = None
class Prompt(PromptBase):
id: int
created_at: datetime
    model_config = {"from_attributes": True}

View File

@@ -1,49 +0,0 @@
from pydantic import BaseModel, Field, field_validator
from typing import Optional, Dict, Any, List
from uuid import UUID
from datetime import datetime
class NaturalLanguageRuleRequest(BaseModel):
"""Request schema for natural language rule parsing."""
natural_language_text: str = Field(
...,
min_length=10,
max_length=5000,
description="Natural language rule description"
)
rule_name: str = Field(..., min_length=1, max_length=100, description="Rule set name")
@field_validator('natural_language_text')
@classmethod
def validate_text_content(cls, v):
required_keywords = ['ride', 'week', 'hour', 'day', 'rest', 'training']
if not any(keyword in v.lower() for keyword in required_keywords):
raise ValueError("Text must contain training-related keywords")
return v
class ParsedRuleResponse(BaseModel):
"""Response schema for parsed rules."""
parsed_rules: Dict[str, Any] = Field(..., description="Structured rule data")
confidence_score: Optional[float] = Field(None, ge=0.0, le=1.0, description="Parsing confidence")
suggestions: List[str] = Field(default=[], description="Improvement suggestions")
validation_errors: List[str] = Field(default=[], description="Validation errors")
rule_name: str = Field(..., description="Rule set name")
class RuleBase(BaseModel):
"""Base rule schema."""
name: str = Field(..., min_length=1, max_length=100)
description: Optional[str] = Field(None, max_length=500)
user_defined: bool = Field(True, description="Whether rule is user-defined")
rule_text: str = Field(..., min_length=10, description="Plaintext rule description")
version: int = Field(1, ge=1, description="Rule version")
parent_rule_id: Optional[UUID] = Field(None, description="Parent rule for versioning")
class RuleCreate(RuleBase):
pass
class Rule(RuleBase):
id: UUID
created_at: datetime
updated_at: datetime
model_config = {"from_attributes": True}
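For context, the `validate_text_content` validator above reduces to a simple case-insensitive keyword scan. A minimal standalone sketch of the same check (the helper name is hypothetical, not part of the schema module):

```python
REQUIRED_KEYWORDS = ['ride', 'week', 'hour', 'day', 'rest', 'training']

def has_training_keywords(text: str) -> bool:
    # Same test the field_validator applies: any keyword, case-insensitive
    lowered = text.lower()
    return any(keyword in lowered for keyword in REQUIRED_KEYWORDS)

ok = has_training_keywords("Ride three times per week with one rest day")
rejected = has_training_keywords("I like cheese")  # no training keywords
```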

View File

@@ -1,46 +0,0 @@
from pydantic import BaseModel
from typing import Optional, Dict, Any
from datetime import datetime
class WorkoutMetric(BaseModel):
timestamp: datetime
heart_rate: Optional[int] = None
power: Optional[float] = None
cadence: Optional[float] = None
class WorkoutBase(BaseModel):
garmin_activity_id: str
activity_type: Optional[str] = None
start_time: datetime
duration_seconds: Optional[int] = None
distance_m: Optional[float] = None
avg_hr: Optional[int] = None
max_hr: Optional[int] = None
avg_power: Optional[float] = None
max_power: Optional[float] = None
avg_cadence: Optional[float] = None
elevation_gain_m: Optional[float] = None
metrics: Optional[Dict[str, Any]] = None
class WorkoutCreate(WorkoutBase):
plan_id: Optional[int] = None
class Workout(WorkoutBase):
id: int
plan_id: Optional[int] = None
    model_config = {"from_attributes": True}
class WorkoutSyncStatus(BaseModel):
status: str
last_sync_time: Optional[datetime] = None
activities_synced: int = 0
error_message: Optional[str] = None
    model_config = {"from_attributes": True}

View File

@@ -1,208 +0,0 @@
import os
import asyncio
from typing import Dict, Any, List, Optional
import httpx
import json
from backend.app.services.prompt_manager import PromptManager
from backend.app.models.workout import Workout
import logging
logger = logging.getLogger(__name__)
class AIService:
"""Service for AI-powered analysis and plan generation."""
def __init__(self, db_session):
self.db = db_session
self.prompt_manager = PromptManager(db_session)
self.api_key = os.getenv("OPENROUTER_API_KEY")
self.model = os.getenv("AI_MODEL", "anthropic/claude-3-sonnet-20240229")
self.base_url = "https://openrouter.ai/api/v1"
async def analyze_workout(self, workout: Workout, plan: Optional[Dict] = None) -> Dict[str, Any]:
"""Analyze a workout using AI and generate feedback."""
prompt_template = await self.prompt_manager.get_active_prompt("workout_analysis")
if not prompt_template:
raise ValueError("No active workout analysis prompt found")
# Build context from workout data
workout_context = {
"activity_type": workout.activity_type,
"duration_minutes": workout.duration_seconds / 60 if workout.duration_seconds else 0,
"distance_km": workout.distance_m / 1000 if workout.distance_m else 0,
"avg_hr": workout.avg_hr,
"avg_power": workout.avg_power,
"elevation_gain": workout.elevation_gain_m,
"planned_workout": plan
}
prompt = prompt_template.format(**workout_context)
response = await self._make_ai_request(prompt)
return self._parse_workout_analysis(response)
async def generate_plan(self, rules_text: str, goals: Dict[str, Any]) -> Dict[str, Any]:
"""Generate a training plan using AI with plaintext rules as per design spec."""
prompt_template = await self.prompt_manager.get_active_prompt("plan_generation")
        if not prompt_template:
            raise ValueError("No active plan generation prompt found")
context = {
"rules_text": rules_text, # Use plaintext rules directly
"goals": goals,
"current_fitness_level": goals.get("fitness_level", "intermediate")
}
prompt = prompt_template.format(**context)
response = await self._make_ai_request(prompt)
return self._parse_plan_response(response)
async def generate_training_plan(self, rules_text: str, goals: Dict[str, Any], preferred_routes: List[int]) -> Dict[str, Any]:
"""Generate a training plan using AI with plaintext rules as per design specification."""
prompt_template = await self.prompt_manager.get_active_prompt("training_plan_generation")
if not prompt_template:
# Fallback to general plan generation prompt
prompt_template = await self.prompt_manager.get_active_prompt("plan_generation")
        if not prompt_template:
            raise ValueError("No active plan generation prompt found")
context = {
"rules_text": rules_text, # Use plaintext rules directly without parsing
"goals": goals,
"preferred_routes": preferred_routes,
"current_fitness_level": goals.get("fitness_level", "intermediate")
}
prompt = prompt_template.format(**context)
response = await self._make_ai_request(prompt)
return self._parse_plan_response(response)
async def parse_rules_from_natural_language(self, natural_language: str) -> Dict[str, Any]:
"""Parse natural language rules into structured format."""
prompt_template = await self.prompt_manager.get_active_prompt("rule_parsing")
        if not prompt_template:
            raise ValueError("No active rule parsing prompt found")
prompt = prompt_template.format(user_rules=natural_language)
response = await self._make_ai_request(prompt)
parsed_rules = self._parse_rules_response(response)
# Add confidence scoring to the parsed rules
parsed_rules = self._add_confidence_scoring(parsed_rules)
return parsed_rules
def _add_confidence_scoring(self, parsed_rules: Dict[str, Any]) -> Dict[str, Any]:
"""Add confidence scoring to parsed rules based on parsing quality."""
confidence_score = self._calculate_confidence_score(parsed_rules)
# Add confidence score to the parsed rules
if isinstance(parsed_rules, dict):
parsed_rules["_confidence"] = confidence_score
parsed_rules["_parsing_quality"] = self._get_parsing_quality(confidence_score)
return parsed_rules
def _calculate_confidence_score(self, parsed_rules: Dict[str, Any]) -> float:
"""Calculate confidence score based on parsing quality."""
if not isinstance(parsed_rules, dict):
return 0.5 # Default confidence for non-dict responses
score = 0.0
# Score based on presence of key cycling training rule fields
key_fields = {
"max_rides_per_week": 0.3,
"min_rest_between_hard": 0.2,
"max_duration_hours": 0.2,
"weather_constraints": 0.3,
"intensity_limits": 0.2,
"schedule_constraints": 0.2
}
for field, weight in key_fields.items():
if parsed_rules.get(field) is not None:
score += weight
return min(score, 1.0)
def _get_parsing_quality(self, confidence_score: float) -> str:
"""Get parsing quality description based on confidence score."""
if confidence_score >= 0.8:
return "excellent"
elif confidence_score >= 0.6:
return "good"
elif confidence_score >= 0.4:
return "fair"
else:
return "poor"
async def evolve_plan(self, evolution_context: Dict[str, Any]) -> Dict[str, Any]:
"""Evolve a training plan using AI based on workout analysis."""
prompt_template = await self.prompt_manager.get_active_prompt("plan_evolution")
if not prompt_template:
raise ValueError("No active plan evolution prompt found")
prompt = prompt_template.format(**evolution_context)
response = await self._make_ai_request(prompt)
return self._parse_plan_response(response)
async def _make_ai_request(self, prompt: str) -> str:
"""Make async request to OpenRouter API with retry logic."""
async with httpx.AsyncClient() as client:
for attempt in range(3): # Simple retry logic
try:
response = await client.post(
f"{self.base_url}/chat/completions",
headers={
"Authorization": f"Bearer {self.api_key}",
"Content-Type": "application/json",
},
json={
"model": self.model,
"messages": [{"role": "user", "content": prompt}],
"max_tokens": 2000,
},
timeout=30.0
)
response.raise_for_status()
data = response.json()
return data["choices"][0]["message"]["content"]
except Exception as e:
if attempt == 2: # Last attempt
logger.error(f"AI request failed after 3 attempts: {str(e)}")
raise AIServiceError(f"AI request failed after 3 attempts: {str(e)}")
await asyncio.sleep(2 ** attempt) # Exponential backoff
def _parse_workout_analysis(self, response: str) -> Dict[str, Any]:
"""Parse AI response for workout analysis."""
try:
# Assume AI returns JSON
            clean_response = response.strip()
            # Strip markdown code fences robustly instead of slicing fixed offsets
            if clean_response.startswith("```json"):
                clean_response = clean_response[len("```json"):]
            if clean_response.endswith("```"):
                clean_response = clean_response[:-3]
            return json.loads(clean_response)
except json.JSONDecodeError:
return {"raw_analysis": response, "structured": False}
def _parse_plan_response(self, response: str) -> Dict[str, Any]:
"""Parse AI response for plan generation."""
try:
            clean_response = response.strip()
            # Strip markdown code fences robustly instead of slicing fixed offsets
            if clean_response.startswith("```json"):
                clean_response = clean_response[len("```json"):]
            if clean_response.endswith("```"):
                clean_response = clean_response[:-3]
            return json.loads(clean_response)
except json.JSONDecodeError:
return {"raw_plan": response, "structured": False}
def _parse_rules_response(self, response: str) -> Dict[str, Any]:
"""Parse AI response for rule parsing."""
try:
            clean_response = response.strip()
            # Strip markdown code fences robustly instead of slicing fixed offsets
            if clean_response.startswith("```json"):
                clean_response = clean_response[len("```json"):]
            if clean_response.endswith("```"):
                clean_response = clean_response[:-3]
            return json.loads(clean_response)
except json.JSONDecodeError:
return {"raw_rules": response, "structured": False}
class AIServiceError(Exception):
"""Raised when AI service requests fail."""
pass
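The retry loop in `_make_ai_request` can be isolated for testing. Below is a hypothetical, self-contained version of the same three-attempt exponential-backoff pattern; `with_retries` and `base_delay` are illustrative names, not part of the service:

```python
import asyncio

async def with_retries(make_call, attempts: int = 3, base_delay: float = 1.0):
    """Retry an async call with exponential backoff, as _make_ai_request does."""
    for attempt in range(attempts):
        try:
            return await make_call()
        except Exception as e:
            if attempt == attempts - 1:  # last attempt: surface the failure
                raise RuntimeError(f"request failed after {attempts} attempts: {e}")
            await asyncio.sleep(base_delay * (2 ** attempt))  # base, 2x, 4x, ...

# Example: a call that fails twice, then succeeds on the third attempt
calls = {"count": 0}

async def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = asyncio.run(with_retries(flaky, base_delay=0.01))
```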

View File

@@ -1,138 +0,0 @@
import json
from pathlib import Path
from datetime import datetime
import zipfile
from backend.app.database import SessionLocal
from backend.app.models import Route, Rule, Plan
import tempfile
import logging
import shutil
logger = logging.getLogger(__name__)
class ExportService:
def __init__(self):
self.temp_dir = Path(tempfile.gettempdir()) / "cycling_exports"
self.temp_dir.mkdir(exist_ok=True)
async def create_export(self, export_types, export_format):
"""Main export creation entry point"""
export_data = await self._fetch_export_data(export_types)
export_path = self._generate_export_file(export_data, export_format, export_types)
return export_path
async def _fetch_export_data(self, export_types):
"""Fetch data from database based on requested types"""
db = SessionLocal()
try:
data = {}
if 'routes' in export_types:
routes = db.query(Route).all()
data['routes'] = [self._serialize_route(r) for r in routes]
if 'rules' in export_types:
rules = db.query(Rule).all()
data['rules'] = [self._serialize_rule(r) for r in rules]
if 'plans' in export_types:
plans = db.query(Plan).all()
data['plans'] = [self._serialize_plan(p) for p in plans]
return data
finally:
db.close()
def _generate_export_file(self, data, format, types):
"""Generate the export file in specified format"""
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
base_name = f"export_{'_'.join(types)}_{timestamp}"
if format == 'json':
return self._create_json_export(data, base_name)
elif format == 'zip':
return self._create_zip_export(data, base_name)
elif format == 'gpx':
return self._create_gpx_export(data, base_name)
else:
raise ValueError(f"Unsupported format: {format}")
def _create_json_export(self, data, base_name):
"""Create single JSON file export"""
export_path = self.temp_dir / f"{base_name}.json"
with open(export_path, 'w') as f:
json.dump(data, f, indent=2)
return export_path
def _create_zip_export(self, data, base_name):
"""Create ZIP archive with JSON and GPX files"""
zip_path = self.temp_dir / f"{base_name}.zip"
with zipfile.ZipFile(zip_path, 'w') as zipf:
# Add JSON data
json_path = self._create_json_export(data, base_name)
zipf.write(json_path, arcname=json_path.name)
# Add GPX files if exporting routes
if 'routes' in data:
gpx_dir = Path("/app/data/gpx")
for route in data['routes']:
gpx_path = gpx_dir / route['gpx_file_path']
if gpx_path.exists():
zipf.write(gpx_path, arcname=f"gpx/{gpx_path.name}")
return zip_path
def _create_gpx_export(self, data, base_name):
"""Export only GPX files from routes"""
if 'routes' not in data:
raise ValueError("GPX export requires routes to be selected")
zip_path = self.temp_dir / f"{base_name}.zip"
with zipfile.ZipFile(zip_path, 'w') as zipf:
gpx_dir = Path("/app/data/gpx")
for route in data['routes']:
gpx_path = gpx_dir / route['gpx_file_path']
if gpx_path.exists():
zipf.write(gpx_path, arcname=gpx_path.name)
return zip_path
def _serialize_route(self, route):
return {
"id": route.id,
"name": route.name,
"description": route.description,
"category": route.category,
"gpx_file_path": route.gpx_file_path,
"created_at": route.created_at.isoformat(),
"updated_at": route.updated_at.isoformat()
}
def _serialize_rule(self, rule):
return {
"id": rule.id,
"name": rule.name,
"natural_language": rule.natural_language,
"jsonb_rules": rule.jsonb_rules,
"version": rule.version,
"created_at": rule.created_at.isoformat()
}
def _serialize_plan(self, plan):
return {
"id": plan.id,
"name": plan.name,
"jsonb_plan": plan.jsonb_plan,
"version": plan.version,
"created_at": plan.created_at.isoformat()
}
def cleanup_temp_files(self):
"""Clean up temporary export files older than 1 hour"""
cutoff = datetime.now().timestamp() - 3600
for file in self.temp_dir.glob("*"):
if file.stat().st_mtime < cutoff:
try:
file.unlink()
except Exception as e:
logger.warning(f"Failed to clean up temp file {file}: {str(e)}")
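The mtime cutoff used by `cleanup_temp_files` is easy to verify in isolation. A sketch with a hypothetical `stale_files` helper, exercised against a scratch directory:

```python
import os
import tempfile
import time
from pathlib import Path

def stale_files(directory: Path, max_age_seconds: float) -> list:
    """Files last modified before now - max_age_seconds, as in cleanup_temp_files."""
    cutoff = time.time() - max_age_seconds
    return [f for f in directory.glob("*") if f.stat().st_mtime < cutoff]

# Demonstration: one backdated file, one fresh file
scratch = Path(tempfile.mkdtemp())
old = scratch / "old_export.json"
old.write_text("{}")
two_hours_ago = time.time() - 7200
os.utime(old, (two_hours_ago, two_hours_ago))  # push mtime past the 1-hour cutoff
fresh = scratch / "fresh_export.json"
fresh.write_text("{}")
found = stale_files(scratch, 3600)
```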

View File

@@ -1,84 +0,0 @@
import os
import garth
from typing import List, Dict, Any, Optional
from datetime import datetime, timedelta
import logging
logger = logging.getLogger(__name__)
class GarminService:
"""Service for interacting with Garmin Connect API."""
def __init__(self):
self.username = os.getenv("GARMIN_USERNAME")
self.password = os.getenv("GARMIN_PASSWORD")
self.client: Optional[garth.Client] = None
self.session_dir = "/app/data/sessions"
# Ensure session directory exists
os.makedirs(self.session_dir, exist_ok=True)
async def authenticate(self) -> bool:
"""Authenticate with Garmin Connect and persist session."""
if not self.client:
self.client = garth.Client()
try:
# Try to load existing session
self.client.load(self.session_dir)
logger.info("Loaded existing Garmin session")
return True
except Exception:
# Fresh authentication required
try:
await self.client.login(self.username, self.password)
self.client.save(self.session_dir)
logger.info("Successfully authenticated with Garmin Connect")
return True
except Exception as e:
logger.error(f"Garmin authentication failed: {str(e)}")
raise GarminAuthError(f"Authentication failed: {str(e)}")
async def get_activities(self, limit: int = 10, start_date: datetime = None) -> List[Dict[str, Any]]:
"""Fetch recent activities from Garmin Connect."""
if not self.client:
await self.authenticate()
if not start_date:
start_date = datetime.now() - timedelta(days=7)
try:
activities = self.client.get_activities(limit=limit, start=start_date)
logger.info(f"Fetched {len(activities)} activities from Garmin")
return activities
except Exception as e:
logger.error(f"Failed to fetch activities: {str(e)}")
raise GarminAPIError(f"Failed to fetch activities: {str(e)}")
async def get_activity_details(self, activity_id: str) -> Dict[str, Any]:
"""Get detailed activity data including metrics."""
if not self.client:
await self.authenticate()
try:
details = self.client.get_activity(activity_id)
logger.info(f"Fetched details for activity {activity_id}")
return details
except Exception as e:
logger.error(f"Failed to fetch activity details for {activity_id}: {str(e)}")
raise GarminAPIError(f"Failed to fetch activity details: {str(e)}")
def is_authenticated(self) -> bool:
"""Check if we have a valid authenticated session."""
return self.client is not None
class GarminAuthError(Exception):
"""Raised when Garmin authentication fails."""
pass
class GarminAPIError(Exception):
"""Raised when Garmin API calls fail."""
pass

View File

@@ -1,62 +0,0 @@
import os
import uuid
import logging
from fastapi import UploadFile, HTTPException
import gpxpy
from backend.app.config import settings
logger = logging.getLogger(__name__)
async def store_gpx_file(file: UploadFile) -> str:
"""Store uploaded GPX file and return path"""
try:
file_ext = os.path.splitext(file.filename)[1]
if file_ext.lower() != '.gpx':
raise HTTPException(status_code=400, detail="Invalid file type")
file_name = f"{uuid.uuid4()}{file_ext}"
file_path = os.path.join(settings.GPX_STORAGE_PATH, file_name)
# Ensure storage directory exists
os.makedirs(os.path.dirname(file_path), exist_ok=True)
# Save file
with open(file_path, "wb") as f:
f.write(await file.read())
return file_path
    except HTTPException:
        # Re-raise intentional HTTP errors (e.g. the 400 for a non-GPX upload)
        # instead of collapsing them into a generic 500
        raise
    except Exception as e:
        logger.error(f"Error storing GPX file: {e}")
        raise HTTPException(status_code=500, detail="Error storing file")
async def parse_gpx(file_path: str) -> dict:
"""Parse GPX file and extract key metrics"""
try:
with open(file_path, 'r') as f:
gpx = gpxpy.parse(f)
total_distance = 0.0
elevation_gain = 0.0
points = []
        for track in gpx.tracks:
            for segment in track.segments:
                total_distance += segment.length_3d()
                for i in range(1, len(segment.points)):
                    prev, curr = segment.points[i - 1], segment.points[i]
                    # Skip points without elevation data rather than raising TypeError
                    if prev.elevation is not None and curr.elevation is not None:
                        elevation_gain += max(0, curr.elevation - prev.elevation)
                # Accumulate across segments instead of overwriting the list
                points.extend({
                    'lat': point.latitude,
                    'lon': point.longitude,
                    'ele': point.elevation,
                    'time': point.time.isoformat() if point.time else None
                } for point in segment.points)
return {
'total_distance': total_distance,
'elevation_gain': elevation_gain,
'points': points
}
except Exception as e:
logger.error(f"Error parsing GPX file: {e}")
raise HTTPException(status_code=500, detail="Error parsing GPX file")
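The accumulation in `parse_gpx` counts only positive elevation deltas. The same arithmetic on plain numbers, with a hypothetical helper name:

```python
from typing import Optional, Sequence

def elevation_gain(elevations: Sequence[Optional[float]]) -> float:
    """Sum positive deltas only, skipping points with no elevation reading."""
    gain, prev = 0.0, None
    for ele in elevations:
        if ele is None:
            continue  # GPX points may omit elevation
        if prev is not None:
            gain += max(0.0, ele - prev)
        prev = ele
    return gain

# Climb 10 m, descend 5 m (ignored), then climb 15 m => 25 m of gain
gain = elevation_gain([100.0, 110.0, 105.0, None, 120.0])
```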

View File

@@ -1,94 +0,0 @@
import psutil
from datetime import datetime
import logging
from typing import Dict, Any
from sqlalchemy import text
from backend.app.database import get_db
from backend.app.models.garmin_sync_log import GarminSyncLog, SyncStatus
import requests
from backend.app.config import settings
logger = logging.getLogger(__name__)
class HealthMonitor:
    def __init__(self):
        # Keys must match the metric names produced by check_system_health,
        # otherwise _log_anomalies never fires
        self.warning_thresholds = {
            'cpu': 80,
            'memory': 75,
            'disk': 85
        }
def check_system_health(self) -> Dict[str, Any]:
"""Check vital system metrics and log warnings"""
metrics = {
'timestamp': datetime.utcnow().isoformat(),
'cpu': psutil.cpu_percent(),
'memory': psutil.virtual_memory().percent,
'disk': psutil.disk_usage('/').percent,
'services': self._check_service_health()
}
self._log_anomalies(metrics)
return metrics
    def _check_service_health(self) -> Dict[str, Any]:
        """Check critical application services"""
        return {
            'database': self._check_database(),
            'garmin_sync': self._check_garmin_sync(),
            'ai_service': self._check_ai_service(),
            'sync_queue_size': self._get_sync_queue_size(),
            'pending_analyses': self._count_pending_analyses()
        }

    def _get_sync_queue_size(self) -> int:
        """Get number of pending sync operations"""
        # Plain declarative models have no Flask-style .query attribute;
        # use an explicit session as the export service does
        from backend.app.database import SessionLocal
        db = SessionLocal()
        try:
            return db.query(GarminSyncLog).filter_by(status=SyncStatus.PENDING).count()
        finally:
            db.close()

    def _count_pending_analyses(self) -> int:
        """Count workouts needing analysis"""
        from backend.app.models.workout import Workout
        from backend.app.database import SessionLocal
        db = SessionLocal()
        try:
            return db.query(Workout).filter_by(analysis_status='pending').count()
        finally:
            db.close()
    def _check_database(self) -> str:
        try:
            # get_db is an async FastAPI dependency, not a context manager;
            # use a plain session for this synchronous health probe
            from backend.app.database import SessionLocal
            db = SessionLocal()
            try:
                db.execute(text("SELECT 1"))
                return "ok"
            finally:
                db.close()
        except Exception as e:
            logger.error("Database check failed", extra={"component": "database", "error": str(e)})
            return "down"
def _check_garmin_sync(self) -> str:
try:
last_sync = GarminSyncLog.get_latest()
if last_sync and last_sync.status == SyncStatus.FAILED:
logger.warning("Garmin sync has failed status", extra={"component": "garmin_sync", "status": last_sync.status.value})
return "warning"
return "ok"
except Exception as e:
logger.error("Garmin sync check failed", extra={"component": "garmin_sync", "error": str(e)})
return "down"
def _check_ai_service(self) -> str:
try:
response = requests.get(
f"{settings.AI_SERVICE_URL}/ping",
timeout=5,
headers={"Authorization": f"Bearer {settings.OPENROUTER_API_KEY}"}
)
return "ok" if response.ok else "down"
except Exception as e:
logger.error("AI service check failed", extra={"component": "ai_service", "error": str(e)})
return "down"
def _log_anomalies(self, metrics: Dict[str, Any]):
alerts = []
for metric, value in metrics.items():
if metric in self.warning_thresholds and value > self.warning_thresholds[metric]:
alerts.append(f"{metric} {value}%")
logger.warning("System threshold exceeded", extra={"metric": metric, "value": value, "threshold": self.warning_thresholds[metric]})
if alerts:
logger.warning("System thresholds exceeded", extra={"alerts": alerts})
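The comparison in `_log_anomalies` is a key-wise check of metrics against thresholds, which is why the threshold keys must use the same names as the metrics dict. A minimal sketch:

```python
def exceeded(metrics: dict, thresholds: dict) -> list:
    """Return 'name value%' alerts for metrics over their threshold."""
    alerts = []
    for name, value in metrics.items():
        # Entries with no configured threshold (e.g. 'services') are skipped
        if name in thresholds and value > thresholds[name]:
            alerts.append(f"{name} {value}%")
    return alerts

alerts = exceeded(
    {'cpu': 91, 'memory': 40, 'disk': 88, 'services': {}},
    {'cpu': 80, 'memory': 75, 'disk': 85},
)
```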

View File

@@ -1,259 +0,0 @@
import json
import zipfile
from pathlib import Path
import tempfile
from datetime import datetime
from backend.app.database import SessionLocal
from backend.app.models import Route, Rule, Plan
import shutil
import logging
from sqlalchemy import and_
from typing import Dict, List
from fastapi import UploadFile
logger = logging.getLogger(__name__)
class ImportService:
def __init__(self):
self.temp_dir = Path(tempfile.gettempdir()) / "cycling_imports"
self.temp_dir.mkdir(exist_ok=True)
async def validate_import(self, file: UploadFile) -> dict:
"""Validate import file and detect conflicts"""
try:
# Save uploaded file to temp location
file_path = self.temp_dir / file.filename
with open(file_path, "wb") as f:
shutil.copyfileobj(file.file, f)
# Extract data based on file type
if file.filename.endswith('.zip'):
data = self._process_zip_import(file_path)
elif file.filename.endswith('.json'):
data = self._process_json_import(file_path)
else:
raise ValueError("Unsupported file format")
# Detect conflicts
conflicts = []
if 'routes' in data:
conflicts += self._detect_route_conflicts(data['routes'])
if 'rules' in data:
conflicts += self._detect_rule_conflicts(data['rules'])
if 'plans' in data:
conflicts += self._detect_plan_conflicts(data['plans'])
return {
"valid": True,
"conflicts": conflicts,
"summary": {
"routes": len(data.get('routes', [])),
"rules": len(data.get('rules', [])),
"plans": len(data.get('plans', []))
}
}
except Exception as e:
logger.error(f"Validation error: {str(e)}")
return {"valid": False, "error": str(e)}
async def execute_import(self, file: UploadFile,
conflict_resolution: str,
resolutions: List[dict]) -> dict:
"""Execute the import with specified conflict resolution"""
db = SessionLocal()
try:
db.begin()
# Process file
file_path = self.temp_dir / file.filename
with open(file_path, "wb") as f:
shutil.copyfileobj(file.file, f)
if file.filename.endswith('.zip'):
data = self._process_zip_import(file_path)
gpx_files = self._extract_gpx_files(file_path)
elif file.filename.endswith('.json'):
data = self._process_json_import(file_path)
gpx_files = []
else:
raise ValueError("Unsupported file format")
# Apply resolutions
resolution_map = {r['id']: r['action'] for r in resolutions}
# Import data
results = {
"imported": {"routes": 0, "rules": 0, "plans": 0},
"skipped": {"routes": 0, "rules": 0, "plans": 0},
"errors": []
}
# Import routes
if 'routes' in data:
for route_data in data['routes']:
action = resolution_map.get(route_data['id'], conflict_resolution)
try:
if self._should_import_route(route_data, action, db):
self._import_route(route_data, db)
results["imported"]["routes"] += 1
else:
results["skipped"]["routes"] += 1
except Exception as e:
results["errors"].append(f"Route {route_data['id']}: {str(e)}")
# Import rules
if 'rules' in data:
for rule_data in data['rules']:
action = resolution_map.get(rule_data['id'], conflict_resolution)
try:
if self._should_import_rule(rule_data, action, db):
self._import_rule(rule_data, db)
results["imported"]["rules"] += 1
else:
results["skipped"]["rules"] += 1
except Exception as e:
results["errors"].append(f"Rule {rule_data['id']}: {str(e)}")
# Import plans
if 'plans' in data:
for plan_data in data['plans']:
action = resolution_map.get(plan_data['id'], conflict_resolution)
try:
if self._should_import_plan(plan_data, action, db):
self._import_plan(plan_data, db)
results["imported"]["plans"] += 1
else:
results["skipped"]["plans"] += 1
except Exception as e:
results["errors"].append(f"Plan {plan_data['id']}: {str(e)}")
# Save GPX files
if gpx_files:
gpx_dir = Path("/app/data/gpx")
for gpx in gpx_files:
shutil.move(gpx, gpx_dir / gpx.name)
db.commit()
return results
except Exception as e:
db.rollback()
logger.error(f"Import failed: {str(e)}")
return {"error": str(e)}
finally:
db.close()
self._cleanup_temp_files()
def _process_zip_import(self, file_path: Path) -> dict:
"""Extract and process ZIP file import"""
data = {}
with zipfile.ZipFile(file_path, 'r') as zipf:
# Find data.json
json_files = [f for f in zipf.namelist() if f.endswith('.json')]
if not json_files:
raise ValueError("No JSON data found in ZIP file")
with zipf.open(json_files[0]) as f:
data = json.load(f)
return data
def _process_json_import(self, file_path: Path) -> dict:
"""Process JSON file import"""
with open(file_path) as f:
return json.load(f)
def _extract_gpx_files(self, file_path: Path) -> List[Path]:
"""Extract GPX files from ZIP archive"""
gpx_files = []
extract_dir = self.temp_dir / "gpx"
extract_dir.mkdir(exist_ok=True)
with zipfile.ZipFile(file_path, 'r') as zipf:
for file in zipf.namelist():
if file.startswith('gpx/') and file.endswith('.gpx'):
zipf.extract(file, extract_dir)
gpx_files.append(extract_dir / file)
return gpx_files
def _detect_route_conflicts(self, routes: List[dict]) -> List[dict]:
conflicts = []
db = SessionLocal()
try:
for route in routes:
existing = db.query(Route).filter(
(Route.id == route['id']) |
(Route.name == route['name'])
).first()
if existing:
conflict = {
"type": "route",
"id": route['id'],
"name": route['name'],
"existing_version": existing.updated_at,
"import_version": datetime.fromisoformat(route['updated_at']),
"resolution_options": ["overwrite", "rename", "skip"]
}
conflicts.append(conflict)
finally:
db.close()
return conflicts
def _should_import_route(self, route_data: dict, action: str, db) -> bool:
existing = db.query(Route).filter(
(Route.id == route_data['id']) |
(Route.name == route_data['name'])
).first()
if not existing:
return True
if action == 'overwrite':
return True
elif action == 'rename':
route_data['name'] = f"{route_data['name']} (Imported)"
return True
elif action == 'skip':
return False
return False
def _import_route(self, route_data: dict, db):
"""Import a single route"""
existing = db.get(Route, route_data['id'])
if existing:
# Update existing route
existing.name = route_data['name']
existing.description = route_data['description']
existing.category = route_data['category']
existing.gpx_file_path = route_data['gpx_file_path']
existing.updated_at = datetime.fromisoformat(route_data['updated_at'])
else:
# Create new route
route = Route(
id=route_data['id'],
name=route_data['name'],
description=route_data['description'],
category=route_data['category'],
gpx_file_path=route_data['gpx_file_path'],
created_at=datetime.fromisoformat(route_data['created_at']),
updated_at=datetime.fromisoformat(route_data['updated_at'])
)
db.add(route)
# Similar methods for rules and plans would follow...
def _cleanup_temp_files(self):
"""Clean up temporary files older than 1 hour"""
cutoff = datetime.now().timestamp() - 3600
for file in self.temp_dir.glob("*"):
if file.stat().st_mtime < cutoff:
try:
if file.is_dir():
shutil.rmtree(file)
else:
file.unlink()
except Exception as e:
logger.warning(f"Failed to clean temp file {file}: {str(e)}")
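The skip/rename/overwrite handling in `_should_import_route` can be isolated as a pure function. A hypothetical sketch (names are illustrative, not from the file above):

```python
def decide_import(exists: bool, action: str, name: str) -> tuple:
    """Mirror the conflict-resolution semantics: import freely when there is
    no conflict; otherwise overwrite, rename with an "(Imported)" suffix, or skip.
    Returns (should_import, final_name)."""
    if not exists or action == "overwrite":
        return True, name
    if action == "rename":
        return True, f"{name} (Imported)"
    return False, name  # "skip" and any unrecognized action
```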


@@ -1,74 +0,0 @@
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select
from backend.app.services.ai_service import AIService
from backend.app.models.analysis import Analysis
from backend.app.models.plan import Plan
import logging
logger = logging.getLogger(__name__)
class PlanEvolutionService:
"""Service for evolving training plans based on workout analysis."""
def __init__(self, db: AsyncSession):
self.db = db
self.ai_service = AIService(db)
async def evolve_plan_from_analysis(
self,
analysis: Analysis,
current_plan: Plan
) -> Plan:
"""Create a new plan version based on workout analysis."""
if not analysis.approved:
return None
suggestions = analysis.suggestions
if not suggestions:
return None
# Generate new plan incorporating suggestions
evolution_context = {
"current_plan": current_plan.jsonb_plan,
"workout_analysis": analysis.jsonb_feedback,
"suggestions": suggestions,
"evolution_type": "workout_feedback"
}
new_plan_data = await self.ai_service.evolve_plan(evolution_context)
# Create new plan version
new_plan = Plan(
jsonb_plan=new_plan_data,
version=current_plan.version + 1,
parent_plan_id=current_plan.id
)
self.db.add(new_plan)
await self.db.commit()
await self.db.refresh(new_plan)
logger.info(f"Created new plan version {new_plan.version} from analysis {analysis.id}")
return new_plan
async def get_plan_evolution_history(self, plan_id: int) -> list[Plan]:
"""Get the evolution history for a plan."""
result = await self.db.execute(
select(Plan)
.where(
(Plan.id == plan_id) |
(Plan.parent_plan_id == plan_id)
)
.order_by(Plan.version)
)
return result.scalars().all()
async def get_current_active_plan(self) -> Plan:
"""Get the most recent active plan."""
result = await self.db.execute(
select(Plan)
.order_by(Plan.version.desc())
.limit(1)
)
return result.scalar_one_or_none()
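The history query in `get_plan_evolution_history` selects the plan itself plus any plan whose `parent_plan_id` points at it, ordered by version. An in-memory sketch of that same selection (hypothetical dict-based representation):

```python
def evolution_history(plans: list, plan_id: int) -> list:
    """In-memory equivalent of get_plan_evolution_history: the plan itself
    plus its direct descendants, ordered by version."""
    hits = [p for p in plans if p["id"] == plan_id or p.get("parent_plan_id") == plan_id]
    return sorted(hits, key=lambda p: p["version"])
```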


@@ -1,92 +0,0 @@
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select, update, func
from backend.app.models.prompt import Prompt
import logging
logger = logging.getLogger(__name__)
class PromptManager:
"""Service for managing AI prompts with versioning."""
def __init__(self, db: AsyncSession):
self.db = db
async def get_active_prompt(self, action_type: str, model: str = None) -> str:
"""Get the active prompt for a specific action type."""
query = select(Prompt).where(
Prompt.action_type == action_type,
Prompt.active == True
)
if model:
query = query.where(Prompt.model == model)
result = await self.db.execute(query.order_by(Prompt.version.desc()))
prompt = result.scalar_one_or_none()
return prompt.prompt_text if prompt else None
async def create_prompt_version(
self,
action_type: str,
prompt_text: str,
model: str = None
) -> Prompt:
"""Create a new version of a prompt."""
# Deactivate previous versions
await self.db.execute(
update(Prompt)
.where(Prompt.action_type == action_type)
.values(active=False)
)
# Get next version number
result = await self.db.execute(
select(func.max(Prompt.version))
.where(Prompt.action_type == action_type)
)
max_version = result.scalar() or 0
# Create new prompt
new_prompt = Prompt(
action_type=action_type,
model=model,
prompt_text=prompt_text,
version=max_version + 1,
active=True
)
self.db.add(new_prompt)
await self.db.commit()
await self.db.refresh(new_prompt)
logger.info(f"Created new prompt version {new_prompt.version} for {action_type}")
return new_prompt
async def get_prompt_history(self, action_type: str) -> list[Prompt]:
"""Get all versions of prompts for an action type."""
result = await self.db.execute(
select(Prompt)
.where(Prompt.action_type == action_type)
.order_by(Prompt.version.desc())
)
return result.scalars().all()
async def activate_prompt_version(self, prompt_id: int) -> bool:
"""Activate a specific prompt version."""
# First deactivate all prompts for this action type
prompt = await self.db.get(Prompt, prompt_id)
if not prompt:
return False
await self.db.execute(
update(Prompt)
.where(Prompt.action_type == prompt.action_type)
.values(active=False)
)
# Activate the specific version
prompt.active = True
await self.db.commit()
logger.info(f"Activated prompt version {prompt.version} for {prompt.action_type}")
return True
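The versioning rule in `create_prompt_version` (deactivate prior versions of the action type, then number the new prompt max+1) can be sketched in memory. A hypothetical dict-based version, not the actual ORM code:

```python
def add_prompt_version(prompts: list, action_type: str, text: str) -> dict:
    """In-memory sketch of create_prompt_version: deactivate earlier versions
    of the same action type, then append a new active version numbered max+1."""
    same_type = [p for p in prompts if p["action_type"] == action_type]
    for p in same_type:
        p["active"] = False
    new = {
        "action_type": action_type,
        "prompt_text": text,
        "version": max((p["version"] for p in same_type), default=0) + 1,
        "active": True,
    }
    prompts.append(new)
    return new
```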


@@ -1,124 +0,0 @@
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select, desc
from backend.app.services.garmin import GarminService, GarminAPIError, GarminAuthError
from backend.app.models.workout import Workout
from backend.app.models.garmin_sync_log import GarminSyncLog
from datetime import datetime, timedelta
import logging
from typing import Dict, Any
import asyncio
logger = logging.getLogger(__name__)
class WorkoutSyncService:
"""Service for syncing Garmin activities to database."""
def __init__(self, db: AsyncSession):
self.db = db
self.garmin_service = GarminService()
async def sync_recent_activities(self, days_back: int = 7) -> int:
"""Sync recent Garmin activities to database."""
try:
# Create sync log entry
sync_log = GarminSyncLog(status="in_progress")
self.db.add(sync_log)
await self.db.commit()
# Calculate start date
start_date = datetime.now() - timedelta(days=days_back)
# Fetch activities from Garmin
activities = await self.garmin_service.get_activities(
limit=50, start_date=start_date
)
synced_count = 0
for activity in activities:
activity_id = activity['activityId']
if await self.activity_exists(activity_id):
continue
# Get full activity details with retry logic
max_retries = 3
for attempt in range(max_retries):
try:
details = await self.garmin_service.get_activity_details(activity_id)
break
except (GarminAPIError, GarminAuthError) as e:
if attempt == max_retries - 1:
raise
await asyncio.sleep(2 ** attempt)
logger.warning(f"Retrying activity details fetch for {activity_id}, attempt {attempt + 1}")
# Merge basic activity data with detailed metrics
full_activity = {**activity, **details}
# Parse and create workout
workout_data = await self.parse_activity_data(full_activity)
workout = Workout(**workout_data)
self.db.add(workout)
synced_count += 1
# Update sync log
sync_log.status = "success"
sync_log.activities_synced = synced_count
sync_log.last_sync_time = datetime.now()
await self.db.commit()
logger.info(f"Successfully synced {synced_count} activities")
return synced_count
except GarminAuthError as e:
sync_log.status = "auth_error"
sync_log.error_message = str(e)
await self.db.commit()
logger.error(f"Garmin authentication failed: {str(e)}")
raise
except GarminAPIError as e:
sync_log.status = "api_error"
sync_log.error_message = str(e)
await self.db.commit()
logger.error(f"Garmin API error during sync: {str(e)}")
raise
except Exception as e:
sync_log.status = "error"
sync_log.error_message = str(e)
await self.db.commit()
logger.error(f"Unexpected error during sync: {str(e)}")
raise
async def get_latest_sync_status(self):
"""Get the most recent sync log entry"""
result = await self.db.execute(
select(GarminSyncLog)
.order_by(desc(GarminSyncLog.created_at))
.limit(1)
)
return result.scalar_one_or_none()
async def activity_exists(self, garmin_activity_id: str) -> bool:
"""Check if activity already exists in database."""
result = await self.db.execute(
select(Workout).where(Workout.garmin_activity_id == garmin_activity_id)
)
return result.scalar_one_or_none() is not None
async def parse_activity_data(self, activity: Dict[str, Any]) -> Dict[str, Any]:
"""Parse Garmin activity data into workout model format."""
return {
"garmin_activity_id": activity['activityId'],
"activity_type": activity.get('activityType', {}).get('typeKey'),
"start_time": datetime.fromisoformat(activity['startTimeLocal'].replace('Z', '+00:00')),
"duration_seconds": activity.get('duration'),
"distance_m": activity.get('distance'),
"avg_hr": activity.get('averageHR'),
"max_hr": activity.get('maxHR'),
"avg_power": activity.get('avgPower'),
"max_power": activity.get('maxPower'),
"avg_cadence": activity.get('averageBikingCadenceInRevPerMinute'),
"elevation_gain_m": activity.get('elevationGain'),
"metrics": activity # Store full Garmin data as JSONB
}
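The retry loop in `sync_recent_activities` backs off exponentially (`2 ** attempt` seconds between attempts). A standalone async sketch of that pattern, with the delay scaled down for illustration (names are hypothetical):

```python
import asyncio

async def fetch_with_retry(fetch, max_retries: int = 3, base_delay: float = 0.01):
    """Retry an async callable with exponential backoff, matching the
    2 ** attempt pattern used in sync_recent_activities. Re-raises on the
    final attempt; the broad except is for the sketch only."""
    for attempt in range(max_retries):
        try:
            return await fetch()
        except Exception:
            if attempt == max_retries - 1:
                raise
            await asyncio.sleep(base_delay * (2 ** attempt))
```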


@@ -1,5 +0,0 @@
[pytest]
testpaths = tests
addopts = -p no:warnings --verbose
python_files = test_*.py
log_cli = true


@@ -1,12 +0,0 @@
fastapi==0.110.0
uvicorn[standard]==0.29.0
python-dotenv==1.0.1
sqlalchemy==2.0.29
psycopg2-binary==2.9.9
alembic==1.13.1
pydantic-settings==2.2.1
python-multipart==0.0.9
gpxpy  # GPX parsing library
garth==0.4.46 # Garmin Connect API client
httpx==0.25.2 # Async HTTP client for OpenRouter API
asyncpg==0.29.0 # Async PostgreSQL driver


@@ -1,394 +0,0 @@
#!/usr/bin/env python3
"""
Database backup and restore utilities for containerized deployments.
Ensures safe backup/restore operations with migration compatibility checks.
"""
import sys
import os
import subprocess
import shutil
import tarfile
import hashlib
from pathlib import Path
from datetime import datetime
from typing import Optional
# Add backend directory to path
backend_dir = Path(__file__).parent.parent
sys.path.insert(0, str(backend_dir))
from backend.app.database import get_database_url
class DatabaseManager:
"""Handles database backup and restore operations."""
def __init__(self, backup_dir: str = "/app/data/backups"):
self.backup_dir = Path(backup_dir)
self.backup_dir.mkdir(parents=True, exist_ok=True)
self.gpx_dir = Path("/app/data/gpx")
self.manifest_file = self.backup_dir / "gpx_manifest.json"
encryption_key = os.getenv("BACKUP_ENCRYPTION_KEY")
if not encryption_key:
raise RuntimeError("BACKUP_ENCRYPTION_KEY environment variable must be set")
self.encryption_key = encryption_key.encode()
def get_db_connection_params(self):
"""Extract database connection parameters from URL."""
from urllib.parse import urlparse
db_url = get_database_url()
parsed = urlparse(db_url)
return {
'host': parsed.hostname,
'port': parsed.port or 5432,
'user': parsed.username,
'password': parsed.password,
'database': parsed.path.lstrip('/')
}
def _backup_gpx_files(self, backup_dir: Path) -> Optional[Path]:
"""Backup GPX files directory"""
gpx_dir = Path("/app/data/gpx")
if not gpx_dir.exists():
return None
backup_path = backup_dir / "gpx.tar.gz"
with tarfile.open(backup_path, "w:gz") as tar:
tar.add(gpx_dir, arcname="gpx")
return backup_path
def _backup_sessions(self, backup_dir: Path) -> Optional[Path]:
"""Backup Garmin sessions directory"""
sessions_dir = Path("/app/data/sessions")
if not sessions_dir.exists():
return None
backup_path = backup_dir / "sessions.tar.gz"
with tarfile.open(backup_path, "w:gz") as tar:
tar.add(sessions_dir, arcname="sessions")
return backup_path
def _generate_checksum(self, file_path: Path) -> str:
"""Generate SHA256 checksum for a file"""
hash_sha256 = hashlib.sha256()
with open(file_path, "rb") as f:
for chunk in iter(lambda: f.read(4096), b""):
hash_sha256.update(chunk)
return hash_sha256.hexdigest()
def _verify_backup_integrity(self, backup_path: Path):
"""Verify backup file integrity using checksum"""
checksum_file = Path(f"{backup_path}.sha256")
if not checksum_file.exists():
raise FileNotFoundError(f"Checksum file missing for {backup_path.name}")
with open(checksum_file) as f:
expected_checksum = f.read().split()[0]
actual_checksum = self._generate_checksum(backup_path)
if actual_checksum != expected_checksum:
raise ValueError(f"Checksum mismatch for {backup_path.name}")
def create_backup(self, name: Optional[str] = None) -> str:
"""Create a full system backup including database, GPX files, and sessions"""
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
backup_name = name or f"full_backup_{timestamp}"
backup_dir = self.backup_dir / backup_name
backup_dir.mkdir(parents=True, exist_ok=True)
try:
# Backup database
db_backup_path = self._backup_database(backup_dir)
# Backup GPX files
gpx_backup_path = self._backup_gpx_files(backup_dir)
# Backup sessions
sessions_backup_path = self._backup_sessions(backup_dir)
# Generate checksums for all backup files (snapshot the listing first so
# the newly written .sha256 sidecars are not themselves checksummed)
for file in list(backup_dir.glob("*")):
if file.is_file() and not file.name.endswith('.sha256'):
checksum = self._generate_checksum(file)
with open(f"{file}.sha256", "w") as f:
f.write(f"{checksum} {file.name}")
# Verify backups
for file in backup_dir.glob("*"):
if file.is_file() and not file.name.endswith('.sha256'):
self._verify_backup_integrity(file)
print(f"✅ Full backup created successfully: {backup_dir}")
return str(backup_dir)
except Exception as e:
shutil.rmtree(backup_dir, ignore_errors=True)
print(f"❌ Backup failed: {str(e)}")
raise
def _backup_database(self, backup_dir: Path) -> Path:
"""Create database backup"""
params = self.get_db_connection_params()
backup_file = backup_dir / "database.dump"
cmd = [
"pg_dump",
"-h", params['host'],
"-p", str(params['port']),
"-U", params['user'],
"-d", params['database'],
"-f", str(backup_file),
"--no-password",
"--format=custom",
"--compress=9"
]
env = os.environ.copy()
env['PGPASSWORD'] = params['password']
result = subprocess.run(cmd, env=env, capture_output=True, text=True)
if result.returncode != 0:
raise Exception(f"Database backup failed: {result.stderr}")
return backup_file
def restore_backup(self, backup_file: str, confirm: bool = False) -> None:
"""Restore database from backup."""
backup_path = Path(backup_file)
if not backup_path.exists():
raise FileNotFoundError(f"Backup file not found: {backup_file}")
if not confirm:
print(f"⚠️ This will overwrite the current database!")
response = input("Are you sure you want to continue? (yes/no): ")
if response.lower() != 'yes':
print("Restore cancelled.")
return
params = self.get_db_connection_params()
# Drop and recreate database to ensure clean restore
self._recreate_database()
# Use pg_restore for restore
cmd = [
"pg_restore",
"-h", params['host'],
"-p", str(params['port']),
"-U", params['user'],
"-d", params['database'],
"--no-password",
"--clean",
"--if-exists",
"--create",
str(backup_path)
]
env = os.environ.copy()
env['PGPASSWORD'] = params['password']
try:
print(f"Restoring from backup: {backup_file}")
result = subprocess.run(cmd, env=env, capture_output=True, text=True)
if result.returncode == 0:
print("✅ Database restored successfully")
else:
print(f"❌ Restore failed: {result.stderr}")
raise Exception(f"Restore failed: {result.stderr}")
except FileNotFoundError:
print("❌ pg_restore not found. Ensure PostgreSQL client tools are installed.")
raise
def backup_gpx_files(self, incremental: bool = True) -> Optional[Path]:
"""Handle GPX backup creation with incremental/full strategy"""
try:
if incremental:
return self._incremental_gpx_backup()
return self._full_gpx_backup()
except Exception as e:
print(f"GPX backup failed: {str(e)}")
return None
def _full_gpx_backup(self) -> Path:
"""Create full GPX backup"""
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
backup_path = self.backup_dir / f"gpx_full_{timestamp}"
backup_path.mkdir()
# Copy all GPX files
subprocess.run(["rsync", "-a", f"{self.gpx_dir}/", f"{backup_path}/"])
self._encrypt_backup(backup_path)
return backup_path
def _incremental_gpx_backup(self) -> Optional[Path]:
"""Create incremental GPX backup using rsync --link-dest"""
last_full = self._find_last_full_backup()
if not last_full:
return self._full_gpx_backup()
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
backup_path = self.backup_dir / f"gpx_inc_{timestamp}"
backup_path.mkdir()
# Use hardlinks to previous backup for incremental
subprocess.run([
"rsync", "-a",
"--link-dest", str(last_full),
f"{self.gpx_dir}/",
f"{backup_path}/"
])
self._encrypt_backup(backup_path)
return backup_path
def _find_last_full_backup(self) -> Optional[Path]:
"""Find most recent full backup"""
full_backups = sorted(self.backup_dir.glob("gpx_full_*"), reverse=True)
return full_backups[0] if full_backups else None
def _encrypt_backup(self, backup_path: Path):
"""Encrypt backup directory using Fernet (AES-256-CBC with HMAC-SHA256)"""
from cryptography.fernet import Fernet
fernet = Fernet(self.encryption_key)
for file in backup_path.rglob('*'):
if file.is_file():
with open(file, 'rb') as f:
data = f.read()
encrypted = fernet.encrypt(data)
with open(file, 'wb') as f:
f.write(encrypted)
def decrypt_backup(self, backup_path: Path):
"""Decrypt backup directory"""
from cryptography.fernet import Fernet
fernet = Fernet(self.encryption_key)
for file in backup_path.rglob('*'):
if file.is_file():
with open(file, 'rb') as f:
data = f.read()
decrypted = fernet.decrypt(data)
with open(file, 'wb') as f:
f.write(decrypted)
def _recreate_database(self):
"""Drop and recreate the database."""
params = self.get_db_connection_params()
# Connect to postgres database to drop/recreate target database
postgres_params = params.copy()
postgres_params['database'] = 'postgres'
drop_cmd = [
"psql",
"-h", postgres_params['host'],
"-p", str(postgres_params['port']),
"-U", postgres_params['user'],
"-d", postgres_params['database'],
"-c", f"DROP DATABASE IF EXISTS {params['database']};"
]
create_cmd = [
"psql",
"-h", postgres_params['host'],
"-p", str(postgres_params['port']),
"-U", postgres_params['user'],
"-d", postgres_params['database'],
"-c", f"CREATE DATABASE {params['database']};"
]
env = os.environ.copy()
env['PGPASSWORD'] = params['password']
for cmd in [drop_cmd, create_cmd]:
result = subprocess.run(cmd, env=env, capture_output=True, text=True)
if result.returncode != 0:
print(f"Database recreation step failed: {result.stderr}")
def list_backups(self):
"""List available backup directories."""
backups = sorted(self.backup_dir.glob("full_backup_*"), key=lambda p: p.stat().st_mtime, reverse=True)
if not backups:
print("No backups found.")
return
print("Available backups:")
for backup in backups:
size_mb = sum(f.stat().st_size for f in backup.rglob("*") if f.is_file()) / (1024 * 1024)
mtime = datetime.fromtimestamp(backup.stat().st_mtime)
print(f"{backup.name}  {size_mb:.2f} MB  ({mtime:%Y-%m-%d %H:%M:%S})")
def cleanup_old_backups(self, keep_days: int = 30):
"""Remove backups older than specified days."""
from datetime import timedelta
cutoff = datetime.now() - timedelta(days=keep_days)
removed = []
# Clean all backup directories (full_backup_*)
for backup_dir in self.backup_dir.glob("full_backup_*"):
if backup_dir.is_dir() and datetime.fromtimestamp(backup_dir.stat().st_mtime) < cutoff:
shutil.rmtree(backup_dir)
removed.append(backup_dir.name)
if removed:
print(f"Removed {len(removed)} old backups: {', '.join(removed)}")
else:
print("No old backups to remove.")
def main():
if len(sys.argv) < 2:
print("Usage: python backup_restore.py <command> [options]")
print("Commands:")
print(" backup [name] - Create a new database backup")
print(" gpx-backup [--full] - Create GPX backup (incremental by default)")
print(" restore <file> [--yes] - Restore from backup")
print(" list - List available backups")
print(" cleanup [days] - Remove backups older than N days (default: 30)")
print(" decrypt <dir> - Decrypt backup directory")
sys.exit(1)
manager = DatabaseManager()
command = sys.argv[1]
try:
if command == "backup":
name = sys.argv[2] if len(sys.argv) > 2 else None
manager.create_backup(name)
elif command == "gpx-backup":
if len(sys.argv) > 2 and sys.argv[2] == "--full":
manager.backup_gpx_files(incremental=False)
else:
manager.backup_gpx_files()
elif command == "restore":
if len(sys.argv) < 3:
print("Error: Please specify backup file to restore from")
sys.exit(1)
backup_file = sys.argv[2]
confirm = "--yes" in sys.argv
manager.restore_backup(backup_file, confirm)
elif command == "list":
manager.list_backups()
elif command == "cleanup":
days = int(sys.argv[2]) if len(sys.argv) > 2 else 30
manager.cleanup_old_backups(days)
else:
print(f"Unknown command: {command}")
sys.exit(1)
except Exception as e:
print(f"Error: {e}")
sys.exit(1)
if __name__ == "__main__":
main()


@@ -1,259 +0,0 @@
#!/usr/bin/env python3
"""
Migration compatibility and version checker for containerized deployments.
Validates migration integrity and compatibility before deployments.
"""
import sys
import os
from pathlib import Path
from typing import Any, Dict, List, Tuple
# Add backend directory to path
backend_dir = Path(__file__).parent.parent
sys.path.insert(0, str(backend_dir))
from alembic.config import Config
from alembic import command
from alembic.migration import MigrationContext
from alembic.script import ScriptDirectory
from sqlalchemy import create_engine, text
from backend.app.database import get_database_url
class MigrationChecker:
"""Validates migration compatibility and integrity."""
def __init__(self):
self.config = self._get_alembic_config()
self.script = ScriptDirectory.from_config(self.config)
def _get_alembic_config(self):
"""Get Alembic configuration."""
config = Config("alembic.ini")
config.set_main_option("sqlalchemy.url", get_database_url())
return config
def check_migration_files(self) -> Dict[str, bool]:
"""Check integrity of migration files."""
results = {
"files_exist": False,
"proper_ordering": False,
"no_duplicates": False,
"valid_syntax": False
}
try:
# Check if migration directory exists
versions_dir = Path("alembic/versions")
if not versions_dir.exists():
print("❌ Migration versions directory not found")
return results
# Get all migration files
migration_files = list(versions_dir.glob("*.py"))
if not migration_files:
print("⚠️ No migration files found")
results["files_exist"] = True # Empty is valid
return results
results["files_exist"] = True
# Check for duplicate revision numbers
revisions = []
for file_path in migration_files:
with open(file_path, 'r') as f:
content = f.read()
# Extract revision from file
if "revision = " in content:
rev_line = [line for line in content.split('\n') if "revision = " in line]
if rev_line:
rev = rev_line[0].split("'")[1]
if rev in revisions:
print(f"❌ Duplicate revision found: {rev}")
return results
revisions.append(rev)
results["no_duplicates"] = True
# Validate migration ordering
try:
# Get ordered revisions from script directory
ordered_revisions = []
for rev in self.script.walk_revisions():
ordered_revisions.append(rev.revision)
# Check if our files match the ordering
if set(revisions) == set(ordered_revisions):
results["proper_ordering"] = True
else:
print("❌ Migration ordering mismatch")
return results
except Exception as e:
print(f"❌ Error checking migration ordering: {e}")
return results
# Basic syntax validation
for file_path in migration_files:
try:
compile(file_path.read_text(), str(file_path), 'exec')
except SyntaxError as e:
print(f"❌ Syntax error in {file_path}: {e}")
return results
results["valid_syntax"] = True
print("✅ All migration files are valid")
except Exception as e:
print(f"❌ Error checking migration files: {e}")
return results
def check_database_state(self) -> Dict[str, Any]:
"""Check current database migration state."""
results = {
"connected": False,
"current_revision": None,
"head_revision": None,
"up_to_date": False,
"pending_migrations": []
}
try:
engine = create_engine(get_database_url())
with engine.connect() as conn:
results["connected"] = True
# Get current revision
context = MigrationContext.configure(conn)
current_rev = context.get_current_revision()
results["current_revision"] = current_rev
# Get head revision
head_rev = self.script.get_current_head()
results["head_revision"] = head_rev
# Check if up to date
results["up_to_date"] = current_rev == head_rev
# Get pending migrations (walk_revisions yields head-first; revision ids are
# opaque hashes, so stop at the current revision rather than comparing with >)
if not results["up_to_date"]:
pending = []
for rev in self.script.walk_revisions():
if rev.revision == current_rev:
break
pending.append(rev.revision)
results["pending_migrations"] = pending
except Exception as e:
print(f"❌ Database connection error: {e}")
return results
def validate_deployment_readiness(self) -> bool:
"""Validate if deployment can proceed safely."""
print("🔍 Checking deployment readiness...")
# Check migration files
file_checks = self.check_migration_files()
all_files_good = all(file_checks.values())
# Check database state
db_checks = self.check_database_state()
db_connected = db_checks["connected"]
if not all_files_good:
print("❌ Migration files have issues")
return False
if not db_connected:
print("❌ Cannot connect to database")
return False
if not db_checks["up_to_date"]:
print(f"⚠️ Database not up to date. Current: {db_checks['current_revision']}, Head: {db_checks['head_revision']}")
print(f"Pending migrations: {db_checks['pending_migrations']}")
# For deployment, we might want to allow this if migrations will be run
print(" This is acceptable if migrations will be run during deployment")
return True
print("✅ Deployment readiness check passed")
return True
def generate_migration_report(self) -> str:
"""Generate a detailed migration status report."""
report = []
report.append("# Migration Status Report")
report.append("")
# File checks
report.append("## Migration Files")
file_checks = self.check_migration_files()
for check, status in file_checks.items():
status_icon = "✅" if status else "❌"
report.append(f"- {check}: {status_icon}")
# Database state
report.append("")
report.append("## Database State")
db_checks = self.check_database_state()
for check, value in db_checks.items():
if isinstance(value, list):
value = ", ".join(value) if value else "None"
report.append(f"- {check}: {value}")
# Deployment readiness
report.append("")
report.append("## Deployment Readiness")
ready = self.validate_deployment_readiness()
readiness_icon = "✅" if ready else "❌"
report.append(f"- Ready for deployment: {readiness_icon}")
return "\n".join(report)
def main():
if len(sys.argv) < 2:
print("Usage: python migration_checker.py <command>")
print("Commands:")
print(" check-files - Check migration file integrity")
print(" check-db - Check database migration state")
print(" validate-deploy - Validate deployment readiness")
print(" report - Generate detailed migration report")
sys.exit(1)
checker = MigrationChecker()
command = sys.argv[1]
try:
if command == "check-files":
results = checker.check_migration_files()
all_good = all(results.values())
print("✅ Files OK" if all_good else "❌ Files have issues")
sys.exit(0 if all_good else 1)
elif command == "check-db":
results = checker.check_database_state()
print(f"Connected: {'✅' if results['connected'] else '❌'}")
print(f"Up to date: {'✅' if results['up_to_date'] else '❌'}")
print(f"Current: {results['current_revision']}")
print(f"Head: {results['head_revision']}")
elif command == "validate-deploy":
ready = checker.validate_deployment_readiness()
sys.exit(0 if ready else 1)
elif command == "report":
report = checker.generate_migration_report()
print(report)
else:
print(f"Unknown command: {command}")
sys.exit(1)
except Exception as e:
print(f"Error: {e}")
sys.exit(1)
if __name__ == "__main__":
main()


@@ -1,89 +0,0 @@
#!/usr/bin/env python3
"""
Migration rollback script for containerized deployments.
Provides safe rollback functionality with validation.
"""
import sys
import os
from pathlib import Path
# Add backend directory to path
backend_dir = Path(__file__).parent.parent
sys.path.insert(0, str(backend_dir))
from alembic.config import Config
from alembic import command
from alembic.migration import MigrationContext
from alembic.script import ScriptDirectory
import sqlalchemy as sa
from backend.app.database import get_database_url
def get_alembic_config():
"""Get Alembic configuration."""
config = Config("alembic.ini")
config.set_main_option("sqlalchemy.url", get_database_url())
return config
def get_current_revision():
"""Get current database revision."""
config = get_alembic_config()
script = ScriptDirectory.from_config(config)
with sa.create_engine(get_database_url()).connect() as conn:
context = MigrationContext.configure(conn)
current_rev = context.get_current_revision()
return current_rev
def rollback_migration(revision="-1"):
"""
Rollback to specified revision.
Args:
revision: Target revision (default: "-1", one step back from the current head)
"""
try:
print(f"Rolling back to revision: {revision}")
config = get_alembic_config()
command.downgrade(config, revision)
print("Rollback completed successfully")
# Verify rollback
current = get_current_revision()
print(f"Current revision after rollback: {current}")
except Exception as e:
print(f"Rollback failed: {e}")
sys.exit(1)
def list_migrations():
"""List available migrations."""
config = get_alembic_config()
script = ScriptDirectory.from_config(config)
print("Available migrations:")
for rev in script.walk_revisions():
print(f" {rev.revision}: {rev.doc}")
if __name__ == "__main__":
if len(sys.argv) < 2:
print("Usage: python migration_rollback.py <command> [revision]")
print("Commands:")
print(" rollback [revision] - Rollback to revision (default: -1)")
print(" current - Show current revision")
print(" list - List available migrations")
sys.exit(1)
command = sys.argv[1]
if command == "rollback":
revision = sys.argv[2] if len(sys.argv) > 2 else "-1"
rollback_migration(revision)
elif command == "current":
current = get_current_revision()
print(f"Current revision: {current}")
elif command == "list":
list_migrations()
else:
print(f"Unknown command: {command}")
sys.exit(1)


@@ -1 +0,0 @@
# Empty file to mark tests directory as Python package


@@ -1,36 +0,0 @@
import pytest
from fastapi.testclient import TestClient
from backend.app.main import app
from backend.app.database import get_db, Base
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
TEST_DATABASE_URL = "postgresql://postgres:postgres@localhost:5432/test_db"
@pytest.fixture(scope="session")
def test_engine():
engine = create_engine(TEST_DATABASE_URL)
Base.metadata.create_all(bind=engine)
yield engine
Base.metadata.drop_all(bind=engine)
@pytest.fixture
def db_session(test_engine):
connection = test_engine.connect()
transaction = connection.begin()
session = sessionmaker(autocommit=False, autoflush=False, bind=connection)()
yield session
session.close()
transaction.rollback()
connection.close()
@pytest.fixture
def client(db_session):
def override_get_db():
try:
yield db_session
finally:
db_session.close()
app.dependency_overrides[get_db] = override_get_db
return TestClient(app)


@@ -1,102 +0,0 @@
import pytest
from unittest.mock import AsyncMock, patch, MagicMock
from backend.app.services.ai_service import AIService, AIServiceError
from backend.app.models.workout import Workout
import json
@pytest.mark.asyncio
async def test_analyze_workout_success():
"""Test successful workout analysis with valid API response"""
mock_db = MagicMock()
mock_prompt = MagicMock()
mock_prompt.format.return_value = "test prompt"
ai_service = AIService(mock_db)
ai_service.prompt_manager.get_active_prompt = AsyncMock(return_value=mock_prompt)
test_response = json.dumps({
"performance_summary": "Good workout",
"suggestions": ["More recovery"]
})
with patch('httpx.AsyncClient.post') as mock_post:
mock_post.return_value = AsyncMock(
status_code=200,
json=lambda: {"choices": [{"message": {"content": test_response}}]}
)
workout = Workout(activity_type="cycling", duration_seconds=3600)
result = await ai_service.analyze_workout(workout)
assert "performance_summary" in result
assert len(result["suggestions"]) == 1
@pytest.mark.asyncio
async def test_generate_plan_success():
"""Test plan generation with structured response"""
mock_db = MagicMock()
ai_service = AIService(mock_db)
ai_service.prompt_manager.get_active_prompt = AsyncMock(return_value="Plan prompt: {rules} {goals}")
test_plan = {
"weeks": [{"workouts": ["ride"]}],
"focus": "endurance"
}
with patch('httpx.AsyncClient.post') as mock_post:
mock_post.return_value = AsyncMock(
status_code=200,
json=lambda: {"choices": [{"message": {"content": json.dumps(test_plan)}}]}
)
result = await ai_service.generate_plan([], {})
assert "weeks" in result
assert result["focus"] == "endurance"
@pytest.mark.asyncio
async def test_api_retry_logic():
"""Test API request retries on failure"""
mock_db = MagicMock()
ai_service = AIService(mock_db)
with patch('httpx.AsyncClient.post') as mock_post:
mock_post.side_effect = Exception("API failure")
with pytest.raises(AIServiceError):
await ai_service._make_ai_request("test")
assert mock_post.call_count == 3
@pytest.mark.asyncio
async def test_invalid_json_handling():
"""Test graceful handling of invalid JSON responses"""
mock_db = MagicMock()
ai_service = AIService(mock_db)
with patch('httpx.AsyncClient.post') as mock_post:
mock_post.return_value = AsyncMock(
status_code=200,
json=lambda: {"choices": [{"message": {"content": "invalid{json"}}]}
)
result = await ai_service.parse_rules_from_natural_language("test")
assert "raw_rules" in result
assert not result["structured"]
@pytest.mark.asyncio
async def test_code_block_parsing():
"""Test extraction of JSON from code blocks"""
mock_db = MagicMock()
ai_service = AIService(mock_db)
test_response = "```json\n" + json.dumps({"max_rides": 4}) + "\n```"
with patch('httpx.AsyncClient.post') as mock_post:
mock_post.return_value = AsyncMock(
status_code=200,
json=lambda: {"choices": [{"message": {"content": test_response}}]}
)
result = await ai_service.evolve_plan({})
assert "max_rides" in result
assert result["max_rides"] == 4
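The test above assumes the service can pull JSON out of a fenced ```json block in model output. A minimal stdlib sketch of that extraction logic (the helper name `extract_json_block` is illustrative, not the actual AIService method):

```python
import json
import re

def extract_json_block(text: str):
    """Extract and parse JSON from a ```json fenced block, falling back
    to parsing the whole string. Returns None if nothing parses."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    candidate = match.group(1) if match else text
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        return None

# Same shape of input the test above feeds the service
raw = "```json\n" + json.dumps({"max_rides": 4}) + "\n```"
parsed = extract_json_block(raw)
```

The fallback path is what lets the "invalid JSON" test degrade gracefully instead of raising.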


@@ -1,78 +0,0 @@
import pytest
from unittest.mock import AsyncMock, patch
from backend.app.services.garmin import GarminService
from backend.app.models.garmin_sync_log import GarminSyncLog, GarminSyncStatus
from backend.app.models.workout import Workout
from pathlib import Path
from datetime import datetime, timedelta
@pytest.mark.asyncio
async def test_garmin_authentication_success(db_session):
"""Test successful Garmin Connect authentication"""
with patch('garth.Client') as mock_client:
mock_instance = mock_client.return_value
mock_instance.login = AsyncMock(return_value=True)
service = GarminService(db_session)
result = await service.authenticate("test_user", "test_pass")
assert result is True
mock_instance.login.assert_awaited_once_with("test_user", "test_pass")
@pytest.mark.asyncio
async def test_garmin_authentication_failure(db_session):
"""Test authentication failure handling"""
with patch('garth.Client') as mock_client:
mock_instance = mock_client.return_value
mock_instance.login = AsyncMock(side_effect=Exception("Invalid credentials"))
service = GarminService(db_session)
result = await service.authenticate("bad_user", "wrong_pass")
assert result is False
log_entry = db_session.query(GarminSyncLog).first()
assert log_entry.status == GarminSyncStatus.AUTH_FAILED
@pytest.mark.asyncio
async def test_activity_sync(db_session):
"""Test successful activity synchronization"""
with patch('garth.Client') as mock_client:
mock_instance = mock_client.return_value
mock_instance.connectapi = AsyncMock(return_value=[
{"activityId": 123, "startTime": "2024-01-01T08:00:00"}
])
service = GarminService(db_session)
await service.sync_activities()
# Verify workout created
workout = db_session.query(Workout).first()
assert workout.garmin_activity_id == 123
# Verify sync log updated
log_entry = db_session.query(GarminSyncLog).first()
assert log_entry.status == GarminSyncStatus.COMPLETED
@pytest.mark.asyncio
async def test_rate_limiting_handling(db_session):
"""Test API rate limit error handling"""
with patch('garth.Client') as mock_client:
mock_instance = mock_client.return_value
mock_instance.connectapi = AsyncMock(side_effect=Exception("Rate limit exceeded"))
service = GarminService(db_session)
result = await service.sync_activities()
assert result is False
log_entry = db_session.query(GarminSyncLog).first()
assert log_entry.status == GarminSyncStatus.FAILED
assert "Rate limit" in log_entry.error_message
@pytest.mark.asyncio
async def test_session_persistence(db_session):
"""Test session cookie persistence"""
service = GarminService(db_session)
# Store session
await service.store_session({"token": "test123"})
session = await service.load_session()
assert session == {"token": "test123"}
assert Path("/app/data/sessions/garmin_session.pickle").exists()


@@ -1,56 +0,0 @@
import pytest
from unittest.mock import AsyncMock, MagicMock
from backend.app.services.plan_evolution import PlanEvolutionService
from backend.app.models.plan import Plan
from backend.app.models.analysis import Analysis
from datetime import datetime
@pytest.mark.asyncio
async def test_evolve_plan_with_valid_analysis():
"""Test plan evolution with approved analysis and suggestions"""
mock_db = AsyncMock()
mock_plan = Plan(
id=1,
version=1,
jsonb_plan={"weeks": []},
parent_plan_id=None
)
mock_analysis = Analysis(
approved=True,
jsonb_feedback={"suggestions": ["More recovery"]}
)
service = PlanEvolutionService(mock_db)
service.ai_service.evolve_plan = AsyncMock(return_value={"weeks": [{"recovery": True}]})
result = await service.evolve_plan_from_analysis(mock_analysis, mock_plan)
assert result.version == 2
assert result.parent_plan_id == 1
mock_db.add.assert_called_once()
mock_db.commit.assert_awaited_once()
@pytest.mark.asyncio
async def test_evolution_skipped_for_unapproved_analysis():
"""Test plan evolution is skipped for unapproved analysis"""
mock_db = AsyncMock()
mock_analysis = Analysis(approved=False)
service = PlanEvolutionService(mock_db)
result = await service.evolve_plan_from_analysis(mock_analysis, MagicMock())
assert result is None
@pytest.mark.asyncio
async def test_evolution_history_retrieval():
"""Test getting plan evolution history"""
mock_db = AsyncMock()
mock_db.execute.return_value.scalars.return_value = [
Plan(version=1), Plan(version=2)
]
service = PlanEvolutionService(mock_db)
history = await service.get_plan_evolution_history(1)
assert len(history) == 2
assert history[0].version == 1


@@ -1,81 +0,0 @@
import pytest
from unittest.mock import AsyncMock, MagicMock, patch
from backend.app.services.workout_sync import WorkoutSyncService
from backend.app.models.workout import Workout
from backend.app.models.garmin_sync_log import GarminSyncLog
from datetime import datetime, timedelta
import asyncio
@pytest.mark.asyncio
async def test_successful_sync():
"""Test successful sync of new activities"""
mock_db = AsyncMock()
mock_garmin = MagicMock()
mock_garmin.get_activities.return_value = [{'activityId': '123'}]
mock_garmin.get_activity_details.return_value = {'metrics': 'data'}
service = WorkoutSyncService(mock_db)
service.garmin_service = mock_garmin
result = await service.sync_recent_activities()
assert result == 1
mock_db.add.assert_called()
mock_db.commit.assert_awaited()
@pytest.mark.asyncio
async def test_duplicate_activity_handling():
"""Test skipping duplicate activities"""
mock_db = AsyncMock()
mock_db.execute.return_value.scalar_one_or_none.return_value = True
mock_garmin = MagicMock()
mock_garmin.get_activities.return_value = [{'activityId': '123'}]
service = WorkoutSyncService(mock_db)
service.garmin_service = mock_garmin
result = await service.sync_recent_activities()
assert result == 0
@pytest.mark.asyncio
async def test_activity_detail_retry_logic():
"""Test retry logic for activity details"""
mock_db = AsyncMock()
mock_garmin = MagicMock()
mock_garmin.get_activities.return_value = [{'activityId': '123'}]
mock_garmin.get_activity_details.side_effect = [Exception(), {'metrics': 'data'}]
service = WorkoutSyncService(mock_db)
service.garmin_service = mock_garmin
result = await service.sync_recent_activities()
assert mock_garmin.get_activity_details.call_count == 2
assert result == 1
@pytest.mark.asyncio
async def test_auth_error_handling():
"""Test authentication error handling"""
mock_db = AsyncMock()
mock_garmin = MagicMock()
mock_garmin.get_activities.side_effect = Exception("Auth failed")
service = WorkoutSyncService(mock_db)
service.garmin_service = mock_garmin
with pytest.raises(Exception):
await service.sync_recent_activities()
sync_log = mock_db.add.call_args[0][0]
assert sync_log.status == "auth_error"
@pytest.mark.asyncio
async def test_get_sync_status():
"""Test retrieval of latest sync status"""
mock_db = AsyncMock()
mock_log = GarminSyncLog(status="success")
mock_db.execute.return_value.scalar_one_or_none.return_value = mock_log
service = WorkoutSyncService(mock_db)
result = await service.get_latest_sync_status()
assert result.status == "success"


@@ -1,248 +0,0 @@
---
# **AI-Assisted Cycling Coach — Design Document**
## **1. Architecture Overview**
**Goal:** Web-based cycling coach that plans workouts, analyzes Garmin rides, and integrates AI while enforcing strict user-defined rules.
### **Components**
| Component | Tech | Purpose |
| ---------------- | -------------------------- | ------------------------------------------------------------------ |
| Frontend | React/Next.js | UI for routes, plans, analysis, file uploads |
| Backend | Python (FastAPI, async) | API layer, AI integration, Garmin sync, DB access |
| Database | PostgreSQL | Stores routes, sections, plans, rules, workouts, prompts, analyses |
| File Storage | Mounted folder `/data/gpx` | Store GPX files for sections/routes |
| AI Integration | OpenRouter via backend | Plan generation, workout analysis, suggestions |
| Containerization | Docker + docker-compose | Encapsulate frontend, backend, database with persistent storage |
**Workflow Overview**
1. Upload/import GPX → backend saves to mounted folder + metadata in DB
2. Define plaintext rules → Store directly in DB
3. Generate plan → AI creates JSON plan → DB versioned
4. Ride recorded on Garmin → backend syncs activity metrics → stores in DB
5. AI analyzes workout → feedback & suggestions stored → user approves → new plan version created
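Step 1 of the workflow derives section metadata (e.g. `distance_m`) from the uploaded GPX. A minimal stdlib sketch of that derivation, assuming namespace-free GPX for brevity (real files use the topografix namespace):

```python
import math
import xml.etree.ElementTree as ET

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def gpx_track_distance_m(gpx_xml: str) -> float:
    """Sum point-to-point distances over every <trkpt> in the document."""
    root = ET.fromstring(gpx_xml)
    pts = [(float(p.get("lat")), float(p.get("lon")))
           for p in root.iter() if p.tag.endswith("trkpt")]
    return sum(haversine_m(*a, *b) for a, b in zip(pts, pts[1:]))
```

The same walk over track points can accumulate elevation deltas for `grade_avg`.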
---
## **2. Backend Design (Python, Async)**
**Framework:** FastAPI (async-first, non-blocking I/O)
**Tasks:**
* **Route/Section Management:** Upload GPX, store metadata, read GPX files for visualization
* **Rule Management:** CRUD rules with plaintext storage
* **Plan Management:** Generate plans (AI), store versions
* **Workout Analysis:** Fetch Garmin activity, run AI analysis, store reports
* **AI Integration:** Async calls to OpenRouter
* **Database Interaction:** Async Postgres client (e.g., `asyncpg` or `SQLAlchemy Async`)
**Endpoints (examples)**
| Method | Endpoint | Description |
| ------ | ------------------- | ------------------------------------------------ |
| POST | `/routes/upload` | Upload GPX file for route/section |
| GET | `/routes` | List routes and sections |
| POST | `/rules` | Create new rule set (plaintext) |
| POST | `/plans/generate` | Generate new plan using rules & goals |
| GET | `/plans/{plan_id}` | Fetch plan JSON & version info |
| POST | `/workouts/analyze` | Trigger AI analysis for a synced Garmin activity |
| POST | `/workouts/approve` | Approve AI suggestions → create new plan version |
**Async Patterns:**
* File I/O → async reading/writing GPX
* AI API calls → async HTTP requests
* Garmin sync → async polling/scheduled jobs
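Both the AI calls and the Garmin sync imply retry-with-backoff around async I/O. A minimal stdlib sketch of that pattern (the real client would wrap an async HTTP library such as httpx; `attempts` and `base_delay` are illustrative defaults):

```python
import asyncio

async def with_retries(coro_factory, attempts=3, base_delay=0.1):
    """Run an async callable, retrying with exponential backoff.
    coro_factory is invoked anew for each attempt."""
    for i in range(attempts):
        try:
            return await coro_factory()
        except Exception:
            if i == attempts - 1:
                raise
            await asyncio.sleep(base_delay * 2 ** i)

# Demo: fails twice, then succeeds on the third attempt
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

result = asyncio.run(with_retries(flaky, base_delay=0.001))
```

Re-raising on the final attempt lets the caller's error handling (and sync log) see the real failure.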
---
## **3. Database Design (Postgres)**
**Tables:**
```sql
-- Routes & Sections
CREATE TABLE routes (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
created_at TIMESTAMP DEFAULT now()
);
CREATE TABLE sections (
id SERIAL PRIMARY KEY,
route_id INT REFERENCES routes(id),
gpx_file_path TEXT NOT NULL,
distance_m NUMERIC,
grade_avg NUMERIC,
min_gear TEXT,
est_time_minutes NUMERIC,
created_at TIMESTAMP DEFAULT now()
);
-- Rules (plaintext storage)
CREATE TABLE rules (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
description TEXT,
user_defined BOOLEAN DEFAULT true,
rule_text TEXT NOT NULL, -- Plaintext rules
version INT DEFAULT 1,
parent_rule_id INT REFERENCES rules(id),
created_at TIMESTAMP DEFAULT now(),
updated_at TIMESTAMP DEFAULT now()
);
-- Plans (versioned)
CREATE TABLE plans (
id SERIAL PRIMARY KEY,
jsonb_plan JSONB NOT NULL,
version INT NOT NULL,
parent_plan_id INT REFERENCES plans(id),
created_at TIMESTAMP DEFAULT now()
);
-- Workouts
CREATE TABLE workouts (
id SERIAL PRIMARY KEY,
plan_id INT REFERENCES plans(id),
garmin_activity_id TEXT NOT NULL,
metrics JSONB,
created_at TIMESTAMP DEFAULT now()
);
-- Analyses
CREATE TABLE analyses (
id SERIAL PRIMARY KEY,
workout_id INT REFERENCES workouts(id),
jsonb_feedback JSONB,
created_at TIMESTAMP DEFAULT now()
);
-- AI Prompts
CREATE TABLE prompts (
id SERIAL PRIMARY KEY,
action_type TEXT, -- plan, analysis, suggestion
model TEXT,
prompt_text TEXT,
version INT DEFAULT 1,
created_at TIMESTAMP DEFAULT now()
);
```
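The `plans` table is versioned, and the evolution service elsewhere in this commit bumps `version` while pointing `parent_plan_id` at the prior plan. A minimal in-memory sketch of that chaining logic using sqlite3 (the production schema is Postgres with JSONB; the column types and `save_new_version` helper here are simplified illustrations):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE plans (
        id INTEGER PRIMARY KEY,
        jsonb_plan TEXT NOT NULL,
        version INTEGER NOT NULL,
        parent_plan_id INTEGER REFERENCES plans(id)
    )""")

def save_new_version(plan: dict, parent_id=None) -> int:
    """Insert a plan, chaining its version number off the parent."""
    if parent_id is None:
        version = 1
    else:
        row = conn.execute("SELECT version FROM plans WHERE id = ?",
                           (parent_id,)).fetchone()
        version = row[0] + 1
    cur = conn.execute(
        "INSERT INTO plans (jsonb_plan, version, parent_plan_id) VALUES (?, ?, ?)",
        (json.dumps(plan), version, parent_id))
    return cur.lastrowid

v1 = save_new_version({"weeks": []})
v2 = save_new_version({"weeks": [{"recovery": True}]}, parent_id=v1)
```

Keeping old rows immutable and walking `parent_plan_id` gives the evolution history for free.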
---
## **4. Containerization (Docker Compose)**
```yaml
version: '3.9'
services:
backend:
build: ./backend
ports:
- "8000:8000"
volumes:
- gpx-data:/app/data/gpx
environment:
- DATABASE_URL=postgresql://postgres:password@db:5432/cycling
depends_on:
- db
frontend:
build: ./frontend
ports:
- "3000:3000"
db:
image: postgres:15
restart: always
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
POSTGRES_DB: cycling
volumes:
- postgres-data:/var/lib/postgresql/data
volumes:
gpx-data:
driver: local
postgres-data:
driver: local
```
**Notes:**
* `/app/data/gpx` inside backend container is persisted on host via `gpx-data` volume.
* Postgres data persisted via `postgres-data`.
* Backend talks to DB via async client.
---
## **5. Frontend UI Layouts & Flows**
### **5.1 Layout**
* **Navbar:** Routes | Rules | Plans | Workouts | Analysis | Export/Import
* **Sidebar:** Filters (date, type, difficulty)
* **Main Area:** Dynamic content depending on selection
### **5.2 Key Screens**
1. **Routes**
* Upload/import GPX
* View route map + section metadata
2. **Rules**
* Plaintext rule editor
* Simple create/edit form
* Rule version history
3. **Plan**
* Select goal + rule set → generate plan
* View plan timeline & weekly workouts
4. **Workout Analysis**
* List synced Garmin activities
* Select activity → AI generates report
* Visualizations: HR, cadence, power vs planned
* Approve suggestions → new plan version
5. **Export/Import**
* Export JSON/ZIP of routes, rules, plans
* Import JSON/GPX
### **5.3 User Flow Example**
1. Upload GPX → backend saves file + DB metadata
2. Define rule set → Store plaintext → DB versioned
3. Generate plan → AI → store plan version in DB
4. Sync Garmin activity → backend fetches metrics → store workout
5. AI analyzes → report displayed → user approves → new plan version
6. Export plan or route as needed
---
## **6. AI Integration**
* Each **action type** (plan generation, analysis, suggestion) has:
* Stored prompt template in DB
* Configurable model per action
* Async calls to OpenRouter
* Store raw AI output + processed structured result in DB
* Use plaintext rules directly in prompts without parsing
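Because rules stay plaintext, filling a stored prompt template is plain string substitution. A minimal sketch (the `{rules}`/`{goals}` placeholder names match the test fixture earlier in this commit; the template text and `build_prompt` helper are illustrative):

```python
PROMPT_TEMPLATE = (
    "You are a cycling coach. Respect these rules exactly:\n"
    "{rules}\n"
    "Athlete goals: {goals}\n"
    "Return the plan as JSON."
)

def build_prompt(template: str, rule_text: str, goals: str) -> str:
    """Fill a stored prompt template; plaintext rules pass through unparsed."""
    return template.format(rules=rule_text, goals=goals)

prompt = build_prompt(
    PROMPT_TEMPLATE,
    rule_text="Max 4 rides/week.\nNo back-to-back hard days.",
    goals="Finish a hilly century in June",
)
```

Versioning the template row (not the filled prompt) is what keeps stored raw AI output reproducible.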
---
## ✅ **Next Steps**
1. Implement **Python FastAPI backend** with async patterns.
2. Build **Postgres DB schema** and migration scripts.
3. Set up **Docker Compose** with mounted GPX folder.
4. Design frontend UI based on the flows above.
5. Integrate AI endpoints and Garmin sync.
---


@@ -1,91 +0,0 @@
# Export/Import API Specification
## Export Endpoint
`GET /api/export`
### Parameters (query string)
- `types` (required): Comma-separated list of data types to export
- Valid values: `routes`, `rules`, `plans`, `all`
- `format` (required): Export format
- `json`: Single JSON file
- `zip`: ZIP archive with separate files
- `gpx`: Only GPX files (routes only)
### Response
- `200 OK` with file download
- `400 Bad Request` for invalid parameters
- `500 Internal Server Error` for export failures
### Example
```http
GET /api/export?types=routes,plans&format=zip
```
---
## Import Validation
`POST /api/import/validate`
### Request
- Multipart form with `file` field containing import data
### Response
```json
{
"valid": true,
"conflicts": [
{
"type": "route",
"id": 123,
"name": "Mountain Loop",
"existing_version": 2,
"import_version": 3,
"resolution_options": ["overwrite", "rename", "skip"]
}
],
"summary": {
"routes": 15,
"rules": 3,
"plans": 2
}
}
```
---
## Import Execution
`POST /api/import`
### Request
- Multipart form with:
- `file`: Import data file
- `conflict_resolution`: Global strategy (overwrite, skip, rename)
- `resolutions`: JSON array of specific resolutions (optional)
```json
[{"type": "route", "id": 123, "action": "overwrite"}]
```
### Response
```json
{
"imported": {
"routes": 12,
"rules": 3,
"plans": 2
},
"skipped": {
"routes": 3,
"rules": 0,
"plans": 0
},
"errors": []
}
```
### Status Codes
- `200 OK`: Import completed
- `202 Accepted`: Import in progress (async)
- `400 Bad Request`: Invalid input
- `409 Conflict`: Unresolved conflicts
---
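Combining the global `conflict_resolution` strategy with per-item `resolutions` can be sketched as a simple precedence rule: an explicit override wins, otherwise the global strategy applies (the `resolve_conflicts` helper below is a hypothetical server-side implementation, not part of the spec):

```python
def resolve_conflicts(conflicts, global_strategy="skip", overrides=None):
    """Map each conflict to an action: a per-item override wins,
    otherwise the global strategy applies."""
    overrides = {(o["type"], o["id"]): o["action"] for o in (overrides or [])}
    return [
        {"type": c["type"], "id": c["id"],
         "action": overrides.get((c["type"], c["id"]), global_strategy)}
        for c in conflicts
    ]

conflicts = [
    {"type": "route", "id": 123, "name": "Mountain Loop"},
    {"type": "plan", "id": 7, "name": "Base Block"},
]
plan = resolve_conflicts(
    conflicts,
    global_strategy="skip",
    overrides=[{"type": "route", "id": 123, "action": "overwrite"}],
)
```

Any conflict that ends up without a decidable action is what should trigger the `409 Conflict` response above.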


@@ -1,221 +0,0 @@
# Export/Import Frontend Implementation
## File Structure
```
src/pages/
ExportImport.jsx # Main page
src/components/export/
DataExporter.jsx # Export functionality
DataImporter.jsx # Import functionality
ConflictDialog.jsx # Conflict resolution UI
ImportSummary.jsx # Post-import report
```
## Component Specifications
### ExportImport.jsx
```jsx
import { useState } from 'react';
import DataExporter from '../components/export/DataExporter';
import DataImporter from '../components/export/DataImporter';
export default function ExportImportPage() {
const [activeTab, setActiveTab] = useState('export');
return (
<div className="export-import-page">
<div className="tabs">
<button onClick={() => setActiveTab('export')}>Export</button>
<button onClick={() => setActiveTab('import')}>Import</button>
</div>
{activeTab === 'export' ? <DataExporter /> : <DataImporter />}
</div>
);
}
```
### DataExporter.jsx
```jsx
import { useState } from 'react';
const EXPORT_TYPES = [
{ id: 'routes', label: 'Routes' },
{ id: 'rules', label: 'Training Rules' },
{ id: 'plans', label: 'Training Plans' }
];
const EXPORT_FORMATS = [
{ id: 'json', label: 'JSON' },
{ id: 'zip', label: 'ZIP Archive' },
{ id: 'gpx', label: 'GPX Files' }
];
export default function DataExporter() {
const [selectedTypes, setSelectedTypes] = useState([]);
const [selectedFormat, setSelectedFormat] = useState('json');
const [isExporting, setIsExporting] = useState(false);
const [progress, setProgress] = useState(0);
const handleExport = async () => {
setIsExporting(true);
// API call to /api/export?types=...&format=...
// Track progress and trigger download
};
return (
<div className="exporter">
<h2>Export Data</h2>
<div className="type-selection">
<h3>Select Data to Export</h3>
{EXPORT_TYPES.map(type => (
<label key={type.id}>
<input
type="checkbox"
checked={selectedTypes.includes(type.id)}
onChange={() => toggleType(type.id)}
/>
{type.label}
</label>
))}
</div>
<div className="format-selection">
<h3>Export Format</h3>
<select value={selectedFormat} onChange={e => setSelectedFormat(e.target.value)}>
{EXPORT_FORMATS.map(format => (
<option key={format.id} value={format.id}>{format.label}</option>
))}
</select>
</div>
<button
onClick={handleExport}
disabled={selectedTypes.length === 0 || isExporting}
>
{isExporting ? `Exporting... ${progress}%` : 'Export Data'}
</button>
</div>
);
}
```
### DataImporter.jsx
```jsx
import { useState } from 'react';
import ConflictDialog from './ConflictDialog';
export default function DataImporter() {
const [file, setFile] = useState(null);
const [validation, setValidation] = useState(null);
const [isImporting, setIsImporting] = useState(false);
const [showConflictDialog, setShowConflictDialog] = useState(false);
const handleFileUpload = (e) => {
const file = e.target.files[0];
setFile(file);
// Call /api/import/validate
// Set validation results
};
const handleImport = () => {
if (validation?.conflicts?.length > 0) {
setShowConflictDialog(true);
} else {
startImport();
}
};
const startImport = (resolutions = []) => {
setIsImporting(true);
// Call /api/import with conflict resolutions
};
return (
<div className="importer">
<h2>Import Data</h2>
<input type="file" onChange={handleFileUpload} />
{validation && (
<div className="validation-results">
<h3>Validation Results</h3>
<p>Found: {validation.summary.routes} routes,
{validation.summary.rules} rules,
{validation.summary.plans} plans</p>
{validation.conflicts.length > 0 && (
<p>⚠️ {validation.conflicts.length} conflicts detected</p>
)}
</div>
)}
<button
onClick={handleImport}
disabled={!file || isImporting}
>
{isImporting ? 'Importing...' : 'Import Data'}
</button>
{showConflictDialog && (
<ConflictDialog
conflicts={validation.conflicts}
onResolve={startImport}
onCancel={() => setShowConflictDialog(false)}
/>
)}
</div>
);
}
```
### ConflictDialog.jsx
```jsx
export default function ConflictDialog({ conflicts, onResolve, onCancel }) {
const [resolutions, setResolutions] = useState({});
const handleResolution = (id, action) => {
setResolutions(prev => ({ ...prev, [id]: action }));
};
const applyResolutions = () => {
const resolutionList = Object.entries(resolutions).map(([id, action]) => ({
id,
action
}));
onResolve(resolutionList);
};
return (
<div className="conflict-dialog">
<h3>Resolve Conflicts</h3>
<div className="conflicts-list">
{conflicts.map(conflict => (
<div key={conflict.id} className="conflict-item">
<h4>{conflict.name} ({conflict.type})</h4>
<p>Existing version: {conflict.existing_version}</p>
<p>Import version: {conflict.import_version}</p>
<select
value={resolutions[conflict.id] || 'skip'}
onChange={e => handleResolution(conflict.id, e.target.value)}
>
<option value="overwrite">Overwrite</option>
<option value="rename">Rename</option>
<option value="skip">Skip</option>
</select>
</div>
))}
</div>
<div className="actions">
<button onClick={onCancel}>Cancel</button>
<button onClick={applyResolutions}>Apply Resolutions</button>
</div>
</div>
);
}
```
## Dependencies to Install
```bash
npm install react-dropzone react-json-view file-saver
```

fitness-tui/cmd/main.go Normal file

@@ -0,0 +1,25 @@
package main
import (
"fmt"
"os"
"github.com/sstent/aicyclingcoach-go/internal/config"
"github.com/sstent/aicyclingcoach-go/internal/tui"
)
func main() {
cfg, err := config.Load()
if err != nil {
fmt.Printf("Failed to load config: %v\n", err)
os.Exit(1)
}
fmt.Printf("Using storage path: %s\n", cfg.StoragePath)
app := tui.App{}
if err := app.Run(); err != nil {
fmt.Printf("Application error: %v\n", err)
os.Exit(1)
}
}

fitness-tui/go.mod Normal file

@@ -0,0 +1,43 @@
module github.com/sstent/aicyclingcoach-go
go 1.24.0
toolchain go1.24.2
replace garmin-connect => github.com/sstent/go-garth v0.1.0
require (
github.com/charmbracelet/bubbletea v1.3.9
github.com/spf13/viper v1.21.0
)
require (
github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc // indirect
github.com/charmbracelet/lipgloss v1.1.0 // indirect
github.com/charmbracelet/x/ansi v0.10.1 // indirect
github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd // indirect
github.com/charmbracelet/x/term v0.2.1 // indirect
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
github.com/fsnotify/fsnotify v1.9.0 // indirect
github.com/go-viper/mapstructure/v2 v2.4.0 // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-localereader v0.0.1 // indirect
github.com/mattn/go-runewidth v0.0.16 // indirect
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 // indirect
github.com/muesli/cancelreader v0.2.2 // indirect
github.com/muesli/termenv v0.16.0 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/sagikazarmark/locafero v0.11.0 // indirect
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 // indirect
github.com/spf13/afero v1.15.0 // indirect
github.com/spf13/cast v1.10.0 // indirect
github.com/spf13/pflag v1.0.10 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/sys v0.36.0 // indirect
golang.org/x/text v0.28.0 // indirect
)

fitness-tui/go.sum Normal file

@@ -0,0 +1,86 @@
github.com/aymanbagabas/go-osc52/v2 v2.0.1 h1:HwpRHbFMcZLEVr42D4p7XBqjyuxQH5SMiErDT4WkJ2k=
github.com/aymanbagabas/go-osc52/v2 v2.0.1/go.mod h1:uYgXzlJ7ZpABp8OJ+exZzJJhRNQ2ASbcXHWsFqH8hp8=
github.com/charmbracelet/bubbletea v1.3.9 h1:OBYdfRo6QnlIcXNmcoI2n1NNS65Nk6kI2L2FO1puS/4=
github.com/charmbracelet/bubbletea v1.3.9/go.mod h1:ORQfo0fk8U+po9VaNvnV95UPWA1BitP1E0N6xJPlHr4=
github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc h1:4pZI35227imm7yK2bGPcfpFEmuY1gc2YSTShr4iJBfs=
github.com/charmbracelet/colorprofile v0.2.3-0.20250311203215-f60798e515dc/go.mod h1:X4/0JoqgTIPSFcRA/P6INZzIuyqdFY5rm8tb41s9okk=
github.com/charmbracelet/lipgloss v1.1.0 h1:vYXsiLHVkK7fp74RkV7b2kq9+zDLoEU4MZoFqR/noCY=
github.com/charmbracelet/lipgloss v1.1.0/go.mod h1:/6Q8FR2o+kj8rz4Dq0zQc3vYf7X+B0binUUBwA0aL30=
github.com/charmbracelet/x/ansi v0.10.1 h1:rL3Koar5XvX0pHGfovN03f5cxLbCF2YvLeyz7D2jVDQ=
github.com/charmbracelet/x/ansi v0.10.1/go.mod h1:3RQDQ6lDnROptfpWuUVIUG64bD2g2BgntdxH0Ya5TeE=
github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd h1:vy0GVL4jeHEwG5YOXDmi86oYw2yuYUGqz6a8sLwg0X8=
github.com/charmbracelet/x/cellbuf v0.0.13-0.20250311204145-2c3ea96c31dd/go.mod h1:xe0nKWGd3eJgtqZRaN9RjMtK7xUYchjzPr7q6kcvCCs=
github.com/charmbracelet/x/term v0.2.1 h1:AQeHeLZ1OqSXhrAWpYUtZyX1T3zVxfpZuEQMIQaGIAQ=
github.com/charmbracelet/x/term v0.2.1/go.mod h1:oQ4enTYFV7QN4m0i9mzHrViD7TQKvNEEkHUMCmsxdUg=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f h1:Y/CXytFA4m6baUTXGLOoWe4PQhGxaX0KpnayAqC48p4=
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f/go.mod h1:vw97MGsxSvLiUE2X8qFplwetxpGLQrlU1Q9AUEIzCaM=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs=
github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-localereader v0.0.1 h1:ygSAOl7ZXTx4RdPYinUpg6W99U8jWvWi9Ye2JC/oIi4=
github.com/mattn/go-localereader v0.0.1/go.mod h1:8fBrzywKY7BI3czFoHkuzRoWE9C+EiG4R1k4Cjx5p88=
github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6 h1:ZK8zHtRHOkbHy6Mmr5D264iyp3TiX5OmNcI5cIARiQI=
github.com/muesli/ansi v0.0.0-20230316100256-276c6243b2f6/go.mod h1:CJlz5H+gyd6CUWT45Oy4q24RdLyn7Md9Vj2/ldJBSIo=
github.com/muesli/cancelreader v0.2.2 h1:3I4Kt4BQjOR54NavqnDogx/MIoWBFa0StPA8ELUXHmA=
github.com/muesli/cancelreader v0.2.2/go.mod h1:3XuTXfFS2VjM+HTLZY9Ak0l6eUKfijIfMUZ4EgX0QYo=
github.com/muesli/termenv v0.16.0 h1:S5AlUN9dENB57rsbnkPyfdGuWIlkmzJjbFf0Tf5FWUc=
github.com/muesli/termenv v0.16.0/go.mod h1:ZRfOIKPFDYQoDFF4Olj7/QJbW60Ol/kL1pU3VfY/Cnk=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/sagikazarmark/locafero v0.11.0 h1:1iurJgmM9G3PA/I+wWYIOw/5SyBtxapeHDcg+AAIFXc=
github.com/sagikazarmark/locafero v0.11.0/go.mod h1:nVIGvgyzw595SUSUE6tvCp3YYTeHs15MvlmU87WwIik=
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 h1:+jumHNA0Wrelhe64i8F6HNlS8pkoyMv5sreGx2Ry5Rw=
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8/go.mod h1:3n1Cwaq1E1/1lhQhtRK2ts/ZwZEhjcQeJQ1RuC6Q/8U=
github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
github.com/spf13/afero v1.15.0/go.mod h1:NC2ByUVxtQs4b3sIUphxK0NioZnmxgyCrfzeuq8lxMg=
github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY=
github.com/spf13/cast v1.10.0/go.mod h1:jNfB8QC9IA6ZuY2ZjDp0KtFO2LZZlg4S/7bzP6qqeHo=
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.21.0 h1:x5S+0EU27Lbphp4UKm1C+1oQO+rKx36vfCoaVebLFSU=
github.com/spf13/viper v1.21.0/go.mod h1:P0lhsswPGWD/1lZJ9ny3fYnVqxiegrlNrEmgLjbTCAY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561 h1:MDc5xs78ZrZr3HMQugiXOAkSZtfTpbJLDr/lwfgO53E=
golang.org/x/exp v0.0.0-20220909182711-5c715a9e8561/go.mod h1:cyybsKvd6eL0RnXn6p/Grxp8F5bW7iYuBgsNCOHpMYE=
golang.org/x/sys v0.0.0-20210809222454-d867a43fc93e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -0,0 +1,52 @@
package config

import (
    "fmt"
    "os"
    "path/filepath"

    "github.com/spf13/viper"
)

type Config struct {
    Garmin struct {
        Username string
        Password string
    }
    OpenRouter struct {
        APIKey string
        Model  string
    }
    StoragePath string
}

func Load() (*Config, error) {
    home, err := os.UserHomeDir()
    if err != nil {
        return nil, fmt.Errorf("failed to get user home directory: %w", err)
    }

    viper.SetConfigName("config")
    viper.SetConfigType("yaml")
    viper.AddConfigPath(filepath.Join(home, ".fitness-tui"))
    viper.AddConfigPath(".")

    if err := viper.ReadInConfig(); err != nil {
        return nil, fmt.Errorf("failed to read config: %w", err)
    }

    var cfg Config
    if err := viper.Unmarshal(&cfg); err != nil {
        return nil, fmt.Errorf("failed to unmarshal config: %w", err)
    }

    // Set defaults if not configured
    if cfg.StoragePath == "" {
        cfg.StoragePath = filepath.Join(home, ".fitness-tui")
    }
    if cfg.OpenRouter.Model == "" {
        cfg.OpenRouter.Model = "deepseek/deepseek-r1-0528"
    }

    return &cfg, nil
}
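
The loader above looks for a `config.yaml` under `~/.fitness-tui` (or the working directory) and unmarshals it into `Config`; viper's default mapstructure matching is case-insensitive, so lowercase keys map onto the struct fields. A minimal sketch of that file — the values are placeholders, not a documented schema:

```yaml
# ~/.fitness-tui/config.yaml
garmin:
  username: athlete@example.com
  password: "app-specific-password"
openrouter:
  apikey: "sk-or-..."
  model: deepseek/deepseek-r1-0528
storagepath: /home/athlete/.fitness-tui
```

`storagepath` and `openrouter.model` may be omitted; `Load` falls back to `~/.fitness-tui` and `deepseek/deepseek-r1-0528` respectively.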

View File

@@ -0,0 +1,3 @@
package config
// Default configuration values will be added here

View File

@@ -0,0 +1,3 @@
package storage
// Activity storage implementation will be added here

View File

@@ -0,0 +1,3 @@
package storage
// Analysis caching implementation will be added here

View File

@@ -0,0 +1,38 @@
package tui

import (
    "fmt"

    tea "github.com/charmbracelet/bubbletea"
)

type App struct {
    currentModel tea.Model
}

func (a *App) Init() tea.Cmd {
    return nil
}

func (a *App) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
    switch msg := msg.(type) {
    case tea.KeyMsg:
        switch msg.String() {
        case "ctrl+c", "q":
            return a, tea.Quit
        }
    }
    return a, nil
}

func (a *App) View() string {
    return "AICyclingCoach-GO\n\nPress q to quit\n"
}

func (a *App) Run() error {
    p := tea.NewProgram(a)
    if _, err := p.Run(); err != nil {
        return fmt.Errorf("failed to run application: %w", err)
    }
    return nil
}

View File

@@ -0,0 +1,41 @@
package models

import (
    "fmt"
    "time"
)

type Activity struct {
    ID       string
    Name     string
    Type     string
    Date     time.Time
    Duration time.Duration
    Distance float64 // meters
    Metrics  ActivityMetrics
}

type ActivityMetrics struct {
    AvgHeartRate  int
    MaxHeartRate  int
    AvgPace       float64 // seconds per km
    AvgSpeed      float64 // km/h
    ElevationGain float64 // meters
    ElevationLoss float64 // meters
}

func (a *Activity) FormattedDuration() string {
    hours := int(a.Duration.Hours())
    minutes := int(a.Duration.Minutes()) % 60
    return fmt.Sprintf("%02d:%02d", hours, minutes)
}

func (a *Activity) FormattedDistance() string {
    return fmt.Sprintf("%.2fkm", a.Distance/1000)
}

func (a *Activity) FormattedPace() string {
    minutes := int(a.Metrics.AvgPace) / 60
    seconds := int(a.Metrics.AvgPace) % 60
    return fmt.Sprintf("%d:%02d/km", minutes, seconds)
}
