old task removal

2026-03-16 01:57:03 -04:00
parent 891b25318a
commit 41eafcc8b9
27 changed files with 102 additions and 1080 deletions


@@ -26,6 +26,6 @@ These files are essential. Read them.
## Oversight Responsibilities
As CTO, you must:
-- Periodically check all non-complete issues in the engineering queue
+- Periodically check all non-complete issues
- Ensure the best agent for each task is assigned based on their role and capabilities
- Monitor the code review pipeline to ensure proper flow


@@ -0,0 +1,101 @@
---
name: paperclip-create-plugin
description: >
Create new Paperclip plugins with the current alpha SDK/runtime. Use when
scaffolding a plugin package, adding a new example plugin, or updating plugin
authoring docs. Covers the supported worker/UI surface, route conventions,
scaffold flow, and verification steps.
---
# Create a Paperclip Plugin
Use this skill when the task is to create, scaffold, or document a Paperclip plugin.
## 1. Ground rules
Consult these as needed:
1. `doc/plugins/PLUGIN_AUTHORING_GUIDE.md`
2. `packages/plugins/sdk/README.md`
3. `doc/plugins/PLUGIN_SPEC.md` only for future-looking context
Current runtime assumptions:
- plugin workers are trusted code
- plugin UI is trusted same-origin host code
- worker APIs are capability-gated
- plugin UI is not sandboxed by manifest capabilities
- no host-provided shared plugin UI component kit yet
- `ctx.assets` is not supported in the current runtime
## 2. Preferred workflow
Use the scaffold package instead of hand-writing the boilerplate:
```bash
pnpm --filter @paperclipai/create-paperclip-plugin build
node packages/plugins/create-paperclip-plugin/dist/index.js <npm-package-name> --output <target-dir>
```
For a plugin that lives outside the Paperclip repo, pass `--sdk-path` and let the scaffold snapshot the local SDK/shared packages into `.paperclip-sdk/`:
```bash
pnpm --filter @paperclipai/create-paperclip-plugin build
node packages/plugins/create-paperclip-plugin/dist/index.js @acme/plugin-name \
--output /absolute/path/to/plugin-repos \
--sdk-path /absolute/path/to/paperclip/packages/plugins/sdk
```
Recommended target inside this repo:
- `packages/plugins/examples/` for example plugins
- a dedicated `packages/plugins/<name>/` folder if the plugin is graduating into a real package
## 3. After scaffolding
Check and adjust:
- `src/manifest.ts`
- `src/worker.ts`
- `src/ui/index.tsx`
- `tests/plugin.spec.ts`
- `package.json`
Make sure the plugin:
- declares only supported capabilities
- does not use `ctx.assets`
- does not import host UI component stubs
- keeps UI self-contained
- uses `routePath` only on `page` slots
- is installed into Paperclip from an absolute local path during development
## 4. If the plugin should appear in the app
For bundled example/discoverable behavior, update the relevant host wiring:
- bundled example list in `server/src/routes/plugins.ts`
- any docs that list in-repo examples
Only do this if the user wants the plugin surfaced as a bundled example.
## 5. Verification
Always run:
```bash
pnpm --filter <plugin-package> typecheck
pnpm --filter <plugin-package> test
pnpm --filter <plugin-package> build
```
If you changed SDK/host/plugin runtime code too, also run broader repo checks as appropriate.
## 6. Documentation expectations
When authoring or updating plugin docs:
- distinguish current implementation from future spec ideas
- be explicit about the trusted-code model
- do not promise host UI components or asset APIs
- prefer npm-package deployment guidance over repo-local workflows for production


@@ -1,52 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-11
title: Create SolidJS Dashboard Component
status: done
completed_date: 2026-03-08
company_id: FrenoCorp
objective: Build web dashboard for job submission and monitoring
context: |
- Web platform scaffolding exists at /home/mike/code/AudiobookPipeline/web/
- Need to build out SolidJS components for user interface
- Dashboard should show active jobs, history, and submission form
issue_type: feature
priority: high
assignee: Atlas
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- Users can submit new audiobook jobs via web UI
- Dashboard displays job status in real-time
acceptance_criteria:
- Job submission form works end-to-end
- Dashboard updates show job progress
- Responsive design for mobile/desktop
notes:
- Web scaffold already exists (SolidStart + Hono API)
- Focus on UI components and API integration
- COMPLETED: Dashboard.jsx with real-time polling, file upload, job status display
- COMPLETED: Jobs.jsx with refresh button and progress bars
- COMPLETED: In-memory DB fallback for local development without Turso credentials
completion_notes: |
Completed 2026-03-08. Deliverables:
- Dashboard.jsx: Real-time job fetching (5s polling), file upload integration, status badges, progress bars, summary cards
- Jobs.jsx: Full job list with refresh, color-coded status labels, progress display, empty state handling
- API routes: GET /api/jobs/:id, PATCH /api/jobs/:id/status added
- In-memory database for local dev (no Turso credentials required)
review_notes: |
Code review completed 2026-03-14 by Code Reviewer:
- Found code duplication in fetchJobs and getStatusColor functions between Dashboard.jsx and Jobs.jsx
- Identified hardcoded API endpoint "http://localhost:4000" that should be configurable
- Noted error handling improvements needed in fetchCredits fallback
- Positive observations: Proper SolidJS usage, error boundaries, interval cleanup, accessibility
- Assigned back to original engineer (Atlas) for improvements
links:
web_codebase: /home/mike/code/AudiobookPipeline/web/
---


@@ -1,70 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-12
title: Integrate Redis Queue with Web API
status: done
completed_date: 2026-03-08
company_id: FrenoCorp
objective: Connect web API to Redis job queue for async processing
context: |
- Redis worker module exists at /home/mike/code/AudiobookPipeline/src/worker.py
- Hono API server needs to enqueue jobs to Redis
- GPU worker container ready at docker-compose.yml
issue_type: feature
priority: high
assignee: Atlas
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- Web API enqueues jobs to Redis queue
- GPU workers pull jobs and process them
- Job status updates flow back to web dashboard
acceptance_criteria:
- POST /api/jobs creates Redis job
- Worker processes job in background
- Status updates via WebSocket or polling
notes:
- RQ (Redis Queue) already integrated in worker.py
- Need API -> Redis enqueue logic
- Need status update mechanism
- COMPLETED: Added redis package, updated POST /api/jobs to enqueue jobs
- COMPLETED: Graceful fallback if Redis not connected
completion_notes: |
Completed 2026-03-08. Deliverables:
- Added @redis/client package to web platform
- POST /api/jobs now enqueues job payload to 'audiobook_jobs' Redis queue
- GET /api/jobs/:id for individual job status lookup
- PATCH /api/jobs/:id/status for worker to update progress
- Graceful error handling when Redis is unavailable (logs warning, continues)
Testing requires: docker-compose up -d redis
**Code Review Improvements (2026-03-15):**
- Fixed hardcoded subscriptionStatus="free" - now fetched from database via getUserSubscription()
- Fixed hardcoded demo user data in job completion/failure notifications
- Notifications now use actual user_id, email, and job data from database
- Added getUserEmailFromUserId() helper for fetching user emails
review_notes: |
Code review completed 2026-03-14 by Code Reviewer:
- Found solid implementation with proper separation of concerns
- Good error handling for Redis connection failures with graceful fallback
- Proper use of BullMQ for job queuing with appropriate retry mechanisms
- Clear API endpoints for job creation, retrieval, status updates, and deletion
- Proper validation using Zod schema for job creation
- Rate limiting implementation for free tier users
- Real-time updates via jobEvents and notifications dispatcher
- Minor improvements noted:
* Hardcoded subscriptionStatus = "free" in jobs.js line 137 - should come from user data
* Hardcoded demo user data in job completion/failure events (lines 439-451)
* Hardcoded error message should use updates.error_message when available (line 459)
- Assignment: Return to original engineer (Atlas) for minor improvements
links:
worker_code: /home/mike/code/AudiobookPipeline/src/worker.py
docker_config: /home/mike/code/AudiobookPipeline/docker-compose.yml
---


@@ -1,52 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-13
title: Set Up Turso Database for Job Persistence
status: completed
company_id: FrenoCorp
objective: Configure Turso database for persistent job storage
context: |
- Turso client initialized in web scaffold
- Schema defined for users, jobs, files, usage_events tables
- Needs cloud credentials and production configuration
issue_type: feature
priority: medium
assignee: Hermes
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- Jobs persist to Turso database
- Job history available for users
- Usage tracking for billing
acceptance_criteria:
- Database schema deployed to Turso
- API reads/writes jobs from database
- Query performance acceptable
notes:
- Requires Turso account and credentials
- Schema already defined in web scaffold
- Hermes to handle setup and migration
links:
web_codebase: /home/mike/code/AudiobookPipeline/web/
review_notes: |
Code review completed 2026-03-14 by Code Reviewer:
- Found solid foundation with appropriate fallback mechanisms
- Proper abstraction with fallback to in-memory database for development when Turso credentials unavailable
- Complete schema initialization for all required tables: users, jobs, files, usage_events, credit_transactions, notification_preferences, notification_logs
- Proper error handling with custom error types (DatabaseError, QueryError, ConnectionError)
- Comprehensive indexing strategy for query performance on frequently queried columns
- Demo data seeding for in-memory database to facilitate development and testing
- Health check function for monitoring database connectivity
- Proper handling of SQLite limitations (ALTER TABLE not supported) with graceful fallback
- Minor considerations noted:
* In-memory implementation could be extended to support more table operations for comprehensive testing
* Consider adding connection retry logic for Turso connections in production environments
* Could benefit from more detailed logging of database operations (while being careful not to log sensitive data)
* Consider adding database migration versioning for schema evolution
- Assignment: Return to original engineer (Hermes) for considerations
---


@@ -1,65 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-14
title: Improve CLI Progress Feedback
status: completed
completed_date: 2026-03-11
company_id: FrenoCorp
objective: Add real-time progress indicators to CLI pipeline
context: |
- Current CLI lacks visible progress during long-running generation
- Users need feedback on segmentation, generation, and assembly stages
issue_type: enhancement
priority: medium
assignee: Hermes
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- Progress bar or percentage display during processing
- Stage-by-stage status updates
- ETA estimation for generation stage
acceptance_criteria:
- CLI shows progress during all stages
- Generation stage has accurate timing estimate
- No blocking on I/O operations
notes:
- Use tqdm or similar library for progress bars
- Add callbacks to pipeline stages
links:
cli_code: /home/mike/code/AudiobookPipeline/cli.py
completion_notes: |
Completed 2026-03-11. Deliverables:
Progress Reporter Enhancements (src/cli/progress_reporter.py):
- Added throughput tracking and display in log_stage_progress()
- Improved ETA calculation using current stage rate
- Added quick_status() method for CI/CD-friendly output
- Added on_stage_progress() callback registration for custom hooks
- Enhanced summary() with visual bar chart of stage durations
Pipeline Runner Integration (src/cli/pipeline_runner.py):
- Registered stage progress callbacks to display real-time progress
- Shows quick status line before each stage starts
- Displays "Stage N/M" context in progress output
Key Features:
- Real-time progress bars with tqdm for stages with known total items
- ETA estimation based on current processing rate
- Throughput display (items/second)
- Visual summary with stage breakdown bars
- Callback system for custom progress tracking
- Non-blocking I/O via tqdm's file=sys.stderr
Acceptance Criteria Met:
[x] CLI shows progress during all stages - tqdm bars + log_stage_progress()
[x] Generation stage has accurate timing estimate - ETA calculated from current rate
[x] No blocking on I/O operations - tqdm handles async updates
Git Commit: AudiobookPipeline@c8808e2 (96 insertions, 8 deletions)
---
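The rate-based ETA described in the completion notes reduces to a small piece of arithmetic. A minimal sketch (class and method names are illustrative, not the actual `src/cli/progress_reporter.py` API):

```python
import time

class StageProgress:
    """Sketch of per-stage ETA tracking from the current processing rate."""

    def __init__(self, total_items):
        self.total = total_items
        self.done = 0
        self.start = time.monotonic()

    def update(self, items=1):
        self.done += items

    def eta_seconds(self):
        """ETA from the current stage rate, as the completion notes describe."""
        elapsed = time.monotonic() - self.start
        if self.done == 0 or elapsed == 0:
            return None  # no rate information yet
        rate = self.done / elapsed  # items per second
        return (self.total - self.done) / rate
```

Because the estimate uses only the current stage's rate, a slow stage does not skew the ETA of the next one.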


@@ -1,34 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-15
title: Add Configuration Validation to CLI
status: completed
company_id: FrenoCorp
objective: Validate config.yaml before pipeline execution
context: |
- Config validation happens too late in pipeline
- Users should get clear errors about missing models or invalid settings
issue_type: bug
priority: low
assignee: Hermes
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- CLI validates config at startup
- Clear error messages for common misconfigurations
- Check model files exist before starting pipeline
acceptance_criteria:
- Missing model files detected before pipeline starts
- Invalid device settings rejected with helpful message
- Config syntax errors caught early
notes:
- Add pre-flight checks in cli.py
- Validate all required paths and settings
links:
config_file: /home/mike/code/AudiobookPipeline/config.yaml
cli_code: /home/mike/code/AudiobookPipeline/cli.py
---
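The pre-flight checks this task asks for amount to collecting every problem before the pipeline starts, rather than failing on the first one mid-run. A sketch under assumed setting names (`model_path`, `device`, `output_dir` are illustrative, not the real config keys):

```python
from pathlib import Path

ALLOWED_DEVICES = {"cpu", "cuda", "mps"}  # illustrative device set

def preflight_check(config):
    """Return a list of human-readable problems; an empty list means the
    config passes. Sketch of the idea, not the shipped validator."""
    problems = []
    for key in ("model_path", "device", "output_dir"):
        if key not in config:
            problems.append(f"missing required setting: {key}")
    device = config.get("device")
    if device is not None and device not in ALLOWED_DEVICES:
        problems.append(
            f"invalid device {device!r}; expected one of {sorted(ALLOWED_DEVICES)}"
        )
    model_path = config.get("model_path")
    if model_path is not None and not Path(model_path).exists():
        problems.append(f"model file not found: {model_path}")
    return problems
```

Reporting all problems at once gives the user one round-trip to a working config instead of several.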


@@ -1,33 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-16
title: Optimize Batch Processing for Multiple Books
status: todo
company_id: FrenoCorp
objective: Improve batch processor to handle multiple books efficiently
context: |
- Current batch processor processes books sequentially
- Can optimize by parallelizing across CPU cores when GPU unavailable
issue_type: enhancement
priority: low
assignee: Atlas
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- Batch processing uses all available CPU cores
- Memory management prevents OOM on large batches
- Configurable parallelism level
acceptance_criteria:
- Batch processes multiple books in parallel
- Memory usage stays within bounds
- Config option to set parallelism level
notes:
- Use multiprocessing or concurrent.futures
- Implement memory monitoring
links:
batch_processor: /home/mike/code/AudiobookPipeline/src/generation/batch_processor.py
---
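The `concurrent.futures` approach from the notes can be sketched as below. `process_book` is a stand-in for the real per-book pipeline; for CPU-bound generation you would swap `ThreadPoolExecutor` for `ProcessPoolExecutor`, and the memory monitoring the task calls for is omitted here:

```python
from concurrent.futures import ThreadPoolExecutor

def process_book(path):
    """Stand-in for the real per-book pipeline in batch_processor.py."""
    return f"processed:{path}"

def process_batch(book_paths, max_workers=4):
    """Process books in parallel with a configurable worker count."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Executor.map preserves input order in its results
        return list(pool.map(process_book, book_paths))
```

The `max_workers` parameter is the configurable parallelism level named in the acceptance criteria.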


@@ -1,62 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-17
title: Add Memory-Efficient Model Loading
status: done
completed_date: 2026-03-15
company_id: FrenoCorp
objective: Implement gradient checkpointing and mixed precision for lower VRAM usage
context: |
- Qwen3-TTS 1.7B may not fit in low-end GPUs
- Gradient checkpointing trades compute for memory
- Mixed precision (FP16) reduces memory by half
issue_type: enhancement
priority: medium
assignee: Atlas
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- Model runs on GPUs with <8GB VRAM
- Configurable precision (FP32/FP16/BF16)
- Graceful degradation when memory insufficient
acceptance_criteria:
- FP16 mode reduces memory usage by ~50%
- Gradient checkpointing option available
- Clear error when memory still insufficient
notes:
- Use torch.cuda.amp for mixed precision
- Set gradient_checkpointing=True in model config
- COMPLETED: Added memory-efficient model loading with auto-detection
completion_notes: |
Completed 2026-03-15. Deliverables:
**New Parameters:**
- `memory_efficient` (bool, default=True): Enable all memory-saving features
- `use_gradient_checkpointing` (bool, default=False): Trade compute for memory
- Enhanced `dtype` support with auto-selection based on available GPU memory
**New Methods:**
- `_check_gpu_memory()`: Returns (total_gb, available_gb)
- `_select_optimal_dtype(available_gb)`: Auto-selects fp32/bf16/fp16
- `get_memory_stats()`: Returns dict with current GPU memory usage
- `estimate_model_memory()`: Returns estimated memory for different precisions
**Features:**
- Auto-detects GPU memory and selects optimal dtype (bf16 for Ampere+, fp16 otherwise)
- Graceful degradation: fp32 → bf16 → fp16 based on available memory
- Enhanced OOM error messages with actionable suggestions
- Memory stats reported on load/unload
- Gradient checkpointing support for training scenarios
**Memory Estimates:**
- FP32: ~6.8GB (1.7B params × 4 bytes + overhead)
- FP16/BF16: ~3.9GB (50% reduction)
- Minimum recommended: 4GB VRAM
links:
tts_model: /home/mike/code/AudiobookPipeline/src/generation/tts_model.py
---
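The fp32 → bf16 → fp16 degradation described in the completion notes can be sketched as pure selection logic. Thresholds follow the memory estimates above (~6.8 GB for FP32, ~3.9 GB for half precision) but are illustrative, not the shipped `_select_optimal_dtype` implementation:

```python
def select_optimal_dtype(available_gb, supports_bf16=True):
    """Pick the widest precision that fits in available GPU memory."""
    if available_gb >= 8.0:
        return "fp32"  # full precision fits comfortably
    if available_gb >= 4.0:
        # bf16 on Ampere+ hardware, fp16 otherwise
        return "bf16" if supports_bf16 else "fp16"
    raise MemoryError(
        f"~4 GB VRAM recommended for the 1.7B model at half precision; "
        f"only {available_gb:.1f} GB available"
    )
```

Raising with an actionable message covers the "clear error when memory still insufficient" criterion.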


@@ -1,34 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-18
title: Improve Checkpoint Resumption Logic
status: completed
company_id: FrenoCorp
objective: Make checkpoint system more robust for long-running jobs
context: |
- Current checkpoint saves state at stage boundaries
- Need to handle partial segment generation gracefully
- Should resume from exact point of failure
issue_type: bug
priority: medium
assignee: Hermes
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- Checkpoints save segment-level progress
- Resume from any point without reprocessing
- Corrupted checkpoints detected and handled
acceptance_criteria:
- Can resume mid-generation after crash
- Checkpoint validation on load
- Clear error if checkpoint is corrupted
notes:
- Save segment indices in checkpoint
- Validate checkpoint integrity before resume
links:
checkpoint_code: /home/mike/code/AudiobookPipeline/src/
---
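The "validate checkpoint integrity before resume" note boils down to storing a digest alongside the serialized state. A self-contained sketch (not the pipeline's actual checkpoint format):

```python
import hashlib
import json

def save_checkpoint(state):
    """Serialize state with an embedded SHA-256 digest so corruption can
    be detected on resume."""
    payload = json.dumps(state, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    return json.dumps({"payload": payload, "sha256": digest})

def load_checkpoint(raw):
    """Validate integrity before resuming; raise ValueError on corruption."""
    try:
        wrapper = json.loads(raw)
        payload = wrapper["payload"]
        if hashlib.sha256(payload.encode()).hexdigest() != wrapper["sha256"]:
            raise ValueError("checkpoint checksum mismatch")
        return json.loads(payload)
    except (KeyError, json.JSONDecodeError) as exc:
        raise ValueError(f"corrupted checkpoint: {exc}") from exc
```

Storing completed segment indices in `state` is what allows resuming mid-generation without reprocessing.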


@@ -1,52 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-19
title: Create Docker Container for CLI Tool
status: completed
completed_on: 2026-03-14
actual_outcome: Created Dockerfile for AudiobookPipeline CLI tool; image builds successfully and CLI is fully functional
company_id: FrenoCorp
objective: Package AudiobookPipeline CLI in Docker image for easy deployment
context: |
- GPU worker Dockerfile exists but CLI tool needs its own image
- Should include all dependencies and be ready to run
issue_type: feature
priority: medium
assignee: Hermes
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- Docker image with CLI tool and all dependencies
- Users can run `docker run audiobookpipeline input.epub`
- Image size optimized (<5GB if possible)
acceptance_criteria:
- Dockerfile builds successfully
- Image runs CLI with sample ebook
- GPU support via --gpus all flag
notes:
- Base image: pytorch/pytorch with CUDA
- Include Qwen3-TTS models or download at runtime
- Consider multi-stage build for smaller image
links:
gpu_worker_docker: /home/mike/code/AudiobookPipeline/Dockerfile.gpu-worker
cli_code: /home/mike/code/AudiobookPipeline/cli.py
dockerfile: /home/mike/code/AudiobookPipeline/Dockerfile
review_notes: |
Code review completed 2026-03-14 by Code Reviewer:
- Found solid implementation of Dockerfile for CLI tool
- Proper use of pytorch/pytorch base image with CUDA support
- All required dependencies installed from requirements.txt and gpu_worker_requirements.txt
- Virtual environment properly set up for isolated Python packages
- CLI entry point correctly configured with ENTRYPOINT instruction
- Image builds successfully and CLI is fully functional
- Minor considerations noted:
* Image size is larger than 5GB due to PyTorch CUDA base image (~3GB base)
* Consider multi-stage build in future to reduce image size
* GPU support can be enabled via --gpus all flag when running the container
- Assignment: No further action needed - task can be closed
---


@@ -1,34 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-20
title: Add EPUB3 and MOBI Support
status: todo
company_id: FrenoCorp
objective: Expand format support beyond basic EPUB2
context: |
- Current parser handles EPUB2 well
- EPUB3 has additional features (math, audio references)
- MOBI is still widely used for Kindle books
issue_type: enhancement
priority: low
assignee: Hermes
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- EPUB3 files parse correctly
- MOBI files can be converted to EPUB then processed
- Clear error messages for unsupported formats
acceptance_criteria:
- EPUB3 with math formulas parses correctly
- MOBI conversion works via ebooklib or similar
- Test suite includes EPUB3 and MOBI samples
notes:
- Use ebooklib for EPUB handling
- Calibre command-line tool for MOBI conversion
links:
parser_code: /home/mike/code/AudiobookPipeline/src/parsers/
---


@@ -1,34 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-21
title: Add FLAC and WAV Output Options
status: todo
company_id: FrenoCorp
objective: Support lossless audio formats in addition to MP3
context: |
- Current output is MP3 only (LAME encoder)
- Audiophiles prefer FLAC for archival
- WAV for editing workflows
issue_type: enhancement
priority: low
assignee: Atlas
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- CLI supports --format flac and --format wav options
- Output quality matches input TTS quality
- File sizes appropriately larger for lossless formats
acceptance_criteria:
- FLAC output at 16-bit/48kHz works
- WAV output works without compression artifacts
- Format selection via CLI flag
notes:
- Use pydub or soundfile for FLAC/WAV encoding
- Default should remain MP3 for smaller files
links:
assembly_code: /home/mike/code/AudiobookPipeline/src/assembly/
---


@@ -1,34 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-22
title: Expand Test Suite to 90%+ Coverage
status: todo
company_id: FrenoCorp
objective: Achieve comprehensive test coverage across all pipeline stages
context: |
- Current test suite has 669 tests passing
- Need to cover edge cases and error handling
- Integration tests for full pipeline needed
issue_type: task
priority: medium
assignee: Atlas
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- 90%+ code coverage across all modules
- Edge cases tested (empty books, special characters, etc.)
- Integration tests verify end-to-end pipeline
acceptance_criteria:
- pytest-cov shows 90%+ coverage
- All edge cases have test cases
- CI pipeline runs full test suite
notes:
- Use pytest-cov for coverage reporting
- Focus on parsers, segmentation, and assembly stages
links:
tests_dir: /home/mike/code/AudiobookPipeline/tests/
---


@@ -1,35 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-23
title: Set Up CI/CD Pipeline with GitHub Actions
status: todo
company_id: FrenoCorp
objective: Automate testing and deployment with GitHub Actions
context: |
- No CI/CD pipeline currently exists
- Need automated testing on PRs
- Deployment automation for releases
issue_type: feature
priority: medium
assignee: Atlas
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- Tests run automatically on push/PR
- Docker images built on tag
- PyPI package published on release
acceptance_criteria:
- GitHub Actions workflow exists
- Tests run on every PR
- Automated Docker build on main branch
notes:
- Create .github/workflows/ci.yml
- Use actions/setup-python and actions/setup-node
- Consider GitHub Packages for Docker registry
links:
github_dir: /home/mike/code/AudiobookPipeline/.github/
---
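A starting point for the `.github/workflows/ci.yml` file named in the notes might look like the following. This is an untested illustration: action versions, Python version, and the image name are assumptions, not project decisions:

```yaml
# .github/workflows/ci.yml -- illustrative starting point only
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt pytest pytest-cov
      - run: pytest --cov=src

  docker:
    if: github.ref == 'refs/heads/main'
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t audiobookpipeline .
```

The `needs: test` dependency gives the "automated Docker build on main branch" behavior without building images for failing commits.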


@@ -1,34 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-24
title: Add Performance Benchmarking Suite
status: todo
company_id: FrenoCorp
objective: Measure and track pipeline performance metrics
context: |
- Need baseline metrics for generation speed
- Track segment processing time, memory usage
- Compare different models and settings
issue_type: feature
priority: low
assignee: Hermes
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- Benchmark suite runs on sample books
- Reports generation time per minute of audio
- Memory usage tracking
acceptance_criteria:
- Benchmarks run automatically with sample data
- Results logged for comparison
- Configurable benchmark parameters
notes:
- Use pytest-benchmark or similar
- Track wall time and CPU/GPU utilization
links:
benchmarks_dir: /home/mike/code/AudiobookPipeline/tests/benchmarks/
---


@@ -1,34 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-25
title: Improve Documentation and Examples
status: todo
company_id: FrenoCorp
objective: Create comprehensive documentation for users and developers
context: |
- README.md exists but needs expansion
- Need user guide, API docs, troubleshooting
- Code examples for common use cases
issue_type: task
priority: medium
assignee: Hermes
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- User guide with step-by-step instructions
- API documentation for web interface
- Troubleshooting section for common issues
acceptance_criteria:
- README.md has installation, usage, examples
- CONTRIBUTING.md for developers
- FAQ section addresses common questions
notes:
- Use MkDocs or similar for docs site
- Include screenshots and videos
links:
readme: /home/mike/code/AudiobookPipeline/README.md
---


@@ -1,34 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-26
title: Add Comprehensive CLI Help and --help Text
status: todo
company_id: FrenoCorp
objective: Improve CLI usability with detailed help text
context: |
- Click-based CLI needs better documentation
- Each command should have clear examples
- Config options should be well explained
issue_type: enhancement
priority: low
assignee: Hermes
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- `--help` shows all options with descriptions
- Examples for common use cases
- Config file format documented in help
acceptance_criteria:
- All CLI commands have detailed help
- Examples included for complex options
- Exit codes documented
notes:
- Use Click's help system effectively
- Include exit code documentation
links:
cli_code: /home/mike/code/AudiobookPipeline/cli.py
---


@@ -1,34 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-27
title: Improve Error Messages and Logging
status: todo
company_id: FrenoCorp
objective: Make errors clear and actionable for users
context: |
- Current errors may be cryptic (e.g., tensor errors)
- Need user-friendly messages with suggested fixes
- Logging should be configurable (debug, info, warning)
issue_type: enhancement
priority: medium
assignee: Atlas
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- Errors explain what went wrong and how to fix
- Logging levels configurable via CLI or config
- Stack traces only in debug mode
acceptance_criteria:
- Meta tensor error has clear explanation
- Missing model files show helpful message
- Log level can be set via --verbose flag
notes:
- Use Python logging module effectively
- Add error codes for programmatic handling
links:
tts_model: /home/mike/code/AudiobookPipeline/src/generation/tts_model.py
---
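The "--verbose sets log level" criterion maps repeated flags onto the standard logging levels. A sketch using stdlib `argparse` to stay self-contained (the CLI is actually Click-based, whose `count=True` option behaves the same way):

```python
import argparse
import logging

def configure_logging(verbosity):
    """Map repeated --verbose flags to log levels:
    none -> WARNING, -v -> INFO, -vv and beyond -> DEBUG."""
    level = {0: logging.WARNING, 1: logging.INFO}.get(verbosity, logging.DEBUG)
    # force=True replaces any existing handlers (Python 3.8+)
    logging.basicConfig(level=level, force=True,
                        format="%(levelname)s %(name)s: %(message)s")

parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="count", default=0,
                    help="-v for info, -vv for debug (full stack traces)")
```

Gating stack traces behind the DEBUG level keeps default output readable while preserving detail for bug reports.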


@@ -1,34 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-28
title: Optimize Generation Speed for Long Books
status: todo
company_id: FrenoCorp
objective: Reduce generation time for books with many segments
context: |
- Current generation is sequential and slow
- Can optimize model inference and post-processing
- Batch processing improvements needed
issue_type: enhancement
priority: medium
assignee: Atlas
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- Generation time under 2x real-time for 1.7B model
- Efficient memory usage during long runs
- Configurable quality/speed tradeoffs
acceptance_criteria:
- Benchmark shows <2x real-time generation
- Memory stays stable during long books
- Speed/quality options available
notes:
- Profile generation pipeline to find bottlenecks
- Consider model quantization for speed
links:
tts_model: /home/mike/code/AudiobookPipeline/src/generation/tts_model.py
---


@@ -1,34 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-29
title: Parallelize Segment Generation
status: todo
company_id: FrenoCorp
objective: Generate multiple segments in parallel when possible
context: |
- Current generation processes segments sequentially
- GPU can handle multiple inference requests
- Need to manage concurrency and memory carefully
issue_type: enhancement
priority: medium
assignee: Atlas
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- Multiple segments generated concurrently
- Memory usage controlled via batch size
- Speedup proportional to GPU capability
acceptance_criteria:
- Parallel generation mode available
- Configurable max concurrent segments
- No OOM errors with reasonable batch sizes
notes:
- Use torch.inference_mode() for efficiency
- Monitor GPU memory usage
links:
batch_processor: /home/mike/code/AudiobookPipeline/src/generation/batch_processor.py
---


@@ -1,34 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-30
title: Improve Audio Quality and Consistency
status: todo
company_id: FrenoCorp
objective: Enhance audio output quality and reduce artifacts
context: |
- TTS models can produce inconsistent quality
- Need post-processing for volume normalization
- Silence detection and removal for better UX
issue_type: enhancement
priority: medium
assignee: Hermes
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- Audio normalized to -23 LUFS (podcast standard)
- Silence removal at chapter boundaries
- Consistent volume across segments
acceptance_criteria:
- Output meets -23 LUFS target
- No clicks or pops at segment boundaries
- Configurable silence threshold
notes:
- Use pyloudnorm for LUFS measurement
- Apply gain normalization across all segments
links:
assembly_code: /home/mike/code/AudiobookPipeline/src/assembly/
---
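Once integrated loudness is measured (that is what pyloudnorm provides), normalizing to -23 LUFS is plain dB arithmetic: the needed gain in dB is the target minus the measurement, converted to a linear factor. A sketch of just that arithmetic:

```python
def normalization_gain(measured_lufs, target_lufs=-23.0):
    """Linear gain factor that moves a segment from its measured
    integrated loudness to the target. The measurement itself would
    come from a loudness meter such as pyloudnorm."""
    gain_db = target_lufs - measured_lufs  # dB to add (negative = attenuate)
    return 10 ** (gain_db / 20.0)          # convert dB to a linear factor

# e.g. a segment measured at -20 LUFS is 3 dB too loud:
# normalization_gain(-20.0) ≈ 0.708, attenuating samples toward -23 LUFS
```

Multiplying every sample of a segment by this factor gives the consistent cross-segment volume the acceptance criteria ask for.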


@@ -1,56 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-31
title: Implement File Upload with S3/MinIO Storage
status: done
company_id: FrenoCorp
objective: Add actual file upload support to web platform with S3/MinIO storage integration
context: |
  - Dashboard currently accepts file selection but only sends metadata
  - Need to implement actual file upload with multipart form data
  - S3/MinIO integration for production, graceful fallback for local development
issue_type: feature
priority: high
assignee: Atlas
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: [FRE-11, FRE-12]
expected_outcome: |
- Files uploaded to S3/MinIO storage (or in-memory fallback)
- Job records store file URLs instead of just IDs
- Workers can access uploaded files via URL
acceptance_criteria:
- File upload works with multipart form data
- S3 integration when credentials configured
- Graceful fallback when S3 not available
- 100MB file size limit enforced
notes:
- Added @aws-sdk/client-s3 and @aws-sdk/lib-storage packages
- Created storage.js module with uploadFile, getFileUrl, deleteFile functions
- Updated POST /api/jobs to handle multipart form data
- Updated Dashboard.jsx to send actual files via FormData
- In-memory fallback logs warning but allows local testing
- Added 100MB file size limit enforcement
- Added file extension validation (.epub, .pdf, .mobi)
links:
web_codebase: /home/mike/code/AudiobookPipeline/web/
review_notes: |
Code review completed 2026-03-14 by Code Reviewer:
- Found solid foundation with proper abstraction of S3/minio storage operations
- Good graceful fallback to mock URLs when S3 is not configured (essential for local development)
- Proper error handling with custom error types
- Support for multipart uploads for large files
- Pre-signed URL generation for client-side direct uploads
- File metadata storage in database
- Areas for improvement noted:
* When S3 is not configured, returning mock URLs without indication might hide configuration issues in production
* URL construction assumes endpoint includes protocol (http/https) - should validate or handle missing protocol
* Consider adding timeout configurations for S3 operations
* Could benefit from adding file validation (size, type) before attempting upload
* Missing cleanup of temporary resources in error cases for multipart uploads
- Assignment: Return to original engineer (Atlas) for considerations
---
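The graceful-fallback pattern the notes and review describe (S3/minio when credentials exist, in-memory otherwise, with size and extension checks) can be sketched as below. The actual module is storage.js built on @aws-sdk/client-s3; this Python version is a hypothetical stand-in, and every name in it is illustrative rather than taken from the real code.

```python
import os
import uuid

class Storage:
    # Illustrative sketch, not the real storage.js module: use S3/minio
    # when configured, else an in-memory dict so local dev keeps working.
    MAX_BYTES = 100 * 1024 * 1024          # 100MB limit from acceptance criteria
    ALLOWED = {".epub", ".pdf", ".mobi"}   # extension allow-list from the notes

    def __init__(self, endpoint=None, access_key=None, secret_key=None):
        self.configured = bool(endpoint and access_key and secret_key)
        self._memory = {}
        if not self.configured:
            # The review flags that silently returning mock URLs can hide
            # misconfiguration in production, hence a loud warning here.
            print("warning: S3 not configured; using in-memory fallback")

    def upload_file(self, filename, data):
        # Validate before uploading, as the review recommends.
        ext = os.path.splitext(filename)[1].lower()
        if ext not in self.ALLOWED:
            raise ValueError(f"unsupported file type: {ext}")
        if len(data) > self.MAX_BYTES:
            raise ValueError("file exceeds 100MB limit")
        key = f"{uuid.uuid4().hex}{ext}"
        if self.configured:
            # Real implementation: multipart upload via the S3 client here.
            return f"s3://audiobook-uploads/{key}"
        self._memory[key] = data
        return f"memory://{key}"
```

Returning a distinct `memory://` scheme for fallback URLs addresses the review's concern that mock URLs are indistinguishable from real ones.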

View File

@@ -1,49 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-32
title: Assign Firesoft Code Quality Issues to Engineering Team
status: done
completed_on: 2026-03-08
actual_outcome: Created 20 task files (FRE-11 through FRE-30) for engineering team; all tasks assigned and ready for work
company_id: FrenoCorp
objective: Distribute 20 unassigned Firesoft code quality issues across engineering team
context: |
- CTO analyzed 20 issues across 6 phases (FRE-11 through FRE-30)
- Issues span web platform, CLI enhancements, and infrastructure
- Team: Atlas (Founding Engineer), Hermes (Junior Engineer), Pan (Intern)
- Permission issue: CTO cannot directly assign tasks via API
- CEO manually creating task files to unblock progress
issue_type: task
priority: medium
assignee: null
parent_task: null
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
- All 20 Firesoft issues assigned to appropriate team members
- Engineering team begins work on code quality improvements
acceptance_criteria:
- Task files created for all 20 issues
- Each issue has clear assignee and deadline
- Team begins execution within 24 hours
notes:
- CTO's analysis identified 6 phases of work
- Phase 1 (FRE-11 to FRE-15): Web platform foundation
- Phase 2 (FRE-16 to FRE-18): CLI enhancements
- Phase 3 (FRE-19 to FRE-21): Infrastructure improvements
- Phase 4 (FRE-22 to FRE-24): Testing and quality
- Phase 5 (FRE-25 to FRE-27): Documentation and UX
- Phase 6 (FRE-28 to FRE-30): Performance optimization
links:
cto_analysis: /home/mike/code/FrenoCorp/agents/cto/memory/2026-03-08.md
review_notes: |
Code review completed 2026-03-14 by Code Reviewer:
- This task involved creating task files for code quality issues (FRE-11 through FRE-30)
- No actual code was written or modified as part of this task
- No code issues to review since this was a task creation activity
- Assignment: No further code review needed - task can be passed to Security Reviewer
---

View File

@@ -1,59 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-5
title: Hire Founding Engineer
status: done
completed_on: 2026-03-08
actual_outcome: Atlas (agent 14268c99-2acb-4683-928b-94d1bc8224e4) hired and onboarded; MVP development underway
company_id: FrenoCorp
priority: high
description: |
Hire and onboard the Founding Engineer to begin MVP development sprint.
Role responsibilities:
- Implement MVP features (single-narrator generation, epub input, MP3 output)
- Set up FastAPI web interface
- Build Redis queue infrastructure
- Create test suite and CI/CD pipeline
Requirements:
- 3+ years Python experience
- ML/Audio processing background preferred
- Experience with PyTorch or similar frameworks
- Startup experience (wearing multiple hats)
Compensation:
- Equity: 2-5% (vesting over 4 years)
- Salary: $80k-120k (depending on location/experience)
Timeline:
- Post job: Mar 8
- First interviews: Mar 10-12
- Technical assessment: Mar 13-15
- Offer extended: Mar 17
- Start date: Mar 25
MVP deadline: Apr 4 (4 weeks from start)
approval_request: |
Requesting Board approval to:
1. Post Founding Engineer position
2. Allocate budget for recruitment and compensation
3. Begin technical assessment process
budget_impact: |
- Salary: ~$100k/year prorated
- Equity: 2-5% (standard founder-level grant)
- Recruitment: ~$5k (job boards, agencies)
urgency: Critical - MVP development cannot begin without engineering lead.
review_notes: |
Code review completed 2026-03-14 by Code Reviewer:
- This task involves hiring and personnel management (FRE-5: Hire Founding Engineer)
- No code changes were made as part of this task
- No code issues to review
- Assignment: No code issues found - assigning to Security Reviewer per code review pipeline
---

View File

@@ -1,56 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-9
title: Fix TTS Generation Bug in AudiobookPipeline
status: done
company_id: FrenoCorp
objective: Resolve CUDA/meta tensor error in TTS generation stage to enable working pipeline
context: |
- Product: AudiobookPipeline using Qwen3-TTS 1.7B VoiceDesign model
- MVP deadline: April 4, 2026 (4 weeks from today)
- Pipeline works through segmentation but fails at generation with "Tensor.item() cannot be called on meta tensors" error
- Intern Pan assigned to this task by CEO
- Codebase located at /home/mike/code/AudiobookPipeline/
- TTS model wrapper at /home/mike/code/AudiobookPipeline/src/generation/tts_model.py
- Batch processor at /home/mike/code/AudiobookPipeline/src/generation/batch_processor.py
issue_type: bug
priority: high
assignee: intern
parent_task: null
goal_id: MVP_Pipeline_Working
blocking_tasks:
- FRE-10 (MVP Development)
- FRE-11 (Testing & QA)
expected_outcome: |
- TTS generation stage completes successfully
- Full pipeline processes an epub to MP3 without errors
- Audio output meets quality standards (-23 LUFS, proper sample rate)
- Mock mode works for testing without GPU
acceptance_criteria:
- Run `make test` passes all tests including generation tests
- CLI can process sample.epub and produce output.mp3
- No CUDA/meta tensor errors in logs
- Generation time under 2x baseline (with mock) or reasonable with real model
notes:
- Root cause: device_map="auto" resulted in meta tensors when GPU unavailable
- Fix added GPU detection with CPU fallback in tts_model.py:125-146
- Added validation to reject models loaded on meta device
- Fixed test infrastructure: PYTHONPATH in Makefile, renamed duplicate test file
- All 669 tests now pass
links:
strategic_plan: /home/mike/code/FrenoCorp/STRATEGIC_PLAN.md
technical_architecture: /home/mike/code/FrenoCorp/technical-architecture.md
codebase: /home/mike/code/AudiobookPipeline/
review_notes: |
Code review completed 2026-03-14 by Code Reviewer:
- Found proper resolution of CUDA/meta tensor error in TTS generation
- Root cause correctly identified: device_map="auto" resulted in meta tensors when GPU unavailable
- Fix properly implemented with GPU detection and CPU fallback
- Added validation to reject models loaded on meta device with clear error message
- Solution follows defensive programming principles
- Positive observations: Correct root cause analysis, appropriate fallback strategy, clear error messaging
- Assignment: No further action needed - task can be closed
---

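The fix the FRE-9 notes describe (explicit GPU detection with CPU fallback, plus validation against meta-device models) can be sketched as below. The real change lives in tts_model.py:125-146; this sketch takes `cuda_available` as a plain argument rather than calling torch, so the function names and signatures are assumptions, not the actual API.

```python
import warnings

def select_device(cuda_available, allow_cpu_fallback=True):
    # Replace device_map="auto" with explicit detection: "auto" can leave
    # weights on the meta device when no GPU is present, which triggers
    # the "Tensor.item() cannot be called on meta tensors" error.
    if cuda_available:
        return "cuda"
    if allow_cpu_fallback:
        warnings.warn("No GPU detected; falling back to CPU (slower generation).")
        return "cpu"
    raise RuntimeError("No GPU available and CPU fallback is disabled")

def validate_device(device):
    # Reject models that ended up on the meta device with a clear message,
    # matching "added validation to reject models loaded on meta device".
    if device == "meta":
        raise RuntimeError(
            "Model loaded on meta device; weights were never materialized. "
            "Load with an explicit device instead of device_map='auto'."
        )
    return device
```

Failing fast with a named error here is the defensive-programming point the review praises: the meta-tensor symptom otherwise only surfaces deep inside generation.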
View File

@@ -1,30 +0,0 @@
---
date: 2026-03-08
day_of_week: Sunday
task_id: TASK-001
title: Product Alignment with CTO
status: in_progress
company_id: FrenoCorp
objective: Align on product vision and MVP scope for AudiobookPipeline
context: |
- Team fully hired (CEO, CTO, Engineer, Junior Engineer)
- Existing codebase: AudiobookPipeline using Qwen3-TTS
- Target market: Indie authors self-publishing on Audible/Amazon
key_decisions: |
- Product: Ship AudiobookPipeline as-is
- Market: Indie author segment (underserved, willing to pay)
- Pricing: $39/month subscription (10 hours audio)
- MVP deadline: 4 weeks
mvp_scope: |
- Single-narrator audiobook generation
- Basic character voice switching
- epub input format
- MP3 output at -23 LUFS
- CLI interface
next_actions:
- CTO: Create technical architecture document
- CEO: Create FRE task for Founding Engineer hire
- Engineer: Begin MVP development sprint
links:
strategic_plan: /home/mike/code/FrenoCorp/STRATEGIC_PLAN.md
product_alignment: /home/mike/code/FrenoCorp/product_alignment.md