---
date: 2026-03-08
day_of_week: Sunday
task_id: FRE-19
title: Create Docker Container for CLI Tool
status: completed
completed_on: 2026-03-14
actual_outcome: Created Dockerfile for AudiobookPipeline CLI tool; image builds successfully and CLI is fully functional
company_id: FrenoCorp
objective: Package AudiobookPipeline CLI in Docker image for easy deployment
context: |
  - GPU worker Dockerfile exists, but the CLI tool needs its own image
  - Should include all dependencies and be ready to run
issue_type: feature
priority: medium
assignee: Hermes
parent_task: FRE-32
goal_id: MVP_Pipeline_Working
blocking_tasks: []
expected_outcome: |
  - Docker image with CLI tool and all dependencies
  - Users can run `docker run audiobookpipeline input.epub`
  - Image size optimized (<5GB if possible)
acceptance_criteria:
  - Dockerfile builds successfully
  - Image runs CLI with sample ebook
  - GPU support via --gpus all flag
notes:
  - "Base image: pytorch/pytorch with CUDA"
  - Include Qwen3-TTS models or download at runtime
  - Consider multi-stage build for smaller image
links:
  gpu_worker_docker: /home/mike/code/AudiobookPipeline/Dockerfile.gpu-worker
  cli_code: /home/mike/code/AudiobookPipeline/cli.py
  dockerfile: /home/mike/code/AudiobookPipeline/Dockerfile
review_notes: |
  Code review completed 2026-03-14 by Code Reviewer:
  - Solid implementation of the Dockerfile for the CLI tool
  - Proper use of pytorch/pytorch base image with CUDA support
  - All required dependencies installed from requirements.txt and gpu_worker_requirements.txt
  - Virtual environment properly set up for isolated Python packages
  - CLI entry point correctly configured with the ENTRYPOINT instruction
  - Image builds successfully and the CLI is fully functional
  - Minor considerations:
    * Image size exceeds the 5GB target because of the PyTorch CUDA base image (~3GB on its own)
    * Consider a multi-stage build in the future to reduce image size
    * GPU support can be enabled via the --gpus all flag when running the container
  - Assignment: no further action needed; task can be closed
---
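
A minimal sketch of what the Dockerfile described above could look like, based on the notes and review (pytorch/pytorch CUDA base, both requirements files, ENTRYPOINT on cli.py). The base-image tag and file layout are assumptions for illustration, not the reviewed implementation:

```dockerfile
# Sketch only: base tag, file names, and entrypoint are assumptions
FROM pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime

WORKDIR /app

# Install CLI and GPU-worker dependencies in one layer
COPY requirements.txt gpu_worker_requirements.txt ./
RUN pip install --no-cache-dir \
    -r requirements.txt \
    -r gpu_worker_requirements.txt

# Copy the CLI tool itself
COPY cli.py ./

# Run the CLI on container start; arguments (e.g. input.epub) pass through
ENTRYPOINT ["python", "cli.py"]
```

With an image built as `audiobookpipeline`, the acceptance criteria map to `docker run --gpus all audiobookpipeline input.epub`. A multi-stage build (as suggested in the notes) would move the pip install into a builder stage and copy only the site-packages into a slimmer runtime stage, though the CUDA base image dominates the size either way.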