Applied Module 12 · AI-Powered Music Production Workflows

The Visualizer Content Pipeline

What you'll learn

~40 min
  • Build a Node.js CLI that batch-generates visualizer content for multiple platforms
  • Understand headless browser rendering with Puppeteer for automated video capture
  • Integrate cover art, track metadata, and visualizer themes into a content package
  • Produce platform-ready clips for Spotify Canvas, Instagram Reels, YouTube Shorts, and TikTok

What you’re building

You built an audio-reactive visualizer in Lesson 7. It looks great in a browser. But now you need that visualizer as actual video files — a looping 8-second clip for Spotify Canvas, a 15-second vertical for Instagram Reels, a 60-second Short for YouTube, and a full-length 16:9 background for your YouTube uploads. Four platforms, two aspect ratios, four durations. Normally you’d screen-record each one manually, crop it, trim it, export it. Four times. For every single release.

You’re going to build a CLI that does all of that in one command. Point it at a release config, and it renders every clip headlessly — no screen recording, no manual cropping, no export dialogs. One command, four platform-ready files, a manifest listing everything it produced.

This is the capstone. Every lesson in this module feeds into this one.

💬 This pipeline replaces $10-30/month in visualizer subscriptions

Rotor Videos charges $5-15 per visualizer video. Specterr runs $10-30/month. Vizzy, Renderforest, and similar services charge monthly subscriptions for what is fundamentally the same thing: rendering an animation over your audio at specific dimensions. Your pipeline does this for free, uses YOUR visualizer design (not a generic template), and runs locally. Over a year of monthly releases, that’s $120-360 you keep. Over a catalog? It adds up fast.


How the pieces connect

This lesson pulls from almost everything you’ve built:

What this pipeline uses, by lesson:

  • L1: Cover art from release-assets/ — composited into the visualizer
  • L2: Track metadata from campaign data — title, artist, album baked into clips
  • L6: Command center integration — visualizer render status on your dashboard
  • L7: The visualizer HTML itself — loaded headlessly by Puppeteer

The pipeline takes a release config file (YAML), loads your L7 visualizer in a headless browser, captures it at multiple resolutions and durations, and outputs platform-ready video files. That’s it. Config in, videos out.

What is headless browser rendering?

Puppeteer opens Chrome without showing it on your screen. No window, no UI — just Chrome running invisibly in the background. It loads your visualizer HTML, plays the audio, and records what the browser is rendering into a video file. Think of it as automated screen recording. Same result as if you pointed OBS at your browser, but no human needs to sit there pressing buttons.
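Under the hood, the screenshot-loop capture the prompt falls back to is mostly timing math: at 30fps, an 8-second clip is 240 screenshots taken about 33ms apart, named so ffmpeg's frame_%04d.png pattern can pick them up. A minimal sketch of that bookkeeping (the helper name is ours, not part of the generated tool):

```javascript
// Plan a screenshot-loop capture: frame count, per-frame delay, and
// zero-padded filenames that match ffmpeg's frame_%04d.png pattern.
function planCapture(durationSeconds, fps = 30) {
  const frameCount = Math.round(durationSeconds * fps);
  const frameDelayMs = 1000 / fps;
  const filenames = Array.from({ length: frameCount }, (_, i) =>
    `frame_${String(i + 1).padStart(4, "0")}.png`);
  return { frameCount, frameDelayMs, filenames };
}

const plan = planCapture(8); // Spotify Canvas: 8 seconds at 30fps
console.log(plan.frameCount);     // 240
console.log(plan.filenames[0]);   // frame_0001.png
console.log(plan.filenames[239]); // frame_0240.png
```

In the real renderer, each tick of this plan is one page.screenshot() call; the delay keeps the captured frames roughly aligned with wall-clock audio playback.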


The release config

Before the prompt, here’s what a release config looks like. This is the single file that drives the entire pipeline:

track:
  title: "JADED"
  artist: "moodmixformat"
  album: "MIXTAPE FORMAT"
  audio: "./audio/jaded.mp3"
  cover: "./assets/jaded-cover.jpg"
visualizer:
  theme: "tape-deck"
  color: "#f97316"
outputs:
  - platform: spotify-canvas
    width: 720
    height: 1280
    duration: 8
    loop: true
  - platform: instagram-reel
    width: 1080
    height: 1920
    duration: 15
  - platform: youtube-short
    width: 1080
    height: 1920
    duration: 60
  - platform: youtube-background
    width: 1920
    height: 1080
    duration: 0 # full track length

Each output entry defines a platform, dimensions, and duration. Duration 0 means full track length. The loop: true flag on Spotify Canvas tells the renderer to pick the best 8-second loop point (or just grab the first 8 seconds if peak detection isn’t available).
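Each output entry maps directly to an output filename ({platform}_{width}x{height}_{duration}s). A sketch of that mapping, including the duration: 0 case, which can only be resolved once the real track length is known (the function name is illustrative, not the tool's actual API):

```javascript
// Build the output filename for one config entry. A duration of 0 means
// "full track", so the actual track length in seconds must be supplied.
function outputFilename(output, trackLengthSeconds, ext = "webm") {
  const duration = output.duration === 0
    ? Math.round(trackLengthSeconds)
    : output.duration;
  return `${output.platform}_${output.width}x${output.height}_${duration}s.${ext}`;
}

const canvas = { platform: "spotify-canvas", width: 720, height: 1280, duration: 8 };
console.log(outputFilename(canvas, 238));
// spotify-canvas_720x1280_8s.webm

const bg = { platform: "youtube-background", width: 1920, height: 1080, duration: 0 };
console.log(outputFilename(bg, 238, "mp4"));
// youtube-background_1920x1080_238s.mp4
```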


The prompt

Start your AI CLI tool and paste this prompt:

Build a Node.js CLI tool that batch-renders visualizer videos for multiple
platforms from a single audio file and release config. The tool uses Puppeteer
to headlessly capture a browser-based audio visualizer at different resolutions
and durations.
PROJECT STRUCTURE:
content-pipeline/
  package.json
  src/
    cli.js            (Commander-based CLI entry point)
    renderer.js       (Puppeteer headless rendering logic)
    config-loader.js  (Parse and validate release.yaml config)
    ffmpeg-wrapper.js (Optional format conversion WebM -> MP4)
    manifest.js       (Output manifest generator)
  templates/
    visualizer.html   (Self-contained audio visualizer page)
  release.yaml        (Sample release config)
  README.md
REQUIREMENTS:
1. CLI INTERFACE (src/cli.js)
- Usage: node src/cli.js [options] <config-file>
- Default config file: release.yaml in the current directory
- --output or -o: output directory (default: ./output)
- --preview: open the visualizer in a visible browser window instead of
rendering headlessly (for testing/tweaking)
- --draft: render at half resolution for faster preview renders
- --format: output format, "webm" or "mp4" (default: "webm", mp4 requires
ffmpeg)
- --verbose or -v: print detailed progress including frame counts
- Use Commander npm package for argument parsing
2. CONFIG LOADER (src/config-loader.js)
- Parse YAML config file using js-yaml
- Validate required fields: track.title, track.artist, track.audio,
and at least one output entry
- Each output entry requires: platform, width, height, duration
- Resolve file paths relative to the config file location
- If track.cover is specified, verify the file exists
- If track.audio doesn't exist, exit with a clear error message
- Return a normalized config object
3. VISUALIZER TEMPLATE (templates/visualizer.html)
- A self-contained HTML page with an audio-reactive visualizer
- Uses Web Audio API (AnalyserNode) to drive visual animation
- Visual style: dark background (#09090b), audio waveform/frequency bars
rendered on a canvas element
- Accepts query parameters to customize:
- audioSrc: path to audio file
- coverArt: path to cover image (displayed centered behind the visualizer)
- title: track title (rendered as text overlay, bottom area)
- artist: artist name (rendered below title)
- theme: visual theme name (default: "tape-deck")
- color: accent color for visualizer elements (default: #f97316)
- The "tape-deck" theme should include:
- Subtle VHS/tape noise texture overlay (CSS-generated, not an image file)
- Slightly rounded frequency bars with the accent color
- Track title in a monospace font (Courier New or similar)
- A thin horizontal line above the title area (like a tape deck's
head position indicator)
- The visualizer canvas must fill the entire viewport (no margins, no
scrollbars) so Puppeteer captures it cleanly
- Audio must autoplay when the page loads (Puppeteer allows this in
headless mode)
- Include a subtle fade-in on load (0.5s opacity transition)
4. RENDERER (src/renderer.js)
- For each output in the config:
a. Launch Puppeteer with a viewport matching the output dimensions
b. Serve the visualizer.html locally (use a simple HTTP server like
http-server or express, needed so audio files load correctly)
c. Navigate to the visualizer page with query params for the current
track's metadata
d. Use Puppeteer's page.screencast() or page recording API to capture
the viewport as video
e. Record for the specified duration (or full audio length if duration
is 0)
f. Save the recording to the output directory
g. Filename format: {platform}_{width}x{height}_{duration}s.webm
(e.g., spotify-canvas_720x1280_8s.webm)
- If --preview flag is set, launch with headless: false and don't record
-- just open the visualizer in a visible browser for the user to watch
- If --draft flag is set, halve the width and height for faster rendering
- Print progress for each output: "Rendering spotify-canvas (720x1280,
8s)... done"
- Handle errors gracefully: if one output fails, log the error and
continue with the next output
IMPORTANT NOTE ON VIDEO CAPTURE:
Puppeteer's screencast API may have limitations. As a robust fallback,
use this approach:
- Capture frames as PNG screenshots at 30fps using page.screenshot()
in a loop with appropriate timing
- After capturing all frames, use ffmpeg to encode them into a video
- If ffmpeg is not available, save frames as individual PNGs in a
subdirectory and log instructions for manual encoding
- This frame-capture approach is more reliable across Puppeteer versions
5. FFMPEG WRAPPER (src/ffmpeg-wrapper.js)
- Check if ffmpeg is available on the system (try running "ffmpeg -version")
- If available:
- Convert WebM to MP4 (H.264 + AAC) when --format mp4 is specified
- Encode captured frames into video: ffmpeg -framerate 30 -i
frame_%04d.png -i audio.mp3 -c:v libx264 -pix_fmt yuv420p
-c:a aac output.mp4
- Mux audio into the final video file
- Delete intermediate frame files after successful encoding
- If not available:
- Log a warning: "ffmpeg not found. Output will be image frames only.
Install ffmpeg for video output."
- Keep frame PNGs as output
- Export a function to check ffmpeg availability and a function to
encode frames to video
6. MANIFEST GENERATOR (src/manifest.js)
- After all outputs are rendered, generate a manifest.json in the output
directory listing:
   {
     "track": { "title": "...", "artist": "...", "album": "..." },
     "generatedAt": "ISO timestamp",
     "outputs": [
       {
         "platform": "spotify-canvas",
         "file": "spotify-canvas_720x1280_8s.mp4",
         "width": 720,
         "height": 1280,
         "duration": 8,
         "fileSize": "1.2 MB",
         "format": "mp4"
       },
       ...
     ]
   }
- Include file sizes (human-readable: KB or MB)
- Print a summary table to the terminal after generation
7. SAMPLE RELEASE CONFIG (release.yaml)
- Use the moodmixformat example:
- Track: "JADED" from album "MIXTAPE FORMAT"
- Audio path: ./audio/jaded.mp3 (user provides their own audio file)
- Cover art: ./assets/jaded-cover.jpg (user provides their own)
- Theme: tape-deck, color: #f97316
- All four platform outputs as listed above
DEPENDENCIES: puppeteer, commander, js-yaml, express (for local file serving)
Puppeteer downloads Chrome

The first time you run npm install in this project, Puppeteer will download a bundled version of Chromium. This is roughly 170-300MB depending on your platform. It’s a one-time download — subsequent installs will use the cached version. If you’re on a slow connection, be patient. If it fails, check the Puppeteer troubleshooting section below.


What you get

After your AI CLI tool finishes, you’ll have:

content-pipeline/
  package.json
  release.yaml
  README.md
  src/
    cli.js
    renderer.js
    config-loader.js
    ffmpeg-wrapper.js
    manifest.js
  templates/
    visualizer.html

Set it up

Terminal window
cd content-pipeline
npm install

The npm install will take a minute because Puppeteer downloads Chromium. Let it finish.

Provide your audio and artwork

The pipeline needs two things from you: an audio file and cover art. Create the directories and drop your files in:

Terminal window
mkdir -p audio assets
# Copy your audio file:
cp ~/Music/jaded.mp3 ./audio/jaded.mp3
# Copy your cover art:
cp ~/Design/jaded-cover.jpg ./assets/jaded-cover.jpg

Or edit release.yaml to point to wherever your files already live.

Preview first

Before rendering, check that the visualizer looks right:

Terminal window
node src/cli.js --preview

This opens a browser window with your visualizer running — audio playing, bars reacting, cover art displayed, track title showing. Make sure you like how it looks. Close the browser when you’re done.

Render everything

Terminal window
node src/cli.js --format mp4

You should see output like:

Loading config: release.yaml
Track: JADED by moodmixformat
Cover: ./assets/jaded-cover.jpg
Theme: tape-deck (#f97316)
Outputs: 4 platforms
Starting local server on port 3456...
Rendering spotify-canvas (720x1280, 8s)... done [12s]
Rendering instagram-reel (1080x1920, 15s)... done [28s]
Rendering youtube-short (1080x1920, 60s)... done [94s]
Rendering youtube-background (1920x1080, full track)... done [187s]
Output directory: ./output/
spotify-canvas_720x1280_8s.mp4 1.1 MB
instagram-reel_1080x1920_15s.mp4 3.8 MB
youtube-short_1080x1920_60s.mp4 12.4 MB
youtube-background_1920x1080_238s.mp4 42.1 MB
manifest.json 0.4 KB
4 clips rendered, manifest written.
💡 Use --draft for faster iteration

Full-resolution rendering takes a while — a 60-second clip at 1080x1920 might take 90+ seconds. When you’re tweaking the visualizer theme or testing a new color, use node src/cli.js --draft to render at half resolution. It’s faster and good enough to check if the visual style is right. Save full-res renders for the final export.
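One detail worth checking in the generated --draft code: H.264 with -pix_fmt yuv420p rejects odd widths and heights, so the halved dimensions should snap to even numbers. A defensive sketch, assuming the generated renderer doesn't already handle this (the helper name is ours):

```javascript
// Halve dimensions for draft renders, rounding the half-size up to the
// nearest even number, because libx264 with yuv420p requires even
// widths and heights.
function draftDimensions(width, height) {
  const halveEven = (n) => Math.ceil(n / 4) * 2;
  return { width: halveEven(width), height: halveEven(height) };
}

console.log(draftDimensions(1080, 1920)); // { width: 540, height: 960 }
console.log(draftDimensions(1920, 1080)); // { width: 960, height: 540 }
```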


A worked example: the full pipeline for a single release

Here’s the end-to-end workflow. You just finished mastering “JADED” and you have your cover art ready.

Step 1: Set up the release config

Edit release.yaml with your track details. Point the audio and cover paths to your actual files.

Step 2: Preview the visualizer

Terminal window
node src/cli.js --preview

The browser opens. Your cover art is centered, the tape deck theme is running, frequency bars are reacting to the audio in your signature orange. The track title “JADED” sits at the bottom in monospace. It looks right.

Step 3: Render all platforms

Terminal window
node src/cli.js --format mp4 -o ./jaded-visuals

Go make coffee. The pipeline renders all four clips.

Step 4: Check the outputs

Terminal window
ls -la ./jaded-visuals/
spotify-canvas_720x1280_8s.mp4 1.1 MB
instagram-reel_1080x1920_15s.mp4 3.8 MB
youtube-short_1080x1920_60s.mp4 12.4 MB
youtube-background_1920x1080_238s.mp4 42.1 MB
manifest.json 0.4 KB

Open each file. Check:

  • Does the Spotify Canvas loop cleanly? (It should be 8 seconds or less)
  • Does the Instagram Reel look good at 9:16 vertical?
  • Does the YouTube Short have enough visual interest for 60 seconds?
  • Does the YouTube background look right at 16:9 for use behind your mix?

Step 5: Upload

  • Spotify Canvas: Upload through Spotify for Artists (Canvas tab on any track). Must be under 2MB and 3-8 seconds.
  • Instagram Reel: Upload directly from your phone or through Creator Studio.
  • YouTube Short: Upload as a Short (under 60 seconds, vertical).
  • YouTube background: Use as the visual layer in your RECORDED LOCATIONS episode or as a standalone visualizer video.

One command. Four platforms. Done.

🔍 Platform-specific requirements

Each platform has its own specs, and getting them wrong means rejected uploads or bad quality:

Spotify Canvas

  • Dimensions: 720x1280 (9:16)
  • Duration: 3-8 seconds (must loop)
  • File size: under 2MB (this is strict — Spotify will reject larger files)
  • Format: MP4 (H.264) or JPEG sequence
  • The pipeline’s 8-second render at 720p typically comes in around 1-1.5MB, well within limits

Instagram Reels / TikTok

  • Dimensions: 1080x1920 (9:16) preferred
  • Duration: 15, 30, 60, or 90 seconds (15 or 30 is the sweet spot for music content)
  • File size: up to 4GB (you’ll never hit this)
  • Format: MP4 (H.264 + AAC)
  • Minimum 720p, but 1080p looks noticeably better

YouTube Shorts

  • Dimensions: 1080x1920 (9:16)
  • Duration: up to 60 seconds
  • Format: standard YouTube upload formats (MP4 preferred)
  • Must be vertical — YouTube auto-classifies vertical videos under 60s as Shorts

YouTube (standard video)

  • Dimensions: 1920x1080 (16:9) minimum for HD
  • Duration: no practical limit for music content
  • Format: MP4 (H.264 + AAC)
  • This is your RECORDED LOCATIONS background or standalone visualizer upload

The pipeline handles all of these dimensions and durations automatically from the release config. You define it once and forget about the specs.
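Since Spotify's 2MB Canvas limit is the one spec that actually causes rejected uploads, it's worth having the manifest step flag it. A sketch of that check, using the same human-readable sizes the manifest prints (both function names are ours, for illustration):

```javascript
// Format a byte count the way the manifest does (KB/MB), and flag any
// spotify-canvas output that exceeds Spotify's hard 2MB limit.
const CANVAS_LIMIT_BYTES = 2 * 1024 * 1024;

function humanSize(bytes) {
  return bytes < 1024 * 1024
    ? `${(bytes / 1024).toFixed(1)} KB`
    : `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
}

function checkCanvasSize(output, sizeBytes) {
  if (output.platform !== "spotify-canvas") return null;
  if (sizeBytes <= CANVAS_LIMIT_BYTES) return null;
  return `WARNING: ${output.platform} is ${humanSize(sizeBytes)}, over Spotify's 2MB limit`;
}

console.log(humanSize(1150000)); // 1.1 MB
console.log(checkCanvasSize({ platform: "spotify-canvas" }, 2500000));
// WARNING: spotify-canvas is 2.4 MB, over Spotify's 2MB limit
```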


If something is off

Problem: Visualizer has no audio reaction (bars are flat)
Follow-up prompt: The visualizer bars aren't reacting to the audio. Make sure the Web Audio API's AnalyserNode is connected to the audio source, and that the audio element has crossOrigin set to "anonymous" if served from localhost. Also check that the audio autoplay is working -- Puppeteer should allow autoplay without user interaction.

Problem: Colors don’t match the config
Follow-up prompt: The visualizer is using default blue bars instead of my configured orange (#f97316). Make sure the color query parameter is read from the URL and applied to the canvas drawing context. Check that the fillStyle and strokeStyle use the color variable, not a hardcoded value.

Problem: Cover art doesn’t appear
Follow-up prompt: The cover art image isn't showing in the visualizer. Check that the express static server is serving the assets directory, and that the coverArt query parameter resolves to a valid URL. The image path needs to be relative to the server root or an absolute URL like http://localhost:3456/assets/cover.jpg.

Problem: Rendering produces black frames
Follow-up prompt: The rendered video is all black. This usually means Puppeteer is capturing frames before the page finishes loading. Add a wait step: after navigating to the visualizer URL, wait for the audio to start playing (check audio.currentTime > 0) before beginning frame capture. Also set waitUntil: 'networkidle0' on page.goto().
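These follow-ups matter less if one bad output can't kill the whole batch. The continue-on-error behavior the main prompt asks for boils down to a small wrapper; a sketch with a stubbed render function (renderOne is hypothetical — the real renderer drives Puppeteer):

```javascript
// Render every output, collecting failures instead of aborting the batch.
async function renderAll(outputs, renderOne) {
  const results = [];
  for (const output of outputs) {
    try {
      const file = await renderOne(output);
      results.push({ platform: output.platform, ok: true, file });
    } catch (err) {
      // Log and keep going: one failed platform shouldn't stop the rest.
      console.error(`Failed: ${output.platform}: ${err.message}`);
      results.push({ platform: output.platform, ok: false, error: err.message });
    }
  }
  return results;
}

// Demo with a stub that fails on one platform:
const demoOutputs = [{ platform: "spotify-canvas" }, { platform: "instagram-reel" }];
renderAll(demoOutputs, async (o) => {
  if (o.platform === "instagram-reel") throw new Error("capture timed out");
  return `${o.platform}.webm`;
}).then((results) => console.log(results.filter((r) => r.ok).length)); // 1
```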

🔧 When Things Go Wrong

Use the Symptom → Evidence → Request pattern: describe what you see, paste the error, then ask for a fix.

Symptom
Puppeteer can't find Chrome or Chromium
Evidence
Error: Could not find Chrome. Set the PUPPETEER_EXECUTABLE_PATH environment variable or install Chrome.
What to ask the AI
"Puppeteer can't find a Chrome binary. Three fixes, try in order: (1) Run 'npx puppeteer browsers install chrome' to download the bundled Chromium. (2) If Chrome is installed but Puppeteer can't find it, set the path manually: export PUPPETEER_EXECUTABLE_PATH=/usr/bin/google-chrome (or wherever Chrome is). On Mac: /Applications/Google Chrome.app/Contents/MacOS/Google Chrome. On Windows/WSL: /mnt/c/Program Files/Google/Chrome/Application/chrome.exe. (3) Install the puppeteer package (not puppeteer-core) which bundles its own Chromium."
Symptom
Output video has no audio
Evidence
The rendered MP4 plays the visualizer animation but there's no sound -- the video is silent
What to ask the AI
"The frame-capture approach captures video frames but not audio. The audio needs to be muxed in during the ffmpeg encoding step. Make sure the ffmpeg command includes the original audio file as a second input: ffmpeg -framerate 30 -i frames/frame_%04d.png -i audio.mp3 -c:v libx264 -c:a aac -shortest output.mp4. The -shortest flag ensures the video and audio end at the same time."
Symptom
Rendering is extremely slow (minutes per clip)
Evidence
A 15-second clip is taking 5+ minutes to render, and longer clips seem to hang
What to ask the AI
"Full-resolution frame capture is CPU-intensive. Three speedups: (1) Add a --draft flag that halves width and height for faster test renders. (2) Reduce frame rate to 24fps instead of 30fps (most visualizer content doesn't need 30). (3) Make sure Puppeteer is using --disable-gpu and --no-sandbox flags for headless mode. On Linux, also add --disable-dev-shm-usage to prevent shared memory issues. For the final render, full resolution will still take time -- that's normal."
Symptom
Spotify Canvas file is too large (over 2MB)
Evidence
The spotify-canvas clip is 2.4MB and Spotify rejects it during upload
What to ask the AI
"Spotify Canvas has a strict 2MB limit. Reduce the file size: (1) In the ffmpeg encode step, add -crf 28 (higher CRF = smaller file, 28 is a good balance for short clips). (2) Reduce the bitrate with -b:v 800k. (3) Make sure the duration is exactly 8 seconds or less. (4) The 720x1280 resolution is correct -- don't go higher. (5) If still too large, try 6 seconds instead of 8. Add a check in the manifest generator that warns if a Spotify Canvas output exceeds 2MB."
Symptom
Colors look washed out or oversaturated in the rendered video
Evidence
The visualizer looks perfect in --preview mode but the rendered video has different, duller colors
What to ask the AI
"This is a color space mismatch between the browser's canvas rendering and the video encoding. In the ffmpeg command, add -colorspace bt709 -color_primaries bt709 -color_trc bt709 to force standard HD color space. If capturing PNGs, make sure Puppeteer's screenshot uses type: 'png' (not jpeg, which compresses colors). Also check that the canvas element doesn't have a CSS filter applied that only renders in the live browser."

Customize it

Batch-render an entire album

Add a --batch flag that accepts an album config YAML file instead of a single
release config. The album config has a tracks array, each with the same
structure as a single release config. When --batch is used, iterate through
every track and render all platform outputs for each one. Create a subdirectory
per track in the output directory (named after the track title, lowercased,
hyphens for spaces). Generate a master manifest.json at the album level that
lists all tracks and all outputs. Print a summary at the end: total tracks,
total clips rendered, total file size.

Auto-upload to YouTube

Add a --upload-youtube flag that uses the YouTube Data API v3 to upload
rendered clips directly. Require a credentials.json file (OAuth2 client
credentials from Google Cloud Console). On first run, open a browser for
OAuth consent and save the refresh token locally. Upload the youtube-short
output as a YouTube Short and the youtube-background output as a standard
video. Set the title to "{track.title} - {track.artist} [Visualizer]",
the description to "Visual: AI-generated tape deck visualizer", and add
tags from the release config. Print the YouTube URL after upload completes.

Highlight reel mode

Add a --highlight flag that automatically picks the most energetic section
of the track for short-format outputs (Spotify Canvas, Instagram Reel).
Instead of using the first N seconds of audio, analyze the audio file to
find the peak energy section: read the audio as raw samples using an audio
decoding library, compute RMS energy in 1-second windows, find the window
with the highest energy, then expand outward to the required duration.
This way the Spotify Canvas gets the drop or climax of the track, not the
intro. Print which timestamp range was selected.
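The energy analysis this prompt describes is only a few lines once you have raw samples. A sketch over a plain Float32Array (the decoding step — getting samples out of the MP3 — is the part that needs a library and is omitted here; the function name is ours):

```javascript
// Find the start time (in seconds) of the most energetic window of the
// track, using RMS energy over fixed-size windows of raw PCM samples.
function peakWindowStart(samples, sampleRate, windowSeconds = 1) {
  const win = Math.floor(sampleRate * windowSeconds);
  let bestStart = 0;
  let bestEnergy = -1;
  for (let start = 0; start + win <= samples.length; start += win) {
    let sumSquares = 0;
    for (let i = start; i < start + win; i++) sumSquares += samples[i] * samples[i];
    const rms = Math.sqrt(sumSquares / win);
    if (rms > bestEnergy) { bestEnergy = rms; bestStart = start; }
  }
  return bestStart / sampleRate;
}

// Synthetic check: a quiet signal with a loud burst during seconds 3-4.
const rate = 1000;
const samples = new Float32Array(6 * rate).fill(0.05);
for (let i = 3 * rate; i < 4 * rate; i++) samples[i] = 0.9;
console.log(peakWindowStart(samples, rate)); // 3
```

From here, the selected start time just becomes the audio offset passed to the visualizer page before capture begins.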

Text overlay system

Add support for a "textOverlay" section in the release config that syncs
text to timestamps:
  textOverlay:
    font: "Courier New"
    color: "#ffffff"
    entries:
      - text: "JADED"
        start: 0
        end: 3
        position: "center"
        size: 48
      - text: "out now on all platforms"
        start: 3
        end: 6
        position: "bottom"
        size: 24
Pass the overlay data to the visualizer template via query params or a
JSON endpoint. The visualizer should render the text with fade-in/fade-out
transitions (0.3s) at the specified timestamps. This is for promotional
clips where you want lyrics, release dates, or calls-to-action overlaid
on the visualizer.

KNOWLEDGE CHECK

Your content pipeline uses Puppeteer to render visualizer videos. What does Puppeteer actually do in this workflow?


The complete module: everything you built

This is lesson 8 of 8. Here’s the full system you now have:

  • 1. Release Asset Engine: One cover image in, six platform-sized assets out
  • 2. Campaign Generator: One metadata file in; captions, hashtags, calendar, and EPK out
  • 3. RECORDED LOCATIONS Builder: Episode notes in, all YouTube publishing metadata out
  • 4. Moodmix Control Room: CSV exports in, geographic and business intelligence out
  • 5. Royalty Reconciler: Distributor CSVs in, discrepancy flags and revenue report out
  • 6. Release Ops Command Center: All of the above in, one weekly action-plan dashboard out
  • 7. Audio Visualizer Generator: Audio file in, signature tape deck visualizer out
  • 8. Visualizer Content Pipeline: Release config in, platform-ready video clips out

These tools chain together. The asset engine’s cover art feeds into the visualizer pipeline. The campaign generator’s metadata populates your content clips. The control room and reconciler feed the command center. The visualizer from L7 is the core of the content pipeline in L8. Everything connects.


What you actually built in this module

Eight CLI tools, each built in 20-40 minutes with AI assistance. Total time: roughly 4 hours spread across the module. What you got:

A complete release operations system. From the moment you finish mastering a track to the moment it’s live on every platform, you have a tool for every step. Cover art resized for 6 platforms. Campaign captions and calendar generated. YouTube episode metadata ready. Analytics dashboard running. Royalties reconciled. Command center showing your weekly priorities. Visualizer content rendered for 4 platforms. All in one pipeline, all running locally, all free.

The dollar math. Add up what this replaces: Canva Pro ($13/month), Later or Buffer ($15/month), Chartmetric ($10/month), a royalty tracking service ($10/month), Rotor Videos or Specterr ($15/month), plus the time cost of doing everything manually. That’s roughly $60-75/month in SaaS subscriptions, or $720-900/year. Your tools cost nothing to run after they’re built. Your data never leaves your machine. And they work exactly the way you want them to because you directed the AI that built them.

A repeatable pattern. Every tool follows the same structure: define a config, run a command, get outputs. The config-to-output pipeline pattern is the same one used by labels, distributors, and music tech companies — they just have engineering teams building it. You have a CLI and an AI tool. The result is the same.


Key takeaways

  • Automation compounds. Each tool saves 15-30 minutes per release. Eight tools across a monthly release cycle saves 4+ hours every month. Over a year, that’s 48+ hours of operational overhead eliminated — time you spend in the studio instead.
  • The pipeline pattern is universal. Config file in, processed outputs out. It’s the same structure whether you’re rendering visualizers, resizing cover art, or generating campaign content. Once you internalize this pattern, you can build a pipeline for anything.
  • Your content pipeline is your competitive advantage. Major labels have teams of people doing content operations — resizing art, scheduling posts, reconciling royalties, rendering promo clips. You have a set of CLI tools that do it all in minutes. The playing field is more level than it’s ever been.
  • Everything runs locally, costs nothing, and is yours forever. No subscriptions to cancel. No platform changes breaking your workflow. No vendor lock-in. The tools live on your machine. You can modify them, extend them, chain them together. They evolve with your workflow.
  • The only thing left is making the music. That was always the point. Every minute spent on release operations is a minute not spent creating. These tools exist so you can stop resizing images, stop copy-pasting captions, stop manually screen-recording visualizers, and go make something.

Try it yourself

  1. Open your AI CLI tool in an empty folder.
  2. Paste the main prompt.
  3. Run npm install and wait for Puppeteer’s Chromium download.
  4. Drop an audio file and cover art into the project.
  5. Edit release.yaml with your track details.
  6. Run node src/cli.js --preview to check the visualizer.
  7. Run node src/cli.js --format mp4 to render all four platform clips.
  8. Open the outputs. Upload them. Release your music.

Every new release is now a pipeline. The tools are built. The workflow is yours. Go make music.