Pull apart any song.
Vocals, drums, bass, guitar, piano — isolated cleanly, ready to study.

Riffd came from a simple frustration: most tools could isolate broad instrument groups, but not the specific parts you wanted to hear. It exists to give musicians a clearer way to break down real songs — and actually understand what's happening inside them.

1. Find: Search Spotify or upload your own audio file.
2. Isolate: Separate the track into vocals, drums, bass, guitar, piano, and other — each on its own stem.
3. Understand: See key, tempo, chords, and harmonic structure by section.
4. Interact: Mix, loop, solo, mute, and transpose directly in the browser.
5. Recommend: Get song suggestions matched to your track's key, tempo, and chord progression — not genre tags.
Demucs (Stem Separation): Meta's transformer model running on cloud GPU — isolates six stems (vocals, drums, bass, guitar, piano, other) with high fidelity and stereo-field analysis.
Basic Pitch (Note Detection): Spotify's pitch-detection model converts each stem into note events — the foundation for chord detection, key analysis, and tab generation.
Essentia (Key + Tempo Detection): Detects key and BPM directly from the audio signal — results surface in real time, before stem separation finishes.
Web Audio API (Browser Mixing Engine): Powers the full mixer — mute, solo, volume, loop, and pitch transposition — entirely in the browser, with no server round-trips.
Diatonic Template Matching (Chord & Harmonic Analysis): Chord progressions are sourced from tab databases and cross-referenced against diatonic chord templates — each chord's fit is scored to detect the key and convert raw chord names into roman-numeral analysis by section.
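The template-matching idea above can be sketched in a few lines. This is a simplified illustration, not Riffd's implementation: it assumes major keys only, plain triad names, and a naive "count the diatonic chords" score.

```python
# Minimal sketch of diatonic template matching (major keys only; illustrative).
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
# Diatonic triad qualities for a major key: I ii iii IV V vi vii(dim)
MAJOR_DEGREES = [(0, ""), (2, "m"), (4, "m"), (5, ""), (7, ""), (9, "m"), (11, "dim")]
ROMAN = ["I", "ii", "iii", "IV", "V", "vi", "vii°"]

def diatonic_chords(tonic_index):
    """The seven diatonic triads of a major key, e.g. C -> C, Dm, Em, F, G, Am, Bdim."""
    return [NOTES[(tonic_index + step) % 12] + qual for step, qual in MAJOR_DEGREES]

def best_key(progression):
    """Score all 12 major keys by how many chords in the progression fit each template."""
    scores = {tonic: sum(c in set(diatonic_chords(i)) for c in progression)
              for i, tonic in enumerate(NOTES)}
    return max(scores, key=scores.get)

def to_roman(progression, tonic):
    """Convert chord names to roman numerals relative to the detected key."""
    template = diatonic_chords(NOTES.index(tonic))
    return [ROMAN[template.index(c)] if c in template else "?" for c in progression]

key = best_key(["C", "G", "Am", "F"])
print(key, to_roman(["C", "G", "Am", "F"], key))  # C ['I', 'V', 'vi', 'IV']
```

A production version would also weight minor keys, sevenths, and borrowed chords; the scoring structure stays the same.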
Claude API (Musical Insight): Generates progression names, key context, and theory-based recommendations from the detected chords, tempo, and lyrics.
Multi-source Audio Acquisition (Audio Pipeline): Sources audio from YouTube with automatic fallback across multiple extraction methods — file upload is always available as a direct path.
Concurrent Job Queue (Multi-user Processing): A FIFO queue lets multiple users run analyses simultaneously — waitlisted users see live status and are promoted automatically.
Flask + Vanilla JS (Application Stack): Lightweight Python backend with a zero-dependency frontend — SQLite caching, filesystem job state, deployed on Render with Gunicorn.
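The SQLite caching mentioned above can be sketched as a simple get-or-compute helper. The table name and schema here are illustrative assumptions, not Riffd's actual schema:

```python
# Minimal sketch of SQLite result caching (illustrative table name and schema).
import json
import sqlite3

conn = sqlite3.connect(":memory:")  # the real app would use an on-disk file
conn.execute("CREATE TABLE IF NOT EXISTS analysis_cache "
             "(track_id TEXT PRIMARY KEY, result TEXT)")

def get_or_compute(track_id, compute):
    """Return a cached analysis result, running the pipeline only on a miss."""
    row = conn.execute("SELECT result FROM analysis_cache WHERE track_id = ?",
                       (track_id,)).fetchone()
    if row:
        return json.loads(row[0])          # cache hit: skip the expensive pipeline
    result = compute()
    conn.execute("INSERT INTO analysis_cache VALUES (?, ?)",
                 (track_id, json.dumps(result)))
    conn.commit()
    return result

calls = []
slow_pipeline = lambda: calls.append(1) or {"key": "C"}
print(get_or_compute("t1", slow_pipeline), get_or_compute("t1", slow_pipeline), len(calls))
```

The second lookup returns the stored JSON without calling the pipeline again, so `calls` stays at length 1.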
Build for failure, not the happy path
YouTube blocks requests. Replicate times out. The Claude API returns malformed JSON. The app only became genuinely usable once every external dependency had an explicit fallback — yt-dlp falls through to Cobalt, then Piped, then a user upload prompt. Partial results always surface rather than a blank screen.
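The fallback chain described above can be sketched as a loop over ordered fetchers. The fetcher names mirror the text (yt-dlp, Cobalt, Piped), but their bodies here are placeholders standing in for real extraction calls:

```python
# Minimal sketch of fallback-across-sources (fetcher bodies are placeholders).
class AllSourcesFailed(Exception):
    """Raised when every extraction method has been exhausted."""

def fetch_with_fallback(url, fetchers):
    """Try each source in order; any failure falls through to the next."""
    errors = []
    for fetch in fetchers:
        try:
            return fetch(url)
        except Exception as exc:
            errors.append(f"{fetch.__name__}: {exc}")
    # Exhausted: the caller would surface an upload prompt instead of a blank screen.
    raise AllSourcesFailed("; ".join(errors))

# Hypothetical stand-ins for yt-dlp, Cobalt, and Piped:
def via_ytdlp(url):  raise RuntimeError("blocked")
def via_cobalt(url): raise RuntimeError("timeout")
def via_piped(url):  return b"audio-bytes"

audio = fetch_with_fallback("https://example.com/track",
                            [via_ytdlp, via_cobalt, via_piped])
print(len(audio))
```

The key property is that partial failure never reaches the user as an error — only total exhaustion does, and even then as an upload prompt rather than a dead end.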
Silent queue promotion
When the concurrent job limit is hit, users are queued and promoted automatically when a slot opens — no error message, no manual retry. The degraded state is invisible to the user.
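The promotion mechanic can be sketched as a slot set plus a FIFO waitlist. The class and limit here are toy assumptions; the real queue also streams live status to waitlisted users:

```python
# Toy sketch of silent queue promotion: fixed slots plus a FIFO waitlist.
from collections import deque

class JobQueue:
    def __init__(self, max_concurrent=2):
        self.max_concurrent = max_concurrent
        self.running = set()
        self.waiting = deque()

    def submit(self, job_id):
        """Run immediately if a slot is free, otherwise queue silently."""
        if len(self.running) < self.max_concurrent:
            self.running.add(job_id)
            return "running"
        self.waiting.append(job_id)
        return f"queued (position {len(self.waiting)})"

    def finish(self, job_id):
        """Free a slot and promote the next waiter automatically."""
        self.running.discard(job_id)
        if self.waiting:
            promoted = self.waiting.popleft()
            self.running.add(promoted)
            return promoted
        return None

q = JobQueue(max_concurrent=2)
print(q.submit("a"), q.submit("b"), q.submit("c"))  # running running queued (position 1)
print(q.finish("a"))                                # "c" is promoted without user action
```

No error path exists in `submit`: being over the limit is a normal state, not a failure.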
Progressive delivery
The pipeline has five stages. Key and BPM results are pushed to the frontend as soon as Essentia finishes — before GPU stem separation completes. Users see meaningful output within seconds of triggering analysis, not at the end of a 10-minute wait.
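The staged delivery can be sketched as a pipeline that publishes each result the moment its stage completes. The stage names follow the text; the `publish` callback is an assumption standing in for whatever transport the app uses to push results to the frontend:

```python
# Sketch of progressive delivery: each stage pushes its result as it finishes.
def run_pipeline(publish):
    stages = [
        ("key_bpm",  lambda: {"key": "A minor", "bpm": 92}),   # fast: Essentia
        ("stems",    lambda: {"stems": 6}),                    # slow: GPU separation
        ("notes",    lambda: {"note_events": 1540}),
        ("chords",   lambda: {"progression": ["Am", "F", "C", "G"]}),
        ("insights", lambda: {"name": "vi-IV-I-V"}),
    ]
    for name, stage in stages:
        publish(name, stage())   # partial result is visible immediately

received = []
run_pipeline(lambda name, result: received.append(name))
print(received[0])  # key_bpm
```

Because `key_bpm` is published before `stems` even starts, the user sees key and tempo within seconds while the GPU work continues in the background.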
Prefetch on selection
Full-track download starts the moment a user selects a song, before they click Analyze. By the time they trigger processing, the audio is usually already on the server — shaving a meaningful chunk off perceived latency.
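The prefetch pattern can be sketched with a background thread started on selection and joined on analysis. The `download` function is a placeholder for the real acquisition pipeline:

```python
# Sketch of prefetch-on-selection: download starts before Analyze is clicked.
import threading
import time

prefetched = {}

def download(track_id, store):
    time.sleep(0.05)              # stands in for the network transfer
    store[track_id] = b"audio"

def on_select(track_id):
    """Kick off the download the moment the user picks a song."""
    t = threading.Thread(target=download, args=(track_id, prefetched), daemon=True)
    t.start()
    return t

def on_analyze(track_id, t):
    """By the time the user clicks Analyze, the join is usually instant."""
    t.join()
    return prefetched[track_id]

t = on_select("track-42")
audio = on_analyze("track-42", t)
print(len(audio))
```

The `join` makes the pattern safe even when the user clicks Analyze faster than the download completes — the worst case degrades to the non-prefetched latency.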
Memory-efficient startup
Heavy Python imports — numpy, TensorFlow, Basic Pitch — are deferred to first job execution. Boot time stays around 40MB RSS instead of 300MB, which matters on a 2GB Render instance shared with Gunicorn workers.
Built by Dylan Glatt — New York, NY