I started learning Blender with two goals in mind. One centers on toy ducks, as one might naturally expect. The other centers on music visualization.
Since my earliest days of “tuning out” (sitting and listening to music with big headphones on, letting my mind wander), I’ve pictured patterns and colors and what-not. Perhaps now that I have the tools to do so, I can try creating some of that effect in video form. Perhaps.
This is an early test of my ability to play with lights and colors to music:
Keep in mind that this is literally my very first anything-at-all along the way toward my actual goal. The camera is static, I rendered at a super-low resolution to get the frames made in a reasonable timespan, and I clearly need to experiment with the attack/release values when baking sounds to F-curves (hence the jittery quality of the oval lighting changes). But hey, it’s a start.
Technically speaking: going forward, I intend to carve up and pre-process my source audio. It’s one thing to tell Blender “hey, use 250 Hz to 500 Hz for this bit,” but perhaps better to say “hey, use this copy of the audio file that’s been pre-tweaked to get the best results from this material node.” Also? Every tutorial I’ve watched so far about doing music visualization in Blender has ignored the fact that stereo exists. I think I can bring something to the table in this regard.
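As a rough sketch of what that stereo-aware pre-processing could look like: the little Python script below (standard library only; the function name and file paths are my own invention, nothing Blender-specific) splits a 16-bit stereo WAV into separate left and right mono files. Each mono file could then be baked to its own F-curve, so the left and right channels can drive different lights or materials.

```python
import array
import wave

def split_stereo(src_path, left_path, right_path):
    """Split a 16-bit stereo WAV into two mono WAV files.

    Assumes a little-endian machine (the usual case), since WAV
    sample data is little-endian and array.array uses native order.
    """
    with wave.open(src_path, "rb") as src:
        if src.getnchannels() != 2 or src.getsampwidth() != 2:
            raise ValueError("expected 16-bit stereo input")
        framerate = src.getframerate()
        # Samples are interleaved: L, R, L, R, ...
        frames = array.array("h", src.readframes(src.getnframes()))

    left = frames[0::2]   # every even-indexed sample is left
    right = frames[1::2]  # every odd-indexed sample is right

    for path, samples in ((left_path, left), (right_path, right)):
        with wave.open(path, "wb") as out:
            out.setnchannels(1)
            out.setsampwidth(2)
            out.setframerate(framerate)
            out.writeframes(samples.tobytes())
```

The same slicing idea extends to other pre-tweaks (band-passing a copy of the file per material node, say) before anything ever touches Blender.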