AI Music Revolution

NAMM 2026: What the Music Industry Got Wrong About AI

Episode 4


I just spent three days at NAMM listening to the music industry argue about AI music. The education sessions felt defensive, anxious, and filled with one-way monologues about why AI is bad for music.

But they're fighting a caricature — and conflating two very different things.

In this episode, I break down:
→ The "junk food for the brain" argument (and why it falls apart)
→ The two lanes: spam vs. human-authored, AI-assisted production
→ Why the 97% stat actually proves our point, not theirs
→ The legal reality around copyright and human authorship
→ What serious AI music creators should do from here

"Being pro-AI music does not mean being anti-human music. This isn't binary."

The gatekeepers are building walls. Smart creators are walking through the open doors.

📝 Full blog post: www.jgbeatslab.com/ai-music-lab-blog/namm-2026-ai-music-industry-wrong

📚 AI Music Library: www.jgbeatslab.com/store/p/the-ai-music-library-lifetime-access-founding-member-pricing

If you're serious about AI music and ready to stop guessing — Red Lab Access is the complete system. Every book, every guide, every research report, all future releases included. One price. Lifetime access. jgbeatslab.com/red-lab-access

Red Lab Access is the complete system for serious AI music creators. Five books. Four guides. Five blind-tested research reports. Fourteen genre Blueprints. The 3-Song Sprint course. Fader, your AI Studio Manager. And a private community of creators who are actually building. Hundreds of members across ten-plus countries. One price. Lifetime access. All future releases included automatically. jgbeatslab.com/red-lab-access

New episodes of the AI Music Revolution drop every Friday and most Tuesdays. Everything mentioned in today's episode is at jgbeatslab.com. Links in the show notes.

Hello and welcome to the AI Music Revolution. I’m your host, Josh Gilliland, fresh back from NAMM in beautiful Southern California, where I was just last week.

For those of you who don’t know, NAMM is the big annual conference where musicians and music makers from around the world come together to check out new gear, tools, and technology in the music industry. Alongside the equipment halls, there are extensive education sessions and panel discussions—where I spent the vast majority of my time.

Let me tell you, AI was at the top of everyone’s mind.

The experience was illuminating, and I enjoyed every second of it. I was completely exhausted by the end because I sat in sessions all day, took copious notes, and absorbed as much as I could. I’m very excited to share what I learned with all of you.

I’ve already published a blog post covering some of this material. This episode will overlap slightly, but we’ll also cover new ground.

As I walk through this, I do have notes in front of me, but some of this will be ad-libbed. I just want to get this out.

The vibe at NAMM this year depended on where you were.

If you were in the main hall with the guitars, drums, pianos, mixers, and gear, it felt very similar to NAMM in years past. But if you crossed over to the Hyatt, where the education sessions were held, the tone shifted dramatically.

That side of NAMM felt defensive. Concerned. You could feel anxiety about what the future of music looks like with AI.

There was a strong sense of protectionism. You could almost feel the moats being built around the existing music industry infrastructure.

As I sat through these sessions, it became increasingly clear that many of these were not real discussions. They were largely one-way monologues about why AI in music is bad, why it’s doomed, and why the historic way of doing things is the only valid path forward.

Underneath it all, you could hear fear.

I think a lot of what was being said was wishful thinking.

Let me walk through some of the specific arguments and experiences I had during these discussions.

One of the first claims I’ll call out is how the industry at large is characterizing AI music.

The prevailing sentiment was that AI music is “junk food for the brain.” That phrase came from Craig Anderton during one of his keynote speeches.

He also made a comparison—slightly cringeworthy—saying that AI music is like sex, while human-created music is like making love. Two very different experiences, in his words.

The point he was trying to make was that AI music creates a short-term sugar rush with no lasting emotional impact.

Every time he made these statements, the room erupted into applause. He was clearly preaching to an anti-AI audience.

Beyond that keynote, there were repeated arguments about how AI imitates while humans influence.

The idea presented was that AI learns by consuming catalogs and replicating patterns, while humans are influenced by music and then go on to create new genres and sounds.

In their framing, AI is a copy machine, while humans are the only true innovators.

I completely disagree—but that was the line being drawn.

Another point that came up repeatedly was how difficult it is for listeners to distinguish AI-generated music from human-created music.

The statistic cited was that around 97% of listeners couldn’t reliably tell the difference.

However, the claim was that listeners were more likely to return to human-created songs for repeat listens, suggesting greater “stickiness.”

To me, the headline there wasn’t the repeat listening argument. It was the 97%.

If nearly everyone can’t tell the difference, that tells us something very important about the current quality of AI-generated music—and it’s only going to improve.

Despite this, the speakers still tried to draw a hard line claiming human music is inherently superior, even though the data didn’t really support that conclusion.

Now, I want to be clear: being pro-AI music does not mean being anti-human music. This isn’t binary.

What I create—and what many of you create—is human-authored, AI-assisted music.

That distinction was largely ignored in these sessions.

One legitimate concern that was raised involves the royalty pool.

Streaming royalties are a fixed pool. Flooding the market with low-effort AI-generated music dilutes that pool and impacts existing artists.

That concern is fair. I empathize with it.

But we also need to be honest: streaming royalties make up a very small share of how most artists actually earn money. Touring, merch, sponsorships, and licensing matter far more.

We’re talking about pennies, not the core economics of the industry.

What was missing from these discussions is that AI music actually exists in two very different lanes.

Lane one is spam: auto-generated tracks pushed en masse onto streaming platforms, often paired with bots to inflate streams. That’s gross. No one should support that.

Lane two—our lane—is human-authored, AI-assisted production.

Those are not the same thing, but the industry continues to conflate them.

People in this community aren’t just prompting and shipping. There is a human in the loop.

We write lyrics.
We hum melodies.
We iterate.
We regenerate.
We bring tracks into a DAW, mix, master, add effects, and engineer the final product.

That reality is either misunderstood or intentionally ignored.

At one panel, an indie artist expressed concern that AI tools were using her music to compete against her.

Out of curiosity, I looked her up. She had around 100 monthly listeners.

That’s not an insult—having any audience is meaningful—but it’s highly unlikely AI models are over-indexing on her catalog or competing for her fanbase.

This is where emotional arguments replace factual ones.

Legally, one point the industry clings to is that fully AI-generated music is not copyrightable.

That’s true—but it’s also irrelevant to what we do.

Human-authored, AI-assisted music is a different category entirely. No one in these sessions could clearly argue that this type of work is uncopyrightable.

The laws are evolving quickly, but right now, that corner case applies only to fully autonomous generation.

There was also an anecdote about a monkey selfie—a real legal case where a monkey triggered a camera and took a photo, which was ruled uncopyrightable because no human authored it.

The analogy was that AI tools are the monkey.

That argument doesn’t hold up when there is a clear human directing the process, making creative decisions, and shaping the output.

AI is a tool. The human authorship is what anchors ownership.

One encouraging trend is that the industry is slowly moving from litigation toward licensing.

We’re starting to see deals between AI platforms and major rights holders. That’s likely the path forward.

Hopefully, that reduces the endless lawsuits—because the only consistent winners there are the lawyers.

So what does all of this mean for us as creators?

First: own your process.

Use AI as a tool, not as a replacement for authorship. Write first. Generate second.

Your inputs—lyrics, melodies, rhythms, decisions—are what anchor your role as the creator.

Second: focus on quality over quantity.

We’re heading toward a future where hundreds of thousands of songs could be uploaded to streaming platforms every day. That helps no one.

Refine your work. Learn your DAW. Make your tracks sound great.

Third: embrace transparency.

Be upfront about using AI as part of your process. Don’t hide it. Make it part of your identity.

And finally, develop your signature—your sound fingerprint. That’s what separates intentional art from generic output.

I could go on for a long time, but I’ll wrap it up here.

The opportunity in AI music is wide open. Gatekeepers are building walls, but creators will keep walking through the doors.

AI music is the future of music. Let’s embrace it, lead responsibly, and see how far we can push this art form.

Thanks for listening. I’ll talk to you next time.