AI Music Revolution
The AI music industry is moving faster than most artists can react. Platforms launch overnight. Terms change quietly. Laws lag behind reality. And everyone argues about whether this is "real" music — while the future gets built without them.
AI Music Revolution cuts through the noise.
Hosted by Josh Gilliland — 30-year Big Tech veteran, 5-star SubmitHub curator, 200+ track producer, and author of The AI Music Revolution — this weekly briefing is for creators who want to operate like professionals, not hobbyists.
What to expect:
• Market Intel — The truth about Suno, Udio, Bandcamp, and the major moves shaping this space (without the PR spin)
• The Lab — Prompt engineering, DAW mixing, mastering workflows, and professional release standards
• Distribution & Marketing — How to pass the curator test, get playlisted, and actually monetize your catalog
• The Philosophy — Authenticity, authorship, and the hard questions about creativity in the AI era
• Legal Reality Checks — What you own, what you don't, and how to protect your work
This is not a hype show. This is not a "press a button and get famous" fantasy.
It's a tactical briefing for the AI music era.
Join the Revolution. New briefings every week.
Books & resources: jgbeatslab.com/music-books
Suno v5.5 — What We Actually Found (Emergency Episode)
Suno v5.5 dropped this week with three new features — Voices, Custom Models, and My Taste. We went straight into testing. This unscheduled episode covers what we actually found: the Voices sweet spot most people will miss, why Custom Models might be more important than Voices, and the My Taste feature nobody is talking about. Plus — the full v5.5 guide is available on the website right now. RLA members already have it.
UPDATE: Suno V5.5 Guide is now available on our website. https://www.jgbeatslab.com/store/p/suno-v55-beta-guide
Red Lab Access: https://www.jgbeatslab.com/store/p/the-ai-music-library-lifetime-access-founding-member-pricing
Unlock Suno: Studio Edition: https://www.jgbeatslab.com/store/p/unlock-suno-complete-guide
If you're serious about AI music and ready to stop guessing — Red Lab Access is the complete system. Every book, every guide, every research report, all future releases included. One price. Lifetime access. jgbeatslab.com/red-lab-access
Red Lab Access is the complete system for serious AI music creators. Five books. Four guides. Five blind-tested research reports. Fourteen genre Blueprints. The 3-Song Sprint course. Fader, your AI Studio Manager. And a private community of creators who are actually building: hundreds of members across ten-plus countries. One price. Lifetime access. All future releases included automatically. jgbeatslab.com/red-lab-access
New episodes of the AI Music Revolution drop every Friday, and most Tuesdays. Everything mentioned in today's episode is at jgbeatslab.com. Links in the show notes.
Hello and welcome to the AI Music Revolution. I'm your host, Josh Gilliland. If you know anything about this podcast, you know I try to keep a lot of discipline around the Friday episode drops. But every once in a while we need to do an unscheduled episode because something newsworthy has happened. Well, something newsworthy has happened: Suno just dropped V5.5, and that definitely justifies an out-of-cycle drop.

It dropped this week, and I took it straight into testing inside the lab. I've been waiting for this release, and when it landed I was like a kid in a candy store on Christmas morning, whatever analogy you want to use. I fired it up and spent hours across multiple days breaking it, playing with it, and learning how it operates: what's great about it, what the challenges are, and the best approaches for using it. I want to share those findings with you here, because bad takes are already forming around this release, and I'd rather you hear the real findings first. So let's get into it.

We'll start with Voices, since that's the headline feature. It's the one most people are talking about, and the one almost everyone is misunderstanding from day one, to be honest with you. Voices lets you capture your own voice and use it to influence how Suno sings your creations. You provide a sample, Suno learns from that sample, and that influence carries into the generated output.

Here's what most people think that means: voice cloning, a near-perfect replica of your voice out there singing the songs you create. That's not what it is, and that's okay. Think of Voices as an influence system, not a cloning system. It blends elements of your vocal identity into Suno's internal vocal system. The output is Suno singing with your voice as a strong directional push. It doesn't sound like you singing; it sounds like Suno influenced by you singing. That distinction matters because it determines what you should actually expect. If you go in expecting a clone, you're going to be frustrated. If you go in expecting influence, a way to push the AI vocal toward something that feels more like you, you're going to find it genuinely useful.

Now let's talk about the settings, because this is where I think most people are going to make mistakes. I know I did when I started playing with it. The big one is the influence slider. It goes from 0 to 100% like the other Suno sliders, and the instinct is to push it as high as possible. I encourage you not to do that. I tested across the full range, across multiple genres and multiple songs, and here's what I found. At 25%, there's a tonal hint only; your voice is there, but barely. The 40 to 60 percent range is the sweet spot: your voice becomes recognizable, artifacts are manageable, and the song stays usable. This is where most people should start. At 75%, the identity gets stronger, but stability starts to drop and you begin to hear things break. At 100%, it really breaks: the resemblance to your voice is strongest in short bursts, but the output is often unusable, with shimmer artifacts, instability, and overprocessing. It's honestly not worth it for anything you want to be release ready. My suggestion: start at 50% and adjust from there based on how your specific voice responds inside Suno.
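If you want those findings in a form you can keep in your own notes, here's a minimal sketch. This is a hypothetical helper of my own, not anything from Suno; the ranges simply encode what our testing showed.

```python
# Hypothetical note-taking helper encoding the influence-slider ranges from
# our testing. None of these names come from Suno; it's just a quick sketch.

TESTED_RANGES = [
    ((0, 25),   "tonal hint only: your voice is there, but barely"),
    ((26, 39),  "influence building, identity still weak"),
    ((40, 60),  "sweet spot: recognizable voice, manageable artifacts"),
    ((61, 75),  "stronger identity, but stability starts to drop"),
    ((76, 100), "strongest resemblance in short bursts, often unusable"),
]

def describe_influence(percent: int) -> str:
    """Return the tested behavior for a given influence-slider value."""
    for (low, high), finding in TESTED_RANGES:
        if low <= percent <= high:
            return finding
    raise ValueError("influence must be between 0 and 100")

if __name__ == "__main__":
    print(describe_influence(50))  # the recommended starting point
```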
One more setting most people will probably ignore is the skill level selector. When you record or upload your voice, Suno asks whether you're a beginner, intermediate, advanced, or professional singer. If you're happy with your voice the way it sounds and you want Suno to replicate it, set it to professional every time. It's not cosmetic; it actively reshapes how the model interprets your voice, and the difference between beginner and professional is audible and meaningful. Now, if you don't have a great singing voice, drop it to intermediate or beginner and Suno will massage your voice: clean it up, make it sound a little better. But if you want Suno to really replicate your singing voice and your vocal patterns, set it to professional.

Your source audio matters as much as the slider. Keep it clean and consistent, 20 to 30 seconds, with the same tone, the same intensity, and the same delivery throughout. Pick the part of your recording where you sound the most like one stable version of yourself; that is your input. I put in songs where I ranged all over the place in different vocal styles, and Voices produced some really strange vocal patterns because Suno didn't know what to do with that. It wasn't until I isolated a standardized stretch of my vocal performance that Suno responded extremely well.

So where does Voices break? A few places worth knowing. The voice influence doesn't hold consistently across generations: same settings, same sample, same prompt, and you'll get different outputs just about every time. You cannot build a repeatable artist brand on Voices alone. The voice also tends to drift mid-song; it's stronger in the first half of the track and progressively weakens as the song goes along. That's a known limitation, not a configuration issue on your end. And very deep male vocals, deep bass and deep baritone, tend to get mapped toward a more generic AI voice rather than retaining their identity. If that's where your voice falls, expect weaker transfer at equivalent settings.

So what is Voices actually good for? Demo vocals, song ideation, pre-production, hearing your voice in a composition before committing to a recording direction. That's where it really shines. What it is not yet is a final vocal solution. It's not release ready on its own, and it's not a replacement for recording your actual vocals.
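If you're prepping a sample and want a quick way to find your most consistent 20 to 30 second stretch, here's a rough sketch. It assumes you have librosa installed, and the metric (lowest loudness variance) is my own proxy for "one stable version of yourself"; Suno doesn't publish how it scores input audio.

```python
# Rough sketch: pick the steadiest ~25-second stretch of a vocal recording
# to use as a Voices sample. Assumes `pip install librosa`. Low frame-level
# RMS variance is my own proxy for consistent tone and intensity.
import librosa
import numpy as np

def most_consistent_segment(path: str, seconds: float = 25.0, hop: float = 1.0):
    """Return (start, end) times of the window with the steadiest loudness."""
    y, sr = librosa.load(path, sr=None, mono=True)
    win = int(seconds * sr)
    step = int(hop * sr)
    if len(y) <= win:
        return 0.0, len(y) / sr  # recording is already short enough

    best_start, best_score = 0, np.inf
    for start in range(0, len(y) - win, step):
        chunk = y[start:start + win]
        rms = librosa.feature.rms(y=chunk)[0]  # frame-level loudness
        score = float(np.var(rms))             # lower variance = steadier take
        if score < best_score:
            best_start, best_score = start, score
    return best_start / sr, (best_start + win) / sr

start, end = most_consistent_segment("my_vocal_take.wav")  # hypothetical file
print(f"Trim your sample to {start:.1f}s-{end:.1f}s before uploading.")
```

It won't judge pitch or style for you, but it will stop you from feeding Suno a take that wanders in intensity from start to finish.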
One more thing before I move on: don't confuse Voices with Personas. They're now combined into the same interface, but they're completely different systems. Voices uses your real voice, requires you to verify your voice, and produces identity influence that changes from generation to generation. Personas use AI-generated voices captured from previous generations, need no verification, and produce more consistency across songs. A simple way to think about it: Voices helps you hear yourself in a composition; Personas help you build and maintain an AI artist identity across a body of work.

Okay, next up: Custom Models. This is the one I think is more significant than Voices in the long run, and it's getting slightly less attention. Custom Models lets you upload tracks from your own catalog to build a personalized version of the 5.5 model that knows your sound. When a custom model is active, it shapes every generation at the foundational level: arrangement patterns, instrumentation fingerprints, energy profile, genre behavior. It's not a style filter; it's a version of the model fine-tuned on your specific musical identity.

And here's the most important finding from our testing: custom models learn musical identity, not vocal identity. What comes through is how your music is constructed, where your energy moves, which instruments and textures appear consistently, the specific flavor of genre expressed in the songs in your catalog. Your literal voice is not the primary carrier; that's Voices' job. These two features solve different problems at different layers, and once you understand that, the whole system clicks.

On catalog quality: you need at least six tracks to build a model, and you can upload far more than that. But here's what our testing confirmed: more coherent material consistently outperforms more material. A tight set of 15 songs that all sound definitively like the same artist will outperform 30 songs that drift across experiments and half-finished ideas. Build from decisions, not experiments. Every song in your upload set is a vote for what the model thinks your project sounds like, and a pile of experiments dilutes the signal. (For one way to sanity-check that coherence before you upload, see the sketch after this section.)

One thing to know before you upload: Suno applies copyright detection to all uploads, so you need to own the music. That detection uses pattern-matching heuristics, not account-level ownership verification. In testing, an original song I own got flagged and blocked. If that happens to you, instrumental versions or alternate takes are the workaround.

Now, here's where custom models get really interesting: combine them with a Persona. That locks both layers of your artist identity. A custom model without a Persona gives you a consistent musical world with a different singer every time. A Persona without a custom model gives you a consistent singer in a generic musical context. Combine them, and the custom model defines the band's sound, the Persona defines the singer, and the prompt defines the individual song. That combination is the core workflow for building a consistent AI artist identity in 5.5. Honestly, it represents the most significant shift in how serious creators should approach this platform: moving from prompt-and-pray toward building identity infrastructure first and generating content within it. The producers who understand that shift are going to build more consistent, more valuable, and more defensible catalogs than everyone else.
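Back on that catalog-coherence point: here's a minimal sketch of one way to check it yourself before uploading. Everything here is my own proxy, not anything Suno exposes: mean MFCC vectors as crude per-track fingerprints (via librosa), cosine similarity against the catalog centroid, and an arbitrary threshold. It only helps you curate the upload set; it doesn't interact with Suno.

```python
# Rough coherence check for a custom-model upload set. Mean MFCCs and
# cosine similarity to the catalog centroid are my own crude proxies;
# Suno doesn't expose how its training weighs individual tracks.
import librosa
import numpy as np

def fingerprint(path: str) -> np.ndarray:
    """Crude per-track fingerprint: the mean MFCC vector."""
    y, sr = librosa.load(path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def flag_outliers(paths: list[str], threshold: float = 0.85) -> list[str]:
    """Return tracks whose similarity to the catalog centroid falls below threshold."""
    vecs = np.stack([fingerprint(p) for p in paths])
    centroid = vecs.mean(axis=0)
    sims = vecs @ centroid / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(centroid))
    return [p for p, s in zip(paths, sims) if s < threshold]

tracks = ["song01.wav", "song02.wav", "song03.wav"]  # your owned catalog
for outlier in flag_outliers(tracks):
    print(f"Consider cutting {outlier}: it may dilute the signal.")
```

The threshold is a judgment call; the point is to listen again to anything the check flags and ask whether it's a decision or an experiment.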
I've already built three custom models myself: one for my own personal band and two for AI bands I run. I'm extremely excited about what I'll be able to do with these moving forward, combining them with Voices and Personas. Absolute game changer. I'm just very excited to see what comes next.

Okay, time for the third major feature in 5.5: MyTaste. MyTaste is probably the most underrated feature of the three and the least talked about. It's available to every user on every tier, and it's already running in the background right now, whether you know about it or not. Keep that in mind. Here's what it does: MyTaste builds a taste profile from your creative behavior over time, your generations, your listening patterns, the songs you like or dislike. It takes all of that and applies it when you use the magic wand in the styles field.

Here's the finding I think most people will miss completely: you don't have to wait for passive learning. You can go in right now and write your own taste profile manually, up to 2,000 characters, with specific genres, instruments, moods, tempos, and themes. Do that today, and your magic wand will reflect your sound immediately, day one, no accumulation required. Treat it like a compressed version of your artist identity. "Modern country rock, overdriven electric guitars, anthemic choruses, male baritone, 110 to 130 BPM, work and freedom themes" will produce far more useful suggestions than "rock music, high energy."

One practical note: toggle it off when you're working outside your lane. If you're experimenting with a genre that doesn't reflect your general identity, MyTaste will keep pulling the magic wand back toward your profile. Turn it off for exploratory sessions, and back on for identity-consistent work. And remember, MyTaste only activates through the magic wand. It doesn't affect direct generation when you write your own style prompt. It's not always-on personalization; it's magic wand personalization specifically.
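Since the manual profile caps out at 2,000 characters, it's worth drafting it as structured notes and checking the length before you paste it in. A throwaway sketch: the field names below are just my way of organizing a draft, not anything Suno defines, and the cap is the only hard constraint.

```python
# Throwaway helper for drafting a manual MyTaste profile. The 2,000-character
# cap is the only real constraint; the field names are my own organizing
# scheme, not anything Suno defines.
MAX_CHARS = 2000

profile = {
    "genre": "modern country rock",
    "instruments": "overdriven electric guitars, pedal steel accents",
    "vocals": "male baritone, anthemic choruses",
    "tempo": "110-130 BPM",
    "themes": "work, freedom, small-town pride",
}

text = "; ".join(f"{k}: {v}" for k, v in profile.items())
assert len(text) <= MAX_CHARS, f"Profile is {len(text) - MAX_CHARS} characters over the cap"
print(f"{len(text)}/{MAX_CHARS} characters:\n{text}")
```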
All right, let's talk about the bigger picture. Here's what V5.5 is actually saying about where Suno is heading, from my point of view. All three features are personalization features. Voices pushes your vocal identity into the output. Custom Models push your musical identity into the model itself. MyTaste pushes your stylistic preferences into the workflow. Taken together, they move the platform from a one-shot prompt generator toward a system where you build identity infrastructure first and then generate content within it. That is lane-two development. The platform is rewarding creators who bring a real artistic identity to the tool, and producing better results for them than for people who are still approaching this like a slot machine. The gap between creators who understand that and creators who don't just got a lot wider with version 5.5.

So let's close this up. We've put together a full 5.5 guide covering all three features in depth: real testing data, specific settings, where each feature breaks, and how to use them together as a system. That guide will be available on the website any moment now. If you're listening to this when it drops, go check jgbeatslab.com; it may already be there.

If you're a Red Lab Access member, you'll get it automatically at no additional charge. That's just how RLA works: every update, every new guide, yours forever. If you're not inside Red Lab Access and you want to make sure you never miss another update like this, go to jgbeatslab.com/red-lab-access; the link is in the description. One price, lifetime access, everything we publish now and into the future. Unlock Suno: Studio Edition is the foundation everything else builds on; that link is also in the show notes.

Thank you for listening to an unscheduled episode of the AI Music Revolution. Back to the regular Friday cadence next week. You know, unless something significant enough happens to break that cadence again.