Transparency first

I use generative AI (Suno) to act as my session musicians and producers. I always write the lyrics, I direct the emotional arc, and I curate the output. I do not hide the tools I use.

Where possible I share the prompt alongside the song, and I tag any aspect where AI was involved. In my blog, Behind the songs, I outline how each song was created.

The workflow

Let me describe my artistic process and how I use AI at certain steps to shape my vision and get the most accurate sonic output from Suno.

In short: I write the stories and the lyrics, and AI provides the instruments I cannot play, including the singing voice.

But it isn’t as simple as pressing a "Make Song" button. My process is a rigorous, iterative collaboration where I act as the Executive Producer and Lyricist, leading a team of AI collaborators to bring a specific vision to life.

The Core Team

  • The Artist (Me): I provide the lived experience—the burnout, the neurodivergence, the "micro-hope." I write every word of the lyrics. The concept, the emotional arc, and the final decision on what is "true" are 100% human.

  • The Band (Suno AI): Think of Suno as my session musicians and vocalist. It creates the audio from my instructions, to the best of its ability. Suno has limitations that you will understand better if you read the blog.

  • The Co-Producer (Gemini): I work with this AI to translate my abstract feelings (e.g., "I want this to sound like a sardonic woman holding a martini") into the specific musical terminology and prompt syntax that Suno understands, as sketched below. This helps me work around my limited vocabulary, both from an AI-prompting perspective and a music-theory perspective.

  • The Consultant (ChatGPT): When the music isn't landing—when a "doom march" sounds too much like a pop song—I consult this tool to analyse the model's biases and figure out the technical "hacks" needed to break through.

The co-producer and consultant are both prompted to play specific roles, and a lot of prep work has gone into making sure they know me, how to help, and what not to do. For example: they never rewrite my lyrics or tell me what sonic costume my songs should wear.
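To make that translation concrete, here is a rough, hypothetical sketch (not an actual prompt from one of my songs). A feeling like "a sardonic woman holding a martini" might become a style prompt along the lines of:

  smoky lounge jazz, slow tempo, sultry sardonic female vocals, sparse piano, brushed drums, upright bass, dry vintage production

The co-producer's job is to find those concrete musical words; mine is to judge whether the result still carries the original feeling.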

There is also a supporting team of other AI chats that I use in certain scenarios. Recently I created a "Rant-chat" in ChatGPT to capture what I want a song to convey before I have the lyrics.

These supporting chats are constantly evolving. If you follow my blog, you will see that evolution in real time.

The Workflow: From Concept to Catharsis

My process is built on intentionality, not randomization.

  1. The Lyrics First: It always starts with the words. I write from a place of radical honesty about my experiences, frustrations, and realisations.

  2. The Sonic Costume: I decide how to dress the lyrics. I think of it as building a world: how does it look, how does it feel, how does it sound? I base it on what I want the song to convey. Since I am using AI, I am not limited to what I can sing or what instruments I play; I can focus on what the song deserves.

  3. The Prompting Loop: I collaborate with my co-producer to map out the song structure, translating my vision into instrumentation, vocal textures, and production techniques. We go back and forth until I am happy with the prompt.

  4. The Diagnostic Loop: If the output sounds too generic or polished, I don't just try again; I analyse why it failed. I explore specific strategies to strip away things like "pop polish" and find more authentic, raw, or historical sounds (see the sketch after this list). A song might be re-prompted 15-20 times before we nail the prompt.

  5. Skeleton & Skin: Once I find a version where the vocal performance and melody feel real (the Skeleton), I use the tools in Suno's Premier plan to re-orchestrate the music around it (the Skin). This allows me to modernise a track or make it heavier without losing what made it resonate with my vision.

  6. Not everything is released: Even with all this AI, sometimes what I envision is something Suno does not have the dataset to understand, or it is too far from the tags and categories that rule a tool like Suno. Those songs are left for another day: maybe another AI tool, a new version of Suno, or a future collaboration with a human musician.
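As an illustration of the diagnostic loop (again a hypothetical sketch, not a real revision history), a re-prompt to strip away pop polish might swap glossy descriptors for rawer ones:

  Before: anthemic pop, polished production, soaring female vocals
  After: raw lo-fi doom march, live room, tape saturation, funeral drums, weary female vocals

Each swap targets a specific bias I think I have identified, and the next generation tells me whether the diagnosis was right.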

By using AI in this way I stay in creative control while still getting the help I would have from fellow musicians: experts at my fingertips and a band of studio musicians at my beck and call.

Examples

See how the workflow was used for a particular song in the section "Behind the songs". There you will find a blog post per song/project.