With the current state of AI in creative spaces like music, the person doing the prompting still needs taste and an artistic eye/ear to generate something approximating “art”. How long will that limitation last? Probably not long.
However, current AI-generated music is also mostly crappy, fuzzy quality, and not in a good “lo-fi” way. It is also fragmented, in that there isn’t much persistent “vision”. Try getting AI to generate an entire song and you’ll see what I mean. Time-based stuff is still hard for AI. That said, some new AI generation tools are bridging that gap and getting better at making consistent tracks from beginning to end.
But they are in no way something you would release: noisy, full of artifacts, weird alien vocals (if you’re doing vocals), etc.
But generative AI is getting better exponentially. Ask me the same question as soon as next week and see where we are.
But there is no joy in simply entering prompts and getting MP3s back.
Given the CURRENT state of AI music generation, I can see using AI as maybe an idea jump-starter if you’re feeling creatively stuck, especially for someone on a deadline. Get something raw as a starting point, then actually play/produce a song for real with the AI-generated thing as a reference. It can also help if you need a VERY specific thing to enhance your existing work, like a specific part for a track in progress, where you can use something generative to “roll the dice” and produce some audio or MIDI.
But again, ask me next week. We may be headed toward a future where live performance becomes more highly valued again, because the lines between “real” recorded music and AI music will become VERY blurred.
To answer the original question? No.
Whether AI is “better” or not, real musicians make music because they love to make music. I don’t care if a human is “better” than me at making music, so an AI being “better” than me will have the same non-effect. I will still make music.