SoundCloud recently updated its terms of service to allow user-uploaded content to be used for artificial intelligence development, sparking backlash from musicians and digital rights advocates. The platform says it does not currently use artist content to train AI models and has implemented technical safeguards to prevent unauthorized use.
The updated terms, added in February 2024, grant SoundCloud permission to use uploaded content to "inform, train, develop or serve as input to artificial intelligence or machine intelligence technologies or services." Independent artists are particularly concerned that their work may be used without consent or compensation. Major labels, by contrast, have separate agreements that exempt their content from AI training.
SoundCloud emphasizes its commitment to supporting human creativity and ensuring artists maintain control over their work. The company assures users it has technical safeguards in place, including a "no AI" tag, to prevent unauthorized use of content for AI training.
This move is part of a larger trend of tech companies incorporating AI training clauses into their terms of service, raising questions about digital consent, transparency, and compensation. Other platforms, such as X (formerly Twitter) and LinkedIn, have also updated their policies to allow AI training on user-generated content, fueling debates about user rights and AI ethics.
The controversy highlights the ongoing tension between AI development and the rights of creators. As AI plays a growing role in the music industry, platforms like SoundCloud will need to navigate these complex issues to maintain the trust of their users.