As generative AI enters the mainstream, crowdfunding platform Kickstarter has struggled to formulate a policy that satisfies parties on all sides of the debate.
Most of the generative AI tools used to create art and text today, including Stable Diffusion and ChatGPT, were trained on publicly available images and text from the web. But in many cases, the artists, photographers and writers whose content was scraped for training haven’t been given credit, compensation or a chance to opt out.
The groups behind these AI tools argue that they’re protected by fair use doctrine — at least in the U.S. But content creators don’t necessarily agree, particularly where AI-generated content — or the AI tools themselves — are being monetized.
In an effort to bring clarity, Kickstarter today announced that projects on its platform using AI tools to generate images, text or other outputs (e.g. music, speech or audio) will be required to disclose “relevant details” on their project pages going forward. These details must include how the project owner plans to use AI-generated content in their work, as well as which components of the project will be wholly original and which will be created using AI tools.
In addition, Kickstarter is mandating that new projects involving the development of AI tech, tools and software disclose information about the sources of training data the project owner intends to use. The project owner will have to indicate how those sources handle consent and credit, Kickstarter says, and implement their own “safeguards,” such as opt-out or opt-in mechanisms for content creators.
An increasing number of AI vendors offer opt-out mechanisms, but Kickstarter’s training data disclosure rule could prove contentious, despite efforts by the European Union and others to codify such practices into law. OpenAI, among others, has declined to reveal the exact sources of its more recent systems’ training data, citing competitive concerns and, possibly, legal liability.
Kickstarter’s new policy will go into effect on August 29. But the platform doesn’t plan to retroactively enforce it for projects submitted prior to that date, Susannah Page-Katz, Kickstarter’s director of trust and safety, said.
“We want to make sure that any project that’s funded through Kickstarter includes human creative input and properly credits and obtains permission for any artist’s work that it references,” Page-Katz wrote in a blog post shared with TechCrunch. “The policy requires creators to be transparent and specific about how they use AI in their projects because when we’re all on the same page about what a project entails, it builds trust and sets the project up for success.”
To enforce the new policy, creators submitting projects to Kickstarter will have to answer a new set of questions, including whether their project uses AI tech to generate artwork and the like, or whether its primary focus is developing generative AI tech. They’ll also be asked whether they have consent from the owners of the works used to produce, or train, the AI-generated portions of their project.
Once AI project creators submit their work, it’ll go through Kickstarter’s standard human moderation process. If it’s accepted, any AI components will be labeled as such in a newly added “Use of AI” section on the project page, Page-Katz says.
“Throughout our conversations with creators and backers, what our community wanted most was transparency,” she added, noting that any use of AI that isn’t disclosed properly during the submission process may result in the project’s suspension. “We’re happy to directly answer this call from our community by adding a section to the project page where backers can learn about a project’s use of AI in the creator’s own words.”
Kickstarter first indicated that it was considering a change in policy around generative AI in December, when it said it would reevaluate whether the use of media owned or created by others in an algorithm’s training data constitutes copying or mimicking an artist’s work.
Since then, the platform’s moved in fits and starts toward a new policy.
Toward the end of last year, Kickstarter banned Unstable Diffusion, a group attempting to fund a generative AI art project without safety filters, letting users generate whatever artwork they pleased, including porn. Kickstarter justified the removal in part by implying that the project exploited particular communities and put people at risk of harm.
More recently, Kickstarter approved, then removed, a project that used AI to plagiarize an original comic book — highlighting the challenges in moderating AI works.