Frequently Asked Questions

Why is it important to credit artists’ work when generating A.I. images?

Artists rely on attribution of their work for recognition and income, and A.I. models need human-created images to function. But the training data for many popular A.I. image generators was scraped from the web, in ways the creators neither intended nor consented to. In the process, attribution to the original creators was lost.

Artists and other creators deserve to be able to consent to, or refuse, the inclusion of their works in training data, to be credited when their works are used, and to be compensated for that use.

By restoring attribution, it’s possible to proportionally assign credit to human artists for every image generated by A.I. This opens up the possibility of real collaboration, on ethical terms.

By crediting artists and sharing revenue with creators according to attribution, A.I.-driven apps could let artists passively earn income every time a model generates an image. By highlighting the artists who influenced an A.I.-generated image, these apps could also help anyone discover new creators whose work they love, just by typing what they want to see.
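As a purely hypothetical sketch of what that revenue sharing could look like - the function, artists, and numbers below are invented for illustration, not part of Stable Attribution:

```python
# Hypothetical illustration of revenue sharing proportional to attribution.
# The attribution scores and payout amounts here are invented for the example.
def split_revenue(revenue: float, scores: dict[str, float]) -> dict[str, float]:
    """Split revenue among artists in proportion to their attribution scores."""
    total = sum(scores.values())
    return {artist: round(revenue * s / total, 4) for artist, s in scores.items()}

# e.g. an image generation earns $0.10, attributed to three artists:
print(split_revenue(0.10, {"artist_a": 0.5, "artist_b": 0.3, "artist_c": 0.2}))
# {'artist_a': 0.05, 'artist_b': 0.03, 'artist_c': 0.02}
```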

By crediting contributing artists, and by using only explicitly opted-in data, A.I. could act as an amplifier for creativity and expression of all kinds.

How does Stable Attribution work?

When an A.I. model is trained to create images from text, it uses a huge dataset of images and their corresponding captions. The model is trained by showing it the captions and having it try to recreate each associated image as closely as possible.
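To make that concrete, here is a minimal sketch of one training step, assuming a diffusion-style objective like the one Stable Diffusion uses. The tiny networks and random tensors are stand-ins for illustration, not anyone’s actual training code:

```python
# Minimal sketch of one text-to-image training step, assuming a
# diffusion-style objective. The tiny networks and random tensors below
# are toy stand-ins; real models train on billions of image-caption pairs.
import torch
import torch.nn as nn

text_encoder = nn.Embedding(1000, 64)   # stand-in for a real text encoder
denoiser = nn.Sequential(               # stand-in for a U-Net denoiser
    nn.Linear(64 + 64 + 1, 256), nn.ReLU(), nn.Linear(256, 64)
)
optimizer = torch.optim.Adam(
    list(text_encoder.parameters()) + list(denoiser.parameters()), lr=1e-4
)

for step in range(100):
    captions = torch.randint(0, 1000, (8,))  # toy caption: one token id each
    images = torch.randn(8, 64)              # toy "images" as flat vectors
    t = torch.rand(8, 1)                     # random noise level per example
    noise = torch.randn_like(images)
    noisy = (1 - t) * images + t * noise     # corrupt the image with noise

    cond = text_encoder(captions)            # caption embedding as conditioning
    pred = denoiser(torch.cat([noisy, cond, t], dim=1))
    loss = ((pred - noise) ** 2).mean()      # learn to predict the added noise

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```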

The model learns general concepts present in millions of images, like what humans look like, as well as more specific details, like textures, environments, poses and compositions, which are more uniquely identifiable.

Version 1 of Stable Attribution’s algorithm decodes an image generated by an A.I. model into the most similar examples from the data the model was trained on. Usually, the image the model creates doesn’t exist in its training data - it’s new - but because of the training process, the most influential images are the most visually similar ones, especially in the details.

The training data for models like Stable Diffusion is publicly available - by indexing all of it, we can find the images most similar to what the model generates, no matter where they are in the dataset.
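One plausible way to build such an index - an assumption for illustration, not a description of Stable Attribution’s internals - is to embed every training image with an image encoder such as CLIP and run a nearest-neighbor search over the embeddings:

```python
# Sketch of nearest-neighbor attribution over image embeddings.
# Assumes embeddings come from some image encoder (e.g. CLIP); random
# vectors stand in for them here. Real datasets like LAION hold billions
# of rows, which calls for an approximate-nearest-neighbor index rather
# than the brute-force scan below.
import numpy as np

rng = np.random.default_rng(0)

# "Index" of training-set image embeddings, one row per image, L2-normalized.
dataset = rng.normal(size=(100_000, 512)).astype(np.float32)
dataset /= np.linalg.norm(dataset, axis=1, keepdims=True)

def most_similar(query: np.ndarray, k: int = 5) -> list[tuple[int, float]]:
    """Return (row index, cosine similarity) for the k closest training images."""
    q = query / np.linalg.norm(query)
    sims = dataset @ q                     # cosine similarity via dot product
    top = np.argsort(-sims)[:k]
    return [(int(i), float(sims[i])) for i in top]

# Embedding of a freshly generated image (stand-in vector here).
generated = rng.normal(size=512).astype(np.float32)
for idx, score in most_similar(generated):
    print(f"training image #{idx}: similarity {score:.3f}")
```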

Do you claim rights to any of the images? Will you train A.I. on them?

No. We do not claim rights to any of the images, generated or otherwise, nor will we train any models on them. Our aim is solely to provide attribution information.

Who are you and why did you build this?

Stable Attribution was built at Chroma. Chroma is a start-up founded to make A.I. understandable by analyzing how the data that models are trained on influences their behavior. Stable Attribution was built by Jeff Huber and Anton Troynikov, with the help and feedback of many others.

We built Stable Attribution because we saw a way to solve a real problem, for real people, using technology we understand. We believe A.I. - like all technology - should serve people, not alienate them.

What’s next?

Version 1 of the Stable Attribution algorithm isn’t perfect, in part because the training process is noisy and the training data contains a lot of errors and redundancy. But this is not an impossible problem. We are actively researching how to improve it and make attribution better for all kinds of generative models. If you are interested in working with us, please get in touch.

Stable Attribution currently finds source images, but we want to attribute work directly to the artist or creator of each image. If you spot an artist you know among the sources Stable Attribution finds, drop a link to their page so we can credit them! You can submit the link from the source page.