~$ ./jackson.sh

The Problems With UChicago’s Glaze

Published Friday, March 17, 2023

"Anti-Plagiarism" Tool Caught Plagiarizing (and It Doesn’t Even Work)

Yesterday, the SAND Lab at UChicago made Glaze available to download. It's a tool meant to help artists protect their work from being used to train AI models. It got a bit of buzz last month, including a New York Times spot. However, it has some issues:

  1. The authors plagiarized code from DiffusionBee, an AI art tool licensed under GPL.
  2. The paper contains inflammatory language with no legal backing.
  3. It doesn't work, and I was able to execute a proof-of-concept bypass in minutes!

Each of the sections below will go into further detail on these points.

1. Code Theft and GPL Violations #

Update as of March 20th, 2023: The SAND Lab has rewritten the UI from scratch to fix their GPL violations. Congratulations and credit to them.

Within hours of Glaze being released, an enterprising Reddit user looked inside the Electron app and found substantial evidence of plagiarism. Entire sections of GPL-licensed code were lifted verbatim, without even fixing spelling mistakes. One of the Glaze authors admitted to it shortly after:

Tweet from author admitting to stealing code This "careless mistake" is an academic integrity violation

In response to this, the SAND Lab has released a download of the "glaze-fontend" [sic]:

Screenshot of file named glaze-fontend.zip

This is not enough, and the SAND Lab is still not complying with the GPL. Glaze is packaged and distributed as a single program, so the GPL's copyleft terms extend to the entire product. They must either release the complete source code of Glaze or rewrite the frontend without the stolen code. They state they have chosen the latter option, but every day the current version of Glaze remains available, DiffusionBee's license continues to be infringed.

2. Paper Is Needlessly Inflammatory #

At numerous points throughout their paper, SAND Lab states that using AI to mimic the style of an artist is plagiarism or theft:

Screenshot from glaze paper claiming that style mimicry is plagiarism. Not plagiarism.
Screenshot from glaze paper claiming that style mimicry is plagiarism. Still not plagiarism. Obviously transformative works.
Screenshot from glaze paper accusing reddit user of stealing Are you kidding me?

This is flatly untrue. No court has ever ruled that a style can be owned or that style mimicry is plagiarism, and you can't copyright an art style. Additionally, plagiarism requires passing someone else's work off as your own, which the Reddit users referenced in the paper are not doing.

In academia, accusations of plagiarism are serious. The SAND Lab is defaming specific individuals as well as the entire AI art community. Hypocritically, the SAND Lab is violating the GPL and was plagiarizing code with no citation (until they were called out), an act which DOES have significant legal precedent as theft of intellectual property.

Glaze exists to solve a problem that the legal system rightfully refuses to. Style cannot be owned.

3. Glaze Destroys Image Quality and Doesn't Offer Protection #

I tried Glaze out myself on some art by Timo Pihlajamaki and Sam Delfanti. I ran Glaze with the following settings to get strong protection:

Glaze app settings It only took about 15 minutes but I'm on a desktop computer

Let's look at a specific portion of the image (a dragon) before and after glazing:

Picture of dragon before glazing Dragon before glazing
Picture of dragon after glazing Dragon after glazing

Wow! That's a big difference. The artifacts remind me of oracle bone script. I have no doubt that these markings impair training on these images. But can we remove them?

After some experimentation, I noticed that these artifacts do not impair our ability to run Canny edge detection:

Canny edge detection result on the dragon Result of running Canny edge detection on the glazed image
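
For anyone who wants to reproduce this step, here is a minimal OpenCV sketch of how the edge map above can be produced. The filenames and Canny thresholds are my own placeholders, not anything provided by Glaze or its authors:

```python
import cv2

# Load the glazed image and convert to grayscale (Canny operates on single-channel input)
glazed = cv2.imread("dragon_glazed.png")         # placeholder filename
gray = cv2.cvtColor(glazed, cv2.COLOR_BGR2GRAY)

# Standard Canny edge detection; the Glaze artifacts do not meaningfully disturb the edges.
# Thresholds of 100/200 are a common default and an assumption here; tune per image.
edges = cv2.Canny(gray, threshold1=100, threshold2=200)

# Save the edge map so it can be used as the ControlNet conditioning image below
cv2.imwrite("dragon_canny.png", edges)
```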

We can feed this edge map into ControlNet img2img with the negative prompt "((watermark)), jpeg artifacts, artifacts, blurry, aliasing" and a denoising strength of 0.1. Optionally, you can interrogate CLIP to get a positive prompt or use "Guess Mode" in ControlNet. We get this result:

Dragon with glaze removed De-Glazed dragon. A little detail loss but the style was definitely retained.
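
For reference, here is a rough sketch of the same de-glazing step scripted with the Hugging Face diffusers library instead of the Auto1111 UI. This is my own reconstruction under assumptions: the base model, ControlNet weights, filenames, and positive prompt are placeholders, and your exact settings may differ:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

# Canny-conditioned ControlNet plus a Stable Diffusion 1.5 base (placeholder choices)
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

glazed = load_image("dragon_glazed.png")   # the glazed image (placeholder filename)
edges = load_image("dragon_canny.png")     # Canny edge map from the previous step

result = pipe(
    prompt="a dragon, digital painting",   # placeholder; use a CLIP-interrogated prompt
    negative_prompt="((watermark)), jpeg artifacts, artifacts, blurry, aliasing",
    image=glazed,                          # img2img init image
    control_image=edges,                   # ControlNet conditioning image
    strength=0.1,                          # low denoising strength preserves detail
    num_inference_steps=30,
    # guess_mode=True approximates ControlNet's "Guess Mode" if you skip the prompt
).images[0]

result.save("dragon_deglazed.png")
```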

We can still see some artifacts, but they're different ones: every pixel has been repainted by Stable Diffusion. You can now train a style LoRA from a set of images de-glazed using this process. This Glaze bypass likely works even better at lower Glaze intensity.

Additionally, Spawning has reported some success in defeating Glaze as well.

In the authors' defense, they predicted their technique being defeated. The Glaze website reads:

Unfortunately, Glaze is not a permanent solution against AI mimicry. AI evolves quickly, and systems like Glaze face an inherent challenge of being future-proof (Radiya et al). Techniques we use to cloak artworks today might be overcome by a future countermeasure, possibly rendering previously protected art vulnerable. It is important to note that Glaze is not panacea, but a necessary first step towards artist-centric protection tools to resist AI mimicry. We hope that Glaze and followup projects will provide some protection to artists while longer term (legal, regulatory) efforts take hold.

Due to the breakneck speed the AI art community moves at, I expect Glaze to be circumvented by a fully automated solution by the end of this coming weekend. Additionally, I do not expect any courts anywhere in the world to legally protect creatives (authors, artists) against their works being used to train AI; it would trample decades of transformative fair use precedent.

Update as of Monday morning, March 20th, 2023: Lvmin Zhang has released a simple adversarial noise removal tool, and it's been integrated into the Auto1111 UI.

Suggestions for the Glaze Authors #

Your work would be made significantly less problematic in two easy steps:

  1. Completely release all source code for Glaze. The currently released version violates the GPL. Additionally, the use of PyArmor obfuscation in an academic product with an accompanying paper is suspect. If the paper plainly explains the method you implemented, why is your code obfuscated?
  2. Edit the paper to remove all claims that using AI to mimic an artist's style constitutes theft or plagiarism. These are potentially libelous claims and do not belong in an academic paper.