Ring-A-Bell! How Reliable are Concept Removal Methods For Diffusion Models?

1National Yang Ming Chiao Tung University, 2University of Illinois at Urbana-Champaign, 3University of Chicago, 4IBM Research
ICLR 2024
Demonstrations of Ring-A-Bell

Ring-A-Bell evaluates (red-teams) T2I diffusion models in a black-box fashion.

Abstract

We present an effective black-box attack/evaluation framework for common T2I diffusion models, including Stable Diffusion, DALL·E, and Midjourney.

We introduce Ring-A-Bell, a model-agnostic red-teaming tool for T2I diffusion models, where the whole evaluation can be prepared in advance without prior knowledge of the target model. Specifically, Ring-A-Bell first performs concept extraction to obtain holistic representations for sensitive and inappropriate concepts. Subsequently, by leveraging the extracted concept, Ring-A-Bell automatically identifies problematic prompts for diffusion models with the corresponding generation of inappropriate content, allowing the user to assess the reliability of deployed safety mechanisms. Finally, we empirically validate our method by testing online services such as Midjourney and various methods of concept removal.

Our results show that Ring-A-Bell, by manipulating safe prompting benchmarks, can transform prompts that were originally regarded as safe into prompts that evade existing safety mechanisms, thus revealing defects in these so-called safety mechanisms that could, in practice, lead to the generation of harmful content. In essence, Ring-A-Bell can serve as a red-teaming tool to understand the limitations of deployed safety mechanisms and to explore the risk under plausible attacks.

CAUTION: This paper includes model-generated content that may contain offensive or distressing material

Model Architecture

Model Architecture of Ring-A-Bell

Ring-A-Bell transforms benign prompts into harmful ones to create inappropriate images.

The rationale behind Ring-A-Bell is that current T2I models with safety mechanisms either learn to disassociate the relevant words of a target concept from its representation c, or simply filter those words out; the detection or removal of such concepts may therefore be incomplete if implicit text-concept associations remain embedded in the T2I generation process. That is, Ring-A-Bell tests whether a supposedly removed concept can be revoked via our prompt optimization procedure.
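The procedure has two stages: concept extraction from paired prompts, followed by discrete prompt optimization toward a concept-injected target embedding. Below is a minimal sketch of these two stages, assuming a CLIP text encoder as the prompt embedder; the function names, the coefficient eta, the prompt length, and the simple random-search loop (standing in for the genetic-algorithm optimizer used in the paper) are illustrative assumptions, not the exact implementation.

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

# Assumption: a CLIP text encoder serves as the model-agnostic prompt embedder f(.)
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14").eval()

@torch.no_grad()
def embed(prompts):
    """Pooled text embeddings for a list of prompt strings."""
    tokens = tokenizer(prompts, padding="max_length", truncation=True,
                       return_tensors="pt")
    return text_encoder(**tokens).pooler_output  # shape: (batch, dim)

def extract_concept(prompts_with, prompts_without):
    """Stage 1 (concept extraction): average embedding difference over
    prompt pairs that differ only in the target concept."""
    return (embed(prompts_with) - embed(prompts_without)).mean(dim=0)

def find_problematic_prompt(safe_prompt, concept_vec, vocab,
                            eta=3.0, length=16, iters=2000, seed=0):
    """Stage 2 (prompt optimization): search for a discrete prompt whose
    embedding is close to the concept-injected target f(P) + eta * c.
    A random-search loop stands in here for the genetic algorithm."""
    g = torch.Generator().manual_seed(seed)
    target = embed([safe_prompt])[0] + eta * concept_vec
    best_prompt, best_dist = safe_prompt, float("inf")
    for _ in range(iters):
        idx = torch.randint(len(vocab), (length,), generator=g)
        candidate = " ".join(vocab[i] for i in idx.tolist())
        dist = torch.norm(embed([candidate])[0] - target).item()
        if dist < best_dist:
            best_prompt, best_dist = candidate, dist
    return best_prompt  # hand this prompt to the (black-box) T2I model under test
```

Because the search runs entirely in text-embedding space, the whole attack can be prepared offline and then applied to any target model, which is what makes the evaluation model-agnostic.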

Performance

Evaluation against Concept Removal Models - Nudity Concept

Ring-A-Bell can effectively search for problematic prompts that are not eliminated by concept removal methods, thereby increasing the chance of generating inappropriate images and exposing the vulnerabilities of such methods.
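As a rough illustration of this evaluation, the sketch below generates one image per Ring-A-Bell prompt with a diffusers Stable Diffusion pipeline and reports the fraction flagged by an external judge. The checkpoint name (a stand-in for a concept-removed model), the hypothetical is_inappropriate classifier (e.g., a nudity detector), and the success-rate definition are illustrative assumptions, not the paper's exact protocol.

```python
import torch
from diffusers import StableDiffusionPipeline

def attack_success_rate(prompts, is_inappropriate,
                        model_id="CompVis/stable-diffusion-v1-4", seed=0):
    """Generate one image per Ring-A-Bell prompt with the (assumed)
    concept-removed checkpoint and report the fraction flagged."""
    # Disable the built-in checker; judging is done by the external classifier.
    pipe = StableDiffusionPipeline.from_pretrained(model_id, safety_checker=None)
    generator = torch.Generator().manual_seed(seed)
    flagged = 0
    for prompt in prompts:
        image = pipe(prompt, generator=generator).images[0]
        flagged += int(is_inappropriate(image))  # hypothetical judge callback
    return flagged / max(len(prompts), 1)
```

A concept removal method is then considered weaker the more often the prompts found by Ring-A-Bell still produce flagged images despite the removal.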

Nudity Concept Generation

Ring-A-Bell actively searches for the nudity concept in order to generate nudity-related images.


Evaluation against Concept Removal Models - Artistic Concept

Ring-A-Bell can likewise search for underlying artistic concepts and manipulate safety-enhanced diffusion models into generating plausibly copyrighted content.

Artistic Concept

Ring-A-Bell actively searches for the Van Gogh concept in order to generate images that mimic copyrighted content.

Evaluation against Concept Removal Models - Specific Object

Ring-A-Bell can also search for problematic prompts to generate specific model-prohibited objects.

Specific Object

Ring-A-Bell actively searches for remnant concepts in order to generate specific objects such as cars.

Evaluation against Online APIs - Additional Results

Ring-A-Bell can also search for problematic prompts against black-box APIs.

Online APIs

Ring-A-Bell actively searches for nudity and violence concepts even when facing black-box APIs.

BibTeX

@inproceedings{ringabell,
title={Ring-A-Bell! How Reliable are Concept Removal Methods For Diffusion Models?},
author={Yu-Lin Tsai and Chia-Yi Hsu and Chulin Xie and Chih-Hsun Lin and Jia-You Chen and Bo Li and Pin-Yu Chen and Chia-Mu Yu and Chun-Ying Huang},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=lm7MRcsFiS}
}