SPG: Style-Prompting Guidance for Style-Specific Content Creation

Visual Computing Research Center, CSSE, Shenzhen University
Pacific Graphics 2025 (Journal Track)

*Corresponding Author

TL;DR: A training-free method for style-specific text-to-image creation.


Abstract

Although recent text-to-image (T2I) diffusion models excel at aligning generated images with textual prompts, controlling the visual style of the output remains a challenging task. In this work, we propose Style-Prompting Guidance (SPG), a novel sampling strategy for style-specific image generation. SPG constructs a style noise vector and leverages its directional deviation from unconditional noise to guide the diffusion process toward the target style distribution. By integrating SPG with Classifier-Free Guidance (CFG), our method achieves both semantic fidelity and style consistency. SPG is simple, robust, and compatible with controllable frameworks such as ControlNet and IP-Adapter, making it practical and widely applicable. Extensive experiments demonstrate the effectiveness and generality of our approach compared to state-of-the-art methods.
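The guidance combination described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the additive form, and the guidance weights `w_cfg` and `w_spg` are assumptions; the paper's exact weighting scheme may differ. The idea is that the CFG direction (text-conditional minus unconditional noise) preserves prompt semantics, while the SPG direction (style-conditional minus unconditional noise) pulls sampling toward the target style distribution.

```python
import numpy as np

def spg_cfg_guidance(eps_uncond, eps_text, eps_style, w_cfg=7.5, w_spg=3.0):
    """Combine CFG and SPG directions into one guided noise prediction.

    Hypothetical sketch (weights and additive form are assumptions, not
    the paper's exact formulation):
      - (eps_text  - eps_uncond): classifier-free guidance direction,
        steers the sample toward the text prompt.
      - (eps_style - eps_uncond): style-prompting guidance direction,
        steers the sample toward the target style distribution.
    """
    cfg_dir = eps_text - eps_uncond    # semantic (prompt) direction
    spg_dir = eps_style - eps_uncond   # style direction
    return eps_uncond + w_cfg * cfg_dir + w_spg * spg_dir

# Toy example with placeholder noise predictions (real inputs would be
# U-Net outputs at one denoising step):
eps_u = np.zeros(4)
eps_t = np.ones(4)
eps_s = np.full(4, 2.0)
guided = spg_cfg_guidance(eps_u, eps_t, eps_s, w_cfg=2.0, w_spg=0.5)
```

At each denoising step the sampler would replace the plain CFG prediction with this combined one; setting `w_spg=0` recovers standard CFG.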

Method

Gallery of SPG Results




Qualitative Comparisons




Quantitative Comparisons

Applications & Variations

BibTeX

@misc{liang2025spg,
  title={SPG: Style-Prompting Guidance for Style-Specific Content Creation}, 
  author={Qian Liang and Zichong Chen and Yang Zhou and Hui Huang},
  year={2025},
  eprint={2508.11476},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={http://arxiv.org/abs/2508.11476}, 
}