@online{Vandersanden2410.01540,
TITLE = {Edge-Preserving Noise for Diffusion Models},
AUTHOR = {Vandersanden, Jente and Holl, Sascha and Huang, Xingchang and Singh, Gurprit},
LANGUAGE = {eng},
URL = {https://arxiv.org/abs/2410.01540},
EPRINT = {2410.01540},
EPRINTTYPE = {arXiv},
YEAR = {2024},
MARGINALMARK = {$\bullet$},
ABSTRACT = {Classical generative diffusion models learn an isotropic Gaussian denoising process, treating all spatial regions uniformly, thus neglecting potentially valuable structural information in the data. Inspired by the long-established work on anisotropic diffusion in image processing, we present a novel edge-preserving diffusion model that is a generalization of denoising diffusion probabilistic models (DDPM). In particular, we introduce an edge-aware noise scheduler that varies between edge-preserving and isotropic Gaussian noise. We show that our model's generative process converges faster to results that more closely match the target distribution. We demonstrate its capability to better learn the low-to-mid frequencies within the dataset, which plays a crucial role in representing shapes and structural information. Our edge-preserving diffusion process consistently outperforms state-of-the-art baselines in unconditional image generation. It is also more robust for generative tasks guided by a shape-based prior, such as stroke-to-image generation. We present qualitative and quantitative results showing consistent improvements (FID score) of up to 30% for both tasks. We provide source code and supplementary content via the public domain edge-preserving-diffusion.mpi-inf.mpg.de.},
}