Abstract

Diffusion models have become a mainstream approach to high-resolution image synthesis. However, directly generating higher-resolution images from pretrained diffusion models encounters unreasonable object duplication and exponentially increasing generation time. In this paper, we discover that object duplication arises from feature duplication in the deep blocks of the U-Net. Concurrently, we pinpoint the extended generation time to self-attention redundancy in the U-Net's top blocks. To address these issues, we propose a tuning-free higher-resolution framework named HiDiffusion. Specifically, HiDiffusion contains Resolution-Aware U-Net (RAU-Net), which dynamically adjusts the feature map size to resolve object duplication, and Modified Shifted Window Multi-head Self-Attention (MSW-MSA), which employs optimized window attention to reduce computation. HiDiffusion can be integrated into various pretrained diffusion models to scale image generation resolution up to 4096×4096 at 1.5-6× the inference speed of previous methods. Extensive experiments demonstrate that our approach addresses object duplication and heavy computation issues, achieving state-of-the-art performance on higher-resolution image synthesis tasks.
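As a rough illustration of the MSW-MSA idea (not the authors' code), here is a minimal single-head PyTorch sketch of shifted-window self-attention over a U-Net feature map: tokens attend only within local windows, which drops the cost from quadratic in H·W to quadratic in the window size. The function name and default window size are placeholders for illustration.

```python
import torch
import torch.nn.functional as F

def window_self_attention(x, window=8, shift=(0, 0)):
    """Single-head self-attention restricted to (window x window) tiles.
    x: (B, C, H, W) feature map with H and W divisible by `window`.
    Global attention costs O((H*W)^2); this costs O(H*W * window^2)."""
    B, C, H, W = x.shape
    # shift the feature map so window boundaries move between calls
    x = torch.roll(x, shifts=(-shift[0], -shift[1]), dims=(2, 3))
    # partition into non-overlapping windows: (B * num_windows, window^2, C)
    x = x.view(B, C, H // window, window, W // window, window)
    x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, window * window, C)
    # attention within each window (queries = keys = values = x)
    attn = F.softmax(x @ x.transpose(1, 2) / C ** 0.5, dim=-1)
    x = attn @ x
    # undo the window partition and the shift
    x = x.view(B, H // window, W // window, window, window, C)
    x = x.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)
    return torch.roll(x, shifts=shift, dims=(2, 3))
```

Per the paper, MSW-MSA additionally varies the window shift across denoising timesteps so that information still mixes across window boundaries over the sampling trajectory; the sketch above only shows the core windowing mechanism.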

Paper: https://arxiv.org/abs/2311.17528

Code: https://github.com/megvii-model/HiDiffusion

Colab Demo: https://colab.research.google.com/drive/1EiBn9lSnPZTU4cikRRaBBexs429M-qty?usp=drive_link

Project Page: https://hidiffusion.github.io/

  • calabast@lemm.ee · 8 months ago

    Very cool! I only have experience using automatic1111, so if anyone has any hints on how I could enable this using that tool, I’d love to try it out!
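    [Editor's note: I'm not aware of an automatic1111 extension, but the GitHub README documents a one-line integration for diffusers pipelines. A minimal sketch based on that README, where `apply_hidiffusion` is the entry point the repo documents and the model ID and prompt are placeholders:]

```python
import torch
from diffusers import StableDiffusionXLPipeline
from hidiffusion import apply_hidiffusion

# load a standard SDXL pipeline (trained at 1024x1024)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

apply_hidiffusion(pipe)  # patch the U-Net with RAU-Net + MSW-MSA

# sample above the training resolution
image = pipe(
    "a photo of an astronaut riding a horse",
    height=2048, width=2048,
).images[0]
image.save("astronaut_2048.png")
```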

  • wewbull · 8 months ago

    Does this mean that, because you're now liberated from the dimensions of the training data, all training data will apply to all sizes? E.g. generated portrait images would be influenced by landscape training data.