Abstract

This paper unveils Dimba, a new text-to-image diffusion model that employs a distinctive hybrid architecture combining Transformer and Mamba elements. Specifically, Dimba sequentially stacks blocks that alternate between Transformer and Mamba layers and integrates conditional information through cross-attention layers, thus capitalizing on the advantages of both architectural paradigms. We investigate several optimization strategies, including quality tuning and resolution adaptation, and identify critical configurations necessary for large-scale image generation. The model's flexible design supports scenarios catering to specific resource constraints and objectives. When scaled appropriately, Dimba offers substantially higher throughput and a reduced memory footprint relative to conventional pure Transformer-based benchmarks. Extensive experiments indicate that Dimba achieves performance comparable to these benchmarks in terms of image quality, artistic rendering, and semantic control. We also report several intriguing properties of the architecture discovered during evaluation and release the checkpoints from our experiments. Our findings emphasize the promise of large-scale hybrid Transformer-Mamba architectures in the foundational stage of diffusion models, suggesting a bright future for text-to-image generation.
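
To make the alternating design concrete, below is a minimal PyTorch sketch of a hybrid backbone in which stacked blocks switch between a self-attention mixer and a Mamba mixer, with a cross-attention layer injecting text conditioning into every block. This is an illustrative sketch rather than the released Dimba implementation: the class names, hyperparameters, and the reliance on the third-party mamba_ssm package are assumptions made for the example.

import torch
import torch.nn as nn
from mamba_ssm import Mamba  # pip install mamba-ssm; kernels require a CUDA GPU


class SelfAttention(nn.Module):
    """Thin wrapper so the attention and Mamba mixers share one call signature."""
    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(x, x, x)
        return out


class HybridBlock(nn.Module):
    """One block: sequence mixer (attention or Mamba) + cross-attention + MLP."""
    def __init__(self, dim: int, n_heads: int, use_mamba: bool):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        # Alternate the token mixer: Mamba (state-space model) or self-attention.
        self.mixer = Mamba(d_model=dim) if use_mamba else SelfAttention(dim, n_heads)
        self.norm2 = nn.LayerNorm(dim)
        # Cross-attention injects the text-encoder tokens as conditioning.
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x, text_ctx):
        x = x + self.mixer(self.norm1(x))
        ctx, _ = self.cross_attn(self.norm2(x), text_ctx, text_ctx)
        x = x + ctx
        return x + self.mlp(self.norm3(x))


class HybridBackbone(nn.Module):
    """Sequential stack of blocks alternating Transformer and Mamba mixers."""
    def __init__(self, depth: int = 8, dim: int = 512, n_heads: int = 8):
        super().__init__()
        self.blocks = nn.ModuleList(
            HybridBlock(dim, n_heads, use_mamba=(i % 2 == 1)) for i in range(depth)
        )

    def forward(self, latent_tokens, text_ctx):
        # latent_tokens: (batch, num_image_tokens, dim) noisy-latent patch tokens
        # text_ctx:      (batch, num_text_tokens, dim) text-encoder embeddings
        for blk in self.blocks:
            latent_tokens = blk(latent_tokens, text_ctx)
        return latent_tokens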

More Samples

Qualitative comparison of Dimba with four other open-source text-to-image models. Baselines include Playground v2.5, PixArt, SDXL, and SDXL Turbo. Images generated by Dimba are highly competitive with these baselines, showing richer detail and better aesthetics.


User Study and AI Preference

The ratios indicate the percentage of participants who preferred Dimba over the corresponding baseline. Dimba demonstrates superior performance in both image quality and prompt following.


Quality Tuning

Comparison of images generated by the pre-trained and quality-tuned Dimba models. Quality tuning significantly improves the detail and aesthetics of synthesized images.


BibTeX

@misc{fei2024dimba,
    title={Dimba: Transformer-Mamba Diffusion Models}, 
    author={Zhengcong Fei and Mingyuan Fan and Changqian Yu and Debang Li and Youqiang Zhang and Junshi Huang},
    year={2024},
    eprint={2406.01159},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}