Caching methods

Caching methods speed up diffusion transformers by storing and reusing the intermediate outputs of specific layers, such as attention and feedforward layers, instead of recomputing them at every inference step.
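As a sketch of the typical workflow, each caching method pairs a config class with [`CacheMixin.enable_cache`], which applies it to a pipeline's transformer. The snippet below uses `PyramidAttentionBroadcastConfig` as an example; the specific model checkpoint and parameter values are illustrative, not recommendations.

```python
import torch
from diffusers import CogVideoXPipeline, PyramidAttentionBroadcastConfig

pipeline = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)

# Skip recomputing spatial attention every 2 steps within the given timestep range;
# the callback lets the hook read the pipeline's current denoising timestep.
config = PyramidAttentionBroadcastConfig(
    spatial_attention_block_skip_range=2,
    spatial_attention_timestep_skip_range=(100, 800),
    current_timestep_callback=lambda: pipeline.current_timestep,
)
pipeline.transformer.enable_cache(config)
```

Calling `disable_cache()` on the transformer removes the caching hooks and restores standard inference.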

CacheMixin

[[autodoc]] CacheMixin

PyramidAttentionBroadcastConfig

[[autodoc]] PyramidAttentionBroadcastConfig

[[autodoc]] apply_pyramid_attention_broadcast

FasterCacheConfig

[[autodoc]] FasterCacheConfig

[[autodoc]] apply_faster_cache

FirstBlockCacheConfig

[[autodoc]] FirstBlockCacheConfig

[[autodoc]] apply_first_block_cache

TaylorSeerCacheConfig

[[autodoc]] TaylorSeerCacheConfig

[[autodoc]] apply_taylorseer_cache