
Olivier Grisel

Interesting developments in subquadratic alternatives to self-attention-based transformers for long-sequence modeling (32k tokens and more).

Hyena Hierarchy: Towards Larger Convolutional Language Models

arxiv.org/abs/2302.10866

They propose to replace the quadratic self-attention layers with an operator built from implicitly parametrized, long-kernel 1D convolutions interleaved with data-controlled gating.
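Schematically, an order-2 Hyena-like block can be sketched as follows (a minimal PyTorch sketch with my own simplifications, not the authors' implementation; the real operator also applies short convolutions to the projections and modulates its filters):

```python
import torch
import torch.nn as nn


def fft_causal_conv(u, k):
    """Causal conv of u (batch, seq_len, d) with a per-channel kernel k (seq_len, d)."""
    L = u.shape[1]
    n = 2 * L  # zero-pad so the circular FFT convolution becomes a linear one
    u_f = torch.fft.rfft(u, n=n, dim=1)
    k_f = torch.fft.rfft(k, n=n, dim=0)
    return torch.fft.irfft(u_f * k_f, n=n, dim=1)[:, :L, :]


class HyenaLikeOperator(nn.Module):
    """Order-2 sketch: gated long convolutions in place of self-attention."""

    def __init__(self, d_model, hidden=32):
        super().__init__()
        self.in_proj = nn.Linear(d_model, 3 * d_model)   # value + two gates
        self.out_proj = nn.Linear(d_model, d_model)
        # Implicit filter: a small MLP maps a position t in [0, 1] to the
        # kernel values at that position, so the kernel can be as long as
        # the input while the parameter count stays fixed.
        self.filter_mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.GELU(), nn.Linear(hidden, d_model)
        )

    def forward(self, u):                       # u: (batch, seq_len, d_model)
        v, g1, g2 = self.in_proj(u).chunk(3, dim=-1)
        t = torch.linspace(0, 1, u.shape[1], device=u.device).unsqueeze(-1)
        k = self.filter_mlp(t)                  # (seq_len, d_model) long kernel
        z = g1 * fft_causal_conv(v, k)          # data-controlled gating * long conv
        z = g2 * fft_causal_conv(z, k)
        return self.out_proj(z)


y = HyenaLikeOperator(64)(torch.randn(2, 1024, 64))     # (2, 1024, 64)
```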

1/4

arXiv.org · Hyena Hierarchy: Towards Larger Convolutional Language Models

Recent advances in deep learning have relied heavily on the use of large Transformers due to their ability to learn at scale. However, the core building block of Transformers, the attention operator, exhibits quadratic cost in sequence length, limiting the amount of context accessible. Existing subquadratic methods based on low-rank and sparse approximations need to be combined with dense attention layers to match Transformers, indicating a gap in capability. In this work, we propose Hyena, a subquadratic drop-in replacement for attention constructed by interleaving implicitly parametrized long convolutions and data-controlled gating. In recall and reasoning tasks on sequences of thousands to hundreds of thousands of tokens, Hyena improves accuracy by more than 50 points over operators relying on state-spaces and other implicit and explicit methods, matching attention-based models. We set a new state-of-the-art for dense-attention-free architectures on language modeling in standard datasets (WikiText103 and The Pile), reaching Transformer quality with a 20% reduction in training compute required at sequence length 2K. Hyena operators are twice as fast as highly optimized attention at sequence length 8K, and 100x faster at sequence length 64K.

Hyena-based models show good mechanistic generalization on traditionally challenging synthetic "reasoning" tasks.

The implicit parametrization of the convolution kernel makes it possible to use a kernel as long as the input sequence while keeping the number of trainable parameters small.
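To make the parameter-count argument concrete (illustrative numbers for a hypothetical 768-channel model with a small two-layer filter MLP, not figures from the paper):

```python
# Explicit per-channel kernels need seq_len * d_model weights, so they grow
# with the sequence length; an implicit MLP filter (position -> kernel value)
# has a fixed size no matter how long the kernel is.
d_model, hidden = 768, 64

implicit_params = (1 * hidden + hidden) + (hidden * d_model + d_model)  # ~50k

for seq_len in (2_048, 32_768, 1_048_576):
    explicit_params = seq_len * d_model
    print(f"{seq_len:>9} taps: explicit {explicit_params:>11,} vs implicit {implicit_params:,}")
# 2,048 taps -> 1,572,864 explicit weights; 1,048,576 taps -> 805,306,368.
# The implicit filter stays at 50,048 parameters in both cases.
```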

Furthermore, computing those convolutions with the FFT (FFTConv) reduces the number of floating-point operations (FLOPs) from quadratic to O(N log N) in the sequence length.
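Concretely (a tiny numpy sketch, not the Hyena code): zero-padded FFTs turn a length-N causal convolution with a length-N kernel from O(N^2) multiply-adds into O(N log N):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096
x, k = rng.standard_normal(N), rng.standard_normal(N)

# Direct causal convolution: y[t] = sum_{s <= t} k[s] * x[t - s]  -> O(N^2)
y_direct = np.convolve(x, k)[:N]

# FFT convolution: pad to 2N to avoid circular wrap-around  -> O(N log N)
n = 2 * N
y_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)[:N]

print(np.allclose(y_direct, y_fft))   # True (up to floating-point error)
```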

2/4

Unfortunately, the reduced FLOPs of Hyena layers do not necessarily translate into competitive wall-clock performance, because long-kernel FFT convolutions typically struggle to use hardware accelerators (GPUs, TPUs) efficiently.

In particular, FlashAttention-2 transformers can stay competitive for relatively long input sequences thanks to their highly optimized fused kernels.
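For reference, this is the kind of fused kernel frameworks expose today (a minimal PyTorch 2.x sketch assuming a CUDA GPU; scaled_dot_product_attention can dispatch to a FlashAttention backend when one is available):

```python
import torch
import torch.nn.functional as F

# (batch, heads, seq_len, head_dim) in half precision on a GPU
q = torch.randn(1, 8, 8192, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# Exact causal attention; PyTorch picks a fused (FlashAttention-style) kernel
# when the shapes, dtype and hardware allow it.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```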

3/4

However, the following paper:

FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores

arxiv.org/abs/2311.05908

github.com/HazyResearch/flash-

shows that it is possible to implement FFTConv efficiently on GPUs, thereby making the Hyena architecture more competitive in wall-clock time.
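The key idea is to rewrite the FFT as matrix multiplications so that the GPU's matrix-multiply units (tensor cores) can be used, and to fuse the steps to reduce I/O. A minimal numpy sketch of the underlying decomposition (the classic Cooley-Tukey factorization, not the paper's fused kernel):

```python
import numpy as np


def dft_matrix(m):
    # m x m DFT matrix: F[j, n] = exp(-2i*pi*j*n / m)
    idx = np.arange(m)
    return np.exp(-2j * np.pi * np.outer(idx, idx) / m)


def fft_as_matmuls(x, n1, n2):
    """Length n1*n2 DFT as two matmuls plus a pointwise twiddle multiply."""
    n = n1 * n2
    a = x.reshape(n1, n2)                      # a[i, j] = x[n2 * i + j]
    b = dft_matrix(n1) @ a                     # length-n1 DFTs down the columns
    twiddle = np.exp(-2j * np.pi * np.outer(np.arange(n1), np.arange(n2)) / n)
    c = (b * twiddle) @ dft_matrix(n2)         # length-n2 DFTs along the rows
    return c.T.reshape(n)                      # output frequency k = k1 + n1 * k2


x = np.random.default_rng(0).standard_normal(1024)
print(np.allclose(fft_as_matmuls(x, 32, 32), np.fft.fft(x)))   # True
```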

This might be a game changer for tackling long-sequence "reasoning" and recall tasks for LLMs, DNA sequence analysis, and so on.

4/4

arXiv.org · FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores

Convolution models with long filters have demonstrated state-of-the-art reasoning abilities in many long-sequence tasks but lag behind the most optimized Transformers in wall-clock time. A major bottleneck is the Fast Fourier Transform (FFT), which allows long convolutions to run in O(N log N) time in sequence length N but has poor hardware utilization. In this paper, we study how to optimize the FFT convolution. We find two key bottlenecks: the FFT does not effectively use specialized matrix multiply units, and it incurs expensive I/O between layers of the memory hierarchy. In response, we propose FlashFFTConv. FlashFFTConv uses a matrix decomposition that computes the FFT using matrix multiply units and enables kernel fusion for long sequences, reducing I/O. We also present two sparse convolution algorithms, 1) partial convolutions and 2) frequency-sparse convolutions, which can be implemented simply by skipping blocks in the matrix decomposition, enabling further opportunities for memory and compute savings. FlashFFTConv speeds up exact FFT convolutions by up to 7.93x over PyTorch and achieves up to 4.4x speedup end-to-end. Given the same compute budget, FlashFFTConv allows Hyena-GPT-s to achieve 2.3 points better perplexity on the PILE and M2-BERT-base to achieve 3.3 points higher GLUE score, matching models with twice the parameter count. FlashFFTConv also achieves 96.1% accuracy on Path-512, a high-resolution vision task where no model had previously achieved better than 50%. Furthermore, partial convolutions enable longer-sequence models, yielding the first DNA model that can process the longest human genes (2.3M base pairs), and frequency-sparse convolutions speed up pretrained models while maintaining or improving model quality.