Part of Advances in Neural Information Processing Systems 37 (NeurIPS 2024) Main Conference Track
Prajwal Singhania, Siddharth Singh, Shwai He, Soheil Feizi, Abhinav Bhatele
Inference on large language models (LLMs) can be expensive in terms of the compute and memory costs involved, especially when long sequence lengths are used. In particular, the self-attention mechanism used in LLM inference contributes significantly to these costs, which has sparked an interest in approximating the self-attention computation to reduce such costs. In this work, we propose to approximate self-attention by focusing on the dimensionality of key vectors computed in the attention block. Our analysis reveals that key vectors lie in a significantly lower-dimensional space, consistently across several datasets and models. Exploiting this observation, we propose Loki, a novel sparse attention method that ranks and selects tokens in the KV-cache based on attention scores computed in low-dimensional space. Our evaluations show that Loki is able to speed up the attention computation due to reduced data movement (load/store) and compute costs, while maintaining the efficacy of the models better than other popular approximation methods.
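A minimal sketch of the idea described in the abstract, assuming a precomputed low-dimensional projection of keys (e.g., obtained offline via PCA) and illustrative tensor shapes; the function name `loki_style_sparse_attention` and the parameters `proj`, `d_low`, and `top_k` are hypothetical, and this is not the authors' exact implementation.

```python
import torch

def loki_style_sparse_attention(q, k_cache, v_cache, proj, top_k):
    """Illustrative sketch: rank KV-cache tokens by attention scores computed
    in a low-dimensional key space, then attend over only the selected tokens.

    q:        (1, d)       current query vector
    k_cache:  (n, d)       cached key vectors
    v_cache:  (n, d)       cached value vectors
    proj:     (d, d_low)   projection to a low-dimensional key space
                           (assumed to be computed offline, e.g., via PCA on keys)
    """
    d = q.shape[-1]

    # 1. Approximate attention scores in the low-dimensional space.
    q_low = q @ proj                                 # (1, d_low)
    k_low = k_cache @ proj                           # (n, d_low); could be cached
    approx_scores = (q_low @ k_low.T) / d ** 0.5     # (1, n)

    # 2. Rank tokens and keep the top-k most relevant ones.
    k_sel = min(top_k, k_cache.shape[0])
    idx = approx_scores.topk(k_sel, dim=-1).indices.squeeze(0)

    # 3. Exact attention restricted to the selected tokens.
    scores = (q @ k_cache[idx].T) / d ** 0.5         # (1, k_sel)
    weights = torch.softmax(scores, dim=-1)
    return weights @ v_cache[idx]                    # (1, d)
```

In a practical setting the projected keys would presumably be stored alongside the KV-cache so that only the query is projected at each decoding step; the scaling constant does not affect the ranking, so it is kept the same in both score computations here for simplicity.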