

This MicroAdam paper from #NeurIPS2024 is nicely written! The algorithm is walked through in plain language first, with all the equations and proofs placed in the appendix. Super understandable; kudos to the authors.
arxiv.org/abs/2405.15593
#AI #MachineLearning #LLMs #optimizers

arXiv.org
MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence

We propose a new variant of the Adam optimizer called MicroAdam that specifically minimizes memory overheads, while maintaining theoretical convergence guarantees. We achieve this by compressing the gradient information before it is fed into the optimizer state, thereby reducing its memory footprint significantly. We control the resulting compression error via a novel instance of the classical *error feedback* mechanism from distributed optimization in which *the error correction information is itself compressed* to allow for practical memory gains. We prove that the resulting approach maintains theoretical convergence guarantees competitive to those of AMSGrad, while providing good practical performance. Specifically, we show that MicroAdam can be implemented efficiently on GPUs: on both million-scale (BERT) and billion-scale (LLaMA) models, MicroAdam provides practical convergence competitive to that of the uncompressed Adam baseline, with lower memory usage and similar running time. Our code is available at https://github.com/IST-DASLab/MicroAdam.
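The core idea from the abstract — compress the gradient before it enters the optimizer state, track the compression error via error feedback, and compress that error buffer too — can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' actual method: the top-k sparsifier, the uniform quantizer, and the function names (`topk_compress`, `quantize`, `ef_step`) are all illustrative assumptions standing in for whatever compressors MicroAdam actually uses.

```python
import numpy as np

def topk_compress(x, k):
    # Keep only the k largest-magnitude entries (a generic sparsifier,
    # assumed here; not necessarily the compressor used in the paper).
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def quantize(x, levels=16):
    # Crude uniform quantization standing in for "the error correction
    # information is itself compressed" (illustrative assumption).
    scale = np.max(np.abs(x))
    if scale == 0:
        return x
    return np.round(x / scale * (levels - 1)) / (levels - 1) * scale

def ef_step(grad, error, k=2, levels=16):
    # Error feedback: fold the carried-over compression error into the
    # gradient, compress the result, then store a *compressed* residual.
    acc = grad + error
    compressed = topk_compress(acc, k)
    new_error = quantize(acc - compressed, levels)
    return compressed, new_error

# Toy usage: only the compressed gradient would feed the optimizer state.
g = np.array([0.5, -0.1, 0.02, 0.3, -0.4])
err = np.zeros_like(g)
g_compressed, err = ef_step(g, err)
```

The point of the sketch is the memory argument: the optimizer state only ever sees the sparse `g_compressed`, and the error buffer `err` is itself quantized, so neither side needs a dense full-precision copy.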