Johannes Gasteiger
<p>You can sample nodes for scalable <a href="https://sigmoid.social/tags/GNN" class="mention hashtag" rel="tag">#<span>GNN</span></a> <a href="https://sigmoid.social/tags/training" class="mention hashtag" rel="tag">#<span>training</span></a>. But how do you do <a href="https://sigmoid.social/tags/scalable" class="mention hashtag" rel="tag">#<span>scalable</span></a> <a href="https://sigmoid.social/tags/inference" class="mention hashtag" rel="tag">#<span>inference</span></a>?</p><p>In our latest paper (Oral at the <a href="https://sigmoid.social/tags/LogConference" class="mention hashtag" rel="tag">#<span>LogConference</span></a>) we introduce influence-based mini-batching (<a href="https://sigmoid.social/tags/IBMB" class="mention hashtag" rel="tag">#<span>IBMB</span></a>) for both fast inference and training, achieving up to 130x and 17x speedups, respectively!</p><p>1/8 in 🧵</p>