sigmoid.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A social space for people researching, working with, or just interested in AI!

Server stats: 597 active users

#stochastically

JMLR: 'The ODE Method for Stochastic Approximation and Reinforcement Learning with Markovian Noise', by Shuze Daniel Liu, Shuhang Chen, Shangtong Zhang.
http://jmlr.org/papers/v26/24-0100.html
#stochastic #stochastically #martingale
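
For readers unfamiliar with the setting, a minimal stochastic-approximation sketch (illustrative only, not from the paper): the iterate x_{n+1} = x_n + a_n (h(x_n) + noise) tracks the ODE dx/dt = h(x). The drift h, the step sizes, and the i.i.d. Gaussian noise (in place of the Markovian noise the paper actually studies) are all assumptions made up for this toy.

```python
import numpy as np

# Toy Robbins-Monro iteration: x_{n+1} = x_n + a_n * (h(x_n) + noise).
# With h(x) = -(x - 2), the limiting ODE dx/dt = h(x) has equilibrium x* = 2,
# so the iterate should drift toward 2. The paper treats Markovian noise;
# this sketch substitutes i.i.d. Gaussian noise for simplicity.

rng = np.random.default_rng(0)

def h(x):
    return -(x - 2.0)          # drift field of the limiting ODE

x = 0.0
for n in range(1, 10_001):
    a_n = 1.0 / n              # Robbins-Monro steps: sum a_n = inf, sum a_n^2 < inf
    noise = rng.normal()       # stand-in for the noise term
    x = x + a_n * (h(x) + noise)

print(f"final iterate: {x:.3f} (ODE equilibrium is 2.0)")
```
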
JMLR: 'Optimizing Noise for f-Differential Privacy via Anti-Concentration and Stochastic Dominance', by Jordan Awan, Aishwarya Ramasethu.
http://jmlr.org/papers/v25/23-1624.html
#privacy #stochastically #stochastic
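
As background for the additive-noise setting, here is a sketch of the standard Gaussian mechanism under Gaussian DP, not the optimized noise the paper constructs. The function name, sensitivity, and mu value are made-up illustrative choices.

```python
import numpy as np

# Baseline Gaussian mechanism (background only, not the paper's method):
# releasing f(x) + N(0, (sensitivity/mu)^2) satisfies mu-Gaussian-DP,
# a special case of f-differential privacy. Smaller mu means more noise
# and a stronger privacy guarantee.

rng = np.random.default_rng(1)

def gaussian_mechanism(true_value: float, sensitivity: float, mu: float) -> float:
    # Add Gaussian noise calibrated to the query's sensitivity.
    return true_value + rng.normal(scale=sensitivity / mu)

noisy_count = gaussian_mechanism(true_value=42.0, sensitivity=1.0, mu=0.5)
print(f"noisy release: {noisy_count:.2f}")
```
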
Published papers at TMLR: 'Expected Worst Case Regret via Stochastic Sequential Covering', by Changlong Wu, Mohsen Heidari, Ananth Grama, Wojciech Szpankowski. Action editor: Shinichi Nakajima.
https://openreview.net/forum?id=H1SekypXKA
#stochastically #stochastic #regret

New Submissions to TMLR: 'Expected Worst Case Regret via Stochastic Sequential Covering'
https://openreview.net/forum?id=H1SekypXKA
#stochastically #stochastic #regret
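
For the regret terminology in the two TMLR posts above, a minimal sketch of cumulative regret against the best fixed action in hindsight. The random losses, the naive random learner, and the round/action counts are toy assumptions; this does not reproduce the paper's sequential-covering analysis.

```python
import numpy as np

# Toy regret computation: the learner picks one of K actions each round and
# suffers the corresponding loss; regret is its cumulative loss minus the
# cumulative loss of the single best action chosen in hindsight.

rng = np.random.default_rng(2)
T, K = 1000, 3                      # rounds and number of actions
losses = rng.random((T, K))         # loss of each action at each round

learner_actions = rng.integers(K, size=T)            # a naive random learner
learner_loss = losses[np.arange(T), learner_actions].sum()
best_fixed_loss = losses.sum(axis=0).min()            # best fixed action in hindsight

regret = learner_loss - best_fixed_loss
print(f"cumulative regret over {T} rounds: {regret:.1f}")
```
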