sigmoid.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A social space for people researching, working with, or just interested in AI!

#denoising

LVX<p>Audiotame: denoise and normalize audio via command line or Gradio Web UI. Demo available on Hugging Face</p><p>Repo: <a href="https://github.com/lvxvvv/audiotame" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">github.com/lvxvvv/audiotame</span><span class="invisible"></span></a></p><p>Demo: <a href="https://huggingface.co/spaces/lvxvvv/audiotame" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">huggingface.co/spaces/lvxvvv/a</span><span class="invisible">udiotame</span></a></p><p><a href="https://mastodon.social/tags/OpenSource" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenSource</span></a> <a href="https://mastodon.social/tags/AudioProcessing" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AudioProcessing</span></a> <a href="https://mastodon.social/tags/Denoising" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Denoising</span></a> <a href="https://mastodon.social/tags/Normalization" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Normalization</span></a> <a href="https://mastodon.social/tags/FOSS" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FOSS</span></a> <a href="https://mastodon.social/tags/ffmpeg" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ffmpeg</span></a> <a href="https://mastodon.social/tags/gradio" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>gradio</span></a> <a href="https://mastodon.social/tags/huggingface" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>huggingface</span></a></p>
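The post above describes a command-line tool that chains denoising and loudness normalization on top of ffmpeg. A minimal sketch of such a pipeline, building the ffmpeg invocation in Python — `afftdn` and `loudnorm` are standard ffmpeg audio filters, but the specific settings here are illustrative assumptions, not Audiotame's actual defaults:

```python
# Sketch: construct an ffmpeg command that denoises (afftdn, an
# FFT-based denoiser) and normalizes loudness (loudnorm, EBU R128)
# in a single audio filter chain.
# NOTE: filter parameters below are illustrative, not the tool's defaults.
def build_ffmpeg_cmd(src: str, dst: str,
                     noise_floor_db: int = -25,
                     target_lufs: float = -16.0) -> list[str]:
    filters = ",".join([
        f"afftdn=nf={noise_floor_db}",        # spectral noise reduction
        f"loudnorm=I={target_lufs}:TP=-1.5",  # loudness normalization
    ])
    return ["ffmpeg", "-y", "-i", src, "-af", filters, dst]

cmd = build_ffmpeg_cmd("in.wav", "out.wav")
```

Running the returned command (e.g. via `subprocess.run(cmd)`) would apply both filters in one pass; separating command construction from execution also makes the pipeline easy to test.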
Tiago F. R. Ribeiro<p>“Self-Rectifying Diffusion Sampling with Perturbed-Attention Guidance”</p><p>Perturbed-Attention Guidance (<a href="https://mastodon.social/tags/PAG" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PAG</span></a>) is a diffusion sampling guidance technique that improves sample quality in both conditional and unconditional settings, with no additional training or integration of external modules required. PAG improves the structure of synthesized samples during the <a href="https://mastodon.social/tags/denoising" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>denoising</span></a> process by manipulating selected self-attention maps in the <a href="https://mastodon.social/tags/UNet" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>UNet</span></a> of the <a href="https://mastodon.social/tags/difus%C3%A3o" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>difusão</span></a> (diffusion) model </p><p>🔗<a href="https://ku-cvlab.github.io/Perturbed-Attention-Guidance/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">ku-cvlab.github.io/Perturbed-A</span><span class="invisible">ttention-Guidance/</span></a></p>
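The guidance step described in the post combines the ordinary denoising prediction with one from a pass whose selected self-attention maps were perturbed (e.g. replaced by identity attention). A minimal sketch of that combination step — the function name, toy arrays, and scale value are assumptions for illustration, not the paper's exact configuration:

```python
import numpy as np

def pag_guidance(eps: np.ndarray, eps_perturbed: np.ndarray,
                 scale: float = 3.0) -> np.ndarray:
    """Perturbed-Attention Guidance combination step: push the
    denoising prediction away from the output of a UNet pass whose
    selected self-attention maps were perturbed."""
    return eps + scale * (eps - eps_perturbed)

# Toy example with 2-element "predictions" standing in for noise estimates
eps = np.array([0.2, -0.1])       # prediction from the normal pass
eps_hat = np.array([0.1, 0.0])    # prediction from the perturbed pass
guided = pag_guidance(eps, eps_hat, scale=2.0)
```

The same additive form underlies classifier-free guidance; PAG's difference is that the second pass perturbs attention instead of dropping the condition, which is why it also works unconditionally.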
JMLR<p>&#39;On Efficient and Scalable Computation of the Nonparametric Maximum Likelihood Estimator in Mixture Models&#39;, by Yangjing Zhang, Ying Cui, Bodhisattva Sen, Kim-Chuan Toh.</p><p><a href="http://jmlr.org/papers/v25/22-1120.html" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">http://</span><span class="ellipsis">jmlr.org/papers/v25/22-1120.ht</span><span class="invisible">ml</span></a> <br /> <br /><a href="https://sigmoid.social/tags/hessian" class="mention hashtag" rel="tag">#<span>hessian</span></a> <a href="https://sigmoid.social/tags/denoising" class="mention hashtag" rel="tag">#<span>denoising</span></a> <a href="https://sigmoid.social/tags/likelihood" class="mention hashtag" rel="tag">#<span>likelihood</span></a></p>
JMLR<p>&#39;Lifted Bregman Training of Neural Networks&#39;, by Xiaoyu Wang, Martin Benning.</p><p><a href="http://jmlr.org/papers/v24/22-0934.html" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">http://</span><span class="ellipsis">jmlr.org/papers/v24/22-0934.ht</span><span class="invisible">ml</span></a> <br /> <br /><a href="https://sigmoid.social/tags/autoencoders" class="mention hashtag" rel="tag">#<span>autoencoders</span></a> <a href="https://sigmoid.social/tags/classifiers" class="mention hashtag" rel="tag">#<span>classifiers</span></a> <a href="https://sigmoid.social/tags/denoising" class="mention hashtag" rel="tag">#<span>denoising</span></a></p>
Published papers at TMLR<p>Diffusion Models for Constrained Domains</p><p>Nic Fishman, Leo Klarner, Valentin De Bortoli, Emile Mathieu, Michael John Hutchinson</p><p>Action editor: Rianne van den Berg.</p><p><a href="https://openreview.net/forum?id=xuWTFQ4VGO" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">openreview.net/forum?id=xuWTFQ</span><span class="invisible">4VGO</span></a></p><p><a href="https://sigmoid.social/tags/diffusion" class="mention hashtag" rel="tag">#<span>diffusion</span></a> <a href="https://sigmoid.social/tags/denoising" class="mention hashtag" rel="tag">#<span>denoising</span></a> <a href="https://sigmoid.social/tags/riemannian" class="mention hashtag" rel="tag">#<span>riemannian</span></a></p>
Published papers at TMLR<p>Inversion by Direct Iteration: An Alternative to Denoising Diffusion for Image Restoration</p><p>Mauricio Delbracio, Peyman Milanfar</p><p>Action editor: Jia-Bin Huang.</p><p><a href="https://openreview.net/forum?id=VmyFF5lL3F" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">openreview.net/forum?id=VmyFF5</span><span class="invisible">lL3F</span></a></p><p><a href="https://sigmoid.social/tags/denoising" class="mention hashtag" rel="tag">#<span>denoising</span></a> <a href="https://sigmoid.social/tags/restoration" class="mention hashtag" rel="tag">#<span>restoration</span></a> <a href="https://sigmoid.social/tags/deblurring" class="mention hashtag" rel="tag">#<span>deblurring</span></a></p>
TMLR certifications<p>New <a href="https://sigmoid.social/tags/FeaturedCertification" class="mention hashtag" rel="tag">#<span>FeaturedCertification</span></a>:</p><p>Inversion by Direct Iteration: An Alternative to Denoising Diffusion for Image Restoration</p><p>Mauricio Delbracio, Peyman Milanfar</p><p><a href="https://openreview.net/forum?id=VmyFF5lL3F" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">openreview.net/forum?id=VmyFF5</span><span class="invisible">lL3F</span></a></p><p><a href="https://sigmoid.social/tags/denoising" class="mention hashtag" rel="tag">#<span>denoising</span></a> <a href="https://sigmoid.social/tags/restoration" class="mention hashtag" rel="tag">#<span>restoration</span></a> <a href="https://sigmoid.social/tags/deblurring" class="mention hashtag" rel="tag">#<span>deblurring</span></a></p>
New Submissions to TMLR<p>Diffusion Models for Constrained Domains</p><p><a href="https://openreview.net/forum?id=xuWTFQ4VGO" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">openreview.net/forum?id=xuWTFQ</span><span class="invisible">4VGO</span></a></p><p><a href="https://sigmoid.social/tags/diffusion" class="mention hashtag" rel="tag">#<span>diffusion</span></a> <a href="https://sigmoid.social/tags/denoising" class="mention hashtag" rel="tag">#<span>denoising</span></a> <a href="https://sigmoid.social/tags/riemannian" class="mention hashtag" rel="tag">#<span>riemannian</span></a></p>
Published papers at TMLR<p>Soft Diffusion: Score Matching with General Corruptions</p><p>Giannis Daras, Mauricio Delbracio, Hossein Talebi, Alex Dimakis, Peyman Milanfar</p><p>Action editor: Jonathan Scarlett.</p><p><a href="https://openreview.net/forum?id=W98rebBxlQ" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">openreview.net/forum?id=W98reb</span><span class="invisible">BxlQ</span></a></p><p><a href="https://sigmoid.social/tags/denoising" class="mention hashtag" rel="tag">#<span>denoising</span></a> <a href="https://sigmoid.social/tags/corruptions" class="mention hashtag" rel="tag">#<span>corruptions</span></a> <a href="https://sigmoid.social/tags/diffusion" class="mention hashtag" rel="tag">#<span>diffusion</span></a></p>
Published papers at TMLR<p>Training Data Size Induced Double Descent For Denoising Feedforward Neural Networks and the Role of Training Noise</p><p>Rishi Sonthalia, Raj Rao Nadakuditi</p><p><a href="https://openreview.net/forum?id=FdMWtpVT1I" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">openreview.net/forum?id=FdMWtp</span><span class="invisible">VT1I</span></a></p><p><a href="https://sigmoid.social/tags/denoising" class="mention hashtag" rel="tag">#<span>denoising</span></a> <a href="https://sigmoid.social/tags/generalization" class="mention hashtag" rel="tag">#<span>generalization</span></a> <a href="https://sigmoid.social/tags/shrinkage" class="mention hashtag" rel="tag">#<span>shrinkage</span></a></p>
New Submissions to TMLR<p>Training Data Size Induced Double Descent For Denoising Feedforward Neural Networks and the Role of Training Noise</p><p><a href="https://openreview.net/forum?id=FdMWtpVT1I" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">openreview.net/forum?id=FdMWtp</span><span class="invisible">VT1I</span></a></p><p><a href="https://sigmoid.social/tags/denoising" class="mention hashtag" rel="tag">#<span>denoising</span></a> <a href="https://sigmoid.social/tags/generalization" class="mention hashtag" rel="tag">#<span>generalization</span></a> <a href="https://sigmoid.social/tags/shrinkage" class="mention hashtag" rel="tag">#<span>shrinkage</span></a></p>
ISP Group 🇧🇪<p>New post on the ISPGroup website, by Benoit Brummer (<span class="h-card" translate="no"><a href="https://fosstodon.org/@trougnouf" class="u-url mention">@<span>trougnouf</span></a></span>) and Christophe De Vleeschouwer, UCLouvain, on &quot;On the Importance of Denoising When Learning to Compress Images&quot;. <a href="https://sigmoid.social/tags/Image" class="mention hashtag" rel="tag">#<span>Image</span></a> <a href="https://sigmoid.social/tags/Compression" class="mention hashtag" rel="tag">#<span>Compression</span></a><br /><a href="https://sigmoid.social/tags/Denoising" class="mention hashtag" rel="tag">#<span>Denoising</span></a> <a href="https://sigmoid.social/tags/DeepLearning" class="mention hashtag" rel="tag">#<span>DeepLearning</span></a><br /> <a href="https://ispgroup.gitlab.io/research/on-the-importance-of-denoising-when-learning-to-compress-images/" target="_blank" rel="nofollow noopener" translate="no"><span class="invisible">https://</span><span class="ellipsis">ispgroup.gitlab.io/research/on</span><span class="invisible">-the-importance-of-denoising-when-learning-to-compress-images/</span></a></p>