sigmoid.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A social space for people researching, working with, or just interested in AI!

Server stats: 597 active users

#interpretableml

Marco 🌳 Zocca:
Interested in interpretable ML, particularly for LLMs? E.g. "causal" interpretability, as in the "OthelloGPT" paper [1]? Let's connect!

1. https://arxiv.org/abs/2210.13382

#ai #machinelearning #interpretability #interpretableml #mechanisticinterpretability
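For readers new to this line of work: "causal" interpretability in the OthelloGPT sense means probing a network's hidden activations for an internal world model and then intervening on it. Below is a toy, self-contained sketch of the probing step only, on synthetic activations; the planted signal and probe setup are illustrative assumptions, not code from the paper.

    # Toy probing sketch: can a linear model read a latent property
    # (here, a synthetic binary "board state" bit) out of hidden activations?
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Stand-in for hidden activations of a trained network (n_samples x d_model).
    # We plant a signal: one latent direction encodes the property of interest.
    d_model, n = 64, 2000
    direction = rng.normal(size=d_model)
    labels = rng.integers(0, 2, size=n)                 # the latent property
    acts = rng.normal(size=(n, d_model)) + np.outer(labels - 0.5, direction)

    X_tr, X_te, y_tr, y_te = train_test_split(acts, labels, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    # High held-out accuracy suggests the property is linearly decodable
    # from the activations; interventions along the probe direction come next.
    print("probe accuracy:", probe.score(X_te, y_te))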
Cosima Meyer:
This book is golden! (And I think I'll have to read it again because it's so full of information.) For anyone trying to understand their models, Serg Masís' book "Interpretable Machine Learning with #Python" provides the right mix of theory and practical approaches. It takes both a high-level and an applied perspective, which I really enjoyed, and gives practitioners and newcomers alike a good, illustrative starting point.

#interpretableml
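As a flavor of what model-agnostic interpretation in Python looks like in practice, here is a minimal sketch using scikit-learn's permutation importance; the dataset, model, and parameters are illustrative choices, not an excerpt from the book.

    # Permutation importance: shuffle each feature in turn and measure the drop
    # in held-out accuracy; larger drops mean the model relies on that feature more.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

    # Print the five most important features with their mean and spread over repeats.
    order = result.importances_mean.argsort()[::-1][:5]
    for i in order:
        print(f"{X.columns[i]}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")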
Cosima Meyer:
And while this is true for any modeling approach, I think it's especially relevant as we move to more complex models. In my experience, interpretability matters not only for making models accessible to the end user but also during the development process, as we strive to build fair and reliable models that produce the best results.

#interpretableml #xai #datascience #python
José Oramas:
Great news: this year #AIMLAI will be held in conjunction with #ECMLPKDD 2024. Looking forward to meeting you in Vilnius! #xai #interpretableML #explainability #interpretability #ai #ml @ECMLPKDD @IDLabResearch @imecVlaanderen @UAntwerpen
UKRI AI for Healthcare Centres:
On Friday 28 July and Saturday 29 July at @icmlconf, come and follow presentations by @PFestor, @alj_jenkins, and @JoshSouthern13.

#interpretableML for health #GenerativeAI #GNN
Conor O'Sullivan:
My new article in @towardsdatascience. One of its points is that IML brings data science closer to the goal of science: understanding our natural world. This is one of the reasons I'm so attracted to the field. Trying to understand how a model works is 1000x more interesting than simply evaluating it.

#DataScience #MachineLearning #InterpretableML #XAI

No-paywall link:
https://towardsdatascience.com/data-science-is-not-science-bb95d783697a?source=friends_link&sk=990dff05efbd4c9369c667f7977a23a7
José Oramas:
Our #WACV23 paper on evaluating model interpretation methods is available online: https://openaccess.thecvf.com/content/WACV2023/html/Behzadi-Khormouji_A_Protocol_for_Evaluating_Model_Interpretation_Methods_From_Visual_Explanations_WACV_2023_paper.html

Congrats to @behzadikhormuji for the good work.
#XAI #Interpretability #interpretableml #UAntwerp #imec