If you care about **reliable AI explainability**, it's time to rethink the status quo.
Thoughts? Have you faced issues with LIME/SHAP? Let’s discuss!
#AI #interpretability vs #explainability
"The explanations themselves can be difficult to convey to nonexperts, such as end users and line-of-business teams" https://www.techtarget.com/searchenterpriseai/feature/Interpretability-vs-explainability-in-AI-and-machine-learning
Liquid AI Is Redesigning the Neural Network https://www.wired.com/story/liquid-ai-redesigning-neural-network/ #AI #energyEfficiency #privacy #explainability (looks promising)
"Feature importance helps in understanding which features contribute most to the prediction"
A few lines with #sklearn: https://mljourney.com/sklearn-linear-regression-feature-importance/
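In the spirit of the linked tutorial, here is a minimal sketch of reading feature importance off a linear model's coefficients (the data, feature names, and coefficients are made up for illustration; standardizing first makes the magnitudes comparable):

```python
# Hypothetical example: feature importance from standardized linear-regression
# coefficients, roughly what the linked #sklearn tutorial describes.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # three synthetic features
# f0 matters most, f1 a little, f2 not at all (plus a bit of noise)
y = 3.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Standardize so coefficient magnitudes are comparable across features
X_std = StandardScaler().fit_transform(X)
model = LinearRegression().fit(X_std, y)

# |coefficient| on standardized inputs serves as a simple importance score
importance = np.abs(model.coef_)
for name, imp in zip(["f0", "f1", "f2"], importance):
    print(f"{name}: {imp:.2f}")
```

This only works for linear models, of course; that limitation is exactly why model-agnostic methods like LIME and SHAP exist.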
"The following sections discuss several state-of-the-art interpretable and explainable #ML methods. The selection of works does not comprise an exhaustive survey of the literature. Instead, it is meant to illustrate the commonest properties and inductive biases behind interpretable models and [black-box] explanation methods using concrete instances."
https://wires.onlinelibrary.wiley.com/doi/full/10.1002/widm.1493#widm1493-sec-0010-title
Model "#interpretability and [black-box] #explainability, although not necessary in many straightforward applications, become instrumental when the problem definition is incomplete and in the presence of additional desiderata, such as trust, causality, or fairness."
https://wires.onlinelibrary.wiley.com/doi/full/10.1002/widm.1493
Kolmogorov-Arnold networks are a promising step towards #explainability compared with multilayer perceptrons (at least for certain kinds of problems)
Found my way to Vilnius, looking forward to an engaging and fun week @ECMLPKDD '24 and to the workshop on interpretable #ML & #AI taking place this Monday #AIMLAI #xai #interpretability #explainability
It's been cool to be part of the #IJDH issue on #reproducibility and #explainability.
But my reservations regarding the publishing model are being confirmed:
There is a clear relationship between access modality and number of accesses (see bar chart). So far, the #openaccess articles (paid for via an individual APC or a read-and-publish agreement) have many more accesses per month than the #closed access articles (no author payment required).
Open access advantage
Rich country advantage
Mapping the Mind of a Large Language Model https://www.anthropic.com/research/mapping-mind-language-model #AI #interpretability #explainability #Anthropic
National Deep Inference Fabric https://ndif.us/ #AI #research #explainability
“Explainability is the next frontier of statistical computing and AI. This can begin only when we open up the black box and uncover the math.”
'Towards Explainable Evaluation Metrics for Machine Translation', by Christoph Leiter, Piyawat Lertvittayakumjorn, Marina Fomicheva, Wei Zhao, Yang Gao, Steffen Eger.
http://jmlr.org/papers/v25/22-0416.html
#explainability #generative #translation
Somebody tell this guy about trying to test #quality into a product. SMH.
I wonder how he'd feel about the lack of #explainability if he were ever wrongfully arrested and convicted on a false positive, and his challenge dismissed because we can't explain how the #AI #algorithm works, yet we still trust its output.
#QA
https://www.forbes.com/sites/jamesbroughel/2024/04/13/artificial-intelligence-explainability-is-overrated/
Andreas Malm: “Sabotage has to be precise”
The Berliner, 2022, https://www.the-berliner.com/politics/andreas-malm-sabotage-has-to-be-precise-how-to-blow-up-a-pipeline-nord-stream-2-interview/ @climate@a.gup.pe @climate@slrpnk.net
Large language models use a surprisingly simple mechanism to retrieve some stored knowledge https://news.mit.edu/2024/large-language-models-use-surprisingly-simple-mechanism-retrieve-stored-knowledge-0325 #AI #explainability
Large language models can do jaw-dropping things. But nobody knows exactly why. https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/ #AI #research #explainability
Great news, this year #AIMLAI will be held in conjunction with #ECMLPKDD 2024. Looking forward to meeting you in Vilnius! #xai #interpretableML #explainability #interpretability #ai #ml @ECMLPKDD @IDLabResearch @imecVlaanderen @UAntwerpen
Also in this issue, @upol and Mark Riedl argue that a singular monolithic definition of explainable AI is neither feasible nor desirable at present.
https://www.cell.com/patterns/fulltext/S2666-3899(24)00017-5