#icml2023


"Causal Deep Learning"
Jeroen Berrevoets, Krzysztof Kacprzyk, Zhaozhi Qian, Mihaela van der Schaar

Prof Mihaela van der Schaar just gave a very exciting talk at the workshop on her lab's CDL framework. Definitely going on my reading list.

arxiv.org/abs/2303.02186

arXiv.org: Causal Deep Learning
Causality has the potential to truly transform the way we solve a large number of real-world problems. Yet, so far, its potential largely remains to be unlocked as causality often requires crucial assumptions which cannot be tested in practice. To address this challenge, we propose a new way of thinking about causality -- we call this causal deep learning. Our causal deep learning framework spans three dimensions: (1) a structural dimension, which incorporates partial yet testable causal knowledge rather than assuming either complete or no causal knowledge among the variables of interest; (2) a parametric dimension, which encompasses parametric forms that capture the type of relationships among the variables of interest; and (3) a temporal dimension, which captures exposure times or how the variables of interest interact (possibly causally) over time. Causal deep learning enables us to make progress on a variety of real-world problems by leveraging partial causal knowledge (including independencies among variables) and quantitatively characterising causal relationships among variables of interest (possibly over time). Our framework clearly identifies which assumptions are testable and which ones are not, such that the resulting solutions can be judiciously adopted in practice. Using our formulation we can combine or chain together causal representations to solve specific problems without losing track of which assumptions are required to build these solutions, pushing real-world impact in healthcare, economics and business, environmental sciences and education, through causal deep learning.
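As a rough mental model (my own sketch, not code from the paper or the van der Schaar lab), the three CDL dimensions could be written down as a small specification object. The class, enum values, and variable names below are hypothetical, chosen purely to illustrate the structural / parametric / temporal split described in the abstract:

from dataclasses import dataclass, field
from enum import Enum, auto


class Parametric(Enum):
    LINEAR = auto()         # linear functional forms
    ADDITIVE = auto()       # additive but possibly nonlinear terms
    NONPARAMETRIC = auto()  # unrestricted, e.g. a neural network


class Temporal(Enum):
    STATIC = auto()           # a single snapshot, no time dimension
    DISCRETE_TIME = auto()    # variables observed at regular intervals
    CONTINUOUS_TIME = auto()  # irregular exposure times


@dataclass
class CDLSpec:
    """Hypothetical container for the three dimensions named in the abstract."""
    # Structural dimension: partial causal knowledge as known directed edges
    # (parent, child) plus asserted independencies among variables.
    known_edges: set = field(default_factory=set)
    independencies: set = field(default_factory=set)
    # Parametric dimension: the assumed form of the relationships.
    parametric: Parametric = Parametric.NONPARAMETRIC
    # Temporal dimension: how the variables interact over time.
    temporal: Temporal = Temporal.STATIC


# Example: we know treatment -> outcome, assert that age is independent of
# treatment, make no parametric assumptions, and observe data in discrete time.
spec = CDLSpec(
    known_edges={("treatment", "outcome")},
    independencies={frozenset({"age", "treatment"})},
    parametric=Parametric.NONPARAMETRIC,
    temporal=Temporal.DISCRETE_TIME,
)
print(spec)

Writing the assumptions down like this is, as I read it, the selling point of the framework: a downstream model only gets to rely on the edges and independencies that are explicitly declared, so it stays clear which assumptions are testable and which are not.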

Aloha!
Next week I'll be in Hawai'i at the ICML 2023 conference!

If you want to meet up drop me a line here, or come to our posters:
- Tue on "Generative Causal Representation Learning for Out-of-Distribution Motion Forecasting" with experiments on pedestrian motion forecasting.
(Poster Ses 1, Tue 11am, Hall 1, #110)
icml.cc/virtual/2023/poster/25
- Fri at the SynS&ML Workshop.
My talk is about our chemgymrl.com simulator for using Reinforcement Learning for Material Design.


Happy to hear that our paper "Graph Representation of the Magnetic Field Topology in High-Fidelity Plasma Simulations for Machine Learning Applications" (probably a new record for title length) was accepted to the 2nd ICML Workshop on Machine Learning for Astrophysics (ML4Astro), to be held in Honolulu, Hawai'i, on July 29th.

This is joint work with Prof Minna Palmroth's team at the Department of Physics, University of Helsinki.

While this may be somewhat of a faux pas, in about an hour @natolambert and I will be joining @talkrl in a Twitter Spaces discussion to chat about recent advances in the world of RL!

I hope anyone who is interested and can afford a break from writing feels free to join!

twitter.com/i/spaces/1rmxPkRXX

Twitter: "Scheduled: Reinforcement Learning Topics" (TalkRL Podcast's Space)

: "Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless these produced text is presented as a part of the paper’s experimental analysis."

I'd be curious to know which detector they plan to use...

After the discussion period (or lack thereof), it's very unlikely one of my papers will get in.
Disappointing, for sure, as I like this paper (and I'm saddened for my student), but I'm looking forward to taking the bits of reviewer feedback that made sense and preparing a stronger submission.

Onward!