sigmoid.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A social space for people researching, working with, or just interested in AI!


#metascience


Could AI slow progress in science?

open.substack.com/pub/aisnakeo?

what is progress in science? more papers, but fewer breakthroughs? (how do you measure “progress in science” or recognize a breakthrough?)

the low-hanging fruit is picked? but progress should give us taller ladders. also, some progress is in the form of entirely new fields, new trees with new low-hanging fruit.

nice bit about how better pattern matching and better model fits still need a better theory to justify the model: geocentric orbits with epicycles made better predictions than heliocentrism until Kepler realized orbits weren't circular but elliptical; heliocentrism progressed mostly because it was simpler than all those epicycles.

the gold is at the end of the paper, inspired by an essay by a mathematician named Thurston, who notes that the goal of mathematics is not proofs of theorems but human understanding. to what extent does a result from an AI circumvent human understanding?

AI Snake Oil · Could AI slow science? By Sayash Kapoor

When defending science in public, we often talk about 'peer reviewed science'. But could this framing contribute to undermining trust in science and to holding us back from improving the scientific process? How about instead we talk about the work that has received the most thorough and transparent scrutiny?

Peer review goes a step towards this by having a couple of people scrutinise the work, but there are limits on how thorough it can be, and in most journals it's not transparent. Switching the framing to transparent scrutiny allows us to experiment with other models with a path to improvement.

For example, opening review to everyone, keeping it ongoing, and publishing all reviews improves this. When authors make their raw data and code open, it improves this further.

It also gives us a way to criticise problematic organisations that formally do peer review but add little value (e.g. predatory journals). If their reviews are not open and observably of poor quality, then they are less 'thoroughly transparent'.

So with this framing the existence of 'peer reviewed' but clearly poor quality work doesn't undermine trust in science as a whole because we don't pin our meaning and value on an exploitable binary measure of 'peer reviewed'.

It also offers a hopeful way forward because it shows us how we can improve, and every step towards this becomes meaningful. If all we have is binary 'peer reviewed' or not, why spend more effort doing it better?

In summary, I think this new framing would be better for science, both in terms of the public perception of it, and for us as scientists.


Bernhard Angele is now presenting "Living meta-analyses in Language Sciences" at #WoReLa1. Having explained the need for such living meta-analyses in a very humorous way, Bernhard demonstrated this very cool project. The main output is a #Shiny app that can perform Bayesian meta-analyses with lots of opportunities to control various parameters AND allows you to upload additional data to an existing meta-analysis. The app can be used as is or the code adapted to your needs: dallbrit.shinyapps.io/Breathin. Also check out the associated paper: doi.org/10.5334/joc.389 #MetaScience
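The core idea of a "living" meta-analysis — re-pool the evidence every time a new study is uploaded — can be sketched in a few lines. This is not the app's actual code (the app is Bayesian and written in Shiny/R); it's a minimal illustration using classical fixed-effect inverse-variance pooling, with a hypothetical `pooled_effect` helper:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) pooled estimate and its standard error."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    est = sum(w * e for w, e in zip(weights, effects)) / total
    se = math.sqrt(1.0 / total)
    return est, se

# A "living" meta-analysis: re-pool whenever a new study arrives.
studies = [(0.30, 0.04), (0.10, 0.09)]   # (effect size, variance) per study
est, se = pooled_effect(*zip(*studies))
print(f"initial pooled estimate: {est:.3f} (SE {se:.3f})")   # 0.238 (SE 0.166)

studies.append((0.20, 0.01))             # a newly uploaded study
est, se = pooled_effect(*zip(*studies))
print(f"updated pooled estimate: {est:.3f} (SE {se:.3f})")   # 0.210 (SE 0.086)
```

Note how the precise new study (small variance) dominates the update — the same mechanics that make continuously updated meta-analyses informative as evidence accumulates.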

Could a novelty indicator improve science?
nature.com/articles/d41586-025 #MetaScience I have mixed feelings about this. The competition for machine-based indicators that align well with human assessments is well-designed, and I agree that researching the role of novelty is interesting. 1/ @openscience

www.nature.com: Could a novelty indicator improve science? A competition to develop computational approaches to detect 'novelty' in published papers will help metascientists to study how out-of-the-box research changes the scientific landscape.

You are all warmly invited to our #metascience2025 virtual symposium on Friday 27 June, 15:00–16:30 CEST, about doing #metascience and #interdisciplinary research as an early-career researcher.

Convenor: Anna Leung (psycholinguist and language teacher; University Hospital, LMU, Germany)

Discussants:
- Shawn Hemelstrand (psychologist and methodologist; The Chinese University of Hong Kong)
- Daniel Kristanto (engineer turned neuroscientist; Carl von Ossietzky Universität Oldenburg)
- Elen Le Foll (corpus linguist and language teaching researcher; University of Cologne)

Abstract: nomadit.co.uk/conference/metas.

Registration is free: cos-io.zoom.us/webinar/registe.

We look forward to discussing these important topics with you! #ECR #PhD #PostDoc #Academia #OpenScience

🔄 Boosting this toot = 1% more ECR empowerment! 🔄


"As we report below, even well-meaning scientists provided with identical data and freed from pressures to distort results may not reliably converge in their findings because of the complexity and ambiguity inherent to the process of scientific analysis."
#metascience