sigmoid.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A social space for people researching, working with, or just interested in AI!

#hypothesis

#Autaxys formal #hypothesis (H1): Given a sufficiently simple and well-defined initial set of Proto-property spaces, a minimal #Cosmic Algorithm (graph rewrite rules), and a computable #Autaxic Lagrangian that embodies the Economy of Existence, the iterative application of the local maximization #dynamic to an initially simple or random graph state will spontaneously generate a non-trivial diversity of stable, emergent #patterns.

doi.org/10.5281/zenodo.1566203


I guess I hate a lot of things… and among them, there's the #Python #Hypothesis package.

Why? Because *I am so tired* of failing health checks (e.g. errors because test data is generated "too slowly"). Sure, these are useful to test authors, but they are completely useless for downstream testing. I just want to know whether the package works. I don't care that test generation randomly took half a second longer than the Hypothesis authors consider acceptable. I don't want random false positives all the time, and I don't have time to keep reporting them to countless upstreams, only for my reports to be ignored while the tests keep failing randomly for us.

And the worst part? Hypothesis upstream just doesn't care. They don't want to add an override for downstream packagers. Oh yes, we can just add another settings profile! Except then we need to either talk every consumer of Hypothesis into providing one, or patch every consumer ourselves. Or, well, they suggest disabling the tests that use Hypothesis altogether, since it's "only fuzzing"! If only people didn't rely on them as the only tests for the functionality in question…
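For context, the "settings profile" workaround mentioned above looks roughly like this, using Hypothesis's public `settings`/`HealthCheck` API (the profile name `downstream` is my own; this is a sketch of the per-package patch, not something upstream provides globally):

```python
# conftest.py -- register a Hypothesis profile that suppresses all
# health checks (too_slow, data_too_large, etc.) for downstream test runs.
from hypothesis import HealthCheck, settings

settings.register_profile(
    "downstream",
    suppress_health_check=list(HealthCheck),  # suppress every health check
)

# Either select it per run with HYPOTHESIS_PROFILE=downstream,
# or load it unconditionally:
settings.load_profile("downstream")
```

The catch, as described above, is that this `conftest.py` has to exist in every consumer's test suite; there is no single downstream-wide switch.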

github.com/HypothesisWorks/hyp

GitHub: "Please make it possible to disable health checks via pytest command-line args" · Issue #2533 · HypothesisWorks/hypothesis · by mgorny

Replacing #Omnivore seems more difficult than I thought:

I am looking for:
* Collecting articles on the web, via RSS and ideally newsletters
* Reading and highlighting (on mobile, ideally also offline)
* Getting the highlights into #LogSeq

#Wallabag looks good, but it can't highlight on mobile devices.

#Raindrop has a Logseq integration, but that integration project is discontinued.

#Hypothesis does not work offline and does not have an RSS aggregator.

None of these options is a really good replacement...

Replied in thread

@paninid p-values, to a large extent, exist because calculating the posterior is computationally expensive. Not all fields use the .05 cutoff.

A p-value is an #estimate of p(Data | Null Hypothesis). If the two #hypotheses are equally likely a priori, mutually exclusive, and jointly exhaustive over the #hypothesis space, then this is the same as p(Hypothesis | Data).

Meaning, under certain assumptions, the p-value does represent the actual probability of being wrong.

However, given modern computers, there is no reason that #Bayesian odds-ratios can't completely replace their usage and avoid the many many problems with p-values.
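The reduction described above can be sketched numerically. Plain Python, with illustrative likelihood values of my own choosing; under equal priors for two mutually exclusive, jointly exhaustive hypotheses, the posterior collapses to a ratio of the two likelihoods:

```python
# Bayes' theorem for two mutually exclusive, jointly exhaustive hypotheses.
def posterior_h0(lik_h0, lik_h1, prior_h0=0.5):
    """P(H0 | data) from the two likelihoods and the prior on H0."""
    prior_h1 = 1.0 - prior_h0
    evidence = lik_h0 * prior_h0 + lik_h1 * prior_h1
    return lik_h0 * prior_h0 / evidence

def posterior_odds(lik_h0, lik_h1, prior_h0=0.5):
    """Posterior odds H0:H1 = Bayes factor times prior odds."""
    return (lik_h0 / lik_h1) * (prior_h0 / (1.0 - prior_h0))

# With equal priors, P(H0 | data) = lik_h0 / (lik_h0 + lik_h1):
print(posterior_h0(0.04, 0.36))    # 0.1 (up to float rounding)
print(posterior_odds(0.04, 0.36))  # about 0.111, i.e. odds of 1:9 against H0
```

This is the odds-ratio computation the reply argues modern computers make cheap; the hard part in practice is computing the likelihood under the alternative, not the arithmetic.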

When researchers gave healthy mice antibodies from patients with #Long #COVID, some of the animals began showing Long COVID symptoms, specifically heightened pain sensitivity and dizziness.

It is among the first studies to offer enticing evidence for the #autoimmunity #hypothesis.

The research was led by #Akiko #Iwasaki, PhD, Sterling Professor of Immunobiology at Yale School of Medicine (YSM).

“We believe this is a big step forward in trying to understand and provide treatment to patients with this subset of Long COVID,” Iwasaki said.

Iwasaki zeroed in on autoimmunity in this study for several reasons.

First, Long COVID’s persistent nature suggested that a chronic triggering of the immune system might be at play.

Second, women between ages 30 and 50, who are most susceptible to autoimmune diseases, are also at a heightened risk for Long COVID.

Finally, some of Iwasaki’s previous research had detected heightened levels of antibodies in people infected with SARS-CoV-2.

yalemedicine.org/news/antibodi

Yale Medicine: "Antibodies From Long COVID Patients Provide Clues to Autoimmunity Hypothesis." Promising new research supports that autoimmunity, in which the immune system targets its own body, may contribute to Long COVID symptoms in some patients.