sigmoid.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A social space for people researching, working with, or just interested in AI!


#catboost


Do we always need compute-intensive #deeplearning models, or are lighter-weight alternatives such as #xgboost, #catboost, and #lightgbm, or even classical methods like #SVMs and #logisticregression, enough for some use cases?

At the #M3 conference in #Köln (23.04.2024 to 25.04.2024), I will discuss this question in a talk. More information at: m3-konferenz.de/veranstaltung-

Minds Mastering Machines 2024, by heise Developer (https://heise.de/developer) and dpunkt.verlag (https://dpunkt.de)

I'm excited to see gradient boosting making some news! There is so much hype around the current wave of models, but I think that for most of us working in industry, the development of gradient-boosting algorithms (like XGBoost and CatBoost) is the real revolution and will have a much longer-lived impact on our work.

nature.com/articles/s41598-022

Nature Scientific Reports: Gradient boosting decision tree becomes more reliable than logistic regression in predicting probability for diabetes with big data

We sought to verify the reliability of machine learning (ML) in developing diabetes prediction models by utilizing big data. To this end, we compared the reliability of gradient boosting decision tree (GBDT) and logistic regression (LR) models using data obtained from the Kokuho-database of the Osaka prefecture, Japan. To develop the models, we focused on 16 predictors from health checkup data from April 2013 to December 2014. A total of 277,651 eligible participants were studied. The prediction models were developed using a light gradient boosting machine (LightGBM), which is an effective GBDT implementation algorithm, and LR. Their reliabilities were measured based on expected calibration error (ECE), negative log-likelihood (Logloss), and reliability diagrams. Similarly, their classification accuracies were measured in the area under the curve (AUC). We further analyzed their reliabilities while changing the sample size for training. Among the 277,651 participants, 15,900 (7978 males and 7922 females) were newly diagnosed with diabetes within 3 years. LightGBM (LR) achieved an ECE of 0.0018 ± 0.00033 (0.0048 ± 0.00058), a Logloss of 0.167 ± 0.00062 (0.172 ± 0.00090), and an AUC of 0.844 ± 0.0025 (0.826 ± 0.0035). From sample size analysis, the reliability of LightGBM became higher than LR when the sample size increased beyond 10^4. Thus, we confirmed that GBDT provides a more reliable model than LR in the development of diabetes prediction models using big data. ML could potentially produce a highly reliable diabetes prediction model, a helpful tool for improving lifestyle and preventing diabetes.
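The paper's headline reliability metric, expected calibration error (ECE), is easy to sketch. This is a minimal NumPy version under assumed conventions (equal-width probability bins, bin-size weighting), not necessarily the paper's exact protocol:

```python
# Minimal ECE sketch: mean |observed positive rate - mean predicted
# probability| over equal-width probability bins, weighted by bin size.
# Binning scheme is an assumption; predictions of exactly 0.0 fall outside
# the half-open bins and are ignored in this toy version.
import numpy as np

def expected_calibration_error(y_true, y_prob, n_bins=10):
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (y_prob > lo) & (y_prob <= hi)
        if mask.any():
            conf = y_prob[mask].mean()  # mean predicted probability in bin
            acc = y_true[mask].mean()   # observed positive rate in bin
            ece += mask.mean() * abs(acc - conf)
    return ece

# Toy example: four predictions, two positives.
y_true = np.array([0, 0, 1, 1])
y_prob = np.array([0.1, 0.2, 0.8, 0.9])
print(expected_calibration_error(y_true, y_prob))  # -> 0.15
```

Lower is better: a perfectly calibrated model's predicted probabilities match the observed frequencies in every bin, which is the sense in which LightGBM's 0.0018 beats LR's 0.0048 above.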

Lesson of the day: working with a #catboost model, I got no traction cross-validating hyperparameters like tree depth and number of trees, but different evaluation metrics (e.g. RMSE vs. MAE) had a major impact. Have you tried this?

IMHO, deep-learning models get all the press because they do sexy human jobs like seeing and processing language. But in the business world of tabular data, gradient boosting is where the real revolution is happening!