sigmoid.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A social space for people researching, working with, or just interested in AI!

Server stats: 572 active users
#languagemodels

1 post · 1 participant · 0 posts today
UKP Lab<p>Stay tuned as Yang reflects on his research path, current projects, and the lasting impact of his time at UKP Lab 💻💡.</p><p>(5/5)</p><p><a href="https://sigmoid.social/tags/UKPLab" class="mention hashtag" rel="tag">#<span>UKPLab</span></a> <a href="https://sigmoid.social/tags/UKPAlumni" class="mention hashtag" rel="tag">#<span>UKPAlumni</span></a> <a href="https://sigmoid.social/tags/LanguageModels" class="mention hashtag" rel="tag">#<span>LanguageModels</span></a> <a href="https://sigmoid.social/tags/RLHF" class="mention hashtag" rel="tag">#<span>RLHF</span></a> <a href="https://sigmoid.social/tags/ReinforcementLearning" class="mention hashtag" rel="tag">#<span>ReinforcementLearning</span></a> <a href="https://sigmoid.social/tags/DeepMind" class="mention hashtag" rel="tag">#<span>DeepMind</span></a> <a href="https://sigmoid.social/tags/Gemini" class="mention hashtag" rel="tag">#<span>Gemini</span></a> <a href="https://sigmoid.social/tags/NLProc" class="mention hashtag" rel="tag">#<span>NLProc</span></a> <a href="https://sigmoid.social/tags/AI" class="mention hashtag" rel="tag">#<span>AI</span></a> <a href="https://sigmoid.social/tags/TextGeneration" class="mention hashtag" rel="tag">#<span>TextGeneration</span></a> <a href="https://sigmoid.social/tags/ComputationalLinguistics" class="mention hashtag" rel="tag">#<span>ComputationalLinguistics</span></a></p>
IT News<p>College student’s “time travel” AI experiment accidentally outputs real 1834 history - A hobbyist developer building AI language models that speak ... - <a href="https://arstechnica.com/information-technology/2025/08/ai-built-from-1800s-texts-surprises-creator-by-mentioning-real-1834-london-protests/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">arstechnica.com/information-te</span><span class="invisible">chnology/2025/08/ai-built-from-1800s-texts-surprises-creator-by-mentioning-real-1834-london-protests/</span></a> <a href="https://schleuss.online/tags/computationalarchaeology" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>computationalarchaeology</span></a> <a href="https://schleuss.online/tags/largelanguagemodels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>largelanguagemodels</span></a> <a href="https://schleuss.online/tags/digitalarchaeology" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>digitalarchaeology</span></a> <a href="https://schleuss.online/tags/historicalresearch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>historicalresearch</span></a> <a href="https://schleuss.online/tags/machinelearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>machinelearning</span></a> <a href="https://schleuss.online/tags/languagemodels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>languagemodels</span></a> <a href="https://schleuss.online/tags/timecapsulellm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>timecapsulellm</span></a> <a href="https://schleuss.online/tags/aidevelopment" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>aidevelopment</span></a> <a href="https://schleuss.online/tags/londonhistory" class="mention hashtag" rel="nofollow noopener" 
target="_blank">#<span>londonhistory</span></a> <a href="https://schleuss.online/tags/historicalai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>historicalai</span></a> <a href="https://schleuss.online/tags/victorianera" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>victorianera</span></a> <a href="https://schleuss.online/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a></p>
Saarland Informatics Campus<p>📥 Last month, DFKI’s Prof. Josef van Genabith &amp; Simon Ostermann spoke to t-online on how Gmail's auto translate could've altered words on their newsletter and disrupted the reporting of recent events: sic.link/tonline</p><p>In the age of misinformation, we thank them for speaking out about the issue! 🙏</p><p><a href="https://mastodon.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://mastodon.social/tags/machinelearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>machinelearning</span></a> <a href="https://mastodon.social/tags/translation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>translation</span></a> <a href="https://mastodon.social/tags/dfki" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>dfki</span></a> <a href="https://mastodon.social/tags/saarlanduniversity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>saarlanduniversity</span></a> <a href="https://mastodon.social/tags/saarlandinformaticscampus" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>saarlandinformaticscampus</span></a> <a href="https://mastodon.social/tags/aiethics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>aiethics</span></a> <a href="https://mastodon.social/tags/responsibleai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>responsibleai</span></a> <a href="https://mastodon.social/tags/languagemodels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>languagemodels</span></a> <a href="https://mastodon.social/tags/nlp" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>nlp</span></a> <a href="https://mastodon.social/tags/technews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>technews</span></a> <a 
href="https://mastodon.social/tags/sciencenews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>sciencenews</span></a> <a href="https://mastodon.social/tags/machinelearningnews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>machinelearningnews</span></a></p>
Nick Byrd, Ph.D.<p>Are <a href="https://nerdculture.de/tags/languageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>languageModels</span></a> vulnerable to <a href="https://nerdculture.de/tags/anchoring" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>anchoring</span></a> <a href="https://nerdculture.de/tags/bias" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>bias</span></a>?</p><p>Huang et al. generated the <a href="https://nerdculture.de/tags/SynAnchors" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SynAnchors</span></a> dataset to find out.</p><p>Anchoring was more common in shallower layers of models.</p><p>A reflective reasoning strategy was usually most helpful.</p><p><a href="https://doi.org/10.48550/arXiv.2505.15392" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">doi.org/10.48550/arXiv.2505.15</span><span class="invisible">392</span></a></p><p><a href="https://nerdculture.de/tags/CogSci" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CogSci</span></a> <a href="https://nerdculture.de/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://nerdculture.de/tags/tech" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tech</span></a> <a href="https://nerdculture.de/tags/edu" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>edu</span></a></p>
Harald Klinke<p>Trained on a curated mixture of English and code, these dense decoder-only models are optimized for performance and broad compatibility. Designed to support research, customization, and fine-tuning, according to OpenAI. Model Card: <a href="https://cdn.openai.com/pdf/419b6906-9da6-406c-a19d-1bb078ac7637/oai_gpt-oss_model_card.pdf" rel="nofollow noopener" target="_blank">cdn.openai.com/pdf/419b6906...</a> <a class="hashtag" rel="nofollow noopener" href="https://bsky.app/search?q=%23AI" target="_blank">#AI</a> <a class="hashtag" rel="nofollow noopener" href="https://bsky.app/search?q=%23OpenSource" target="_blank">#OpenSource</a> <a class="hashtag" rel="nofollow noopener" href="https://bsky.app/search?q=%23LanguageModels" target="_blank">#LanguageModels</a><br><br><a href="https://cdn.openai.com/pdf/419b6906-9da6-406c-a19d-1bb078ac7637/oai_gpt-oss_model_card.pdf" rel="nofollow noopener" target="_blank">cdn.openai.com/pdf/419b6906-9...</a></p>
Nick Byrd, Ph.D.<p>Do smaller <a href="https://nerdculture.de/tags/languageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>languageModels</span></a> learn to reason better from longer chains of thought?</p><p>Luo et al. found<br>- Larger CoT data subsets didn't increase reflective reasoning (Figure 3)<br>- Accuracy didn't beat baseline until ≅32k CoT tokens (Figure 2)</p><p><a href="https://doi.org/10.48550/arXiv.2506.07712" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">doi.org/10.48550/arXiv.2506.07</span><span class="invisible">712</span></a></p><p><a href="https://nerdculture.de/tags/CogSci" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CogSci</span></a> <a href="https://nerdculture.de/tags/edu" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>edu</span></a></p>
Nick Byrd, Ph.D.<p>What's <a href="https://nerdculture.de/tags/philosophy" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>philosophy</span></a> got to offer fields like <a href="https://nerdculture.de/tags/computerScience" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>computerScience</span></a>?</p><p>An <a href="https://nerdculture.de/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> system was trained via a method championed by a philosopher.</p><p>The result wasn't perfect, but it was better than "off-the-shelf neural <a href="https://nerdculture.de/tags/languageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>languageModels</span></a>" at learning about human <a href="https://nerdculture.de/tags/ethics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ethics</span></a>.</p><p><a href="https://doi.org/10.1038/s42256-024-00969-6" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">doi.org/10.1038/s42256-024-009</span><span class="invisible">69-6</span></a></p>
Hacker News<p>The Big LLM Architecture Comparison</p><p><a href="https://magazine.sebastianraschka.com/p/the-big-llm-architecture-comparison" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">magazine.sebastianraschka.com/</span><span class="invisible">p/the-big-llm-architecture-comparison</span></a></p><p><a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/LLMArchitecture" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLMArchitecture</span></a> <a href="https://mastodon.social/tags/ComparisonAIModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ComparisonAIModels</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LanguageModels</span></a></p>
PPC Land<p>Large language models lack true reasoning capabilities, researchers argue: Large language models function through sophisticated retrieval rather than genuine reasoning, according to research published across multiple studies in 2025. <a href="https://ppc.land/large-language-models-lack-true-reasoning-capabilities-researchers-argue/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">ppc.land/large-language-models</span><span class="invisible">-lack-true-reasoning-capabilities-researchers-argue/</span></a> <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://mastodon.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MachineLearning</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LanguageModels</span></a> <a href="https://mastodon.social/tags/DeepLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DeepLearning</span></a></p>
rijo<p>Large language models lack true reasoning capabilities, researchers argue <a href="https://ppc.land/large-language-models-lack-true-reasoning-capabilities-researchers-argue/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">ppc.land/large-language-models</span><span class="invisible">-lack-true-reasoning-capabilities-researchers-argue/</span></a> <a href="https://frankfurt.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://frankfurt.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://frankfurt.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MachineLearning</span></a> <a href="https://frankfurt.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LanguageModels</span></a> <a href="https://frankfurt.social/tags/DeepLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DeepLearning</span></a></p>
Harald Klinke<p>The key insight: hallucinations are not bugs, but artifacts of compression. Like Xerox photocopiers that silently replaced digits in floorplans to save memory, LLMs can introduce subtle distortions. Because the output still looks right, we may not notice what has been lost or changed.<br>The more they’re used to generate content, the more the web becomes a blurrier copy of itself.<br><a href="https://det.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LanguageModels</span></a> <a href="https://det.social/tags/CompressionArtifacts" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CompressionArtifacts</span></a> <a href="https://det.social/tags/AIliteracy" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AIliteracy</span></a></p>
The-14<p>AI might now be as good as humans at detecting emotion, political leaning and sarcasm in online conversations<br><a href="https://mastodon.world/tags/Tech" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Tech</span></a> <a href="https://mastodon.world/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://mastodon.world/tags/ChatGPT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ChatGPT</span></a> <a href="https://mastodon.world/tags/GPT4" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPT4</span></a> <a href="https://mastodon.world/tags/EmotionDetection" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>EmotionDetection</span></a> <a href="https://mastodon.world/tags/SarcasmDetection" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SarcasmDetection</span></a> <a href="https://mastodon.world/tags/PoliticalBias" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PoliticalBias</span></a> <a href="https://mastodon.world/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LanguageModels</span></a> <a href="https://mastodon.world/tags/TechEthics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TechEthics</span></a> <a href="https://mastodon.world/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.world/tags/AIResearch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AIResearch</span></a> <a href="https://mastodon.world/tags/FutureOfAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FutureOfAI</span></a><br><a href="https://the-14.com/ai-might-now-be-as-good-as-humans-at-detecting-emotion-political-leaning-and-sarcasm-in-online-conversations/" rel="nofollow noopener" translate="no" 
target="_blank"><span class="invisible">https://</span><span class="ellipsis">the-14.com/ai-might-now-be-as-</span><span class="invisible">good-as-humans-at-detecting-emotion-political-leaning-and-sarcasm-in-online-conversations/</span></a></p>
Hacker News<p>The Dangers of Stochastic Parrots: Can Language Models Be Too Big?</p><p><a href="https://dl.acm.org/doi/10.1145/3442188.3445922" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">dl.acm.org/doi/10.1145/3442188</span><span class="invisible">.3445922</span></a></p><p><a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/StochasticParrots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>StochasticParrots</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LanguageModels</span></a> <a href="https://mastodon.social/tags/AIethics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AIethics</span></a> <a href="https://mastodon.social/tags/TechDebate" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TechDebate</span></a> <a href="https://mastodon.social/tags/BigData" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>BigData</span></a></p>
rijo<p>ICYMI: AI models fake understanding while failing basic tasks <a href="https://ppc.land/ai-models-fake-understanding-while-failing-basic-tasks/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">ppc.land/ai-models-fake-unders</span><span class="invisible">tanding-while-failing-basic-tasks/</span></a> <a href="https://frankfurt.social/tags/AImodels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AImodels</span></a> <a href="https://frankfurt.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://frankfurt.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MachineLearning</span></a> <a href="https://frankfurt.social/tags/MITResearch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MITResearch</span></a> <a href="https://frankfurt.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LanguageModels</span></a></p>
PPC Land<p>ICYMI: AI models fake understanding while failing basic tasks: MIT research reveals language models can define concepts but cannot apply them consistently <a href="https://ppc.land/ai-models-fake-understanding-while-failing-basic-tasks/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">ppc.land/ai-models-fake-unders</span><span class="invisible">tanding-while-failing-basic-tasks/</span></a> <a href="https://mastodon.social/tags/AImodels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AImodels</span></a> <a href="https://mastodon.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://mastodon.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MachineLearning</span></a> <a href="https://mastodon.social/tags/MITResearch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MITResearch</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LanguageModels</span></a></p>
.:\dGh/:.<p>Sometimes I feel vindicated when I ask the <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> to do something with code, just for it to give me an idea of a better solution.</p><p>I mean, if the idea is to make me smarter or point me in the right direction, current <a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a>s are doing that with entertaining success.</p><p><a href="https://mastodon.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://mastodon.social/tags/LargeLanguageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LargeLanguageModels</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LanguageModels</span></a></p>
Agustin V. Startari<p>From Obedience to Execution: Structural Legitimacy in the Age of Reasoning Models<br>When models no longer obey but execute, what happens to legitimacy?</p><p>Core contributions:<br>• Execution vs. obedience in LLMs<br>• Structural legitimacy without subject<br>• Reasoning as authority loop</p><p>🔗 Full article: <a href="https://zenodo.org/records/15635364" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">zenodo.org/records/15635364</span><span class="invisible"></span></a><br>🌐 Website: <a href="https://www.agustinvstartari.com" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="">agustinvstartari.com</span><span class="invisible"></span></a><br>🪪 ORCID: <a href="https://orcid.org/0009-0002-1483-7154" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">orcid.org/0009-0002-1483-7154</span><span class="invisible"></span></a></p><p><a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.social/tags/Execution" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Execution</span></a> <a href="https://mastodon.social/tags/StructuralLegitimacy" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>StructuralLegitimacy</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LanguageModels</span></a> <a href="https://mastodon.social/tags/AlgorithmicPower" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AlgorithmicPower</span></a> <a href="https://mastodon.social/tags/Authority" class="mention hashtag" rel="nofollow 
noopener" target="_blank">#<span>Authority</span></a> <a href="https://mastodon.social/tags/Epistemology" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Epistemology</span></a></p>
.:\dGh/:.<p>Good news for tech bros: training AI on copyrighted work is legal…</p><p><a href="https://www.cnbc.com/2025/06/24/ai-training-books-anthropic.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">cnbc.com/2025/06/24/ai-trainin</span><span class="invisible">g-books-anthropic.html</span></a></p><p>…as long as the copyrighted works are not reproducible. Maybe <a href="https://mastodon.social/tags/Midjourney" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Midjourney</span></a> and similar tools are not doomed after all?</p><p><a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.social/tags/LM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LM</span></a> <a href="https://mastodon.social/tags/LanguageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LanguageModels</span></a> <a href="https://mastodon.social/tags/LargeLanguageModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LargeLanguageModels</span></a> <a href="https://mastodon.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://mastodon.social/tags/Legal" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Legal</span></a> <a href="https://mastodon.social/tags/Lawsuit" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Lawsuit</span></a></p>