sigmoid.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A social space for people researching, working with, or just interested in AI!

#LLMs

Continued thread

>>> But inference engines are now out of fashion, because they rely (at least, for now) on human-curated knowledge bases, and lack the facile fluency of #LLMs.

I do not believe that we can achieve #ArtificialGeneralIntelligence #AGI without a semantic layer comprising at least an explicit ontology, a semantic model, and an auditable, sourced set of inference rules.

I think #LLMs are probably a dead end. And I think they are distracting from technologies which could be genuinely useful.

"[Chain of reasoning] reports are untrustworthy on principle: they are plausible explanations for plausible responses, and since the inferences involved are more complex, they burn more compute and carbon per query as well as introducing more mistakes"

This is a particularly offensive point about #LLMs: we actually do have a class of systems, inference engines, which do reason and can provide very clear explanations of how they reached each decision. >>>

helenbeetham.substack.com/p/ar

imperfect offerings · Artificial intelligence is the opposite of education · By Helen Beetham
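To make the contrast concrete, here is a minimal Python sketch of the kind of rule-based inference engine the post describes (the facts and rules are made up for illustration, not taken from any particular system). The point is the auditable trace: every derived fact records exactly which rule and premise produced it.

```python
# A minimal forward-chaining inference engine with an explanation trace.
# Facts and rules are invented for illustration.
facts = {"socrates is human"}
rules = [
    ("socrates is human", "socrates is mortal", "all humans are mortal"),
    ("socrates is mortal", "socrates is not a god", "no mortal is a god"),
]

def forward_chain(facts, rules):
    """Apply rules until no new facts appear; record why each fact holds."""
    trace = {f: "given" for f in facts}
    changed = True
    while changed:
        changed = False
        for premise, conclusion, source in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                trace[conclusion] = f"from '{premise}' via rule: {source}"
                changed = True
    return trace

for fact, why in forward_chain(facts, rules).items():
    print(f"{fact}  <-  {why}")
```

Unlike a chain-of-thought report, this trace is the actual derivation, not a plausible story generated after the fact.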


One of the things I don't see talked about at all with regard to people using AI for everything instead of thinking for themselves, is the decline in cognitive function due to repeated covid infections (and probably undiagnosed long covid).

#AI #tech #technology
Continued thread

#LLM-Generated Invariants for Bounded Model Checking Without Loop Unrolling

dl.acm.org/doi/10.1145/3691620

"We investigate a modification of the classical Bounded Model Checking (BMC) procedure that does not handle loops through unrolling but via modifications to the control flow graph (CFG). A portion of the CFG representing a loop is replaced by a node asserting invariants of the loop[...]."

(continues next toot...)

2/3
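A minimal sketch of the invariant-based check the abstract describes, using the Z3 solver's Python bindings (pip install z3-solver). The loop, the property, and the invariant below are invented for illustration; in the paper's setting the invariant would be proposed by an LLM rather than written by hand. Instead of unrolling the loop, we check that the invariant holds on entry, is preserved by one arbitrary iteration, and implies the assertion at exit:

```python
# Loop under verification (C-like):
#   i = 0; s = 0;
#   while (i < n) { s += 2; i += 1; }
#   assert(s == 2*n);
# Candidate invariant:  s == 2*i  and  0 <= i <= n
from z3 import Int, Solver, And, Implies, Not, unsat

i, s, n = Int("i"), Int("s"), Int("n")
inv = And(s == 2 * i, i >= 0, i <= n)

def valid(claim):
    """A formula is valid iff its negation is unsatisfiable."""
    solver = Solver()
    solver.add(Not(claim))
    return solver.check() == unsat

# 1. Base case: the invariant holds on loop entry (i = 0, s = 0).
base = Implies(And(i == 0, s == 0, n >= 0), inv)

# 2. Inductive step: from an arbitrary state satisfying the invariant,
#    one iteration re-establishes it (the "havoc then assume" idea).
i2, s2 = Int("i2"), Int("s2")
step = Implies(And(inv, i < n, i2 == i + 1, s2 == s + 2),
               And(s2 == 2 * i2, i2 >= 0, i2 <= n))

# 3. Exit: invariant plus negated loop guard implies the assertion.
exit_ok = Implies(And(inv, Not(i < n)), s == 2 * n)

print(all(valid(f) for f in (base, step, exit_ok)))  # expect True
```

If all three checks pass, the assertion holds for every n, without unrolling a single iteration.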

Replied in thread

@gerrymcgovern It will end in a financial crisis. Right now we are in good humor and smiling, but when the crisis hits, many people will be crying and complaining that nobody told them about the issues. They will be losing their money while some billionaires cash in, profiting from the losses of the many.
Hey people, we have been telling you for years that #LLMs don't work the way you believe. We are the "nobody" experts in #AI who did tell you about the issues!

People are like "Here's why AI is the worst thing ever..." and it's just because they had to talk to someone who was confused by AI, as if they haven't been complaining for decades about people who ask questions without googling first.

#ai #llm #chatgpt
Continued thread

👆

AI Is Homogenizing Our Thoughts (June 2025):

newyorker.com/culture/infinite

"With the #LLM “you have no divergent opinions being generated,” Kosmyna said. She continued, “Average everything everywhere all at once—that’s kind of what we’re looking at here.”

"#AI is a technology of averages: [#LLMs] are trained to spot patterns across vast tracts of data; the answers they produce tend toward consensus, both in the quality of the writing [...] and in the calibre of the ideas."

1/2

The New Yorker · A.I. Is Homogenizing Our Thoughts · By Kyle Chayka
Continued thread

This article explains it quite nicely. An LLM is not an artificial mind but rather a zillion-dimensional map of the abstract vector space of all the texts on which it has been trained. Every possible text is a single one-dimensional path through that space. The LLM treats your input as the beginning of a path through that landscape and tries to continue it.

Likewise, an image generator is a zillion-dimensional map of all the images in its training data set. It doesn't know anything about the world; it just tries to find the coordinates of a point in the vector space of all possible images that matches your prompt.

psychologytoday.com/us/blog/th

Psychology Today · AI's Hidden Geometry of Thought · AI doesn't replicate human thought—it bypasses it entirely, operating in a way that diverges from our minds even as we keep trying to map it onto ourselves.
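The "path through the map" picture can be made concrete with a toy next-token generator. This is a deliberately crude sketch, not a real LLM, and the corpus is made up; but it shows the same mechanism: treat the prompt as the start of a path and extend it one step at a time, using only patterns seen in training.

```python
# Toy illustration of path continuation (not a real LLM).
import random

# A miniature "map" learned by counting bigrams in a tiny corpus.
corpus = "the cat sat on the mat . the cat ate ."
tokens = corpus.split()
bigrams: dict[str, list[str]] = {}
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def continue_path(prompt: str, steps: int = 6) -> str:
    """Extend the prompt one token at a time by sampling a seen successor."""
    out = prompt.split()
    for _ in range(steps):
        successors = bigrams.get(out[-1])
        if not successors:  # dead end in the map: stop
            break
        out.append(random.choice(successors))
    return " ".join(out)

print(continue_path("the cat"))  # e.g. "the cat sat on the mat ."
```

A real LLM replaces the bigram table with a neural network over a vastly higher-dimensional space, but the operation is the same: continue the path, nothing more.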
Continued thread

If we ever see a real artificial mind, some kind of LLM will probably be a small but significant component of it, but the current wave of machine learning will most likely come to a grinding halt very soon because of a lack of cheap training data. The reason why all of this is happening now is simple: the technologies behind machine learning have been around for decades, but computers weren't fast enough and didn't have enough memory for those tools to become really powerful until the early 2000s. Around the same time, the Internet went mainstream and got filled with all kinds of data that could be datamined for training sets.

Now there is so much synthetic content out there that automated data mining won't work much longer; you need humans to curate and clean the training data, which makes the process slow and expensive. I expect to see another decades-long AI winter after the commercial hype is over.

If you are looking for real intelligence, look at autonomous robots and computer-game NPCs. There you can find machine learning and artificial neural networks applied to actual cognitive tasks in which an agent interacts with its environment. Those things may not even be as intelligent as a rat yet, but they are actually intelligent, unlike LLMs.

#llm #LLMs #ai
Continued thread

Patrick Hall, who teaches #data #ethics & #MachineLearning at George Washington University, said he's not surprised #Grok ended up spewing toxic #content, given that the #LLMs that power #chatbots are initially trained on unfiltered online data.

“It's not like these language models precisely understand their system prompts. They're still just doing the statistical trick of predicting the next word.” He said the changes to Grok appeared to have encouraged the bot to reproduce toxic content.
#tech

LLMs are not really intelligent; they just create synthetic text by reproducing patterns from their training data. They can be incredibly powerful tools, but the way they are commonly used nowadays is less than useful; it's quite counterproductive. They don't have any kind of cognition, and there is no mind in there; all the perceived intelligence exists in the language itself and in the structure of the texts the LLM has been trained on.
However, by fooling yourself into thinking that the LLM is actually intelligent, you are putting yourself in psychological danger.

I don't say, "Don't use LLMs!" They are still powerful tools or cute toys; you just need to remind yourself that you aren't talking to anybody. It's just a text synth that doesn't know what it's doing, a text synth that doesn't know anything but the texts it has eaten, and that generates new texts to look similar to the ones in the training data. Don't listen to the advertising; they're just trying to sell you some rubbish that can't do what they promise you.

Also, if you use LLMs, try to use smaller ones that can run on a local machine, and install them there. Those don't use insane amounts of energy, they don't archive everything you tell them for data mining, and they don't even need an Internet connection in order to work, unless you want them to search the web for information.

psychologytoday.com/us/blog/th

Psychology Today · AI and the Architecture of Anti-Intelligence · AI is different enough from us to redefine intelligence itself without ever understanding what that means.
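For what it's worth, running a small model locally can be as simple as the following sketch, assuming the Hugging Face transformers library is installed ("distilgpt2" is just one small example model, and the prompt and sampling parameters are arbitrary). The weights are downloaded once and cached, after which generation runs entirely offline:

```python
# A sketch of local text generation with Hugging Face transformers.
from transformers import pipeline

# Small model; weights are fetched on first use, then served from cache.
generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "The fediverse is",
    max_new_tokens=40,   # keep generations short and cheap
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.8,
)
print(result[0]["generated_text"])
```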
Replied in thread

@josephcox wrote

Turns out if you just use a bunch of big words, you can trick LLMs into telling you how to (redacted) or hack an ATM.

That's a totally predictable result, sadly. LLMs don't do higher symbolic reasoning, just sophisticated word association.

Their manual alignment/safety training includes probing with harmful prompts ("red teaming," IIRC), but it's easy to bypass those guardrails if you're creative: for example, I tricked Google Gemini into giving reckless medical "advice" by adopting the language and tone of a physician consulting about a patient rather than a patient asking about their own health.

Basically, any high-school improv kid should be able to beat their protections by play-acting different roles and personalities.

AI is pressuring education in a way hardly ever seen in past decades (perhaps with the exception of the COVID lockdowns). The discussion about this is hard and very multifaceted, but I think it may also be an opportunity to rethink the way we teach and how we engage with students.

As an educator and a user, I put together a presentation to start a dialogue at our department. My aim was not to give answers, but to provide some tools and resources to raise awareness, experiment, share experiences, and brainstorm solutions.

Since a few colleagues have been asking for the slides and for permission to share them, I have finally decided to update them a bit and post them on my blog.

You can find them here with a bit of extra commentary: mseri.me/ai-in-education-some-

A fractal spectrum of tales · AI in Education - Some food for thought · What are we supposed to do about GenAI in (mathematical) education? Many of us have been wrestling with it over the past few months. I certainly don’t have a definitive answer, but it’s a topic that demands we share what we’re thinking and what we’re seeing. If anything, it may bring a felt and needed opportunity to finally be able to rethink and reshape the way our education is organized. We discussed this in less uncertain times both in It’s Not Just Numbers and various episodes of Degrees of Freedom, our podcasts on mathematics and education respectively.