I'm not shocked that Musk would try to manipulate Grok to vent his opinions. We've seen his manipulation before in the Twitter recommender system.
I am kind of fascinated by how difficult it is. I think that broadly trained LLMs tend to converge to the same "worldview", which is mostly left-libertarian.
You can't really force them away from this on one issue without introducing inconsistencies and breaking the guardrails.
https://www.theguardian.com/technology/2025/may/14/elon-musk-grok-white-genocide
The problem with MAGA-style talking points (and other propaganda) is that it usually forms a wildly inconsistent worldview, requiring lots of willful blindness.
LLMs can't have this kind of selective attention because they are trained on such a broad array of domains. If you want to be consistent across the board, it eliminates a lot of worldviews.
It's a bit like all those efforts to build conservative clones of Wikipedia. That kind of large-scale collaborative effort is going to expose the differences between editors.
Having a common enemy is fine for creating political unity among (angry) people with fundamentally different views. But it's not enough when you're building a massive collaborative information repository. Then you need actual agreement on fundamentals.
My guess is that LLMs will always require this consistency.