#sovereignai

AI runs on data, but whose data? For too long, Africa’s languages, cultures, and communities have fueled global AI models with little say in return. This is data colonialism: when information flows out, power and ownership don’t come back.

The alternative? Sovereign AI — systems designed with African realities at their core. Think local language models, ethical frameworks built from cultural values, and infrastructures that put data ownership back in the hands of Africans.

This shift isn’t optional. If Africa is to thrive in the AI age, we must shape models that don’t just include us but are built by us, for us.

"Rising tensions between the U.S. and China, alongside fears of being left behind in the AI race, have spurred governments from Seoul to São Paulo to prioritize sovereign AI — the ability to produce AI with their own data, infrastructure, workforce, and networks — which officials say is critical to national security.

Big tech companies have responded by offering sovereignty as a service. Nvidia has made deals with countries including Thailand, Vietnam and the United Arab Emirates, while Microsoft has agreements with the UAE and others, and Amazon Web Services has a European “sovereign cloud.” Huawei, meanwhile, is courting Peru, Indonesia, and other Chinese allies.

But in entering these deals, nations risk locking themselves into long-term dependencies on foreign architectures, chips, and other export-controlled technologies that can undermine their sovereignty and their ambition, Rui-Jie Yew, a doctoral student at Brown University who researches AI policy, told Rest of World."

restofworld.org/2025/chinese-u

Rest of World · The myth of sovereign AI: Countries rely on U.S. and Chinese tech. As countries pursue self-sufficiency in AI, they risk depending on foreign companies, undermining their independence and their goals.

Big News! The completely #opensource #LLM #Apertus 🇨🇭 has been released today:

📰 swisscom.ch/en/about/news/2025

🤝 The model supports over 1000 languages [EDIT: an earlier version claimed over 1800] and respects opt-out consent of data owners.

▶ This is great for #publicAI and #transparentAI. If you want to test it for yourself, head over to: publicai.co/

🤗 And if you want to download weights, datasets & FULL TRAINING DETAILS, you can find them here:
huggingface.co/collections/swi

🔧 Tech report: huggingface.co/swiss-ai/Apertu

After #Teuken7b and #Olmo2, Apertus is the next big jump in capabilities and performance of #FOSS #LLMs, while also improving #epistemicresilience and #epistemicautonomy with its multilingual approach.

I believe that especially for sensitive areas like #education, #healthcare, or #academia, there is no alternative to fully open #AI models. Everybody should start building upon them and improving them.

Excellent op-ed by #CSIRO Chair Ming Long AM and Deputy CEO, Professor Elanor Huntington @profElanor on #SovereignAI and the double bind of #data and #AI.

This position carries particular weight given their roles as Deputy CEO and Chair of CSIRO, and echoes recent calls from like-minded heavyweights such as Dr Alex Antic and Simon Kriss.

What we need now is concerted action, funding, and collaboration across the government, academic and industry sectors to move forward before the level of dependence is intractable.

The current efforts toward sovereign #AI in Australia are deeply problematic for a range of reasons.

Business has taken the lead - sensing the market opportunity and profits to be had by developing sovereign models - then selling them back to government and academia.

They're bottle-necked by access to sovereign data for training.

Kangaroo LLM's approach was to scrape all .au websites - without consent from site owners, using volunteer labour pitched as #OpenSource. Maincode is working on Mathilda - pitched as "Australia's LLM". They're hiring ML and NLP PhDs. They're not transparent about how they're collecting data for Mathilda, but claim to have partnerships with government, and to be working on profit-sharing models for entities that provide data for training. Maincode is bootstrapped by a founder who made their money in online gambling.

Academia sees the problem - but is too cash-strapped and in survivability mode to act unilaterally - and needs to partner with industry and government to have any chance of steering sovereign AI ethically, transparently, sustainably and responsibly.

Government is taking policy advice - or should that be, having policy written for them - by the tech giants who gain the most from Australia NOT having sovereign AI capabilities - such as the Tech Council of Australia, which is funded by corporations who now seek a return on their massive up-front investment in training foundation models. We've even seen pitches recently to get governments to fund LLM use for every citizen - on the promise of as-yet-unevidenced productivity gains.

Imagine - every Australian citizen providing training data for a foreign-owned corporation - and the government paying for it!

So, what do we need?

Strong regulation. Access to sovereign data that is legal and compensated. Certainty for business on selling access to sovereign models to government. Well-funded AI centres in universities to provide talent pipelines.

And most of all? We need to back ourselves.

(Note: the term "sovereign" is problematic in Australia - as Keir Winesmith so well articulated to me - because sovereignty was never ceded - but we don't have a better term yet.)

afr.com/technology/australian-

Australian Financial Review · AI revolution: Australian business faces a double-bind on AI and data. By Elanor Huntington, Ming Long

🇨🇭 Switzerland is taking a public-interest-first approach to LLMs. EPFL, ETH Zurich, and CSCS are building a fully open, multilingual language model trained on public infrastructure, and they’re releasing it under Apache 2.0 this summer. This one isn’t just open weights. It’s 100% transparent: source, data, training methods. And trained on 15T tokens across 1,500+ languages using a carbon-neutral supercomputer (Alps), it’s a real shot at sovereign AI that serves scientific, civic, and commercial needs without the lock-in.

TL;DR
🌍 Fluency in 1,000+ languages
🧠 Open 8B and 70B param models
⚡ Alps supercomputer, 100% green
🔓 Fully open: data, code, weights
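For a rough sense of what those two sizes mean in practice, here is a back-of-the-envelope sketch of weight memory for 8B and 70B parameter checkpoints. The bytes-per-parameter figures (2 for bf16, 1 for int8, 0.5 for 4-bit quantization) are standard assumptions of mine, not numbers from the announcement, and ignore activation and KV-cache memory:

```python
# Rough weight-memory estimate for the two announced model sizes.
# Assumptions (mine, not from the post): 2 bytes/param in bf16,
# 1 byte/param in int8, 0.5 bytes/param with 4-bit quantization.
# Activations and KV cache add more on top of these figures.

def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9

for name, n in [("8B", 8e9), ("70B", 70e9)]:
    bf16 = weight_memory_gb(n, 2)
    int8 = weight_memory_gb(n, 1)
    q4 = weight_memory_gb(n, 0.5)
    print(f"{name}: ~{bf16:.0f} GB bf16, ~{int8:.0f} GB int8, ~{q4:.0f} GB 4-bit")
```

In other words, the 8B model quantized to 4 bits fits on consumer hardware, while the 70B model in bf16 needs multi-GPU or heavily quantized setups.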

ethz.ch/en/news-and-events/eth
#opensourceAI #multilingualAI #sovereignAI #SwissTech #Freedom #AI

ETH Zurich · A language model built for the public good. ETH Zurich and EPFL will release a large language model (LLM) developed on public infrastructure. Trained on the “Alps” supercomputer at the Swiss National Supercomputing Centre (CSCS), the new LLM marks a milestone in open-source AI and multilingual excellence.

Today in #sovereignAI
Migration complete: way faster, and now with audio. Models get a tad slower if you use RAG, but we’ll fix it next week.

Weirdly, audio works on iPhone as a webpage, but not as a webapp. Of course, Apple.

AI chat types:

🎯 Project
For ideas that need planning, scope, roadmap
Agent: Project Manager or more specific (lawyer, accountant, doctor)

🏔️ Learning
For topics I wanna learn
Agent: Study Wizard

📘 Book
Companion for books I’m reading
Agent: Study Wizard

🌀 Reflection
Integration, confessions, analysis
Agent: Therapist, oracle reader

✅ Secretary
A main agent integrated with calendar, home automation, work systems, TODOs, documentation
Agent: Secretary or more specific agent