sigmoid.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A social space for people researching, working with, or just interested in AI!

Server stats:

586
active users

#AISecurity

2 posts2 participants0 posts today

Microsoft paid a record $17M to 344 security researchers across 59 countries over the past year 🛡️💰
1,469 valid reports helped fix 1,000+ security flaws across Windows, Azure, Xbox, 365 & more.
Highest single bounty: $200K.

AI & identity systems now see expanded bounty scopes.

@serghei
@BleepingComputer

bleepingcomputer.com/news/micr

Replied in thread

@dangoodin

Weird thing I observed in #infosec
There is an incredible amount of disinterest/contempt for #AI amongst many practitioners.

This contempt extends to willful ignorance about the subject.
q.v. "stochastic parrots/bullshit machines" etc.

Which, in a field with hundreds of millions of users, strikes me as highly unprofessional. Just the other day I read a blog post by a renown hacker (and likely earned a mute/block) "Why I don't use AI and you should not too".

Connor Leahy, CEO of #conjecture is one of the few credible folks in the field.

But to the question at hand.
The prompts are superbly sanitised.
In part by design, in part due to the fact that you are not connecting to a database but to a multidimensional vector data structure.

The #prompt is how you get in through the backdoor. Though I haven't looked into fuzzing, but I suspect because of the tech, the old #sqlinjection tek and similar will not work.

Long story short; It is literally impossible to build a secure #AI. By the virtue of the tech.
#promptengineering is the key to open the back door to the knowledge tree.

Then of course there are local models you can train on your own datasets. Including a stack of your old #2600magazine

💥 3 Days. 4 Elite Trainings. Unlimited AppSec Growth.

Join us in Washington, D.C., Nov 3–5, 2025 for immersive, hands-on 3-day sessions at OWASP Global AppSec USA:
⚡ Threat Modeling with AI – Adam Shostack
⚡ AI Security for Developers – Jim Manico
⚡ Attacking & Defending Cloud Apps – AWS, Azure, GCP
⚡ Full-Stack Pentesting Lab – 100% hands-on + lifetime access

Register: owasp.glueup.com/event/131624/

We’re excited to welcome Simran Kaur to the BSides Vancouver Island 2025 speaker lineup! With over 15 years of experience in the IT industry, Simran is a force in cybersecurity and AI-driven innovation. Her expertise spans LLMOps, cloud security, risk management, and beyond all grounded in building secure, resilient systems. 🔐⚙️

This year, she’ll be taking us into the evolving world of AI security with her talk: “Navigating AI Security: Identifying Risks and Implementing Mitigations”. Get ready to explore the hidden vulnerabilities of AI systems and walk away with actionable insights to defend against emerging threats. 🧠⚠️

You won’t want to miss this one!
#BSidesVI2025 #victoriabc #vancouverisland #techconferencespeaker #artificialintelligence #Cybersecurity #AIsecurity

⚠️ How secure are the #AI systems used by your company? #EchoLeak has really been putting a spotlight on this issue—and the reality is that AI tools pose many risks to businesses. 😨

"AI tools are increasingly being embedded deep into business infrastructure — often alongside 'vague policies' or 'limited visibility into how they process and store data'", says Robert Rea, chief technical officer at #Graylog.

Kate O'Flaherty spoke about this challenge with several industry experts, in addition to Graylog's Robert Rea, including:
👉 Lillian Tsang at Harper James
👉 Emilio Pinna at SecureFlag
👉 Joseph Thompson at Birketts LLP
👉 Sam Peters at ISMS.online

Learn about what it means to be risk aware when it comes to AI tools, and how to strengthen the AI governance strategies at your org. 👇

isms.online/cyber-security/ech #cybersecurity #artificialintelligence #AIsecurity #AItools #agenticAI

While many (according to my timeline) #infosec practicioners are still stuck in the "Lolz #AI Stochastic Parrot bullshit machine" rut of professional denial and ignorance. Some steely eyed infosex (!) practitioners are actively working on developing tools for #AISecurity

#SacroML is a tool to attack ML to exfiliate potential damaging data.

Paper:
arxiv.org/html/2212.01233v3

Repo:
github.com/AI-SDC/SACRO-ML

Kudos to the SacroML team to take the emerging threat of AI seriously, rather than embarassingly contributing to the clownshow.

arxiv.orgSafe machine learning model release from Trusted Research Environments: The SACRO-ML package

🚨 NEW Weekly Series Alert! 🚨

I’m excited to launch the Cybersecurity Weekly Roundup—a new series where I’ll share the top cybersecurity news stories every Friday.

Each week, I’ll curate the biggest incidents, emerging threats, critical vulnerabilities, and key industry insights—all from trusted cybersecurity sources like CISA, MITRE, The Hacker News, and more.

🛡️ Whether you're a cybersecurity pro, IT leader, or just security-curious, this roundup will help you:

Stay ahead of ransomware trends

Monitor critical vulnerabilities and patch releases

Learn about new threat actor campaigns

Track shifts in AI, ICS/OT, and post-quantum security

Every article includes a concise, expert-written summary designed to save you time and deliver actionable insights.

👉 Check out the first edition on the blog today!
🔗 weblog.kylereddoch.me/2025/07/

Follow me for weekly updates and stay cyber-resilient! 🔒

weblog.kylereddoch.me🛡️ Welcome to the Cybersecurity Weekly Roundup - Kyle's Tech Korner
More from CybersecKyle