sigmoid.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A social space for people researching, working with, or just interested in AI!


I very often agree with Bruce Schneier. But not today.

If I wanted to make a private agreement through a digital trusted third party, why would I need an LLM?

The examples include comparing salaries. Instead of setting up (and later securely deleting) an LLM, we could just as easily run a function boiling down to
`return a > b;`

No need to involve LLMs, with their inherent uncertainty and susceptibility to prompt injection.
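The salary example can be sketched as a plain function acting as the trusted third party. This is a minimal illustration, not from the post itself; the assumption is that both parties trust whoever hosts the function and that the function discards its inputs afterwards:

```python
def richer(salary_a: int, salary_b: int) -> bool:
    """Trusted-third-party comparison: reveals only who earns more,
    never the salaries themselves. The host is trusted to run exactly
    this code and to retain nothing."""
    return salary_a > salary_b
```

The point stands: the entire "secure computation" here is one deterministic comparison, with no model weights to provision or securely delete.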
#BruceSchneier #LLM #TTP
schneier.com/blog/archives/202

Schneier on Security · AIs as Trusted Third Parties

This is a truly fascinating paper: "Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography." The basic idea is that AIs can act as trusted third parties:

Abstract: We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as achieving certain goals necessitates sharing private data. Traditionally, addressing this challenge has involved either seeking trusted intermediaries or constructing cryptographic protocols that restrict how much data is revealed, such as multi-party computations or zero-knowledge proofs. While significant advances have been made in scaling cryptographic approaches, they remain limited in terms of the size and complexity of applications they can be used for. In this paper, we argue that capable machine learning models can fulfill the role of a trusted third party, thus enabling secure computations for applications that were previously infeasible. In particular, we describe Trusted Capable Model Environments (TCMEs) as an alternative approach for scaling secure computation, where capable machine learning model(s) interact under input/output constraints, with explicit information flow control and explicit statelessness. This approach aims to achieve a balance between privacy and computational efficiency, enabling private inference where classical cryptographic solutions are currently infeasible. We describe a number of use cases that are enabled by TCME, and show that even some simple classic cryptographic problems can already be solved with TCME. Finally, we outline current limitations and discuss the path forward in implementing them...

🎁 GenAI x Sec Advent #15

We previously talked about Incident Response on GenAI Systems. Today, I am going to talk about threat intelligence on GenAI systems with the MITRE ATLAS Matrix (Adversarial Threat Landscape for Artificial-Intelligence Systems). 👇

This project aims to classify the tactics and techniques used against and with GenAI systems. 🤓

I think this is quite interesting, and it also provides early capabilities to hunt for and detect threat actors' operating methods that involve these systems. 🔍

A few months ago, OpenAI released one of the first threat reports documenting threat actors' TTPs on GenAI systems (cdn.openai.com/threat-intellig).

In my opinion, with the broader adoption of GenAI systems in organizations, this kind of classification can offer an early way to understand the new kinds of TTPs used by attackers. 🧐

But one question remains: How do you hunt effectively for these TTPs? 🤔

➡️ atlas.mitre.org/matrices/ATLAS
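One practical first step toward hunting is to tag observed events with ATLAS technique IDs. The sketch below is purely illustrative: the keyword rules and event format are my own assumptions, not an official ATLAS API, and the technique IDs should be verified against atlas.mitre.org before use:

```python
# Hypothetical keyword-to-technique mapping for hunting on GenAI telemetry.
# "AML.T0051" (LLM Prompt Injection) is an ATLAS technique ID as I recall it;
# verify IDs against atlas.mitre.org. The matching logic is a toy assumption.
ATLAS_RULES = {
    "AML.T0051": ["prompt injection", "jailbreak"],
}

def tag_event(event_text: str) -> list[str]:
    """Return the ATLAS technique IDs whose keywords appear in the event."""
    text = event_text.lower()
    return [
        technique_id
        for technique_id, keywords in ATLAS_RULES.items()
        if any(keyword in text for keyword in keywords)
    ]
```

In a real deployment the matching would come from your detection engineering pipeline (regexes, model-based classifiers, API audit logs), but even simple tagging lets you pivot from raw GenAI logs to ATLAS TTPs.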

#cybersecurity #threatintelligence #TTP #threathunting @MITREcorp