@moultano they all did as far as I know, but some are more secretive than others, so confirmation is worth reporting.

@Riedl @moultano It is funny to see OpenAI just not mention it and reap the benefits. I've had people sincerely praise OpenAI for how it treats artists relative to Stability, which is imo just a hilarious PR gap.

@moultano I'm getting probably close to using 1 million images with my brain, personally. Is that better or worse?

@Adverb I don't know. I'm mostly staying out of the ethics of all this because I don't want to be subpoenaed in some future litigation against my employer, and because I find it all genuinely confusing.

@Adverb Every analogy people make requires AI art to be "like" something else, and we're just arguing about which thing it's "like." But I don't think it's like anything else.

@moultano it's fair to say that analogy is not exact here (Dryhurst talks about this too), though I think the principle is the same on many fronts.

The scale could never be the same, of course.

@Adverb I would be happier if the models trained with differential privacy. I think that would be closer to the norms of inspiration that artists expect from each other.
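(For anyone unfamiliar with what "trained with differential privacy" means in practice: the standard recipe is DP-SGD from Abadi et al. 2016, which clips each per-example gradient and adds calibrated Gaussian noise. A minimal toy sketch, assuming a linear model with squared error; the names `C` and `sigma` and the model are illustrative, not from any post in this thread:)

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, C=1.0, sigma=1.0, rng=None):
    """One DP-SGD step: clip each per-example gradient to L2 norm C,
    average, add Gaussian noise scaled to C, then take a gradient step."""
    rng = rng or np.random.default_rng(0)
    # per-example gradients of squared error for a linear model w
    grads = [2 * (xi @ w - yi) * xi for xi, yi in zip(X, y)]
    clipped = []
    for g in grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, C / max(norm, 1e-12)))  # clip to norm C
    # noise stddev is calibrated to the clipping bound C, not the raw gradients
    noise = rng.normal(0.0, sigma * C / len(X), size=w.shape)
    noisy_grad = np.mean(clipped, axis=0) + noise
    return w - lr * noisy_grad
```

The clipping bound is what gives the formal guarantee: no single training example (artwork) can move the model by more than a fixed amount, which is roughly the "inspiration, not copying" norm being discussed above.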

@moultano
To the end of preventing memorization?

I feel like it's already past human artists in some ways by having a similarity-queryable dataset if people are worried.

And that post by OpenAI on deduplication and (ironically) the one on Stable Diffusion's regurgitation, which notes the ImageNet LDM shows no significant memorization, make me very not-worried about memorization.
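(The "similarity-queryable dataset" idea above is essentially nearest-neighbor search over precomputed image embeddings, e.g. CLIP vectors. A minimal sketch with cosine similarity; the random vectors stand in for real embeddings and are purely illustrative:)

```python
import numpy as np

def top_k_similar(query, embeddings, k=3):
    """Return indices and cosine similarities of the k embeddings
    closest to the query vector."""
    # normalize rows so plain dot products become cosine similarities
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = emb @ q
    idx = np.argsort(-sims)[:k]  # indices of the k largest similarities
    return idx, sims[idx]
```

This is the capability human artists don't offer: anyone can ask "which training images does this output resemble?" and audit the answer.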

@Adverb @moultano I should already know those, but if you wanted to toss in a couple of links I would bookmark them.

@TedUnderwood @moultano They LITERALLY CANNOT DETECT REPLICATION WITH IMAGENET!!! (Pardon my screaming.)

But nobody bothers reading the paper 🙃

@Adverb @moultano @TedUnderwood Yes, the framing in the abstract and intro doesn’t really match the results.

@lowd @moultano @TedUnderwood Yeah! Not to mention the model card for stable diffusion explicitly states this issue!

But because of this paper, many people think this was some hidden secret. I was arguing over this with an ML industry person just yesterday.

@Adverb @moultano Yes, actually I do remember reading that paper and thinking "hmm — if I'm reading this rightly it's only a problem for small models." But no one talked about it that way so I thought I was crazy.

@TedUnderwood @moultano at this point it's a tidal wave of "For real?" wherever I go :(

@Adverb @moultano And then in the last section they introduce a concept of "style copying" which seems to muddy the waters rather a lot.

@TedUnderwood @moultano Yeah, I have viewed LAION too many times to buy that one as a fair standard for any entity.

@TedUnderwood @moultano DALL-E 2 usage also reassures me a ton here.

Otoh I'd be down to see people do differential privacy here: it just feels like nothing will ever count as enough on the data sourcing side :/