
RE: NEW DATA POISONING TOOL LETS ARTISTS FIGHT BACK AGAINST AI - IS THIS A GOOD IDEA?

in #ai · 11 months ago

Training models is an ongoing thing, which is why you have so many new models coming out while old ones are discarded. But since this supposedly came out last October, I have not seen any adverse effects with the various models and prompts I use. The part that is effective in some way is the option for artists to opt out - in fact I've seen this particularly on Deep Dream's Text2Dream: the pick list of artist modifiers has shrunk a bit.
But I think this 'poison' is in its infancy, and could possibly be altered and expanded in a malicious way. Of course, when you download the app you swear on a bible that you won't screw around with it, but would such an agreement bother a hacker?
Just because the developer of the program sees misuse as unlikely doesn't mean it couldn't happen; all it tells me is that he cannot see beyond the edge of his teacup.

I don't think Nobel initially had any bad thoughts about his invention; he just wanted it for defense. He changed his mind rather quickly.