I get where you’re coming from, and I’ve had similar thoughts after actually testing a couple of these tools out of curiosity. From a purely technical angle, the AI models are fascinating, but the ethical side is where it gets complicated fast. I spent some time reading through the explanations and stated limitations on sites like https://clothoff.ai/, and what stood out to me wasn’t the results, but the disclaimers and rules they try to put in place. They usually say uploads must be consensual and that neither public figures nor private individuals should be targeted, but in practice that’s hard to enforce.
From my experience moderating a small online community, rules only work when there’s accountability. With undress AI, accountability is weak because uploads are often anonymous. I think the ethical line should be clear: no real person’s image should be processed without explicit, ideally documented, permission. Otherwise, it stops being tech experimentation and becomes a violation of someone’s privacy. I’m not against the technology itself, but I do think platforms should slow down development until safeguards are genuinely effective, not just written into the terms of service.