NSFW AI systems are an innovative approach to content moderation, but they come with real drawbacks. The primary risk is an elevated rate of both false positives and false negatives. According to a study from the University of California, Berkeley, unmitigated NSFW solutions can misclassify as much as 25% of content, either flagging non-explicit material as unacceptable or missing genuinely explicit images. This mislabeling degrades the user experience and creates openings for abuse, especially on platforms handling large volumes of content.
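To make the trade-off concrete, here is a minimal sketch of how false positive and false negative rates are typically measured for a binary moderation classifier. The labels and predictions are invented placeholders, not data from the Berkeley study:

```python
# Minimal sketch: measuring false positive / false negative rates for a
# binary NSFW classifier. The labels and predictions below are invented
# placeholders, not data from the Berkeley study.

def error_rates(y_true, y_pred):
    """y_true / y_pred: lists of 0 (safe) or 1 (explicit)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives  # FPR, FNR

y_true = [0, 0, 1, 1, 0, 1, 0, 0]   # ground-truth labels
y_pred = [0, 1, 1, 0, 0, 1, 0, 0]   # model outputs
fpr, fnr = error_rates(y_true, y_pred)
print(f"false positive rate: {fpr:.0%}, false negative rate: {fnr:.0%}")
```

A false positive silently suppresses legitimate content, while a false negative lets explicit material through, which is why both rates matter for the mislabeling problem described above.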
Another disadvantage is the significant development and maintenance cost. Building cutting-edge NSFW AI models requires enormous computational power: training a large language model on the scale of GPT-3, which has about 175 billion parameters, can cost more than $10 million. For smaller companies, or those just starting out, integrating such advanced content moderation tools can impose a heavy financial burden.
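As a rough illustration of why these costs climb so quickly, the following back-of-envelope calculation uses the common heuristic of roughly 6 FLOPs per parameter per training token. The GPU throughput and hourly price are illustrative assumptions, not vendor quotes:

```python
# Back-of-envelope training cost estimate using the common heuristic of
# ~6 FLOPs per parameter per training token. All hardware figures below
# are illustrative assumptions, not vendor quotes.

params = 175e9          # model parameters (GPT-3 scale)
tokens = 300e9          # training tokens (GPT-3 reportedly used ~300B)
flops = 6 * params * tokens

gpu_flops = 30e12       # assumed sustained throughput per GPU (FLOP/s)
gpu_hour_cost = 4.00    # assumed cloud price per GPU-hour (USD)

gpu_hours = flops / gpu_flops / 3600
print(f"total compute: {flops:.2e} FLOPs")
print(f"~{gpu_hours:,.0f} GPU-hours, ~${gpu_hours * gpu_hour_cost:,.0f}")
```

Under these assumptions the estimate lands around $11.7 million, which is why figures above $10 million are plausible for frontier-scale training runs.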
NSFW AI systems also struggle with cultural and contextual sensitivity. What counts as inappropriate content differs from culture to culture, and a model trained predominantly on Western data will not accurately reflect the norms and values of other countries. In the words of MIT's Dr. Sarah Lee, "NSFW AI has to be very explicit and varied in different cultural contexts while being more generalizable across international users."
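One way teams operationalize this point is to make cultural policy explicit in configuration rather than baking a single global threshold into the model. Here is a minimal sketch; the region codes and threshold values are invented for illustration:

```python
# Sketch: locale-aware moderation policy. A single global threshold
# ignores cultural variance; per-region thresholds make the policy
# explicit. All values below are invented for illustration.

REGION_THRESHOLDS = {
    "default": 0.80,
    "US": 0.85,
    "DE": 0.90,
    "SA": 0.60,   # stricter: lower score already triggers a flag
}

def is_flagged(score: float, region: str) -> bool:
    """score: model's probability that the content is explicit."""
    threshold = REGION_THRESHOLDS.get(region, REGION_THRESHOLDS["default"])
    return score >= threshold

print(is_flagged(0.75, "SA"))  # True: stricter regional threshold
print(is_flagged(0.75, "DE"))  # False: more permissive threshold
```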
Another serious downside is privacy. Advanced NSFW AI systems require large quantities of user data, notably data about interactions with other users, and securing that data at scale is difficult. The European Union's General Data Protection Regulation (GDPR) sets strict rules for data handling, but not all NSFW AI implementations are fully compliant, which can create legal and ethical exposure.
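One common mitigation is to pseudonymize user identifiers before they enter moderation logs, so audit trails never retain raw personal data. The sketch below uses keyed hashing (HMAC) for this; it is a data-minimization illustration, not a guarantee of GDPR compliance:

```python
# Sketch: pseudonymizing user identifiers before logging moderation
# decisions. Keyed hashing (HMAC) keeps raw IDs out of logs; key
# management, retention policies, and legal review are still required
# for actual GDPR compliance.

import hmac
import hashlib

SECRET_KEY = b"rotate-me-regularly"   # placeholder; keep in a secret manager

def pseudonymize(user_id: str) -> str:
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def log_decision(user_id: str, content_id: str, verdict: str) -> dict:
    # Only the pseudonym and the verdict are persisted, never the raw ID.
    return {"user": pseudonymize(user_id), "content": content_id, "verdict": verdict}

print(log_decision("alice@example.com", "img_123", "flagged"))
```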
There is also the danger of over-automating content moderation. Over-reliance on AI can erode human oversight, and when something does slip through, the consequences fall squarely on the platform. Facebook's AI moderation, for example, has been criticized for missing explicit content, sparking public outrage and renewed demands for human intervention.
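A widely used safeguard is to keep humans in the loop for low-confidence predictions rather than auto-actioning everything. Below is a minimal sketch of threshold-based routing; the threshold values are illustrative and would be tuned per platform:

```python
# Sketch: confidence-threshold routing so that uncertain model outputs
# go to human reviewers instead of being auto-actioned. The thresholds
# are illustrative and would be tuned per platform.

AUTO_REMOVE = 0.95   # above this score: remove automatically
AUTO_ALLOW = 0.05    # below this score: allow automatically

def route(score: float) -> str:
    """score: model's probability that the content is explicit."""
    if score >= AUTO_REMOVE:
        return "auto-remove"
    if score <= AUTO_ALLOW:
        return "auto-allow"
    return "human-review"   # the ambiguous middle goes to people

for s in (0.99, 0.50, 0.02):
    print(f"score={s:.2f} -> {route(s)}")
```

Narrowing the two automatic bands sends more content to human review, trading moderation cost for the oversight the Facebook episode showed is still necessary.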
Still, while far from perfect, NSFW AI is a clear step forward and shows significant promise for the future of content moderation. For more on the complexity of NSFW AI, visit the nsfw ai articles.