The ability of AI to filter images and video for NSFW content is steadily improving, with success rates now exceeding 90 percent in many cases. Major platforms such as Facebook, Instagram, and Twitter employ state-of-the-art NSFW AI to process billions of photos and videos with minimal human involvement. In its 2022 transparency report, Facebook disclosed that its NSFW AI automatically detects and removes over 99.5 percent of adult content before any user reports it.
This NSFW (Not Safe For Work) AI relies on neural networks and automated machine learning systems, many of them trained on thousands of labeled images. To identify explicit content, these models look for specific input patterns, including skin tone, body-part configuration (particularly the midriff and the silhouette of the buttocks), and other visual cues common in adult material. According to research from Stanford University's AI lab, backed up by other studies, a machine learning image recognition system can classify content within milliseconds, which means platforms can stop inappropriate material from spreading widely through user feeds. By filtering explicit content out of circulation this quickly, social media platforms protect younger and more vulnerable audiences, in line with digital safety standards.
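As a rough illustration, here is a minimal sketch of what such a millisecond-scale filtering step can look like in PyTorch. The model file name, the output layout, and the 0.85 threshold are assumptions made for the example, not any platform's actual model or settings.

```python
# Minimal sketch of an NSFW image filter using PyTorch.
# "nsfw_classifier.pt" and the 0.85 threshold are illustrative
# assumptions, not a real platform's model or configuration.
import torch
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),          # common CNN input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

model = torch.jit.load("nsfw_classifier.pt")  # hypothetical TorchScript model
model.eval()

def is_explicit(image_path: str, threshold: float = 0.85) -> bool:
    """Return True if the model's 'explicit' probability exceeds the threshold."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)      # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)                 # assumed output: (1, 2) -> [safe, explicit]
        prob_explicit = torch.softmax(logits, dim=1)[0, 1].item()
    return prob_explicit > threshold
```

Because inference is a single forward pass on a small tensor, a check like this runs in milliseconds on modern hardware, which is what lets platforms screen content before it ever appears in a feed.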
While this approach is relatively reliable, NSFW AI still struggles to read the context of certain content. For example, images of medical or artistic nudity that are permitted in context may still be flagged as explicit. Facebook made headlines when its AI incorrectly censored a museum's digital exhibit of classical art because of nudity. Incidents like this show that the AI is not tuned into context, a gap that human moderators are, in theory, supposed to fill. Many companies combat this issue with multi-layered AI systems that include contextual filters. According to a 2021 report from the International Association for AI Moderation, these filters have been shown to reduce false positives by as much as 30 percent.
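One way to picture such a multi-layered system is a first-stage classifier whose borderline flags can be overruled or escalated by contextual scores. The score names, thresholds, and routing logic below are illustrative assumptions, not a documented production design.

```python
# Hedged sketch of a multi-layered moderation pipeline: a fast NSFW
# classifier followed by a contextual filter that can rescue likely
# false positives (e.g. medical or artistic nudity). All names and
# thresholds are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    HUMAN_REVIEW = "human_review"

@dataclass
class Scores:
    explicit: float      # first-stage NSFW probability
    artistic: float      # contextual model: likelihood of artwork
    medical: float       # contextual model: likelihood of a medical setting

def moderate(scores: Scores) -> Verdict:
    # Stage 1: clearly safe content passes immediately.
    if scores.explicit < 0.50:
        return Verdict.ALLOW
    # Stage 2: contextual filter catches borderline false positives
    # and defers them to a human instead of auto-blocking.
    if scores.explicit < 0.90 and max(scores.artistic, scores.medical) > 0.70:
        return Verdict.HUMAN_REVIEW
    return Verdict.BLOCK
```

Routing borderline cases to human review rather than blocking outright is one plausible way such systems trade a small amount of moderator time for the reported drop in false positives.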
There is a similar reliability concern with false negatives, in which explicit content slips through undetected. Although such cases are a small fraction of the total, they can translate into brand-damage risks and user grievances. To avoid this, platforms continuously retrain their models on recent data, improving accuracy at the identification stage and reducing both false positives and false negatives. According to a Forrester report, regular retraining can deliver up to a 15 percent increase in filtering accuracy, keeping the retrained AI useful even as new content trends arise.
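A hedged sketch of what that continuous retraining step might look like follows, assuming a PyTorch classifier fine-tuned on freshly moderator-labeled data. The schedule, learning rate, and data shapes are assumptions for the example.

```python
# Illustrative sketch of periodic retraining on recently labeled
# content; the dataset, model, and hyperparameters are assumptions
# for the example, not a documented production setup.
import torch
from torch.utils.data import DataLoader, TensorDataset

def retrain(model: torch.nn.Module, recent_images: torch.Tensor,
            recent_labels: torch.Tensor, epochs: int = 3) -> torch.nn.Module:
    """Fine-tune the classifier on the latest moderator-labeled data."""
    loader = DataLoader(TensorDataset(recent_images, recent_labels),
                        batch_size=32, shuffle=True)
    # Small learning rate: nudge the model toward new trends without
    # overwriting what it already knows.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```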
NSFW AI is also a sound investment because it reduces the cost of manual moderation. Twitter, for example, spends north of $200 million a year on moderation, and relying on human moderators to police high-volume but routine content quickly becomes cost-prohibitive. AI filtering reduces these costs by up to 50 percent, making it one of the most cost-effective ways to maintain safe online spaces. These lower costs have been instrumental in allowing smaller companies and startups to integrate AI-based moderation systems, effectively making them potential competitors to the larger platforms in guaranteeing content compliance.
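The arithmetic behind that claim is simple; the short calculation below just applies the 50 percent reduction figure to the reported $200 million spend.

```python
# Back-of-envelope savings estimate using the figures from the text:
# ~$200M/year in moderation spend and an up-to-50% cost reduction.
annual_moderation_cost = 200_000_000   # USD, reported Twitter figure
ai_cost_reduction = 0.50               # upper bound cited above

annual_savings = annual_moderation_cost * ai_cost_reduction
print(f"Estimated annual savings: ${annual_savings:,.0f}")  # $100,000,000
```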
As the technology pushes forward, nsfw ai and similar platforms will keep getting better at discerning explicit content. With continued advances in machine learning models, NSFW AI filtering is on its way to becoming an essential tool that online platforms can use to improve safety and user experience while scaling with confidence.