I don’t know if you’ve noticed this, but there are a large number of men on the internet who have problems with their pants. The way I can tell is that first they take a picture of the “affected area” and then share that picture with someone who doesn’t want it. The way I figure it, the men are just trying to say “help, I don’t understand pants!” and asking for technical assistance to correctly apply pants to their crotches.
First off, men: don’t do this.
Secondly: this is almost entirely a problem generated by cisgender men. I will not speculate as to why in this article.
Thirdly: no, I’m not really that clueless, but if you’re going to build products where one user can send another user pictures and you want that product to be built with respect, you are going to need to consider this problem.
Pictures come through a ton of mechanisms: photo albums, social media posts, messages, and file transfers. More than that, pictures are sent in ways you might not originally consider, like profile pictures. Every time I send a text message through a service that shows profile pictures, that picture travels right along with the message. Even plain text can be used to send surprisingly explicit imagery.
Every single one of these mechanisms (and more) has been used to send unsolicited pictures of pants problems. Pants problems are rampant. I asked another woman how many she got and she replied “My unsolicited pants problem is fairly minimal nowadays (like 2-3 a week).” First off, that’s still a lot of pants problems. Secondly, it didn’t go down because of improvements in the platform. It went down because she took a less high-profile job and pretty much entirely stopped engaging on social media.
So if you are designing a product that will work well, you need to take this mode of communication into account.
Note that this is a clear example of why you should be wary of metrics that treat all engagement as good. Such metrics will bite you in the longer term as people retreat from the platform. One element of bad engagement can be far worse than one element of good engagement. It’s difficult to measure, but at the very least keep this in mind as a limitation of engagement and connection metrics.
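To make that asymmetry concrete, here is a toy sketch. Everything in it is invented for illustration (the function names, the "reported" flag, the penalty weight); the point is only that a score which counts all engagement as good can look healthy while a score that penalizes bad engagement asymmetrically goes negative.

```python
# Toy sketch: a naive engagement score vs. one that penalizes
# bad engagement asymmetrically. All names and weights are made up.

def naive_score(interactions):
    # Counts every interaction as equally good.
    return len(interactions)

def respectful_score(interactions, bad_penalty=10):
    # One piece of bad engagement can outweigh many good ones.
    good = sum(1 for i in interactions if not i["reported"])
    bad = sum(1 for i in interactions if i["reported"])
    return good - bad_penalty * bad

interactions = [{"reported": False}] * 8 + [{"reported": True}]
print(naive_score(interactions))       # 9: looks healthy
print(respectful_score(interactions))  # -2: one bad interaction dominates
```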
One problem is that people disagree deeply on what constitutes good engagement. Some communities find certain types of religious statements deeply offensive, while others find the lack of them offensive. Mixing those people willy-nilly will almost certainly have bad effects. Even strangers who can engage on some topics productively (say, flowers) may not be able to engage productively on other topics (say, vaccination). Increasing people’s ability to engage with others is an open and urgent research problem. What we do know is that clashing over a polarizing issue can cause increased attitude polarization, deepening the problem.
At root, consider engagement “good” only when everyone involved would agree that it was good.
The root of the pants problem is that the offensive communication is unsolicited, and unsolicited communication is risky. This is particularly tricky because you will want to welcome genuine new members to the system, even though some percentage of “sock puppets” (people using new accounts for nefarious purposes) will be camouflaged among them. In your design, separate the goal of welcoming people from the goal of preventing harassment. Consider a welcome flow rather than immediately directing communication from a new person at a stranger. Remember that new members will also experience pants problems.
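One way to sketch gating unsolicited contact behind a welcome flow: a brand-new account can’t message strangers until it has finished the welcome flow and aged a little. The account shape, the seven-day threshold, and the idea of routing blocked messages into a moderated queue are all assumptions for illustration, not anything a real platform prescribes.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Account:
    created: datetime
    completed_welcome_flow: bool = False

def can_message_stranger(sender: Account, now: datetime,
                         min_age: timedelta = timedelta(days=7)) -> bool:
    # New accounts must finish a welcome flow and age a little before
    # their messages reach strangers; until then, route them elsewhere
    # (e.g. into a moderated queue) rather than rejecting outright.
    return sender.completed_welcome_flow and (now - sender.created) >= min_age

now = datetime(2024, 1, 10)
fresh = Account(created=datetime(2024, 1, 9))
veteran = Account(created=datetime(2023, 1, 1), completed_welcome_flow=True)
print(can_message_stranger(fresh, now))    # False
print(can_message_stranger(veteran, now))  # True
```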
If possible, strip images from risky communication. A picture is worth a thousand words, and all of them may be offensive. If stripping isn’t completely possible, hide higher-risk images. For example, if Bob sends Alice an invitation to chat out of the blue, consider hiding his profile picture. It may be a sneaky request for assistance with his pants. If Bob uses a service to send pictures to those physically around him, consider hiding the pictures there too. People often wish to send pants problem pictures to strangers, and many men will not balk at sending them even in person.
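A minimal sketch of the profile-picture case: show the sender’s picture only when the recipient already has a relationship with them, and a neutral placeholder otherwise. The contacts store, field names, and placeholder are hypothetical.

```python
def render_invitation(sender_id, recipient_id, profile_pic_url, contacts):
    # If the recipient has no prior relationship with the sender, suppress
    # the sender's picture and show a neutral placeholder instead.
    known = sender_id in contacts.get(recipient_id, set())
    return {
        "from": sender_id,
        "picture": profile_pic_url if known else "placeholder.png",
    }

contacts = {"alice": {"carol"}}
# Bob is a stranger to Alice, so his picture is hidden:
print(render_invitation("bob", "alice", "bob.jpg", contacts)["picture"])
# Carol is a known contact, so hers is shown:
print(render_invitation("carol", "alice", "carol.jpg", contacts)["picture"])
```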
If you have a number of examples of the genre (and if you don’t, I’m sure that women online would happily donate them), consider training a machine learning model to detect possible offenders. Then plan for the annoying fact that your model will have both false positives and false negatives: don’t make a model match equal an automatic takedown. The appropriate action will depend on your application, but might include putting the possibly-offensive picture behind a warning note (with the picture visible only on click-through) or sending the picture to a human who can determine whether it is inappropriate for the context.
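A sketch of what “a match is not a takedown” can look like in practice: the model score only changes how the image is delivered, never whether it exists. The thresholds here are purely illustrative and would need tuning against your own false positive and false negative rates.

```python
def route_image(model_score, warn_threshold=0.5, review_threshold=0.9):
    # A model match is never an automatic takedown; it only changes how
    # the image is delivered. Thresholds here are illustrative.
    if model_score >= review_threshold:
        return "human_review"    # a person decides, with context
    if model_score >= warn_threshold:
        return "behind_warning"  # visible only on click-through
    return "show"

print(route_image(0.2))   # show
print(route_image(0.7))   # behind_warning
print(route_image(0.95))  # human_review
```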
Just like with other content, let your users tell you what is offensive. Treating these reports appropriately is by itself a deep topic, but start by considering that users have more context than the person reviewing the report. Also consider that people can and will try to abuse your abuse-reporting mechanism, so don’t allow it to work blindly; audit and evaluate success with a human eye.
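A sketch of a report handler that never acts blindly: reports queue content for review rather than hiding it, reporters with a track record of bogus reports get deprioritized, and a fraction of resolved reports is sampled for human audit of the review process itself. The field names, the per-reporter “bogus rate,” and the thresholds are all invented for illustration.

```python
import random

def handle_report(report, reporter_history, audit_rate=0.1, rng=random):
    # Reports never hide content automatically. They queue it for review,
    # and reporters with a history of bogus reports are deprioritized.
    bogus_rate = reporter_history.get(report["reporter"], 0.0)
    action = {"queue": "low" if bogus_rate > 0.5 else "normal"}
    # Sample a fraction of reports for human audit of the review
    # process itself, so abuse of the mechanism surfaces over time.
    action["audit"] = rng.random() < audit_rate
    return action

history = {"troll": 0.9, "goodfaith": 0.0}
print(handle_report({"reporter": "troll", "content_id": 1}, history)["queue"])
# low
print(handle_report({"reporter": "goodfaith", "content_id": 2}, history)["queue"])
# normal
```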
People looking for help with their pants are exceedingly inventive, so this won’t stop them entirely, but it should help you clean up your product: the space over which you have control and for which you are directly responsible.