Opinion | Instagram users need a way to report death threats


(Washington Post staff photo; iStock)


Sherry Hakimi is the founder and executive director of genEquality, an organization that promotes gender equality and inclusion.

Friday, October 14, was a remarkable day for me. It started with a meeting with Secretary of State Antony Blinken and ended with death threats.

I was one of a small group of Iranian American women invited to meet with senior State Department officials to discuss the women-led protest movement underway in Iran. Ahead of the meeting, the secretary’s office asked for our permission to have news crews present while the secretary was in the room. Seeing no harm in it, I said yes.

Unfortunately, I was mistaken. Over the following week, I learned firsthand how the policy and design decisions of social media platforms, Instagram in particular, directly affect the safety of their users.

Shortly after photos from the State Department meeting appeared in the news, my Twitter account – which is nearly empty, because I barely use Twitter – blew up. I watched tweets pour in containing disinformation, hate, harassment, vitriol and defamation. That night, the attacks spread to Instagram. The next morning, I woke up to a flood of message requests on my private Instagram account. Mixed in with the nasty and harassing messages were death threats.

As a self-proclaimed “policy nerd” – and, notably, not a public figure – I was at a loss.

Not knowing how to deal with death threats, I followed the platform’s user interface, hoping it would lead me to an appropriate solution. Instagram offers three options at the bottom of message requests: block, delete, or accept.

Obviously, “block” was my only viable option. There is no sense in “accepting” hate and threats, and “deleting” them does nothing to stop the odious people (or bots) behind them from coming back. No, thank you.

Tapping “block” brings up a menu with options to “restrict,” “block account,” or “report.” Tapping “report” opens a new menu, which instructs: “Select a problem to report.”

Here, I ran into a new problem (on top of the death threats): None of the reporting categories adequately captures the gravity of a death threat.

So I reported all of the messages under the two closest options: “violence or dangerous organizations” and “bullying or harassment.” Then I waited. Two days later, despite also reaching out to an acquaintance who works at Meta and agreeing to their offer to escalate the issue internally, I had received no response. The threatening messages kept coming. Eventually, a security professional showed me how to change my Instagram settings to block message requests.

It is worth noting that in many places, local police departments do not have jurisdiction over threats received on social media. When my local officers heard that I had received death threats via Instagram, they told me that they had no authority over the matter, advised me to contact Meta, and hung up.

As of Wednesday – four days later – I had not heard from anyone at Meta who was not a friend or a friend of a friend. Coincidentally, when Nick Clegg, Meta’s president of global affairs, spoke last Friday at the Council on Foreign Relations (CFR), I was in the audience. So, through some combination of audacity, privilege and desperation, I took the opportunity to ask Clegg why Meta doesn’t have a “death threat” option in its reporting process.

To his credit, he took my question seriously. He seemed genuinely troubled, admitted he did not have a good answer, and his team followed up afterward. The bigger problem, however, is that the reporting process needs to be fixed at scale.

There are at least three ways that Meta’s processes and choice architecture could be redesigned to better uphold its community guidelines and ensure greater user safety. First, add a “death threat” option to the reportable problem categories. For reference, Twitter’s reporting process includes a category that specifies “threatening me with violence.” In most jurisdictions, making a death threat is a criminal offense – especially when made in writing. It should be labeled as such.
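To make that first recommendation concrete, here is a minimal sketch of what a reporting taxonomy with a dedicated death-threat category might look like. The names, categories, and routing logic are hypothetical illustrations, not Meta’s actual systems, which are not public:

```typescript
// Hypothetical report taxonomy; names and severity routing are
// illustrative only, not Meta's actual implementation.
enum ReportCategory {
  Spam = "spam",
  BullyingOrHarassment = "bullying_or_harassment",
  ViolenceOrDangerousOrgs = "violence_or_dangerous_organizations",
  DeathThreat = "death_threat", // the missing category argued for above
}

interface Report {
  category: ReportCategory;
  messageId: string;
  reporterId: string;
}

// A death threat is a crime in most jurisdictions, so it is routed
// to an expedited human-review queue instead of the standard one.
function triage(report: Report): "expedited_human_review" | "standard_queue" {
  return report.category === ReportCategory.DeathThreat
    ? "expedited_human_review"
    : "standard_queue";
}
```

The point of the extra category is not just labeling: an explicit “death threat” signal would let the platform route the report down a faster, legally aware path instead of a generic moderation queue.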

Second, assemble a team dedicated to monitoring and handling threats of violence. If local police departments have no jurisdiction over its platforms, Meta should step up. Users who threaten harm should be flagged and banned from the platforms without delay. Beyond enforcing its own community guidelines, the company should work with law enforcement to ensure proper reporting of incidents and enforcement of applicable laws.

Third, and perhaps the simplest and quickest fix: under Settings, then Privacy, then Messages, make “don’t receive” the default. Receiving messages from strangers should be an opt-in setting, not an opt-out one. In his remarks at CFR, Clegg said that Meta uses nudge-theory principles to guide users toward best practices. It took me three days to realize that I had the option not to receive message requests from people I don’t follow. At a minimum, “don’t receive” should be the default for users with private accounts.
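As a rough illustration of the opt-in default proposed here, assuming invented field names rather than Instagram’s real settings schema, the change amounts to flipping a single default:

```typescript
// Hypothetical settings sketch; the field name is invented for
// illustration and is not Instagram's actual schema.
interface MessageSettings {
  // Whether message requests from accounts the user does not follow
  // are delivered at all.
  receiveRequestsFromNonFollowers: boolean;
}

// Today the platform effectively defaults this to true for everyone;
// the proposal is to default it to false, at least for private accounts.
function defaultMessageSettings(isPrivateAccount: boolean): MessageSettings {
  return { receiveRequestsFromNonFollowers: !isPrivateAccount };
}
```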

As someone whose life has been upended by death threats received on a Meta platform, I have one more request for the company: Please make these fixes ASAP.

