Press releases
Twitter still failing women over online violence and abuse - new analysis
Amnesty International has graded Twitter on progress to keep women safe on platform
The social media company still not doing enough to protect women despite warnings more than two years ago
‘Twitter CEO Jack Dorsey needs to match words with action’ - Michael Kleinman
Twitter is still not doing enough to protect women from online violence and abuse despite repeated promises to do so, new analysis by Amnesty International reveals, more than two years after the social media company was given clear recommendations for improvement.
Amnesty has produced a Twitter Scorecard*, which grades the social media company’s record on implementing a series of recommendations to tackle abuse against women on the platform. Amnesty first highlighted the scale of the problem in its 2018 Toxic Twitter report, which revealed how Twitter was failing to keep women safe.
Despite some progress, the company has fully implemented just one of ten concrete recommendations, with limited progress in increasing transparency on how it handles reports of abuse.
The persistent abuse women face on the platform undermines their right to express themselves equally, freely and without fear. This abuse is highly intersectional and women from ethnic or religious minorities, marginalised castes, lesbian, bisexual or transgender women - as well as non-binary individuals - and women with disabilities are disproportionately impacted by abuse on the platform.
Indian author and activist, Meena Kandasamy, said:
“Being a Tamil, mixed-caste woman, who speaks out against India’s discriminatory caste system, has proved an explosive mix on Twitter. I receive a torrent of racist and misogynistic abuse, including rape threats.
“Twitter always seems to be playing catch-up and is too slow to address the different types of abuse women face.
“Twitter is a powerful place to express ourselves, but Twitter needs to do more to clean up the platform and make it a safe place for women.”
Rasha Abdul Rahim, Co-Director of Amnesty Tech, said:
“Our analysis shows that despite some progress, Twitter is not doing enough to protect women users, leading many women to silence or censor themselves on the platform.
“We have outlined clear, straightforward steps that Twitter can take to make its platform a safer place for women to express their views.
“Twitter can and must do more to protect women from abuse.”
Twitter failing to protect women
Amnesty’s previous analysis of the platform has revealed shocking findings, including:
- A survey of women politicians and journalists in the UK and USA found that an abusive or problematic tweet was sent to a woman every 30 seconds during 2017;
- Analysis in the run-up to the 2017 UK general election revealed that Black, Asian and Minority Ethnic women MPs received almost half (41%) of the abusive tweets, despite there being almost eight times as many white MPs in the study;
- The same study showed that Diane Abbott MP alone received almost a third (31.6%) of all abusive tweets analysed.
Since the release of Toxic Twitter in 2018, Amnesty International has continued to highlight the scale of abuse women face on Twitter, including in Argentina, India, the UK and the USA. Meanwhile, women have continued to speak out about the abuse they experience on Twitter, and about the company’s failure to respond adequately.
Twitter Scorecard findings
In 2018, Amnesty provided Twitter with concrete recommendations on how it could better meet its human rights responsibilities, highlighting ten that the human rights organisation believes are key to tackling online abuse against women. The Twitter Scorecard uses a traffic light system to grade Twitter’s progress in implementing these recommendations, which cover transparency, reporting mechanisms, and enhanced privacy and security features.
Due to the lack of meaningful data Twitter provides, it is difficult to gauge the full extent of the problem of online abuse. Twitter still does not provide detailed country-level breakdowns of user reports of abuse, nor does it provide data about how many users report specific kinds of abusive language, for example abuse based on gender or race.
Twitter is also reluctant to disclose detailed information about the number of content moderators it employs, including what coverage they provide across different countries and languages.
The social media platform needs to be more transparent as to how it designs and implements automated processes to identify online abuse against women. While Twitter has disclosed details on how it is using algorithms to combat misinformation during the current COVID-19 pandemic, it is yet to provide the same level of transparency on how algorithms are used to address abusive tweets.
Twitter has made progress in some areas, including improving the appeals process, by offering more guidance to users on how the process works and how decisions are made. The company was graded amber for its efforts towards increasing users’ awareness of privacy and security features and in educating users on the harm such abuse causes.
Michael Kleinman, Director of Amnesty International’s Silicon Valley Initiative, said:
"It is totally in Twitter’s power to implement these changes that would make a real difference to millions of women’s experience on the platform.
“Twitter CEO Jack Dorsey needs to match words with action to show he is genuinely committed to making Twitter a safer place for women. We will continue to press the company until we see more changes that truly show that abuse against women is not welcome on the platform.”
Twitter’s response
In response to Amnesty’s analysis, Twitter acknowledged it needs to do more. However, the company said its combination of human moderation and use of technology allows it to take a more proactive response to online abuse. On publishing disaggregated data by country or region, Twitter argued this could be open to misinterpretation and give a misleading impression of the problem.
While Amnesty acknowledges that context is important, there is nothing to stop Twitter providing context alongside data, and the company’s human rights responsibilities mean it has a duty to be transparent about how it deals with reports of violence and abuse.