Twitter has conducted a review of the racist abuse targeted at England players on its platform after the Euro 2020 final in July.
Back in July, Marcus Rashford, Jadon Sancho, and Bukayo Saka — three England footballers playing in the UEFA Euro 2020 final at Wembley Stadium — were subjected to a barrage of racist harassment and abuse on Twitter and Instagram.
At the time, many people attempted to drown out the abuse on the players’ Instagram pages by flooding their posts with messages of love and support. Instagram responded, saying, “We quickly removed comments and accounts directing abuse at England’s footballers last night and we’ll continue to take action against those that break our rules.”
One month on from the England v Italy final, Twitter has published the findings of its analysis of the abuse directed at the players on the platform after they missed penalties in the 3-2 shootout loss.
In a series of tweets, Twitter UK said it had identified that “the UK was – by far – the largest country of origin for the abusive Tweets we removed.” This finding contradicts claims made by Conservative commentators at the time that the abuse was coming from abroad, including Conservative MP Michael Fabricant, who suggested the racist abuse might not be “home grown” and could instead have come from foreign powers “attempting to destabilise our society.”
“On the night of the Euros Final, our automated tools kicked in immediately, and ensured we identified and removed 1,622 abusive Tweets in the 24 hours that followed,” Twitter UK’s findings continued. “Only 2 percent of the Tweets we removed generated more than 1,000 Impressions.”
The report also provided evidence suggesting that ID verification — a solution put forward in a petition in the wake of the abuse — would not be an effective strategy for tackling online abuse. More than 696,000 people signed the petition, which lobbied the government to make ID a legal requirement for social media accounts. Many pointed out flaws with the idea, noting that it could marginalise vulnerable groups and silence whistleblowers. Twitter says its data “suggests that ID verification would have been unlikely to prevent the abuse from happening – as of the permanently suspended accounts, 99 percent of account owners were identifiable.”
Twitter added that it will soon be trialling a feature that temporarily autoblocks accounts that use “harmful language.” The company’s UK account concluded by saying, “There is no place for racist abuse on Twitter.”
“Our aim is always that Twitter be used as a vehicle for every person to communicate safely. We’re determined to do all we can, along with our partners, to stop these abhorrent views and behaviours being seen on and off the platform,” read the final tweet in the thread.