Twitter experiments with a warning to users before posting harmful or abusive Tweets
This will help users avoid facing consequences for violating its policies
Microblogging platform Twitter is currently experimenting with warning users to revise their tweets and replies before posting them if they contain harmful, abusive or hateful content. This will help users avoid facing consequences for violating its policies.
It is still not an Edit button but a self-editing tool to tackle rampant harassment on the platform, Twitter said. "When things get heated, you may say things you don't mean. To let you rethink a reply, we're running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it's published if it uses language that could be harmful," tweeted Twitter Support.
Twitter describes it as a limited experiment, currently available only to iOS users. The prompt will appear as a pop-up on tweets that carry harmful content, with Twitter's AI/ML tools attempting to catch such hateful language first-hand.
Twitter users have long been demanding an Edit button so that they can fix tweets after they have been posted.
Twitter CEO Jack Dorsey first addressed the possibility of adding an edit feature for tweets in December 2016, based on the suggestions.
Back in 2018, while visiting India for Twitter's pre-election campaign, Dorsey was asked why Twitter does not have an edit button.
He replied: "the reason Twitter does not have an 'edit' button is because people may change their opinions by editing the original tweet, and then people who don't agree with the original view may have already retweeted the tweet, which is not an accurate representation of what they believe."
*Edited from an IANS report