Websites in Europe may have to delete extremist content within an hour or face a fine, according to a plan by the European Commission.
Twitter, Facebook and YouTube would be among those affected by such a regulation.
It would mark a shift away from the EU's current strategy of allowing technology companies to police themselves.
The plan for a new regulation was drawn up following a spate of terror attacks across Europe in recent years.
The EU would “take stronger action in order to protect our citizens”, Julian King, the EU’s commissioner for security, said in an interview with the Financial Times. The report has subsequently been confirmed by other media outlets.
Details of the current voluntary agreement were published by the EU’s civil service in March. It noted at the time that “terrorist content is most harmful in the first hours of its appearance online”.
It also said there was “significant scope for more effective action”.
According to other news reports, the draft of the new regulation is set to be made public next month. Before it can become law, however, it would have to be passed by the European Parliament and by the parliaments of a majority of EU member states.
Small social media apps, as well as the bigger players, would come under the purview of the new regulation, King told the FT.
“Platforms have differing capabilities to act against terrorist content and their policies for doing so are not always transparent,” he added.
A study published last month by the not-for-profit Counter Extremism Project found that 1,348 videos related to the Islamic State group, uploaded to YouTube via 278 separate accounts, attracted more than 163,000 views.
The report said that 24% of the videos had remained online for more than two hours.
Google, Twitter and Facebook have not commented on the study or on the proposed regulation.
Google has previously said that more than half of the violent extremist videos removed from YouTube typically had fewer than 10 views.
In its latest transparency report, Twitter said that between July and December 2017 it permanently suspended 274,460 accounts for violations related to the promotion of terrorism. According to the report, more than 74% of those accounts were suspended before they had posted a single tweet.
If the proposal becomes law, it would be the first time the EU has directly targeted tech companies’ handling of illegal content.
(Adapted from BBC.com)