Facebook took down more than 12 million pieces of terrorist content on its social network between April and September, the company disclosed on Thursday. Facebook defines terrorist content as posts that praise, endorse or represent ISIS, al-Qaeda and their affiliate groups.

The removal of the terrorist content is part of an ongoing effort by Facebook to rid its service of harmful content, a category that also includes misinformation, propaganda and spam.

Facebook said, “We measure how many pieces of content (such as posts, images, videos or comments) we took action on because they went against our standards for terrorist propaganda, specifically related to ISIS, al-Qaeda and their affiliates.”

The company said it removed 9.4 million pieces of terrorist content during the second quarter and another 3 million posts during the third quarter. By comparison, the company announced in May that it had removed 1.9 million posts during the first quarter of 2018.

“Terrorists are always looking to circumvent our detection and we need to counter such attacks with improvements in technology, training, and process,” the company said in a blog post. “These technologies improve and get better over time, but during their initial implementation such improvements may not function as quickly as they will at maturity.”

Much of the removed material was old. But Facebook said it took down 2.2 million pieces of newly posted terrorist content in the second quarter and 2.3 million in the third quarter, up from 1.2 million in the first quarter.

Facebook explained that it has focused its efforts on removing terrorist content before a wide audience sees it. To that end, it has reduced the median time between when a user first reports a terrorist post and when Facebook takes it down: 43 hours in the first quarter, falling to 22 hours in the second quarter and 18 hours in the third.

The company said it has relied on machine learning technology to detect terrorist content. In most cases, flagged content is reviewed and removed by trained human reviewers, but the machine learning system can remove content on its own if its “confidence level is high enough that its ‘decision’ indicates it will be more accurate than our human reviewers,” the company said.
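Facebook has not disclosed how that threshold works in practice, but the rule it describes amounts to a simple confidence gate: auto-remove only when the model is likelier to be right than a human reviewer, otherwise escalate to human review. The Python sketch below is purely illustrative; the 0.99 cutoff, the function names and the routing labels are assumptions, not details from Facebook.

```python
from dataclasses import dataclass

# Assumed cutoff, imagined here as calibrated against measured
# human-reviewer accuracy; Facebook has not published a value.
AUTO_REMOVE_THRESHOLD = 0.99

@dataclass
class Verdict:
    action: str        # "auto_remove" or "human_review"
    confidence: float  # classifier's probability the post is terrorist content

def route_post(terror_score: float) -> Verdict:
    """Route a post based on a classifier's confidence score in [0, 1].

    High-confidence posts are removed automatically; everything else
    is queued for trained human reviewers, mirroring the two-path
    process described in the article.
    """
    if terror_score >= AUTO_REMOVE_THRESHOLD:
        return Verdict("auto_remove", terror_score)
    return Verdict("human_review", terror_score)

if __name__ == "__main__":
    for score in (0.995, 0.80):
        verdict = route_post(score)
        print(f"score={verdict.confidence:.3f} -> {verdict.action}")
```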