Our Flagging & Scoring Capabilities

Hate speech and abusive content

This refers to sexist, racist, toxic, obscene, or threatening content: statements that use slurs, attack or criticise a minority group without a well-founded argument, seek to distort views of a minority with unfounded claims, negatively stereotype a minority, or defend xenophobia or sexism.

Propaganda and hyperpartisan content

Hyperpartisan content refers to extremely one-sided and biased political articles. Such stories present an unbalanced and provocative account of events, and often express strong sentiment towards political parties or politicians: positive or negative associations; insulting, inflammatory, or slanderous statements about people or parties; and direct calls to action in support of a particular cause or aggressive campaigning. This also includes content that cherry-picks evidence to support its own biases and arguments.

Spoof websites and fake news

Our system picks up content propagated by known fake news sites (and networks of sites), content that links to such sites, and sites that try to spoof or plagiarise the branding of reputable news publishers. We define fake news as websites or articles that knowingly propagate false information with the intention either to deceive others or to make money. This definition is heavily focused on intent, as other types of websites containing untrue information (e.g. satire, fiction) are not included in our definition of fake news.

Deceptive or misleading content

This includes misleading content deliberately written to sound authentic and truthful: for example, content that makes claims without supporting evidence, links to entirely made-up or promotion-led sources, or misrepresents arguments by omitting important aspects or framing them in a logically unsound way.

Extreme clickbait content

We define extreme clickbait as articles with sensationalist headlines that are clearly created to generate clicks and shares. Often the headline does not match the actual content of the article, and the piece is promoted aggressively for engagement.

Aggressive rhetoric and style

We can score how journalistically a piece of content is written, including whether it is written very emotionally or in a wiki style, whether it is sensational or opinionated, and whether it discusses known controversial issues in the current news cycle.

Biased stance and arguments

We are working towards extracting, in real time, the different stances and leanings of any content towards key issues, along with the arguments made for or against those issues. This enables you to understand the bigger picture of any key issue or event online, and helps minimise the problem of filter bubbles.

Invalid claims

We can detect claims and propositions made about politics, statistics, economics, and even health, using our unique technology. We are also working on technology that can automatically gather evidence to help a reader fact-check such statements, using databases of facts and fact checks.

Our algorithms classify the credibility, quality and safety of any piece of content.
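
As an illustration of what consuming such multi-signal scores might look like, here is a minimal sketch. The endpoint URL, signal names, and flagging threshold below are hypothetical placeholders, not a description of our actual API:

```python
# Illustrative sketch only: the endpoint URL, field names, and threshold
# are hypothetical placeholders, not an actual API specification.
import requests

SIGNALS = ["hate_speech", "hyperpartisan", "fake_news", "clickbait", "deception"]

def score_content(text: str) -> dict:
    """Send a piece of content to a (hypothetical) scoring endpoint and
    return one score per signal, each in the range 0.0-1.0."""
    response = requests.post(
        "https://api.example.com/v1/score",  # hypothetical endpoint
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["scores"]  # e.g. {"clickbait": 0.87, ...}

def flag_content(scores: dict, threshold: float = 0.8) -> list:
    """Flag any signal whose score meets or exceeds the chosen threshold."""
    return [name for name in SIGNALS if scores.get(name, 0.0) >= threshold]
```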

The Factmata approach

Factmata uses artificial intelligence, communities, and expert knowledge to identify and classify problematic content.

Artificial Intelligence

Our algorithms use advanced natural language understanding and artificial intelligence to learn what different types of deceptive content look like from vast, uniquely annotated datasets of example material, and then detect them in the wild.
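
As a simplified illustration of this general approach (not our production models), the sketch below trains a small text classifier on a toy annotated dataset and then scores unseen content; the example texts and labels are invented:

```python
# A minimal sketch of the general approach, not a production model:
# learn a text classifier from annotated examples, then score unseen content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy annotated dataset: 1 = hyperpartisan, 0 = neutral (labels are invented).
texts = [
    "The corrupt elites are destroying everything you hold dear!",
    "The committee published its quarterly budget report on Tuesday.",
    "Only a traitor would vote for this disastrous, un-American bill.",
    "Economists offered mixed forecasts for next year's growth.",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new, unseen content "in the wild".
prob = model.predict_proba(["This shocking vote betrays every decent citizen!"])[0][1]
print(f"hyperpartisan probability: {prob:.2f}")
```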

Communities

To help the algorithms keep improving, we also make use of communities of people who provide their own feedback to guide our artificial intelligence.

Expert knowledge

Finally, through our work with expert journalists and social science researchers, we have developed heuristics which allow us to quickly highlight problematic content.

Want to know more?

Get in touch with us

Why trust our scores?

We rely on experts

We work with experts across a wide range of specialisms to ensure the highest quality data is used to train Factmata’s AI tools.

We expose potential biases

We make anonymised demographic information about our annotators available, to ensure transparency about any inherent biases. Our test datasets are also openly available for scrutiny. You can see our hyperpartisanship test dataset here.

We explain our scores

We believe that AI-driven decisions need to be explainable. That is why we highlight the phrases or triggers that result in any given score.
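
As a simplified illustration of how such phrase-level explanations can be produced, the sketch below assumes a linear model over n-gram features (as in the earlier training sketch, not our actual system) and surfaces the n-grams that contribute most to a score:

```python
# A sketch of one way to surface the phrases driving a score, assuming a
# linear model over n-gram features (illustrative, not the actual system).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy annotated dataset: 1 = hyperpartisan, 0 = neutral (labels are invented).
texts = [
    "The corrupt elites are destroying everything you hold dear!",
    "The committee published its quarterly budget report on Tuesday.",
]
labels = [1, 0]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

def explain(text: str, top_k: int = 5):
    """Return the n-grams contributing most to the score: each feature's
    contribution is its TF-IDF value times the model's learned weight."""
    row = vectorizer.transform([text]).toarray()[0]
    contributions = row * clf.coef_[0]
    names = vectorizer.get_feature_names_out()
    top = np.argsort(contributions)[::-1][:top_k]
    return [(names[i], contributions[i]) for i in top if contributions[i] > 0]

for phrase, weight in explain("The corrupt elites betrayed us again!"):
    print(f"{phrase!r} -> {weight:+.3f}")
```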

Our scores constantly improve

Our system actively identifies content that needs clarification, validation, and calibration, and we continually seek out experts who can improve our AI tools.

We are open to feedback

Building our technology is hard. We understand that we may not always get it right the first time. Opening our scoring system to feedback and critique is our way of improving our tools.