Current Capabilities

Hate speech and abusive content

This includes sexist, racist or ethnic statements that use slurs; attack or criticize a minority without a well-founded argument; seek to distort views on a minority with unfounded claims; negatively stereotype a minority; or defend xenophobia or sexism.

Propaganda and extremely politically biased content

Hyperpartisan news can be understood as extremely one-sided, heavily biased news articles. These articles present an unbalanced and provocative view of events, and often contain strong sentiment towards political parties or politicians, with positive or negative associations; insulting, aggravating or slanderous statements about people or parties; direct calls to action to support a particular faction, or aggressive campaigning; and cherry-picked evidence that supports the article's own biases and arguments.

Spoof websites and content spread by known fake news networks

We define fake news as websites or articles that knowingly propagate untrue information in order to deceive others. This definition is heavily focused on intent: other types of websites containing untrue information (e.g. satire, fiction) are not included in our definition of fake news. This system detects fake news content propagated by known fake news sites (and networks of such sites), content that links to those sites, and sites that try to spoof or plagiarise the branding of reputable news publishers.
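To make the spoofing idea concrete, here is a minimal sketch of one way spoofed publisher domains can be flagged, by measuring string similarity against trusted brands. It is illustrative only, not our production system; the domain list and threshold below are invented for this example.

```python
from difflib import SequenceMatcher

# Hypothetical list of reputable publisher domains; a real system
# would use a much larger, curated registry.
REPUTABLE_DOMAINS = ["bbc.co.uk", "nytimes.com", "theguardian.com"]

def looks_like_spoof(domain: str, threshold: float = 0.85) -> bool:
    """Flag a domain that closely imitates, but does not exactly
    match, a known reputable publisher (e.g. 'nytirnes.com')."""
    for trusted in REPUTABLE_DOMAINS:
        if domain == trusted:
            return False  # the genuine site, not a spoof
        similarity = SequenceMatcher(None, domain, trusted).ratio()
        if similarity >= threshold:
            return True  # near-identical to a trusted brand
    return False

print(looks_like_spoof("nytirnes.com"))  # True: imitates nytimes.com
print(looks_like_spoof("example.org"))   # False: resembles no trusted brand
```

A real detector would combine many more signals, such as page branding and links into known fake news networks.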

Extreme clickbait content

Our current definition of extreme clickbait focuses on articles whose headlines are clearly and aggressively crafted to incentivise clicks and shares. Often, these headlines do not match the actual content of the article, and the articles themselves are promoted aggressively for engagement.
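As a toy illustration of the kind of headline signal involved (not our actual model), the sketch below counts matches against a few invented clickbait cues; a production classifier would learn such patterns from labelled data rather than hard-code them.

```python
import re

# Hypothetical clickbait cues, invented for this example.
CLICKBAIT_PATTERNS = [
    r"you won'?t believe",
    r"what happened next",
    r"\bthis one (weird )?trick\b",
    r"\bnumber \d+ will\b",
    r"!{2,}",
]

def clickbait_score(headline: str) -> int:
    """Count how many clickbait cues appear in a headline."""
    text = headline.lower()
    return sum(1 for p in CLICKBAIT_PATTERNS if re.search(p, text))

print(clickbait_score("You won't believe what happened next!!"))  # 3
```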

We will soon be building algorithms to classify the quality and credibility of any piece of content, both text and video.

The Factmata approach

Factmata uses artificial intelligence, communities, and expert knowledge to identify and classify problematic content.

Artificial Intelligence

Our algorithms use advanced natural language processing and artificial intelligence to learn what different types of deceptive content look like from vast, uniquely annotated and labelled datasets of example material, and then detect them in the wild.
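For readers curious about the general technique, here is a minimal sketch of supervised text classification using scikit-learn: train on labelled examples, then score unseen content in the wild. The tiny dataset and labels are invented for this example; our real models, features and training data are far larger and more sophisticated.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy dataset; real training uses vast annotated corpora.
texts = [
    "You won't believe this one weird trick!",
    "The committee published its quarterly budget report.",
    "SHOCKING secret THEY don't want you to know!!",
    "Researchers released a peer-reviewed study on air quality.",
]
labels = [1, 0, 1, 0]  # 1 = problematic, 0 = ordinary

# Bag-of-words features feeding a linear classifier: learn from
# labelled examples, then score new, unseen content.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Probability that an unseen headline belongs to the problematic class.
print(model.predict_proba(["Doctors HATE this simple trick!"])[0][1])
```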

Communities

To help the algorithms keep improving, we also draw on communities of people who provide their own feedback on content.

Expert knowledge

Finally, through our work with expert journalists and social science researchers, we have developed heuristics which allow us to quickly highlight problematic content.
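As an illustration of what such a heuristic might look like (the rules below are invented for this example, not the ones our experts actually use):

```python
# Hypothetical expert-derived rules: each pairs a check with a name,
# so flagged content can be reviewed and explained quickly.
HEURISTICS = [
    ("no named sources", lambda t: "according to" not in t.lower()
                                   and "said" not in t.lower()),
    ("absolute claims",  lambda t: any(w in t.lower()
                                       for w in ("always", "never", "everyone"))),
]

def flag_reasons(text: str) -> list[str]:
    """Return the names of all heuristics triggered by a piece of text."""
    return [name for name, rule in HEURISTICS if rule(text)]

print(flag_reasons("Everyone knows the mainstream media always lies."))
# ['no named sources', 'absolute claims']
```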

Want to know more?

Get in touch with us