The Internet of tomorrow: Privacy, Accountability, Compliance and Trust

Deep Learning-based Credibility Analysis

A deep learning-based approach for credibility analysis of unstructured textual claims in an open-domain setting, with interpretable explanations.
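
As a rough illustration of the idea only (not the project's actual model), the sketch below scores a tokenized claim with a small attention-based network in PyTorch; the attention weights over the claim's words double as a simple interpretable explanation. The class name, dimensions, and toy input are assumptions.

```python
import torch
import torch.nn as nn

class ClaimCredibilityModel(nn.Module):
    """Illustrative claim-credibility scorer: word-level attention over the
    claim yields a credibility score plus per-word weights that serve as a
    simple interpretable explanation."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.out = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))                  # (B, T, 2H)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=-1)   # (B, T)
        context = (weights.unsqueeze(-1) * h).sum(dim=1)            # (B, 2H)
        score = torch.sigmoid(self.out(context)).squeeze(-1)        # in [0, 1]
        return score, weights                                       # weights = explanation

# Toy usage on a batch of two claims with 12 token ids each.
model = ClaimCredibilityModel(vocab_size=5000)
score, attention = model(torch.randint(1, 5000, (2, 12)))
```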

Web Credibility Analysis

A generic approach for credibility analysis of unstructured textual claims in an open-domain setting, with interpretable explanations.

Probabilistic Graphical Models for Credibility Analysis

Probabilistic graphical models to extract "credible", "trustworthy" and "expert" information from large-scale, non-expert, user-generated content in online communities.

Credibility Analysis in News Communities

A probabilistic graphical model to jointly identify credible news articles, trustworthy news sources, and expert users by leveraging joint interactions in a news community.
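
The project itself formulates this as joint probabilistic inference; the toy sketch below only illustrates the underlying mutual-reinforcement intuition with a simple fixed-point iteration over made-up articles, sources, and user ratings.

```python
# Toy data: article -> (source, {user: rating in [0, 1]}); all names made up.
articles = {
    "a1": ("nyt",   {"u1": 1.0, "u2": 0.9}),
    "a2": ("blogX", {"u1": 0.2, "u3": 0.8}),
    "a3": ("nyt",   {"u3": 0.7}),
}
sources = {src for src, _ in articles.values()}
users = {u for _, ratings in articles.values() for u in ratings}

credibility = {a: 0.5 for a in articles}   # article credibility
trust = {s: 0.5 for s in sources}          # source trustworthiness
expertise = {u: 0.5 for u in users}        # user expertise

for _ in range(20):
    # Articles: blend source trust with expertise-weighted user ratings.
    for a, (src, ratings) in articles.items():
        weighted = sum(expertise[u] * r for u, r in ratings.items())
        total = sum(expertise[u] for u in ratings)
        credibility[a] = 0.5 * trust[src] + 0.5 * weighted / total
    # Sources: average credibility of the articles they published.
    for s in sources:
        own = [credibility[a] for a, (src, _) in articles.items() if src == s]
        trust[s] = sum(own) / len(own)
    # Users: expertise grows with agreement between ratings and credibility.
    for u in users:
        diffs = [abs(ratings[u] - credibility[a])
                 for a, (_, ratings) in articles.items() if u in ratings]
        expertise[u] = 1.0 - sum(diffs) / len(diffs)

print({a: round(c, 2) for a, c in credibility.items()})
```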

Credibility Analysis in Health Communities

Assessing the trustworthiness of users, the objectivity of language, and the credibility of user statements in online health communities.

Fair Data Representations

This project introduces a method for probabilistically clustering user records into a low-rank representation that captures individual fairness while still achieving high accuracy in classification and regression models.
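
As a minimal sketch of the representation idea only (the project's actual method additionally optimizes an individual-fairness objective when learning the representation), the snippet below probabilistically clusters toy user records with a Gaussian mixture and uses the k-dimensional soft-membership vectors as a low-rank representation for downstream models. The data, the choice of k, and the use of scikit-learn are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy user records: a few numeric attributes per user (made-up data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))

# Probabilistic clustering: each user is represented by its soft membership
# over k prototypes, i.e. a low-rank (k-dimensional) representation.
k = 3
gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
Z = gmm.predict_proba(X)   # shape (200, k), rows sum to 1

# Downstream classifiers / regressors are then trained on Z instead of X.
print(Z[:3].round(3))
```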

Mediator Accounts

This project proposes a framework that leverages solidarity in a large community to scramble user interaction histories.
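
A toy sketch of the scrambling idea, assuming a shared pool of mediator accounts through which interactions are routed; the account names, items, and random routing policy are illustrative, not the project's actual mechanism.

```python
import random

# Each user's interactions are issued through randomly chosen mediator
# accounts, so no single account accumulates one user's full history.
users = {
    "alice": ["item1", "item2", "item3"],
    "bob":   ["item2", "item4"],
    "carol": ["item1", "item5", "item6"],
}
mediators = ["m1", "m2", "m3"]

rng = random.Random(42)
mediator_history = {m: [] for m in mediators}
for user, items in users.items():
    for item in items:
        mediator_history[rng.choice(mediators)].append(item)

# The platform only observes the scrambled per-mediator histories.
print(mediator_history)
```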

Relationships between Users' Actions and Feeds

This project presents FAIRY, a framework that systematically discovers, ranks, and explains relationships between users’ actions and items in their social media feeds.
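
As a toy illustration of the discovery step only (FAIRY itself mines and ranks relationship paths in a much richer interaction graph, using learned ranking models), the sketch below enumerates simple paths between a user and a feed item in a small hand-made graph and orders them by length as a stand-in for ranking.

```python
import networkx as nx

# Toy interaction graph: the user's past actions and a feed item,
# with edges for interactions. All node names are made up.
G = nx.Graph()
G.add_edges_from([
    ("user", "liked:page_A"),
    ("user", "follows:alice"),
    ("liked:page_A", "post_42"),    # page_A shared post_42
    ("follows:alice", "post_42"),   # alice commented on post_42
])

feed_item = "post_42"

# Discover candidate explanation paths from the user to the feed item ...
paths = list(nx.all_simple_paths(G, "user", feed_item, cutoff=4))

# ... and rank them, here simply by length (shorter = more direct).
for path in sorted(paths, key=len):
    print(" -> ".join(path))
```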

ExFAKT: Explainable Fact Checking

Deriving human-understandable evidence from knowledge graphs and text, based on background knowledge in the form of rules.
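
A minimal sketch of the rule-based evidence idea, assuming a single hypothetical Horn rule and a toy knowledge graph; ExFAKT itself combines rules over both knowledge-graph facts and textual sources.

```python
# Tiny toy knowledge graph as a set of triples; facts and rule are made up.
kg = {
    ("marie", "worksFor", "acme"),
    ("acme",  "headquarteredIn", "paris"),
}

# Rule: livesIn(X, C) <- worksFor(X, O) AND headquarteredIn(O, C)
def check_lives_in(person, city):
    """Try to ground the rule body in the KG; the grounding is the evidence."""
    for (x, p, o) in kg:
        if p == "worksFor" and x == person:
            if (o, "headquarteredIn", city) in kg:
                return [f"{person} worksFor {o}", f"{o} headquarteredIn {city}"]
    return None

evidence = check_lives_in("marie", "paris")
print("Claim: marie livesIn paris")
print("Supported!" if evidence else "No support found")
for fact in evidence or []:
    print("  evidence:", fact)
```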