imPACT: The Internet of tomorrow: Privacy, Accountability, Compliance and Trust

  • Counterfactual Explanations for Recommenders

    A provider-side mechanism that produces tangible explanations for end users, where an explanation is defined as a minimal set of actions performed by the user that, if removed, changes the recommendation to a different item.

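    A minimal, model-agnostic sketch of this search in Python: subsets of actions are tried in increasing size, so the first removal that flips the recommendation is a minimal explanation. The toy_recommend function and the genre clicks are hypothetical stand-ins; a real provider-side mechanism would avoid this exhaustive enumeration.

      from itertools import combinations
      from collections import Counter

      def counterfactual_explanation(actions, recommend, max_size=3):
          # Try subsets in increasing size, so the first subset whose
          # removal changes the recommendation is a minimal explanation.
          original = recommend(actions)
          for k in range(1, min(max_size, len(actions)) + 1):
              for idx in combinations(range(len(actions)), k):
                  kept = [a for i, a in enumerate(actions) if i not in idx]
                  replacement = recommend(kept)
                  if replacement != original:
                      return [actions[i] for i in idx], replacement
          return None  # no explanation within the size budget

      def toy_recommend(actions):
          # Stand-in recommender: suggests the genre clicked most often.
          return Counter(actions).most_common(1)[0][0] if actions else None

      print(counterfactual_explanation(["scifi", "scifi", "drama"], toy_recommend))
      # -> (['scifi', 'scifi'], 'drama'): removing both sci-fi clicks flips the item
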
  • Credibility Analysis in News Communities

    A probabilistic graphical model to jointly identify credible news articles, trustworthy news sources, and expert users by leveraging joint interactions in a news community.

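    The project's model is a probabilistic graphical model; the sketch below captures only the joint-interaction intuition, as a simple mutual-reinforcement iteration over toy data: credible articles come from trustworthy sources and attract positive votes from expert users, and vice versa. The scores and update rules are illustrative assumptions, not the actual model.

      articles = {"a1": "s1", "a2": "s1", "a3": "s2"}        # article -> source
      ratings = {("u1", "a1"): +1, ("u2", "a1"): +1,         # (user, article) -> vote
                 ("u2", "a2"): +1, ("u1", "a3"): -1, ("u3", "a3"): +1}

      cred = {a: 0.5 for a in articles}                      # article credibility
      trust = {s: 0.5 for s in articles.values()}            # source trustworthiness
      skill = {u: 0.5 for u, _ in ratings}                   # user expertise

      for _ in range(20):                                    # iterate to a fixed point
          for a in cred:
              votes = [skill[u] * r for (u, x), r in ratings.items() if x == a]
              signal = 0.5 + sum(votes) / (2 * max(len(votes), 1))
              cred[a] = 0.5 * trust[articles[a]] + 0.5 * signal
          for s in trust:
              own = [cred[a] for a, src in articles.items() if src == s]
              trust[s] = sum(own) / len(own)
          for u in skill:
              agree = [0.5 + r * (cred[a] - 0.5)
                       for (v, a), r in ratings.items() if v == u]
              skill[u] = sum(agree) / len(agree)             # reward consensus votes

      print(sorted(cred.items()), sorted(trust.items()), sorted(skill.items()))
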
  • Credibility Analysis in Health Communities

    Assessing the trustworthiness of users, the objectivity of their language, and the credibility of their statements in online health communities.

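    A toy sketch of the language-objectivity signal, assuming a hand-made subjectivity lexicon (the actual work learns such language features from data): statements phrased subjectively by less trusted users score lower.

      # Hypothetical mini-lexicon; the real system learns language features.
      SUBJECTIVE = {"miracle", "amazing", "awful", "horrible", "swear"}

      def objectivity(text):
          words = [w.strip(".,!?").lower() for w in text.split()]
          return 1.0 - sum(w in SUBJECTIVE for w in words) / max(len(words), 1)

      def statement_credibility(text, user_trust):
          # Credible statements tend to come from trustworthy users and
          # to be phrased objectively; a product combines both signals.
          return user_trust * objectivity(text)

      print(statement_credibility("This drug lowered my blood pressure.", 0.9))
      print(statement_credibility("A miracle cure, amazing results!", 0.9))
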
  • Probabilistic Graphical Models for Credibility Analysis

    Probabilistic graphical models to extract "credible", "trustworthy" and "expert" information from large-scale, non-expert, user-generated content in online communities.

  • Deep Learning based Credibility Analysis

    A deep learning-based approach for credibility analysis of unstructured textual claims in an open-domain setting, with interpretable explanations.

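    An illustrative sketch of the interpretability idea, not the project's actual architecture: attention weights over evidence words double as the explanation for a credibility score. The embeddings and the scoring head here are untrained stand-ins.

      import zlib
      import numpy as np

      def embed(tokens, dim=16):
          # Deterministic random vectors standing in for learned embeddings.
          return np.stack([np.random.default_rng(zlib.crc32(t.encode()))
                           .standard_normal(dim) for t in tokens])

      def score_claim(claim, evidence):
          c = embed(claim.split()).mean(axis=0)        # claim representation
          tokens = evidence.split()
          E = embed(tokens)
          logits = E @ c                               # claim-conditioned attention
          attn = np.exp(logits - logits.max())
          attn /= attn.sum()
          context = attn @ E                           # attended evidence summary
          head = np.random.default_rng(0).standard_normal(E.shape[1])  # untrained
          credibility = 1 / (1 + np.exp(-head @ context))
          salient = [tokens[i] for i in attn.argsort()[-3:][::-1]]
          return credibility, salient                  # score + its explanation

      print(score_claim("vaccines cause autism",
                        "large studies found no link between vaccines and autism"))
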
  • Web Credibility Analysis

    A generic approach for credibility analysis of unstructured textual claims in an open-domain setting with interpretable explanations.

  • R-Susceptibility

    This project presents a ranking-based approach for assessing the privacy risks that emerge from textual content in online communities, focusing on sensitive topics such as depression.

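    A bag-of-words sketch of the ranking idea, assuming a hand-made sensitive-topic lexicon (the actual approach builds on topic models): users are ranked by the topical affinity of their posts to the sensitive topic.

      import math
      from collections import Counter

      # Hypothetical topic lexicon; the real approach uses learned topic models.
      SENSITIVE_TOPIC = Counter({"depressed": 3, "hopeless": 2, "therapy": 2})

      def cosine(c1, c2):
          dot = sum(c1[w] * c2[w] for w in c1)
          norm = lambda c: math.sqrt(sum(v * v for v in c.values()))
          return dot / (norm(c1) * norm(c2)) if c1 and c2 else 0.0

      def rank_by_exposure(user_posts):
          # A user's risk score is the topical affinity of their posts to
          # the sensitive topic; ranking surfaces the most exposed users.
          scores = {u: cosine(Counter(" ".join(ps).lower().split()), SENSITIVE_TOPIC)
                    for u, ps in user_posts.items()}
          return sorted(scores.items(), key=lambda kv: -kv[1])

      posts = {"u1": ["feeling hopeless lately", "started therapy this week"],
               "u2": ["great hike today", "trying a new camera lens"]}
      print(rank_by_exposure(posts))
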
  • Fair Data Representations

    This project introduces a method for probabilistically clustering user records into a low-rank representation that captures individual fairness yet also achieves high accuracy in classification and regression models.

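    A simplified sketch of the low-rank mapping, using plain soft clustering: records are probabilistically assigned to k prototypes and replaced by their membership-weighted reconstruction. The project's method additionally optimizes fairness and utility objectives, which are omitted here.

      import numpy as np

      rng = np.random.default_rng(42)
      X = rng.normal(size=(100, 5))            # toy user records
      k = 3                                    # number of prototypes (the low rank)
      prototypes = X[rng.choice(len(X), size=k, replace=False)]

      def memberships(X, prototypes):
          # Probabilistic cluster assignments: softmax over squared distances.
          d = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
          p = np.exp(-d + d.min(axis=1, keepdims=True))
          return p / p.sum(axis=1, keepdims=True)

      for _ in range(10):                      # alternate assignments and centers
          P = memberships(X, prototypes)
          prototypes = (P.T @ X) / P.sum(axis=0)[:, None]

      Z = memberships(X, prototypes) @ prototypes   # low-rank representation
      # Downstream classifiers/regressors are trained on Z instead of X.
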
  • Mediator Accounts

    This project proposes a framework that leverages solidarity in a large community to scramble users' interaction histories.

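    A toy sketch of the scrambling idea: interactions are routed through a small pool of community-shared mediator accounts, so provider-side histories no longer correspond to individual users. The account names and the random routing policy are illustrative assumptions; the framework itself must also balance unlinkability against personalization utility.

      import random

      random.seed(7)
      MEDIATORS = ["med_a", "med_b", "med_c"]   # accounts shared by the community
      provider_log = []                          # what the provider can observe

      def submit(user, query):
          # Route each interaction through a randomly chosen mediator account,
          # so any single account's history mixes interactions of many users.
          mediator = random.choice(MEDIATORS)
          provider_log.append((mediator, query))
          return mediator

      submit("alice", "flu symptoms")
      submit("bob", "cheap flights")
      submit("alice", "flu treatment")
      print(provider_log)  # histories attach to mediators, not to real users
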
  • Relationships between Actions and Feeds

    This project presents FAIRY, a framework that systematically discovers, ranks, and explains relationships between users’ actions and items in their social media feeds.

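    A toy sketch of the discovery step, assuming a small hand-built interaction graph: candidate explanations are paths connecting the user to the feed item, ranked here simply by length. FAIRY itself uses a learned ranking model over richer path features.

      # Toy interaction graph over users, actions, and feed items.
      graph = {
          "alice":          ["liked:cat_page", "friend:bob"],
          "liked:cat_page": ["post_42"],
          "friend:bob":     ["post_42", "post_99"],
      }

      def explain_feed_item(user, item, max_len=4):
          # Enumerate user-to-item paths, then rank them (shorter = stronger).
          paths, stack = [], [(user, [user])]
          while stack:
              node, path = stack.pop()
              if node == item:
                  paths.append(path)
                  continue
              if len(path) >= max_len:
                  continue
              for nxt in graph.get(node, []):
                  if nxt not in path:
                      stack.append((nxt, path + [nxt]))
          return sorted(paths, key=len)

      for path in explain_feed_item("alice", "post_42"):
          print(" -> ".join(path))
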
  • Learning from Feedback on Explanations

    A human-in-the-loop framework, called ELIXIR, in which user feedback on explanations is leveraged for pairwise learning of user preferences.

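    A minimal sketch of pairwise preference learning from such feedback, using a Bradley-Terry-style logistic update; the explanation aspects are hypothetical, and ELIXIR's actual representation of items and feedback is richer.

      import numpy as np

      def fit_pairwise(pairs, dim, lr=0.1, epochs=100):
          # Bradley-Terry-style update: raise w . (x_pos - x_neg) for every
          # pair where the user preferred x_pos over x_neg.
          w = np.zeros(dim)
          for _ in range(epochs):
              for x_pos, x_neg in pairs:
                  diff = x_pos - x_neg
                  p = 1.0 / (1.0 + np.exp(-w @ diff))   # P(pos preferred)
                  w += lr * (1.0 - p) * diff            # log-likelihood gradient
          return w

      # Hypothetical explanation aspects: [same_director, same_genre, popular]
      pairs = [(np.array([1.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])),
               (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0]))]
      print(fit_pairwise(pairs, dim=3))  # positive weights on preferred aspects
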
  • ExFAKT: Explainable Fact Checking

    A framework for deriving human-understandable evidence for candidate facts from knowledge graphs and text, based on background knowledge in the form of rules.

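    A toy backward-chaining sketch of rule-based evidence: given knowledge-graph facts and a Horn rule, proving a candidate fact yields the facts used, which double as a human-readable explanation. The facts and the rule are illustrative; the framework additionally draws evidence from text.

      FACTS = {("bornIn", "einstein", "ulm"), ("locatedIn", "ulm", "germany")}
      # Horn rule: citizenOf(X, Z) <- bornIn(X, Y), locatedIn(Y, Z)
      RULES = [(("citizenOf", "X", "Z"),
                [("bornIn", "X", "Y"), ("locatedIn", "Y", "Z")])]

      def unify(atom, fact, bindings):
          out = dict(bindings)
          for t, f in zip(atom, fact):
              if t[0].isupper():                  # uppercase terms are variables
                  if out.setdefault(t, f) != f:
                      return None
              elif t != f:
                  return None
          return out

      def prove(goal, bindings=None, evidence=()):
          bindings = bindings or {}
          goal = tuple(bindings.get(t, t) for t in goal)
          for fact in FACTS:                      # 1) direct lookup in the KG
              b = unify(goal, fact, bindings)
              if b is not None:
                  yield b, evidence + (fact,)
          for head, body in RULES:                # 2) backward-chain through rules
              b = unify(head, goal, {})
              if b is None:
                  continue
              states = [(b, evidence)]
              for atom in body:                   # prove body atoms left to right
                  states = [nxt for bb, ev in states for nxt in prove(atom, bb, ev)]
              yield from states

      for bindings, evidence in prove(("citizenOf", "einstein", "germany")):
          print("evidence:", evidence)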