imPACT: Privacy, Accountability, Compliance, and Trust in Tomorrow's Internet
- Counterfactual Explanations for Recommenders
A provider-side mechanism that produces tangible explanations for end users, where an explanation is defined as a minimal set of actions performed by the user that, if removed, would change the recommendation to a different item.
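A brute-force sketch of this definition (toy item-item similarities and an exhaustive subset search, not the project's actual algorithm): score candidate items by similarity to the user's past actions and look for the smallest subset of actions whose removal flips the top recommendation.

```python
from itertools import combinations

# Toy item-item similarities (hypothetical data for illustration only).
sim = {("a", "x"): 0.9, ("b", "x"): 0.8, ("b", "y"): 0.6, ("c", "y"): 0.7}

def score(item, actions):
    """Score a candidate item by its similarity to the user's past actions."""
    return sum(sim.get((a, item), 0.0) for a in actions)

def recommend(actions, candidates):
    """Recommend the highest-scoring candidate item."""
    return max(candidates, key=lambda i: score(i, actions))

def counterfactual_explanation(actions, candidates):
    """Smallest set of the user's own actions whose removal changes the recommendation."""
    original = recommend(actions, candidates)
    for k in range(1, len(actions)):
        for subset in combinations(actions, k):
            remaining = [a for a in actions if a not in subset]
            if recommend(remaining, candidates) != original:
                return set(subset), recommend(remaining, candidates)
    return None

# Removing action "a" flips the recommendation from "x" to "y".
print(counterfactual_explanation(["a", "b", "c"], ["x", "y"]))
```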
- Credibility Analysis in News Communities
A probabilistic graphical model to jointly identify credible news articles, trustworthy news sources, and expert users by leveraging joint interactions in a news community.
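The actual model performs joint probabilistic inference; purely to illustrate the mutual reinforcement it exploits, here is a simplified HITS-style iteration over hypothetical toy data, where article credibility, source trustworthiness, and user expertise are updated from one another.

```python
# Hypothetical toy community: articles with a publishing source and endorsing users.
articles = {"a1": {"source": "s1", "endorsed_by": ["u1", "u2"]},
            "a2": {"source": "s2", "endorsed_by": ["u2"]},
            "a3": {"source": "s1", "endorsed_by": ["u1"]}}

credibility = {a: 1.0 for a in articles}
trust = {"s1": 1.0, "s2": 1.0}
expertise = {"u1": 1.0, "u2": 1.0}

def normalize(scores):
    total = sum(scores.values()) or 1.0
    return {k: v / total for k, v in scores.items()}

for _ in range(20):
    # An article is credible if its source is trusted and its endorsers are experts.
    credibility = normalize({a: trust[m["source"]] + sum(expertise[u] for u in m["endorsed_by"])
                             for a, m in articles.items()})
    # A source is trustworthy if it publishes credible articles.
    trust = normalize({s: sum(credibility[a] for a, m in articles.items() if m["source"] == s)
                       for s in trust})
    # A user is an expert if the articles they endorse are credible.
    expertise = normalize({u: sum(credibility[a] for a, m in articles.items() if u in m["endorsed_by"])
                           for u in expertise})

print(credibility, trust, expertise)
```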
- Credibility Analysis in Health Communities
Assessing trustworthiness of users, objectivity of language, and credibility of user statements in online health communities.
- Probabilistic Graphical Models for Credibility Analysis
Probabilistic graphical models to extract "credible", "trustworthy" and "expert" information from large-scale, non-expert, user-generated content in online communities.
- Deep Learning based Credibility Analysis
A deep-learning-based approach for credibility analysis of unstructured textual claims in an open-domain setting, with interpretable explanations.
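As a rough illustration only (random placeholder parameters, not the project's trained networks): an attention layer over the words of a claim can double as the interpretable explanation, since the attention weights indicate which words drove the credibility score.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"miracle": 0, "cure": 1, "study": 2, "shows": 3, "benefit": 4}
embeddings = rng.normal(size=(len(vocab), 8))  # placeholder word embeddings
attention_params = rng.normal(size=8)          # placeholder attention parameters
scoring_params = rng.normal(size=8)            # placeholder credibility scorer

def assess(claim_tokens):
    """Return a credibility score plus per-word attention weights as the explanation."""
    words = [w for w in claim_tokens if w in vocab]
    vecs = embeddings[[vocab[w] for w in words]]
    weights = np.exp(vecs @ attention_params)
    weights /= weights.sum()                   # softmax attention over the words
    score = float(weights @ (vecs @ scoring_params))
    explanation = sorted(zip(words, weights.round(3).tolist()), key=lambda x: -x[1])
    return score, explanation

print(assess(["miracle", "cure", "shows", "benefit"]))
```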
- Web Credibility Analysis
A generic approach for credibility analysis of unstructured textual claims in an open-domain setting with interpretable explanations.
- R-Susceptibility
This project presents a ranking-based approach to assessing privacy risks that emerge from textual content in online communities, focusing on sensitive topics such as depression.
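A toy version of such a ranking (hypothetical term lexicon and scoring, not the project's actual risk model): users whose posts relate more strongly to a sensitive topic are ranked as more exposed.

```python
# Hypothetical lexicon of terms that hint at a sensitive topic (e.g., depression).
sensitive_terms = {"insomnia": 1.0, "hopeless": 1.5, "therapy": 0.8}

users = {"u1": ["could not sleep again insomnia is back", "feeling hopeless lately"],
         "u2": ["great hike today", "new recipe turned out well"]}

def risk_score(posts):
    """Average per-post weight of sensitive-topic terms."""
    weights = []
    for post in posts:
        tokens = post.split()
        weights.append(sum(sensitive_terms.get(t, 0.0) for t in tokens) / len(tokens))
    return sum(weights) / len(weights)

# Rank users by estimated exposure, most susceptible first.
ranking = sorted(users, key=lambda u: risk_score(users[u]), reverse=True)
print(ranking)
```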
- Fair Data Representations
This project introduces a method for probabilistically clustering user records into a low-rank representation that captures individual fairness yet also achieves high accuracy in classification and regression models.
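The flavor of the idea, stripped down (plain k-means over hypothetical data rather than the project's probabilistic, fairness-aware formulation): records are mapped onto a small set of prototypes, and downstream classifiers or regressors see only each record's prototype.

```python
import numpy as np

rng = np.random.default_rng(1)
records = rng.normal(size=(100, 5))   # hypothetical user records
k = 4                                 # number of prototypes (the "low rank")

centroids = records[rng.choice(len(records), size=k, replace=False)]
for _ in range(10):
    # Assign every record to its nearest prototype.
    assign = np.argmin(((records[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    # Move each prototype to the mean of its assigned records.
    centroids = np.array([records[assign == j].mean(axis=0) if np.any(assign == j)
                          else centroids[j] for j in range(k)])

representation = centroids[assign]    # each record replaced by its prototype
print(representation.shape)           # (100, 5), but with only k distinct rows
```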
- Mediator Accounts
This project proposes a framework that leverages solidarity in a large community to scramble user interaction histories.
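A minimal sketch of that intuition (hypothetical routing scheme, not the project's actual protocol): interactions from many community members are issued through a shared pool of mediator accounts, so the history the provider observes for any single account mixes several real users.

```python
import random

random.seed(0)
mediators = ["m1", "m2", "m3"]
interactions = [("alice", "item_a"), ("bob", "item_b"),
                ("alice", "item_c"), ("carol", "item_d")]

provider_view = {m: [] for m in mediators}   # what the provider observes
routing = {}                                 # kept only on the user side

for user, item in interactions:
    mediator = random.choice(mediators)      # issue the interaction via a mediator
    provider_view[mediator].append(item)
    routing[(user, item)] = mediator

print(provider_view)   # scrambled histories with no direct link to real users
```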
- Relationships between Actions and Feeds
This project presents FAIRY, a framework that systematically discovers, ranks, and explains relationships between users’ actions and items in their social media feeds.
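A toy version of the discovery step (hypothetical interaction graph and a simple breadth-first search instead of FAIRY's learned ranking): enumerate the paths that connect the user to a feed item and treat shorter paths as stronger candidate explanations.

```python
from collections import deque

# Hypothetical interaction graph linking the user to a feed item.
graph = {
    "user": ["liked:page_x", "friend:bob"],
    "liked:page_x": ["posted:item_1"],
    "friend:bob": ["liked:item_1"],
    "posted:item_1": ["item_1"],
    "liked:item_1": ["item_1"],
    "item_1": [],
}

def find_paths(start, goal, max_len=4):
    """Enumerate cycle-free paths from start to goal, shortest first."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            paths.append(path)
            continue
        if len(path) < max_len + 1:
            for nxt in graph.get(path[-1], []):
                if nxt not in path:          # avoid cycles
                    queue.append(path + [nxt])
    return sorted(paths, key=len)            # shorter path = stronger relationship

for p in find_paths("user", "item_1"):
    print(" -> ".join(p))
```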
- Learning from Feedback on Explanations
A human-in-the-loop framework, called ELIXIR, where user feedback on explanations is leveraged for pairwise learning of user preferences.
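A stripped-down sketch of pairwise preference learning (hypothetical item features and a hinge-style update, not ELIXIR's actual learner): each piece of feedback of the form "I prefer A over B" nudges a preference vector so that preferred items score higher.

```python
import numpy as np

# Hypothetical item feature vectors.
item_features = {"A": np.array([1.0, 0.0, 1.0, 0.0]),
                 "B": np.array([0.0, 1.0, 0.0, 1.0]),
                 "C": np.array([1.0, 1.0, 0.0, 0.0])}
preference = np.zeros(4)                # the user preference vector to be learned

feedback = [("A", "B"), ("C", "B")]     # user prefers A over B and C over B

for preferred, other in feedback * 10:  # a few passes over the feedback
    diff = item_features[preferred] - item_features[other]
    if preference @ diff < 1.0:         # hinge-style pairwise ranking update
        preference += 0.1 * diff

scores = {i: float(preference @ f) for i, f in item_features.items()}
print(sorted(scores, key=scores.get, reverse=True))   # -> ['A', 'C', 'B']
```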
- ExFAKT: Explainable Fact Checking
A framework for deriving human-understandable evidence for candidate facts from knowledge graphs and text, based on background knowledge in the form of rules.
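A tiny illustration of the rule-based evidence idea (hypothetical rule and facts; the actual framework also rewrites rule bodies into text queries and chains rules): a Horn rule is applied backwards to turn a candidate fact into premises that can be checked against the knowledge graph.

```python
# Hypothetical knowledge graph as a set of (subject, predicate, object) triples.
kg = {("marie", "worksAt", "sorbonne"), ("sorbonne", "locatedIn", "paris")}

# Rule: livesIn(X, Z) <- worksAt(X, Y), locatedIn(Y, Z)
def explain_lives_in(person, place):
    """Collect rule groundings that serve as evidence for livesIn(person, place)."""
    evidence = []
    for (x, r, y) in kg:
        if r == "worksAt" and x == person and (y, "locatedIn", place) in kg:
            evidence.append([(x, "worksAt", y), (y, "locatedIn", place)])
    return evidence

print(explain_lives_in("marie", "paris"))
# -> [[('marie', 'worksAt', 'sorbonne'), ('sorbonne', 'locatedIn', 'paris')]]
```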