Search

We found 343 hits for 'Multimodal Computing and Interaction'.
  1. Yiting Xia

    /departments/rg2/people/yiting-xia

    {COVID}-19 and Beyond}, AUTHOR = {Xia, Yiting and Zhang, Ying and Zhong, Zhizhen and Yan, Guanqing and Lim, Chiunlin and Ahuja, Satyajeet Singh and Bali, Soshant and Nikolaidis, Alexander and Ghobadi, Kimia [...] ent and Uncertainty-resilient Backbone Network Planning with Hose}, AUTHOR = {Ahuja, Satyajeet Singh and Gupta, Varun and Dangui, Vinayak and Bali, Soshant and Gopalan, Abishek and Zhong, Hao and Lapukhov [...] System for Reliable Network Management}, AUTHOR = {Xing, Jiarong and Hsu, Kuo-Feng and Xia, Yiting and Cai, Yan and Li, Yanping and Zhang, Ying and Chen, Ang}, LANGUAGE = {eng}, ISBN = {979-8-4007-0437-6}, DOI

  2. Online hate speech & conspiracy theories

    /departments/inet/online-hate-speech

    ophobic). We find that 93% and 81% of posts that contain terms from our lexicons are Antisemitic and Islamophobic, respectively. Also, we find that the veracity of usage and frequency of these terms greatly [...] discussions and automatically discover new slurs related to online antisemitism. Overall, alarmingly, we find a rise of antisemitic rhetoric and antisemitic memes over time in both 4chan’s /pol/ and Gab. Reference [...] 4chan and Twitter. Also, we find differences across Twitter and 4chan: on Twitter we observed a shift towards blaming China for the pandemic, while on 4chan we observed a shift towards using more and new

  3. Labelled Pupils in the Wild (LPW)

    /departments/computer-vision-and-machine-learning/research/gaze-based-human-computer-interaction/labelled-pupils-in-the-wild-lpw

    Labelled Pupils in the Wild (LPW): Pupil detection in unconstrained [...] glasses and 4. strong makeup. The third row shows cropped images around the pupil region under challenging conditions: 1. reflection on the pupil, 2. self-occlusion, 3. strong sunlight and shade and 4. occlusion [...] conditions, i.e. different lighting conditions and eye camera positions, and 2) to have a large variability in appearance of participants, such as gender, ethnicity and use of vision aids. We took each participant

  4. Machine Learning for Harvesting Health Knowledge

    /departments/databases-and-information-systems/teaching/ss20/healthml

    Alsulmi and Ben Carterette. 2016. Improving clinical case search using semantic based query reformulations. In Bioinformatics and Biomedicine (BIBM'16). Sendong Zhao, Chang Su, Andrea Sboner, and Fei Wang [...] Anjuli Kannan, Linh Tran, Yuhui Chen, and Izhak Shafran. Extracting Symptoms and their Status from Clinical Conversations. Xuan Wang, Yu Zhang, Qi Li, Yinyin Chen, and Jiawei Han. 2018. Open Information [...] Conference on Bioinformatics, Computational Biology, and Health Informatics (BCB ’18). Conversational AI Bickmore T, Giorgino T. Health dialog systems for patients and consumers. J Biomed Inform. 2006

  5. Privacy-Aware Eye Tracking Using Differential Privacy

    /departments/computer-vision-and-machine-learning/research/visual-privacy/privacy-aware-eye-tracking-using-differential-privacy

    about eye tracking and VR technologies, continued with questions about future use and applications, data sharing and privacy (especially with whom users are willing to share their data), and concluded with [...] by Pupil. We recorded a separate video from each eye and each document. Participants used the mouse to start and stop the document interaction and were free to read the documents in arbitrary order. We [...] experimental assistant stopped and saved the recording and asked participants questions on their current level of fatigue, whether they liked and understood the document, and whether they found the document

  6. PrivacEye: Privacy-Preserving Head-Mounted Eye Tracking Using Egocentric Scene Image and Eye Movement Features

    /departments/computer-vision-and-machine-learning/research/visual-privacy/privaceye-privacy-preserving-head-mounted-eye-tracking-using-egocentric-scene-image-and-eye-movement-features

    dataset and annotation (145 MB). P2 dataset and annotation (156 MB). P3 dataset and annotation (165 MB). P5 dataset and annotation (157 MB). P7 dataset and annotation (149 MB). P8 dataset and annotation [...] dataset and annotation (171 MB). P10 dataset and annotation (163 MB). P11 dataset and annotation (163 MB). P12 dataset and annotation (135 MB). P13 dataset and annotation (163 MB). P14 dataset and annotation [...] annotation (171 MB). P16 dataset and annotation (181 MB). P17 dataset and annotation (167 MB). P18 dataset and annotation (137 MB). P19 dataset and annotation (147 MB). P20 dataset and annotation (160 MB). PrivacEye:

  7. Mykhaylo Andriluka

    /departments/computer-vision-and-machine-learning/people/alumni-and-former-members/mykhaylo-andriluka

    area of security and building management (2001-2006) Department Visual Computing at Center of Computer Graphics (ZGDV), research project EMBASSI in the area of human-computer interaction (2000) Research [...] Learning for Computer Vision My recent CV is available here (pdf). Education: Ph.D. in Computer Science (with honors). TU Darmstadt, Germany, 2006-2010 Dipl.-Math. (B.Sc. and M.Sc. in Mathematics and Computer [...] Mykhaylo Andriluka (Junior Research Leader) Personal Information Research Interests: Articulated

  8. RGBD Semantic Segmentation Using Spatio-Temporal Data-Driven Pooling

    /departments/computer-vision-and-machine-learning/research/video-segmentation/rgbd-semantic-segmentation-using-spatio-temporal-data-driven-pooling

    Spatio-Temporal Data-Driven Pooling }, author={Yang He and Wei-Chen Chiu and Margret Keuper and Mario Fritz}, booktitle={ IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2017}} VSB100: [...] sequence as input and computes the semantic segmentation of a target frame with the help of unlabeled frames. We use superpixels and optical flow to establish region correspondences, and fuse the posterior [...] platforms or handheld and bodyworn RGBD cameras, nearby video frames provide diverse viewpoints and additional context of objects and scenes. To leverage such information, we first compute region correspondences

  9. Espresso

    /departments/databases-and-information-systems/research/yago-naga/espresso

    Efficient Computation of Relationship-Centrality in Large Entity-Relationship Graphs (poster) Stephan Seufert, Srikanta J. Bedathur, Johannes Hoffart, Andrey Gubichev, and Klaus Berberich Posters and Demo [...] Espresso: Explaining Relationships between Entity Sets Espresso is a system to compute semantically meaningful substructures [...] European politicians are related to politicians in the United States and how?» or «How can one summarize the relationship between China and countries from the Middle East over the last five years?» In this

  10. Zero-Shot Learning - The Good, the Bad and the Ugly

    /departments/computer-vision-and-machine-learning/research/zero-shot-learning/zero-shot-learning-the-good-the-bad-and-the-ugly

    Learning - The Good, the Bad and the Ugly}, booktitle = {IEEE Computer Vision and Pattern Recognition (CVPR)}, year = {2017}, author = {Yongqin Xian and Bernt Schiele and Zeynep Akata} } @article{XLSA18 [...] Zero-Shot Learning - A Comprehensive Evaluation of the Good, the Bad and the Ugly [...] Learning - A Comprehensive Evaluation of the Good, the Bad and the Ugly}, author={Xian, Yongqin and Lampert, Christoph H. and Schiele, Bernt and Akata, Zeynep}, journal={TPAMI}, year={2018}, } f-VAEGAN-D2: