Tribhuvanesh Orekondy, MSc (PhD Student)


Address
Max-Planck-Institut für Informatik
Saarland Informatics Campus
Campus E1 4
66123 Saarbrücken
Location
E1 4 - Room 627
Phone
+49 681 9325 2027
Fax
+49 681 9325 2099


Publications

Orekondy, T., Schiele, B., & Fritz, M. (2019). Knockoff Nets: Stealing Functionality of Black-Box Models. In 32nd IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019). Long Beach, CA, USA: IEEE.
(Accepted/in press)
BibTeX
@inproceedings{orekondy18knockoff,
  TITLE = {Knockoff Nets: {S}tealing Functionality of Black-Box Models},
  AUTHOR = {Orekondy, Tribhuvanesh and Schiele, Bernt and Fritz, Mario},
  LANGUAGE = {eng},
  PUBLISHER = {IEEE},
  YEAR = {2019},
  PUBLREMARK = {Accepted},
  BOOKTITLE = {32nd IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019)},
  ADDRESS = {Long Beach, CA, USA},
}
Orekondy, T., Schiele, B., & Fritz, M. (2019). Prediction Poisoning: Utility-Constrained Defenses Against Model Stealing Attacks. Retrieved from http://arxiv.org/abs/1906.10908
(arXiv: 1906.10908)
Abstract
With the advances in ML models in recent years, an increasing number of real-world commercial applications and services (e.g., autonomous vehicles, medical equipment, web APIs) are emerging. Recent advances in model functionality stealing attacks via black-box access (i.e., inputs in, predictions out) threaten the business model of such ML applications, which require a lot of time, money, and effort to develop. In this paper, we address this issue by studying defenses against model stealing attacks, largely motivated by the lack of effective defenses in the literature. We work towards the first defense that introduces targeted perturbations to the model predictions under a utility constraint. Our approach introduces perturbations targeted at manipulating the training procedure of the attacker. We evaluate our approach on multiple datasets and attack scenarios across a range of utility constraints. Our results show that it is indeed possible to trade off utility (e.g., deviation from the original prediction, test accuracy) to significantly reduce the effectiveness of model stealing attacks.
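As a rough illustration of the setting this abstract describes (a sketch only, not the paper's algorithm; the function name, the eps bound, and the random zero-sum perturbation are assumptions made here for illustration), a utility-constrained perturbation of a single posterior vector could look as follows in Python:

import numpy as np

def perturb_posterior(y, eps=0.5, rng=None):
    # Hypothetical sketch (not the paper's method): return a perturbed posterior
    # whose L1 deviation from y is bounded and whose top-1 label is preserved.
    rng = np.random.default_rng() if rng is None else rng
    d = rng.normal(size=y.shape[0])
    d -= d.mean()                       # zero-sum direction keeps sum(y_tilde) == 1
    d /= np.abs(d).sum() + 1e-12        # unit L1 norm
    y_tilde = y + eps * d               # L1 deviation of eps before re-projection
    y_tilde = np.clip(y_tilde, 0.0, None)
    y_tilde /= y_tilde.sum()            # re-project onto the probability simplex
    if y_tilde.argmax() != y.argmax():  # utility constraint: keep the served label
        return y
    return y_tilde

# Serving perturbed posteriors like these keeps the returned label useful while
# making the probabilities a noisier training signal for an attacker who trains
# a knockoff model on (input, prediction) pairs.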
BibTeX
@online{Orekondy_arXiv1906.10908,
  TITLE = {Prediction Poisoning: Utility-Constrained Defenses Against Model Stealing Attacks},
  AUTHOR = {Orekondy, Tribhuvanesh and Schiele, Bernt and Fritz, Mario},
  LANGUAGE = {eng},
  URL = {http://arxiv.org/abs/1906.10908},
  EPRINT = {1906.10908},
  EPRINTTYPE = {arXiv},
  YEAR = {2019},
}
Orekondy, T., Oh, S. J., Schiele, B., & Fritz, M. (2018). Understanding and Controlling User Linkability in Decentralized Learning. Retrieved from http://arxiv.org/abs/1805.05838
(arXiv: 1805.05838)
Abstract
Machine learning techniques are widely used by online services (e.g., Google, Apple) to analyze and make predictions on user data. As many of the provided services are user-centric (e.g., personal photo collections, speech recognition, personal assistance), user data generated on personal devices is key to providing the service. To protect the data and the privacy of the user, federated learning techniques have been proposed in which the data never leaves the user's device and "only" model updates are communicated back to the server. In our work, we propose a new threat model that is not concerned with learning about the content, but rather with the linkability of users during such decentralized learning scenarios. We show that model updates are characteristic of users and therefore lend themselves to linkability attacks. We demonstrate identification and matching of users across devices in closed- and open-world scenarios. In our experiments, we find our attacks to be highly effective, achieving 20x-175x chance-level performance. To mitigate the risks of linkability attacks, we study various strategies. As adding random noise does not offer convincing operating points, we propose strategies based on using calibrated domain-specific data; we find that these strategies offer substantial protection against linkability threats with little effect on utility.
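As a rough illustration of why characteristic model updates enable linking (a sketch only, not the paper's attack; the function name and the cosine-similarity matching are assumptions made here for illustration), a minimal linkability attack could be written as:

import numpy as np

def link_users(updates_a, updates_b):
    # Hypothetical sketch (not the paper's attack): given one flattened model
    # update per user from two sessions (rows of updates_a and updates_b),
    # link each session-A user to the most similar session-B update by
    # cosine similarity.
    a = updates_a / (np.linalg.norm(updates_a, axis=1, keepdims=True) + 1e-12)
    b = updates_b / (np.linalg.norm(updates_b, axis=1, keepdims=True) + 1e-12)
    sim = a @ b.T                       # pairwise cosine similarities
    return sim.argmax(axis=1)           # best match in session B for each A-user

# If updates are characteristic of users, this matching recovers identities far
# above the 1 / num_users chance level, which is the risk described above.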
BibTeX
@online{orekondy18understand,
  TITLE = {Understanding and Controlling User Linkability in Decentralized Learning},
  AUTHOR = {Orekondy, Tribhuvanesh and Oh, Seong Joon and Schiele, Bernt and Fritz, Mario},
  LANGUAGE = {eng},
  URL = {http://arxiv.org/abs/1805.05838},
  EPRINT = {1805.05838},
  EPRINTTYPE = {arXiv},
  YEAR = {2018},
}
Orekondy, T., Fritz, M., & Schiele, B. (2018). Connecting Pixels to Privacy and Utility: Automatic Redaction of Private Information in Images. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018). Salt Lake City, UT, USA: IEEE. doi:10.1109/CVPR.2018.00883
BibTeX
@inproceedings{orekondy17connect,
  TITLE = {Connecting Pixels to Privacy and Utility: {A}utomatic Redaction of Private Information in Images},
  AUTHOR = {Orekondy, Tribhuvanesh and Fritz, Mario and Schiele, Bernt},
  LANGUAGE = {eng},
  ISBN = {978-1-5386-6420-9},
  DOI = {10.1109/CVPR.2018.00883},
  PUBLISHER = {IEEE},
  YEAR = {2018},
  BOOKTITLE = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)},
  PAGES = {8466--8475},
  ADDRESS = {Salt Lake City, UT, USA},
}
Orekondy, T., Schiele, B., & Fritz, M. (2017). Towards a Visual Privacy Advisor: Understanding and Predicting Privacy Risks in Images. In IEEE International Conference on Computer Vision (ICCV 2017). Venice, Italy: IEEE. doi:10.1109/ICCV.2017.398
BibTeX
@inproceedings{orekondy17iccv,
  TITLE = {Towards a Visual Privacy Advisor: Understanding and Predicting Privacy Risks in Images},
  AUTHOR = {Orekondy, Tribhuvanesh and Schiele, Bernt and Fritz, Mario},
  LANGUAGE = {eng},
  ISBN = {978-1-5386-1032-9},
  DOI = {10.1109/ICCV.2017.398},
  PUBLISHER = {IEEE},
  YEAR = {2017},
  DATE = {2017},
  BOOKTITLE = {IEEE International Conference on Computer Vision (ICCV 2017)},
  PAGES = {3706--3715},
  ADDRESS = {Venice, Italy},
}