# Ahmed Abbas (PhD Student)

## MSc Ahmed Abbas

- Address: Max-Planck-Institut für Informatik, Saarland Informatics Campus, Campus E1 4, 66123 Saarbrücken
- Office: E1 4, Room 629
- Phone: +49 681 9325 2147
- Fax: +49 681 9325 2099

# Personal Information

### Research Interests:

My work lies at the intersection of two domains: discrete optimization and deep learning. On the one hand, I want to incorporate inductive biases and human prior knowledge into deep networks by training them through discrete optimization solvers. On the other, I want to devise fast methods for discrete optimization, both by leveraging parallel computing hardware (GPUs) and by training networks to solve discrete optimization problems.
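The first direction — training a network through a discrete solver — is often realized by treating the solver as a black box and estimating a gradient from one extra, perturbed solver call (in the spirit of blackbox differentiation). The toy one-hot argmin "solver" and all names below are illustrative assumptions, not a published method:

```python
import numpy as np

def solver(costs):
    """Toy discrete solver: return the one-hot indicator of the cheapest item."""
    y = np.zeros_like(costs)
    y[np.argmin(costs)] = 1.0
    return y

def blackbox_grad(costs, grad_output, lam=1.0):
    """Estimate d(loss)/d(costs) by re-running the solver on costs perturbed
    in the direction of the incoming gradient and taking a finite
    difference of the two discrete solutions."""
    y = solver(costs)
    y_perturbed = solver(costs + lam * grad_output)
    return (y_perturbed - y) / lam

costs = np.array([3.0, 1.0, 2.0])
# Upstream loss prefers item 0, i.e. d(loss)/dy[0] is negative.
grad = blackbox_grad(costs, grad_output=np.array([-5.0, 0.0, 0.0]))
```

The forward pass stays an exact solver call; only the backward pass is approximated, which is what makes such layers usable inside standard gradient-based training.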

### Previous positions:

- 2019 - 2020: Junior Scientist at Carl Zeiss AG, Germany
- 2013 - 2016: Research Engineer at LMKR, Pakistan

### Education:

- 2020 - present: PhD student at the Max-Planck-Institut für Informatik
- 2016 - 2018: Master's in Visual Computing, Saarland University, Germany
- 2009 - 2013: Bachelor's in Electrical Engineering, NUST, Pakistan

Profiles: Google Scholar, Twitter

# Publications

Abbas, A. (2018). *Bottleneck Potentials in Markov Random Fields*. Master's thesis, Universität des Saarlandes, Saarbrücken.

BibTeX:

```bibtex
@mastersthesis{Abbas_Master2019,
  TITLE    = {Bottleneck Potentials in Markov Random Fields},
  AUTHOR   = {Abbas, Ahmed},
  LANGUAGE = {eng},
  SCHOOL   = {Universit{\"a}t des Saarlandes},
  ADDRESS  = {Saarbr{\"u}cken},
  YEAR     = {2018},
}
```


Abbas, A., & Swoboda, P. (2021a). RAMA: A Rapid Multicut Algorithm on GPU. Retrieved from https://arxiv.org/abs/2109.01838

(arXiv: 2109.01838)

**Abstract:** We propose a highly parallel primal-dual algorithm for the multicut (a.k.a. correlation clustering) problem, a classical graph clustering problem widely used in machine learning and computer vision. Our algorithm consists of three steps executed recursively: (1) finding conflicted cycles that correspond to violated inequalities of the underlying multicut relaxation, (2) performing message passing between the edges and cycles to optimize the Lagrange relaxation coming from the found violated cycles, producing reduced costs, and (3) contracting edges with high reduced costs through matrix-matrix multiplications. Our algorithm produces primal solutions and dual lower bounds that estimate the distance to optimum. We implement our algorithm on GPUs and show one to two orders of magnitude improvement in execution speed, without sacrificing solution quality, compared to traditional serial algorithms that run on CPUs. We can solve very large scale benchmark problems with up to $\mathcal{O}(10^8)$ variables in a few seconds with small primal-dual gaps. Our code is available at https://github.com/pawelswoboda/RAMA.
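Step (3) — edge contraction through matrix-matrix multiplications — can be illustrated on CPU with SciPy sparse matrices (the paper's implementation runs on GPU; the toy graph and the merge decision below are assumptions for illustration). Contracting clusters of nodes amounts to $C^\top A C$ for a node-to-cluster assignment matrix $C$:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy graph: 4 nodes, symmetric adjacency matrix holding edge costs.
A = csr_matrix(np.array([
    [0, 5, 0, 0],
    [5, 0, 2, 0],
    [0, 2, 0, 3],
    [0, 0, 3, 0],
], dtype=float))

# Suppose message passing assigned a high reduced cost to edge (0, 1),
# so nodes 0 and 1 get merged. C maps original nodes to clusters.
C = csr_matrix(np.array([
    [1, 0, 0],   # node 0 -> cluster 0
    [1, 0, 0],   # node 1 -> cluster 0
    [0, 1, 0],   # node 2 -> cluster 1
    [0, 0, 1],   # node 3 -> cluster 2
], dtype=float))

# Contracted adjacency: parallel edges between clusters are summed by C^T A C.
A_contracted = (C.T @ A @ C).toarray()
np.fill_diagonal(A_contracted, 0.0)  # drop self-loops created by the merge
```

Expressing contraction as sparse matrix products is what makes the step map naturally onto GPU sparse-linear-algebra primitives.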

BibTeX:

```bibtex
@online{Abbas_2109.01838,
  TITLE      = {{RAMA}: {A} Rapid Multicut Algorithm on {GPU}},
  AUTHOR     = {Abbas, Ahmed and Swoboda, Paul},
  LANGUAGE   = {eng},
  URL        = {https://arxiv.org/abs/2109.01838},
  EPRINT     = {2109.01838},
  EPRINTTYPE = {arXiv},
  YEAR       = {2021},
}
```


Abbas, A., & Swoboda, P. (2019a). Bottleneck Potentials in Markov Random Fields. In *International Conference on Computer Vision (ICCV 2019)* (pp. 3174-3183). Seoul, Korea: IEEE. doi:10.1109/ICCV.2019.00327

BibTeX:

```bibtex
@inproceedings{Abbas_ICCV2019,
  TITLE     = {Bottleneck Potentials in {Markov Random Fields}},
  AUTHOR    = {Abbas, Ahmed and Swoboda, Paul},
  LANGUAGE  = {eng},
  ISBN      = {978-1-7281-4803-8},
  DOI       = {10.1109/ICCV.2019.00327},
  PUBLISHER = {IEEE},
  YEAR      = {2019},
  BOOKTITLE = {International Conference on Computer Vision (ICCV 2019)},
  PAGES     = {3174--3183},
  ADDRESS   = {Seoul, Korea},
}
```


Abbas, A., & Swoboda, P. (2021b). FastDOG: Fast Discrete Optimization on GPU. Retrieved from https://arxiv.org/abs/2111.10270

(arXiv: 2111.10270)

**Abstract:** We present a massively parallel Lagrange decomposition method for solving 0-1 integer linear programs occurring in structured prediction. We propose a new iterative update scheme for solving the Lagrangean dual and a perturbation technique for decoding primal solutions. For representing subproblems we follow Lange et al. (2021) and use binary decision diagrams (BDDs). Our primal and dual algorithms require little synchronization between subproblems, and optimization over BDDs needs only elementary operations without complicated control flow. This allows us to exploit the parallelism offered by GPUs for all components of our method. We present experimental results on combinatorial problems from MAP inference for Markov Random Fields, quadratic assignment, and cell tracking for developmental biology. Our highly parallel GPU implementation improves upon the running times of the algorithms from Lange et al. (2021) by up to an order of magnitude. In particular, we come close to or outperform some state-of-the-art specialized heuristics while being problem agnostic.
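The flavor of the dual update can be conveyed on a single binary variable shared by two subproblems: Lagrange decomposition splits the variable's cost among the copies, and redistributing the costs so that every copy carries the same min-marginal tightens the lower bound. This is a hand-rolled sketch of min-marginal averaging (as in Lange et al., 2021) on a deliberately trivial instance, not the paper's exact update scheme:

```python
# Each subproblem holds one copy of binary variable x with its own cost share.
# The dual bound is the sum of independent subproblem minima.

def min_marginal(cost):
    """m = (cost of x=1) - (cost of x=0) for a one-variable subproblem
    (cost of x=0 is 0 here)."""
    return cost

def averaging_step(costs):
    """Redistribute cost shares so each copy carries the average min-marginal."""
    avg = sum(min_marginal(c) for c in costs) / len(costs)
    return [avg for _ in costs]

def dual_bound(costs):
    """Each subproblem independently picks the cheaper of x=0 (cost 0) and x=1."""
    return sum(min(0.0, c) for c in costs)

costs = [3.0, -2.0]            # copies disagree: one prefers x=0, the other x=1
loose = dual_bound(costs)      # -2.0: a loose lower bound
costs = averaging_step(costs)  # both copies now carry (3 - 2) / 2 = 0.5
tight = dual_bound(costs)      # 0.0: matches the true optimum min(0, 3 - 2)
```

Because each copy's update touches only its own subproblem, many such updates can run in parallel with little synchronization, which is what makes the scheme GPU-friendly.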

BibTeX:

```bibtex
@online{Abbas_2111.10270,
  TITLE      = {{FastDOG}: {F}ast Discrete Optimization on {GPU}},
  AUTHOR     = {Abbas, Ahmed and Swoboda, Paul},
  LANGUAGE   = {eng},
  URL        = {https://arxiv.org/abs/2111.10270},
  EPRINT     = {2111.10270},
  EPRINTTYPE = {arXiv},
  YEAR       = {2021},
}
```


Abbas, A., & Swoboda, P. (2021c). Combinatorial Optimization for Panoptic Segmentation: A Fully Differentiable Approach. In *Advances in Neural Information Processing Systems 34 pre-proceedings (NeurIPS 2021)*. Virtual: Curran Associates, Inc.

BibTeX:

```bibtex
@inproceedings{Abbas_Neurips2021,
  TITLE     = {Combinatorial Optimization for Panoptic Segmentation: {A} Fully Differentiable Approach},
  AUTHOR    = {Abbas, Ahmed and Swoboda, Paul},
  LANGUAGE  = {eng},
  PUBLISHER = {Curran Associates, Inc.},
  YEAR      = {2021},
  BOOKTITLE = {Advances in Neural Information Processing Systems 34 pre-proceedings (NeurIPS 2021)},
  EDITOR    = {Ranzato, M. and Beygelzimer, A. and Liang, P. S. and Vaughan, J. W. and Dauphin, Y.},
  ADDRESS   = {Virtual},
}
```


Abbas, A., & Swoboda, P. (2019b). Bottleneck Potentials in Markov Random Fields. Retrieved from http://arxiv.org/abs/1904.08080

(arXiv: 1904.08080)

**Abstract:** We consider general discrete Markov Random Fields (MRFs) with additional bottleneck potentials, which penalize the maximum (instead of the sum) over local potential values taken by the MRF assignment. Bottleneck potentials or analogous constructions have been considered in (i) combinatorial optimization (e.g. the bottleneck shortest path problem, the minimum bottleneck spanning tree problem, bottleneck function minimization in greedoids), (ii) inverse problems with $L_{\infty}$-norm regularization, and (iii) valued constraint satisfaction on the $(\min,\max)$-pre-semirings. Bottleneck potentials for general discrete MRFs are a natural generalization of the above line of modeling work to Maximum-A-Posteriori (MAP) inference in MRFs. To this end, we propose MRFs whose objective consists of two parts: terms that factorize according to (i) $(\min,+)$, i.e. potentials as in plain MRFs, and (ii) $(\min,\max)$, i.e. bottleneck potentials. To solve the ensuing inference problem, we propose high-quality relaxations and efficient algorithms for solving them. We empirically show the efficacy of our approach on large scale seismic horizon tracking problems.
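The two-part objective — a $(\min,+)$ sum term plus a $(\min,\max)$ bottleneck term — can be evaluated by brute force on a tiny example (unary potentials only and made-up numbers, purely for illustration; the paper handles richer models and solves relaxations rather than enumerating):

```python
import itertools

# Three binary variables; potentials indexed as [variable][label].
unary = [[0.0, 1.0], [0.5, 0.0], [1.0, 0.0]]        # ordinary (min,+) part
bottleneck = [[0.2, 0.9], [0.1, 0.8], [0.7, 0.1]]   # bottleneck (min,max) part

def objective(labels):
    """Sum of ordinary potentials plus the maximum bottleneck potential."""
    sum_part = sum(unary[i][l] for i, l in enumerate(labels))
    max_part = max(bottleneck[i][l] for i, l in enumerate(labels))
    return sum_part + max_part

# MAP inference by enumeration over all 2^3 labelings.
best = min(itertools.product([0, 1], repeat=3), key=objective)
```

In this toy instance the pure $(\min,+)$ optimum is $(0,1,1)$ with sum cost $0$, but its bottleneck value of $0.8$ makes $(0,0,1)$ (total $0.7$) the overall minimizer — the max term genuinely changes the solution.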

BibTeX:

```bibtex
@online{DBLP:journals/corr/abs-1904-08080,
  TITLE      = {Bottleneck Potentials in {Markov Random Fields}},
  AUTHOR     = {Abbas, Ahmed and Swoboda, Paul},
  LANGUAGE   = {eng},
  URL        = {http://arxiv.org/abs/1904.08080},
  EPRINT     = {1904.08080},
  EPRINTTYPE = {arXiv},
  YEAR       = {2019},
}
```
