# Ahmed Abbas (PhD Student)

## MSc Ahmed Abbas

- Address: Max-Planck-Institut für Informatik, Saarland Informatics Campus, Campus E1 4, 66123 Saarbrücken
- Office: E1 4, Room 629
- Phone: +49 681 9325 2147
- Fax: +49 681 9325 2099

# Personal Information

### Research Interests:

My work lies at the intersection of two exciting domains: discrete optimization and deep learning. First, I want to incorporate inductive biases and human prior knowledge into deep networks by training them through discrete optimization solvers. Second, I want to devise fast methods for discrete optimization by leveraging parallel computing hardware (GPUs) and by training networks to solve discrete optimization problems.

### Previous positions:

- 2019 - 2020: Junior Scientist at Carl Zeiss AG, Germany
- 2013 - 2016: Research Engineer at LMKR, Pakistan

### Education:

- 2020 - present: PhD student at the Max-Planck-Institut für Informatik
- 2016 - 2018: M.Sc. in Visual Computing, Saarland University, Germany
- 2009 - 2013: B.Sc. in Electrical Engineering, NUST, Pakistan

Google Scholar, Twitter

# Publications

Abbas, A. (2018). *Bottleneck Potentials in Markov Random Fields*. Master's thesis, Universität des Saarlandes, Saarbrücken.

BibTeX

```bibtex
@mastersthesis{Abbas_Master2019,
  TITLE = {Bottleneck Potentials in Markov Random Fields},
  AUTHOR = {Abbas, Ahmed},
  LANGUAGE = {eng},
  SCHOOL = {Universit{\"a}t des Saarlandes},
  ADDRESS = {Saarbr{\"u}cken},
  YEAR = {2018},
  DATE = {2018},
}
```

Abbas, A., & Swoboda, P. (2021a). RAMA: A Rapid Multicut Algorithm on GPU. arXiv: 2109.01838. Retrieved from https://arxiv.org/abs/2109.01838

Abstract:

We propose a highly parallel primal-dual algorithm for the multicut (a.k.a.
correlation clustering) problem, a classical graph clustering problem widely
used in machine learning and computer vision. Our algorithm consists of three
steps executed recursively: (1) Finding conflicted cycles that correspond to
violated inequalities of the underlying multicut relaxation, (2) Performing
message passing between the edges and cycles to optimize the Lagrange
relaxation coming from the found violated cycles producing reduced costs and
(3) Contracting edges with high reduced costs through matrix-matrix
multiplications.
Our algorithm produces primal solutions and dual lower bounds that estimate
the distance to optimum. We implement our algorithm on GPUs and show resulting
improvements in execution speed of one to two orders of magnitude, without
sacrificing solution quality, compared to traditional serial algorithms that run
on CPUs. We can solve very large scale benchmark problems with up to
$\mathcal{O}(10^8)$ variables in a few seconds with small primal-dual gaps. We
make our code available at https://github.com/pawelswoboda/RAMA.
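Step (3) above is what makes the method GPU-friendly: contracting a batch of edges reduces to sparse matrix-matrix products. The following is a minimal CPU sketch of that single step (my own illustration in SciPy, not the paper's CUDA implementation; `contract_edges` and the precomputed cluster labeling are assumptions for the example). Given a labeling that groups the nodes to be merged, the contracted adjacency is A' = PᵀAP for a node-to-cluster incidence matrix P:

```python
import numpy as np
import scipy.sparse as sp

def contract_edges(A, node_labels):
    """Contract all nodes sharing a label via matrix-matrix products.

    A           -- symmetric sparse adjacency/cost matrix (n x n)
    node_labels -- array mapping each node to its cluster id (0..k-1)
    """
    n = A.shape[0]
    k = int(node_labels.max()) + 1
    # Incidence matrix P: P[i, c] = 1 iff node i belongs to cluster c.
    P = sp.csr_matrix((np.ones(n), (np.arange(n), node_labels)), shape=(n, k))
    Ac = (P.T @ A @ P).tocsr()          # contracted graph in one product chain
    Ac = Ac - sp.diags(Ac.diagonal())   # drop self-loops created by merging
    Ac.eliminate_zeros()
    return Ac
```

On a GPU the same sparse products run in parallel over all merged edges, which is the kind of parallelism the abstract refers to.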

BibTeX

```bibtex
@online{Abbas_2109.01838,
  TITLE = {{RAMA}: {A} Rapid Multicut Algorithm on {GPU}},
  AUTHOR = {Abbas, Ahmed and Swoboda, Paul},
  LANGUAGE = {eng},
  URL = {https://arxiv.org/abs/2109.01838},
  EPRINT = {2109.01838},
  EPRINTTYPE = {arXiv},
  YEAR = {2021},
}
```

Abbas, A., & Swoboda, P. (2019a). Bottleneck Potentials in Markov Random Fields. In *International Conference on Computer Vision (ICCV 2019)* (pp. 3174-3183). Seoul, Korea: IEEE. doi:10.1109/ICCV.2019.00327

BibTeX

```bibtex
@inproceedings{Abbas_ICCV2019,
  TITLE = {Bottleneck Potentials in {Markov Random Fields}},
  AUTHOR = {Abbas, Ahmed and Swoboda, Paul},
  LANGUAGE = {eng},
  ISBN = {978-1-7281-4803-8},
  DOI = {10.1109/ICCV.2019.00327},
  PUBLISHER = {IEEE},
  YEAR = {2019},
  DATE = {2019},
  BOOKTITLE = {International Conference on Computer Vision (ICCV 2019)},
  PAGES = {3174--3183},
  ADDRESS = {Seoul, Korea},
}
```

Abbas, A., & Swoboda, P. (2021b). FastDOG: Fast Discrete Optimization on GPU. arXiv: 2111.10270. Retrieved from https://arxiv.org/abs/2111.10270

Abstract:

We present a massively parallel Lagrange decomposition method for solving 0-1
integer linear programs occurring in structured prediction. We propose a new
iterative update scheme for solving the Lagrangean dual and a perturbation
technique for decoding primal solutions. For representing subproblems we follow
Lange et al. (2021) and use binary decision diagrams (BDDs). Our primal and
dual algorithms require little synchronization between subproblems and
optimization over BDDs needs only elementary operations without complicated
control flow. This allows us to exploit the parallelism offered by GPUs for all
components of our method. We present experimental results on combinatorial
problems from MAP inference for Markov Random Fields, quadratic assignment and
cell tracking for developmental biology. Our highly parallel GPU implementation
improves upon the running times of the algorithms from Lange et al. (2021) by
up to an order of magnitude. In particular, we come close to or outperform some
state-of-the-art specialized heuristics while being problem agnostic.
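As a concrete (toy) illustration of the decomposition idea above, the sketch below solves a tiny 0-1 program by Lagrange decomposition. It is my own simplification, not the paper's method: subproblems are enumerated exhaustively instead of being represented as BDDs, and the dual is updated by a plain subgradient step on the coupling constraint rather than by the paper's iterative scheme; all names are illustrative.

```python
import itertools
import numpy as np

def solve_subproblem(cost, feasible):
    # Cheapest feasible 0-1 assignment by enumeration (a toy stand-in
    # for the BDD-based subproblem solvers used in the paper).
    return min(feasible, key=lambda x: float(np.dot(cost, x)))

def lagrange_decomposition(c, feas1, feas2, iters=100, step=0.5):
    c = np.asarray(c, dtype=float)
    c1, c2 = c / 2.0, c / 2.0          # split the objective between two copies
    lb = -np.inf
    for k in range(iters):
        x1 = np.array(solve_subproblem(c1, feas1))
        x2 = np.array(solve_subproblem(c2, feas2))
        lb = float(c1 @ x1 + c2 @ x2)  # Lagrangean dual value at this point
        if np.array_equal(x1, x2):     # copies agree: feasible primal solution
            return x1, lb
        # Dual ascent on the coupling constraint x1 = x2, diminishing step size.
        g = (step / (k + 1)) * (x1 - x2)
        c1, c2 = c1 + g, c2 - g
    return None, lb                    # no agreement: report dual lower bound

# Toy covering ILP: min x0 + 2*x1 + 3*x2  s.t.  x0 + x1 >= 1  and  x1 + x2 >= 1.
cube = list(itertools.product((0, 1), repeat=3))
feas1 = [x for x in cube if x[0] + x[1] >= 1]   # subproblem 1: first constraint
feas2 = [x for x in cube if x[1] + x[2] >= 1]   # subproblem 2: second constraint
x, val = lagrange_decomposition([1, 2, 3], feas1, feas2)
```

The two subproblem solves are independent, which is the property that lets the real method run all subproblems in parallel on a GPU.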

BibTeX

```bibtex
@online{Abbas_2111.10270,
  TITLE = {{FastDOG}: {F}ast Discrete Optimization on {GPU}},
  AUTHOR = {Abbas, Ahmed and Swoboda, Paul},
  LANGUAGE = {eng},
  URL = {https://arxiv.org/abs/2111.10270},
  EPRINT = {2111.10270},
  EPRINTTYPE = {arXiv},
  YEAR = {2021},
}
```

Abbas, A., Abbasi, H., & Shafiq, A. (2015). Enhancement of Subtle Features in Coherence Volumes. *SEG Technical Program Expanded Abstracts*, *2015*, 1707-1710. doi:10.1190/segam2015-5901251.1

BibTeX

```bibtex
@article{Abbas2015,
  TITLE = {Enhancement of Subtle Features in Coherence Volumes},
  AUTHOR = {Abbas, Ahmed and Abbasi, Hameer and Shafiq, Arsalan},
  LANGUAGE = {eng},
  ISSN = {1052-3812},
  DOI = {10.1190/segam2015-5901251.1},
  PUBLISHER = {Society of Exploration Geophysicists},
  YEAR = {2015},
  DATE = {2015},
  JOURNAL = {SEG Technical Program Expanded Abstracts},
  VOLUME = {2015},
  PAGES = {1707--1710},
}
```

Abbas, A., & Swoboda, P. (2021c). Combinatorial Optimization for Panoptic Segmentation: A Fully Differentiable Approach. In *Advances in Neural Information Processing Systems 34 pre-proceedings (NeurIPS 2021)*. Virtual: Curran Associates, Inc.

BibTeX

```bibtex
@inproceedings{Abbas_Neurips2021,
  TITLE = {Combinatorial Optimization for Panoptic Segmentation: {A} Fully Differentiable Approach},
  AUTHOR = {Abbas, Ahmed and Swoboda, Paul},
  LANGUAGE = {eng},
  PUBLISHER = {Curran Associates, Inc.},
  YEAR = {2021},
  BOOKTITLE = {Advances in Neural Information Processing Systems 34 pre-proceedings (NeurIPS 2021)},
  EDITOR = {Ranzato, M. and Beygelzimer, A. and Liang, P. S. and Vaughan, J. W. and Dauphin, Y.},
  ADDRESS = {Virtual},
}
```

Abbas, A., Nadeem, M., & Shafiq, A. (2014). Detection of Isotropic Regions and Enhancement of Fault Attributes in Seismic Volumes. *SEG Technical Program Expanded Abstracts*, *2014*, 1420-1423. doi:10.1190/segam2014-0425.1

BibTeX

```bibtex
@article{Abbas2014,
  TITLE = {Detection of Isotropic Regions and Enhancement of Fault Attributes in Seismic Volumes},
  AUTHOR = {Abbas, Ahmed and Nadeem, Mehak and Shafiq, Arsalan},
  LANGUAGE = {eng},
  ISSN = {1052-3812},
  DOI = {10.1190/segam2014-0425.1},
  PUBLISHER = {Society of Exploration Geophysicists},
  YEAR = {2014},
  DATE = {2014},
  JOURNAL = {SEG Technical Program Expanded Abstracts},
  VOLUME = {2014},
  PAGES = {1420--1423},
}
```

Abbas, A., & Swoboda, P. (2019b). Bottleneck Potentials in Markov Random Fields. arXiv: 1904.08080. Retrieved from http://arxiv.org/abs/1904.08080

Abstract:

We consider general discrete Markov Random Fields (MRFs) with additional
bottleneck potentials which penalize the maximum (instead of the sum) over
local potential values taken by the MRF assignment. Bottleneck potentials or
analogous constructions have been considered in (i) combinatorial optimization
(e.g. bottleneck shortest path problem, the minimum bottleneck spanning tree
problem, bottleneck function minimization in greedoids), (ii) inverse problems
with $L_{\infty}$-norm regularization, and (iii) valued constraint satisfaction
on the $(\min,\max)$-pre-semirings. Bottleneck potentials for general discrete
MRFs are a natural generalization of the above direction of modeling work to
Maximum-A-Posteriori (MAP) inference in MRFs. To this end, we propose MRFs
whose objective consists of two parts: terms that factorize according to (i)
$(\min,+)$, i.e. potentials as in plain MRFs, and (ii) $(\min,\max)$, i.e.
bottleneck potentials. To solve the ensuing inference problem, we propose
high-quality relaxations and efficient algorithms for solving them. We
empirically show efficacy of our approach on large scale seismic horizon
tracking problems.
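To make the model concrete: the objective combines a (min,+) part (ordinary unary and pairwise potentials) with a (min,max) part (bottleneck potentials). The brute-force sketch below is only my illustration of this objective on a tiny chain MRF; the paper's contribution is relaxations and algorithms that avoid such enumeration, and all names here are made up for the example.

```python
import itertools
import numpy as np

def map_with_bottleneck(unaries, pairwise, bottleneck):
    """Exhaustive MAP for a tiny chain MRF whose objective is the usual
    sum of potentials plus the maximum over bottleneck potentials."""
    n, L = unaries.shape
    best_val, best_x = np.inf, None
    for x in itertools.product(range(L), repeat=n):
        # (min,+) part: unary and chain pairwise potentials.
        linear = sum(unaries[i, x[i]] for i in range(n))
        linear += sum(pairwise[i][x[i], x[i + 1]] for i in range(n - 1))
        # (min,max) part: penalize only the single worst bottleneck value.
        bneck = max(bottleneck[i, x[i]] for i in range(n))
        if linear + bneck < best_val:
            best_val, best_x = linear + bneck, x
    return best_x, best_val

# Two nodes, two labels; the bottleneck term shifts the optimum away from
# the plain-MRF minimizer.
unaries = np.array([[0.0, 1.0], [1.0, 0.0]])
pairwise = [np.array([[0.0, 2.0], [2.0, 0.0]])]
bottleneck = np.array([[0.0, 5.0], [5.0, 0.0]])
x_star, value = map_with_bottleneck(unaries, pairwise, bottleneck)
```

In this toy instance the plain (min,+) minimizers are (0, 0) and (1, 1) at cost 1, but the bottleneck term moves the optimum to (0, 1) at cost 2, showing how the two semirings interact.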

BibTeX

```bibtex
@online{DBLP:journals/corr/abs-1904-08080,
  TITLE = {Bottleneck Potentials in {Markov Random Fields}},
  AUTHOR = {Abbas, Ahmed and Swoboda, Paul},
  LANGUAGE = {eng},
  URL = {http://arxiv.org/abs/1904.08080},
  EPRINT = {1904.08080},
  EPRINTTYPE = {arXiv},
  YEAR = {2019},
}
```
