Neural networks have many applications in Computer Science, achieving exceptional performance compared to other existing methods. However, neural networks have a large memory footprint, which is considered one of their main challenges. This is why considerable research effort has recently been directed towards model compression.

This thesis studies a divide-and-conquer approach that transforms an existing trained neural network into another network with fewer parameters, with the goal of decreasing its memory footprint while taking into account the resulting loss in performance.

It is based on existing layer transformation techniques such as Canonical Polyadic (CP) decomposition and SVD affine transformations. Given an artificial neural network, trained on a certain

dataset, an agent optimizes the architecture of the neural network in a bottom-up manner. It cuts the network into sub-networks of length 1 and optimizes each sub-network using layer transformations. It then selects the most promising sub-networks to construct sub-networks of length 2. This process is repeated until it constructs an artificial neural network that covers the functionality of the original neural network.
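To illustrate the kind of layer transformation the approach builds on, the following NumPy sketch applies a truncated SVD to one dense layer. This is a generic illustration of SVD-based layer compression, not the thesis implementation; the function names and the chosen rank are hypothetical.

```python
# Illustrative sketch of SVD-based layer compression (hypothetical names,
# not the thesis code): a dense layer W @ x + b with W of shape (m, n) is
# replaced by two smaller affine maps via a rank-r truncated SVD,
# W ≈ (U_r S_r) (V_r^T), cutting parameters from m*n to r*(m + n).
import numpy as np

def svd_compress_layer(W, b, rank):
    """Factor an (m, n) weight matrix into (m, rank) and (rank, n) pieces."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # (m, rank), singular values folded in
    B = Vt[:rank, :]             # (rank, n)
    return A, B, b

def forward_compressed(A, B, b, x):
    # Two cheap matmuls replace one large one.
    return A @ (B @ x) + b

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
b = rng.standard_normal(64)
x = rng.standard_normal(128)

A, B, b2 = svd_compress_layer(W, b, rank=16)
approx = forward_compressed(A, B, b2, x)   # approximates W @ x + b
```

An agent such as the one described above could apply this transformation to each candidate sub-network, trading rank (and therefore approximation quality) against parameter count; the compression pays off whenever rank < m*n / (m + n).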

This thesis offers an extensive analysis of the proposed approach. We tested this technique on several well-known neural network architectures with popular datasets. We

could outperform recent techniques in both compression rate and network performance on LeNet-5 with MNIST. We could compress ResNet-20 to 25% of its original size while achieving performance comparable to networks in the literature of twice this size.