@inproceedings{StutzMLSYS2021,
TITLE = {Bit Error Robustness for Energy-Efficient {DNN} Accelerators},
AUTHOR = {Stutz, David and Chandramoorthy, Nandhini and Hein, Matthias and Schiele, Bernt},
LANGUAGE = {eng},
PUBLISHER = {mlsys.org},
YEAR = {2021},
ABSTRACT = {Deep neural network (DNN) accelerators received considerable attention in past years due to saved energy compared to mainstream hardware. Low-voltage operation of DNN accelerators allows to further reduce energy consumption significantly, however, causes bit-level failures in the memory storing the quantized DNN weights. In this paper, we show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) improves robustness against random bit errors in (quantized) DNN weights significantly. This leads to high energy savings from both low-voltage operation as well as low-precision quantization. Our approach generalizes across operating voltages and accelerators, as demonstrated on bit errors from profiled SRAM arrays. We also discuss why weight clipping alone is already a quite effective way to achieve robustness against bit errors. Moreover, we specifically discuss the involved trade-offs regarding accuracy, robustness and precision: Without losing more than 1% in accuracy compared to a normally trained 8-bit DNN, we can reduce energy consumption on CIFAR-10 by 20%. Higher energy savings of, e.g., 30%, are possible at the cost of 2.5% accuracy, even for 4-bit DNNs.},
BOOKTITLE = {Proceedings of the 4th MLSys Conference},
EDITOR = {Smola, A. and Dimakis, A. and Stoica, I.},
ADDRESS = {Virtual Conference},
}