Smooth Maximum Unit: Smooth Activation Function for Deep Networks Using Smoothing Maximum Technique

Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 794-803

Abstract


Deep learning researchers have a keen interest in proposing novel activation functions that can boost neural network performance. A good choice of activation function can significantly improve network performance and training dynamics. The Rectified Linear Unit (ReLU) is a popular hand-designed activation function and, owing to its simplicity, the most common choice in the deep learning community, though it has some drawbacks. In this paper, we propose two novel activation functions based on a smooth approximation of the maximum function, which we call the Smooth Maximum Unit (SMU and SMU-1). We show that SMU and SMU-1 can smoothly approximate ReLU, Leaky ReLU, or more generally the Maxout family, and that GELU is a particular case of SMU. Replacing ReLU with SMU improves Top-1 classification accuracy by 6.22%, 3.39%, 3.51%, and 3.08% on the CIFAR100 dataset with the ShuffleNet V2, PreActResNet-50, ResNet-50, and SeNet-50 models, respectively. Our experimental evaluation further shows that, compared to widely used activation functions, SMU and SMU-1 improve network performance on a variety of deep learning tasks, including image classification, object detection, semantic segmentation, and machine translation.
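
For illustration, below is a minimal PyTorch sketch of how the smooth-maximum idea described above can be realized: the identity max(a, b) = ((a + b) + |a - b|) / 2 is made differentiable by replacing |x| with the smooth surrogate x * erf(mu * x), and this smooth max is applied to max(alpha * x, x). The class name, the parameter names alpha and mu, and their default values are illustrative assumptions, not necessarily the exact formulation or initialization used in the paper.

```python
import torch
import torch.nn as nn


class SMU(nn.Module):
    """Sketch of a Smooth Maximum Unit-style activation.

    Replaces |x| in max(a, b) = ((a + b) + |a - b|) / 2 with the smooth
    surrogate x * erf(mu * x), applied to max(alpha * x, x). This yields a
    smooth Leaky-ReLU-like activation. alpha and mu defaults here are
    illustrative; mu is kept trainable.
    """

    def __init__(self, alpha: float = 0.25, mu: float = 2.5):
        super().__init__()
        self.alpha = alpha
        # Trainable smoothing parameter: larger mu -> closer to the hard max.
        self.mu = nn.Parameter(torch.tensor(mu))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Smooth version of max(alpha * x, x):
        # ((1 + alpha) * x + (1 - alpha) * x * erf(mu * (1 - alpha) * x)) / 2
        return ((1 + self.alpha) * x
                + (1 - self.alpha) * x
                * torch.erf(self.mu * (1 - self.alpha) * x)) / 2


if __name__ == "__main__":
    act = SMU()
    x = torch.linspace(-3, 3, 7)
    print(act(x))  # behaves like a smooth Leaky ReLU
```

As mu grows, erf(mu * (1 - alpha) * x) approaches sign(x) and the activation approaches the hard max(alpha * x, x), i.e. Leaky ReLU (or ReLU when alpha = 0); smaller mu gives a smoother transition around zero.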

Related Material


@InProceedings{Biswas_2022_CVPR,
  author    = {Biswas, Koushik and Kumar, Sandeep and Banerjee, Shilpak and Pandey, Ashish Kumar},
  title     = {Smooth Maximum Unit: Smooth Activation Function for Deep Networks Using Smoothing Maximum Technique},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {794-803}
}