Omni-Dimensional Adaptation for MobileNetV3 using Bayesian Hyperparameter Tuning

  • Duc-Long Dang


Convolutional Neural Networks (CNNs) have achieved tremendous success across various domains, including computer vision and natural language processing. However, CNN models rely on static convolution kernels that cannot adapt to the specific features of each input, a limitation that undermines the network's representational capacity. This paper proposes integrating Omni-Dimensional Dynamic Convolution (ODConv) into MobileNetV3, resulting in Omni-MobileNetV3. ODConv introduces multi-dimensional attention to dynamically adjust convolution kernels along all four dimensions of the kernel space (spatial size, input channels, output channels, and number of kernels), enhancing the network's ability to capture diverse and adaptive features. To efficiently
optimize hyperparameters of the new architecture, we employ Bayesian
Optimization, which leverages past evaluations to guide the search. Experiments on benchmark datasets, including CIFAR-100, Tiny ImageNet, and medical image collections, demonstrate that Omni-MobileNetV3 outperforms
standard MobileNetV3 baselines, achieving accuracy gains of up to 3%
while maintaining efficiency. This work introduces a powerful dynamic
convolution approach that adapts across all kernel dimensions. Combined with Bayesian hyperparameter tuning, it achieves state-of-the-art
performance on image classification tasks.
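The kernel-aggregation step of ODConv described above can be sketched as follows. The sketch shows only how four attention vectors (over spatial positions, input channels, output channels, and the candidate kernels) modulate and combine n candidate kernels into one input-conditioned kernel; the small attention branch that produces these weights from the input feature map is omitted, and the function name and shapes are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def odconv_kernel(candidate_kernels, a_spatial, a_in, a_out, a_kernel):
    """Aggregate n candidate kernels into one dynamic kernel (ODConv-style).

    candidate_kernels: (n, c_out, c_in, k, k)  n learnable candidate kernels
    a_spatial: (k, k)    attention over spatial kernel positions
    a_in:      (c_in,)   attention over input channels
    a_out:     (c_out,)  attention over output channels (filters)
    a_kernel:  (n,)      attention over the n candidate kernels
    In ODConv these attentions are produced per input by a lightweight
    squeeze-and-excitation style branch (not shown here).
    """
    w = candidate_kernels
    w = w * a_spatial[None, None, None, :, :]    # spatial dimension
    w = w * a_in[None, None, :, None, None]      # input-channel dimension
    w = w * a_out[None, :, None, None, None]     # output-channel dimension
    w = w * a_kernel[:, None, None, None, None]  # kernel dimension
    return w.sum(axis=0)                         # (c_out, c_in, k, k)
```

With all attentions set to ones and a uniform kernel attention, the result reduces to the plain average of the candidate kernels; in practice the attentions vary with the input, which is what makes the convolution dynamic.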
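The Bayesian Optimization loop used for hyperparameter tuning can likewise be sketched on a toy 1-D problem: a Gaussian-process surrogate with an RBF kernel is fitted to past evaluations, and an expected-improvement acquisition picks the next point to evaluate. The function names, the length-scale, the grid search over the acquisition, and the toy objective are all illustrative assumptions; they are not the paper's actual search space or tuning configuration.

```python
import math
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """GP posterior mean and std at x_test given observed (x_train, y_train)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(x_test, x_test)) - np.sum(v ** 2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """EI acquisition for maximization, using the standard normal pdf/cdf."""
    z = (mu - best) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (mu - best) * cdf + sigma * pdf

def bayes_opt(objective, bounds=(0.0, 1.0), n_init=3, n_iter=10, seed=0):
    """Maximize `objective` over `bounds` by iteratively evaluating the
    point with the highest expected improvement under the GP surrogate."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(bounds[0], bounds[1], size=n_init)  # random warm-up
    y = np.array([objective(v) for v in x])
    grid = np.linspace(bounds[0], bounds[1], 200)       # candidate points
    for _ in range(n_iter):
        mu, sigma = gp_posterior(x, y, grid)
        nxt = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
        x = np.append(x, nxt)
        y = np.append(y, objective(nxt))
    return x[np.argmax(y)], y.max()
```

Because each step conditions on all past evaluations, the search concentrates samples near promising regions far sooner than grid or random search would, which is the property the paper exploits when tuning Omni-MobileNetV3.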

Author Biography

Long Vo, Nhat-Quang Phan, Van-Dat Tran∗, and Duc-Long Dang∗
VN-UK Institute for Research and Executive Education, the University of Danang -
Danang 550000, Vietnam