Introduction
To date, small, imbalanced datasets are considered challenging for efficiently training deep learning (DL) models, especially in the medical domain. Accordingly, most Artificial Intelligence (AI) approaches operating on small datasets rely on shallow radiomics, where traditional machine learning (ML) is utilized for analysing image-derived features. In this study, we evaluate a recently introduced spatial neural network scheme called Distance-Encoding Biomorphic-Informational Neural Network (DEBI-NN), which trains the spatial coordinates of artificial neurons instead of weights; the weights are then calculated from inter-neuron distances. This technique dramatically reduces the number of trainable parameters, thereby making DEBI-NN eligible for the analysis of small, imbalanced datasets. We refer to this property as spatial plasticity. We hypothesized that DEBI-NNs could systematically outperform baseline NN models on small clinical datasets while requiring fewer regularization techniques, as spatial plasticity may have self-regularizing properties. To test our hypothesis, we compared DEBI-NNs with baseline NNs under various regularization techniques, to investigate how DEBI-NNs perform in the presence of regularizers on small multi-centric medical imaging datasets.
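The distance-to-weight mechanism described above can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the Gaussian-style mapping `exp(-d)` and all variable names are assumptions, since the abstract only states that weights are derived from neuron distances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical DEBI-NN-style layer: the trainable parameters are 3-D neuron
# coordinates, and connection weights are derived from pairwise distances.
n_in, n_out = 4, 3
coords_in = rng.normal(size=(n_in, 3))    # trained instead of a weight matrix
coords_out = rng.normal(size=(n_out, 3))

# Pairwise Euclidean distances between input and output neurons.
dists = np.linalg.norm(coords_in[:, None, :] - coords_out[None, :, :], axis=-1)

# Illustrative distance-to-weight mapping (assumption): closer neurons
# connect more strongly. The exact function is not specified in the abstract.
weights = np.exp(-dists)

x = rng.normal(size=n_in)
activation = np.tanh(x @ weights)   # forward pass through the derived weights
```

Note the scaling argument behind the parameter reduction: coordinates grow linearly with the number of neurons (3 values per neuron), whereas a dense weight matrix grows with the product of layer widths.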
Methods
Three multi-centric datasets were collected: diffuse large B-cell lymphoma (DLBCL) [18F]FDG positron emission tomography (PET)/computed tomography (CT) scans with clinical parameters to predict 2-year event-free survival; the head and neck [18F]FDG PET/CT dataset from the 2022 MICCAI challenge (HECKTOR) to predict human papillomavirus status; and [68Ga]Ga-PSMA-11 (PSMA-11) PET/CT as well as PSMA-11 PET/magnetic resonance imaging (MRI) cases to predict histopathology-provided International Society of Urological Pathology (ISUP) grades as low (ISUP) and high (ISUP) risk. Per cohort, 5 different network configurations were defined, with 1, 2 or 3 hidden layers and varying neuron counts. Per configuration, DEBI-NNs had 7 and baseline NNs had 6 regularization techniques, each toggled independently, totalling 2^7 = 128 and 2^6 = 64 regularization variants per network scheme to train and evaluate. Test balanced accuracy (BACC) was measured for each model, and the relationship between test BACC and the applied regularization techniques was evaluated for DEBI-NN and baseline NN models.
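The variant counts follow from toggling each regularization technique on or off independently, and BACC is the mean of per-class recall, which is robust to class imbalance. A quick sketch (the technique names are placeholders, not the study's actual list, and the BACC helper is a generic textbook implementation):

```python
from itertools import product

# Placeholder technique names; the study's actual regularizers are not
# enumerated in this abstract.
debi_techniques = ["reg1", "reg2", "reg3", "reg4", "reg5", "reg6", "reg7"]
baseline_techniques = ["reg1", "reg2", "reg3", "reg4", "reg5", "reg6"]

# Every on/off combination of the available techniques.
debi_variants = list(product([False, True], repeat=len(debi_techniques)))
baseline_variants = list(product([False, True], repeat=len(baseline_techniques)))
print(len(debi_variants), len(baseline_variants))  # 128 64

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall (textbook BACC definition)."""
    classes = set(y_true)
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)
```

On an imbalanced toy label set such as `[0, 0, 0, 1]`, BACC weights the minority class equally with the majority class, unlike plain accuracy.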
Results
The best-performing DEBI-NN models yielded 84.5 %, 80 % and 80.5 % BACC in the DLBCL, HECKTOR and PROSTATE datasets, respectively. In contrast, the highest-performing baseline NN models yielded 71.9 %, 77.3 % and 77.3 % BACC in the same cohorts, respectively. In addition, baseline NNs required more regularization techniques to raise average test BACC from 53 % (no regularization) to 60 % (6 regularizations), while DEBI-NNs needed no regularization to achieve 62 % BACC. Conversely, DEBI-NN BACC monotonically decreased to 56 % as the number of regularizations increased.
Conclusions
DEBI-NNs exhibit significantly lower training complexity than baseline NNs, while also outperforming them with minimal or no regularization. Our results strongly imply that DEBI-NNs have the potential to pave the way for utilizing neural networks on the small, imbalanced medical datasets that the field of medical imaging research routinely operates with.

