Optimizing Body Region Classification With Deep Convolutional Activation Features

Obioma Pelka, Felix Nensa, Christoph M. Friedrich; Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018

Abstract


The goal of this work is to automatically generate image keywords and use them as text representations to improve the classification accuracy of body regions in medical images. To create the keyword generation model, a Long Short-Term Memory (LSTM) based Recurrent Neural Network (RNN) is adopted, trained on preprocessed biomedical image captions as text representations and on visual features extracted with Convolutional Neural Networks (CNN). For image representation, deep convolutional activation features and Bag-of-Keypoints (BoK) features are extracted for each radiograph and combined with the automatically generated keywords. Random Forest models and Support Vector Machines are trained with these multimodal image representations, as well as with visual representations alone, to predict body regions. Adopting multimodal image features proves to be the better approach, as it increases the prediction accuracy for body regions.
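As a rough illustration of the classification stage described above, the sketch below extracts deep convolutional activation features with a pretrained CNN, encodes automatically generated keywords as a simple bag-of-words text representation, concatenates both into a multimodal feature vector, and trains a Random Forest and an SVM. The choice of ResNet-50 from torchvision, the CountVectorizer text encoding, the scikit-learn classifiers, and all file names, keywords, and labels are assumptions for illustration only; the paper's exact CNN, keyword vocabulary, and hyperparameters may differ.

# Minimal sketch of multimodal body-region classification (assumptions noted above).
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Pretrained CNN used as a fixed extractor of deep convolutional activation features.
cnn = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
cnn.fc = torch.nn.Identity()  # drop the classification head, keep the 2048-d activations
cnn.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def visual_features(image_path):
    """Extract a deep convolutional activation feature vector for one radiograph."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        feats = cnn(preprocess(img).unsqueeze(0))
    return feats.squeeze(0).numpy()

# Hypothetical training data: radiograph paths, generated keyword strings, body-region labels.
image_paths = ["xray_001.png", "xray_002.png"]
keywords    = ["chest lung heart", "hand wrist bone"]
labels      = ["chest", "hand"]

X_visual = np.stack([visual_features(p) for p in image_paths])

# Encode the automatically generated keywords as a bag-of-words text representation.
vectorizer = CountVectorizer()
X_text = vectorizer.fit_transform(keywords).toarray()

# Multimodal representation: concatenate visual and textual features.
X_multimodal = np.hstack([X_visual, X_text])

# Train the two classifier families compared in the paper.
rf  = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_multimodal, labels)
svm = SVC(kernel="linear").fit(X_multimodal, labels)

A visual-only baseline, as reported in the abstract, would simply fit the same classifiers on X_visual instead of X_multimodal and compare the resulting accuracies.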

Related Material


@InProceedings{Pelka_2018_ECCV_Workshops,
author = {Pelka, Obioma and Nensa, Felix and Friedrich, Christoph M.},
title = {Optimizing Body Region Classification With Deep Convolutional Activation Features},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
month = {September},
year = {2018}
}