Pain Intensity Estimation Using Deep Spatiotemporal and Handcrafted Features

Jinwei WANG  Huazhi SUN  

Publication
IEICE TRANSACTIONS on Information and Systems   Vol.E101-D   No.6   pp.1572-1580
Publication Date: 2018/06/01
Online ISSN: 1745-1361
DOI: 10.1587/transinf.2017EDP7318
Type of Manuscript: PAPER
Category: Pattern Recognition
Keywords:
pain intensity estimation, 3D convolutional network, histogram of oriented gradients, feature fusion



Summary: 
Automatically recognizing pain and estimating its intensity is an emerging research area with promising applications in medicine and healthcare. The task plays a crucial role in the diagnosis and treatment of patients with limited ability to communicate verbally, and it remains a challenge in pattern recognition. Recently, deep learning has achieved impressive results in many domains. However, deep architectures require a significant amount of labeled data for training, and when data are insufficient they may fail to outperform conventional handcrafted features, a problem that pain detection also faces. Furthermore, recent studies show that handcrafted features may provide information complementary to deep-learned features; hence, combining these features may improve performance. Motivated by these considerations, in this paper we propose a method for pain intensity estimation based on the combination of deep spatiotemporal and handcrafted features. We use C3D, a deep 3-dimensional convolutional network that takes a continuous sequence of video frames as input, to extract spatiotemporal facial features; C3D models the appearance and motion of videos simultaneously. For the handcrafted features, we extract geometric information by computing, for each frame, the distances between the normalized facial landmarks and those of the mean face shape, and we extract appearance information using histogram of oriented gradients (HOG) features around the normalized facial landmarks of each frame. Two levels of SVRs are trained on the spatiotemporal, geometric, and appearance features to obtain the estimation results. We tested the proposed method on the UNBC-McMaster shoulder pain expression archive database and obtained experimental results that outperform the current state of the art.
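The handcrafted-feature pipeline and two-level SVR fusion described in the summary can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch size, HOG parameters, kernel choices, and the use of scikit-image and scikit-learn are all assumptions made for the sake of a runnable example.

```python
import numpy as np
from skimage.feature import hog      # HOG descriptor (assumed library choice)
from sklearn.svm import SVR          # support vector regression (assumed library choice)

def geometric_features(landmarks, mean_shape):
    """Per-frame geometric feature: distance of each normalized facial
    landmark from the corresponding point of the mean face shape."""
    return np.linalg.norm(landmarks - mean_shape, axis=1)

def appearance_features(gray_frame, landmarks, patch=32):
    """Per-frame appearance feature: HOG descriptors of square patches
    centered on each normalized facial landmark (patch size is illustrative)."""
    half = patch // 2
    feats = []
    for x, y in landmarks.astype(int):
        region = gray_frame[max(y - half, 0):y + half, max(x - half, 0):x + half]
        if region.shape != (patch, patch):
            # Landmark too close to the image border: substitute an empty patch.
            region = np.zeros((patch, patch))
        feats.append(hog(region, pixels_per_cell=(8, 8), cells_per_block=(2, 2)))
    return np.concatenate(feats)

def fit_two_level(feature_sets, y):
    """Two-level fusion: one first-level SVR per feature type (spatiotemporal,
    geometric, appearance), then a second-level SVR on the stacked
    first-level predictions."""
    level1 = [SVR(kernel="rbf").fit(X, y) for X in feature_sets]
    stacked = np.column_stack([m.predict(X) for m, X in zip(level1, feature_sets)])
    level2 = SVR(kernel="linear").fit(stacked, y)
    return level1, level2

def predict_two_level(level1, level2, feature_sets):
    stacked = np.column_stack([m.predict(X) for m, X in zip(level1, feature_sets)])
    return level2.predict(stacked)
```

In this stacking arrangement, each first-level regressor specializes in one feature type, and the second-level regressor learns how to weight their per-frame pain-intensity predictions, which is one common way to realize the feature fusion the paper describes.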