TY - GEN
T1 - The Structure of Deep Neural Network for Interpretable Transfer Learning
AU - Kim, Dowan
AU - Lim, Woohyun
AU - Hong, Minye
AU - Kim, Hyeoncheol
N1 - Funding Information:
ACKNOWLEDGEMENT: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT & Future Planning (NRF-2017R1A2B4003558).
Publisher Copyright:
© 2019 IEEE.
PY - 2019/4/1
Y1 - 2019/4/1
N2 - Training a deep neural network requires a large amount of high-quality data and time. However, most real-world tasks do not have enough labeled data to train such complex models. To address this problem, transfer learning reuses a pretrained model on a new task. One weakness of transfer learning, however, is that it applies a pretrained model to a new task without understanding the output of the existing model, which may cause a lack of interpretability when training deep neural networks. In this paper, we propose a technique to improve interpretability in transfer learning tasks. We define interpretable features and use them to train a model for a new task, so that we can explain the relationship between the source and target domains in a transfer learning task. The Feature Network (FN) consists of a Feature Extraction Layer and a single mapping layer that connects the features extracted from the source domain to the target domain. We examined the interpretability of transfer learning by applying a pretrained model with defined features to Korean character classification.
AB - Training a deep neural network requires a large amount of high-quality data and time. However, most real-world tasks do not have enough labeled data to train such complex models. To address this problem, transfer learning reuses a pretrained model on a new task. One weakness of transfer learning, however, is that it applies a pretrained model to a new task without understanding the output of the existing model, which may cause a lack of interpretability when training deep neural networks. In this paper, we propose a technique to improve interpretability in transfer learning tasks. We define interpretable features and use them to train a model for a new task, so that we can explain the relationship between the source and target domains in a transfer learning task. The Feature Network (FN) consists of a Feature Extraction Layer and a single mapping layer that connects the features extracted from the source domain to the target domain. We examined the interpretability of transfer learning by applying a pretrained model with defined features to Korean character classification.
KW - Interpretability
KW - Machine Learning
KW - Transfer Learning
UR - http://www.scopus.com/inward/record.url?scp=85064672305&partnerID=8YFLogxK
U2 - 10.1109/BIGCOMP.2019.8679150
DO - 10.1109/BIGCOMP.2019.8679150
M3 - Conference contribution
AN - SCOPUS:85064672305
T3 - 2019 IEEE International Conference on Big Data and Smart Computing, BigComp 2019 - Proceedings
BT - 2019 IEEE International Conference on Big Data and Smart Computing, BigComp 2019 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE International Conference on Big Data and Smart Computing, BigComp 2019
Y2 - 27 February 2019 through 2 March 2019
ER -