Open Access
ARTICLE
Comparative Efficacy of Transformer and Recurrent Neural Networks in Automated Blood Clot Detection from Clinical Text
Vol. 1, No. 1 (2024), Articles section
Abstract
The accurate and timely identification of medical conditions from electronic health records (EHRs) is crucial for patient care, research, and public health surveillance. Blood clot detection in particular presents a significant challenge because mentions in unstructured clinical text are often nuanced and implicit. This study compares advanced neural network architectures for identifying thrombus-related information in clinical narratives: Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Text-to-Text Transfer Transformer (T5), and Recurrent Neural Networks (RNNs). We evaluate these models, each with distinct strengths in natural language understanding, on a proprietary dataset of de-identified clinical notes, measuring precision, recall, and F1-score. Our findings indicate that Transformer-based models, particularly those pre-trained on biomedical corpora, significantly outperform traditional RNNs, demonstrating a superior ability to capture the complex contextual dependencies vital for nuanced clinical concept extraction.
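The precision, recall, and F1-score used as evaluation criteria above can be computed as follows. This is a minimal illustrative sketch: the labels and predictions below are hypothetical placeholders, since the study's clinical dataset is proprietary and de-identified.

```python
def precision_recall_f1(gold, pred, positive=1):
    """Compute precision, recall, and F1 for binary clot-mention labels.

    gold and pred are parallel sequences of 0/1 labels, where `positive`
    (1 here) marks a note flagged as containing a thrombus mention.
    """
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1


# Hypothetical gold labels (1 = note mentions a thrombus) and model outputs.
gold = [1, 1, 0, 0, 1, 0, 1, 0]
pred = [1, 0, 0, 1, 1, 0, 1, 0]
p, r, f = precision_recall_f1(gold, pred)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```

In practice a library routine such as scikit-learn's `precision_recall_fscore_support` would be used over the model's per-note predictions; the hand-rolled version here just makes the definitions explicit.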
References
Huang, K., Altosaar, J. and Ranganath, R., 2019. ClinicalBERT: modeling clinical notes and predicting hospital readmission. arXiv preprint arXiv:1904.05342.
Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C.H. and Kang, J., 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4), pp.1234-1240.
Si, Y., Wang, J., Xu, H. and Roberts, K., 2019. Enhancing clinical concept extraction with contextual embeddings. Journal of the American Medical Informatics Association, 26(11), pp.1297-1304.
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W. and Liu, P.J., 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140), pp.1-67.
Devlin, J., Chang, M.W., Lee, K. and Toutanova, K., 2019, June. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (pp. 4171-4186).
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L. and Stoyanov, V., 2019. RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Li, P. and Huang, H., 2016. Clinical information extraction via convolutional neural network. arXiv preprint arXiv:1603.09381.
Deo, R.C., 2015. Machine learning in medicine. Circulation, 132(20), pp.1920-1930.
Giorgi, J.M. and Bader, G.D., 2018. Transfer learning for biomedical named entity recognition with neural networks. Bioinformatics, 34(23), pp.4087-4094.
Habibi, M., Weber, L., Neves, M., Wiegandt, D.L. and Leser, U., 2017. Deep learning with word embeddings improves biomedical named entity recognition. Bioinformatics, 33(14), pp.i37-i48.
Wang, X., Zhang, Y., Ren, X., Zhang, Y., Zitnik, M., Shang, J., Langlotz, C. and Han, J., 2019. Cross-type biomedical named entity recognition with deep multi-task learning. Bioinformatics, 35(10), pp.1745-1752.
Bhasuran, B. and Natarajan, J., 2018. Automatic extraction of gene-disease associations from literature using joint ensemble learning. PLoS One, 13, p.e0200699.
Lim, S. and Kang, J., 2018. Chemical–gene relation extraction using recursive neural network. Database, 2018, p.bay060.
Wiese, G., Weissenborn, D. and Neves, M., 2017. Neural domain adaptation for biomedical question answering. arXiv preprint arXiv:1706.03610.