Subject Area

Computer and Control Systems Engineering

Article Type

Review

Abstract

Biomedical relation extraction is a critical task in healthcare research. As the volume of biomedical publications continues to grow exponentially, efficient extraction of entity relationships has become essential for accelerating knowledge discovery, drug development, and precision medicine initiatives. The field has evolved from focusing on simple binary relations, such as Protein-Protein Interactions (PPI), which are fundamental to understanding cellular processes and therapeutic development, to addressing more complex multi-class classification challenges such as Drug-Drug Interactions (DDI) and Chemical-Protein Interactions (CPI). Pre-trained language models have become the cornerstone of modern approaches to biomedical relation extraction. Models pre-trained specifically on biomedical literature, such as BioBERT, PubMedBERT, and SciBERT, along with general-purpose transformers like BERT and RoBERTa and, more recently, large language models such as GPT variants, have demonstrated unprecedented capabilities in capturing the nuanced relationships between drugs, proteins, chemicals, and other biological entities. This review provides a comprehensive analysis of how pre-trained language models have transformed biomedical relation extraction, examines their performance across various relation extraction tasks, and discusses their impact on the accuracy and efficiency of biomedical knowledge extraction.

Keywords

Drug-Drug Interactions, Chemical-Protein Interactions, Protein-Protein Interactions, Biomedical Relation Extraction, Pre-trained Language Models

Creative Commons License

Creative Commons Attribution 4.0 License
This work is licensed under a Creative Commons Attribution 4.0 License.
