The School of Computer Science would like to present…
Date: Tuesday, May 7th, 2025
Time: 11:00 AM
Location: Essex Hall, Room 122
Knowledge Graphs (KGs) provide structured representations of real-world facts and have recently become a core component of artificial intelligence and intelligent systems. Nevertheless, most real-world KGs suffer from incompleteness. Because of this incompleteness, or sparsity, such KGs cannot be fully exploited for applications such as question answering (QA), recommendation, and reasoning. To address this issue, Knowledge Graph Completion (KGC) has become an important problem, in which the Link Prediction (LP) task is a common approach: LP directly predicts missing links (edges) or relations in a KG, thereby completing the graph structure.
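As a brief illustration of the LP task, the formalization below uses standard (assumed) notation rather than the thesis's own: a KG is a set of triples over entities and relations, and LP ranks candidate entities for the missing position using a learned scoring function.

```latex
% Assumed standard formalization of link prediction (illustrative only).
% A KG is a set of triples over an entity set E and a relation set R.
\[
\mathcal{G} \subseteq \mathcal{E} \times \mathcal{R} \times \mathcal{E}
\]
% Tail prediction: given an incomplete triple (h, r, ?), a KGE model scores
% every candidate entity t with f(h, r, t) and returns the highest-ranked one.
\[
\hat{t} = \arg\max_{t \in \mathcal{E}} f(h, r, t),
\qquad f : \mathcal{E} \times \mathcal{R} \times \mathcal{E} \to \mathbb{R}.
\]
```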
To the best of our knowledge, this is the first comprehensive taxonomy of Knowledge Graph Embedding (KGE) models for the LP task in triple completion. This work provides a systematic and comparative review of KGE models for LP. KGE methods for LP can be grouped into the following categories:
- Geometric-based: i) Translational, ii) Projection, iii) Gaussian, iv) Manifold, v) Rotational models
- Tensor decomposition-based: i) Bilinear, ii) Non-bilinear models
- Neural network-based: i) Shallow Neural Network, ii) Convolutional Neural Network (CNN), iii) Recurrent Neural Network (RNN), iv) Graph Neural Network (GNN), v) Transformer/attention-based models
- External information-augmented: i) Text-aware, ii) Temporal, iii) Structure-aware, iv) Multi-modal models
- Logic-enhanced: i) Logic Rule Injection, ii) Constraints and Type Awareness
- Adversarial training-based: i) Negative Sampling methods, ii) Generative Adversarial Network (GAN)-inspired models
- Large Language Model (LLM)-augmented KGE models
For each sub-category, we examine typical modelling principles, architectural designs, limitations, and representative research work. We compare their reported performance in the literature on various tasks, including link prediction, knowledge graph completion, question answering, entity classification, and temporal reasoning. We also summarize the benchmark datasets, evaluation methodologies, and performance measures employed in prior studies. This comprehensive study highlights research gaps, offers a broad view of the transitions and evolution of KGE methods for the LP task, and aims to inspire future work on embedding-based reasoning over knowledge graphs.
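To make the taxonomy concrete, the sketch below illustrates the scoring functions of two representative families covered in the review: a translational (geometric) model in the style of TransE and a bilinear (tensor decomposition) model in the style of DistMult. The entity and relation names, toy embeddings, and NumPy code are illustrative assumptions, not the author's implementation.

```python
# Minimal, illustrative sketch (not the thesis's implementation): scoring
# functions of two representative KGE families, using NumPy.
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # toy embedding dimension (assumption)

# Toy entity/relation embeddings; in practice these are learned from the KG.
entities = {name: rng.normal(size=dim) for name in ["Windsor", "Ontario", "Canada"]}
relations = {name: rng.normal(size=dim) for name in ["located_in"]}

def score_transe(h, r, t):
    """Translational (geometric) family, e.g. TransE: plausible triples satisfy
    h + r ≈ t, so the score is the negative distance -||h + r - t||."""
    return -float(np.linalg.norm(entities[h] + relations[r] - entities[t]))

def score_distmult(h, r, t):
    """Bilinear (tensor decomposition) family, e.g. DistMult: the score is the
    trilinear product <h, r, t> = sum_i h_i * r_i * t_i."""
    return float(np.sum(entities[h] * relations[r] * entities[t]))

# Link prediction for the incomplete triple (Windsor, located_in, ?):
# rank candidate tail entities by score and keep the best one.
candidates = ["Ontario", "Canada"]
best = max(candidates, key=lambda t: score_transe("Windsor", "located_in", t))
print("TransE prediction for (Windsor, located_in, ?):", best)
```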
Keywords: Knowledge Graphs (KGs), Knowledge Graph Embedding (KGE), Link Prediction (LP), Knowledge Graph Completion (KGC), Representation Learning, Neural Network Models, Transformers, Attention Mechanism, Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Graph Neural Network (GNN), Geometric Models, Translational Models, Tensor Decomposition Models, Bilinear Models, Embeddings, Temporal Knowledge Graphs, Generative Adversarial Networks (GANs), Large Language Models (LLMs), Knowledge Reasoning, Knowledge Base (KB).
External Reader: Dr. Esam Abdel-Raheem
Internal Reader: Dr. Dan Wu
Internal Reader: Dr. Hamidreza Koohi
Advisor(s): Dr. Ziad Kobti