Identification of I-equivalent subnetworks in Bayesian networks to incorporate experts' knowledge

Sang Min Lee, Seoung Bum Kim

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)


Bayesian networks (BNs) have been widely used in causal analysis because they can express the statistical relationships among significant variables. To obtain better causal analysis results, numerous studies have emphasized the importance of combining knowledge-based and data-based approaches. However, combining the two is difficult because doing so can reduce the effectiveness of BN structure learning. Moreover, the learning schemes that BNs adopt for computational efficiency can lead to inadequate causal analyses. To address these problems, we propose a knowledge-driven BN structure calibration algorithm that yields richer causal semantics. We first present an algorithm that efficiently identifies the subnetworks that can be altered while still satisfying the learning conditions of the BN. We then incorporate experts' knowledge to remove erroneous causal relationships from the learned network. Experiments on various simulated and benchmark data sets were conducted to examine the properties of the proposed method and to compare its performance with that of an existing method. Furthermore, an experimental study with real data from semiconductor fabrication plants demonstrated that the proposed method achieved superior structural accuracy.
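The abstract's notion of "I-equivalent subnetworks" rests on the standard result that reversing a *covered* edge in a DAG yields an I-equivalent DAG (one encoding the same conditional-independence statements). The sketch below illustrates only that classic covered-edge test, not the paper's calibration algorithm itself; the DAG representation (a dict mapping each node to its set of children) and all function names are assumptions for illustration.

```python
def parents(dag, node):
    """Return the parent set of `node` in `dag` (dict: node -> set of children)."""
    return {u for u, children in dag.items() if node in children}

def is_covered(dag, x, y):
    """An edge x -> y is covered iff Pa(y) = Pa(x) ∪ {x}."""
    return y in dag[x] and parents(dag, y) == parents(dag, x) | {x}

def reverse_if_covered(dag, x, y):
    """Reverse x -> y only when it is covered, so the result is I-equivalent."""
    if not is_covered(dag, x, y):
        raise ValueError(f"{x} -> {y} is not covered; reversal may change I-equivalence")
    new = {u: set(children) for u, children in dag.items()}
    new[x].discard(y)
    new[y].add(x)
    return new

# Example DAG: A -> B, A -> C, B -> C
dag = {"A": {"B", "C"}, "B": {"C"}, "C": set()}
print(is_covered(dag, "B", "C"))  # True:  Pa(C) = {A, B} = Pa(B) ∪ {B}
print(is_covered(dag, "A", "C"))  # False: Pa(C) = {A, B} but Pa(A) ∪ {A} = {A}
print(reverse_if_covered(dag, "B", "C"))
```

In this spirit, an expert's knowledge about an edge's direction can be applied to a learned structure by reversing covered edges only, leaving the represented independence model unchanged.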

Original language: English
Article number: e12346
Journal: Expert Systems
Publication status: Accepted/In press - 2018 Jan 1


Keywords

  • Bayesian networks
  • expert priors
  • inductive learning
  • knowledge representation
  • structure learning

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Theoretical Computer Science
  • Computational Theory and Mathematics
  • Artificial Intelligence

