Korean Journal of Chemical Engineering, Vol. 31, No. 6, pp. 930-943, 2014
Fault detection and identification using a Kullback-Leibler divergence based multi-block principal component analysis and Bayesian inference
Considering the huge number of variables in plant-wide process monitoring and the complex relationships (linear, nonlinear, partial correlation, or independence) among them, the performance of multivariate statistical process monitoring (MSPM) may deteriorate, especially because of the independent variables. Moreover, it remains unclear whether related variables maintain high concordance as the process varies. To address this, a multi-block strategy based on a statistical measure, the Kullback-Leibler divergence, is proposed: variables with similar statistical characteristics are grouped into the same block, and a principal component analysis (PCA) model is then built in each low-dimensional subspace. Bayesian inference is employed to combine the monitoring results from the sub-blocks into the final monitoring statistics. Additionally, a novel fault diagnosis approach is developed for fault identification. The superiority of the proposed method is demonstrated by applications to a simple simulated multivariate process and the Tennessee Eastman benchmark process.
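The following is a minimal Python sketch of the monitoring pipeline outlined in the abstract, not the authors' exact algorithm: variables are grouped by the symmetric Kullback-Leibler divergence between their (assumed Gaussian) marginal distributions, a PCA model is fit per block, and the per-block T^2 statistics are fused with a simple Bayesian weighting. The block count, confidence level, likelihood forms, and clustering method are all illustrative assumptions.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from scipy.stats import chi2
from sklearn.decomposition import PCA

def gaussian_kl(mu1, var1, mu2, var2):
    # KL divergence between two univariate Gaussians
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def block_variables(X, n_blocks=3):
    # Group variables whose marginal statistics are similar (small symmetric KL)
    mu, var = X.mean(axis=0), X.var(axis=0) + 1e-12
    m = X.shape[1]
    D = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            d = (gaussian_kl(mu[i], var[i], mu[j], var[j])
                 + gaussian_kl(mu[j], var[j], mu[i], var[i]))
            D[i, j] = D[j, i] = d
    Z = linkage(squareform(D), method="average")
    labels = fcluster(Z, t=n_blocks, criterion="maxclust")
    return [np.where(labels == k)[0] for k in np.unique(labels)]

def fit_block_pca(X, blocks, n_components=2):
    # One low-dimensional PCA model per variable block
    return [(idx, PCA(n_components=min(n_components, len(idx))).fit(X[:, idx]))
            for idx in blocks]

def bayesian_fused_statistic(x, models, alpha=0.99):
    # Fuse per-block T^2 values into one probability-like monitoring index
    # (heuristic likelihoods, in the spirit of Bayesian-inference fusion)
    posteriors, likelihoods = [], []
    for idx, pca in models:
        t = pca.transform(x[idx].reshape(1, -1)).ravel()
        t2 = np.sum(t ** 2 / pca.explained_variance_)   # block T^2
        limit = chi2.ppf(alpha, df=pca.n_components_)    # block control limit
        p_fault = np.exp(-limit / max(t2, 1e-12))        # assumed P(x | fault)
        p_normal = np.exp(-t2 / limit)                   # assumed P(x | normal)
        post = (p_fault * (1 - alpha)
                / (p_fault * (1 - alpha) + p_normal * alpha))
        posteriors.append(post)
        likelihoods.append(p_fault)
    w = np.array(likelihoods)
    w /= w.sum()
    return float(np.dot(w, posteriors))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 10))                 # normal operating data
    blocks = block_variables(X_train, n_blocks=3)
    models = fit_block_pca(X_train, blocks)
    x_new = rng.normal(size=10)
    x_new[2] += 6.0                                      # simulated sensor fault
    print("fused fault index:", bayesian_fused_statistic(x_new, models))

In this sketch a fused index close to 1 flags a fault; in practice the divergence measure, block partitioning, and fusion rule would follow the paper's formulation rather than these placeholder choices.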