Subsumption Resolution: An Efficient and Effective Technique for Semi-Naive Bayesian Learning

    Article details
    • Presentation date: 1392/07/24 (Solar Hijri)
    • Publication date on TPBin: 1392/07/24 (Solar Hijri)
    Semi-naive Bayesian techniques seek to improve the accuracy of naive Bayes (NB) by relaxing the attribute independence assumption. We present a new type of semi-naive Bayesian operation, subsumption resolution (SR), which efficiently identifies occurrences of the specialization-generalization relationship and eliminates generalizations at classification time. We extend SR to near-subsumption resolution (NSR), which deletes near-generalizations in addition to generalizations. We develop two versions of SR: one that performs SR during training, called eager SR (ESR), and another that performs SR during testing, called lazy SR (LSR). We investigate the effect of ESR, LSR, NSR and conventional attribute elimination, backwards sequential elimination (BSE), on NB and on averaged one-dependence estimators (AODE), a powerful alternative to NB. BSE imposes very high training-time overheads on NB and AODE, accompanied by varying decreases in classification-time overheads. ESR, LSR and NSR impose high training-time and test-time overheads on NB. However, LSR imposes no extra training-time overhead and only modest test-time overheads on AODE, while ESR and NSR impose modest training-time and test-time overheads on AODE. Our extensive experimental comparison on sixty UCI data sets shows that applying BSE, LSR or NSR to NB significantly improves both zero-one loss and RMSE, while applying BSE, ESR or NSR to AODE significantly improves zero-one loss and RMSE, and applying LSR to AODE significantly improves zero-one loss. The Friedman and Nemenyi tests show that AODE with ESR or NSR has a significant zero-one loss and RMSE advantage over logistic regression, and a zero-one loss advantage over Weka's LibSVM implementation with a grid parameter search on categorical data. AODE with LSR has a zero-one loss advantage over logistic regression and comparable zero-one loss to LibSVM. Finally, we examine the circumstances under which the elimination of near-generalizations proves beneficial.
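    The abstract gives no pseudocode, so the following is a minimal Python sketch of how lazy subsumption resolution might be layered onto naive Bayes, assuming categorical attributes and the subsumption test count(x_i, x_j) = count(x_i) with a minimum-support guard. The class name NaiveBayesSR, the parameter min_count, and the add-one smoothing are illustrative choices, not details taken from the paper.

    import math
    from collections import defaultdict

    class NaiveBayesSR:
        """Naive Bayes with a sketch of lazy subsumption resolution (LSR)."""

        def __init__(self, min_count=30):
            # Require count(x_i) >= min_count before trusting a subsumption,
            # to avoid eliminating values on spurious co-occurrence evidence.
            self.min_count = min_count
            self.n = 0
            self.class_counts = defaultdict(int)   # count(y)
            self.value_counts = defaultdict(int)   # count(x_i)
            self.pair_counts = defaultdict(int)    # count(x_i, x_j)
            self.cond_counts = defaultdict(int)    # count(x_i, y)

        def fit(self, X, y):
            # Values are keyed as (attribute_index, value) pairs so equal
            # values in different attributes stay distinct.
            for xs, c in zip(X, y):
                self.n += 1
                self.class_counts[c] += 1
                vals = list(enumerate(xs))
                for iv in vals:
                    self.value_counts[iv] += 1
                    self.cond_counts[(iv, c)] += 1
                    for jv in vals:
                        if iv != jv:
                            self.pair_counts[(iv, jv)] += 1

        def _generalizations(self, vals):
            # x_j generalizes x_i if every training example containing x_i
            # also contains x_j: count(x_i, x_j) == count(x_i) >= min_count.
            drop = set()
            for iv in vals:
                if iv in drop:
                    continue  # a dropped value cannot eliminate others
                for jv in vals:
                    if iv == jv or jv in drop:
                        continue
                    ci = self.value_counts[iv]
                    if ci >= self.min_count and self.pair_counts[(iv, jv)] == ci:
                        drop.add(jv)  # delete the generalization x_j
            return drop

        def predict(self, xs):
            vals = list(enumerate(xs))
            kept = [iv for iv in vals if iv not in self._generalizations(vals)]
            best, best_score = None, float('-inf')
            for c, cc in self.class_counts.items():
                score = math.log(cc / self.n)
                for iv in kept:
                    # Simplified add-one smoothing, for illustration only.
                    score += math.log((self.cond_counts[(iv, c)] + 1) / (cc + 2))
                if score > best_score:
                    best_score, best = score, c
            return best

    Dropping only the generalization x_j leaves P(y | x) unchanged whenever the subsumption holds exactly, which is why SR can improve accuracy without discarding evidence. A natural relaxation in the spirit of NSR would replace the equality test with count(x_i, x_j) >= (1 - epsilon) * count(x_i) so that near-generalizations are deleted as well, though the paper's exact criterion may differ.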
