    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/146306


Title: Feature Selection Using Factor-Forest and K-Means
Authors: Chen, Shao-Yan (陳劭晏)
Contributors: Elizabeth Chou (周珮婷)
Yu-Wei Chang (張育瑋)
Keywords: Feature selection
Dimension reduction
Clustering
Factor-Forest-K-Means
    Date: 2023
    Issue Date: 2023-08-02 13:04:02 (UTC+8)
Abstract: Feature selection is a critical step in data analysis, used to identify important variables in large and complex datasets. Many recent studies have demonstrated that the K-Means algorithm can be used not only for feature selection but also to find subsets of variables that improve the performance of machine learning models. In addition, Goretzko & Bühner (2020) proposed Factor Forest, a method for determining the appropriate number of latent factors in data. In this research, we introduce a new feature selection method, Factor-Forest-K-Means (FFKM), which uses Factor Forest as an index and K-Means clustering to screen variables. FFKM reduces dimensionality by approximately 90% while preserving the predictive accuracy of the original model. The method is easy to implement and, in our experiments, outperforms the other index methods and models tested, achieving the best accuracy retention, reduction percentage, and accuracy retention per variable on the feature subsets it selects. Our results show that FFKM is a promising dimension reduction method that can substantially reduce dimensionality while enhancing the performance of machine learning models.
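    The abstract describes the FFKM pipeline only at a high level. Below is a minimal Python sketch of the idea, under stated assumptions: the thesis estimates the number of latent factors k with Factor Forest (an XGBoost-based model; Goretzko & Bühner, 2020), but since that trained model is not reproduced here, the sketch substitutes the Kaiser criterion (Kaiser, 1960) as a stand-in. The dataset, the random-forest classifier, and the closest-to-centroid selection rule are illustrative assumptions, not the author's exact procedure.

    # Sketch of the FFKM idea: estimate k, cluster the VARIABLES with
    # K-Means, and keep one representative variable per cluster.
    # The Kaiser criterion below is a stand-in for Factor Forest, and the
    # selection rule is an assumption, not the thesis's actual procedure.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)   # illustrative dataset
    Xs = StandardScaler().fit_transform(X)

    # Stand-in for Factor Forest: count eigenvalues of the correlation
    # matrix greater than 1 (Kaiser criterion).
    eigvals = np.linalg.eigvalsh(np.corrcoef(Xs, rowvar=False))
    k = int((eigvals > 1.0).sum())

    # Cluster the columns (variables), one cluster per latent factor.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Xs.T)

    # Keep the variable closest to each cluster centroid.
    selected = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(Xs.T[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])

    clf = RandomForestClassifier(random_state=0)
    acc_full = cross_val_score(clf, Xs, y, cv=5).mean()
    acc_sel = cross_val_score(clf, Xs[:, selected], y, cv=5).mean()

    print(f"variables kept: {len(selected)}/{Xs.shape[1]}")
    print(f"accuracy retention: {acc_sel / acc_full:.3f}")
    print(f"reduction percentage: {1 - len(selected) / Xs.shape[1]:.1%}")
    print(f"retention per variable: {acc_sel / acc_full / len(selected):.4f}")

    The three quantities printed at the end correspond to the evaluation metrics named in the abstract: accuracy retention (accuracy on the selected subset divided by accuracy on all variables), reduction percentage (the share of variables removed), and accuracy retention per variable.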
    Reference: Braeken, J., & Van Assen, M. A. (2017). An empirical kaiser criterion. Psychological methods, 22(3), 450.
    Breiman, L. (2001). Random forests. Machine learning, 45, 5–32.
    Chen, T., & Guestrin, C. (2016). Xgboost: A scalable tree boosting system. In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining (pp. 785–794).
    Gini, C. (1921). Measurement of inequality of incomes. The economic journal, 31(121),124–125.
    Goretzko, D., & Bühner, M. (2020). One model to rule them all? using machine learning algorithms to determine the number of factors in exploratory factor analysis. Psychological Methods, 25(6), 776.
    Goretzko, D., & Bühner, M. (2022). Factor retention using machine learning with ordinal data. Applied Psychological Measurement, 46(5), 406–421.
    Hartigan, J. (1975). Clustering algorithms. Wiley. Retrieved from https://books.google.com.tw/books?id=cDnvAAAAMAAJ
    Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30, 179–185.
    Kaiser, H. F. (1960). The application of electronic computers to factor analysis. Educational and psychological measurement, 20(1), 141–151.
    Khaleel, S. (2011). Feature selection using k-means clustering for data mining.
    Liaw, A., & Wiener, M. (2002). Classification and regression by randomforest. R News, 2(3), 18-22. Retrieved from https://CRAN.R-project.org/doc/Rnews/
Lloyd, S. (1957/1982). Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2), 129–137. (Originally a 1957 Bell Telephone Laboratories paper.)
MacQueen, J. (1967). Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability (pp. 281–297).
Parida, K., Mandal, S., Das, S., & Tripathy, A. (2011). Feature extraction using k-means clustering: An approach implementation.
Rousseeuw, P. J. (1987). Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20, 53–65.
Ruscio, J., & Roche, B. (2012). Determining the number of factors to retain in an exploratory factor analysis using comparison data of known factorial structure. Psychological Assessment, 24(2), 282.
Tang, X., Dong, M., Bi, S., Pei, M., Cao, D., Xie, C., & Chi, S. (2017). Feature selection algorithm based on k-means clustering. In 2017 IEEE 7th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER) (pp. 1522–1527).
Thomas, J., Coors, S., & Bischl, B. (2018). Automatic gradient boosting. arXiv preprint arXiv:1807.03873.
Thorndike, R. (1953). Who belongs in the family? Psychometrika, 18(4), 267–276.
Description: Master's thesis
National Chengchi University
Department of Statistics
110354012
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0110354012
    Data Type: thesis
Appears in Collections: [Department of Statistics] Theses

    Files in This Item:

File: 401201.pdf  Size: 450 KB  Format: Adobe PDF


All items in the NCCU Institutional Repository are protected by copyright, with all rights reserved.



Copyright Announcement
1. The digital content of this website is part of the National Chengchi University Institutional Repository. It provides free access for academic research and public education on a non-commercial basis. Please use the content in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.
2. Every effort has been made in building this website to avoid infringing on the rights of copyright owners. If you believe that any material on the website infringes copyright, please contact our staff (nccur@nccu.edu.tw); the work will be removed from the repository immediately and your claim investigated.