Representation learning
Representation learning (also called feature learning) refers to a family of techniques commonly used in machine learning to transform the input into a simpler, more useful form, while still preserving the important information the input carries.
In real-world applications, a dataset can easily contain several hundred predictor variables (so the input's dimensionality is in the hundreds), a situation that is common when machine learning is applied to images and video[1][2]. This can cause problems, for example making the amount of computation so large that the program takes a very long time to finish[3]. The designer of a machine learning program may therefore look for ways to reduce the number of input variables (lowering the input's dimensionality) so that the learning program runs more easily. A common approach is to first run the data through principal component analysis or cluster analysis, and only then feed it into the machine learning program. But doing so necessarily simplifies the input and causes some information loss, which is why "how representation learning should be done" is a topic of considerable interest in machine learning[4][5][6].
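As a concrete illustration of the first approach mentioned above, the following is a minimal sketch of principal component analysis via the singular value decomposition, using NumPy on made-up data (the dataset and all variable names here are invented for this example, not taken from any source cited in the article):

```python
import numpy as np

# Hypothetical toy data: 100 samples with 5 correlated variables
# that secretly depend on only 2 underlying directions.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 2))            # true 2-D structure
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.01 * rng.normal(size=(100, 5))

# PCA by SVD: centre the data, then project onto the top-k
# right singular vectors (the principal components).
k = 2
X_centred = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centred, full_matrices=False)
X_reduced = X_centred @ Vt[:k].T              # shape (100, 2)

print(X.shape, "->", X_reduced.shape)         # (100, 5) -> (100, 2)
```

The reduced array can then be passed to a learning program in place of the original input; because the top components capture most of the variance, little information is lost in this toy case.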
The process of "doing representation learning to reduce the data's dimensionality" can be carried out with any learning paradigm or mathematical model. For example, representation learning with a supervised artificial neural network can be set up by taking all the x variables in the dataset as the network's inputs, using some number of neurons as its outputs (with clearly fewer output neurons than there are x variables), and training the network to produce the correct outputs[7][8].
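The supervised setup just described can be sketched as a small NumPy network with a narrow hidden layer, whose activations serve as the learned low-dimensional representation. Everything below (the toy task, the architecture sizes, the learning rate) is an assumption made for illustration, not a setup prescribed by the cited papers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy task: 200 samples, 10 input x variables, and a
# binary label that depends on only 2 underlying directions.
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 10))
y = (latent[:, 0] + latent[:, 1] > 0).astype(float).reshape(-1, 1)

# A hidden layer with far fewer units than there are input variables;
# its activations compress the input into a 2-D representation.
n_in, n_hidden, n_out = 10, 2, 1
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):
    H = np.tanh(X @ W1)          # hidden representation, shape (200, 2)
    P = sigmoid(H @ W2)          # predicted probability of label 1
    # Gradients of the average cross-entropy loss, by backpropagation.
    dZ2 = (P - y) / len(X)
    dW2 = H.T @ dZ2
    dH = dZ2 @ W2.T
    dZ1 = dH * (1 - H ** 2)
    dW1 = X.T @ dZ1
    W1 -= lr * dW1
    W2 -= lr * dW2

representation = np.tanh(X @ W1)   # 10-D inputs compressed to 2-D
accuracy = float(((P > 0.5) == y).mean())
print(representation.shape)        # (200, 2)
```

After training, `representation` is a 2-dimensional encoding of the 10-variable input that retains the information needed to predict the label, which is the sense in which the network has "learned a representation".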
References
- ↑ Csurka, Gabriella; Dance, Christopher C.; Fan, Lixin; Willamowski, Jutta; Bray, Cédric (2004). Visual categorization with bags of keypoints (PDF). ECCV Workshop on Statistical Learning in Computer Vision.
- ↑ Daniel Jurafsky; James H. Martin (2009). Speech and Language Processing. Pearson Education International. pp. 145–146.
- ↑ A beginner’s guide to dimensionality reduction in Machine Learning. Towards Data Science.
- ↑ Y. Bengio; A. Courville; P. Vincent (2013). "Representation Learning: A Review and New Perspectives". IEEE Trans. PAMI, Special Issue Learning Deep Architectures. 35 (8): 1798–1828.
- ↑ Coates, Adam; Ng, Andrew Y. (2012). "Learning feature representations with k-means". In G. Montavon, G. B. Orr and K.-R. Müller (ed.). Neural Networks: Tricks of the Trade. Springer.
- ↑ Tillmann, A. M. (2015). "On the Computational Intractability of Exact and Approximate Dictionary Learning". IEEE Signal Processing Letters. 22 (1): 45–49.
- ↑ Nathan Srebro; Jason D. M. Rennie; Tommi S. Jaakkola (2004). Maximum-Margin Matrix Factorization. NIPS.
- ↑ Coates, Adam; Lee, Honglak; Ng, Andrew Y. (2011). An analysis of single-layer networks in unsupervised feature learning. Int'l Conf. on AI and Statistics (AISTATS).