    Please use this identifier to cite or link to this item: http://nccur.lib.nccu.edu.tw/handle/140.119/111787

    Title: 基於深度學習之低解析度文字辨識
    Recognition of low resolution text using deep learning approach
    Authors: 黃依凡
    Contributors: 廖文宏
    Liao, Wen-Hung
    Keywords: Text recognition
    Convolutional neural networks
    Low resolution
    Date: 2017
    Issue Date: 2017-08-10 09:59:08 (UTC+8)
    Abstract: This thesis addresses a well-studied problem in computer vision, namely optical character recognition. Our main focus, however, is a very particular class of images: printed Chinese characters with very low resolution and a large amount of distortion and interference. While convolutional neural networks can already recognize high-resolution printed or handwritten text reliably, very low-quality printed Chinese text still poses several challenges that call for further analysis. Specifically, our dataset consists of 31,570 character images produced by dot-matrix printers, including blurred characters, characters with missing strokes, and characters overlapping other text or graphics. To address these difficulties effectively, we experimented with different deep neural network architectures and hyperparameters to arrive at the best-performing configuration. Over 1,530 classes of images with an average resolution of 16x18 pixels, the top-1 and top-5 accuracies are 71% and 87%, respectively.
    Recent advances in deep neural networks have changed the landscape of computer vision and pattern recognition research significantly. Convolutional neural networks (CNN), for example, have demonstrated outstanding capabilities in image classification, in many cases exceeding human performance. Many tasks that did not get satisfactory results using conventional machine learning approaches are now being actively re-examined using deep learning techniques.

    This thesis is concerned with a well-investigated topic in computer vision, namely, optical character recognition (OCR). Our main focus, however, is a very specific class of input: printed Chinese texts with very low resolution and a significant amount of distortion/interference. Whereas the recognition of high-resolution texts, either printed or handwritten, has been successfully tackled using convolutional neural networks, the analysis of very low-quality printed Chinese texts poses several challenges that require further study. Specifically, our dataset consists of 31,570 text images generated with dot-matrix printers, including blurred texts, texts with missing strokes, and texts overlapping with other texts or graphs. To effectively address these difficulties, we have experimented with different deep neural networks with various combinations of network architectures and hyperparameters. The results are reported and discussed in order to obtain an optimal setting for the recognition task. The top-1 and top-5 accuracies are 71% and 87%, respectively, for input images with an average resolution of 16x18 pixels belonging to 1,530 classes.
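    The reported top-1 and top-5 figures are standard top-k accuracy: a prediction counts as correct if the true class appears among the k highest-scoring classes. A minimal NumPy sketch of that metric (the function name and the toy scores below are illustrative, not from the thesis):

    ```python
    import numpy as np

    def top_k_accuracy(scores, labels, k):
        """Fraction of samples whose true label is among the k highest-scoring classes.

        scores: (n_samples, n_classes) array of class scores.
        labels: (n_samples,) array of true class indices.
        """
        # Indices of the k largest scores per row; order within the k is irrelevant.
        topk = np.argpartition(scores, -k, axis=1)[:, -k:]
        hits = (topk == labels[:, None]).any(axis=1)
        return hits.mean()

    # Toy example: 4 samples, 5 classes.
    scores = np.array([
        [0.10, 0.70, 0.10, 0.05, 0.05],  # highest score: class 1
        [0.30, 0.20, 0.25, 0.15, 0.10],  # highest score: class 0
        [0.05, 0.10, 0.20, 0.30, 0.35],  # highest score: class 4
        [0.40, 0.25, 0.20, 0.10, 0.05],  # highest score: class 0
    ])
    labels = np.array([1, 2, 4, 3])

    print(top_k_accuracy(scores, labels, 1))  # 0.5  (samples 0 and 2 are hits)
    print(top_k_accuracy(scores, labels, 2))  # 0.75 (sample 1's class 2 is 2nd-highest)
    ```

    In the thesis setting, `scores` would be the network's 1,530-way output over the test set, with k = 1 and k = 5.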
    Description: Master's thesis
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0104753010
    Data Type: thesis
    Appears in Collections: [Department of Computer Science] Theses

    Files in This Item:

    301001.pdf (10,381 KB, Adobe PDF)

    All items in 政大典藏 are protected by copyright, with all rights reserved.
