    Please use this identifier to cite or link to this item: http://nccur.lib.nccu.edu.tw/handle/140.119/129119


    Title: Learning English–Chinese bilingual word representations from sentence-aligned parallel corpus
    Authors: Huang, Hen-Hsen*
    Yen, An-Zi
    Chen, Hsin-Hsi
    Contributors: Department of Computer Science (資科系)
    Keywords: Cross-lingual applications;Distributed word representation;Word alignment
    Date: 2019-07
    Issue Date: 2020-03-05 14:40:57 (UTC+8)
    Abstract: Representation of words in different languages is fundamental for various cross-lingual applications. In past research, there has been debate over whether word alignment should be used when learning bilingual word representations. This paper presents a comprehensive empirical study on the use of a parallel corpus to learn word representations in the embedding space. Various non-alignment and alignment approaches are explored to formulate the contexts for Skip-gram modeling. In the approaches without word alignment, concatenating A and B, concatenating B and A, interleaving A with B, shuffling A and B, and using A and B separately are considered, where A and B denote parallel sentences in two languages. In the approaches with word alignment, three word alignment tools, GIZA++, TsinghuaAligner, and fast_align, are employed to align words in sentences A and B. The effects of the alignment direction, from A to B or from B to A, are also discussed. To deal with unaligned words in the word alignment approaches, two alternatives are explored: using the words aligned with their immediate neighbors, and using the words as in the interleaving approach. We evaluate the performance of the adopted approaches on four tasks: bilingual dictionary induction, cross-lingual information retrieval, cross-lingual analogy reasoning, and cross-lingual word semantic relatedness. These tasks cover the issues of translation, reasoning, and information access. Experimental results show that the word alignment approach with conditional interleaving achieves the best performance in most of the tasks. © 2019 Elsevier Ltd. All rights reserved.
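    The non-alignment context schemes named in the abstract (concatenation, interleaving, shuffling of a parallel sentence pair) can be sketched as follows. This is a minimal illustration of the context-formulation idea, not the authors' implementation; function names and the toy sentence pair are invented for the example.

    ```python
    import random

    def concat(a, b):
        """Concatenating A and B: one mixed sentence, A's tokens first."""
        return a + b

    def interleave(a, b):
        """Interleaving A with B: alternate tokens from the two sentences."""
        out = []
        for i in range(max(len(a), len(b))):
            if i < len(a):
                out.append(a[i])
            if i < len(b):
                out.append(b[i])
        return out

    def shuffle_pair(a, b, seed=0):
        """Shuffling A and B: random token order over the merged pair."""
        merged = a + b
        random.Random(seed).shuffle(merged)
        return merged

    # Toy parallel sentence pair (hypothetical example data)
    en = ["the", "cat", "sleeps"]
    zh = ["貓", "睡覺"]
    print(interleave(en, zh))  # → ['the', '貓', 'cat', '睡覺', 'sleeps']
    ```

    Each mixed sentence would then be fed to a standard Skip-gram trainer, so that a context window around an English token can contain Chinese tokens and vice versa, which is what places the two vocabularies in a shared embedding space.
    
    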
    Relation: Computer Speech & Language, Vol.56, pp.52-72
    Data Type: article
    DOI Link: https://doi.org/10.1016/j.csl.2019.01.002
    DOI: 10.1016/j.csl.2019.01.002
    Appears in Collections: [Department of Computer Science] Journal Articles

    Files in This Item:

    File      Size     Format
    182.pdf   2840 KB  Adobe PDF


    All items in 政大典藏 (the NCCU Institutional Repository) are protected by copyright, with all rights reserved.

