    Please use this identifier to cite or link to this item: http://nccur.lib.nccu.edu.tw/handle/140.119/3859

    Title: 兩階段雙臂吃角子老虎問題 (Two-Stage Two-Armed Bandit Problems)
    Other Titles: Two-Stage Two-Armed Bandit Problems
    Authors: 余清祥
    Keywords: 最佳策略 (Optimal strategy); 兩階段決策 (Two-stage decision); 吃角子老虎 (Bandit)
    Date: 2000
    Issue Date: 2007-04-18 16:36:51 (UTC+8)
    Publisher: Taipei: Department of Statistics, National Chengchi University (國立政治大學統計學系)
    Abstract: Identifying the more effective of two (or more) treatments for a disease is a common problem in clinical trials. The bandit problem differs from a direct test of the two treatments' efficacy: its objective is to maximize the total number of patients cured. Fully sequential methods are difficult to solve and impractical for clinical trials, so in practice patients are grouped into stages according to their order of arrival, removing the need to wait for one patient's outcome before treating the next. Earlier work (e.g., Witmer, 1986, and Witmer and Clayton, 1988) showed that under a two-stage design in which the efficacy of one of the two treatments is known (an assumption that is unrealistic in practice), that treatment need not be used in the first stage. Yue (1999) relaxed the known-efficacy assumption, assuming only that one treatment is better understood, i.e., that the variance of its cure rate is smaller. Under this assumption, Yue showed that most results for the one-armed bandit problem still hold, but the structure of the optimal strategy remained unknown. Continuing Yue's work, this study proves conjectures about the optimal strategy made by Berry (1972) and Pearson (1980), for example that a first-stage allocation using both treatments an equal number of times cannot be optimal.
    Bandit problems are generally difficult to solve, even with Bernoulli responses. Rather than selecting sequentially after every observation, separating the medical trial into several stages is a more realistic way of approaching the two-armed bandit problem. Since the data can be collected at intervals throughout the trial, there is no need to know the results of previous patients before treating the next patient, and the calculations are thus simplified. Past work, such as Witmer (1986) and Clayton and Witmer (1988), showed that the well-known treatment can be skipped in the first stage, which simplifies the selection process. However, the assumption that one of the treatments is fully known is unrealistic in practice. Yue (1999) extended the two-stage decision framework by assuming only that one of the treatments is better known, i.e., has a smaller prior variance. He showed that the one-armed bandit is a special case of his setting and that most results for the one-armed bandit problem remain valid. However, the structure of the optimal strategy in his setting remains unknown. In this study, we continue the search for the optimal strategy and examine Berry's (1972) conjectures on inadmissible decisions.
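    The two-stage structure described above can be illustrated with a small numerical sketch. The code below is not the authors' method; it is a minimal illustration under assumed Beta priors (Beta(1,1) for the less-known arm, Beta(2,2) for the better-known arm, so both have prior mean 0.5 but the second has smaller prior variance). For each possible first-stage split it computes, by exact enumeration over Beta-binomial prior-predictive outcomes, the expected total number of successes when all second-stage patients are given to the arm with the higher posterior mean (which is optimal for a final stage, since expected successes are linear in the allocation). All function names and parameter values are hypothetical choices for the illustration.

    ```python
    from math import comb, exp, lgamma

    def beta_binom_pmf(k, n, a, b):
        """Prior-predictive P(k successes in n trials) under a Beta(a, b)
        prior on the success probability (Beta-binomial pmf)."""
        return comb(n, k) * exp(
            lgamma(a + k) + lgamma(b + n - k) + lgamma(a + b)
            - lgamma(a) - lgamma(b) - lgamma(a + b + n)
        )

    def expected_successes(r, n1, n2, prior1, prior2):
        """Expected total successes of a two-stage design that gives r of
        the n1 first-stage patients to arm 1 and n1 - r to arm 2, then all
        n2 second-stage patients to the arm with the higher posterior mean."""
        a1, b1 = prior1
        a2, b2 = prior2
        m, k = r, n1 - r  # first-stage pulls of arm 1 and arm 2
        total = 0.0
        for s1 in range(m + 1):            # successes on arm 1
            p1 = beta_binom_pmf(s1, m, a1, b1)
            for s2 in range(k + 1):        # successes on arm 2
                p2 = beta_binom_pmf(s2, k, a2, b2)
                post1 = (a1 + s1) / (a1 + b1 + m)  # posterior mean, arm 1
                post2 = (a2 + s2) / (a2 + b2 + k)  # posterior mean, arm 2
                stage2 = n2 * max(post1, post2)    # give stage 2 to the leader
                total += p1 * p2 * (s1 + s2 + stage2)
        return total

    # Compare every first-stage split for a small hypothetical trial.
    n1, n2 = 4, 10
    vals = {r: expected_successes(r, n1, n2, (1, 1), (2, 2))
            for r in range(n1 + 1)}
    best = max(vals, key=vals.get)
    ```

    Tabulating `vals` over the splits r = 0, ..., n1 makes it possible to check, for a given prior pair, whether the equal split r = n1/2 is beaten by an unbalanced one, in the spirit of the Berry (1972) and Pearson (1980) conjectures discussed in the abstract.
    
    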
    Description: Approved grant amount: NT$332,800
    Data Type: report
    Appears in Collections: [Department of Statistics (統計學系)] National Science Council (國科會) Research Projects

    Files in This Item:

    File: 892118M004010.pdf | Size: 227Kb | Format: Adobe PDF

    All items in the NCCU Institutional Repository (政大典藏) are protected by copyright, with all rights reserved.

