Please use this persistent URL to cite or link to this item:
https://nccur.lib.nccu.edu.tw/handle/140.119/66906
Title: | Benchmarking intelligent information integration – A generic construct-based model |
Authors: | 諶家蘭 Seng, Jia-Lang; Lin, M.I. |
Contributors: | Department of Accounting |
Keywords: | XML; Ontology; Intelligent information integration; Generic construct; Benchmark; Workload model; Performance measurement and evaluation |
Date: | 2010.06 |
Upload date: | 2014-06-25 10:23:30 (UTC+8) |
Abstract: | Benchmarks are vital tools in the performance measurement and evaluation of computer hardware and software systems. Standard benchmarks such as TREC, TPC, SPEC, SAP, Oracle, Microsoft, IBM, Wisconsin, AS3AP, OO1, OO7, and XOO7 have been used to assess system performance. These benchmarks are domain-specific: they model typical applications and are tied to a particular problem domain. Test results from them are estimates of possible system performance for certain pre-determined problem types. When the user domain differs from the standard problem domain, or when the application workload diverges from the standard workload, they do not provide an accurate way to measure the system performance of the user's problem domain. System performance on the actual problem domain, in terms of data and transactions, may vary significantly from the standard benchmarks. In this research, we address the issues of domain boundness and workload boundness, which result in unrepresentative and irreproducible performance readings. We tackle these issues by proposing a domain-independent and workload-independent benchmark method developed from the perspective of user requirements. We present a user-driven workload model to develop a benchmark through a process of workload requirements representation, transformation, and generation. We aim to create a more generalized and precise evaluation method that derives test suites from the actual user domain and application. The benchmark method comprises three main components: a high-level workload specification scheme, a translator of the scheme, and a set of generators. The specification scheme formalizes the workload requirements, the translator transforms the specification, and the generators produce the test database and the test workload.
In web search, generic constructs are the main common carriers we adopt to capture and compose the workload requirements. We determine the requirements through an analysis of the literature. In this study, we conducted ten baseline experiments to validate the feasibility and validity of the benchmark method. An experimental prototype was built to execute these experiments. Experimental results demonstrate that the method is capable of modeling the standard benchmarks as well as more general benchmark requirements. |
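The three-component pipeline described in the abstract (specification scheme → translator → generators) can be sketched as follows. This is a minimal illustrative sketch only: all class, field, and function names here are hypothetical stand-ins, not the authors' actual specification scheme or generators.

```python
# Hypothetical sketch of a spec -> translator -> generators benchmark pipeline.
# Names and structures are illustrative assumptions, not the paper's design.
from dataclasses import dataclass
import random

@dataclass
class WorkloadSpec:
    """High-level workload specification: user-stated requirements."""
    table_rows: int   # desired size of the test database
    query_mix: dict   # operation name -> relative frequency

def translate(spec: WorkloadSpec) -> dict:
    """Translator: normalize the specification into an executable plan."""
    total = sum(spec.query_mix.values())
    return {
        "rows": spec.table_rows,
        "mix": {op: freq / total for op, freq in spec.query_mix.items()},
    }

def generate_database(plan: dict) -> list:
    """Generator 1: produce a synthetic test database of the requested size."""
    return [{"id": i, "value": random.random()} for i in range(plan["rows"])]

def generate_test_suite(plan: dict, n_ops: int) -> list:
    """Generator 2: draw a test workload matching the normalized query mix."""
    ops, weights = zip(*plan["mix"].items())
    return random.choices(ops, weights=weights, k=n_ops)

plan = translate(WorkloadSpec(table_rows=100, query_mix={"lookup": 3, "scan": 1}))
db = generate_database(plan)
suite = generate_test_suite(plan, n_ops=20)
print(len(db), len(suite))  # 100 20
```

Because both generators read only the translated plan, the same pipeline can reproduce a standard benchmark's workload or a user-specific one by swapping the specification, which mirrors the domain-independence the abstract claims.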
Relation: | Expert Systems with Applications, 37(6), 4242-4255 |
Type: | article |
DOI link: | http://dx.doi.org/10.1016/j.eswa.2009.11.078 |
DOI: | 10.1016/j.eswa.2009.11.078 |
Appears in Categories: | [Department of Accounting] Journal Articles
Files in This Item:
File | Description | Size | Format | Views |
4242-4255.pdf | | 748 KB | Adobe PDF | 1145 | View/Open |
All items in the NCCU Repository are protected by original copyright.