政大機構典藏 - National Chengchi University Institutional Repository (NCCUR): Item 140.119/159056
    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/159056


    Title: 人機協作下的交通執法:台北市科技執法成效與自動化決策風險探討
    Human-Machine Collaboration in Traffic Law Enforcement: Evaluating the Effectiveness of Technological Enforcement and the Risks of Automated Decision-Making in Taipei
    Authors: 李奕萱
    Lee, Yi-Hsuan
    Contributors: 朱斌妤
    Chu, Pin-Yu
    李奕萱
    Lee, Yi-Hsuan
    Keywords: Technological Enforcement
    Policy Effectiveness
    Policy Compliance
    Principal-Agent Theory
    Automated Decision-Making
    Date: 2025
    Issue Date: 2025-09-01 14:53:57 (UTC+8)
    Abstract:
    This study examines the effectiveness of technological law enforcement and the risks of automated decision-making in Taipei City. A mixed-methods approach was adopted, combining objective traffic data analysis with the subjective perspectives of key policy stakeholders, including the public and frontline police officers. The research is structured around four dimensions: (1) descriptive statistics and the Wilcoxon signed-rank test were used to analyze changes in traffic accidents before and after the implementation of automated enforcement; (2) a survey was conducted to examine drivers' attitudes toward policy compliance and behavioral changes; (3) interviews were conducted with police officers to understand the impact of automated enforcement on their daily duties; and (4) principal-agent theory was applied to analyze officers' views on AI-driven governance and its potential risks. The study focused on the Daan and Wenshan Districts, targeting the most frequently enforced violations: running red lights, failing to yield to pedestrians, making improper turns, and disobeying signs, markings, or signals.
    Findings indicate that while certain intersections showed a short-term decline in accidents after implementation, the overall statistical results—whether compared over three months or one year—did not reach a significant level. Accident trends may be influenced by multiple factors such as intersection design and driver behavior, suggesting that the short-term impact of technological enforcement on accident reduction is limited. Additional traffic governance strategies are necessary to achieve comprehensive effectiveness.
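The before-and-after comparison rests on a paired nonparametric test. Below is a minimal pure-Python sketch of the Wilcoxon signed-rank test (normal approximation, with Wilcoxon's zero-handling); the accident counts and the number of intersections are illustrative assumptions, not the study's data:

```python
import math

def wilcoxon_signed_rank(before, after):
    """Paired Wilcoxon signed-rank test, normal approximation.

    Zero differences are dropped (Wilcoxon's method); ties in the absolute
    differences receive averaged ranks. Returns (W, two-sided p-value),
    where W is the smaller of the positive and negative rank sums."""
    diffs = [b - a for b, a in zip(before, after) if b != a]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                      # assign average ranks to tied groups
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1    # 1-based average of positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_pos = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_neg = sum(r for d, r in zip(diffs, ranks) if d < 0)
    w = min(w_pos, w_neg)
    mu = n * (n + 1) / 4              # null mean of the rank sum
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mu) / sigma              # z <= 0 because w is the smaller sum
    p = 2 * 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return w, min(p, 1.0)

# Hypothetical monthly accident counts at eight intersections, before vs.
# after camera activation (illustrative numbers, not the study's data).
before = [5, 3, 8, 6, 4, 7, 2, 6]
after = [4, 3, 6, 5, 5, 6, 2, 4]
w, p = wilcoxon_signed_rank(before, after)
print(f"W = {w}, two-sided p = {p:.3f}")  # here p > 0.05: no significant change
```

As in the study's results, a visible drop in counts at several sites can still fail to reach significance once the paired test accounts for sample size and variability.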
    In the driver compliance survey, 117 valid responses were collected and used to analyze the relationships among policy familiarity, personal values, perceived personal benefits, and policy compliance. Results showed that personal values and perceived benefits had significant positive effects on compliance: when the public agrees with the policy rationale and perceives tangible benefits, they are more willing to comply. In contrast, familiarity with policy content was not significantly related to compliance, highlighting the importance of communicating policy values and benefits rather than merely conveying informational content.
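The reported pattern (values and perceived benefit predicting compliance, familiarity not) can be illustrated with a small ordinary-least-squares fit on simulated Likert-style responses. The data-generating coefficients, item names, and noise level below are assumptions for illustration, not the survey's actual estimates:

```python
import random

def ols(X, y):
    """Ordinary least squares via the normal equations X'X b = X'y,
    solved with Gauss-Jordan elimination (fine for a few predictors)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)]
         + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(k):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][k] / A[i][i] for i in range(k)]

# Simulate 117 Likert-style responses in which values and perceived benefit
# drive compliance while familiarity does not (assumed coefficients).
random.seed(42)
rows, y = [], []
for _ in range(117):
    fam = random.uniform(1, 5)   # familiarity with policy content
    val = random.uniform(1, 5)   # agreement with the policy's values
    ben = random.uniform(1, 5)   # perceived personal benefit
    compliance = 1.0 + 0.0 * fam + 0.6 * val + 0.4 * ben + random.gauss(0, 0.5)
    rows.append([1.0, fam, val, ben])
    y.append(compliance)

b0, b_fam, b_val, b_ben = ols(rows, y)
print(f"familiarity {b_fam:+.2f}, values {b_val:+.2f}, benefit {b_ben:+.2f}")
```

With this setup the fitted coefficient on familiarity stays near zero while values and benefit recover their assumed positive effects, mirroring the survey's qualitative conclusion.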
    From the in-depth interviews with police officers, the study found that although AI systems automatically detect violations, final citations still require officer verification. Technological enforcement helps reduce manpower burdens, improves efficiency, lowers enforcement risks, and provides objective, standardized evidence to reduce disputes. From the Principal-Agent Theory perspective, several challenges were identified. First, officers expressed concerns about the lack of algorithm transparency and the system’s undisclosed logic, resulting in information asymmetry and difficulty understanding the AI's basis for decisions. Second, due to such asymmetry and system limitations, officers must retain discretionary judgment and intervention rights to prevent overreliance on AI, which could lead to misjudgments. Third, AI systems are prone to errors in complex scenarios, such as variable lighting or heavy pedestrian-vehicle interaction, increasing the risk of enforcement disputes—underscoring the need for continuous human oversight and system refinement. Lastly, the current insufficiency in big data infrastructure and inconsistent technological standards across vendors contribute to variable system accuracy and enforcement standards, potentially affecting the overall effectiveness and public trust in enforcement systems.
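The verification workflow the interviews describe (the AI detects, but an officer must sign off before a citation issues) can be sketched as a simple review gate. The field names and the confidence threshold are hypothetical, chosen only to make the human-in-the-loop structure concrete:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    plate: str         # hypothetical fields; real systems log far more
    violation: str
    confidence: float  # the model's self-reported confidence, 0..1

def review_queue(detections, officer_approves):
    """Route every AI detection through an officer before a citation issues.

    Detections under a scrutiny threshold are flagged so the reviewer knows
    the model itself is uncertain; the officer's decision is always final."""
    SCRUTINY_THRESHOLD = 0.9  # assumed value, for illustration only
    citations, rejected = [], []
    for d in detections:
        flagged = d.confidence < SCRUTINY_THRESHOLD
        if officer_approves(d, flagged):
            citations.append(d)
        else:
            rejected.append(d)
    return citations, rejected

# Example: an officer who declines to cite any flagged (low-confidence) case,
# e.g. ambiguous footage under poor lighting.
queue = [Detection("ABC-123", "red light", 0.97),
         Detection("XYZ-789", "improper turn", 0.62)]
citations, rejected = review_queue(queue, lambda d, flagged: not flagged)
print([d.plate for d in citations])  # ['ABC-123']
```

The design point is that the gate never auto-issues: even high-confidence detections pass through `officer_approves`, preserving the discretionary judgment and intervention rights the interviewed officers emphasized.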
    In terms of policy recommendations, given the limited impact of technological enforcement on accident reduction, it is advised to integrate it with safety education and improved traffic design. Since AI systems still face issues of misjudgment and opacity, continuous system optimization and strengthened manual review mechanisms are necessary to ensure fairness and accountability. A formal feedback mechanism should also be established to facilitate communication between police and vendors and to publicly share enforcement outcomes to build trust. Public outreach should emphasize the practical benefits of enforcement to enhance compliance. Moreover, flexible handling of special cases, standardized legal regulations, and strengthened personal data protection are essential to improve the legitimacy and public acceptance of digital enforcement.
    Description: Master's thesis
    National Chengchi University
    Department of Public Administration
    112256023
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0112256023
    Data Type: thesis
    Appears in Collections:[Department of Public Administration] Theses

    Files in This Item:

    602301.pdf (5996 KB, Adobe PDF)


    All items in 政大典藏 are protected by copyright, with all rights reserved.



    Copyright Announcement
    1. The digital content of this website is part of the National Chengchi University Institutional Repository. It is provided free of charge for non-commercial uses such as academic research and public education. Please use the content in a proper and reasonable manner and respect the rights of copyright owners; for commercial use, obtain authorization from the copyright owner in advance.

    2. This website has made every effort to avoid infringing the rights of copyright owners. If you believe that any material on the website infringes copyright, please contact our staff (nccur@nccu.edu.tw); the work will be removed from the repository while your claim is investigated.