Abstract
1. Introduction
2. Theoretical support and research model
3. Methodology
4. Results
5. Discussion and conclusions
CRediT authorship contribution statement
Declaration of competing interest
Data availability
References
Abstract
Although artificial intelligence can contribute to decision-making processes, many industry players lag behind pioneering companies in utilizing artificial intelligence-driven technologies, which is a significant problem. Explainable artificial intelligence can be a viable solution to mitigate this problem. This paper proposes a research model to address how explainable artificial intelligence can impact decision-making processes. Using an experimental design, empirical data are collected to test the research model. This paper is among the first to provide empirical evidence about the impact of explainable artificial intelligence on supply chain decision-making processes. We propose a serial mediation path that includes transparency and agile decision-making. Findings reveal that explainable artificial intelligence enhances transparency, thereby significantly contributing to agile decision-making for improving cyber resilience during supply chain cyberattacks. Moreover, we conduct a post hoc text analysis to explore the themes present in tweets discussing explainable artificial intelligence in decision support systems. The results indicate a predominantly positive attitude towards explainable artificial intelligence within these systems. Furthermore, the text analysis reveals two main themes that emphasize the importance of transparency, explainability, and interpretability in explainable artificial intelligence.
1. Introduction
Automotive industry leaders, such as Tesla, have made substantial investments in artificial intelligence (AI) to expedite the introduction of self-driving vehicles to the market, enhancing their competitive capabilities. The integration of AI in supply chain operations has played a crucial role in enabling Tesla to optimize its operational costs [64] while simultaneously facilitating the establishment of a Gigafactory in China [78]. We are witnessing a rapid digital transformation driven by the integration of AI in supply chain management. The COVID-19 pandemic forced companies and organizations to expedite the digitalization of their operations [6]. To enhance their competitive edge, prominent companies, including Amazon, Walmart, Alibaba, Siemens, and Toyota, have embraced AI-based technologies to automate and digitalize their operations and supply chain activities [1,32]. Digital transformation, however, also opens new avenues for potential cyberattacks. Nevertheless, using AI-based technologies for decision-making during cyberattacks (e.g., American Express monitoring; [75]) offers significant advantages that outweigh the potential losses incurred.
Although AI-based technologies can contribute to decision-making processes in operations and supply chain management, many industry players are lagging behind pioneering companies in utilizing AI-driven technologies, which is a major problem. The adoption of AI-based technologies in decision-making processes, particularly during sensitive situations such as cyberattacks, may therefore encounter barriers that delay their use. Owing to its vital advantages, AI has received much attention from decision-makers addressing resilience cases [45] and other sensitive problems such as healthcare [69]. A lack of explanation of the underlying AI processes, however, leads to the rejection of AI in decision support systems [60]. Leveraging AI-powered decision-making platforms can significantly facilitate and expedite the decision-making process, resulting in improved overall performance. For instance, the Colonial Pipeline, a U.S. oil supplier, faced a cyberattack and, after a week of deliberation, opted to pay around $4.4 million to resolve the issue [21]. A quick decision on the first day, through agile decision-making, could have saved money and enabled the uninterrupted continuation of operations with stakeholders.
4. Results
We employed regression analysis to test the hypothesized relationships among the research model variables. We also conducted the tests required for the experimental design, along with checks of method assumptions and bias.
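The paper does not publish its analysis code. As a minimal sketch, the serial mediation path described in the abstract (explainable AI → transparency → agile decision-making → cyber resilience) could be estimated with ordinary least squares regressions in Python via statsmodels. The file name and column names (experiment_data.csv, xai, transparency, agile_dm, cyber_resilience) are illustrative assumptions, not the authors' actual variables.

```python
# Hypothetical sketch of the serial mediation regressions
# (XAI -> transparency -> agile decision-making -> cyber resilience).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_data.csv")  # hypothetical file; xai coded 0 = low, 1 = high

# Path a: manipulated XAI level predicting perceived transparency
m1 = smf.ols("transparency ~ xai", data=df).fit()

# Path b: transparency predicting agile decision-making, controlling for XAI
m2 = smf.ols("agile_dm ~ transparency + xai", data=df).fit()

# Path c: agile decision-making predicting cyber resilience,
# controlling for the earlier variables in the serial chain
m3 = smf.ols("cyber_resilience ~ agile_dm + transparency + xai", data=df).fit()

for m in (m1, m2, m3):
    print(m.summary())

# The serial indirect effect is the product of the three path coefficients.
indirect = (m1.params["xai"] * m2.params["transparency"]
            * m3.params["agile_dm"])
print(f"Serial indirect effect (point estimate): {indirect:.3f}")
```

In practice, the significance of such a serial indirect effect is usually assessed with bootstrapped confidence intervals (e.g., PROCESS Model 6) rather than the point estimate alone.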
4.1. Design check
To validate the design, a scenario check, a realism check, a validity check, and a manipulation check were performed. The scenario check measured the participants' learning of the provided information. The results showed that participants' learning was strong enough to support the scenario design (first scenario: Mean = 5.3 out of 7, Standard Deviation = 0.99; second scenario: Mean = 5.5 out of 7, Standard Deviation = 0.85). The realism check was conducted through a face validity process and tested using the quantitative approach proposed by Thomas et al. [65]. Participants supported the realism check by agreeing that the scenarios were sufficiently close to reality (Mean = 4.08 out of 5; Standard Deviation = 0.44). Similarly, the validity check supported the scenario design (Mean = 4.07 out of 5; Standard Deviation = 0.43). The manipulation check confirmed the manipulation by showing a significant difference between the two manipulated levels (p < 0.001; low-XAI level: Mean = 2.68 out of 7, Standard Deviation = 0.12; high-XAI level: Mean = 5.9 out of 7, Standard Deviation = 0.10).
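The paper does not state which statistical test underlies the manipulation check; a two-sample t-test comparing the two manipulated groups is a common choice for this kind of check. The following is a minimal Python sketch under that assumption, with hypothetical file and column names (experiment_data.csv, condition, perceived_xai) used purely for illustration.

```python
# Hypothetical re-creation of the manipulation check: a Welch two-sample
# t-test comparing perceived XAI between the low- and high-XAI conditions.
import pandas as pd
from scipy import stats

df = pd.read_csv("experiment_data.csv")  # hypothetical file, as above
low = df.loc[df["condition"] == "low_xai", "perceived_xai"]
high = df.loc[df["condition"] == "high_xai", "perceived_xai"]

t_stat, p_value = stats.ttest_ind(high, low, equal_var=False)  # Welch's t-test
print(f"Low XAI:  M = {low.mean():.2f}, SD = {low.std(ddof=1):.2f}")
print(f"High XAI: M = {high.mean():.2f}, SD = {high.std(ddof=1):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")  # paper reports p < 0.001
```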