Author
Mariska Fecho
Abstract
The advent of artificial intelligence (AI) has significantly transformed the digital landscape, offering substantial potential for innovation and advancement across multiple fields. With the ability to automate complex tasks, analyze large datasets in real time, and make autonomous decisions, AI and, in particular, machine learning (ML) offer new opportunities for organizations and individuals to solve problems. For instance, AI systems assist organizations in enhancing the efficacy and efficiency of their processes. AI has thus become a key driver in the evolution of industries such as healthcare, finance, manufacturing, and education, pushing the boundaries of what is possible in the digital age. Moreover, AI's integration into everyday applications – such as personalized recommendations, healthcare services, or automated driving – provides valuable support for individuals in their daily lives. Beyond saving time by optimizing product selection when shopping online, AI-based systems can also serve individuals' personal well-being. However, while some organizations have begun to apply AI to their core processes, its full potential has yet to be realized. This is particularly the case in areas that do not directly affect an organization's effectiveness but are crucial to long-term success and compliance. Thus, the transformative power of AI in these critical yet undervalued domains remains largely untapped.
Previous studies have demonstrated that the adoption of AI in organizations is a complex and challenging process, often associated with a range of difficulties and obstacles. On the one hand, this is due to the distinctive attributes of AI in comparison to traditional information systems; on the other hand, technical and user-related challenges frequently impede successful adoption. Despite the large body of research on AI, the factors influencing its adoption in specific organizational contexts, such as corporate environmental sustainability, remain underexplored. This dissertation offers valuable insights into AI adoption in organizations, particularly in the context of corporate environmental sustainability. In addition to technical and organizational factors, the influence of the external organizational environment is also investigated.
Moreover, trust is crucial for the decision to adopt and use AI. The unique characteristics of AI, including autonomy, learning capabilities, and the complexity of its decision-making processes, can give rise to skepticism and impede its usage. In particular, a lack of knowledge about AI's decision-making processes and outcomes can lead to concerns, making trust a critical element in achieving widespread adoption and usage of AI. While trust has previously been investigated mostly as a single construct in the context of AI, this dissertation additionally considers the impact of diverse trust concepts on human behavior. This dissertation aims to provide a comprehensive understanding of AI adoption in the context of corporate environmental sustainability and to offer actionable knowledge for enhancing trust in AI technologies. To this end, two quantitative studies, a qualitative study, and an experimental study were conducted in this cumulative dissertation to investigate AI adoption with a focus on trust. The results of these studies were published in peer-reviewed conference proceedings.
The four published studies contribute to theory development by revealing the factors that influence the organizational adoption of AI. Furthermore, they highlight the role of trust in AI and its impact on human behavior. Additionally, the research papers presented in this dissertation offer valuable guidance for practitioners.
The first part of this dissertation comprises two research papers that examine the organizational adoption of AI. Paper A addresses the adoption of green AI in the context of corporate environmental sustainability, whereas Paper B analyzes the influence of external pressures on green AI adoption. For Paper A, interviews with 21 experts from various industries were conducted to derive an integrative framework of factors that determine the adoption of green AI; on this basis, eight propositions were developed to explain the effects of the identified factors. For Paper B, an anonymous online survey was conducted with 453 participants to investigate the influence of specific factors, including external pressures, on the adoption of green AI. The key findings of Paper B indicate that coercive, mimetic, and normative pressures significantly influence green AI adoption, while top management support mediates the effects of these pressures. This highlights the pivotal role of high-level decision-makers within organizations as well as the role of external influences.
The second part of this dissertation examines the role of trust in AI. Previous research has indicated that trust is critical when using new information systems, and this dissertation confirms that trust in AI can facilitate the use of such systems. While Papers A and B report results at the organizational level, Papers C and D focus on the individual user. In particular, Paper C examines trust as a multidimensional concept, emphasizing cognitive and emotional trust in AI depending on two types of AI vendors (i.e., automobile manufacturers and technology companies). Based on a large-scale anonymous online survey with 687 participants, a multi-group analysis revealed different degrees of trust depending on the vendor type: emotional trust matters more for technology companies, whereas cognitive trust has a greater impact for automobile manufacturers. Understanding how trust influences the intention to use AI systems is crucial, but knowing how to design trustworthy AI systems is equally important, as it makes users more willing to rely on these technologies and integrate them into their daily lives. Thus, Paper D follows a design science research methodology to develop design principles for a user-centered, trustworthy AI system, namely an ML system. An effectiveness test revealed that an ML system designed according to these principles is perceived as more trustworthy than existing designs.
In summary, this dissertation contributes to the successful organizational adoption of AI, particularly in the context of corporate environmental sustainability, by providing a comprehensive overview of adoption factors. This work guides practitioners and decision-makers through the adoption process and helps organizations prepare and gather the necessary resources for adopting AI. In addition, the findings highlight the importance of trust for AI usage and provide insights into different trust concepts.
By specifying design principles for a trustworthy and user-centered ML system, this dissertation also offers developers concrete guidance for building trustworthy ML systems. Such ML-based systems have the potential to facilitate the use of AI in everyday contexts. Moreover, the dissertation advances theory by offering novel insights grounded in established theoretical models and provides researchers with a deeper understanding of the factors influencing the adoption of AI, particularly the critical role of trust.
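As a purely illustrative aside that is not part of the dissertation: the mediation pattern reported for Paper B (external pressures whose effect on green AI adoption runs partly through top management support) could, in principle, be examined with a simple regression-based mediation sketch such as the one below. The abstract does not state which analysis method was actually used, and all variable names and the simulated data are hypothetical.

# Hypothetical sketch only: a regression-based mediation check in the spirit of
# Paper B's finding. Variable names (coercive, mimetic, normative, tms, adoption)
# and the simulated data are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 453  # sample size reported for Paper B's survey
coercive, mimetic, normative = rng.normal(size=(3, n))
# Mediator: top management support (tms), driven here by the three external pressures.
tms = 0.4 * coercive + 0.3 * mimetic + 0.2 * normative + rng.normal(size=n)
# Outcome: intention to adopt green AI, driven mainly by the mediator.
adoption = 0.5 * tms + 0.1 * coercive + rng.normal(size=n)
df = pd.DataFrame({"coercive": coercive, "mimetic": mimetic,
                   "normative": normative, "tms": tms, "adoption": adoption})

# Path a: external pressures -> top management support.
a_model = smf.ols("tms ~ coercive + mimetic + normative", data=df).fit()
# Paths b and c': mediator and pressures -> adoption. A small direct effect of a
# pressure alongside a sizable indirect path (pressure -> tms -> adoption) is the
# pattern usually read as mediation.
b_model = smf.ols("adoption ~ tms + coercive + mimetic + normative", data=df).fit()

indirect = a_model.params["coercive"] * b_model.params["tms"]
print(b_model.summary())
print("Indirect effect of coercive pressure via top management support:", indirect)

For actual survey data, a bootstrap confidence interval around the indirect effect, or a full structural equation model covering all constructs, would be the more rigorous choice; the sketch only makes the mediation logic concrete.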
Suggested Citation
Fecho, Mariska, 2025.
"Toward the Adoption of Artificial Intelligence: Exploring the Critical Role of Trust,"
Publications of Darmstadt Technical University, Institute for Business Studies (BWL) 153649, Darmstadt Technical University, Department of Business Administration, Economics and Law, Institute for Business Studies (BWL).
Handle:
RePEc:dar:wpaper:153649
Note: for complete metadata visit http://tubiblio.ulb.tu-darmstadt.de/153649/