A domain-independent review of explainable AI’s role in facilitating innovation and creativity in organizations

Authors

  • Miha Lavrič, Doba Business School, Maribor, Slovenia
  • Aurora Lea Lavrič, Gymnasium and Vocational Secondary School for Chemistry and Pharmacy Ruše, Slovenia

DOI:

https://doi.org/10.32015/JIBM.2025.17.1.8

Keywords:

Explainable AI, Innovation, Creativity, Decision-Making, Organizational Behavior

Abstract

Explainable Artificial Intelligence (XAI) offers transparency and interpretability in AI systems, addressing the opacity of traditional black-box models. This review examines how XAI fosters innovation and creativity in organizations by enhancing decision-making, trust, and collaboration across diverse domains such as healthcare, manufacturing, and agriculture. Thematic analysis of the relevant literature reveals that XAI builds stakeholder confidence, promotes ethical practices, and bridges gaps between technical and non-technical teams, encouraging inclusive problem-solving. While this study highlights significant short-term benefits, longitudinal research is needed to explore XAI’s long-term impact. The paper provides insights for academics, practitioners, and policymakers, emphasizing XAI’s potential to drive organizational innovation.

References

Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., & Kim, B. (2018). Sanity checks for saliency maps. Advances in Neural Information Processing Systems, 31. https://dl.acm.org/doi/10.5555/3327546.3327621

Ali, S., Abuhmed, T., El-Sappagh, S., Muhammad, K., Alonso-Moral, J. M., Confalonieri, R., ... & Herrera, F. (2023). Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Information Fusion, 99, 101805. https://doi.org/10.1016/j.inffus.2023.101805

Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., ... & Horvitz, E. (2019, May). Guidelines for human-AI interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–13). https://dl.acm.org/doi/abs/10.1145/3290605.3300233

Binns, R. (2018, January). Fairness in machine learning: Lessons from political philosophy. In Conference on Fairness, Accountability and Transparency (pp. 149–159). PMLR. https://proceedings.mlr.press/v81/binns18a.html

Blatch-Jones, A. J., Church, H., & Crane, K. (2024). Exploring the potential benefits and challenges of artificial intelligence for research funding organisations: A scoping review. medRxiv, 2024-09. https://www.medrxiv.org/content/10.1101/2024.09.26.24314280v1

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Cambria, E., Malandri, L., Mercorio, F., Nobani, N., & Seveso, A. (2024). XAI meets LLMs: A survey of the relation between explainable AI and large language models. arXiv preprint arXiv:2407.15248. https://arxiv.org/abs/2407.15248

Carvalho, D. V., Pereira, E. M., & Cardoso, J. S. (2019). Machine learning interpretability: A survey on methods and metrics. Electronics, 8(8), 832. https://doi.org/10.3390/electronics8080832

Chowdhury, S., Dey, P., Joel-Edgar, S., Bhattacharya, S., Rodriguez-Espindola, O., Abadie, A., & Truong, L. (2023). Unlocking the value of artificial intelligence in human resource management through AI capability framework. Human Resource Management Review, 33(1), 100899. https://www.sciencedirect.com/science/article/abs/pii/S1053482222000079

Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. https://arxiv.org/abs/1702.08608

Dwivedi, R., Dave, D., Naik, H., Singhal, S., Omer, R., Patel, P., & Ranjan, R. (2023). Explainable AI (XAI): Core ideas, techniques, and solutions. ACM Computing Surveys, 55(9), 1–33. https://dl.acm.org/doi/abs/10.1145/3561048

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining explanations: An overview of interpretability of machine learning. 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), 80–89. https://doi.org/10.1109/DSAA.2018.00018

Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1–42. https://dl.acm.org/doi/10.1145/3236009

Gunning, D., & Aha, D. W. (2019). DARPA’s explainable artificial intelligence program. AI Magazine, 40(2), 44–58. https://doi.org/10.1609/aimag.v40i2.2850

Hassija, V., Chamola, V., Mahapatra, A., Singal, A., Goel, D., Huang, K., ... & Hussain, A. (2024). Interpreting black-box models: A review on explainable artificial intelligence. Cognitive Computation, 16(1), 45–74. https://link.springer.com/article/10.1007/s12559-023-10179-8

Kaur, D., Uslu, S., Rittichier, K. J., & Durresi, A. (2022). Trustworthy artificial intelligence: A review. ACM Computing Surveys, 55(2), 1–38. https://dl.acm.org/doi/abs/10.1145/3491209

Linardatos, P., Papastefanopoulos, V., & Kotsiantis, S. (2020). Explainable AI: A review of machine learning interpretability methods. Entropy, 23(1), 18. https://www.mdpi.com/1099-4300/23/1/18

Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007

Molnar, C. (2024). Interpretable machine learning: A guide for making black box models explainable. https://christophm.github.io/interpretable-ml-book/

Nowell, L. S., Norris, J. M., White, D. E., & Moules, N. J. (2017). Thematic analysis: Striving to meet the trustworthiness criteria. International Journal of Qualitative Methods, 16(1), 1–13. https://doi.org/10.1177/1609406917733847

Patton, M. Q. (1999). Enhancing the quality and credibility of qualitative analysis. Health Services Research, 34(5), 1189–1208. https://pubmed.ncbi.nlm.nih.gov/10591279/

Rai, A. (2020). Explainable AI: From black box to glass box. Journal of the Academy of Marketing Science, 48(1), 137–141. https://doi.org/10.1007/s11747-019-00710-5

Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://www.nature.com/articles/s42256-019-0048-x

Samek, W., Wiegand, T., & Müller, K. R. (2017). Explainable artificial intelligence: Understanding, visualizing, and interpreting deep learning models. arXiv preprint arXiv:1708.08296. https://arxiv.org/abs/1708.08296

Zhou, J., Gandomi, A. H., Chen, F., & Holzinger, A. (2021). Evaluating the quality of machine learning explanations: A survey on methods and metrics. Electronics, 10(5), 593. https://doi.org/10.3390/electronics10050593

Ziems, C., Held, W., Shaikh, O., Chen, J., Zhang, Z., & Yang, D. (2024). Can large language models transform computational social science? Computational Linguistics, 50(1), 237–291. https://direct.mit.edu/coli/article/50/1/237/118498/Can-Large-Language-Models-Transform-Computational

Published

2025-06-30

Issue

Vol. 17 No. 1 (2025)

Section

Review article

How to Cite

Lavrič, M., & Lavrič, A. L. (2025). A domain-independent review of explainable AI’s role in facilitating innovation and creativity in organizations. Mednarodno Inovativno Poslovanje = Journal of Innovative Business and Management, 17(1). https://doi.org/10.32015/JIBM.2025.17.1.8