Exploring User Perception and Trust in Generative AI Applications: A Primary Study among Indian Consumers
DOI: https://doi.org/10.5281/zenodo.16791916

Keywords: generative AI, user perception, trust, Indian consumers, AI ethics, human-computer interaction, ChatGPT, AI adoption, digital trust, AI awareness

Abstract
The rapid proliferation of generative AI technologies, such as ChatGPT, DALL·E, and other language or image generation tools, has introduced new dimensions in human-computer interaction. As these technologies become increasingly embedded in daily life, understanding user perception and trust becomes critical, especially in a diverse and rapidly digitizing country like India. This study explores how Indian consumers perceive generative AI applications, assesses the level of trust they place in such tools, and identifies the factors influencing their usage decisions. Using a structured questionnaire administered to a sample of 400 respondents across varied demographics, the research examines users' awareness, perceived reliability, ethical concerns, and overall trust in generative AI. The findings indicate that while curiosity about and adoption of these tools are growing, trust is significantly influenced by the transparency of AI processes, data privacy concerns, and the perceived authenticity of AI-generated content. The study offers insights for developers, policymakers, and marketers seeking to foster responsible, user-aligned AI integration.
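The abstract's claim that trust varies with transparency, privacy concerns, and perceived authenticity implies a regression-style analysis of the questionnaire responses. The sketch below is purely illustrative of how such an analysis could be run on Likert-scale survey data; it is not the authors' actual method, and the file name, column names, and three-item composites are assumptions introduced only for this example.

# Illustrative sketch only; the file and column names below are hypothetical.
import pandas as pd
import statsmodels.api as sm

# Load the (hypothetical) survey file: one row per respondent, 1-5 Likert items.
df = pd.read_csv("survey_responses.csv")

# Average multi-item scales into composite scores (item names are assumptions).
df["transparency"] = df[["transp_1", "transp_2", "transp_3"]].mean(axis=1)
df["privacy_concern"] = df[["priv_1", "priv_2", "priv_3"]].mean(axis=1)
df["authenticity"] = df[["auth_1", "auth_2", "auth_3"]].mean(axis=1)
df["trust"] = df[["trust_1", "trust_2", "trust_3"]].mean(axis=1)

# Ordinary least squares regression of overall trust on the three factors.
X = sm.add_constant(df[["transparency", "privacy_concern", "authenticity"]])
model = sm.OLS(df["trust"], X).fit()
print(model.summary())  # coefficients and p-values for each trust factor

Under these assumptions, the sign and significance of each coefficient would indicate whether a factor (for example, perceived transparency) is associated with higher or lower overall trust.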
License
Copyright (c) 2025 Vrushali Yadavrao Ahire

This work is licensed under a Creative Commons Attribution 4.0 International License.
Research articles in the 'Social Science Journal for Advanced Research' are Open Access articles published under the Creative Commons Attribution 4.0 International License (CC BY 4.0, http://creativecommons.org/licenses/by/4.0/). This license allows you to share (copy and redistribute the material in any medium or format) and adapt (remix, transform, and build upon the material for any purpose, even commercially).