E-ISSN: 2583-0074

Research Article | Generative AI

Social Science Journal for Advanced Research
2025, Volume 5, Number 4 (July)
Publisher: www.singhpublication.com

Exploring User Perception and Trust in Generative AI Applications: A Primary Study among Indian Consumers

Ahire VY1*
DOI: 10.5281/zenodo.16791916

1* Vrushali Yadavrao Ahire, Assistant Professor, Department of Management, Ashoka Business School, Nashik, Maharashtra, India.

The rapid proliferation of generative AI technologies—such as ChatGPT, DALL·E, and other language or image generation tools—has brought forth new dimensions in human-computer interaction. As these technologies become increasingly embedded in daily life, understanding user perception and trust becomes critical, especially in a diverse and rapidly digitizing country like India. This study aims to explore how Indian consumers perceive generative AI applications, assess the level of trust they place in such tools, and identify factors influencing their usage decisions. Using a structured questionnaire and a sample of 400 respondents across varied demographics, the research examines users' awareness, perceived reliability, ethical concerns, and overall trust in generative AI. The findings indicate that while there is a growing curiosity and adoption, trust is significantly influenced by the transparency of AI processes, data privacy concerns, and the perceived authenticity of AI-generated content. This study offers valuable insights for developers, policymakers, and marketers seeking to foster responsible and user-aligned AI integration.

Keywords: generative AI, user perception, trust, Indian consumers, AI ethics, human-computer interaction, ChatGPT, AI adoption, digital trust, AI awareness

Corresponding Author: Vrushali Yadavrao Ahire, Assistant Professor, Department of Management, Ashoka Business School, Nashik, Maharashtra, India.

How to Cite this Article: Ahire VY. Exploring User Perception and Trust in Generative AI Applications: A Primary Study among Indian Consumers. Soc Sci J Adv Res. 2025;5(4):114-120.

Available From: https://ssjar.singhpublication.com/index.php/ojs/article/view/280

Manuscript Received: 2025-06-20 | Review Round 1: 2025-07-09 | Accepted: 2025-07-24
Conflict of Interest: None | Funding: Nil | Ethical Approval: Yes | Plagiarism X-checker: 4.82

© 2025 by Ahire VY and published by Singh Publication. This is an Open Access article licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/


1. Introduction

The emergence of Generative Artificial Intelligence (AI) has revolutionized human-computer interaction by enabling machines to create text, images, code, and other content autonomously. Powered by advanced models such as GPT-4 by OpenAI, DALL·E, Bard/Gemini by Google, and Claude by Anthropic, generative AI systems are being rapidly integrated into a broad spectrum of consumer applications—from virtual assistants and customer service chatbots to content creation tools and educational platforms (Brown et al., 2020; Bommasani et al., 2021). These systems have shifted AI's role from reactive automation to proactive creativity and communication, marking a significant milestone in the evolution of intelligent technologies.

India, with its massive and digitally engaged population, represents a compelling context to study generative AI adoption. As of 2024, over 800 million Indians have internet access, making it one of the largest digital markets globally (IAMAI, 2024). The increasing penetration of smartphones, affordable data, and the widespread availability of AI tools through platforms like ChatGPT, Bing AI, and YouTube AI assistants have democratized access to generative AI, reaching both urban professionals and rural entrepreneurs. However, the extent to which Indian users understand, accept, and trust these technologies remains underexplored in academic literature.

Trust and perception are two critical psychological and behavioral factors influencing the adoption and sustained use of AI systems. Trust refers to the user's willingness to rely on AI outputs, assuming they are accurate, ethical, and reliable (Shin, 2021). Perception includes awareness, beliefs, attitudes, and emotional responses toward AI applications (Madhavan & Wiegmann, 2007). These constructs become particularly significant with generative AI because of its opaque “black-box” decision-making, the potential for bias, and concerns over misinformation, plagiarism, and privacy (Floridi & Chiriatti, 2020; Weidinger et al., 2022).

Although global research highlights increasing reliance on AI tools, users often struggle with issues related to data privacy, transparency, algorithmic bias, and a lack of explainability (Shneiderman, 2020).

In the Indian context, additional factors such as language diversity, digital literacy, cultural norms, and regulatory uncertainty further influence how users perceive and engage with generative AI tools. Moreover, while younger, digitally native users may exhibit enthusiasm and experimentation, older or less tech-savvy populations may show skepticism or avoidance behavior (Kumar et al., 2023).

Despite the growing consumer base, academic studies focusing specifically on Indian users’ trust and perception of generative AI remain scarce. Most existing work either concentrates on technical aspects of AI models or enterprise implementations, leaving a critical gap in user-centric research at the consumer level (Dwivedi et al., 2021). This research seeks to bridge that gap by focusing on primary data from diverse Indian consumers, offering insights into what drives or hinders trust in generative AI applications.

2. Literature Review

2.1. Generative AI: Definition and Capabilities

Generative Artificial Intelligence (AI) refers to machine learning models that are capable of generating new data—text, images, code, and more—based on patterns learned from large datasets. These models include Generative Adversarial Networks (GANs) and transformer-based large language models (LLMs) like OpenAI’s GPT-4, Google’s Gemini, and Meta’s LLaMA. Unlike traditional AI, which focuses on prediction and classification, generative AI emphasizes content creation and synthesis (Goodfellow et al., 2014; Brown et al., 2020).

Bommasani et al. (2021) introduced the concept of foundation models, which are large-scale models trained on massive datasets that can be adapted to various tasks. These models power tools such as ChatGPT, DALL·E, and Claude, which have rapidly entered public and commercial use. Their ability to perform human-like tasks—writing essays, generating art, answering queries—raises significant interest and concern among users.

2.2. Trust in AI Systems

Trust is a central construct in human-computer interaction and is critical for the adoption of AI technologies. According to Mayer, Davis, and Schoorman's (1995) integrative trust model, trust is influenced by three factors: ability, benevolence, and integrity. These attributes, when applied to AI, refer to its perceived competence, alignment with user goals, and transparency or fairness.

Research shows that users are more likely to trust AI systems when they are explainable, consistent, and ethically aligned with human values (Shneiderman, 2020; Shin, 2021). However, trust in generative AI is complex because of its probabilistic nature and the “black box” quality of large neural networks. Users may find it difficult to understand how responses are generated, leading to uncertainty and skepticism (Binns et al., 2018).

Floridi and Chiriatti (2020) argue that the lack of transparency in GPT-like models can erode user trust, especially when AI-generated content appears authoritative but contains factual inaccuracies. Users may either over-trust or under-trust AI, depending on their prior knowledge, digital literacy, and the framing of the tool.

2.3. User Perception and Cognitive Bias

User perception includes emotional and cognitive responses to AI-generated content, such as perceived creativity, authenticity, or utility. Perception is shaped by interface design, output quality, and users’ past experiences with technology (Madhavan & Wiegmann, 2007).

In consumer-facing AI applications, users often exhibit automation bias—the tendency to favor machine-generated outputs over human judgment (Mosier et al., 1996). While this can increase efficiency, it also raises concerns about over-reliance, especially in critical tasks like education or healthcare (Weidinger et al., 2022).

Recent work by O’Keefe et al. (2022) shows that user perception of AI credibility is closely tied to interface cues such as formality, response tone, and presence of disclaimers. Furthermore, the presence of cultural alignment and linguistic personalization increases acceptance, particularly in multilingual societies like India (Kumar et al., 2023).

2.4. Ethical Concerns and Data Privacy

Trust in generative AI is also deeply tied to ethical concerns, especially regarding data privacy, bias, and misinformation.

Generative AI models are often trained on publicly available internet data, which may include biased, harmful, or plagiarized content. This leads to fears that AI could reinforce stereotypes or infringe on copyright laws (Bender et al., 2021; Henderson et al., 2018).

In India, where digital literacy and data regulation are still evolving, these concerns are amplified. Users often lack clarity on how their data is used, raising suspicion about AI’s intentions (Dwivedi et al., 2021). The absence of robust data protection laws further affects trust levels among Indian users (IAMAI, 2024).

2.5. Indian Context: Cultural and Demographic Influences

Despite the global hype around generative AI, localized user behavior studies in India are limited. However, early evidence suggests that Indian users display a mix of curiosity, caution, and rapid adoption, especially among youth and tech workers (Kumar et al., 2023).

Language diversity, varying levels of education, and social norms heavily influence how users perceive AI. For instance, while urban users may embrace AI writing tools for productivity, rural users may rely on them for language translation or educational assistance. Gender, age, and education also affect trust: studies show that younger, more educated males are early adopters, whereas others show hesitancy (Gupta & Yadav, 2023).

Government initiatives like Digital India and the upcoming Data Protection Bill are expected to shape user perception and regulatory trust in AI systems (MeitY, 2023).

3. Research Objectives

The primary aim of this research is to explore the evolving perceptions and trust dynamics associated with generative AI tools among Indian consumers. In line with this aim, the study is guided by the following specific objectives:
1. To assess the level of awareness and adoption of generative AI tools (such as ChatGPT, DALL·E, Gemini) among Indian consumers.
2. To examine consumer perceptions regarding the accuracy, usefulness, creativity, and ethical implications of content generated by AI applications.
3. To evaluate the degree of trust Indian users place in generative AI tools and identify the key factors influencing this trust (e.g., transparency, privacy, cultural relevance).
4. To analyze the relationship between demographic characteristics (such as age, gender, education, and technology familiarity) and users’ perception and trust in generative AI tools.

4. Research Methodology

This section outlines the methodological framework adopted to address the research objectives related to user perception and trust in generative AI tools among Indian consumers.

4.1. Research Design

This study employs a quantitative, descriptive research design based on primary data collected through a structured survey. The approach is cross-sectional, aiming to capture insights from a diverse group of Indian users at a single point in time. The design is appropriate for examining user awareness, trust factors, perception, and demographic influences systematically and objectively.

4.2. Data Source

  • Primary Data: Collected directly from respondents through a structured online questionnaire.
  • Secondary Data: Reviewed from existing literature, academic journals, industry reports (e.g., IAMAI, NASSCOM), and publications related to generative AI, trust, and human-computer interaction to frame the context and support the analysis.

4.3. Sampling Method

  • Sampling Technique: A combination of purposive sampling and stratified random sampling was used. Purposive sampling ensured that only users with prior exposure to generative AI tools (e.g., ChatGPT, DALL·E, Midjourney) were included, while stratified sampling ensured diversity across region, age, gender, education, and occupation. A minimal sketch of this two-stage procedure follows this list.
  • Sample Size: The study surveyed 400 Indian consumers across urban, semi-urban, and rural settings to ensure representation and statistical validity.
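To make the two-stage sampling procedure concrete, the sketch below shows how a proportional stratified allocation of the 400-respondent sample might be computed, together with a purposive eligibility screen. The strata shares, the tools_used field, and the eligibility rule are illustrative assumptions for exposition, not figures or instruments from this study.

```python
# Minimal sketch of proportional stratified allocation plus a purposive
# screen. All shares and field names are hypothetical, not study data.

TOTAL_SAMPLE = 400

# Hypothetical population shares per setting stratum (must sum to 1.0).
strata_shares = {"urban": 0.50, "semi_urban": 0.30, "rural": 0.20}

# Proportional allocation: each stratum receives its share of the total.
allocation = {s: round(TOTAL_SAMPLE * p) for s, p in strata_shares.items()}
print(allocation)  # {'urban': 200, 'semi_urban': 120, 'rural': 80}

def eligible(respondent: dict) -> bool:
    """Purposive screen: keep only respondents who report prior exposure
    to at least one generative AI tool ('tools_used' is a hypothetical key)."""
    return bool(respondent.get("tools_used"))

print(eligible({"tools_used": ["ChatGPT"]}))  # True
print(eligible({"tools_used": []}))           # False
```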

5. Data Collection and Interpretation

This section presents the analysis of primary data collected through a structured questionnaire designed to explore user perception and trust in generative AI applications. Data were collected from 400 respondents across India using purposive and stratified sampling. Each research objective is addressed individually with interpretation based on descriptive and inferential insights.

Objective 1: To assess the level of awareness and adoption of generative AI tools among Indian consumers

Respondents were asked about their familiarity with generative AI tools such as ChatGPT, Google Gemini, and DALL·E.

Table 1: Awareness and Adoption of Generative AI Tools (N=400)

Category              % of Respondents
Aware and have used   62%
Aware but not used    24%
Not aware             14%

Among those aware and using generative AI, ChatGPT (86%) emerged as the most popular tool, followed by Google Gemini/Bard (41%) and DALL·E (19%).

Interpretation:
The results indicate a relatively high level of adoption among digitally active Indian users. The popularity of ChatGPT suggests strong brand recall and accessibility. However, the 14% unaware population points toward a persistent digital gap, particularly in rural and less educated demographics.

Objective 2: To examine consumer perceptions regarding the accuracy, usefulness, creativity, and ethical implications of content generated by AI applications

Perception was measured using a 5-point Likert scale ranging from Strongly Disagree (1) to Strongly Agree (5).


Table 2: Perception Metrics Toward Generative AI (N=400)

Perception Statement                       Agree/Strongly Agree (%)
AI-generated content is useful             77%
AI-generated content is creative           72%
The output is accurate and relevant        70%
AI-generated responses can be misleading   62%
I have ethical concerns (e.g., bias)       51%

Interpretation:
A majority of users find generative AI tools both useful and creative, but also recognize limitations in accuracy and truthfulness. Ethical concerns were noted by about half the respondents, showing an emerging awareness of issues like misinformation and plagiarism.
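As an illustration of how the "Agree/Strongly Agree (%)" figures in Tables 2 and 3 can be derived from raw 5-point Likert responses, the sketch below computes top-two-box percentages with pandas. The response values and column names are fabricated placeholders and do not reproduce the study's dataset.

```python
# Minimal sketch: top-two-box percentages from 5-point Likert items
# (1 = Strongly Disagree ... 5 = Strongly Agree). Data are fabricated.
import pandas as pd

responses = pd.DataFrame({
    "useful":   [5, 4, 2, 5, 3, 4, 4, 5],
    "creative": [4, 4, 3, 5, 2, 4, 5, 3],
    "accurate": [3, 4, 4, 5, 2, 4, 4, 3],
})

# Share of respondents answering Agree (4) or Strongly Agree (5) per item.
top_two_box = (responses >= 4).mean() * 100
print(top_two_box.round(1))
```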

Objective 3: To evaluate the degree of trust Indian users place in generative AI tools and identify the key factors influencing this trust

Respondents were asked to indicate their trust level based on transparency, privacy, and consistency.

Table 3: Trust Factors in Generative AI Applications

Trust-Related Statement                                  Agree/Strongly Agree (%)
I trust AI when outputs are consistent                   74%
I worry about privacy when using generative AI           61%
Transparency increases my trust in the system            78%
I hesitate to use AI for sensitive or personal content   56%
I trust AI more when it provides source references       69%

Interpretation:
Trust is primarily dependent on the consistency of AI outputs, privacy protection, and transparency. A significant proportion of users prefer AI tools that disclose their information sources, indicating a demand for explainability and factual integrity.

Objective 4: To analyze the relationship between demographic characteristics and users’ perception and trust in generative AI tools

Cross-tabulation and ANOVA were used to explore demographic differences in perception and trust.

Table 4: Trust and Perception by Age Group

Age Group     Perceived Usefulness (%)   Trust in AI (%)
18–25 years   81%                        76%
26–35 years   74%                        68%
36–50 years   61%                        53%
50+ years     45%                        39%

Table 5: Trust by Profession

Profession               Trust in Generative AI (%)
IT/Tech Professionals    79%
Non-Tech Professionals   47%
Students                 65%

Interpretation:
Younger users and tech-savvy professionals demonstrate greater trust and positive perception of generative AI tools. The results reveal a statistically significant difference in trust levels between age groups (p < 0.05), indicating the influence of digital exposure and familiarity. Older and non-technical users exhibit more skepticism, which could be addressed through targeted awareness campaigns.
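The one-way ANOVA reported above can be run mechanically as in the sketch below, which tests whether mean trust differs across age groups and then cross-tabulates a binary trust indicator by group. The trust scores and column names are fabricated placeholders used only to show the procedure, assuming pandas and SciPy are available.

```python
# Minimal sketch of the one-way ANOVA and cross-tabulation used for
# Objective 4. All scores below are fabricated placeholders.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "age_group":   ["18-25"] * 3 + ["26-35"] * 3 + ["36-50"] * 3 + ["50+"] * 3,
    "trust_score": [4.2, 4.5, 4.0, 3.9, 4.0, 3.6, 3.1, 3.4, 2.9, 2.8, 2.6, 3.0],
})

# One-way ANOVA: does mean trust differ across age groups?
groups = [g["trust_score"].to_numpy() for _, g in df.groupby("age_group")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 => trust differs by age

# Cross-tabulation of a binary trust indicator by age group (row shares).
df["trusts_ai"] = df["trust_score"] >= 3.5
print(pd.crosstab(df["age_group"], df["trusts_ai"], normalize="index"))
```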

6. Limitations of the Study

While this study provides meaningful insights into user perception and trust in generative AI applications among Indian consumers, it has certain limitations:
1. Sampling Bias: The survey was conducted online, which may have excluded respondents with limited digital literacy or internet access—particularly from rural or remote regions.
2. Self-Reported Data: Responses are based on personal perceptions and may be subject to bias or social desirability, affecting the objectivity of certain results.
3. Cross-Sectional Design: Data were collected at a single point in time and do not reflect changes in perception or trust over time as users gain more experience with AI tools.
4. Tool-Specific Limitations: The study focuses primarily on popular generative AI tools like ChatGPT, Google Gemini, and DALL·E. Results may not fully apply to niche or domain-specific tools.

7. Future Scope of the Study

Based on the findings and limitations, the following directions are recommended for future research:
1. Longitudinal Studies: Future research can track how trust and usage patterns evolve over time as AI tools become more advanced and integrated into daily life.
2. Qualitative Exploration: In-depth interviews or focus groups could provide richer insights into user emotions, ethical concerns, and cognitive biases related to generative AI.
3. Sector-Specific Analysis: Further studies can examine how generative AI is perceived and trusted in specific sectors like education, healthcare, journalism, or creative arts.
4. Rural and Vernacular User Behavior: Research should explore adoption barriers and perception patterns among non-English speaking and rural consumers, including the role of regional languages in improving accessibility.

8. Conclusion

The findings show high awareness and adoption, particularly among youth and tech professionals, with ChatGPT being the most widely used tool. User perceptions are largely positive regarding usefulness and creativity, but many users also acknowledge the potential for misinformation and bias. Trust in AI tools is shaped by factors such as consistency, transparency, and data privacy. While many users appreciate AI’s utility, there remains hesitation in using it for sensitive tasks. Demographic analysis reveals that younger, more digitally literate users exhibit greater trust and engagement, whereas older and non-tech users tend to be more cautious.

References

1. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).

2. Binns, R., Veale, M., Van Kleek, M., & Shadbolt, N. (2018). 'It's reducing a human being to a percentage': Perceptions of justice in algorithmic decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems.

3. Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., & Liang, P. (2021). On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258.

4. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., & Amodei, D. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.

5. Dwivedi, Y. K., Hughes, D. L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., & Williams, M. D. (2021). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, 101994.

6. Floridi, L., & Chiriatti, M. (2020). GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30(4), 681–694.

7. Goodfellow, I., et al. (2014). Generative adversarial nets. Advances in Neural Information Processing Systems, 27.

8. Gupta, N., & Yadav, M. (2023). User adoption of AI tools in India: A demographic perspective. Asian Journal of Technology and Society, 5(2), 120–135.

9. Henderson, P., et al. (2018). Ethical challenges in data-driven dialogue systems. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.

10. IAMAI. (2024). Internet in India report 2024. Internet and Mobile Association of India.

11. Kumar, A., Bansal, A., & Shah, H. (2023). Perception of artificial intelligence among Indian consumers: A mixed-method study. Journal of Technology Management & Innovation, 18(1), 67–78.

12. Madhavan, P., & Wiegmann, D. A. (2007). Similarities and differences between human–human and human–automation trust: An integrative review. Theoretical Issues in Ergonomics Science, 8(4), 277–301.

13. Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734.

14. MeitY. (2023). Draft Digital Personal Data Protection Bill. Ministry of Electronics and Information Technology, Government of India.

15. Mosier, K. L., Skitka, L. J., Burdick, M. D., & Heers, S. T. (1996). Automation bias and errors: Are crews better than individuals? Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 40(4), 344–348.

16. O'Keefe, R. M., Barredo Arrieta, A., & Del Ser, J. (2022). Trustworthy AI: A survey of trust-enhancing techniques. Computers in Human Behavior Reports, 6, 100169.

17. Shin, D. (2021). The effects of explainability and causability on trust in AI. Computers in Human Behavior, 123, 106878.

18. Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504.

19. Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Huang, P. S., Uesato, J., & Gabriel, I. (2022). Taxonomy of risks posed by language models. arXiv preprint arXiv:2112.04359.

Disclaimer / Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of Journals and/or the editor(s). Journals and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.