
International Journal on Science and Technology
E-ISSN: 2229-7677
Building Trust in AI Systems: A Study on User Perception and Transparent Interactions
| Author(s) | Priyansh, Amrit Kaur Saggu |
| --- | --- |
| Country | India |
| Abstract | Building trust in AI systems is crucial for their effective adoption and integration across various domains. This research paper addresses key research gaps, focusing on the socio-psychological, transparency, ethical, and interactive dimensions that influence user trust in AI systems. Firstly, understanding the key factors influencing user trust in AI systems is vital. This research explores how demographic factors such as age, gender, education, and cultural background shape perceptions of AI trustworthiness. By examining these variables, potential disparities can be uncovered, allowing AI systems to better meet the needs of diverse user groups. Secondly, the level of transparency in AI decision-making processes significantly impacts user trust and acceptance. This research investigates whether providing explanations for AI processes through interpretable outputs or user-friendly interfaces enhances trust. It also examines user perceptions of AI systems that lack transparency, highlighting the importance of clear communication in fostering trust. Thirdly, ethical considerations and biases in AI systems play a pivotal role in shaping user trust. This study examines the trust implications of perceived or actual biases within AI systems and explores how ethical design and accountability mechanisms can address these concerns effectively. By prioritizing fairness, accountability, and transparency, developers can mitigate distrust and build more reliable AI systems. Finally, interactive design elements, such as real-time feedback and customizable interfaces, have the potential to enhance user engagement and trust in AI systems. This research investigates whether specific user interaction features can bridge the gap between technical AI functions and user comprehension, thereby fostering trust. By implementing user-centric design principles, developers can create AI systems that are more approachable and trustworthy. Addressing these research gaps is essential to building AI systems that users can trust, paving the way for their successful integration into society. |
| Keywords | User Trust in AI, Demographic Influence, Transparency in AI, Ethical Considerations, AI Biases, User-Centric Design |
| Published In | Volume 16, Issue 1, January-March 2025 |
| Published On | 2025-02-27 |
| Cite This | Building Trust in AI Systems: A Study on User Perception and Transparent Interactions - Priyansh, Amrit Kaur Saggu - IJSAT Volume 16, Issue 1, January-March 2025. DOI 10.71097/IJSAT.v16.i1.2118 |
| DOI | https://doi.org/10.71097/IJSAT.v16.i1.2118 |
| Short DOI | https://doi.org/g869xr |
All research papers published on this website are licensed under Creative Commons Attribution-ShareAlike 4.0 International License, and all rights belong to their respective authors/researchers.
