International Journal on Science and Technology

E-ISSN: 2229-7677     Impact Factor: 9.88

Building Trust in AI Systems: A Study on User Perception and Transparent Interactions

Author(s): Priyansh, Amrit Kaur Saggu
Country: India
Abstract: Building trust in AI systems is crucial for their effective adoption and integration across various domains. This research paper addresses key research gaps, focusing on the socio-psychological, transparency-related, ethical, and interactive dimensions that influence user trust in AI systems. Firstly, understanding the key factors influencing user trust in AI systems is vital. This research explores how demographic factors such as age, gender, education, and cultural background shape perceptions of AI trustworthiness. Examining these variables can uncover potential disparities, allowing AI systems to better meet the needs of diverse user groups. Secondly, the level of transparency in AI decision-making processes significantly impacts user trust and acceptance. This research investigates whether providing explanations for AI processes through interpretable outputs or user-friendly interfaces enhances trust. It also examines user perceptions of AI systems that lack transparency, highlighting the importance of clear communication in fostering trust. Thirdly, ethical considerations and biases in AI systems play a pivotal role in shaping user trust. This study examines the trust implications of perceived or actual biases within AI systems and explores how ethical design and accountability mechanisms can address these concerns effectively. By prioritizing fairness, accountability, and transparency, developers can mitigate distrust and build more reliable AI systems. Finally, interactive design elements, such as real-time feedback and customizable interfaces, have the potential to enhance user engagement and trust in AI systems. This research investigates whether specific user-interaction features can bridge the gap between technical AI functions and user comprehension, thereby fostering trust. By implementing user-centric design principles, developers can create AI systems that are more approachable and trustworthy. Addressing these research gaps is essential to building AI systems that users can trust, paving the way for their successful integration into society.
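The abstract refers to "interpretable outputs" as one route to transparency. The sketch below is a hypothetical illustration only (it is not taken from the paper; the feature names, weights, and decision threshold are invented) of how a system might pair a decision with a plain-language rationale instead of returning a bare label.

    # Hypothetical sketch: pairing an AI decision with a plain-language explanation.
    # All feature names, weights, and the threshold are invented for illustration.

    FEATURE_WEIGHTS = {                 # toy linear-model weights for an approval score
        "income_to_debt_ratio": 0.6,
        "years_of_credit_history": 0.3,
        "recent_missed_payments": -0.8,
    }
    DECISION_THRESHOLD = 0.5            # scores above this are treated as "approve"

    def decide_with_explanation(applicant: dict) -> dict:
        """Return a decision together with a per-feature, human-readable rationale."""
        contributions = {
            name: weight * applicant[name] for name, weight in FEATURE_WEIGHTS.items()
        }
        score = sum(contributions.values())
        explanation = [
            f"{name.replace('_', ' ')} {'raised' if value >= 0 else 'lowered'} "
            f"the score by {abs(value):.2f}"
            for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
        ]
        return {
            "decision": "approve" if score > DECISION_THRESHOLD else "refer to human reviewer",
            "score": round(score, 2),
            "explanation": explanation,  # shown to the user instead of a bare label
        }

    if __name__ == "__main__":
        applicant = {
            "income_to_debt_ratio": 1.2,
            "years_of_credit_history": 0.5,
            "recent_missed_payments": 1.0,
        }
        for line in decide_with_explanation(applicant)["explanation"]:
            print(line)

Whether this kind of per-feature rationale actually increases perceived trustworthiness is precisely the empirical question the study investigates; the sketch only illustrates the form such an interpretable output could take.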
Keywords: User Trust in AI, Demographic Influence, Transparency in AI, Ethical Considerations, AI Biases, User-Centric Design.
Published In: Volume 16, Issue 1, January-March 2025
Published On: 2025-02-27
Cite This: Building Trust in AI Systems: A Study on User Perception and Transparent Interactions - Priyansh, Amrit Kaur Saggu - IJSAT Volume 16, Issue 1, January-March 2025. DOI: 10.71097/IJSAT.v16.i1.2118
DOI: https://doi.org/10.71097/IJSAT.v16.i1.2118
Short DOI: https://doi.org/g869xr
