George Mason Neuroscientist Part of Global Research Collaboration Suggesting Framework to Examine Trust in Artificial Intelligence

Professor Mislin's co-authored paper was published in Humanities and Social Sciences Communications, a Nature Portfolio journal.

The paper introduces the TrustNet Framework, a transdisciplinary model for understanding and strengthening trust in AI. The research draws on insights from an international collaboration of scholars across psychology, neuroscience, ethics, and technology. It addresses how trust between people, systems, and institutions can guide the development and responsible use of AI in critical areas such as healthcare, hiring, misinformation, and warfare.

  • Trust in artificial intelligence is fundamentally different from trust between humans, requiring new conceptual tools and transdisciplinary collaboration to navigate risks related to privacy, fairness, and accountability.

  • Most trust research remains siloed in academic disciplines, with only limited involvement from institutional stakeholders, hindering progress in building trustworthy AI systems that reflect real-world concerns.

  • The authors call for a transdisciplinary research agenda that integrates scientific and societal expertise, emphasizing collaboration between academics, policymakers, industry, and users to create robust trust in AI across critical domains.

“We propose a transdisciplinary research framework to understand and bolster trust in AI and address grand challenges in domains as diverse and urgent as misinformation, discrimination, and warfare,” says Mislin.

Read the full article.