

Projects
Artificial Cognition: A Framework for Self-Instantiated, Temporally Continuous, Disturbance-Driven Adaptive World-Builders
Collaborating Institutions:
CLEMSON UNIVERSITY
METACOGNITION INSTITUTE
As AI technologies evolve beyond generative engines capable of producing text, images, and media, a new paradigm emerges for robotics and autonomous systems. Our research proposes an Artificial Cognition framework that transcends current computational limitations through four essential pillars: self-instantiation, temporal continuity, disturbance-driven adaptation, and autonomous world-building.
Unlike traditional AI architectures constrained by functionalist approaches, our framework addresses the fundamental requirements for consciousness-like behaviors that cannot be achieved through computational scaling alone. By integrating self-representation mechanisms with persistent memory systems, we create agents capable of maintaining an internal narrative and identity across time. These systems respond to environmental surprises through intrinsic feedback loops, triggering genuine learning and adaptation while constructing rich internal world models.
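The interplay of the four pillars can be sketched as a minimal agent loop. This is an illustrative reduction only, assuming a prediction-error notion of "surprise"; the class, threshold, and update rule below are invented for exposition and are not the project's architecture:

```python
class AdaptiveAgent:
    """Toy sketch of the four pillars; all names are illustrative."""

    def __init__(self, surprise_threshold=0.5):
        self.identity = {"step": 0}     # self-instantiation: an explicit self-model
        self.memory = []                # temporal continuity: persistent episode log
        self.world_model = {}           # autonomous world-building: state -> expected observation
        self.surprise_threshold = surprise_threshold

    def predict(self, state):
        # Expected observation under the current internal world model.
        return self.world_model.get(state, 0.0)

    def step(self, state, observation):
        self.identity["step"] += 1
        error = abs(observation - self.predict(state))   # disturbance signal
        if error > self.surprise_threshold:              # disturbance-driven adaptation:
            self.world_model[state] = observation        # learn only when surprised
        self.memory.append((self.identity["step"], state, observation, error))
        return error
```

Learning here is gated by surprise: a repeated, well-predicted observation leaves the world model untouched, while an unexpected one rewrites it and is logged to the agent's persistent history.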
​
Belief Explorer: A Multi-Perspective AI Framework for Mitigating Disinformation
​
Collaborating Institutions:
CLEMSON UNIVERSITY
METACOGNITION INSTITUTE
​
In today's digital landscape, the challenge of discerning credible information from sophisticated disinformation has reached unprecedented levels. Traditional fact-checking methods, with their binary true/false classifications, fail to address the complexity and context-dependent nature of modern claims. Our research introduces Belief Explorer, an advanced AI framework designed to empower critical evaluation through multiple analytical perspectives.
​
Belief Explorer combines persistent contextual memory with Socratic dialogue techniques and a three-lens analytical pipeline. As users interact with the system, their inputs are systematically segmented and stored for comprehensive analysis. Each claim is then evaluated by three distinct arbiters—Empirical, Logical, and Pragmatic—each leveraging specialized LLM architectures to produce nuanced assessments. These evaluations are synthesized into four key metrics: the Verifact Score (evidence strength), Model Diversity Quotient (inter-arbiter agreement), Contextual Sensitivity Index (scenario appropriateness), and Reflective Index (exposed assumptions).
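To make the aggregation step concrete, the sketch below assumes each arbiter emits a score in [0, 1] and uses stand-in formulas: the actual Verifact Score and Model Diversity Quotient computations are the project's own and are not reproduced here, and the Contextual Sensitivity and Reflective indices are omitted because they require additional context inputs:

```python
from statistics import mean, pstdev

def evaluate_claim(empirical, logical, pragmatic):
    """Illustrative aggregation of the three arbiter scores (each in [0, 1]).

    The formulas are placeholders, not the framework's real metrics:
    - Verifact Score   ~ mean evidence strength across arbiters
    - Diversity (MDQ)  ~ inter-arbiter agreement (1 minus score spread)
    """
    scores = [empirical, logical, pragmatic]
    verifact = mean(scores)            # overall evidence strength
    mdq = 1.0 - pstdev(scores)         # agreement: identical scores -> 1.0
    return {"verifact": round(verifact, 3), "mdq": round(mdq, 3)}
```

With this stand-in, three arbiters in perfect agreement yield an MDQ of 1.0, while strongly divergent verdicts lower it, signaling that the claim's assessment is contested across analytical lenses.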
​
The framework's Perspective Generator crafts alternative viewpoints, promoting epistemic humility and moving users beyond seeking definitive "truth" toward evaluating the pragmatic utility and coherence of information models. This capability represents a vital skill in an era of amplified, often weaponized narratives, fostering resilience against manipulation through deeper understanding and critical thinking.
​
Neuroimaging and Machine Learning for Enhancing Functional Recovery
​
Collaborating Institutions:
CENTRE HOSPITALIER UNIVERSITAIRE DE REIMS (CHU de Reims)
CLEMSON UNIVERSITY
METACOGNITION INSTITUTE
This project is a collaboration among the institutions listed above to investigate the potential of advanced neuroimaging techniques and machine learning algorithms for understanding and enhancing recovery in individuals with functional disabilities.
Specifically, the project analyzes brain signals and biomechanics. By combining the medical expertise and resources of CHU de Reims with the computational and analytical capabilities of Clemson University and the biometric and biofeedback research of the Metacognition Institute, we aim to develop innovative interventions that improve the lives of those facing neurological challenges.
​
Understanding the Limits of LLMs through an In-Depth Analysis of Human Language
​
Collaborating Institutions:
CLEMSON UNIVERSITY
METACOGNITION INSTITUTE
​
We are designing a set of tests to probe the logical and reasoning limits of current LLMs. By carefully analyzing the roots of language and its grounding, we aim to provide a framework for understanding the role language plays in human cognition and experience.
​
Our approach involves dissecting the foundational aspects of language, including syntax, semantics, and pragmatics, to pinpoint where LLMs excel and where they fall short. By creating a comprehensive suite of benchmarks that reflect the complexity and nuances of human communication, we can better assess the capabilities and limitations of these models. This analysis will not only highlight the gaps in current LLMs but also guide future research and development towards more robust and human-like language processing systems.
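A benchmark suite along these lines can be sketched as a tiny harness that scores a model separately on each linguistic level. The items, prompts, and expected answers below are invented placeholders, not the project's actual test set:

```python
# Hypothetical micro-benchmark; categories mirror the three levels named above.
BENCHMARK = [
    {"level": "syntax",
     "prompt": "Is this sentence grammatical: 'The cat sleep.'? (yes/no)",
     "answer": "no"},
    {"level": "semantics",
     "prompt": "If all birds fly and Tweety is a bird, does Tweety fly? (yes/no)",
     "answer": "yes"},
    {"level": "pragmatics",
     "prompt": "'Can you pass the salt?' Is this a request? (yes/no)",
     "answer": "yes"},
]

def score(model, benchmark=BENCHMARK):
    """Return per-level accuracy for a callable `model(prompt) -> str`."""
    results = {}
    for item in benchmark:
        correct = model(item["prompt"]).strip().lower() == item["answer"]
        hits, total = results.get(item["level"], (0, 0))
        results[item["level"]] = (hits + int(correct), total + 1)
    return {level: hits / total for level, (hits, total) in results.items()}
```

Reporting accuracy per level, rather than as a single aggregate, is what lets the analysis pinpoint where a model excels (often syntax) and where it falls short (often pragmatics).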
​
Furthermore, we will explore how language is intertwined with human cognition and social interaction. Understanding the contextual and cultural elements that influence language use can reveal how LLMs interpret and generate text. This insight is crucial for developing LLMs that can navigate the intricacies of human language with greater accuracy and empathy. Our ultimate goal is to bridge the gap between artificial and human intelligence, fostering more meaningful and effective communication between humans and machines.