John's Research

My research focuses on issues in the philosophy of cognitive science and the ethics of AI. In particular, I am interested in philosophical and scientific problems surrounding self-knowledge and metacognition, and in the possibility of ethical technology. I have published on ethical AI, metacognition, embodiment, the phenomenology of reasoning, and the foundations of epistemic agency, including some vulnerabilities that agency faces in the digital age.

Talk: Why AI Cannot Be Trustworthy (Yet)

Talk given at Dresden University (September 2023)

Abstract: Many current policies and ethical guidelines recommend developing “trustworthy AI”. Here we argue that developing trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the problem of vulnerability arising in high-stakes hybrid decision-making environments, without also demanding, as trust would, the anthropomorphization of artificial assistance and thus epistemically dubious behavior. The normative demands of reliability for interagential action, we argue, are met by an analogue of procedural metacognitive competence (i.e., the ability to evaluate the quality of one’s own informational states in order to regulate subsequent cognitive action). Drawing on recent empirical findings suggesting that providing precision scores (such as the F1-score) to human decision-makers improves calibration to the AI system, we argue that precision scores provide a good index of competence and enable humans to determine how much they wish to rely on the system.
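For readers unfamiliar with the metric the abstract mentions, a minimal sketch of how an F1-score is computed from a classifier's confusion-matrix counts; the counts below are hypothetical, purely for illustration, and are not drawn from the empirical studies the talk cites:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall, from raw counts.

    tp: true positives, fp: false positives, fn: false negatives
    (hypothetical counts for some AI classifier).
    """
    precision = tp / (tp + fp)   # of the system's positive calls, how many were right
    recall = tp / (tp + fn)      # of the actual positives, how many the system found
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 80 true positives, 10 false positives, 20 false negatives
print(round(f1_score(tp=80, fp=10, fn=20), 3))  # → 0.842
```

A single score of this kind is what the talk proposes showing to human decision-makers: an index of the system's competence that supports calibrated reliance without requiring trust.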


Presentations (peer-reviewed)