Part of my mechanistic interpretability and neurosymbolic AI research, this capstone explores the connection between semantic proto-role theory and Tensor Product Representations (TPRs) in cognitive architectures, proposing that deep learning models such as the TP-Transformer can learn interpretable role embeddings. The hypothesis was tested by training a TP-Transformer and evaluating its learned role embeddings against a labeled semantic proto-role dataset.
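The core TPR idea the project builds on is binding filler (content) vectors to role vectors via outer products and summing them into a single tensor; with orthonormal roles, a filler can be recovered exactly by unbinding. A minimal NumPy sketch of this mechanism (dimensions and the orthonormalization via QR are illustrative choices, not the TP-Transformer's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 4 filler/role pairs in 8-dimensional spaces
d_f, d_r, n = 8, 8, 4

fillers = rng.normal(size=(n, d_f))
# QR gives n orthonormal role vectors, which makes unbinding exact
roles = np.linalg.qr(rng.normal(size=(d_r, n)))[0].T

# Bind: T = sum_i  filler_i (outer) role_i
T = sum(np.outer(f, r) for f, r in zip(fillers, roles))

# Unbind: T @ role_i recovers filler_i because the roles are orthonormal
recovered = T @ roles[0]
assert np.allclose(recovered, fillers[0])
```

In a TP-Transformer the filler and role vectors are produced by learned projections rather than sampled, but the bind/unbind algebra is the same, which is what makes probing the role embeddings meaningful.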
A scholarly article published in Isonomia Quarterly exploring artificial superintelligence (ASI) control strategies through the lens of republican constitutionalism. Co-authored with Daniel Deudney, this paper examines whether historical approaches to power restraint can inform modern AI governance challenges.
A React Native (Expo) mobile app prototype built to deliver ventilator training content during COVID-19. The project repurposes a modular UI kit to organize educational modules behind navigation, search, and content screens, enabling quick iteration and distribution. Though now archived, it demonstrates rapid prototyping of mobile medical training materials with modern JavaScript tooling.
An experimental architecture exploring whether replacing fully connected layers with locally connected layers in RNNs improves syntactic generalization. Building on Marvin & Linzen's (2018) targeted syntactic evaluation framework, the project implements and evaluates locally connected RNN (LCRNN) variants on 27 distinct syntactic phenomena, including subject-verb agreement, relative clauses, and negative polarity items (NPIs). Trained on English Wikipedia (~100M tokens) and evaluated on controlled syntactic generalization tasks, the locally connected architecture achieved ~3% higher accuracy than fully connected baselines, suggesting that spatial locality constraints can enhance a language model's acquisition of grammatical knowledge.
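A locally connected layer has the same shape as a fully connected one but zeroes out all weights outside a local window, so each hidden unit only receives recurrent input from its neighbors. A minimal NumPy sketch of an Elman-style cell with banded hidden-to-hidden weights (the dimensions, window size, and masking approach are illustrative, not the project's exact code):

```python
import numpy as np

def local_mask(dim, window):
    """Banded 0/1 mask: unit i connects only to units within `window` steps."""
    idx = np.arange(dim)
    return (np.abs(idx[:, None] - idx[None, :]) <= window).astype(float)

class LocallyConnectedRNNCell:
    """Elman-style cell whose hidden-to-hidden weights are banded
    (locally connected) rather than fully connected."""
    def __init__(self, d_in, d_h, window, rng):
        self.W_xh = rng.normal(scale=0.1, size=(d_h, d_in))
        self.mask = local_mask(d_h, window)
        self.W_hh = rng.normal(scale=0.1, size=(d_h, d_h)) * self.mask
        self.b = np.zeros(d_h)

    def step(self, x, h):
        # Re-apply the mask so only local recurrent connections contribute
        return np.tanh(self.W_xh @ x + (self.W_hh * self.mask) @ h + self.b)

rng = np.random.default_rng(0)
cell = LocallyConnectedRNNCell(d_in=16, d_h=32, window=2, rng=rng)
h = np.zeros(32)
for x in rng.normal(size=(5, 16)):  # toy 5-step input sequence
    h = cell.step(x, h)
```

During training the mask would also be applied to the gradient (or the masked weights simply re-masked after each update) so the zeroed connections stay zero; that is the constraint the fully connected baseline lacks.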