Projects & Publications

Neurosymbolic AI: Emergent Semantic Proto-role Structure in Tensor Product Representations

As part of my mechanistic interpretability and neurosymbolic AI research, this capstone explores the connection between semantic proto-role theory and Tensor Product Representations (TPRs) in cognitive architectures, proposing that deep learning models such as the TP-Transformer can learn interpretable role embeddings. I tested this hypothesis by training a TP-Transformer and evaluating its role embeddings against a labeled dataset of semantic proto-roles.
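The binding mechanism at the heart of TPRs can be sketched in a few lines of NumPy: each filler (word) vector is bound to a role vector via an outer product, the bindings are superposed into one tensor, and an orthonormal role basis makes unbinding exact. The dimensions and variable names below are illustrative, not taken from the capstone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 roles (e.g. agent, patient, predicate) and
# 8-dimensional filler embeddings. Roles are orthonormal so that
# unbinding recovers fillers exactly.
n_roles, d_filler = 3, 8
fillers = rng.normal(size=(n_roles, d_filler))
roles = np.linalg.qr(rng.normal(size=(n_roles, n_roles)))[0]  # orthogonal matrix

# Bind each filler to its role with an outer product; superpose the bindings.
T = sum(np.outer(f, r) for f, r in zip(fillers, roles))

# Unbind: contracting the tensor with a role vector recovers its filler.
recovered = T @ roles[1]
print(np.allclose(recovered, fillers[1]))  # True
```

Superposition is what makes this interesting: a single tensor `T` holds the whole role-filler structure, yet each filler remains individually recoverable as long as the roles stay (near-)orthogonal.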

Bounding Power: The ASI Control Problem, Public Safety, and Republican Constitutionalism

A scholarly article published in Isonomia Quarterly exploring artificial superintelligence (ASI) control strategies through the lens of republican constitutionalism. Co-authored with Daniel Deudney, this paper examines whether historical approaches to power restraint can inform modern AI governance challenges.

COVID-19 Ventilator Training (React Native)

A React Native (Expo) mobile app prototype built to deliver ventilator training content during the COVID-19 pandemic. The project repurposes a modular UI kit to organize educational modules with navigation, search, and content screens, enabling quick iteration and distribution. Though now archived, it demonstrates rapid prototyping of mobile medical training materials with modern JavaScript tooling.

Locally Connected Recurrent Neural Networks

An experimental architecture exploring whether replacing fully connected recurrent layers with locally connected layers improves RNNs' syntactic generalization. Building on Marvin & Linzen's (2018) targeted syntactic evaluation framework, this project implements and evaluates LCRNN variants on 27 distinct syntactic phenomena, including subject-verb agreement, relative clauses, and negative polarity items (NPIs). Trained on English Wikipedia (~100M tokens) and evaluated on controlled syntactic generalization tasks, the locally connected architecture achieved ~3% higher accuracy than fully connected baselines, suggesting that spatial locality constraints can aid language models' acquisition of grammatical knowledge.
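The core idea can be sketched as a masked recurrent update, where the hidden-to-hidden weight matrix is zeroed outside a local band so each hidden unit only reads its neighbors. This is an illustrative NumPy sketch; the function name, band-mask formulation, and dimensions are assumptions, not the project's actual implementation:

```python
import numpy as np

def locally_connected_rnn_step(x, h, W_xh, W_hh, window):
    """One Elman-style RNN step whose hidden-to-hidden matrix is masked
    to a local band: unit i only reads units j with |i - j| <= window."""
    n = h.shape[0]
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) <= window  # banded connectivity
    return np.tanh(W_xh @ x + (W_hh * mask) @ h)

rng = np.random.default_rng(0)
n_in, n_hid = 16, 32
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))

h = np.zeros(n_hid)
h = locally_connected_rnn_step(rng.normal(size=n_in), h, W_xh, W_hh, window=2)
```

Compared with a fully connected recurrence, the band mask cuts the number of effective recurrent parameters from O(n²) to O(n·window), which is one way to frame the locality constraint the project investigates.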

Speech-to-Text

These projects build both classical and neural speech recognition models with an object-oriented design in PyTorch and NumPy. Dynamic programming enables efficient computation of the forward and backward passes in the HMM models; the neural variants use LSTM-based architectures.
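The dynamic-programming idea behind the HMM models is the forward algorithm, which sums over all hidden-state paths in O(T·N²) time instead of enumerating the N^T paths directly. A minimal NumPy sketch with a hypothetical two-state toy model (the actual projects' class design and parameters differ):

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Forward algorithm: p(obs) for an HMM via dynamic programming.
    pi: (N,) initial state probabilities
    A:  (N, N) transitions, A[i, j] = p(state j | state i)
    B:  (N, M) emissions, B[i, k] = p(symbol k | state i)
    """
    alpha = pi * B[:, obs[0]]              # base case: t = 0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # recurrence: propagate and emit
    return alpha.sum()                     # total probability of the sequence

# Toy 2-state, 2-symbol model (numbers chosen purely for illustration)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p = hmm_forward(pi, A, B, [0, 1, 0])
```

In practice the recursion is usually run in log space to avoid underflow on long utterances; the plain-probability version above keeps the dynamic-programming structure easy to see.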