About

Welcome!

I am a researcher with a PhD in Computational Linguistics, and my research focuses on the intersection of engineering and society. My work bridges the academic and applied sides of language technology, drawing on both technical industry experience and classroom instruction that spans engineering concepts and the social sciences.

As a prime example: current advances in Large Language Models (LLMs) have led to new consumer-facing technologies that suffer from a critical problem at this intersection. Language models' lack of factual reliability is a technical challenge, and one that interfaces with the social challenge of the psychological biases introduced by their authoritative language[1]. I sought to address topics like this in my recent course at the University of Washington on tuning and constraining LLMs.

Alongside teaching, I’ve worked in industry roles focused on building natural language processing and LLM solutions for large-scale, often high-stakes applications. At KPMG, I led machine learning and generative AI initiatives that involved everything from building document understanding pipelines and semantic search tools to helping clients deploy custom-tuned open-weight models. I enjoy the technical depth of model development, but also the broader design work—thinking through how these systems should be evaluated, maintained, and integrated into workflows.

Across both domains, I’ve remained deeply interested in the ethical and social dimensions of AI. Much of my work includes helping teams navigate issues of model risk, fairness, and alignment—building systems that are not only powerful, but also transparent and responsible. I believe that thoughtful evaluation, stakeholder input, and strong model governance are as essential to LLM development as the underlying architecture. As these models become more central to how we interact with information and each other, I see it as critical that we build with both technical and human concerns in mind.