About Me

I work in the plasma group at KLA where I develop machine learning systems to examine semiconductor wafers.

I am broadly interested in building robust, performant, and interpretable models. To paraphrase Belinda Li, I work with two main kinds of models:

World models: models of an external environment that support coherent downstream prediction.

Self models: models of an intelligent system's own internal computations, behaviors, and limitations.

In my spare time, I read, write, and exercise.

I’m a fan of Annie Ernaux, Georg Trakl, E. E. Cummings, Jennifer Egan, and Mahmoud Darwish.

Education

B.S.E. in Computer Science

University of Michigan, College of Engineering

Sept 2021 - May 2025

High School

Bronx High School of Science

Sept 2017 - May 2021

Interests

Machine Learning
Mechanistic Interpretability
Open Source Community

Awards & Scholarships

USA Computing Olympiad Silver
December 2020
AIME Qualification
February 2020
AMC 10 8th Place in School
February 2019

Selected Research

LoRAMBo: Fighting LoRA Memory Bottlenecks with Optimized Rank Selection

Liam Cawley

Dec 2024 | ICLR Workshop 2025

A theoretical framework unifying classical matrix approximation with curvature-aware rank allocation for LoRA. We derive offline and online algorithms with near-optimality guarantees for distributing ...

MetaRepICL: In-Context Learning as Kernel Regression on Learned Representations

Liam Cawley

Jan 2025

Investigating whether transformer in-context learning implements kernel ridge regression on learned hidden representations. We construct explicit linear-attention-to-conjugate-gradient mappings and st...

StableGLM: Rashomon Sets for Generalized Linear Models

Liam Cawley

Jan 2025

A toolkit for computing ε-Rashomon sets, membership certificates, and set-level interpretability metrics for GLMs. Addresses the question: when many models fit the data equally well, which explanation...