Liam Cawley
ICLR Workshop 2025|2024
A theoretical framework unifying classical matrix approximation with curvature-aware rank allocation for LoRA. We derive offline and online algorithms with near-optimality guarantees for distributing low-rank capacity across layers.
Liam Cawley
2025
Investigating whether transformer in-context learning implements kernel ridge regression on learned hidden representations. We construct explicit linear-attention-to-conjugate-gradient mappings and study the softmax extension.
Liam Cawley
2025
A toolkit for computing ε-Rashomon sets, membership certificates, and set-level interpretability metrics for GLMs. Addresses the question: when many models fit the data equally well, which explanations are stable?
Liam Cawley, Gabe Ronan
EMAG Technologies, Inc.|2023
Calibration of Anokiwave phased array beamformers by searching over phase-shift and attenuator configurations. We compare metaheuristic search (PSO, GA, simulated annealing), gradient-boosted regressors, reinforcement learning, and a CNN-based predictor.
Liam Cawley, Alexandra Lavacek, Sophia Tesic
Course project, University of Michigan|2024
Adapting denoising diffusion concepts to single-image super resolution. We progressively build from a naive upsampler to a residual architecture with channel attention and perceptual loss, achieving 34.0 dB PSNR on DIV2K at 2x upscaling.
Liam Cawley
Course project, University of Michigan (Qing Qu)|2023
An empirical comparison of stochastic regularization methods --- Shake-Shake, Mixup, and Cutout --- in residual networks on CIFAR-10, with analysis of when and why each technique helps.
Liam Cawley
2021
Design and flight testing of an autonomous fixed-wing UAV built on the MFD Crosswind Mini platform, targeting GPS-guided waypoint missions for agricultural survey.
Liam Cawley
2025
A minimal distributed training sandbox for nanoGPT. Experiments with MIG partitioning, NCCL collectives, Kubernetes orchestration, and mixed-precision training on small-scale hardware.
Hugh van Deventer, Liam Cawley
2024
A benchmark for evaluating steering methods that operate through a language model's unembedding matrix. Provides standardized comparisons across extraction methods, models, and tasks.
Liam Cawley
2025
A benchmark for conversation history poisoning attacks on language models. We evaluate false conversation injection, gaslighting, and iterative context poisoning, measuring coherence collapse, safety bypass rates, and attention failure modes.