Work
Computer science research-y
(2019-2020) I completed my undergraduate thesis with Prof. Ellie Pavlick, titled “Data augmentation and the role of hardness for feature learning in NLP.” We studied, empirically with synthetic data, the tipping point at which models switch from (1) learning simple rules and memorizing exceptions to those rules to (2) learning more complex, generalizable rules. I received an Undergraduate Thesis and Research Award (UTRA) from Brown to work on this project. We presented an extended poster (here) at the Natural Language, Dialog and Speech (NDS) Symposium at the New York Academy of Sciences, and we submitted this work to EMNLP 2020. We are now working on another submission extending this work to natural language and state-of-the-art NLP models.
(2018-2019) As part of a graduate algorithms seminar with Prof. Paul Valiant, I presented the following papers with classmates: “Probabilistic encryption” by Goldwasser and Micali, “Undirected connectivity in log-space” by Reingold (slides here), “Learnability and the Vapnik-Chervonenkis Dimension” by Blumer et al., and “Neural Tangent Kernel” by Jacot, Gabriel, and Hongler. Miranda Christ and I also wrote summaries of “Hardness vs. randomness” by Nisan and Wigderson (here), Dinur's proof of the PCP theorem (here), and “A Mathematical Theory of Communication” by Shannon (here), and we previously presented “PRIMES is in P” by Agrawal, Kayal, and Saxena.
Miscellaneous