[A photo of me!]

Hi, I’m Hamish! I’m (currently) a PhD student at the University of Washington in the H2Lab, advised by Hannaneh Hajishirzi. I’m generally interested in NLP research, particularly in making language models easier to use and more open, exploring alternative architectures, and linking model abilities and data.

I’m from Sydney, and did my undergraduate degree at the University of Sydney: a Bachelor of Arts and IT, triple-majoring in Linguistics, Classical Greek, and Computer Science. I also did some NLP research with the USydNLP group, examining multi-hop question answering. During my undergrad (and just after), I spent some time at the Commonwealth Bank of Australia, on start-up-y stuff, and at Optiver. Before my PhD, I was a predoctoral researcher at AI2 on the AllenNLP team.

If you have questions about my work, general academia/software/research-related stuff, or want to chat, feel free to reach out at hamishiv [at] cs [dot] washington [dot] edu. I’m generally down to chat about whatever!


Papers

See below for papers I’ve worked on (* denotes equal contribution). You can also check out my Semantic Scholar and Google Scholar profiles.

    Dirk Groeneveld, Iz Beltagy, ..., Hamish Ivison, ..., Noah A. Smith, and Hannaneh Hajishirzi. 2024. OLMo: Accelerating the Science of Language Models. arXiv preprint.
    Hamish Ivison*, Yizhong Wang*, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. 2023. Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2. arXiv preprint.
    Yasaman Razeghi*, Hamish Ivison*, Sameer Singh, and Yanai Elazar. 2023. Backtracking Mathematical Reasoning of Language Models to the Pretraining Data. In NeurIPS Workshop on Attributing Model Behavior at Scale.
    Yizhong Wang*, Hamish Ivison*, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. 2023. How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources. In NeurIPS Datasets and Benchmarks Track.
    Rabeeh Karimi Mahabadi*, Hamish Ivison*, Jaesung Tae, James Henderson, Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2024. TESS: Text-to-Text Self-Conditioned Simplex Diffusion. In EACL.
    Hamish Ivison, Akshita Bhagia, Yizhong Wang, Hannaneh Hajishirzi, and Matthew Peters. 2023. HINT: Hypernetwork Instruction Tuning for Efficient Zero-Shot Generalisation. In ACL.
    Hamish Ivison, Noah A. Smith, Hannaneh Hajishirzi, and Pradeep Dasigi. 2023. Data-Efficient Finetuning Using Cross-Task Nearest Neighbors. In Findings of ACL.
    Hamish Ivison and Matthew E. Peters. 2022. Hyperdecoders: Instance-specific decoders for multi-task NLP. In Findings of EMNLP.
    Siwen Luo*, Hamish Ivison*, Soyeon Caren Han, and Josiah Poon. 2021. Local Interpretations for Explainable Natural Language Processing: A Survey. ACM Computing Surveys.
    Hamish Ivison. 2020. Would you like fries with that? Modular Multi-hop Reasoning. Honours Thesis, University of Sydney.