Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
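A minimal sketch of that setting in a Jekyll site's _config.yml (the surrounding keys are assumptions; only the future flag is what the note above refers to):

```yaml
# _config.yml — site-wide Jekyll configuration
# When false, posts dated in the future are excluded from the build
# until their publication date passes.
future: false
```

With future: false, running jekyll build skips any post whose front-matter date is later than the build time.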

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Portfolio

Publications

Hospital Event Reports

Published in N/A, 2022

Fine-tuned embeddings and BERT-style models for tasks such as text clustering, sentiment analysis, and named entity recognition on hospital reports.

Efficient Transformer Knowledge Distillation: A Performance Review

Published in Empirical Methods in Natural Language Processing (EMNLP), 2023

This paper discusses the distillation of long-context, efficient attention BERT-based models to yield models that are smaller, faster, and cheaper to deploy.

Recommended citation: Brown, Nathan and Williamson, Ashton and Anderson, Tahj and Lawrence, Logan. (2023). "Efficient Transformer Knowledge Distillation: A Performance Review." Empirical Methods in Natural Language Processing. https://arxiv.org/pdf/2311.13657.pdf

Pula: Training Large Language Models for Setswana

Published in NAACL 2025, 2025

Developed in partnership with the DSFSI group at the University of Pretoria, this work introduces Pula, the first suite of LLMs built for Setswana; Marothodi, the largest Setswana pre-training corpus; and Medupi, the first extensive Setswana instruction-tuning dataset.

Recommended citation: Brown, Nathan and Marivate, Vukosi. (2025). "Pula: Training Large Language Models for Setswana." NAACL 2025. https://aclanthology.org/2025.naacl-long.338/

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.