NLP Project: Beyond Words: Enhancing Reasoning in Entity Tracking

In this paper, we study how fine-tuning on reasoning-oriented data affects entity tracking. Using a T5-base model, we evaluate fine-tuning in two areas: mathematical reasoning, using math question-answer pairs, and computational reasoning, using code-related pairs. Performance is measured across datasets covering general knowledge, code, and math, as well as their combinations. The results show that models fine-tuned exclusively on coding data exhibit stronger entity tracking than those trained on general-knowledge data, whereas models fine-tuned on mathematical reasoning struggle with out-of-vocabulary symbols. [Download paper here](/files/6_8610_Project_Final_Paper.pdf) [Github code here](https://github.com/Kimberly97llp/NLP_research)