Hi, I'm Sasha! I'm an undergraduate student at McMaster University majoring in computer science with a minor in statistics, and currently applying for MSc and PhD programs. I've been involved with research for over two years now, starting in the MTHI Group with Swati Mishra, briefly in the METRE Lab with Jonathan Cannon, and now at the Vector Institute with Kelsey Allen. Outside of research, I love reading philosophy and literature, camping and hiking, and photography :)
I am interested in understanding the idiosyncrasies of biological intelligence, and in using machine intelligence as a tool for making these peculiarities clearer.
How do humans learn and generalize so quickly from such small amounts of data, such as recognizing a giraffe after seeing just one photo?
Can sufficiently intelligent machines develop and participate in complex social constructs, such as countries or ant colonies?
Can all forms of intelligence create their own culture? Is all culture as meaningful as ours?
Within this huge problem space, I'm excited to explore cognition in humans and machines through empirical studies—particularly those involving human-AI interaction—and to use computational models to draw inferences about different social phenomena. I'm especially interested in using fun, intuitive environments (e.g., puzzle-solving games) as microcosms for studying cognition within controllable lab settings. I'm similarly passionate about applying this work: building tools that support—rather than automate—human thinking, and aligning human values with the future of machine intelligence. I'm also interested in combining computational approaches in neuroscience with cognitive science to build a more holistic appreciation of human intelligence.
Right now, I am continuing to lead research development with my amazing group—Kelsey Allen (UBC), Katie Collins (MIT), Kerem Oktar (Meta FAIR), and Ilia Sucholutsky (NYU)—by examining game archetypes that involve different types of reasoning to build on our existing work.
Under the Influence: Quantifying Persuasion and Vigilance in Large Language Models
Sasha Robinson, Kerem Oktar, Katherine M. Collins, Ilia Sucholutsky, Kelsey R. Allen
We quantify how large language models (LLMs) exhibit vigilant and persuasive behaviors in paired puzzle-solving tasks with mixed-motive advisors, using Sokoban, a spatial reasoning game. We find that models exhibit vastly different social reasoning capabilities (e.g., GPT-5 and Grok 4 have similar puzzle-solving skills, yet diametrically opposed vigilance skills), and that, generally, unassisted performance, vigilance, and persuasion are dissociable cognitive facets.
StoryBlocks: Towards AI-Assisted Narrative Design for Data-Driven Storytelling
Yaning (Jason) Xu, Sasha Robinson, Pranav Kalsi, Swati Mishra
Combining methods from HCI, cognitive science, and narrative theory, we build StoryBlocks, an interface for supporting data-driven storytelling. Our system decomposes data-driven narratives into fundamental components (represented as blocks) and uses several mechanisms to decrease cognitive load, such as a multi-agentic architecture for data exploration and a non-linear environment for narrativization. Through user studies, we find that StoryBlocks supports hierarchical planning, reduces bias, and accelerates story creation, while retaining creative control in the hands of writers.