I am a PhD candidate at the University of Michigan School of Information, advised by Eric Gilbert and Ceren Budak. I am broadly interested in human-AI interaction at the collective level, with a focus on alignment challenges that emerge at population scale. Concretely, my work falls into two research directions: one about building new systems and one about evaluating current systems.
System building → future AI systems. I build scalable multi-agent systems that function as engines for social science interventions, with a particular interest in interventions that (A) improve democracy and/or (B) help users by simulating perspectives. My Plurals system (CHI 2025 honorable mention) guides LLMs via simulated social ensembles and now powers follow-up RCTs. I am evaluating a second system, "The As-If Machine", for increasing action toward long-term risks.
Impact surfacing → current AI systems. I also design experiments to surface "non-obvious" AI impacts: impacts at the collective (rather than individual) and long-run (rather than short-term) level. My creativity paper (Collective Intelligence honorable mention) used a dynamic "many-worlds" design where ideas from participants in one condition fed forward to future participants in the same condition, revealing how AI changes the evolution (and not just the level) of human creativity. I also design experiments for AI systems to measure alignment-relevant capabilities at collective scales. For example, my deep value benchmark (NeurIPS 2025 spotlight) used an experimental design that disentangled whether models generalize deep values or shallow preferences.
These two directions have tight connections with:
(Pluralistic) alignment: If a system can align to diverse viewpoints, it can surface those viewpoints as helpful aids.
Collective intelligence: The motivation for both directions is strongly rooted in CI.
Computational social science: I draw on social science theories for Direction 1 and I draw on social science methods for Direction 2.
Joshua Ashkinaze, Hua Shen, Sai Avula, Eric Gilbert, Ceren Budak
We introduce the Deep Value Benchmark (DVB), an evaluation framework that directly tests whether large language models (LLMs) learn fundamental human values or merely surface-level preferences. This distinction is critical for AI alignment: systems that capture deeper values are likely to generalize human intentions robustly, while those that capture only superficial patterns risk misaligned behavior. The DVB uses a novel experimental design with controlled confounding between deep values (e.g., moral principles) and shallow features (e.g., superficial attributes like formality). In the training phase, we expose LLMs to preference data with deliberately correlated deep and shallow features—for instance, where a user consistently prefers (non-maleficence, formal language) over (justice, informal language). The testing phase breaks these correlations, presenting choices between (justice, formal language) and (non-maleficence, informal language). This allows us to measure a model's Deep Value Generalization Rate (DVGR)—the probability of generalizing based on underlying values rather than shallow features. Across 9 models, the average DVGR is just 0.30, meaning all models generalize deep values less than chance. Counterintuitively, larger models exhibit slightly lower DVGR than smaller models. The dataset underwent three separate human validation experiments to ensure reliability. DVB provides an interpretable measure of a core feature of alignment, revealing that current models prioritize shallow preferences over deep values.
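To make the DVGR concrete, here is a toy sketch of how such a rate could be computed from test-phase choices. This is not the benchmark's actual code; the item structure and field names are hypothetical.

```python
# Toy sketch of the Deep Value Generalization Rate (DVGR).
# Item structure and field names are illustrative, not the DVB's actual schema.

def dvgr(test_items, model_choices):
    """Fraction of test-phase choices consistent with the deep value the model
    was exposed to in training (vs. the previously correlated shallow feature)."""
    deep_consistent = 0
    for item, choice in zip(test_items, model_choices):
        # Each test item pits the preferred deep value (now paired with the
        # dispreferred shallow feature) against the reverse pairing.
        if choice == item["deep_value_option"]:
            deep_consistent += 1
    return deep_consistent / len(test_items)

# Example: a model that mostly follows the shallow feature (e.g., formality)
test_items = [{"deep_value_option": "A"}, {"deep_value_option": "B"},
              {"deep_value_option": "A"}]
model_choices = ["B", "B", "B"]          # tracks formality, not the deep value
print(dvgr(test_items, model_choices))   # 0.33 -> below-chance generalization
```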
Joshua Ashkinaze, Emily Fry, Narendra Edara, Eric Gilbert, Ceren Budak
Check out the GitHub library!
Recent debates have raised concerns that language models may favor certain viewpoints. But what if the solution is not to aim for a "view from nowhere" but rather to leverage different viewpoints? We introduce Plurals, a system and Python library for pluralistic AI deliberation. Plurals consists of Agents (LLMs, optionally with personas) that deliberate within customizable Structures, with Moderators overseeing deliberation. Plurals is a generator of simulated social ensembles. It integrates with government datasets to create nationally representative personas, includes deliberation templates inspired by deliberative democracy, and allows users to customize both information-sharing structures and deliberation behavior within Structures. Six case studies demonstrate fidelity to theoretical constructs and efficacy. Three randomized experiments show that simulated focus groups produced output resonant with an online sample of the relevant audiences (chosen over zero-shot generation in 75% of trials). Plurals is both a paradigm and a concrete system for pluralistic AI.
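For readers curious what "Agents deliberating within Structures, overseen by a Moderator" looks like in code, here is a conceptual sketch of that pattern. The class and method names below are illustrative stand-ins, not the library's actual API; see the GitHub documentation for real usage.

```python
# Conceptual sketch of the Agents / Structures / Moderators pattern.
# Names are illustrative only; see the Plurals GitHub docs for the real API.
from dataclasses import dataclass


@dataclass
class Agent:
    persona: str  # e.g., a nationally representative persona

    def respond(self, task: str, history: list[str]) -> str:
        # In Plurals, an LLM call happens here, conditioned on persona + history.
        return f"[{self.persona}] view on: {task}"


@dataclass
class Moderator:
    def summarize(self, responses: list[str]) -> str:
        # In Plurals, a moderator LLM aggregates/summarizes the deliberation.
        return f"Summary of {len(responses)} responses"


@dataclass
class Chain:
    """A Structure: agents deliberate in sequence, each seeing prior responses."""
    agents: list[Agent]
    moderator: Moderator

    def process(self, task: str) -> str:
        history: list[str] = []
        for agent in self.agents:
            history.append(agent.respond(task, history))
        return self.moderator.summarize(history)


ensemble = Chain(
    agents=[Agent("rural teacher"), Agent("urban nurse"), Agent("retired veteran")],
    moderator=Moderator(),
)
print(ensemble.process("How should the city spend a new transit budget?"))
```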
Joshua Ashkinaze, Julia Mendelsohn, Qiwei Li, Ceren Budak, Eric Gilbert
Exposure to large language model output is rapidly increasing. How will seeing AI-generated ideas affect human ideas? We conducted an experiment (800+ participants, 40+ countries) where participants viewed creative ideas from ChatGPT or from prior experimental participants and then brainstormed their own idea. We varied the number of AI-generated examples (none, low, or high exposure) and whether the examples were labeled as 'AI' (disclosure). Our dynamic experiment design, in which ideas from prior participants in an experimental condition are used as stimuli for future participants in the same condition, speaks to the interdependent process of cultural creation: creative ideas are built upon prior ideas. Hence, we capture the compounding effects of having LLMs 'in the culture loop'. We find that high AI exposure (but not low AI exposure) did not affect the creativity of individual ideas but did increase the average amount and rate of change of collective idea diversity. AI made ideas different, not better. There were no main effects of disclosure. We also found that self-reported creative people were less influenced by knowing an idea was from AI and that participants may knowingly adopt AI ideas when the task is difficult. Our findings suggest that introducing AI ideas may increase collective diversity but not individual creativity.
Joshua Ashkinaze, Ruijia Guan, Laura Kurek, Eytan Adar, Ceren Budak, Eric Gilbert
Note: We were very happy that this research had substantial impact with key stakeholders. I was invited to give a talk at a Wikimedia research showcase, and our paper is cited in Wikipedia's strategy for integrating AI into the platform.
Large language models (LLMs) are trained on broad corpora and then used in communities with specialized norms. Is providing LLMs with community rules enough for models to follow these norms? We evaluate LLMs' capacity to detect (Task 1) and correct (Task 2) biased Wikipedia edits according to Wikipedia's Neutral Point of View (NPOV) policy. LLMs struggled with bias detection, achieving only 64% accuracy on a balanced dataset. Models exhibited contrasting biases (some under- and others over-predicted bias), suggesting distinct priors about neutrality. LLMs performed better at generation, removing 79% of words removed by Wikipedia editors. However, LLMs made additional changes beyond Wikipedia editors' simpler neutralizations, resulting in high-recall but low-precision editing. Interestingly, crowdworkers rated AI rewrites as more neutral (70%) and fluent (61%) than Wikipedia-editor rewrites. Qualitative analysis found LLMs sometimes applied NPOV more comprehensively than Wikipedia editors but often made extraneous non-NPOV-related changes (such as grammar). LLMs may apply rules in ways that resonate with the public but diverge from community experts. While potentially effective for generation, LLMs may reduce editor agency and increase moderation workload (e.g., verifying additions). Even when rules are easy to articulate, having LLMs apply them like community members may still be difficult.
Shubham Atreja, Joshua Ashkinaze, Lingyao Li, Julia Mendelsohn, Libby Hemphill
Manually annotating data for computational social science tasks can be costly, time-consuming, and emotionally draining. While recent work suggests that LLMs can perform such annotation tasks in zero-shot settings, little is known about how prompt design impacts LLMs' compliance and accuracy. We conduct a large-scale multi-prompt experiment to test how model selection (GPT-4o, GPT-3.5, PaLM2, and Falcon7b) and prompt design features (definition inclusion, output type, explanation, and prompt length) impact the compliance and accuracy of LLM-generated annotations on four highly relevant and diverse CSS tasks (toxicity, sentiment, rumor stance, and news frames). Our results show that LLM compliance and accuracy are prompt-dependent. For instance, prompting for numerical scores instead of labels reduces all LLMs' compliance and accuracy. Concise prompts can significantly reduce prompting costs but also lead to lower accuracy on tasks like toxicity. Furthermore, minor prompt changes like asking for an explanation can cause large changes in the distribution of LLM-generated labels. By assessing the impact of prompt design on the quality and distribution of LLM-generated annotations, this work serves as both a practical guide and a warning for using LLMs in CSS research.
Joshua Ashkinaze, Eric Gilbert, Ceren Budak
Note: Selected as an oral presentation (top 9% of submissions) at the main conference and also selected for the special Online Trust and Safety day.
Many studies explore how people "come into" misinformation exposure. But much less is known about how people "come out of" misinformation exposure. Do people organically sever ties to misinformation spreaders? And what predicts doing so? Over six months, we tracked the frequency and predictors of ~900K followers unfollowing ~5K health misinformation spreaders on Twitter. We found that misinformation ties are persistent. Monthly unfollowing rates are just 0.52%. In other words, 99.5% of misinformation ties persist each month. Users are also 31% more likely to unfollow non-misinformation spreaders than they are to unfollow misinformation spreaders. Although generally infrequent, the factors most associated with unfollowing misinformation spreaders are (1) redundancy and (2) ideology. First, users initially following many spreaders, or who follow spreaders that tweet often, are most likely to unfollow later. Second, liberals are more likely to unfollow than conservatives. Overall, we observe a strong persistence of misinformation ties. The fact that users rarely unfollow misinformation spreaders suggests a need for external nudges and the importance of preventing exposure from arising in the first place.
Using RL to train a multi-agent system to surface disparate impacts of policies
Built an interactive multi-agent RAG system to reduce psychological distance to long-term risks
Measuring how AI shapes human social knowledge through a combination of prevalence estimates, experiments, and parameterized long-run simulations under different AI futures (e.g., varying alignment, pluralism, market concentration)
Using political datasets to power AI focus groups that reduce polarization on controversial issues
Li, Q., Zhang, S., Kasper, A. T., Ashkinaze, J., Eaton, A. A., Schoenebeck, S., & Gilbert, E.
Goray, C., Li, Q., Ashkinaze, J., Le, V., Gilbert, E., & Schoenebeck, S.
Kurek, L., Ashkinaze, J., Budak, C., & Gilbert, E.
Joshua Ashkinaze, Ceren Budak, Eric Gilbert
Joshua Ashkinaze, Ceren Budak, Eric Gilbert
Joshua Ashkinaze, Julia Mendelsohn, Li Qiwei, Ceren Budak, Eric Gilbert
Joshua Ashkinaze, Eric Gilbert, Ceren Budak
Joshua Ashkinaze, Eric Gilbert, Ceren Budak
Open-source multi-agent library for pluralistic artificial intelligence with an extensive test suite, CI/CD workflows, and auto-deployed documentation. Plurals is an end-to-end generator of "simulated social ensembles": (1) Agents complete tasks within (2) Structures, with communication optionally summarized by (3) Moderators. Integrates with government datasets and includes deliberation templates inspired by democratic deliberation theory.
Created a Python library that implements inverse-covariance weighted indexing, a method for creating indexes of outcomes by up-weighting the outcomes that are least correlated with the others. Validated against the gold-standard implementation, Stata's swindex package.
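A minimal sketch of the inverse-covariance weighting idea, assuming the standard approach of standardizing each outcome and weighting by the row sums of the inverted covariance matrix. This is illustrative, not the library's actual code.

```python
# Minimal sketch of inverse-covariance weighted (ICW) indexing; illustrative only.
import numpy as np

def icw_index(outcomes: np.ndarray) -> np.ndarray:
    """outcomes: (n_obs, k_outcomes) matrix. Returns one index value per observation."""
    # Standardize each outcome column (z-scores).
    z = (outcomes - outcomes.mean(axis=0)) / outcomes.std(axis=0, ddof=1)
    # Weights are the row sums of the inverse covariance matrix of the z-scores:
    # outcomes highly correlated with others carry redundant information and
    # receive smaller weights.
    sigma_inv = np.linalg.inv(np.cov(z, rowvar=False))
    weights = sigma_inv.sum(axis=1)
    # Weighted average of the standardized outcomes.
    return z @ weights / weights.sum()

rng = np.random.default_rng(0)
y = rng.normal(size=(100, 3))
y[:, 2] = y[:, 1] + 0.1 * rng.normal(size=100)  # near-duplicate outcome gets down-weighted
print(icw_index(y)[:5])
```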
A small script that acts as a daily journal manager and can be run entirely from the command line
Using probability theory, object-oriented programming, and color theory to create 20+ mathematical art algorithms and 250+ individual compositions. Displayed artwork at 3 exhibitions; sold 75+ prints.
Built an open-source package that maps color palettes to emotions using supervised learning and NLP. Used web scraping and dominant color parsing to construct a dataset of colors and emotions; implemented a K-Nearest Neighbors classifier.
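A minimal sketch of the general approach, with made-up palettes and labels rather than the package's actual data or code: represent each palette by its dominant colors' RGB values and fit a K-Nearest Neighbors classifier.

```python
# Minimal sketch of the palette -> emotion approach; illustrative only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each palette is represented by 3 dominant colors, flattened to 9 RGB features.
palettes = np.array([
    [255, 200, 40,   250, 150, 30,   240, 90, 20],   # warm palette
    [30, 60, 200,    40, 90, 220,    20, 40, 180],   # cool palette
    [250, 220, 60,   255, 180, 50,   245, 130, 40],
    [25, 70, 210,    35, 100, 230,   15, 50, 190],
])
emotions = ["energetic", "calm", "energetic", "calm"]  # hypothetical labels

knn = KNeighborsClassifier(n_neighbors=1).fit(palettes, emotions)
print(knn.predict([[240, 170, 45, 250, 140, 35, 235, 100, 25]]))  # -> ['energetic']
```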
Awarded $3,000 for a project using Plurals to power RCTs related to polarization
Awarded $2,433 for a project using Plurals to power RCTs related to cooperation
Awarded $3,000 for surveys and experiments on AI's effect on society
Awarded $5,000 in credits for multiple projects related to AI creativity, AI alignment, and AI pluralism
Awarded $2,900 in grants for projects related to the effect of AI on society and for mentoring undergraduates
Awarded $1,500 for testing AI interventions
Finalist for Civic Health Project RFP on innovative uses of LLMs to reduce polarization
Inducted into honor society
Awarded to college senior for outstanding economics research
Awarded to college junior for excellence in economics
Received grant to study how macroeconomic conditions correlate with dream content
For a complete list of publications, see my Google Scholar profile.