Interview Module#
- class plurals.interview.Interview(seed: str, model: str, questions: str | List[str] = 'default', interviewee_instructions: str = 'default', **kwargs)[source]#
Bases: object

Conducts a batched interview with an LLM to build out a persona’s life story. The LLM is prompted to roleplay as seed and answer all interview questions in a single API call. Answers are separated by a sentinel string and parsed into individual responses, which are also joined into a combined Q&A string suitable for use as an Agent persona.

- Parameters:
seed (str) – Description of the persona being interviewed (e.g., “utah voter”).
model (str) – LiteLLM model name (e.g., “gpt-4o”).
questions (str or list[str]) – 'default' to use the built-in question bank from instructions.yaml (word budgets already embedded), or a plain list of question strings.

interviewee_instructions (str) – Controls the voice and style of the interviewee. Pass 'default' to use the built-in template from instructions.yaml (which includes plain language, conversational tone, and guidance to speak naturally rather than in essay form), or pass any raw string. The placeholder ${seed} will be substituted with the seed value.

**kwargs – Additional keyword arguments forwarded to litellm.completion (e.g., temperature, max_tokens).
- interviewee_instructions#
The resolved system prompt used in the interview.
- Type:
str
- responses#
Per-question answers after run_interview().
- Type:
list[str] or None
- combined_response#
Full Q&A string after run_interview(), ready to pass as persona to an Agent.
- Type:
str or None
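The sentinel-based batching described above can be sketched as follows. The sentinel value and parsing logic here are illustrative assumptions, not the library's actual implementation:

```python
# Illustrative sketch of batched-interview parsing. SENTINEL is a
# hypothetical value, not the one the library actually uses.
SENTINEL = "<<<END_OF_ANSWER>>>"

questions = [
    "Where did you grow up?",
    "What do you do for work?",
]

# Stand-in for the single LLM completion covering every question.
raw_completion = (
    "I grew up in Salt Lake City." + SENTINEL + "I run a small hardware store."
)

# Split on the sentinel to recover per-question answers...
responses = [part.strip() for part in raw_completion.split(SENTINEL)]

# ...then join questions and answers into a combined Q&A persona string.
combined_response = "\n".join(
    f"Q: {q}\nA: {a}" for q, a in zip(questions, responses)
)
print(combined_response)
```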
Examples:
Basic usage: Run the default interview and feed the result into an Agent.
from plurals.interview import Interview
from plurals.agent import Agent

interview = Interview(seed="utah voter", model="gpt-4o")
interview.run_interview()

agent = Agent(persona=interview.combined_response, model="gpt-4o-mini")
agent.process(task="How do you feel about immigration policy?")
Different seeds: The seed shapes the entire persona — try anything from demographic descriptions to occupational roles.
# A broad demographic seed
interview = Interview(seed="retired teacher from rural Georgia", model="gpt-4o")
interview.run_interview()

# A more specific seed
interview2 = Interview(seed="first-generation college student from the Bronx", model="gpt-4o")
interview2.run_interview()

# A role-based seed
interview3 = Interview(seed="small business owner in the midwest", model="gpt-4o")
interview3.run_interview()
Custom questions: Pass your own list of question strings. No word budget is added — include any length instructions in the question itself if desired.
my_questions = [
    "Tell me about your relationship with technology and social media.",
    "How has your community changed over the past decade?",
    "What does financial security mean to you? Answer in 150 words.",
]
interview = Interview(seed="suburban parent", model="gpt-4o", questions=my_questions)
interview.run_interview()
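If you want word budgets on your own questions, one simple pattern is to append a length instruction to each string before constructing the Interview. The budget wording below is illustrative, not the library's built-in phrasing:

```python
# Append an explicit word budget to each custom question, since no budget
# is added for user-supplied lists. The exact wording is an assumption.
base_questions = [
    "Tell me about your relationship with technology and social media.",
    "How has your community changed over the past decade?",
]
budgeted_questions = [q + " Answer in roughly 100 words." for q in base_questions]
# These can then be passed as questions=budgeted_questions.
```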
Custom interviewee instructions: Override the default voice/style prompt. The ${seed} placeholder is substituted with the seed value.

# Use a named template from instructions.yaml (currently only 'default')
interview = Interview(seed="utah voter", model="gpt-4o", interviewee_instructions='default')

# Or pass a raw string; ${seed} will be substituted
interview = Interview(
    seed="retired steelworker from Pittsburgh",
    model="gpt-4o",
    interviewee_instructions="You are a ${seed}. Be blunt and use working-class language.",
)
interview.run_interview()
print(interview.interviewee_instructions)
# You are a retired steelworker from Pittsburgh. Be blunt and use working-class language.
Passing model kwargs: Control model behavior with any LiteLLM-supported keyword arguments; per the signature above, they are passed as plain keyword arguments and forwarded to litellm.completion.

interview = Interview(
    seed="progressive activist from Seattle",
    model="gpt-4o",
    temperature=0.7,
    max_tokens=2000,
)
interview.run_interview()
Inspecting results: After run_interview(), you can access per-question answers, the full Q&A string, or the complete info dict.

interview = Interview(seed="utah voter", model="gpt-4o")
interview.run_interview()

# List of answers, one per question
print(interview.responses)

# Full Q&A string (what gets passed to Agent as persona)
print(interview.combined_response)

# Full state dict
print(interview.info)
{
    'seed': 'utah voter',
    'model': 'gpt-4o',
    'interviewee_instructions': 'You are a utah voter...',
    'questions': ['To start, I would like to begin...', ...],
    'responses': ['I grew up in Salt Lake City...', ...],
    'combined_response': 'Q: To start...\nA: I grew up...',
    'kwargs': {},
}

Full workflow: Interview → Agent → Structure.
from plurals.interview import Interview
from plurals.agent import Agent
from plurals.deliberation import Ensemble

# Build two persona-rich agents via interview
interview1 = Interview(seed="conservative farmer from Iowa", model="gpt-4o")
interview1.run_interview()

interview2 = Interview(seed="liberal professor from Boston", model="gpt-4o")
interview2.run_interview()

agent1 = Agent(persona=interview1.combined_response, model="gpt-4o-mini")
agent2 = Agent(persona=interview2.combined_response, model="gpt-4o-mini")

ensemble = Ensemble([agent1, agent2], task="What should U.S. climate policy look like?")
ensemble.process()
print(ensemble.responses)
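Because each interview costs an API call, it can be worth caching the generated persona text and rebuilding agents from it later. A minimal sketch using only the standard library; the file name and layout are arbitrary choices, not part of the plurals API:

```python
import json
from pathlib import Path

# Stand-in literal for interview.combined_response after run_interview().
persona_text = "Q: To start...\nA: I grew up in Salt Lake City..."

# Cache persona text keyed by seed so interviews need not be re-run.
cache = Path("personas.json")
cache.write_text(json.dumps({"utah voter": persona_text}))

# Later session: reload the cache and construct an Agent directly from it.
personas = json.loads(cache.read_text())
# agent = Agent(persona=personas["utah voter"], model="gpt-4o-mini")
```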
- property combined_response: str | None#
Full Q&A string, available after run_interview().
- property info: Dict[str, Any]#
Return the full state of the Interview.
- property responses: List[str] | None#
Per-question answers, available after run_interview().