Deliberation Module#

class plurals.deliberation.AbstractStructure(agents: List[Agent], task: str | None = None, shuffle: bool = False, cycles: int = 1, last_n: int = 1000, combination_instructions: str | None = 'default', moderator: Moderator | None = None)[source]#

Bases: ABC

AbstractStructure is an abstract class for processing tasks through a group of agents. It is not meant to be instantiated directly; rather, it is subclassed by concrete structures such as Ensemble. All concrete structures share the same attributes and methods, so this class provides a common interface.

Parameters:
  • agents (List[Agent]) – A list of agents to include in the structure.

  • task (Optional[str]) – The task description for the agents to process.

  • shuffle (bool) – Whether to shuffle the order of the agents. Default is False.

  • cycles (int) – The number of times to process the task. Default is 1.

  • last_n (int) – The maximum number of previous responses each Agent has access to. Default is 1000.

  • combination_instructions (Optional[str]) – The instructions for combining responses. The default is the default template.

  • moderator (Optional[Moderator]) – A moderator to moderate the responses. The default is None.

defaults#

A dict corresponding to the YAML file of templates.

Type:

Dict[str, Any]

task#

The task description for the agents to process.

Type:

Optional[str]

agents#

A list of agents to include in the structure.

Type:

List[Agent]

combination_instructions#

The instructions for combining responses.

Type:

str

shuffle#

Whether to shuffle the order of the agents.

Type:

bool

last_n#

The maximum number of previous responses each agent has access to.

Type:

int

cycles#

The number of times to process the task.

Type:

int

responses#

A list of responses from the agents.

Type:

List[str]

final_response#

The final response from the agents.

Type:

Optional[str]

moderator#

A moderator to moderate the responses.

Type:

Optional[Moderator]

moderated#

Whether the structure is moderated.

Type:

bool

property info: Dict[str, Any]#

Return information about the structure and its agents.
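
A minimal sketch of inspecting info on a concrete structure (the Agent import path and the task string are illustrative; the exact keys of the returned dict are not enumerated here):

from plurals.agent import Agent
from plurals.deliberation import Chain

chain = Chain([Agent(model='gpt-4o-mini'), Agent(model='gpt-4o-mini')],
              task="Summarize the main arguments for and against daylight saving time.")
chain.process()
print(chain.info.keys())  # structure-level settings plus information about each agent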

abstract process() None[source]#

Abstract method for processing agents. Must be implemented in a subclass.
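
As a rough illustration, a custom structure only needs to subclass AbstractStructure and implement process(). The sketch below is hypothetical: the agent.process() call shape and the way responses and final_response are populated are assumptions made for illustration, not documented internals.

from plurals.deliberation import AbstractStructure

class SimpleSequence(AbstractStructure):
    """Hypothetical minimal subclass: run agents one after another."""

    def process(self) -> None:
        previous_responses = []
        for agent in self.agents:
            # Only pass along the last_n prior responses (see the last_n attribute).
            context = "\n".join(previous_responses[-self.last_n:])
            response = agent.process(previous_responses=context)  # assumed Agent call shape
            previous_responses.append(response)
        self.responses = previous_responses
        self.final_response = previous_responses[-1] if previous_responses else None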

class plurals.deliberation.Chain(agents: List[Agent], task: str | None = None, shuffle: bool = False, cycles: int = 1, last_n: int = 1000, combination_instructions: str | None = 'default', moderator: Moderator | None = None)[source]#

Bases: AbstractStructure

A chain structure for processing tasks through a sequence of agents. In a chain, each agent processes the task after seeing a prior agent’s response.

Examples:

Using Chain to create a panel of agents that process tasks in a sequence:

from plurals.agent import Agent
from plurals.deliberation import Chain

agent1 = Agent(persona='a liberal woman from Missouri', model='gpt-4o')
agent2 = Agent(persona='a 24 year old hispanic man from Florida', model='gpt-4o')
agent3 = Agent(persona='an elderly woman with a PhD', model='gpt-4o')

chain = Chain([agent1, agent2, agent3],
              task="How should we combat climate change?",
              combination_instructions="chain")
chain.process()
print(chain.final_response)

process()[source]#

Process the task through a chain of agents, each building upon the last. Use parameters from AbstractStructure to control how the chain operates (e.g., last_n controls how many previous responses are included in the previous_responses string).
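
For example, a chain can be run for multiple cycles while limiting how much prior context each agent sees. A minimal sketch using the parameters documented above (the Agent import path and the task string are illustrative):

from plurals.agent import Agent
from plurals.deliberation import Chain

agents = [Agent(persona='random', model='gpt-4o-mini') for _ in range(3)]
# Two passes over the agents; each agent sees only the single most recent prior response.
chain = Chain(agents, task="Suggest one way to improve local parks.",
              cycles=2, last_n=1, combination_instructions="chain")
chain.process()
print(chain.final_response)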

class plurals.deliberation.Debate(agents: List[Agent], task: str | None = None, shuffle: bool = False, cycles: int = 1, last_n: int = 1000, combination_instructions: str | None = 'debate', moderator: Moderator | None = None)[source]#

Bases: AbstractStructure

In a debate, two agents take turns responding to a task, with each response building upon the previous one. Debate differs from other structures in a few key ways:

  1. It requires exactly two agents.

  2. It alternates between agents for each response, and prefixes each response with [You]: or [Other]: to indicate the speaker.

  3. When moderated, the moderator will provide a final response based on the debate, and [Debater 1] and [Debater 2] are prepended to the responses so that the moderator knows who said what.

Examples:

Using Debate to observe a conservative vs. a liberal viewpoint:

from plurals.agent import Agent
from plurals.deliberation import Debate, Moderator

task = 'To what extent should the government be involved in providing free welfare to citizens?'
agent1 = Agent(persona="a liberal", persona_template="default", model='gpt-4o')
agent2 = Agent(persona="a conservative", persona_template="default", model='gpt-4o')
moderator = Moderator(persona='You are a neutral moderator overseeing this task, ${task}', model='gpt-4o', combination_instructions="default")

debate = Debate([agent1, agent2], task=task, combination_instructions="debate", moderator=moderator)
debate.process()
print(debate.final_response)

process()[source]#

Process the debate. In a debate, two agents take turns responding to a task. Prompts for agents are prefixed with [WHAT YOU SAID] and [WHAT OTHER PARTICIPANT SAID] to indicate the speaker. For moderators the responses are prefixed with [Debater 1] and [Debater 2] to indicate the speaker.
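
A minimal sketch of inspecting the stored turns after a debate via the responses attribute (the Agent import path and the task string are illustrative):

from plurals.agent import Agent
from plurals.deliberation import Debate

debate = Debate([Agent(persona="a liberal", model='gpt-4o-mini'),
                 Agent(persona="a conservative", model='gpt-4o-mini')],
                task="Should college be free?",
                combination_instructions="debate")
debate.process()
for turn in debate.responses:  # alternating turns from the two debaters
    print(turn)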

class plurals.deliberation.Ensemble(agents: List[Agent], task: str | None = None, shuffle: bool = False, cycles: int = 1, last_n: int = 1000, combination_instructions: str | None = 'default', moderator: Moderator | None = None)[source]#

Bases: AbstractStructure

An ensemble structure for processing tasks through a group of agents. In an ensemble, each agent processes the task independently through async requests.

Examples:

Using Ensemble to brainstorm ideas:

task = "Brainstorm ideas to improve America."
agents = [Agent(persona='random', model='gpt-4o') for i in range(10)] # random ANES agents
moderator = Moderator(persona='default', model='gpt-4o') # default moderator persona
ensemble = Ensemble(agents, moderator=moderator, task=task)
ensemble.process()
print(ensemble.final_response)
process()[source]#

Requests are sent to all agents simultaneously.
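
Because each agent answers independently, an unmoderated ensemble's raw answers can be read from the responses attribute. A minimal sketch (the Agent import path and the task string are illustrative):

from plurals.agent import Agent
from plurals.deliberation import Ensemble

ensemble = Ensemble([Agent(persona='random', model='gpt-4o-mini') for _ in range(5)],
                    task="Name one underrated civic institution and why it matters.")
ensemble.process()
for answer in ensemble.responses:  # one independent answer per agent
    print(answer)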

class plurals.deliberation.Graph(agents: List[Agent], edges: List[tuple], task: str | None = None, last_n: int = 1000, combination_instructions: str | None = 'default', moderator: Moderator | None = None)[source]#

Bases: AbstractStructure

Initializes a network where agents are processed according to a topologically-sorted directed acyclic graph (DAG). This Structure takes in agents and a structure-specific property called edges. We offer two ways to construct the graph, with examples of each method right below.

Method 1:

  • agents is a list of Agent objects.

  • edges is a list of integer tuples (src_idx, dst_idx).

Method 2:

  • agents is a dictionary of Agent objects with keys as agent names.

  • edges is a list of string tuples (src_agent_name, dst_agent_name).

Note that the graph must be a directed acyclic graph (DAG) or else an error will be raised.

Examples:

Suppose we have three Agents, and we want to create a graph where the output of the liberal is fed to both the conservative and libertarian. Then the output of the conservative is fed to the libertarian.

Method 1:

from plurals.agent import Agent
from plurals.deliberation import Graph

agents = [
    Agent(system_instructions="you are a liberal", model="gpt-3.5-turbo"),
    Agent(system_instructions="you are a conservative", model="gpt-3.5-turbo"),
    Agent(system_instructions="you are a libertarian", model="gpt-3.5-turbo")
]
edges = [(0, 1), (0, 2), (1, 2)]
# edges: (liberal -> conservative), (liberal -> libertarian), (conservative -> libertarian)
task = "What are your thoughts on the role of government in society? Answer in 20 words."
graph = Graph(agents=agents, edges=edges, task=task)
graph.process()

Method 2:

agents = {
    'liberal': Agent(system_instructions="you are a liberal", model="gpt-3.5-turbo"),
    'conservative': Agent(system_instructions="you are a conservative", model="gpt-3.5-turbo"),
    'libertarian': Agent(system_instructions="you are a libertarian", model="gpt-3.5-turbo")
}
edges = [('liberal', 'conservative'), ('liberal', 'libertarian'), ('conservative', 'libertarian')]
task = "What are your thoughts on the role of government in society?"
graph = Graph(agents=agents, edges=edges, task=task)
graph.process()
process()[source]#

Processes the tasks within the network of agents, respecting the directed acyclic graph (DAG) structure. The order of agent deliberation is determined using Kahn’s algorithm for topological sorting.

Kahn’s Algorithm:

  1. Initialize a queue with agents that have an in-degree of 0 (no dependencies).

  2. While the queue is not empty:

    1. Remove an agent from the queue and add this agent to the topological order.

    2. For each successor of this agent:

      1. Decrease the successor’s in-degree by 1.

      2. If the successor’s in-degree becomes 0, add it to the queue.

This method ensures that agents are processed in an order where each agent’s dependencies are processed before the agent itself; a standalone sketch of this ordering appears below.

Returns:

The final response after all agents have been processed, and potentially moderated.

Return type:

str

Raises:

ValueError – If a cycle is detected in the DAG, as this prevents valid topological sorting.
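
The ordering described above can be illustrated with a standalone sketch of Kahn’s algorithm over the integer edge list from Method 1 (plain Python, independent of the library):

from collections import deque

def kahn_order(n_agents, edges):
    """Return a topological order of agent indices; raise if the graph has a cycle."""
    in_degree = [0] * n_agents
    successors = [[] for _ in range(n_agents)]
    for src, dst in edges:
        successors[src].append(dst)
        in_degree[dst] += 1

    queue = deque(i for i in range(n_agents) if in_degree[i] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for succ in successors[node]:
            in_degree[succ] -= 1
            if in_degree[succ] == 0:
                queue.append(succ)

    if len(order) != n_agents:
        raise ValueError("Cycle detected: the graph must be a DAG.")
    return order

print(kahn_order(3, [(0, 1), (0, 2), (1, 2)]))  # [0, 1, 2]: liberal -> conservative -> libertarian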

class plurals.deliberation.Moderator(persona: str | None = None, system_instructions: str | None = None, combination_instructions: str = 'default', model: str = 'gpt-4o', task: str | None = None, kwargs: Dict | None = None)[source]#

Bases: Agent

A moderator agent that combines responses from other agents at the end of structure processing.

Parameters:
  • persona (str, optional) – The persona of the moderator. Default is ‘default’. The persona can take in a ${task} placeholder.

  • system_instructions (str, optional) – The system instructions for the moderator. Default is None. If you pass in ‘auto’, an LLM will generate its own system instructions automatically based on the task. system_instructions can take in a ${task} placeholder. If you use system_instructions, you cannot use persona since that is an alternative way to set the system instructions.

  • combination_instructions (str, optional) – The instructions for combining responses. Default is ‘default’. The combination instructions can take in a ${previous_responses} placeholder.

  • model (str, optional) – The model to use for the moderator. Default is ‘gpt-4o’.

  • task (str, optional) – The task description for the moderator. By default, moderators inherit the task from the Structure, so this can be left blank. It is only required in one case: when you wish to manually generate system instructions outside of the Structure. Note that if you use auto-moderation inside of the Structure, the task will be inherited from the Structure.

  • kwargs (Optional[Dict]) – Additional keyword arguments passed to LiteLLM’s completion function (see https://litellm.vercel.app/docs/completion/input).

persona#

The persona of the moderator.

Type:

str

combination_instructions#

The instructions for combining responses.

Type:

str

system_instructions#

The full system instructions for the moderator.

Type:

str

Examples

Standard Usage: Inherit task from Structure

The standard usage is for the Moderator to inherit the task from the Structure. Here, the inherited task will fill in the ${task} placeholder in the default Moderator template.

# Example 1
from plurals.agent import Agent
from plurals.deliberation import Chain, Moderator

task = "What are your thoughts on the role of government in society? Answer in 20 words."
# Uses templates for personas and combination instructions
moderator = Moderator(persona='default', model='gpt-4o', combination_instructions='default')
agent1 = Agent(model='gpt-3.5-turbo')
agent2 = Agent(model='gpt-3.5-turbo')
chain = Chain([agent1, agent2], moderator=moderator, task=task)
chain.process()

# Example 2
task = "What are your thoughts on the role of government in society? Answer in 10 words."
moderator = Moderator(
    persona="You are an expert overseeing a discussion about ${task}",
    model='gpt-4o',
    combination_instructions="Come to a final conclusion based on previous responses: $previous_responses"
)
agent1 = Agent(model='gpt-3.5-turbo')
agent2 = Agent(model='gpt-3.5-turbo')
chain = Chain([agent1, agent2], moderator=moderator, task=task)
chain.process()

Alternatively, you can set the system instructions directly.

moderator = Moderator(
    system_instructions='Summarize previous responses as neutrally as possible.',
    model='gpt-4o',
    combination_instructions='second_wave'
)

Auto-Moderator: Declared inside of Structure

If the system_instructions of a moderator are set to ‘auto’, then the moderator will, given a task, come up with its own system instructions. Here the task is inherited from the Structure.

task = ("Your goal is to come up with the most creative ideas possible for pants. We are maximizing creativity. Answer"
        " in 20 words.")
a = Agent(model='gpt-4o')
b = Agent(model='gpt-3.5-turbo')
chain = Chain([a, b], moderator=Moderator(system_instructions='auto', model='gpt-4o'), task=task)
chain.process()
print(chain.moderator.system_instructions)

Output:

Group similar ideas together, prioritize uniqueness and novelty. Highlight standout concepts and remove
duplicates. Ensure the final list captures diverse and imaginative designs.

Auto-Moderator: Declared outside of Structure

Here we use an auto-moderator again, but this time the auto-moderated system instructions come from a different task than the one the Agents complete.

moderator_task = "What is a creative use for pants?"
moderator = Moderator(system_instructions='auto', model='gpt-4o', task=moderator_task)
a1 = Agent(model='gpt-4o-mini')
a2 = Agent(model='gpt-4o-mini')
chain = Chain([a1, a2], moderator=moderator, task=task)  # `task` is the Agents' task defined above
chain.process()

generate_and_set_system_instructions(task: str, max_tries: int = 10) None[source]#

Generate and set system instructions using an LLM and a task. This method generates the system instructions and also sets them as the moderator's system instructions.

Parameters:
  • task (str) – The task description.

  • max_tries (int, optional) – The maximum number of attempts to generate valid system instructions. Default is 10.

Returns:

None. The generated instructions are set on the moderator rather than returned (see Sets below).

Sets:

system_instructions (str): The system instructions for the moderator.
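
A minimal sketch of generating and setting instructions manually, outside of a Structure (the model and task strings are illustrative):

from plurals.deliberation import Moderator

moderator = Moderator(model='gpt-4o')
moderator.generate_and_set_system_instructions(task="Synthesize ideas for reducing food waste.")
print(moderator.system_instructions)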

generate_system_instructions(task: str, max_tries: int = 10) str[source]#

Generate system instructions using an LLM and a task. This method does not automatically set the system instructions; it returns them so you can inspect or re-generate them. You can then set them using the system_instructions attribute.

See generate_and_set_system_instructions for a function that will generate and set the system instructions.

Parameters:
  • task (str) – The task description for which system instructions need to be generated.

  • max_tries (int) – The maximum number of attempts to generate valid system instructions. Default is 10.

Returns:

The generated system instructions.

Return type:

str

Raises:

ValueError – If valid system instructions are not generated after the maximum number of attempts.
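
A minimal sketch of the inspect-then-set workflow this method supports (the model and task strings are illustrative):

from plurals.deliberation import Moderator

moderator = Moderator(model='gpt-4o')
candidate = moderator.generate_system_instructions(task="Synthesize ideas for reducing food waste.")
print(candidate)                           # inspect (or re-generate) before committing
moderator.system_instructions = candidate  # set explicitly via the attribute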