Class PipelinePromptTemplate<PromptTemplateType>

Class that handles a sequence of prompts, each of which may require different input variables. Includes methods for formatting these prompts, extracting required input values, and handling partial prompts.

import { PromptTemplate, PipelinePromptTemplate } from "@langchain/core/prompts";

const composedPrompt = new PipelinePromptTemplate({
  pipelinePrompts: [
    {
      name: "introduction",
      prompt: PromptTemplate.fromTemplate(`You are impersonating {person}.`),
    },
    {
      name: "example",
      prompt: PromptTemplate.fromTemplate(
        `Here's an example of an interaction:
Q: {example_q}
A: {example_a}`,
      ),
    },
    {
      name: "start",
      prompt: PromptTemplate.fromTemplate(
        `Now, do this for real!
Q: {input}
A:`,
      ),
    },
  ],
  finalPrompt: PromptTemplate.fromTemplate(
    `{introduction}
{example}
{start}`,
  ),
});

const formattedPrompt = await composedPrompt.format({
  person: "Elon Musk",
  example_q: `What's your favorite car?`,
  example_a: "Tesla",
  input: `What's your favorite social media site?`,
});
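
The class description also mentions handling partial prompts. As a hedged sketch (assuming the partial method inherited from the base prompt template, which returns a new template with some variables pre-filled), one input can be fixed up front and the rest supplied later:

const partialPrompt = await composedPrompt.partial({
  person: "Elon Musk",
});

const partiallyFormattedPrompt = await partialPrompt.format({
  example_q: `What's your favorite car?`,
  example_a: "Tesla",
  input: `What's your favorite social media site?`,
});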

Type Parameters

Hierarchy

Constructors

Properties

PromptValueReturnType: BasePromptValueInterface
finalPrompt: PromptTemplateType
inputVariables: string[]

A list of variable names the prompt template expects

partialVariables: PartialValues<any>

Partial variables

name?: string
outputParser?: BaseOutputParser<unknown>

How to parse the output of calling an LLM on this formatted prompt

Methods

  • Convert a runnable to a tool. Return a new instance of RunnableToolLike which contains the runnable, name, description and schema.

    Type Parameters

    • T extends any = any

    Parameters

    • fields: {
          schema: ZodType<T, ZodTypeDef, T>;
          description?: string;
          name?: string;
      }
      • schema: ZodType<T, ZodTypeDef, T>

        The Zod schema for the input of the tool. Infers the Zod type from the input type of the runnable.

      • Optional description?: string

        The description of the tool. Falls back to the description on the Zod schema if not provided, or undefined if neither are provided.

      • Optional name?: string

        The name of the tool. If not provided, it will default to the name of the runnable.

    Returns RunnableToolLike<ZodType<ToolCall | T, ZodTypeDef, ToolCall | T>, BasePromptValueInterface>

    An instance of RunnableToolLike which is a runnable that can be used as a tool.
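
    For illustration, a hedged sketch of exposing the composed prompt above as a tool. The tool name, description, and Zod schema below are example values chosen to match the pipeline's input variables, not part of this reference:

    import { z } from "zod";

    const promptTool = composedPrompt.asTool({
      // Example name and description; adjust to your use case.
      name: "impersonation_prompt",
      description: "Formats an impersonation prompt for a question.",
      schema: z.object({
        person: z.string(),
        example_q: z.string(),
        example_a: z.string(),
        input: z.string(),
      }),
    });

    // Invoking the resulting tool-like runnable should resolve to the
    // formatted prompt value produced by the underlying pipeline prompt.
    const toolResult = await promptTool.invoke({
      person: "Elon Musk",
      example_q: `What's your favorite car?`,
      example_a: "Tesla",
      input: `What's your favorite social media site?`,
    });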

  • Generate a stream of events emitted by the internal steps of the runnable.

    Use to create an iterator over StreamEvents that provide real-time information about the progress of the runnable, including StreamEvents from intermediate results.

    A StreamEvent is a dictionary with the following schema:

    • event: string - Event names are of the format: on_[runnable_type]_(start|stream|end).
    • name: string - The name of the runnable that generated the event.
    • run_id: string - Randomly generated ID associated with the given execution of the runnable that emitted the event. A child runnable that gets invoked as part of the execution of a parent runnable is assigned its own unique ID.
    • tags: string[] - The tags of the runnable that generated the event.
    • metadata: Record<string, any> - The metadata of the runnable that generated the event.
    • data: Record<string, any>

    Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.

    ATTENTION This reference table is for the V2 version of the schema.

    | event                | name             | chunk                           | input                                         | output                                          |
    |----------------------|------------------|---------------------------------|-----------------------------------------------|-------------------------------------------------|
    | on_chat_model_start  | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} |                                                 |
    | on_chat_model_stream | [model name]     | AIMessageChunk(content="hello") |                                               |                                                 |
    | on_chat_model_end    | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world")           |
    | on_llm_start         | [model name]     |                                 | {'input': 'hello'}                            |                                                 |
    | on_llm_stream        | [model name]     | 'Hello'                         |                                               |                                                 |
    | on_llm_end           | [model name]     |                                 | 'Hello human!'                                |                                                 |
    | on_chain_start       | format_docs      |                                 |                                               |                                                 |
    | on_chain_stream      | format_docs      | "hello world!, goodbye world!"  |                                               |                                                 |
    | on_chain_end         | format_docs      |                                 | [Document(...)]                               | "hello world!, goodbye world!"                  |
    | on_tool_start        | some_tool        |                                 | {"x": 1, "y": "2"}                            |                                                 |
    | on_tool_end          | some_tool        |                                 |                                               | {"x": 1, "y": "2"}                              |
    | on_retriever_start   | [retriever name] |                                 | {"query": "hello"}                            |                                                 |
    | on_retriever_end     | [retriever name] |                                 | {"query": "hello"}                            | [Document(...), ..]                             |
    | on_prompt_start      | [template_name]  |                                 | {"question": "hello"}                         |                                                 |
    | on_prompt_end        | [template_name]  |                                 | {"question": "hello"}                         | ChatPromptValue(messages: [SystemMessage, ...]) |

    Parameters

    • input: any
    • options: Partial<RunnableConfig> & {
          version: "v1" | "v2";
      }
    • Optional streamOptions: Omit<EventStreamCallbackHandlerInput, "autoClose">

    Returns IterableReadableStream<StreamEvent>
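
    A hedged usage sketch, reusing composedPrompt from the example at the top of this page. Prompt templates are runnables, so a bare prompt template would emit on_prompt_start and on_prompt_end events:

    const eventStream = composedPrompt.streamEvents(
      {
        person: "Elon Musk",
        example_q: `What's your favorite car?`,
        example_a: "Tesla",
        input: `What's your favorite social media site?`,
      },
      { version: "v2" },
    );

    for await (const event of eventStream) {
      // Inspect each StreamEvent as it arrives.
      console.log(event.event, event.name);
    }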

  • Parameters

    • input: any
    • options: Partial<RunnableConfig> & {
          encoding: "text/event-stream";
          version: "v1" | "v2";
      }
    • Optional streamOptions: Omit<EventStreamCallbackHandlerInput, "autoClose">

    Returns IterableReadableStream<Uint8Array>

  • Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state.

    Parameters

    Returns AsyncGenerator<RunLogPatch, any, unknown>
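
    A hedged sketch of consuming the log stream, again reusing composedPrompt from above. Each yielded patch carries the jsonpatch ops described in this method's summary:

    const logStream = composedPrompt.streamLog({
      person: "Elon Musk",
      example_q: `What's your favorite car?`,
      example_a: "Tesla",
      input: `What's your favorite social media site?`,
    });

    for await (const logPatch of logStream) {
      // Applying these ops in order reconstructs the state of the run.
      console.log(logPatch.ops);
    }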

  • Computes the input values required by the pipeline prompts.

    Returns string[]

    Array of input values required by the pipeline prompts.
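
    For the composedPrompt defined at the top of this page, the computed values should be the variables collected from the pipeline prompts (person, example_q, example_a, input) rather than the intermediate names (introduction, example, start). A hedged check, assuming the result is also reflected on the inputVariables property documented above:

    // Expected (order may vary): ["person", "example_q", "example_a", "input"]
    console.log(composedPrompt.inputVariables);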