LLM Node
The Large Language Model (LLM) node is a crucial component of your workflow. By incorporating LLM nodes, you can achieve the following:
Key Features of LLM Nodes
Understand and Generate Responses:
Parse input data and produce coherent, context-aware responses.
Answer questions or provide clarifications based on textual input.
Summarization:
Condense lengthy text into concise, meaningful overviews.
Analysis:
Perform sentiment analysis.
Identify key entities and topics within the text.
Rewriting and Text Expansion:
Rewrite text to enhance clarity, tone, or style.
Expand bullet points or brief notes into fully developed content.
Code Generation:
Generate code tailored to your specific requirements.
Integrate one or multiple LLM nodes into your workflow to achieve tailored outcomes. With configurable settings and versatile output formats, the LLM Node empowers you to optimize and streamline your processes.
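For illustration, here is a minimal sketch of what two chained LLM steps might look like under the hood, assuming an OpenAI-compatible API. The model name and prompts are examples only; your workflow tool handles this wiring for you when you connect nodes.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

article = "..."  # stand-in for the output of a previous node

# First LLM step: summarize the input text.
summary = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; pick one suited to your use case
    messages=[{"role": "user",
               "content": f"Summarize in three sentences:\n{article}"}],
).choices[0].message.content

# Second LLM step: run sentiment analysis on the summary.
sentiment = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Classify the sentiment as positive, negative, "
                          f"or neutral:\n{summary}"}],
).choices[0].message.content
```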

Steps to Configure the LLM Node
Step 1 (Optional): Define Node Name and Description
Provide a name and description for the node to help identify its purpose in your workflow.
Step 2 (Required): Select a Model
Choose the desired model to power your LLM Node. Supported models include Claude, DeepSeek, Gemma2, Llama-3, GPT, and others. Keep in mind:
Model performance varies based on your use case.
Costs may differ depending on the model.
Step 3 (Optional): Configure Model Parameters
Customize the selected model with the following parameters. All parameters come preset with default values; a sketch of how they map onto a typical API request follows the list.
Frequency Penalty: Range from -2.0 to 2.0. Positive values discourage repetition by penalizing frequent tokens.
Presence Penalty: Range from -2.0 to 2.0. Positive values encourage discussing new topics by penalizing previously mentioned tokens.
Temperature: Range from 0 to 2. Higher values (e.g., 0.8) increase randomness, while lower values (e.g., 0.2) ensure focused and deterministic outputs.
Max Completion Tokens: Set an upper limit for the number of tokens generated, including visible output and reasoning tokens.
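For concreteness, here is a hedged sketch of how these four parameters map onto a typical OpenAI-style request. The field names below follow the OpenAI Python SDK; your node may expose them under slightly different labels.

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",        # example model
    messages=[{"role": "user", "content": "Draft a short product blurb."}],
    frequency_penalty=0.3,      # -2.0 to 2.0; positive values discourage repetition
    presence_penalty=0.2,       # -2.0 to 2.0; positive values favor new topics
    temperature=0.2,            # 0 to 2; lower = more focused and deterministic
    max_completion_tokens=512,  # cap on generated tokens, including reasoning tokens
)
print(response.choices[0].message.content)
```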
Step 4 (Optional): Configure the System Prompt
The system prompt establishes context for the LLM conversation. For example, you can instruct the LLM to provide simplified summaries or other specific outputs. If no context is required, this field can be left blank.
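For example, a system prompt that steers the node toward simplified summaries might look like this (the wording is illustrative):

```python
messages = [
    # System prompt: sets the context for every turn of the conversation.
    {"role": "system",
     "content": "You are a technical writer. Reply only with plain-language "
                "summaries that a non-expert can follow."},
    # User message: the actual task (configured in Step 5).
    {"role": "user", "content": "Explain what an LLM node does."},
]
```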
Step 5 (Required): Define the Message
Provide detailed instructions for the LLM Node. This is the command or task you want the node to execute. You can also include variables from previous nodes.
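The snippet below sketches how a variable from a previous node might be spliced into the message. The {{previous_node.output}} placeholder syntax is hypothetical; check your workflow tool's documentation for the exact form.

```python
upstream_result = "- launch moved to Friday\n- demo at 3 pm"  # example output of a prior node

# Hypothetical placeholder syntax; the engine substitutes it at run time.
template = "Rewrite the following notes as a formal announcement:\n{{previous_node.output}}"
message = template.replace("{{previous_node.output}}", upstream_result)
```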
Step 6 (Optional): Select an Output Schema
Choose the format for the output:
Default format: Text.
Alternative format: JSON Schema, which allows you to define various properties and structures for your output (see the sketch below).
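Below is a minimal sketch of a JSON Schema for structured output, attached in the OpenAI structured-output style for concreteness. The property names are illustrative, and how the schema is wired into the node depends on your tool.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative schema: force the model to return a summary plus a sentiment label.
schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "sentiment": {"type": "string",
                      "enum": ["positive", "neutral", "negative"]},
    },
    "required": ["summary", "sentiment"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model
    messages=[{"role": "user",
               "content": "Summarize and classify: The launch went smoothly."}],
    response_format={"type": "json_schema",
                     "json_schema": {"name": "analysis",
                                     "schema": schema,
                                     "strict": True}},
)
print(response.choices[0].message.content)  # a JSON string matching the schema
```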