Swiftsell Help Center

Set AI

Uses an LLM-based AI model to generate a response based on the prompt given.

The Set AI action block is used to generate a response to the prompt supplied to it.

When using the Set AI block, you supply a prompt, which is processed either by the AI Studio or directly by the LLM to generate a response. The response is then stored in a variable.

Swiftsell uses OpenAI GPT APIs to generate a response.

The Set AI block works in three steps:

  1. A prompt is supplied

  2. AI generates the response

  3. The response is stored in a variable
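The three steps above can be sketched in Python. The function names and the variables dictionary below are illustrative assumptions, not Swiftsell's actual API, and the LLM call is stubbed out:

```python
def call_llm(prompt: str) -> str:
    """Placeholder for the OpenAI GPT call Swiftsell makes behind the scenes."""
    # In production this would be an actual API call to the model.
    return f"(AI response to: {prompt})"

def set_ai(variables: dict, prompt: str, target_variable: str) -> dict:
    """Mimic the Set AI block: run the prompt, store the response in a variable."""
    response = call_llm(prompt)            # Step 2: AI generates the response
    variables[target_variable] = response  # Step 3: the response is stored
    return variables

# Step 1: a prompt is supplied
variables = set_ai({}, "Summarise the user's issue in one sentence.", "summary")
```

Later blocks in the flow can then read the stored value (here, `variables["summary"]`) like any other variable.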

Step 1: Define the prompt/instruction

LLMs require guidance to generate relevant and accurate answers. There are some tools you can use to guide your AI:

Instructions/Prompt

A prompt is an instruction that helps the LLM know what to remember and follow while generating an answer.

The clearer and more concise your instructions, the more accurate the AI's answers will be.

Things to write in your prompt:

  • Objective - What the AI should accomplish.

  • Output format - Typically HTML or Markdown.

  • Writing style - How should the answers be written?

  • Don'ts - Clear instructions of what to avoid.

  • Examples - Sample questions with their ideal answers.

An example prompt could be:

Given the ‘user’s question’: “[QUESTION]”

And the detailed information provided in ‘chunks’: “[CHUNKS]”

Determine whether a clarifying question is required.

Instructions:

1. Analyse the 'chunks' and the 'user's question' to identify the specificity of the query and the scope of the information in 'chunks'.

2. If the query is broad and the 'chunks' have multiple categories or types, output '#' and guide the chatbot to ask for clarification.

3. If the query aligns well with a specific part of the 'chunks' that provides a comprehensive answer, output '~'.

Output format: [Decision: '~' or '#']. If '#', a clarification is required: also specify the type of information or category that would help better address the question, based on the 'chunks'.

Important: if the user's question is likely to have a device-specific answer, then you should ask for more information.

If the user has given a device, then we don't need to clarify.
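Before a prompt like this reaches the model, its [QUESTION] and [CHUNKS] placeholders are filled with live values. A minimal sketch of that substitution step, assuming simple string replacement (the template and function here are illustrative, not Swiftsell internals):

```python
PROMPT_TEMPLATE = (
    "Given the 'user's question': \"[QUESTION]\"\n"
    "And the detailed information provided in 'chunks': \"[CHUNKS]\"\n"
    "Determine whether a clarifying question is required."
)

def fill_prompt(template: str, question: str, chunks: str) -> str:
    """Substitute the live question and retrieved chunks into the prompt."""
    return template.replace("[QUESTION]", question).replace("[CHUNKS]", chunks)

prompt = fill_prompt(
    PROMPT_TEMPLATE,
    "How do I reset my device?",
    "Reset steps for Model A... Reset steps for Model B...",
)
```

The filled prompt is what the Set AI block sends to the model; the decision it returns ('~' or '#') can then be stored in a variable and branched on.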

Step 2: Store the response in a variable

The AI processes the instruction/prompt and generates a response. To get an ideal response, make sure your instructions state what kind of response you want to receive.

Choose the variable you want to store the response in.

For example, if you want to check if the user has asked for a follow-up question, you can write the instructions:

Examine the user's last utterance:

"[LAST INPUT]"

Determine if the user has asked a follow-up question by looking for:

- Interrogative words (who, what, where, when, why, how, etc.)

- Phrases that indicate a desire for additional information (e.g., "I would like to know", "Can you tell me about", "I'm interested in", "Could you explain")

- Continuation phrases or conjunctions that introduce new topics or questions (e.g., "but", "however", "also", "in addition")

- Output '1' if any of these indicators suggest a follow-up question is present.

- Output '0' if no follow-up question is detected.

[Note: Only output '1' or '0' based on this analysis.]
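The same check can be approximated without an LLM by scanning for the indicators the instructions list. A rough heuristic sketch (the word lists are abbreviated and the function is hypothetical, not Swiftsell code):

```python
INTERROGATIVES = {"who", "what", "where", "when", "why", "how"}
DESIRE_PHRASES = ["i would like to know", "can you tell me about",
                  "i'm interested in", "could you explain"]
CONTINUATIONS = {"but", "however", "also"}

def detect_follow_up(last_input: str) -> str:
    """Return '1' if the utterance looks like a follow-up question, else '0'."""
    text = last_input.lower()
    words = set(text.replace("?", " ").replace(",", " ").split())
    if words & INTERROGATIVES:                      # interrogative words
        return "1"
    if any(phrase in text for phrase in DESIRE_PHRASES):  # desire phrases
        return "1"
    if words & CONTINUATIONS or "in addition" in text:    # continuations
        return "1"
    return "0"
```

In the actual block the LLM makes this judgement from the prompt, so it also catches phrasings the fixed word lists above would miss; the sketch only illustrates the decision the prompt encodes. The returned '1' or '0' is the value stored in your chosen variable.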

Last updated 10 months ago
