Chatbot

A framework for creating and executing "canned" chatbots.

This repo contains React UI components, and also re-exports the core engine.

Getting Started

Install package

npm install @xflr6/chatbot

yarn add @xflr6/chatbot

Add styles to your CSS

@import "~@xflr6/chatbot/dist/styles.css";

This library uses no CSS reset of any kind. Include your own and adjust it as you wish.

Create and run a chat

Create a JSON chat flow definition. It can be a local file or a remote resource. See the sections below for how to write flow definitions.

Once you have a definition, write a class that fetches and returns it. This class must extend ChatFlow and implement getDefinition.

class MyFlow extends ChatFlow {
  async getDefinition(): Promise<ChatFlowDefinition> {
    return theJSONDefinitionYouCreated;
  }
}
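
For example, if the definition lives on a server, a remote-loading flow might look like this sketch (the URL is hypothetical):

class MyRemoteFlow extends ChatFlow {
  async getDefinition(): Promise<ChatFlowDefinition> {
    // Hypothetical endpoint; point this at wherever your definition is hosted
    const response = await fetch("https://example.com/flows/myFlow.json");
    return (await response.json()) as ChatFlowDefinition;
  }
}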

Next, register the flow class with the flowFactory. The flow factory returns instances of your ChatFlow-derived classes, which are requested by flow name. You can push multiple flow creator functions into the factory; given a flow name, the first matching creator function is used.

flowFactory.pushFactoryFunc(
  "myFlow", // or a regex
  (flowId: FlowId, chat: Chat) => new MyFlow(flowId, chat)
);
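
Since the name can also be a regex, a single creator function can serve a whole family of flows. A sketch, assuming the factory accepts a RegExp:

flowFactory.pushFactoryFunc(
  /^lesson\./, // matches flow names like "lesson.intro", "lesson.quiz1", etc.
  (flowId: FlowId, chat: Chat) => new MyFlow(flowId, chat)
);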

Now create a Chat instance, and call its setPrompt method to start the chat. This causes the flow factory to create and return an instance of MyFlow to the chat, which the chat then begins to execute.

const chat = new Chat("someName");
chat.setPrompt(PromptKey.fromString("myFlow.start"));

Finally, render the ChatView component somewhere, and pass it the chat. The component must be rendered inside a parent element that does not derive its size from its children, and the parent's overflow-y should be set to "scroll".

<ChatView chat={chat} />
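
For example, a minimal wrapper that satisfies both constraints (the height is just an example):

<div style={{ height: "70vh", overflowY: "scroll" }}>
  <ChatView chat={chat} />
</div>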

By default, the chat is restricted to a maximum width, since beyond a point it looks too stretched. You can change this by overriding a CSS variable; in fact, there are many variables you can override to suit the styling of your app. Inspect the rendered component in your browser's developer tools to discover them.
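
For example, an override might look like this (the variable name below is hypothetical; use the actual names you discover in the developer tools):

/* In your app's CSS, after importing the chatbot styles */
.my-chat-container {
  --chatbot-max-width: 960px; /* hypothetical variable name, for illustration */
}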

There are lots of customizations possible at every step. More on this later.

The Chat Engine

Basic Terminology

The chat engine runs "chat flows" that are defined using JSON.

A chat flow is a collection of "prompts" from the bot to the user. Each prompt contains the message that the bot shows to the user, and some response(s) that the user can send back to the bot. The response then determines which prompt is shown next to the user.

How the Chat Progresses

At each point in time, the chat can move forward in three ways, depending on how the current prompt is configured (via its JSON definition within the chat flow definition):

  1. Move forward to the next prompt without waiting for a user response. This is helpful if you want to simulate multiple messages coming from the bot one after another.
  2. Give the user a set of options to send as the answer. When the user clicks an option (or several options) and submits, move on to the next prompt accordingly.
  3. Let the user enter some free-form answer and submit (such as entering some text in a textbox etc.), and move on to the next prompt accordingly.

Creating Chat Flow Definitions

Use Case: A bunch of consecutive messages from the bot

const chatFlowDefinition = {
  prompts: {
    hello: {
      message: "Hello",
      // The "dot" suffix is significant. More on this later.
      nextKey: ".hello2",
    },
    hello2: {
      message: "I am a chatbot",
      nextKey: ".hello3",
    },
    hello3: {
      message: "What can I do for you today?",
      nextKey: ".somePrompt",
    },
    // Other prompts
  },
};

export default chatFlowDefinition;

A more concise way to write the above (omitting the enclosing chatFlowDefinition for brevity from now on):

prompts: {
  hello: {
    messages: [
      "Hello",
      "I am a chatbot",
      "What can I do for you today?"
    ],
    nextKey: ".somePrompt",
  },
  // Other prompts
}

Note that the message(s) need not be strings; they can be anything. Certain types of messages are supported out of the box, such as Markdown, video URLs, etc. (more on this later).

If you have a custom data shape that you want to render, or want to render one of the supported formats in a custom manner, you can hook into the rendering pipeline and supply a custom component.

Beyond message data, you can also provide components that customize how users answer prompts (i.e., the UI they interact with).

These customizations are done on a prompt-by-prompt basis (more on customization later).

Use Case: Multiple choices presented to the user

prompts: {
  q1: {
    message: "What is 1+1?",
    // Currently, only string messages are properly supported. More data
    // formats will be supported soon.
    answers: [
      { message: "1" },
      { message: "2" },
      { message: "3" }
    ],
    // You can show the answers as:
    // * Horizontal list of small buttons that wrap around
    // * Horizontal list of large boxes that can be horizontally scrolled
    // * Vertically stacked long boxes
    // This setting is prompt-wide, and cannot be specified for individual
    // answers within a prompt.
    inputDisplayType: "large", // or "stacked", or null (or omit) for small
    nextKey: ".q2",
  },
  q2: {
    // Prompt definition for q2
  },
}

Instead of progressing to the next prompt "q2" no matter what the answer is, you can fork out depending on which answer the user chose:

prompts: {
  q1: {
    message: "What is 1+1?",
    answers: [
      { message: "1", nextKey: ".wrong" },
      { message: "2", nextKey: ".q2" },
      { message: "3", nextKey: ".wrong" },
    ],
    // Don't use this prompt-level next key, otherwise it will override the
    // answer-level next keys.
    // nextKey: ".q2",
  },
  wrong: {
    message: "Try again!",
    answers: [
      { message: "1", nextKey: ".wrong" },
      { message: "2", nextKey: ".q2" },
      { message: "3", nextKey: ".wrong" },
    ],
  },
  q2: {
    // Prompt definition for q2
  },
}

Notice that we have to repeat all the answers in the "wrong" prompt. We can avoid this as follows:

wrong: {
  message: "Try again!",
  answers: "q1" // This will insert the answers of the "q1" prompt here
},

Actually, we can do even better. We don't really need to create the "wrong" prompt at all:

q1: {
  message: "What is 1+1?",
  answers: [
    {
      message: "1",
      // This will auto-create a new prompt, assign the answers of prompt
      // "q1" to it, and wire it up correctly.
      quickResponse: {
        message: "If 1 = 1, then can 1+1 = 1 too? Try again",
        repeatAnswers: true,
      },
    },
    { message: "2", nextKey: ".q2" },
    {
      message: "3",
      quickResponse: {
        message: "Actually, 1+2 = 3. So now what d'you think 1+1 will be?",
        repeatAnswers: true,
      },
    },
  ],
},

Quick responses can also be used to "insert" a message before moving on to what would technically be the real next prompt:

prompts: {
  "p1": {
    message: "Greet me",
    answers: [
      {
        message: "Hi",
        quickResponse: { message: "Hi to you too" },
        nextKey: ".p2",
      },
      {
        message: "Hello",
        quickResponse: { message: "Hello to you too" },
        nextKey: ".p2",
      },
    ],
    // You can even use a prompt-level next key
    // nextKey: ".p2",
  },
  p2: {
    message: "How can I help you?"
  }
}

Use Case: Multiple choices presented to the user (multi select)

For multiple choice prompts, you can allow the user to select more than one option to send as their answer:

somePrompt: {
  message: "Which of these are even numbers?",
  answers: [
    { message: "1" },
    { message: "2" },
    { message: "3" },
    { message: "3" },
  ],
  acceptsMultipleAnswers: true,
  // This is required. Answer-level next keys are ignored when accepting
  // multiple answers.
  nextKey: ".theNextPrompt",
}

Use Case: Custom response accepted from the user

somePrompt: {
  message: "What is your name?",
  answers: [
    // The '*' indicates that this prompt accepts a custom input from the
    // user.
    { message: "*", nextKey: ".theNextPrompt" },
  ],
  // You could even set the next key here, instead of at the answer level
  // nextKey: ".theNextPrompt",
}

Even while accepting a custom input, you can fork out based on the answer given by the user:

somePrompt: {
  message: "What is your name?",
  answers: [
    { message: "Tom", nextKey: ".factAboutMilk" },
    { message: "Jerry", nextKey: ".factAboutCheese" },
    // This now becomes sort of like a "catch-all"
    { message: "*", nextKey: ".noRelevantFacts" },
  ],
}

By default, the custom input is accepted via a text box. We plan to utilize the inputDisplayType field to support other inputs (numbers, dates etc.) in the future.

Feature: Disabling "destructive edits" to the chat

By default, the chat engine allows the user to change their answer to any prompt, at any point in the chat's history. Of course, changing a past answer means everything after that point may change as well. This is something you might not want to allow.

To disallow this for a particular prompt:

somePrompt: {
  // The rest of the definition
  forbidsDestructiveAlteration: true,
}

Feature: Scoring

Any answer within a prompt can be given a numerical score. The chat instance running your chat flow keeps a running total of the scores encountered so far, and also keeps the raw data you can use to aggregate scores in any other way you wish.

For the most part, the total is just the sum of the scores encountered. However, if a prompt is answered multiple times, then an average is taken of all such answers. An answer to a quick response with repeat answers is also considered as an answer to the original prompt.
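
A sketch of what scoring might look like, assuming a per-answer score field (the field name is an assumption, not something this README confirms):

q1: {
  message: "What is 1+1?",
  answers: [
    // "score" is a hypothetical field name used for illustration
    { message: "1", score: 0, nextKey: ".wrong" },
    { message: "2", score: 1, nextKey: ".q2" },
    { message: "3", score: 0, nextKey: ".wrong" },
  ],
},

If q1 ends up being answered twice, first scoring 0 and then 1, it contributes the average (0 + 1) / 2 = 0.5 to the running total, rather than the sum.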

Feature: Computing the answer for a prompt programmatically

If you want to bypass user input for a particular prompt and compute its answer programmatically, you can do this:

somePrompt: {
  // The rest of the definition
  answerProgramatically: true,
}

Along with this, you have to hook into the chat execution pipeline and provide the answer programmatically when asked (more on customization later).

Feature: Variable interpolation

In any prompt message, the occurrence of {{someVariable}} anywhere is treated as a variable named someVariable to be interpolated when the prompt is created. Mostly, it is the job of the programmer to provide values for interpolation by overriding ChatFlow#handleResolveVariables or implementing PromptHandler#resolveVariables.

However, there are some special variables that are interpolated by the engine itself. These are:

  • {{@.somePromptName}} - The answer provided to the prompt somePromptName is interpolated. Normally, this only makes sense when the answer to prompt somePromptName is a string.
  • {{userId}} - The userId (if any) passed into the chat context is interpolated.

Also, in any prompt message, answer message, quick response message or prompt custom data, any occurrence of {{!someVariable}} (note the "bang") is interpolated with an argument named someVariable (if any) passed as part of the flow ID. This interpolation is done at the time the flow definition is parsed, since such arguments can't change during the lifetime of a flow.
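
As a sketch of the programmer-supplied case above, here is what overriding ChatFlow#handleResolveVariables might look like (the method name appears above, but its exact signature is an assumption):

class MyFlow extends ChatFlow {
  async getDefinition(): Promise<ChatFlowDefinition> {
    return theJSONDefinitionYouCreated;
  }

  // Assumed signature: receives the variable names found in a prompt message
  // and returns a name -> value map used for interpolation
  handleResolveVariables(names: string[]): Record<string, string> {
    const values: Record<string, string> = {};
    for (const name of names) {
      if (name === "firstName") values[name] = "Ada"; // hypothetical variable
    }
    return values;
  }
}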

Feature: Jumping between different flows within the same chat

TBA

Feature: Multi-step messages

TBA

Feature: Shuffling answer choices

This feature only works for prompts with answerType === "choice". To shuffle answer choices, provide the following inputDisplayConfig in the prompt definition:

somePrompt: {
  // The rest of the definition
  inputDisplayConfig: {
    shuffleChoices: true,
    // Optionally, you can explicitly specify the order in which you want to
    // display the choices. If you leave this out, an order is generated for
    // you.
    choicesDisplayOrder: [3, 1, 0, 2], // an array of shuffled indices
  }
}

Note that the shuffled order in which the answers are displayed is maintained across quick responses with repeating answers, and across multi-step messages with repeating answers.

It is, however, not maintained for prompts that refer to the answers of the original prompt (somePrompt in the above example), because such prompts have their own independent inputDisplayConfigs. This is a conscious design decision.

Other features (TBA)

  • Handling errors
  • Loading and saving chats
  • Simulated UI delays
  • Enabling/disabling auto-answering
  • Intercepting answers
  • Tracking analytics

Publishing to npm

We use a tool called np.

  1. Install np: npm install --global np
  2. Run np and follow its instructions: np