@curiecode/lamechain v0.0.41970 • Published 12 months ago • License: LAMC

@curiecode/lamechain - a code by oliver

LameChain

pipeable, trainable, JSON-mediated ChatGPT conversations for the smooth-brained dev

Overview

This code is a collection of tools for templating, communicating, and parsing results from ChatGPT in a composable way. The express intent is to use prompt engineering as a way to build "micro-models" for a specific job, and through a sort of functional composition, build these conversations into complex structures/interactions/data pipelines.
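As an illustration of the idea (a hypothetical sketch, not the library's actual internals): a "micro-model" boils down to a prompt template plus a JSON contract for its input and output, and that declared contract is what makes stages composable.

```typescript
// Hypothetical sketch of the JSON-mediated pattern; names like PromptConfig
// and buildPrompt are illustrative, not part of the actual package API.
interface PromptConfig {
    overallContext: string;
    inputProperties: Record<string, string>;
    responseProperties: Record<string, string>;
}

// Render the config and a typed input into a single prompt that asks the
// model to answer in a fixed JSON shape.
function buildPrompt(config: PromptConfig, input: Record<string, string>): string {
    return [
        config.overallContext,
        `Input: ${JSON.stringify(input)}`,
        `Respond ONLY with JSON matching: ${JSON.stringify(config.responseProperties)}`,
    ].join('\n');
}

// Parse the model's reply back into a typed object; because each stage's
// output shape is declared up front, it can feed the next stage's input.
function parseReply<T>(reply: string): T {
    return JSON.parse(reply) as T;
}
```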

I was recently made aware of LangChain and found out that there exist rigorous solutions to this problem in the Python and emerging Typescript space (LangChain has TS support).

However, my smooth brain skated over their documentation like a maglev train on its way to Simpletown. I would recommend that before anyone even considers using my code or anything like it, they assess whether a more rigorous solution exists for their use-case (mine is in a custom, home-brewed, half-baked game architecture, and I've decided to keep the third-party libraries as light as possible, which is why I am not porting it to LangChain).

Package Info

  • Basically all Typescript
  • 0% Test Coverage
  • Used By No One but Me
  • Not Semantically Versioned (Yet)
    • The current version is inaccurate. Version 0.0.0.0.0.0.01 is accurate but NPM won't let me be accurate.
    • Do not use this code if you want stable software
  • Super Experimental; updating this as I improve my use-case for it.
  • Feel free to use, contribute, and etc. at your own risk. I just ask that you read about the license.

Installation:

On your machine,

  • Clone: git clone https://github.com/curiecode/lamechain.git

  • Yarn: yarn add @curiecode/lamechain

  • NPM: npm install @curiecode/lamechain

In the project,

  • yarn to install modules
  • yarn ex:train to run the example training script
  • yarn ex:pipe to run the example pipe script
  • more utils to come

Usage:

The general pattern is to declare a conversation with some intent, some rules & restrictions, and a stated format for input and output. After doing so, messages can be sent through the conversation and received in type-safe objects rather than strings. These conversations support training and piping, but the general interface is as follows:

    import { JsonConversation } from '@curiecode/lamechain';

    const model = new JsonConversation({ logger: console }, {
        ... // <--- Prompt configuration, read below
    });

    await model.send({
        someInput: 'my typed input object'
    });

    // My typed output object:
    const { someOutput } = model.message();

An example in practice; the following model generates knock-knock jokes:

model.ts:

    import { JsonConversation } from "@curiecode/lamechain";

    export const model = new JsonConversation({
        logger: console
    }, {
        config: {
            overallContext: 'tell me jokes',
            motivations: 'take an input string, and make a joke about it',
            rulesAndLimitations: [
                `always include the phrase KNOCK KNOCK, and WHO IS THERE? in your joke`,
                `the joke should be really, really funny like something kurt vonnegut wrote`,
            ]
        },
        inputProperties: {
            jokePrompt: 'a phrase for you to make a joke about'
        },
        responseProperties: {
            jokeString: 'the really funny joke that you invented'
        }
    });

    // elsewhere, in an async context:
    await model.send({ jokePrompt: 'using chatGPT to tell jokes' });
    console.log(`Generated Joke: ${model.message().jokeString}`);

Training

Training conversations involves giving them a set of objects which match the input-output interface. The inputs and outputs are fed through ChatGPT with a slightly modified prompt which asks ChatGPT to validate that the example maps to its stochastic parrot brain or whatever; if so, we proceed with a normal conversation. If not, the conversation will throw an error on giveExample.
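The handshake described above can be sketched roughly as follows (an illustration only, not the library's actual code; in the real library the validation step is an async round-trip to ChatGPT with a modified prompt):

```typescript
// Hypothetical sketch of the training handshake: each example is checked
// first, and a rejection throws on giveExample rather than silently
// polluting the conversation history.
type Validator = (input: object, output: object) => boolean;

const history: Array<{ input: object; output: object }> = [];

function giveExample(validate: Validator, input: object, output: object): void {
    if (!validate(input, output)) {
        // The model did not agree that this input maps to this output.
        throw new Error(`Example rejected: ${JSON.stringify({ input, output })}`);
    }
    // Accepted examples become part of the conversation's context.
    history.push({ input, output });
}
```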

It is recommended (by me) to use these kinds of examples for any complex prompt. Anecdotally, they seem to be very useful. I don't have a good recommendation on the number of examples, but I would suggest a minimum set that covers your different edge-cases.

    import { TrainedConversation } from "@curiecode/lamechain";
    import { jokeModel } from './docs/examples/shitModels/jokes';

    const trainedModel = new TrainedConversation(jokeModel);

    await trainedModel.giveExample({
        jokePrompt: `no one home in oliver's head`
    }, {
        jokeString: `KNOCK KNOCK / Who's there? / Literally no one, my brain is empty af.`
    });

    await trainedModel.giveExample({
        jokePrompt: `pete townshend`
    }, {
        jokeString: `KNOCK KNOCK / Who's there? / A Who / What?  I'm confused`
    });

    await trainedModel.send({
        jokePrompt: 'some joke prompt'
    });

    const { jokeString } = trainedModel.message();

    // ...

Pipes

The conversation class provides a method pipe which accepts another conversation; the piper (calling conversation) must have the same responseProperties as the inputProperties of the pipee (pipe conversation parameter). This allows the decomposition of various tasks that OpenAI would normally have difficulty with due to complexity or scope; a problem broken into several distinct problems can be approached by having OpenAI provide a response for each distinct component of the problem. An example follows, in which we run the output of the above joke-generator through a model determining if the joke is funny or not:

jokeDeterminer.ts:

    import { JsonConversation } from "@curiecode/lamechain";

    export const model = new JsonConversation({
        logger: console
    }, {
        config: {
            overallContext: 'tell me if a joke is quality',
            motivations: 'take an input KNOCK KNOCK joke, and tell me if it is funny',
            rulesAndLimitations: [
                `some antijokes may not always have the WHO IS THERE part`,
            ]
        },
        inputProperties: {
            // must match the responseProperties of the conversation piping into this one
            jokeString: 'a phrase for you to judge the funniness of'
        },
        responseProperties: {
            jokeJudgement: 'a judgement of how funny the joke is'
        }
    });

Wiring the two models together:

    import { model as jokeModel } from '../the/#usage/example';
    import { model as jokeDeterminerModel } from '../the/above/section';

    jokeModel.pipe(jokeDeterminerModel);
    await jokeModel.send({ jokePrompt: 'a joke about pipes' });
    const jokeThatWasGenerated = jokeModel.message();
    const jokeDetermination = jokeDeterminerModel.message();

    console.log({
        joke: jokeThatWasGenerated.jokeString,
        jokeIsFunny: jokeDetermination.jokeJudgement
    });
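The compatibility rule stated above (the piper's responseProperties must line up with the pipee's inputProperties) can be sketched as a simple key comparison (hypothetical, not the library's actual check):

```typescript
// Hypothetical sketch: two conversations can be piped only when the keys of
// the piper's responseProperties exactly match the keys of the pipee's
// inputProperties, so the typed hand-off is well-defined.
function canPipe(
    piperResponseProps: Record<string, string>,
    pipeeInputProps: Record<string, string>
): boolean {
    const a = Object.keys(piperResponseProps).sort();
    const b = Object.keys(pipeeInputProps).sort();
    return a.length === b.length && a.every((key, i) => key === b[i]);
}

canPipe({ jokeString: '...' }, { jokeString: '...' }); // compatible
canPipe({ jokeString: '...' }, { jokeOutput: '...' }); // mismatched keys
```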

License

This project is licensed under the Love All My Cats (LAMC) Public License

You need to love my cats to use this code. If you do not, you're actually legally not allowed to use it; there's a whole license file that you should really read if you want to use this code.

Curie

Anastasia
