
Continuations

Learn about the Continuation feature in the ChatBotKit API, a powerful tool that lets models carry on extended dialogues past their context limitations while keeping the conversation fluid and uninterrupted. Craft interactive and captivating conversational AI experiences with ease.

The Continuation feature in the ChatBotKit API is a robust tool that allows models with finite context sizes to engage in extended dialogues effortlessly. This feature helps developers bypass the context size hurdle, enabling the smooth generation of consistent and coherent responses throughout lengthy discussions.

By leveraging the Continuation feature, developers can push the conversation past the model's context limitations, ensuring a fluid and uninterrupted dialogue flow. This proves invaluable in situations where retaining context and producing coherent responses over prolonged interactions is essential.

Consider a practical example. The maximum context length for GPT-3.5 Turbo is 4096 tokens. If the current conversation already takes up 3500 tokens, only 596 tokens remain for the response. In many cases this is enough, but what if the last request to the bot was to generate a lengthy poem, an essay, or some other long piece of text? The model will work on the request but stop halfway through once the remaining 596 tokens are exhausted. With continuations, generation automatically restarts, freeing space from the beginning of the conversation, until the full response is received.
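The token budgeting described above can be sketched as follows. This is an illustrative model of the technique, not the ChatBotKit implementation; the names `CONTEXT_LIMIT`, `responseBudget`, and `continueGeneration` are hypothetical and exist only for this example.

```typescript
// Illustrative sketch of the token budgeting behind continuations.
// All identifiers here are hypothetical, not part of the ChatBotKit API.

const CONTEXT_LIMIT = 4096 // GPT-3.5 Turbo context window, in tokens

// Tokens left for the model's response before it would be cut off.
const conversationTokens = 3500
const responseBudget = CONTEXT_LIMIT - conversationTokens // 596

interface Message {
  tokens: number // token count of this message
}

// Once the budget is exhausted, messages are dropped from the beginning
// of the conversation to free space so generation can resume where it
// stopped, repeating until the full response fits.
function continueGeneration(
  messages: Message[],
  partialResponseTokens: number
): Message[] {
  const trimmed = [...messages]
  let used =
    trimmed.reduce((sum, m) => sum + m.tokens, 0) + partialResponseTokens
  while (used >= CONTEXT_LIMIT && trimmed.length > 0) {
    // Drop the oldest message to make room for more of the response.
    used -= trimmed.shift()!.tokens
  }
  return trimmed
}
```

For instance, with three history messages totaling 3500 tokens and a partial response of 596 tokens, the window is full, so the oldest message is dropped and generation continues against the shortened history.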

The Continuation feature allows developers to craft more interactive and captivating conversational AI experiences. It lays the groundwork for the development of chatbots and virtual assistants capable of managing intricate dialogues and delivering insightful responses, regardless of the conversation's length or complexity.