Using LLMs can improve conversation quality with fewer examples.
| Knowledge Base | ✅ | ❌ | Only supports LLMs, using Retrieval-Augmented Generation (RAG) |
1. Before you enable LLMs on Flows, you need to enable LLMs on FAQ first.
2. You can configure your OpenAI API Key on our platform or use the tokens we purchased from OpenAI. If you have other needs, please contact us: email@example.com
Tokens are consumed during training and conversations. Currently, each registered user receives 100,000 free tokens to start.
Where OpenAI tokens will be used:
- FAQ embedding
- Text embedding in Knowledge Bases
- User utterance embedding in Flows
- User utterance embedding during conversations
For more details, please visit the Pricing Page.
In the LLMs settings, you can enable the use of LLMs on Flows. Please debug/run/re-publish your bot for the new setting to take effect.
The following options are available when activating LLMs on FAQ:
- When answering the FAQ, the bot will display other related questions.
Users can click on questions that might be related to view other possible answers.
- When answering a question not in the FAQ, the bot can generate answers from the knowledge bases: using the generation capability of LLMs, it summarizes and answers the user's question. A check-source link will appear next to the answer to display the original source content.
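The retrieval step behind this knowledge-base answering can be sketched as follows. This is a minimal illustration, not the platform's actual implementation: the `embed` function here is a toy bag-of-words stand-in for a real embedding model, and the knowledge-base chunks and source names are hypothetical.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: a bag-of-words count vector. A real deployment would
    # call an embedding model (e.g. an OpenAI text-embedding endpoint);
    # this stand-in only illustrates the retrieval flow.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, knowledge_base, top_k=1):
    # Rank knowledge-base chunks by similarity to the question; the best
    # chunk (with its source, for the check-source link) is then handed
    # to the LLM to summarize into the final answer.
    q_vec = embed(question)
    ranked = sorted(knowledge_base,
                    key=lambda chunk: cosine(q_vec, embed(chunk["text"])),
                    reverse=True)
    return ranked[:top_k]

# Hypothetical knowledge-base chunks for illustration.
kb = [
    {"text": "A refund is processed within 5 business days.", "source": "billing.md"},
    {"text": "The bot supports English and Chinese.", "source": "languages.md"},
]
best = retrieve("How long does a refund take?", kb)[0]
print(best["source"])  # billing.md
```

The retrieved source field is what powers the check-source link next to the generated answer.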
Pretrained LLMs such as ChatGPT and GPT-4 can dramatically improve conversation quality. In zero-shot or few-shot settings, ChatGPT performs much better than the DIET algorithm in Rasa at intent classification and entity recognition.
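To make the zero-/few-shot contrast concrete, the sketch below builds an intent-classification prompt for a chat model. The function, intent names, and examples are all hypothetical; with an empty `examples` tuple the prompt is zero-shot, and adding labeled pairs makes it few-shot.

```python
def build_intent_prompt(utterance, intents, examples=()):
    # Hypothetical prompt builder: the resulting string would be sent to a
    # chat model such as ChatGPT, which replies with one intent label.
    lines = [
        "Classify the user utterance into exactly one of these intents: "
        + ", ".join(intents) + "."
    ]
    # Each labeled example turns the zero-shot prompt into a few-shot one.
    for text, intent in examples:
        lines.append(f'Utterance: "{text}"\nIntent: {intent}')
    lines.append(f'Utterance: "{utterance}"\nIntent:')
    return "\n\n".join(lines)

prompt = build_intent_prompt(
    "I want my money back",
    intents=["request_refund", "track_order", "greeting"],
    examples=[("Where is my package?", "track_order")],
)
print(prompt)
```

Unlike DIET, which needs many labeled utterances per intent, this style of prompting needs only the intent names and, optionally, a handful of examples.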
No, the conversation will not continue:
We're sorry, but due to insufficient funds in the merchant's account, we are unable to provide our services at this time.
Before your tokens run out, we will send an alert to your registered email address.
Yes. Please contact us: firstname.lastname@example.org and we will add it.
Please contact us: email@example.com . We can purchase more for you at the same price offered by OpenAI.
Yes. After enabling or disabling LLMs, you need to debug/run/re-publish your project.
You can think of tokens as pieces of words used for natural language processing. For English text, 1 token is approximately 4 characters or 0.75 words. As a point of reference, the collected works of Shakespeare are about 900,000 words or 1.2M tokens.
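The rules of thumb above can be turned into a quick back-of-the-envelope estimator. This is only an approximation for English text; exact counts require the model's actual tokenizer (e.g. OpenAI's Tokenizer tool mentioned below).

```python
def estimate_tokens_from_chars(text):
    # Rough rule of thumb: ~4 characters of English text per token.
    return max(1, round(len(text) / 4))

def estimate_tokens_from_words(word_count):
    # Equivalently, 1 token is about 0.75 words, so tokens ≈ words / 0.75.
    return round(word_count / 0.75)

# The Shakespeare figure quoted above: ~900,000 words ≈ 1.2M tokens.
print(estimate_tokens_from_words(900_000))  # 1200000
```

Such estimates are useful for budgeting the free 100,000 tokens before embedding a large knowledge base.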
To learn more about how tokens work, experiment with OpenAI's interactive Tokenizer tool.