A community to discuss AI, SaaS, GPTs, and more.

Welcome to AI Forums – the premier online community for AI enthusiasts! Explore discussions on AI tools, ChatGPT, GPTs, and AI in entrepreneurship. Connect, share insights, and stay updated with the latest in AI technology.



Is There a Way to Build GPTs Without Character Limits?

Member
Messages
89
I'm looking to build my first custom GPT, and I recently learned that ChatGPT responses max out around 4,000 characters. Is it possible to create a custom GPT without hard-coded response length limits? Please explain, as I'm new to working with parameters around AI output capacity.
 
New member
Messages
4
All OpenAI models (and other LLMs) have token limits that depend on the model, so you can't exceed a given model's limit (you can convert tokens to characters with a tokenizer if that's easier to think about).
An example is shown in the following from OpenAI documentation:
For GPT-3.5-turbo models: https://platform.openai.com/docs/models/gpt-3-5-turbo
For GPT-4 models: https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo
(The "context window" token limit listed for OpenAI models refers to the maximum number of tokens a model can process in a single request, shared between the prompt (input) and the completion (output). For example, older GPT-3.5-turbo models have a 4K token limit per request. There's also a separate output token limit stated for each model in those tables. Keep an eye on OpenAI's latest changes, as the tables may not be updated in real time.)
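As a rough illustration of that shared budget, the room left for a completion is just the context window minus however many tokens the prompt used (the 4096 figure below matches the older GPT-3.5-turbo window; a minimal sketch, not real API behavior):

```python
# Sketch: how a shared context window constrains output length.
# 4096 is the window of older gpt-3.5-turbo models, per the docs above.

def available_output_tokens(prompt_tokens: int, context_window: int = 4096) -> int:
    """Tokens left for the completion after the prompt is counted."""
    return max(context_window - prompt_tokens, 0)

print(available_output_tokens(1000))  # 3096 tokens left for the reply
print(available_output_tokens(5000))  # 0 -- the prompt alone overflows the window
```

So a long prompt directly shrinks the longest possible response, which is why replies sometimes get cut off.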
Refer here to learn more about tokens and how to count them:
What are tokens and how to count them? (OpenAI Help Center)
How to count tokens with tiktoken (OpenAI Cookbook)
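If you just want a feel for sizes without installing anything, a common rule of thumb from OpenAI's token docs is roughly 4 English characters per token; the tiktoken library covered in the cookbook link gives exact counts. A quick sketch of the approximation:

```python
# Very rough token estimate: ~4 English characters per token on average.
# For exact counts, use the tiktoken library from the cookbook link above.

def estimate_tokens(text: str) -> int:
    """Approximate token count using the ~4 chars/token rule of thumb."""
    return max(1, len(text) // 4)

print(estimate_tokens("Hello, how are you today?"))  # 25 chars -> about 6 tokens
```

The real tokenizer will disagree on individual strings, but this is close enough for budgeting prompts.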
Knowing these limitations, your only real option for longer responses is to choose a model with higher token limits.
I'm not an expert in GPT programming, but this information could help you understand what to do next.
Here are my suggestions for your project:
1. Add a token counter to the input bar that warns you beforehand, so you can avoid response cut-offs
2. Give your GPT a chat history list in memory so it can save and reuse your old conversations
3. Though not a necessity, you could add a button with a pre-defined prompt like "Could you finish the example you gave in the <.......> paragraph of your previous response?" for cases where the response was cut off. Just an example!
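Suggestion 1 could be sketched like this (the reserved-reply budget and warning text are made up for illustration; the GPT builder interface itself may not expose such a hook):

```python
# Sketch of suggestion 1: warn before sending if the prompt risks a cut-off reply.
# The context window and reserved-reply numbers are illustrative, not real limits.

def check_prompt(prompt: str, context_window: int = 4096,
                 reserved_for_reply: int = 1000) -> str:
    est_tokens = len(prompt) // 4  # rough ~4 chars/token estimate
    if est_tokens > context_window - reserved_for_reply:
        return f"Warning: ~{est_tokens} tokens leaves little room for the reply"
    return f"OK: ~{est_tokens} tokens, room to spare"

print(check_prompt("Short question about GPTs."))
```

Wiring something like this into the input bar would tell you before you hit send, instead of finding out from a truncated answer.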
These ideas might trigger some thoughts on your part.
I hope this helped.
 
Member
Messages
30
If you use GPT-4 Turbo, it allows up to 128,000 tokens of context. So a lot more! Offhand, I don't know whether you can select which model your GPT uses.
 