It's intriguing to consider how AI models might handle structured instructions with delimiters or system prompts. While completely isolating sections from instruction influence can be tricky, experimenting with clear section markers or metadata tagging could provide useful insights. Have you had a chance to test this approach with any models yet? If so, what results did you observe? This could open up a fascinating discussion on how to optimize prompt engineering for various AI applications.

That's an interesting idea! While most AI models follow structured instructions, they generally process the entire input as a whole. Some models support delimiters or system prompts that define specific behavior, but completely isolating sections from instruction influence can be tricky. You might experiment with clear section markers (e.g., "Ignore instructions beyond this point") or metadata tagging, but models may still interpret the content contextually. Have you tested this approach with any models yet?
BTW, does the usage of AI affect our mental health in any way?