9.8.1 Why do I get unexpected replies?

There are a couple of things you can do to improve the replies:
Prompt engineering
To increase the quality of a reply, follow these guidelines when writing a request to the LLM:
Be Specific and Clear: Clearly define your question or task to minimize ambiguity. Specific questions and tasks usually lead to more accurate and relevant responses.
Provide Context: Give the model adequate context to work with. This can include relevant code sections and accurate domain specific information. Use Prompt Snippets to easily do this.
Use Examples: Include examples in your prompt. This can help the LLM “understand” the format or style you’re aiming for in the response.
Set Explicit Constraints: If you have specific constraints regarding length, format, or content, make these clear in your prompt. Save the constraints as Custom Snippets and reuse them whenever you need them.
Iterate and Refine: Experiment with different phrasings and structures to find what works best. Small adjustments can often lead to significant improvements in the quality of generated replies. The Chat features are designed to help you refine the prompt.
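As a purely hypothetical illustration (the task, function name, and constraints below are invented), a prompt that applies these guidelines might look like:

```text
Task: Refactor the function parse_date below so it also accepts
ISO 8601 timestamps.

Context: The function is part of a logging module that currently
only parses "DD/MM/YYYY" strings. Relevant code:
    def parse_date(s): ...

Example: parse_date("2024-05-01T12:00:00Z") should return a
datetime object.

Constraints: Keep the existing signature, use only the standard
library, and reply with the updated function plus a one-paragraph
explanation.
```

Each element maps to one guideline above: a specific task, adequate context, a concrete example, and explicit constraints.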
Regenerate the reply
Generate a new reply from the LLM by clicking the Regenerate button above the LLM reply in the chat. Due to the LLM's non-deterministic nature, this can lead to different responses that may be more suitable. Alternatively, you can use natural language and ask the LLM to generate additional solutions to your problem.
Try a different LLM
Sometimes a specific LLM is simply not suited to the task you are trying to accomplish. You can easily change the LLM using the Set Default Language Model command, or regenerate just a specific reply with a different model using the Switch Language Model dropdown above the reply. If you're using a local Ollama model, try a version of that model with more parameters if your hardware allows, or switch to a completely different model.
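Fetching a larger variant can be done with the Ollama CLI. A minimal sketch, assuming Ollama is installed locally (the model tags below are examples; check the Ollama model library for the tags actually available):

```shell
# See which models are already available locally
ollama list

# Pull a larger variant of the same model family
# (e.g. an 8B-parameter tag instead of a smaller one)
ollama pull llama3:8b

# Or try a completely different model
ollama pull mistral
```

After pulling, select the new model via the Set Default Language Model command or the Switch Language Model dropdown.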