How to Use DVT IDE AI Assistant in VS Code
Description
DVT AI Assistant boosts your productivity by streamlining interaction with a wide range of Large Language Models. Information collected by DVT during compilation can be used to enhance the prompt, resulting in more precise and focused replies. This video provides an overview of the setup, concepts, and main usage flows of DVT AI Assistant.
Transcript
DVT IDE AI Assistant allows you to efficiently interact with Large Language Models to generate, explain, and improve code, or perform any project-related task. Information collected by DVT during compilation can be used to enhance the prompt context, resulting in more precise and focused replies.
Setting up LLMs
A wide range of LLMs is supported, both commercial and open-weight, including OpenAI, GitHub Copilot, Google AI, and locally hosted Ollama models. Configuration details depend on the LLM provider.
You can set up several providers and dynamically choose which model to interact with. The model shown in the status bar is used by default for any new session; click its name to select a different one.
Starting a Session
Now, you can work with the AI Assistant by starting a session either in the editor or in its dedicated chat view.
Let's select this function and, from the Command Palette, run DVT AI Blueprint: Analyze and Fix the Selected Code (in editor). Observe how the LLM reply is seamlessly inserted into the current editor in real time. When finished, DVT's incremental compilation will automatically validate the changes. Should you ever get an erroneous reply, use the compare editor to revert the changes either partially or completely.
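To make the flow concrete, here is a hypothetical sketch of the kind of repair this blueprint can produce. The reverse_bits function and its off-by-one index are invented for illustration and are not the code shown in the video; the fragment is written as it would appear selected in the editor.

```systemverilog
// Hypothetical illustration, not the code from the video: a small selected
// function the blueprint might repair. The original buggy index is kept as a
// comment so the before/after is visible.
function automatic logic [7:0] reverse_bits(input logic [7:0] din);
  for (int i = 0; i < 8; i++) begin
    // reverse_bits[i] = din[8 - i];  // original: out-of-range select when i == 0
    reverse_bits[i] = din[7 - i];     // corrected index suggested in the reply
  end
endfunction
```

In the editor flow, the corrected body replaces the selection in place, and the compare editor lets you review exactly which lines changed before keeping or reverting them.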
Now let's run DVT AI Blueprint: Explain the Selected Code (in chat). Observe how the LLM generates a reply in real time. Notice that the blueprint prompt contains snippets such as @language and @selected code. Click on View Sent Message to understand how they were expanded.
You can also initiate a chat session from scratch. For example, let's create a testbench for the spi_shift module. Prefixing the module name with a hashtag turns it into a symbol, which then expands to include the module's entire source code. Of course, any message can be previewed before it is sent, to ensure IP security when working with third-party LLMs.
Both symbols and snippets enhance the prompt context by leveraging information from DVT's compilation database.
When generation is done, click Insert at cursor.
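For reference, a generated testbench might look roughly like the sketch below. The spi_shift port list used here (clk, rst, go, parallel data in, serial data out) is assumed for illustration, since the actual module interface is not shown in the transcript.

```systemverilog
`timescale 1ns / 1ps
// Hypothetical sketch: spi_shift's ports (clk, rst, go, p_in, s_out) are
// assumed for illustration and may not match the real module in the project.
module spi_shift_tb;
  logic       clk;
  logic       rst;
  logic       go;       // assumed transfer-start strobe
  logic [7:0] p_in;     // assumed parallel data input
  logic       s_out;    // assumed serial data output

  // Device under test
  spi_shift dut (
    .clk  (clk),
    .rst  (rst),
    .go   (go),
    .p_in (p_in),
    .s_out(s_out)
  );

  // 100 MHz clock
  initial clk = 1'b0;
  always #5 clk = ~clk;

  // Directed stimulus: reset, load one byte, let it shift out
  initial begin
    rst  = 1'b1;
    go   = 1'b0;
    p_in = '0;
    repeat (2) @(posedge clk);
    rst = 1'b0;

    p_in = 8'hA5;
    go   = 1'b1;
    @(posedge clk);
    go   = 1'b0;

    repeat (16) @(posedge clk);
    $display("spi_shift_tb: stimulus done");
    $finish;
  end
endmodule
```

Because the #spi_shift symbol expands to the module's real source code, the assistant can match the actual port names and widths instead of guessing them, which is exactly what the prompt preview lets you verify before sending.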