Reference

List of Commands

AI Assistant provides a list of commands that you can quickly invoke using Quick Access (Ctrl+3) in Eclipse or the Command Palette (Ctrl+Shift+P) in VS Code.

Some of these commands can also be invoked from the editor’s context menu or using the buttons available in the chat view. In general, the list is filtered so that only commands applicable in the current context are shown.

Commands for built-in blueprints appear in the list prefixed with DVT AI Blueprint. The rest of the commands are prefixed with DVT AI.

Stop Generation

  • Stop LLM reply generation.

Set Default Language Model

  • Set the default LLM that will be used from now on for new sessions.

Start a New Session from Blueprint

  • List all the built-in and custom blueprints.

  • Selecting a blueprint starts a new session.

Start a New Chat Session

  • Start an empty chat session.

Run Last Session in Chat

  • Rerun the last session in the chat view, regardless of where it ran previously.

Predefined Blueprints

AI Assistant comes with a set of built-in blueprints for common tasks:

Explain the selected code (chat)

  • Asks the LLM to explain the selected code in a concise manner.

  • The user request defined by the blueprint and the LLM reply appear in the chat. You can change the request, regenerate the reply or continue the conversation.

Add comments to the selected code (editor)

  • Asks the LLM to add comments above the selected code.

  • Changes appear in the editor. You can review and accept or revert the generated code.

Improve the selected code (editor)

  • Asks the LLM to suggest improvements for the selected code.

  • Improvements should appear as comments above the lines that need improvement.

  • Changes appear in the editor. You can review and accept or revert the generated code.

Analyze and fix the selected code (editor)

  • Asks the LLM to find and fix any problems identified in the selected code.

  • Fixes should appear in the editor alongside a comment explaining the bug and the fix.

  • Changes appear in the editor. You can review and accept or revert the generated code.

Note

All built-in blueprints provide additional context to the LLM (e.g. the whole file containing the selection).

Custom Blueprints

AI Assistant allows you to define custom blueprints. The easiest way to create a custom blueprint is by:

  • Saving a chat request using the Save Message as Blueprint button

  • Saving a chat session using the Save Chat Session as Blueprint button

A blueprint file has the following format:

import { SessionBlueprint } from "./@api/v1"
export default {
    // Unique name used to identify the blueprint and to overwrite a built-in or custom blueprint
    name: 'Write a 4-bit counter',
    // The assistant's reply to the blueprint's messages will target the specified component
    // ('chat' | 'editor')
    target: 'editor',
    // Editor action when the assistant's reply targets the editor
    // ('replace' | 'insert')
    action: 'replace',
    // The messages (requests and replies) used to create the new session:
    // - At least one message must be present
    // - User and assistant messages must alternate
    // - Sessions started from this blueprint will automatically pull a reply from the LLM when the last message is a user message
    // - When targeting the editor, the last blueprint message must be a user message to which the LLM will reply in the editor
    messages: [{
        // Roles can be 'user' or 'assistant'
        role: 'user',
        content: `Write a 4-bit counter in @language.`
    }]
} satisfies SessionBlueprint

The blueprint must be a valid TypeScript file with the .ai.ts extension.

AI Assistant looks for the blueprint files in these locations:

  • $HOME/.dvt/ai/*.ai.ts

  • <project>/.dvt/ai/*.ai.ts
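As a further illustration, here is a hypothetical multi-message blueprint following the format above. The name and message contents are invented for this sketch; it demonstrates the alternation rule, and since the last message has the user role, sessions started from this blueprint immediately pull a reply from the LLM into the editor.

```typescript
import { SessionBlueprint } from "./@api/v1"
export default {
    // Hypothetical example; the name and message contents are invented
    name: 'Write assertions for the selection',
    // Replies go to the editor, inserted at the current position
    target: 'editor',
    action: 'insert',
    // User and assistant messages alternate; the last one is a user message,
    // so the LLM replies in the editor as soon as the session starts
    messages: [{
        role: 'user',
        content: 'Act as a @language verification engineer.'
    }, {
        role: 'assistant',
        content: 'Understood. Share the code you want me to work on.'
    }, {
        role: 'user',
        content: 'Write assertions for the following code: @selected <type:code>'
    }]
} satisfies SessionBlueprint
```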

Predefined Snippets

AI Assistant provides a library of prompt snippets with different intents:

  • Structured information from DVT’s database.

  • Specific code sections from your project.

  • Reusable instructions for the LLM.

Snippets are presented with their full syntax and options. Snippet parts starting with a vertical bar ‘|’ are optional.

@language

  • Languages used in the code selection or in the project (e.g. SystemVerilog, VHDL).

@selected <type:…> | wrap between <prefix:string> <suffix:string>

  • Type options: code, file, container, element, action_block, method, port_list, class, module, interface, package, entity, architecture, configuration.

  • Code sections based on the editor selection or cursor position.

  • The selection will be adjusted to represent the specified type (e.g. the method where the cursor is placed, the entire class or module, or the full content of the file).

@usages of selected element | limit to <max:number> | separate with <separator:string> | wrap between <prefix:string> <suffix:string>

@usages of <sym:symbol> | limit to <max:number> | separate with <separator:string> | wrap between <prefix:string> <suffix:string>

  • Code sections with context and the usages of the selected element or specified #symbol.

  • By default, 5 usages are collected from your project; use limit to <max:number> to change this number.

@examples of <type:…> | limit to <max:number> | separate with <separator:string> | wrap between <prefix:string> <suffix:string>

  • Type options: uvm_agent, uvm_component, uvm_driver, uvm_env, uvm_mem, uvm_monitor, uvm_object, uvm_reg, uvm_reg_adapter, uvm_reg_backdoor, uvm_reg_block, uvm_reg_field, uvm_reg_fifo, uvm_reg_file, uvm_reg_frontdoor, uvm_reg_map, uvm_reg_predictor, uvm_reg_sequence, uvm_scoreboard, uvm_sequence, uvm_sequence_item, uvm_sequencer, uvm_test.

  • Code sections with classes implementing the specified UVM component.

  • By default, 5 examples are collected from your project; use limit to <max:number> to change this number.

@outline of selected <type:…>

@outline of <symbol:symbol>

  • Type options: file, element, container.

  • Tree structured outline of the selected file/element/container (based on the editor selection or cursor position) or of the specified #symbol.

  • Useful for providing summarized information about specific parts of the project without sending the full source code.

@recent code sections | limit to <max:number> | separate with <separator:string> | wrap between <prefix:string> <suffix:string>

@recent code sections from open editors | limit to <max:number> | separate with <separator:string> | wrap between <prefix:string> <suffix:string>

  • Recently visited code sections from any file or restricted to the currently opened editors.

  • Useful for providing information about the current task based on the code sections recently visited.

  • By default, the last 5 code sections are provided; use limit to <max:number> to change this number.

@design hierarchy | expand up <up_levels:number> | expand down <down_levels:number>

@verification hierarchy | expand up <up_levels:number> | expand down <down_levels:number>

  • Tree structure representation of the design or verification hierarchy starting from the current editor scope.

@replicate selected pattern <N:number> times

  • Reusable task asking the LLM to replicate the currently selected code pattern N times.

  • Useful for generating repetitive sections of code that follow a pattern (e.g. 1 2 3 …).

@reply only with code

  • Reusable instruction asking the LLM to reply only with valid code in the language of the current code selection or project. It also forbids the LLM from using Markdown to format its reply.

  • Useful in editor sessions that redirect the replies to the editor.

Custom Snippets

AI Assistant allows you to define custom snippets. The easiest way to create a custom snippet is by saving a chat request using the Save Message as Snippet button.

A snippet file has the following format:

import { PromptSnippet } from "./@api/v1";
export default {
    // Unique name used to identify the snippet and to overwrite a built-in or custom snippet
    name: 'Intro',
    // Snippet signature used to refer to this snippet
    signature: '@intro',
    // Whether to expand snippets and symbols nested in the prompt string
    expand: true,
    // The prompt string (snippet expansion)
    prompt: 'Act as a @language engineer.'
} satisfies PromptSnippet

The snippet must be a valid TypeScript file with the .ai.ts extension.

AI Assistant looks for the snippet files in these locations:

  • $HOME/.dvt/ai/*.ai.ts

  • <project>/.dvt/ai/*.ai.ts
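As a further illustration, here is a hypothetical snippet that nests another snippet. The name, signature, and prompt are invented for this sketch; because expand is true, the nested @language snippet is expanded whenever the signature is used in a prompt.

```typescript
import { PromptSnippet } from "./@api/v1";
export default {
    // Hypothetical example; name, signature and prompt are invented
    name: 'Reviewer role',
    // Typing @reviewer in a prompt inserts this snippet
    signature: '@reviewer',
    // Expand the nested @language snippet when this snippet is used
    expand: true,
    prompt: 'Act as a senior @language code reviewer. Be concise.'
} satisfies PromptSnippet
```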

LLM Configuration

AI Assistant supports the following providers:

  • OpenAI (openai)

  • AzureOpenAI (azure-openai)

  • Anthropic (anthropic)

  • Google AI (google)

  • GitHub Copilot (copilot)

  • Ollama (ollama)

The fastest way to set up most providers is through provider-specific environment variables:

  • OpenAI
    • OPENAI_API_KEY (required)

    • OPENAI_BASE_URL (optional)

  • AzureOpenAI
    • OPENAI_API_VERSION (required)

    • AZURE_OPENAI_API_KEY (required)

    • OPENAI_BASE_URL or AZURE_OPENAI_ENDPOINT (required)

  • Anthropic
    • ANTHROPIC_API_KEY (required)

    • ANTHROPIC_BASE_URL (optional)

    • ANTHROPIC_MODELS (optional, comma-separated values)

  • Google AI
    • GOOGLE_AI_API_KEY (required)

  • Ollama
    • OLLAMA_HOST (required)
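For example, the OpenAI provider could be configured from a shell before launching Eclipse or VS Code; the key and URL below are placeholders, not real values.

```shell
# Required: API key for the OpenAI provider (placeholder value)
export OPENAI_API_KEY="sk-example-key"
# Optional: custom endpoint, e.g. for an OpenAI-compatible proxy
export OPENAI_BASE_URL="https://api.openai.com/v1"
```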

GitHub Copilot is available only in VS Code. The process to set up GitHub Copilot in VS Code involves the following steps:

  • Install the GitHub Copilot Chat extension.

  • Sign in with your GitHub account.

  • When asked, allow DVT to use the Copilot features.

AI Assistant allows you to refine the configuration for advanced use cases, for example:

  • To change the selection of models presented in the AI Assistant’s interface.

  • To use advanced options with the LLM provider (timeout, max_retries, …).

  • To change the LLM parameters (temperature, top_p, …).

To accomplish this, the configuration needs to be written in a JSON file using the following structure:

{
  "models":[{
      "provider": "openai",
      "model": "gpt-4*",
      "modelOptions": {
          "temperature": 0.3,
          ...
      },
      "providerOptions": {
          "apiKey": "...",
          ...
      }
  }]
}

The “models” array may contain any number of configurations.

The “modelOptions” and “providerOptions” are specific to each model and provider. Refer to the vendor documentation for the available options.
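For example, a configuration could define options for two providers at once. This is a sketch: the option names below (temperature, top_p, timeout, max_retries) are taken from the examples mentioned earlier in this section, and the exact set of supported options depends on each vendor.

```json
{
  "models": [{
      "provider": "openai",
      "model": "gpt-4*",
      "modelOptions": {
          "temperature": 0.3,
          "top_p": 0.9
      },
      "providerOptions": {
          "timeout": 60,
          "max_retries": 2
      }
  }, {
      "provider": "anthropic",
      "model": "claude*",
      "modelOptions": {
          "temperature": 0.2
      }
  }]
}
```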

AI Assistant looks for the JSON configuration in these locations:

  • $HOME/.dvt/ai/models.json

  • <project>/.dvt/ai/models.json