Getting Started

To enable the AI Assistant you need access to one or more LLM providers. Setup details vary by provider.

The supported providers can be set up as follows:

  • OpenAI
    • Set the environment variable OPENAI_API_KEY='...' before starting DVT.

  • Anthropic
    • Set the environment variable ANTHROPIC_API_KEY='...' before starting DVT.

  • Google AI
    • Set the environment variable GOOGLE_AI_API_KEY='...' before starting DVT.

  • GitHub Copilot (VS Code only)
    • Install the GitHub Copilot Chat extension.

    • Sign in with your GitHub account.

    • When prompted, allow DVT to use Copilot.

  • Ollama
    • Works out of the box when Ollama runs locally on its default port (11434).

    • Otherwise set the environment variable OLLAMA_HOST='...' before starting DVT.
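The environment variables above are set in your shell before launching DVT. A minimal sketch for bash/zsh follows; the key values are placeholders (substitute your real keys), and the OLLAMA_HOST address is a hypothetical example of a non-default Ollama location:

```shell
# Set only the variables for the providers you actually use,
# then start DVT from the same shell session.
export OPENAI_API_KEY='...'      # OpenAI
export ANTHROPIC_API_KEY='...'   # Anthropic
export GOOGLE_AI_API_KEY='...'   # Google AI

# Ollama: only needed when it is NOT running locally on the
# default port 11434 (hypothetical remote address shown).
export OLLAMA_HOST='http://ollama.example.com:11434'
```

On Windows, the equivalent is setx or a System Properties environment-variable entry; either way, the variables must be visible to the process that launches DVT.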

Note

For advanced LLM configuration see the LLM Configuration chapter.

To confirm that the LLM provider is properly configured, check the VS Code status bar or the Eclipse toolbar: a default model should be selected, and the model list should include one or more models from the enabled providers.

[Screenshots: default model selector in Eclipse and in VS Code]

You can now work with the AI Assistant in the editor or in its dedicated chat view. To try it out, select some code and run the command DVT AI Blueprint: Explain the Selected Code; a new chat session starts and the LLM explains the selected code.