Getting Started
To enable the AI Assistant, you need access to one or more LLM providers. Setup details vary by provider.
The supported providers can be configured as follows:
- OpenAI
  - Set the environment variable `OPENAI_API_KEY='...'`.
- Azure OpenAI
  - Set the environment variables `OPENAI_API_VERSION='...'`, `AZURE_OPENAI_API_KEY='...'`, and `AZURE_OPENAI_ENDPOINT='...'` (or `OPENAI_BASE_URL='...'`).
- Anthropic
  - Set the environment variable `ANTHROPIC_API_KEY='...'`.
- Google AI
  - Set the environment variable `GOOGLE_AI_API_KEY='...'`.
- GitHub Copilot (only in VS Code)
  - Install the GitHub Copilot Chat extension.
  - Sign in with your GitHub account.
  - When asked, allow DVT to use Copilot.
  - To configure which models are available, see Configuring access to AI models in Copilot.
- Ollama
  - Works out of the box when Ollama runs locally on its default port (11434).
  - Otherwise, set the environment variable `OLLAMA_HOST='...'`.
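For the environment-variable based providers above, the setup can be sketched as shell exports, e.g. in `~/.profile`. All `'...'` values are placeholders for your own credentials, and the Azure endpoint and Ollama host shown are hypothetical examples, not values prescribed by DVT:

```shell
# Placeholders only -- replace '...' with your real credentials.

# OpenAI
export OPENAI_API_KEY='...'

# Azure OpenAI
export OPENAI_API_VERSION='...'
export AZURE_OPENAI_API_KEY='...'
export AZURE_OPENAI_ENDPOINT='https://my-resource.openai.azure.com'   # hypothetical resource URL

# Anthropic
export ANTHROPIC_API_KEY='...'

# Google AI
export GOOGLE_AI_API_KEY='...'

# Ollama -- only needed when it is NOT on localhost:11434
export OLLAMA_HOST='http://my-server:11434'   # hypothetical host
```

Exports placed in a shell startup file apply to any IDE launched from that shell.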
Note
You must set the environment variables before starting Eclipse.
When using VS Code, you can also provide these values through the corresponding setting, which is especially useful when running through Remote-SSH.
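Because the variables must be visible to the IDE process, one simple approach is to export them and start the IDE from the same shell so it inherits them. This is a sketch: `eclipse` stands in for your actual Eclipse launcher path, and the variable shown is just one of the providers' keys:

```shell
# Export in the current shell, then launch the IDE from that same shell
# so the IDE process inherits the variable.
export OPENAI_API_KEY='...'        # placeholder value
eclipse &                          # hypothetical launcher path; for VS Code: code .
```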
See the Advanced LLM Configuration chapter for additional options that can be used to configure providers in more complex scenarios (short-lived API keys, proxies, certificates or custom provider options).
To confirm that the LLM provider is set up correctly, check the VS Code status bar or the Eclipse toolbar: a default model should be selected, and the list of models should include one or more models from the enabled providers.
You can now work with the AI Assistant in the editor or in its dedicated chat view. To try it out, select some code and run the command DVT AI Blueprint: Explain the Selected Code. A new chat session starts and the LLM explains the selected code.