.. _Getting Started AI:

Getting Started
===============


To enable the AI Assistant, you need access to one or more LLM providers; the setup details differ for each provider.

The supported providers can be configured quickly as follows:


-  **OpenAI**
    -  Set the environment variable ``OPENAI_API_KEY='...'``.

-  **Azure OpenAI**
    -  Set the environment variables ``AZURE_OPENAI_API_KEY='...'``, ``AZURE_OPENAI_DEPLOYMENT_NAME='...'`` and (``AZURE_OPENAI_RESOURCE_NAME='...'`` or ``AZURE_OPENAI_BASE_URL='...'``).

-  **OpenAI-Compatible**
    -  Set the environment variables ``OPENAI_COMPATIBLE_BASE_URL='...'`` and ``OPENAI_COMPATIBLE_MODELS='...'``.

-  **Anthropic**
    -  Set the environment variable ``ANTHROPIC_API_KEY='...'``.

-  **Google AI**
    -  Set the environment variable ``GOOGLE_AI_API_KEY='...'``.

-  **Google Vertex AI**
    -  Set the environment variable ``GOOGLE_VERTEX_API_KEY='...'`` or (``GOOGLE_VERTEX_PROJECT='...'``, ``GOOGLE_VERTEX_LOCATION='...'``, and ``GOOGLE_VERTEX_SERVICE_ACCOUNT_KEY_FILE='...'``).

-  **Amazon Bedrock**
    -  Set the environment variable ``AWS_REGION='...'`` and either ``AWS_BEARER_TOKEN_BEDROCK='...'`` or (``AWS_ACCESS_KEY_ID='...'`` and ``AWS_SECRET_ACCESS_KEY='...'``).

-  **GitHub Copilot** (only in VS Code)
    -  Install the GitHub Copilot Chat extension.
    -  Sign in with your GitHub account.
    -  When prompted, allow DVT to use Copilot.
    -  To configure which models are available, see `Configuring access to AI models in Copilot <https://docs.github.com/en/copilot/how-tos/ai-models/configuring-access-to-ai-models-in-copilot>`_.

-  **Ollama**
    -  Works out of the box when Ollama runs locally on its default port (11434).
    -  Otherwise set the environment variable ``OLLAMA_HOST='...'``.
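For example, when launching the IDE from a shell, the provider variables can be exported beforehand (a minimal sketch; the key and host values are placeholders, substitute your real credentials):

.. code-block:: shell

    # Placeholder value -- replace with your real API key.
    export OPENAI_API_KEY='sk-...'

    # Only needed when Ollama is not running on its default localhost:11434.
    export OLLAMA_HOST='http://localhost:11434'

    # Confirm the variables are visible to child processes,
    # then start Eclipse or VS Code from this same shell.
    printenv OPENAI_API_KEY
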

.. note:: 
    
    You must set the environment variables before starting Eclipse.
    
    When using VS Code, you can also use the :menuselection:`DVT --> Environment --> Variables` setting, especially when running through Remote-SSH.
    
    
See the :ref:`Advanced LLM Configuration` chapter for additional options that can be used to configure providers in more complex scenarios (short-lived API keys, proxies, certificates or custom provider options).

To confirm that the LLM provider is set up correctly, check the VS Code status bar or the Eclipse toolbar. Verify that a default model is selected and that the model list includes one or more models from the enabled providers.

.. figure:: ../../images/common/ai-default-model-eclipse.png
	:align: center


.. figure:: ../../images/common/ai-default-model-vscode.png
	:align: center


You can now work with the AI Assistant in the editor or in its dedicated chat view. To try it out, select some code and run the command **DVT AI Blueprint: Explain the Selected Code**. A new chat session starts and the LLM explains the selected code.

