Supercharge Neovim with Avante.nvim and Custom AI Models

If you're using a modern Neovim setup like kickstart.nvim, you're already on the fast track. But what if you could integrate a powerful, multi-provider AI assistant directly into your editor? Enter avante.nvim, a versatile AI plugin that works with GitHub Copilot and lets you switch easily between several AI models instead of being locked into one.

The Setup: Kickstart.nvim

For those using kickstart.nvim, the integration is seamless. You can add your avante.nvim configuration in a new file at lua/custom/plugins/avante.lua. This keeps your setup clean and modular.
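
Note that kickstart.nvim only picks up files under lua/custom/plugins/ if the custom-plugins import is enabled. A minimal sketch, assuming a recent kickstart layout where plugins are loaded through lazy.nvim, is to uncomment the import line in init.lua:

-- init.lua (kickstart.nvim)
require('lazy').setup({
  -- ...other plugin specs...
  { import = 'custom.plugins' }, -- uncomment so files like avante.lua are loaded
})

With that import in place, any file in lua/custom/plugins/ that returns a plugin spec is loaded automatically.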

The Configuration: Your AI Model Catalog

The real power comes from the configuration. Here’s a detailed example of how you can set up avante.nvim to work with multiple models through GitHub Copilot.

-- lua/custom/plugins/avante.lua
return {
  'yetone/avante.nvim',
  event = 'VeryLazy',
  version = false, -- Never set this value to "*"! Never!
  opts = {
    -- Set your default providers here
    provider = 'copilot/gemini-2.5',
    auto_suggestions_provider = 'copilot/gpt-4.1',
    mode = 'legacy', -- agentic | legacy

    -- Define all the models you want to access
    providers = {
      ['copilot/claude-4.0'] = {
        __inherited_from = 'copilot',
        model = 'claude-4.0',
        display_name = 'copilot/claude-4.0',
        extra_request_body = {
          max_tokens = 65536,
        },
        disable_tools = true,
      },
      ['copilot/gpt-4.1'] = {
        __inherited_from = 'copilot',
        model = 'gpt-4.1',
        display_name = 'copilot/gpt-4.1',
        extra_request_body = {
          max_tokens = 65536,
        },
        disable_tools = true,
      },
      ['copilot/gemini-2.0'] = {
        __inherited_from = 'copilot',
        model = 'gemini-2.0-flash-001',
        display_name = 'copilot/gemini-2.0-flash',
        extra_request_body = {
          max_tokens = 65536,
        },
        disable_tools = true,
      },
      ['copilot/gemini-2.5'] = {
        __inherited_from = 'copilot',
        model = 'gemini-2.5-pro',
        display_name = 'copilot/gemini-2.5-pro',
        extra_request_body = {
          max_tokens = 65536,
        },
        disable_tools = true,
      },
    },
  },
  -- if you want to build from source then do `make BUILD_FROM_SOURCE=true`
  build = 'make',
  dependencies = {
    'nvim-treesitter/nvim-treesitter',
    'stevearc/dressing.nvim',
    'nvim-lua/plenary.nvim',
    'MunifTanjim/nui.nvim',
    'zbirenbaum/copilot.lua', -- for providers='copilot'
    -- other optional dependencies...
  },
}
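
Once lazy.nvim has installed the plugin and run the make step, restart Neovim and try it out. In recent versions of avante.nvim the default mappings include the following (they may differ in your version or be overridden by your own keymaps; consult the plugin's README if so):

-- Default mappings in recent avante.nvim versions:
--   <leader>aa  open the Avante sidebar and ask about the current buffer
--   <leader>ae  edit the current selection with the active model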

Switching Models with Ease

The most interesting part of this configuration is the providers table and the top-level provider setting.

Picking Your Default Model

At the top of the opts, you'll see:

provider = 'copilot/gemini-2.5',
auto_suggestions_provider = 'copilot/gpt-4.1',

The provider key sets the default model for general tasks within Avante. Here it points to the 'copilot/gemini-2.5' entry, which resolves to the gemini-2.5-pro model. The auto_suggestions_provider is used specifically for auto-suggestions and here points to the 'copilot/gpt-4.1' entry.

Defining and Switching Between Models

The providers table is your catalog of available models. Each entry defines a model you can use. For example, the entry for gemini-2.5 is:

['copilot/gemini-2.5'] = {
  __inherited_from = 'copilot',
  model = 'gemini-2.5-pro',
  display_name = 'copilot/gemini-2.5-pro',
  -- ...
},

This tells avante.nvim to use the gemini-2.5-pro model via the copilot provider.
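
Because each entry's key (e.g. 'copilot/gemini-2.5') doubles as a provider name, you can also switch models on the fly without editing your config. Recent versions of avante.nvim expose a provider-switching command for this; the exact name may vary by version, but it typically looks like:

:AvanteSwitchProvider copilot/claude-4.0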

Want to switch your main model to Claude? Simply change the top-level provider value:

-- Before
provider = 'copilot/gemini-2.5',

-- After
provider = 'copilot/claude-4.0',

Adding a New Model

Let's say a new model like "Claude 4.5 Preview" becomes available through Copilot. You could easily add it to your providers list:

-- Hypothetical example for a new model
['copilot/claude-4.5-preview'] = {
  __inherited_from = 'copilot',
  model = 'claude-4.5-preview', -- The actual model name might differ
  display_name = 'copilot/claude-4.5-preview',
  extra_request_body = {
    max_tokens = 65536,
  },
  disable_tools = true,
},

After adding it, you can make it your default by updating the provider line. New models slot into your setup without any restructuring.

Conclusion

By leveraging avante.nvim's provider system, you can turn Neovim into a powerful, multi-model AI development environment. This configuration lets you pick the best tool for the job, whether it's the latest model from Google, Anthropic, or OpenAI, all seamlessly integrated via GitHub Copilot.