Connecting to an External Proxy Model
HammerAI allows users to route language generation through external LLM services using the built-in Proxy configuration. This is useful for accessing models hosted in third-party environments such as LM Studio, OpenRouter, Featherless, or any compatible API-based LLM service.
Enabling Proxy Mode
To activate proxy integration:
- Open any character chat interface within HammerAI.
- Navigate to the Settings tab located in the right-hand panel.
- Locate the LLM dropdown menu.
- Scroll to the bottom of the list and select Proxy.
Selecting Proxy will reveal additional configuration fields required to establish the external connection.
Proxy Configuration Fields
Proxy URL - This is the base endpoint used to access the remote API. Examples:
- http://localhost:1234/v1 (LM Studio)
- https://api.featherless.ai/v1 (Featherless)
- https://openrouter.ai/api/v1 (OpenRouter)
Proxy Model - Enter the exact model name as defined by the provider (see the sketch after this list). Examples:
- mistral-7b-instruct
- deepseek-ai/DeepSeek-R1
- gpt-3.5-turbo
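Because LM Studio, Featherless, and OpenRouter all expose OpenAI-compatible APIs, you can confirm both your Proxy URL and the exact model identifiers by listing the provider's models outside of HammerAI. The sketch below is a minimal example assuming an OpenAI-style GET /models endpoint and Node 18+ (for built-in fetch); the base URL and the PROXY_API_KEY environment variable are placeholders to replace with your own values.

```typescript
// Minimal sketch: list models from an OpenAI-compatible endpoint.
// Assumes Node 18+ (built-in fetch). Placeholders: baseUrl, PROXY_API_KEY.
const baseUrl = "http://localhost:1234/v1"; // your Proxy URL value
const apiKey = process.env.PROXY_API_KEY;   // usually not needed for local LM Studio

async function listModels(): Promise<void> {
  const res = await fetch(`${baseUrl}/models`, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${await res.text()}`);
  const data = await res.json();
  // OpenAI-compatible servers return { data: [{ id: "<model name>", ... }] };
  // each id is a string you can paste into the Proxy Model field.
  for (const model of data.data) console.log(model.id);
}

listModels().catch(console.error);
```

If the request succeeds, the printed ids are the exact strings the provider expects in the Proxy Model field.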
Proxy Context Length (optional) - Specifies the maximum context size (in tokens) supported by the model. Leave blank if unknown.
Proxy API Key - Enter the authentication token provided by the external service. Required for most cloud-based APIs.
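OpenAI-compatible cloud services generally expect this key as a bearer token in the Authorization header. As a point of reference, the convention looks like the following (the key shown is a placeholder, not a real token):

```typescript
// Bearer-token header convention used by OpenAI-compatible APIs.
// "sk-your-key-here" is a placeholder, not a real key.
const headers = {
  Authorization: "Bearer sk-your-key-here",
  "Content-Type": "application/json",
};
```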
Usage Notes
Once the required fields are populated, HammerAI routes all prompts and completions through the configured proxy instead of using local inference. If errors occur or the model does not respond, verify the endpoint URL, model name, and API key. Some services also require a full application reload before the connection is established.
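If the connection still fails after checking those values, a one-off request outside of HammerAI can isolate which field is wrong. The sketch below assumes an OpenAI-style /chat/completions endpoint (which LM Studio, Featherless, and OpenRouter all provide) and Node 18+; the URL, model, and key are placeholders for your own configuration values. A 401 response typically points at the API key, while a 404 points at the URL or model name.

```typescript
// Minimal sketch: chat-completion smoke test against an OpenAI-compatible proxy.
// Placeholders: baseUrl, model, PROXY_API_KEY.
const baseUrl = "https://openrouter.ai/api/v1"; // your Proxy URL value
const model = "deepseek-ai/DeepSeek-R1";        // your Proxy Model value
const apiKey = process.env.PROXY_API_KEY ?? ""; // your Proxy API Key value

async function smokeTest(): Promise<void> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: "Reply with one word." }],
      max_tokens: 16,
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${await res.text()}`);
  const data = await res.json();
  console.log(data.choices[0].message.content);
}

smokeTest().catch(console.error);
```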