
gptel: A simple LLM client for Emacs

https://elpa.nongnu.org/nongnu/gptel.svg https://stable.melpa.org/packages/gptel-badge.svg https://melpa.org/packages/gptel-badge.svg

gptel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends. It works in the spirit of Emacs, available at any time and uniformly in any buffer.

| LLM Backend        | Requires                   |
|--------------------+----------------------------|
| ChatGPT            | API key                    |
| Azure              | Deployment and API key     |
| Ollama             | Ollama running locally     |
| GPT4All            | GPT4All running locally    |
| Gemini             | API key                    |
| Llama.cpp          | Llama.cpp running locally  |
| Llamafile          | Local Llamafile server     |
| Kagi FastGPT       | API key                    |
| Kagi Summarizer    | API key                    |
| together.ai        | API key                    |
| Anyscale           | API key                    |
| Perplexity         | API key                    |
| Anthropic (Claude) | API key                    |
| Groq               | API key                    |
| OpenRouter         | API key                    |
| PrivateGPT         | PrivateGPT running locally |
| DeepSeek           | API key                    |
| Cerebras           | API key                    |
| Github Models      | Token                      |
| Novita AI          | Token                      |
| xAI                | API key                    |

General usage (YouTube Demo):

[video: intro-demo.mp4]
[video: intro-demo-2.mp4]

Media support

[video: gptel-image-demo-1.mp4]

Multi-LLM support demo:

[video: gptel-multi.mp4]
  • It’s async and fast; responses stream in.
  • Interact with LLMs from anywhere in Emacs (any buffer, shell, minibuffer, wherever).
  • LLM responses are in Markdown or Org markup.
  • Supports multiple independent conversations and one-off ad hoc interactions.
  • Supports multi-modal models (send images and documents with requests).
  • Save chats as regular Markdown/Org/Text files and resume them later.
  • You can go back and edit your previous prompts or LLM responses when continuing a conversation. These will be fed back to the model.
  • Don’t like gptel’s workflow? Use it to create your own for any supported model/backend with a simple API.

gptel uses Curl if available, but falls back to url-retrieve to work without external dependencies.


Breaking changes!

  • gptel-model is now expected to be a symbol, not a string. Please update your configuration.

Installation

gptel can be installed in Emacs out of the box with M-x package-install⏎ gptel. This installs the latest commit.

If you want the stable version instead, add NonGNU-devel ELPA or MELPA-stable to your list of package sources (package-archives), then install gptel with M-x package-install⏎ gptel from these sources.
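
For example, a minimal sketch adding MELPA-stable as a package source (the URL is MELPA-stable's standard address):

(require 'package)
;; Add MELPA-stable; refresh the package list afterwards with
;; M-x package-refresh-contents
(add-to-list 'package-archives
             '("melpa-stable" . "https://stable.melpa.org/packages/") t)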

(Optional: Install markdown-mode.)

Straight

(straight-use-package 'gptel)

Installing the markdown-mode package is optional.

Manual

Clone or download this repository and run M-x package-install-file⏎ on the repository directory.

Installing the markdown-mode package is optional.

Doom Emacs

In packages.el

(package! gptel)

In config.el

(use-package! gptel
 :config
 (setq! gptel-api-key "your key"))

“your key” can be the API key itself, or (safer) a function that returns the key. Setting gptel-api-key is optional; you will be asked for a key if it’s not found.

Spacemacs

In your .spacemacs file, add llm-client to dotspacemacs-configuration-layers.

(llm-client :variables
            llm-client-enable-gptel t)

Setup

ChatGPT

Procure an OpenAI API key.

Optional: Set gptel-api-key to the key. Alternatively, you may choose a more secure method such as:

  • Storing in ~/.authinfo. By default, “api.openai.com” is used as HOST and “apikey” as USER.
    machine api.openai.com login apikey password TOKEN
        
  • Setting it to a function that returns the key, as in the sketch below.
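
A minimal sketch of the function form, assuming the ~/.authinfo entry shown above exists:

(require 'auth-source)
(setq gptel-api-key
      (lambda ()
        ;; Look up the "apikey" entry for api.openai.com in your authinfo
        (auth-source-pick-first-password :host "api.openai.com"
                                         :user "apikey")))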

Other LLM backends

Azure

Register a backend with

(gptel-make-azure "Azure-1"             ;Name, whatever you'd like
  :protocol "https"                     ;Optional -- https is the default
  :host "YOUR_RESOURCE_NAME.openai.azure.com"
  :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15" ;or equivalent
  :stream t                             ;Enable streaming responses
  :key #'gptel-api-key
  :models '(gpt-3.5-turbo gpt-4))

Refer to the documentation of gptel-make-azure to set more parameters.

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq
 gptel-model 'gpt-3.5-turbo
 gptel-backend (gptel-make-azure "Azure-1"
                 :protocol "https"
                 :host "YOUR_RESOURCE_NAME.openai.azure.com"
                 :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15"
                 :stream t
                 :key #'gptel-api-key
                 :models '(gpt-3.5-turbo gpt-4)))

GPT4All

Register a backend with

(gptel-make-gpt4all "GPT4All"           ;Name of your choosing
 :protocol "http"
 :host "localhost:4891"                 ;Where it's running
 :models '(mistral-7b-openorca.Q4_0.gguf)) ;Available models

These are the required parameters; refer to the documentation of gptel-make-gpt4all for more.

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above. Additionally, you may want to increase the response token count, since GPT4All returns very short (often truncated) responses by default.

;; OPTIONAL configuration
(setq
 gptel-max-tokens 500
 gptel-model 'mistral-7b-openorca.Q4_0.gguf
 gptel-backend (gptel-make-gpt4all "GPT4All"
                 :protocol "http"
                 :host "localhost:4891"
                 :models '(mistral-7b-openorca.Q4_0.gguf)))

Ollama

Register a backend with

(gptel-make-ollama "Ollama"             ;Any name of your choosing
  :host "localhost:11434"               ;Where it's running
  :stream t                             ;Stream responses
  :models '(mistral:latest))          ;List of models

These are the required parameters; refer to the documentation of gptel-make-ollama for more.

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq
 gptel-model 'mistral:latest
 gptel-backend (gptel-make-ollama "Ollama"
                 :host "localhost:11434"
                 :stream t
                 :models '(mistral:latest)))

Gemini

Register a backend with

;; :key can be a function that returns the API key.
(gptel-make-gemini "Gemini" :key "YOUR_GEMINI_API_KEY" :stream t)

These are the required parameters; refer to the documentation of gptel-make-gemini for more.

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq
 gptel-model 'gemini-pro
 gptel-backend (gptel-make-gemini "Gemini"
                 :key "YOUR_GEMINI_API_KEY"
                 :stream t))

Llama.cpp or Llamafile

(If using a llamafile, run a server llamafile instead of a “command-line llamafile”, and use a model that supports text generation.)

Register a backend with

;; Llama.cpp offers an OpenAI compatible API
(gptel-make-openai "llama-cpp"          ;Any name
  :stream t                             ;Stream responses
  :protocol "http"
  :host "localhost:8000"                ;Llama.cpp server location
  :models '(test))                    ;Any names, doesn't matter for Llama

These are the required parameters; refer to the documentation of gptel-make-openai for more.

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq
 gptel-model   'test
 gptel-backend (gptel-make-openai "llama-cpp"
                 :stream t
                 :protocol "http"
                 :host "localhost:8000"
                 :models '(test)))

Kagi (FastGPT & Summarizer)

Kagi’s FastGPT model and the Universal Summarizer are both supported. A couple of notes:

  1. Universal Summarizer: If there is a URL at point, the summarizer will summarize the contents of the URL. Otherwise the context sent to the model is the same as always: the buffer text up to point, or the contents of the region if the region is active.
  2. Kagi models do not support multi-turn conversations, interactions are “one-shot”. They also do not support streaming responses.

Register a backend with

(gptel-make-kagi "Kagi"                    ;any name
  :key "YOUR_KAGI_API_KEY")                ;can be a function that returns the key

These are the required parameters; refer to the documentation of gptel-make-kagi for more.

You can pick this backend and the model (fastgpt/summarizer) from the transient menu when using gptel.

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq
 gptel-model 'fastgpt
 gptel-backend (gptel-make-kagi "Kagi"
                 :key "YOUR_KAGI_API_KEY"))

The alternatives to fastgpt include summarize:cecil, summarize:agnes, summarize:daphne and summarize:muriel. The difference between the summarizer engines is documented here.
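
For example, to default to one of the summarizer engines (a sketch, using the backend registered above):

;; Use the Cecil summarizer engine by default
(setq gptel-model 'summarize:cecil)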

together.ai

Register a backend with

;; Together.ai offers an OpenAI compatible API
(gptel-make-openai "TogetherAI"         ;Any name you want
  :host "api.together.xyz"
  :key "your-api-key"                   ;can be a function that returns the key
  :stream t
  :models '(;; has many more, check together.ai
            mistralai/Mixtral-8x7B-Instruct-v0.1
            codellama/CodeLlama-13b-Instruct-hf
            codellama/CodeLlama-34b-Instruct-hf))

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq
 gptel-model   'mistralai/Mixtral-8x7B-Instruct-v0.1
 gptel-backend
 (gptel-make-openai "TogetherAI"         
   :host "api.together.xyz"
   :key "your-api-key"                   
   :stream t
   :models '(;; has many more, check together.ai
             mistralai/Mixtral-8x7B-Instruct-v0.1
             codellama/CodeLlama-13b-Instruct-hf
             codellama/CodeLlama-34b-Instruct-hf)))

Anyscale

Register a backend with

;; Anyscale offers an OpenAI compatible API
(gptel-make-openai "Anyscale"           ;Any name you want
  :host "api.endpoints.anyscale.com"
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(;; has many more, check anyscale
            mistralai/Mixtral-8x7B-Instruct-v0.1))

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq
 gptel-model   'mistralai/Mixtral-8x7B-Instruct-v0.1
 gptel-backend
 (gptel-make-openai "Anyscale"
                 :host "api.endpoints.anyscale.com"
                 :key "your-api-key"
                 :models '(;; has many more, check anyscale
                           mistralai/Mixtral-8x7B-Instruct-v0.1)))

Perplexity

Register a backend with

;; Perplexity offers an OpenAI compatible API
(gptel-make-openai "Perplexity"         ;Any name you want
  :host "api.perplexity.ai"
  :key "your-api-key"                   ;can be a function that returns the key
  :endpoint "/chat/completions"
  :stream t
  :models '(;; has many more, check perplexity.ai
            pplx-7b-chat
            pplx-70b-chat
            pplx-7b-online
            pplx-70b-online))

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq
 gptel-model   'pplx-7b-chat
 gptel-backend
 (gptel-make-openai "Perplexity"
   :host "api.perplexity.ai"
   :key "your-api-key"
   :endpoint "/chat/completions"
   :stream t
   :models '(;; has many more, check perplexity.ai
             pplx-7b-chat
             pplx-70b-chat
             pplx-7b-online
             pplx-70b-online)))

Anthropic (Claude)

Register a backend with

(gptel-make-anthropic "Claude"          ;Any name you want
  :stream t                             ;Streaming responses
  :key "your-api-key")

The :key can be a function that returns the key (more secure).

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq
 gptel-model 'claude-3-sonnet-20240229 ; 'claude-3-opus-20240229 also available
 gptel-backend (gptel-make-anthropic "Claude"
                 :stream t :key "your-api-key"))

Groq

Register a backend with

;; Groq offers an OpenAI compatible API
(gptel-make-openai "Groq"               ;Any name you want
  :host "api.groq.com"
  :endpoint "/openai/v1/chat/completions"
  :stream t
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(llama-3.1-70b-versatile
            llama-3.1-8b-instant
            llama3-70b-8192
            llama3-8b-8192
            mixtral-8x7b-32768
            gemma-7b-it))

You can pick this backend from the menu when using gptel (see Usage). Note that Groq is fast enough that you could easily set :stream nil and still get near-instant responses.

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model   'mixtral-8x7b-32768
      gptel-backend
      (gptel-make-openai "Groq"
        :host "api.groq.com"
        :endpoint "/openai/v1/chat/completions"
        :stream t
        :key "your-api-key"
        :models '(llama-3.1-70b-versatile
                  llama-3.1-8b-instant
                  llama3-70b-8192
                  llama3-8b-8192
                  mixtral-8x7b-32768
                  gemma-7b-it)))

OpenRouter

Register a backend with

;; OpenRouter offers an OpenAI compatible API
(gptel-make-openai "OpenRouter"               ;Any name you want
  :host "openrouter.ai"
  :endpoint "/api/v1/chat/completions"
  :stream t
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(openai/gpt-3.5-turbo
            mistralai/mixtral-8x7b-instruct
            meta-llama/codellama-34b-instruct
            codellama/codellama-70b-instruct
            google/palm-2-codechat-bison-32k
            google/gemini-pro))

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model   'mistralai/mixtral-8x7b-instruct
      gptel-backend
      (gptel-make-openai "OpenRouter"               ;Any name you want
        :host "openrouter.ai"
        :endpoint "/api/v1/chat/completions"
        :stream t
        :key "your-api-key"                   ;can be a function that returns the key
        :models '(openai/gpt-3.5-turbo
                  mistralai/mixtral-8x7b-instruct
                  meta-llama/codellama-34b-instruct
                  codellama/codellama-70b-instruct
                  google/palm-2-codechat-bison-32k
                  google/gemini-pro)))

PrivateGPT

Register a backend with

(gptel-make-privategpt "privateGPT"               ;Any name you want
  :protocol "http"
  :host "localhost:8001"
  :stream t
  :context t                            ;Use context provided by embeddings
  :sources t                            ;Return information about source documents
  :models '(private-gpt))

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model   'private-gpt
      gptel-backend
      (gptel-make-privategpt "privateGPT"               ;Any name you want
        :protocol "http"
        :host "localhost:8001"
        :stream t
        :context t                            ;Use context provided by embeddings
        :sources t                            ;Return information about source documents
        :models '(private-gpt)))

DeepSeek

Register a backend with

;; DeepSeek offers an OpenAI compatible API
(gptel-make-openai "DeepSeek"       ;Any name you want
  :host "api.deepseek.com"
  :endpoint "/chat/completions"
  :stream t
  :key "your-api-key"               ;can be a function that returns the key
  :models '(deepseek-chat deepseek-coder))

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model   'deepseek-chat
      gptel-backend
      (gptel-make-openai "DeepSeek"     ;Any name you want
        :host "api.deepseek.com"
        :endpoint "/chat/completions"
        :stream t
        :key "your-api-key"             ;can be a function that returns the key
        :models '(deepseek-chat deepseek-coder)))

Cerebras

Register a backend with

;; Cerebras offers an instant OpenAI compatible API
(gptel-make-openai "Cerebras"
  :host "api.cerebras.ai"
  :endpoint "/v1/chat/completions"
  :stream t                             ;optionally nil as Cerebras is instant AI
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(llama3.1-70b
            llama3.1-8b))

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model   'llama3.1-8b
      gptel-backend
      (gptel-make-openai "Cerebras"
        :host "api.cerebras.ai"
        :endpoint "/v1/chat/completions"
        :stream nil
        :key "your-api-key"
        :models '(llama3.1-70b
                  llama3.1-8b)))

Github Models

Register a backend with

;; Github Models offers an OpenAI compatible API
(gptel-make-openai "Github Models" ;Any name you want
  :host "models.inference.ai.azure.com"
  :endpoint "/chat/completions"
  :stream t
  :key "your-github-token"
  :models '(gpt-4o))

For all the available models, check the marketplace.

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model  'gpt-4o
      gptel-backend
      (gptel-make-openai "Github Models" ;Any name you want
        :host "models.inference.ai.azure.com"
        :endpoint "/chat/completions"
        :stream t
        :key "your-github-token"
        :models '(gpt-4o)))

Novita AI

Register a backend with

;; Novita AI offers an OpenAI compatible API
(gptel-make-openai "NovitaAI"         ;Any name you want
  :host "api.novita.ai"
  :endpoint "/v3/openai"
  :key "your-api-key"                   ;can be a function that returns the key
  :stream t
  :models '(;; has many more, check https://novita.ai/llm-api
            gryphe/mythomax-l2-13b
            meta-llama/llama-3-70b-instruct
            meta-llama/llama-3.1-70b-instruct))

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq
 gptel-model   'gryphe/mythomax-l2-13b
 gptel-backend
 (gptel-make-openai "NovitaAI"         
   :host "api.novita.ai"
   :endpoint "/v3/openai"
   :key "your-api-key"                   
   :stream t
   :models '(;; has many more, check https://novita.ai/llm-api
             gryphe/mythomax-l2-13b
             meta-llama/llama-3-70b-instruct
             meta-llama/llama-3.1-70b-instruct)))

xAI

Register a backend with

;; xAI offers an OpenAI compatible API
(gptel-make-openai "xAI"           ;Any name you want
  :host "api.x.ai"
  :key "your-api-key"              ;can be a function that returns the key
  :endpoint "/v1/chat/completions"
  :stream t
  :models '(;; xAI offers only `grok-beta` as of this writing
            grok-beta))

You can pick this backend from the menu when using gptel (see Usage).

(Optional) Set as the default gptel backend

The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq
 gptel-model   'grok-beta
 gptel-backend
 (gptel-make-openai "xAI"           ;Any name you want
   :host "api.x.ai"
   :key "your-api-key"              ;can be a function that returns the key
   :endpoint "/v1/chat/completions"
   :stream t
   :models '(;; xAI offers only `grok-beta` as of this writing
             grok-beta)))

Usage

(There is also a video demo showing various uses of gptel.)

| To send queries          | Description                                                                                           |
|--------------------------+-------------------------------------------------------------------------------------------------------|
| gptel-send               | Send all text up to (point), or the selection if the region is active. Works anywhere in Emacs.      |
| gptel                    | Create a new dedicated chat buffer. Not required to use gptel.                                       |
| To set options           |                                                                                                      |
| C-u gptel-send           | Transient menu for preferences, input/output redirection etc.                                        |
| gptel-menu               | (Same)                                                                                               |
| To add context           |                                                                                                      |
| gptel-add                | Add/remove a region or buffer to gptel’s context. In Dired, add/remove marked files.                 |
| gptel-add-file           | Add a file (text or supported media type) to gptel’s context. Also available from the transient menu. |
| Org mode bonuses         |                                                                                                      |
| gptel-org-set-topic      | Limit conversation context to an Org heading. (For branching conversations, see below.)              |
| gptel-org-set-properties | Write gptel configuration as Org properties, for per-heading chat configuration.                     |

In any buffer:

  1. Call M-x gptel-send to send the text up to the cursor. The response will be inserted below. Continue the conversation by typing below the response.
  2. If a region is selected, the conversation will be limited to its contents.
  3. Call M-x gptel-send with a prefix argument (C-u)
    • to set chat parameters (GPT model, system message etc) for this buffer,
    • to include quick instructions for the next request only,
    • to add additional context – regions, buffers or files – to gptel,
    • to read the prompt from or redirect the response elsewhere,
    • or to replace the prompt with the response.
[Image: gptel’s menu with some of the available query options]

In a dedicated chat buffer:

  1. Run M-x gptel to start or switch to the chat buffer. It will ask you for the key if you skipped the previous step. Run it with a prefix-arg (C-u M-x gptel) to start a new session.
  2. In the gptel buffer, send your prompt with M-x gptel-send, bound to C-c RET.
  3. Set chat parameters (LLM provider, model, directives etc) for the session by calling gptel-send with a prefix argument (C-u C-c RET):
[Image: gptel’s menu with some of the available query options]

That’s it. You can go back and edit previous prompts and responses if you want.

The default mode is markdown-mode if available, else text-mode. You can set gptel-default-mode to org-mode if desired.

Including media (images, documents) with requests

gptel supports sending media in Markdown and Org chat buffers, but this feature is disabled by default.

  • You can enable it globally, for all models that support it, by setting gptel-track-media (see the snippet below).
  • Or you can set it locally, just for the chat buffer, via the header line:
[Image: a gptel chat buffer’s header line with the button to toggle media support]
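
For example, to enable it globally:

;; Send standalone media links with requests, for models that support them
(setq gptel-track-media t)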

There are two ways to include media with requests:

  1. Adding media files to the context with gptel-add-file, described further below.
  2. Including links to media in chat buffers, described here:

To send media – images or other supported file types – with requests in chat buffers, you can include links to them in the chat buffer. Such a link must be “standalone”, i.e. on a line by itself surrounded by whitespace.

In Org mode, for example, the following are all valid ways of including an image with the request:

  • “Standalone” file link:
Describe this picture

[[file:/path/to/screenshot.png]]

Focus specifically on the text content.
  • “Standalone” file link with description:
Describe this picture

[[file:/path/to/screenshot.png][some picture]]

Focus specifically on the text content.
  • “Standalone”, angle file link:
Describe this picture

<file:/path/to/screenshot.png>

Focus specifically on the text content.

The following links are not valid, and the text of the link will be sent instead of the file contents:

  • Inline link:
Describe this [[file:/path/to/screenshot.png][picture]].

Focus specifically on the text content.
  • Link not “standalone”:
Describe this picture: 
[[file:/path/to/screenshot.png]]
Focus specifically on the text content.
  • Not a valid Org link:
Describe the picture

file:/path/to/screenshot.png

Similar criteria apply to Markdown chat buffers.

Save and restore your chat sessions

Saving the file will save the state of the conversation as well. To resume the chat, open the file and turn on gptel-mode before editing the buffer.
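
If you keep saved chats in one place, a small sketch can automate this; the helper and the ~/gptel-chats/ directory below are hypothetical, not part of gptel:

(defun my/gptel-resume-chat ()
  "Enable `gptel-mode' when visiting a saved chat file."
  ;; ~/gptel-chats/ is an illustrative location for saved chats
  (when (and buffer-file-name
             (string-prefix-p (expand-file-name "~/gptel-chats/")
                              buffer-file-name))
    (gptel-mode 1)))

(add-hook 'find-file-hook #'my/gptel-resume-chat)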

Include more context with requests

By default, gptel will query the LLM with the active region or the buffer contents up to the cursor. Often it can be helpful to provide the LLM with additional context from outside the current buffer. For example, when you’re in a chat buffer but want to ask questions about a (possibly changing) code buffer and auxiliary project files.

You can include additional text regions, buffers or files with gptel’s queries. This additional context is “live” and not a snapshot. Once added, the regions, buffers or files are scanned and included at the time of each query. When using multi-modal models, added files can be of any supported type – typically images.

You can add a selected region, buffer or file to gptel’s context from the menu, or call gptel-add. (To add a file use gptel-add in Dired or use the dedicated gptel-add-file command.)

You can examine the active context from the menu:

[Image: gptel’s menu with the context commands]

And then browse through or remove context from the context buffer:

[Image: gptel’s context buffer]

Rewrite, refactor or fill in a region

In any buffer: with a region selected, you can rewrite prose or refactor code from the transient menu:

Prose:

Code:

When the refactor is ready, you can apply it or compare against the original:

These actions are also available directly when the cursor is inside the pending rewrite region:

Extra Org mode conveniences

gptel offers a few extra conveniences in Org mode.

  • You can limit the conversation context to an Org heading with the command gptel-org-set-topic.
  • You can have branching conversations in Org mode, where each hierarchical outline path through the document is a separate conversation branch. This is also useful for limiting the context size of each query. See the variable gptel-org-branching-context, and the snippet after this list. Note: this option requires Org 9.6.7 or later. The ai-org-chat package uses gptel to provide this branching conversation behavior for older versions of Org.
  • You can declare the gptel model, backend, temperature, system message and other parameters as Org properties with the command gptel-org-set-properties. gptel queries under the corresponding heading will always use these settings, allowing you to create mostly reproducible LLM chat notebooks, and to have simultaneous chats with different models, model settings and directives under different Org headings.
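
For example, to enable the branching behavior described above (requires Org 9.6.7 or later):

;; Treat each Org outline path as a separate conversation branch
(setq gptel-org-branching-context t)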

FAQ

I want the window to scroll automatically as the response is inserted

To be minimally annoying, gptel does not move the cursor by default. Add the following to your configuration to enable auto-scrolling.

(add-hook 'gptel-post-stream-hook 'gptel-auto-scroll)

I want the cursor to move to the next prompt after the response is inserted

To be minimally annoying, gptel does not move the cursor by default. Add the following to your configuration to move the cursor:

(add-hook 'gptel-post-response-functions 'gptel-end-of-response)

You can also call gptel-end-of-response as a command at any time.

I want to change the formatting of the prompt and LLM response

For dedicated chat buffers: customize gptel-prompt-prefix-alist and gptel-response-prefix-alist. You can set a different pair for each major-mode.
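
For example, a sketch with illustrative prefix strings:

(setq gptel-prompt-prefix-alist
      '((org-mode . "*Prompt*: ")
        (markdown-mode . "### "))
      gptel-response-prefix-alist
      '((org-mode . "*Response*: ")))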

Anywhere in Emacs: Use gptel-pre-response-hook and gptel-post-response-functions, which see.

I want the transient menu options to be saved so I only need to set them once

Any model options you set are saved for the current buffer. But the redirection options in the menu are set for the next query only:

https://github.com/karthink/gptel/assets/8607532/2ecc6be9-aa52-4287-a739-ba06e1369ec2

You can make them persistent across this Emacs session by pressing C-x C-s:

https://github.com/karthink/gptel/assets/8607532/b8bcb6ad-c974-41e1-9336-fdba0098a2fe

(You can also cycle through presets you’ve saved with C-x p and C-x n.)

Now these will be enabled whenever you send a query from the transient menu. If you want to use these saved options without invoking the transient menu, you can use a keyboard macro:

;; Replace with your key to invoke the transient menu:
(keymap-global-set "<f6>" "C-u C-c <return> <return>")

Or see this wiki entry.

I want to use gptel in a way that’s not supported by gptel-send or the options menu

gptel’s default usage pattern is simple, and will stay this way: Read input in any buffer and insert the response below it. Some custom behavior is possible with the transient menu (C-u M-x gptel-send).

For more programmable usage, gptel provides a general gptel-request function that accepts a custom prompt and a callback to act on the response. You can use this to build custom workflows not supported by gptel-send. See the documentation of gptel-request, and the wiki for examples.
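
For instance, a minimal sketch that sends a one-off prompt and echoes the reply in the minibuffer (the prompt text is illustrative):

(gptel-request
 "Summarize the philosophy of Emacs in one sentence."
 :callback
 (lambda (response info)
   ;; RESPONSE is the reply text on success, nil on failure
   (if (stringp response)
       (message "gptel: %s" response)
     (message "gptel request failed: %s" (plist-get info :status)))))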

How does gptel distinguish between user prompts and LLM responses?

gptel uses text-properties to watermark LLM responses. This text is therefore interpreted as a response even if you copy it into another buffer. In regular buffers (buffers without gptel-mode enabled), you can turn off this tracking by unsetting gptel-track-response.
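
For example, a one-line sketch to disable this tracking outside chat buffers:

;; Treat all buffer text as part of the prompt in buffers without gptel-mode
(setq-default gptel-track-response nil)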

When restoring a chat state from a file on disk, gptel will apply these properties from saved metadata in the file when you turn on gptel-mode.

gptel does not use any prefix or semantic/syntax element in the buffer (such as headings) to separate prompts and responses. The reason for this is that gptel aims to integrate as seamlessly as possible into your regular Emacs usage: LLM interaction is not the objective, it’s just another tool at your disposal. So requiring a bunch of “user” and “assistant” tags in the buffer is noisy and restrictive. If you want these demarcations, you can customize gptel-prompt-prefix-alist and gptel-response-prefix-alist. Note that these prefixes are for your readability only and are purely cosmetic.

(Doom Emacs) Sending a query from the gptel menu fails because of a key conflict with Org mode

Doom binds RET in Org mode to +org/dwim-at-point, which appears to conflict with gptel’s transient menu bindings for some reason.

Two solutions:

  • Press C-m instead of the return key.
  • Change the send key from return to a key of your choice:
    (transient-suffix-put 'gptel-menu (kbd "RET") :key "<f8>")
        

(ChatGPT) I get the error “(HTTP/2 429) You exceeded your current quota”

(HTTP/2 429) You exceeded your current quota, please check your plan and billing details.

Using the ChatGPT (or any OpenAI) API requires adding credit to your account.

Why another LLM client?

Other Emacs clients for LLMs prescribe the format of the interaction (a comint shell, org-babel blocks, etc). I wanted:

  1. Something that is as free-form as possible: query the model using any text in any buffer, and redirect the response as required. Using a dedicated gptel buffer just adds some visual flair to the interaction.
  2. Integration with org-mode, not using a walled-off org-babel block, but as regular text. This way the model can generate code blocks that I can run.

Additional Configuration

| Connection options            |                                                                     |
|-------------------------------+---------------------------------------------------------------------|
| gptel-use-curl                | Use Curl (default), fallback to Emacs’ built-in url.                |
| gptel-proxy                   | Proxy server for requests, passed to curl via --proxy.              |
| gptel-api-key                 | Variable/function that returns the API key for the active backend.  |
| LLM request options           | (Note: not supported uniformly across LLMs)                         |
| gptel-backend                 | Default LLM backend.                                                |
| gptel-model                   | Default model to use, depends on the backend.                       |
| gptel-stream                  | Enable streaming responses, if the backend supports it.             |
| gptel-directives              | Alist of system directives, can switch on the fly.                  |
| gptel-max-tokens              | Maximum token count (in query + response).                          |
| gptel-temperature             | Randomness in response text, 0 to 2.                                |
| gptel-use-context             | How/whether to include additional context.                          |
| Chat UI options               |                                                                     |
| gptel-default-mode            | Major mode for dedicated chat buffers.                              |
| gptel-track-response          | Distinguish between user messages and LLM responses?                |
| gptel-track-media             | Send images or other media from links?                              |
| gptel-prompt-prefix-alist     | Text inserted before queries.                                       |
| gptel-response-prefix-alist   | Text inserted before responses.                                     |
| gptel-use-header-line         | Display status messages in the header line (default) or minibuffer. |
| gptel-display-buffer-action   | Placement of the gptel chat buffer.                                 |
| Org mode UI options           |                                                                     |
| gptel-org-branching-context   | Make each outline path a separate conversation branch.              |
| Hooks for customization       |                                                                     |
| gptel-save-state-hook         | Runs before saving the chat state to a file on disk.                |
| gptel-pre-response-hook       | Runs before inserting the LLM response into the buffer.             |
| gptel-post-response-functions | Runs after inserting the full LLM response into the buffer.         |
| gptel-post-stream-hook        | Runs after each streaming insertion.                                |
| gptel-context-wrap-function   | To include additional context, formatted your way.                  |
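
As an example, a sketch combining a few of these options (the values are illustrative):

(setq gptel-default-mode 'org-mode ;Org for dedicated chat buffers
      gptel-max-tokens 500         ;Cap the response length
      gptel-temperature 1.0        ;Response randomness, 0 to 2
      gptel-use-header-line t)     ;Show status in the header line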

Alternatives

Other Emacs clients for LLMs include

  • llm: llm provides a uniform API across language model providers for building LLM clients in Emacs, and is intended as a library for use by package authors. For similar scripting purposes, gptel provides the command gptel-request, which see.
  • Ellama: A full-fledged LLM client built on llm, that supports many LLM providers (Ollama, OpenAI, Vertex, GPT4All and more). Its usage differs from gptel in that it provides separate commands for dozens of common tasks, like general chat, summarizing code/text, refactoring code, improving grammar, translation and so on.
  • chatgpt-shell: comint-shell based interaction with ChatGPT. Also supports DALL-E, executable code blocks in the responses, and more.
  • org-ai: Interaction through special #+begin_ai ... #+end_ai Org-mode blocks. Also supports DALL-E, querying ChatGPT with the contents of project files, and more.

There are several more: leafy-mode, chat.el.

Packages using gptel

gptel is a general-purpose package for chat and ad-hoc LLM interaction. The following packages use gptel to provide additional or specialized functionality:

  • gptel-quick: Quickly look up the region or text at point.
  • Evedel: Instructed LLM Programmer/Assistant
  • Elysium: Automatically apply AI-generated changes as you code
  • gptel-extensions: Extra utility functions for gptel
  • ai-blog.el: Streamline generation of blog posts in Hugo
  • magit-gptcommit: Generate Commit Messages within magit-status Buffer using gptel
  • consult-omni: Versatile multi-source search package. It includes gptel as one of its many sources.
  • ai-org-chat: Provides branching conversations in Org buffers using gptel. (Note that gptel includes this feature as well (see gptel-org-branching-context), but requires a recent version of Org mode (9.6.7 or later) to be installed.)

Acknowledgments