feat: add ai suggestion strategy

Add new AI strategy that queries OpenAI-compatible LLM APIs
to generate intelligent command completions based on partial
input, working directory, and recent shell history.

- Add AI strategy implementation with JSON escaping
- Add context gathering with PWD prioritization
- Add response normalization for clean suggestions
- Add configuration defaults for AI settings
- Add comprehensive test suite with mocked responses
- Update README with setup guide and examples

Enables LLM-powered completions via ZSH_AUTOSUGGEST_AI_API_KEY
with silent failure and fallback to next strategy. Supports
OpenAI, Ollama, and custom endpoints. Requires curl and jq.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Frad LEE · 2026-02-05 10:24:52 +08:00
commit 8a9c1a2a30
5 changed files with 666 additions and 1 deletions


@@ -49,11 +49,12 @@ For more info, read the Character Highlighting section of the zsh manual: `man z
### Suggestion Strategy
`ZSH_AUTOSUGGEST_STRATEGY` is an array that specifies how suggestions should be generated. The strategies in the array are tried successively until a suggestion is found. There are currently three built-in strategies to choose from:
`ZSH_AUTOSUGGEST_STRATEGY` is an array that specifies how suggestions should be generated. The strategies in the array are tried successively until a suggestion is found. There are currently four built-in strategies to choose from:
- `history`: Chooses the most recent match from history.
- `completion`: Chooses a suggestion based on what tab-completion would suggest. (requires `zpty` module, which is included with zsh since 4.0.1)
- `match_prev_cmd`: Like `history`, but chooses the most recent match whose preceding history item matches the most recently executed command ([more info](src/strategies/match_prev_cmd.zsh)). Note that this strategy won't work as expected with ZSH options that don't preserve the history order such as `HIST_IGNORE_ALL_DUPS` or `HIST_EXPIRE_DUPS_FIRST`.
- `ai`: Queries an OpenAI-compatible LLM API to generate command completions based on your partial input, working directory, and recent shell history. (requires `curl` and `jq`, opt-in via `ZSH_AUTOSUGGEST_AI_API_KEY`)
For example, setting `ZSH_AUTOSUGGEST_STRATEGY=(history completion)` will first try to find a suggestion from your history, but, if it can't find a match, will find a suggestion from the completion engine.
@@ -100,6 +101,63 @@ Set `ZSH_AUTOSUGGEST_COMPLETION_IGNORE` to a [glob pattern](http://zsh.sourcefor
**Note:** This only affects the `completion` suggestion strategy.
### AI Strategy Configuration
The `ai` strategy uses an OpenAI-compatible LLM API to generate intelligent command completions based on your shell context.
#### Prerequisites
- `curl` command-line tool
- `jq` JSON processor
- API key for an OpenAI-compatible service
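
A quick way to verify the first two prerequisites are on your `PATH` (this loop is just a convenience, not part of the plugin):

```sh
# Report any missing prerequisite; prints nothing when both are present
for cmd in curl jq; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
```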
#### Setup
To enable the AI strategy, you must set the `ZSH_AUTOSUGGEST_AI_API_KEY` environment variable and add `ai` to your strategy list:
```sh
export ZSH_AUTOSUGGEST_AI_API_KEY="your-api-key-here"
export ZSH_AUTOSUGGEST_STRATEGY=(ai history)
```
**Security Note:** Never commit your API key to version control. Consider storing it in a separate file (e.g., `~/.zsh_secrets`) that is sourced in your `.zshrc` and excluded from version control.
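
For example (the filename `~/.zsh_secrets` is just a convention, not something the plugin reads):

```sh
# In ~/.zshrc: load the key from a file kept out of version control
[[ -r ~/.zsh_secrets ]] && source ~/.zsh_secrets
```

where `~/.zsh_secrets` contains only the `export ZSH_AUTOSUGGEST_AI_API_KEY=...` line; running `chmod 600 ~/.zsh_secrets` keeps it readable by you alone.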
#### Configuration Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `ZSH_AUTOSUGGEST_AI_API_KEY` | (unset) | **Required.** API key for the LLM service. Strategy is disabled if unset. |
| `ZSH_AUTOSUGGEST_AI_ENDPOINT` | `https://api.openai.com/v1/chat/completions` | API endpoint URL |
| `ZSH_AUTOSUGGEST_AI_MODEL` | `gpt-3.5-turbo` | Model name to use |
| `ZSH_AUTOSUGGEST_AI_TIMEOUT` | `5` | Request timeout in seconds |
| `ZSH_AUTOSUGGEST_AI_MIN_INPUT` | `3` | Minimum input length before querying |
| `ZSH_AUTOSUGGEST_AI_HISTORY_LINES` | `20` | Number of recent history lines to send as context |
| `ZSH_AUTOSUGGEST_AI_PREFER_PWD_HISTORY` | `yes` | Prioritize history from current directory |
#### Examples
**OpenAI (default):**
```sh
export ZSH_AUTOSUGGEST_AI_API_KEY="sk-..."
export ZSH_AUTOSUGGEST_STRATEGY=(ai history)
```
**Ollama (local LLM):**
```sh
export ZSH_AUTOSUGGEST_AI_API_KEY="not-needed" # Required but unused by Ollama
export ZSH_AUTOSUGGEST_AI_ENDPOINT="http://localhost:11434/v1/chat/completions"
export ZSH_AUTOSUGGEST_AI_MODEL="codellama"
export ZSH_AUTOSUGGEST_STRATEGY=(ai history)
```
**Custom OpenAI-compatible endpoint:**
```sh
export ZSH_AUTOSUGGEST_AI_API_KEY="your-key"
export ZSH_AUTOSUGGEST_AI_ENDPOINT="https://your-endpoint.com/v1/chat/completions"
export ZSH_AUTOSUGGEST_AI_MODEL="your-model"
export ZSH_AUTOSUGGEST_STRATEGY=(ai history)
```
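
To sanity-check an endpoint's response format locally, you can run the same `jq` filter the strategy uses to extract the suggestion (the payload here is a minimal hand-written sample, not captured API output):

```sh
printf '%s' '{"choices":[{"message":{"content":"git status --short"}}]}' \
  | jq -r '.choices[0].message.content // empty'
# prints: git status --short
```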
### Key Bindings

spec/strategies/ai_spec.rb Normal file

@@ -0,0 +1,173 @@
describe 'the `ai` suggestion strategy' do
let(:options) { ["ZSH_AUTOSUGGEST_STRATEGY=(ai history)"] }
let(:before_sourcing) do
-> {
session.run_command('curl() {
if [[ "$*" == *"max-time"* ]]; then
cat <<EOF
{"choices":[{"message":{"content":"git status --short"}}]}
200
EOF
fi
}')
}
end
context 'when API key is not set' do
it 'returns no suggestion and falls through to next strategy' do
with_history('git status --short') do
session.send_string('git st')
wait_for { session.content }.to eq('git status --short')
end
end
end
context 'when API key is set' do
let(:options) { ["ZSH_AUTOSUGGEST_AI_API_KEY=test-key", "ZSH_AUTOSUGGEST_STRATEGY=(ai history)"] }
context 'and curl/jq are available' do
it 'suggests completion from AI' do
session.send_string('git st')
wait_for { session.content }.to eq('git status --short')
end
end
context 'when input is below minimum threshold' do
let(:options) { ["ZSH_AUTOSUGGEST_AI_API_KEY=test-key", "ZSH_AUTOSUGGEST_AI_MIN_INPUT=5", "ZSH_AUTOSUGGEST_STRATEGY=(ai history)"] }
it 'returns no suggestion for short input' do
with_history('git status') do
session.send_string('git')
wait_for { session.content }.to eq('git status')
end
end
end
end
context 'when curl fails' do
let(:options) { ["ZSH_AUTOSUGGEST_AI_API_KEY=test-key", "ZSH_AUTOSUGGEST_STRATEGY=(ai history)"] }
let(:before_sourcing) do
-> {
session.run_command('curl() { return 1 }')
}
end
it 'falls through to next strategy' do
with_history('git status') do
session.send_string('git st')
wait_for { session.content }.to eq('git status')
end
end
end
context 'when API returns HTTP error' do
let(:options) { ["ZSH_AUTOSUGGEST_AI_API_KEY=test-key", "ZSH_AUTOSUGGEST_STRATEGY=(ai history)"] }
let(:before_sourcing) do
-> {
session.run_command('curl() {
if [[ "$*" == *"max-time"* ]]; then
cat <<EOF
{"error":"Unauthorized"}
401
EOF
fi
}')
}
end
it 'falls through to next strategy' do
with_history('git status') do
session.send_string('git st')
wait_for { session.content }.to eq('git status')
end
end
end
context 'when API returns malformed JSON' do
let(:options) { ["ZSH_AUTOSUGGEST_AI_API_KEY=test-key", "ZSH_AUTOSUGGEST_STRATEGY=(ai history)"] }
let(:before_sourcing) do
-> {
session.run_command('curl() {
if [[ "$*" == *"max-time"* ]]; then
cat <<EOF
not valid json
200
EOF
fi
}')
}
end
it 'falls through to next strategy' do
with_history('git status') do
session.send_string('git st')
wait_for { session.content }.to eq('git status')
end
end
end
context 'response normalization' do
let(:options) { ["ZSH_AUTOSUGGEST_AI_API_KEY=test-key", "ZSH_AUTOSUGGEST_STRATEGY=(ai)"] }
context 'when response has markdown code fences' do
let(:before_sourcing) do
-> {
session.run_command('curl() {
if [[ "$*" == *"max-time"* ]]; then
cat <<'EOF'
{"choices":[{"message":{"content":"```\\ngit status\\n```"}}]}
200
EOF
fi
}')
}
end
it 'strips the code fences' do
session.send_string('git st')
wait_for { session.content }.to eq('git status')
end
end
context 'when response includes the input prefix' do
let(:before_sourcing) do
-> {
session.run_command('curl() {
if [[ "$*" == *"max-time"* ]]; then
cat <<EOF
{"choices":[{"message":{"content":"git status --short"}}]}
200
EOF
fi
}')
}
end
it 'uses the complete command' do
session.send_string('git st')
wait_for { session.content }.to eq('git status --short')
end
end
end
context 'fallback strategy' do
let(:options) { ["ZSH_AUTOSUGGEST_AI_API_KEY=test-key", "ZSH_AUTOSUGGEST_STRATEGY=(ai history)"] }
let(:before_sourcing) do
-> {
session.run_command('curl() { return 1 }')
}
end
it 'uses history when AI fails' do
with_history('git status --long') do
session.send_string('git st')
wait_for { session.content }.to eq('git status --long')
end
end
end
end


@@ -93,3 +93,28 @@ typeset -g ZSH_AUTOSUGGEST_ORIGINAL_WIDGET_PREFIX=autosuggest-orig-
# Pty name for capturing completions for completion suggestion strategy
(( ! ${+ZSH_AUTOSUGGEST_COMPLETIONS_PTY_NAME} )) &&
typeset -g ZSH_AUTOSUGGEST_COMPLETIONS_PTY_NAME=zsh_autosuggest_completion_pty
# AI strategy configuration
# API endpoint for AI suggestions (OpenAI-compatible)
(( ! ${+ZSH_AUTOSUGGEST_AI_ENDPOINT} )) &&
typeset -g ZSH_AUTOSUGGEST_AI_ENDPOINT='https://api.openai.com/v1/chat/completions'
# AI model to use for suggestions
(( ! ${+ZSH_AUTOSUGGEST_AI_MODEL} )) &&
typeset -g ZSH_AUTOSUGGEST_AI_MODEL='gpt-3.5-turbo'
# Timeout in seconds for AI API requests
(( ! ${+ZSH_AUTOSUGGEST_AI_TIMEOUT} )) &&
typeset -g ZSH_AUTOSUGGEST_AI_TIMEOUT=5
# Minimum input length before querying AI
(( ! ${+ZSH_AUTOSUGGEST_AI_MIN_INPUT} )) &&
typeset -g ZSH_AUTOSUGGEST_AI_MIN_INPUT=3
# Number of recent history lines to include as context
(( ! ${+ZSH_AUTOSUGGEST_AI_HISTORY_LINES} )) &&
typeset -g ZSH_AUTOSUGGEST_AI_HISTORY_LINES=20
# Prefer history entries from current directory
(( ! ${+ZSH_AUTOSUGGEST_AI_PREFER_PWD_HISTORY} )) &&
typeset -g ZSH_AUTOSUGGEST_AI_PREFER_PWD_HISTORY=yes

src/strategies/ai.zsh Normal file

@@ -0,0 +1,192 @@
#--------------------------------------------------------------------#
# AI Suggestion Strategy #
#--------------------------------------------------------------------#
# Queries an OpenAI-compatible LLM API to generate command
# completions based on partial input, working directory, and
# recent shell history.
#
_zsh_autosuggest_strategy_ai_json_escape() {
# Reset options to defaults and enable LOCAL_OPTIONS
emulate -L zsh
local input="$1"
local output=""
local char i
for ((i=1; i<=${#input}; i++)); do
char="${input:$((i-1)):1}"
case "$char" in
$'\\') output+='\\' ;;
'"') output+='\"' ;;
$'\n') output+='\n' ;;
$'\t') output+='\t' ;;
$'\r') output+='\r' ;;
[[:cntrl:]]) ;; # Skip other control chars
*) output+="$char" ;;
esac
done
printf '%s' "$output"
}
_zsh_autosuggest_strategy_ai_gather_context() {
# Reset options to defaults and enable LOCAL_OPTIONS
emulate -L zsh
local max_lines="${ZSH_AUTOSUGGEST_AI_HISTORY_LINES:-20}"
local prefer_pwd="${ZSH_AUTOSUGGEST_AI_PREFER_PWD_HISTORY:-yes}"
local pwd_basename="${PWD:t}"
local -a context_lines
local -a pwd_lines
local -a other_lines
local line
# Iterate history event numbers from most recent to oldest
# (sorting the values directly would order by command text, not recency)
local key
for key in "${(@kOn)history}"; do
line="${history[$key]}"
# Truncate long lines
if [[ ${#line} -gt 200 ]]; then
line="${line:0:200}..."
fi
# Categorize by PWD relevance
if [[ "$prefer_pwd" == "yes" ]] && [[ "$line" == *"$pwd_basename"* || "$line" == *"./"* || "$line" == *"../"* ]]; then
pwd_lines+=("$line")
else
other_lines+=("$line")
fi
done
# Prioritize PWD-relevant lines, then fill with others
context_lines=("${pwd_lines[@]}" "${other_lines[@]}")
context_lines=("${(@)context_lines[1,$max_lines]}")
# Return via reply array
reply=("${context_lines[@]}")
}
_zsh_autosuggest_strategy_ai_normalize() {
# Reset options to defaults and enable LOCAL_OPTIONS
emulate -L zsh
local response="$1"
local buffer="$2"
local result=""
# Strip \r
response="${response//$'\r'/}"
# Strip markdown code fences (shortest match so only the fence
# line is removed, not everything up to the last newline)
response="${response#\`\`\`*$'\n'}"
response="${response%%$'\n'\`\`\`*}"
# Strip surrounding quotes
if [[ "$response" == \"*\" || "$response" == \'*\' ]]; then
response="${response:1:-1}"
fi
# Trim whitespace
response="${response##[[:space:]]##}"
response="${response%%[[:space:]]##}"
# Take first line only
result="${response%%$'\n'*}"
# A suggestion must extend the current buffer: if the model
# returned only the missing suffix, prepend the buffer
if [[ -n "$result" && "$result" != "$buffer"* ]]; then
result="$buffer$result"
fi
printf '%s' "$result"
}
_zsh_autosuggest_strategy_ai() {
# Reset options to defaults and enable LOCAL_OPTIONS
emulate -L zsh
typeset -g suggestion
local buffer="$1"
# Early return if API key not set (opt-in gate)
[[ -z "$ZSH_AUTOSUGGEST_AI_API_KEY" ]] && return
# Early return if curl or jq is not available
[[ -z "${commands[curl]}" || -z "${commands[jq]}" ]] && return
# Early return if input too short
local min_input="${ZSH_AUTOSUGGEST_AI_MIN_INPUT:-3}"
[[ ${#buffer} -lt $min_input ]] && return
# Gather context
local -a context
_zsh_autosuggest_strategy_ai_gather_context
context=("${reply[@]}")
# Build context string
local context_str="" line
for line in "${context[@]}"; do
if [[ -n "$context_str" ]]; then
context_str+=", "
fi
context_str+="\"$(_zsh_autosuggest_strategy_ai_json_escape "$line")\""
done
# Build JSON request body
local system_prompt="You are a shell command auto-completion engine. Given the user's partial command, working directory, and recent history, predict the complete command. Reply ONLY with the complete command. No explanations, no markdown, no quotes."
# Use real newlines here; the JSON escape helper converts them to \n
local user_message="Working directory: $PWD
Recent history: [$context_str]
Partial command: $buffer"
local json_body
json_body=$(printf '{
"model": "%s",
"messages": [
{"role": "system", "content": "%s"},
{"role": "user", "content": "%s"}
],
"temperature": 0.3,
"max_tokens": 100
}' \
"${ZSH_AUTOSUGGEST_AI_MODEL:-gpt-3.5-turbo}" \
"$(_zsh_autosuggest_strategy_ai_json_escape "$system_prompt")" \
"$(_zsh_autosuggest_strategy_ai_json_escape "$user_message")")
# Make API request
local endpoint="${ZSH_AUTOSUGGEST_AI_ENDPOINT:-https://api.openai.com/v1/chat/completions}"
local timeout="${ZSH_AUTOSUGGEST_AI_TIMEOUT:-5}"
local response
response=$(curl --silent --max-time "$timeout" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $ZSH_AUTOSUGGEST_AI_API_KEY" \
-d "$json_body" \
-w '\n%{http_code}' \
"$endpoint" 2>/dev/null)
# Check curl exit status
[[ $? -ne 0 ]] && return
# Split response body from HTTP status
local http_code="${response##*$'\n'}"
local body="${response%$'\n'*}"
# Early return on non-2xx status
[[ "$http_code" != 2* ]] && return
# Extract content from JSON response
local content
content=$(printf '%s' "$body" | jq -r '.choices[0].message.content // empty' 2>/dev/null)
# Early return if extraction failed
[[ -z "$content" ]] && return
# Normalize response
local normalized
normalized="$(_zsh_autosuggest_strategy_ai_normalize "$content" "$buffer")"
# Set suggestion
[[ -n "$normalized" ]] && suggestion="$normalized"
}


@@ -120,6 +120,31 @@ typeset -g ZSH_AUTOSUGGEST_ORIGINAL_WIDGET_PREFIX=autosuggest-orig-
(( ! ${+ZSH_AUTOSUGGEST_COMPLETIONS_PTY_NAME} )) &&
typeset -g ZSH_AUTOSUGGEST_COMPLETIONS_PTY_NAME=zsh_autosuggest_completion_pty
# AI strategy configuration
# API endpoint for AI suggestions (OpenAI-compatible)
(( ! ${+ZSH_AUTOSUGGEST_AI_ENDPOINT} )) &&
typeset -g ZSH_AUTOSUGGEST_AI_ENDPOINT='https://api.openai.com/v1/chat/completions'
# AI model to use for suggestions
(( ! ${+ZSH_AUTOSUGGEST_AI_MODEL} )) &&
typeset -g ZSH_AUTOSUGGEST_AI_MODEL='gpt-3.5-turbo'
# Timeout in seconds for AI API requests
(( ! ${+ZSH_AUTOSUGGEST_AI_TIMEOUT} )) &&
typeset -g ZSH_AUTOSUGGEST_AI_TIMEOUT=5
# Minimum input length before querying AI
(( ! ${+ZSH_AUTOSUGGEST_AI_MIN_INPUT} )) &&
typeset -g ZSH_AUTOSUGGEST_AI_MIN_INPUT=3
# Number of recent history lines to include as context
(( ! ${+ZSH_AUTOSUGGEST_AI_HISTORY_LINES} )) &&
typeset -g ZSH_AUTOSUGGEST_AI_HISTORY_LINES=20
# Prefer history entries from current directory
(( ! ${+ZSH_AUTOSUGGEST_AI_PREFER_PWD_HISTORY} )) &&
typeset -g ZSH_AUTOSUGGEST_AI_PREFER_PWD_HISTORY=yes
#--------------------------------------------------------------------#
# Utility Functions #
#--------------------------------------------------------------------#
@@ -494,6 +519,198 @@ _zsh_autosuggest_partial_accept() {
done
}
#--------------------------------------------------------------------#
# AI Suggestion Strategy #
#--------------------------------------------------------------------#
# Queries an OpenAI-compatible LLM API to generate command
# completions based on partial input, working directory, and
# recent shell history.
#
_zsh_autosuggest_strategy_ai_json_escape() {
# Reset options to defaults and enable LOCAL_OPTIONS
emulate -L zsh
local input="$1"
local output=""
local char i
for ((i=1; i<=${#input}; i++)); do
char="${input:$((i-1)):1}"
case "$char" in
$'\\') output+='\\' ;;
'"') output+='\"' ;;
$'\n') output+='\n' ;;
$'\t') output+='\t' ;;
$'\r') output+='\r' ;;
[[:cntrl:]]) ;; # Skip other control chars
*) output+="$char" ;;
esac
done
printf '%s' "$output"
}
_zsh_autosuggest_strategy_ai_gather_context() {
# Reset options to defaults and enable LOCAL_OPTIONS
emulate -L zsh
local max_lines="${ZSH_AUTOSUGGEST_AI_HISTORY_LINES:-20}"
local prefer_pwd="${ZSH_AUTOSUGGEST_AI_PREFER_PWD_HISTORY:-yes}"
local pwd_basename="${PWD:t}"
local -a context_lines
local -a pwd_lines
local -a other_lines
local line
# Iterate history event numbers from most recent to oldest
# (sorting the values directly would order by command text, not recency)
local key
for key in "${(@kOn)history}"; do
line="${history[$key]}"
# Truncate long lines
if [[ ${#line} -gt 200 ]]; then
line="${line:0:200}..."
fi
# Categorize by PWD relevance
if [[ "$prefer_pwd" == "yes" ]] && [[ "$line" == *"$pwd_basename"* || "$line" == *"./"* || "$line" == *"../"* ]]; then
pwd_lines+=("$line")
else
other_lines+=("$line")
fi
done
# Prioritize PWD-relevant lines, then fill with others
context_lines=("${pwd_lines[@]}" "${other_lines[@]}")
context_lines=("${(@)context_lines[1,$max_lines]}")
# Return via reply array
reply=("${context_lines[@]}")
}
_zsh_autosuggest_strategy_ai_normalize() {
# Reset options to defaults and enable LOCAL_OPTIONS
emulate -L zsh
local response="$1"
local buffer="$2"
local result=""
# Strip \r
response="${response//$'\r'/}"
# Strip markdown code fences (shortest match so only the fence
# line is removed, not everything up to the last newline)
response="${response#\`\`\`*$'\n'}"
response="${response%%$'\n'\`\`\`*}"
# Strip surrounding quotes
if [[ "$response" == \"*\" || "$response" == \'*\' ]]; then
response="${response:1:-1}"
fi
# Trim whitespace
response="${response##[[:space:]]##}"
response="${response%%[[:space:]]##}"
# Take first line only
result="${response%%$'\n'*}"
# A suggestion must extend the current buffer: if the model
# returned only the missing suffix, prepend the buffer
if [[ -n "$result" && "$result" != "$buffer"* ]]; then
result="$buffer$result"
fi
printf '%s' "$result"
}
_zsh_autosuggest_strategy_ai() {
# Reset options to defaults and enable LOCAL_OPTIONS
emulate -L zsh
typeset -g suggestion
local buffer="$1"
# Early return if API key not set (opt-in gate)
[[ -z "$ZSH_AUTOSUGGEST_AI_API_KEY" ]] && return
# Early return if curl or jq is not available
[[ -z "${commands[curl]}" || -z "${commands[jq]}" ]] && return
# Early return if input too short
local min_input="${ZSH_AUTOSUGGEST_AI_MIN_INPUT:-3}"
[[ ${#buffer} -lt $min_input ]] && return
# Gather context
local -a context
_zsh_autosuggest_strategy_ai_gather_context
context=("${reply[@]}")
# Build context string
local context_str="" line
for line in "${context[@]}"; do
if [[ -n "$context_str" ]]; then
context_str+=", "
fi
context_str+="\"$(_zsh_autosuggest_strategy_ai_json_escape "$line")\""
done
# Build JSON request body
local system_prompt="You are a shell command auto-completion engine. Given the user's partial command, working directory, and recent history, predict the complete command. Reply ONLY with the complete command. No explanations, no markdown, no quotes."
# Use real newlines here; the JSON escape helper converts them to \n
local user_message="Working directory: $PWD
Recent history: [$context_str]
Partial command: $buffer"
local json_body
json_body=$(printf '{
"model": "%s",
"messages": [
{"role": "system", "content": "%s"},
{"role": "user", "content": "%s"}
],
"temperature": 0.3,
"max_tokens": 100
}' \
"${ZSH_AUTOSUGGEST_AI_MODEL:-gpt-3.5-turbo}" \
"$(_zsh_autosuggest_strategy_ai_json_escape "$system_prompt")" \
"$(_zsh_autosuggest_strategy_ai_json_escape "$user_message")")
# Make API request
local endpoint="${ZSH_AUTOSUGGEST_AI_ENDPOINT:-https://api.openai.com/v1/chat/completions}"
local timeout="${ZSH_AUTOSUGGEST_AI_TIMEOUT:-5}"
local response
response=$(curl --silent --max-time "$timeout" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $ZSH_AUTOSUGGEST_AI_API_KEY" \
-d "$json_body" \
-w '\n%{http_code}' \
"$endpoint" 2>/dev/null)
# Check curl exit status
[[ $? -ne 0 ]] && return
# Split response body from HTTP status
local http_code="${response##*$'\n'}"
local body="${response%$'\n'*}"
# Early return on non-2xx status
[[ "$http_code" != 2* ]] && return
# Extract content from JSON response
local content
content=$(printf '%s' "$body" | jq -r '.choices[0].message.content // empty' 2>/dev/null)
# Early return if extraction failed
[[ -z "$content" ]] && return
# Normalize response
local normalized
normalized="$(_zsh_autosuggest_strategy_ai_normalize "$content" "$buffer")"
# Set suggestion
[[ -n "$normalized" ]] && suggestion="$normalized"
}
#--------------------------------------------------------------------#
# Completion Suggestion Strategy #
#--------------------------------------------------------------------#