Codestral API [Mistral AI]
Codestral is available in select code-completion plugins, but can also be queried directly. See the documentation for more details.
API Key
# env variable
os.environ['CODESTRAL_API_KEY'] = "your-api-key"
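If setting an environment variable is inconvenient, litellm calls generally also accept an api_key argument directly. A minimal sketch, assuming the per-call api_key parameter overrides the environment (the key string is a placeholder):

import litellm

# Sketch: pass the Codestral key per call instead of via the environment.
response = litellm.text_completion(
    model="text-completion-codestral/codestral-2405",
    prompt="def hello():",
    api_key="your-api-key",  # placeholder; assumed per-call override
)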
FIM / Completions
Info: Official Mistral API Docs: https://docs.mistral.ai/api/#operation/createFIMCompletion
Sample Usage
import os
import litellm

os.environ['CODESTRAL_API_KEY'] = "your-api-key"

# run from an async context (e.g. inside an async function)
response = await litellm.atext_completion(
    model="text-completion-codestral/codestral-2405",
    prompt="def is_odd(n): \n return n % 2 == 1 \ndef test_is_odd():",
    suffix="return True",  # optional
    temperature=0,  # optional
    top_p=1,  # optional
    max_tokens=10,  # optional
    min_tokens=10,  # optional
    seed=10,  # optional
    stop=["return"],  # optional
)
Expected Response
{
"id": "b41e0df599f94bc1a46ea9fcdbc2aabe",
"object": "text_completion",
"created": 1589478378,
"model": "codestral-latest",
"choices": [
{
"text": "\n assert is_odd(1)\n assert",
"index": 0,
"logprobs": null,
"finish_reason": "length"
}
],
"usage": {
"prompt_tokens": 5,
"completion_tokens": 7,
"total_tokens": 12
}
}
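For scripts without an event loop, litellm also exposes a synchronous entry point. A minimal sketch, assuming text_completion mirrors the async signature shown above (the key is a placeholder):

import os
import litellm

os.environ['CODESTRAL_API_KEY'] = "your-api-key"  # placeholder

# Sketch: blocking counterpart of atext_completion.
response = litellm.text_completion(
    model="text-completion-codestral/codestral-2405",
    prompt="def is_odd(n): \n return n % 2 == 1 \ndef test_is_odd():",
    suffix="return True",
    max_tokens=10,
)
print(response.choices[0].text)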
Sample Usage - Streaming
import os
import litellm

os.environ['CODESTRAL_API_KEY'] = "your-api-key"

# run from an async context (e.g. inside an async function)
response = await litellm.atext_completion(
    model="text-completion-codestral/codestral-2405",
    prompt="def is_odd(n): \n return n % 2 == 1 \ndef test_is_odd():",
    suffix="return True",  # optional
    temperature=0,  # optional
    top_p=1,  # optional
    stream=True,
    seed=10,  # optional
    stop=["return"],  # optional
)

async for chunk in response:
    print(chunk)
Expected Response
{
"id": "726025d3e2d645d09d475bb0d29e3640",
"object": "text_completion",
"created": 1718659669,
"choices": [
{
"text": "This",
"index": 0,
"logprobs": null,
"finish_reason": null
}
],
"model": "codestral-2405",
}
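Each streamed chunk carries only a fragment in choices[0].text, so a common pattern is to join the fragments into the final completion. A self-contained sketch, assuming the chunk shape shown above:

import os
import asyncio
import litellm

os.environ['CODESTRAL_API_KEY'] = "your-api-key"  # placeholder

async def collect_fim_completion() -> str:
    # Stream the FIM completion and accumulate the text fragments.
    response = await litellm.atext_completion(
        model="text-completion-codestral/codestral-2405",
        prompt="def is_odd(n): \n return n % 2 == 1 \ndef test_is_odd():",
        stream=True,
    )
    parts = []
    async for chunk in response:
        text = chunk.choices[0].text  # fragment; may be empty on the final chunk
        if text:
            parts.append(text)
    return "".join(parts)

print(asyncio.run(collect_fim_completion()))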
Supported Models
All models listed here https://docs.mistral.ai/platform/endpoints are supported. We actively maintain the list of models, pricing, token window, etc. here.
| Model Name | Function Call |
|---|---|
| Codestral Latest | `completion(model="text-completion-codestral/codestral-latest", messages)` |
| Codestral 2405 | `completion(model="text-completion-codestral/codestral-2405", messages)` |
Chat Completions
Info: Official Mistral API Docs: https://docs.mistral.ai/api/#operation/createChatCompletion
Sample Usage
import os
import litellm

os.environ['CODESTRAL_API_KEY'] = "your-api-key"

# run from an async context (e.g. inside an async function)
response = await litellm.acompletion(
    model="codestral/codestral-latest",
    messages=[
        {
            "role": "user",
            "content": "Hey, how's it going?",
        }
    ],
    temperature=0.0,  # optional
    top_p=1,  # optional
    max_tokens=10,  # optional
    safe_prompt=False,  # optional
    seed=12,  # optional
)
Expected Response
{
"id": "chatcmpl-123",
"object": "chat.completion",
"created": 1677652288,
"model": "codestral/codestral-latest",
"system_fingerprint": None,
"choices": [{
"index": 0,
"message": {
"role": "assistant",
"content": "\n\nHello there, how may I assist you today?",
},
"logprobs": null,
"finish_reason": "stop"
}],
"usage": {
"prompt_tokens": 9,
"completion_tokens": 12,
"total_tokens": 21
}
}
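The same call also works synchronously via litellm.completion. A minimal sketch, mirroring the async example above (the key is a placeholder):

import os
import litellm

os.environ['CODESTRAL_API_KEY'] = "your-api-key"  # placeholder

# Sketch: blocking counterpart of acompletion.
response = litellm.completion(
    model="codestral/codestral-latest",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    max_tokens=10,
)
print(response.choices[0].message.content)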
Sample Usage - Streaming
import os
import litellm

os.environ['CODESTRAL_API_KEY'] = "your-api-key"

# run from an async context (e.g. inside an async function)
response = await litellm.acompletion(
    model="codestral/codestral-latest",
    messages=[
        {
            "role": "user",
            "content": "Hey, how's it going?",
        }
    ],
    stream=True,  # optional
    temperature=0.0,  # optional
    top_p=1,  # optional
    max_tokens=10,  # optional
    safe_prompt=False,  # optional
    seed=12,  # optional
)

async for chunk in response:
    print(chunk)
Expected Response
{
"id":"chatcmpl-123",
"object":"chat.completion.chunk",
"created":1694268190,
"model": "codestral/codestral-latest",
"system_fingerprint": null,
"choices":[
{
"index":0,
"delta":{"role":"assistant","content":"gm"},
"logprobs":null,
" finish_reason":null
}
]
}
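Streaming chat chunks carry the incremental text in choices[0].delta.content (null on role-only and final chunks), so assembling the full reply means concatenating the deltas. A self-contained sketch under that assumption:

import os
import asyncio
import litellm

os.environ['CODESTRAL_API_KEY'] = "your-api-key"  # placeholder

async def stream_reply() -> str:
    # Stream the chat completion and join the content deltas.
    response = await litellm.acompletion(
        model="codestral/codestral-latest",
        messages=[{"role": "user", "content": "Hey, how's it going?"}],
        stream=True,
    )
    parts = []
    async for chunk in response:
        delta = chunk.choices[0].delta.content  # None on role-only/final chunks
        if delta:
            parts.append(delta)
    return "".join(parts)

print(asyncio.run(stream_reply()))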
Supported Models
All models listed here https://docs.mistral.ai/platform/endpoints are supported. We actively maintain the list of models, pricing, token window, etc. here.
| Model Name | Function Call |
|---|---|
| Codestral Latest | `completion(model="codestral/codestral-latest", messages)` |
| Codestral 2405 | `completion(model="codestral/codestral-2405", messages)` |