Seamless Support with LangSmith¶
A common misconception is that LangChain's LangSmith only works with LangChain's models. In reality, LangSmith is a unified DevOps platform for developing, collaborating on, testing, deploying, and monitoring LLM applications. In this post we'll explore how to use LangSmith to enhance the OpenAI client together with instructor.
First, install the necessary packages:
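A minimal install command for the packages used below, assuming the published `instructor`, `openai`, and `pydantic` distributions on PyPI:

```shell
pip install instructor openai pydantic
```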
LangSmith¶
To use LangSmith, you first need to set your LangSmith API key.
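A sketch of the environment setup, assuming the `LANGCHAIN_TRACING_V2` and `LANGCHAIN_API_KEY` variable names read by the LangSmith SDK (the exact names may differ across SDK versions):

```shell
# Enable tracing and point the SDK at your LangSmith account
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="<your-api-key>"
```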
Next, you need to install the LangSmith SDK:
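Assuming the SDK is the `langsmith` distribution on PyPI:

```shell
pip install -U langsmith
```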
In this example, we'll use the wrap_openai function to wrap the OpenAI client with LangSmith. This gives us LangSmith's observability and monitoring features whenever we use the OpenAI client. We'll then patch the client with instructor in TOOLS mode, which adds structured-output capabilities on top of the client.
```python
import asyncio
from enum import Enum
from typing import List

import instructor
from langsmith import traceable
from langsmith.wrappers import wrap_openai
from openai import AsyncOpenAI
from pydantic import BaseModel, Field, field_validator

# Wrap the OpenAI client with LangSmith
client = wrap_openai(AsyncOpenAI())

# Patch the client with instructor
client = instructor.from_openai(client)

# Rate limit the number of concurrent requests
sem = asyncio.Semaphore(5)


# Use an Enum to define the types of questions
class QuestionType(Enum):
    CONTACT = "CONTACT"
    TIMELINE_QUERY = "TIMELINE_QUERY"
    DOCUMENT_SEARCH = "DOCUMENT_SEARCH"
    COMPARE_CONTRAST = "COMPARE_CONTRAST"
    EMAIL = "EMAIL"
    PHOTOS = "PHOTOS"
    SUMMARY = "SUMMARY"


# You can add more instructions and examples in the description,
# or you can put them in the prompt in `messages=[...]`
class QuestionClassification(BaseModel):
    """
    Predict the type of question that is being asked.
    Here are some tips on how to predict the question type:

    CONTACT: Searches for some contact information.
    TIMELINE_QUERY: "When did something happen?"
    DOCUMENT_SEARCH: "Find me a document"
    COMPARE_CONTRAST: "Compare and contrast two things"
    EMAIL: "Find me an email, search for an email"
    PHOTOS: "Find me a photo, search for a photo"
    SUMMARY: "Summarize a large amount of data"
    """

    # If you want only one classification, just change it to
    # `classification: QuestionType` rather than `classification: List[QuestionType]`
    chain_of_thought: str = Field(
        ..., description="The chain of thought that led to the classification"
    )
    classification: List[QuestionType] = Field(
        description=f"An accurate prediction of the question type. Only these types are allowed: {[t.value for t in QuestionType]}",
    )

    @field_validator("classification", mode="before")
    def validate_classification(cls, v):
        # Sometimes the API returns a single value; make sure it's a list
        if not isinstance(v, list):
            v = [v]
        return v


@traceable(name="classify-question")
async def classify(data: str) -> tuple[str, QuestionClassification]:
    """
    Perform multi-label classification on the input text.
    Change the prompt to fit your use case.

    Args:
        data (str): The input text to classify.
    """
    async with sem:  # some simple rate limiting
        return data, await client.chat.completions.create(
            model="gpt-4-turbo-preview",
            response_model=QuestionClassification,
            max_retries=2,
            messages=[
                {
                    "role": "user",
                    "content": f"Classify the following question: {data}",
                },
            ],
        )


async def main(questions: List[str]):
    resps = []
    tasks = [classify(question) for question in questions]
    for task in asyncio.as_completed(tasks):
        question, label = await task
        resp = {
            "question": question,
            "classification": [c.value for c in label.classification],
            "chain_of_thought": label.chain_of_thought,
        }
        resps.append(resp)
    return resps


if __name__ == "__main__":
    questions = [
        "What was that ai app that i saw on the news the other day?",
        "Can you find the trainline booking email?",
        "what did I do on Monday?",
        "Tell me about todays meeting and how it relates to the email on Monday",
    ]

    resp = asyncio.run(main(questions))

    for r in resp:
        print("q:", r["question"])
        #> q: what did I do on Monday?
        print("c:", r["classification"])
        #> c: ['SUMMARY']
```
That's all there is to it: we wrapped the client and used asyncio to classify a list of questions concurrently. This is a simple example of how LangSmith can enhance the OpenAI client. You can use LangSmith to monitor and observe the client, and instructor to add extra capabilities on top of it.
To see the trace for this run, check out this shareable link.