ChatGPT API Development: A Complete Guide
This article walks through building intelligent applications with the ChatGPT API, covering environment setup, making API calls, error handling, and other key steps.
Environment Setup
1. Obtaining an API Key
- Register an OpenAI account
- Visit the API keys page: https://platform.openai.com/api-keys
- Create a new API key
2. Development Environment Configuration
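For the Python examples that follow, installing the official SDK and a `.env` loader is enough (package names as published on PyPI; pin versions as your project requires):

```shell
pip install openai python-dotenv
```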
API Basics
1. Initializing the Client

```python
from openai import OpenAI

# Direct connection
client = OpenAI(api_key='your-api-key')

# Or through a proxy / custom endpoint
client = OpenAI(
    api_key='your-api-key',
    base_url="your-proxy-url"
)
```
2. Basic Chat

```python
def chat_with_gpt(prompt):
    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "user", "content": prompt}
            ]
        )
        return response.choices[0].message.content
    except Exception as e:
        print(f"Error: {e}")
        return None
```
3. Multi-turn Conversations

```python
def chat_conversation(messages):
    try:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=messages,
            temperature=0.7
        )
        return response.choices[0].message
    except Exception as e:
        print(f"Error: {e}")
        return None

messages = [
    {"role": "system", "content": "You are a professional Python developer"},
    {"role": "user", "content": "How do I implement quicksort?"}
]
```
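A multi-turn conversation only stays multi-turn if the assistant's reply is appended back into the history before the next user message. A minimal loop sketch; `run_turn` is a hypothetical stand-in for the API call above, made into a local echo so the sketch runs offline:

```python
def run_turn(messages):
    # Hypothetical stand-in for client.chat.completions.create(...);
    # it just echoes the last user message so the sketch is runnable.
    return {"role": "assistant", "content": f"Echo: {messages[-1]['content']}"}

def converse(history, user_input):
    history.append({"role": "user", "content": user_input})
    reply = run_turn(history)
    history.append(reply)  # keep the assistant turn in context for the next call
    return reply["content"]
```

In a real application, `run_turn` would call the API and return `response.choices[0].message` converted to a dict.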
Advanced Features
1. Streaming Responses

```python
def stream_chat():
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Tell me a story"}],
        stream=True
    )
    for chunk in response:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="")
```
2. Function Calling

```python
functions = [
    {
        "name": "get_weather",
        "description": "Get weather information for the specified city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name"
                }
            },
            "required": ["city"]
        }
    }
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is the weather like in Beijing today?"}],
    functions=functions,
    function_call="auto"
)
```
Performance Optimization
1. Concurrent Requests

```python
import asyncio
import os

import aiohttp

API_KEY = os.getenv("OPENAI_API_KEY")

async def async_chat(prompt):
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={
                "model": "gpt-3.5-turbo",
                "messages": [{"role": "user", "content": prompt}]
            }
        ) as response:
            return await response.json()

async def main():
    prompts = ["Question 1", "Question 2", "Question 3"]
    tasks = [async_chat(prompt) for prompt in prompts]
    results = await asyncio.gather(*tasks)
    return results

# Run with: results = asyncio.run(main())
```
2. Caching

```python
from functools import lru_cache

@lru_cache(maxsize=100)
def cached_chat(prompt: str):
    return chat_with_gpt(prompt)
```
Error Handling
1. Retries

```python
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3),
       wait=wait_exponential(multiplier=1, min=4, max=10))
def chat_with_retry(prompt):
    return chat_with_gpt(prompt)
```
2. Exception Handling

```python
import logging

logger = logging.getLogger(__name__)

def safe_chat(prompt):
    try:
        response = chat_with_gpt(prompt)
        if not response:
            raise ValueError("Empty response")
        return response
    except Exception as e:
        logger.error(f"Chat error: {e}")
        return "Sorry, the service encountered an error. Please try again later."
```
Best Practices
API Key Management

```python
import os

from dotenv import load_dotenv

load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
```
Cost Control

```python
import tiktoken

def count_tokens(text):
    encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
    return len(encoding.encode(text))
```
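Token counts map directly to spend, so they can feed a per-request cost estimate. A hedged sketch; the per-1K-token prices below are illustrative placeholders, not current rates, so check OpenAI's pricing page before relying on them:

```python
# Illustrative prices in USD per 1K tokens; real rates change over time
PRICE_PER_1K = {
    "gpt-3.5-turbo": {"input": 0.0005, "output": 0.0015},
}

def estimate_cost(model, input_tokens, output_tokens):
    rates = PRICE_PER_1K[model]
    return (input_tokens / 1000) * rates["input"] \
         + (output_tokens / 1000) * rates["output"]
```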
Rate Limiting

```python
from ratelimit import limits, sleep_and_retry

@sleep_and_retry
@limits(calls=60, period=60)  # at most 60 calls per 60 seconds
def rate_limited_chat(prompt):
    return chat_with_gpt(prompt)
```
Application Examples
1. Customer Service Bot

```python
def customer_service_bot(user_input):
    system_prompt = "You are a professional customer service representative. Answer user questions in a friendly tone."
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input}
    ]
    return chat_conversation(messages)
```
2. Code Assistant

```python
def code_assistant(code_question):
    system_prompt = "You are a professional programmer. Provide detailed code examples and explanations."
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": code_question}
    ]
    return chat_conversation(messages)
```
Security Considerations
- Input validation
- Sensitive information filtering
- Response content moderation
- User authentication and authorization
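The first two items can be combined into a simple pre-flight check on user input. This is only a sketch; the length limit and the regular expressions below are illustrative assumptions that should be tuned per application:

```python
import re

MAX_PROMPT_LENGTH = 4000  # illustrative limit, not an API constraint

# Illustrative patterns for sensitive data; extend for your own use case
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),           # possible card number
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # possible leaked API key
]

def validate_input(prompt: str) -> str:
    if not prompt or not prompt.strip():
        raise ValueError("Empty prompt")
    if len(prompt) > MAX_PROMPT_LENGTH:
        raise ValueError("Prompt too long")
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt
```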
References
- OpenAI API documentation
- Python OpenAI library documentation
- API best practices guide
This article will be updated over time; feedback and discussion are welcome.