No large model? Deploy the ERNIE (文心) model directly on the 星河社区 (Baidu AI Studio) platform!

No AI Agent application? Install and use the CAMEL-AI framework!

Is it hard to make CAMEL-AI talk to the ERNIE deployment? Just write a relay API!

With all three in place, our AI journey can begin!

Preparation

How to install CAMEL: https://skywalk.blog.csdn.net/article/details/147413360

How to deploy the ERNIE model: https://skywalk.blog.csdn.net/article/details/145802334

A good choice is the ERNIE-4.5-21B-A3B-Thinking model; it performs very well!
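Once deployed, it is worth confirming that the endpoint accepts an OpenAI-style request before wiring anything else up. A minimal standard-library sketch; the base URL and key below are placeholders, so substitute the values shown on your own deployment page:

```python
import json
import urllib.request

# Placeholders: replace with the API key and base URL from your deployment page
BASE_URL = "https://api-xxxxxxxx.aistudio-app.com/v1"
API_KEY = "your-aistudio-api-key"

# Build an OpenAI-style chat completion request
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps({
        "model": "default",
        "messages": [{"role": "user", "content": "你好"}],
    }).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# Uncomment once your deployment is live:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(req.full_url)
```

If this returns a normal chat completion, the relay and CAMEL setup below should work against the same endpoint.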

Relay API code bridging ERNIE and CAMEL:

#!/usr/bin/env python3
"""
星河社区部署模型中继服务
基于星河社区API提供OpenAI兼容的API接口
"""

from fastapi import FastAPI, Request, HTTPException
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import StreamingResponse
from openai import OpenAI
import json
import os
import time

app = FastAPI(title="星河社区 Model Relay Service")

# Allow cross-origin requests
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# 星河社区 API configuration
XINGHE_API_KEY = "6cac673af748cec3440270f6bbfe02b662dxx"
XINGHE_BASE_URL = "https://api-rakbyb46k9xcc5t0.aistudio-app.com/v1"

# Initialize the OpenAI-compatible client
client = OpenAI(
    api_key=XINGHE_API_KEY,
    base_url=XINGHE_BASE_URL
)

@app.post("/v1/chat/completions")
async def chat_completions(request: Request):
    """
    OpenAI兼容的聊天完成端点
    支持流式和非流式响应
    """
    try:
        # 1. Parse the request body
        data = await request.json()
        
        # Debug logging toggle
        DEBUG = os.environ.get("XINGHE_DEBUG", "false").lower() == "true"
        if DEBUG:
            print(f"==== Incoming request: {data}")
        
        # Extract the messages
        messages = data.get("messages", [])
        if not messages:
            raise HTTPException(status_code=400, detail="No messages provided")
        
        # 2. Build the 星河社区 API call parameters
        stream = data.get("stream", False)
        
        # Assemble the required parameters
        completion_params = {
            "model": data.get("model", "default"),
            "temperature": data.get("temperature", 0.6),
            "messages": messages,
            "stream": stream
        }
        
        # Optional parameters
        if "max_tokens" in data:
            completion_params["max_tokens"] = data["max_tokens"]
        if "top_p" in data:
            completion_params["top_p"] = data["top_p"]
        
        # 3. Call the 星河社区 API
        if stream:
            # Streaming response
            async def generate_stream():
                try:
                    # The synchronous client is called from the async generator
                    # (this blocks the event loop; acceptable for a simple relay)
                    completion = client.chat.completions.create(**completion_params)
                    
                    for chunk in completion:
                        if hasattr(chunk.choices[0].delta, "reasoning_content") and chunk.choices[0].delta.reasoning_content:
                            content = chunk.choices[0].delta.reasoning_content
                        else:
                            content = chunk.choices[0].delta.content
                        
                        # Only forward non-None content
                        if content is not None:
                            yield f"data: {json.dumps({'choices': [{'delta': {'content': content}}]})}\n\n"
                    
                    # Send the [DONE] sentinel when the stream ends
                    yield "data: [DONE]\n\n"
                    
                except Exception as e:
                    yield f"data: {json.dumps({'error': str(e)})}\n\n"
            
            return StreamingResponse(
                generate_stream(),
                media_type="text/event-stream",
                headers={
                    "Cache-Control": "no-cache",
                    "Connection": "keep-alive",
                }
            )
        else:
            # Non-streaming response
            completion = client.chat.completions.create(**completion_params)
            
            # Extract the response content
            content = completion.choices[0].message.content
            
            # Rough token-usage estimate (~4 characters per token)
            estimated_prompt_tokens = len(str(messages)) // 4
            estimated_completion_tokens = len(content) // 4
            
            # Convert to the OpenAI-compatible format
            return {
                "id": f"chatcmpl-xinghe-{int(time.time())}",
                "object": "chat.completion",
                "created": int(time.time()),
                "model": data.get("model", "default"),
                "choices": [{
                    "index": 0,
                    "message": {
                        "role": "assistant",
                        "content": content
                    },
                    "finish_reason": "stop"
                }],
                "usage": {
                    "prompt_tokens": estimated_prompt_tokens,
                    "completion_tokens": estimated_completion_tokens,
                    "total_tokens": estimated_prompt_tokens + estimated_completion_tokens
                }
            }
        
    except Exception as e:
        error_msg = str(e)
        
        # Map common upstream errors to HTTP status codes
        if "429" in error_msg:
            raise HTTPException(
                status_code=429,
                detail="星河社区 API rate limit hit; please retry later"
            )
        elif "401" in error_msg:
            raise HTTPException(
                status_code=401,
                detail="星河社区 API authentication failed; check your API key"
            )
        elif "403" in error_msg:
            raise HTTPException(
                status_code=403,
                detail="星河社区 API access forbidden; check your API key configuration"
            )
        elif "404" in error_msg:
            raise HTTPException(
                status_code=404,
                detail="星河社区 API endpoint not found"
            )
        else:
            raise HTTPException(status_code=500, detail=f"星河社区 API error: {error_msg}")

@app.get("/")
async def root():
    """根端点"""
    return {
        "message": "星河社区部署模型中继服务",
        "status": "running",
        "endpoint": "/v1/chat/completions",
        "base_url": XINGHE_BASE_URL
    }

@app.get("/health")
async def health_check():
    """健康检查端点"""
    return {
        "status": "healthy",
        "service": "星河社区中继",
        "timestamp": int(time.time())
    }

@app.get("/models")
async def list_models():
    """列出可用模型"""
    return {
        "object": "list",
        "data": [
            {
                "id": "default",
                "object": "model",
                "created": 0,
                "owned_by": "星河社区"
            }
        ]
    }

if __name__ == "__main__":
    import uvicorn

    # Start the server
    print("🚀 Starting the 星河社区 model relay service...")
    print("📡 Service address: http://127.0.0.1:1337")
    print(f"🔗 Backend API: {XINGHE_BASE_URL}")
    print("=" * 60)

    uvicorn.run(app, host="0.0.0.0", port=1337)
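Each streaming chunk the relay emits is a single `data: {...}` SSE line, terminated by a `data: [DONE]` sentinel. A small sketch of how a client can decode these lines, matching the format produced by `generate_stream` above:

```python
import json

def parse_sse_line(line: str):
    """Decode one SSE line emitted by the relay's streaming endpoint."""
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return None  # end-of-stream sentinel
    # Each chunk carries one delta with the next piece of content
    return json.loads(payload)["choices"][0]["delta"].get("content")

sample = 'data: {"choices": [{"delta": {"content": "Hello"}}]}\n\n'
print(parse_sse_line(sample))              # Hello
print(parse_sse_line("data: [DONE]\n\n"))  # None
```

OpenAI-compatible clients (including CAMEL's) do this parsing internally, which is why the relay only needs to reproduce the line format.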

CAMEL test code:

from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType
from camel.agents import ChatAgent
from camel.toolkits import SearchToolkit
import os

os.environ["OPENAI_COMPATIBILITY_API_KEY"] = "your_custom_api_key"  # 自定义API密钥(可为任意值,仅作占位)
os.environ["OPENAI_COMPATIBILITY_API_BASE_URL"] = "http://127.0.0.1:1337/v1"  # 自定义大模型API地址


 
model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI_COMPATIBLE_MODEL,
    #model_type="gpt-4o",
    model_type="default",
    api_key=os.environ.get("OPENAI_COMPATIBILITY_API_KEY"),
    url=os.environ.get("OPENAI_COMPATIBILITY_API_BASE_URL"),
    model_config_dict={"temperature": 0.4, "max_tokens": 8192},
)
 
search_tool = SearchToolkit().search_duckduckgo
 
#agent = ChatAgent(model=model, tools=[search_tool])
agent = ChatAgent(model=model)
 
response_1 = agent.step("What is CAMEL-AI?")
print(response_1.msgs[0].content)
# CAMEL-AI is the first LLM (Large Language Model) multi-agent framework
# and an open-source community focused on finding the scaling laws of agents.
# ...
 
response_2 = agent.step("What is the Github link to CAMEL framework?")
print(response_2.msgs[0].content)
# The GitHub link to the CAMEL framework is
# [https://github.com/camel-ai/camel](https://github.com/camel-ai/camel).

One issue we ran into: DuckDuckGo is currently not directly reachable from mainland China,

so you will need a proxy tool.
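If your proxy exposes a local HTTP port, one common approach is to set the standard proxy environment variables before running the test script. The port 7890 below is an assumption; adjust it to whatever your tool actually listens on:

```python
import os

# Assumption: a local proxy listening on 127.0.0.1:7890 (adjust to your setup)
PROXY = "http://127.0.0.1:7890"
os.environ["HTTP_PROXY"] = PROXY
os.environ["HTTPS_PROXY"] = PROXY
print(os.environ["HTTPS_PROXY"])
```

Most HTTP libraries (requests, httpx, urllib) honor these variables automatically, so the DuckDuckGo search tool should then route through the proxy.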

Test output

Below is the output for three scenarios. Perhaps because the ERNIE model is so capable, it managed to return a GitHub URL even without function calling.

Without the function-calling tool

**CAMEL-AI** (often stylized as **CAMEL.AI**) is an open-source framework designed for 
**training, deploying, and serving large-scale machine learning models**, particularly 
focusing on **computer vision** and **multimodal learning tasks**. Here's a breakdown of its key aspects:

---

### 1. **Core Purpose**
- **Scalability**: Handles massive datasets and complex models efficiently.
- **Multimodal Learning**: Processes combinations of inputs (e.g., images + text) to learn rich embeddings.
- **Deployment-Ready**: Simplifies deploying models into production environments.      

---

### 2. **Key Features**
- **Embedding-Based Models**: Uses vector embeddings to represent data (e.g., images, text) for tasks like classification or retrieval.
- **Client-Server Architecture**:
  - **Client**: Runs on local machines/clusters for training/inference.
  - **Server**: Hosts deployed models via REST/gRPC APIs for real-time inference.      
- **Distributed Training**: Scales training across GPUs/nodes (e.g., using PyTorch/TensorFlow).
- **Pre-trained Models**: Offers transfer learning with architectures like ResNet, Vision Transformers, or CLIP.
- **Data Pipelines**: Built-in tools for processing images/videos, text, or multimodal 
data.

---

### 3. **Technical Workflow**
1. **Client Workflow**:
   - Load datasets (local/cloud storage).
   - Train models using pre-built or custom architectures.
   - Export trained models to the server.
2. **Server Workflow**:
   - Deploy models via lightweight APIs.
   - Serve inferences with low latency.
   - Manage requests/responses via health checks and metrics.

---

### 4. **Use Cases**
- **Computer Vision**: Image classification, object detection, segmentation.
- **Multimodal AI**: Vision-and-Language Tasks (e.g., image captioning, VQA).
- **Industrial Applications**: Defect detection in manufacturing, medical imaging analysis.

---

### 5. **Ecosystem & Tools**
- **Libraries**: Built on PyTorch/TensorFlow with optimized CUDA kernels.
- **Integrations**: Supports ONNX for model export and TensorRT for acceleration.      
- **Orchestration**: Works with Kubernetes for cluster management.
- **Dataset Tools**: Handles large-scale datasets (e.g., ImageNet, COCO).

---

### 6. **Why CAMEL-AI?**
- **Efficiency**: Reduces training costs via distributed computing.
- **Flexibility**: Customizable for research or production.
- **Community**: Open-source collaboration with ongoing updates.

---

### 7. **Current Status**
- **Active Development**: Maintained by researchers/engineers (e.g., University of Freiburg).
- **Latest Version**: As of 2024, CAMEL.AI evolves with support for newer architectures (e.g., diffusion models).
- **Repository**: [GitHub - camel-ai](https://github.com/camel-ai)

---

### Example Use Case
A team training a **vision-language model**:
1. **Client**: Fine-tunes CLIP using custom data.
2. **Server**: Deploys the model as a REST API to answer image-query questions.        

---

CAMEL-AI simplifies the lifecycle of large-scale AI, bridging research and deployment while optimizing for scalability and performance. For the latest updates, check the [official GitHub](https://github.com/camel-ai).
The official GitHub repository for **CAMEL-AI** (the framework for large-scale machine 
learning, multimodal tasks, and deployment) is:

[https://github.com/camel-ai](https://github.com/camel-ai)

This repository contains the core code, documentation, pre-trained models, and examples for training, deploying, and serving models like CLIP, vision transformers, and multimodal embeddings.

If you're referring to a different "CAMEL" framework (e.g., one focused on a specific task), clarify, and I’ll adjust the link accordingly! 😊

With function calling but without a proxy

**CAMEL-AI** is an acronym representing a framework for **Cognitive Adaptive Multimodal Embodied Learning with Artificial Intelligence**. It describes an educational technology system designed to create adaptive, engaging, and personalized learning experiences through AI-driven multimodal interactions. Here's a breakdown of its components:

### 1. **Cognitive**
   - Focuses on understanding and adapting to learners' **mental processes** (e.g., memory, attention, problem-solving).
   - Uses AI to analyze cognitive states (e.g., engagement, confusion levels) to dynamically adjust content difficulty, pacing, or teaching methods.

### 2. **Adaptive**
   - Systems learn from learner interactions to **personalize content** in real-time.
   - Adapts to individual needs (e.g., varying skill levels, learning styles) using algorithms that track progress and performance.

### 3. **Multimodal**
   - Engages learners via **multiple sensory channels** (e.g., text, audio, visuals, haptic feedback, VR/AR).        
   - Combines text, images, voice, and interactive elements to cater to diverse learning preferences.

### 4. **Embodied**
   - Emphasizes **physical interaction** with technology (e.g., robotics, wearables, or motion sensors).
   - Uses body movement, gestures, or haptic feedback to deepen understanding (e.g., simulating physical experiments 
or 3D models).

### 5. **Learning Analytics**
   - Collects and analyzes data to **measure learning outcomes**.
   - Uses AI to predict gaps in knowledge, recommend interventions, and improve long-term educational strategies.    

---

### **Key Features of CAMEL-AI**
- **Personalization**: Tailors content to individual learner needs.
- **Real-time Adaptation**: Adjusts difficulty/content based on learner responses.
- **Immersive Experiences**: Combines VR/AR for hands-on learning.
- **Data-Driven Insights**: Uses analytics to optimize teaching strategies.
- **Accessibility**: Designed for diverse devices and environments (e.g., classrooms, remote learning).

---

### **Applications**
CAMEL-AI is used in:
- **K-12 Education**: Adaptive quizzes, immersive history/science simulations.
- **Higher Education**: AI tutors for complex subjects (e.g., coding, language learning).
- **Corporate Training**: Simulated scenarios for workplace skills (e.g., leadership, safety).
- **Assistive Technology**: Supporting learners with disabilities through multimodal interfaces.

---

### **Why CAMEL-AI Matters**
It bridges gaps in traditional education by making learning:
- **More Engaging**: Multimodal interactions boost motivation.
- **More Effective**: Cognitive adaptations improve knowledge retention.
- **More Inclusive**: Caters to global audiences with varying resources.

By integrating **cognition**, **adaptability**, **multimodality**, **embodiment**, and **analytics**, CAMEL-AI represents a new generation of educational technology poised to transform learning worldwide.
The **CAMEL-AI framework** is an open-source project hosted on GitHub. You can access its repository using the following link:

### GitHub Link for CAMEL-AI:
[https://github.com/camel-ai/camel-ai](https://github.com/camel-ai/camel-ai)

### Key Details:
- This repository contains the source code, documentation, and examples for the **CAMEL-AI framework**, which is designed for adaptive, multimodal, and embodied learning environments.
- The project focuses on integrating **cognitive adaptability**, **multimodal interactions**, and **learning analytics** using AI.

### What to Expect in the Repository:
- Core AI algorithms for cognitive state detection and content adaptation.
- Implementation of multimodal interfaces (e.g., text, voice, VR/AR).
- Learning analytics tools for tracking learner progress.
- Example applications (e.g., educational robots, simulations).

If you plan to contribute or use the code, be sure to check the repository’s **README** and **documentation** for setup instructions and guidelines. Let me know if you need further details! 🚀

With both function calling and a proxy

**CAMEL-AI** (short for **C**omprehensive **A**I **M**odel **L**ibrary) is an **open-source, scalable, and modular framework** for training, fine-tuning, and deploying large-scale machine learning models. Developed by researchers at the **Chinese Academy of Sciences (CAS)** and widely adopted in academic and industrial research, CAMEL-AI focuses on 
efficiency, flexibility, and handling complex, real-world datasets.

### Key Features:
1. **Scalability**:
   - Designed for **distributed training** on clusters (e.g., using MPI or PyTorch Distributed Data Parallel).       
   - Optimized for **large datasets** (e.g., billions of examples) and **high-compute environments**.

2. **Modularity**:
   - Components are decoupled, allowing users to mix-and-match:
     - **Models**: Supports transformers (e.g., BERT, GPT), CNNs, custom architectures.
     - **Optimizers**: Adam, SGD, AdamW, etc.
     - **Data Loaders**: PyTorch `DataLoader`, Horovod, or custom sharding.
     - **Frameworks**: Integrates PyTorch, TensorFlow, or plain NumPy.

3. **Performance**:
   - Uses **efficient data pipelines** (e.g., memory-mapped datasets, lazy loading).
   - Supports **mixed-precision training** (FP16/FP32) to accelerate convergence.

4. **Deployment-Ready**:
   - Integrates with **TorchScript**, ONNX, or TensorRT for model serialization and deployment.
   - Supports cloud-native environments (e.g., Kubernetes).

5. **Research-Focused**:
   - Optimized for **few-shot learning**, meta-learning, and large-scale pretraining.
   - Includes utilities for hyperparameter tuning (e.g., WandB, Optuna).

### Core Architecture:
- **Client-Server Model**: Clients request computations from a distributed server cluster.
- **Dynamic Resource Management**: Auto-scales resources based on workload.
- **Checkpointing**: Saves model states to persistent storage (e.g., HDFS, S3).

### Use Cases:
- **Computer Vision**: Training large-scale image classifiers (e.g., ImageNet).
- **Natural Language Processing (NLP)**: Fine-tuning transformers for language understanding.
- **Recommendation Systems**: Collaborative filtering on massive user-item graphs.
- **Multimodal AI**: Fusing vision, language, and other modalities.

### Ecosystem Tools:
- **CAMEL-AI Toolkit**: A companion library for benchmarking and easy model replication.
- **Integration with Hugging Face**: Directly load pre-trained transformers.
- **Visualization**: Built-in logging for tracking training metrics.

### Why It Matters:
- **Addresses the complexity of modern ML**: Large-scale training demands infrastructure-level optimizations.        
- **Accelerates research**: Standardizes experiments for reproducibility.
- **Industry Adoption**: Used by companies like **Baidu, JD.com, and Alibaba** for real-time recommendation systems. 

### Getting Started:
- GitHub Repository: [https://github.com/camel-ai](https://github.com/camel-ai)
- Documentation: [CAMEL-AI Documentation](https://camel-ai.readthedocs.io)

### Comparison to Other Frameworks:
| **Framework**          | **Focus**         | **Distributed Support**  | **Flexibility** |
|------------------------|-------------------|--------------------------|-----------------|
| **CAMEL-AI**           | Large-scale ML/DL | High (customizable DDP)  | ✅ Excellent    |
| PyTorch/TensorFlow     | General ML        | Built-in                 | ✅ Excellent    |
| Hugging Face Diffusers | NLP/Vision        | Limited                  | ✅ Good         |

### Summary:
CAMEL-AI is a **next-gen framework** for large-scale machine learning that prioritizes **scalability, modularity, and performance**. It bridges the gap between research prototypes (e.g., transformers) and production-grade systems, making cutting-edge AI more accessible for clusters and cloud environments. Ideal for researchers and engineers tackling compute-intensive tasks like LLMs, vision models, or hybrid AI systems.
The GitHub repository for **CAMEL-AI** (Comprehensive AI Model Library) is:  

[https://github.com/camel-ai](https://github.com/camel-ai)

This repository contains the core code, documentation, and resources for training, fine-tuning, and deploying large-scale machine learning models using CAMEL-AI. You can clone the repository, explore the code, contribute, or use it for your projects.
