DIY ClickHouse MCP: Build Your Own High-Performance Data Engine!
Background
I recently came across an article about an MCP server for GreptimeDB, and the results looked pretty cool. So I tried building a ClickHouse version myself. It took a few hours, and the code is open source at https://github.com/dubin555/clickhouse_mcp_server (there are already two or three implementations on GitHub, but mine is probably the one with the most complete code and comments).
Results
Writing data
First, write some fake data into ClickHouse. Here I generate some fake sales data (any data works; you can even ask Doubao or Yuanbao to generate the fake data for you).
```sql
-- Create sales analysis table with comments
CREATE TABLE IF NOT EXISTS default.city_sales
(
    city String COMMENT 'Name of the city where the sale occurred',
    product_category Enum('Electronics' = 1, 'Apparel' = 2, 'Grocery' = 3) COMMENT 'Category of the product sold',
    sale_date Date COMMENT 'Date of the sales transaction',
    units_sold UInt32 COMMENT 'Number of units sold in the transaction',
    unit_price Float32 COMMENT 'Price per unit in USD',
    total_sales Float32 MATERIALIZED units_sold * unit_price COMMENT 'Calculated total sales amount'
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(sale_date)
ORDER BY (city, product_category, sale_date)
COMMENT 'Table storing city-wise product sales data for business analysis';

-- Generate 10,000 random sales records
INSERT INTO default.city_sales (city, product_category, sale_date, units_sold, unit_price)
SELECT
    ['New York', 'London', 'Tokyo', 'Paris', 'Singapore', 'Dubai'][rand() % 6 + 1] AS city,
    toInt16(rand() % 3 + 1) AS product_category,
    today() - rand() % 365 AS sale_date,
    rand() % 100 + 1 AS units_sold,   -- Units between 1-100
    randNormal(50, 15) AS unit_price  -- Normal distribution around $50
FROM numbers(10000);
```
The table mainly has columns for city, product category, sale date, units sold, unit price, and a materialized total sales amount.
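Before wiring up the MCP server, it's worth a quick sanity check that the fake data actually landed. Below is a small standalone sketch using the same clickhouse_connect library the server uses; the connection parameters (localhost, HTTP port 8123, default user, empty password) are assumptions for a local default install, so adjust them to your environment.

```python
import clickhouse_connect

# Assumed settings for a local default ClickHouse install; change to match your deployment.
client = clickhouse_connect.get_client(
    host="localhost", port=8123, username="default", password="", database="default"
)

# The INSERT ... SELECT above should have produced 10,000 rows.
print(client.query("SELECT count() FROM city_sales").result_rows)

# Peek at a few rows, including the MATERIALIZED total_sales column.
for row in client.query(
    "SELECT city, product_category, sale_date, total_sales FROM city_sales LIMIT 3"
).result_rows:
    print(row)
```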
Asking questions
Now we can start asking questions (the client here is the Cline extension for VS Code).
What are the total sales for each city? Which product category sells best?
The LLM's first call:



Implementation
I looked at two or three existing implementations on GitHub, and it isn't complicated to build. The complete code is in the repo; here I'll walk through the most important file, server.py.
Overview
It is made up of five main classes:
- ClickHouseClient: creates the ClickHouse connection and executes queries
- TableMetadataManager: queries table metadata, e.g. which columns a table has and their comments
- ResourceManager: builds the resource descriptions exposed to the LLM (which resources it can access); calls TableMetadataManager
- ToolManager: tells the LLM which tools are available and dispatches those tool calls; calls ClickHouseClient
- DatabaseServer: ties the four classes above together
Detailed implementation
ClickHouseClient
```python
class ClickHouseClient:
    """ClickHouse database client"""

    def __init__(self, config: Config, logger: Logger):
        self.logger = logger
        self.db_config = {
            "host": config.host,
            "port": int(config.port),
            "user": config.user,
            "password": config.password,
            "database": config.database
        }
        self._client = None

    def get_client(self):
        """Get ClickHouse client, singleton pattern"""
        if self._client is None:
            self._client = self._create_client()
        return self._client

    def _create_client(self):
        """Create a new ClickHouse client"""
        try:
            self.logger.debug(f"Creating ClickHouse client with config: {self.db_config}")
            client = clickhouse_connect.get_client(**self.db_config)
            version = client.server_version  # forces a round trip, verifying connectivity
            self.logger.info("ClickHouse client created successfully")
            return client
        except Exception as e:
            self.logger.error(f"Failed to create ClickHouse client: {e}")
            raise

    def execute_query(self, query: str, readonly: bool = True):
        """Execute a query against the ClickHouse database"""
        try:
            client = self.get_client()
            settings = {"readonly": 1} if readonly else {}
            res = client.query(query, settings=settings)
            # convert result to list of dicts
            rows = []
            for row in res.result_rows:
                row_dict = {}
                for i, col_name in enumerate(res.column_names):
                    row_dict[col_name] = row[i]
                rows.append(row_dict)
            self.logger.debug(f"Query executed successfully: {query}")
            return rows
        except Exception as e:
            self.logger.error(f"Failed to execute query: {e}")
            raise
```
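To see what execute_query hands back to the rest of the server (a list of dicts keyed by column name), here is a hedged standalone sketch. The Config construction is an assumption: I treat it as a simple object exposing host/port/user/password/database, which is all ClickHouseClient reads; the real Config in the repo may be built differently (e.g. from environment variables). The aggregation query is only an illustration of the kind of SQL the LLM ends up generating for the demo question.

```python
import logging

# Assumption: Config can be built directly from connection fields; the repo may read env vars instead.
config = Config(host="localhost", port=8123, user="default", password="", database="default")
client = ClickHouseClient(config, logging.getLogger("demo"))

rows = client.execute_query(
    "SELECT city, sum(total_sales) AS revenue "
    "FROM city_sales GROUP BY city ORDER BY revenue DESC"
)
# rows looks like [{'city': 'Tokyo', 'revenue': 1234567.8}, ...]
for row in rows:
    print(row["city"], row["revenue"])
```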
TableMetadataManager
```python
class TableMetadataManager:
    """Manage table metadata in ClickHouse"""

    def __init__(self, client: ClickHouseClient, logger: Logger):
        self.client = client
        self.logger = logger

    def get_table_list(self, database: str) -> List[str]:
        """Get list of tables in the database"""
        query = f"SHOW TABLES FROM {quote_identifier(database)}"
        result = self.client.execute_query(query)
        if not result:
            return []
        return [row[next(iter(row.keys()))] for row in result]

    def get_table_comments(self, database: str) -> Dict[str, str]:
        """Get comments for the tables in the database"""
        query = f"SELECT name, comment FROM system.tables WHERE database = {format_query_value(database)}"
        result = self.client.execute_query(query)
        return {row['name']: row['comment'] for row in result}

    def get_column_comments(self, database: str) -> Dict[str, Dict[str, str]]:
        """Get comments for the columns in the tables in the database"""
        query = f"SELECT table, name, comment FROM system.columns WHERE database = {format_query_value(database)}"
        result = self.client.execute_query(query)
        column_comments = {}
        for row in result:
            table, col_name, comment = row['table'], row['name'], row['comment']
            if table not in column_comments:
                column_comments[table] = {}
            column_comments[table][col_name] = comment
        return column_comments

    def format_table_description(self, table_name: str, table_comment: str, columns_info: Dict[str, str]) -> str:
        """Format table description for the model"""
        description = f"Table: {table_name}\n"
        if table_comment:
            description += f"Description: {table_comment}\n"
        else:
            description += "Description: No description provided\n"
        if columns_info:
            # Add column descriptions
            description += "Columns:\n"
            for col_name, col_comment in columns_info.items():
                if col_comment:
                    description += f"  - {col_name}: {col_comment}\n"
                else:
                    description += f"  - {col_name}: No description provided\n"
        return description
```
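As a concrete illustration of what these helpers produce for the demo table (the values come straight from the comments in the CREATE TABLE statement earlier), here is a short sketch reusing the client and logger from the previous example:

```python
meta = TableMetadataManager(client, logging.getLogger("demo"))

table_comments = meta.get_table_comments("default")
# {'city_sales': 'Table storing city-wise product sales data for business analysis'}

column_comments = meta.get_column_comments("default")
# {'city_sales': {'city': 'Name of the city where the sale occurred', ...}}

print(meta.format_table_description(
    "city_sales", table_comments["city_sales"], column_comments["city_sales"]
))
# Table: city_sales
# Description: Table storing city-wise product sales data for business analysis
# Columns:
#   - city: Name of the city where the sale occurred
#   - product_category: Category of the product sold
#   ...
```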
ResourceManager
```python
class ResourceManager:
    """MCP resource manager"""

    def __init__(self, client: ClickHouseClient, logger: Logger,
                 resource_prefix: str = DEFAULT_RESOURCE_PREFIX,
                 results_limit: int = DEFAULT_RESULTS_LIMIT):
        self.client = client
        self.logger = logger
        self.metadata_manager = TableMetadataManager(client, logger)
        self.resource_prefix = resource_prefix
        self.results_limit = results_limit

    async def list_resources(self) -> List[Resource]:
        """List all resources in the database"""
        self.logger.debug("Listing resources")
        database = self.client.db_config.get("database")
        try:
            # Get table list
            table_list = self.metadata_manager.get_table_list(database)
            if not table_list:
                return []
            # Get table comments and column comments
            table_comments = self.metadata_manager.get_table_comments(database)
            column_comments = self.metadata_manager.get_column_comments(database)
            # Format table descriptions
            resources = []
            for table_name in table_list:
                table_comment = table_comments.get(table_name, "")
                columns_info = column_comments.get(table_name, {})
                description = self.metadata_manager.format_table_description(table_name, table_comment, columns_info)
                # Create resources
                resource = Resource(
                    uri=f"{self.resource_prefix}/{table_name}/data",
                    name=f"Table: {table_name}",
                    mimeType="text/plain",
                    description=description,
                    type="table",
                    metadata={
                        "columns": [
                            {"name": col_name, "description": col_comment}
                            for col_name, col_comment in columns_info.items()
                        ]
                    }
                )
                resources.append(resource)
            self.logger.debug(f"Found {len(resources)} resources")
            return resources
        except Exception as e:
            self.logger.error(f"Failed to list resources: {e}")
            return []

    async def read_resource(self, uri: AnyUrl) -> str:
        """Read resource data"""
        self.logger.debug(f"Reading resource: {uri}")
        uri_str = str(uri)
        try:
            # Parse URI
            if not uri_str.startswith(self.resource_prefix):
                self.logger.error(f"Invalid resource URI: {uri}")
                return ""
            # Get table name (strip the prefix and the leading slash before the table name)
            table_name = uri_str[len(self.resource_prefix):].lstrip("/").split("/")[0]
            # Build query
            query = f"SELECT * FROM {quote_identifier(table_name)} LIMIT {self.results_limit}"
            result = self.client.execute_query(query)
            # Format result
            if not result:
                return "No data found"
            return json.dumps(result, default=str, indent=2)
        except Exception as e:
            self.logger.error(f"Failed to read resource: {e}")
            return f"Error reading resource: {str(e)}"
```
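A hedged sketch of how a host would consume this manager: list the resources, then read one back. It reuses the client and logger from the earlier sketches; the exact URI shape depends on DEFAULT_RESOURCE_PREFIX in the repo.

```python
import asyncio
import logging

async def demo_resources():
    rm = ResourceManager(client, logging.getLogger("demo"))
    resources = await rm.list_resources()
    for res in resources:
        print(res.uri)  # e.g. "<resource_prefix>/city_sales/data"
    # Reading a resource returns the first rows of the table as pretty-printed JSON.
    print(await rm.read_resource(resources[0].uri))

asyncio.run(demo_resources())
```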
ToolManager
```python
class ToolManager:
    """MCP tool manager"""

    def __init__(self, client: ClickHouseClient, logger: Logger):
        self.client = client
        self.logger = logger

    async def list_tools(self) -> List[Tool]:
        """List all tools"""
        self.logger.debug("Listing tools")
        return [
            Tool(
                name="execute_sql",
                description="Execute a query against the ClickHouse database",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "query": {
                            "type": "string",
                            "description": "The SQL query to be executed"
                        }
                    },
                    "required": ["query"],
                }
            )
        ]

    async def call_tool(self, name: str, arguments: Dict[str, Any]) -> List[TextContent]:
        """Call a tool"""
        self.logger.debug(f"Calling tool: {name} with arguments: {arguments}")
        # Tool handler mapping
        tool_handlers = {
            "execute_sql": self._handle_execute_sql
        }
        # Get handler
        handler = tool_handlers.get(name)
        if not handler:
            self.logger.error(f"Tool not found: {name}")
            return []
        # Call handler
        return await handler(arguments)

    async def _handle_execute_sql(self, arguments: Dict[str, str]) -> List[TextContent]:
        """Handle execute_sql tool"""
        self.logger.debug("Handling execute_sql tool")
        # Get query
        query = arguments.get("query")
        if not query:
            self.logger.error("Query is required")
            return []
        # Check query
        is_dangerous, pattern = dangerous_check(query)
        if is_dangerous:
            self.logger.error(f"Dangerous query detected: {pattern}")
            return [TextContent(type='text', text=f"Error: Dangerous query detected: {pattern}")]
        try:
            # Execute query
            result = self.client.execute_query(query)
            json_result = json.dumps(result, default=str, indent=2)
            return [TextContent(type='text', text=json_result, mimeType='application/json')]
        except Exception as e:
            self.logger.error(f"Failed to execute query: {e}")
            return [TextContent(type='text', text=f"Error executing query: {str(e)}")]
```
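From the MCP host's point of view, the demo question eventually becomes a single execute_sql call. Below is a hedged sketch of that invocation through ToolManager; the SQL string is illustrative rather than a transcript of the actual LLM output, and it again reuses the client and logger from the earlier sketches.

```python
import asyncio
import logging

async def demo_tool_call():
    tool_manager = ToolManager(client, logging.getLogger("demo"))
    # The host sends the tool name plus arguments that match the inputSchema above.
    contents = await tool_manager.call_tool(
        "execute_sql",
        {"query": "SELECT city, sum(total_sales) AS revenue FROM city_sales GROUP BY city"},
    )
    print(contents[0].text)  # JSON-encoded rows that get handed back to the LLM

asyncio.run(demo_tool_call())
```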
DatabaseServer
```python
class DatabaseServer:
    """MCP database server"""

    def __init__(self, config: Config, logger: Logger):
        self.app = Server("clickhouse_mcp_server")
        self.logger = logger
        # create components
        self.client = ClickHouseClient(config, logger)
        self.resource_manager = ResourceManager(self.client, logger)
        self.tool_manager = ToolManager(self.client, logger)
        # register components
        self.app.list_resources()(self.resource_manager.list_resources)
        self.app.read_resource()(self.resource_manager.read_resource)
        self.app.list_tools()(self.tool_manager.list_tools)
        self.app.call_tool()(self.tool_manager.call_tool)

    async def run(self):
        """Run the server"""
        from mcp.server.stdio import stdio_server
        self.logger.info("Starting server")
        async with stdio_server() as (read_stream, write_stream):
            try:
                await self.app.run(
                    read_stream,
                    write_stream,
                    self.app.create_initialization_options()
                )
            except Exception as e:
                self.logger.error(f"Server error: {e}")
                raise
```
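All that's left is an entry point that builds a Config and a logger, constructs DatabaseServer, and runs it over stdio. The repository has its own startup code; the version below is a simplified, assumed sketch (in particular, the Config construction is an assumption):

```python
import asyncio
import logging

def main():
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("clickhouse_mcp_server")
    # Assumption: Config takes connection fields directly; the repo may read them from env vars.
    config = Config(host="localhost", port=8123, user="default", password="", database="default")
    server = DatabaseServer(config, logger)
    asyncio.run(server.run())

if __name__ == "__main__":
    main()
```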