
Generated content: 1. Overall framework: two-stage comparison diagram

MJ Bot, updated 2025-08-18 01:03:46
Prompt: Generating content:

1. Overall framework: two-stage comparison diagram (either a left-right or a top-bottom split works; left-right is recommended for a more direct comparison)
   Title: BEC-Pred Innovation Logic: "Pre-training lays the foundation → Fine-tuning captures the specifics"
   Core structure: place the "traditional model pain points" on the left and the "BEC-Pred innovation pipeline" on the right.

2. Left side: problems with traditional models (using negative cases to set off the innovation)
   1. Single data source: draw a small box (rectangle, light gray) containing the text "trained on enzyme reaction data only"; next to it add a red cross ❌ and the caption: "no general reaction logic to build on → enzyme reaction data are scarce and imbalanced, so the model never even learns how chemical reactions change!"
   2. Simple architecture: draw a shallow network (e.g., 3 stacked rectangles representing a simple NN), labeled "traditional shallow model (e.g., single-layer NN)"; add the caption: "hard to capture complex reaction features → the bond changes and site relationships between substrate and product are something traditional models cannot read!"

3. Right side: BEC-Pred's innovations (two stages linked by arrows)
   Stage 1: pre-training (general reactions → learn the basic rules)
   1. Data input: draw a large database (3D rectangle, blue), labeled "USPTO database (1.1M+ organic reactions)"; add an arrow connecting it to the "BERT pre-training module" (draw the Transformer's signature multi-layer stack, labeled "BERT pre-training").
   2. BERT architecture advantage: draw multi-layer Transformers (at least 3 layers, each labeled "multi-head attention"); add the caption: "BERT's multi-layer attention works like a magnifying glass trained on molecular changes → it learns thoroughly how chemical reactions change (how bonds break and form, how molecules transform)."
   Stage 2: fine-tuning (enzyme reactions → learn enzyme-specific features)
   1. Data input: draw a small but curated enzyme library (3D rectangle, orange), labeled "ECREACT database (enzyme reactions with EC labels)"; add an arrow connecting it to the "BERT fine-tuning module" (reuse the Transformer graphic from Stage 1, labeled "BERT fine-tuning").
   2. Model output: draw a classification head (small rectangle, labeled "Classifier") with an arrow to an EC number (e.g., "EC 3.1.1.2", corresponding to an esterase); add the caption: "enzyme-specific reactions plus EC labels let the model focus on what makes enzyme catalysis special → reactions map precisely to EC numbers!"

4. Auxiliary elements: details that make the diagram more vivid
   Molecular change example: next to the pre-training and fine-tuning modules, draw a small case study (e.g., an esterase-catalyzed reaction): substrate SMILES OC(OC(C)(C)O)O... (an ester structure); product SMILES OC(C)(C)O + ... (the products after ester bond cleavage); use arrows to mark the bond cleavage sites, reflecting how BERT's attention "focuses on the key changes".
   Comparison arrows: between the traditional model and BEC-Pred, draw bidirectional comparison arrows labeled: "Traditional model: learns enzyme reactions directly → no foundation, can't pin it down" and "BEC-Pred: learn the general first → then learn the specific → solid foundation, precise targeting".

--ar 1:1 --v 7 --stylize 100
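The "molecular change example" in the prompt treats a reaction as substrate and product SMILES strings, which is also how reaction-language models of the kind the diagram describes consume their data. Below is a minimal Python sketch of that representation, under stated assumptions: the regex pattern is a generic SMILES-style tokenizer and the methyl acetate hydrolysis reaction is a hypothetical stand-in for the esterase case; neither is taken from BEC-Pred itself.

```python
import re

# Generic regex tokenizer for reaction SMILES ("substrates>>products").
# Illustrative only: the page does not specify BEC-Pred's actual tokenizer.
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|>>|[BCNOPSFIbcnops]|[()=#+\-.\d%@/\\])"
)

def tokenize_reaction(rxn_smiles: str) -> list[str]:
    """Split a reaction SMILES string into model-ready tokens."""
    return SMILES_TOKEN.findall(rxn_smiles)

# Hypothetical esterase-style case: methyl acetate >> acetic acid + methanol.
# The ester C(=O)-O bond is the cleavage site a diagram arrow would mark.
print(tokenize_reaction("CC(=O)OC>>CC(=O)O.CO"))
# ['C', 'C', '(', '=', 'O', ')', 'O', 'C', '>>',
#  'C', 'C', '(', '=', 'O', ')', 'O', '.', 'C', 'O']
```

The ">>" token is the structural constant here: everything to its left is substrate and everything to its right is product, so the model's attention can relate the two sides of the reaction token by token.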


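For readers who want the right-hand half of the diagram as code, here is a minimal two-stage sketch assuming the HuggingFace transformers library. Every hyperparameter (vocabulary size, hidden size, layer and head counts, number of EC classes) is an illustrative placeholder and the training loops are elided; the page gives none of BEC-Pred's actual settings.

```python
from transformers import BertConfig, BertForMaskedLM, BertForSequenceClassification

# Stage 1: pre-training. A BERT encoder learns general reaction chemistry by
# predicting masked SMILES tokens over the USPTO organic reactions.
config = BertConfig(
    vocab_size=600,          # placeholder: size of the SMILES token vocabulary
    hidden_size=256,         # placeholder
    num_hidden_layers=6,     # the diagram only says "multi-layer"
    num_attention_heads=8,   # the "multi-head attention" drawn in each layer
)
pretrain_model = BertForMaskedLM(config)
# ... masked-LM training loop over USPTO reaction SMILES goes here ...

# Stage 2: fine-tuning. Reuse the pre-trained encoder and attach a fresh
# classification head mapping a whole reaction to one EC class.
config.num_labels = 300      # placeholder: number of distinct EC numbers
finetune_model = BertForSequenceClassification(config)

# Copy the encoder weights across. strict=False because the classifier
# variant adds a pooler layer the masked-LM variant lacks; the pooler and
# the classifier head stay randomly initialized and are learned during
# fine-tuning.
finetune_model.bert.load_state_dict(
    pretrain_model.bert.state_dict(), strict=False
)
# ... fine-tuning loop over EC-labelled ECREACT reactions goes here; the head
# turns the encoder's [CLS] representation into an EC number such as EC 3.1.1.2.
```

The point the comparison arrows make survives in code: the encoder is shared across both stages and only the head changes, so the general reaction chemistry learned from the large USPTO corpus carries over to the scarce, imbalanced enzyme data.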
Source: Midjourney China edition official website
