Siemens CPU module 6ES7505-0RA00-0AB0
Siemens HMI distributor, Siemens tier-1 distributor, authorized Siemens general distributor in China
---- Xunzhiman Control Technology (Shanghai) Co., Ltd. (浔之漫智控技术(上海)有限公司)
Our company distributes CO-TRUST (合信) and Siemens PLCs: S7-200, S7-300, S7-400, and S7-1200; HMIs; frequency converters; 6FC, 6SN, S120, V10, V60, and V80 servo and CNC spare parts; and original imported motors, wires, and cables. We look forward to further cooperation with you.
Overview
AI Inference Server standardizes AI model execution on Siemens Industrial Edge. It facilitates data collection and acquisition, orchestrates data traffic, and is compatible with the most popular AI frameworks.
More information is available at this link.
Ordering option
The app can be ordered from the Industrial Edge Marketplace at this link.
Application
AI Inference Server is a Siemens Industrial Edge application that can run on Siemens Industrial Edge devices.
AI Inference Server enables AI models to be executed for inference purposes using the built-in Python interpreter.
The application guides the user through setting up execution of the AI model on the Siemens Industrial Edge platform using ready-to-use data connectors.
AI Inference Server standardizes the logging, monitoring, and debugging of AI models.
AI Inference Server is designed to integrate MLOps with the AI Model Monitor.
AI Inference Server with GPU acceleration:
The GPU-accelerated variant of AI Inference Server standardizes the execution of AI models on GPU-accelerated hardware, enabling AI inference within the Edge ecosystem.
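As a rough illustration of the Python-based execution described above, the sketch below shows what a pipeline entrypoint script might look like: a function that receives input variables mapped from a data connector and returns model outputs. The function name `process_input`, the `"temperature"` variable, and the threshold "model" are all illustrative assumptions, not the documented Siemens API.

```python
# Hypothetical pipeline entrypoint sketch. AI Inference Server runs Python
# code against mapped connector inputs; the names here are assumptions.

def process_input(data: dict) -> dict:
    """Receive mapped input values, run inference, return output variables."""
    # 'temperature' is an assumed input variable mapped from a connector topic.
    reading = float(data.get("temperature", 0.0))
    # Placeholder "model": a simple threshold standing in for real inference.
    anomaly = reading > 80.0
    # Returned keys would be mapped back to output variables.
    return {"anomaly": int(anomaly), "reading": reading}

print(process_input({"temperature": 92.5}))  # → {'anomaly': 1, 'reading': 92.5}
```

In a real deployment, the model call inside the function would invoke a framework such as TensorFlow or ONNX Runtime loaded alongside the script.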
Functions
AI Inference Server
Supports the most popular AI frameworks that are compatible with Python
Orchestrates and controls AI model execution
Can run AI pipelines with both older and newer Python versions
Enables horizontal scaling of the AI pipelines for optimum performance
Simplifies tasks such as input mapping (thanks to integration with Databus and other Siemens Industrial Edge connectors), data collection and acquisition, and pipeline visualization
Permits monitoring and debugging of AI models based on inference statistics
Features logging and image visualization
Includes pipeline version management
Permits the import of models via the user interface or via a remote connection
Supports persistent data storage on the local device for each pipeline
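The monitoring and inference-statistics features listed above can be pictured with a minimal sketch: a wrapper that times each model call and accumulates latency figures. The class name and structure are purely illustrative assumptions, not part of the AI Inference Server API.

```python
import time
from statistics import mean

class InferenceStats:
    """Illustrative collector of per-call latency, analogous to the
    inference statistics AI Inference Server exposes for monitoring."""

    def __init__(self):
        self.latencies_ms = []

    def run(self, model, payload):
        # Time a single inference call and record its latency.
        start = time.perf_counter()
        result = model(payload)
        self.latencies_ms.append((time.perf_counter() - start) * 1000.0)
        return result

    def average_ms(self):
        # Mean latency over all recorded calls (0.0 if none yet).
        return mean(self.latencies_ms) if self.latencies_ms else 0.0

stats = InferenceStats()
out = stats.run(lambda x: x * 2, 21)  # stand-in "model": doubles its input
print(out, len(stats.latencies_ms))
```

A real monitoring backend would also track throughput and error counts, but the per-call timing idea is the same.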
AI Inference Server variant for 3 pipelines
Supports the simultaneous execution of up to 3 AI pipelines