SDB: Install OpenVINO
- Boost deep learning performance in computer vision, automatic speech recognition, natural language processing, and other common tasks
- Use models trained with popular frameworks such as TensorFlow and PyTorch
- Reduce resource demands and deploy efficiently on a range of Intel® platforms from edge to cloud
This open-source release includes several components: the Model Optimizer, OpenVINO™ Runtime, the Post-Training Optimization Tool, and the CPU, GPU, GNA, multi-device, and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with more than 100 open-source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi.
Requirements
CPU processor requirements
The following systems based on the Intel® 64 architecture are supported, both as host and as target platforms.
- 6th to 13th generation Intel® Core™ processors
- 1st to 4th generation Intel® Xeon® Scalable processors
- Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
- Intel Atom® processor with Intel® Streaming SIMD Extensions 4.2 (Intel® SSE4.2)
10th and 11th generation Intel Core processors, 11th generation Intel Core S-series processors, 12th and 13th generation Intel Core processors, and 4th generation Intel Xeon Scalable processors may require a newer operating-system kernel version to support the CPU, GPU, Intel GNA, or hybrid-core CPU capabilities.
Intel® Gaussian & Neural Accelerator (Intel® GNA)
Supported hardware
- Intel® GNA
Supported GPU processors
- Intel® HD Graphics
- Intel® UHD Graphics
- Intel® Iris® Pro Graphics
- Intel® Iris® Xe Graphics
- Intel® Iris® Xe MAX Graphics
Supported discrete graphics cards
- Intel® Data Center GPU Flex Series (formerly code-named Arctic Sound)
- Intel® Arc™ GPU (formerly code-named DG2)
Other software requirements
- GNU Compiler Collection (GCC)*
- CMake
- Python* 3.7-3.11
- OpenCV
Package requirements
Install CMake*, pkg-config, and the GNU* development tools to build the samples. CMake and pkg-config are not required by the OpenVINO tools and toolkit themselves, but many samples are provided as CMake projects and need CMake to build, and in some cases pkg-config is necessary to locate the libraries needed to complete an application build.
The Intel compilers leverage the existing GNU build toolchain to provide a complete C/C++ development environment. If your Linux distribution does not include the complete set of GNU development tools, you need to install them. To install CMake, pkg-config, the OpenCL development packages, and the GNU development tools on your Linux system, open a terminal session and enter the following commands:
$ sudo zypper update
$ sudo zypper --non-interactive install cmake pkg-config ade-devel \
  patterns-devel-C-C++-devel_C_C++ \
  opencl-headers opencl-cpp-headers ocl-icd-devel \
  opencv-devel pugixml-devel patchelf \
  python311-devel ccache nlohmann_json-devel \
  ninja scons git git-lfs fdupes \
  rpm-build ShellCheck tbb-devel libva-devel \
  snappy-devel zlib-devel gflags-devel-static \
  protobuf-devel
Verify the installation by displaying the installation locations:
$ which cmake pkg-config make gcc g++
One or more of these locations will be displayed:
/usr/bin/cmake /usr/bin/pkg-config /usr/bin/make /usr/bin/gcc /usr/bin/g++
Instructions for installing OpenVINO with Zypper
Follow the instructions here --> https://docs.openvino.ai/2025/get-started/install-openvino/install-openvino-zypper.html
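For quick reference, the linked page boils down to a pair of zypper commands of roughly this shape. The package names shown here are an assumption based on the OpenVINO documentation and may change between releases, so prefer the page above:
$ sudo zypper refresh
$ sudo zypper install openvino-devel openvino-sample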
Compiling from source code
We will start by compiling OpenVINO from source pulled straight from GitHub (RPM packages are coming soon). As an openSUSE member and Intel Edge Innovator, I am personally driving the packaging process and the publication of the rpm packages on the openSUSE Linux platform, so installation with nothing more than a zypper command will be possible soon.
Download: GitHub instructions
Here are the commands to download release 2024.0 (the latest version as of this writing):
$ git clone -b 2024.0.0 https://github.com/openvinotoolkit/openvino.git
$ cd openvino && git submodule update --init --recursive
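If bandwidth or disk space is a concern, a shallow clone of the same release also works (plain git options, nothing OpenVINO-specific):
$ git clone --depth 1 -b 2024.0.0 https://github.com/openvinotoolkit/openvino.git
$ cd openvino && git submodule update --init --recursive --depth 1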
If you chose the other option, install the package dependencies with the script provided in the repository:
$ sudo ./install_build_dependencies.sh
Verify the installation by displaying the installation locations:
$ which cmake pkg-config make gcc g++
One or more of these locations will be displayed:
/usr/bin/cmake /usr/bin/pkg-config /usr/bin/make /usr/bin/gcc /usr/bin/g++
Install the python dependencies for building the python wheels:
$ python3 -m pip install -U pip
$ python3 -m pip install -r ./src/bindings/python/wheel/requirements-dev.txt
$ python3 -m pip install -r ./thirdparty/onnx/onnx/requirements-dev.txt
Now we will compile and install OpenVINO with the following instructions:
$ mkdir build && mkdir openvino_dist && cd build
$ cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=../openvino_dist \
-DBUILD_SHARED_LIBS=ON -DENABLE_OV_ONNX_FRONTEND=ON \
-DENABLE_OV_PADDLE_FRONTEND=ON -DENABLE_OV_PYTORCH_FRONTEND=ON \
-DENABLE_OV_IR_FRONTEND=ON -DENABLE_INTEL_GNA=OFF \
-DENABLE_OV_TF_FRONTEND=ON -DENABLE_OV_TF_LITE_FRONTEND=ON \
-DENABLE_PYTHON=ON -DENABLE_WHEEL=ON \
-DPYTHON_EXECUTABLE=`which python3.11` \
-DPYTHON_LIBRARY=/usr/lib64/libpython3.11.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.11 ..
$ make --jobs=$(nproc --all)
$ make install
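The install step populates ../openvino_dist. As a quick sanity check, confirm that the environment script we will source later is in place:
$ ls ../openvino_dist/setupvars.sh
../openvino_dist/setupvars.sh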
Install the built python wheels for the OpenVINO Runtime and the OpenVINO-dev tools:
$ python3 -m pip install openvino-dev --find-links ../openvino_dist/tools
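To confirm the installed wheel is importable, a quick one-liner against the Runtime API works; it should print the build version (a 2024.0 string for this branch) and the available devices, e.g. ['CPU']:
$ python3 -c "from openvino.runtime import Core, get_version; print(get_version()); print(Core().available_devices)"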
Quick test of the built OpenVINO Runtime
OpenVINO environment
Now that OpenVINO is compiled and installed in the distribution folder, to test it we must initialize the OpenVINO development environment with the following commands:
# cd ../openvino_dist/
# source ./setupvars.sh
[setupvars.sh] OpenVINO environment initialized
Add the omz paths to the environment variables:
export PATH=$PATH:/home/cabelo/.local/bin
export PYTHONPATH=$PYTHONPATH:<openvino_repo>/openvino/bin/intel64/Release/python/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<openvino_repo>/openvino/bin/intel64/Release/
Now create the models directory and install the Model Optimizer dependencies:
$ mkdir ~/ov_models
$ pip3 install onnxruntime protobuf==3.19.0 openvino-dev[pytorch]
Important note: in this tutorial the versions onnx==1.15.0, onnxruntime==1.16.3, and protobuf==3.19.0 or 3.20.2 were used, but using exactly these versions is not mandatory.
Download the resnet50 pytorch model with omz_downloader:
$ omz_downloader --name resnet-50-pytorch -o ~/ov_models/
################|| Downloading resnet-50-pytorch ||################
========== Downloading /home/cabelo/ov_models/public/resnet-50-pytorch/resnet50-19c8e357.pth
... 100%, 100100 KB, 383 KB/s, 261 seconds passed
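As a side note, omz_downloader can also list every model available in the Open Model Zoo, which is handy for picking other names to try:
$ omz_downloader --print_all | grep resnet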
Now we will use the omz_converter utility to convert the resnet50 pytorch model to OpenVINO IR (the FP32 variant is used below):
$ omz_converter --name resnet-50-pytorch -o ~/ov_models/ -d ~/ov_models/
========== Converting resnet-50-pytorch to ONNX
Conversion to ONNX command: /usr/bin/python3 -- /home/cabelo/.local/lib/python3.11/site-packages/openvino/model_zoo/internal_scripts/pytorch_to_onnx.py --model-name=resnet50 --weights=/home/cabelo/ov_models/public/resnet-50-pytorch/resnet50-19c8e357.pth --import-module=torchvision.models --input-shape=1,3,224,224 --output-file=/home/cabelo/ov_models/public/resnet-50-pytorch/resnet-v1-50.onnx --input-names=data --output-names=prob

ONNX check passed successfully.

========== Converting resnet-50-pytorch to IR (FP16)
Conversion command: /usr/bin/python3 -- /home/cabelo/.local/bin/mo --framework=onnx --output_dir=/home/cabelo/ov_models/public/resnet-50-pytorch/FP16 --model_name=resnet-50-pytorch --input=data '--mean_values=data[123.675,116.28,103.53]' '--scale_values=data[58.395,57.12,57.375]' --reverse_input_channels --output=prob --input_model=/home/cabelo/ov_models/public/resnet-50-pytorch/resnet-v1-50.onnx '--layout=data(NCHW)' '--input_shape=[1, 3, 224, 224]' --compress_to_fp16=True

Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/cabelo/ov_models/public/resnet-50-pytorch/FP16/resnet-50-pytorch.xml
[ SUCCESS ] BIN file: /home/cabelo/ov_models/public/resnet-50-pytorch/FP16/resnet-50-pytorch.bin

========== Converting resnet-50-pytorch to IR (FP32)
Conversion command: /usr/bin/python3 -- /home/cabelo/.local/bin/mo --framework=onnx --output_dir=/home/cabelo/ov_models/public/resnet-50-pytorch/FP32 --model_name=resnet-50-pytorch --input=data '--mean_values=data[123.675,116.28,103.53]' '--scale_values=data[58.395,57.12,57.375]' --reverse_input_channels --output=prob --input_model=/home/cabelo/ov_models/public/resnet-50-pytorch/resnet-v1-50.onnx '--layout=data(NCHW)' '--input_shape=[1, 3, 224, 224]' --compress_to_fp16=False

Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/cabelo/ov_models/public/resnet-50-pytorch/FP32/resnet-50-pytorch.xml
[ SUCCESS ] BIN file: /home/cabelo/ov_models/public/resnet-50-pytorch/FP32/resnet-50-pytorch.bin
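The converter reports where the IR pairs land; a quick listing should show the FP32 .xml and .bin files reported above:
$ ls ~/ov_models/public/resnet-50-pytorch/FP32/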
Run the benchmark application on the CPU with the resnet50 FP32 IR model:
$ benchmark_app -m ~/ov_models/public/resnet-50-pytorch/FP32/resnet-50-pytorch.xml -d CPU
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2023.2.0-13089-cfd42bd2cb0-HEAD
[ INFO ]
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2023.2.0-13089-cfd42bd2cb0-HEAD
[ INFO ]
[ INFO ]
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to PerformanceMode.THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 10.81 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Model inputs:
[ INFO ]     data (node: data) : f32 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Model outputs:
[ INFO ]     prob (node: prob) : f32 / [...] / [1,1000]
[Step 5/11] Resizing model to match image sizes and given batch
[ INFO ] Model batch size: 1
[Step 6/11] Configuring input of the model
[ INFO ] Model inputs:
[ INFO ]     data (node: data) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Model outputs:
[ INFO ]     prob (node: prob) : f32 / [...] / [1,1000]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 178.74 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   NETWORK_NAME: main_graph
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 4
[ INFO ]   NUM_STREAMS: 4
[ INFO ]   AFFINITY: Affinity.CORE
[ INFO ]   INFERENCE_NUM_THREADS: 8
[ INFO ]   PERF_COUNT: False
[ INFO ]   INFERENCE_PRECISION_HINT: <Type: 'float32'>
[ INFO ]   PERFORMANCE_HINT: PerformanceMode.THROUGHPUT
[ INFO ]   EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE
[ INFO ]   PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ]   ENABLE_CPU_PINNING: True
[ INFO ]   SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE
[ INFO ]   ENABLE_HYPER_THREADING: True
[ INFO ]   EXECUTION_DEVICES: ['CPU']
[ INFO ]   CPU_DENORMALS_OPTIMIZATION: False
[ INFO ]   CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1.0
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given for input 'data'!. This input will be filled with random values!
[ INFO ] Fill input 'data' with random values
[Step 10/11] Measuring performance (Start inference asynchronously, 4 inference requests, limits: 60000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 48.68 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices:['CPU']
[ INFO ] Count:            1172 iterations
[ INFO ] Duration:         60360.89 ms
[ INFO ] Latency:
[ INFO ]    Median:        224.46 ms
[ INFO ]    Average:       205.72 ms
[ INFO ]    Min:           106.17 ms
[ INFO ]    Max:           296.32 ms
[ INFO ] Throughput:       19.42 FPS
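Note the warning in Step 3: benchmark_app defaulted to the THROUGHPUT performance hint. If single-request response time matters more to you than aggregate FPS, the same model can be benchmarked with the latency hint instead:
$ benchmark_app -m ~/ov_models/public/resnet-50-pytorch/FP32/resnet-50-pytorch.xml -d CPU -hint latency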
Before running the hello_classification.py test, we should download the alexnet model with the following commands:
$ cd samples/python/hello_classification/
$ omz_downloader --name alexnet
################|| Downloading alexnet ||################
========== Downloading /opt/intel/openvino_2023.1.0/samples/python/hello_classification/public/alexnet/alexnet.prototxt
... 100%, 3 KB, 18505 KB/s, 0 seconds passed
========== Downloading /opt/intel/openvino_2023.1.0/samples/python/hello_classification/public/alexnet/alexnet.caffemodel
... 100%, 238146 KB, 5134 KB/s, 46 seconds passed
========== Replacing text in /opt/intel/openvino_2023.1.0/samples/python/hello_classification/public/alexnet/alexnet.prototxt
Now we will convert a model to OpenVINO IR once again, this time the alexnet caffe model:
$ omz_converter --name alexnet
========== Converting alexnet to IR (FP16)
Conversion command: /usr/bin/python3 -- /home/cabelo/.local/bin/mo --framework=caffe --output_dir=/dados/fontes/openvino/samples/python/hello_classification/public/alexnet/FP16 --model_name=alexnet --input=data '--mean_values=data[104.0,117.0,123.0]' --output=prob --input_model=/dados/fontes/openvino/samples/python/hello_classification/public/alexnet/alexnet.caffemodel --input_proto=/dados/fontes/openvino/samples/python/hello_classification/public/alexnet/alexnet.prototxt '--layout=data(NCHW)' '--input_shape=[1, 3, 227, 227]' --compress_to_fp16=True

Please expect that Model Optimizer conversion might be slow. You are currently using Python protobuf library implementation. Check that your protobuf package version is aligned with requirements_caffe.txt. For more information please refer to Model Conversion API FAQ, question #80. (https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=80#question-80)
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /dados/fontes/openvino/samples/python/hello_classification/public/alexnet/FP16/alexnet.xml
[ SUCCESS ] BIN file: /dados/fontes/openvino/samples/python/hello_classification/public/alexnet/FP16/alexnet.bin

========== Converting alexnet to IR (FP32)
Conversion command: /usr/bin/python3 -- /home/cabelo/.local/bin/mo --framework=caffe --output_dir=/dados/fontes/openvino/samples/python/hello_classification/public/alexnet/FP32 --model_name=alexnet --input=data '--mean_values=data[104.0,117.0,123.0]' --output=prob --input_model=/dados/fontes/openvino/samples/python/hello_classification/public/alexnet/alexnet.caffemodel --input_proto=/dados/fontes/openvino/samples/python/hello_classification/public/alexnet/alexnet.prototxt '--layout=data(NCHW)' '--input_shape=[1, 3, 227, 227]' --compress_to_fp16=False

Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /dados/fontes/openvino/samples/python/hello_classification/public/alexnet/FP32/alexnet.xml
[ SUCCESS ] BIN file: /dados/fontes/openvino/samples/python/hello_classification/public/alexnet/FP32/alexnet.bin
OK, if everything went well, run the command below to test the classification sample in Python:
$ python3 hello_classification.py public/alexnet/FP32/alexnet.xml /dados/openvino/banana.jpg CPU
[ INFO ] Creating OpenVINO Runtime Core
[ INFO ] Reading the model: public/alexnet/FP32/alexnet.xml
[ INFO ] Loading the model to the plugin
[ INFO ] Starting inference in synchronous mode
[ INFO ] Image path: /dados/openvino/banana.jpg
[ INFO ] Top 10 results:
[ INFO ] class_id probability
[ INFO ] --------------------
[ INFO ] 954      0.9988611
[ INFO ] 951      0.0003525
[ INFO ] 950      0.0002846
[ INFO ] 666      0.0002556
[ INFO ] 502      0.0000543
[ INFO ] 945      0.0000491
[ INFO ] 659      0.0000155
[ INFO ] 600      0.0000136
[ INFO ] 953      0.0000134
[ INFO ] 940      0.0000102
[ INFO ]
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
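Because hello_classification.py takes the target device as its last argument, the same test can be pointed at another plugin, for example the GPU (assuming the GPU plugin and the compatible drivers from the requirements above are installed):
$ python3 hello_classification.py public/alexnet/FP32/alexnet.xml /dados/openvino/banana.jpg GPU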
This text was produced by Alessandro de Oliveira Faria, official Intel Edge Innovator and openSUSE member, based on the Intel tutorial. For more information, see the official page here.

