Converting models between frameworks is a common need in deep-learning deployment. Paddle Lite and TensorFlow Lite are both popular mobile inference frameworks, and sometimes a model trained for one needs to run on the other. This post walks through the conversion process.
Environment setup
TensorFlow 2.14 and PaddlePaddle 2.6 are recommended:
docker pull tensorflow/tensorflow:2.14.0
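Inside the container, a quick version check helps confirm the environment matches the recommendations above before starting the conversion. A minimal sketch; it assumes TensorFlow is already importable (and PaddlePaddle can be checked the same way via `paddle.__version__` once pip-installed, which this post does not cover):

```python
# Sanity-check the TensorFlow version inside the container.
import tensorflow as tf

major, minor = tf.__version__.split(".")[:2]
print("TensorFlow", tf.__version__)  # the 2.14.0 image should report 2.14.x
```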
Step 1: from Paddle to ONNX
Build paddle2onnx from source following https://github.com/paddlepaddle/paddle2onnx/blob/develop/docs/zh/compile.md, then run:
paddle2onnx --model_dir . --model_filename your.pdmodel --params_filename your.pdiparams --save_file model.onnx
You should see output like:
[Paddle2ONNX] Start to parse PaddlePaddle model...
[Paddle2ONNX] Model file path: ./pdmodel.pdmodel
[Paddle2ONNX] Parameters file path: ./pdmodel.pdiparams
[Paddle2ONNX] Start to parsing Paddle model...
[Paddle2ONNX] [bilinear_interp_v2: bilinear_interp_v2_1.tmp_0] Requires the minimal opset version of 11.
[Paddle2ONNX] [pixel_shuffle: pixel_shuffle_1.tmp_0] Requires the minimal opset version of 11.
[Paddle2ONNX] [pixel_shuffle: pixel_shuffle_2.tmp_0] Requires the minimal opset version of 11.
[Paddle2ONNX] Due to the operator: bilinear_interp_v2, requires opset_version >= 11.
[Paddle2ONNX] Opset version will change to 11 from 9
[Paddle2ONNX] Use opset_version = 11 for ONNX export.
[Paddle2ONNX] PaddlePaddle model is exported as ONNX format now.
2024-04-09 11:55:50 [INFO] ===============Make PaddlePaddle Better!================
2024-04-09 11:55:50 [INFO] A little survey: https://iwenjuan.baidu.com/?code=r8hu2s
Step 2: from ONNX to TensorFlow
Use https://github.com/onnx/onnx-tensorflow:
pip install tensorflow-addons
pip install tensorflow-probability==0.22.1
pip install onnx-tf
Then run:
onnx-tf convert -i model.onnx -o model.pb
You should see output like:
2024-04-09 07:03:32,346 - onnx-tf - INFO - Start converting ONNX pb to TF SavedModel
2024-04-09 07:03:41,015 - onnx-tf - INFO - Converting completes successfully.
INFO:onnx-tf:Converting completes successfully.
A saved_model.pb file will appear inside the model.pb directory.
Step 3: from TensorFlow to TFLite
Following https://www.tensorflow.org/lite/convert?hl=zh-cn, write a Python script:
import tensorflow as tf

# Convert the model.
saved_model_dir = "model.pb"  # path to the SavedModel directory from step 2
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
Run the script and you should see output like:
2024-04-09 07:16:45.514656: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:378] Ignored output_format.
2024-04-09 07:16:45.514767: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:381] Ignored drop_control_dependency.
2024-04-09 07:16:45.515630: I tensorflow/cc/saved_model/reader.cc:83] Reading SavedModel from: .
2024-04-09 07:16:45.517291: I tensorflow/cc/saved_model/reader.cc:51] Reading meta graph with tags { serve }
2024-04-09 07:16:45.517352: I tensorflow/cc/saved_model/reader.cc:146] Reading SavedModel debug info (if present) from: .
2024-04-09 07:16:45.523781: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:382] MLIR V1 optimization pass is not enabled
2024-04-09 07:16:45.524480: I tensorflow/cc/saved_model/loader.cc:233] Restoring SavedModel bundle.
2024-04-09 07:16:45.543346: I tensorflow/cc/saved_model/loader.cc:217] Running initialization op on SavedModel bundle at path: .
2024-04-09 07:16:45.559402: I tensorflow/cc/saved_model/loader.cc:316] SavedModel load for tags { serve }; Status: success: OK. Took 43775 microseconds.
2024-04-09 07:16:45.584171: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2024-04-09 07:16:45.635201: I tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2245] Estimated count of arithmetic op
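Before shipping model.tflite to a device, it is worth a quick inference sanity check with `tf.lite.Interpreter`. A sketch using a trivial in-memory model so it runs standalone; for the real model, construct the interpreter with `model_path="model.tflite"` instead of `model_content`:

```python
import numpy as np
import tensorflow as tf

# A trivial model (y = 2x) as a stand-in for the converted network.
class Doubler(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
    def __call__(self, x):
        return 2.0 * x

m = Doubler()
cf = m.__call__.get_concrete_function()
converter = tf.lite.TFLiteConverter.from_concrete_functions([cf], m)
tflite_model = converter.convert()

# Run one inference through the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
x = np.ones((1, 4), np.float32)
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
y = interpreter.get_tensor(out["index"])
```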
And that's it!