
How to convert a Paddle(Lite) model to a TensorFlow(Lite) model

August 4, 2024
Converting models between frameworks is common in deep-learning deployment. Paddle Lite and TensorFlow Lite are both popular on-device inference frameworks, and sometimes a model needs to be converted from one to the other. This article explains how.

Environment setup

TensorFlow 2.14 and PaddlePaddle 2.6 are recommended:

docker pull tensorflow/tensorflow:2.14.0

Step 1: from Paddle to ONNX

Build paddle2onnx from source as described at https://github.com/paddlepaddle/paddle2onnx/blob/develop/docs/zh/compile.md, then run:

paddle2onnx --model_dir . --model_filename your.pdmodel --params_filename your.pdiparams --save_file model.onnx

You should see output like:
[paddle2onnx] start to parse paddlepaddle model...
[paddle2onnx] model file path: ./pdmodel.pdmodel
[paddle2onnx] parameters file path: ./pdmodel.pdiparams
[paddle2onnx] start to parsing paddle model...
[paddle2onnx] [bilinear_interp_v2: bilinear_interp_v2_1.tmp_0] requires the minimal opset version of 11.
[paddle2onnx] [pixel_shuffle: pixel_shuffle_1.tmp_0] requires the minimal opset version of 11.
[paddle2onnx] [pixel_shuffle: pixel_shuffle_2.tmp_0] requires the minimal opset version of 11.
[paddle2onnx] due to the operator: bilinear_interp_v2, requires opset_version >= 11.
[paddle2onnx] opset version will change to 11 from 9
[paddle2onnx] use opset_version = 11 for onnx export.
[paddle2onnx] paddlepaddle model is exported as onnx format now.
2024-04-09 11:55:50 [info]	===============make paddlepaddle better!================
2024-04-09 11:55:50 [info]	a little survey: https://iwenjuan.baidu.com/?code=r8hu2s

Step 2: from ONNX to TensorFlow

Use https://github.com/onnx/onnx-tensorflow:

pip install tensorflow-addons
pip install tensorflow-probability==0.22.1 
pip install onnx-tf

Then run:

onnx-tf convert -i model.onnx -o model.pb

You should see output like:

2024-04-09 07:03:32,346 - onnx-tf - info - start converting onnx pb to tf saved model
2024-04-09 07:03:41,015 - onnx-tf - info - converting completes successfully.
info:onnx-tf:converting completes successfully.

In the model.pb directory you will find saved_model.pb.

Step 3: from TensorFlow to TFLite

Following https://www.tensorflow.org/lite/convert?hl=zh-cn, write a Python script:

import tensorflow as tf

# Convert the model
saved_model_dir = "model.pb"  # path to the SavedModel directory from step 2
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

# Save the model
with open('model.tflite', 'wb') as f:
  f.write(tflite_model)

Run the script and you should see output like:

2024-04-09 07:16:45.514656: w tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:378] ignored output_format.
2024-04-09 07:16:45.514767: w tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:381] ignored drop_control_dependency.
2024-04-09 07:16:45.515630: i tensorflow/cc/saved_model/reader.cc:83] reading savedmodel from: .
2024-04-09 07:16:45.517291: i tensorflow/cc/saved_model/reader.cc:51] reading meta graph with tags { serve }
2024-04-09 07:16:45.517352: i tensorflow/cc/saved_model/reader.cc:146] reading savedmodel debug info (if present) from: .
2024-04-09 07:16:45.523781: i tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:382] mlir v1 optimization pass is not enabled
2024-04-09 07:16:45.524480: i tensorflow/cc/saved_model/loader.cc:233] restoring savedmodel bundle.
2024-04-09 07:16:45.543346: i tensorflow/cc/saved_model/loader.cc:217] running initialization op on savedmodel bundle at path: .
2024-04-09 07:16:45.559402: i tensorflow/cc/saved_model/loader.cc:316] savedmodel load for tags { serve }; status: success: ok. took 43775 microseconds.
2024-04-09 07:16:45.584171: i tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling mlir crash reproducer, set env var `mlir_crash_reproducer_directory` to enable.
2024-04-09 07:16:45.635201: i tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2245] estimated count of arithmetic op

And that's it!
