How to build TensorFlow Lite as a static library and link it from a separate (CMake) project?

Updated: 2023-10-23 09:14:46

I ended up manually listing all the necessary TFLite object files for CMake's target_link_libraries (in TFLite_LIBS), and it works.

I used a simple shell script to get the list of necessary object files. First, I collected all the undefined references from the build log into a bash array, like this:

SYMBOLS=(\
    'tflite::CombineHashes('\
    'tflite::IsFlexOp('\
    'tflite::ConvertArrayToTfLiteIntArray('\
    'tflite::EqualArrayAndTfLiteIntArray('\
    ...
    'tflite::ConvertVectorToTfLiteIntArray(')
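Collecting those symbols by hand is tedious; as a rough sketch, they can also be pulled out of the linker log automatically. The file name link.log and its contents below are fabricated for illustration:

```shell
# Sketch: extract the unique undefined symbols from a linker error log.
# The sample log content here is made up; point this at your real build log.
cat > link.log <<'EOF'
main.o: undefined reference to `tflite::CombineHashes(std::vector<unsigned long> const&)'
main.o: undefined reference to `tflite::IsFlexOp(char const*)'
main.o: undefined reference to `tflite::CombineHashes(std::vector<unsigned long> const&)'
EOF

# Keep only the demangled symbol between the backtick and the closing quote,
# then deduplicate.
sed -n "s/.*undefined reference to \`\([^']*\)'.*/\1/p" link.log | sort -u
```

The deduplicated output can then be pasted into the SYMBOLS array above.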

Then, for every symbol in that array, I went through every *.o file in the bazel build output:

for SYMBOL in "${SYMBOLS[@]}"; do
    for OBJ in $(find -L /path/to/tensorflow/bazel-bin/ -name '*.o'); do
        nm -C "$OBJ" | grep "T $SYMBOL" > /dev/null && echo "$OBJ"
    done
done | sort | uniq

and added the output to TFLite_LIBS in CMake (with the correct path prefix, of course). After that I got a new batch of undefined references, but after a few iterations everything was resolved.
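As a sketch, the resulting CMake fragment might look like this. The specific object-file paths below are illustrative assumptions; the real list is whatever the script printed:

```cmake
# Sketch: feed the collected object files to the linker.
# TENSORFLOW_DIR and the object-file names are placeholders, not the
# actual list from the answer.
set(TFLite_LIBS
  ${TENSORFLOW_DIR}/bazel-bin/tensorflow/lite/_objs/framework/util.o
  ${TENSORFLOW_DIR}/bazel-bin/tensorflow/lite/_objs/framework/model.o
  # ...one entry per object file reported by the script
)
target_link_libraries(my_app ${TFLite_LIBS})
```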

I could probably also have obtained the full list of dependencies from the *-params file of my initial in-tree build, but a quick check showed that it had some redundant items, while the script collected only the necessary ones.

For the include locations, I replaced the hardcoded path to flatbuffers in the bazel cache with ${TENSORFLOW_DIR}/bazel-tensorflow/external/flatbuffers/include/. Thanks to jdehesa for the hint.

UPDATE:
The native build of an all-inclusive TF Lite static library can be done very similarly to the official build instructions for RPi, iOS, or ARM64, using plain old make:
1. ./tensorflow/lite/tools/make/download_dependencies.sh
2. make -f tensorflow/lite/tools/make/Makefile

The output library will be stored as <tensorflow-root>/tensorflow/lite/tools/make/gen/<platform>/lib/libtensorflow-lite.a, and the external dependencies with their headers go into <tensorflow-root>/tensorflow/lite/tools/make/downloads (for example, the flatbuffers headers are in <tensorflow-root>/tensorflow/lite/tools/make/downloads/flatbuffers/include).
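Consuming that archive from a separate CMake project could then look roughly like this. The target name my_app, the linux_x86_64 platform directory, and TENSORFLOW_DIR are assumptions for the sketch:

```cmake
# Sketch: link the make-built all-inclusive archive from another project.
# Adjust TENSORFLOW_DIR and the <platform> directory to your setup.
set(TENSORFLOW_DIR /path/to/tensorflow)
add_library(tensorflow-lite STATIC IMPORTED)
set_target_properties(tensorflow-lite PROPERTIES
  IMPORTED_LOCATION
    ${TENSORFLOW_DIR}/tensorflow/lite/tools/make/gen/linux_x86_64/lib/libtensorflow-lite.a)
target_include_directories(my_app PRIVATE
  ${TENSORFLOW_DIR}
  ${TENSORFLOW_DIR}/tensorflow/lite/tools/make/downloads/flatbuffers/include)
# pthread/dl are typical extra link dependencies on Linux.
target_link_libraries(my_app PRIVATE tensorflow-lite pthread dl)
```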

The guide doesn't mention that make can be called directly. There are wrapper scripts for the different cross-compilation targets, which just set the appropriate variables and run make; by default, make does a native build. This make invocation can be added as a custom command in CMakeLists.txt.
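A minimal sketch of wiring that make invocation into CMake, assuming TENSORFLOW_DIR points at the checkout and a linux_x86_64 native build:

```cmake
# Sketch: build the TF Lite static library as part of the CMake build.
# All paths here are assumptions; adjust to your checkout and platform.
set(TFLITE_LIB
  ${TENSORFLOW_DIR}/tensorflow/lite/tools/make/gen/linux_x86_64/lib/libtensorflow-lite.a)
add_custom_command(
  OUTPUT ${TFLITE_LIB}
  COMMAND make -f tensorflow/lite/tools/make/Makefile
  WORKING_DIRECTORY ${TENSORFLOW_DIR}
  COMMENT "Building TensorFlow Lite static library")
add_custom_target(tflite DEPENDS ${TFLITE_LIB})
```

Making application targets depend on the tflite custom target ensures the archive exists before linking.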