# Android device support
Android support relies on TensorFlow Lite, with Java and JNI bindings. For more details on how to experiment with those, please refer to the sections below.

Please refer to the TensorFlow documentation on how to set up the environment to build for Android (both the SDK and the NDK are required).
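In practice this mostly means telling the TensorFlow build where your SDK and NDK live before configuring it. A minimal sketch, assuming typical install locations (both paths and the NDK revision below are assumptions; use whatever your TensorFlow checkout expects):

```sh
# Hypothetical install locations: point these at your own SDK and NDK.
export ANDROID_SDK_HOME=$HOME/Android/Sdk
export ANDROID_NDK_HOME=$HOME/android-ndk-r18b   # NDK revision is an assumption
```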
## Using the library from an Android project
We provide an up-to-date and tested `libdeepspeech`, usable as an `AAR` package, for Android versions 7.0 through 11.0. The package is published on JCenter, and the JCenter repository should be available by default in any Android project. Please make sure your project is set up to pull from this repository. You can then include the library by adding this line to your `build.gradle`, adjusting `VERSION` to the version you need:

```groovy
implementation 'deepspeech.mozilla.org:libdeepspeech:VERSION@aar'
```
## Building `libdeepspeech.so`
You can build `libdeepspeech.so` for ARMv7 with:

```sh
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++14 --copt=-D_GLIBCXX_USE_C99 //native_client:libdeepspeech.so
```

Or for ARM64:

```sh
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm64 --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++14 --copt=-D_GLIBCXX_USE_C99 //native_client:libdeepspeech.so
```
## Building `libdeepspeech.aar`
In the unlikely event you have to rebuild the JNI bindings, the source code is available under the `libdeepspeech` subdirectory. The build depends on the shared object: please make sure to place `libdeepspeech.so` into the matching `libdeepspeech/libs/{arm64-v8a,armeabi-v7a,x86_64}/` subdirectories. Building the bindings is managed by `gradle` and should be limited to issuing `./gradlew libdeepspeech:build`, which produces an `AAR` package in `./libdeepspeech/build/outputs/aar/`.
Please note that you might have to copy the file to a local Maven repository and adapt the file naming (when a file is missing, the error message should state which filename it expects and where).
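A minimal sketch of what that copy can look like, assuming a `mavenLocal()` repository, the `deepspeech.mozilla.org` group ID used above, and a hypothetical version string; the built AAR's filename can also differ, so trust the Gradle error message over this example:

```sh
# Hypothetical local Maven staging: group 'deepspeech.mozilla.org',
# artifact 'libdeepspeech'. Adjust names as the Gradle error message dictates.
VERSION=X.Y.Z   # assumed version string
DEST=$HOME/.m2/repository/deepspeech/mozilla/org/libdeepspeech/$VERSION
mkdir -p "$DEST"
cp ./libdeepspeech/build/outputs/aar/libdeepspeech-release.aar \
   "$DEST/libdeepspeech-$VERSION.aar"
```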
## Building the C++ `deepspeech` binary
Building the `deepspeech` binary happens through `ndk-build` (ARMv7):

```sh
cd ../DeepSpeech/native_client
$ANDROID_NDK_HOME/ndk-build APP_PLATFORM=android-21 APP_BUILD_SCRIPT=$(pwd)/Android.mk NDK_PROJECT_PATH=$(pwd) APP_STL=c++_shared TFDIR=$(pwd)/../tensorflow/ TARGET_ARCH_ABI=armeabi-v7a
```

And (ARM64):

```sh
cd ../DeepSpeech/native_client
$ANDROID_NDK_HOME/ndk-build APP_PLATFORM=android-21 APP_BUILD_SCRIPT=$(pwd)/Android.mk NDK_PROJECT_PATH=$(pwd) APP_STL=c++_shared TFDIR=$(pwd)/../tensorflow/ TARGET_ARCH_ABI=arm64-v8a
```
## Android demo APK
Provided is a very simple Android demo app that lets you test the library. You can build it with `make apk` and install the resulting APK file. Please refer to the Gradle documentation for more details.

The APK should be produced in `/app/build/outputs/apk/`. This demo app might require external storage permissions. You can then push model files to your device, set the path to the file in the UI, and try to run on an audio file. When running, it should first play the audio file and then run the decoding. At the end of the decoding, you should be presented with the decoded text as well as the time elapsed to decode, in milliseconds.
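For instance, installation over `adb` might look like the following; the variant subdirectory and APK filename depend on your build, so treat both as assumptions:

```sh
make apk
# Hypothetical output path: the variant (debug/release) and filename may differ.
adb install -r app/build/outputs/apk/debug/app-debug.apk
```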
This application is intentionally very limited, and is only here as a very basic demo of one use of the library. For example, it is only able to read PCM mono 16kHz 16-bit files, and it might fail on some WAVE files that do not follow the specification exactly.
## Running `deepspeech` via adb
You should use `adb push` to send data to the device; please refer to the Android documentation on how to use it.

Please push the DeepSpeech data to `/sdcard/deepspeech/`, including:

- `output_graph.tflite`, which is the TF Lite model
- the external scorer file (available from one of our releases), if you want to use the scorer; please be aware that a scorer that is too big will make the device run out of memory
Then, push the binaries from `native_client.tar.xz` to `/data/local/tmp/ds`:

- `deepspeech`
- `libdeepspeech.so`
- `libc++_shared.so`
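Putting those transfers together, the commands might look like this (the scorer filename is an assumption):

```sh
# Model and optional scorer go to external storage.
adb shell mkdir -p /sdcard/deepspeech
adb push output_graph.tflite /sdcard/deepspeech/
adb push kenlm.scorer /sdcard/deepspeech/   # scorer filename is an assumption

# Binaries extracted from native_client.tar.xz go to /data/local/tmp/ds.
adb shell mkdir -p /data/local/tmp/ds
adb push deepspeech /data/local/tmp/ds/
adb push libdeepspeech.so /data/local/tmp/ds/
adb push libc++_shared.so /data/local/tmp/ds/
```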
You should then be able to run as usual, using a shell from `adb shell`:

```sh
user@device$ cd /data/local/tmp/ds/
user@device$ LD_LIBRARY_PATH=$(pwd)/ ./deepspeech [...]
```
Please note that the Android linker does not support `rpath`, so you have to set `LD_LIBRARY_PATH`. Properly wrapped / packaged bindings embed the library at a place the linker knows to search, so Android apps will be fine.
## Delegation API
TensorFlow Lite supports a Delegate API to offload some computation from the main CPU. Please refer to TensorFlow's documentation for details.
To ease experimentation, we have enabled some of those delegates in our Android builds:

- GPU, to leverage OpenGL capabilities
- NNAPI, the Android API to leverage GPU / DSP / NPU
- Hexagon, the Qualcomm-specific DSP
This is highly experimental:

- It requires passing the environment variable `DS_TFLITE_DELEGATE` with a value of `gpu`, `nnapi` or `hexagon`, only one at a time (see the sketch after this list)
- It might require changes to the exported model (some ops might not be supported)
- We cannot guarantee it will work, nor that it will be faster than the default implementation
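For example, when running the `deepspeech` binary over `adb shell` as described above, enabling the GPU delegate might look like this (a sketch only; whether it works, and how fast it is, depends on your device and model):

```sh
user@device$ cd /data/local/tmp/ds/
user@device$ DS_TFLITE_DELEGATE=gpu LD_LIBRARY_PATH=$(pwd)/ ./deepspeech [...]
```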
Feedback on improving this is welcome: how it could be exposed in the API, how much performance gain you get in your applications, how you had to change the model to make it work with a delegate, etc.
See the support / contact details.