@Vengineer's musings : Twitter
Welcome to the world of SystemVerilog. It all started with the release of SystemC v0.9.
This article on the TensorFlow Blog.
As quoted below, hardware accelerators are already supported through NNAPI (Android) and the GPU (iOS/Android). In addition, the recently announced delegate for the Qualcomm Hexagon DSP brings hardware acceleration to devices running versions below Android 8.1, which lack NNAPI but may still carry a Hexagon DSP. On iOS, a CoreML delegate is now supported, enabling Apple's Neural Engine as well.
Each platform has its own hardware accelerator that can be used to speed up model inference. TensorFlow Lite has already supported running models on NNAPI for Android, GPU for both iOS and Android. We are excited to add more hardware accelerators:
- On Android, we have added support for Qualcomm Hexagon DSP which is available on millions of devices. This enables developers to leverage the DSP on older Android devices below Android 8.1 where Android NN API is unavailable.
- On iOS, we have launched CoreML delegate to allow running TensorFlow Lite models on Apple’s Neural Engine.
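The accelerator fallback the bullets above describe can be sketched in Python. The helper name `pick_delegate_library`, the API-level threshold, and the return strings are illustrative assumptions for this post; only the commented-out `tf.lite` calls reflect the actual TensorFlow Lite Python API for attaching a delegate.

```python
# Minimal sketch of the accelerator choice described in the quoted bullets:
# NNAPI requires Android 8.1+ (API level 27); on older devices the Hexagon
# delegate can still provide acceleration if the SoC has the DSP, otherwise
# inference falls back to the CPU. Names below are illustrative assumptions.

def pick_delegate_library(android_api_level, has_hexagon_dsp):
    """Return which backend to try first for TFLite inference."""
    if android_api_level >= 27:
        return "nnapi"                    # Android NN API is available
    if has_hexagon_dsp:
        return "libhexagon_delegate.so"   # pre-8.1 device with a Hexagon DSP
    return "cpu"                          # no accelerator available

# With TensorFlow installed, a delegate would be attached like this
# (real API: tf.lite.experimental.load_delegate / experimental_delegates):
#   delegate = tf.lite.experimental.load_delegate("libhexagon_delegate.so")
#   interpreter = tf.lite.Interpreter(model_path="model.tflite",
#                                     experimental_delegates=[delegate])
#   interpreter.allocate_tensors()
```

On iOS the same pattern applies with the CoreML delegate, which routes supported ops to the Neural Engine.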
And performance has improved as the figure below shows (quoted via its embedded URL): between Google I/O 2019 (May 2019) and TF Dev Summit 2020 (Feb 2020), CPU and GPU latency was roughly halved. Real progress.
Going forward, they plan to work on the following:
- Continuously release up-to-date state-of-the-art on-device models, including better support for BERT-family models for NLP tasks and new vision models.
- Publish new tutorials and examples demonstrating more use cases, including how to use C/C++ APIs for inference on mobile.
- Enhance Model Maker to support more tasks including object detection and several NLP tasks. We will add BERT support for NLP tasks, such as question and answer. This will empower developers without machine learning expertise to build state-of-the-art NLP models through transfer learning.
- Expand the metadata and codegen tools to support more use cases, including object detection and more NLP tasks.
- Launch more platform integration for even easier end-to-end experience, including better integration with Android Studio and TensorFlow Hub.
There are a lot of NLP-related tasks on this list.
Perhaps the image-related models are already in fairly good shape.
They are also deepening the Android Studio integration. I wonder what the plan is for iOS?
