nncase


nncase is a neural network compiler for AI accelerators.

Telegram: nncase community. Technical discussion QQ group: 790699378 (join answer: 人工智能).

K230

  • Usage
  • FAQ
  • Example
  • Colab run
  • Version relationship between nncase and K230_SDK
  • Update the nncase runtime library in the SDK

Install

  • Linux:

    pip install nncase nncase-kpu
    
  • Windows:

    1. pip install nncase
    2. Download `nncase_kpu-2.x.x-py2.py3-none-win_amd64.whl` from the Releases link below.
    3. pip install nncase_kpu-2.x.x-py2.py3-none-win_amd64.whl
    

All versions of nncase and nncase-kpu are available in Releases.
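
A quick way to confirm the installation is the sketch below; it uses only the standard-library importlib.metadata module rather than any nncase-specific API:

    # check_install.py: print the installed versions of nncase and nncase-kpu
    from importlib.metadata import version, PackageNotFoundError

    for pkg in ("nncase", "nncase-kpu"):
        try:
            print(pkg, version(pkg))
        except PackageNotFoundError:
            print(pkg, "is not installed")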

Supported operators

  • TFLite ops
  • Caffe ops
  • ONNX ops

Benchmark test

| kind | model | shape | quant_type(If/W) | nncase_fps | tflite_onnx_result | accuracy | info |
|------|-------|-------|------------------|------------|--------------------|----------|------|
| Image Classification | mobilenetv2 | [1,224,224,3] | u8/u8 | 600.24 | top-1 = 71.3%; top-5 = 90.1% | top-1 = 71.1%; top-5 = 90.0% | dataset(ImageNet 2012, 50000 images); tflite |
| Image Classification | resnet50V2 | [1,3,224,224] | u8/u8 | 86.17 | top-1 = 75.44%; top-5 = 92.56% | top-1 = 75.11%; top-5 = 92.36% | dataset(ImageNet 2012, 50000 images); onnx |
| Image Classification | yolov8s_cls | [1,3,224,224] | u8/u8 | 130.497 | top-1 = 72.2%; top-5 = 90.9% | top-1 = 72.2%; top-5 = 90.8% | dataset(ImageNet 2012, 50000 images); yolov8s_cls(v8.0.207) |
| Object Detection | yolov5s_det | [1,3,640,640] | u8/u8 | 23.645 | bbox: mAP50-90 = 0.374; mAP50 = 0.567 | bbox: mAP50-90 = 0.369; mAP50 = 0.566 | dataset(coco val2017, 5000 images); yolov5s_det(v7.0 tag, rect=False, conf=0.001, iou=0.65) |
| Object Detection | yolov8s_det | [1,3,640,640] | u8/u8 | 9.373 | bbox: mAP50-90 = 0.446; mAP50 = 0.612; mAP75 = 0.484 | bbox: mAP50-90 = 0.404; mAP50 = 0.593; mAP75 = 0.45 | dataset(coco val2017, 5000 images); yolov8s_det(v8.0.207, rect=False) |
| Image Segmentation | yolov8s_seg | [1,3,640,640] | u8/u8 | 7.845 | bbox: mAP50-90 = 0.444; mAP50 = 0.606; mAP75 = 0.484 / segm: mAP50-90 = 0.371; mAP50 = 0.578; mAP75 = 0.396 | bbox: mAP50-90 = 0.444; mAP50 = 0.606; mAP75 = 0.484 / segm: mAP50-90 = 0.371; mAP50 = 0.579; mAP75 = 0.397 | dataset(coco val2017, 5000 images); yolov8s_seg(v8.0.207, rect=False, conf_thres=0.0008) |
| Pose Estimation | yolov8n_pose_320 | [1,3,320,320] | u8/u8 | 36.066 | bbox: mAP50-90 = 0.6; mAP50 = 0.843; mAP75 = 0.654 / keypoints: mAP50-90 = 0.358; mAP50 = 0.646; mAP75 = 0.353 | bbox: mAP50-90 = 0.6; mAP50 = 0.841; mAP75 = 0.656 / keypoints: mAP50-90 = 0.359; mAP50 = 0.648; mAP75 = 0.357 | dataset(coco val2017, 2346 images); yolov8n_pose(v8.0.207, rect=False) |
| Pose Estimation | yolov8n_pose_640 | [1,3,640,640] | u8/u8 | 10.88 | bbox: mAP50-90 = 0.694; mAP50 = 0.909; mAP75 = 0.776 / keypoints: mAP50-90 = 0.509; mAP50 = 0.798; mAP75 = 0.544 | bbox: mAP50-90 = 0.694; mAP50 = 0.909; mAP75 = 0.777 / keypoints: mAP50-90 = 0.508; mAP50 = 0.798; mAP75 = 0.54 | dataset(coco val2017, 2346 images); yolov8n_pose(v8.0.207, rect=False) |
| Pose Estimation | yolov8s_pose | [1,3,640,640] | u8/u8 | 5.568 | bbox: mAP50-90 = 0.733; mAP50 = 0.925; mAP75 = 0.818 / keypoints: mAP50-90 = 0.605; mAP50 = 0.857; mAP75 = 0.666 | bbox: mAP50-90 = 0.734; mAP50 = 0.925; mAP75 = 0.819 / keypoints: mAP50-90 = 0.604; mAP50 = 0.859; mAP75 = 0.669 | dataset(coco val2017, 2346 images); yolov8s_pose(v8.0.207, rect=False) |

Demo

Demo clips: eye gaze, space_resize, face pose.

K210/K510

  • Usage
  • FAQ
  • Example

Supported operators

  • TFLite ops
  • Caffe ops
  • ONNX ops

Features

  • Supports multiple inputs and outputs and multi-branch structures
  • Static memory allocation, no heap memory required
  • Operator fusion and optimizations
  • Supports float and quantized uint8 inference
  • Supports post-training quantization from a float model with a calibration dataset (see the sketch after this list)
  • Flat model with zero-copy loading
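
As an illustration of the post-training quantization flow listed above, the sketch below compiles a float TFLite model into a .kmodel using a calibration dataset. It assumes the nncase 2.x Python API (CompileOptions, Compiler, ImportOptions, PTQTensorOptions); the target name, model path, input shape, and the exact calibration-data layout passed to set_tensor_data are placeholders and assumptions to adapt to your model and nncase version:

    import numpy as np
    import nncase

    # Configure the compiler; "k230" is a placeholder target name.
    compile_options = nncase.CompileOptions()
    compile_options.target = "k230"
    compiler = nncase.Compiler(compile_options)

    # Import a float TFLite model (use import_onnx for ONNX models).
    with open("mobilenetv2.tflite", "rb") as f:  # placeholder model path
        compiler.import_tflite(f.read(), nncase.ImportOptions())

    # Post-training quantization with a small calibration set.
    # Layout assumed here: one list per sample, one ndarray per model input.
    ptq_options = nncase.PTQTensorOptions()
    ptq_options.samples_count = 4
    calib = [[np.random.rand(1, 224, 224, 3).astype(np.float32)]
             for _ in range(ptq_options.samples_count)]
    ptq_options.set_tensor_data(calib)
    compiler.use_ptq(ptq_options)

    # Compile and write the kmodel to disk.
    compiler.compile()
    with open("mobilenetv2.kmodel", "wb") as f:
        f.write(compiler.gencode_tobytes())

In practice the calibration samples should come from the real preprocessing pipeline rather than random data, since they determine the quantization ranges.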

Architecture

Build from source

It is recommended to install nncase directly through pip. The source code related to the K510 and K230 chips is not currently open source, so nncase-K510 and nncase-kpu (K230) cannot be obtained by compiling from source.

If your model contains operators that nncase does not yet support, you can request them in an issue or implement them yourself and submit a PR; they will be integrated in a later release. Alternatively, contact us for a temporary build. The steps to compile nncase are as follows.

git clone https://github.com/kendryte/nncase.git
cd nncase
mkdir build && cd build

# Use Ninja
cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=./install
ninja && ninja install

# Use make
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=./install
make && make install
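
Whether nncase comes from pip or from a local build, a compiled .kmodel can be sanity-checked on the host before deploying to a board. Below is a minimal sketch, assuming the nncase 2.x simulator API (Simulator, RuntimeTensor) and a placeholder model path and input shape:

    import numpy as np
    import nncase

    # Load a previously compiled kmodel (placeholder path).
    with open("mobilenetv2.kmodel", "rb") as f:
        sim = nncase.Simulator()
        sim.load_model(f.read())

    # Feed one dummy input; shape and dtype must match the compiled model.
    dummy = np.random.rand(1, 224, 224, 3).astype(np.float32)
    sim.set_input_tensor(0, nncase.RuntimeTensor.from_numpy(dummy))
    sim.run()

    # Read back the first output and inspect it.
    out = sim.get_output_tensor(0).to_numpy()
    print(out.shape, out.dtype)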

Resources

Canaan developer community

Canaan developer community contains all resources related to K210, K510, and K230.

  • Downloads (资料下载) –> Pre-compiled images for the development boards of the three chips.
  • Documents (文档) –> Documentation for the three chips.
  • Model zoo (模型库) –> Examples and code for industrial, security, educational and other scenarios that run on the K210 and K230.
  • Model training (模型训练) –> The model training platform for the K210 and K230, supporting training for various scenarios.

Bilibili

  • Canaan AI tutorial and application demonstration

K210 related repo

  • K210_Yolo_framework
  • Shts!'s Blog (Japanese)
  • Examples

K230 related repo

  • C: K230_SDK
    • Documents
    • K230 end-to-end tutorial
  • MicroPython: Canmv_k230
    • Documents
    • Examples
