ncnn is a high-performance neural network inference computing framework optimized for mobile platforms.
ncnn has been designed with mobile deployment and use in mind from the very beginning.
ncnn has no third-party dependencies.
It is cross-platform and runs faster than all known open-source frameworks on mobile phone CPUs.
With ncnn's efficient implementation, developers can easily deploy deep learning models to mobile platforms, build intelligent apps, and bring artificial intelligence to your fingertips.
ncnn is currently used in many Tencent applications, such as QQ, Qzone, WeChat, and Pitu.
Supports convolutional neural networks, including multiple inputs and multi-branch structures, and can compute only part of the branches
No third-party library dependencies; does not rely on BLAS/NNPACK or any other computing framework
Pure C++ implementation; cross-platform, supporting Android, iOS, and more
Careful ARM NEON assembly-level optimization for extremely fast computation
Sophisticated memory management and data structure design for a very low memory footprint
Supports multi-core parallel computing acceleration and ARM big.LITTLE CPU scheduling optimization
Supports GPU acceleration via the next-generation low-overhead Vulkan API
Extensible model design; supports 8-bit quantization and half-precision floating-point storage; can import caffe/pytorch/mxnet/onnx/darknet/keras/tensorflow(mlir) models
Supports zero-copy loading of network models by direct memory reference
Custom layer implementations can be registered and extended
Well, it is strong, not afraid of being stuffed with 卷 QvQ
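The features above come together in a short inference flow: load the param and bin files, create an extractor, feed an input blob, and extract an output blob. A minimal sketch, assuming ncnn is installed and that the file names (model.param, model.bin) and blob names ("data", "prob") are placeholders for your own model:

```cpp
#include <stdio.h>
#include "net.h" // ncnn

int main() {
    ncnn::Net net;
    // net.opt.use_vulkan_compute = true; // optional GPU path via Vulkan

    // load the text param file and the binary weights (hypothetical file names)
    if (net.load_param("model.param")) return -1;
    if (net.load_model("model.bin")) return -1;

    // a 227x227 3-channel input, filled with zeros purely for illustration
    ncnn::Mat in(227, 227, 3);
    in.fill(0.f);

    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in); // blob names are model-specific assumptions

    ncnn::Mat out;
    ex.extract("prob", out); // computes only the layers this output depends on

    printf("output dims=%d w=%d\n", out.dims, out.w);
    return 0;
}
```

Because extraction is demand-driven, asking for an intermediate blob name here is exactly how "can compute only part of the branches" is used in practice.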
ncnn
637093648 (lots of experts)
Answer: 卷卷卷卷卷 (group full)
https://t.me/ncnnyes
https://discord.gg/YRsxgmF
677104663 (lots of experts)
Answer: multi-level intermediate representation
818998520 (new group!)
Download & Build status
https://github.com/Tencent/ncnn/releases/latest
how to build ncnn library on Linux / Windows / macOS / Raspberry Pi3, Pi4 / POWER / Android / NVIDIA Jetson / iOS / WebAssembly / AllWinner D1 / Loongson 2K1000
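For the desktop Linux case, the build follows the standard CMake flow; a sketch with the common defaults (see the per-platform guide above for toolchain-specific flags):

```shell
# fetch the sources with submodules (glslang is needed for the Vulkan path)
git clone --recursive https://github.com/Tencent/ncnn.git
cd ncnn

# out-of-source CMake build
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j"$(nproc)"

# optionally install headers and libraries
# make install
```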
Supports most commonly used CNN networks
HowTo
use ncnn with alexnet with detailed steps, recommended for beginners :)
ncnn component usage guide (in Chinese): alexnet with detailed steps, strongly recommended for beginners :)
use netron for ncnn model visualization
use ncnn with pytorch or onnx
ncnn low-level operation api
ncnn param and model file spec
ncnn operation param weight table
how to implement custom layer step by step
FAQ
ncnn throws an error
ncnn produces a wrong result
ncnn vulkan
Features
Feature overview (in Chinese)
supported platform matrix
Project examples
https://github.com/magicse/ncnn-colorization-siggraph17
https://github.com/mizu-bai/ncnn-fortran Call ncnn from Fortran
https://github.com/k2-fsa/sherpa Uses ncnn for real-time speech recognition (i.e., speech-to-text); also supports embedded devices and provides mobile apps (e.g., an Android app)
License
BSD 3-Clause