
TensorLayerX is a multi-backend AI framework that supports TensorFlow, PyTorch, MindSpore, PaddlePaddle, OneFlow, and Jittor as backends, allowing users to run the same code, without modification, on different operating systems and AI hardware such as NVIDIA GPU, Huawei Ascend, Cambricon, and more. It also supports hybrid-framework development. This project is maintained by researchers from Peking University, Imperial College London, Princeton, Stanford, Tsinghua, Edinburgh, and Peng Cheng Lab.
Documentation
TensorLayerX has extensive documentation for both beginners and professionals.

Deep Learning course
We offer video courses on deep learning, with example code based on TensorLayerX.
Bilibili link (Chinese)
Design Features
Compatibility: Supports mainstream frameworks and AI chips, enabling the same code to run on all platforms.
Model Zoo: Provides a series of applications with classic and SOTA models, covering CV, NLP, RL, and other fields.
Deployment: Supports the ONNX protocol, as well as model export, import, and deployment.
Multi-backend Design
You can immediately use TensorLayerX to define a model in PyTorch style and switch to any backend easily.
import os
os.environ['TL_BACKEND'] = 'tensorflow'  # modify this line to switch backends easily!
# os.environ['TL_BACKEND'] = 'mindspore'
# os.environ['TL_BACKEND'] = 'paddle'
# os.environ['TL_BACKEND'] = 'torch'

import tensorlayerx as tlx
from tensorlayerx.nn import Module
from tensorlayerx.nn import Linear

class CustomModel(Module):
    def __init__(self):
        super(CustomModel, self).__init__()
        self.linear1 = Linear(out_features=800, act=tlx.ReLU, in_features=784)
        self.linear2 = Linear(out_features=800, act=tlx.ReLU, in_features=800)
        self.linear3 = Linear(out_features=10, act=None, in_features=800)

    def forward(self, x, foo=False):
        z = self.linear1(x)
        z = self.linear2(z)
        out = self.linear3(z)
        if foo:
            out = tlx.softmax(out)
        return out

MLP = CustomModel()
MLP.set_eval()
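Conceptually, the backend switch works because the framework reads the TL_BACKEND environment variable at import time and dispatches to the matching backend. The following is a minimal, hypothetical sketch of that dispatch pattern; the names (select_backend, SUPPORTED_BACKENDS) are illustrative and not TensorLayerX's actual internals.

```python
import os

# Backend names accepted via the TL_BACKEND environment variable.
SUPPORTED_BACKENDS = ('tensorflow', 'mindspore', 'paddle', 'torch', 'oneflow', 'jittor')

def select_backend(default='tensorflow'):
    """Read TL_BACKEND from the environment, falling back to a default,
    and validate it against the supported backend names."""
    name = os.environ.get('TL_BACKEND', default).lower()
    if name not in SUPPORTED_BACKENDS:
        raise ValueError(
            f"Unknown TL_BACKEND {name!r}; expected one of {SUPPORTED_BACKENDS}")
    return name

os.environ['TL_BACKEND'] = 'torch'
print(select_backend())  # torch
```

Because the variable is read before any backend library is imported, switching backends never requires changing model code, only the one environment line.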
Resources
- Examples for tutorials
- GammaGL, a series of graph learning algorithms
- TLXZoo, a series of pretrained backbones
- TLXCV, a series of computer vision applications
- TLXNLP, a series of natural language processing applications
- TLX2ONNX, an ONNX model exporter for TensorLayerX
- Paddle2TLX, a model code converter from PaddlePaddle to TensorLayerX
More resources can be found here
Installation
- The latest TensorLayerX is compatible with the following backend versions:

TensorLayerX | TensorFlow | MindSpore | PaddlePaddle | PyTorch | OneFlow | Jittor
v0.5.8 | v2.4.0 | v1.8.1 | v2.2.0 | v1.10.0 | – | –
v0.5.7 | v2.0.0 | v1.6.1 | v2.0.2 | v1.10.0 | – | –
Install via pip for the stable version:
# install from pypi
pip3 install tensorlayerx
Build from source for the latest version (for advanced users):
# install from GitHub
pip3 install git+https://github.com/tensorlayer/tensorlayerx.git
For more installation instructions, please refer to Installation.
Docker is an open-source application container engine. The TensorLayerX Docker Repository provides images with different versions of TensorLayerX pre-installed.
# pull from docker hub
docker pull tensorlayer/tensorlayerx:tagname
Contributing
Join our community as a code contributor, and find out more in our Help Wanted list and Contributing guide!
Getting Involved
We suggest users report bugs via GitHub issues. Users can also discuss how to use TensorLayerX in our Slack channel.
Citation
If you find TensorLayerX useful for your project, please cite the following papers:
@inproceedings{tensorlayer2021,
  title={TensorLayer 3.0: A Deep Learning Library Compatible With Multiple Backends},
  author={Lai, Cheng and Han, Jiarong and Dong, Hao},
  booktitle={2021 IEEE International Conference on Multimedia \& Expo Workshops (ICMEW)},
  pages={1--3},
  year={2021},
  organization={IEEE}
}
@article{tensorlayer2017,
  author = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
  journal = {ACM Multimedia},
  title = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
  url = {http://tensorlayer.org},
  year = {2017}
}