RoveTest
Introduction
This project is a tool for non-intrusive, automated GUI testing built on a robotic arm, a camera, and techniques such as visual understanding. It targets GUI testing on terminal devices running ubiquitous operating systems, and it can also be used for GUI testing on traditional systems such as Android and iOS.
The robotic arm part of this project is developed on top of the DOFBOT AI Vision Robotic Arm (Raspberry Pi version); the JupyterLab integrated development environment comes pre-deployed in the arm's factory system.
For assembly and usage of the robotic arm, please refer to: https://www.yahboom.com/study/Dofbot-Pi
Instructions for Development (Recommended)
Environment Setup
git clone git@code.gitlink.org.cn:Yoson_L/rovetest.git
cd rovetest && pip install -r requirements.txt
Deploy the runtime environment on a PC and connect the camera to the PC so that the code is easy to modify and debug. Communication between the PC and the robotic arm is handled by the Python RPyC library.
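For orientation, here is a minimal sketch of what an RPyC service on the robotic arm side could look like. The service class, the exposed move_servo method, and the port number are illustrative assumptions and are not taken from this repository's server.py; the Arm_Lib driver is the servo library shipped with the DOFBOT system image.

# Hypothetical RPyC service on the robotic arm -- for illustration only.
import rpyc
from rpyc.utils.server import ThreadedServer
from Arm_Lib import Arm_Device  # servo driver bundled with the DOFBOT image

arm = Arm_Device()

class ArmService(rpyc.Service):
    def exposed_move_servo(self, servo_id, angle, duration_ms=500):
        # Drive one servo to the given angle over duration_ms milliseconds.
        arm.Arm_serial_servo_write(servo_id, angle, duration_ms)
        return True

if __name__ == "__main__":
    # Accept connections from the PC; the port number is arbitrary.
    ThreadedServer(ArmService, port=18861).start()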
Enter JupyterLab and copy the files from the robot_arm folder to the robotic arm system.
Execution Steps
Enter JupyterLab and start the robotic arm: /usr/bin/python3 ./server.py
On the PC:
Open the project in PyCharm and set src as the source code directory.
In PyCharm, run host_main.py. Please note that after starting the execution, you can use the preview window to adjust the surrounding lighting and the camera position (a suitable position is directly above the device under test) and make sure everything is ready before officially starting the test.
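As a rough PC-side counterpart to the hypothetical service above, the following sketch connects to the arm over RPyC, shows a camera preview with OpenCV so lighting and camera position can be checked, and then issues a single servo command. The IP address, port, and method name are placeholders, not the actual host_main.py logic.

# Hypothetical PC-side flow -- for illustration only, not the actual host_main.py.
import cv2
import rpyc

# Connect to the RPyC service running on the robotic arm (address and port are placeholders).
conn = rpyc.connect("192.168.1.50", 18861)

# Preview the camera so lighting and position (directly above the device under test)
# can be adjusted before the test officially starts; press 'q' to close the preview.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

# Example remote call: move servo 1 to 90 degrees over 500 ms.
conn.root.move_servo(1, 90, 500)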
Fine-tuning Parameters
Configuration parameters for filtering the results of control recognition can be adjusted in ./src/config/detect.ini according to the requirements of the specific test scene.
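The exact option names in detect.ini are not reproduced here; as a hypothetical illustration, thresholds from such a file could be read with configparser and applied to recognition results along the following lines (the section and key names are made up for the example).

# Hypothetical use of detect.ini -- the section and key names are illustrative only.
import configparser

config = configparser.ConfigParser()
config.read("./src/config/detect.ini")
min_score = config.getfloat("filter", "min_score", fallback=0.5)
min_area = config.getint("filter", "min_area", fallback=100)

def filter_controls(detections):
    # Keep only recognized controls that are confident enough and large enough.
    kept = []
    for det in detections:
        if det["score"] >= min_score and det["width"] * det["height"] >= min_area:
            kept.append(det)
    return kept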
Instructions for Integration
Integrate the environment dependencies and code of this repository into the robotic arm system using JupyterLab. (Keep only the core code and remove any code used for visualization and debugging.)
Please note that the integration approach is more challenging to debug. It is recommended to use this method when the code version is stable, the testing scenario is well-defined, and the position of the running device is fixed.