
RoveTest

Introduction

This project is a tool for non-intrusive automated GUI testing built around a robotic arm, a camera, and visual-understanding techniques. It is designed for GUI testing on ubiquitous operating system terminal devices, and it can also be used to test the GUIs of traditional systems such as Android and iOS.

The robotic arm component of this project is built on the DOFBOT AI Vision Robotic Arm (Raspberry Pi version); the arm's factory system ships with the JupyterLab integrated development environment already deployed.

For assembly and usage of the robotic arm, please refer to: https://www.yahboom.com/study/Dofbot-Pi

Development Mode (Recommended)

Deploy the runtime environment on a PC and connect the camera to the PC, which makes modification and debugging easy. The PC communicates with the robotic arm through the Python RPyC library.
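The actual protocol between host_main.py and server.py is defined in this repository; the sketch below only illustrates the general RPyC pattern such a setup relies on. The service class, the tap method, the port, and the IP address are all hypothetical placeholders.

```python
import rpyc
from rpyc.utils.server import ThreadedServer

# Arm side: roughly the role server.py plays (names here are hypothetical).
class ArmService(rpyc.Service):
    # Methods prefixed with "exposed_" become remotely callable.
    def exposed_tap(self, x, y):
        print(f"tap screen point ({x}, {y})")  # the real server would drive the servos

if __name__ == "__main__":
    # Listen for the PC on an arbitrary example port.
    ThreadedServer(ArmService, port=18861).start()
```

On the PC side, the host connects to the arm's address and invokes the exposed method as if it were local:

```python
import rpyc

# PC side: replace the IP with the arm's actual address on your network.
conn = rpyc.connect("192.168.1.20", port=18861)
conn.root.tap(120, 240)  # executes exposed_tap on the arm
conn.close()
```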

Environment Setup

  • On the PC:
    • git clone git@code.gitlink.org.cn:Yoson_L/rovetest.git
    • cd rovetest && pip install -r requirements.txt
  • Enter JupyterLab and copy the files from the robot_arm folder to the robotic arm system.

Execution Steps

  • In JupyterLab, start the robotic arm server: /usr/bin/python3 ./server.py
  • On the PC:
    • Open the project in PyCharm and mark src as the sources root.
    • Run host_main.py in PyCharm. After execution starts, use the preview window to adjust the ambient lighting and the camera position (a suitable spot directly above the device under test), and begin the test once everything is ready; a minimal preview sketch follows this list.
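host_main.py provides the project's actual preview; the loop below is only an illustrative stand-in that uses OpenCV to display the camera feed so lighting and framing can be checked before a run. The camera index 0 is an assumption.

```python
import cv2

# Illustrative preview loop (not the project's code): show the camera feed
# so lighting and framing can be verified before starting a test run.
cap = cv2.VideoCapture(0)  # 0 = first attached camera; adjust if needed
if not cap.isOpened():
    raise RuntimeError("camera not found")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("camera preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q when positioning looks right
        break

cap.release()
cv2.destroyAllWindows()
```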

Fine-tuning Parameters

Configuration parameters used to filter the results of control recognition live in ./src/config/detect.ini and can be adjusted to the needs of a specific scene.
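The real keys are defined in ./src/config/detect.ini in this repository; the fragment below is a hypothetical example of the kind of filter thresholds such a file holds, followed by how Python's configparser would read one of them.

```ini
; Hypothetical values for illustration — the real keys are defined in
; ./src/config/detect.ini in this repository.
[filter]
; drop detections scored below this confidence
min_confidence = 0.6
; drop bounding boxes smaller than this pixel area
min_box_area = 400
```

```python
import configparser

# Read the filter section (section and key names here are hypothetical).
config = configparser.ConfigParser()
config.read("./src/config/detect.ini")
min_confidence = config.getfloat("filter", "min_confidence")
```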

Integration Mode

Integrate the environment dependencies and code of this repository into the robotic arm system using JupyterLab. (Keep only the core code and remove any code used for visualization and debugging.)

Since the integration approach is harder to debug, it is intended for use when the code version is stable, the testing scenario is well-defined, and the device under test sits in a fixed position.
