Armlab

The Armlab project, part of the ROB 550 curriculum in Fall 2023, was a comprehensive exploration of the autonomy of a 5-DOF robotic arm. The project involved implementing computer vision, kinematics, and path-planning techniques, and culminated in a series of challenges in which the robotic arm autonomously manipulated objects in its workspace. It was both a demonstration of robotics theory and a practical test of implementation methodology.

Project Objectives

  • Acting: Develop forward and inverse kinematics models for precise robotic arm movement and object manipulation.
  • Sensing: Perform 3D camera calibration, object detection, and workspace mapping using depth sensors and computer vision algorithms.
  • Reasoning: Design and implement state machines for automated task execution, integrating sensor data and kinematics.

Implementation Methodology

  1. Forward and Inverse Kinematics (FK/IK)
    • Forward Kinematics: Utilized the Denavit-Hartenberg (DH) parameterization to calculate the end-effector’s position and orientation. This involved defining the RX200 robotic arm’s geometry and solving transformation matrices to map joint angles to global coordinates (a NumPy sketch follows this list).
    • Inverse Kinematics: Developed algorithms to compute joint angles from a desired end-effector position and orientation. The implementation included error handling for unreachable configurations and degenerate poses (a simplified geometric sketch follows this list).

  2. Automatic Camera Calibration with AprilTags
    • Leveraged AprilTags for camera extrinsic calibration. By detecting known AprilTag positions in the workspace, the system calculated the transformation matrix between the camera and the robot’s base frame (a solvePnP-style sketch follows this list).
    • Applied projective transformations to rectify the workspace view, ensuring accurate mapping between image and world coordinates.

  3. Object Detection
    • Implemented a block detection algorithm using OpenCV. The algorithm identified block positions, shapes, and colors (red, green, blue, and others) using a combination of depth imaging and RGB image analysis (a thresholding sketch follows this list).
    • Enhanced robustness by filtering false positives and calibrating thresholds for color and depth consistency.

  4. Pick-and-Place Task
    • Designed a state machine to automate the pick-and-place process (a sketch follows this list). The system integrated IK, FK, and camera data to:
      • Detect blocks in the workspace.
      • Plan and execute an approach trajectory.
      • Grasp blocks using the gripper and move them to specified locations.
    • Added functionality for "click-to-grab" and "click-to-drop," allowing users to interact with the system via a GUI.
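
To make the forward-kinematics step concrete, here is a minimal Python/NumPy sketch of FK under the standard DH convention. The DH table below is a placeholder for illustration only, not the RX200's actual parameters:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """4x4 homogeneous transform for one link (standard DH convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Placeholder DH table: (theta_offset, d, a, alpha) per joint.
# Illustrative values only -- NOT the RX200's measured parameters.
DH_TABLE = [
    (0.0, 0.104, 0.0,   np.pi / 2),
    (0.0, 0.0,   0.206, 0.0),
    (0.0, 0.0,   0.200, 0.0),
    (0.0, 0.0,   0.0,   np.pi / 2),
    (0.0, 0.174, 0.0,   0.0),
]

def forward_kinematics(joint_angles):
    """Compose the per-link transforms into the base-to-end-effector pose."""
    T = np.eye(4)
    for q, (offset, d, a, alpha) in zip(joint_angles, DH_TABLE):
        T = T @ dh_transform(q + offset, d, a, alpha)
    return T  # T[:3, 3] is position, T[:3, :3] is orientation

print(forward_kinematics(np.zeros(5))[:3, 3])  # end-effector at the home pose
```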
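A full IK solution for the 5-DOF RX200 is longer than fits here; as a simplified stand-in that shows the same geometric approach and the unreachable-pose handling described above, here is a two-link planar IK sketch (the link lengths and elbow convention are assumptions):

```python
import numpy as np

def planar_2link_ik(x, y, l1, l2, elbow_up=True):
    """Geometric IK for a planar two-link arm (simplified illustration).

    Returns (theta1, theta2) in radians, or raises ValueError when the
    target lies outside the reachable annulus -- mirroring the error
    handling for unreachable configurations described above."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # law of cosines
    if abs(c2) > 1.0:
        raise ValueError("target pose is unreachable")
    theta2 = np.arccos(c2)
    if elbow_up:
        theta2 = -theta2
    k1 = l1 + l2 * np.cos(theta2)
    k2 = l2 * np.sin(theta2)
    theta1 = np.arctan2(y, x) - np.arctan2(k2, k1)
    return theta1, theta2

print(planar_2link_ik(0.3, 0.1, 0.2, 0.2))  # example reachable target
```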
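For the AprilTag extrinsic calibration, one plausible shape of the computation is OpenCV's solvePnP on matched tag-corner coordinates. This is a hedged sketch; the corner layout, pixel coordinates, and intrinsics are all placeholder values:

```python
import cv2
import numpy as np

# 3D tag-corner positions in the robot base frame (meters) and their
# detected pixel coordinates. Values are placeholders for illustration.
world_pts = np.array([
    [-0.25, -0.025, 0.0], [0.25, -0.025, 0.0],
    [0.25,  0.275, 0.0],  [-0.25,  0.275, 0.0],
], dtype=np.float64)
image_pts = np.array([
    [310.0, 420.0], [930.0, 418.0],
    [925.0, 105.0], [315.0, 110.0],
], dtype=np.float64)

# Camera intrinsics from the intrinsic calibration step (placeholders).
K = np.array([[900.0,   0.0, 640.0],
              [  0.0, 900.0, 360.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)

# solvePnP yields the world->camera transform; invert it for camera->world.
ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)
T_world_to_cam = np.eye(4)
T_world_to_cam[:3, :3] = R
T_world_to_cam[:3, 3] = tvec.ravel()
T_cam_to_world = np.linalg.inv(T_world_to_cam)  # extrinsic used for mapping
```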
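For the block detector, here is a minimal OpenCV sketch of the color-threshold-plus-contour approach; the HSV ranges and area cutoff are illustrative values, not the project's tuned thresholds:

```python
import cv2
import numpy as np

def detect_blocks(bgr_image, depth_image):
    """Color-threshold + contour sketch of a block detector."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Illustrative HSV ranges; real thresholds would be calibrated.
    color_ranges = {
        "red":   ((0, 120, 70),   (10, 255, 255)),
        "green": ((40, 80, 60),   (80, 255, 255)),
        "blue":  ((100, 120, 60), (130, 255, 255)),
    }
    detections = []
    for color, (lo, hi) in color_ranges.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 400:  # reject small false positives
                continue
            (cx, cy), _, angle = cv2.minAreaRect(c)
            depth = float(depth_image[int(cy), int(cx)])  # depth consistency check
            detections.append({"color": color, "pixel": (cx, cy),
                               "angle": angle, "depth": depth})
    return detections
```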
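Finally, the pick-and-place state machine might be structured like the sketch below. Here `arm` and `camera` are hypothetical interfaces standing in for the actual RX200 driver and calibrated vision pipeline, so the method names are assumptions:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    DETECT = auto()
    APPROACH = auto()
    GRASP = auto()
    MOVE = auto()
    PLACE = auto()

def run_pick_and_place(arm, camera, drop_pose):
    """Simplified pick-and-place loop; `arm` and `camera` are
    hypothetical stand-ins for the robot driver and vision pipeline."""
    state, target = State.DETECT, None
    while state is not State.IDLE:
        if state is State.DETECT:
            blocks = camera.detect_blocks()          # vision pipeline
            target = blocks[0] if blocks else None
            state = State.APPROACH if target else State.IDLE
        elif state is State.APPROACH:
            arm.move_to(target["pose"], offset_z=0.05)  # hover above block
            state = State.GRASP
        elif state is State.GRASP:
            arm.move_to(target["pose"])
            arm.close_gripper()
            state = State.MOVE
        elif state is State.MOVE:
            arm.move_to(drop_pose, offset_z=0.05)
            state = State.PLACE
        elif state is State.PLACE:
            arm.move_to(drop_pose)
            arm.open_gripper()
            state = State.DETECT                     # look for the next block
```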

Results and Challenges

  • Competitions: Successfully participated in challenges such as sorting, stacking, and arranging blocks, achieving high accuracy and efficiency under time constraints.
  • Accuracy: Verified FK/IK outputs using controlled test cases and calibrated camera data. Errors were minimized through iterative adjustments and robust algorithm design.
  • Future Improvements: While the implementation was successful, enhancements in gripper design and motion smoothing could further improve performance.

Armlab Project Report