
Define Piece Position using YOLO and Aruco Markers #2

Open
1 of 4 tasks
JELAshford opened this issue May 9, 2021 · 1 comment
Assignees
Labels
blog: A blog post might come out of this task
help wanted: Extra help is needed
software: About programming or debugging code
theory: About theoretical or abstract concepts

Comments

@JELAshford
Collaborator

JELAshford commented May 9, 2021

Description

Localising the chess pieces in 3D, on and off the board, requires identifying each piece in the camera frame and converting that 2D image position into 3D coordinates relative to the board. We believe this can be done using the known positions of ArUco markers placed around the board.
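A minimal sketch of the 2D-to-3D conversion, assuming the pieces sit on the board plane: given four ArUco corner points whose board-plane coordinates are known, a homography maps any detected pixel onto the board. All concrete values below (marker pixel positions, board dimensions) are hypothetical; in practice OpenCV's `cv2.aruco` detection and `cv2.findHomography` would supply them.

```python
import numpy as np

def find_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst points via DLT."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of the stacked constraint matrix
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def pixel_to_board(h, pixel):
    """Map a 2D pixel coordinate onto the board plane (perspective divide)."""
    p = h @ np.array([pixel[0], pixel[1], 1.0])
    return p[:2] / p[2]

# Hypothetical example: four marker corners seen at these pixels...
marker_pixels = [(100, 120), (540, 110), (560, 460), (90, 470)]
# ...and known to sit at these board-plane coordinates (metres)
marker_board = [(0.0, 0.0), (0.4, 0.0), (0.4, 0.4), (0.0, 0.4)]

H = find_homography(marker_pixels, marker_board)
piece_xy = pixel_to_board(H, (320, 290))  # centre of a YOLO detection box
```

This only recovers positions on the board plane; pieces held above the board would need the full camera pose, which the marker detection can also provide.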

Details

Ensure that distances between ArUco markers are measured in the same units used by RViz, so that the length scales of our Computer Vision and Motion Planning systems match.

Roadmap

  • Research previous 3D localisation using space markers
  • Implement basic position tracking
  • Scale this to all points in images
  • Implement memory of board layout/use this to confirm estimates
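The last roadmap item, remembering the board layout to confirm estimates, could be sketched as snapping each noisy position estimate to its nearest square and checking it against the last known occupancy. The square size and naming scheme below are assumptions.

```python
SQUARE = 0.05  # board square side length in metres (assumed)

def square_from_position(x, y):
    """Convert a board-plane (x, y) estimate to a square name like 'e4'."""
    file_idx = min(max(int(x // SQUARE), 0), 7)
    rank_idx = min(max(int(y // SQUARE), 0), 7)
    return "abcdefgh"[file_idx] + str(rank_idx + 1)

class BoardMemory:
    """Remember which piece occupies each square; flag surprising estimates."""

    def __init__(self):
        self.layout = {}  # square name -> piece label

    def confirm(self, piece, x, y):
        """Return the snapped square and whether it agrees with memory."""
        square = square_from_position(x, y)
        expected = self.layout.get(square)
        consistent = expected in (None, piece)
        self.layout[square] = piece
        return square, consistent

memory = BoardMemory()
print(memory.confirm("white_knight", 0.31, 0.02))  # ('g1', True)
print(memory.confirm("white_bishop", 0.31, 0.02))  # ('g1', False): disagrees
```

A disagreement between the detector and the memory could trigger a re-detection rather than overwriting the layout outright.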
@JELAshford added the help wanted, software, theory and blog labels May 9, 2021
@JELAshford JELAshford self-assigned this May 9, 2021
@dyamon dyamon self-assigned this May 16, 2021
@dyamon
Member

dyamon commented May 16, 2021

Following the OpenCV documentation and this (rather outdated) tutorial, we were able to calibrate the camera, computing the intrinsic, extrinsic, and distortion parameters of the camera in use.

[Image: calibration grid detected in the camera feed]

Calibration is performed using a grid of known size, and the resulting parameters can be computed once and stored for future use.
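The compute-once-and-store pattern might look like the following sketch. The intrinsic matrix and distortion coefficients below are made-up example values (in practice they come out of OpenCV's `cv2.calibrateCamera`), and the projection applies the standard pinhole model with k1/k2 radial (Brown-Conrady) distortion.

```python
import io
import numpy as np

# Assumed example values; real ones are produced by the calibration step
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])      # intrinsic matrix
dist = np.array([-0.1, 0.01])        # radial coefficients k1, k2

# Compute once, store, reload later (on disk this would be a .npz file)
buf = io.BytesIO()
np.savez(buf, K=K, dist=dist)
buf.seek(0)
params = np.load(buf)

def project(point_3d, K, dist):
    """Pinhole projection with k1/k2 radial distortion."""
    x, y = point_3d[0] / point_3d[2], point_3d[1] / point_3d[2]
    r2 = x * x + y * y
    radial = 1 + dist[0] * r2 + dist[1] * r2 * r2
    u = K[0, 0] * x * radial + K[0, 2]
    v = K[1, 1] * y * radial + K[1, 2]
    return u, v

u, v = project(np.array([0.1, 0.05, 1.0]), params["K"], params["dist"])
```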

Once the camera is calibrated we can interact with the camera feed and introduce some augmented reality features.
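The AR overlay amounts to projecting the corners of a virtual cube, defined in the board/marker frame, through the estimated camera pose and intrinsics. The pose and intrinsics below are assumed example values; OpenCV's `cv2.projectPoints` performs the same computation (plus distortion).

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])  # assumed intrinsics from calibration

# Assumed camera pose: tilt about the x-axis plus a translation from the lens
theta = np.deg2rad(30)
R = np.array([[1, 0, 0],
              [0, np.cos(theta), -np.sin(theta)],
              [0, np.sin(theta), np.cos(theta)]])
t = np.array([0.0, 0.0, 0.5])

# Eight corners of a 5 cm cube sitting on the marker plane (z up is negative
# here because the marker frame's z-axis points into the board)
cube = np.array([[x, y, z]
                 for x in (0.0, 0.05)
                 for y in (0.0, 0.05)
                 for z in (0.0, -0.05)])

def project_points(pts, R, t, K):
    """Rigidly transform points into the camera frame, then project."""
    cam = pts @ R.T + t            # rotate and translate into camera frame
    uv = cam @ K.T                 # apply the intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]  # perspective divide to pixel coordinates

pixels = project_points(cube, R, t, K)
# The eight 2D corners can now be connected with lines to draw the cube
```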

[Image: virtual cube rendered on the camera feed]
