Philadelphia, USA / March 2025
University of Pennsylvania
Instructor: Jeffrey S. Anderson
Teaching Assistant: Shunta Moriuchi
ARI Robotics Lab Managers: Nicholas Sideropoulos, Shunta Moriuchi
This project demonstrates the integration of robotics, computer vision, and game logic. It leverages a Grasshopper-based workflow to create an interactive Tic-Tac-Toe game, where a user competes against an ABB robotic arm. The system handles board calibration, image processing for move detection, and AI-driven decision-making, showcasing advanced techniques in robotic control and human-robot interaction.

Hardware Components
ABB Robotic Arm (Penn-IRB120)
Web Camera
Paper and Pen
Pen and camera holder (3D-printed end effector)
Software
Grasshopper (Rhino 7 & 8) – Rhino 7 for Robot IO, Rhino 8 for the TTT System
RobotExMachina – For communication with the ABB robot
OpenCV – For move detection using the camera
Keras – For the CNN symbol classifier
Grasshopper Plugins
Firefly.X = 0.0.0.69
Human = 1.7.3.0
Pufferfish = 3.0.0.0
Telepathy = 1.0.0.0
WombatGH = 1.3.1.0
TTT System
Dependencies & External Resources
1. Rhino & Grasshopper
RhinoCommon (Rhino.Geometry) for core geometry.
Grasshopper SDK for custom components and data trees.
GhPython (ghpythonlib.treehelpers) to shape Python outputs into GH trees.
2. Image Capture & Processing
Firefly Bridge (C#) → Windows bitmap → Python via io.MemoryStream.
OpenCV (cv2) for grayscale conversion, thresholding, perspective transforms, and contour detection.
Pillow (PIL.Image) to inspect BMP dimensions and aspect ratio.
3. Numerical & Utility
NumPy for array ops and geometry math.
Python’s math module for scaling and aspect-ratio calculations.
C# (System, System.Linq) for collection handling and I/O.
4. Machine Learning
Keras (TensorFlow backend) CNN model trained in Google Colab → exported as .h5.
Paper-size calibration (A3, A4, Tabloid) for real-world board scaling.
5. Extras
Google Colab Notebook: TTT Model Training
Random (Python) for tie-breaker AI move selection.
Data Parse and Exchange
These files facilitate data exchange across modules and define board behavior.
Board Creation & Calibration
Generates a 3×3 (or custom-sized) Tic-Tac-Toe grid aligned to a given paper size, including the board’s cell geometry, cell centers, and initial state mapping.
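The grid logic reduces to a few lines of arithmetic. Below is a minimal sketch in plain Python, assuming paper sizes in millimeters and a square board centered on the sheet; the dictionary layout, margin value, and function name are illustrative, not the component's actual outputs.

```python
# Build an n-by-n Tic-Tac-Toe grid for a given paper size and return each cell's
# center point plus an empty initial state. Names and margin are assumptions.
PAPER_MM = {"A4": (210.0, 297.0), "A3": (297.0, 420.0), "Tabloid": (279.4, 431.8)}

def board_cells(paper="A4", n=3, margin=20.0):
    w, h = PAPER_MM[paper]
    side = min(w, h) - 2 * margin            # keep the board square and inside the sheet
    cell = side / n
    x0, y0 = (w - side) / 2.0, (h - side) / 2.0
    cells = {}
    for row in range(n):
        for col in range(n):
            cx = x0 + (col + 0.5) * cell     # cell center, later used as a move target
            cy = y0 + (row + 0.5) * cell
            cells[row * n + col] = {"center": (cx, cy), "size": cell, "state": None}
    return cells

print(board_cells("A4")[4])                  # center cell of the default 3x3 board
```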
Image Processing
This stage captures and interprets the current state of the game board using webcam input and OpenCV methods.
ConvertFFbmp2Bmp
- Purpose: Converts the Firefly webcam image (Firefly_Bitmap) to a standard Windows bitmap (Wbitmap) for further processing.
- Key Dependencies: Firefly_Bridge, System.Drawing
Thresh Process
- Purpose: Processes an input image (bitmap or file path), converts it to grayscale, applies binary thresholding, and saves the result for symbol detection.
- Key Dependencies: cv2 (OpenCV), PIL.Image, System.Drawing.Bitmap, NumPy
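A minimal OpenCV sketch of this step, assuming the Firefly bitmap has already been written to disk; the file names and threshold value are placeholders.

```python
import cv2

frame = cv2.imread("frame.bmp")                    # bitmap handed over from Firefly
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)     # drop color information
# Binary-inverse threshold: dark pen strokes become white, the paper becomes black.
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
cv2.imwrite("thresh.png", thresh)                  # saved for symbol detection downstream
```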
DetectPaper_Calibrated
- Purpose: Detects the sheet of paper in the camera frame using the binary threshold image, finds its four corners, applies a perspective transform, and outputs calibration data.
- Key Dependencies: cv2 (OpenCV), NumPy
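A hedged sketch of the corner-finding and rectification idea with OpenCV; the corner ordering, target raster size, and file names are assumptions rather than the script's actual values.

```python
import cv2
import numpy as np

thresh = cv2.imread("thresh.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
sheet = max(contours, key=cv2.contourArea)              # largest blob is assumed to be the paper
peri = cv2.arcLength(sheet, True)
corners = cv2.approxPolyDP(sheet, 0.02 * peri, True)    # expect 4 corner points

if len(corners) == 4:
    # src points must be ordered consistently (e.g. TL, TR, BR, BL) to match dst
    src = corners.reshape(4, 2).astype("float32")
    w, h = 420, 297                                     # target size with A3-like proportions
    dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype="float32")
    M = cv2.getPerspectiveTransform(src, dst)           # calibration matrix reused downstream
    warped = cv2.warpPerspective(thresh, M, (w, h))
```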
DetectBoardGrid
- Purpose: Applies thresholding to the calibrated paper image, detects the board grid layout, and draws the 3×3 cells on the image.
- Key Dependencies: cv2 (OpenCV), NumPy
Move Detection and Gameplay
Processes the parsed webcam data and interprets the player’s move, then passes the current board state to the AI system to calculate the robot’s next move.
Model Training (Google Colab)
- Purpose: Trains a convolutional neural network using Keras to classify user-drawn X and O symbols from thresholded images.
- Platform: TTT Model Training
- Process Overview:
  - Upload the labeled image dataset.
  - Define and train the CNN model.
  - Export the model in .h5 format (see the sketch below).
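A minimal Keras sketch of the kind of CNN described; the input size, layer widths, class count, and file name are assumptions, not the trained model's actual architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),             # thresholded, single-channel cell image
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),       # assumed classes: X, O, empty
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.2)
model.save("ttt_model.h5")                       # .h5 export used on the Grasshopper side
```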
RecordMoves
- Purpose: Uses the trained model to classify the symbol in each cell of the board and updates the game state.
- Key Dependencies: Keras, NumPy, cv2
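A hedged sketch of the per-cell classification, assuming the rectified board image from the calibration step and the exported .h5 model; the class order, crop size, and function name are assumptions.

```python
import cv2
import numpy as np
from tensorflow import keras

model = keras.models.load_model("ttt_model.h5")
LABELS = ["X", "O", None]                              # assumed class order

def classify_cell(board_img, x, y, w, h):
    crop = board_img[y:y + h, x:x + w]                 # grayscale cell region
    crop = cv2.resize(crop, (64, 64)).astype("float32") / 255.0
    probs = model.predict(crop.reshape(1, 64, 64, 1), verbose=0)[0]
    return LABELS[int(np.argmax(probs))]
```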
TTTSolver
- Purpose: Integrates player and AI moves, manages game logic, and renders results on the board image. The algorithm is minimax enhanced with alpha-beta pruning: it evaluates potential moves by simulating the game to its conclusion, then chooses the move that maximizes the computer’s chance of winning while minimizing the opponent’s advantage. Alpha-beta pruning eliminates branches that cannot affect the final decision, reducing computation time (see the sketch below).
- Logic:
  - Uses a custom Tic class to track game state and evaluate win/tie conditions.
  - Implements alpha-beta pruning to choose the best move for the AI (‘O’).
  - Updates history, adds the computer move, and draws the move on the paper image.
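A compact illustration of minimax with alpha-beta pruning on a 9-cell board. This is a generic sketch of the technique, not the project's Tic class; the board encoding (a flat list of "X", "O", or None) is an assumption.

```python
# Generic minimax + alpha-beta pruning for Tic-Tac-Toe, scored from the AI's ("O") view.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in WINS:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player, alpha=-2, beta=2):
    w = winner(b)
    if w:
        return (1, None) if w == "O" else (-1, None)   # win for "O" is +1, for "X" is -1
    free = [i for i, v in enumerate(b) if v is None]
    if not free:
        return 0, None                                 # tie
    best = None
    for i in free:
        b[i] = player
        score, _ = minimax(b, "X" if player == "O" else "O", alpha, beta)
        b[i] = None
        if player == "O" and score > alpha:            # maximizing player
            alpha, best = score, i
        elif player == "X" and score < beta:           # minimizing player
            beta, best = score, i
        if alpha >= beta:
            break                                      # prune: branch cannot change the outcome
    return (alpha if player == "O" else beta), best

board = ["X", None, None, None, None, None, None, None, None]
print(minimax(board, "O")[1])                          # index of the AI's reply
```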

Pixel To User-Defined Board In Rhino
This step takes the selected cell from the existing grid, computes its center and dimensions, draws the corresponding move shape (a circle sized to fit), and packages that single “last move” into a Grasshopper data tree so it can be merged back into the board visualization.
CornersCalibration2Rhino
- Purpose: Calibrates a set of corner points from an image to align with Rhino’s geometry. It applies a perspective transformation and scales the points to match a designated paper size (A3, A4, or Tabloid). The script outputs the corrected corner points, a corresponding edge polyline, and the transformation used for scaling.
- Logic:
  - Input Verification: Checks whether corner points are provided; if not, sets an error message.
  - Perspective Transformation: Uses the TransformCorners method to warp the input corner points based on the provided transformation matrix, then converts the warped points into Rhino 3D points.
  - Scaling Calculation: The ScaleByDimension function splits the corners into two groups (even and odd indices) to calculate the average distance between pairs, and compares this distance against predefined dimensions for the paper sizes (A3, A4, Tabloid) to derive a scaling factor (see the sketch below).
  - Edge Construction & Transformation: Constructs a closed polyline curve by appending the first point to the list of corners, applies the scaling transformation to this polyline, and prints the scale.
  - Output & Messaging: Returns the calibrated corners, the transformed edge, and the transformation matrix, and updates the component’s message to indicate successful calibration.
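A hedged sketch of the scaling idea: compare the measured pixel distance between corner pairs with the known physical paper dimension to obtain a scale factor. The even/odd pairing, the table of long-edge lengths, and the function name are assumptions for illustration.

```python
import math

PAPER_MM = {"A4": 297.0, "A3": 420.0, "Tabloid": 431.8}    # long-edge lengths in millimeters

def scale_by_dimension(corners, paper="A4"):
    # corners: [(x, y), ...] ordered around the sheet; the even/odd split pairs corners
    evens, odds = corners[0::2], corners[1::2]
    dists = [math.dist(a, b) for a, b in zip(evens, odds)]
    measured = sum(dists) / len(dists)                     # average pixel distance
    return PAPER_MM[paper] / measured                      # millimeters per pixel

print(scale_by_dimension([(0, 0), (0, 840), (594, 840), (594, 0)], "A3"))   # -> 0.5
```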
ResampleImage
- Purpose: Extracts essential information from a BMP image file (its width, height, and aspect ratio), applies a scaling factor to the dimensions, and outputs a formatted dictionary containing the new dimensions and the aspect ratio as the Grasshopper component’s message.
- Logic:
  - Image Information Extraction: Opens the BMP image using the Python Imaging Library (PIL) and retrieves its original width and height.
  - Aspect Ratio Calculation: Computes the aspect ratio by dividing the width by the height, then checks whether the width and height divide evenly (giving a clean “width:height” ratio) or not (in which case the ratio is rounded to three decimal places).
  - Dimension Scaling: Multiplies the original width and height by the provided scale factor and applies a floor operation.
  - Output Formatting: Constructs a new dictionary containing the scaled width, height, and the formatted aspect ratio, and assigns this dictionary (as a string) to the Grasshopper component message.
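A minimal Pillow sketch of the resample step, assuming a BMP path and a scale factor as the component inputs; the ratio formatting shown here is a loose reading of the description, and the function name is a placeholder.

```python
import math
from PIL import Image

def resample_info(path, scale=0.5):
    with Image.open(path) as img:
        width, height = img.size
    if width % height == 0:                                  # clean "width:height" ratio
        ratio = "{}:{}".format(width // height, 1)
    else:
        ratio = round(width / height, 3)                     # otherwise round to 3 decimals
    return {"width": math.floor(width * scale),
            "height": math.floor(height * scale),
            "aspect": ratio}

print(resample_info("frame.bmp", 0.5))
```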
Board2Rhino
- Purpose: Integrates board cell information with historical game moves to generate Rhino geometry that represents the game board. It outputs a grid of cells (as rectangles) and overlays the corresponding game shapes (“O” as circles, “X” as two crossing lines) based on a history record, after applying a specified transformation.
- Logic:
  - Input Processing:
    - Receives a list of board cells (xBoards), an image/paper (xPaper), a history list (xHistory) containing past moves, and a transformation matrix (xform).
    - Converts the history list to a dictionary for easier lookup.
  - Grid and Geometry Creation:
    - Iterates over each cell defined by its position and size (x, y, w, h) from the board grid.
    - Converts the cell values to floats and adjusts the y-coordinate to account for image coordinate differences (flipping y based on the image height).
    - Constructs a rectangle for each cell using a plane centered at (x, y) and defined by width w and height h.
    - Applies the transformation xform to the rectangle and stores it in a dictionary (recs).
  - Game Move Overlay:
    - Checks whether a move is recorded in the history for the current cell.
    - For an “O” move: calculates the cell’s centroid, creates a circle centered at the centroid with a radius equal to half of the smaller cell dimension, then transforms the circle using xform and stores it.
    - For an “X” move: retrieves the four corners of the rectangle, creates two crossing lines (diagonals) by connecting opposite corners, and stores the lines as the shape for that cell.
    - If no move is present, assigns an empty shape list for that cell.
  - Output Structuring:
    - Converts the dictionary of game shapes into a Grasshopper tree using ghpythonlib.treehelpers.list_to_tree to maintain structured data output (see the sketch after this list).
    - Returns the list of transformed cell rectangles (cellGrid) and the tree of game shapes (geos_tree).
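A hedged GhPython sketch of the rectangle and move-shape construction (meant to run inside a GhPython component, where Rhino.Geometry is available). The cell tuple layout, the img_h flip, and the history encoding (a dict of cell index to "X"/"O") mirror the description above but are still assumptions.

```python
import Rhino.Geometry as rg

def cell_geometry(i, x, y, w, h, img_h, history, xform):
    y = img_h - y                                         # flip y: image rows grow downward
    plane = rg.Plane(rg.Point3d(x, y, 0), rg.Vector3d.ZAxis)
    rect = rg.Rectangle3d(plane, rg.Interval(-w / 2.0, w / 2.0), rg.Interval(-h / 2.0, h / 2.0))
    rect.Transform(xform)                                 # carry the cell into board space

    shapes = []
    move = history.get(i)
    if move == "O":
        radius = 0.5 * min(rect.Width, rect.Height)       # circle fits inside the cell
        shapes.append(rg.Circle(rg.Plane(rect.Center, rg.Vector3d.ZAxis), radius))
    elif move == "X":
        c = [rect.Corner(k) for k in range(4)]            # two diagonals form the cross
        shapes.extend([rg.Line(c[0], c[2]), rg.Line(c[1], c[3])])
    return rect, shapes                                   # empty shapes list when no move
```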
AppendMove2Rhino
- Purpose: Processes a specific cell from the grid to record the last move. It generates a circle at the center of that cell, using half of the smaller dimension as the radius. The move is then stored and formatted as a Grasshopper tree for further use downstream.
- Logic:
  - Data Preparation: Retrieves a list of cells (xCellGrid) and an existing history dictionary (xHistory), and initializes a last_move dictionary with None for each cell index.
  - Cell Processing: Selects the target cell using the provided index and calculates the cell’s centroid and dimensions (width and height).
  - Circle Creation: Creates a circle centered at the cell’s centroid, with the radius set to half of the smaller dimension (ensuring the circle fits within the cell).
  - Recording Move: Updates the last_move dictionary at the specific index with the created circle.
  - Output Conversion: Converts the last_move dictionary into a Grasshopper tree structure using ghpythonlib.treehelpers.list_to_tree for structured downstream processing (see the sketch below).
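A hedged GhPython sketch of recording the last move, assuming xCellGrid (a list of rg.Rectangle3d) and index arrive as component inputs as described; the output tree layout is an assumption.

```python
import Rhino.Geometry as rg
from ghpythonlib import treehelpers as th

# One slot per cell; only the selected cell receives the new circle.
last_move = {i: None for i in range(len(xCellGrid))}

cell = xCellGrid[index]
center = cell.Center                                  # centroid of the selected cell
radius = 0.5 * min(cell.Width, cell.Height)           # circle fits inside the cell
last_move[index] = rg.Circle(rg.Plane(center, rg.Vector3d.ZAxis), radius)

# One branch per cell index; empty branches where no move was recorded.
lastMoveTree = th.list_to_tree(
    [[last_move[i]] if last_move[i] is not None else [] for i in sorted(last_move)]
)
```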
Map Back to User-Defined Board
- Purpose: In this final stage, the script maps both the human and AI moves back onto a custom, user-defined board in Rhino. It ensures that the board geometry is accurately placed (via “SafeZoning”) and provides a visual preview of the combined game state, where each cell, circle (“O”), and cross (“X”) is correctly aligned with the user’s board dimensions.
- Logic:
  - SafeZoning: Ensures the geometry (cells, shapes) is confined within user-defined boundaries, which may involve checking or adjusting coordinates so they remain valid on the custom board.
  - PreviewBoard: Offers a temporary or secondary visualization of the moves (from both the human player and the AI) before they are finally mapped onto the board, which helps confirm placement and appearance.
  - Map On Defined Board: Takes the computed moves, whether circles (“O”) or crosses (“X”), and projects them onto the user’s board geometry using transformation logic (see the sketch below). It combines any “HistoryBoard” data (past moves) with the newly computed “Computer Move” or “Move” data, ensuring the final output correctly represents all moves made so far, and outputs a fully updated board geometry in Rhino that reflects the current game state on the user-defined board.
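A hedged GhPython sketch of the mapping idea: a plane-to-plane transform carries the calibrated paper geometry onto the user's board. The source and target planes and the geometry list are placeholders, not the component's actual inputs, and any scaling to a differently sized board would be an additional step.

```python
import Rhino.Geometry as rg

paper_plane = rg.Plane.WorldXY                        # plane of the calibrated paper board
board_plane = rg.Plane(rg.Point3d(100, 50, 0),        # user-defined board origin/orientation
                       rg.Vector3d.ZAxis)
xmap = rg.Transform.PlaneToPlane(paper_plane, board_plane)

mapped = []
for geo in [rg.Circle(rg.Plane.WorldXY, 5.0).ToNurbsCurve()]:   # stand-in for cells + moves
    dup = geo.Duplicate()                             # keep the original preview geometry intact
    dup.Transform(xmap)
    mapped.append(dup)
```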

Robot IO

Define TCP
Generate three tool-center-point planes (PenTCP, WebcamTCP, RealSenseTCP) from the input mesh.

Setup Machina
Simulate Camera / Drawing / Last Position: Toggle between the simulated camera pose, the simulated drawing pose, and the robot’s actual last TCP.
Connect to Bridge: Click “Connect” to open the Machina Bridge link and enable command sending.
Reset Axes
DefineTool / AttachTool / SpeedTo / AxesTo / TransformTo: Set the tool, attach it, set the speed, zero all axes, and apply the plane transform in one merged “Send Instructions” call.
Calibration Control Panel
XYZ Jog Buttons: Six color-coded buttons (+/– X, Y, Z) move the robot by the “Distance per Click” value.
KeyPress Listener: Optionally use arrow-key input (polled every 50 ms) instead of clicking.
Record Paper Corners
Capture Four Points: Click to log Far-Left, Far-Right, Near-Left, Near-Right TCP positions.
Merge & Order: Combine into a consistent corner list for downstream use.
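A small plain-Python sketch of one way to merge and order the four recorded TCP points. Which axis means “far” and which side counts as “left” depend on the robot’s frame, so both conventions here are assumptions.

```python
def order_corners(points):
    # points: four (x, y, z) tuples recorded at the paper corners
    by_depth = sorted(points, key=lambda p: p[0])        # assume +x points away from the robot
    near, far = by_depth[:2], by_depth[2:]
    near_left, near_right = sorted(near, key=lambda p: p[1])
    far_left, far_right = sorted(far, key=lambda p: p[1])
    return [far_left, far_right, near_left, near_right]  # consistent downstream order

print(order_corners([(500, 200, 10), (500, -200, 10), (300, 200, 10), (300, -200, 10)]))
```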


Webcam Capture & Program Generation
Send Webcam Location: “Send Program to Machina” button emits the capture routine.
Compile Program (Move Absolute): Define tools, attach TCP, set speed & precision, transform to plane, then merge into a text program.
Send Program: Push the assembled program string straight to the Machina Bridge.
Curve Processing & Toolpath Sending
Select Move & Send: Toggle simulation vs. real, cull any curves below the tolerance.
Make Toolpath on Surface: Project curves, divide into points, build normal drawing planes, duplicate endpoints if closed, insert clearance planes, and preview.
Compile Program: Use the same define/attach/speed/precision/transform merge to turn the plane list into a program.
Break into Chunks: Partition the action list into timed batches for smoother sends.
Send Program: Stream each chunk to Machina via the bridge.
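A minimal plain-Python sketch of the batching idea behind “Break into Chunks” and “Send Program”: split a long action list into fixed-size batches and pause between sends. The chunk size, delay, and send callback are placeholders, not the actual bridge interface.

```python
import time

def send_in_chunks(actions, chunk_size=25, delay_s=0.5, send=print):
    # stream the assembled program in fixed-size batches so the bridge is never flooded
    for start in range(0, len(actions), chunk_size):
        batch = actions[start:start + chunk_size]
        send("\n".join(batch))          # push one batch of program lines
        time.sleep(delay_s)             # give the bridge time to queue the batch

send_in_chunks(["program line 1", "program line 2", "program line 3"], chunk_size=2, delay_s=0.1)
```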