Learning an Egocentric Basketball Ghosting Model using Wearable Cameras and Deep Convolutional Networks

Abstract: The growing availability of players’ tracking data has led to a number of data-driven ghosting models that aim to imitate players’ behaviors in various sports. However, models trained on such tracking data typically assume that the future behavior of the players depends only on their (x,y) locations on the court. Such an assumption makes these models overly simplistic and prevents them from learning the subtle behavior patterns of real players. To address this issue, we present an egocentric basketball ghosting model. Our model predicts a player’s future behavior from an egocentric image, which we obtain from a wearable GoPro camera on a player’s head. In contrast to prior methods that use tracking data or third-person cameras, our approach of using first-person cameras allows us to capture exactly what the players see during a game – making it easier to understand and imitate their behavior. Our model uses a single egocentric image to generate a plausible behavior sequence in the form of 12D egocentric camera configurations, which encode a player’s 3D location and 3D head orientation. We accomplish this via two deep convolutional networks, both trained in an unsupervised fashion without manual human annotations. In our experimental section, we demonstrate that our egocentric ghosting model generates realistic basketball sequences that can be used to predict a player’s future behavior. Furthermore, we show that by inspecting intermediate neuron activations in our trained networks, we can better understand how the model decides what the player will do next.
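The abstract does not spell out how the 12D camera configuration is laid out; one common parameterization that yields exactly 12 numbers is a flattened 3×3 head-rotation matrix plus a 3D position. The sketch below (NumPy; all names and the court coordinates are hypothetical, not from the paper) illustrates that encoding:

```python
import numpy as np

def pack_config(R, t):
    """Flatten a 3x3 head-rotation matrix and a 3D court position
    into a single 12D camera-configuration vector."""
    return np.concatenate([R.reshape(9), t])

def unpack_config(v):
    """Recover the rotation matrix and 3D position from a 12D vector."""
    return v[:9].reshape(3, 3), v[9:]

# Example: identity head orientation, player at a hypothetical court position.
R = np.eye(3)
t = np.array([14.0, 7.5, 1.8])  # x, y on the court, head height (meters)
v = pack_config(R, t)           # 12D configuration vector
R2, t2 = unpack_config(v)       # round-trips back to (R, t)
```

A predicted behavior sequence would then be a time series of such 12D vectors, one per future step.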
