Learning basketball tactics in a virtual reality (VR) environment requires real-time feedback to improve realism and interactivity. For example, a virtual defender should move immediately in response to the player's movement. However, existing basketball player trajectory generation models can only generate a fragment of a trajectory at a time rather than a plausible one-step position.
In this paper, we propose an autoregressive generative model for basketball defensive trajectory generation. To learn a continuous Gaussian distribution over player positions, we adopt a differentiable sampling process that samples candidate locations, together with a standard deviation loss that preserves the diversity of the generated trajectories. Furthermore, we design several additional loss functions based on basketball domain knowledge so that the generated trajectories match real game situations.
The experimental results show that the proposed method outperforms previous works in terms of several evaluation metrics.
Team tactics are an important part of basketball training. Generally, coaches explain tactics via a traditional or electronic tactic board with 2D trajectories, which is quite different from training on a real court. It is therefore difficult for players to clearly understand the tactics described by the coach. To enhance the effectiveness of tactic learning, Tsai proposed a VR basketball training system that helps players comprehend the details of offensive tactics with the aid of a 2D electronic basketball tactic board (2D-BTB) and a virtual training environment. Using the 2D-BTB, coaches can set up the target training tactic by intuitively assigning the trajectories of the offensive players, and the trainee can observe the details of the tactic in a more immersive way. In the 2D-BTB system, real-time simulation of defensive players is important because, while experiencing the VR training system, the trainee needs to learn which teammate might be a wide-open player to pass the ball to and which path might better avoid being blocked by the defenders. The trajectories of defenders can be input by the coach through the 2D-BTB, just like the offensive trajectories, but this might be a burden for the coach. To provide real-time, realistic virtual training with both offender and defender animation, the goal of this work is to instantly simulate defensive trajectories according to the offensive/ball positions input by the coach or by the user interacting with the VR system.
Deep generative models such as latent variable generative models and autoregressive generative models are commonly used to generate time series data. Latent variable generative models, e.g., variational autoencoders (VAEs) and generative adversarial networks (GANs), directly generate the data for all time steps efficiently. More precisely, given a sequence (O) as the condition, VAEs or GANs generate another sequence (D) all at once. In our application, which aims to instantly simulate defensive trajectories according to the offensive/ball positions, latent variable generative models need to collect data over several time steps as the condition for the generation process, resulting in latency before feedback on the current offensive movement can be received. In contrast, autoregressive models generate the data of the next time step given the previous data, which is more appropriate for our task. A well-known autoregressive generative model is WaveNet, which utilizes causal convolution to model the temporal relations in time series data and generates a multinomial distribution over discrete amplitude values for synthesized audio. Because WaveNet models discrete values with a multinomial distribution, it might produce inaccurate position outputs in our application, where position values are continuous. Instead of a multinomial distribution, we utilize a continuous Gaussian distribution to model the output data. In summary, we modify WaveNet and propose an autoregressive generative model to learn the distribution of the defensive trajectory. A convolutional autoregressive network is introduced to generate smooth 2D trajectories based on the previous defensive trajectories, the current offender positions, the current ball position, and the basket position.
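The causal-convolution mechanism underlying this kind of architecture can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: the weight shapes, the single-layer setup, and the `gaussian_head` helper that maps hidden features to per-step Gaussian parameters are all assumptions made for demonstration.

```python
import numpy as np

def causal_conv1d(x, w, dilation=1):
    """Causal 1-D convolution: the output at time t depends only on
    inputs at times <= t, which is what makes autoregressive,
    one-step-at-a-time generation possible.
    x: (T, C_in) input sequence; w: (K, C_in, C_out) kernel."""
    K = w.shape[0]
    pad = dilation * (K - 1)
    # Left-pad with zeros so no future sample leaks into the output.
    x_pad = np.concatenate([np.zeros((pad, x.shape[1])), x], axis=0)
    T = x.shape[0]
    out = np.zeros((T, w.shape[2]))
    for t in range(T):
        for k in range(K):
            # w[K-1] sees the current step, earlier taps see the past.
            out[t] += x_pad[t + pad - k * dilation] @ w[K - 1 - k]
    return out

def gaussian_head(h, w_mu, w_log_sigma):
    """Hypothetical linear head: map hidden features h (T, C) to
    per-step Gaussian parameters (mu, log_sigma) over a 2-D position."""
    return h @ w_mu, h @ w_log_sigma
```

Stacking such layers with increasing dilation, as in WaveNet, grows the temporal receptive field exponentially while keeping the output strictly causal.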
Moreover, a differentiable position sampling mechanism is applied to the output distribution, and a Huber loss is employed on the sampled data points to help the model converge more easily. To preserve the diversity of the generated trajectories, we propose a standard deviation loss that keeps the variance of the Gaussian distribution sufficiently large. In addition, several heuristic losses are designed to make the generated trajectories consistent with the common behavior of real players.
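The sampling and loss components described above can be sketched as follows, assuming a reparameterization-style differentiable sampler and a hinge-style standard deviation penalty; the threshold `sigma_min` and the Huber `delta` are hypothetical parameters, and the paper's exact formulations may differ.

```python
import numpy as np

def sample_position(mu, log_sigma, eps=None):
    """Reparameterized sampling: pos = mu + sigma * eps, where eps is
    standard Gaussian noise. The sample stays differentiable with
    respect to mu and log_sigma, so losses on it can train the model."""
    if eps is None:
        eps = np.random.randn(*mu.shape)
    return mu + np.exp(log_sigma) * eps

def huber_loss(pred, target, delta=1.0):
    """Huber loss on sampled positions: quadratic near zero, linear for
    large residuals, which is less sensitive to outliers than MSE."""
    r = np.abs(pred - target)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta)).mean()

def std_loss(log_sigma, sigma_min=0.1):
    """Standard deviation loss: penalize standard deviations below
    sigma_min so the distribution does not collapse to a point and
    trajectory diversity is preserved."""
    return np.maximum(sigma_min - np.exp(log_sigma), 0.0).mean()
```

Without the standard deviation term, minimizing the position loss alone would push sigma toward zero and the model would generate a single deterministic trajectory.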
Our proposed model with the convolutional autoregressive network and the differentiable position sampling mechanism.