On the Importance of Building High-quality Training Datasets for Neural Code Search
The performance of neural code search is significantly influenced by the quality of the training data from which the neural models are derived. A large corpus of high-quality query-code pairs is required to establish a precise mapping from natural language to programming language. Due to the limited availability of such data, most widely-used code search datasets are built with compromises, such as using code comments as substitutes for queries. Our empirical study on a popular code search dataset reveals that over one-third of its queries contain noise that makes them deviate from natural user queries. Models trained on noisy data suffer severe performance degradation when applied in real-world scenarios. Improving dataset quality and making the queries of its samples semantically equivalent to real user queries is therefore critical for the practical usability of neural code search. In this paper, we propose a data cleaning framework consisting of two successive filters: a rule-based syntactic filter and a model-based semantic filter. This is the first framework that applies semantic query cleaning to code search datasets. Experimentally, we evaluate the effectiveness of our framework on two widely-used code search models and three manually-annotated code retrieval benchmarks. Training the popular DeepCS model with the dataset filtered by our framework improves its performance by 19.2% MRR and 21.3% Answer@1 on average across the three validation benchmarks.
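To illustrate how the two filters compose, the following is a minimal sketch of such a two-stage cleaning pipeline. The helper names (syntactic_filter, semantic_filter, scorer) and the specific rules and threshold are hypothetical, chosen only for illustration and not taken from the actual implementation.

```python
import re

def syntactic_filter(query: str) -> bool:
    """Rule-based stage: reject queries containing obvious non-natural-language noise.
    The rule set below is an illustrative assumption, not the paper's exact rules."""
    noise_patterns = [
        r"https?://\S+",   # embedded URLs
        r"<[^>]+>",        # leftover HTML/XML tags
        r"\{@\w+",         # Javadoc inline tags such as {@link ...}
        r"[^\x00-\x7F]",   # non-ASCII text
    ]
    return not any(re.search(p, query) for p in noise_patterns)

def semantic_filter(query: str, scorer, threshold: float = 0.5) -> bool:
    """Model-based stage: keep queries that a trained scorer judges to be query-like.
    `scorer` is any callable mapping a query string to a score in [0, 1]."""
    return scorer(query) >= threshold

def clean_dataset(pairs, scorer):
    """Apply the syntactic filter first, then the semantic filter, to (query, code) pairs."""
    return [
        (query, code)
        for query, code in pairs
        if syntactic_filter(query) and semantic_filter(query, scorer)
    ]
```

In this sketch the cheap rule-based stage runs first so that the more expensive model-based stage only scores queries that already look syntactically clean.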
In this experiment, we compare models trained with HTML tags partly removed (the tags are stripped but the enclosed text is kept) and fully removed (the tags and the enclosed text are both dropped). This is a preliminary experiment whose setting differs from the experiments reported in the paper. Considering the better performance achieved by the partly-removed group, we decided to keep the content enclosed by HTML tags.
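For illustration, the sketch below shows one way the two preprocessing variants could be implemented with simple regular expressions; the actual preprocessing code may differ.

```python
import re

def strip_tags_keep_text(comment: str) -> str:
    """'Partly removed': drop the HTML tags but keep the text they enclose."""
    return re.sub(r"</?\w+[^>]*>", " ", comment)

def strip_tags_and_text(comment: str) -> str:
    """'Fully removed': drop the tags together with the enclosed text."""
    return re.sub(r"<(\w+)[^>]*>.*?</\1>", " ", comment, flags=re.DOTALL)

comment = "Returns the value of <code>maxSize</code> as an <b>int</b>."
print(strip_tags_keep_text(comment))  # keeps "maxSize" and "int"
print(strip_tags_and_text(comment))   # removes them along with the tags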
We additionally compared the query and code lengths across the datasets. As shown in Table 2, the average lengths of queries and code snippets in COFIC are 9.1 and 61.3 tokens, respectively. Interestingly, the statistics of COFIC are close to those of StaQC, the dataset collected from StackOverflow. This suggests that our filtering framework can filter out texts that lie far from the distribution of high-quality queries such as StackOverflow question titles.
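As a small worked example, the length statistics above can be computed as mean token counts over the (query, code) pairs; the sketch below assumes simple whitespace tokenization, which may differ from the tokenizer used for Table 2.

```python
def average_lengths(pairs):
    """Return (mean query length, mean code length) in whitespace tokens."""
    query_lens = [len(query.split()) for query, _ in pairs]
    code_lens = [len(code.split()) for _, code in pairs]
    return sum(query_lens) / len(query_lens), sum(code_lens) / len(code_lens)

# Toy usage with a single illustrative pair
pairs = [("convert a string to an integer", "int value = Integer.parseInt(s);")]
print(average_lengths(pairs))
```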