Virtual Reality (VR) is an emerging technology that provides users with unique, real-time immersive experiences. VR technologies have enabled revolutionary user experiences in various scenarios (e.g., training, education, and gaming). However, testing VR applications is challenging because of their inherently physical interactivity and their reliance on specialized hardware. Despite recent advances in VR technology and its usage scenarios, we still know little about VR application testing. To fill this knowledge gap, we performed an empirical study on 314 open-source VR applications. Our analysis revealed that 79% of the evaluated VR projects did not have any automated tests, and among the projects that did, the median functional-method-to-test-method ratio was lower than those of other project types. Moreover, we uncovered tool-support issues in measuring code coverage for VR applications, and the assertion density results we were able to generate were relatively low, averaging 17.63%. Finally, through a manual analysis of 370 test cases, we identified the categories of test cases used to validate VR application quality attributes. Furthermore, we determined which of these categories are VR-attention categories, meaning that test writers need to pay special attention to VR-specific characteristics when writing tests in these categories. We believe that our findings constitute a call to action for the VR development community to improve its automated testing practices, and that they provide directions for software engineering researchers to develop advanced techniques for automated test-case generation and test quality analysis for VR applications. Our replication package, containing the dataset we used, the software tools we developed, and the results we obtained, is available at https://figshare.com/s/7e3aa28b1415d26a7222.