In a code review, developers inspect code changes written by a peer and write comments to provide feedback. Good review comments not only pinpoint defects but can also help improve code quality and the outcome of the review process. As there is no commonly accepted approach to evaluating the quality of review comments, we aim to (1) devise a conceptual model for an explainable evaluation of the quality of review comments, and (2) develop models for the automated evaluation of comments according to the conceptual model. To address these two goals, we conduct mixed-method studies and propose a new approach: EvaCRC (Evaluating Code Review Comments). To achieve the first goal, we collect and synthesize quality attributes of review comments by triangulating data from authoritative documents that define and standardize code review, as well as from the academic literature. We then validate these attributes with real-world examples. Finally, we establish mappings between quality attributes and grades by consulting domain experts, leading to our final explainable conceptual model. To achieve the second goal, we leverage multi-label learning: given a set of code review comments, EvaCRC automatically determines their quality attributes and then derives overall quality grades based on the aforementioned mappings. To evaluate and improve EvaCRC, we conduct an industrial case study at a global ICT enterprise. The case study indicates that EvaCRC can effectively evaluate code review comments while offering reasons for the grades it assigns.
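
The sketch below illustrates the two-stage idea described above: a multi-label text classifier predicts quality attributes for each review comment, and a separate mapping turns the predicted attributes into an overall grade. The attribute names, toy training comments, classifier choice, and grading rule are illustrative assumptions, not the paper's actual models, data, or expert-derived mappings.

```python
# Minimal sketch of attribute prediction + attribute-to-grade mapping.
# Assumes scikit-learn; all labels and rules below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy review comments annotated with hypothetical quality attributes.
comments = [
    "Null check missing here; this will crash when the list is empty.",
    "Consider extracting this block into a helper to improve readability.",
    "Looks good to me.",
    "Typo in the variable name, and please add a unit test for the edge case.",
]
attributes = [
    {"identifies_defect", "specific"},
    {"suggests_improvement", "specific"},
    set(),
    {"identifies_defect", "suggests_improvement"},
]

binarizer = MultiLabelBinarizer()
Y = binarizer.fit_transform(attributes)

# Stage 1: multi-label classifier that predicts quality attributes per comment.
classifier = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
classifier.fit(comments, Y)

def grade(attrs: set[str]) -> str:
    """Stage 2: map predicted attributes to an overall grade (illustrative rule)."""
    if "identifies_defect" in attrs and "specific" in attrs:
        return "excellent"
    if attrs:
        return "acceptable"
    return "poor"

# Apply both stages to an unseen comment.
new_comments = ["Please rename this method and add a test for the null case."]
predicted = binarizer.inverse_transform(classifier.predict(new_comments))
for comment, attrs in zip(new_comments, predicted):
    print(comment, "->", set(attrs), "->", grade(set(attrs)))
```

Keeping the grading rule as an explicit mapping, rather than folding it into the classifier, is what makes the grade explainable: the predicted attributes serve as the reasons for the grade.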