Evolutionary theory predicts that, all else being equal, the mutation rate should evolve toward zero, because deleterious mutations are far more prevalent than beneficial ones. All else is not equal: the mutation rate is never zero, and it demonstrably varies both between and within species. In principle, selection to reduce the mutation rate should be stronger in self-fertilizing organisms than in related outcrossing organisms, perhaps much stronger. However, the efficacy of selection on the mutation rate relative to the many other factors influencing the evolution of any species is poorly understood; that is, what is the empirical relevance of the theory? To address this question, we allowed mutations to accumulate in the relative absence of natural selection for ~100 generations in several sets of "mutation accumulation" (MA) lines in several gonochoristic species of Caenorhabditis (C. remanei, C. brenneri, C. sp5); we have previously conducted similar experiments in self-compatible rhabditids. The results are very clear: in every case, the rate of mutational decay is substantially greater in the gonochoristic taxa than in the self-compatible C. elegans (~4X greater) and C. briggsae (~2X greater). Residual heterozygosity in the ancestral controls of these MA lines complicates interpretation of the results, but there is reason to believe the results are not primarily due to inbreeding depression arising from ancestral variation. The results suggest that natural selection operates to optimize the mutation rate in Caenorhabditis and that the strength (or efficiency) of selection differs consistently by mating system, as predicted by theory.