
As far as I understand, in proc glm the variances across the whole dataset are pooled (because of the homogeneity-of-variance assumption?), so every lsmeans estimate for the experimental treatments gets the same standard error, no matter how different the estimates themselves are (which looks unrealistic to me).
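To spell out what I mean, for a one-way layout I believe the standard error of each least-squares mean is

$$\mathrm{SE}(\widehat{\mu}_i) = \sqrt{\frac{\mathrm{MSE}}{n_i}},$$

where MSE is the residual mean square pooled over all treatments, so any two treatments with the same $n_i$ get exactly the same standard error, however different their within-group spreads are.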

If I already have the dataset needed to run proc glm, I could just as easily calculate the treatment means and SEs by hand and then use something like t.test() in R to accomplish the same thing, which would not be very time-consuming either. What is the advantage of using proc glm to test for differences, or should we in fact be wary of it because of the identical standard errors it computes?
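A minimal R sketch of the comparison I have in mind (made-up data, two hypothetical groups A and B): the pooled SE from lm(), which I believe mirrors what proc glm reports for the lsmeans, is identical for both groups, while the per-group SEs and Welch's t.test are not.

    # Hypothetical two-group example
    set.seed(1)
    y   <- c(rnorm(10, mean = 5, sd = 1),   # group A: small spread
             rnorm(10, mean = 7, sd = 4))   # group B: large spread
    grp <- factor(rep(c("A", "B"), each = 10))

    fit <- lm(y ~ grp)                  # pooled residual variance, as in proc glm
    sqrt(summary(fit)$sigma^2 / 10)     # identical SE for both group means

    tapply(y, grp, function(v) sd(v) / sqrt(length(v)))  # per-group SEs differ

    t.test(y ~ grp)                     # Welch's t-test: separate variances
    t.test(y ~ grp, var.equal = TRUE)   # pooled t-test: matches the lm() result

So my question is essentially whether the pooled (var.equal = TRUE) version is what I should trust, or whether the per-group version is more honest when the spreads differ this much.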

