Post by rgs on Apr 17, 2014 23:14:57 GMT
I am looking at two phenotypes that are highly correlated (r ~ 0.5). When I run GCTA on the first phenotype, I get Vg/Vp ~ 0.99999 (SE 0.3), but when I run GCTA on the second, I get Vg/Vp ~ 0 (SE 0.3). I am trying to understand what causes the code to give a value of 0.99999 or 0.00000 with such large error bars. I understand that a small sample size can make the error large, but what makes the algorithm take these extreme values?
Thanks
Post by Jian Yang on Apr 18, 2014 11:01:42 GMT
If the SE is very large (0.3 in your case), then given a true parameter of, say, 0.4, the 95% CI is roughly 0.4 ± 1.96 × 0.3, i.e. about -0.2 to 1. It's simply because the sample size is too small.
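A minimal sketch of the point being made here, not GCTA's actual REML machinery: if we assume the unconstrained estimate scatters around the true value with SE 0.3, and assume the estimator is then held inside the [0, 1] parameter space (standing in for GCTA's constraint that variance components stay non-negative), a noticeable fraction of replicates ends up pinned at exactly 0 or near 1, which is what estimates like 0.00000 and 0.99999 look like. All numbers below are illustrative assumptions, not output from real data.

```python
import numpy as np

rng = np.random.default_rng(0)

true_h2 = 0.4   # hypothetical true Vg/Vp (assumption for illustration)
se = 0.3        # standard error, matching the value in the thread

# 95% CI of the unconstrained estimate: 0.4 +/- 1.96 * 0.3
lo, hi = true_h2 - 1.96 * se, true_h2 + 1.96 * se
print(f"95% CI: {lo:.2f} to {hi:.2f}")   # roughly -0.19 to 0.99

# Draw unconstrained estimates, then clamp them to [0, 1],
# mimicking a constrained optimizer stopping at the boundary.
est = rng.normal(true_h2, se, size=100_000)
constrained = np.clip(est, 0.0, 1.0)
print(f"fraction at 0: {np.mean(constrained == 0.0):.2f}")  # ~0.09
print(f"fraction at 1: {np.mean(constrained == 1.0):.2f}")  # ~0.02
```

Under these assumed numbers, roughly 9% of replicates land at the lower boundary and 2% at the upper one, so boundary estimates with a large SE are exactly what a small sample predicts.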