We looked through some of the tweets by @GuyProchilo, and here's what we found interesting.
Inside 100 Tweets
#Rstats: When p < .05 but a 95% CI is wide, we know little about an effect except its direction. Try planning a study so that the 95% CI on an effect is no greater than a desired width. If Cohen's d = .5, try N = 133 per group for a 95% CI no larger than .5, 99% of the time #phdchat https://t.co/deqJyvfG8J
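A minimal base-R sketch of that planning logic, using the common large-sample approximation for the standard error of d. The exact N in the tweet likely comes from an accuracy-in-parameter-estimation routine (e.g. in the MBESS package); this is only an illustration, and the d = 0.5, n = 133 inputs echo the tweet.

```r
# Approximate 95% CI width on Cohen's d for two groups of size n each,
# using the common large-sample SE: sqrt(2/n + d^2/(4n)).
d_ci_width <- function(d, n) {
  se <- sqrt(2 / n + d^2 / (4 * n))
  2 * qnorm(0.975) * se
}
d_ci_width(d = 0.5, n = 133)  # close to the 0.5 target width

# "99% of the time" is about assurance: the estimated d-hat varies from
# study to study, so the realized CI width varies too. Simulate it:
set.seed(1)
widths <- replicate(2000, {
  x <- rnorm(133)                 # control group
  y <- rnorm(133, mean = 0.5)     # treatment group, true d = 0.5
  d_hat <- (mean(y) - mean(x)) / sqrt((var(x) + var(y)) / 2)
  d_ci_width(d_hat, 133)
})
mean(widths <= 0.5)  # proportion of simulated studies meeting the target
```

With these settings the simulated proportion comes out near the tweet's 99% assurance level.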
Are you interested in the "specific value" of a correlation, or do you merely want to know whether r ≠ 0? If the latter, a standard power analysis is for you. If the former: consider planning a study that will yield a sufficiently small 95% CI on r. See r values & CI widths attached. #phdchat https://t.co/B2KZOf8F7Y
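Under the Fisher z-transform, the 95% CI width on r depends mostly on n (and mildly on r itself), so you can tabulate widths and pick an n that gives the precision you need. A base-R sketch (the r = 0.3 value is just an example, not from the tweet's attachment):

```r
# Approximate 95% CI width on a correlation via the Fisher z-transform:
# z = atanh(r), SE(z) = 1/sqrt(n - 3), then back-transform with tanh().
r_ci_width <- function(r, n, conf = 0.95) {
  z    <- atanh(r)
  se   <- 1 / sqrt(n - 3)
  crit <- qnorm(1 - (1 - conf) / 2)
  tanh(z + crit * se) - tanh(z - crit * se)
}

# Width shrinks slowly: roughly quadruple n to halve the width.
sapply(c(50, 100, 200, 400), function(n) r_ci_width(r = 0.3, n = n))
```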
An 'expected effect size' in a power analysis is a best guess based on the evidence available. If that guess is wrong, your power to detect a real effect can suffer badly. Consider varying your effect size under different scenarios & examine how this affects power. #phdchat https://t.co/xHdc819q28
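One way to run that sensitivity check, using `stats::power.t.test()` for a two-sample t-test. The n = 64 per group and the grid of effect sizes are illustrative choices, not from the tweet:

```r
# Power of a two-sample t-test at n = 64 per group, alpha = .05,
# across a range of plausible effect sizes (Cohen's d).
for (d in c(0.3, 0.4, 0.5, 0.6, 0.7)) {
  p <- power.t.test(n = 64, delta = d, sd = 1, sig.level = 0.05)$power
  cat(sprintf("d = %.1f -> power = %.2f\n", d, p))
}
```

At d = 0.5 this design has roughly 80% power, but shaving the true effect to d = 0.3 drops power well below conventional targets, which is exactly why the scenario check matters.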
#Rstats: Is your QQ-plot generated from a normally distributed process? Repeatedly draw random data from a normal distribution of the same length as your data set. This will give you a good idea of how a normal QQ-plot should look. Try it with `ggpubr::ggqqplot()` for easy plots #phdchat https://t.co/YLm7O32rdw
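A base-R version of the same idea (the tweet's `ggpubr::ggqqplot()` gives prettier output). `x` here is placeholder data standing in for your sample; the simulated panels show how much wiggle a truly normal QQ-plot can have at this sample size:

```r
x <- rexp(80)  # placeholder for your data set (deliberately non-normal)

op <- par(mfrow = c(3, 3))
qqnorm(x, main = "observed"); qqline(x)
# Eight reference QQ-plots of freshly drawn normal data, same length:
for (i in 1:8) {
  sim <- rnorm(length(x))
  qqnorm(sim, main = paste("simulated", i)); qqline(sim)
}
par(op)
```

If the observed panel looks qualitatively different from all the simulated ones, normality is doubtful.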
#rstats: Why is it important to provide a scatterplot alongside summary correlation data? This: the same summary data could be generated by wildly different distributions - some that render the test misleading. In the gif, all data sets are consistent with r = .36 at N = 50. #phdchat https://t.co/uF5uGns6ad
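R's built-in `anscombe` data set is a static version of the same demonstration: four x/y pairs with near-identical correlations (all about .82) but wildly different scatterplots.

```r
# Correlations of the four Anscombe pairs: all roughly 0.816.
round(sapply(1:4, function(i) {
  cor(anscombe[[paste0("x", i)]], anscombe[[paste0("y", i)]])
}), 3)

# Plot them to see how different "the same correlation" can look.
op <- par(mfrow = c(2, 2))
for (i in 1:4) {
  plot(anscombe[[paste0("x", i)]], anscombe[[paste0("y", i)]],
       xlab = paste0("x", i), ylab = paste0("y", i))
}
par(op)
```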