Guy Prochilo πŸ³οΈβ€πŸŒˆ

PhD Candidate (exp 2020). I tweet statistics, open-science, & useful #Rstats tools. Based @unimelb. He/him

Melbourne, Victoria
Joined on January 25, 2013
Statistics

We looked inside some of the tweets by @GuyProchilo and here's what we found interesting.

Inside 100 Tweets

Time between tweets: a day
Average replies: 1
Average retweets: 8
Average likes: 33
Tweets with photos: 68 / 100
Tweets with videos: 21 / 100
Tweets with links: 0 / 100

We’ve had a hard time finding reviewers lately. We understand that many have neither the time nor the energy for it right now. But for those of you who do, please send us a DM :)

Estimation made easy: 

esci is now a module for jamovi (@jamovistats)

Estimate means, mean differences, interactions, correlations, proportions, and do meta-analysis.  

Beautiful figures focusing on effect sizes and uncertainty.

Alpha release, lots of improvements to come. https://t.co/AfsrfzI9FT

"In terms of data analysis, it appears as though many researchers have a tendency to do things in a complicated manner even when a simpler procedure can accomplish the same goal. Possibly they feel that a complex analysis makes their research ... more publishable". 

#phdchat https://t.co/eZwOk8IAYs

The second most common reason for not reporting effect sizes was that scholars found them unimportant, uninformative, or irrelevant to their research questions (21.67%). 

#phdchat https://t.co/gvXt2Cfz2D

Studies that yield large p values should be considered informative in that they leave open the possibility of real effects, as well as no effect. This is captured by the range of values "not rejected" by the confidence interval. 

#phdchat https://t.co/zuH7zgFLQl

#Rstats: When p < .05 but a 95% CI is wide, we know little about an effect except its direction. Try planning a study so that the 95% CI on an effect is no greater than a desired width. If Cohen's d = .5, try N = 133 per group for a 95% CI no larger than .5, 99% of the time #phdchat https://t.co/deqJyvfG8J
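
The expected width of that CI can be sketched with the large-sample standard error of Cohen's d (an approximation assuming equal group sizes and normality; the "99% of the time" assurance claim needs a fuller precision-for-planning calculation than this quick check):

```r
# Rough expected width of a 95% CI on Cohen's d for two groups of n each
# (sketch: large-sample SE of d, assuming equal n and normality).
ci_width_d <- function(d, n) {
  se <- sqrt(2 / n + d^2 / (4 * n))  # approximate standard error of d
  2 * qnorm(0.975) * se              # full width of the 95% CI
}
ci_width_d(0.5, 133)  # about 0.49, just under the target width of 0.5
```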

Are you interested in the "specific value" of a correlation, or merely want to know r =/= 0? If the latter, a standard power analysis is for you. If the former: consider planning a study that will yield a sufficiently small 95% CI on r. See r values & CI widths attached.

#phdchat https://t.co/B2KZOf8F7Y
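
One quick way to see how wide a CI on r is at a given n: the Fisher z transform gives an approximate 95% CI (a sketch assuming bivariate normality; the r and n values below are illustrative):

```r
# Approximate 95% CI on a correlation via the Fisher z transform.
ci_r <- function(r, n) {
  z  <- atanh(r)                          # Fisher z of the observed r
  se <- 1 / sqrt(n - 3)                   # standard error in z space
  tanh(z + c(-1, 1) * qnorm(0.975) * se)  # back-transform the limits
}
round(ci_r(0.30, 100), 2)  # c(0.11, 0.47): still wide even at n = 100
```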

An 'expected effect size' in a power analysis is a best guess based on the evidence available. If wrong, this can have huge implications for detecting an effect if it exists. Consider varying your effect size under different scenarios & examine how this affects power.

#phdchat https://t.co/xHdc819q28
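
Base R's `power.t.test()` makes this kind of scenario analysis quick (the effect sizes and per-group n below are illustrative):

```r
# Power of a two-sided, two-sample t test with n = 50 per group,
# under several plausible values of the true effect size d.
d_grid <- c(0.2, 0.35, 0.5, 0.8)
powers <- sapply(d_grid, function(d) {
  power.t.test(n = 50, delta = d, sd = 1, sig.level = 0.05)$power
})
round(setNames(powers, d_grid), 2)
```

With these inputs, power runs from under .20 at d = .2 to over .95 at d = .8, which is exactly the sensitivity the tweet warns about.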

Kim & James (2015) performed 20 correlations & required one p < .05 to reject their null. In this context α is 1-(1-.05)^20 = .64. Some may consider a 64% error rate a little unacceptable ...

Read more in https://t.co/DCV6Y9qTEN

#phdchat #IOPsych https://t.co/FQkLeY7DxC
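
The familywise error arithmetic above, as a one-liner (it assumes the 20 tests are independent):

```r
# Probability of at least one false positive across k independent tests at alpha.
alpha <- 0.05
k <- 20
fwer <- 1 - (1 - alpha)^k
round(fwer, 2)  # 0.64
```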

Different fields of psychology substantially differ from one another in terms of a *typical* effect size. This means that using Cohen’s definitions of small, medium, and large as the basis for power analyses (or interpretation of data) is misleading.

#phdchat https://t.co/PAvbfQa3l9

"... the fact that the data were collected using fMRI and can be rendered in dramatic fashion on an anatomically precise MRI may not help address the psychological or theoretical significance of the data."

Full-text link: https://t.co/u2dLJdmlzj

#phdchat https://t.co/m6ipxAdgTy

If a field of study is very narrow, there are fewer suitably qualified peer reviewers that can offer a *fair-minded* critique of a submitted manuscript. Everyone has some vested interest in the outcome. This is why *post-publication peer review* is so important.

#phdchat https://t.co/1Au9gtWl8u

#Rstats: Is your QQ-plot generated from a normally distributed process? Draw random data from a norm dist of the same length as your data set over & over. This will give you a good idea of how a normal QQ-plot should look. Try it with `ggpubr::ggqqplot()` for easy plots #phdchat https://t.co/YLm7O32rdw
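
A base-R sketch of the same idea (`ggpubr::ggqqplot()`, mentioned above, gives prettier panels; the skewed example data here are made up):

```r
# Put your data's normal QQ-plot next to QQ-plots of rnorm() draws of the same
# size, to calibrate your eye for ordinary sampling wobble.
set.seed(1)
x <- rexp(60)                        # hypothetical, clearly non-normal data
op <- par(mfrow = c(3, 3))
qqnorm(x, main = "observed"); qqline(x)
for (i in 1:8) {                     # eight truly-normal reference panels
  y <- rnorm(length(x))
  qqnorm(y, main = "rnorm reference"); qqline(y)
}
par(op)
```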

I'm going to use this phrase in all my papers ... for reasons that are difficult to explain at the moment. 

#phdchat #AcWri https://t.co/1ze82Gzoqh

#rstats: Why is it important to provide a scatterplot alongside summary correlation data? This: the same summary data could be generated by wildly different distributions - some that render the test misleading. In the gif all data are consistent with r = .36 in N = 50. #phdchat https://t.co/uF5uGns6ad
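
R ships a classic built-in analogue of this point: Anscombe's quartet, four datasets with near-identical correlations but wildly different scatterplots (the gif above uses r = .36; anscombe's shared r happens to be about .82):

```r
# Four x-y pairs with essentially the same correlation but very different shapes.
r4 <- with(datasets::anscombe,
           c(cor(x1, y1), cor(x2, y2), cor(x3, y3), cor(x4, y4)))
round(r4, 3)  # all four are 0.816
# plot(anscombe$x2, anscombe$y2), etc., to see how different they look
```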

my methods section: "The 95% CI on this parameter was computed using a percentile bootstrap approach with 2000 bootstrap samples." my overly honest methods section: "The arithmetic solution was too hard so I did a bootstrap yeehaw." #phdchat #rstats

#Rstats: A confidence interval is best understood as a region of null hypothesis values that your data would fail to reject. Try it yourself with a t test in R. Set `mu` to any value in your CI & get p > .05. Set `mu` to exactly the limits of your CI & get p = .05. #phdchat https://t.co/YqaCvKTu24
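
This CI/t-test duality is easy to verify in base R (the data below are simulated just to have something concrete to test against):

```r
# The 95% CI from t.test() contains exactly the mu values a two-sided
# t test would fail to reject at alpha = .05.
set.seed(42)
x  <- rnorm(30, mean = 0.4)
ci <- t.test(x)$conf.int
t.test(x, mu = ci[1])$p.value    # exactly 0.05 at the lower limit
t.test(x, mu = ci[2])$p.value    # exactly 0.05 at the upper limit
t.test(x, mu = mean(x))$p.value  # 1 at the point estimate itself
```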

If there were a single statistics textbook I'd recommend to all students (undergrad & postgrad), it is Introduction to the New Statistics. As scientists, more often than not we want to estimate the magnitude & precision of the effects we study. Not merely whether p < .05

#phdchat https://t.co/NzY3yIvb7Y

If you're unsure where to submit your scholarly manuscript - try JANE. The Journal/Author Name Estimator will compare your abstract to documents in PubMed & try to find the best matching journals. 

https://t.co/El4zVYuLmB

#phdchat https://t.co/NTTmsyBNFg
