Thursday, May 10, 2018

The benefits of academic workshops?



All IB graduate students should have received an email announcing the new batch of programs offered by the university to help graduate students with one of the most difficult academic skills: writing.

"Dear Graduate Faculty, Coordinators, and Graduate students, Our analytics indicate that those students who receive writing instruction and consultation early in their graduate programs complete their degree requirements sooner. Our programs are designed for graduate student writers at all levels and stages. Individual Writing Consultations: (starts January 15)."
The claims made at the start of the email ("Our analytics indicate [...] complete their degree requirements sooner.") run contrary to a recent publication that received some exposure: "Null effects of boot camps and short-format training for PhD students in life sciences" by Feldon et al. (2017). The title already gives the results away, but here is the main conclusion:

"Here we show that participation in such programs is not associated with detectable benefits related to skill development, socialization into the academic community, or scholarly productivity for students in our sample."

This study was no slouch: 295 participating students, followed for several years, with performance evaluated using standard rubrics, and so on. As a result, the pushback has had to wrestle with the quality of the study. For instance, the Software Carpentry organization, whose reason for existence is offering these workshops, responded to this almost existential threat in quite some detail. While I perceived an undertone of defensiveness in their response, I think the easiest explanation for these null effects is the one the Software Carpentry people themselves identify:
"Benefits of a short course are easily lost in a sea of positive outcomes resulting from graduate training, but that has little bearing on the impact such courses may have when they stand alone."
But that is, of course, also a sort of cop-out. Still, given that "only" 16% of the participants took part in such a workshop, failing to detect the influence of a single workshop among the wide range of personal outcomes is not that surprising. This is one of the cases where a post-hoc power analysis might have been informative.
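To give an idea of what such a check could look like, here is a minimal sketch, assuming a simple two-group comparison with 16% of the 295 students in the workshop group (the split comes from the paper; the alpha = 0.05 and 80% power conventions are my assumptions, not the authors' analysis):

```python
# Minimal power sketch (my assumptions, not the authors' analysis):
# what effect size could a 47-vs-248 comparison plausibly detect?
from statsmodels.stats.power import TTestIndPower

n_total = 295
n_workshop = round(0.16 * n_total)   # ~47 workshop participants
n_control = n_total - n_workshop     # ~248 non-participants

# Solve for the minimum detectable effect size (Cohen's d)
# at alpha = 0.05 (two-sided) and 80% power.
d = TTestIndPower().solve_power(effect_size=None,
                                nobs1=n_workshop,
                                ratio=n_control / n_workshop,
                                alpha=0.05,
                                power=0.8)
print(f"Minimum detectable effect: d = {d:.2f}")  # roughly 0.45
```

Under these assumptions, anything smaller than a medium-sized effect would sail under the radar, which fits nicely with the "sea of positive outcomes" explanation above.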

The second, standard, scientific cop-out is that the authors did not measure the correct outcome. The endpoint identified in the original notification email (time to completion) was very different from the endpoints measured in the long-term study (productivity, academic skills development, socialization into the academic community). However, the appendix of the PNAS article does contain a variable "Time to Degree (T2)", and it is not significant. It would be interesting to take a closer look at the Guelph data in that respect.

So while the effectiveness of these workshops at a global level is not exactly clear, I think that it is clear that, from a graduate student perspective, you have to start somewhere. You will not immediately become the best writer in your cohort because you took a 2-day writing tune-up workshop, but you will become a better writer compared to the old you.

Monday, January 15, 2018

Significance versus explanation

Next time I have to teach the difference between explanation and significance, this will be my go-to example: "Scientific aptitude better explains poor responses to teaching of evolution than psychological conflicts" by Mead et al. in Nature Ecology & Evolution.

The article has several figures that look like this: a wide scatter of points with a thin best-fit line through it. The P-value looks amazing (9 × 10^-15), but the spread is equally large. I would not even want to guess what the R-squared of that relationship is. In the figure legend, the authors also specify that:
"The regression line is the best-fit line of y predicted by x. However, as assumptions of linear regression are not fully met it is provided for illustrative purposes alone to indicate the trend."
Good point, because without that line it would be impossible to tell what the relationship would be, which is always a dire sign. But those two sentences should be part of a master class in scientific writing: Necessary, to the point, anticipating a reader's needs and confusion, packaged in a short and succinct statement.
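To make the significance-versus-explanation point concrete, here is a minimal simulation (fabricated data, not the authors'): with a large enough sample, a relationship that explains only a few percent of the variation still produces a spectacular P-value.

```python
# Simulated illustration: significance is not explanation.
# A weak relationship plus a large sample yields a tiny P-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 3000                                   # a large-ish sample
x = rng.normal(size=n)
y = 0.15 * x + rng.normal(size=n)          # true slope 0.15, lots of noise

result = stats.linregress(x, y)
print(f"P-value:   {result.pvalue:.1e}")      # astronomically small
print(f"R-squared: {result.rvalue**2:.3f}")   # only ~2% of y explained
```

A P-value like that only says the slope is unlikely to be exactly zero; the R-squared tells you how much of the response the predictor actually explains, and here that is a few percent at best.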

This study works perfectly well in a hypothetico-deductive framework, with well-laid-out hypotheses, logical predictions, and strong and convincing statistical tests. It also leaves the reader (or some of the readers, for instance me) with this "but is it really important?" question. I could not find any mention of this variability issue, only lots of highly significant P-values. What is actually the best explanation of these students' evolutionary understanding?

Monday, January 8, 2018

Let's talk about mental health

Today. Tomorrow. Every day. Because it requires attention year round. 

And the start of the semester is an appropriate time to bring up this study by Levecque and co-authors, covered in Science, that focuses on graduate students. It quantifies the problem, but also identifies consistent predictors of mental health issues:

  • work-life conflicts
  • high job demands
  • low job control 
The story I come up with based on these results is that low job control leads to (a perception of) high job demands, which leads to work-life conflicts, which lead to mental health symptoms. And what could be related to low job control and/or high job demands? Maybe supervisor style, which can either diminish the prevalence of mental health issues when it is perceived as inspirational, or increase it when it is perceived as laissez-faire. This again stresses the crucial importance of the supervisor-graduate student relationship, one of the recurrent themes in these news items.

Another recurrent theme in these news items is the importance of reasonable job expectations, potentially non-academic ones. It is thus maybe not a coincidence that "Positive perception of career outside academia" has a strong positive effect on graduate mental health. If you know that all the hard work will eventually lead to a satisfying job that requires this education, and not necessarily to a low-probability faculty position after an endless series of postdocs, maybe that makes the hard work worth it.

Friday, September 15, 2017

Career preparation for graduate students


This is an important issue, and I have already spent four news items on it. A recent publication in PLOS ONE investigated some of the strategies and resources PhD students (and postdocs) use to find non-academic jobs. This matters because more than 80% of our graduate students end up in these non-academic jobs (although these numbers also include MSc students). I should not have been surprised, but there are sociological theories and active research programs around this issue, resulting in publications where I am only comfortable reading the abstract and discussion, and hoping that the reviewers did their job. For me, the crux of the results section was this passage:

Interestingly, our results also show that a trainee’s perception of high program support for career goals had direct and significant effects on their career development search efficacy, while perceived advisor support was not significant at all. It may be that perceived program support for career goals enables trainees to develop a broader support base within their graduate programs (as opposed to being driven by dissatisfaction with one’s advisor).
This presents a glass half full/half empty perspective on career preparation: a grad student's readiness does not depend on an advisor's experience, which will vary wildly between faculty members, but more on the support provided by the department/college/university. And since universities are explicitly tasked with preparing their graduates for the job market, there is some incentive to address any shortcomings institutionally. The question, of course, remains whether Integrative Biology and the university as a whole provide sufficient support. So is your glass half full, or half empty? Let me know.

Thursday, August 24, 2017

Reviewers should (also) be paid by publishers

Stephen Heard touched on a controversial issue in his latest blog post: Can we stop saying reviewers are unpaid? This (contrarian?) point of view resulted in some pushback in the comments, and on other blogs (e.g., Let's keep saying it, and say it louder: REVIEWERS ARE UNPAID by Mick Watson). I agree with most of Stephen Heard's points, and also with his updated point that "publishers (mostly) don't pay for reviews". Where I disagree with him (there has to be at least one disagreement, because why otherwise waste digital bandwidth on writing this?) is with the claim that "reviewers are (mostly) paid" is actually the important admission.

I struggled to find a useful analogy to explain my unease with this admission, until it suddenly dawned on me. Several years ago, I read a blog post by Alex Bond on Why volunteer field techs are a bad idea. I hope I summarize their arguments (further developed with Auriel Fournier in a published opinion piece that itself received a lot of pushback) correctly with these points:

  • it is an essential part of the research process, so it should be rewarded accordingly;
  • not paying reduces the field tech's value, and thus "the professionalism of science as a whole";
  • not paying prevents underprivileged scientists from participating;
  • financial restrictions, tradition, and CV building for the field techs are not good enough justifications in light of these criticisms.
The analogy with peer review is immediately obvious in the first point: it is an essential part of the scientific process, so it should be rewarded. Stephen Heard's argument is that reviewers are being rewarded (paid). What the analogy with volunteer field techs exposes, though, is that it is important to look at who benefits from the work. In the field tech case, it is first the scientist who got funding for the study who benefits from the volunteer work, and only in a second step science as a whole. In the peer review case, what I think Stephen Heard overlooks is that publishers benefit first, and science as a whole only in a second step.

And this benefit to publishers is important. A recent Science paper by Gretchen Vogel and Kai Kupferschmidt reports on a quick back-of-the-envelope calculation:
"Collectively, the world's academic libraries pay some €7.6 billion in subscription fees for access to between 1.5 million and 2 million new papers annually, or between €3800 and €5000 per paper, according to an estimate by the Max Planck Society."
While the net benefit to the publishers will be lower, of course, nobody can argue that publishers do not benefit directly from the reviewers. And publishers do not pay reviewers, directly or indirectly. Yes, society as a whole pays scientists partly for their many contributions, but eliminating the publishers from that argument is wrong. If we make the analogy with the volunteer field tech case again, that would be similar to imagining a PI flush with money saying that she will not pay her field tech because the "pay" provided by the experience will improve his CV and his chances for a scholarship or job. Or, more succinctly: volunteer field techs are (mostly) paid.

If that scenario raises some problems, then "reviewers are (mostly) paid" is not the "right" statement to ponder. If you agree that, in the above scenario, the field tech should receive monetary compensation (or pay) from the PI for his contribution, and that that is the important debate, then I think that "reviewers should (also) be paid by publishers" is the more important debate to have.