Seen the “98% of studies were ignored!” one doing the rounds on social media. The editorial in the BMJ put it in much better terms:
“One emerging criticism of the Cass review is that it set the methodological bar too high for research to be included in its analysis and discarded too many studies on the basis of quality. In fact, the reality is different: studies in gender medicine fall woefully short in terms of methodological rigour; the methodological bar for gender medicine studies was set too low, generating research findings that are therefore hard to interpret.”
Supplementary Table 4 (from the first review) is a list of each of the 53 studies included in the review and how they were scored based on the Newcastle-Ottawa scale.
The “data” is in supplementary tables 3, 5, 6 and 7. Only studies that were scored as low quality were excluded from the synthesis.
“They dismissed 98% of data” is a lie.
No it’s not. None of the dismissals are statistically or scientifically supported, and the data they present is summarised incompletely, in a way that doesn’t reflect what those studies actually say.
Nothing was dismissed at all (and “statistics” has nothing to do with it, so it’s curious that you mention it).
Studies were scored for quality on the well-established Newcastle-Ottawa Scale. High- and moderate-quality studies were included in the synthesis. Low-quality studies were not, but their outcomes are still reported.
Outcomes from each study were included in tables 3, 5, 6 and 7.
“They dismissed 98% of the data” remains a lie.
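The inclusion rule described above is simple enough to sketch. This is purely illustrative: the study names and quality ratings below are hypothetical, not taken from the review itself.

```python
# Sketch of the synthesis-inclusion rule: studies are rated on the
# Newcastle-Ottawa Scale, and only high- and moderate-quality studies
# enter the synthesis. Names and ratings here are hypothetical.

def studies_for_synthesis(studies):
    """Keep only high- and moderate-quality studies."""
    return [s for s in studies if s["quality"] in ("high", "moderate")]

rated = [
    {"name": "Study A", "quality": "high"},
    {"name": "Study B", "quality": "moderate"},
    {"name": "Study C", "quality": "low"},
]

included = studies_for_synthesis(rated)
# Low-quality studies are excluded from the synthesis, but their
# outcomes can still be reported separately (as in the tables).
print([s["name"] for s in included])  # ['Study A', 'Study B']
```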
You can’t remove a study from a scientific paper without having statistical analysis to back it up. Each of those removed studies all had a statistical analysis of how confident they remained in their data even with the gaps. Because there aren’t completed 100% studies in science it just doesn’t happen so you use the data you have and test it for a confidence value you obtain using statistics. And the idea that some trans people don’t make it to the completion of a study due to personal reasons or even suicide isn’t that rare. Not using 98% of the data because of that would be stupid.
You can, of course. Statistics are not required to explain why a self-selecting Facebook poll is low quality while a multi-centre five-year study with follow-up and a comparator is of much higher quality.
Studies are also scored low on quality if, for example, they don’t control for important sociodemographic confounders. Studies that do control for these will have more reliable results.
You can read how the scoring works in supplementary material 1.
“They dismissed 98% of the data” remains a lie. Repeating it doesn’t change anything.
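The scoring mechanism can be sketched too. The Newcastle-Ottawa Scale awards “stars” across three domains for cohort studies: selection (max 4), comparability (max 2, e.g. controlling for confounders), and outcome (max 3). The quality bands below are illustrative thresholds, not the review’s exact cut-offs.

```python
# Sketch of Newcastle-Ottawa-style scoring for a cohort study.
# Domain maxima follow the NOS; the quality banding is illustrative.

MAX_STARS = {"selection": 4, "comparability": 2, "outcome": 3}

def nos_total(stars):
    """Sum the stars across domains, validating against NOS maxima."""
    for domain, n in stars.items():
        if n > MAX_STARS[domain]:
            raise ValueError(f"{domain}: at most {MAX_STARS[domain]} stars")
    return sum(stars.values())

def quality_band(total):
    # Illustrative banding only, not the review's actual thresholds.
    if total >= 7:
        return "high"
    if total >= 4:
        return "moderate"
    return "low"

# A study that doesn't control for confounders earns 0 comparability
# stars, dragging down its overall rating.
score = nos_total({"selection": 3, "comparability": 0, "outcome": 3})
print(score, quality_band(score))  # 6 moderate
```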
“You can, of course. Statistics are not required to explain why a self-selecting Facebook poll is low quality while a multi-centre five-year study with follow-up and a comparator is of much higher quality.”
That’s wrong when you are trying to be scientifically correct. A science paper without that math isn’t science, my dude. And comparing trans healthcare data to Facebook polls is ridiculous.
It’s remarkably common in systematic reviews; a feature, even. You give the impression that this is a new or foreign concept to you and that you’re encountering these ideas for the first time.
Search PubMed, the BMJ, or the Cochrane Library for other systematic reviews using the Newcastle-Ottawa Scale. You’ll trip over them.
One of the studies reviewed recruited patients over Facebook and polled them.
“They dismissed 98% of the data” remains a lie.
Again, I’ve written these reports. It is absolutely not common practice to exclude data without scientific reason and analysis. It is explicitly taught in college not to do it that way. And it is not scientific to do that without a statistical threshold and a confidence analysis of your reasoning.
I am forced to strongly doubt that, given your misunderstanding of the basic concepts of assessing methodological quality…
Certainly, you’ve never authored a systematic review for a reputable medical journal.
But don’t take my word for it…
https://handbook-5-1.cochrane.org/chapter_13/13_5_2_3_tools_for_assessing_methodological_quality_or_risk_of.htm
You mean such as using a method like the Newcastle-Ottawa Scale to assess data quality?
If your college course covered systematic reviews and didn’t include a review of study assessment methods, ask for a refund.
Statistics are not required to assess that a study without a comparator is weaker than one with one.
“They dismissed 98% of data” remains a lie.