
A happ(ier) medium between methodological pluralism and dogmatism #102

Open
guerzh opened this issue Jul 9, 2023 · 1 comment

Comments

guerzh (Contributor) commented Jul 9, 2023

This is officially about two of the General Standard's "invalid criticisms": "Cross-paradigmatic criticism (e.g. attacking an interpretivist study for not conforming to positivist norms)" and "Rejecting a study because the reviewer would have used a different methodology or design." However, I think it applies to many of the "invalid criticisms".

In my opinion, there is no bright dividing line between "this paradigm is wrong", "this paradigm is inappropriate for this particular study", and "conclusions of this type are impossible using this paradigm".

For example, in the Qualitative Surveys standard, "lack of [...] causal analysis; objectivity, internal validity, reliability, or generalizability" is considered an invalid criticism. One could be consistent about this and disallow the use of language implying a causal theory in any qualitative survey, but that would be extremely stringent and is not done in the document. Similarly, I just don't agree that "lack of generalizability" is an invalid criticism. I think a survey of 5 students in one school really would have severe generalizability problems, whereas the General Social Survey may not generalize beyond the US but has far fewer problems.

In my opinion, there should be space to say, sometimes, that some combination of the inherent issues with the paradigm and the fit between the paradigm and the problem are a weakness of a paper. I don't think it's enough to always reduce things to "fit between paradigm and problem", because sometimes it's really a combination.

Examples:
"Fast astrological prediction using an improved system of epicycles and GPU utilization"
"A Freudian analysis of teacher-student gender dynamics in the CS classroom"
"On the importance of avoiding making important project management decisions on the 13th of the month, particularly on Fridays"

Here is text I would propose:

"Rejecting a study because the reviewer would have used a different methodology or design without a strong argument that there is a significant flaw in the proposed methodology/design"

"Criticizing a study for limitations intrinsic to that kind of study or the methodology used (e.g. attacking a case study for low generalizability) without a strong argument that the issues identified make the study not informative to researchers.

For the record, I have run qualitative studies and faced problems with reviewers who disagreed with the methodology and (to my mind) rejected the paper inappropriately (we got it in eventually; I don't really have anything to complain about). I know exactly what that's like. But I also think there should be space to ask, "at the end of the day, is this paper telling us something about the world?", and answering that does involve thinking about both the paradigm employed and the interplay between the paradigm and the problem at hand.

drpaulralph (Collaborator) commented Jul 11, 2023

I agree with "Rejecting a study because the reviewer would have used a different methodology or design without a strong argument that there is a significant flaw in the proposed methodology/design". Please submit a pull request.

I'm not sure about this one: "Criticizing a study for limitations intrinsic to that kind of study or the methodology used (e.g. attacking a case study for low generalizability) without a strong argument that the issues identified make the study not informative to researchers." The trouble is that an interpretivist researcher could argue that any randomized controlled experiment is not informative because it misuses the methods of the natural sciences to study human behaviour. Indeed, even a positivist could claim that any randomized controlled experiment is not informative because it uses a convenience sample of participants and therefore doesn't generalize to a broader population. We have to be really careful with the "without a strong argument" caveat, because many reviewers think they have a strong argument when they're really just criticizing a study for something it's not intended to do.

If someone can suggest alternative wording I'd be happy to consider it.
