To follow up on my previous post, it seems that some progress has been made. The more I delve into the complexity of debates in qualitative methodology, where the researcher is continually referred to as a bricoleur, a craftsperson, a story-weaver, etc. (unfortunately never a multi-billionaire), the more it seems that one should simply approach debates by becoming educated on both sides and then taking a decision. That’s what I did with the constructivist grounded theory vs classic grounded theory battle. I decided that my theoretical framework (based on Goffman and interactionism) and my methods necessitated constructivism. I wouldn’t feel right analysing interviews without acknowledging the role of participants in shaping reality, as well as my own role in interpreting their accounts. When looking at others’ struggles with this, I noted that some proponents (Bryant, 2009) of constructivist grounded theory have added a ‘pragmatic’ emphasis that focuses on the data at hand – so that’s what I added as a caveat: I will look at my data and see what comes out of it, which is the whole purpose of grounded theory in the first place.
All that to move us along to what I’m thinking about today: It all needs to fit together. Regardless of whether you’re a postpositivist or a poststructuralist, it seems that the most important aspect of research design is making sure your decisions make sense and are warranted throughout the whole study. There should be a logical flow in which ontology lends itself to epistemology, which fits with the theoretical framework, is congruent with the methods, frames the analysis, and out pops your methodologically sound conclusion. However, before you make a social science-y adaptation of the “hip bone’s connected to the ____” song, note that even the proponents of this thinking seem to have some trouble working it all out and coming up with the holy grail of ‘truth’ in qualitative research (aka “trustworthiness”, “validity and reliability”, “qualitative goodness”).
In some of the literature about claiming truth or quality in research, it seems the main problem comes from trying to lump together good research practices with ways to make sound analyses. I agree with Tracy (2010) that people can usually tell robust research practices when they see them:
- Don’t burn your notes; keep an audit trail so that people can see what you collected and how you developed your ideas. Guess I’m going to need to write more legibly.
- Look at many data sources and types of data – whether you’re a realist and this counts for you as triangulation or you’re a poststructuralist and this is more like crystallization, the idea is similar in that you need to ensure you’re getting all the data relevant to your research question and looking at the larger picture. My current plan includes interviews, (quantitative) descriptive data, network metrics and content analysis of Facebook posts – ambitious but likely necessary!
- Ask your participants if you’re on the right track; ask other researchers if they see the same things.
And the list goes on. Other professions have best practices, so why wouldn’t research? The issues arise when these practices are mixed with criteria that carry different value judgements depending on the paradigm from which they are evaluated. While Tracy argues that she has presented a ‘universal’ set of criteria for quality qualitative research, she includes items that are associated with the overall assessment and analysis of the research, which I have always found to be rooted in some paradigm or another.
For example, her first criterion is that you must have a ‘worthy topic’ – this is equivalent to the recurring question: “Is your research interesting?” My response is always, “To whom?” My research might be interesting to avid users of Facebook, it might be relevant to people on other social networking sites, it might grab the attention of site owners, it might one day be cited by other Internet researchers, and it’s also by default interesting to my mom. However, I’m sure it’s completely boring to someone who rarely uses the Internet for anything but e-mail. It’s also completely irrelevant to someone who can’t afford food and clean water, let alone a computer.
As we skip down Tracy’s list, she talks about quality research reverberating with an audience, being transferable to other situations, being significant, and accomplishing its goals. I understand that all these are ‘good’ in that they are signs of quality in specific paradigms or fields. However, what is significant to some researchers is hardly news to others. In general, the significance of a discovery is only relative:
(Cartoon found here)
Perhaps this is why Hammersley’s quality criteria of truth and relevance are not given equal weight (according to Seale). Hammersley drives home all the research best practices under truth but leaves relevance as something more vague that can even develop over time as research progresses.
How does this fit in with my research and the long journey to justify my methods in a way that sets a solid foundation for future analyses? Currently my flow chart looks like this: (you may sing the song in your head as you read it)
While following best practices, such as triangulation and developing a sound theoretical framework, I am running into paradigmatic issues. How can I use a constructivist ontology and theoretical framework, which allude to multiple realities (or reality as a social construction), when critical realism posits that there is still just one reality? How can I have deductive aspects that look to test theory if I’m (mostly) applying constructivist grounded theory? How can I reconcile the positivist nature of quantitative data with qualitative data?
I’ve started to address these questions in my thesis, but I feel the way Guba and Lincoln must have felt when they realised that most criteria for judging trustworthiness are simply relative. As Seale describes it, this seems like a hurdle that cannot be tackled except by finding a middle ground that embraces some research values (best practices) while allowing for the flexibility to debate approaches so they fit with the research questions. This brings me back to my opening resolution:
(Thanks Meme Generator)
But in this process of figuring out the best way to do justice to my research and the potential ‘truth’ it will uncover, I need to make a solid decision with a logical explanation and then move forward with the research itself.
Note: In full disclosure, this post is a reflection for my Advanced Qualitative Methods class and is centred upon thoughts and ideas presented in these two readings:
Seale, C. (2007). Quality in qualitative research. In C. Seale, G. Gobo, J. F. Gubrium, & D. Silverman (Eds.). Qualitative Research Practice (pp. 409-419). London: Sage.
Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837-851.