Wednesday, October 12, 2011

Week Five

Week five posts here please!

8 comments:

  1. In a posting from Week 4, Leslie said:

    "Having started out loving Luker and her advocacy of a more free-wheeling research style,
    I am now clinging to Knight and hoping his measured mapping of the terrain can inject
    just the right amount of structure into my research proposal."

    I second that. While I appreciate Luker's calling our attention to some of the limitations of canonical research methods, at the end of the day we actually have to do research (or at least design it), and, especially when we're starting out, canonical methods seem integral to making the task doable. I'm proposing to evaluate the effectiveness of RFID self-checkout at a small branch of the TPL (the one Desiree and I work at). While it could be fun to add some salsa (if that's what you're into), the good, old-fashioned dependent/independent variable route seems the only feasible way to go, given the limited scope of the subject. Sure, my very choice of statistical analysis, observation, and questionnaire will shape my results, but I honestly don't see how I can avoid that fate, or why I should even want to try. I'm a person asking a question and using the tools at my disposal to get an answer. I make no pretense to objectivity (though I would probably have to write up the research as if I did).
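    (If anyone's curious what that route might actually look like, here is a minimal sketch in Python. The checkout times are made-up numbers and the two-group design is just an illustration, not my actual protocol: checkout method is the independent variable, checkout time the dependent one.)

    from scipy import stats

    # Checkout durations in seconds, gathered by timed observation
    # (hypothetical data, purely for illustration)
    staff_desk = [95, 110, 88, 102, 120, 97, 105, 115]   # traditional desk checkout
    rfid_self  = [70, 85, 92, 65, 78, 88, 74, 81]        # RFID self-checkout

    # Welch's t-test: do mean checkout times differ between the two methods?
    t_stat, p_value = stats.ttest_ind(staff_desk, rfid_self, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")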

  2. And/or attempt to incorporate reflexivity in both your analysis and in your discussion (e.g. be sure to talk about potential bias/assumptions and how these may represent a limitation of the study, etc.).

  3. I have definitely been thinking about the potential for bias. It's practically all I've been thinking about. I have to remember not to let it paralyze me. It reminds me of the quote in Kline's article, where Foucault notes that "our scientific quests for truth remain social discourses deeply embedded in the broader struggles over social power" (key word: power). I found this article useful because it demonstrated that even with what we may perceive to be a logical claim, like the relation between violence in media and violence in real life, things aren't what they appear to be. A does not cause B. A in fact exists in a complex web of interrelation. We can never isolate it. We CAN, as researchers, identify trends and patterns that can nevertheless help us understand complex phenomena.
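    (A toy way to see the "A does not cause B" problem: in the simulation below, entirely made-up, a hidden third factor C drives both A and B, so they correlate strongly even though neither causes the other.)

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    c = rng.normal(size=n)                       # hidden confounder
    a = 0.8 * c + rng.normal(scale=0.5, size=n)  # A depends on C, not on B
    b = 0.8 * c + rng.normal(scale=0.5, size=n)  # B depends on C, not on A

    # Strong A-B correlation (~0.7) despite no causal link between them
    print(np.corrcoef(a, b)[0, 1])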

    Drawing from Luker: OPERATIONALIZE YOUR VARIABLES/CONCEPTS!!! Your view on a topic is just that: YOUR view. It is not an overall truth. Gather other perspectives, take a poll, do something! (lol)

    Drawing from Knight: I like the coverage of surveys. I have taken surveys before and I know all too well the tendency to just check things off in the middle: somewhat agree, somewhat disagree, neither agree nor disagree. Switch those questions around, flip the wording, keep respondents on their toes and paying attention to the questions!
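    (And flipping the wording means flipping the scoring afterward. A quick sketch, with invented items and a 5-point scale:)

    SCALE_MAX = 5  # 1 = strongly disagree ... 5 = strongly agree

    responses = {
        "I enjoy using the library catalogue": 4,     # positively worded
        "The catalogue is frustrating to use": 2,     # negatively worded
    }
    reverse_scored = {"The catalogue is frustrating to use"}

    def score(item, value):
        # Flip reverse-worded items so a higher score always means "more positive"
        return (SCALE_MAX + 1 - value) if item in reverse_scored else value

    total = sum(score(item, v) for item, v in responses.items())
    print(total)  # 4 + (6 - 2) = 8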

    That's all for today (:

  4. Greg: goodness, we are both doing RFID! lol, could that have to do with what is happening at our branch? (It's just being introduced there.) My study will be different from yours, as I am looking at one group of users: seniors. cool beans (:

  5. Drawing on Kline's reading: funny how neither canonical nor salsa-style approaches can make the leap from "claim" to "cause". Yes, we know that sampling cannot guarantee a perfectly representative picture of various populations, but we have to draw boundaries around our subject group somehow. Is it criticism of sampling error that prevents researchers from winning the argument that exposure to violence in childhood influences future behaviour? Or is it the way that the problem gets operationalized as 'censorship' that scares regulators away from taking preventative action? I know that feeding my toddler sugary treats will be followed by an hour-long blast of frenzied activity (then a crash that may or may not include vomiting) without needing empirical 'proof' of physical effects. I think that it's ok to say that "in many cases, the interplay of x and y factors often results in z" to help communicate to affected individuals what their risks may be.
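    (On the sampling-error point, one concrete thing we can say is how wide our uncertainty actually is. A sketch with invented numbers: the 95% margin of error for an estimated proportion shrinks with sample size but never reaches zero.)

    import math

    def margin_of_error(p, n, z=1.96):
        # 95% margin of error for a proportion p estimated from a sample of n
        return z * math.sqrt(p * (1 - p) / n)

    # Say 30% of sampled children show some coded "aggressive play" behaviour
    for n in (50, 200, 1000):
        print(n, round(margin_of_error(0.30, n), 3))
    # n=50 -> 0.127, n=200 -> 0.064, n=1000 -> 0.028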

    I completely reorganized my research question after considering how important operationalization really is. Naming and definitions often betray our values and assumptions about social phenomena and provide rich discourses to analyse and study.

  6. Hello everyone,

    If you were not in class today, you missed a critical improvement to our blog. We are merging with another group: the Quasi-experimental blog group! Hopefully by next week we will have added our new members!

    I thought the discussion in class yesterday was quite interesting, and I wish we could have kept going! I especially thought the question of 'what is violence?' (with regard to the Kline article) was particularly relevant. As social science researchers, we must not only understand what we are trying to study (does violent video game play make children violent?) but also recognize that a multitude of extraneous variables interplay with our dependent variable. How are we to deal with these? Certainly we cannot elucidate each and every one, can we? And if we do recognize all of the variables at play in our study, must we always state them?

    Consider a study addressing boys in the classroom. We want to find out why boys aren't excelling as quickly as girls in school (there is a lot of evidence to suggest this: more girls attend university), but is this actually true? There is a strong correlation between girls and language skills, and as our school system is predominantly language-driven, we could suggest that this has stymied boys' participation and subsequent success. We know that there are hundreds of variables at play here, but how are we ever to make a claim of any sort if we keep stifling our correlations with these extra variables? What would the results of this study look like then? "Our education system does not target boys... but! This may be because these particular boys, in this particular place, at this particular point in time, had these particular extraneous variables, which interplayed with the dependent variable..." This, to me, is an exceptionally difficult issue to get around! How are we ever to make a difference if we cannot make any sort of claims?
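    (One standard way out, sketched below with simulated data rather than any real study: measure the extraneous variables you suspect matter and control for them in a regression, then see how the original association changes. Here a "gender gap" in grades shrinks toward zero once language skill is controlled, because by construction grades depend only on language skill.)

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500

    girl = rng.integers(0, 2, size=n).astype(float)  # 1 = girl, 0 = boy
    language = 0.9 * girl + rng.normal(size=n)       # girls score higher on language (by construction)
    grades = 1.5 * language + rng.normal(size=n)     # grades driven by language skill, not gender directly

    # Naive model, grades ~ girl: picks up the language effect indirectly
    X1 = np.column_stack([np.ones(n), girl])
    print(np.linalg.lstsq(X1, grades, rcond=None)[0][1])  # sizeable "gender gap"

    # Controlled model, grades ~ girl + language: the gap shrinks toward zero
    X2 = np.column_stack([np.ones(n), girl, language])
    print(np.linalg.lstsq(X2, grades, rcond=None)[0][1])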

  7. I sympathize (empathize?) profoundly with you, Tamara, over the dilemma of defining terms (what is "violence"?), limiting variables, and sorting out which correlations to focus on. What this tells me is that any conceivable area of research is so rich, varied, textured and nuanced that the researcher has to pull at it like a ball of tangled wool, following any one single strand and hoping to arrive at meaningful results or conclusions. I know that as I did my SSHRC proposal, I felt more and more paralyzed (as Desiree said she feared) by the horrible suspicion that my area of inquiry was far too broad and had way too many assumptions buried in it, so I threw my topic out. Now I appreciate the value of simplicity of design... of making a few direct correlations... perhaps just discovering some very good questions that need answering, and not the answers themselves. Luker makes a great (if long-winded) point about rape in her section on operationalization: if we try to talk about or define terms for one thing, we realize that we have to map all the associated terrain (gender stereotypes, etc.) and unpack any assumptions housed there. How can one study one "thing" when we can't be sure where that phenomenon begins or ends, or even if it truly exists!? At any rate, I'm scaling back my aspirations and feel more comfortable with a modest goal.

  8. I like the idea of using scales and instruments, in part because I'm quantitatively oriented, but also because of the high level of confidence that these measures often afford. However, creating your own instrument leaves you open to worries over reliability and validity, and the question of whether it has been sufficiently normed to be useful. And then there is the time and cost of the entire process to consider. Luckily, testing instruments are widely published and easily available (for a price). The profitability of publishing these things means that more often than not multiple tests will exist to measure the same thing, which offers up some choice but also requires that you weigh the relative strengths of each test, particularly as, in the rush to get something to market, the publisher may overlook flaws in the design of the instrument. Still, even if the theoretical underpinnings or some other aspect of a test is contested, chances are its weaknesses will be fairly well documented and you'll understand the risks from the outset if you choose to use it. There's some comfort in that, I suppose.
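    (For anyone who does build their own instrument anyway, a first reliability check is Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores). The response matrix below is invented; rows are respondents, columns are scale items.)

    import numpy as np

    def cronbach_alpha(items):
        # items: 2-D array of shape (respondents, scale_items)
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    responses = [[4, 5, 4, 4],
                 [2, 2, 3, 2],
                 [5, 4, 4, 5],
                 [3, 3, 2, 3],
                 [4, 4, 5, 4]]
    print(round(cronbach_alpha(responses), 2))  # 0.92 here; above ~0.8 is usually read as reliable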
