Carol A. Hand
Once upon a time, I would sit all day running statistical analyses. These were among my least favorite jobs. Crosstabs, T-Tests, ANOVAs, and occasionally, Chi-Square Tests. One didn’t really need to understand why these statistical procedures were supposedly trustworthy explanations and predictors of people’s behavior. One was merely programmed to believe that these were valid and reliable sources of information about groups of people based on self-reported, yes/no, forced choice answers to questionnaires.
One job in particular was noteworthy. My job was to construct, run, and interpret statistical analyses to determine the interaction of stressors and health for older men and women who were caring for adult children with disabilities. It was someone else’s job to enter the data correctly. I was told by the project director not to try to speak to the data entry expert face-to-face because he spoke Korean and struggled with English. “Send him emails if you need something.”
I tried this approach, but he would always ask to meet with me so I could explain my confusing emails. We became friends in the process, and I learned more about his life, studies, and culture. He valued the chance to practice speaking English. Emails were a poor and isolating substitute.
Of course, the data I needed were always quickly and impeccably entered. Still, it was such a boring job. After an hour or two of staring at the computer screen and data printouts, I snuck into the room where the seldom-viewed narrative data were stored. These were hand-recorded replies of “research subjects” to the question, “Is there anything else you would like to add?” I discovered a fascinating (and troubling) project oversight. More than 75 percent of the study participants (calculated in my head) had indicated that occasional respite from continual caregiving responsibilities would significantly improve the quality of their lives.
The project director wasn’t pleased when I asked her if we had followed up on this information. Respite care appeared to be a policy we should advocate for at the state level. We had research data to support the need for, and cost-effective benefits of, such a policy.
Research should, after all, be used to improve people’s lives, right? Not just to count and catalog their deficits and misery?
Yet this project wasn’t the most distressing. That one would come later. It was a project that was supposedly measuring the effectiveness of a particular intervention to improve the academic performance of Native American elementary school students. The researchers who directed the project were more concerned with fancy research methods than with the intrusiveness and cultural dissonance of those methods in tribal contexts.
When those methods failed to produce evidence of significant improvements in grades and attendance, project directors blamed Native American families, schools, and cultures. Unethical? Yes, but their draft conclusions were subjected to an elegantly argued rebuttal that pointed out the absurdity of the methods: observers and beeping computers in a room with third graders, questionnaires families refused to complete for a variety of reasons, and student performance forms filled out by too many substitute teachers who didn’t even know the names of students, let alone how they performed in class.
Most alarming, however, was the failure to actually think about the data that were collected in human terms. Reviewing and synthesizing the data we did have with the narrative comments, I noticed one young boy whose condition was serious. Every quantitative measure suggested he was severely disturbed and clearly in need of help. The narrative comments, again seldom read, reported violent behavior at home and school and mentioned the repeated threats he had voiced to harm himself and others. I asked my colleagues and the project director what we planned to do about this. Didn’t we have ethical obligations to make sure the young boy accessed the help he so obviously needed? It had never occurred to them that researchers had obligations to the people they studied.
Even though these experiences soured me on quantitative research, the qualitative approach I chose for my dissertation study, critical ethnography, wasn’t much of an improvement. I had to listen to troubling stories and witness oppressive situations and conditions as a somewhat aloof objective observer. I could convince myself to some degree that it did help those who chose to share their experiences with an empathetic, nonjudgmental listener. I wouldn’t count or catalog their suffering. I would privilege their words and perspectives.
Yesterday, I discovered one of the flaws inherent in this approach. As I reread and edited an interview with the tribal child welfare director in the community I was studying, I came across the list of child welfare cases she read off to me. The list remained buried in my fieldnotes. Yesterday, I wondered what value the information added in terms of the overall purpose of a book I’m writing about the study. I wondered if I should just delete it, even though there were no names or identifying details. Just to be sure, I decided to reduce the list to numbers – how many children were in one of five different intervention categories:
1. returned home,
2. placed in community kinship care with Ojibwe relatives,
3. placed in off-reservation non-Indian foster care homes,
4. placed in an off-reservation non-Indian residential treatment facility, or
5. adopted or in the adoption process by an off-reservation non-Indian family.
I decided to see if I could figure out how to use Microsoft Excel to do a simple calculation. When I looked at the results, I was alarmed that I hadn’t thought to do this before. Despite the intention of the Indian Child Welfare Act of 1978 to end the outflow of Native American children from their communities, an ongoing process of cultural genocide, more than 40 percent of the child welfare placements were in non-Native settings outside of the tribal community. The colorful Excel pie-chart below illustrates the magnitude of loss.
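The Excel calculation amounts to summing the off-reservation categories and dividing by the total number of placements. A minimal sketch of that arithmetic in Python might look like the following; the counts here are hypothetical placeholders (the actual case numbers remain in my fieldnotes), chosen only so the result lands in the same range as the real finding.

```python
# Hypothetical placement counts for the five intervention categories.
# These are illustrative stand-ins, NOT the actual fieldnote data.
placements = {
    "returned home": 11,
    "community kinship care with Ojibwe relatives": 6,
    "off-reservation non-Indian foster care": 7,
    "off-reservation non-Indian residential treatment": 2,
    "off-reservation non-Indian adoption": 4,
}

total = sum(placements.values())

# Sum the categories representing placement outside the tribal community.
off_reservation = sum(
    count for category, count in placements.items()
    if category.startswith("off-reservation")
)

percent = 100 * off_reservation / total
print(f"{off_reservation} of {total} children ({percent:.0f}%) "
      "were placed off-reservation")
```

With these illustrative numbers, 13 of 30 children (about 43 percent) fall in off-reservation settings, matching the “more than 40 percent” pattern the real data showed.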
Lesson learned. Quantitative approaches do sometimes have merit. Stories, when backed up by numbers, may be far more effective advocacy tools than either approach alone. So the list will stay for now, accompanied by a new graphic.
Copyright Notice: © Carol A. Hand and carolahand, 2013-2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Carol A. Hand and carolahand with appropriate and specific direction to the original content.