25. Stories from the Field





Here we are at the end of our book but certainly not, we hope, at the end of your involvement in research. The beauty of research is that you learn from doing, and it is a never-ending process. At each step of the process, new, more refined, and complex queries and questions will emerge and whet your appetite.


We have introduced you to what some may believe are “heavy” philosophical thoughts, technical language, and logical ways of thinking and acting. These are your tools to explore creatively the challenges you identify in your practice, daily life, or professional experience and in your reading of the literature. As you apply these thinking and action tools to health and human service–related issues, you will discover both the artistry and the science involved in research and its application to your professional practice.


Research, like any other human activity, has its low and high points, its tedium and thrill, its frustrations and challenges, its drastic mistakes, and its clever applications of research principles. Research is foremost a thinking process. If you think about what you are doing and reflect on what you have done, you can learn from mistakes, and you can keep refining your skill as an investigator.


We would like to share with you some of our own stories from the research field to highlight the twists and turns of research and how human this activity really is.


Just beginning


It was an urban anthropology class, and we were split into groups to conduct urban ethnographies on health practices by different ethnic groups in the city. The big assignment? The Chinese community. The research group? A hippie-type woman, a rock musician, and a topless go-go dancer who dressed the part day and night. The threesome entered the Chinese community, a relatively self-contained 5- by 10-block area of the inner city. As you can imagine, we were quite a sight. How would we ever be able to “enter” the world of this community and engage in passive observation and active participation in community activities? We started out by walking around and scoping out the area. We made observations of the physical environment and spent some great times eating lunch and dinner in various restaurants.


We became Chinese restaurant experts for friends and family, but we had no breakthrough. No Chinese family agreed to meet with us or to be interviewed. Young people were curious and asked lots of questions about us, but their parents remained removed, detached, and unavailable. We caused quite a stir in the community. We split up a couple of times to see whether one of us could gain access, but we had no luck.


Then one day we happened to pass a small building with an announcement for a Chinese Political Youth Club. The sign was posted in Chinese and English (a sign of a new acculturated generation?), and we walked directly to the address of the club. We were welcomed and engaged in long discussions of the political climate in the university and the dilemmas confronting the community. Bingo! This was a beginning. Although our access to the community remained through the ears, eyes, and thoughts of this radical subgroup, we were able to delineate health practices and health issues as perceived by this group. This was a lesson in nonreactive research, gaining access, and the impact of key informants on the type of information and understandings that are obtained.


What did you expect?


One of our students was assigned to an anti-graffiti program for urban adolescents who had been convicted of defacing public property. Because the program was new, this student decided to conduct research with the adolescents to find out why they engaged in graffiti and what strategies they would find helpful in reducing this behavior. So the student gave each adolescent several sheets of paper with open-ended questions. You guessed it. When she went back to her office and opened the folder of papers, she found that the teens had responded with graffiti. What did she expect by giving the tools of the trade to offenders?


In search of significance!


Five years of intensive interviewing, data entry, data cleaning, and sophisticated statistical analyses, but where is the significance? Accepting nonsignificance when you had hoped to find statistically significant differences between an experimental and a control group can be difficult. Nonsignificance can be as important a finding as obtaining significant differences, but it can also present a challenge to getting the findings published.


Is health care effective?


In our research class, students are required to develop a research question or query and a proposal that describes how they intend to answer it. Our most frustrating but popular “research question” is the one posed by many beginning research students: “Is nursing effective?” or “Is occupational therapy effective?”


Is this a researchable question? Can you explain what is wrong with the way this question is posed?


Elevator insight


The values of occupational therapy students in a particular program were studied by administering a values test to incoming students, first-year students, and second-year students. It was hypothesized that a change in values would occur, which would indicate that students had acquired the professional values that were integral components of the curriculum.


After the test was administered, some of the students were talking about it on the elevator without realizing that the investigator was present. The students discussed the content of the questions, compared their responses, and hoped that they had answered them correctly! The investigator realized at that incredible, serendipitous moment that the students had answered the questions as they thought they should. They thought it was a test with only one set of correct answers. Their responses reflected not what they actually felt or believed but what was socially and professionally desirable.


A “good” research subject


We established what we believed were clear, objective criteria by which to identify eligible subjects from a pool of older adults who were in rehabilitation for a stroke, a lower limb amputation, or an orthopedic deficit. The purpose of the study was to identify the initial perceptions and attitudes of older patients toward the assistive devices they received in the hospital and whether these issued devices were used at home after hospitalization.


As the recruitment process began, periodic meetings were held with the therapists who were responsible for identifying potential study subjects. At one of these initial meetings, we asked how recruitment was proceeding and whether everyone understood the study criteria. The therapists relayed that all was going well and that they were very pleased they had been able to identify the first few subjects, who would “do very well” in this study. When the therapists were asked what they meant by “doing well,” we learned that they had referred patients to the study who they believed greatly valued their assistive devices and would use them at home. The therapists were systematically referring only the “good” patients and ignoring the potential study eligibility of those patients whom they did not like or who they suspected would be noncompliant with device use.


This represented a fatal flaw in the recruitment process that would have contaminated the entire data collection effort and the study results. We immediately changed the recruitment process and examined the hospital census records to determine who might have been available for study participation but had been systematically excluded by the therapists at this initial study phase.


A “bad” research subject


In a study of family caregiving, an eligible subject enrolled in the study and was randomly assigned to receive an experimental intervention. The intervention involved in-home occupational therapy visits designed to help the caregiver modify the home environment to support caregiving efforts. This particular subject had unusual religious beliefs and social practices that upset the member of the research team providing the experimental intervention. Initially, the interventionist was unable to handle these value differences and suggested to the investigators that the subject was inappropriate for the study and should be considered ineligible. In actuality, there were no objective criteria for excluding this caregiver from the study. The caregiver fit all eligibility criteria. Additionally, the individual had agreed to participate in the study and had signed an informed consent form.


It is for this very reason that criteria for subject selection are developed. Furthermore, the investigator must maintain oversight of their implementation and protect the integrity of the protocol. Subjective reasons for excluding individuals from study participation are a serious source of bias. In this case, the investigators worked with the interventionist to alleviate her anxiety and to proceed with the implementation of the intervention according to protocol.


Literacy is not literacy


Recently, we developed a Web portal that translated higher-literacy text on tobacco prevention Web sites into lower-literacy text. To test comprehension, we went to an adult literacy center and recruited subjects who were reading at the equivalent of a fourth-grade English literacy level. Given their similar literacy level, we hypothesized that they would show consistent comprehension. Instead, their responses were highly dispersed, leading us to ask what had gone wrong with the translation. We had neglected to look at the numerous variables that influence not only reading level but also comprehension, and we found that the greatest difference in comprehension existed between immigrants and nonimmigrants. Immigrants who were well educated in another language were able not only to read the words but also to comprehend complex written ideas, regardless of the reading level at which they were presented. In contrast, the nonimmigrant participants who did not score well had not been educated in written text. Thus, although they could easily comprehend oral information, they struggled with the meanings of text even when they could read the words and sentences.


Native American?


Because we were interested in the ethnic background of our sample of elder women, we added an item to our survey seeking that information. We were surprised to find that 98% of the respondents had checked “Native American.” Living in Maine, and knowing that almost all persons in our sample were Caucasian, we were perplexed. So we asked our respondents why they had checked Native American. One woman replied, “Well, what else would I check? I was born here, lived here all my life, and expect that I will die in America.”


The Pearson, or the moral of the coding story


A student was almost finished with her dissertation when a crisis occurred. She had measured two constructs: the number of years faculty members had been teaching and their attitudes toward their jobs. She coded years of teaching as the actual number of years, starting at 0 and ascending. She coded attitudes, measured as interval-level data, from 1 to 5, with 1 denoting the most positive attitude and 5 the least positive. When she conducted her analysis, she calculated a Pearson r value of −.7 and interpreted it as a strong association between years of teaching and positive attitudes. She finished her conclusion section on the basis of this finding. Because the Pearson value was negative, however, the student’s faculty advisor told her that she would have to rewrite the findings section. So she missed her desired graduation date. But who was correct? The student was correct. What are the morals of the story?


Code clearly. If you choose to code as this student did, be clear in your discussion of findings. And if you are a faculty advisor, read carefully, and think!
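To see why the student was correct, consider a minimal sketch in Python using made-up numbers (not her data). Reverse-coding a 1-to-5 attitude scale (new score = 6 − old score) flips the sign of Pearson’s r but leaves its magnitude unchanged, so under her coding a value of −.7 really does indicate a strong association between more years of teaching and more positive attitudes.

import numpy as np

# Hypothetical data: years of teaching, and attitudes coded 1-5,
# where 1 = most positive and 5 = least positive (the student's coding).
years = np.array([1, 3, 5, 8, 12, 15, 20, 25], dtype=float)
attitude_1_best = np.array([5, 5, 4, 4, 3, 2, 2, 1], dtype=float)

# More years go with lower scores (i.e., more positive attitudes), so r is negative.
r_original = np.corrcoef(years, attitude_1_best)[0, 1]

# Reverse-code the scale so that 5 = most positive (new = 6 - old).
attitude_5_best = 6 - attitude_1_best
r_reversed = np.corrcoef(years, attitude_5_best)[0, 1]

print(f"r with 1 = most positive: {r_original:+.2f}")   # negative
print(f"r with 5 = most positive: {r_reversed:+.2f}")   # same magnitude, positive sign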


If you can’t deliver, don’t ask


We were embarking on a needs assessment study in which we convened a series of focus groups composed of individuals who were receiving long-term care services. We wanted to know what could be improved in the service system. When we arrived, all the focus group members were present. One by one, each spoke and clearly told us that if we could not deliver what they requested, we should not take their time by conducting another group interview.


Don’t ask if you’re not prepared to answer


In a randomized controlled trial to evaluate an intervention to support family caregivers, we were concerned that families assigned to the “no treatment” control group would lose interest in the study and decide to withdraw. We thus decided to conduct monthly telephone check-in calls to control group caregivers to maintain their interest in the study and enhance retention. The problem, however, was what to say to this highly stressed and emotionally vulnerable population. We learned quickly that a simple statement (e.g., “Hello, Mrs. Smith; how are you today?”) elicited clinically revealing statements about the person’s psychological and physical health. Some caregivers began to cry, expressed feeling extremely depressed and not knowing whom to turn to, and revealed serious health complaints and incidents of physical abuse. We were ethically bound to respond appropriately, and this quick check-in telephone call to maintain study contact turned into a meaningful clinical intervention.


No detail too small


To set up a randomization scheme stratified by gender, a blocking scheme was developed for both men and women and given to a research assistant, who was to create envelopes containing the appropriate group assignment sheets (experimental or control). As study participants were enrolled, we noticed that the first 10 women to enter the study had all been assigned to control and all four men had been assigned to intervention. Under the blocked randomization scheme we were using, this should not have been possible, so we investigated.


It was discovered that the research assistant had placed only control sheets in the envelopes for women and only intervention sheets in the envelopes for men. This required official notification of the institutional review board and the data and safety monitoring board, as well as a plan of action to determine how best to preserve the original randomization scheme and manage the errors to date. The lesson learned? Even the smallest details, such as reading a person’s handwriting and stuffing envelopes, can have profound methodological implications for a study.
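For readers unfamiliar with blocked randomization, here is a minimal sketch in Python of how such a gender-stratified assignment list can be generated; the block size of four is an assumption for illustration only, as the chapter does not report the actual scheme. Within each stratum, every block contains equal numbers of experimental and control assignments in random order, which is why a run of 10 consecutive control assignments among the women signaled an error.

import random

def blocked_assignments(n_blocks, block_size=4, seed=None):
    # Each block holds equal numbers of experimental and control slots,
    # shuffled so the order within the block is unpredictable.
    rng = random.Random(seed)
    half = block_size // 2
    assignments = []
    for _ in range(n_blocks):
        block = ["experimental"] * half + ["control"] * half
        rng.shuffle(block)
        assignments.extend(block)
    return assignments

# A separate list for each stratum (here, gender), as in the study's design.
schedule = {
    "women": blocked_assignments(n_blocks=10, seed=1),
    "men": blocked_assignments(n_blocks=10, seed=2),
}

# Each assignment is then sealed, in order, in an opaque envelope for its stratum;
# the error in the story amounted to ignoring this list when stuffing the envelopes.
print(schedule["women"][:8])
print(schedule["men"][:8])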


Wow, you got it!


Your heart starts to race, you feel a rush; it all clicks and falls into place; you have uncovered a pattern, a finding, something really striking, and it is significant. The data are right before your eyes. The result has the potential to have an impact on how professionals practice, on how clients feel, and on their health and well-being! Wow, you got it! You feel great.


This is so important—you have to do it again.
