History and origin
GT originated in the 1960s with Barney Glaser and Anselm Strauss, who worked together in 1965 on research into health professionals’ interaction with dying patients. From this research, writing and teaching, the classic text The Discovery of Grounded Theory (Glaser and Strauss, 1967) emerged. Four other books on GT followed: Field Research: Strategies for a Natural Sociology (Schatzman and Strauss, 1973), Theoretical Sensitivity (Glaser, 1978), Qualitative Analysis for Social Scientists (Strauss, 1987) and Basics of Qualitative Research (Strauss and Corbin, 1990, 1998), the last an attempt by Strauss and Corbin (Corbin is a researcher with a nursing background) to modify earlier ideas on GT. The last book on which Strauss (who died in 1996) worked is a clear and practically useful text on GT; it describes an approach which has been tried and developed, and in 2008 Corbin followed it up with a later book. Although at times called formulaic and prescriptive, the 1990 and 1998 editions have served many nurse researchers as handbooks for finding particular elements of GT such as theoretical sampling and saturation (these will be explained later). The book edited by Chenitz and Swanson (1986) discussed GT in relation to nursing research, and Strauss and Corbin (1997) edited a book in which they show how researchers have applied GT in practice. In the early 1990s Glaser (1992) criticised the approach taken by Strauss and Corbin, asserting that what they described was not true GT but ‘conceptual description’. Since then, Glaser has written prolifically – including various Readers on GT – and has developed his own perspective in books and on his website. He founded his own press (Sociology Press, Mill Valley, CA), established a ‘Grounded Theory Institute’ and now publishes the international journal The Grounded Theory Review. The ideas of Glaser and Strauss diverged in later years.
More recently, Kathy Charmaz (2006) developed a constructivist GT from her earlier work, which led to an edited handbook (Bryant and Charmaz, 2007).
In nursing and healthcare the GT approach has been popular from its inception. Benoliel (1996: 419–21) lists the GT research studies carried out in nursing between 1980 and 1994 and gives a good overview of the history of GT. There are also relevant chapters in Munhall’s (2006) edited book and, in particular, in Morse’s publications in the United States. Melia in the 1980s and 1990s, and more recently Cutcliffe (2000, 2005) in Britain, are among the better-known nurse researchers who have used and/or discussed GT approaches. Schreiber and Stern (2001) edited a GT text specifically for nurses.
Symbolic interactionism
The theoretical framework for GT has its roots in SI, which focuses on the processes of interaction between people and explores human behaviour and social roles (although it must be said that Glaser now has a somewhat different perspective and sees SI as just one of a number of contributions). SI explains how individuals attempt to fit their lines of action to those of others (Blumer, 1971), taking account of each other’s acts, interpreting them and reorganising their own behaviour. Mead (1934) established the philosophical framework, and Blumer contributed to GT the idea that human beings are active participants in their situation rather than passive respondents. Jeon (2004) shows the debt GT owes to SI; the notion of a self that is ever changing and adapting is central to this.
Mead, the main proponent of SI, sees the self as a social rather than a psychological phenomenon. Members of society affect the development of a person’s social self by their expectations and relationships. Initially, individuals model their roles on the important people in their lives, ‘significant others’; they learn to act according to others’ expectations, thereby shaping their own behaviour. The observation of these interacting roles is a source of data in GT, and individual actions can only be understood in context.
SI focuses on actions and perceptions of individuals and their ideas and intentions. The Thomas theorem states: ‘If men [sic] define situations as real, they are real in their consequences’ (Thomas, 1928: 584), thereby claiming that individual definitions of reality shape perceptions and actions. Participant observation and interviewing trace this process of the ‘definition of the situation’. Researchers should see the situation from the perspective of the participants rather than their own. Qualitative methods suit the theoretical assumptions of SI. Researchers use GT to investigate the interactions, behaviours and experiences as well as individuals’ perceptions and thoughts about them. The intention of the research is ‘the idiographic study of particular cases rather than the nomothetic study of mass data’ (Alvesson and Sköldberg, 2000: 13).
The main features of grounded theory
The main aim of GT is the systematic generation of theory from the data collected by researchers. Existing theories can be modified or extended through GT. Researchers start with an area of interest, collect and analyse the data and allow relevant ideas to develop, without preconceived theories to be tested for confirmation. Glaser and Strauss (1967) advised that rigid preconceived assumptions prevent development of the research; imposing a framework might block the awareness of major concepts emerging from the data. The approach seeks explanation rather than being descriptive.
The theory generated through the research must be applicable to a variety of similar settings and contexts. GT researchers are able to adopt alternative perspectives rather than follow previously developed ideas. For this, they need flexibility and open minds, qualities related to the processes involved in nursing. The following gives an example of the use of GT and the need for flexibility by researchers.
MacIntosh (2003), a nurse educator, examined the socialisation of nurses in a nursing education programme through open-ended interviews with experienced nurses, gaining their perceptions of the process of becoming professional and of the problems inherent in that process. She justified her use of GT by pointing to the lack of previous research on the influences of socialisation and on perceptions of professional development. She followed the tenets of GT by revising and reworking ideas as new data emerged from the interviews. The development of the study was not linear but flexible, changing direction when the need arose.
This research demonstrates that GT research is not a simple and orderly process, though it is systematic.
The GT style of research uses constant comparison. The researcher compares each section of the data with every other throughout the study for similarities, differences and connections. Included in this process are the themes and categories identified in the literature. All the data are coded and categorised, and from this process, major concepts and constructs are formed. The researcher takes up a search for major themes that link ideas to find a core category for the study.
Strauss (1987) sees the processes of induction, deduction and verification as essential in GT, and he believes that the approach should be both inductive and deductive. GT does not start with a hypothesis, though researchers might have ‘hunches’. After the initial data have been collected, however, relationships are established and provisional hypotheses conceived; these are verified by checking them against further data. Glaser (1992), by contrast, questions the process of verification, as discussed later in this chapter, and stresses the inductive element and the ‘emergence’ of theory. Theoretical sampling, one of the main features of GT, is discussed below.
Grounded theorists accept their role as interpreters of the data and do not stop at merely reporting them or describing the experiences of participants. Researchers search for relationships between concepts, while other forms of qualitative research often generate major themes but, generally, do not develop theories.
Data collection, theoretical sampling and analysis
Data collection
Data are collected through observations in the field, interviews of participants, diaries and other documents such as letters or even newspapers. Researchers use interviews and observations more often than other data sources, and they supplement these through literature searches. Indeed, the literature becomes part of the data that are analysed. Everything, even researchers’ experience, can become sources of data; Glaser (1978) believes that ‘everything is data’. The work is based on prior interest and problems that researchers have experienced and reflected on, even when there is no hypothesis. Data collection and analysis are linked from the beginning of the research, proceed in parallel and interact continuously. The analysis starts after the first few steps in the data collection have been taken; the emerging ideas guide the collection of data and analysis. This process does not finish until the end of the research because ideas, concepts and new questions continually arise which guide the researcher to new data sources and concepts. Researchers collect data from initial interviews, observations or documents and take their cues from the first emerging ideas to develop further interviews and observations. This means that the collection of data becomes more focused and specific as the process develops (progressive focusing).
The researcher writes fieldnotes from the beginning of the data collection throughout the project. Certain occurrences in the setting, or ideas from the participants that seem of vital interest, are recorded either during or immediately after data collection. They remind the researcher of the events, actions and interactions and trigger thinking processes.
According to Glaser (1978) the following are necessary for GT:
- Theoretical sensitivity
- Theoretical sampling
- Data analysis: coding and categorising
- Constant comparison
- Literature as a source of secondary data
- Integration of theory
- Theoretical memos and fieldnotes
- The core category
Theoretical sensitivity
Researchers must be theoretically sensitive (Glaser, 1978). Theoretical sensitivity means that researchers can differentiate between significant and less important data and have insight into their meanings. It has a variety of sources and is built up over time from reading and experience, which guide the researcher to examine the data from all sides rather than staying fixed on the obvious.
Professional experience can be one source of awareness, and personal experiences, too, can help make the researcher sensitive.
A specialist nurse, an expert on anorexia nervosa, explores this condition from the perspectives of those who suffer from it. He has expert knowledge in the field gained in his long professional career. His professional experience makes him sensitive to patients’ feelings and perceptions (Newell, 2008).
A general practitioner has had diabetes from an early age. When she observes or interviews patients about their condition, she might include questions on the feelings patients had on the diagnosis of diabetes or their thoughts about living with this condition.
The literature sensitises, in the sense that documents, research studies or autobiographies create awareness in the researcher of relevant and significant elements in the data. Strauss and Corbin (1998) believe that theoretical sensitivity increases when researchers interact with the data because they think about emerging ideas, ask further questions and see these ideas as provisional until they have been examined over time and are finally confirmed by the data.
Theoretical sampling
Sampling guided by ideas with significance for the emerging theory is called theoretical sampling. In theoretical sampling ‘the emerging theory controls the research process throughout’ (Alvesson and Sköldberg, 2000: 11). One of the main differences between this and other types of sampling is time and continuance. Unlike other sampling, which is planned beforehand, theoretical sampling in GT continues throughout the study and is not planned before the study starts. Cutcliffe (2000) shows that the initial data collection and analysis guides the direction of further sampling.
At the start of the project researchers make initial sampling decisions. They decide on a setting and on particular individuals or groups of people able to give information on the topic under study. Once the research has started and initial data have been analysed and examined (one must remember that data collection and analysis interact) new concepts arise, and events and people are chosen who can further illuminate the problem. Researchers then set out to sample different situations, individuals or a variety of settings, and focus on new ideas to extend the emerging theories. The selection of participants, settings, events or documents is a function of developing theories.
Theoretical sampling continues until the point of data and theoretical saturation. Students do not always understand the meaning of ‘saturation’, believing it to be the stage at which no new information or concepts are obtained through data collection and analysis. For Glaser and Strauss (1967), however, theoretical saturation has occurred when no more data emerge that can be used to find dimensions and develop properties of the categories the researcher has established; it is not simply the point at which a concept is mentioned frequently and described in similar ways by a number of people, or at which the same ideas arise over and over again. Saturation occurs only when no new data of importance for the developing theory, and for the achievement of the aim of the research, emerge. It is very difficult to reach saturation – indeed, one might ask whether it can ever truly be established – but the attempt at saturation is necessary. Saturation occurs at a different stage in each research project and is difficult to recognise. Draucker et al. (2007) present a sampling guide to assist in both systematic decision-making and category development.
Theoretical sampling, though originating in GT, is occasionally used in other types of qualitative analysis.
Data analysis: coding and categorising
Coding and categorising go on throughout the research; from the start of the study, analysts code the data. Coding in GT is the process by which concepts or themes are identified and named during the analysis. The data are transformed and reduced to build categories, which are named and given a label, and through the emergence of these categories theory can evolve and be integrated. Researchers form clusters of interrelating concepts, not merely descriptions of themes.
Sometimes codes consist of words and phrases used by the participants themselves to describe a phenomenon; these are called in vivo codes (Strauss, 1987). A new recruit to the profession might declare in an interview, ‘I was thrown in at the deep end’, and the code might be ‘thrown in at the deep end’. In vivo codes can give life and interest to the study and are immediately recognisable as reflecting the reality of the participants.
The first step in this process of analysis is open coding, which starts as soon as the researcher receives the data. Open coding is the process of breaking down and conceptualising the data.
In GT, all the data are coded. Initial codes tend to be provisional and are modified or transformed over the course of the analysis. At the beginning of a project or a study, line-by-line analysis is important, although it can be a long-drawn-out process for analysts. Because codes are based directly on the data, the researcher avoids preconceived ideas. An extract from an interview with a nurse tutor gives some idea of level 1 coding.
Interview extract | Code
Well I suppose most people get fed up with doing the same things year in, year out. | Getting bored
I really felt like a change. | Desire for change
Regular hours are important to me. | Wish for regularity
I hadn’t been promoted to the level to which I could function. | Lack of promotion
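Researchers increasingly keep this kind of bookkeeping in qualitative analysis software, and the record-keeping side of open coding and constant comparison can be sketched in a few lines of code. The sketch below is purely illustrative: coding in GT is an interpretive human activity, and the category names here (such as ‘dissatisfaction with current role’) are hypothetical labels invented for the example, not products of any algorithm.

```python
# A minimal sketch of the bookkeeping behind open coding and constant
# comparison. The analyst, not the program, decides the codes and the
# provisional categories; the code only keeps track of the groupings.
from collections import defaultdict

# Each data segment (e.g. a line from the nurse tutor interview) is
# assigned one or more codes by the analyst.
coded_segments = [
    ("Well I suppose most people get fed up with doing the same things "
     "year in, year out.", ["getting bored"]),
    ("I really felt like a change.", ["desire for change"]),
    ("Regular hours are important to me.", ["wish for regularity"]),
    ("I hadn't been promoted to the level to which I could function.",
     ["lack of promotion"]),
    # An in vivo code keeps the participant's own words as the label:
    ("I was thrown in at the deep end.", ["thrown in at the deep end"]),
]

# Constant comparison: each new code is compared with existing ones and
# grouped under a provisional category, which may later be renamed,
# merged or split as further data are analysed. These category names
# are hypothetical examples.
categories = {
    "getting bored": "dissatisfaction with current role",
    "desire for change": "dissatisfaction with current role",
    "lack of promotion": "dissatisfaction with current role",
    "wish for regularity": "hopes for the new role",
    "thrown in at the deep end": "transition experiences",
}

# Group the codes by their provisional category.
by_category = defaultdict(list)
for segment, codes in coded_segments:
    for code in codes:
        by_category[categories[code]].append(code)

for category, codes in by_category.items():
    print(f"{category}: {codes}")
```

The point of the sketch is only that codes are provisional and relational: the mapping from code to category is revised each time a new segment is compared with what has gone before.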