Appendix D: Marné’s critique of her own study

Appendix D: A Critique

Criteria for Judging the Quality of my Study (See Appendix B for the Study)

by Marné Isakson

Four types of criteria were considered throughout this study so that its trustworthiness could be determined. Trustworthiness in qualitative inquiry is comparable to the standards of reliability and validity demanded of experimental studies. While these terms are irrelevant in a qualitative inquiry, achieving disciplined observations, quality insights, and substantiated conclusions is vital to any study. Credibility, transferability, dependability, and confirmability were the standards used (see Guba and Lincoln, 1989). To show to what degree these criteria were met in this study, the checklist suggested by Williams (1992) was followed.

1. Is a meaningful topic addressed?

The topic of the uses of teaching journals has certainly been meaningful to me. I came into the project knowing that journal writing was a way that I made sense of what was happening in my classroom; I came away understanding why this is so. The potential for its being meaningful to other teachers depends on them. I am convinced of the efficacy of journal writing in helping me improve as a facilitator of learning. Others might discover the power of this process to the degree they describe events in their classrooms explicitly and reflect on the meaning of those events in order to support the learners in their care.

2. Is qualitative inquiry appropriate for the topic?

Given that I went into this study of my journals cold, with no hypotheses and no preconceived notions of what I would find, an experimental design was inappropriate or surely premature. Sometimes qualitative inquiry can be used as a pre-study to gather important contextual information before designing an experiment. Yet even after a focus and an overriding question were formulated, an experiment would not be appropriate for this study because of the nature of the documents, the questions being addressed, and the purposes for the study. One does not design an experiment with treatment groups and control groups to look at one teacher’s journals written over a period of five years for the purpose of understanding learning and learners. The purpose of the study was to describe what was occurring in the journals and finally was refined, after lengthy observations, to discovering the uses of the journals in the teacher’s professional life. No experimental or quasi-experimental design could answer this question from the database, five years of reflective journals. No other methodology could have uncovered the insights gained from doing these analyses. Certainly no other type of research could have led to the same type of discoveries.

3. Are people treated ethically?

This question might not apply, as I am the only person involved, though it could be argued that I did not treat myself ethically since I made myself skip meals, skip sleep, and miss family events to involve myself in the research process. The students mentioned in the journals were not met face to face in this study but nevertheless were treated ethically when I changed their names in all the field notes and in any discussion including them.

4. Are natural conditions maintained as closely as possible?

The original journal entries were copied in full into the field notes and then were analyzed. The journals themselves reflected the actual events in my classroom to the best of my descriptive abilities given that I knew I could not capture everything going on and so focused on some event or student who caught my attention at the time. As much as time and memory allowed, I wrote exact conversations and sensory details of what I saw and heard. I tried to draw the picture of the occurrences using words. The other “natural conditions,” held in place by the writing, were my reactions, feelings, meaning-making, and insights concerning those occurrences.

5. Is the report well written?

a. Does it communicate well?

Although it could be improved through continual revision, the primary audience for this version of the report (my instructor) acknowledged that this criterion is met.

b. Does it address conflicting results?

Conflicting results in this study were not a matter of contradictions so much as a matter of unclassified data, data that did not seem to fit the taxonomy constructed. This was handled by rethinking the taxonomy to include each finding and give explanation for it. This study is also seen as a beginning. I only looked at a month and two days’ worth of entries out of five years of journals. I fully expect to discover more uses of the journal as I analyze additional journal entries. Thus the new findings will also alter the present taxonomy.

c. Does it include descriptions of the researcher, the data gathered, and the conditions under which they were gathered?

The journals, the journal writer, and the field notes are described in this study. A portrayal of the researcher is provided not only because the reader ought to know the assumptions the researcher has but also because the “subject” of the research, the journal writer, is also the researcher. The journals are described by showing excerpts from the journals, the actual journal entries, and by paraphrases, summaries, and responses to the entries. Furthermore, a history of the journals provides the reasons and the contexts from which the journals evolved.

d. Does it include analysis and synthesis of the data?

The in-depth domain, taxonomic, and componential analyses on the journals and their accompanying field notes are reported via text and charts. A portion of the report is devoted to a synthesis of themes that emerged from the study of the journals. Discussion and implications of the analyses and synthesis are provided.

6. Is the study CREDIBLE?

a. Is prolonged engagement adequate?

The prolonged engagement involved five years of keeping journals and seven years of thinking about their value. The actual field note-taking involved seventeen hours of looking at the journals. This was adequate, as evidenced by the broad results. More time could be spent, but the results from this short look, set in the context of the journal keeper’s prolonged engagement as researcher, are justified.

b. Is persistent observation adequate?

Seventeen hours of observation of only ten pages of original journal entries shows persistence and the determination to make meaning of the entries. Another thirty-eight hours were spent on the analyses, much of which necessitated going back into the journals to “observe” again.

c. Is triangulation used appropriately?

The journals were the only source of information available to answer the question, “What are the uses of the journal for me?” I submit that this criterion does not apply in the study. However, my husband was interviewed and an appointment was made with the department chair, in both cases to explore evidence of my change as a teacher over the years. Furthermore, the journal entries themselves reveal a great deal of triangulation: conversations, formal interviews, student document analysis, parent visits, other teachers’ insights, survey results, and, of course, my observations of student activity.

d. Is peer debriefing used appropriately?

I shared some of my confusions about what I was seeing in the journal with another doctoral student, a professor, and several people in a class about qualitative inquiry. However, the encounters were informal and short rather than intensive debriefings. Nevertheless, these “disinterested peers” helped me grapple with some difficult issues, and I made headway because of their input.

e. Is negative case analysis used appropriately?

Negative case analysis was performed within the limits put on the study: ten pages of journal entries. I checked and rechecked the data to see if all instances could fit within the categories. New categories did emerge, and the taxonomy was modified to account for the new data. What I did not do was look beyond the specific pages analyzed to see if there was evidence for other conclusions about how I use journals. I fully expect to find other uses; therefore, I conclude that this study is deficient in negative case analysis. To do an adequate job, I must dip into the unanalyzed journals. To do a thorough job, I must analyze all the journal entries for five years. These journals are considered archival data, and at some point I can go into them to do a negative case analysis. However, I did not do so for this study.

f. Are progressive subjectivity checks made?

The fifty-four-page audit trail contains an in-depth recording of my mental state as I worked through all parts of this study. I recorded my confusions, feelings, ideas, insights, predictions, and struggles to make sense of the data. The audit trail shows that I did not go into the study with the expectations I ended up with. I was not tied to an initial interpretation; in fact, twenty-one days of working on the project passed before I was able to decide on a focus for the study.

g. Is the emic perspective highlighted?

The emic or folk perspective and the etic or inquirer’s perspective are inseparable in this study. I am the person being studied (emic perspective) through the journals I wrote; I am also the person doing the studying (etic perspective). So, indeed, the emic perspective is highlighted: the five years of journals are from my perspective. The field notes about those journal entries are the etic perspective. I did try to come to understand the self I was in 1985 and in 1989.

Nevertheless, over the years of journal keeping, I tried to capture the emic perspective of my students in my classroom. I did this by describing behavior, settings, conversations, and interactions. Sometimes the entries focused exclusively on one student and provided sensory descriptions of events; sometimes I wrote about interactions between students I witnessed or between students and me. I tried to see events from their points of view through observations, interviews, surveys, and document analysis of their productions.

h. Are member checks used appropriately?

This criterion does not apply in this study because I am the only member in the study. I was studying myself as a journal keeper. As for doing member checks on the original journal entries themselves, I rarely did; when I did, it was usually after an extended time, at a shared moment of evaluation. I would tell the person some of what I had observed and ask for their reactions. I never let students read my journals, and they never asked. Few, if any, were aware of my written observations.

7. Is thick description adequate to make TRANSFERABILITY of the study likely?

Thick descriptions were created of the researcher, the journals, the process of creating the journals, and the procedures for conducting the study. Moreover, many samples from the journals were put into all sections of the report to provide a rich context from which readers can draw their own conclusions about the value of keeping teaching journals. The transferability of the study seems obvious to me. If the reader agrees with the findings that journal keeping holds valuable information in place for making instructional decisions, the process is easily personalized at any teaching level–preschool to graduate school, dog-training to piano teaching, third-grade music to high school science. The only thing a prospective journal keeper needs to do is “Just try it.”

8. Is the study DEPENDABLE?

a. Is an adequate audit trail maintained?

A fifty-four-page, handwritten audit trail was kept. In it are records of the decisions I made, the organizations of data I used, the reasons for those decisions, the struggles I was experiencing, and the activities I involved myself in concerning this study. Additional audit trail information is in the field notes themselves, labeled MN for Methodological Notes.

b. Was an audit conducted? Results?

Not yet. However, David Williams, a seasoned qualitative inquirer, has agreed to audit my study. A brief report of his audit will be included with this study.

c. Are data collection and analysis procedures adequate? Has the researcher been careless or made mistakes in conceptualizing the study, sampling people and events, collecting the data, interpreting the findings, or reporting the results?

Samples of journal entries were selected from the first year I kept journals and from the most recent year I kept a journal. The reason for this procedure was to see if I had changed as a teacher. Although this focus was immediately dismissed, the sample was kept. The sample size was reduced several times as time constraints impinged on the process. Initially, I had hoped to analyze all five years of journals; this was reduced to two years, then to one month in each journal, then to one month in one and two days in the other. However, within the sample, every event was analyzed. A thorough domain analysis was performed for fifteen semantic relationships followed by a taxonomic analysis, componential analysis, and theme synthesis. A report was prepared to share the results of the study with practicing teachers who might be interested in the concept of being a researcher in their own classrooms. The process of conceptualizing the study began several years before the study actually began and continued throughout the study. Evidence for the latter is how many times I changed my mind about a focus, how extensive the study would be, who the audience would be for the report, and finally the decisions about reviewing literature.

9. Is the study CONFIRMABLE?

a. Is an adequate audit trail maintained?

See response to 8a above.

b. Was an audit conducted? Results?

See response to 8b above.

c. How adequate are the findings? How well are they supported by people and events that are independent of the inquirer?

Several avenues have been pursued so the study would meet confirmability standards. First, the original journal entries, the field notes, and the records of the reasoning I went through to arrive at the analyses are available for review and were reviewed by an auditor. Others viewing these documents would likely come to the same conclusions I did. Second, a review of the literature was made after the analyses were completed to see what other writers, theorists, and researchers have concluded about journal keeping. The findings from this review are meshed with the research findings in the discussion section of the report. A thorough review of the literature was not done, but the sources searched did reveal a philosophical view similar to mine; thus the points of view expressed in the review may give an incomplete picture of the issues. Therefore, readers should not assume that a thorough review of the literature would show as much support for the findings as the more than twenty sources searched for this study did. Nevertheless, the support is substantial, and the reader could decide that these writers confirm my findings.