Trustworthiness

The aim of trustworthiness in a qualitative inquiry is to support the argument that the inquiry’s findings are “worth paying attention to” (Lincoln & Guba, 1985, p.290). This is quite different from the conventional experimental precedent of attempting to demonstrate validity, soundness, and significance. In any qualitative research project, four issues of trustworthiness demand attention: credibility, transferability, dependability, and confirmability. Credibility is an evaluation of whether or not the research findings represent a “credible” conceptual interpretation of the original data drawn from the participants (Lincoln & Guba, 1985, p.296). Transferability is the degree to which the findings of this inquiry can apply or transfer beyond the bounds of the project. Dependability is an assessment of the quality of the integrated processes of data collection, data analysis, and theory generation. Confirmability is a measure of how well the inquiry’s findings are supported by the data collected (Lincoln & Guba, 1985). In this inquiry, trustworthiness was enhanced through the strategies detailed below.

To address credibility, I employed three techniques. First, in designing the research procedure, I deliberately included three experiential learning initiatives rather than just one or two. My intention here was to generate three layers of data from each participant. This technique, while not meeting the technical definition of “triangulation” (Lincoln & Guba, 1985), nonetheless provided a richer, more multilayered, and more credible data set than one or two initiatives would have generated. In addition, future studies could supplement the grounded theory developed here by analyzing this inquiry’s data solely by participant or solely by initiative.

Second, I enlisted the help of a competent Peer Debriefer (Lincoln & Guba, 1985). Dr. Jeff Galbraith was my Peer Debriefer for this project. Dr. Galbraith holds a Ph.D. in Educational Psychology from UVA and is an active private-sector practitioner in the field of professional staff/executive development. Dr. Galbraith was responsible for meeting with me as I refined my procedure via pilot studies, after I collected the data, and periodically during the process of data analysis. During our meetings, Dr. Galbraith received regular progress reports on the project and posed questions regarding the research question, methodology, ethics, trustworthiness, and other research issues. He made pointed observations and suggestions and posed “Devil’s Advocate” questions throughout the process. His role was generally consistent with that defined in the literature (Lincoln & Guba, 1985). This qualitative inquiry has been updated to take into account Dr. Galbraith’s comments as a Peer Debriefer. Each significant interaction from our meetings, along with the subsequent changes I made, appears in my field journal. Lastly, a letter from Dr. Galbraith detailing his experiences as my Peer Debriefer appears in Appendix H.

Third, I completed “Member Checking” (Lincoln & Guba, 1985) with three of the four participants; the fourth was unreachable. In the process of Member Checking, each participant reviewed a summary of the data analysis procedure and a summary of the final results of the inquiry. They answered several standardized interview questions and offered comments on whether or not they felt the data were interpreted in a manner congruent with their own experiences. All three participants rated the findings of the data analysis as a “moderately” to “strongly” credible interpretation of the reality they experienced in the project. In addition, all three participants made comments that directly connected the findings to one or more personal experiences they had during the procedure. The exact documents used in the Member Checking process are on file and available upon request.

To address transferability, I include in Appendix D several of the data analysis documents used to generate the answer to the research question. The complete set of data analysis documents is on file and available upon request. This access to the inquiry’s “paper trail” gives other researchers the ability to transfer the conclusions of this inquiry to other cases, or to repeat, as closely as possible, the procedures of this project.

To address the issues of dependability and confirmability, I relied on an independent audit of my research methods by a competent peer (Lincoln & Guba, 1985; Patton, 1990). My auditor, Kris Kreuger, is a practicing professional in the field of Experiential Training and Development with experience generally equivalent to my own. She is also in the process of finishing her own Ph.D. in Educational Evaluation and is familiar with Grounded Theory Methodology and Qualitative Research in general. After I completed my data analysis and wrote the bulk of Chapters Four and Five, my auditor thoroughly examined my audit trail consisting of the original transcripts, data analysis documents, field journal, comments from the member checking, and the text of the dissertation itself. Based on established precedent in Qualitative Research, she assessed both the dependability and confirmability of the project, as well as the completeness and availability of auditable documents. She also evaluated the degree and significance of researcher influence she found. In brief, the auditor assessed the confirmability, the dependability, and the degree to which researcher influence was handled as “excellent.” The auditor’s letter of attestation (Lincoln & Guba, 1985) appears in Appendix E. The letter requesting the audit and detailing the questions for her to answer is on file and available upon request.

Researcher Role & Prevention of Researcher Influence

In this qualitative inquiry, as in most others, the researcher was the instrument through which the data were collected. In most qualitative inquiries, a “Person as Instrument Statement” provides enough explanation of this issue. In this project, however, I am a researcher who by necessity both facilitated and analyzed the research procedure under study. Thus, I have taken more rigorous steps to address the issues of researcher role and prevention of researcher influence. Patton (1990) identifies four ways in which a researcher might unduly influence the data of a qualitative inquiry. I address each one in turn.

 

(1) Researcher Presence – the reactions of program participants to the researcher’s presence,

(2) Instrument Change – changes in the researcher over the course of the project,

(3) Professional Incompetence – a lack of sufficient training or preparation, and

(4) Value Imposition – undue influence of the values or biases of the researcher (Patton, 1990).

Researcher Presence. Participants sometimes react unusually to the presence of a researcher in the research procedure, thus unduly influencing the data generated. To minimize this kind of influence, Patton suggests that participant researchers allow an appropriate period of time for themselves and participants to “get used to each other” (Patton, 1990, p.473). In this inquiry, I included in the design of the research procedure two complementary relationship-building techniques. The first relationship-building phase occurred at the start of my first interview with each participant. The second occurred at the start of the group experiences in Step Two. As described earlier in this chapter, I designed these phases specifically to develop what counselors call a “working alliance” between the participants and myself. Development of a working alliance tends to increase genuine interaction and thus minimize participants’ need to employ artificial roles, routines, scripts, or games with me or for me (Meier & Davis, 2000). I contend that the use of such relationship-building techniques helped build an atmosphere of trust that decreased the likelihood of undue researcher influence.

Instrument Change. In long-term participant observation projects, particularly ethnographic studies, there is a concern that prolonged participation can change the researcher and thus bias the data. In common parlance, the concern is that researchers will “go native” (Patton, 1990, p.474). In this inquiry, data collection occurred over a period of approximately six weeks from late August to early October of 2001. During this time, my total contact time with all participants was approximately 24 hours. Thus, any concern about “going native” was negligible in this study. To minimize any other instrument changes over time, I employed two strategies. First, I relied on many years of experience in facilitating adult learners to get into a well-trained, consistent professional mindset before each interaction with participants. Second, I depended on the interview protocol detailed earlier in this chapter and attempted to ask each participant the same questions in the same order with approximately the same verbal and non-verbal cues. Any deviations from this protocol were made only when no other realistic option seemed available in the moment. I contend that my ability to both maintain a well-developed, consistent mindset and adhere to the approved interview protocol minimized any potential for undue influence from instrument changes over time.

Professional Incompetence. An ill-trained or inexperienced researcher’s professional incompetence can cause undue influence in a project’s data. Because I played the dual roles of facilitator and researcher in this inquiry, it is appropriate to summarize my experience in this field and briefly discuss the implications it might have had for the project. In the past three years, I have studied the relevant literature in the areas of process facilitation, organizational learning, experiential learning, counseling, and the psychology of tacit knowledge. Moreover, since 1996, I have designed and facilitated over 100 experiential learning programs, mostly for college-age and professional populations. I have also trained over 100 novice and experienced practitioners in group process facilitation skills through courses offered by UVA’s Division of Continuing Education, through in-house trainings offered by UVA’s Poplar Ridge Experiential Learning and Training, and through Longwood College. I have completed approximately 250 contact hours of training and supervised practice, including an intensive nine-day training from Learning Forum, Inc. open only to 12 hand-chosen facilitators per year. Since 1996, I have facilitated programs for six different experiential learning organizations, including Challenge Discovery Outdoor Adventures, Learning Forum, Inc., Passages, Inc., Falls River Inc., and, of course, UVA. Recently, I have also begun to independently design and lead experiential training and development programs for my own clients, such as the Virginia Department of Emergency Medical Services.

As a result of this background, I felt very confident facilitating this research process professionally. I contend that my substantial experience in the design and facilitation of adult experiential training and development programs minimized any undue influence associated with researcher incompetence.

Value Imposition. Despite growing agreement that “Value-free interpretive research is impossible” (Denzin, 1989, p.23), the charge that a qualitative researcher may have unknowingly imposed his values, beliefs, or biases onto the participants, and may have thus unduly influenced the data, is perhaps the most common criticism of any qualitative inquiry (Patton, 1990). Qualitative research projects such as this one may appear to be more subjective and more open to researcher influence than statistical or experimentally based studies. But appearances often deceive. The tendency for qualitative inquiries to appear more open to influence does not mean that they actually are so. The apparent surplus of researcher influence expected in qualitative inquiries may be due to the inherent frankness and candor with which qualitative methodologies expose the inevitability of such influence. Similarly, the relative paucity of researcher influence expected in experimental or statistical studies may be due more than anything else to the relative stealth with which such influence can be buried deeply within quantitative, experimentally based methodologies.

 

...the ways in which measures are constructed in psychological tests, questionnaires, cost-benefit indicators, and routine management information systems are no less open to the intrusion of the evaluator’s biases than making observations in the field or asking questions in interviews. Numbers do not protect against bias, they merely disguise it. All statistical data are based on someone’s definition of what to measure and how to measure it (Patton, 1990, p.480).

 

I agree with Patton’s argument above. Researcher influence is not a fatal research defect; it cannot be eliminated without also eliminating the difficult-to-describe but inherently meaningful quality central to worthwhile, practical, and innovative research within the human sciences. I contend that researcher influence is, in reality, an inevitable artifact of meaningful, action-based, participatory research such as this, and that it should be discussed, considered, and understood so that each reader can determine on their own whether the influence is acceptable or undue. Below, I summarize and then explain in detail the four steps I have taken to minimize unnecessary or undue influence.

 

·        A statement of some of my values, beliefs, etc. that are relevant to this project

·        An explanation of my relevant expertise in facilitating process without content

·        Evidence of situational member checking I used to clarify vague interactions

·        Evidence of twenty-one instances of potential influence in the data and the significance of each

 

First, I reveal six of my own values, beliefs, assumptions, and biases that I see as pertinent to this inquiry.

 

·        I believe that experiential learning is an extremely powerful aspect of both formal and informal endeavors of human learning.

·        I think that a better understanding of how humans learn from experience can improve and, for lack of a better word, humanize a wide range of formalized educational programs.

·        I hold that the distinction between cognitive and intuitive thinking is at once subtle and subjective, and at the same time explicit and categorical.

·        I think that researchers and practitioners interested in the field of applied experiential learning have not yet combined their expertise to generate a practical and conceptually stable model of how cognition and intuition interact, but that they may in time.

·        I assume that the data from my first attempt at large-scale research will yield very little conclusive theory, but will generate a large number of unique and interesting hypotheses and connections with existing literature.

·        I suspect that the data will reveal some significant connection between the process of experiential learning and the concept of intuitive thinking, but that this connection will be far from completely understood and will be open to wide interpretation.

 

Second, my experience in facilitating adult learning groups has given me the expertise to facilitate the process of a discussion while only minimally, if at all, influencing its content. This particular expertise, called “Process Facilitation,” is dramatically different from many content-based pedagogical skills typically taught in the field of education. Process Facilitation is, however, a well-established and theoretically sound practice in the fields of counseling and organizational development, and it is integral to any formalized experiential learning procedure (Schein, 1999). In this inquiry, I did influence the process of the discussion, as does any researcher who asks any question of any participant. However, my specialized background described above allowed me to take greater care than most researchers can to avoid influencing participants toward or away from any particular content in response to a given question. As I participated in each interaction with participants, I made deliberate and conscious efforts to avoid facilitating or influencing content in any way, and I contend that this helped minimize undue researcher influence.

Third, at eleven points during the various interviews, I became aware that I was not fully confident that I understood what a given participant had said or meant (Bridget, Step Three, pp.8, 11, 31; Karen, Step One, p.12, Step Three, pp.4, 17, Step Four, p.5; Molly, Step Three, pp.15, 17; Taylor, Step One, pp.7, 8). In each of these eleven cases, I took one of two actions to minimize my influence while still maintaining the clarity and focus of the interview. I either carefully reflected the comment verbatim back to the participant for clarification, as is the established precedent in non-directive counseling (Rogers, 1961), or directly asked for clarification. I contend that these careful reflections or requests for clarification minimized any undue researcher influence. One example appears below.

 

Karen:    As I interviewed, every single interview, I interviewed with 21 different people [laughter].  Um, each one of them highlighted the challenge of moving from operations to moving to, um, leadership of the department and how am I going to let go of … these operational … and as you can hear, the first things come to my mind are always … operations and … how to let go of that and have more of a leadership view rather than, um, get this work done sort of view, and so every single person mentioned it [laughter] during the interview process, like Becky, you need to figure out how to delegate.

 

JM:         You’re coming from a worldview or frame of mind that’s all about solving problems.

 

Karen:     Yes.

 

JM:         And moving into a worldview about managing people, about facilitating an organization full of people who solve problems.

 

Karen:     Right (Step One, p.13).

 

Fourth, after completing the data analysis and developing the final Grounded Theory, I conducted a final investigation of the data to search specifically for instances in which I might have unduly influenced participants without being aware of it at the time. My intention was to examine the data as thoroughly, as skeptically, and as objectively as my toughest critic would. Specifically, I read through each participant’s transcripts in reverse chronological order, Step Four to Step Three to Step One, and essentially from the end of each interview to the beginning. Exit interview data were excluded because they were not used in the generation of the Grounded Theory. I collected from the transcripts any evidence indicating that I might have unduly influenced participants in a significant way. In particular, I searched for incidents in which:

 

·        a participant inquired in any way about what I wanted

·        a participant expressed uncertainty about something I said

·        the process of the participant-researcher interactions deviated significantly from the interview protocol

 

            I collected twenty-one incidents of potential influence. The document quoting all of these incidents, including my assessment of the significance of each, is on file and available upon request. Below, I summarize the significance of each incident, grouped by participant.

            Only one incident occurred with Taylor. At one point, she asked “Is that [her previous comment] what you want?” (Taylor, Step Three, p.32). I rephrased the original interview question back to her without specifying the content. She continued her response with no apparent undue influence.

            Molly’s data contained six incidents. Field notes indicate that Molly was the most reluctant of the four participants. Her interviews were the most non-standard and contained the most examples of my deviating from the interview protocol. At her insistence, and against my better judgment, we conducted Steps Three and Four in her office during the middle of the workday. In the first incident, I sensed that her comments were inappropriately focused on the group’s performance rather than on her own. I deviated from the interview protocol to try to turn her focus toward her own performance, as always without specifying content. Molly quickly adapted to the change and continued without apparent undue influence (Molly, Step One, pp.1-2).

            The next five incidents all follow the same pattern (Molly, Step One, pp.2-3; Step Three, pp.5-7, 8-10, 18-19; Step Four, pp.2-4). In these incidents, Molly’s responses became increasingly brief and superficial. In addition, I sensed increasingly consistent cues indicating her desire to get to the end of the interviews as quickly as possible without outright quitting. I made efforts to re-engage her interest. Specifically, I deviated from the protocol only by skipping interview questions that she considered redundant, electing to use very few probing or follow-up questions, and asking the following, admittedly more directed question: “What strikes you about that? You seem to always say in your sentences or in your stories, ‘...and I did that.’” (Molly, Step Three, p.19). To this, Molly responded that it was the structure of the protocol-based thought fragment that influenced her answer. This response seemed to indicate undue influence. However, because of Molly’s clear pattern of impatience and disinterest in the substance of the interview process, and because of the lack of supporting evidence from other participants, I am inclined to generally disregard her comment. I contend that Molly’s data, while unusual and substandard, contains no substantial evidence of undue researcher influence.

            Karen’s data contained nine incidents of possible influence. Seven of these were brief interactions in which she asked, in regard to her most recent response, “Is this ok?” or “Is this what you’re looking for?” (Karen, Step One, pp.2, 2-3, 3-4, 7-8, 18-19; Step Three, pp.5-7, 11-14). The large number of these nearly identical interactions, unprecedented in the interviews with any other participant, indicates to me a stable personality trait of Karen’s. I sensed this pattern after the fourth incident and decided to do two things. First, I maintained the consistency of my responses with a brief, affirmative “Yes” or “Yeah.” Only once, regrettably, did I slip and remark instead, “That’s perfect” (Karen, Step Three, pp.5-7). Second, I deviated from the interview protocol to give Karen a brief summary of the questions I would ask her from that point until the end of the interview. My assumption was that the more knowledge she had, within reason, about the scope and depth of the interview, the less she would need to ask the questions she had so far consistently asked about the appropriateness of her responses. Because the interview protocol followed a predictable pattern, I concluded that the influence incurred by my decision to summarize upcoming questions was negligible.

            In the eighth incident, I deviated from the interview protocol to request that Karen shift her focus from organizational challenges to personal challenges she faced within the organization. I deliberately avoided suggesting a particular challenge, and instead highlighted the difference between organizational and personal challenges in general. She expressed slight hesitation at discussing such personal topics. I immediately paused, reaffirmed her control of the scope and depth of the interview, and reassured her that her continued emotional comfort would determine that scope and depth. She agreed, seemed satisfied, and continued without apparent undue influence (Karen, Step One, pp.5-6).

            Finally, in the ninth incident, as Karen was working through the section of the processing session concerned with similarities in the three narratives, she noticed her tendency to wonder whether or not she was cooperating with me (Karen, Step Three, p.16). She realized this pattern existed in all three narratives, but declined to expound further on it. I decided that probing further on this topic could easily lead to a self-referential discussion rife with complexity and contradiction. I therefore chose to preserve the established momentum and direction of the interview, acknowledged her comment, and listened as she continued on to a subsequent topic. It is unclear to me how much undue researcher influence, if any, this interaction indicates. In retrospect, I wish that I had revisited this topic in the Exit Interview.

            Bridget’s data revealed five incidents. Four were instances of her asking, essentially, whether her responses were of the scope and depth I was seeking (Bridget, Step One, p.12; Step Three, pp.11, 17; Step Four, pp.6-7). All four times, I answered briefly and positively without offering any details, and I tried to shift the focus of the interview away from me and back onto her as quickly as possible. During the fifth and final incident (Bridget, Step Three, pp.23-25), I sensed that Bridget’s comments were inappropriately focused on the group’s performance rather than on her own. I deviated from the interview protocol to try to shift her focus toward her own performance, again without specifying content. She offered slight resistance, but quickly adapted to the change and continued without apparent undue influence.

            Four final incidents of potential undue influence deserve attention here. In accordance with the established protocol, during the first interview with each participant, I directed the participant to describe several meaningful personal challenges she faced. I then asked her to choose one of these to explore in more depth. Two participants, Bridget and Molly, made this decision without indicating a need or desire for any input from me.

            The third participant, Taylor, indicated several issues, only one of which I considered appropriate for the scope and depth of this project. I gently suggested my preference that we explore that challenge. Taylor indicated that she had simultaneously made the same choice (Taylor, Step One, p.10). I contend that this interaction, while indicative of significant researcher influence, was of minimal consequence because the choice of which personal challenge each participant explored more deeply proved irrelevant to the development of the final grounded theory.

            Finally, the fourth participant, Karen, outlined three challenges that her organization faced and three personal challenges she faced within the organization. The transcripts indicate that she noticed the same organizational/personal distinction that I did. I suggested she choose one of the three personal challenges to explore further (Karen, Step One, p.11). She decisively agreed and continued the interview. I contend that my interactions with Karen do not constitute evidence of undue researcher influence.

            In conclusion, after thorough analysis of the twenty-one incidents of possible influence above, I contend that no significant evidence of undue researcher influence exists, and I find myself in agreement with Patton’s opinion that “...evaluator effects are often considerably overrated...” (Patton, 1990, p.474).
