
Chapter 1: Introduction

More than 2 million people sign consent forms in the United States each year for participation in more than 58,200 clinical research protocols (Centerwatch, 2008). Each participant’s signature on the form is considered evidence that he or she has provided informed consent—that he or she understands the clinical research and has agreed to participate voluntarily in it.

However, serious questions continue to be raised about whether the informed consent process is accomplishing its mandated goals. Such challenges have come from critical events, expert testimony, policy statements, and a substantial body of research. The gravity and urgency of these issues were highlighted most recently by a report in the New York Times (Foderaro, 2009) on an investigation into the enrollment in clinical trials of children with HIV/AIDS who were cared for as part of the New York City foster care system. According to Timothy A. Ross, the codirector of the Vera Institute of Justice, which conducted the audit, “We found a disturbing lack of consent forms, and in some cases we found hand-written consent forms instead of the official consent forms” (p. A26). Ross added that such makeshift permissions violate both city policy and federal regulations. Moreover, the study found serious breakdowns in the implementation of the informed consent process; for example, informed consent forms were missing in 21% of the cases.

Even in clinical trials in which informed consent is ostensibly obtained, serious challenges can result, as illustrated by the deaths in 1999 and 2007 of two research participants in gene transfer trials. Jesse Gelsinger and Jolee Mohr were both volunteers in gene therapy trials (Caplan, 2008). Gelsinger had a mild form of ornithine transcarbamylase deficiency, controlled with a low-protein diet and medication. He died at age 18 after an injection of a gene product as part of a clinical trial. Eight years later, Jolee Mohr, a 36-year-old with mild arthritis in her knee, died after receiving an experimental gene product, leaving behind a 5-year-old daughter (R. Weiss, 2007).

These incidents provoked criticism from the media, self-scrutiny by institutional review boards (IRBs), and new legislation. In response, policies on research participant protection were examined at the highest levels of the federal government, spurring reform and an improved process for informed consent (Shalala, 2000). The reforms included restricting the use of confusing terms such as therapy in gene therapy, substituting the term gene transfer trials to preclude volunteers’ misunderstanding that such “therapy” was an established treatment.

Despite these reforms, challenges continue to be raised by bioethicists in the field. “We have not come all that far between the deaths of Jesse Gelsinger and Jolee Mohr when it comes to informed consent practices,” according to Arthur Caplan, professor of bioethics at the University of Pennsylvania. “Despite a lot of rhetoric and hand-wringing, very little has changed in the past decade about the process of informed consent. But if it’s broken, shouldn’t it be fixed?” (Caplan, 2008, p. 5). Caplan’s observation is supported by ample evidence from studies of comprehension of written informed consent that have shown that the process often fails to achieve its intended goal of participant comprehension and an informed decision to participate (Siminoff, 2003).

Attempts at improvement in the informed consent process have been ongoing for several decades without notable change (Cohn & Larson, 2007; Verheggen, Jonkers, & Kok, 1996). To measure the quality of the process, investigators have used questionnaires or interviews, primarily in the post-consent period (Agre & Rapkin, 2003; Astley, Chew, Aylward, Malloy, & Pasquale, 2008; Flory & Emanuel, 2004; Sugarman et al., 2005).

Purpose of the Research

The purpose of this research was to develop and test a new tool for evaluating the quality of the informed consent process for clinical research. The tool was designed to (a) be observational, (b) contain the mandated elements of informed consent as established by the U.S. Department of Health and Human Services (DHHS), (c) contain the essential elements of communication in a health care encounter as outlined in the Kalamazoo Consensus Statement (Bayer-Fetzer Conference, 2001), and (d) represent a balance between the informational content and the style and clarity of the communication process in the informed consent interaction.

Aims and Objectives

To develop and test an observational tool to assess the informed consent encounter in clinical research.

To develop a set of standardized vignettes (in the form of DVDs) to be used to assess the psychometrics of this observational tool.

Study Model

A concept analysis of informed consent was performed following the process proposed by Walker and Avant (2005). This concept analysis produced a two-dimensional, interactive model of the consent encounter. The model represents both the essential informational content of informed consent for clinical research and the communication process between the research participant and the researcher. Informed consent is most likely to occur at the intersection of conveying accurate factual information and using an interactive communication process.

Operational Definitions of Central Concepts

Informational Content of the Interaction

Informational content of the interaction is defined as a factually accurate and complete description of the eight study-specific elements of informed consent as articulated by DHHS (Protection of Human Subjects, 2008).

Communication Process of the Interaction

The communication process (referred to in the model as process) was developed on the basis of published research (Joosten et al., 2008; Makoul & Clayman, 2006; Schirmer et al., 2005; Weiss & Peters, 2008) and the recommendations of the Kalamazoo Consensus Statement (Bayer-Fetzer Conference, 2001).

Informed Consent as Defined in the Model

Informed consent for clinical research is represented in Figure 1 at the intersection of the presentation of accurate informational content and a comprehensive communication process.

Informed consent for clinical research is not reaching its intended goal. Accurate information and effective communication techniques have been identified as the components of a quality informed consent encounter. This study proposes the development and testing of an observational tool to measure the quality of informed consent for clinical research.

Historical Overview of Informed Consent for Clinical Research

Concerns about informed consent date back to 1900, when the first written consent for human participant experimentation is credited to U.S. Army physician Walter Reed. After seeing the devastating effects of yellow fever during the Spanish-American War, George Miller Sternberg (the U.S. Surgeon General at that time) created a medical commission to study the disease and appointed Walter Reed as its chair. The major barrier faced by the commission was that the work required human experimentation; no animal model could be used because the disease affected only humans (Moreno, 1999). The members of the commission concluded that if they were going to experiment, they should be willing to experiment on themselves, and they did so (Pierce, 2003).

However, incidents of unethical human experimentation continued to occur, with especially egregious cases provoking further concern and increased regulation. In the Tuskegee Syphilis Study, begun in 1932, African American men with syphilis were left untreated, even after penicillin had been proven to be a cure. In the 1956 Willowbrook studies, parents of children with intellectual disabilities were refused residential placement if they were unwilling to have their children deliberately infected with the hepatitis virus (Krugman, 1986).

Regrettably, the implementation of this standard protocol has not proven effective in ensuring that patients fully understand the consent they are signing (Joffe, Cook, Cleary, Clark, & Weeks, 2001a). This form of consent may amount to simply an administrative procedure: presenting to a prospective research participant materials, in written or oral form, consisting of the mandated informational elements. It puts researchers in compliance with legal requirements and produces signed forms, but it may not achieve genuine consent from patients, consent based on adequate understanding and informed judgment (Kurtz, Silverman, Benson, & Draper, 2003).

Interventions to Improve Informed Consent

This review of descriptive and interventional studies found that the evidence for current interventions designed to improve the quality of informed consent is inconclusive. Outcome measures used either the essential elements of informed consent or communication skills. Eighteen of the 25 studies identified (72%) used the essential elements of informed consent as the outcome measure, and 7 of the 25 (28%) used communication with the potential participant as the outcome measure. A higher level of education (reading level) was identified in 5 studies (20%) as the only independent variable positively associated with comprehension (Agre & Rapkin, 2003; Henderson et al., 2006; Joffe, Cook, Cleary, Clark, & Weeks, 2001; Sudore et al., 2006; Verheggen et al., 1996).

Interventions to improve comprehension included modification and simplification of the consent form (Bjorn, Rossel, & Holm, 1999; Coyne et al., 2003; Davis, Holcombe, Berkel, Pramanik, & Divers, 1998; Dresden & Levitt, 2001; Paris et al., 2007), the use of alternative methods of consent such as audiovisual aids (Agre & Rapkin, 2003; Astley et al., 2008; Joseph et al., 2006; Strevel, Newman, Pond, MacLean, & Siu, 2007), and the addition of a conversation with a consent educator or nurse about study participation (Aaronson et al., 1996; Coletti et al., 2003; Woodsong & Karim, 2005; Yuval et al., 2000).

The use of a modified form (simplified language and the addition of illustrations) significantly improved comprehension in three studies and produced no improvement in two. Bjorn et al. (1999) randomized healthy volunteers to read either a simplified or a standard consent form for one of two actual studies; comprehension improved in the group reading the simplified form (p < .05). Dresden and Levitt (2001) recruited asthmatic participants in the emergency department into a treatment trial using either a simplified or a standard written consent form; these patients likewise showed improved comprehension with the simplified form (p < .05). Last, Paris et al. (2007) showed improved comprehension with a “working group” consent form, which had been modified by a research nurse, an IRB ethicist, and a healthy volunteer. However, a simplified or modified consent form did not improve comprehension for patients enrolling in a chemotherapy trial (Davis et al., 1998), in which the simplified form was found to be easier to read (p = .0001) and less frightening (p = .0079) but did not improve comprehension (p > .05). Nor was improvement seen by Coyne et al. (2003), who randomized 207 cancer patients enrolling in a clinical trial to receive either a simplified or a standard consent form and found no significant difference in comprehension (p = .21).

The use of alternative presentation methods, including such visual aids as videos or DVDs, was tested in 4 (16%) of the 25 studies. In Agre and Rapkin (2003), Strevel et al. (2007), and Joseph et al. (2006), videos or DVDs were found to be beneficial in improving understanding; however, Astley et al. (2008) found no difference in either satisfaction (p = .099) or information recall (p = .29). Last, Weston, Hannah, and Downes (1997) compared a standard consent with a video and found no significant difference in initial comprehension; however, 2-4 weeks later, the video group showed some improvement.

The combination of a written consent form and a person to speak with about the trial was tested by Coletti et al. (2003) with 4,892 healthy volunteers and by Aaronson et al. (1996). Both found that the addition of a person, either a nurse or a consent educator, improved comprehension significantly. Aaronson et al. studied 180 cancer patients and compared written consent alone with written consent plus a telephone-based research nurse; the intervention group had greater comprehension of informed consent (p < .01). Yuval et al. (2000) similarly found that the opportunity for discussion and the ability to ask questions was the only intervention to improve comprehension significantly (n = 150, p < .001) in cardiac patients.

In this review, each study evaluated the effectiveness of its intervention using a different outcome measure and measured a different part of the encounter, most in the postconsent period. Most investigators (22 of 25, or 88%) used untested survey tools to measure effectiveness or recall. This created difficulty in comparing results across studies because the outcome measures and evaluation tools did not match. Moreover, it should be noted that testing postconsent recall may itself be problematic.

Tools That Measure Informational Content in Informed Consent

As noted earlier, the DHHS (2004) has mandated information that must be included in an informed consent document. The Belmont Report (U.S. Department of Health, Education, and Welfare, 1979) has placed the responsibility for the selection of an effective communication process with the research investigators on behalf of their participants. But how can this informed consent process be measured, and why did investigators choose to design their own instruments?  The tools to measure the outcome variables were different for each study and included investigator-developed and psychometrically tested tools (Joffe et al., 2001b; Verheggen et al., 1996), investigator-developed and psychometrically untested tools based on the essential elements of consent (Coletti et al., 2003; Dresden & Levitt, 2001; Weston et al., 1997), tools translated from essential elements required in France (Paris et al., 2007), investigator-developed questionnaires (Agre & Rapkin, 2003), semistructured (Aaronson et al., 1996) or open-ended interviews (Henderson et al., 2006), true-false (Joseph et al., 2006; Sudore et al., 2006) or yes-no questionnaires (Ferguson, 2002), and analysis of verbatim transcripts (Stead et al., 2005).

A literature search was conducted in PubMed, limited to English-language articles published from January 1982 to January 2009, with the following terms in either the title or the abstract: measurement tools and informed consent, yielding 8 studies; tools to measure informed consent, yielding 16 studies; measure consent and clinical trials, yielding 340 studies; and measuring informed consent in clinical trials, yielding 261 studies, many of which were repeated from the earlier search. Articles were screened by title, then by abstract, and those that were relevant were printed and reviewed. The reference lists of relevant studies were additionally hand searched to identify additional papers that might have been missed in the electronic search. Seven tools were identified that measure the quality of informed consent (Table 2).

All of the tools identified evaluated the informed consent encounter in the postconsent period. But such postconsent self-reports and forced-answer questionnaires have been shown to result in an overestimation of patient comprehension, specifically in terms of trial aims (p = .0005), eligibility to participate (p = .048), risks and benefits (p = .0005), and compensation for injury (p = .0005), when compared with vignette or narrative measurements of comprehension (Lindegger et al., 2006). There remains, therefore, a need for a new observational tool to measure the events of the encounter, including both the informational content conveyed and the communication between the practitioner and the participant. Without such a tool, researchers are thwarted in evaluating new strategies for improvement. That is, we cannot improve what we cannot measure (Hutchison et al., 2007).

The Issue of Capacity to Consent

In the design of the Process and Quality of Informed Consent (P-QIC) tool, the critical issue of the participant’s capacity to consent was considered. This capacity is distinct from competence or incompetence as a legal construct, which is usually decided in a court of law. Assessments of capacity to consent remain in the purview of the researcher or clinician.

To aid in such assessments, several tools have been developed, and these were reviewed in developing the P-QIC. The MacArthur Competence Assessment Tool for Clinical Research (MacCAT-CR) is the most thoroughly developed and widely tested tool for assessing the decisional capacity of patients. This instrument includes assessment of understanding, reasoning, appreciation, and choice as they relate to consent for research (the parallel MacCAT-T addresses consent to treatment).

These tools were developed specifically for assessing capacity to consent and can be administered either as screening tools or as part of the post-consent process. In developing the P-QIC, we were guided by the thoughtful framework and examination of issues of informed consent and capacity that underlie these instruments. Thus, although the P-QIC was not designed to test subjects for decisional capacity, its items were constructed with recognition of these principles.

The Problem of Therapeutic Misconception

Both patients and clinicians often exhibit a misunderstanding that undermines the informed consent process. This misconception arises because the goals of clinical research differ from the goals of medical care (Henderson et al., 2006). Specifically, patients know that medical care is designed to benefit them individually. However, they may not realize that research is often conducted to benefit not them, but other patients in the future. This problem may be further compounded by ambiguous messages from researchers. This misunderstanding, termed therapeutic misconception (Appelbaum, Roth, & Lidz, 1982), results when participants believe that medical researchers are making decisions on the basis of the participant’s best interests as an individual rather than realizing that their assignment to a treatment group may depend on the research design. This specifically affects the participant’s ability to make an informed decision when choosing between the best known medical treatment and the experimental treatment of a research protocol.

There is a parallel rationalization on the part of researchers when they themselves erroneously believe, or inadvertently convey, that experimental therapies might be more beneficial than the study design allows. Henderson et al. (2006) focused on Phase 1 studies to examine this misconception because such studies are designed to assess safety and side effects and hold little potential for improved clinical outcomes. When experimental treatments, usually medications, successfully advance through the study phases, clinical trials become larger and the potential for receiving an experimental medication at a therapeutic dose increases (“Clinical Trials,” 2008). In a 2008 PBS broadcast, Dr. David Ryan, a clinical researcher from Massachusetts General Hospital and assistant professor of medicine at Harvard Medical School, explained a Phase 1 trial to a participant in the following way: “No good drug that we have today wasn’t a Phase 1 study at some point. And so, what you’re really doing is trying to see if you can latch onto that next great drug” (Gannon, 2008). This statement is an optimistic presentation of a Phase 1 trial, in which the medication is often delivered at a low dose to test for safety. At Phase 1, investigators have not yet established a therapeutic range, so it is unclear how much benefit can be realized, if any. The phases of clinical trials are presented here as Table 3.

The overly optimistic view of clinical trials is consistent with previous research (Cheng et al., 2000; Cox, Fallowfield, & Jenkins, 2006; Meropol et al., 2003) illustrating that many participants report unrealistic expectations of benefit from a Phase 1 trial. These misguided expectations in early-phase oncology clinical trials were described by Weinfurt et al. (2008): participants reported a median expectation of 80 out of 100 that the clinical trial would cure or control their cancer. This finding was further substantiated by Chang (2008), who found that almost half of the nurses surveyed (47%) felt that families who had enrolled a child with cancer into a Phase 1 clinical trial did not understand the goal of the trial. The consequences of participants not fully understanding the purpose of the research, or having erroneous expectations of outcomes, can be distrust, attrition, and dissatisfaction with research participation (Moutsiakis & Chin, 2007).

Therapeutic misconception may go undetected by the tools currently available to measure consent encounters, because those tools do not permit observers to discern the aspects of the encounter that would reveal its presence. For example, a tool that measures only whether the consent administrator included all the informational elements, but not whether the administrator invited the potential participant to “play back,” or repeat in his or her own words, the purpose of the study, would fail to capture the fact that some participants misinterpret the purpose of the study and actually think that it will treat or cure their disease. Hewitt (2007) explored how nurse researchers can minimize the misunderstandings of research participants by using techniques identified in the literature on effective communication.

Summary

The prevailing approach for obtaining informed consent is flawed, and attempts to improve the process have been largely ineffective. The results of the individual studies were inconclusive; taken collectively, however, they suggested one promising approach: the interventions that produced the most positive results were those that incorporated “best practice skills” in communication. Progress has been limited in part because practitioners could not improve what they could not measure. In the next chapter, methods to develop and test a tool that can measure the quality of both the information and the communication in informed consent for clinical research are described.

Chapter 2: Method

The aim of this study was to develop and test an observational tool designed to assess the quality of the informed consent process for clinical research. This chapter describes the methodology for the development and testing of this tool, the Process and Quality of Informed Consent (P-QIC; Appendix A).

Sample Size

The calculation of sample size for studies comparing two or more population means is well established (Cohen, 1969). The conventional formula requires values for power (1 - β), significance level (α), the minimum population mean difference of importance (Δ), and the standard deviation (σ) (Browne, 2001). The calculation is designed to ensure that enough participants are recruited to minimize Type I and Type II errors, but not so many as to overpower the study, which would waste resources and imply a lack of respect for the participants’ time. Dell, Holleran, and Ramakrishnan (2002) reviewed sample size calculations based on the design of the experiment and the type of data, and recommended that, for a two-group comparison of means and for paired data, the standard deviation be divided by the difference to be detected and the result squared. Whitley and Ball (2002) advocate the use of predetermined nomograms for sample size calculation to avoid underpowering the study. Applying these parameters to this study yielded an estimated ideal pilot sample of 60 participants.
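For reference, a commonly used closed form consistent with these parameters (a standard textbook result; the dissertation itself does not state an explicit formula) gives the per-group sample size for a two-group comparison of means as

\[
n = \frac{2\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}\sigma^{2}}{\Delta^{2}},
\]

where z denotes standard normal quantiles. For example, with α = .05, power = .80, and a difference of one standard deviation (Δ = σ), n = 2(1.96 + 0.84)² ≈ 16 per group.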

The tool was developed using a methodology adapted from Dillman (2007) and tested for validity and reliability using standard psychometric methods (Munro, 2005). To establish the psychometric properties, a set of DVDs depicting informed consent encounters was developed. Each of the four DVDs portrayed a different type of consent encounter. The DVDs were uniform (i.e., the same actors, same setting, and same consent form) except for the intentional variations described below:

The information scenario included all eight mandated informational elements but was lacking in communication skills.

The communication scenario incorporated the important aspects of communication identified in the Kalamazoo Consensus Statement (Bayer-Fetzer Conference, 2001) and the associated literature but included only three of the eight informational elements.

The combination scenario included six of the eight mandated informational elements and incorporated principles of communication from the Kalamazoo Consensus Statement.

The null scenario included only two of the mandated informational elements and did not include the principles of communication.

Testing was then conducted to determine whether raters using the tool would be able to detect these intentional differences by scoring the scenarios on the DVDs with the P-QIC. The following sections describe (a) the development and testing of the observational tool and (b) the development of the DVDs (Table 4).

Development of the P-QIC

An evidence-based approach was used in the development of the P-QIC. The P-QIC was based on the literature review and on a tool initially developed for quality assessment purposes by the IRB at Columbia University Medical Center (Appendix B). That first tool, called the Observer Checklist, consisted of 25 items that focused primarily on whether factual information regarding the research was given during the consent; it did not assess the quality of communication in the interaction. As part of preliminary data for an R01 grant application (Impact of Consent Administrator Training on Participant Comprehension of Research), the original checklist was adapted by investigators at Columbia University and the Morehouse School of Medicine and tested in its adapted form.

In the first adaptation, the form was modified and restructured using principles from the Kalamazoo Consensus Statement (Bayer-Fetzer Conference, 2001) while retaining the factual items of the essential elements of informed consent. The new 25-item tool included five items based on the principles of good communication in a health care interaction between a patient and a provider and 19 items that measured the mandated elements of informed consent.

They reported that the pre- and post-tests accurately reflected information important to the consent process (content and construct validity). Overall, the Checklist was reported to be clear and easy to use, and slight modifications were made. The inter-rater reliability of the Observer Checklist was assessed by eight research assistants who had not been part of its development but who had extensive experience in obtaining informed consent. Intraclass correlations (ICCs) were calculated from a regression analysis as a measure of inter-rater agreement. The ICC was .887. This finding, that 88.7% of the variability in ratings was contributed by the items and groups and only 11.3% by the raters, represented a high level of inter-observer agreement.
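As a sketch of the interpretation above (the precise variance decomposition used in the original analysis is not reported here), an ICC can be written as the share of total rating variance attributable to true differences among the rated targets rather than to raters or error:

\[
\mathrm{ICC} = \frac{\sigma^{2}_{\text{targets}}}{\sigma^{2}_{\text{targets}} + \sigma^{2}_{\text{raters}} + \sigma^{2}_{\text{error}}}.
\]

On this reading, an ICC of .887 means that raters contribute little of the observed variability, which is the desired property for an observational tool.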

After initial pilot work was completed at Columbia University, the Observer Checklist was sent to Morehouse School of Medicine in Atlanta, Georgia. At that site, the Observer Checklist was reviewed by five individuals, including one principal investigator, two experienced research coordinators, one inexperienced research assistant, and one research participant, to ensure that questions were worded in a culturally appropriate way. A few changes in wording were made on the basis of this additional input.

The P-QIC tool used in this dissertation study was a further modification of the Observer Checklist. The P-QIC revisions included reducing the total number of items to 20 and balancing the tool so that 9 of the items represented communication skills based on the Kalamazoo recommendations (Bayer-Fetzer Conference, 2001) and the other 11 represented the mandated process of consent based on the DHHS guidelines (Protection of Human Subjects, 2008). In scaling down and refining the items, some items were eliminated and others added. For example, the approximate number of participants in the study, which is included as part of the Columbia University research consent but is not mandated for consent in general, was eliminated from the form. After revisions, the tool was tested for reliability and validity (Table 5).

Development of DVDs

The goal of developing the DVDs was to establish the psychometric properties of the P-QIC, between and within raters, using a set of standardized vignettes depicting the consent interaction. Because the encounter needed to be standardized, role-plays or live scripted scenarios were thought to contain too much individual variation. The P-QIC and the DVDs were both designed to represent critical elements of the informed consent process.

The DVDs were intended to illustrate or omit important aspects of the consent encounter in as realistic a form as possible, and the tool was designed to measure them. To establish that the scenarios on the DVDs reflected current mandates and practice for informed consent and that the portrayals depicted such encounters realistically (i.e., content validity), expert consultation and review were obtained initially and throughout production. The review for content validity was performed by the former and current directors of the IRB of the North Shore-Long Island Jewish Health System, the IRB at Columbia University, and four currently practicing consent administrators (two from the North Shore-Long Island Jewish Health System and two from Columbia University). The DVDs were directed and produced by educational film consultants of Long Island Video (http://www.longislandvideo.com) under the direction of Jim Odie (http://www.jimodie.com), who directs and produces health-related educational videos for Healthology Webcases (www.healthology.com), an online medical education company.

The DVDs were recorded in a nursing school laboratory at Adelphi University in Garden City, New York. The laboratory had mock patient rooms with curtains, bedside tables, and trays set up to simulate a hospital room. All the DVDs featured the same actresses: a middle-aged Hispanic woman on an examination table and a female consent administrator wearing a lab coat. The consent form was based on Columbia University’s standard informed consent format for a study on the transmission of influenza in the community. In all the DVDs, the consent administrator approaches the participant, explains the study, and obtains consent from the participant. The scripts appear as Appendix C.

Reliability and Validity of the P-QIC Using DVD Scenarios

The P-QIC’s reliability and validity were established using the standardized DVDs as described below.

Procedure

A systematic approach to tool development, based on the process outlined in Dillman (2007) and modified for the development of an observational tool, was undertaken for this study.

Dillman (2007) established six iterative steps for tool development and pilot testing (Figure 2). After the tool was developed, it was tested for reliability and validity using the methods established by Brown (2006) and Munro (2005). An outline of the steps in development, the total number of respondents, the rationale for the number of respondents, and the function of each step is provided in summary form as Table 6.

Sample, Setting, Population, and Selection of Participants

The setting for the psychometric testing was the Columbia University Medical Center. Inclusion criteria for participants were health sciences students and faculty of Columbia University who were attending a course for which informed consent was a stated curriculum objective and who had completed the human participants research training, including that in Good Clinical Practice (GCP) and the Health Insurance Portability and Accountability Act (HIPAA; 1996) required in such courses.

Step 1: Literature Review

The results of the literature review are summarized in Chapter 1.

Step 2: Content Review

The final tool was developed to measure both dimensions of the informed consent encounter: information (content) as described in the DHHS guidelines (Protection of Human Subjects, 2008) and communication (process) as delineated in the Kalamazoo Consensus Statement (Bayer-Fetzer Conference, 2001). The essential aspects of both documents were incorporated into the 20-item P-QIC tool, with the DHHS guidelines represented by 11 items, the Kalamazoo Consensus Statement represented by 7 items, and 2 items representing factors known to affect participant comprehension of informed consent.

Similarly, principles of the Kalamazoo Consensus Statement (Bayer-Fetzer Conference, 2001) were translated into seven items: (a) greets and shows interest in the participant as a person; (b) observes and responds to nonverbal behavior; (c) invites the participant to engage in the decision-making process; (d) listens attentively and without interruption to the participant’s opinions about the research; (e) stops and answers questions during the interaction, providing specific and complete answers to questions or concerns; (f) uses language that promotes understanding, avoiding medical jargon; and (g) offers the participant the opportunity to accept, decline, or take more time to decide about enrollment in the study.

A review of the literature on comprehension of informed consent contributed to the selection of the last two items, which were found to be essential for assessing comprehension: (a) checks for participant understanding of information, asking the participant to explain the study in his or her own words, which assesses comprehension and helps to avoid the therapeutic misconception, and (b) ensures that the participant can read the consent form or has it read aloud before signing. The playback technique had been identified in the prior literature review as the best tool with which to assess comprehension (Epstein et al., 2005). The level of education, correlated with reading level, had been identified as the only predictive factor for comprehension of consent (Cohn & Larson, 2007).

Step 3a: Concurrent Think-Aloud

Three raters participated in this component of the testing. The concurrent think-aloud was based on methodology developed by Forsyth and Lessler (1991) and is one component of cognitive interviewing. The purpose of the think-aloud is to elicit feedback about format, instructions, and content; the technique was developed to determine whether questions are clear and can be answered as intended using the observational tool (Forsyth & Lessler, 1991). In the think-aloud, the raters verbalized the steps in their thought process while using the P-QIC. Before beginning, the think-aloud was practiced; for example, the rater was asked, “How many windows are in your home?” If the rater was silent while appearing to think, the researcher would gently probe, “Could you tell me what you are thinking now?” Following the instructions, the rater would begin to describe a counting process: “I’ll start in the kitchen; there is one over the sink; the living room has four, but one has three parts.” The practice continued until the rater was ready to use the observational tool aloud.

When the rater was ready, the researcher gave the rater four blank P-QIC tools and asked the rater to read the instructions aloud. After reading the instructions, the researcher asked the rater to start the DVD. The rater was seated next to the pause button and was instructed to pause the DVD and verbalize what he or she was thinking while using the P-QIC. For example, while watching the DVD, the rater might pause the DVD and explain to the researcher, “I heard the investigator ask the research subject to repeat back in their own words ‘the purpose of the research,’ and I thought she did it well, so I am checking the box that says ‘done well’ to indicate my selection.” This continued until the four DVDs had been viewed and the rater had verbalized the process of completing the observation check sheet.

Step 3b: Retrospective Interviews

Six new raters participated in this section. In these retrospective interviews, the researcher watched the raters as they used the form to rate the DVDs. The raters were instructed to pretend that the researcher was not present in the room and were encouraged to work independently. The researcher observed the raters and took notes while they completed the exercise. After the raters had watched the DVDs and used the observation form, they were interviewed about what the researcher had observed. For example, if a rater skipped an item, the researcher might say, “I see you left this one blank. Was there a particular reason for that?” If the rater frowned during the exercise, the researcher might say, “I noticed here that you seemed to be thinking really hard” or “Was there something about this question you were trying to figure out?” In a prior small pilot of this tool, a retrospective interview technique identified that raters found it hard to simultaneously read the check sheet and select answers while watching the consent encounter. On the basis of that observation, specific instructions were added directing raters to review and familiarize themselves with the sheet before observing the consent process.

Step 4: Final Check

A minimum of three raters was needed for participation in this component of the tool testing, and three raters participated (Dillman, 2007). The purpose of the final-check stage was to catch mistakes, inconsistencies, and points needing clarification, in the directions, the newly developed tool, or the DVD recordings, that the researchers, after repeated changes and revisions, might no longer notice (Dillman, 2007). For this stage, three baccalaureate-level nursing instructors who regularly used video-based tools for the education of nursing students, but who had not seen the tool before, used the observation tool while watching the DVDs. These instructors were experienced with a video platform for education but not specifically familiar with the informed consent process. They were seated in a room and provided with the most current version of the observational tool. They read the instructions provided, watched all four DVDs, and rated them with the tool. No mistakes necessitating correction were found at this stage.

Step 5: Pilot Study

The purpose of the pilot study was to simulate the actual procedures proposed for a large-scale main study (Dillman, 2007). A minimum of 40 raters was needed to participate (Shadish, Cook, & Campbell, 2002). The total number of actual participants was 63.

After IRB approval, e-mail contact was made with faculty of the Columbia University School of Nursing whose students met the eligibility criteria. When a faculty member agreed to participate, two weeks before the scheduled date the faculty member explained to the students that they would receive an informational letter regarding research on informed consent and that they should read the letter and consider whether they wanted to participate. One week before the scheduled date, an e-mail letter was sent to students using the e-mail address available through the course registration system at Columbia University (Web Courseworks, Madison, WI); this ensured that each student registered for the class received the document. The letter described the research and invited participation. It was customized to each class; included the date of participation; and outlined the purpose of the research, what was involved in the study, what the student would be asked to do, the risks and benefits, the alternatives to participation, how confidentiality was maintained, that participation was voluntary, contact information for the investigator, rights as a research participant, and contact information for the IRB at Columbia University. The students were additionally invited to contact the researcher by e-mail. Sending the letter one week in advance gave the students a chance to consider participation or nonparticipation and avoided an on-the-spot decision; it was also designed to reduce possible feelings of pressure or coercion to participate from the instructor or peers.

On the day on which the study was scheduled to take place, the classes were again informed about the voluntary nature of the study, and the faculty member turned the class over to this researcher. To maintain the confidentiality of those who participated and to reduce apprehension that nonparticipation might be viewed negatively, the faculty member was asked to leave the room and students were again given the chance to participate, to stay but not participate, or to leave. Information sheets were again distributed to the students. The investigator was prepared in the classroom with the DVDs and the observation forms. To ensure a random order for the viewing of the DVDs, a student from the class drew folded pieces of paper. The order of the draw became the order in which the scenarios were shown.

For the group of students participating in the test-retest psychometric testing, the same procedure was repeated; one week before the class, an e-mail was sent that was specific to this group and included details about the dates of participation and the purpose of the research as well as the test-retest procedure. The investigator again brought the letter, the forms, and the DVDs to the classroom. For raters participating in the test-retest, a sheet of paper was circulated, and raters were asked to make a mark (like a heart or a star) or place their initials next to a number on the paper. At the second test period, the paper was made available to the raters to refresh their memory, if necessary, about the selected number. This number was used to match forms for test-retest and was destroyed at the completion of the matching to maintain confidentiality.

Psychometric Analysis

Psychometric analysis is the methodology used to evaluate the properties of measurement tools, including validity and reliability. Reliability is the extent to which a tool produces a consistent result; validity is the extent to which it measures what it is intended to measure.

Step 6: Reliability and Validity Testing

Reliability and validity testing were performed in SAS (SAS Institute Inc., Cary, NC), SPSS version 14.0 (SPSS Inc., Chicago, IL), and Minitab 13.31 (Minitab Inc., State College, PA), as indicated.

Face Validity

Face validity reflects whether, “on its face,” a measure or procedure appears to be a good measure of the construct. The P-QIC was reviewed in its final form for face validity by a professor of health literacy at the Mailman School of Public Health (a committee member); by the director of research for Emergency Services at the North Shore-Long Island Jewish Health System, who is also an assistant research professor at the New York University School of Medicine (New York); by the members of the IRB of Columbia University; and by a physician from Columbia Presbyterian Hospital who has a specialized interest in patient-provider communication and has published articles and developed teaching curricula for improving patient-provider communication.

Content Validity

Content validity is defined as the extent to which a measurement reflects the specific intended domain of content (Carmines & Zeller, 1979, p. 20). The content validity of the P-QIC was established using the Lawshe (1975) rating system for review of the items and the method established by Veneziano and Hooper (1997, p. 67) to calculate a content validity ratio (CVR) for each of the 20 items of the measurement tool.

Each of the respondents recruited for this section rated each item on the P-QIC using the rating system developed by Lawshe (1975), shown as Table 6. A CVR was calculated for each item to determine “a numerical index for content validity” (Veneziano & Hooper, 1997, p. 67), using the formula

CVR = (ne - N/2) / (N/2),

where ne = the number of panelists rating the item as essential and N = the total number of panelists. Polit, Beck, and Owens (2007) proposed the use of the CVR, suggesting that an item with a CVR of .78 or higher for three or more raters could be considered evidence of good content validity. Therefore, for this section an item with a CVR of .78 or higher was considered to achieve good validity. Six raters participated in this section: three informed consent raters and three communication raters.
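As an illustration with hypothetical numbers (not data from this study): if five of six panelists rate an item essential, then

\[
\mathrm{CVR} = \frac{n_e - N/2}{N/2} = \frac{5 - 3}{3} \approx 0.67,
\]

which falls below the .78 threshold adopted here, whereas unanimous agreement (6 of 6) gives CVR = (6 - 3)/3 = 1.0.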

Convergent Validity

Convergent validity is the general agreement among ratings of measures that should be theoretically related; different measures of the same construct should be highly correlated with each other. For this measure, an ICC was computed in SAS, followed by a hand calculation, the results of which were additionally reviewed by a biostatistician on the faculty of the Department of Biostatistics, Mailman School of Public Health.

Discriminant Validity

Discriminant validity is the lack of a relationship among measures that theoretically should not be related; measures of different constructs should have low correlations with each other. An analysis of variance (ANOVA) was performed in SPSS 14.0 with the mean total score as the dependent variable and DVD (scenario) as the grouping variable.
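The logic here is that of a standard one-way ANOVA (the computational details are not reported in the source); the test statistic compares variability between scenario means with variability within scenarios:

\[
F = \frac{\mathrm{MS}_{\text{between}}}{\mathrm{MS}_{\text{within}}},
\]

so a significant F indicates that mean P-QIC totals differ across the four scenarios, that is, that the tool distinguishes the intentionally different encounters.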

Internal Consistency

Internal consistency is defined as the measurement of the consistency of performance of one group of individuals across the items on a single measure. A Cronbach’s alpha coefficient was calculated as an estimate of reliability.
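For reference, the standard formula for Cronbach’s alpha for a k-item scale (a textbook definition, not reproduced in the source) is

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{i}}{\sigma^{2}_{t}}\right),
\]

where σ²ᵢ is the variance of item i and σ²ₜ is the variance of the total score; for the 20-item P-QIC, k = 20.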

Test-retest Reliability

Test-retest reliability is defined as the consistency of measurements from the same rater on two occasions. Sixteen raters viewed the DVDs and rated the encounters twice, with either 1 or 2 weeks between viewings. The forms were matched by scenario, and the raters’ combined total scores for Time Period 1 were correlated with the total scores for Time Period 2, by group, using a Pearson product-moment correlation in SPSS 14.0.
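The Pearson product-moment coefficient referred to above is the standard formula

\[
r = \frac{\sum_{i}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}{\sqrt{\sum_{i}\left(x_i - \bar{x}\right)^{2}}\sqrt{\sum_{i}\left(y_i - \bar{y}\right)^{2}}},
\]

where, in this application, xᵢ and yᵢ would be a rater’s total scores at the first and second viewings (this pairing is an interpretation of the matching procedure described above).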

Step 5: Pilot Study

A pilot study was conducted to test the psychometrics of the tool: 63 raters, master’s and doctoral students from the Columbia University School of Nursing and the Mailman School of Public Health, used it to rate the DVD scenarios.
