Mini Review Volume 2 Issue 5
Gilead Sciences Inc., USA
Correspondence: Tu Xu, Gilead Sciences Inc., 333 Lakeside Dr., Foster City, CA, 94404, USA, Tel 3122061073
Received: May 10, 2015 | Published: May 18, 2015
Citation: Xu T. Statistical development on determining the minimum clinically important difference. Biom Biostat Int J. 2015;2(5):137-138. DOI: 10.15406/bbij.2015.02.00041
In medical research, the concept of minimum clinically important difference (MCID) has gained popularity among clinical practitioners and health policy makers. Over the past 20 years, intensive research has been conducted to explore how clinical significance can be interpreted from patient-reported outcomes (PROs) using the MCID. This article provides a review of the statistical developments in the determination of the MCID.
In clinical research, statistical significance is widely used to evaluate the effectiveness of new drugs or medical devices. However, there has been growing recognition that statistical significance can be misleading when evaluating a treatment effect.1 It is known that statistical significance only infers the existence of a treatment effect, regardless of the effect size, and therefore a statistically significant effect may have little to do with clinical significance. Statistical significance can result from small sample variability or a huge sample size. For instance, with the one-sample t-test statistic T = √n·X̄/S, where X̄ and S denote the sample mean and standard deviation of the change scores, statistical significance will always be declared once the sample size n is large enough, such as n > (1.96·S/X̄)², no matter how small the mean change X̄ is.
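To make this point concrete, the minimal sketch below (an illustration of the argument, not material from the article) assumes Python with NumPy and SciPy and uses hypothetical numbers: a clinically trivial mean change of 0.5 points with standard deviation 10 on a PRO scale. The one-sample t-test declares statistical significance once the sample size is large enough.

    # Hypothetical illustration: a fixed, clinically trivial mean change of
    # 0.5 points (SD = 10) becomes "statistically significant" once n is large.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_change, sd = 0.5, 10.0        # tiny mean change, large variability

    for n in (25, 100, 1000, 10000):
        change_scores = rng.normal(true_change, sd, size=n)
        t_stat, p_value = stats.ttest_1samp(change_scores, popmean=0.0)
        print(f"n={n:>6}  t={t_stat:6.2f}  p={p_value:.4f}")
    # As n grows, p falls below 0.05 even though a 0.5-point change is unlikely
    # to be clinically meaningful, which is precisely the motivation for the MCID.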
The minimum clinically important difference (MCID) was first proposed by Guyatt et al.2 to provide an appropriate assessment of the clinically meaningful benefit from a post-treatment evaluation. The most influential definition of the MCID was established by Jaeschke et al.3 as "the smallest difference which patients perceive as beneficial and which would mandate, in the absence of troublesome side effects and excessive cost, a change in the patient's management". The concept of MCID has quickly gained popularity among clinicians and health policy makers through its interpretation of clinical significance based on health-related patient-reported outcomes (PROs).4,5 King6 illustrates its popularity by noting that the number of publications on the MCID has trebled every 5 years over the past 20 years. From the regulatory perspective, the FDA's final guidance for industry on PRO measures was released in 2009.7 In November 2012, the FDA hosted a special conference on the MCID for orthopedic devices; see http://www.fda.gov/MedicalDevices/NewsEvents/Workshops/Conferences/ucm327292.htm for more details.
Although the importance of the MCID has been widely recognized, only a few ad-hoc approaches have been proposed for its estimation. The existing methods can be classified into two major types: anchor-based approaches and distribution-based approaches.7
Distribution-based approaches
Distribution-based approaches are methods that compare the change in PRO scores to some measure of its variability, such as the standard error of measurement (SEM), the minimal detectable change (MDC), the standard deviation (SD), and the effect size (ES). All of these methods try to capture the individual level of change that is considered to be beyond the range of measurement error. Wyrwich et al.8 defined the MCID to be equal to the SEM, computed as SEM = SD·√(1 − r), where SD is the standard deviation of the scores and r is the sample correlation (reliability) coefficient. Beaton et al.9 further defined the MCID as 1.96 × SEM, with 1.96 representing the Z-score of the 95% confidence interval. The ES is a statistic defined as the mean individual change divided by the SD, where the individual change is the change of the PRO score from baseline to post-treatment. In the literature, an ES of 0.2 (i.e., 0.2 × SD) is commonly taken as the MCID.10 Distribution-based approaches thoroughly examine the distribution of the PROs and define the MCID through the magnitude of change in PROs above the measurement error. However, as pointed out in the literature,11,12 the limitation of distribution-based approaches lies in their unclear relationship with clinical meaningfulness. Subjective bias in the PROs or the unreliability of a poorly designed questionnaire can make the resulting MCID problematic. Copay et al.13 even suggested that distribution-based approaches do not really address clinical significance and ignore the purpose of the MCID. The FDA guidance on PROs7 also states that the primary evidence should be provided by analyses of PROs combined with clinical anchors, and that distribution-based approaches are considered supportive.
Anchor-based approaches
The anchor-based approaches establish the MCID by exploring the association between the target PROs and some external criterion (the anchor).6,7,12,13 The commonly used anchors are clinical criteria that clinicians adopt to measure treatment efficacy, such as the widely used Eastern Cooperative Oncology Group Performance Status. Properly selected anchors naturally strengthen the connection between the interpretation of PRO results and clinical significance, and anchor-based approaches have quickly become favored by clinicians and health policy makers.7,11 Among the existing anchor-based approaches, the within-patients score change method and the between-patients score change method select a certain group of subjects based on the anchor, and define the MCID as the average PRO change within the selected group and the difference in PRO changes between groups, respectively. However, the arbitrariness in the selection of subject groups makes these two methods less attractive.13 For anchor-based approaches, the major statistical development lies in a class of methods called sensitivity- and specificity-based approaches. In the statistics literature, the concepts of sensitivity and specificity have been widely used in research on diagnostic tests. In the context of MCID research, sensitivity is the proportion of subjects who have an improvement on the anchor criterion and PRO scores above the MCID; specificity is the proportion of subjects who have no improvement on the anchor criterion and PRO scores below the MCID. Among researchers, the MCID is typically defined as the threshold at which sensitivity equals specificity.13 Similarly, Bennett,14 Leisenring and Alonzo15 discussed the connection between the MCID and the positive predictive value (PPV) and negative predictive value (NPV). Shiu and Gatsonis16 defined the MCID as the maximizer of the sum of PPV and NPV, based on the argument that this sum reflects the distance from the ideal situation, where the ideal situation refers to a perfect match between responses based on PROs and the anchor criterion. Recently, Hedayat et al.17 proposed to define the MCID as the threshold that minimizes the mismatch between responses based on PRO scores and the anchor criterion. In the same paper, the definition and estimation of a personalized MCID were established, which allows the MCID to vary according to subjects' clinical profiles.
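The minimal sketch below illustrates the sensitivity/specificity-based threshold search described above. It is an illustrative sketch rather than any cited author's procedure; it assumes Python with NumPy and uses simulated change scores together with a hypothetical binary anchor of clinician-judged improvement.

    # Illustrative sketch: choose the MCID as the change-score threshold at which
    # sensitivity is closest to specificity, given a binary improvement anchor.
    import numpy as np

    def mcid_sens_spec(change, improved):
        """change: PRO change scores; improved: 0/1 anchor of clinical improvement."""
        change = np.asarray(change, dtype=float)
        improved = np.asarray(improved, dtype=bool)
        best_c, best_gap = None, np.inf
        for c in np.unique(change):                  # candidate thresholds
            sens = np.mean(change[improved] >= c)    # improved, score above threshold
            spec = np.mean(change[~improved] < c)    # not improved, score below threshold
            gap = abs(sens - spec)
            if gap < best_gap:
                best_c, best_gap = c, gap
        return best_c

    # Hypothetical usage with simulated data:
    rng = np.random.default_rng(2)
    improved = rng.random(300) < 0.5
    change = np.where(improved, rng.normal(8, 5, 300), rng.normal(1, 5, 300))
    print("Estimated MCID:", round(mcid_sens_spec(change, improved), 2))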
Although many statistical approaches are now available, different methods lead to different estimates of the MCID, and no agreement has been reached on which estimation approach is most suitable. As discussed in the FDA guidance,7 the measurement of the MCID will be reviewed by the FDA in the context of each individual clinical study. Therefore, in order to deliver a legitimate interpretation of PROs with the MCID, close collaboration between statisticians and clinicians is critical and necessary.
None.
The author declares that there are no conflicts of interest.
©2015 Xu. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.