Frequently Asked Questions

Copyrighted by Kenneth A. Wallston, Ph.D., November, 1993

The intended purpose of this FAQ is to help the users of the MHLC scales know which form of the scales to use and what to do with their data. It is meant to supplement, not substitute for, the published literature on the construct of health locus of control (Bibliography). A certain degree of statistical sophistication on the part of the reader is assumed.

DO NOT REPRODUCE WITHOUT EXPLICIT PERMISSION OF THE AUTHOR. [Failure to heed this warning may cause your data to turn out screwy.]

I want to measure general health locus of control beliefs using the MHLC scales. How do I decide whether to administer Form A or Form B?

Forms A & B were designed to be "equivalent" forms; therefore, it's pretty much a toss-up as to which one you choose. Over the years, we've tended to use Form A when we study relatively "healthy" samples and Form B when we study people with chronic illnesses, but we have no strong rationale for doing this.

If you are doing a study in which you are administering the MHLC more than once, and if the time period between administrations is greater than a couple of weeks, make sure you administer the same form (A or B) to your subjects at each time period. Just because the two forms are "equivalent" does not mean they are identical; it is not unusual for the mean subscale values on Form A to differ by a point or two from those on Form B. The only time we recommend administering different forms to your subjects is when you are doing a single-session experiment in which you want to administer one form at the beginning of the session and another form at the end (to see if subjects' health locus of control beliefs change as a function of some experimental intervention). In that case, we recommend you administer Form A as a pretest to a random half of your subjects and Form B to the other half. Then, for your posttest, give Form B to those whom you pretested with Form A and vice versa.
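For concreteness, the counterbalanced assignment described above can be sketched in a few lines of Python (our own illustration; the function name and subject IDs are hypothetical):

```python
import random

def assign_forms(subject_ids, seed=None):
    """Randomly give half the subjects Form A at pretest (Form B at
    posttest) and the other half the reverse, per the counterbalanced
    single-session design."""
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    plan = {}
    for sid in ids[:half]:
        plan[sid] = {"pretest": "A", "posttest": "B"}
    for sid in ids[half:]:
        plan[sid] = {"pretest": "B", "posttest": "A"}
    return plan

# Each subject gets one form at pretest and the other at posttest.
plan = assign_forms(range(1, 21), seed=42)
```

Seeding the generator simply makes the assignment reproducible for record-keeping; any random split of the sample satisfies the design.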

My sample will consist of people who already are diagnosed with a medical condition. Should I use Forms A/B, Form C, or both?

It really depends on what you're trying to measure. Using Forms A/B will allow you to assess your subjects' general health locus of control beliefs, but you won't always know what this means to your subjects (nor will they always know what you want from them). If you use Form C and particularize it to your subjects' diagnosis, then both you and your subjects will be much clearer about the referent, but you won't have a measure of their general health locus of control beliefs. If you can afford the space/time, we recommend using Form C along with either Form A or B.

If your sample consists of people with a variety of existing medical conditions, you have two choices. You can particularize Form C for each type of subject, or you can leave the word "condition" as is in each item, and explain in the instructions that they are to respond in terms of whatever condition they happen to have.

Lau and Ware published a different MHLC scale that looks similar to yours. How does your MHLC scale differ from theirs?

It is unfortunate that the Lau-Ware scale {which originally appeared in 1981 in Medical Care, 19, 1147-1158} bears the same name as ours, because this engenders a lot of confusion. In many ways, their instrument is similar to ours. Both have a factor tapping beliefs in control of health by chance, and each has an internality factor (although they call theirs "self-control"). Their "provider control" factor is similar to our factor assessing control of health by "powerful others" although our conceptualization of "powerful others" extends beyond those traditionally thought of as health care providers. The main difference, then, between our measures and theirs is that theirs contains a fourth factor--termed "general health threat"--which, in our opinion, gets at something other than locus of control. As such, we believe that our scales have more construct validity than theirs.

I want to use your MHLC scale, but my space is limited. Do I have to administer all 18 items?

The simple answer to this question is "no." First of all, it is perfectly appropriate to measure one or two of the separate dimensions (e.g., internality and/or chance) without measuring the other one(s). If you decide, for instance, that you are only interested in your subjects' internal beliefs, then, by all means, only ask the items that assess internality.

I read what you said in response to Question #4, but I'd like to assess all of the dimensions of the MHLC. Can't I cut back the number of items per subscale to one or two?

The simple answer to this question is "yes you can, but don't expect us to do it for you." We, too, are sensitive to the issue of "respondent burden" and are advocates for "lean and mean" measuring instruments, but we are also psychometricians who worry about such things as internal consistency (i.e., alpha reliability). We feel that the MHLC subscales, none of which is longer than six items, are "lean and mean" enough. If you shorten them, you do so at your own risk.

Occasionally, a subject will not respond to some of the items on the MHLC. What do you recommend I do with missing data?

Our rule of thumb with the MHLC subscales is that we calculate a subscale score as long as at least 2/3rds of the items are not missing -- i.e., if no more than two items on the 6-item subscales are missing (or if only one item is missing on the three-item subscales in Form C). [We do this by calculating the mean score for the subscale items that are not missing and then multiplying this mean item score by 6 (or 3 in the case of the "doctors" or "other people" subscales on Form C).] Usually, this rule of thumb suffices to handle most missing data problems. In those rare instances where subjects do not respond to at least 2/3rds of the items on a subscale, our preference is to treat the whole subscale as missing. [You, of course, are free to do whatever you think best in this situation.]
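This proration rule can be sketched as follows (our own illustration; it assumes item responses are stored in a list with None marking a missing item, and the function name is hypothetical):

```python
def subscale_score(responses, n_items=6, min_fraction=2/3):
    """Score an MHLC subscale, prorating for missing items.

    `responses` is a list of item scores (1-6), with None for missing.
    If at least 2/3rds of the items are present, return the mean of
    the non-missing items multiplied by the full item count; otherwise
    return None (treat the whole subscale as missing).
    """
    present = [r for r in responses if r is not None]
    if len(present) < min_fraction * n_items:
        return None
    return sum(present) / len(present) * n_items

subscale_score([5, 4, None, 6, 3, 5])        # prorated from 5 of 6 items
subscale_score([5, None, None, None, 2, 1])  # too many missing -> None
subscale_score([2, None, 2], n_items=3)      # Form C 3-item subscale
```

Prorating by the mean keeps the rescored subscale on the same metric as a fully answered one, which is the point of the rule.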

You usually use a 6-point Likert response scale (ranging from 1 = "strongly disagree" to 6 = "strongly agree") with the MHLC. Why did you choose an even rather than an odd number of response alternatives, and what difference would it make if a different response format were used?

We chose an even rather than an odd number of response alternatives because we wanted to "force" subjects to agree or disagree with each item in order to get a better spread of scale scores. It turns out, however, that it probably doesn't matter all that much if the MHLC items are administered with a 2-, 3-, 4-, 5-, or 6-point response scale. For instance, McCallum et al. (1988, J. Personality Assessment, 52(4), 732-736) showed that fairly comparable results came from administering the MHLC to college students with a 2-point ("agree"-"disagree") format vs. a 6-point format, although the 2-point format produced lower Cronbach's alphas for IHLC and CHLC (but not PHLC) than the 6-point format. The only time it makes sense, however, to use a 2-point response format is when the subject population is not all that literate or used to responding to Likert-type scales.

In addition to the danger of lower reliabilities when one reduces the number of response alternatives, the other major "problem" is in reporting mean subscale scores that are comparable to those generally found in the literature. One way around this, however, is to RECODE the responses to make them comparable to the typical 6-point format. For instance, if a 2-point format were used, code a "disagree" response as 3 and an "agree" as 4. [If an odd number of response alternatives were used, thus permitting a "neither agree nor disagree" response, code that as a 3.5.] {WARNING: If, for example, you use a 5-point response format, do not simply multiply your scores by 6/5ths thinking that is a sufficient way of making them comparable to a 6-point scale. It's close, but no cigar.}
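One way to implement the recoding described above (our own sketch; the 3-point mapping is one reading of the bracketed recommendation, and the function names are hypothetical):

```python
def recode_two_point(response):
    """Recode a 2-point ("disagree"/"agree") item onto the 6-point
    metric: "disagree" -> 3, "agree" -> 4."""
    return {"disagree": 3, "agree": 4}[response]

def recode_three_point(response):
    """Recode a 3-point item; the "neither agree nor disagree"
    midpoint is coded 3.5, per the bracketed note above."""
    return {"disagree": 3,
            "neither agree nor disagree": 3.5,
            "agree": 4}[response]

recode_two_point("disagree")                      # -> 3
recode_three_point("neither agree nor disagree")  # -> 3.5
```

Note that this recoding centers the shorter formats on the middle of the 6-point range rather than stretching them across it, which is exactly why a simple 6/5ths multiplier is not equivalent.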

I notice that the MHLC was developed on an adult sample. Can it be used with children and adolescents?

It really depends on the functional reading level of the subjects. The MHLC was designed to be used with people who function at an eighth-grade reading level. Usually, eighth graders are 13 years old. Therefore, it can be (and has been) successfully used with middle-school-aged children. If you want to assess the health locus of control beliefs of children younger than that, we recommend using the version of the scale published in July, 1987 by Thompson et al. in Measurement and Evaluation in Counseling and Development (pp. 80-88).

I know you've written that the MHLC subscales are independent of (i.e., orthogonal to) one another, but I'd still like to combine them to come up with a single MHLC total score. How do I do this?

You don't. You can't. You shouldn't. [And if you don't know why, you shouldn't be using the scales in the first place.]

For analysis purposes, I'd like to be able to classify my subjects as "internals" or "externals." How do I do this?

With a multidimensional instrument such as the MHLC, this is not easy. Nor is it particularly advisable. Internality vs. externality is a false dichotomy; people can be both "internal" and "external" at the same time. If you choose to dichotomize people into these two categories, you do so at your own risk. If, despite this warning, you do decide to go ahead and dichotomize your subjects into two groups, one possible solution is to use only the internal dimension. Then do a median split on the internal scores. Those above the median could be called "high internals" and those below could be called "low internals." [Keep in mind, however, that being "low internal" may not be the same as being "external."]

Alternatively, you could take all subjects who score above the median on internality and below the median on chance and powerful others and call those people "internals." Similarly, those who score below the median on internality and above the median on chance and powerful others could be labeled "externals." [Be aware, however, that, in most samples, this procedure will result in only a minority of your subjects being classified as either "internal" or "external." Most people don't fit neatly into either of those two categories.] A third alternative involves first standardizing the scores on each dimension (using z-scores or T-scores) and then classifying each person based on which standard score is highest. [Don't, however, skip the standardization procedure; if you do this using raw subscale scores, you'll probably find that most of your subjects are classified as "internals."]
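The third alternative (classify each subject by his or her highest standardized subscale score) can be sketched as follows (our own illustration; the function name and the raw scores are hypothetical):

```python
from statistics import mean, stdev

def classify_by_highest_z(scores_by_subscale):
    """Classify each subject by the subscale on which his or her
    z-score (standardized within the sample) is highest.

    `scores_by_subscale` maps subscale names (e.g., "IHLC", "CHLC",
    "PHLC") to equal-length lists of raw scores.
    """
    # Standardize each subscale within the sample.
    z = {}
    for name, scores in scores_by_subscale.items():
        m, s = mean(scores), stdev(scores)
        z[name] = [(x - m) / s for x in scores]
    n = len(next(iter(z.values())))
    # For each subject, pick the subscale with the highest z-score.
    return [max(z, key=lambda name: z[name][i]) for i in range(n)]

labels = classify_by_highest_z({
    "IHLC": [30, 22, 18],
    "CHLC": [12, 20, 16],
    "PHLC": [15, 14, 25],
})
```

Because each subscale is standardized before the comparison, a subject is classified by which belief is relatively strongest for him or her, not by which subscale happens to have the highest raw mean in the sample.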

I read somewhere that you had once developed an eight-cell multidimensional typology based on the MHLC scores. Whatever happened to that? Do you still recommend using it?

In a chapter we wrote about a dozen years ago (in the book edited by Sanders & Suls, 1982), we described such a typology, and then spent much of the ensuing decade fiddling with it and trying to decide if it was worthwhile. Regrettably, we have yet to demonstrate (to our satisfaction) that such a multidimensional typology has much predictive validity. Therefore, we no longer advocate using it. [Feel free, however, to see if you can come up with a better approach or a better set of evidence that our approach was really worthwhile.]

What about the cluster analytic procedure for classifying subjects into MHLC "types" that was described in Rock et al., 1987, Research in Nursing and Health, 10, pp. 185-195? Is that something I should use?

If you (1) have enough subjects, (2) have the capability of doing and interpreting a cluster analysis, and (3) want to classify subjects into MHLC "types," then the procedure described by Rock et al. is probably what you should use (instead of the "solutions" offered in the response to Question 10 above). Most of the time, this procedure will yield five or six identifiable types.

I'd like to use MHLC scores as predictors (of health behavior or health status). Do you advocate that I use an analysis of variance approach or a regression approach?

These days, we strongly favor a multiple regression approach, where you use the MHLC scores as continuous variables, rather than any ANOVA approach where you would have to use discrete predictor variables. [Remember, you can still examine interactions among continuous variables using a hierarchical multiple regression procedure, if that is something you wish to do.]

I'm doing a longitudinal study in which I gave the MHLC at Time 1 and want to use those scores to predict behavior or outcomes at some later time period. Are health locus of control orientations (as operationalized by scores on the MHLC scale) stable enough to be used as "personality-trait-like" variables?

The MHLC was never intended to assess "personality." It is a measure of beliefs, and beliefs can and do change over time depending on a myriad of factors, among which are intervening experiences. Thus, the MHLC scores are only somewhat stable over time even in samples where no new dramatic health- or illness-related experiences occur. Therefore, if you are doing a prospective, longitudinal study, it is always a good idea to reassess subjects' MHLC beliefs from time to time to see if/how they have changed.

In a number of your articles you state that researchers should use a measure of the value of health in conjunction with the MHLC scale. What, exactly, do you mean by that?

First of all, the admonition to assess health value (HV) along with health locus of control only pertains to those studies that are attempting to predict health behavior (or intentions to engage in health behavior). That is because the construct of locus of control comes from Rotter's (1954) social learning theory, in which he states that the potential to engage in behavior(s) is a joint function of an expectancy--which is what is being assessed by the MHLC scales--and the value of a reinforcement. In our work, we have consistently interpreted Rotter's notion of a "joint function" to mean that health behavior potential is a function of the interaction of an expectancy measure--such as one of the MHLC subscales--and a measure of health value. [See Wallston, 1991, for a fuller explication of this.]

OK, then. Suppose I want to use a measure of health value in conjunction with the MHLC scores. What measure should I use?

For the past 20 years, we've mainly used a modification of Rokeach's value ranking procedure in order to assess health value (HV). A copy of the Value Survey we use is included in the packet of information we mail to people who request information about our scales. [For further information on measuring the value of health, read M.S. Smith and Wallston, 1992, Health Education Research, Theory & Practice, 7(1), 129-135.]

The 1992 article referred to above by M.S. Smith and Wallston talks about "relative health value." What is it and how do I calculate it?

Relative health value (RHV) is the value of "health" relative to the value of "an exciting life." Using the Value Survey, first subtract each of the value rankings from 11 {in order to convert the top-ranked value (#1) to a high score: 11 - 1 = 10}. Then subtract the score for "An Exciting Life" from the score for "Health." For predicting certain health risk behaviors, RHV might be a more valid assessment than HV.
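The arithmetic above amounts to the following (our own sketch; the function name and dictionary layout are hypothetical):

```python
def relative_health_value(rankings):
    """Compute relative health value (RHV) from Value Survey rankings.

    `rankings` maps value names to ranks (1 = most important). Each
    rank is first subtracted from 11 so that the top-ranked value
    scores highest (11 - 1 = 10); RHV is then the score for "Health"
    minus the score for "An Exciting Life".
    """
    scores = {name: 11 - rank for name, rank in rankings.items()}
    return scores["Health"] - scores["An Exciting Life"]

# Health ranked 1st (score 10), An Exciting Life ranked 7th (score 4).
relative_health_value({"Health": 1, "An Exciting Life": 7})  # -> 6
```

A positive RHV means health is valued more than an exciting life; a negative RHV means the reverse.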

OK, suppose I gave my subjects the MHLC (Form A or B), the Value Survey, and a measure of health behavior and now it's time to analyze my data. Assume the data are all in the computer and that I've calculated all of the relevant scale scores. What do I do next?

First, compute interaction terms of HV (or RHV) and the relevant MHLC subscale(s). Then do a hierarchical multiple regression where, on Step 1, you regress the health behavior measure on the "main effect" scores (i.e., IHLC, CHLC, PHLC, HV or RHV). On Step 2, enter the interaction term(s). If the latter are significant, you may have come up with a finding congruent with Rotter's social learning theory. You'll then have to plot out the interaction. [If you don't know how to do this, seek help from a friendly statistician.]
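The two-step logic can be sketched in pure Python (a toy illustration with made-up numbers; in practice you would use a statistics package, which also gives you the significance test for the R-squared increment):

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def rsq(X, y):
    """R-squared from an ordinary least squares fit (an intercept
    column is added automatically; fine for a handful of predictors)."""
    X = [[1.0] + list(row) for row in X]
    k = len(X[0])
    xtx = [[sum(row[p] * row[q] for row in X) for q in range(k)]
           for p in range(k)]
    xty = [sum(row[p] * yi for row, yi in zip(X, y)) for p in range(k)]
    beta = solve(xtx, xty)
    yhat = [sum(b * x for b, x in zip(beta, row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((a - f) ** 2 for a, f in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

# Step 1: "main effect" predictors only (columns: IHLC, CHLC, PHLC, HV).
# These numbers are made up purely for illustration.
X1 = [[28, 10, 16, 9], [15, 22, 11, 2], [31, 8, 23, 8], [20, 17, 19, 5],
      [26, 14, 9, 7], [12, 25, 21, 3], [24, 9, 14, 6], [18, 20, 12, 4]]
y = [17, 4, 19, 8, 14, 3, 12, 7]

# Step 2: append the IHLC x HV interaction term to each row.
X2 = [row + [row[0] * row[3]] for row in X1]

# The increment in R-squared from Step 1 to Step 2 is what a
# hierarchical regression tests for significance.
delta_rsq = rsq(X2, y) - rsq(X1, y)
```

The key design point is that the interaction term enters only after the main effects, so any R-squared increment is attributable to the interaction over and above the main effects.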

What if I am using Form C? Do I need to include a measure of health value?

If you are using Form C, your subjects' health status is already somewhat compromised. It has been our experience that there is too little variance in health value among samples of persons whose health is compromised. Therefore, it is probably safe to assume that HV would be "high" among such people. Instead of HV, what you may want to assess is disease severity (DS). You would then treat the measure of DS in analyses similar to how we recommend you treat HV (or RHV)--i.e., you would look for interactions between DS and the four Form C subscales. [See the chapter by Wallston & M.S. Smith (1993) in the book edited by Penny, Bennett, & Herbert for more information on this topic.]

I read your "Hocus-pocus, the focus isn't strictly on locus" paper [Wallston, 1992, Cognitive Therapy and Research, 16, 183-199] in which you appear to say that the health locus of control scales are of limited utility. Why, then, should I bother to use them?

You read the paper correctly. They are of limited utility, but they do have some utility, especially as a moderator of the relationship between self-efficacy, health value, and health behavior. Anyway, no one is holding a gun to your head and forcing you to use them!

I'm thinking of using the MHLC scales in a study. Are they reliable and valid? What reliability and validity coefficients should I put in the Measures section of my proposal (or the article I'm writing for publication)?

These are the toughest questions I ever get asked. There is no simple answer. By now, the MHLC scales have been used in literally hundreds of studies. Generally, the results are that they are moderately reliable (i.e., they have Cronbach alphas in the .60 - .75 range and test-retest stability coefficients ranging from .60 - .70). These reliability estimates vary, of course, depending on many issues (e.g., the particular population studied; the length of time between administrations). Thus, it is fair to say that the scales are "reliable."

The validity question, on the other hand, is harder to answer. The simple answer is that there's plenty of evidence in the published literature to back up an assertion that they do, indeed, measure individuals' health locus of control beliefs (which is the construct they were designed to measure). Validity, however, is an elusive concept that cannot really be addressed without asking "validity for what purpose?" Because researchers want to measure health locus of control beliefs for a wide variety of purposes, there is really no easy answer to this question.

Although it's now a bit out of date, the chapter we wrote in 1981 (in the book edited by Lefcourt) is probably a good reference for the reliability & validity of Forms A/B. Additional information can be found in the vast published literature on these scales. For Form C, we have a manuscript (Wallston, Stein & Smith, 1993) that has been submitted for publication that contains information on the reliability/validity of this new form.

You've stated repeatedly that the MHLC scales are "in the public domain." Don't I need your permission to use them in my research?

We've developed the MHLC scales over the years under the auspices of a variety of federally sponsored research grants. Therefore, we have never felt right about charging other researchers a fee for utilizing these scales in their own research. It has been, and will continue to be our policy to place these scales "in the public domain" where they are freely available to the research public. Therefore, although we appreciate it if you acknowledge properly the source of the scales and cite them correctly in your reports, you do not explicitly need our permission to utilize them in your research studies. You do, however, have our blessings.

You say I don't need your permission to use the MHLC scales, but I'm a student and my advisor (and/or my committee) insists that I get your permission. What do I do?

First, have them read the answer to Question 22 (above). If that doesn't get them off your back, type out exactly what you want us to sign and send it along with a stamped, self-addressed envelope. (It wouldn't hurt to include a crisp, new $20 bill, but it's not necessary to do so!)