The final major step in writing an excellent health services research survey is writing data collection questions that truly capture the complexity of the assessment domains identified in the statement of research purpose. While some clinical outcomes are easy to measure objectively, the other general types of outcomes we have discussed (behavioral, cognitive, and affective) must often be measured in more creative ways. Several strategies for creating data collection questions that can measure these assessment domains in greater complexity are available to survey designers. This entry discusses the multi-item scale in particular.
Most individuals who have taken a survey are familiar with a Likert scale. This scale gives the survey respondent the option to choose among a series of values that indicate how much they agree or disagree with a statement. For example, “Indicate how much you agree with the following statement on a scale of 1 to 4 (1 being agree and 4 being disagree).”
This can be a very useful type of data collection question, and creating multi-item scales allows the survey analyst to discern differences across respondents in much more detail. Rather than asking one question, the survey designers can ask several, with the goal of aggregating the results to arrive at one global measure of the assessment domain that is the focus of the research.
To measure the beliefs of a specific group of clients about fruits and vegetables (which can be considered a goal of a program designed to increase healthy dietary habits) a survey design team may use a multi-item scale like the following:
For all of the statements below, indicate how much you agree (1) or disagree (4).
In this example, we can see how the assessment domain “beliefs about fruits and vegetables” is represented by 10 elements. Some elements are phrased positively (e.g., “I like most vegetables”) while others are phrased negatively (e.g., “I don’t like fruit”). The items that are phrased negatively are “reverse-scored,” meaning that a score of 1 indicates a low level of the desired outcome, while a 4 represents a high level.
Before processing the results, all items that are reverse-scored need to be adjusted so that “Agree” ratings are scored as 4, “Slightly Agree” ratings as 3, “Slightly Disagree” ratings as 2, and “Disagree” ratings as 1.
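The adjustment above can be sketched in code. This is a minimal illustration, assuming a 4-point scale (1 = Agree, 4 = Disagree); which items are reverse-scored would depend on the actual scale wording.

```python
# Reverse-scoring adjustment for a 4-point scale (1 = Agree, 4 = Disagree).
# Flipping maps 1 -> 4, 2 -> 3, 3 -> 2, 4 -> 1.
REVERSE_MAP = {1: 4, 2: 3, 3: 2, 4: 1}

def adjust_item(rating, reverse_scored):
    """Return the rating as-is, or flipped if the item is negatively phrased."""
    if rating not in REVERSE_MAP:
        raise ValueError(f"Rating must be between 1 and 4, got {rating}")
    return REVERSE_MAP[rating] if reverse_scored else rating
```

For example, an “Agree” (1) response to the negatively phrased item “I don’t like fruit” would be adjusted to 4, consistent with the direction of the positively phrased items.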
Once the reverse-scored items have been adjusted, the results can be aggregated. Aggregating the results, a survey respondent could score from 10 (Agree on all 10 items) to 40 (Disagree on all 10 items). A score of 10 would indicate very positive beliefs, while a score of 40 would indicate very negative beliefs about fruits and vegetables.
Alternatively, subsets of the questions in a multi-item scale can be aggregated to create different indicators of the belief being measured. In the example above, the 6 questions about vegetables could form one scale, while the 4 questions about fruits could form another.
By using a multi-item scale, the survey designer has measured this assessment domain in high detail, which strengthens the data team’s ability to conduct more complex and meaningful statistical analyses than a single-item scale or another, simpler measure would allow. For example, if the same multi-item scale is administered to a group of people at the beginning and end of a program meant to impact them in a certain way, such as to increase their belief in the accessibility and desirability of eating fruits and vegetables, a simple statistical procedure (a paired-samples t-test) can be conducted to determine whether changes associated with participation in the program are statistically significant.
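The paired-samples t-test mentioned above can be computed directly from pre- and post-program total scores. The scores below are hypothetical illustration data, not results from any actual program; in practice an analyst would typically use a statistics package rather than computing the statistic by hand.

```python
# Paired-samples t-test on hypothetical pre/post total scale scores.
# On this scale (1 = Agree), lower totals indicate more positive beliefs,
# so a decrease from pre to post would suggest improvement.
import math
from statistics import mean, stdev

pre  = [28, 31, 25, 34, 29, 27, 33, 30]  # hypothetical totals before the program
post = [22, 27, 20, 30, 24, 21, 28, 26]  # hypothetical totals after the program

diffs = [a - b for a, b in zip(pre, post)]   # per-respondent change
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # t with n-1 degrees of freedom
print(f"t = {t_stat:.2f} with {n - 1} degrees of freedom")
```

The resulting t statistic is compared against the critical value for n − 1 degrees of freedom to judge whether the pre-to-post change is statistically significant.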
Focusing specifically on developing surveys for Health Services Research studies designed to measure health outcomes, this series of articles covers:
- Various types of health-related surveys and outcomes that can be measured
- Creating meaningful research questions
- Conceptualizing and operationalizing variables
- Developing sophisticated survey questions
Part 1: Introduction to Health Services Research
Part 2: Types of Surveys & Outcomes
Part 3: Research Questions
Part 4: Conceptual & Operational Definitions
Part 5: Writing Survey Questions
For more information, check out:
Compliance Resource Center
Aday, Lu Ann, and Llewellyn J. Cornelius. 2006. Designing and Conducting Health Surveys: A Comprehensive Guide. San Francisco, CA: Jossey-Bass.
As an experienced health care professional, Susan (Sue) Dess brings a wide range of experiences to Crestline. Her 15-year administrative and executive management background spans the operations of both managed care and provider organizations.
Additionally, Sue spent 25 years as an Emergency Room and Intensive Care Registered Nurse, further rounding out her ability to understand the “big picture.” Sue is intimately involved with each Crestline project, collaborating closely with consultants and clients.