J.L.Lemke On-line Office
Analyzing and Reporting Results of Interviews
You should report results of your interviews for each question separately. Start with the questions that were the same for everyone. Go over each person's answer to the first question of this kind and decide if you see a pattern: do different people tend to answer the question in similar ways? Summarize each pattern of similarity you see and tell how many people fit that pattern. Give two examples in people's own words that fit each pattern. Then explain any important differences in how people answered the question (differences that are not already obvious from the different patterns).
Do the same thing for each question that was asked in the same way for all people interviewed (or for all people of the same type; e.g. all students, all teachers, etc.).
Add a discussion of how people answered other questions that may not have been the same. Emphasize the most interesting or revealing answers that help you learn what you wanted to know by doing the interviews.
This method is known as Informal Content Analysis.
In written surveys you will probably have one or more of three types of items: written response items, multiple choice items, and Likert Scale items.
For written response items, use the same method of Informal Content Analysis that is described above for interviews.
For multiple choice items, you should report the number and the percentage of people who marked each possible choice. If they were to pick only one choice, the percentages should add to approximately 100%. If they were to pick all that applied, the percentages will not add up to any particular number.
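The counting and percentage step can be sketched in a few lines of Python. The choice labels and responses here are hypothetical, just to show the arithmetic:

```python
from collections import Counter

# Hypothetical single-choice responses, one entry per respondent.
responses = ["A", "B", "A", "C", "A", "B"]

counts = Counter(responses)
total = len(responses)

# Report the number and percentage for each choice.
for choice in sorted(counts):
    n = counts[choice]
    print(f"{choice}: {n} ({100 * n / total:.0f}%)")
```

For a single-choice item like this one, the percentages sum to 100%; for a "mark all that apply" item, each respondent would contribute to several counts and the percentages would not.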
For Likert items there is a special method. You begin as for any multiple choice item by giving the number of people and the percentage of people who marked Strongly Agree, Agree, Undecided, Disagree, and Strongly Disagree. But then you take an additional step.
The point of this extra step is that you can combine how people answered different items to get an overall response for each person, and for each group of people. This enables you to compare how two different groups (say students vs. teachers, or older vs. newer teachers, or teachers vs. parents) answered the same group of questions.
To do this you calculate very simply what is called a weighted average response. Assign 5 points for each Strongly Agree, 4 points for each Agree, 3 for Undecided, 2 for Disagree, and 1 for Strongly Disagree. Add up all the points for all the people answering the question in that group. Divide by the number of these people.
The answer you get, if you did the arithmetic right, will be a number between 1 and 5. Report this number for each question, for each group. You can now compare how each group answered each question by comparing their numbers, or Likert values. A high Likert value means greater agreement, a low value means greater disagreement. If the value is very close to 3.0 (2.9 to 3.1), you should double check what it means: does it mean most people were undecided? or does it mean that the group was evenly divided, with half agreeing and half disagreeing?
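The weighted average described above can be written as a short Python sketch. The point values match the scheme in the text; the sample responses are made up for illustration:

```python
# Points assigned to each Likert response category (5 = Strongly Agree ... 1 = Strongly Disagree).
POINTS = {
    "Strongly Agree": 5,
    "Agree": 4,
    "Undecided": 3,
    "Disagree": 2,
    "Strongly Disagree": 1,
}

def likert_value(responses):
    """Weighted average response: total points divided by number of respondents."""
    return sum(POINTS[r] for r in responses) / len(responses)

# Hypothetical answers from one group to one question.
group = ["Agree", "Strongly Agree", "Undecided", "Agree", "Disagree"]
# (4 + 5 + 3 + 4 + 2) / 5 = 3.6
```

Because every respondent contributes between 1 and 5 points, the result always falls between 1 and 5, as the text says.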
You can do one thing more. You can combine the values for different questions to get a more robust indicator of the attitudes of a group across several questions, and not just on one question at a time. To do this, remember that half your Likert statements were in favor of something and half were against this same thing. So you cannot simply average the Likert values for statements that go in opposite directions! What you have to do is to convert the values for the negative statements to what they would have been if the statement had been positive. It turns out that the shortcut for doing this is to subtract the Likert value for a negative statement from 6 to get the equivalent value if the statement had been positive. You can now take this new converted value and average it in with the positive statements in the normal way. [Positive = 6 - Negative].
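The conversion step [Positive = 6 - Negative] and the combined average can be sketched as follows. The Likert values here are invented examples, not real data:

```python
def reverse_code(value):
    # Convert the Likert value of a negatively worded statement
    # to what it would have been if the statement had been positive.
    return 6 - value

# Hypothetical Likert values for one group.
positive_values = [4.2, 3.8]        # positively worded statements
negative_values = [1.6, 2.0]        # negatively worded statements (low = disagreement)

# Reverse-code the negative statements, then average everything together.
converted = [reverse_code(v) for v in negative_values]
combined = positive_values + converted
overall = sum(combined) / len(combined)
# (4.2 + 3.8 + 4.4 + 4.0) / 4 = 4.1
```

Note that averaging the raw values without the conversion would have been misleading, since strong agreement with a negative statement points in the opposite direction from strong agreement with a positive one.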
Try to find interesting differences or comparisons between different groups of people. Do students and teachers respond the same or differently? What about teachers of one subject vs. teachers of another subject? Teachers vs. parents? Experienced teachers vs. new teachers? Older students vs. younger ones? Males vs. females? You can do this not just for Likert items, but for all your questions and even for interview results.
For more information on how to tell whether a difference in Likert value between two groups is big enough to be worth mentioning, see the webpages on statistics and experimental research.