Customizing Student Feedback Surveys

The literature is clear that student evaluations of course instructors do not adequately measure teaching effectiveness (Boring, Ottoboni, & Stark, 2016). In addition, there is a fair amount of evidence that bias exists in student surveys (Centra & Gaubatz, 2000; Boring, 2015), although there is no consensus on the definition of bias in student ratings of courses (Feldman, 1998). While Marsh (1987) indicated that student evaluations do provide useful information, more recent evidence (e.g., the Faculty Senate SFS report; see the Introduction to this website) indicates that course evaluations 1) do not measure an instructor’s ability to foster learning, 2) vary inversely with course rigor, including grades, 3) often vary with instructor age, gender, race, and country of origin, and 4) do not provide a reliable measure of student satisfaction due to low response rates.

Qualitative student feedback may be more appropriate as one tool for obtaining a student’s perception or opinion on, for example, an instructor’s preparedness to teach the assigned material, delivery of material, use of materials such as textbooks and online tools, responsiveness to questions, efforts at engaging students, and fairness in grading. As Stark and Freishtat (2014, p. 2) argue, “students’ ratings of teaching are valuable when they ask the right questions.” That noted, caution should be taken when judging instructional effectiveness based on student feedback and opinion.

As currently structured, UTA’s Student Feedback Survey (see example below) more accurately measures student satisfaction than teaching effectiveness. An area of the SFS that is of particular value to chairs and other administrators evaluating faculty performance is the students’ comments, which, when analyzed longitudinally, may provide insights into faculty who may be struggling with their teaching or point to patterns that may require addressing.

Student Feedback Survey

The first 5 questions are mandated by the University of Texas System and the university is required to post the results for the public to access here.

Although the literature is scant on best practices for the SFS or course evaluations, one of the first tasks is to determine their role. Clearly, the SFS is not a sufficient tool for measuring teaching effectiveness, as this goal requires alternative measures that take into account a number of variations, including: course modality (F2F, online, or hybrid); course type (seminar, lecture, lab, studio); course level (graduate/undergraduate); class size; academic discipline; and whether the course is team taught or taught solo, among other factors.

UTA is poised to use a new vendor for its SFS process, which will enable colleges, departments, and faculty (if approved) to include their own questions in the survey. A process that allows this type of flexibility by academic unit may help develop a more accurate measure of teaching effectiveness. As a result of the efforts put forth by this and the previous task force, departments and colleges are now able to submit a form to add discipline-specific questions to the SFSs. (Some colleges, for example Nursing, have already added several questions to the current survey.)

Included here is a request form that departments and colleges may use to request that specific questions be added to the survey. Individual departments and/or colleges are permitted to customize the SFS forms by appending three (3) to five (5) discipline-specific questions for students to answer. Question formats may be open-ended or may require forced-choice responses, such as ‘yes/no’ or a Likert-scale format using response categories that range from ‘strongly agree’ to ‘strongly disagree’. All requests must be approved by the Chair of the Departmental or Unit Committee (if applicable), the Chair of the Academic Unit, and the Dean of the College; approval signatures are mandatory. Requests must be submitted a minimum of 30 days prior to the release of the student evaluations for any given semester; otherwise, the questions will not appear until the following semester.

Writing reliable and valid questions is not an easy task. It is incumbent on the question writer to take the necessary steps to ensure that questions are not double-barreled (i.e., assessing two or more aspects of the course in one question) and that the questions truly assess what they are intended to assess. Before writing SFS questions, we recommend that you read the following guidelines and keep them in mind as you write your questions.

A review of the student ratings literature (Linse, 2016) offers best practices in developing or improving UTA’s SFS instruments:

  1. The Student Feedback Survey is a measure of student opinion and feedback, not a measure of teaching effectiveness. Other instruments and strategies used to assess teaching, which may include peer observations, internal and/or external review of course materials, teaching portfolios, and teaching scholarship, should be given greater emphasis.
  2. Student Feedback Survey data should not be treated in isolation but should be considered over a faculty member’s history as an instructor. An examination of scores over time and types of courses rather than a composite score may offer a more accurate assessment. In addition, evidence of patterns in responses, students’ comments and scores may offer better insights into areas that may need improvement.
  3. Care should be taken to avoid comparisons between instructors. Since SFS data measure student satisfaction with a course in a particular context and time period, it is not appropriate to compare instructors who may differ in delivery style and classroom experience, and whose students may also differ on many levels.
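As a purely illustrative sketch of the longitudinal view recommended above, the snippet below tabulates a hypothetical instructor's mean rating per semester alongside a single composite score. All scores and semester labels are invented for illustration; they are not UTA data, and no particular SFS scale is assumed.

```python
# Minimal sketch (hypothetical data): comparing per-semester mean SFS
# ratings against a single composite score. A semester-by-semester view
# can surface trends that one composite number hides.
from statistics import mean

# Hypothetical per-course mean ratings, grouped by semester.
sfs_scores = {
    "Fall 2021":   [3.9, 4.1],
    "Spring 2022": [4.0, 4.2, 3.8],
    "Fall 2022":   [4.3, 4.4],
}

# Mean rating for each semester, rounded for readability.
trend = {term: round(mean(scores), 2) for term, scores in sfs_scores.items()}

# Single composite score across all courses and semesters.
composite = round(mean(s for scores in sfs_scores.values() for s in scores), 2)

print(trend)
print(composite)
```

The point of the sketch is only that examining scores over time and by course type, as the guideline suggests, preserves information that collapsing everything into one composite number discards.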

If your department or college is interested in adding additional questions to the SFSs, you may use the following form: Form for Adding Questions to the SFSs (pdf)*

Additional strategies exist for determining instructional effectiveness based on student input and are discussed elsewhere on this website. These typically take the form of student feedback and evaluations (e.g., daily questions, exit tickets, and mid-semester letters). If the purpose of student evaluation and feedback is to measure student satisfaction in a way that improves learning and success, these additional strategies allow an instructor to make quick instructional adjustments to ensure greater student success while concurrently giving students a “voice” in their learning experience. The key to any feedback is that the instructor should promptly focus on improvements that address student concerns.

* For access to this document, please contact CRTLE at crtle@uta.edu

References: 

Boring, A. (2015). Gender biases in student evaluations of teachers. OFCE Working Paper (13), 1–68.

Boring, A., Ottoboni, K., & Stark, P. (2016). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research. https://www.scienceopen.com/document_file/25ff22be-8a1b-4c97-9d88-084c8d98187a/ScienceOpen/3507_XE6680747344554310733.pdf

Centra, J. A., & Gaubatz, N. B. (2000). Is there gender bias in student evaluations of teaching? The Journal of Higher Education, 71(1), 17–33.

Feldman, K. A. (1998). Reflections on the study of effective college teaching and student ratings: One continuing quest and two unresolved issues. In Higher education: Handbook of theory and research (pp. 35–74).

Linse, A. (2016). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54, 94–106.

Marsh, H. W. (1987). Students’ evaluations of university teaching: Research findings, methodological issues, and directions for future research. International Journal of Educational Research, 11(3), 253–388.

Stark, P., & Freishtat, R. (2014). An evaluation of course evaluations. ScienceOpen Research. Accessed March 5, 2019, at https://www.scienceopen.com/document?id=6233d2b3-269f-455a-ba6b-dc3bccf4b0a8