Wednesday, March 31, 2010

A Rubric for Peer Review of a Lecture, and Survey Software

By Helena Felipe

In today’s post we invite you to explore the power of online surveys and share with you a rubric for peer-review evaluation of lectures.

A rubric is a scoring tool for the subjective assessment of a chosen piece of work.
It consists of a set of criteria and standards linked to learning objectives, used to grade someone’s performance.
A rubric contains the dimensions that are critical to assessing a piece of work, and each dimension offers several levels of potential achievement and excellence.

Rubrics produce more detailed, objective and reproducible assessments than a single, holistic grade.

They are usually the result of a collaborative effort by a group of people interested in gathering feedback on a particular educational paper or event. Work can thus be graded in a more objective, independent, accurate, and less time-consuming way, which consistently supports continuous improvement and refinement.
In summary, “a rubric is an assessment tool to save grading time, convey effective feedback and promote student learning” (D. Stevens & A. J. Levi).

We built a four-level rubric framework for peer review of a lecture presentation, covering both lecture design and the lecturer’s attributes.
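For readers who like to see the structure laid out concretely, the short Python sketch below shows one way a four-level rubric can be represented and turned into a blank scoring sheet. The dimensions and level labels in the sketch are hypothetical placeholders, not the actual criteria of our rubric.

```python
# Illustrative sketch only: a four-level rubric as a simple data structure.
# The categories, dimensions, and level labels below are hypothetical examples,
# not the actual rubric described in this post.

LEVELS = ["Beginning", "Developing", "Accomplished", "Exemplary"]  # scored 1-4

RUBRIC = {
    "Lecture design": {
        "Learning objectives": "Objectives are stated and aligned with the content.",
        "Organization": "Content follows a clear, logical sequence.",
        "Visual aids": "Slides are legible and support the key messages.",
    },
    "Lecturer's attributes": {
        "Delivery": "Voice, pace, and eye contact engage the audience.",
        "Interaction": "Questions are invited and answered effectively.",
        "Time management": "The lecture fits the allotted time.",
    },
}

def print_scoring_sheet():
    """Print a blank scoring sheet: one row per dimension, one column per level."""
    for category, dimensions in RUBRIC.items():
        print(f"\n== {category} ==")
        for name, descriptor in dimensions.items():
            print(f"{name}: {descriptor}")
            print("  " + " | ".join(f"{i + 1}. {lvl}" for i, lvl in enumerate(LEVELS)))

if __name__ == "__main__":
    print_scoring_sheet()
```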


While building an effective, reliable, and valid rubric can be a very challenging task, ideally brainstormed by an interested group of people sharing common goals, presenting the rubric, collecting the data, and analyzing the results can be extremely easy.

A rating scale can then be created using the free and user-friendly tool “Survey Monkey” (http://www.surveymonkey.com).
With this web tool you can, in a very simple way, create your own questionnaires, collect data, and analyze the final results.
The following screens show the steps to follow.
First, create your own account and then create your survey.

The next image shows how you can choose different types of questions.

An online questionnaire is inexpensive to administer and provides a fast way to collect results, draw conclusions, and refine your approach in pursuit of excellence.
You can test it before you officially administer it, which is what the screens presented here show.
The following screen shows our rubric being uploaded.

Finally, we can follow participation as it evolves and analyze the results, since statistical analysis is performed automatically, as you can see on the next screen (all values are at 0 because the screen capture was taken before the survey started).
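If you prefer to export the raw responses and compute your own summary, the hedged sketch below illustrates the kind of statistics produced automatically: the number of responses and the average score per rubric dimension. The CSV layout and file name are assumptions made for illustration, not Survey Monkey’s actual export format.

```python
# Hedged sketch: summarizing exported rubric responses with plain Python.
# Assumes a CSV where each column is a rubric dimension and each cell holds
# the chosen level as a number from 1 to 4. This layout is an assumption for
# illustration, not the survey tool's actual export format.
import csv
from statistics import mean

def summarize(path):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        print("No responses yet.")
        return
    print(f"Responses collected: {len(rows)}")
    for dimension in rows[0]:
        scores = [int(r[dimension]) for r in rows if r[dimension].strip()]
        if scores:
            print(f"{dimension}: mean {mean(scores):.2f} over {len(scores)} ratings")

if __name__ == "__main__":
    summarize("rubric_responses.csv")  # hypothetical file name
```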

We thank Dr. Karl Golnik for reviewing our rubric and making very helpful suggestions.

References: 
  1. Dannelle Stevens & Antonia J. Levi, Introduction to Rubrics. Stylus Publishing, 2005. http://Styluspub.com
  2. Barbara Walvoord & Virginia Anderson, Effective Grading: A Tool for Learning and Assessment. Jossey-Bass, 1998.
  3. The What, How & Why of Rubrics. Health Sciences Summer Teaching Institute, August 19, 2009, Temple University Teaching and Learning Center. Retrieved 20 March 2010 from http://www.temple.edu/tlc/events/health_science/hssi.htm
  4. James Bardi, Developing a Rubric. Schreyer Institute for Teaching Excellence: Measuring Student Outcomes, May 11, 2009. Retrieved 20 March 2010 from http://www.schreyerinstitute.psu.edu/pdf/Rubric_Schreyer_5_11_09.