When Assessment is Instruction and Instruction is Assessment: Using Rubrics to Promote Thinking and Understanding

Heidi Goodrich Andrade

From: The Project Zero Classroom: Views on Understanding, eds. Hetland, Lois and Shirley Veenema. Harvard Graduate School of Education, Cambridge, 1999.

Introduction

Rubrics are excellent assessment tools: They make assessing student work quick and efficient, and they help teachers justify to parents and others the grades they assign to students. I am going to argue, however, that rubrics are just as interesting in terms of instruction as they are in terms of assessment. Rubrics, at their very best, are teaching tools that support student learning and encourage the development of sophisticated thinking skills.

Rubrics, like portfolios, exhibitions and other so-called alternative or authentic approaches to assessment, blur the distinction between instruction and assessment. Rubrics exist as a complement to instruction. When used correctly, rubrics serve the purposes of learning as well as of evaluation and accountability. For this reason, I refer to them as instructional rubrics.

My goal in this article is to convey the potential of instructional rubrics to teach as well as to evaluate. I will begin by explaining what instructional rubrics are, then discuss the ways in which my research suggests that they can be used to help students learn and develop good habits of thinking. Finally, I will explain how to create instructional rubrics and how to use them in the classroom.

What is an Instructional Rubric?

An instructional rubric is usually a one- or two-page document that describes varying levels of quality, from excellent to poor, for a specific assignment. An instructional rubric is usually used with a relatively complex assignment such as a long-term project, written essay, research paper, and the like. The purposes of instructional rubrics are both to give students informative feedback about their works in progress and to give detailed evaluations of their final products.

Although the format of an instructional rubric can vary, all rubrics have two features in common: 1) a list of criteria, or what counts in a project or assignment; and 2) gradations of quality, or descriptions of strong, middling, and problematic student work (see Figure 2).

GRADATIONS OF QUALITY

| Criteria | 4 | 3 | 2 | 1 |
| --- | --- | --- | --- | --- |
| This counts | | | | |
| This also | | | | |
| This too | | | | |
| Don't forget this | | | | |

Figure 2. Basic features of a rubric

Figure 3 is an example of an instructional rubric that I have used in seventh- and eighth-grade humanities and English classes. It is intended to support students as they write a persuasive essay. The list of criteria, or what counts, includes the claim made in the essay, the reasons given in support of the claim, the consideration of reasons against the claim, organization, voice and tone, word choice, sentence fluency and conventions. I describe four levels of quality but do not give them word labels. In my experience, satisfactory labels are hard to come by, and it is obvious at a glance that a 4 is what everyone should be trying to achieve and a 1 is something to avoid. Some schools indicate a cut-off point by, for instance, drawing a box around the level that is considered acceptable.

The instructional rubric in Figure 3 has the two basic components of a rubric—criteria and gradations of quality. I would also like to point out a couple of other important features that I will discuss later in the article. First, please review the second and third criteria, Reasons in Support of the Claim and Reasons Against the Claim. These two criteria give the rubric an emphasis on good thinking—an emphasis missing from many rubrics. They not only tell students that good critical thinking must be demonstrated in their essays, they also guide them in how (and how not) to do it, making the rubric serve as an instructional tool as well as an evaluative one.

The second feature I’d like to note is the fact that the gradations of quality describe actual problems that real students run into as they write, such as not stating their claim early enough for a reader to understand it (level 2 of the first criterion), and using the same words over and over (level 1 of the sixth criterion). A rubric that reflects and reveals problems that students experience is more informative than one that either describes mistakes they do not recognize or that defines levels of quality so vaguely as to be practically meaningless (“poorly organized” or “boring”). Again, this feature makes the rubric instructive, not just evaluative.

| Criteria | 4 | 3 | 2 | 1 |
| --- | --- | --- | --- | --- |
| The claim | I make a claim and explain why it is controversial. | I make a claim but don't explain why it is controversial. | My claim is buried, confused and/or unclear. | I don't say what my argument or claim is. |
| Reasons in support of the claim | I give clear and accurate reasons in support of my claim. | I give reasons in support of my claim but I overlook important reasons. | I give 1 or 2 weak reasons that don't support my claim, and/or irrelevant or confusing reasons. | I do not give reasons in support of my claim. |
| Reasons against the claim | I discuss the reasons against my claim and explain why it is valid anyway. | I discuss the reasons against my claim but leave important reasons out and/or don't explain why the claim still stands. | I say that there are reasons against my claim but I don't discuss them. | I do not acknowledge or discuss the reasons against my claim. |
| Organization | My writing has a compelling opening, an informative middle and a satisfying conclusion. | My writing has a beginning, middle and end. | My organization is rough but workable. I may sometimes get off topic. | My writing is aimless and disorganized. |
| Voice & Tone | It sounds like I care about my argument. I tell how I think and feel about it. | My tone is OK but my paper could have been written by anyone. I need to tell more about how I think and feel. | My writing is bland or pretentious. There is no hint of a real person in it, or it sounds like I'm faking it. | My writing is too formal or inappropriately informal. It sounds like I don't like the topic of the essay. |
| Word choice | The words I use are striking but natural, varied and vivid. | I make some fine and some routine word choices. | The words I use are often dull or uninspired, or I sound like I am trying too hard to impress. | I use the same words over and over and over and over. Some words may be confusing to a reader. |
| Sentence Fluency | My sentences are clear, complete and of varying lengths. | I have well-constructed sentences. My essay marches along but doesn't dance. | My sentences are often awkward, and/or contain run-ons and fragments. | Many run-ons, fragments and awkward phrasings make my essay hard to read. |
| Conventions | I use correct grammar, punctuation and spelling. | I generally use correct conventions. I have a couple of errors I should fix. | I have enough errors in my essay to distract a reader. | Numerous errors make my paper hard to read. |

Figure 3. Persuasive essay instructional rubric

Why Use Instructional Rubrics?

Rubrics have become quite popular, a recognizable trend in education. Experienced teachers, however, have seen numerous trends rise up and fade away over the years and quite reasonably ask, “Why bother with this one?” My research and experience provide several answers to that question.

Instructional rubrics are easy to use and to explain. In spite of their versatility and power, rubrics are not difficult to understand. They make sense to people at a glance, they are visually accessible, and they are concise and digestible. For these reasons, teachers like to use them to assess student work, parents appreciate them when helping their children with homework, and students often request them when given a new assignment. After handing out a rubric for one project, a teacher I work with told me that upon being assigned another project, a student remarked, “You know, one of those things with the little boxes would be handy right now.” This is not an uncommon request from students experienced with rubrics.

Instructional rubrics make teachers' expectations very clear. Traditionally, as educators we have kept our criteria and standards to ourselves. The answers to the test were secret, and teachers tended not to articulate what counted when they gave grades. I often tell the story of a fifth-grade girl I know who came home with a shockingly bad report card. Her father, of course, went through the roof. He said, "Look, you are a smart child, you've always done well in school. Two weeks ago I asked you how you were doing in school and you said, 'Fine, Dad.' How can you say, 'Fine, Dad,' then bring home this report card? How do you explain that?" Sobbing, the child told him, "Dad, I don't know what the grades count on."

At that point I made an enemy of her father by bursting in and saying, "You know, she's right. We often expect students to just know what makes a good essay, a good drawing or a good science project, rather than articulating our standards for them. If her teacher would write it all out for her—maybe in the form of a rubric—then she would know what counts and she'd be able to do better work." Not the ideal time to make my point, perhaps, but I was right. That little girl just needed help figuring out what the grades "count on." Some students figure that out on their own, but other students need to have it written down or otherwise communicated to them. Instructional rubrics are one way to do that.

Instructional rubrics provide students with more informative feedback about their strengths and areas in need of improvement than traditional forms of assessment. Imagine that you are about to be evaluated in your job. You have a choice between receiving a letter grade or a rubric with circles around the boxes that most closely describe your performance. Which kind of assessment would you choose? Most people choose the rubric, knowing that it will tell them a lot more about what they do right and wrong than a simple letter grade can. The same is true for students: A well-written instructional rubric—one that describes the kinds of mistakes they tend to make as well as the ways in which their work shines—gives them valuable information. Students can learn from an instructional rubric in a way they cannot learn from a grade.

Instructional rubrics support learning. A few years ago I conducted an investigation of the effects of rubrics and self-assessment on learning and metacognition (the act of monitoring and regulating one’s own thinking). The study involved 40 seventh-grade students in a classification task. Half the students were given an instructional rubric and periodically asked to assess their reading comprehension, the classification system they set up, their explanation of the system, and so on. The other half of the students were asked to do the same task but were not given a rubric and were not asked to assess their progress as they worked.

When the students had finished the task, I gave each one a traditional quiz to test for basic content knowledge. The test scores showed that the students who used the rubric to assess themselves knew more. This is especially meaningful because I usually spent less than half an hour with each student, and the task did not emphasize the memorization of facts. Nonetheless, during that brief time the students who used the rubric to assess their own progress learned more information than the students who did not. I was able to conclude that self-assessment supported by a rubric was related to an increase in content learning.

Instructional rubrics can help students become more thoughtful judges of the quality of their own work. The same study discussed above also compared students in terms of the amount of metacognition they demonstrated. By asking students to think aloud as they worked, I was able to measure the number of times they made metacognitive, self-evaluative statements such as, "Wait a minute, that doesn't make sense," and "Should I try this another way?" and "This is really hard." I found that the students who assessed themselves tended to be more metacognitive, although the only statistically significant differences were for girls. That means that the differences in metacognition between the girls who assessed their own work and the girls who did not are unlikely to have occurred by chance and probably reflect actual differences, while the differences between the boys could have occurred by chance. There are a number of explanations for the gender differences, but for the purposes of this article the interesting finding is that self-assessment can support metacognition and can encourage students to think critically about the quality of their own thinking and their own work.

Instructional rubrics support the development of skills. Another study I conducted looked at the effects of instructional rubrics on eighth-grade students’ writing skills. A treatment group and a control group each wrote three essays over the course of several months. The treatment group was given a rubric before they began writing and the control group was not. The treatment students tended to receive better scores on two of the three essays. For one of the essays the differences were statistically significant. I concluded that simply handing out and explaining a rubric can help students write better but improvements are not guaranteed—more intensive work with the rubric is probably necessary in order to help students perform better consistently.

Instructional rubrics support the development of understanding. As part of the same study discussed above, I was also interested in whether or not students tended to internalize the criteria contained in the rubrics and thereby develop an understanding of good writing. I had each student answer the following question a month or two after writing the third and final essay for this study:

When your teachers read your essays and papers, how do they decide whether your work is excellent (A) or very good (B)?

There were some striking differences between the treatment and control groups. Broadly, the control students tended to have a vaguer notion of how grades were determined:

“Well, they give us the assignment and they know the qualifications and if you have all of them you get an A and if you don’t you get a F and so on.”

Note that this student knows that the teacher has her standards or “qualifications” but he does not suggest that he himself should know what they are. The treatment students, however, tended to refer to the rubrics, “root braks,” or “ruperts” as grading guides and often listed criteria from the rubrics they had seen:

“The teacher gives us a paper called a rubric. A rubric is a paper of information of how to do our essays good to deserve an A. If they were to give it an A it would have to be well organized, neat, good spelling, no errors and more important, the accurate information it gives. For a B it’s neat, organized, some errors and pretty good information but not perfect.”

Another treatment student wrote:

“An A would consist of a lot of good expressions and big words. He/she also uses relevant and rich details and examples. The sentences are clear, they begin in different ways, some are longer than others, and no fragments. Has good grammar and spelling. A B would be like an A but not as much would be on the paper.”

Several of the criteria referred to by this student are straight from the rubrics he or she used during the study. I compared the criteria mentioned by the control students to those mentioned by the treatment students and found that the control group students tended to mention fewer and more traditional criteria. Students in the treatment group tended to mention the same criteria the control group mentioned, plus a variety of other criteria, often the criteria from the rubrics. I concluded that instructional rubrics are related to an increase in students’ understandings of the qualities of a good essay.

Instructional rubrics support good thinking. When I pointed out the two thinking-centered criteria in the rubric in Figure 3, Reasons in Support of the Claim and Reasons Against the Claim, I promised to discuss them further. In the study mentioned above, I had over 100 eighth-grade students write a persuasive essay. Some of the students got an instructional rubric similar to the one in Figure 3; some didn't. The third criterion in that rubric asks students to consider the other side of an argument and explain why their own position still holds up, a very sophisticated thinking skill. That kind of thinking is something adults and students tend not to do. Rather, we just make an argument, defend it, and hope for the best. Good thinkers, in contrast, know that they also have to anticipate the other side of the argument and be prepared to explain why it doesn't undermine the claim they are making. When I included that criterion in the rubric for the persuasive essay, the students who used the rubric tended to consider the reasons against their claim. I concluded that thinking-centered rubrics can help children think more intelligently.

In summary, I have found that instructional rubrics are easy to use, they clarify teachers’ expectations and instructional objectives, they help provide valuable feedback to students, and they support learning, thinking, understanding, the development of important skills, and self-regulation—assuming they are part of an ongoing process of feedback.

How Do You Make an Instructional Rubric?

Designing an instructional rubric is hard. The process I recommend below is the one that works best for me and the teachers with whom I've worked most closely. It takes time, though, and if you're anything like me and my teacher colleagues, you won't do it—not at first anyway. Needing a rubric tomorrow, you're likely to sit down and try to crank one out. That might work if you have vast experience with rubric design, but if it doesn't work, don't despair. Take some class time and create a rubric with your students. Thinking and talking about the qualities of good and poor work on a project is powerfully instructive. Your students will not only help you come up with a rubric, they will also learn a lot about the topic at hand. The following process is likely to be instructional for your students and to result in a useful rubric.

  1. Look at Models. Review and discuss examples of good and poor work on a project like the one your students are about to undertake. For example, if they are going to give an oral presentation, show them an excellent presentation—maybe a televised speech—and a flawed presentation—maybe a videotaped speech from one of last year’s students, if you can get permission to use it. Ask students what makes the good one good and the weak one weak. Track their responses during the discussion.
  2. List Criteria. Tell students that you are going to ask them to do a similar project and you want to think together about how it should be assessed. “When I grade your presentations,” you might ask, “what should I look for? What should count?” Students will draw on the list generated during the discussion of the models. Track their ideas under the heading “Criteria” or “What counts.” When they appear to have run out of ideas, ask them to think about the less obvious criteria. If they haven’t listed criteria that you think are important, such as thinking-centered criteria, add them yourself and explain why you think they are needed. You are the expert, after all. District, state and national standards are often good resources for thinking-centered criteria.
  3. Pack and Unpack Criteria. You are likely to end up with a long list of criteria. Many of the items on the list will be related or even overlap. After class, take some time to combine related and overlapping criteria. Avoid creating categories that are too big, and don't bury criteria you want to emphasize. For example, if you are assigning a written essay and teaching students about paragraph format, you may want to make paragraph format a separate criterion.
  4. Articulate Levels of Quality. Drawing again on students' comments during the discussion of good and poor models, sketch out four levels of quality for each criterion. You might try a technique I learned from a teacher in Gloucester, Massachusetts. I call it "yes/yes but/no but/no." Try using those four terms as sentence stems. For example, if the criterion is "Briefly summarize the plot of the story," level 4 would be, "Yes, I briefly summarized the plot," level 3 would be, "Yes, I summarized the plot but I also included some unnecessary details or left out key information," level 2 would be, "No, I didn't summarize the plot but I did include some details from the story," and level 1 would be, "No, I didn't summarize the plot." Don't worry about getting it exactly right at this point; just capture some of the language describing strong work and the problems students typically encounter. Ask students to tell you about the kinds of mistakes they have made in the past.
  5. Create a Draft Rubric. After class, write up a draft rubric that includes the list of criteria you generated with your class and expands on the levels of quality language. Don’t get too attached to this draft—you are likely to revise it more than once.
  6. Revise the Draft. Show the draft to your students and ask them for their comments. They will probably ask you to make a few revisions.

After revision, the rubric is ready to use. Hand it out with the assignment and have students use it for self-assessment and peer assessment of the first and second drafts of their projects, respectively. It is also very important that you use the rubric to assign grades. Rubrics are relatively easy to translate into grades. Simply circle the most appropriate level of quality for each criterion and average them by adding the scores and dividing by the number of criteria. If an essay receives an average of 2.8 on a 4-point scale, that generally translates into a letter grade of B-. If you use numerical grades, simply change the 4s, 3s, 2s, and 1s into the number that represents the middle of the range for a grade (an A=93, a B=86, etc.), then average the scores and assign a number grade accordingly.
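The grade-conversion arithmetic above can be sketched in a few lines of Python. This is only an illustrative sketch: the letter-grade cutoffs below are assumptions chosen so that an average of 2.8 comes out as a B-, as the article indicates, and the midpoint values for levels 2 and 1 are likewise assumed (the article specifies only A=93 and B=86).

```python
# Sketch of translating rubric scores into grades, per the process above.
# Cutoffs and the midpoints for levels 2 and 1 are illustrative assumptions.

def average_score(scores):
    """Average the circled level (1-4) across all criteria."""
    return sum(scores) / len(scores)

def to_letter(avg):
    """Map a 4-point rubric average to a letter grade (assumed cutoffs)."""
    cutoffs = [(3.7, "A"), (3.4, "A-"), (3.1, "B+"), (2.9, "B"),
               (2.6, "B-"), (2.3, "C+"), (2.0, "C"), (1.5, "D")]
    for cutoff, letter in cutoffs:
        if avg >= cutoff:
            return letter
    return "F"

def to_number(scores, midpoints={4: 93, 3: 86, 2: 76, 1: 60}):
    """Replace each circled level with the midpoint of its grade range,
    then average the results into a numerical grade."""
    return sum(midpoints[s] for s in scores) / len(scores)

# Example: the eight persuasive-essay criteria scored 3, 3, 2, 3, 3, 3, 3, 2
scores = [3, 3, 2, 3, 3, 3, 3, 2]
avg = average_score(scores)   # 2.75
letter = to_letter(avg)       # "B-"
```

A teacher would of course substitute whatever cutoffs and grade-range midpoints the school actually uses; the point is only that a completed rubric translates to a grade with simple averaging.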

How Do You Use Instructional Rubrics to Support Thinking and Learning?

In an earlier section of this article, I wrote, “Simply handing out and explaining a rubric can help students write better but improvements are not guaranteed—more intensive work with a rubric is probably necessary in order to help students to perform better consistently.” In response to the mixed but encouraging research findings that prompted that statement, I worked with a talented teacher in San Diego, Ann Gramm, to develop a process of student self-assessment. The process involves students in using an instructional rubric to take an honest, critical look at their own work.

I included both seventh- and eighth-grade students in the self-assessment study. I gave both the treatment and control groups an instructional rubric along with their essay assignment. Only the treatment classes, however, were given self-assessment lessons. During the lessons, the students looked at the rubric, looked at their work, and identified the material in their work that showed that they had attended to the criteria in the rubric. For example, we had students write an historical fiction essay. One of the criteria was, Bring the Time and Place your Character Lived Alive. During the self-assessment lesson I said, "Take a green marker and underline the words 'time and place' in your rubric. Now use the same marker to underline in your essay the places where you give your reader information about the time and place in which your character lived." Confident that this would only take a second, students turned to their essays with their green markers at the ready—and often couldn't find the information they were looking for. To their amazement, it was not in there. Apparently, because the information was in their heads, they thought it was also on their paper. This process of self-assessment had them actually look and see what was and wasn't there.

We went through this process with every criterion on their rubric and different colored markers, and, as far as I could tell, it was quite an eye opener for students. Preliminary results from the data analyses also suggest that the self-assessment process had a positive effect on the writing of many students, especially girls. I recommend including some sort of careful, specific self-assessment technique in any process of ongoing assessment, especially those supported by instructional rubrics.

Conclusion

A teacher recently told me after a workshop, “I previously found rubrics to be very unspecific, time consuming and an annoyance to assessment. I now like rubrics and am kind of excited about using a few.” I hope you too feel motivated and able to design and use instructional rubrics with your students. I also hope that you will go beyond the most basic application of rubrics by including students in the design of your rubrics, by seeking out and including thinking-centered criteria, and by engaging students in serious self- and peer assessment. I think you will find that blurring the distinction between instruction and assessment has a powerful effect on your teaching and, in turn, on your students.

Heidi Goodrich Andrade’s research centers on assessment. Her focus in this article is rubrics—matrices that define and describe levels of what counts in student work. While many educators appreciate rubrics for the clarity they offer to evaluation, Heidi emphasizes ways to design and use rubrics as tools to improve learning—what she calls “instructional rubrics.” In this article, she presents a way to bring students into the process of creating rubrics and ways her research has shown that rubrics affect learning. As part of a process of ongoing assessment, rubrics can contribute in important ways to students’ developing understanding and thinking skills.
