Using rubric

From MSME Knowledge Management - Assumption University
A rubric is a scoring tool that explicitly represents the expected (observable) learning outcomes for a learning activity (e.g., an assignment or presentation).<ref>Rubrics-Teaching Excellence & Educational Innovation - Carnegie Mellon University. (n.d.). Retrieved February 15, 2017, from [https://www.cmu.edu/teaching/designteach/teach/rubrics.html]</ref>
The example here is from BIS3315 Programming and Algorithms (a.k.a. Computer I at other universities). The class borrows the notion of the "design recipe" from faculty at the University of Toronto as part of its conceptualization ([https://www.coursera.org/learn/learn-to-program]), so all assignments follow this structure.
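To make the "design recipe" concrete, here is a minimal sketch in Python (the language used in the linked Coursera course). The function name and body are a hypothetical example, not taken from the course materials; the point is the docstring, which carries the three components the rubric below looks for: a type contract, a description, and examples.

```python
def convert_to_celsius(fahrenheit):
    """ (number) -> number

    Return the temperature in degrees Celsius corresponding to the
    Fahrenheit temperature fahrenheit.

    >>> convert_to_celsius(32)
    0.0
    >>> convert_to_celsius(212)
    100.0
    """
    # The first docstring line is the type contract, the prose is the
    # description, and the >>> lines are the examples -- the three
    # components the "Design recipe" trait of the rubric checks for.
    return (fahrenheit - 32) * 5 / 9
```

A submission missing any of the three docstring components, or with errors in them, would fall into the lower levels of the "Design recipe" trait.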
For the beginning exercises, in which students must form correct function syntax, answers are graded against a simple two-trait rubric: (1) the format of the design recipe, and (2) the correctness of the output, a.k.a. the specification. The rubric for the early stages is rubric1 (in the Excel file "bis3315-rubrics.xlsx", sheet name "rubric1").
{| class="wikitable"
|+Rubric 1: design recipe and specification
|-
|
|Poor
|Fair
|Good
|-
|Design recipe
|The design recipe is mostly missing or is written poorly
|All components of the design recipe are available (i.e., type contract, description, and examples) with some errors in writing
|All components of the design recipe are available (i.e., type contract, description, and examples) and are well written
|-
|
|0
|1
|2
|-
|Specification
|The program produces incorrect results, or produces correct results but does not display them correctly
|The program works, produces the correct results, and displays them correctly. It also meets most of the other specifications.
|The program works and meets all of the specifications
|-
|
|0
|1
|2
|}
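Each trait above contributes 0-2 points, so a submission scores between 0 and 4 in total. The following is a hypothetical sketch (the function and variable names are illustrative, not part of the course materials) of turning trait levels into a total score:

```python
# Levels for rubric1: Poor = 0, Fair = 1, Good = 2 per trait.
RUBRIC1_TRAITS = ["Design recipe", "Specification"]
MAX_LEVEL = 2

def score_submission(levels):
    """Return the total score for a dict mapping trait name -> level (0-2)."""
    for trait, level in levels.items():
        if trait not in RUBRIC1_TRAITS or not 0 <= level <= MAX_LEVEL:
            raise ValueError(f"invalid trait or level: {trait}={level}")
    return sum(levels.values())

# A submission rated Good on the design recipe and Fair on the
# specification scores 3 out of a possible 4 points.
total = score_submission({"Design recipe": 2, "Specification": 1})
```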
Later, I will add one more trait (i.e., criterion) to the rubric to look for appropriate use of structures in the solution (in Excel "bis3315-rubrics.xlsx", sheet name "rubric2").
These two rubrics explicitly look for observable performances by students; they are the scoring tools we can use to capture ONE learning objective (LO). I use these two rubrics to capture objective 1.2.
You can use ANY rubric that you think fits your learning objectives; there is no right or wrong answer here. The number of traits also depends on what you want students to learn. Personally, I would say a rubric with 3-5 traits and 3-5 levels is easy to work with.
Eventually, what we want is a report of our results (in Excel "bis3315-rubrics.xlsx", sheet name "result"). I have prepared an example document from an AACSB workshop (for private use only) that may help you better understand the end stage of AoL, called closing the loop. The document is an attached PDF file, "AoL 10.15 Notebook-excerpt.PDF".
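A results report like the "result" sheet typically aggregates per-trait scores across all submissions. As a minimal sketch (the function name and the sample data are hypothetical, not taken from the actual spreadsheet), per-trait averages could be computed like this:

```python
def trait_averages(records):
    """Given a list of dicts mapping trait name -> score,
    return a dict of per-trait average scores."""
    totals = {}
    for record in records:
        for trait, score in record.items():
            totals.setdefault(trait, []).append(score)
    return {trait: sum(scores) / len(scores) for trait, scores in totals.items()}

# Two illustrative submissions scored on the two-trait rubric.
scores = [
    {"Design recipe": 2, "Specification": 1},
    {"Design recipe": 1, "Specification": 2},
]
averages = trait_averages(scores)
```

Per-trait averages like these are what feed into the closing-the-loop discussion: a low average on one trait signals where teaching should be adjusted.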


== References ==
{{reflist|30em}}

Revision as of 14:47, 23 February 2017
