Assessment of Student Learning
The assessment of student learning should be directly related to the learning goals of the course, and should test more than content alone (which would amount to straight memorisation or basic understanding). Assessments should be geared towards the higher levels of Bloom's Taxonomy, in order to encourage higher-level thinking skills in students.
One should also refrain from creating assessments that are simply "jumping through hoops to get a grade". They should have some practical application, or some connection to reality.
In physics, problem sets are essential. Understanding how to use and manipulate equations is of primary importance, and I believe that students learn most physics skills by actually doing the problem sets. Students should get feedback on their problem sets, to help them understand where they went wrong; to be honest, I care very little about the actual grade from a problem set. One thing I will always add to my problem sets is a presentation grade. Students should learn how to communicate physical ideas effectively through their problem sets, since those problem sets will eventually evolve into research papers. Specifying the quality you expect from a problem set and allotting a small portion of the grade (~10%) to presentation greatly improves the quality of the work, and makes students think more about how they present their physical arguments.
Exams are about as contrived as "jumping through hoops" gets. I believe they are inherently flawed, in that they encourage students to cram knowledge into their short-term memory. However, I concede that they play an important role in encouraging students to revise material, and they allow for individual assessment.
These days, knowledge itself is not as important as knowing where to find it and how to apply it once you have found it. As such, I would much prefer to use open-book or take-home exams, which I think alleviate exam anxiety (to some degree) and allow for a fairer assessment of a student's capabilities.
Students often have a lot to learn from each other, and I would like to encourage this where possible. Where written assessments are used, I would like to have students peer-assess each other, at least on a draft version of the work, if not the final version. I believe that students get a lot out of seeing how other students are performing, and can benefit greatly from assessment in this form. The key to making this work is to provide a detailed rubric for students to grade each other against, so as to reduce arbitrariness. Such assessment also allows students to communicate using technology and to share ideas in an open forum.
Rubrics are an excellent way of letting students know how they will be graded, particularly on written assessments and presentations. I think that students should know what they are being assessed on, and a rubric allows them to assess themselves and adjust their performance accordingly.
I constructed a sample rubric to assess the draft oral presentation and final paper for my hypothetical course described in course design. Like the sample syllabus, this is the first assessment rubric I have designed. If I ran such an assessment, I would give students the rubric along with the assignment, so that they would know what they were being assessed on. Then, in actually assessing students, I would put ticks along the continuum for each box and derive a grade from that. For this form of assessment, I think a letter grade (with +/-) is sufficient; it is too difficult to be fair and accurate with a numerical grade. The rubric makes grading in that manner rather straightforward: take the average position of the ticks.
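To make the mechanics of that averaging concrete, it could be sketched as follows. The 0-4 scale, the band cutoffs, and the function name are my own illustrative assumptions, not part of the rubric itself.

```python
# Hypothetical sketch of the "average position of the ticks" scheme.
# Each rubric box gets a tick somewhere on its continuum, recorded here as a
# number from 0 (weakest) to 4 (strongest); the scale and the band cutoffs
# below are illustrative assumptions only.

def letter_grade(ticks):
    """Average the tick positions and map the result to a letter grade (+/-)."""
    avg = sum(ticks) / len(ticks)
    bands = [
        (3.85, "A"), (3.5, "A-"),
        (3.15, "B+"), (2.85, "B"), (2.5, "B-"),
        (2.15, "C+"), (1.85, "C"), (1.5, "C-"),
        (1.15, "D+"), (0.85, "D"), (0.5, "D-"),
    ]
    for cutoff, letter in bands:
        if avg >= cutoff:
            return letter
    return "F"
```

For example, ticks at 3.0, 2.5, and 3.5 average to 3.0, which falls in the B band.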
In terms of the assessment itself, this sample rubric assesses the content of the paper, the presentation of that content, and peer assessment. I almost want the presentation to be like a business proposal: "Convince me that you are capable of doing what you are telling me about." In a sense, that is what an oral exam is all about: pushing towards the border of understanding.
I found it fairly straightforward to construct the categories of assessment in the rubric, although trimming the descriptions down to a couple of words at most was a little challenging. I found it hard to split the "content" aspect of the rubric into smaller chunks, however, and ended up settling on "Subject Mastery" as a catch-all description. The hardest part of writing the rubric was constructing the descriptors in each box. I had a fair idea of what standard each box would be held to, but finding words to describe them succinctly and without over-repetition of adjectives was difficult. I think that constructing a good rubric is a skill that comes with time, particularly as my expectations of what students at a given level can achieve settle to reasonable standards.