Assessing Students' Learning
PROFESSOR 1: So I think in a course like this, assessment is tricky. These are individual clients and projects. We can try to standardize the design and engineering challenge to some extent, but that's not necessarily what we took as our primary consideration. We really started with the people first and tried to match people to projects. And there's no doubt, some projects are more technically involved. There are some projects where the logistics are more difficult-- meeting with the client, or even working with him or her, or even within the team. So it is a tricky kind of thing. But having us as instructors very involved in the class and getting a sense of all of the projects, and having the mentors really have a strong understanding of their projects' progress-- we had one mentor for every two teams-- was really key to the assessment at a high level.
We had different pieces over the course of the semester. The mid-term design review counted as part of the assessment. Certainly the final presentations and the documentation were a big part. But we also relied on mentor feedback and client feedback-- we met with all the clients at the end of the semester-- as well as peer feedback, since this is a team-based project course. Taking in all these inputs is what we tried to do when it came to assessment: to really understand what students had learned and how they performed in the course.
GRACE: So for some of these we did prepare rubrics, or we at least had a rough sense of what we were looking for. For example, for the blog posts, we would score them based not just on how well they were written, grammar, and how much information was there-- we were also looking for really deep, personal reflection. That's what we gave distinctiveness points for, I believe. So, for example, you could have a post just on assistive dogs and what types of assistive dogs there are. Which is great-- it shows that a student is learning.
But another post-- one of my favorite posts of the semester, actually-- was from one of my students who was struggling with: should I go into assistive technology as a career when everyone is saying you should do something that's lucrative? But this is beneficial. But what if beneficial isn't profitable-- what happens then? She was really struggling with, am I a selfish person, things like that. That kind of very deep, personal growth is something that I valued a lot. We all valued it a lot.
There was actually very little weight put on the final project. We put a lot of weight on how well they went about the design process. How many prototypes did they manage to do? When they didn't have the skills for a certain project, were they able to go look for resources? Did they go around MIT knocking on doors, asking professors for help, asking machine shop technicians for help? That was also highly valued. What else do you think I'm missing?
We also took the clients into account-- some clients would change their minds about what project they wanted the students to work on over the course of the semester. That's really difficult for the students, because they would make progress on something and then have to change the project direction completely. So we would try to take that into account as well: how well were the students able to manage that? Because of these kinds of little things, we really didn't put a lot of weight on how the project turned out at the end of the day. It was more about how much effort you put into it, how much you grew as a person, how well you managed your clients' expectations, and whether you were able to satisfy the client given all the constraints.
So it was very, very organic. Which is why we had to spread the grading rubric across lots of little components, and why we had to get feedback from mentors, and peers, and clients, and have mid-semester and end-of-semester reviews. I hope I explained that.
PROFESSOR 1: Yeah, I think there are a lot of aspects to this. One is, we really did try to structure the evaluation as much as possible so that we could grade objectively. So we did have a decent number of structured rubrics, which we're happy to share, for how we evaluated different components of the course-- everything from the blog posts, to the videos, to the mid-semester and final panels, to the documentation, all aspects of the class.
I think for the individual components, as Grace says, we really tried to personalize the learning a little bit, in terms of having reflections or blog posts where people could write about topics that interested them, as they related to assistive technology. That's something we tried this year, as opposed to maybe more formal lab reports on some of the labs that we did. The other thing we tried this year was really to take in, as Grace was saying, all of these human inputs and feedback.
So we met weekly as a staff and reflected on the projects and the students' progress. We talked to clients and had students themselves talk a little bit about their peers. That's what we tried to take into account. To some extent we tried to start from what would be ideal in terms of understanding students' experiences in the class. That makes it a little bit tricky to evaluate or grade, but hopefully it was one reasonable, or one valuable, way to do it.