Wednesday, January 30, 2013

7 Ways to Develop Student Affairs Assessment Competencies as #sagrad

This blog post is in response to a question posted on Twitter; I needed more than 140 characters to offer my ideas about how a graduate student in a student affairs program can develop competencies in assessment.

  1. Learn by doing - if you haven't already done so, seek out opportunities to volunteer with offices outside of your assistantship. Express an interest in learning about assessment, and chances are you'll get direct experience. I got into assessment because I mentioned to my supervisor that I wanted to learn about assessment. Guess who became the new assessment guy for our department. Several years later, I teach assessment.
  2. Read - of course a faculty member is going to encourage graduate students to read. There are some great books, monographs and articles addressing assessment practice. My next blog post will be a suggested reading list.
  3. Check out what student affairs divisions are currently doing. Visit websites to get a feel for the type of work going on in student affairs assessment. Some great examples can be found at Oregon State University, University of Georgia, and North Carolina State University.  There are dozens of other examples I could provide, but that list could be its own blog post (and maybe it will be).
  4. Connect with those engaged in assessment work - If you Tweet, follow the #saass hashtag. There are some individuals you should follow, as well: @lmendersby @pglove33 @drbbourke. A lot of great conversations about assessment in student affairs are taking place in that venue. Visit studentaffairsassessment.org to find a group of assessment professionals dedicated to enhancing the conversation about assessment. Their website also has some good examples of student affairs assessment websites.
  5. Determine the aspect of assessment where you'd like to begin your competency development. ACPA's Assessment Skills and Knowledge standards break assessment competencies into 13 areas. Most of us have strengths tied to specific competency areas within assessment.
  6. Connect with professional associations.  Participating in NASPA's Assessment, Evaluation and Research Knowledge Community or ACPA's Commission on Assessment and Evaluation can be a great way to get connected with other student affairs educators interested in assessment in our field.
  7. Finally, ask questions. Whether it's via Twitter, professional associations, or individual emails, ask questions. There are a lot of people out there who love to share their thoughts on student affairs assessment.


Sunday, January 27, 2013

But I've never written a lecture before . . .

One of my tasks for developing an online course is writing scripts for the videos I'll be recording. This is a new experience. Even though I've been teaching for a little while now, I've never written a lecture. The notes I write when I prep for class are generally an outline and questions to pose to the class. The scripts are important, though, to provide a transcript for any student with a disability for whom the videos would otherwise be an inaccessible part of the course.

Sitting down and writing these scripts for what are essentially mini-lectures (each one will be 5-10 minutes) has been a struggle. When I do spend a great deal of time talking in class, it is extemporaneous, and often in response to a student question or to elaborate on a point that was brought up in class discussion. Both of those approaches make writing lectures ahead of time a bit difficult.  

As I write these mini-lectures, I try to imagine myself seated in a circle with my students, which is my usual approach to discussion-based classes. As I reread the lectures I have written so far using this approach, they seem to be a closer match to my extemporaneous teaching style. My hope is that students watching these mini-lecture videos won't feel like they are watching static, scripted speeches.

I am sure that, like anything else, lecture writing is a skill, and with time and practice, hopefully my efforts will show improvement.

Wednesday, January 9, 2013

Why Ongoing Teaching Evaluations Matter, and What I Plan to Do About Them

An article from Faculty Focus titled Using Multiple Course Evaluations to Engage and Empower Your Students and Yourself has really gotten me thinking. When I started teaching at the graduate level, I used a mid-semester evaluation to gauge the course. I used a format that a colleague shared with me, which ended up being more detailed than the departmental end-of-course evaluation. As someone who was just starting out, I wanted detailed feedback on my teaching and the course. For each item, students were asked to rate the course or me on a scale, as well as offer comments.  What I saw on completed evaluations were hastily circled numbers for the scale measures and few comments.  This could explain why this practice has not endured for me.

The article from Faculty Focus offers a lot of great pointers for ongoing course evaluations. Rather than summarize the article linked in the first sentence of this post, I am focusing on my plans for implementing ongoing course evaluations in both face-to-face and online courses. The difference is that face-to-face courses run for 14 weeks, while online courses run in an accelerated 7-week format.

First up: what questions to ask? I really like the open-ended questions suggested in the Faculty Focus article:
  1. What is one thing you like about this course (so far)?
  2. What is one thing you do not like about this course (so far)?
  3. What is one thing that could be improved in this course?
  4. Do you have any additional comments you would like to share?
I might also include a couple of Likert scale questions:
  1. On a scale of 1 (worst) to 10 (best) how would you rate this course (so far)?
  2. On a scale of 1 (worst) to 10 (best) how would you rate your learning in this course (so far)?
For both instructional formats (face-to-face and online), I will use our CMS (course management system). We use Moodle as our CMS, which allows for anonymous responses to feedback items. For this first go-round, I am not setting conditions in the CMS requiring completion of the feedback evaluations. Part of the reason actually stems from issues in the CMS, rather than a philosophical reason. If I set conditions on assignments or other items that follow an evaluation, students would be unable to access those items until the evaluation is completed. This poses a problem, as I encourage students to think ahead, rather than scrambling one class session at a time.

Following the completion of each evaluation (one after every 4-5 class sessions), I will provide a summary of the feedback to the students, and address any concerns that are raised. In so doing, my aim is to demonstrate the importance of reporting results and planned actions that come out of assessment activities.
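For anyone curious what that summary step might look like in practice, here is a minimal Python sketch of aggregating anonymous responses to the questions above. The record shape and field names (`course_rating`, `learning_rating`, `comment`) are hypothetical illustrations, not Moodle's actual export format; real Feedback exports would need to be mapped into something like this first.

```python
# Minimal sketch: summarizing anonymous course-evaluation responses.
# Field names and sample data are hypothetical, for illustration only.

def summarize(responses):
    """Average the 1-10 ratings and collect non-empty open-ended comments."""
    course = [r["course_rating"] for r in responses]
    learning = [r["learning_rating"] for r in responses]
    comments = [r["comment"] for r in responses if r.get("comment")]
    return {
        "n": len(responses),
        "course_avg": round(sum(course) / len(course), 1),
        "learning_avg": round(sum(learning) / len(learning), 1),
        "comments": comments,
    }

sample = [
    {"course_rating": 8, "learning_rating": 7, "comment": "More examples, please."},
    {"course_rating": 9, "learning_rating": 9, "comment": ""},
    {"course_rating": 7, "learning_rating": 8, "comment": "Liked the discussions."},
]

summary = summarize(sample)
print(f"{summary['n']} responses, course {summary['course_avg']}, "
      f"learning {summary['learning_avg']}")
for comment in summary["comments"]:
    print("-", comment)
```

The averages and the comment list are exactly what I would report back to the class, alongside any actions I plan to take in response.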

I am excited to institute this slimmed-down version of ongoing course evaluation. I am also creating a similar approach to peer-evaluation of team-based assignments, as well as self-evaluation. But, the plans for those approaches will have to be a post for another day.