Friday, September 13, 2013

A Legitimate Use for Smiley Face Surveys :)

Most people who work in student affairs have attended or planned a program that included an evaluation form. More often than not, it was a half-sheet of paper on which roughly five questions were printed. Generally, the questions all asked the same thing, just in different ways:

Did you like this program or event?

Students who have had me for class have heard me refer to this method of assessment as "smiley face surveys." I gave this method that name for two reasons:
1. participants sometimes draw a smiley face on the evaluation, and/or
2. the sole purpose of the evaluation is to make the planner feel good about herself/himself.

As we focus more and more on assessing student learning and development, we're moving away from smiley face surveys. And although I have railed against them for a decade now, I propose a revision and reintroduction of this assessment effort. Ideally, we look at a single program or event as fitting into a larger curriculum for student learning and development. We should want honest feedback from participants in these programs and events. But here's the kicker: it has to be fast and easy.

Using the half-sheet of paper approach, we can quickly ask participants to share important feedback through a few simple open-ended questions. I know what you're thinking. Open-ended questions sound like we're going to ask students to write essays. Not at all. Open-ended simply means that the respondent has the freedom to choose what she or he feels is an appropriate response. For example, we can ask students to identify both their favorite and least-favorite parts of the program or event. We can also ask them to describe something they learned in 3-5 words.
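A revised half-sheet, then, might look something like this:

1. What was your favorite part of today's program?
2. What was your least favorite part of today's program?
3. In 3-5 words, describe something you learned today.

Three questions, one side of a half-sheet, and a minute or two of a participant's time.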

Here's the most important reminder for this or any assessment approach. Use the data you collect!

Friday, September 6, 2013

Choosing a Topic for a Dissertation or Thesis (and other research projects)

A few years ago, a friend reached out to me on Facebook, asking advice on choosing a dissertation topic. I thought about the advice I offered her recently, after having that same conversation with some of our master's and doctoral students at LSU. I went back through my old Facebook messages and found the advice I offered her. I hope someone finds it useful, as I think it's applicable not only to dissertations and theses, but to other research projects as well. I've made a few edits from the original post.

Picking a topic can be tough; at least I found that to be the case. In higher ed, I think it's important for the student to choose a topic that is in tune with her or his career goals. So, if you're thinking you want to remain a practitioner long term, focus on a topic that relates more to practice. If you want the faculty route, then a more theory-driven, abstract topic makes sense.* I knew I was going the faculty route, and my topic was basically the experiences of students of color at a predominantly White university. It's a wide-open topic, and I definitely took a more theoretical approach with it, although I didn't do grounded theory or anything like that.

As you try to narrow down a topic, think in terms of answering the following questions:

1. What do I want to know?
2. What areas of literature would I need to consult?
3. What has been addressed about the specific topic already?
4. What would a study on this contribute to higher education?
5. What sources of data would be needed to answer #1?


If answering #4 for a particular topic really gets you fired up or excited, that's your topic.
When you think you've selected a topic, you have to answer the following very honestly: Can I stay energized about this topic for the next 2 years to complete the dissertation, and for 3 years after graduating? I say 3 years after, even if you remain a practitioner, because it becomes a topic of conversation whenever you meet someone and they discover you've recently completed a doctorate.


If you have some topics in mind and can't choose between them, or can't get one to a point that you think is narrow enough, I have a suggestion. Make yourself a table/grid. On the left side, put the five questions above (one in each box of the column). Along the top, write your topic ideas. Then fill in the grid by answering each question for each topic. If the answers to #3 are lengthy, you may need to narrow the topic and the answer to #1.
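A rough sketch of the grid, with two hypothetical topic ideas along the top:

                                         Topic idea A    Topic idea B
    1. What do I want to know?
    2. Literature to consult
    3. What's been addressed already
    4. Contribution to higher education
    5. Sources of data needed

Comparing the filled-in columns usually makes the stronger, more answerable topic apparent.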
I hope this is helpful. If you ever want to talk through ideas, I'd be happy to help.

Brian

*This particular piece of advice is not meant as a maxim. It isn't true for everyone, and it's simply my 2 cents.

Wednesday, August 21, 2013

Learning to Do Assessment by Asking Questions


I was recently preparing for a presentation about incorporating assessment in our daily conversations in student affairs, and I thought about my own introduction to assessment in student affairs. Prior to engaging in assessment work, I had no formal training beyond the required research methods course in my master's program. What led me to assessment work was feeling energized after hearing a speaker whom the vice president for student affairs brought to campus. That speaker, Dr. Richard Keeling, talked about the importance of connecting what we do to what our students should be gaining from their experiences, and the importance of assessing student learning. Following Dr. Keeling's visit, I mentioned to the director of my department that I wanted to learn more about assessment. The next thing I knew, I was traveling to a retention and assessment conference as part of an institutional team. The conference was structured to give institutional teams time to meet with experts in the areas of retention and assessment, including Dr. Vincent Tinto and Dr. Marilee Bresciani. I was hooked on assessment.

Following those amazing learning experiences, I returned to my campus ready to change the world of student affairs through assessment. I was energized, but I quickly realized that although I had gained a great deal through conference experiences, I really knew little about assessment. I had started to develop a baseline level of knowledge, but now I needed to translate that into skill development. I took my next step by becoming a voracious reader of assessment-related materials. The first book I read was Assessment Essentials: Planning, Implementing and Improving Assessment in Higher Education (1999) by Palomba and Banta. This text provided me with some big-picture perspectives on assessment, and helped me think of more questions to which I needed answers.

I kept reading, but I wasn't finding the answers in books, articles, or websites. I quickly realized that I needed to ask questions of people with expertise in a variety of areas. One of the first things I needed help with was determining a sample size for a survey-based assessment project. If you Google "sample size calculator," you will see a large number of results, just as I did. After clicking through several options in the search results, I saw variation among the calculators. That was when I realized I needed to seek out someone on my own campus. I began my search for such a person by asking staff for suggestions of people with expertise in statistics. What I got was a lengthy list of faculty from a variety of academic disciplines, staff in the institutional research office, and a staff member in academic affairs whose job title included the words Quality Improvement. I emailed the staff member in academic affairs and asked if I could pick his brain over a cup of coffee. He offered some great advice, not only on calculating a sample size (he said he had a sample size calculator bookmarked that he had found through a web search), but also on ways to continue learning assessment by doing assessment.
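For anyone facing that same question today, here is a minimal sketch of the kind of calculation most of those online calculators perform. It uses Cochran's formula with a finite population correction; the function name and defaults are my own choices, and a 95% confidence level with a 5% margin of error is a common starting point rather than a rule:

    import math

    def sample_size(population, z=1.96, margin_of_error=0.05, p=0.5):
        # population: size of the group you want to generalize to
        # z: z-score for the desired confidence level (1.96 is roughly 95%)
        # margin_of_error: 0.05 means plus-or-minus 5 percentage points
        # p: expected proportion; 0.5 is the most conservative choice
        n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population estimate
        return math.ceil(n0 / (1 + (n0 - 1) / population))    # finite population correction

    # Example: a survey meant to generalize to a student body of 5,000
    print(sample_size(5000))  # -> 357

Keep in mind that the result is the number of completed responses needed, not the number of invitations to send; with typical response rates, you will need to invite considerably more students than that.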

My learning continued in a number of ways. I kept having chats over coffee with other faculty and staff, which eventually became more about sharing and dialog than about me simply seeking answers. I read more and more, part of which had to do with beginning my doctoral studies. I engaged in some trial and error when it came to statistical analysis. I learned a lot by presenting findings to constituent and stakeholder groups. Ultimately, what I learned was that I had more to learn. In the three and a half years that I worked directly with assessment, I felt that I only scratched the surface of assessment knowledge.

Now that I teach a graduate course on assessment in student affairs, I often find myself drawing on the experiences I had in those three and a half years. In the nearly seven years that have passed since I left my full-time position to transition to the professoriate, I have continued to think of more questions about assessment to which I want answers. As I continue my own learning and development in assessment, I look for ways to share that with my students and other student affairs educators. After all, isn’t assessment about seeking answers to questions?

Saturday, February 16, 2013

How to Use Rubrics in Student Affairs Assessment

After re-tweeting a Faculty Focus article (http://www.facultyfocus.com/articles/teaching-and-learning/should-you-be-using-rubrics/) about rubrics the other day, I was asked by Kimberly Faith (@puzzler821) if rubrics can be used in student affairs to measure learning. My response: yes, we can use rubrics to assess learning in student affairs. In this post, I provide some thoughts on the uses and utility of rubrics as an assessment tool in student affairs, along with a few sources that can be useful in exploring rubrics further.

A lot of student affairs divisions and departments have been doing some great work in writing learning outcomes. We should definitely begin with learning outcomes, which help us answer the following question: what should a student learn or gain from an experience? Even carefully written outcomes can be very broad statements. There are generally specific elements of a designed experience that lead to meeting a learning outcome, and a rubric can serve as a tool to specify those individual elements and experiences.
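To make this concrete, here is a deliberately simplified, hypothetical rubric fragment for a single outcome from, say, a leadership retreat. The criterion and level descriptions are invented for illustration only:

    Criterion: Articulates personal leadership values
      Beginning:    Names values, but does not connect them to experience
      Developing:   Connects values to a single retreat experience
      Accomplished: Connects values to multiple experiences and to future plans

A full rubric would include a row like this for each element of the experience, with the levels describing what progress toward the outcome looks like. The criteria and levels should, of course, come from your own learning outcomes.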


One of the potential uses for rubrics in classroom settings was described in the Faculty Focus article in the following way: "When rubrics are given to students at the time an assignment is made, students can use them to better understand expectations for the assignment and then monitor and regulate their work." There's a pretty simple translation to student affairs practice. A big buzzword these days is transparency. We can make transparency tangible instead of leaving it at buzzword status. By building rubrics that specify the individual elements of experiences that lead to the realization of learning outcomes, and sharing them with students, we can be transparent about both the outcomes of experiences and the processes that lead to reaching those outcomes.

"Among students, there is agreement that rubrics clarify expectations and are especially useful as they prepare assignments." Imagine this quote from the Faculty Focus article in the context of student affairs: Rubrics clarify expectations and are especially useful to students as they engage in experiences that are designed to affect learning and development. The further clarify this point, the purpose of developing rubrics is to create them with the end user in mind: the student. We should use rubrics as tools to communicate expectations, but not necessarily what we would expect of students. We should use rubrics to communicate the ins and outs of the experiences students can expect to have from their engagement.

Rubrics offer students an opportunity to engage in self-assessment of their experiences. A common suggestion for using formative assessment in student affairs is the reflection paper. Reflecting on experiences can almost always be good practice. Imagine the depth of writing that could come out of reflection papers when rubrics have been shared with students ahead of time.

Like most things in assessment, developing rubrics should not be the solitary work of an individual. Collaboration is critical in assessment, especially when it comes to developing learning outcomes and rubrics. Conversations about desired outcomes are critical, and should be a regular feature of staff meetings and chats by the coffee maker.

Now for the step by step process of getting started with rubrics . . .

Rather than duplicate some great information that is freely available on the internet, I am sharing links to some resources that I have found and that I share with my graduate-level course on student affairs assessment at LSU. There are two presentations that a number of student affairs divisions have posted on their websites. 
  • The first is from Joseph Levy who works with Campus Labs: http://bit.ly/XO2Jf7
  • The second presentation that comes up on several sites is from Dr. Carrie Zelna at North Carolina State University: http://bit.ly/15khZUw
Both presentation files provide easy-to-follow explanations of the elements of rubrics and examples of their use in student affairs.

Bonus use of rubrics . . . 

In addition to using rubrics for direct assessment of student learning and development, rubrics can be used in the assessment planning process. A division of student affairs might develop a rubric to serve as a guide to help individual departments and units develop their assessment plans, and then use the rubric to evaluate the completed plans. Examples of this use of rubrics in student affairs assessment can be found below. I provide these as examples, and encourage you to create something that fits your institution:
Rubrics don't represent a magic wand for assessment in student affairs. What they do represent is another tool we can use to assess student learning and development. Rubrics can help student affairs educators think about both student learning and development and assessment in student affairs. And while rubrics can be valuable tools, they only become valuable when we share them. Remember, developing rubrics, like our other assessment activities, needs to be collaborative, which means we need to develop a culture of assessment in student affairs. But that's a topic for a future blog post.

Wednesday, January 30, 2013

7 Ways to Develop Student Affairs Assessment Competencies as #sagrad

This blog post is in response to a question posted to Twitter; I felt the need for more than 140 characters to offer my ideas about ways that a graduate student in a student affairs program can develop competencies in assessment.

  1. Learn by doing - if you haven't already done so, seek out opportunities to volunteer with offices outside of your assistantship. Express an interest in learning about assessment, and chances are you'll get direct experience. I got into assessment because I mentioned to my supervisor that I wanted to learn about assessment. Guess who became the new assessment guy for our department. Several years later, I teach assessment.
  2. Read - of course a faculty member is going to encourage graduate students to read. There are some great books, monographs and articles addressing assessment practice. My next blog post will be a suggested reading list.
  3. Check out what student affairs divisions are currently doing. Visit websites to get a feel for the type of work going on in student affairs assessment. Some great examples can be found at Oregon State University, University of Georgia, and North Carolina State University.  There are dozens of other examples I could provide, but that list could be its own blog post (and maybe it will be).
  4. Connect with those engaged in assessment work - If you Tweet, follow the #saass hashtag. There are some individuals you should follow, as well: @lmendersby @pglove33 @drbbourke. A lot of great conversations about assessment in student affairs are taking place in that venue. Visit studentaffairsassessment.org to find a group of assessment professionals dedicated to enhancing the conversation about assessment. Their website also has some good examples of student affairs assessment websites.
  5. Determine the aspect of assessment where you'd like to begin your competency development. ACPA's Assessment Skills and Knowledge standards break assessment competencies into 13 areas. Most of us have strengths tied to specific competency areas within assessment.
  6. Connect with professional associations.  Participating in NASPA's Assessment, Evaluation and Research Knowledge Community or ACPA's Commission on Assessment and Evaluation can be a great way to get connected with other student affairs educators interested in assessment in our field.
  7. Finally, ask questions. Whether it's via Twitter, professional associations, or individual emails, ask questions. There are a lot of people out there who love to share their thoughts on student affairs assessment.


Sunday, January 27, 2013

But I've never written a lecture before . . .

One of my tasks for developing an online course is writing scripts for videos I'll be recording. This is a new experience. Even though I've been teaching for a little while now, I've never written a lecture. The notes I write when I prep for class are generally an outline and questions to pose to the class. The scripts are important, though, to provide a transcript for anyone with a disability that makes the videos an inaccessible part of the course.

Sitting down and writing these scripts for what are essentially mini-lectures (each one will be 5-10 minutes) has been a struggle. When I do spend a great deal of time talking in class, it is extemporaneous, and often in response to a student question or to elaborate on a point that was brought up in class discussion. Both of those approaches make writing lectures ahead of time a bit difficult.  

As I write these mini-lectures, I try to imagine myself seated in a circle with my students, which is my usual approach to discussion-based classes. As I reread the lectures I have written so far using this approach, they seem to be a closer match to my extemporaneous teaching style. My hope is that students watching these mini-lecture videos won't feel like they are watching static, scripted speeches.

I am sure that, like anything else, lecture writing is a skill, and with time and practice, hopefully my efforts will show improvement.

Wednesday, January 9, 2013

Why Ongoing Teaching Evaluations Matter, and What I Plan to Do About Them

An article from Faculty Focus titled Using Multiple Course Evaluations to Engage and Empower Your Students and Yourself has really gotten me thinking. When I started teaching at the graduate level, I used a mid-semester evaluation to gauge the course. I used a format that a colleague shared with me, which ended up being more detailed than the departmental end-of-course evaluation. As someone who was just starting out, I wanted detailed feedback on my teaching and the course. For each item, students were asked to rate the course or me on a scale, as well as offer comments.  What I saw on completed evaluations were hastily circled numbers for the scale measures and few comments.  This could explain why this practice has not endured for me.

The article from Faculty Focus offers a lot of great pointers for ongoing course evaluations. Rather than summarize the article that is linked in the first sentence of this blog post, I am focusing on my plans for implementing ongoing course evaluations for both face-to-face and online courses. The difference between the two is that face-to-face courses run for 14 weeks, while online courses run in an accelerated 7-week format.

First up: what questions to ask? I really like the open-ended questions suggested in the Faculty Focus article:
  1. What is one thing you like about this course (so far)?
  2. What is one thing you do not like about this course (so far)?
  3. What is one thing that could be improved in this course?
  4. Do you have any additional comments you would like to share?
I might also include a couple of Likert scale questions:
  1. On a scale of 1 (worst) to 10 (best), how would you rate this course (so far)?
  2. On a scale of 1 (worst) to 10 (best), how would you rate your learning in this course (so far)?
For both instructional formats (face-to-face and online), I will use our CMS (course management system). We use Moodle, which allows for anonymous responses to feedback items. For this first go-round, I am not setting conditions in the CMS requiring completion of the feedback evaluations. Part of the reason actually stems from issues in the CMS rather than a philosophical one. If I set conditions on assignments or other items that follow an evaluation, students would be unable to access those items until the evaluation is completed. This poses a problem, as I encourage students to think ahead rather than scrambling one class session at a time.
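For anyone setting up something similar, the rough recipe in Moodle looks like the following; treat it as a sketch, since exact menu labels vary by Moodle version:
  • Add a Feedback activity to the course at each evaluation point.
  • In the activity's settings, set "Record user names" to "Anonymous" so responses can't be tied to individual students.
  • Leave the assignments that follow free of "Restrict access" conditions tied to the evaluation, for the reason described above.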

Following the completion of each evaluation (one after every 4-5 class sessions), I will provide a summary of the feedback to the students, and address any concerns that are raised. In so doing, my aim is to demonstrate the importance of reporting results and planned actions that come out of assessment activities.
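When it comes time to summarize the scale items, the arithmetic is simple enough to sketch in a few lines of Python; the ratings below are hypothetical, standing in for whatever the CMS export provides:

    from statistics import mean, median

    # Hypothetical 1-10 ratings exported from the CMS feedback activity
    ratings = [7, 9, 6, 8, 10, 7, 8]

    print(f"Course rating (so far): mean {mean(ratings):.1f}, "
          f"median {median(ratings)}, n = {len(ratings)}")

The open-ended responses, of course, need a human read; a snippet like this only covers the scale questions.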

I am excited to institute this slimmed-down version of ongoing course evaluation. I am also creating a similar approach to peer-evaluation of team-based assignments, as well as self-evaluation. But, the plans for those approaches will have to be a post for another day.