Originally Posted by ssh
therusty, I haven't put a lot of thought to the measuring of student outcomes, other than my personal measurements for a lesson. However, if we can't (as an industry) agree on outcomes, that could go a long way towards explaining the subjective nature of ski instruction and the religious arguments into which it often degenerates. Objectives are the definition of what we're trying to accomplish as a process, and they are the key to being able to measure how close we get. If we don't have any clearly defined, then...
Why bother with what currently passes for certification?
I have put a lot of thought into measuring student outcomes. Every time I do, my head starts spinning. As I get more experienced, I keep discovering more measurements and more refined scales for even the simple ones. How about something as simple as ending a lesson on time? When students have a bus to catch or parents need to meet their kids, ending right on time is a real bonus. When it's cold, or the kids are hungry or need to pee, ending early could be a good thing. But when a student is on the verge of a breakthrough, or maybe wants a sense of value for their money, going overtime could be just the perfect thing (as long as you don't miss the next lesson). How do you measure the skill of time management in the "real world" objectively? I think you have to choose between an overly complex (i.e. expensive) measurement and one with an error rate so high that the measurement is meaningless. Or maybe you just make your choice somewhere in between and live with the tradeoff.
How does a PSIA exam measure time management skill? Well, there are several times throughout the exam where you need to show up on time. If you show up late for a written exam, you won't get extra time to finish. They also typically give you a time limit for your practice teaching segments. Have you ever seen someone pass whose teaching segments ran long? (Okay, probably yes, but you get my point.) So even though you don't get scored on time management, it does play a small role in the exam, in the same way that it plays a small role in teaching. Is the skill a lot more complex than what is tested? Absolutely.
I think the beauty of the certification process is that you get more out of it than what you put into it. Yes, you can find examples of people who got pins they probably should not have, people who earned pins long ago but could not earn them now, and people who could or should have pins but have either not tried or been denied. But one of the principles of the pin is a kind of honor system. Just going through the process of preparing for an exam should make you a better pro. And once you've been through an exam, pass or fail, you should have a far better sense of what you don't know and what you can study, practice and work on to continue your growth as a pro. The before-and-after difference should be dramatic. The expectation of the certification system is that one will continue to grow (or at the very least keep up) after achieving the certification. Although that expectation is not always met, given the purpose of the organization it is not a bad assumption to make. I'd rather have the organization play a mentor role instead of a police role.
When you look at the certification system as a gross measurement and accept that the standards represent an "average" instead of a specific skill level, it does serve a useful purpose besides being a carrot to encourage professional growth. On average, there is a huge and meaningful difference in instructor quality between cert Levels 1, 2 and 3. At a gross level, a Level 1 knows the words and can use the concepts, mostly in a cookbook fashion. A Level 2 understands the concepts, knows why they work, and can implement more effective teaching solutions (e.g. treating causes instead of symptoms). A Level 3 owns the concepts, creatively expands on them, and implements the most effective teaching solutions. On average. Across the hundreds or thousands of possible teaching measurements. With respect to student outcomes, it is assumed that these differences in skills have a direct relationship to the potential quality of student outcomes. It's not always this way, but on average this provides a meaningful measurement scale.
Does your school have instructor evaluations? How often do you get feedback from your supervisors? I can remember years ago when my old training director was nagging me about my "uniform". I had a habit of unzipping my jacket when I was off slope and it was hot. At the time I thought it was a silly waste. I now know how and why projecting a consistent and professional school image is important even when we are not "on duty". (So now I may look better, but I stink more.)
During my first few years of teaching, we were lucky to get a few "atta boys" and "nice turns" mixed in with the "don't do that" and "for God's sake, what were you thinking?" remarks. Now we get copies whenever customers fill out comment cards on us. As a clinician, I rate my pros as "needs work", "meets expectations" or "exceeds" in several categories, and I am expected to provide subjective feedback as well (e.g. why the turns were nice). Clinic takers are encouraged to provide feedback on the quality of the clinics, and that feedback generally gets passed back to the clinic leaders (subject to confidentiality concerns). We have to get positive feedback to stay on the training staff. Everyone on staff gets a yearly evaluation documenting our contributions and areas we need to work on. Little did I know that everyone was being measured when I started. It was not as formal and it was not fed back to us like it is under the current regime, but now that I'm part of the measuring process I can look back and understand what was going on.
I submit that measuring a more refined definition of instructor quality is sufficiently complex that it ought to be done by the people who manage the pro on a day-to-day basis. Although I believe that the most accurate measurement comes through self-assessment, I also believe that ski school management can get a good return on their investment by at least attempting objective as well as subjective measurement and providing that as feedback to pros. I don't believe that ski school management has a good resource base to draw on for this task. Maybe these discussions can lead to a series of articles that consolidate complex issues down to practically applicable bullet points that can be effectively communicated and put to use? We just need to be really careful about what we deem important and how it gets measured. For some schools, measuring who shows up to work and who actually worked is all they can handle. In those cases, a certification system that encourages self-assessment is probably the best we can hope for.
One thing that has scared me about trying to actually measure student outcomes is that the act of measurement would change the results. And how do you separate class sizes, student abilities and environmental factors affecting outcomes from instructor abilities? You can't take the same class, teach it with two different instructors, and measure the difference. Maybe over the course of a season or multiple seasons you could build a sample large enough to isolate instructor skill as statistically meaningful. Maybe some of the scoring criteria are useful in smaller samples. Certainly some of the scoring criteria are too expensive to apply to a 100% sample. For some of the measurements we ought to be able to cross-check instructor stats against population stats. For example, if Wednesdays just happened to be injury-prone days and a pro only taught on Wednesdays, we should expect their injury rate to be higher than that of a pro who only taught Fridays. We have some fundamental dilemmas to resolve.
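To make that Wednesday/Friday point concrete: this is not anything any school actually does as far as I know, just a rough sketch with made-up names and numbers showing how you could normalize an instructor's injury count against the school-wide baseline for the days they actually taught, instead of comparing raw rates.

```python
from collections import defaultdict

# Hypothetical lesson records: (instructor, day_of_week, injuries, students_taught)
lessons = [
    ("pat",   "Wed", 1, 8),
    ("pat",   "Wed", 0, 10),
    ("chris", "Fri", 0, 9),
    ("chris", "Fri", 1, 12),
    # ...a full season of records would go here
]

# School-wide baseline injury rate per day of the week (injuries per student).
day_injuries = defaultdict(int)
day_students = defaultdict(int)
for _, day, injuries, students in lessons:
    day_injuries[day] += injuries
    day_students[day] += students
baseline = {day: day_injuries[day] / day_students[day] for day in day_students}

def risk_ratio(instructor: str) -> float:
    """Observed injuries divided by what the day-of-week baseline predicts
    for the days this instructor actually taught (1.0 = right at baseline)."""
    observed = expected = 0.0
    for name, day, injuries, students in lessons:
        if name == instructor:
            observed += injuries
            expected += baseline[day] * students
    return observed / expected if expected else float("nan")

for who in ("pat", "chris"):
    print(who, round(risk_ratio(who), 2))
```

With a season's worth of records, a ratio well above 1.0 would flag something worth looking at; with a handful of lessons it tells you nothing, which is exactly the sample-size problem above.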
Getting down to things that can be measured: we ought to be able to measure success getting on and off lifts, average speed, average vertical (the comparison of the two implying an average width of slope consumed), a numerical measure of turn roundness, number of falls, and a difficulty score for the trail under the conditions du jour. Various kinds of customer surveys can be used to measure guest satisfaction, achievement of objectives, and subjective quality-of-experience items. Safety can be measured over the long haul in terms of injury statistics, and grossly over the short term by scoring "risk factors".
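Again, purely as a sketch (every field name, unit and weight below is invented for illustration), here is what tabulating those per-lesson measurements and rolling them into a single number could look like. The tabulating is the easy part; deciding what matters and how to weight it is the hard part.

```python
from dataclasses import dataclass

@dataclass
class LessonMetrics:
    # Every field name and unit here is made up for illustration.
    lift_success_rate: float    # 0.0-1.0, clean loads/unloads vs. attempts
    avg_speed_mph: float
    avg_vertical_ft: float
    turn_roundness: float       # 0.0-1.0, from whatever shape metric you choose
    falls: int
    trail_difficulty: float     # 1-10 score for the trail under the day's conditions
    guest_satisfaction: float   # 1-10 from a comment card or survey

def rough_quality_score(m: LessonMetrics) -> float:
    """A crude, arbitrary weighting -- nothing official, just to show that
    once the raw numbers exist, combining them into a scale is simple."""
    return (
        3.0 * (m.guest_satisfaction / 10)
        + 2.0 * m.turn_roundness
        + 1.0 * m.lift_success_rate
        + 1.0 * (m.trail_difficulty / 10)
        - 0.5 * m.falls
    )

example = LessonMetrics(0.95, 8.5, 4200.0, 0.7, 1, 4.0, 8.0)
print(round(rough_quality_score(example), 2))
```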
So after all of that drivel, I'll throw out one customer survey question that could do the whole instructor quality rating all by itself:
"On a scale of 1 to 10 rate how likely you are to go skiing again sooner than you had planned because of this lesson."