Research on Masteries and Assessment: Western Governors University Summary

Phone interview with Dr. Douglas Johnstone, provost of Western Governors University, a distance learning institution

1. How did you determine the competencies and who decided?

It evolved over time. In the beginning, when we had no faculty of our own, we established a governance structure that would cover this need for us and put together panels of experts representing colleges and universities around the country as well as practitioners. They defined the degree structures and established the different competencies that students would have to master in each degree.

We also established a parallel council that designed the assessments we would use. We still have that structure in place and use it. It gives us objectivity in the degrees and designs we establish, and credibility in the professional world. The council includes our own faculty as well as faculty from other institutions and practitioners.

What we essentially did was decide what a graduate in X needs to know and be able to do, and that led to very specific degree objectives. Each of our degrees has anywhere from four to six domains – broad areas of study within the degree – and within those domains there are probably 75 to 100 learning objectives. Some domains have a much smaller number; most are quite large. Then the process is one of aligning the competencies to the learning resources the students will use to develop the competencies and to the assessments they will take. We do not do our own teaching. We partner with other colleges and universities that have good distance materials that relate to our competencies, and we work with for-profits and not-for-profits that develop online learning materials. Every learning resource comes through one of our educational partners.

Clearly, there is a critical linkage between a competency that you teach, what you have a student learn, and the assessment you use to measure it. If that alignment is not present, the student will get frustrated. We are still working through that. It never goes away. One of the reasons it doesn't is that, even in a traditional situation, textbooks change and professors select different materials, and every time that happens you've got to be sure you've kept the alignment.

2. How are the competencies assessed? What, if any, impact do these assessments have on graduation?

The competencies are always assessed in multiple ways. We use a combination of objective instruments, essay instruments (there is a lot of writing in all of our programs), and performance assessments. There's always some application task in the assessment mix. That's the only way people graduate from this place. We don't award credits; we don't award grades; students graduate only through the competencies. We are not "seat time." We are fixed on mastery. Students only graduate through achieving the competencies – you have to know everything.

3. Is the process based on completing a set of courses or is it more product driven?

It really is product driven.

We measure just about everything, and on any day of the week every mentor can tell exactly where each student is in the process. We measure the date the student enrolled, how long the degree plan should take, whether the student is progressing at the required rate, when the student last took an assessment, how many assessments the student has taken, and how many remain. So there's a lot of measurement and a lot of intervention. The faculty are expected to be in personal contact at least every two weeks; more often than not, it's at least every week. Usually that correspondence is through email, supplemented by telephone calls on the average of once a month.

4. Do you have in place a system of monitoring student progress and intervening when required?

The role of the faculty member is to be a mentor – to stand in the middle of this whole network of systems and processes and guide the student through it. The mentor works out the details of the degree plan with the student. Our average student age is 40, and those students have a much more heterogeneous background than is customary with a traditional-age population. They arrive with varying levels of competency even though the competencies are the same in every degree. People have different starting points, and that determines the order in which they take the assessments and the learning resources they need to prepare for them. After a good deal of discussion with the student, all of that gets recorded in an academic action plan, the degree plan the student will follow. From that point, it's the mentor's job to shepherd the student along according to that plan and to line up the learning resources the student will use. We have group conferences that allow students to form a community and share their knowledge and preparation. If a student fails an assessment, it is the mentor's job to interpret the result, go over it with the student, and determine what resources to use to fill the gap in learning. It's an intensive guidance role. Any instructor the student may have at another institution is separate from that relationship.

5. What problems have surfaced that we can avoid?

You discover them every day. One of the most consistent errors we make is overbuilding the programs. If you bring together a group of faculty from different institutions, they all want to put something in and never take anything out. By the time 9 or 10 individuals do that, you've got a set of competency standards that no human being could finish in a decent amount of time. We tend to overbuild and then pare down; it's better than the other way around. That's an issue: you have to be very careful about what the competencies are, how reasonable they are, and how comparable they are to what you can rightly expect at the end of a graduate or undergraduate program. There's no way around the trial-and-error experience in that regard.

A second problem area: you want to be sure you never teach to the test. We go to great lengths to separate the competency from the assessment. We do a lot of work on alignment, but we never want an instructor to know literally what's on the test, because the competencies should always exceed the assessment instruments and we don't want the process to be reductive. We use, for instance, external graders in assessing student work. If the assessment is not computer driven, it's graded not by the mentor but by somebody else. Even when we contract with another institution to develop a course that meets our competency standards, we don't show that institution what's on the assessment. And I think if you went to assessment people, they would, without exception, say that's essential. Otherwise, it just becomes a reductive exercise in producing the right answers on the test.

A third one that has surfaced for us may not be quite so relevant to you. It concerns faculty – the whole sequence of faculty recruitment and training – especially if you are scattered all over the country. There are a lot of classroom assumptions that most faculty would bring to a new paradigm, and that can be good and bad. It ties into the other issue: there will be tremendous pressure from faculty to know what's on the test. Training them in this new mode of operation will be a more significant issue than you might think.

Problems have surfaced with students. We've installed a quite radical model in lots of ways: it's distance learning, we don't do our own teaching, and it's competency based. None of those things are customary for students. If you install a different paradigm, they are going to need coaching, support, and re-education – a re-orientation, if you will – to what the model is and why you are doing it. There is no question it is much better; I am absolutely convinced of the benefits of competency-based education. But it is an alien world for faculty and students both.

6. How autonomous are the individual faculty within the structure? Are faculty responsible for the design of the affected courses?

There is very little autonomy. Once a competency is established, that is what every student has to master; a professor cannot make up his own set of competencies. The other way in which we are working: we are wrestling to minimize, and eventually eliminate, another aspect of autonomy, which has to do with the resources the faculty use. We've gone through a cycle in this area. About two years ago, we had some 50 educational partners across the country and something on the order of 2,000 courses and course modules in our catalog for faculty and students to draw on in preparing for the assessments. We had mapped and aligned about a third of those to the competencies. We discovered it was a bewildering array of materials, and we couldn't verify that a student using materials A, B, and C would be as well prepared as one using B, C, and D. So we created standard paths through the materials, so that virtually every student and every faculty member will have a reliable set of resources to call on in preparing for every set of competency assessments. That's not to say there won't be any autonomy, but there will be much less. Our objective is to be able to measure the effectiveness of every resource we use, to see whether it efficiently produces the competency we seek.

7. Is the competency approach actively promoted as part of the university identity?

I will tell you an anecdote that may be relevant. When the governors established this institution, they were determined that it would use distance learning technology and would be competency based. They entered a whole series of discussions about accreditation, and at one point they didn't want it accredited at all. They were sick of it, but then they swung around the other way and decided that everybody would accredit this place. All of the regional associations except the Southern Association will be represented; their regions cover the member states of Western Governors University. The real work, and the real impact, was our ability to convince the associations that it was viable and offered a quality product to students. We received accreditation, and since last February our enrollments have grown about 10 percent every month. That testifies to the impact of accreditation, but also to the reassurance it gave students that this model was going to be viable.

8. What effect has the program had on enrollments?

One other thing we are wrestling with has to do with grades. We set out not to award grades, and we haven't. We set the passing score for each competency at what we can confidently say is equivalent to a B at the graduate or undergraduate level. We've not awarded grades; you do, of course, at SFA. One of the things you want to think about is the impact this ought to have on your grading system. Do you do something else entirely? Do you redefine it entirely? What are your intentions in that regard? Is it because, in effect, the grading system has broken down? If so, then you need to give some thought to what follows it. In any case, it will have an impact on your attitude toward grading and how you implement a competency-based approach.