Teaching Effectiveness In Information Systems Courses

Bijan Mashaw
Bijan.Mashaw@csuEastBay.edu
California State University - East Bay

Abstract

Student evaluation of instructors has been widely used for evaluating the effectiveness of courses and faculty in Computer Information Systems. The existing evaluation system rates and tallies a number of factors and subsequently associates the ratings with course effectiveness. The ratings are often correlated with the popularity of an instructor. There is no sound foundation behind the existing questionnaire to show that the ratings given by students depict course effectiveness. There is a dire need for a better method of evaluating the effectiveness of an Information Systems course. This paper reviews the research in the area of student evaluation, defines teaching and teaching effectiveness, and proposes a quantitative model for measuring effectiveness in teaching Information Systems courses.

Keywords: Teaching Effectiveness, Student Evaluation of Faculty, Course Evaluation, Teaching

Introduction

More and more students are taking at least one computer-related course in search of computer literacy. Some students have more difficulty than others, given their motivation and intellectual capacity. Instructors find themselves in the challenging situation of teaching a computer-related course to a diverse student body in the most effective manner and with efficient methods. Effectiveness indicates whether or not the instructor and course were instrumental in students' learning. Efficiency indicates the level of resources used for delivering and instructing. Teaching effectiveness depends on many factors, including the guidelines provided, the instruction, the value of the course to the students, their motivation, and their feelings. The efficiency of teaching depends directly on the particular teaching method used. How can an instructor improve the teaching effectiveness of a course? What is effective teaching, and how can it be measured? These questions are of interest to many instructors.

Student evaluation of instructors has been widely used as a major tool for judging the effectiveness of a course and an instructor. The tool was originally developed in the 1950s and 1960s to provide an instructor with feedback regarding the course. These days, however, the same tool and concept are used by many institutions for evaluating courses. The objectives and implications of the evaluation are not clear to many. Its validity has been analyzed in the literature (Young/Shaw 99, Greenwald 97, Centra 94). Many still believe that the current tool lacks validity and that the system has caused many problems, including the violation of academic freedom, grade inflation, and lower quality or standards in educational systems (Haskell 97). There is no theoretical foundation, or model, to show that the rating is an indication of course effectiveness or of the evaluation constructs. The tool does not offer a constructive evaluation or any valid measurement that can accurately provide a valuable assessment of course effectiveness. In particular, the question that asks students to rate the course and instructor (and that is often used for measuring the effectiveness of a course) does not depict, nor is it a fair indication of, course effectiveness. There must be a more equitable way to rate an instructor for teaching effectiveness.

Research in Measuring Effectiveness and the Tool

Researchers have been trying to identify what effective teaching is and how it can be measured.
However, there is no consensus on methodology, factors, or dimensions of effective teaching. The research and its conclusions are influenced by the researchers' opinions and biases (Abrami 97). The majority confirm the idea that teaching effectiveness is multidimensional in nature (Abrami, d'Apollonia, & Rosenfield 97; Marsh/Dunkin 97; Young/Shaw 99). Abrami et al. identify the dimensions as the type of the course, class size, student abilities, and grading policies (Young/Shaw 99; Abrami et al. 90; Centra 94; Cohen 81, 87; Feldman 89; Marsh 87, 93).

The majority of the research relies on correlational analysis among factors in an attempt to identify which ones have a high impact on "effectiveness." However, the methods, the measurement objectives, and the conclusions are all subject to question and interpretation. For example, all the researchers report a significant correlation between "a well organized course" and "effectiveness." But they also report that not every well organized course indicates an effective teacher, nor is every effective, highly rated teacher well organized (Young/Shaw 99). Generally, the research supports that the ratings are highly correlated with the instructor's personality and traits (Feldman 86, Murray et al. 90, Renaud and Murray 96). The classic Dr. Fox experiment in 1970 showed that students regard charismatic, expressive teachers as highly effective regardless of the substantive content of the lecture (reported in Marsh 87; Naftulin, Ware, Donnelly 73). A follow-up study, in which a comedian lectured on a topic in an expressive manner and received a high rating, supports the claim that students may not consider the lack of substance when rating a course.

The content of the evaluation instrument varies by researcher. However, the nine factors reported by Marsh (87) are typical: learning value, instructor enthusiasm, course organization, breadth of coverage, group interaction, individual rapport, exam/grading policies, assignments, and difficulty/workload. Sometimes the findings and explanations are contradictory. Feldman (97) reports that stimulation of interest and clarity of presentation were the most important dimensions of good teaching, and that effective teachers are seen as knowledgeable about the subject matter, well organized and prepared, and enthusiastic. He reported that these items are more important than factors related to classroom management, such as course difficulty and workload, and interpersonal traits such as friendliness, helpfulness, and openness.

Young and Shaw (99) conducted an extensive study of the profiles of effective teachers with a large sample size. They initially identified 25 factors related to course content, course delivery, and teachers' personal attributes, and asked students to rate an instructor whose course they had recently taken. After an extensive correlational analysis of the data on all 25 factors, followed by a cluster analysis, they identified six factors that had the highest multiple correlation with the overall rating and could capture the variance. These six items were: value of the course, motivating students to do their best, comfortable learning atmosphere, course organization, effective communication, and concern for student learning. The most interesting point of their findings was the identification of factors that could distinguish between effective and ineffective teachers in students' opinion.
At the top of the list for an effective teacher were the value of the course and motivating students to do their best. The most salient characteristic of an ineffective teacher was failing to motivate students to learn.

The majority of research on measuring the effectiveness of a course and instructor relies on a survey completed by students. Because of the difficulties of measuring effectiveness, other methods are rarely used. Beaumont reports an interesting approach that essentially measures the success of the students in the "next course" to evaluate the effectiveness of an instructor in an introductory course. The method is a follow-up study of students in the "advanced" course. It measures the percentage of A, B, C, … grades in the advanced course and compares the results with those of the previous course, grouped by the instructor who prepared the students, to determine each instructor's effectiveness.

The Current Tool

The majority of institutions use a questionnaire and ask students at the end of a semester to rate the course and instructor. However, it is not clear how the results are used, and whether they represent a rating of certain variables, a rating of the instructor, a rating of the course, or a measurement of a teacher's effectiveness. Furthermore, it is not clear what the instrument measures, what the consequences of the measurements are, or what their impact is on the career of the instructor. Often the survey asks students to rate some variables on a scale of 1 to 5. Though the questions vary from institution to institution, there are some common concept-questions that appear on many forms. Typical questions ask whether the requirements for the course were communicated, students were treated with respect, lectures and material were related to the course, classes met regularly, the instructor was enthusiastic, the instructor was available for help, and the instructor made the class challenging. More importantly, there are two typical questions that ask students to rate the instructor directly:

* Give an overall rating of the course
* Give an overall rating of the instructor

Often, the second question is examined very carefully by administrators for an instructor's performance evaluation and for retention, promotion, or tenure. Notice that among the questions there are very few that ask whether the instructor motivated the student, and all the questions are directed at rating the instructor as being good or not. Generally, if a teacher is liked by students, then the majority of the questions are rated favorably, and the ratings are correlated with each other. This was clearly shown by Young and Shaw's research (99).

The Validity of the Current Tool

The validity of student ratings of a course and instructor has been analyzed by many researchers. Some researchers support the validity of student ratings obtained through such questionnaires (Young/Shaw 99). For example, Greenwald claims that the evidence supports the construct, convergent, and consequential validity of the ratings. However, others argue against student ratings for course evaluation. The validity of an instrument is only as good as the researcher's method and theory. Many reports indicate that ratings depend on many non-substantive factors such as gender (of both teacher and student), age (of both), course load, the nature of the course (computer-related or quantitatively oriented courses, for example, generally receive lower ratings), or grade distribution.
Some researchers report that student ratings are highly correlated with "student achievement" (Cohen 81, 87; Greenwald & Gillmore 97; Marsh 87). But some indicate that the correlation is hard to explain. Again, the conclusions are inconsistent. In particular, there is a widespread belief that ratings depend on the grading leniency of the teacher and on students' expectations of receiving a "good grade." There are many reports on grading policies and ratings. Although grading leniency is correlated with ratings, there is no conclusive support that ratings depend on grade distribution. Marsh and Roche (97) summarize the research on this issue and conclude that although leniency biases ratings, its effects are inconsequential.

The Instructor's Challenge: Motivating Students to Learn

Research on motivation indicates that a motivated student not only learns faster, but also has a stronger and more positive self-image, and can learn more efficiently. Those who are not motivated to learn resist new information, tend to make snap decisions, and use categorical reasoning. Research also shows that motivation to learn is a dynamic process and can be changed; however, a semester is too short to change it. It is a common understanding that each student can be motivated differently, and that there are some common motivators and some common "demotivators."

In an interesting study in 1979, Clegg listed possible motivational factors that could help college students to learn. She asked students to rate the items in relation to "the teaching approach and/or attitude of the instructor" in motivating students in a course. She then identified seventeen factors which had a correlation of 60% or higher. Five of these factors were related to the instructor's enthusiasm and expressiveness. The remaining twelve were:

* Explained course material clearly, and explanations were to the point
* Made it clear that he/she wanted to help students learn
* Changed approaches to meet new situations
* Summarized material in a manner which aided retention
* Demonstrated the importance and significance of the subject matter
* Made it clear how each topic fit into the course
* Clearly stated the objectives of the course
* Used humor in a way I appreciated
* Found ways to help students answer their own questions
* Introduced stimulating ideas about the subject
* Was available to help students individually
* Explained the reasons for criticisms of students' academic performance

These findings strongly support the view that teaching is about providing guidance and motivation to facilitate learning. It is not simply showmanship and expressiveness that help students learn; it is also about finding ways, styles, tools, and a variety of teaching methods and approaches to motivate the learner. Furthermore, motivation and commitment are personal matters and depend on the individual. A teacher should not only motivate a learner, but should also eliminate the barriers that block learning, including demotivators.

The Need For A Better Model To Measure Teaching Effectiveness

The current student survey measures the ratings of some factors without regard to the objective of the measurement. For example, if the objective is to measure teaching effectiveness, no definition of teaching is used in the current instrument against which to measure its effectiveness, even though effectiveness is a well defined term. The current tool does contain some questions which may show the characteristics of "a good teacher."
But it is not clear whether a good teacher means an effective teacher. What is teaching anyway, and how can a questionnaire rated by students show its effectiveness? If a tool is designed to measure something, the objectives and standards must be defined so that the measurements can be compared. Of course, the measurements can be on a nominal, ordinal, interval, or ratio scale. To have measurement validity and reliability, the constructs, the objectives, and the factors that fulfill the objectives need to be identified. Since teaching and its objectives are not defined in the current tool used by many institutions, the tool captures neither teaching nor its effectiveness. Often, a single question in the entire survey (the one that asks students to rate the instructor) is used for rating an instructor. This rating is not a fair indication of the effectiveness of a teacher, nor of the course.

Teaching and Its Effectiveness

Unfortunately, teaching is a vague term used by many, and the existing instrument used by many institutions captures neither teaching nor its effectiveness. In fact, it is not clear what the tool is and what it measures. If an evaluation tool is supposed to measure teaching or teaching effectiveness, then teaching or teaching effectiveness must first be defined so that its dimensions can be identified; a tool can then be developed by identifying sub-dimensions.

What is teaching, and what is the role of a teacher? Is it the ability to transfer knowledge? Is it to instruct someone to do something? Or is it to motivate someone to learn? The most common notion of teaching implies that someone, or some entity, who is knowledgeable in a particular subject is able to communicate and transfer the knowledge to a learner, assuming that the knowledge is transferable. Communication is not the only skill required of a teacher. Teaching also requires that the teacher provide guidelines and a proper environment for delivering the content, and find a proper way to deliver the content effectively (motivation and reinforcement). For the purpose of developing a better tool, we define teaching as: the process of delivering, motivating, instructing, and providing guidance to facilitate a learner's learning. Based on this definition, a good teacher is one who facilitates learning and who maximizes the level of the learned subject for a given entity and a given period of time. One of the major factors in the effectiveness of a teacher is the ability to motivate.

A Model To Rate Effectiveness of a Course

Teaching is defined here as a process that enables a learner to learn. Therefore the dimensions of its effectiveness include not only the amount of learned material, but also the elements used in the process. The following model is proposed to capture a quantitative rating of the effectiveness of a course and of the instructor in delivering the material.

The Model

The objective is to develop a model that can measure the effectiveness of a teacher and that can be used for comparisons -- a numerical rating of an instructor. The rating is a number, in the form of a "score" similar to the credit score institutions use to measure the financial strength of an individual. This numerical rating, if designed and developed properly, can be used to measure teaching effectiveness, or teaching strength, for a given period.
The proposed model is a quantitative model based on the previous definition of teaching effectiveness. It has two components, the "amount of learning" and motivational/demotivational factors, and can be captured by a function of the form:

E = aX + f(x, y, z, ..., -p, -q, -r, ...)

where E is the score, or teaching effectiveness, of an instructor; X is the relative amount of learned material; x, y, z, ... are the motivational scores; and p, q, r, ... are the demotivational scores.

The feature of this model is that it can create an index of teaching effectiveness so that both the effectiveness of the teacher and the effectiveness of a course can be compared. The factors can be measured by the teacher, by students, and even by outsiders. The index can change (like a credit score), or, more importantly, the instructor can identify and change the method, style, tools, and motivational factors to reach a more favorable situation. Some of these factors can be measured directly. A questionnaire can be developed to measure the factors, whether by students, by self-evaluation, or by colleagues. A questionnaire can be used for self-evaluation, for improvement, or for comparison purposes, particularly if a standard for scoring is followed.

The Components and Factors of Teaching Motivation

As mentioned, the major components are motivational and demotivational factors. Their dimensions are as follows.

The major motivational constructs that can be used by the instructor are:

Attractors are the methods used to draw the learner's attention through emotion, charm, and fascination. They play an active role in getting the learner's initial reaction to the subject. They include attention getters of some form, expressive lecturing, enthusiasm, humor, etc.

Maintainers are the constructs that can hold a student over a period of time, after the initial attention. These factors include a comfortable learning atmosphere, course organization, effective communication, and concern for student learning.

Facilitators are factors that make it easier for the learner to explore the subject. They include clear explanation, particularly of abstract or vague terms and topics of a difficult nature. The most important idea in this category is to teach students to explore -- to find their own answers.

Attainments are the factors that show students their progress and achievements. They include immediate feedback and encouragement, and showing the consequences of their learning (for example, how the learned material can be applied in real life or on the job).

Demotivators are factors that distract or discourage the learner. They include:

Repellants are factors that cause the learner to turn away from the subject or the learning environment.

Impedances are factors that make the learner resistant to learning.

Obstructers are factors that block learning or prevent a learner from progressing, or at least result in the feeling that no progress is being made. They can also be roadblocks that prevent a learner from achieving an objective. An example would be a very difficult or impractical assignment.

Discontenters are factors that create dissatisfaction in a learning environment. They include attitudes or behaviors that are considered "negative" and create discontent. A good example is when a student feels that he/she has been discriminated against (in grading, assignments, or other matters).
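To make the model concrete, the following sketch shows one possible way the proposed score E could be computed from questionnaire data. It is an illustration only: the weight a, the simple additive form chosen for f, and the example factor values are assumptions made for this sketch, since the model leaves the exact form of f open.

    # Illustrative sketch (Python) of the proposed score E = a*X + f(x, y, z, ..., -p, -q, -r, ...).
    # The weight a and the additive form chosen for f are assumptions for this example only.

    def effectiveness_score(learned, motivators, demotivators, a=2.0):
        """Compute a sample effectiveness score.

        learned      -- relative amount of learned material X (e.g., mean student
                        self-rating on a 1-to-5 scale, rescaled to 0-1)
        motivators   -- motivational factor scores x, y, z, ... (e.g., attractors,
                        maintainers, facilitators, attainments), each in 0-1
        demotivators -- demotivational factor scores p, q, r, ... (e.g., repellants,
                        impedances, obstructers, discontenters), each in 0-1
        a            -- weight placed on the learning component
        """
        # A minimal choice for f: add the motivators and subtract the demotivators.
        f_value = sum(motivators) - sum(demotivators)
        return a * learned + f_value

    # Example: strong self-reported learning, good attractors/maintainers/facilitators,
    # but one notable obstructer (say, an impractical assignment).
    score = effectiveness_score(learned=0.8,
                                motivators=[0.7, 0.9, 0.6],
                                demotivators=[0.4])
    print(round(score, 2))  # 3.4 under these assumed weights and values

Computed this way, the score behaves like the index described above: improving a motivational factor or removing a demotivator raises E, and the same formula can be applied to a teacher or to a course for comparison purposes.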
Some Suggestions For Developing an Instrument

To use the proposed model to measure the effectiveness of a course or an instructor, a questionnaire can be developed for this purpose. The questionnaire should be developed with the objective of measuring the components of the model (the amount learned, motivational factors, and demotivational factors). The author suggests that several instruments be developed and used to evaluate a course: one to be filled out by the instructor who taught the course, one by an outside observer, and one by students.

Student Self Evaluation Of Learning

The best way to rate effectiveness is to actually measure the increase in knowledge during a period of time through a pre-test/post-test mechanism. However, there are other indicators that can be used to judge the relative amount of learning. One indicator is the personal judgment of both the instructor and the student. A learner is in a position to judge the relative amount of learning; for example, a student can judge, at least relatively, how much he/she has learned during a semester. Often students make comments like "I did not learn anything from this course, even though I got a good grade," or "I learned a great amount in this course." To measure teaching effectiveness, asking a student to rate his/her learning is more accurate than asking the student to rate the instructor. A student can evaluate how much he/she has learned during a period. For example, numerous reports have shown that if a course has little substance, students are aware of how little they learned, even though they might have had "a good time" during the semester. Some simple questions in a questionnaire can capture the relative amount of learning. An example of a question that can be used is:

On a scale of 1 to 5, how much did you learn from this course (compared to others)?  1  2  3  4  5

References

Abrami, P. C., Dickens, W. J., Perry, R. P., & Leventhal, L. (1980). Do teacher standards for assigning grades affect student evaluations of instruction? Journal of Educational Psychology, 72, 107-118.
Abrami, P. C., d'Apollonia, S., & Cohen, P. A. (1990). Validity of student ratings of instruction: What we know and what we do not. Journal of Educational Psychology, 82, 219-231.
Abrami, P. C., d'Apollonia, S., & Rosenfield, S. (1997). The dimensionality of student ratings of instruction: What we know and what we do not. In R. P. Perry & J. C. Smart (Eds.), Effective teaching in higher education (pp. 321-367). New York: Agathon.
Arubayi, E. A. (1986). Students' evaluations of instruction in higher education: A review. Assessment and Evaluation in Higher Education, 11, 1-10.
Avi-Itzhak, T., and Lya K. (1986). An investigation into the relationship between university faculty attitudes toward student rating and organizational and background factors. Educational Research Quarterly, 10, 31-38.
Beaumont, Henry. The measurements of teaching. Vol. IX, No. 2.
Beaumont, Henry. A method for measuring the effectiveness of teaching introductory courses. The Journal of Educational Psychology.
Cahn, S. (1987, October 14). Faculty members should be evaluated by their peers, not by their students. Chronicle of Higher Education, p. B2.
Carey, G. W. (1993). Thoughts on the lesser evil: student evaluations. Perspectives on Political Science, 22, 17-20.
Cashin, W. E. (1996). Developing an effective faculty evaluation system. Idea Paper No. 33, Manhattan: Kansas State University, Center for Faculty Evaluation and Development (January).
Centra, A. (1994). The use of the teaching portfolio and student evaluations for summative evaluations. Journal of Higher Education, 65, 555-570.
Chacko, T. I. (1983). Student ratings of instruction: A function of grading standards. Educational Research Quarterly, 8(2), 19-25.
Chau, H., & Hocevar, D. (1994, April). Higher-order factor analysis of multidimensional students' evaluations of teaching effectiveness. Paper presented at the annual conference of the American Educational Research Association, New Orleans, LA.
Clegg, V. L. (1979). Teaching behaviors which stimulate students' motivation to learn. Doctoral dissertation, Kansas State University.
Cohen, P. A. (1981). Student ratings of instruction and student achievement: A meta-analysis of multisection validity studies. Review of Educational Research, 51, 281-309.
Cohen, P. A. (1987, April). A critical analysis and reanalysis of the multisection validity meta-analysis. Paper presented at the 1987 annual meeting of the American Educational Research Association, Washington, DC (ERIC Document Reproduction Service No. ED 283 876).
d'Apollonia, S., & Abrami, P. C. (1997). Navigating student ratings of instruction. American Psychologist, 52, 1198-1208.
Damron, J. C. (1996). Instructor personality and the politics of the classroom. Manuscript, Douglas College, New Westminster, British Columbia, Canada V3L 5B2. (Earlier versions appeared in the June 1994 issue of Faculty Matters, No. 5, pages 9-12, and the September 1994 issue of Update, the newsletter of the Okanagan University College Faculty Association.)
Dershowitz, A. (1994). Contrary to popular opinion. New York: Berkley Books.
Dilts, D. A., Samavati, H., Moghadam, M. R., and Haber, L. J. (1994). Student evaluation of instruction: Objective evidence and decision making. Journal of Individual Employment Rights, 2, 73-86.
Dowell, D. A., & Neal, J. A. (1982). A selective view of the validity of student ratings of teaching. Journal of Higher Education, 53, 51-62.
DuCette, J., and Kenney, J. (1982). Do grading standards affect student evaluations of teaching? Some new evidence on an old question. Journal of Educational Psychology, 74, 308-314.
Education Employment Law News (1994). How big a role should student evaluations play in the assessment of a professor for tenure? (January), pp. 3-4.
Ericksen, S. C. (1974). Motivation for learning: A guide for the teacher of the young adult. Ann Arbor: University of Michigan Press.
Feldman, K. A. (1976). The superior college teacher from the student's view. Research in Higher Education, 5, 243-288.
Feldman, K. A. (1986). The perceived instructional effectiveness of college teachers as related to evaluations they receive from students. Research in Higher Education, 18, 3-124.
Feldman, K. A. (1988). Effective college teaching from the students' and faculty's view: Matched or mismatched priorities. Research in Higher Education, 28, 291-344.
Feldman, K. A. (1989). Instructional effectiveness of college teachers as judged by teachers themselves, current and former students, colleagues, administrators, and external (neutral) observers. Research in Higher Education, 30, 113-135.
Feldman, K. A. (1993). College students' view of male and female college teachers: Part II -- Evidence from students' evaluations of their classroom teachers. Research in Higher Education, 34(2), 151-191.
Greenwald, A. G. (1997). Validity concerns and usefulness of student ratings of instruction. American Psychologist, 52, 1182-1186.
Greenwald, A. G., & Gillmore, G. M. (1997). Grading leniency is a removable contaminant of student ratings. American Psychologist, 52, 1209-1217.
Haskell, R. E. (1997). Academic freedom, tenure, and student evaluation of faculty. Education Policy Analysis Archives, 5(6), February 12, 1997. ISSN 1068-2341.
Holmes, D. S. (1972). Effects of grades and disconfirmed grade expectancies on students' evaluations of their instructor. Journal of Educational Psychology, 63, 130-133.
Howard, G. S., & Maxwell, S. E. (1982). Do grades contaminate student evaluations of instruction? Research in Higher Education, 16, 175-188.
Howard, G. S., & Maxwell, S. E. (1980). Correlation between student satisfaction and grades: A case of mistaken causation? Journal of Educational Psychology, 72, 810-820.
Koon, S., & Murray, H. G. (1995). Using multiple outcomes to validate student ratings of overall teacher effectiveness. Journal of Higher Education, 66, 61-81.
Marsh, H. W. (1984). Students' evaluations of university teaching: Dimensionality, reliability, validity, potential biases, and utility. Journal of Educational Psychology, 76, 707-754.
Marsh, H. W. (1987). Students' evaluations of university teaching: Research findings, methodological issues, and directions for further research. Journal of Educational Research, 11, 253-388.
Marsh, H. W., & Bailey, M. (1993). Multidimensional students' evaluations of teaching effectiveness. Journal of Higher Education, 64, 1-18.
Marsh, H. W., & Dunkin, M. J. (1997). Students' evaluations of university teaching: A multidimensional perspective. In R. P. Perry & J. C. Smart (Eds.), Effective teaching in higher education (pp. 241-320). New York: Agathon.
Marsh, H. W., & Overall, J. U. (1979). Long-term stability of students' evaluations of teaching effectiveness: A note on Feldman's "Consistency and variability among college students in rating their teachers and courses." Research in Higher Education, 10, 139-147.
Marsh, H. W., & Roche, L. A. (1997). Making students' evaluations of teaching effectiveness effective. American Psychologist, 52, 1187-1197.
McKeachie, W. J. (1983). The role of faculty evaluation in enhancing college teaching. National Forum, 63(1), 37-39.
McKeachie, W. J. (1997). Student ratings. American Psychologist, 52, 1218-1225.
Miller, A. H. (1988). Student assessment of teaching in higher education. Higher Education, 17, 3-15.
Miller, D. W. (1978). Dangers of using student evaluation for administrative purposes. Collegiate News and Views, 31(3), Spring.
Murray, H. G., Rushton, J. P., & Paunonen, S. V. (1990). Teacher personality traits and student instructional ratings in six types of university courses. Journal of Educational Psychology, 82, 250-261.
Naftulin, D. H., Ware, J. E., Jr., & Donnelly, F. A. (1973). The Dr. Fox lecture: A paradigm of educational seduction. Journal of Medical Education, 48, 630-635.
Nelson, J. P., & Lynch, K. A. (1984). Grade inflation, real income, simultaneity, and teaching evaluations. Journal of Economic Education, 15, 21-37.
Overall, J. U., & Marsh, H. W. (1980). Students' evaluations of instruction: A longitudinal study of their stability. Journal of Educational Psychology, 72, 321-325.
Perry, R. P., & Smart, J. C. (Eds.). (1997). Effective teaching in higher education. New York: Agathon.
Renaud, R. D., & Murray, H. G. (1996). Aging, personality, and teaching effectiveness in academic psychologists. Research in Higher Education, 37, 323-340.
Renner, R. R. (1981). Comparing professors: How student ratings contribute to the decline in quality of higher education. Phi Delta Kappan, 63(2), 128-130.
Ware, J. E., & Williams, R. G. (1976). Validity of student ratings of instruction under different incentive conditions: A further study of the Dr. Fox effect. Journal of Educational Psychology, 68, 48-56.
Young, S., & Shaw, D. G. (1999). Profiles of effective college and university teachers. The Journal of Higher Education, 70(6), 670-686.